To configure IPFIX on one or more ESXi hosts attached to a vSphere Distributed Switch (vDS) backed by DPU-enabled network adapters (also known as SmartNICs), you must create a VMkernel interface on the 'ops' TCP/IP stack. For details, see Add a Switch IPFIX Profile.
Once this is configured, you can use the steps below to determine whether IPFIX packets are flowing from the ESXi host to a VCF Operations for Networks Collector node.
STEPS:
- From the vSphere Client logged in to vCenter, go to the Hosts and Clusters view, select the subject ESXi host, and then navigate to Configure --> VMkernel adapters.
- Confirm that you have a VMkernel interface configured with "ops" under the TCP/IP stack column heading.
- Confirm that the network label (dvPortGroup name) is one that is routable to a VCF Operations for Networks Collector node.
- Confirm that the IP address / netmask combination configured for the VMkernel interface can reach the IP address / netmask combination assigned to the VCF Operations for Networks Collector node.
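As a quick sanity check for the step above, you can verify whether two IP address / netmask combinations fall on the same subnet (and are therefore directly reachable without a gateway) with a short script. This is only a sketch using Python's standard ipaddress module; the addresses below are hypothetical stand-ins, so substitute your own VMkernel and Collector values:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str) -> bool:
    """Return True if two 'address/prefix' interfaces belong to the same subnet."""
    a = ipaddress.ip_interface(host_a)
    b = ipaddress.ip_interface(host_b)
    return a.network == b.network

# Hypothetical addresses for the "ops" VMkernel interface and the Collector node.
print(same_subnet("192.168.0.102/24", "192.168.0.26/24"))  # True: directly reachable
print(same_subnet("192.168.0.102/24", "192.168.1.26/24"))  # False: a route/gateway is required
```

If the two interfaces are not on the same subnet, the "ops" netstack must have a route (gateway) that reaches the Collector's subnet.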
- SSH into your subject ESXi host with root privileges.
- In this example, we are using a Lab setup with the following attributes:
- vmkernel interface called "vmk5" with an IP address of ###.###.###.102 / 24
- vDS called "DSwitch-#########-#######-DPU" that has physical uplink assignments of vmnic2 (Uplink1) and vmnic3 (Uplink2)
- Under normal circumstances, when you SSH into an ESXi host and enter the command esxcfg-vswitch -l, you would expect to see all of the VMkernel interfaces listed (including, in this example, vmk5, which we configured on the "ops" netstack).
- However, on a DPU-enabled host, you must run the following commands to access the "host within a host" that is characteristic of a DPU-enabled configuration:
dpuctl list
- Output in our example:
Device Alias Vendor Model Base PCI Address
vmdpu0 Pensando Elba 0000:2b:00.0
- Next, SSH into the "host within a host" with the following command:
dpuctl ssh vmdpu0
- Output in our example showed typical messages associated with an SSH login to an ESXi host:
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers powerful and supported automation tools. Please
see https://developer.vmware.com for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@localhost:~]
- NOTE: You may see output like the following:
ssh: connect to host ###.###.###.# port 22: Connection refused
- This is because SSH is disabled by default in many DPU configurations.
- To enable SSH, enter the following command (in our example, vmdpu0 is the only alias):
dpuctl ssh --start vmdpu0
- Then repeat the following command to log in to the DPU "host within a host":
dpuctl ssh vmdpu0
- Now, if you enter the command esxcfg-vswitch -l, you will see all of the VMkernel interfaces listed, including, in this example, vmk5, which we configured on the "ops" netstack.
- Enter the command:
esxtop
- Select n (for network) to observe which physical uplink (under the TEAM-PNIC column) is carrying the traffic for the VMkernel interface (shown in the USED-BY column -- in our example, vmk5).
- In our example, this is vmnic134
- Now, confirm that you can run vmkping from the "ops" VMkernel interface to the VCF Operations for Networks Collector.
- In our example, the VCF Operations for Networks Collector is at ###.###.###.26 / 24
- The command that we use here in our Lab is:
vmkping -I vmk5 ###.###.###.26 -S ops
- In our example, the output looks like this:
64 bytes from 192.168.0.26: icmp_seq=0 ttl=64 time=0.881 ms
64 bytes from 192.168.0.26: icmp_seq=1 ttl=64 time=0.982 ms
64 bytes from 192.168.0.26: icmp_seq=2 ttl=64 time=0.696 ms
- If you do not see results like this, there is no route between the IP address / netmask you configured for the VMkernel interface on the "ops" netstack and the VCF Operations for Networks Collector.
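To interpret the vmkping output shown above, you can count the reply lines and average the round-trip times with a short script. This is only a sketch; the regex and the embedded sample text are ours, not part of any VMware tooling:

```python
import re

# Sample vmkping reply lines, as shown in the example output above.
sample = """\
64 bytes from 192.168.0.26: icmp_seq=0 ttl=64 time=0.881 ms
64 bytes from 192.168.0.26: icmp_seq=1 ttl=64 time=0.982 ms
64 bytes from 192.168.0.26: icmp_seq=2 ttl=64 time=0.696 ms
"""

def summarize(output: str):
    """Return (reply count, average RTT in ms) parsed from ping-style output."""
    times = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", output)]
    return len(times), (sum(times) / len(times) if times else None)

replies, avg = summarize(sample)
print(replies, round(avg, 3))  # 3 replies, average 0.853 ms
```

Zero replies parsed from your own output indicates the routing problem described above.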
- You can now capture the packets using a command like the following:
pktcap-uw --uplink vmnic134 --capture EnsPortWriterTx,EnsPortWriterQueue,EnsPortWriterFlush -o - | tcpdump-uw -r - -enn
- NOTE: Please see the KB article Packet capture on ESXi using the pktcap-uw tool for more information on packet capture options.
- You may see the message "This command will call experimental vProbe builtins, continue? [Y/n]"; enter "Y" to start capturing and displaying the packets (in this example, to the screen).