Troubleshooting retransmitted and out-of-order packets for VXLAN-backed networks with NSX for vSphere 6.x
Article ID: 303233
Products
VMware NSX
Issue/Introduction
When transmitting multiple flows through a 1 Gbps or 10 Gbps network adapter, you notice less than wire-rate performance
Performance is somewhat higher for VLAN-backed traffic compared to VXLAN-backed traffic
Packet captures show re-transmitted packets and out-of-order packets
Environment
NSX for vSphere 6.x
Cause
These conditions may result from the type of packet offload supported by the physical NIC in the host. Physical NICs vary in their support for VXLAN hardware offload functions, or may support such offloads only for non-VXLAN traffic. VMware recommends contacting the physical NIC vendor for more information.
Alternatively, you can run the tcpdump command (for example, tcpdump -vv -n -l -i eth0) on the source and destination virtual machines running on the ESXi hosts. If TSO and LRO are working as expected, you see packets larger than the MTU size in the packet capture.
Resolution
This section of the document shows how to use the tcpdump command as well as the VMware packet capture utility (pktcap-uw) to troubleshoot the conditions described in the Issue/Introduction section:
Use the net-stats -l command to learn the internal port ID of the virtual machine's virtual NIC and the uplink vmnic.
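For example, run the command on the ESXi host; port IDs and client names in the output vary by host, so the names below are illustrative:

```sh
# List all virtual switch ports with their internal port IDs.
# The PortNum column gives the ID used by pktcap-uw; the ClientName column
# identifies the VM vNIC (e.g. "web-vm.eth0") and the uplink (e.g. "vmnic1").
net-stats -l
```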
Use the following steps to troubleshoot this condition by capturing packets at specified I/O chain points along the transmission path. In this example scenario, the transmitting virtual machine has an IP address of 10.20.##.##, and the remote VTEP has an IP address of 192.168.###.###.
Note: This troubleshooting procedure requires knowledge of the VMware packet capture utility.
Inside the virtual machine, using the tcpdump command. Look for re-transmitted packets, either packet segments or full packets (at the MTU size).
tcpdump -i eth4 host 10.20.##.##
At the vNIC's Port_Input(), where the vNIC backend injects packets into the virtual switch. Look for re-transmitted packets, either packet segments or full packets (at the MTU size). In this example scenario, capture the first 128 bytes of the outgoing packets.
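A capture at this point can be taken with pktcap-uw. This is a sketch: the port ID is a placeholder for the vNIC port ID reported by net-stats -l, and the exact capture-point name should be confirmed by listing the points available on your host with pktcap-uw -A:

```sh
# Capture the first 128 bytes of packets entering the virtual switch from the
# vNIC backend. 33554437 is a placeholder port ID taken from net-stats -l.
pktcap-uw --switchport 33554437 --capture PortInput --snaplen 128 -o /tmp/portinput.pcap
```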
After the VXLAN software TSO on vmnic1. Confirm that the packet segments and the full packets have reached this point. In this example scenario, capture the first 128 bytes of packets destined for the remote VTEP (which has an IP address of 192.168.###.###).
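A capture on the uplink can be sketched as follows; substitute the remote VTEP address for the placeholder, and note that the capture-point name (here UplinkSndKernel) differs between ESXi builds, so verify it with pktcap-uw -A before running:

```sh
# Capture the first 128 bytes of encapsulated packets leaving uplink vmnic1
# toward the remote VTEP. <remote-VTEP-IP> is a placeholder for the
# 192.168.###.### address from the example scenario.
pktcap-uw --uplink vmnic1 --capture UplinkSndKernel --ip <remote-VTEP-IP> --snaplen 128 -o /tmp/uplink.pcap
```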
If the packet captures indicate that both segmented and full packets have passed the VXLAN software TSO point, but packets are arriving out of order at the next capture point (in other words, after VXLAN software TSO and before the NIC driver), investigate the FIFO scheduler. The FIFO scheduler drops a re-transmitted outgoing packet that exceeds the queue length. Normally, the default queue length (500 packets) is sufficient to match the throughput the host processor can sustain, and it avoids packet drops even for bursty traffic. However, in some use cases the NIC driver becomes fully occupied and returns packets to the FIFO scheduler, which in turn prepends them to the software transmit queue for later transmission. Enabling software-based TSO increases the number of packets (depending on the MTU size) and may fill the software queue, leading to packet drops and decreased throughput.
To resolve this issue, try increasing the length of the queue. Note that a longer queue stores more packets and therefore occupies more memory. To increase the FIFO scheduler's queue length in the vSphere Web Client, navigate to Host > Configuration > Advanced Settings > Net > Net.MaxNetifTxQueueLen.
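The same advanced option can also be inspected and changed from the ESXi shell with esxcli, assuming it is exposed under the /Net path on your build (run the list command first to confirm):

```sh
# Show the current and default value of the transmit queue length.
esxcli system settings advanced list -o /Net/MaxNetifTxQueueLen

# Double the queue length. 1000 is an example value, not a recommendation;
# tune it for your workload and revert the change if memory pressure rises
# or throughput does not improve.
esxcli system settings advanced set -o /Net/MaxNetifTxQueueLen -i 1000
```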