Qedentv performance impact with NSX-T GENEVE Configuration.


Article ID: 311924


Updated On:


VMware vSphere ESXi


Low throughput is observed in an overlay (GENEVE) network configured using NSX-T.


VMware vSphere ESXi 7.0.3


In an overlay network with a GENEVE configuration, low throughput and CPU usage spikes may be observed for VM workloads because LRO (Large Receive Offload) aggregation does not occur in hardware.
LRO aggregation does not occur in this configuration because the GENEVE packets carry GENEVE Options in the packet header. A change was made in VMware NSX-T so that GENEVE Options are now present in the GENEVE header. Due to a limitation in the NIC firmware, the qedentv driver does not support TPA (LRO) for packets that carry GENEVE Options in the header; TPA (LRO) is supported for GENEVE packets without Options.
This is not a bug but a limitation of the NIC firmware: the absence of TPA (LRO) is due to the nature of the system configuration, i.e., GENEVE packets with Options.


Currently there is no resolution.

Since LRO is not supported for configurations whose packets carry GENEVE Options, RSS (supported by qedentv NICs) can provide significant benefit and improve throughput.

Below are the settings that can help improve performance:
1. Enable RSS for VM workloads
In the .vmx file of each virtual machine that will use this feature, add ethernetX.pNicFeatures = "4" (where "X" is the number of the virtual network card to which the feature should be added).
This step is also described in the VMware performance best practices document: Performance Best Practices for VMware vSphere 7.0 (page 47).
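As a concrete illustration, the resulting .vmx entries would look like the following (the ethernet0/ethernet1 indices are examples; use the indices of the vNICs that should get RSS):

```
ethernet0.pNicFeatures = "4"
ethernet1.pNicFeatures = "4"
```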

Changing the .vmx file requires a VM reboot.
To add the entry through the vSphere Client:
Select the VM -> Edit Settings -> VM Options -> Advanced -> Edit Configuration -> Add parameters -> (add the key and value) -> Save
Note: the key is ethernetX.pNicFeatures (where X is the number of the virtual network card to which the feature should be added) and the value is "4".
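For hosts managed from the ESXi shell, the same entry can be appended to the VM's .vmx file directly (with the VM powered off). A minimal sketch follows; the file path here is a stand-in created purely for illustration, whereas on a real host it would be the VM's actual file under /vmfs/volumes/:

```shell
# Stand-in .vmx file for illustration; on a real host use the VM's actual
# .vmx path, e.g. under /vmfs/volumes/<datastore>/<vm>/, with the VM powered off.
VMX="$(mktemp /tmp/demo-vmx.XXXXXX)"
KEY="ethernet0.pNicFeatures"   # ethernet0 is an example vNIC index

# Append the RSS setting only if the key is not already present,
# to avoid duplicate entries in the .vmx file.
if ! grep -q "^${KEY}" "$VMX"; then
  echo "${KEY} = \"4\"" >> "$VMX"
fi
grep "^${KEY}" "$VMX"
```

After editing the file, reboot (or re-register and power on) the VM so the setting takes effect.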

Also refer to page 48 of Performance Best Practices for VMware vSphere 7.0, Update 3.

2. Configure jumbo MTU on the vNIC (optional)
Further, if a 9000 (9K) MTU is configured on the physical adapter, configuring an 8800 MTU on the vNIC interface inside the VM provides additional benefit; the smaller guest MTU leaves headroom for the GENEVE encapsulation overhead.
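For reference, this is roughly how the two MTU settings might be applied; the vSwitch name and guest interface name below are placeholders, and on an NSX-T prepared host the uplink MTU is normally set on the distributed switch through vCenter rather than per standard vSwitch:

```
# On the ESXi host (standard vSwitch example; vSwitch0 is a placeholder):
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Inside a Linux guest (interface name ens192 is a placeholder):
ip link set dev ens192 mtu 8800
```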