When using the VMXNET3 driver on a virtual machine on ESXi, you see significant packet loss during periods of very high traffic bursts. The virtual machine may even freeze entirely. Adjusting the guest ring buffer settings, as described below, may resolve the issue.
Other potential symptoms associated with this issue include:
Data transfer throughput degradation is observed between two virtual machines across different sites when utilizing a third-party replication utility (such as Robocopy). Statistical analysis of the vmxnet3 vNICs during active data transfers reveals severe buffer exhaustion. Polling the vNIC Rx summary via vsish on the destination host returns the following state:
Command syntax:

[root@ESX:~] vsish -e get /net/portsets/<Switch Name>/ports/<PortNumber>/vmxnet3/rxSummary | grep "running out of buffers"
[root@ESX:~] vsish -e get /net/portsets/<Switch Name>/ports/<PortNumber>/vmxnet3/rxSummary | grep "1st ring"
Example:

[root@ESX:~] vsish -e get /net/portsets/DvsPortset-0/ports/100###334/vmxnet3/rxSummary | grep "running out of buffers"
running out of buffers:190866
[root@ESX:~] vsish -e get /net/portsets/DvsPortset-0/ports/100###334/vmxnet3/rxSummary | grep "1st ring"
1st ring size:1024
# of times the 1st ring is full:190866
Concurrent polling of the physical uplink via esxcli network nic stats get records zero packet drops, isolating the fault to the virtual machine boundary.
Example:

[root@ESX:~] esxcli network nic stats get -n=vmnic2
NIC statistics for vmnic2
   Packets received: 584755911853
   Packets sent: 871394375675
   Bytes received: 834950249254403
   Bytes sent: 1302967954984868
   Receive packets dropped: 0
   Transmit packets dropped: 0
   Multicast packets received: 44488970
   Broadcast packets received: 29356204
   Multicast packets sent: 1055465
   Broadcast packets sent: 655
   Total receive errors: 0
   Receive length errors: 0
   Receive over errors: 0
   Receive CRC errors: 0
   Receive frame errors: 0
   Receive FIFO errors: 0
   Receive missed errors: 0
   Total transmit errors: 0
   Transmit aborted errors: 0
   Transmit carrier errors: 0
   Transmit FIFO errors: 0
   Transmit heartbeat errors: 0
   Transmit window errors: 0
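As a quick sanity check, the drop counters in saved esxcli output can be summed to confirm the physical uplink is clean. This is a minimal sketch; the hard-coded sample text stands in for output captured from esxcli network nic stats get on a live host.

```shell
# Sketch: sum the "packets dropped" counters from captured esxcli output.
# The sample text below is illustrative, not live data.
stats='Receive packets dropped: 0
Transmit packets dropped: 0'

drops=$(printf '%s\n' "$stats" | awk -F': ' '/packets dropped/ {sum += $2} END {print sum}')

if [ "$drops" -eq 0 ]; then
  # Uplink is clean: any loss is occurring at the vNIC, inside the VM boundary.
  echo "uplink clean"
else
  echo "uplink dropping: $drops"
fi
```

If this reports zero while the vsish counters above keep climbing, the fault sits between the virtual switch and the guest vNIC.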
vSphere ESXi
vCenter Server
This can occur due to a lack of receive and transmit buffer space, or when receiving traffic that is speed-constrained, for example by a traffic filter.
To identify the virtual machine's port number, run:

net-stats -l | grep -i VMNAME
<PortNumber> 5 9 <Switch Name> <MAC Address> VMNAME.eth0

Then check the vNIC Rx summary:

vsish -e get /net/portsets/<Switch Name>/ports/<PortNumber>/vmxnet3/rxSummary | grep "1st ring"
1st ring size:512
# of times the 1st ring is full:276

vsish -e get /net/portsets/<Switch Name>/ports/<PortNumber>/vmxnet3/rxSummary | grep "running out of buffers"
running out of buffers:3198

If "# of times the 1st ring is full" and/or "running out of buffers" is zero, then changing the ring buffer settings will not affect your symptoms.

NOTE: The virtual NIC counters seen by the vsish command are reset when the virtual machine is vMotioned or power cycled.
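The decision rule above can be sketched as a small shell check over the two vsish counters. This is a minimal sketch, assuming the rxSummary output has been captured to a variable; the sample values mirror the example above.

```shell
# Sketch: decide whether ring-buffer tuning is likely to help, based on the
# two rxSummary counters. Sample text stands in for live vsish output.
rx_summary='1st ring size:512
# of times the 1st ring is full:276
running out of buffers:3198'

ring_full=$(printf '%s\n' "$rx_summary" | awk -F: '/1st ring is full/ {print $2}')
oob=$(printf '%s\n' "$rx_summary" | awk -F: '/running out of buffers/ {print $2}')

if [ "$ring_full" -gt 0 ] || [ "$oob" -gt 0 ]; then
  echo "buffer exhaustion detected - ring tuning may help"
else
  echo "counters are zero - ring tuning will not affect these symptoms"
fi
```

Because the counters reset on vMotion or power cycle, poll them twice a few minutes apart and compare; a rising value during the symptom window is the meaningful signal.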
1. You can view the current values using the command:
ethtool -g <interface>
where <interface> is the interface name as it appears in your OS, e.g. eth0.
2. You can set a value using a capital G, followed by the interface name, followed by pairs of settings and values, for example:
ethtool -G <interface> rx 4096
ethtool -G <interface> rx 4096 rx-jumbo 4096 rx-mini 2048 tx 4096
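Before raising a ring size, it is worth comparing the current value against the hardware maximum that ethtool -g reports. The sketch below parses a sample of that output; the eth0 name and the hard-coded sample text are illustrative assumptions, not live data.

```shell
# Sketch: compare the current RX ring size against the pre-set maximum in
# `ethtool -g` output. Sample output is hard-coded for illustration.
ethtool_g='Ring parameters for eth0:
Pre-set maximums:
RX: 4096
TX: 4096
Current hardware settings:
RX: 512
TX: 512'

# First RX: line is the pre-set maximum, last RX: line is the current setting.
rx_max=$(printf '%s\n' "$ethtool_g" | awk '/^RX:/ {print $2; exit}')
rx_cur=$(printf '%s\n' "$ethtool_g" | awk '/^RX:/ {v=$2} END {print v}')

if [ "$rx_cur" -lt "$rx_max" ]; then
  echo "RX ring $rx_cur below maximum $rx_max - consider: ethtool -G eth0 rx $rx_max"
fi
```

Note that ethtool -G takes effect immediately and may briefly reset the link; apply it in a maintenance window, and make the change persistent through your distribution's network configuration.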
Refer to "The output of esxtop shows dropped receive packets at the virtual switch" for detailed instructions on changing these values.