When you monitor network performance with the esxtop utility, it might show a high percentage of receive packets dropped (%DRPRX) at the virtual switch level. Although esxtop reports these drops at the virtual switch, the packets are actually being dropped during the handoff between the virtual switch and the guest operating system's network driver.
When the guest OS network driver runs out of receive (RX) buffer memory, whether because of sudden traffic microbursts or a lack of CPU cycles to process the queue, incoming packets back up in the RX ring, which is serviced on a strict First-In, First-Out (FIFO) basis. Once the queue is completely full, any newly arriving packets are immediately discarded, resulting in degraded network performance, forced retransmissions, and application latency.
Warning: The network driver changes described in this article may impact guest networking at the time of the change, and a guest OS reboot may be required for the settings to fully apply. VMware highly recommends scheduling maintenance downtime for the affected virtual machine before proceeding.
Dropped receive packets at the virtual switch typically occur due to a bottleneck in either the guest's allocated network memory or its available CPU cycles:
Insufficient RX Buffer Memory: Virtual machine operating systems allocate memory (RAM) to process incoming network traffic. Under heavy network load or sudden microbursts, the guest OS may not have allocated a large enough receive (RX) buffer queue to store the rapid influx of incoming packets.
CPU Starvation: Network processing is highly dependent on CPU availability. If the virtual machine lacks sufficient CPU resources—either due to high in-guest CPU utilization or ESXi host-level CPU contention (e.g., high CPU Ready time)—the guest OS cannot process the interrupts required to empty the network buffers fast enough.
In either scenario, once the RX buffers fill up entirely, the network driver cannot accept new data, and newly arriving packets are dropped during the handoff between the virtual switch and the guest operating system's network driver.
Note: The ring size setting can be modified dynamically from within the guest operating system.
You can view the current values using the following command:
ethtool -g <interface>
(Note: Replace <interface> with the interface name as it appears in your OS, e.g., eth0.)
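For reference, the output typically resembles the following (the interface name eth0 and all values shown are only illustrative; the maximums and current settings depend on the driver and its configuration):
ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       4096
TX:             4096
Current hardware settings:
RX:             1024
RX Mini:        0
RX Jumbo:       256
TX:             512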
You can set a new value using a capital -G, followed by the interface name, followed by pairs of settings and values. For example:
ethtool -G <interface> rx 4096
Or, to configure multiple rings:
ethtool -G <interface> rx 4096 rx-jumbo 4096 rx-mini 2048 tx 4096
rx: Refers to receive ring buffer #1, which is used for receiving packets other than jumbo frames (up to 1500 bytes).
rx-jumbo: Refers to ring buffer #2, which is used exclusively for jumbo frames.
tx and rx-mini: Used for transmit frames and tiny frames, respectively.
Important Notes for Linux:
LRO Packet Drops: A Linux virtual machine enabled with Large Receive Offload (LRO) functionality on a VMXNET3 device might experience packet drops on the receiver side when RX Ring #2 runs out of memory. This occurs when the virtual machine is handling packets generated by LRO. Up to ESXi 5.5 Update 3, the maximum ring size for this parameter is 2048. For ESXi 5.5 Update 3 Patch 08, and ESXi 6.0 Patch 3 and later, the maximum value is 4096.
Reboot Persistence: Changes made using ethtool will generally not persist after a reboot. To keep these changes after a restart, you will need to save them using a method such as a startup script (e.g., ETHTOOL_OPTS in /etc/sysconfig/network-scripts/ifcfg-eth0) or a configuration manager (e.g., nmcli connection modify <conn> ethtool.ring-rx <value>); a brief sketch follows these notes. The method varies by Linux distribution. Contact your OS vendor for information on how to do this.
OS Limits: Even when ESXi supports a higher maximum value, the actual usable maximum value may be limited by the Linux guest operating system's kernel version and other variables.
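As a minimal sketch of the persistence step, assuming a RHEL-family guest (file paths, connection names, and support for the ethtool.ring-rx property depend on the distribution and NetworkManager version; eth0 and <conn> are placeholders):
# Legacy network scripts: append to /etc/sysconfig/network-scripts/ifcfg-eth0
ETHTOOL_OPTS="-G eth0 rx 4096"
# NetworkManager: store the ring size in the connection profile, then re-activate it
nmcli connection modify <conn> ethtool.ring-rx 4096
nmcli connection up <conn>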
In ESXi/ESX 4.0 Update 2 and later, you can configure the buffer parameters natively in the Device Properties from the Device Manager in Windows guest operating systems. (Note: These settings cannot be adjusted in the Windows guest operating system in ESXi/ESX 3.x.x and earlier).
When jumbo frames are enabled, the network card utilizes a second ring, whose size is controlled by the Rx Ring #2 Size parameter.
You can adjust the following parameters:
Rx Ring #1 Size
Rx Ring #2 Size
Tx Ring Size
Small Rx Buffers: You can modify the number of small RX buffers separately. The maximum value is 8192.
Large Rx Buffers: Controls the number of large buffers that are used in both RX Ring #1 and #2 when jumbo frames are enabled.
Tuning Tip: For some processes (e.g., traffic that arrives in bursts), you might need to increase the size of the ring, while for others (e.g., applications that are slow in processing receive traffic), you might increase the number of receive buffers.
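On recent Windows guests, the same advanced properties can also be inspected and changed from an elevated PowerShell session instead of the Device Manager dialog. This is only a sketch: the adapter name Ethernet0 is a placeholder, and the exact display names exposed by the VMXNET3 driver can vary by driver version.
# List the advanced properties the VMXNET3 driver currently exposes
Get-NetAdapterAdvancedProperty -Name "Ethernet0"
# Increase Rx Ring #1 and the small buffer pool (illustrative values)
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size" -DisplayValue 4096
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Small Rx Buffers" -DisplayValue 8192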
If the driver is already loaded, unload it by entering the following command:
rem_drv vmxnet3s
To change the size of the RX rings, edit the vmxnet3s.conf file available in both the /kernel/drv/amd64/ and /kernel/drv/ directories.
Reload the driver and plumb the interface by entering the following commands:
add_drv -i "pciex15ad,7b0" vmxnet3s
ifconfig vmxnet3s<X> plumb
(Note: Replace <X> with the interface instance number, e.g., vmxnet3s0.)
For the E1000 virtual network driver in a Linux guest operating system, RX buffers can be modified from the guest operating system in exactly the same way as on a physical machine. The maximum value that can be manually configured is 4096. Determine an appropriate setting by experimenting with different buffer sizes.
To apply the appropriate setting, run the following command:
ethtool -G <interface> rx <value>
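For example, assuming the interface is eth0 (substitute your own interface name), you might raise the ring size in stages and confirm each change before testing again:
ethtool -G eth0 rx 1024
ethtool -g eth0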
For the Intel PRO driver in Windows, receive buffers can be modified from the guest operating system in exactly the same way as on a physical machine.
To determine the appropriate setting by experimenting with different buffer sizes, load the Intel PRO driver in the guest operating system, navigate to Device Manager, and modify the Receive Buffers in the driver’s properties.
To increase the buffer value, add this line in the virtual machine’s .vmx configuration file:
ethernetX.numRecvBuffers = "<value>"
(Where X refers to the sequence number of your virtual NIC, e.g., ethernet0).
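For example, to request 512 receive buffers for the first virtual NIC (ethernet0 is only an example index; match it to the NIC you are tuning), the entry would read:
ethernet0.numRecvBuffers = "512"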
Maximum Limits:
On ESX 4.x and later, the maximum RX buffer supported is 512.
On ESXi 3.x, the maximum RX buffer supported is 128.
If the packet drops are being caused by CPU starvation (indicated by high CPU Ready (%RDY) values on the ESXi host) rather than microbursts of traffic, increasing the RX buffers will only delay the drops. To resolve the root cause, you must ensure the virtual machine has consistent access to physical CPU cycles to process its network queues.
Consider the following strategies:
Balance Cluster Workloads: Migrate virtual machines to other ESXi hosts within the cluster to evenly distribute the compute load. If VMware Distributed Resource Scheduler (DRS) is enabled, ensure it is configured aggressively enough to balance CPU utilization and prevent localized host congestion.
Configure CPU Reservations: For highly sensitive or heavily utilized network appliances, configure a CPU Reservation for the impacted virtual machine. This guarantees that the virtual machine will always receive the requested CPU cycles, preventing it from being starved by "noisy neighbors" on the same host (see the PowerCLI sketch after this list).
Right-Size the Virtual Machine: Ensure the virtual machine is not allocated more virtual CPUs (vCPUs) than it actually needs. Over-provisioning vCPUs can make it harder for the ESXi CPU scheduler to find available physical cores, paradoxically increasing CPU Ready time and worsening network performance.
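As an illustration of the reservation strategy above, the following PowerCLI sketch applies a CPU reservation to a virtual machine. The vCenter address, VM name, and the 2000 MHz figure are placeholders; size any reservation to the workload's measured demand.
# Connect to vCenter, then reserve 2000 MHz of CPU for the affected VM
Connect-VIServer -Server vcenter.example.com
Get-VM -Name "AffectedVM" |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuReservationMhz 2000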
The esxtop utility is a valuable tool for determining which physical uplinks (vmnics) are carrying traffic for specific virtual machine workloads.
To view workload network traffic in esxtop:
Establish an SSH session to the ESXi host using root privileges.
Launch the utility by running the following command:
esxtop
Press n to switch to the network display screen.
The screen will update to display line items for each workload, detailing which physical network interface is currently carrying its traffic.
Interpreting the "Pnic" Column:
Specific Interface: Typically, the column displays a dedicated interface name assigned to the workload, such as vmnic0 or vmnic1.
Aggregated Links: If the column displays a value like All(2), All(3), or All(4), it indicates that one or more uplinks are configured in an EtherChannel (also referred to as a port channel). This applies to both static port channels and Link Aggregation Control Protocol (LACP) configurations. The number in parentheses represents the total number of active uplinks participating in the bundle.
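For illustration, the network view might resemble the excerpt below (columns are abbreviated and the values are invented for this example; depending on the ESXi release, the physical NIC column may be labeled PNIC or TEAM-PNIC):
   PORT-ID USED-BY          TEAM-PNIC  DNAME     PKTRX/s  %DRPRX
  50331652 12345:app-vm01   vmnic0     vSwitch0  8400.52    0.00
  50331653 12346:db-vm02    All(2)     vSwitch1  9210.87    1.10
In this example, app-vm01 is pinned to a single uplink, while db-vm02 sits behind a two-link port channel and is also showing a nonzero %DRPRX.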