The virtual machine has too few network resource shares.
Network packet size is too large, which results in high network latency. Use the VMware AppSpeed performance monitoring application or a third-party application to check network latency.
Network packet size is too small, which increases the demand for the CPU resources needed to process each packet. Host CPU resources, or possibly virtual machine CPU resources, are insufficient to handle the load.
Resolution
Verify that the latest version of VMware Tools is installed in the virtual machines.
Validation:TRUE. VMware Tools provides optimized drivers for virtual hardware, including network adapters (like VMXNET3). Running outdated or generic drivers (e.g., E1000) instead of the paravirtualized ones can severely impact network throughput, increase CPU utilization within the VM, and lead to higher latency. Keeping VMware Tools up-to-date ensures you're leveraging the latest performance enhancements and bug fixes.
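As a quick check, the installed Tools version can be queried from inside the guest or from the ESXi shell. This is a sketch; the VM ID used below is a placeholder you would replace with the ID reported by `getallvms`:

```shell
# Inside a Linux guest: report the installed VMware Tools version
vmware-toolbox-cmd -v

# From the ESXi shell: list registered VMs, then check Tools status for one of them
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.guest 42 | grep -i tools   # "42" is a placeholder vmid
```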
VMware recommends using multiple NICs on the associated virtual switch to increase the overall network capacity for portgroups that contain many virtual machines or several virtual machines that are very active on the network.
Validation:TRUE. This is a fundamental best practice for both performance and resilience (failover). NIC teaming (configured as a bond, port channel, or EtherChannel on the physical switch side) allows you to aggregate the bandwidth of multiple physical NICs and distribute network traffic across them, thereby increasing total throughput and providing redundancy in case one NIC fails.
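On a standard vSwitch, uplinks and the teaming policy can be inspected and adjusted with `esxcli`. The vSwitch and NIC names (`vSwitch0`, `vmnic1`) below are examples, not fixed values:

```shell
# List standard vSwitches and their current uplinks
esxcli network vswitch standard list

# Add a second physical NIC (vmnic1) as an uplink to vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Review the teaming/failover policy (load balancing, active/standby adapters)
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```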
Verify the speed and duplex settings of the installed network adapters.
Validation:TRUE. Mismatched speed and duplex settings between an ESXi host's physical NIC and the connected physical switch port are a classic cause of severe network performance degradation, high error rates, and connection flapping. It's crucial that both sides are configured consistently (e.g., both 10 Gbps Full Duplex, or both set to Auto-Negotiate successfully). Manual configuration should match exactly on both ends.
Relevant KBs and Tech Docs:
Configuring the speed and duplex of an ESX / ESXi host network adapter:
Ensure auto-negotiation is consistent on both ends, or if setting manually, ensure it's identical (e.g., both 1000/Full, not one auto and one 1000/Full).
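The current and configured link settings can be checked and changed from the ESXi shell; `vmnic0` below is an example adapter name:

```shell
# Show current link state, speed, and duplex for all physical NICs
esxcli network nic list

# Pin vmnic0 to 1000 Mbps full duplex (must match the switch port exactly)
esxcli network nic set -n vmnic0 -S 1000 -D full

# Or return vmnic0 to auto-negotiation (preferred when both ends support it)
esxcli network nic set -n vmnic0 -a
```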
Verify that the portgroup and virtual switch are not configured for promiscuous mode.
Validation:TRUE. Promiscuous mode allows a virtual machine to see all network traffic on the virtual switch or port group, even traffic not destined for its MAC address. While useful for network analysis tools (like sniffers), it significantly increases the CPU overhead on the ESXi host as it must process and deliver all this extra traffic to the VM. If not explicitly required, promiscuous mode should be disabled for performance reasons.
Configuring promiscuous mode on a virtual switch or portgroup:
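Promiscuous mode can be verified at both the vSwitch and portgroup level, since a portgroup may override the vSwitch policy. The names `vSwitch0` and `VM Network` are illustrative:

```shell
# Check the security policy (including Allow Promiscuous) at the vSwitch level
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

# Check whether a portgroup overrides the vSwitch setting
esxcli network vswitch standard portgroup policy security get --portgroup-name="VM Network"

# Disable promiscuous mode on the vSwitch if it is not explicitly required
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=false
```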
Verify the host is not overloaded. Networking relies on available processor resources. If the CPUs on the host are being used at capacity, network performance suffers.
Validation:TRUE. Network packet processing (both inbound and outbound) on an ESXi host requires CPU cycles. This includes virtual switch operations, driver processing, and any software-defined networking functions. If the host's CPUs are saturated (e.g., due to too many VMs, demanding applications, or inefficient resource scheduling), network I/O performance will degrade significantly, leading to higher latency and lower throughput, even if the physical NICs themselves are not saturated.
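Host CPU saturation is most easily confirmed with `esxtop`. As a rough rule of thumb, sustained %RDY (ready time) above roughly 10% per vCPU is commonly treated as a sign of CPU contention, though acceptable thresholds vary by workload:

```shell
# Interactive: press 'c' for the CPU view and watch %USED and %RDY per world,
# or 'n' for the network view to correlate with NIC throughput
esxtop

# Batch mode: capture 5 samples to a CSV for offline analysis
esxtop -b -n 5 > /tmp/esxtop-stats.csv
```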
Verify that the virtual machine uses the appropriate network adapter type for its environment.
Validation:TRUE. This closely relates to the VMware Tools recommendation above. The choice of virtual network adapter directly impacts VM network performance. For almost all modern workloads on ESXi, the VMXNET3 adapter is the recommended choice due to its paravirtualized nature, high performance, low CPU overhead, and advanced features (e.g., IPv6 offloads, jumbo frames). Legacy adapters like E1000 or E1000E are emulated and offer significantly lower performance at higher CPU cost.
Choosing a network adapter for your virtual machine:
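From inside a Linux guest, the presented adapter and its driver can be confirmed directly; the interface name `eth0` is a placeholder for whatever the guest actually uses:

```shell
# Identify the virtual NIC hardware presented to the guest
lspci | grep -i ethernet        # e.g. "VMware VMXNET3 Ethernet Controller"

# Confirm which driver the interface is bound to
ethtool -i eth0                 # "driver: vmxnet3" vs. "driver: e1000"
```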