VMware Tested Maximum Configuration
VMware limit testing is performed with these settings:
- 1500-byte MTU (no jumbo frames)
- The default number of queues for the NIC
These maximums were tested by VMware on systems in which all NICs were of the same model. The testing below does not cover mixed configurations, where more than one kind of NIC is present in the system at the same time; that is, adapters from one row of the table cannot be combined with adapters from another row.
| Manufacturer and NIC model | NIC driver | Maximum number of supported NIC ports | Number of CPUs | Memory |
|---|---|---|---|---|
| Intel PCI-X NIC | e1000 | 32 | 12 | 5.2 GB |
| Intel PCI-e NIC | e1000e | 24 | 16 | 32 GB |
| Intel Zoar 1GigE | igb | 16 | 16 | 32 GB |
| Broadcom 1GigE | tg3 | 32 | 16 | 32 GB |
| Broadcom 1GigE | bnx2 | 16 | 16 | 32 GB |
| NVIDIA 1GigE | forcedeth | 2 | 4 | 8 GB |
| Neterion 10GigE | s2io | 4 | 8 | 32 GB |
| Netxen 10GigE | nx_nic | 4 (8 in ESXi 5.0) | 8 | 32 GB |
| Netxen 1GigE | nx_nic | 32 | | |
| Intel 10GigE | ixgbe | 4 (8 in ESXi 5.0) | 4 | 16 GB |
| Broadcom 10GigE 5771x | bnx2x | 4 (8 in ESXi 5.0) | 8 | 72 GB |
| HP Flex-10 | bnx2x | 4 (physical NICs) | | |
The maximum configurations in the table above were verified on ESX 4.0 build 164009 with the drivers provided in that release. These configurations have not been tested under any of these conditions:
- With drivers released on separate CDs outside of the base ESXi/ESX installation
- With jumbo frames (MTU > 1500 bytes)
- With any NetQueue settings other than the defaults
Note: Some hardware vendors re-brand existing cards, but the underlying hardware is the same. Refer to your hardware documentation to identify the correct hardware model.
Maximum Configurations
You can model the behavior of multiple adapter/driver classes running concurrently on a single system based on estimated CPU and memory requirements for various NICs. When using a large number of NICs on an ESXi/ESX system, make sure that the system has adequate memory and an appropriate number of CPUs. You must thoroughly qualify all such configurations prior to deployment.
Based on this modeling, VMware believes that you should be able to deploy these untested maximum configurations:
ESXi/ESX 4.x
For 1500 MTU configurations:
- 4 x 10G ports or
- 16 x 1G ports or
- When combining different speeds: 2 x 10G ports + 8 x 1G ports
For jumbo frame (MTU up to 9000 bytes) configurations:
- 4 x 10G ports (only if the number of cores in the system is more than 8) or
- 12 x 1G ports or
- When combining different speeds: 2 x 10G ports and 4 x 1G ports
For more information on networking maximums, see the vSphere 4.0 Configuration Maximums or vSphere 4.1 Configuration Maximums documents.
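The ESXi/ESX 4.x rules above for 1500-byte MTU can be expressed as a small checker. This is a minimal sketch: the function name `fits_esx4_mtu1500` is hypothetical, and it only encodes the port-count limits quoted in this article, not CPU, memory, or platform constraints.

```python
# Sketch: validate a proposed NIC port mix against the *untested*
# ESXi/ESX 4.x maximums quoted in this article (1500-byte MTU only).
# This helper is illustrative, not a VMware tool or API.

def fits_esx4_mtu1500(ports_10g: int, ports_1g: int) -> bool:
    """True if the mix is within the modeled 4.x limits for 1500 MTU:
    4 x 10G alone, 16 x 1G alone, or 2 x 10G + 8 x 1G when mixing."""
    if ports_10g and ports_1g:
        # Mixing speeds tightens the caps to 2 x 10G + 8 x 1G.
        return ports_10g <= 2 and ports_1g <= 8
    return ports_10g <= 4 and ports_1g <= 16

print(fits_esx4_mtu1500(4, 0))  # True: 4 x 10G alone is within the limit
print(fits_esx4_mtu1500(4, 2))  # False: mixed speeds cap 10G ports at 2
```

A jumbo-frame variant would use the tighter limits listed above (12 x 1G, or 2 x 10G + 4 x 1G, with the 4 x 10G case requiring more than 8 cores).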
ESXi 5.0, 5.1, and 5.5
For 1500 MTU and jumbo frame (MTU up to 9000 bytes) configurations:
- 8 x 10G ports or
- 32 x 1G ports or
- A combination of 10Gb and 1Gb Ethernet ports:
  - ESXi 5.0 and 5.1: 6 x 10Gb + 4 x 1Gb ports
  - ESXi 5.5: 8 x 10Gb + 4 x 1Gb ports
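The ESXi 5.x limits above can be encoded the same way. Again a minimal sketch: the function name `fits_esx5` and the `version` parameter are hypothetical, and per this article the same port counts apply to both 1500 MTU and jumbo-frame configurations.

```python
# Sketch: validate a NIC port mix against the *untested* ESXi 5.x
# maximums quoted in this article. Illustrative only, not a VMware API.

def fits_esx5(ports_10g: int, ports_1g: int, version: str = "5.5") -> bool:
    """8 x 10G alone, or 32 x 1G alone; when mixing speeds,
    6 x 10G + 4 x 1G on 5.0/5.1 and 8 x 10G + 4 x 1G on 5.5."""
    if ports_10g and ports_1g:
        cap_10g = 8 if version == "5.5" else 6
        return ports_10g <= cap_10g and ports_1g <= 4
    return ports_10g <= 8 and ports_1g <= 32

print(fits_esx5(8, 4, version="5.5"))  # True: within the 5.5 mixed limit
print(fits_esx5(8, 4, version="5.0"))  # False: 5.0/5.1 cap 10G ports at 6
```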
For more information on networking maximums, see the vSphere 5.0, 5.1, and 5.5 Configuration Maximums documents.
Notes:
- VMware recommends using a mixed environment (NICs from more than one driver family) so that a single driver issue does not become a single point of failure.
- These estimates do not take into account platform-specific limitations. The configurations depend on the hardware, number of cores, overall system memory, and other factors.
- As indicated in the table above, some 1 GigE NICs support a maximum configuration of 24 or 32 networking ports in ESXi/ESX 4.x, but only in 1500 MTU configurations.