As a first troubleshooting step, ensure that you are not exceeding the configuration maximums. For more information, see the Configuration Maximums documentation for your version of ESX/ESXi.
To resolve this issue, reduce the number of RX queues used by each NIC port.
Note: Reducing the number of RX queues may have a performance impact on NICs handling a high I/O load, especially 10GbE NICs.
You can see the allocation of interrupt vectors in the vmkernel log at boot time. For example, this message was reported by the vmkernel during bootup:
Mar 19 16:35:16 esxtest1 vmkernel: 0:00:00:22.279 cpu12:4145)VMK_PCI: 1115: device 000:003:00.0 allocated 9 vectors (intrType 3)
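If the host has already booted, you can search the existing log for this message instead of watching the boot output. The log file location varies by release (for example, /var/log/vmkernel.log on recent ESXi versions or /var/log/vmkernel on classic ESX), so adjust the path as needed for your host:
# grep -i "allocated .* vectors" /var/log/vmkernel.log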
You can cross-reference the PCI identifier (000:003:00.0, in this case) to a specific physical NIC using the output of this command:
# esxcfg-nics -l
You see output similar to:
vmnic2 0000:03:00.00 tg3 Down 0Mbps Half xx:xx:xx:xx:xx:xx 1500 Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet
In this example, the message relates to vmnic2, a Broadcom card that uses the tg3 driver. This vmnic has 9 interrupt vectors assigned to handle I/O requests because the Broadcom tg3 async driver assigns 8 RX NetQueues per port in addition to the default queue.
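Before making any changes, it can be useful to check which options, if any, are currently set for the driver module. The -g (get options) flag of esxcfg-module shows the current option string; for example, for the tg3 driver:
# esxcfg-module -g tg3
An empty options string in the output means the driver is running with its default settings.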
This command reduces the number of RX NetQueues per port from 8 to 7 on a server with 10 tg3 ports:
# esxcfg-module -s force_netq=7,7,7,7,7,7,7,7,7,7 tg3
Note: The number of 7s in the force_netq parameter array must be the same as the number of tg3 ports on the ESX/ESXi host.
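To confirm how many values to supply, count the tg3 ports listed by esxcfg-nics. This simple pipeline counts the lines that reference the tg3 driver (assuming no other field in the output happens to contain the string tg3):
# esxcfg-nics -l | grep -c tg3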
If this were an Intel NIC using the ixgbe driver, you would change the VMDQ parameter instead. This is an example command for an ixgbe-based Intel card:
# esxcfg-module -s VMDQ=15,15,15,15,15,15,15,15,15,15 ixgbe
Note: The range of receive queues specified by the VMDQ parameter is 1 to 16. The number of values passed to the VMDQ parameter must match the number of ixgbe devices on the host.
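After setting either parameter, you can confirm that the option string was recorded using the -g flag of esxcfg-module; note that a host reboot is typically required before the driver loads with the new values. For example, for the ixgbe module:
# esxcfg-module -g ixgbe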
If reducing the RX NetQueues to 7 does not resolve the issue, continue reducing the value by one until it does.
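On hosts with many ports, building the comma-separated value list by hand is error prone. The following is a minimal shell sketch, assuming the standard ESXi busybox shell, that constructs the list for a given queue count and port count (the QUEUES and PORTS values are placeholders to adjust for your host):
QUEUES=6     # next value to try, one lower than the previous attempt
PORTS=10     # number of tg3 ports on this host
VALUE=""
i=0
while [ "$i" -lt "$PORTS" ]; do
    VALUE="${VALUE:+${VALUE},}${QUEUES}"
    i=$((i+1))
done
echo "force_netq=${VALUE}"
esxcfg-module -s "force_netq=${VALUE}" tg3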