Enabling NetQueue on Intel Gigabit Network Devices Using the igb Driver in ESX/ESXi 4.x and ESXi 5.x
Article ID: 344494

Products

VMware vSphere ESXi

Issue/Introduction

The asynchronous version of the ESX/ESXi 4.x and ESXi 5.0/5.1 igb driver uses VMware's NetQueue technology to enable Intel Virtual Machine Device Queues (VMDq) support for Ethernet devices based on the Intel 82576 and 82580 Gigabit Ethernet Controllers. VMDq is optional and disabled by default.
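
As a quick check (an illustrative command using ESXi 5.x syntax; the packaging tools differ on ESX/ESXi 4.x, and the package name can vary by driver release), you can confirm which igb driver package is installed:

    # esxcli software vib list | grep -i igb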


Environment

VMware vSphere ESXi 5.1
VMware ESX 4.1.x
VMware ESX 4.0.x
VMware ESXi 4.0.x Installable
VMware ESXi 4.0.x Embedded
VMware vSphere ESXi 5.0
VMware ESXi 4.1.x Installable
VMware ESXi 4.1.x Embedded

Resolution

Enabling VMDq

To enable VMDq:

  1. Ensure the correct version of the driver is installed and enabled to load automatically at boot:

    # esxcfg-module -e igb

  2. For each port, set the optional VMDq load parameters for the igb module (a consolidated example of steps 1 through 3 follows the list):
    • Configure IntMode=2. A value of 2 selects MSI-X, which enables the Ethernet controller to direct interrupt messages to multiple processor cores. MSI-X must be enabled for NetQueue to work with VMDq.
    • Set the VMDQ parameter to the number of transmit and receive queues. Valid values range from 1 to 8, because Intel 82576 and 82580 based network devices provide a maximum of 8 transmit queues and 8 receive queues per port. The value sets the transmit and receive queue counts to the same number.

      For a quad-port adapter, the following configuration turns on VMDq in full on all four ports:

      # esxcfg-module -s "IntMode=2,2,2,2 VMDQ=8,8,8,8" igb

      The VMDq configuration is flexible: on systems with multiple ports, each port is enabled and configured by its position in the comma-separated lists. The values are applied to the ports in the order in which they are enumerated on the PCI bus.

      For example:

      # esxcfg-module -s "IntMode=0,0,2,2,...,2,2 VMDQ=1,1,8,8,...,4,4" igb

      This configures:
      • The values configured for ports 1 and 2 are: IntMode=0 and VMDQ=1
      • The values configured for ports 3 and 4 are: IntMode=2 and VMDQ=8
      • The values configured for the last two ports are: IntMode=2 and VMDQ=4

  3. Reboot the ESX/ESXi host.
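
For example, on a hypothetical dual-port 82576 adapter, the complete sequence of steps 1 through 3 (with illustrative values) is:

    # esxcfg-module -e igb
    # esxcfg-module -s "IntMode=2,2 VMDQ=8,8" igb
    # esxcfg-module -g igb
    igb enabled = 1 options = 'IntMode=2,2 VMDQ=8,8'
    # reboot
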
Limitation Notes:
  • With standard-sized Ethernet packets (MTU of 1500 or less), the maximum number of ports supported in VMDq mode is 8, with each port using 8 transmit and 8 receive queues.
  • When using Jumbo Frames (MTU greater than 1500, up to 9000) with VMDq, the maximum number of supported ports is 4, and the number of transmit and receive queues per port must be reduced to 4 (see the example below).
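
For example, a quad-port adapter carrying Jumbo Frame traffic stays within these limits with a configuration such as (illustrative values):

    # esxcfg-module -s "IntMode=2,2,2,2 VMDQ=4,4,4,4" igb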

Verifying that VMDq is enabled

To verify that VMDq is enabled:

  1. Check the options configured for the igb module: 
    # esxcfg-module -g igb 

    The output should appear similar to:

    igb enabled = 1 options = 'IntMode=2,2,2,2,2,2,2,2 VMDQ=8,8,8,8,8,8,8,8' 

    The enabled value must equal 1, which indicates the igb module will load automatically. IntMode and VMDQ must be set for each port. The example above shows a configuration with 8 ports, where all interfaces are configured in full VMDq mode.

  2. Determine which ports use the igb driver with esxcfg-nics. Confirm that the driver successfully claimed all supported devices present in the system: enumerate them with lspci and compare the list against the output of esxcfg-nics -l. Then query the statistics on each interface with ethtool. If VMDq has been enabled successfully, statistics for multiple transmit and receive queues are shown (see tx_queue_0 through tx_queue_7 and rx_queue_0 through rx_queue_7 in the example below).

    # esxcfg-nics -l

    Name PCI Driver Link Speed Duplex MAC Address MTU Description
    vmnic0 04:00.00 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xw 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
    vmnic1 08:00.00 bnx2 Down 0Mbps Half xx:xx:xx:xx:xx:xx 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
    vmnic2 0d:00.00 igb Up 1000Mbps Full xx:xx:xx:xx:xx:xy 1500 Intel Corporation 82576 Gigabit Network Connection
    vmnic3 0d:00.01 igb Up 1000Mbps Full xx:xx:xx:xx:xx:xz 1500 Intel Corporation 82576 Gigabit Network Connection
    vmnic4 0e:00.00 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x1 1500 Intel Corporation 82576 Gigabit Network Connection
    vmnic5 0e:00.01 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x2 1500 Intel Corporation 82576 Gigabit Network Connection
    vmnic6 10:00.00 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x3 1500 Intel Corporation 82580 Gigabit Network Connection
    vmnic7 10:00.01 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x4 1500 Intel Corporation 82580 Gigabit Network Connection
    vmnic8 10:00.02 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x5 1500 Intel Corporation 82580 Gigabit Network Connection
    vmnic9 10:00.03 igb Up 1000Mbps Full xx:xx:xx:xx:xx:x6 1500 Intel Corporation 82580 Gigabit Network Connection

    # lspci | grep -e 82576 -e 82580

    0d:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    0d:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    0e:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    0e:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    10:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
    10:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
    10:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
    10:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

    # ethtool -S vmnic6

    NIC statistics:
    rx_packets: 0
    tx_packets: 0
    rx_bytes: 0
    tx_bytes: 0
    rx_broadcast: 0
    tx_broadcast: 0
    rx_multicast: 0
    tx_multicast: 0
    multicast: 0
    collisions: 0
    rx_crc_errors: 0
    rx_no_buffer_count: 0
    rx_missed_errors: 0
    rx_aborted_errors: 0
    tx_carrier_errors: 0
    tx_window_errors: 0
    tx_abort_late_coll: 0
    tx_deferred_ok: 0
    tx_single_coll_ok: 0
    tx_multi_coll_ok: 0
    tx_timeout_count: 0
    rx_long_length_errors: 0
    rx_short_length_errors: 0
    rx_align_errors: 0
    tx_tcp_seg_good: 0
    tx_tcp_seg_failed: 0
    rx_flow_control_xon: 0
    rx_flow_control_xoff: 0
    tx_flow_control_xon: 0
    tx_flow_control_xoff: 0
    rx_long_byte_count: 0
    tx_dma_out_of_sync: 0
    tx_smbus: 0
    rx_smbus: 0
    dropped_smbus: 0
    rx_errors: 0
    tx_errors: 0
    tx_dropped: 0
    rx_length_errors: 0
    rx_over_errors: 0
    rx_frame_errors: 0
    rx_fifo_errors: 0
    tx_fifo_errors: 0
    tx_heartbeat_errors: 0
    tx_queue_0_packets: 0
    tx_queue_0_bytes: 0
    tx_queue_0_restart: 0
    tx_queue_1_packets: 0
    tx_queue_1_bytes: 0
    tx_queue_1_restart: 0
    tx_queue_2_packets: 0
    tx_queue_2_bytes: 0
    tx_queue_2_restart: 0
    tx_queue_3_packets: 0
    tx_queue_3_bytes: 0
    tx_queue_3_restart: 0
    tx_queue_4_packets: 0
    tx_queue_4_bytes: 0
    tx_queue_4_restart: 0
    tx_queue_5_packets: 0
    tx_queue_5_bytes: 0
    tx_queue_5_restart: 0
    tx_queue_6_packets: 0
    tx_queue_6_bytes: 0
    tx_queue_6_restart: 0
    tx_queue_7_packets: 0
    tx_queue_7_bytes: 0
    tx_queue_7_restart: 0
    rx_queue_0_packets: 0
    rx_queue_0_bytes: 0
    rx_queue_0_drops: 0
    rx_queue_0_csum_err: 0
    rx_queue_0_alloc_failed: 0
    rx_queue_1_packets: 0
    rx_queue_1_bytes: 0
    rx_queue_1_drops: 0
    rx_queue_1_csum_err: 0
    rx_queue_1_alloc_failed: 0
    rx_queue_2_packets: 0
    rx_queue_2_bytes: 0
    rx_queue_2_drops: 0
    rx_queue_2_csum_err: 0
    rx_queue_2_alloc_failed: 0
    rx_queue_3_packets: 0
    rx_queue_3_bytes: 0
    rx_queue_3_drops: 0
    rx_queue_3_csum_err: 0
    rx_queue_3_alloc_failed: 0
    rx_queue_4_packets: 0
    rx_queue_4_bytes: 0
    rx_queue_4_drops: 0
    rx_queue_4_csum_err: 0
    rx_queue_4_alloc_failed: 0
    rx_queue_5_packets: 0
    rx_queue_5_bytes: 0
    rx_queue_5_drops: 0
    rx_queue_5_csum_err: 0
    rx_queue_5_alloc_failed: 0
    rx_queue_6_packets: 0
    rx_queue_6_bytes: 0
    rx_queue_6_drops: 0
    rx_queue_6_csum_err: 0
    rx_queue_6_alloc_failed: 0
    rx_queue_7_packets: 0
    rx_queue_7_bytes: 0
    rx_queue_7_drops: 0
    rx_queue_7_csum_err: 0
    rx_queue_7_alloc_failed: 0
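
As a convenience, the queue statistics can also be counted rather than read line by line. This is a shell sketch that is not part of the original procedure and assumes the grep available in the ESXi shell; a port in full VMDq mode reports 8 receive queues:

    # ethtool -S vmnic6 | grep -c "rx_queue_.*_packets"
    8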

Disabling VMDq

To disable VMDq:
  1. To return the igb driver to default (non-VMDq) mode, erase the optional VMDq load parameters:

    # esxcfg-module -s "" igb

  2. Reboot the ESX/ESXi host.
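
To confirm that the parameters were cleared, query the module options again; the output should show an empty options string, similar to:

    # esxcfg-module -g igb
    igb enabled = 1 options = ''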