Troubleshooting disk latency when using Jumbo Frames with iSCSI or NFS datastores


Article ID: 306948

Products

VMware vSphere ESXi

Issue/Introduction


Symptoms:
You see disk latency when using Jumbo Frames with iSCSI or NFS datastores.

Environment

VMware vSphere ESXi 5.5
VMware vSphere ESXi 5.1
VMware vSphere ESXi 7.0
VMware vSphere ESXi 8.0

Cause

Disk latency can occur if the storage processor or the ESX/ESXi host is not configured correctly for the MTU size that you selected.

Resolution

To ensure the host is configured properly for the defined MTU size:

  1. Log in to the ESX or ESXi host using SSH.
  2. Run this command from the ESX/ESXi host:

    # vmkping -s packet_size -d IP_address_of_NFS_or_iSCSI_server

    Where:
    -s sets the packet (payload) size
    -d sets the do-not-fragment bit on the packet

    Note: The -s value is the MTU minus the header overhead. Assuming a header size of 216 bytes, the -s value for a 9000-byte MTU would be: 9000 - 216 = 8784
    Example command:

    # vmkping -s 8784 -d 192.168.1.100
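The payload arithmetic in the note above can be sketched in plain shell (the 216-byte header size is the article's assumption; the actual overhead depends on your network stack):

```shell
# Compute the vmkping -s payload for a target MTU.
MTU=9000
HEADER=216                 # assumed header overhead, per the note above
PAYLOAD=$((MTU - HEADER))
echo "$PAYLOAD"            # prints 8784
# On the ESXi host you would then run: vmkping -s $PAYLOAD -d <storage_IP>
```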

  3. If you receive a response, communication is occurring at the desired MTU. If you do not receive a response, run the vmkping command without the -d option:

    # vmkping -s packet_size IP_address_of_NFS_or_iSCSI_server

  4. If you receive a response now, the configuration issue still exists, but the large packets are being fragmented. This can lead to disk latency and can disrupt networking or storage for other components in your environment, because fragmenting and reassembling packets consumes significant CPU resources on the switch, the storage processor, and the ESX/ESXi hosts.
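The decision logic of the two vmkping tests above can be summarized as a small shell function. This is an illustrative sketch, not a VMware tool: it takes the exit codes of the two vmkping runs (first with -d, then without) and prints a diagnosis.

```shell
# classify_mtu <rc_with_dont_fragment> <rc_without_dont_fragment>
# rc values are the exit codes of:
#   vmkping -s 8784 -d <storage_IP>   (first argument)
#   vmkping -s 8784 <storage_IP>      (second argument)
classify_mtu() {
  df_rc=$1
  frag_rc=$2
  if [ "$df_rc" -eq 0 ]; then
    echo "OK: path carries unfragmented jumbo frames"
  elif [ "$frag_rc" -eq 0 ]; then
    echo "MISCONFIGURED: large packets are being fragmented"
  else
    echo "UNREACHABLE: no response even with fragmentation"
  fi
}
```

For example, `classify_mtu 1 0` reports the fragmentation case described in step 4.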

  5. Verify that the ESX/ESXi host is configured properly for Jumbo Frames. Run this command:

    # esxcfg-nics -l
    Name PCI Driver Link Speed Duplex MAC Address MTU Description
    vmnic0 0000:01:00.00 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
    vmnic1 0000:01:00.01 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xy 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
    vmnic2 0000:02:00.00 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xz 9000 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
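When a host has many NICs, a quick filter can pull out only the adapters already set to a 9000-byte MTU (field 8 of the esxcfg-nics -l output above). This is a hypothetical one-liner; the sample lines below stand in for live output, and on an ESXi host you would pipe `esxcfg-nics -l` into the same awk program instead:

```shell
# Print the name of each NIC whose MTU column (field 8) is 9000.
printf '%s\n' \
  'vmnic0 0000:01:00.00 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 1500 Broadcom' \
  'vmnic2 0000:02:00.00 bnx2 Up 1000Mbps Full xx:xx:xx:xx:xx:xz 9000 Broadcom' |
  awk '$8 == 9000 { print $1 }'   # prints vmnic2
```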


  6. Once you find a value under the MTU column that matches your desired MTU size, verify that the vSwitch is also configured for Jumbo Frames. Run this command:

    # esxcfg-vswitch -l
    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch0 128 3 128 1500 vmnic0

    PortGroup Name VLAN ID Used Ports Uplinks
    Management Network 0 1 vmnic0

    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch1 128 2 128 1500 vmnic1

    PortGroup Name VLAN ID Used Ports Uplinks
    VMnet VLAN 5 0 0 vmnic1

    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch2 128 3 128 9000 vmnic2

    PortGroup Name VLAN ID Used Ports Uplinks
    iSCSI_1 0 1 vmnic2

  7. Verify the MTU column for the vSwitch that has the VMkernel port configured on it also matches the MTU size. For more information, see iSCSI and Jumbo Frames configuration on ESX/ESXi (1007654) or Enabling Jumbo Frames for VMkernel ports in a virtual distributed switch (1038827).

    Note: If both the VMkernel port and the vSwitch are configured correctly for Jumbo Frames, the configuration problem lies either on a network component, such as a physical switch or router, or on the storage processor.
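If the host side does turn out to be at fault, the MTU can be raised from the command line. These are standard ESXi commands, but the switch and interface names below are examples only; substitute your own:

```shell
# Set the vSwitch MTU to 9000 (example switch name).
esxcfg-vswitch -m 9000 vSwitch2

# Set the VMkernel port MTU to 9000 (ESXi 5.x and later; example vmk name).
esxcli network ip interface set -m 9000 -i vmk1

# Verify the MTU column for the VMkernel ports.
esxcfg-vmknic -l
```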

  8. Verify that all devices between the ESX host and the storage array (including physical network switches) are configured to support Jumbo Frames for the desired MTU size.


Additional Information