Testing VMkernel network connectivity with the vmkping command
Article ID: 344313


Products

VMware vCenter Server
VMware vSphere ESXi

Issue/Introduction

For troubleshooting purposes, it may be necessary to test VMkernel network connectivity between ESXi hosts in your environment.

This article provides you with the steps to perform a vmkping test between your ESXi hosts.

Environment

VMware vCenter Server 8.x
VMware vCenter Server 7.x
VMware vSphere ESXi 8.x
VMware vSphere ESXi 7.x

Resolution

The vmkping command sources a ping from the local VMkernel port.
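If you are not sure which vmkX interface to use, you can first list the VMkernel interfaces and their IPv4 addresses from the same SSH session (a quick check; the interface names and addresses shown come from your own host):

esxcli network ip interface ipv4 get

The output lists each vmkX interface with its IPv4 address, netmask, and address type, so you can identify the interface on the network segment you want to test.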

Instructions to test VMkernel ping connectivity with vmkping (an esxcli-based alternative is sketched after these steps):

  1. Connect to the ESXi host using an SSH session. For more information, see Using ESXi Shell in ESXi.
     
  2. In the command shell, run this command:

    vmkping -I vmkX #.#.#.#

    where #.#.#.# is the hostname or IP address of the server that you want to ping and vmkX is the VMkernel interface to ping out of.
     
  3. If you have Jumbo Frames configured in your environment, run the vmkping command with the -d and -s options:

    vmkping -I vmkX -d -s 8972 #.#.#.#
    • To test a 1500 MTU, run the command:

      vmkping -I vmkX #.#.#.# -d -s 1472

      Note: In these commands, the -d option sets the DF (Don't Fragment) bit on the IPv4 packet, and the -s value is the ICMP payload size. A payload of 8972 bytes tests a 9000 MTU and 1472 bytes tests a 1500 MTU, because the 20-byte IP header and the 8-byte ICMP header must be subtracted from the MTU.
  4. If vMotion is configured on the vMotion TCP/IP stack, run this command:

    vmkping -I vmk0 #.#.#.# -S vmotion
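As an alternative, the same tests can be run through the esxcli framework, which some prefer for scripting because it supports structured output. This is a sketch assuming the esxcli network diag ping namespace in ESXi 7.x/8.x; run esxcli network diag ping --help on your host to confirm the exact options:

    • Basic test out of a specific VMkernel interface:

      esxcli network diag ping --interface=vmkX --host=#.#.#.# --count=3

    • Jumbo Frame test (DF bit set, 8972-byte payload):

      esxcli network diag ping --interface=vmkX --host=#.#.#.# --df --size=8972

    • Test over the vMotion TCP/IP stack:

      esxcli network diag ping --netstack=vmotion --interface=vmk0 --host=#.#.#.#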

 

You can verify your MTU size from an SSH session by running this command:

esxcfg-nics -l

Output should be similar to:

esxcfg-nics -l

Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:02:00.00 e1000 Up 1000Mbps Full ##:##:##:##:##:## 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
vmnic1 0000:02:01.00 e1000 Up 1000Mbps Full ##:##:##:##:##:## 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)


esxcfg-vmknic -l

Output should be similar to:

esxcfg-vmknic -l

Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type

vmk1 iSCSI IPv4 10.10.10.10 255.255.255.0 10.10.10.255 ##:##:##:##:##:## 9000 65535 true STATIC
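If you prefer esxcli, the MTU and the TCP/IP stack of each VMkernel interface can also be checked with this command (field names may vary slightly between releases):

esxcli network ip interface list

In the output, check the MTU and Netstack Instance fields for the vmkX interface you are testing.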

A successful ping response is similar to:

vmkping -I vmk0 10.0.0.1

PING server(10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=10.245 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.926 ms
--- server ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.926/4.035/10.245 ms


An unsuccessful ping response is similar to:

vmkping 10.0.0.2
PING server (10.0.0.2) 56(84) bytes of data.
--- server ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 3017ms


Note: The commands shown above are the same for IPv6. Add the -6 option to the command and replace #.#.#.# with an IPv6 address (##:##:##:##:##:##:##:##), for example:

vmkping -6 -I vmkX ##:##:##:##:##:##:##:##
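For a Jumbo Frame test over IPv6, keep in mind that the IPv6 header is 40 bytes instead of 20, so the payload that corresponds to a 9000 MTU is 8952 bytes (9000 minus the 40-byte IPv6 header and the 8-byte ICMP header), not 8972. A sketch with placeholder interface and address:

vmkping -6 -I vmkX -d -s 8952 ##:##:##:##:##:##:##:##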

The full list of vmkping options is:

vmkping [args] [host]
 

arg              use
-4               use IPv4 (default)
-6               use IPv6
-c <count>       set packet count
-d               set DF bit (do not fragment) in IPv4, or disable fragmentation in IPv6
-D               VMkernel TCP stack debug mode
-i <interval>    set interval (secs)
-I <interface>   set outgoing interface, such as "-I vmk1"
-N <next_hop>    set IP*_NEXTHOP (bypasses routing lookup); for IPv4, -I is required to use -N
-s <size>        set the number of ICMP data bytes to be sent. The default is 56, which translates to a 64 byte ICMP frame once the 8 byte ICMP header is added (these sizes do not include the IP header)
-t <ttl>         set IPv4 Time To Live or IPv6 Hop Limit
-v               verbose
-W <time>        set timeout to wait if no responses are received (secs)
-X               XML output format for the esxcli framework
-S <stack>       set the network stack instance name. If unspecified, the default stack is used (Note: works only for IPv4, not IPv6)
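The stack names accepted by -S are the TCP/IP stack instances defined on the host. You can list them from the same SSH session:

esxcli network ip netstack list

Typical instances are defaultTcpipStack, vmotion, and vxlan (the vxlan stack exists when the host has been prepared for NSX).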

To test TEP-to-TEP VMkernel connectivity between hosts (vxlan stack):

vmkping -I vmk10 -S vxlan <destination_host's_TEP_VMK_IP>

Example:
vmkping -I vmk10 -S vxlan #.#.#.#
PING #.#.#.# (#.#.#.#): 56 data bytes
64 bytes from #.#.#.#: icmp_seq=0 ttl=64 time=1.218 ms
64 bytes from #.#.#.#: icmp_seq=1 ttl=64 time=0.716 ms
64 bytes from #.#.#.#: icmp_seq=2 ttl=64 time=1.097 ms

--- #.#.#.# ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.716/1.010/1.218 ms
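If you do not know the destination host's TEP VMkernel IP, you can look it up on that host by listing the interfaces bound to the vxlan stack. A sketch, assuming the --netstack filter is available on your build (otherwise list all interfaces and match them against the vxlan entry from esxcli network ip netstack list):

esxcli network ip interface list --netstack=vxlan
esxcli network ip interface ipv4 get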


Notes:

  • If you see intermittent ping success, this might indicate that incompatible NICs are teamed on the VMkernel port. Either team compatible NICs or set one of the NICs to standby.
  • If you do not see a response when pinging the server by hostname, ping its IP address instead. Pinging the IP address lets you determine whether the problem is caused by hostname resolution (a quick resolution check is sketched after these notes). If you are testing connectivity to a VMkernel port on another server, use that VMkernel port's IP address, because the server's hostname usually resolves to its management address.
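A quick way to check hostname resolution from the ESXi shell is to confirm the configured DNS servers and then resolve the name directly with nslookup, which is included in the ESXi shell on most builds (if it is not present on yours, test resolution from another machine that uses the same DNS servers):

esxcli network ip dns server list
nslookup <server hostname>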



Additional Information