Testing VMkernel network connectivity with the vmkping command


Article ID: 344313



VMware vCenter Server VMware vSphere ESXi


For troubleshooting purposes, it may be necessary to test VMkernel network connectivity between ESXi hosts in your environment.

This article provides you with the steps to perform a vmkping test between your ESXi hosts.


VMware vSphere ESXi 7.0.0
VMware ESX 4.1.x
VMware vSphere ESXi 5.0
VMware vSphere ESXi 6.0
VMware VirtualCenter 2.5.x
VMware ESXi 4.1.x Installable
VMware vCenter Server 5.5.x
VMware ESX 4.0.x
VMware vCenter Server 5.1.x
VMware ESXi 4.0.x Embedded
VMware vSphere ESXi 5.5
VMware ESXi 3.5.x Installable
VMware ESXi 3.5.x Embedded
VMware vCenter Server 6.0.x
VMware vCenter Server 4.0.x
VMware vSphere ESXi 5.1
VMware vSphere ESXi 6.5
VMware ESX Server 3.0.x
VMware ESXi 4.1.x Embedded
VMware vCenter Server 5.0.x
VMware vCenter Server 4.1.x
VMware ESXi 4.0.x Installable
VMware vCenter Server 6.5.x
VMware ESX Server 3.5.x
VMware VirtualCenter 2.0.x


The vmkping command sources a ping from the local VMkernel port.

To test VMkernel ping connectivity with vmkping:
  1. Connect to the ESXi host using an SSH session. For more information, see Using ESXi Shell in ESXi 5.x, 6.x and 7.x (2004746).
  2. In the command shell, run this command:

    vmkping -I vmkX x.x.x.x

    where x.x.x.x is the hostname or IP address of the server you want to ping and vmkX is the VMkernel interface to ping from.
  3. If you have Jumbo Frames configured in your environment, run the vmkping command with the -s and -d options:

    vmkping -I vmkX -d -s 8972 x.x.x.x

    Note: In the command, the -d option sets the DF (Don't Fragment) bit on the IPv4 packet. 8972 is the payload size needed to test a 9000 MTU in ESXi: the 20-byte IP header and 8-byte ICMP header account for the remaining 28 bytes.
To test 1500 MTU, run the command:

vmkping -I vmkX x.x.x.x -d -s 1472
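The -s values above follow directly from the MTU: the ICMP payload must leave room for the 20-byte IPv4 header and the 8-byte ICMP header. A minimal sketch of that arithmetic (the helper name payload_for_mtu is hypothetical, not part of ESXi):

```shell
# Hypothetical helper: compute the vmkping -s payload size for a given MTU.
# Overhead is 20 bytes (IPv4 header) + 8 bytes (ICMP header) = 28 bytes.
payload_for_mtu() {
  echo $(( $1 - 28 ))
}

payload_for_mtu 9000   # prints 8972
payload_for_mtu 1500   # prints 1472
```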

You can verify your MTU size from an SSH session by running this command:

esxcfg-nics -l

Output should be similar to:

esxcfg-nics -l

Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:02:00.00 e1000 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
vmnic1 0000:02:01.00 e1000 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

esxcfg-vmknic -l

Output should be similar to:

esxcfg-vmknic -l

Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type

vmk1 iSCSI IPv4 XX:XX:XX:XX:XX:XX 9000 65535 true STATIC
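If you want to script this check, the interface name and MTU columns of esxcfg-vmknic -l style output can be pulled out with awk. A sketch against a hardcoded sample line (the IP and MAC values are made up for illustration; on a live host you would pipe the real command output instead):

```shell
# Sketch: extract interface name and MTU from esxcfg-vmknic -l style output.
# Fields: Interface PortGroup Family IP Netmask Broadcast MAC MTU ...
sample='vmk1 iSCSI IPv4 10.0.0.5 255.255.255.0 10.0.0.255 00:50:56:aa:bb:cc 9000 65535 true STATIC'
echo "$sample" | awk '{ print $1, $8 }'   # prints: vmk1 9000
```

Note that a port group name containing spaces would shift the field positions, so this is a sketch, not a robust parser.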

A successful ping response is similar to:

vmkping -I vmk0 x.x.x.x

PING server (x.x.x.x): 56 data bytes
64 bytes from x.x.x.x: icmp_seq=0 ttl=64 time=10.245 ms
64 bytes from x.x.x.x: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from x.x.x.x: icmp_seq=2 ttl=64 time=0.926 ms
--- server ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.926/4.035/10.245 ms

An unsuccessful ping response is similar to:

PING server (x.x.x.x) 56(84) bytes of data.
--- server ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 3017ms
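When automating these tests, the packet-loss figure can be parsed out of the statistics line. A sketch against a hardcoded sample line (on a host you would capture the vmkping output instead):

```shell
# Sketch: pull the packet-loss percentage out of a ping statistics line.
stats='3 packets transmitted, 0 received, 100% packet loss, time 3017ms'
loss=$(echo "$stats" | sed -n 's/.*[ ,]\([0-9][0-9]*\)% packet loss.*/\1/p')
echo "$loss"   # prints 100
```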

Note: The commands above are the same for IPv6. Just add the -6 option and replace x.x.x.x with an IPv6 address, for example:

vmkping -6 -I vmkX xx:xx:xx:xx:xx:xx:xx:xx

The full list of vmkping options is:

vmkping [args] [host]
-4               use IPv4 (default)
-6               use IPv6
-c <count>       set packet count
-d               set DF bit (do not fragment) in IPv4, or disable fragmentation in IPv6
-D               VMkernel TCP stack debug mode
-i <interval>    set interval (secs)
-I <interface>   set outgoing interface, such as "-I vmk1"
-N <next_hop>    set IP*_NEXTHOP (bypasses routing lookup); for IPv4, -I is required to use -N
-s <size>        set the number of ICMP data bytes to be sent. The default is 56, which translates into 64 ICMP data bytes when combined with the 8-byte ICMP header
-t <ttl>         set IPv4 Time To Live or IPv6 Hop Limit
-W <time>        set timeout to wait if no responses are received (secs)
-X               XML output format for esxcli framework
-S <stack>       set the network stack instance name; if unspecified, the default stack is used (IPv4 only, not IPv6)

  • If you see intermittent ping success, this might indicate that you have incompatible NICs teamed on the VMkernel port. Either team compatible NICs or set one of the NICs to standby.
  • If you do not see a response when pinging by the hostname of the server, ping the IP address instead; this determines whether the problem is hostname resolution. If you are testing connectivity to a VMkernel port on another server, use that VMkernel port's IP address, because the server's hostname usually resolves to the service console address on the remote server.
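The second bullet's decision (retry by IP when pinging by name fails) can be sketched as a small shell helper. Everything here is illustrative: check_target is a hypothetical function and the host and IP values are made up; on ESXi you would substitute your remote VMkernel port's name and address.

```shell
# Sketch: classify a ping target so the right follow-up test is suggested.
# A target containing letters is treated as a hostname (check DNS first);
# a dotted-decimal target is treated as an IP (test connectivity directly).
check_target() {
  case "$1" in
    *[a-zA-Z]*) echo "hostname: verify name resolution before retrying the ping" ;;
    *)          echo "ip: test VMkernel connectivity directly with vmkping -I vmkX $1" ;;
  esac
}

check_target esx02.lab.local
check_target 192.168.10.12
```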

Additional Information

VMware Skyline Health Diagnostics for vSphere - FAQ
Troubleshooting vMotion fails with network errors