a) vSAN: MTU check (ping with large packet size)
b) Cluster partition
c) Object health
d) Performance issues on the vSAN datastore due to MTU mismatch.
e) Cannot browse the datastore or register VMs.
VMware vSAN 7.x
VMware vSAN 8.x
The hosts are unable to communicate with each other over vSAN traffic due to an MTU mismatch.
Run the command "esxcli vsan network list" to identify the VMK used for vSAN traffic.esxcli vsan network listInterfaceVmkNic Name: vmk1IP Protocol: IPInterface UUID: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyAgent Group Multicast Address: xxx.x.x.xAgent Group IPv6 Multicast Address: xxxx: :x:x:xAgent Group Multicast Port: zzzzzMaster Group Multicast Address: xxx.x.x.xMaster Group IPv6 Multicast Address: xxxx: :x:x:xMaster Group Multicast Port: zzzzzHost Unicast Channel Bound Port: zzzzzData-in-Transit Encryption Key Exchange Port: 0Multicast TTL: 5Traffic Type: vsan
In the above example, it is confirmed that vmk1 is used for vSAN traffic.
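To pull just the interface name from that output, the listing can be filtered in the ESXi shell (a minimal sketch):

esxcli vsan network list | grep "VmkNic Name"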
esxcfg-vswitch -l" to identify the vSwitch used for vSAN traffic and check the MTU configured on it.esxcfg-vswitch -lDVS Name Num Ports Used Ports Configured Ports MTU MTUSwitch name 2520 10 512 9000
DVPort ID In Use Client512 1 vmnicl 513 1 vmnic0514 0 515 0 0 1 vmk0128 1 vmk1256 1 vmk2
esxcfg-vmknic -l" to verify the MTU set on the VMkernel adapter (vmk)esxcfg-vmknic -lvmk1 128 IPv4 900065535 true STATIC DefaultTCPIPStack.Run the command "esxcfg-nics -l" to confirm the MTU configured on the physical nics (vmnics).esxcfg-nics -lName PCI Driver Link Speed Duplex MAC Address MTU Descriptionvmnico xxxx: xx: xx: x vmxnet Up 10000Mbps Full xx:xx:xx:xx:xx:xx:xxxx 9000vmnicl xxxx: xx: xx: x vmxnet Up 10000Mbps Full xx:xx:xx:xx:xx:xx:xxxx 9000
In the above example, it is confirmed that both vmnics are configured with an MTU of 9000.
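On current ESXi releases, the same values can also be cross-checked with esxcli; a minimal sketch (the interface and switch names correspond to the example above):

esxcli network ip interface list        # MTU of each VMkernel adapter (vmk)
esxcli network nic list                 # MTU of each physical NIC (vmnic)
esxcli network vswitch standard list    # MTU of each standard vSwitch
esxcli network vswitch dvs vmware list  # MTU of each distributed switch seen by the host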
Note: Repeat the above procedure for all the hosts in the cluster. The MTU setting must be consistent across the entire environment, including the vSphere VMkernel interfaces, the vmnics, and the physical switch ports. In some cases, MTU mismatches can occur even within vSphere itself, between the VMkernel adapters and the vmnics.
To verify connectivity over the vSAN VMkernel adapter, run a vmkping with the don't-fragment flag (-d) and a large payload. A jumbo-frame-sized ping (-s 8972) fails when the MTU is mismatched, while a standard-frame ping (-s 1472) succeeds:

vmkping -I vmkx -d -s 8972 <IP address of faulty node>
PING xx.xx.xxx.xx (xx.xx.xxx.xx): 8972 data bytes
--- xx.xx.xxx.xx ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

vmkping -I vmkx -d -s 1472 <IP address of faulty node>
PING xx.xx.xxx.xx (xx.xx.xxx.xx): 1472 data bytes
1480 bytes from xx.xx.xxx.xx: icmp_seq=0 ttl=64 time=0.118 ms
1480 bytes from xx.xx.xxx.xx: icmp_seq=1 ttl=64 time=0.116 ms
1480 bytes from xx.xx.xxx.xx: icmp_seq=2 ttl=64 time=0.106 ms
--- xx.xx.xxx.xx ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.106/0.113/0.118 ms

If the correct MTU for the environment is 1500 and the VMkernel adapter is set to 9000, change it to 1500 to allow for cluster creation.
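A minimal CLI sketch of that change, assuming vmk1 is the vSAN VMkernel adapter identified earlier:

# Set the VMkernel adapter MTU to 1500, then re-verify
esxcli network ip interface set -i vmk1 -m 1500
esxcfg-vmknic -l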
To configure Jumbo Frames on a vDS in vSphere Web Client:
1. In the vSphere Web Client, navigate to the distributed switch.
2. Right-click the distributed switch and select Settings > Edit Settings.
3. On the Advanced tab, set the MTU value to 9000.
4. Click OK.
To configure Jumbo Frames on a vSS in vSphere Web Client:
1. In the vSphere Web Client, navigate to the host.
2. On the Configure tab, click Virtual Switches.
3. Navigate to the virtual switch, then click Edit.
4. Set the MTU value to 9000.
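The same change can be made from the ESXi shell; a minimal sketch, assuming the standard vSwitch is named vSwitch0:

# Set the standard vSwitch MTU to 9000, then confirm
esxcli network vswitch standard set -v vSwitch0 -m 9000
esxcfg-vswitch -l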
To enable Jumbo Frames on a VMkernel port using the vSphere Web Client in vCenter Server:
1. In the vSphere Web Client, navigate to the host.
2. On the Configure tab, click VMkernel Adapters.
3. Select the VMkernel adapter used for vSAN traffic and click Edit.
4. Set the MTU value to 9000.
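Once the MTU is consistent end to end, re-run the large-payload vmkping shown earlier to confirm that jumbo frames pass (assuming vmk1 carries the vSAN traffic):

vmkping -I vmk1 -d -s 8972 <vSAN IP of another host>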
Note: You can increase the MTU size up to 9000 bytes.
For details on setting the MTU for VMkernel adapters and vmnics within vSphere, refer to the related KB article.