VMware vSAN 7.x
VMware vSAN 8.x
The hosts are unable to communicate with each other over vSAN traffic due to an MTU mismatch.
Run the command "esxcli vsan network list" to identify the VMkernel adapter (vmk) used for vSAN traffic.

esxcli vsan network list
Interface
   VmkNic Name: vmk1
   IP Protocol: IP
   Interface UUID: ########-####-####-####-########
   Agent Group Multicast Address: ###.#.#.#
   Agent Group IPv6 Multicast Address: ####::#:#:#
   Agent Group Multicast Port: ####
   Master Group Multicast Address: ###.#.#.#
   Master Group IPv6 Multicast Address: ::#:#:#####
   Master Group Multicast Port: #####
   Host Unicast Channel Bound Port: #####
   Data-in-Transit Encryption Key Exchange Port: 0
   Multicast TTL: 5
   Traffic Type: vsan
In the above example, it is confirmed that vmk1 is used for vSAN traffic.
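As a cross-check, the MTU of each VMkernel interface, including the vSAN vmk identified above, can also be listed with esxcli; the output reports an MTU value per interface:

esxcli network ip interface list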
esxcfg-vswitch -l" to identify the vSwitch used for vSAN traffic and check the MTU configured on it.esxcfg-vswitch -lDVS Name Num Ports Used Ports Configured Ports MTU MTUSwitch name 2520 10 512 9000
DVPort ID In Use Client### 1 vmnicl ### 1 vmnic0### 0 ### 0 # 1 vmk0### 1 vmk1### 1 vmk2
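The switch MTU can also be checked with esxcli; the commands below list standard vSwitches and VMware distributed switches respectively, and both are expected to include the configured MTU in their output (for a distributed switch, the MTU can also be reviewed in the vSphere Client under the switch settings):

esxcli network vswitch standard list
esxcli network vswitch dvs vmware list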
esxcfg-vmknic -l" to verify the MTU set on the VMkernel adapter (vmk)esxcfg-vmknic -lvmk1 128 IPv4 900065535 true STATIC DefaultTCPIPStack.Run the command "esxcfg-nics -l" to confirm the MTU configured on the physical nics (vmnics).esxcfg-nics -lName PCI Driver Link Speed Duplex MAC Address MTU Descriptionvmnico ####: ##: ##: # vmxnet Up 10000Mbps Full ##:##:##:##:##:##:#### 9000vmnicl ####: ##: ##: # vmxnet Up 10000Mbps Full ##:##:##:##:##:##:#### 9000
In the above example, it is confirmed that the vmnics are configured with MTU 9000.
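The MTU on the physical adapters can also be confirmed with esxcli, which prints an MTU column for every vmnic:

esxcli network nic list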
Note: Repeat the above procedure for all the hosts in the cluster and make sure the MTU is consistent across the network. The MTU setting must be consistent across the entire environment, including the vSphere VMkernel interfaces, the VMNICs, and the physical switch ports. In some cases, MTU mismatches can occur even within vSphere itself, between the VMkernel interfaces and the VMNICs.
Run the command "vmkping -I vmkX -d -s 8972 <IP address of faulty node>" to test jumbo-frame connectivity over the vSAN VMkernel interface. A payload of 8972 bytes plus 28 bytes of IP and ICMP headers corresponds to an MTU of 9000, and the -d option prevents fragmentation.

vmkping -I vmkX -d -s 8972 <IP address of faulty node>
PING ##.##.###.## (##.##.###.##): 8972 data bytes
--- ##.##.###.## ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

Run the command "vmkping -I vmkX -d -s 1472 <IP address of faulty node>" to confirm that connectivity works with a standard MTU of 1500.

vmkping -I vmkX -d -s 1472 <IP address of faulty node>
PING ##.##.###.## (##.##.###.##): 1472 data bytes
1480 bytes from ##.##.###.##: icmp_seq=0 ttl=64 time=0.118 ms
1480 bytes from ##.##.###.##: icmp_seq=1 ttl=64 time=0.116 ms
1480 bytes from ##.##.###.##: icmp_seq=2 ttl=64 time=0.106 ms
--- ##.##.###.## ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.106/0.113/0.118 ms

If the correct MTU for the environment is 1500 and the VMkernel adapter is set to 9000, change it to 1500 to allow cluster creation.
To change the MTU on the virtual switch (a command-line alternative is shown after these steps):
1. In the vSphere Web Client, navigate to the host.
2. On the Configure tab, click Virtual Switches.
3. Navigate to the virtual switch, then click Edit.
4. Set the MTU value to 9000.
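If the vSAN traffic runs over a standard vSwitch rather than a distributed switch, the same change can also be made from the ESXi command line; vSwitch0 below is a placeholder name, so substitute the switch identified earlier and use the MTU value that is correct for the environment:

esxcli network vswitch standard set -v vSwitch0 -m 9000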
To change the MTU on the VMkernel adapter used for vSAN (a command-line alternative is shown after these steps):
1. In the vSphere Web Client, navigate to the host.
2. On the Configure tab, click VMkernel Adapters.
3. Click Edit.
4. Set the MTU value to 9000.
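The VMkernel adapter MTU can likewise be changed from the ESXi command line; vmk1 below corresponds to the vSAN interface identified earlier, and the value should be 9000 or 1500 depending on which MTU is correct for the environment:

esxcli network ip interface set -i vmk1 -m 9000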
Note: MTU size can be increased up to 9000 bytes.
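After adjusting the MTU, the vmkping test from earlier can be repeated to confirm that large frames now pass between the hosts without fragmentation (vmk1 and the 8972-byte payload, which corresponds to an MTU of 9000, are examples to adjust for the environment):

vmkping -I vmk1 -d -s 8972 <IP address of the remote vSAN host>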
Refer to Enabling Jumbo Frames on virtual switches.