Symptoms:
You're trying to add a new datastore that uses NFS storage to your ESXi host.
You may see the following error in the vSphere Client notification window and recent tasks list:
"An error occurred during host configuration: . Operation failed, diagnostics report: Mount failed: Unable to complete sysinfo operation. Please see the VMkernel log file for details.: Unable to connect to NFS server: VSI node (5001:)"
The error contains the text "Unable to connect to NFS server"
The error does not contain "The mount request was denied by the NFS server"
vSphere ESXi
An NFS error reporting that the client was unable to connect, rather than that the mount failed or was denied, indicates that the server could not be reached at all.
The inability to reach the server can be caused by many configuration issues, such as an incorrect IP address, a blocking firewall rule, an incorrect VLAN, or an incorrect uplink.
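The ESXi host's own firewall and the VMkernel log referenced in the error message can be checked quickly from an ESXi shell session. A minimal check might look like the following (the grep pattern simply filters for the standard NFS client rulesets):
esxcli network firewall ruleset list | grep -i nfs     # the NFS client ruleset(s) should be enabled (true)
tail -n 50 /var/log/vmkernel.log                       # review the most recent NFS and mount-related messages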
If other clients are successfully using the NFS server and the correct IP has been specified, the next steps are to check the network uplink and VLAN.
Many network environments use a separate storage network on its own VLAN, and that VLAN is often reachable only through specific uplinks; not every uplink can carry every VLAN. If other ESXi hosts or servers on the network have successfully connected to the storage server, reviewing their configuration can reveal the correct VLAN and uplink. See Troubleshooting VLAN Connectivity Issues.
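If another ESXi host already mounts storage from this server, its VLAN and uplink settings can be read from its shell; for example, with a standard vSwitch and a port group named NFS (the port group name is a placeholder for your environment):
esxcli network vswitch standard portgroup list                                        # shows each port group with its vSwitch and VLAN ID
esxcli network vswitch standard portgroup policy failover get --portgroup-name=NFS   # shows the active and standby uplinks for that port group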
Be aware of how native VLANs, access ports, trunk ports and allowed VLAN lists operate, as these affect the ability to access a particular VLAN on an uplink.
See VLAN configuration on virtual switches, physical switches, and virtual machines for an overview of these concepts.
Identify the VMkernel interface (vmk) that will be used when attempting to connect to the NFS server. This is generally the VMkernel interface whose IP address is in the same subnet as the target server.
For best results, exactly one VMkernel interface should be associated with this subnet, and subnets should not overlap. Use an IP subnet or IP range calculator if you are unsure. If no VMkernel IP configuration matches the subnet of the target IP, the traffic will be routed through the default gateway.
Establish a shell session to the ESXi host that is attempting to add the NFS storage.
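From that shell session (for example over SSH, if it is enabled on the host), you can confirm which VMkernel interface sits in the NFS server's subnet and how traffic to that subnet is routed. The commands below only read the current configuration:
esxcli network ip interface ipv4 get     # lists each VMkernel interface with its IPv4 address and netmask
esxcfg-route -l                          # shows the VMkernel routing table, including the default gateway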
Use the vmkping command to test connectivity to the server, replacing <vmk#> with the VMkernel interface you identified above and <IP address> with the IP address of the NFS server.
vmkping -I <vmk#> <IP address>
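For example, if the interface identified above is vmk1 and the NFS server is at 192.168.50.20 (both values are illustrative):
vmkping -I vmk1 192.168.50.20
Replies indicate that the server is reachable at layer 3 from that interface.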
If this ping fails, you might not have connectivity to the NFS server on the uplinks associated with the VMkernel.
Using what is known about your network, you can test different uplinks and VLANs to see whether the NFS server can be reached by following these steps:
Create a test port group with its own VMkernel interface, using an IP address in the NFS server's subnet. For each candidate VLAN ID, including None, repeat the ping test against each uplink by editing the port group's Teaming and Failover policy so that only that uplink is Active, then run the vmkping command you used above, substituting <vmk#> with the test VMkernel interface. A command-line sketch of this loop follows below.
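A minimal sketch of one pass through this loop from the ESXi shell, assuming a standard vSwitch named vSwitch1, a test port group NFS-test, a test interface vmk9, a free address 192.168.50.99/24 in the storage subnet, VLAN 50, and uplink vmnic2 (every one of these names and values is a placeholder for your environment):
esxcli network vswitch standard portgroup add --portgroup-name=NFS-test --vswitch-name=vSwitch1                    # create the test port group
esxcli network ip interface add --interface-name=vmk9 --portgroup-name=NFS-test                                    # add a test VMkernel interface to it
esxcli network ip interface ipv4 set --interface-name=vmk9 --ipv4=192.168.50.99 --netmask=255.255.255.0 --type=static
esxcli network vswitch standard portgroup set --portgroup-name=NFS-test --vlan-id=50                               # set the VLAN to test
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS-test --active-uplinks=vmnic2    # force a single active uplink
vmkping -I vmk9 <IP address>                                                                                       # test connectivity to the NFS server
Repeat the last three commands with different --vlan-id and --active-uplinks values until a combination answers the ping; a VLAN setting of None corresponds to --vlan-id=0.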
After this test, if you have identified an uplink (or set of uplinks) and a VLAN setting that reach the NFS server, you can apply these settings to the port group associated with the VMkernel interface used for the NFS connection, or simply keep the new port group and/or VMkernel interface you created. Where multiple uplinks can reach the NFS server, they can be configured as Active/Active or Active/Standby in the Teaming and Failover policy.
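Once connectivity is confirmed, the datastore can be added again through the vSphere Client, or from the shell. A sketch for an NFS v3 mount (the server address, export path, and datastore name are placeholders; NFS 4.1 uses the esxcli storage nfs41 namespace instead):
esxcli storage nfs add --host=<IP address> --share=/export/share1 --volume-name=NFS-datastore1
esxcli storage nfs list     # the new datastore should be listed as accessible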
Key takeaway: Create a new port group and test all uplinks along with all associated VLAN configurations. Storage servers can often only be accessed through specific uplinks and VLANs due to switch configuration and network topology decisions.