Running the command esxcli storage nfs list on the affected host shows Accessible: false and Mounted: false for the impacted datastores:

Volume Name  Host       Share        Accessible  Mounted  Read-Only  isPE   Hardware Acceleration
-----------  ---------  -----------  ----------  -------  ---------  -----  ---------------------
Datastore1   192.#.#.#  /Datastore1  false       false    false      false  Unknown
Datastore2   192.#.#.#  /Datastore2  false       false    false      false  Unknown
Attempting to mount the NFS datastore fails with the error: “Datastore is already exported by a volume with the same name.”
esxcli storage nfs add -H <hostname> -s <share> -v <volumename>
Unable to create new NAS volume: <hostname:/share> is already exported by a volume with the name <volume name>
Unmounting and remounting the NFS volume results in the error: “Unable to connect to NFS server.”
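For reference, an unmount/remount attempt from the command line typically looks like the following sketch (server, share, and volume names are placeholders); in the failed state the add step returns the "Unable to connect to NFS server" error:

# Remove (unmount) the inaccessible NFS volume from the host
esxcli storage nfs remove -v <volumename>

# Attempt to re-add (remount) it; in the failed state this returns
# "Unable to connect to NFS server"
esxcli storage nfs add -H <NFS_server> -s <share> -v <volumename>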
Check the VMkernel port used for NFS:
• Confirm whether a dedicated VMkernel port is configured for NFS traffic (see the example below for listing the host's VMkernel interfaces).
• If no dedicated port is configured, the management VMkernel interface is used for NFS connectivity by default.
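The commands below are one way to list the host's VMkernel interfaces and identify which one sits on the NFS subnet (interface names and output columns vary by environment and ESXi version):

# List all VMkernel interfaces with their portgroups, MTU, and enabled state
esxcli network ip interface list

# Show the IPv4 address and netmask of each VMkernel interface
esxcli network ip interface ipv4 get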
Ensure the MTU settings are consistent across all vmkernel adapters and associated virtual switches. Mismatched MTU settings can result in failed or degraded connectivity.
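As a quick host-side check of MTU consistency (the MTU on the physical switch ports must be verified separately on the switch itself):

# MTU of each VMkernel adapter is shown in the output of:
esxcli network ip interface list

# MTU configured on the distributed switch(es):
esxcli network vswitch dvs vmware list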
Run the following command to test connectivity from the ESXi host to the NFS server:
vmkping -I <vmk#> -d -s <size> <NFS_Target_IP>
Replace <vmk#> with the VMkernel interface used for NFS (e.g., vmk1), <size> with the test payload size, and <NFS_Target_IP> with the IP address of the NFS server. Because -d sets the do-not-fragment flag, the payload size should be the configured MTU minus 28 bytes of IP/ICMP header overhead, for example 1472 for a 1500 MTU or 8972 for a 9000 MTU.
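For example, a jumbo-frame and a standard-frame test might look like this (the interface name and IP address are placeholders):

# 9000 MTU path: payload = 9000 - 28 = 8972
vmkping -I vmk1 -d -s 8972 192.168.10.50

# 1500 MTU path: payload = 1500 - 28 = 1472
vmkping -I vmk1 -d -s 1472 192.168.10.50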
If the vmkping command fails with 100% packet loss, investigate:
• The network configuration of the affected ESXi host.
• VLAN settings and tagging on both the host and physical switch ports (see the commands below for a quick host-side review).
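The following commands can help with the host-side review (output varies slightly by ESXi version; VLAN settings for distributed portgroups are configured in vCenter):

# List physical NICs with their link state, speed, and driver
esxcli network nic list

# List virtual switches, portgroups, and uplinks; for standard switches this
# also shows the VLAN ID assigned to each portgroup
esxcfg-vswitch -l

# List distributed switches with their uplinks and MTU
esxcli network vswitch dvs vmware list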
VMware ESXi 7.x
VMware ESXi 8.x
The issue is caused by a misconfiguration of the uplinks on the vSphere Distributed Switch (DVSwitch) for the impacted ESXi host. Specifically, the uplink assigned to the portgroup used for NFS connectivity does not allow traffic on the VLAN required for NFS access. As a result, the ESXi host is unable to establish a connection to the NFS server.
Note: The interface names (e.g., vmk3, vmnic8, vmnic9) and VLAN IDs (e.g., 1, 2) mentioned below may vary depending on the network configuration in other environments. Always validate against your specific setup.
In this case, vmk3 was identified as the VMkernel interface configured for NFS traffic, tagged with VLAN 1.
A vmkping test from vmk3 to the NFS server failed with 100% packet loss.
However, a vmkping from the management VMkernel interface vmk0 to the same NFS server succeeded, indicating that the NFS server is reachable from the host's management network.
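For illustration, the two tests looked roughly like the following (the NFS server IP is a placeholder):

# Fails with 100% packet loss over the NFS VMkernel interface
vmkping -I vmk3 <NFS_Target_IP>

# Succeeds over the management VMkernel interface
vmkping -I vmk0 <NFS_Target_IP>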
Further network analysis revealed:
• A LAG (Link Aggregation Group) is configured on the DVSwitch.
• vmnic9 is assigned to the NFS portgroup uplink but does not allow VLAN 1 traffic.
• vmnic8, which is also part of the LAG and used for management traffic, allows both VLAN 1 and VLAN 2 (used by management).
• Because NFS traffic was relying on vmnic9, which blocks VLAN 1, the NFS connection failed, while management traffic continued to function correctly via vmnic8.
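One way to inspect the DVSwitch uplinks and LAG membership from the host is sketched below (the lacp namespace is only present when LACP is configured on the DVSwitch, and command availability can vary by ESXi version):

# Show the distributed switch, its uplinks (e.g., vmnic8, vmnic9), and MTU
esxcli network vswitch dvs vmware list

# Show LACP status for the LAG, including member vmnics and their state
esxcli network vswitch dvs vmware lacp status get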
Update the DVSwitch uplink configuration on the impacted ESXi host so that the NFS portgroup uses a vmnic that allows traffic on the VLAN configured for NFS.
Refer to the following document for detailed steps on updating the uplinks for a DVSwitch:
Configure Physical Network Adapters on a vSphere Distributed Switch
After making the changes, vmkping to the NFS server will be successful, confirming network connectivity.
The NFS datastores will be remounted automatically and become accessible again.
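To verify the recovery, the connectivity test and the datastore listing can be re-run; the datastores should now report Accessible: true and Mounted: true (interface name, payload size, and IP are placeholders):

# Re-test connectivity over the NFS VMkernel interface
vmkping -I vmk3 -d -s 1472 <NFS_Target_IP>

# Confirm the datastores are mounted and accessible again
esxcli storage nfs list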