This article provides steps to work around the issue when you are unable to unmount an NFS datastore in ESXi 7.x and 8.x.
When using NFS datastores on certain NetApp NFS filer models with an ESXi host, the following symptoms may appear:
NFSLock: 515: Stop accessing ## #########
NFS: 283: Lost connection to the server ###.###.###.# mount point /vol/datastore##, mounted as ########-########-####-############ ("datastore##")
NFSLock: 477: Start accessing ## ######### again
NFS: 292: Restored connection to the server ###.###.###.# mount point /vol/datastore##, mounted as ########-########-####-############ ("datastore##")
<YYYY-MM-DD>T<time>Z cpu2:8194)StorageApdHandler: 277: APD Timer killed for ident [########-########]
<YYYY-MM-DD>T<time>Z cpu2:8194)StorageApdHandler: 402: Device or filesystem with identifier [########-########] has exited the All Paths Down state.
<YYYY-MM-DD>T<time>Z cpu2:8194)StorageApdHandler: 902: APD Exit for ident [########-########]!
<YYYY-MM-DD>T<time>Z cpu16:8208)NFSLock: 570: Start accessing ## ############## again
<YYYY-MM-DD>T<time>Z cpu2:8194)WARNING: NFS: 322: Lost connection to the server ##.##.##.# mount point /vol/nfsexamplevolume, mounted as ########-########-################ ("NFS_EXAMPLE_VOLUME")
<YYYY-MM-DD>T<time>Z cpu2:8194)WARNING: NFS: 322: Lost connection to the server ##.##.##.# mount point /vol/nfsexamplevolume2, mounted as ########-########-################ ("NFS_EXAMPLE_VOLUME2")
<YYYY-MM-DD>T<time>Z: [vmfsCorrelator] ###############: [esx.problem.vmfs.nfs.server.disconnect] ###.###.###.# /vol/datastore## ########-########-####-############ volume-name:datastore##
<YYYY-MM-DD>T<time>Z: [vmfsCorrelator] ##############: [esx.problem.vmfs.nfs.server.restored] ###.###.###.# /vol/datastore## ########-########-####-############ volume-name:datastore##

A packet capture shows TCP ZeroWindow frames:

No      Time        Source     Destination  Protocol  Length  Info
######  ###.######  ##.#.#.##  ##.#.#.##    RPC      574     [TCP ZeroWindow] Continuation
######  ###.######  ##.#.#.##  ##.#.#.##    TCP      1514    [TCP ZeroWindow] [TCP segment of a reassembled PDU]

NFS queue depth settings:
|----Option Name..................................MaxQueueDepth
|----Current Value................................4294967295
|----Default Value................................4294967295
|----Min Value....................................1
|----Max Value....................................4294967295
|----Hidden.......................................false
|----Parent......................................./NFS/
|----Path........................................./NFS/MaxQueueDepth
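For context, the default and maximum value shown above, 4294967295, is 2^32 - 1, meaning no practical NFS queue-depth cap is imposed out of the box; this is why the host can post enough outstanding requests to drive the filer into the TCP ZeroWindow condition seen in the capture. A quick sanity check of that figure:

```shell
# Sanity-check the default: 4294967295 is 2^32 - 1, i.e. the default
# value imposes no practical queue-depth limit.
default_depth=$(( (1 << 32) - 1 ))
echo "$default_depth"   # prints 4294967295

# On an ESXi shell, the live value can be inspected with:
#   esxcli system settings advanced list -o /NFS/MaxQueueDepth
```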
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x
Workaround 1
To work around this issue and prevent it from recurring, reduce the NFS.MaxQueueDepth advanced parameter to a much lower value (for example, 64). This reduces or eliminates the disconnections.
Alternatively, if sufficiently licensed, use the Storage I/O Control feature to work around the issue. This feature requires an Enterprise Plus license on all ESXi hosts.
When Storage I/O Control is enabled, it dynamically manages the value of MaxQueueDepth, avoiding the issue.
Workaround 2
To set the NFS.MaxQueueDepth advanced parameter using the vSphere Client, select the host, navigate to Configure > System > Advanced System Settings, and edit the NFS.MaxQueueDepth value.
To set the NFS.MaxQueueDepth advanced parameter via the command line:
esxcfg-advcfg -s 64 /NFS/MaxQueueDepth   # set the queue depth to 64
esxcfg-advcfg -g /NFS/MaxQueueDepth      # read the value back to confirm
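The same change can also be made with the esxcli equivalent (a sketch, assuming the standard `esxcli system settings advanced` namespace on ESXi 7.x/8.x). Note that the setting is per-host, so it must be applied on every ESXi host that mounts the affected datastores:

```shell
# Lower the NFS queue depth to 64 on this host, then read the
# setting back to confirm the change took effect.
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
esxcli system settings advanced list -o /NFS/MaxQueueDepth
```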