This issue is resolved in:
- VMware vSphere ESXi 6.7 Patch release ESXi670-202004002
- VMware vSphere 7.0b
The above patches are available for download from your support.broadcom.com account.
Workaround:
To work around this issue, disable automatic unmap processing on all the hosts sharing the volume.
Note: After disabling automatic unmap, you can still reclaim space manually with esxcli.
- Run this command from one of the hosts sharing the volume.
esxcli storage vmfs reclaim config set --volume-label VolName --reclaim-priority=none
- Unmount and remount the volume on all hosts accessing the volume for the reclaim-priority change to take effect.
esxcli storage filesystem unmount -l VolName
esxcli storage filesystem mount -l VolName
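Since the remount has to happen on every host accessing the volume, a minimal sketch of automating it over SSH is below. The host names (esx01, esx02) and root SSH access are assumptions, not part of this article; by default the script only prints the commands it would run.

```shell
#!/bin/sh
# Sketch: remount the volume on every host so the reclaim-priority
# change takes effect. Host names and SSH access are assumptions.
VOLUME="VolName"
HOSTS="esx01 esx02"        # hypothetical host names
DRY_RUN="${DRY_RUN:-1}"    # set DRY_RUN=0 to actually run over SSH

remount_volume() {
    # Print (dry run) or execute the unmount/mount pair on one host.
    host=$1
    for cmd in \
        "esxcli storage filesystem unmount -l $VOLUME" \
        "esxcli storage filesystem mount -l $VOLUME"
    do
        if [ "$DRY_RUN" = "1" ]; then
            echo "[$host] $cmd"
        else
            ssh "root@$host" "$cmd"
        fi
    done
}

for h in $HOSTS; do
    remount_volume "$h"
done
```

Set DRY_RUN=0 only after confirming the printed commands and host list are correct for your environment.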
If you are unable to unmount the volume, toggle automatic unmap off and on instead on all the hosts accessing the volume.
Alternative step (in case you are unable to unmount the volume):
Run these commands on all the hosts accessing the volume to toggle the reclaim priority to low and back to none so that the change takes effect.
esxcli storage vmfs reclaim config set --volume-label VolName --reclaim-priority=low
esxcli storage vmfs reclaim config set --volume-label VolName --reclaim-priority=none
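The toggle likewise has to run on every host accessing the volume; a sketch of scripting it is below. As before, the host names and root SSH access are assumptions, and by default the commands are only printed.

```shell
#!/bin/sh
# Sketch: toggle the reclaim priority low -> none on every host so the
# change takes effect without a remount. Host names and SSH access are
# assumptions.
VOLUME="VolName"
HOSTS="esx01 esx02"        # hypothetical host names
DRY_RUN="${DRY_RUN:-1}"    # set DRY_RUN=0 to actually run over SSH

toggle_reclaim() {
    # Run the set command twice on one host: first low, then none.
    host=$1
    for prio in low none; do
        cmd="esxcli storage vmfs reclaim config set --volume-label $VOLUME --reclaim-priority=$prio"
        if [ "$DRY_RUN" = "1" ]; then
            echo "[$host] $cmd"
        else
            ssh "root@$host" "$cmd"
        fi
    done
}

for h in $HOSTS; do
    toggle_reclaim "$h"
done
```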
- List the volumes with automatic unmap processing enabled with this command:
vsish -e ls /vmkModules/vmfs3/auto_unmap/volumes/
Notes:
- Volumes backed by storage arrays with an unmap granularity greater than 1 MB should not appear in the output of the above command.
- If the volume is still listed after the alternative step, fall back to the preferred approach: unmount and remount the volume on all hosts.
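To script the verification, the vsish listing can be checked for the volume's label. The sample listing below is made up for illustration; on a real host you would pipe the actual command output through the check instead.

```shell
#!/bin/sh
# Sketch: verify that a volume no longer has automatic unmap enabled by
# checking the vsish listing for its label. On a real host:
#   vsish -e ls /vmkModules/vmfs3/auto_unmap/volumes/ | volume_absent VolName

volume_absent() {
    # Succeeds when the given volume label does not appear on stdin.
    ! grep -q "$1"
}

# Hypothetical listing from a host where VolName was disabled:
printf 'otherVol/\n' | volume_absent VolName && echo "VolName: auto unmap disabled"
```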