The datastore cannot be unmounted after performing a Storage DR activity.

Article ID: 412324


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

  • A storage failover is performed, after which the datastore must be unmounted from the ESXi hosts at the target site because the VMs have been manually powered on at the Production site.
  • In vCenter, the option to unmount the datastore is not available.
  • When the datastore unmount operation is initiated from the ESXi Host Client, no error is displayed; however, the operation does not complete successfully.
  • Attempts to unmount the datastore using esxcli commands are also unsuccessful (see the example commands after this list).
  • The lsof command does not show any process holding the datastore open, and no virtual machines or templates are currently associated with it.
  • Storage rescan operations become stuck and do not complete.
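
For reference, the esxcli unmount attempt mentioned above typically uses commands similar to the following. This is an illustrative sketch only; the volume label "DR_Datastore" is a placeholder, and the UUID shown is the masked value from the log excerpts later in this article. Substitute the values for the affected datastore.

esxcli storage filesystem list
esxcli storage filesystem unmount -l DR_Datastore
esxcli storage filesystem unmount -u 6198ffce-dea55a38-864c-########

In the condition described in this article, these commands either hang or return without the volume actually being unmounted.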

Environment

  • vSphere ESXi 7.x
  • vSphere ESXi 8.x

Cause

  • The datastore cannot be unmounted because stale heartbeat and lock references remain after the DR activity.
  • These stale paths cause I/O errors and trigger the error “Too many users accessing this resource.” As a result, the hostd service becomes unresponsive, blocking datastore operations.
  • The /var/run/log/vmkernel.log file shows heartbeat messages failing with I/O errors, which prevents the datastore from releasing the existing locks.

Data': HB at offset 3248128 - Setting pulse failed: I/O error:
2025-09-26T06:25:30.929Z cpu0:2100417 opID=73cde11)  [HB state abcdef02 offset 3248128 gen 1 stampUS 129077594711 uuid 4b3d3c84-525ab950-5c52-######## jrnl <FB 0> drv 14.81 lockImpl 4 ip ##.###.##.##]
2025-09-26T06:25:30.930Z cpu28:2097866)ScsiDeviceIO: 4167: Cmd(0x45b97f005588) 0xfe, CmdSN 0x3a2e from world 2100417 to dev "naa.#####" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x22 0x0

  • The /var/run/log/vmkwarning.log file confirms repeated failures to initialize VMFS distributed locking, showing the following error:

2025-09-25T00:06:16.071Z cpu12:2099732 opID=e02f9dbf)WARNING: HBX: 2445: Failed to initialize VMFS distributed locking on volume 6198ffce-dea55a38-864c-########: Too many users accessing this resource
2025-09-25T02:24:24.001Z cpu12:2099728 opID=16fb6189)WARNING: HBX: 2445: Failed to initialize VMFS distributed locking on volume 6198ffce-dea55a38-864c-########: Too many users accessing this resource

  • The /var/run/log/hostd.log file shows multiple threads stuck for extended durations while attempting to access the datastore catalog.

2025-09-26T06:38:41.390Z warning hostd[2108744] [Originator@6876 sub=IoTracker] In thread 2100435, access("/vmfs/volumes/6198ffce-dea55a38-864c-########/catalog") took over 116274 sec.
2025-09-26T06:38:41.390Z warning hostd[2108744] [Originator@6876 sub=IoTracker] In thread 2101140, access("/vmfs/volumes/6198ffce-dea55a38-864c-########/catalog") took over 112288 sec.

  • This confirms that the hostd process remains blocked and is unable to release the datastore resources.
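
To confirm this condition before taking corrective action, the log entries and the on-disk lock state can be checked from the ESXi shell. The following is an illustrative sketch; the volume UUID is the masked value from the log excerpts above and must be replaced with the UUID of the affected datastore.

grep -i "Too many users accessing this resource" /var/run/log/vmkwarning.log
grep -i "Setting pulse failed" /var/run/log/vmkernel.log
vmkfstools -D /vmfs/volumes/6198ffce-dea55a38-864c-########/catalog

The vmkfstools -D output shows the lock mode, lock owner, and heartbeat generation for the given path; a lock still held after the DR failover is consistent with the stale heartbeat and lock references described above.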

Resolution

  1. Place the ESXi host in maintenance mode.
  2. Reboot the ESXi host.
  3. After the reboot, verify that the host is responsive and that datastore operations, such as unmount and rescan, complete successfully (see the example commands below).
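
The same steps can be performed from the ESXi shell, as sketched below. This assumes direct shell access and that the host can be evacuated; the reboot reason text is only an example, and the same actions can equally be carried out from vCenter or the Host Client.

esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "Clear stale VMFS heartbeat locks"

After the host is back online:

esxcli storage core adapter rescan --all
esxcli storage filesystem list
esxcli system maintenanceMode set --enable false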