A vSAN disk appears as absent in vSAN Disk Management (vSphere Client > vSAN Cluster > Configure > vSAN > Disk Management > select the affected host > view the disk group containing the absent vSAN disk), with one or all disks showing Permanent disk loss.
No disk errors are reported in the hardware management UI (Dell iDRAC, HPE OneView, and so on).
The vSAN disk appears as absent in vSAN Disk Management on host1 because the disk has failed at the hardware level.
There are no disk alerts in the hardware management UI because the failed disk is no longer detected by the hardware at all.
The host's /var/run/log/vmkernel.log may show errors similar to the following, indicating that the disk has hit a hardware fault:
2025-11-22T13:53:37.985Z In(182) vmkernel: cpu80:2098032)HPP: HppScsiAADetermineStatus:57: Device naa.############ path vmhba1:C#:T#:L# hit an unrecoverable hardware error
2025-11-22T13:53:37.985Z In(182) vmkernel: cpu80:2098032)HPP: HppPathGroupMovePath:688: Path "vmhba1:C#:T#:L#" state changed from "active" to "permanently lost"
2025-11-22T13:53:37.985Z Wa(180) vmkwarning: cpu80:2098032)WARNING: HPP: HppDeviceUpdateState:5242: Device 'naa.############' is changing to 'permanent device loss' from 'on'.
2025-11-22T13:53:37.985Z Wa(180) vmkwarning: cpu80:2098032)WARNING: ScsiDevice: 1794: Device :naa.############ has been removed or is permanently inaccessible.
2026-02-03T11:48:51.022Z In(14) vobd[2097954]: [vSANCorrelator] 9152575353208us: [vob.vsan.lsom.devicerepair] vSAN device ###########-8686-9a2d-8034-################ is being repaired due to I/O failures, and will be out of service until the repair is complete. If the device is part of a dedup disk group, the entire disk group will be out of service until the repair is complete.
2026-02-03T11:48:51.022Z In(14) vobd[2097954]: [vSANCorrelator] 9152543955667us: [esx.problem.vob.vsan.lsom.devicerepair] Device ###########-8686-9a2d-8034-################ is in offline state and is getting repaired.
2026-02-03T11:48:51.032Z In(14) vobd[2097954]: [vSANCorrelator] 9152575363324us: [vob.vsan.pdl.offline] vSAN device ###########-8686-9a2d-8034-################ has gone offline.
2026-02-03T13:02:23Z In(14) vsandevicemonitord[2103440]: [223791411904]: Device t10.NVMe____Dell_Ent_NVMe_#####_3.2TB______________############### state is DISKGROUP_UNDER_PDL

To confirm whether the disk error is genuine or a false alarm, run the following validation steps:
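As one way to validate, the state of the device can be checked from the affected host's ESXi shell. The commands below are a minimal sketch using standard ESXi CLI tools; the naa.############ identifier is a placeholder for the affected device:

```shell
# Rescan all storage adapters so ESXi re-probes for the device
esxcli storage core adapter rescan --all

# List detected storage devices; a disk that failed at the hardware
# level is typically missing entirely or marked permanently inaccessible
esxcli storage core device list

# Query vSAN disk mapping state as seen by this host
vdq -q

# List vSAN-claimed disks and their reported health
esxcli vsan storage list
```

If the device reappears healthy after a rescan, the alert may have been transient; if it remains absent or permanently inaccessible, treat the failure as genuine.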
If the disk status error persists after completing the validation steps above, proceed with the following instructions:
Place the host with the absent disk into maintenance mode, selecting the "Ensure accessibility" data evacuation option.
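Maintenance mode can also be entered from the ESXi shell; a sketch using the esxcli equivalent of the "Ensure accessibility" option:

```shell
# Enter maintenance mode with the vSAN "Ensure accessibility" mode
# (objects remain accessible; no full data migration is performed)
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility
```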
Engage the hardware vendor to have the failed disk physically replaced in the server.
Then, depending on the type of failed disk (cache or capacity) and whether deduplication is enabled, follow the steps below to bring the replacement drive into service:
If deduplication is enabled on the cluster or if the absent disk was a cache device:
Delete the disk group containing the absent vSAN disk.
Re-create the disk group with the existing disks and the new disk.
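The disk-group deletion and re-creation above can also be done from the ESXi shell. A hedged sketch, with device identifiers as placeholders (in the vSphere Client, the same actions are available under Disk Management):

```shell
# Removing the cache device removes the entire disk group it fronts
esxcli vsan storage remove -s <cache_device_id>

# Re-create the disk group with the existing disks plus the new disk
# (repeat -d for each capacity device in the group)
esxcli vsan storage add -s <cache_device_id> -d <capacity_device_id>
```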
If deduplication is not enabled or if the absent disk was a capacity device:
Remove the absent vSAN disk from the disk group.
Add the new disk to the disk group.
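The two steps above can be sketched with esxcli as well. Device IDs and the vSAN disk UUID below are placeholders; exact flags can vary by release, and in all-flash configurations the replacement device may first need the capacityFlash tag:

```shell
# Remove the absent capacity disk from the disk group by device ID
esxcli vsan storage remove -d <capacity_device_id>
# or, if the device is no longer detected, remove it by vSAN disk UUID
esxcli vsan storage remove -u <vsan_disk_uuid>

# Add the replacement capacity disk to the existing disk group
esxcli vsan storage add -d <new_capacity_device_id>
```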
For more details, see the KB article on requirements when replacing disks in a vSAN cluster.
If deleting the absent vSAN disk or disk group fails, follow this KB article.