The vSAN disk appears as Absent in vSAN Disk Management
Article ID: 326548
Products
VMware vSAN, VMware vSAN 8.x, VMware vSAN 7.x
Issue/Introduction
vSAN Health service reports a warning about disk(s)
Unable to Remove an Absent vSAN disk from vCenter UI
Unable to remove vSAN disk reference where disk has already been replaced
After physically replacing a failed vSAN Capacity drive: Part has been replaced, but it is in unclaimed condition
In the vCenter vSphere UI (vCenter > Host and Cluster View > vSAN cluster > Monitor > Skyline Health > RETEST > Operational Health), "Absent Disk" is displayed and the overall health is marked red.
General vSAN error. vSAN disk data evacuation resource check has failed for disk or disk-group vsan:########-####-####-####-############ (########-####-####-####-############) with mode noAction on host 10.x.x.4. Go to vSAN Data Migration Pre-Check page for more details..
In vCenter vSphere UI, vCenter > Host and Cluster View > vSAN cluster > Configure > vSAN > Disk Management, you see a Disk group with a red exclamation diamond and, in the detail window below, it is marked as Absent vSAN Disk.
Multiple vSAN nodes reporting Absent vSAN disk error at the same time.
When trying to remove the absent disk, it fails with the error "A general system error occurred".
You may also see inaccessible objects reported in Cluster -> Monitor -> Virtual Objects.
To further verify the absent disks, /var/run/log/vmkernel.log may capture events such as "failed to read the device", "can't find MD device", and "device not found":
[from /var/run/log/vmkernel.log]
YYYY-MM-DDTHH:MM:SS vmkwarning: cpu#:3336825)WARNING: StorageDeviceVsi: 426: Device vsan:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx not found.
YYYY-MM-DDTHH:MM:SS vmkwarning: cpu#:2101826 opID=4ddeb394)WARNING: StorageDeviceVsi: 426: Device vsan:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx not found.
YYYY-MM-DDTHH:MM:SS vmkwarning: cpu#:13289611)WARNING: PLOG: PLOGProbeDevice:6851: Failed to read the device <naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:1> : Not found
YYYY-MM-DDTHH:MM:SS vmkernel: cpu#:13289611)PLOG: PLOGMapDataPartition:2989: can't find MD device by UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
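As a quick check, the signatures above can be counted in a copy of the log. A minimal sketch (the helper name and patterns are illustrative, not part of ESXi):

```shell
# find_absent_disk_events: count vmkernel.log lines matching the
# absent-disk signatures quoted above (case-insensitive).
find_absent_disk_events() {
  grep -icE "not found|failed to read the device|can't find md device" "$1"
}

# On an ESXi host:
# find_absent_disk_events /var/run/log/vmkernel.log
```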
The absent disk is listed with unknown device information in the output of esxcli vsan storage list:
Unknown
Device: Unknown
Display Name: Unknown
Is SSD: false
VSAN UUID: ########-####-####-####-############
VSAN Disk Group UUID:
VSAN Disk Group Name:
Used by this host: false
In CMMDS: false
On-disk format version: -1
Deduplication: false
Compression: false
Checksum:
Checksum OK: false
Is Capacity Tier: false
Encryption Metadata Checksum OK: true
Encryption: false
DiskKeyLoaded: false
Is Mounted: false
Creation Time: Unknown
On the ESXi host, /var/run/log/vobd.log will show messages like the following:
vobd.log:YYYY-MM-ddThh:mm:ss.342Z: [scsiCorrelator] 4079214243471us: [esx.problem.scsi.device.state.permanentloss] Device: naa.############### has been removed or is permanently inaccessible. Affected datastores (if any): Unknown.
vobd.log:YYYY-MM-ddThh:mm:ss.374Z: [scsiCorrelator] 4104631067753us: [vob.scsi.device.state.permanentloss] Device :naa.############### has been removed or is permanently inaccessible.
In /var/run/log/vmkernel.log, check for the SCSI sense code against the device identifier:
YYYY-MM-ddThh:mm:ss.sssZ In(182) vmkernel: cpu#:2098242)ScsiDeviceIO: 4672: Cmd(0x45de37490640) 0x25, CmdSN 0x113d38b from world 0 to dev "naa.###############" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0xe2
This SCSI sense code decodes to a hardware error:
Sense Key [0x4] HARDWARE ERROR
Additional Sense Data 44/E2: additional sense data unknown (vendor-specific range)
OP Code 0x25 READ CAPACITY(10)
In the example above, Sense Key [0x4] indicates that the device reported a hardware error.
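The sense key is the first value of the "Valid sense data" triple and can be decoded mechanically against the standard SCSI sense key table. A minimal sketch (decode_sense_key is an illustrative helper, not an ESXi tool):

```shell
# decode_sense_key: map a SCSI sense key nibble to its standard meaning.
decode_sense_key() {
  case "$1" in
    0x0) echo "NO SENSE" ;;
    0x1) echo "RECOVERED ERROR" ;;
    0x2) echo "NOT READY" ;;
    0x3) echo "MEDIUM ERROR" ;;
    0x4) echo "HARDWARE ERROR" ;;
    0x5) echo "ILLEGAL REQUEST" ;;
    0x6) echo "UNIT ATTENTION" ;;
    0x7) echo "DATA PROTECT" ;;
    *)   echo "OTHER" ;;
  esac
}

decode_sense_key 0x4   # prints "HARDWARE ERROR"
```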
When reviewing SMART data for the device, you may see it reporting a health status of FAILED and/or OFFLINE:
# esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx
Parameter Value Threshold Worst
---------------------------- ----------------- --------- -----
Health Status FAILED/OFFLINE N/A N/A
Media Wearout Indicator N/A N/A N/A
Write Error Count 0 N/A N/A
Read Error Count 1638320548 N/A N/A
Power-on Hours N/A N/A N/A
Power Cycle Count 621 N/A N/A
Reallocated Sector Count N/A N/A N/A
Raw Read Error Rate N/A N/A N/A
Drive Temperature 31 N/A N/A
Driver Rated Max Temperature N/A N/A N/A
Write Sectors TOT Count N/A N/A N/A
Read Sectors TOT Count N/A N/A N/A
Initial Bad Block Count N/A N/A N/A
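When output like the above has been saved to a file (for example, from a log bundle), the health line can be pulled out with a small helper (smart_health is an illustrative name, not an esxcli subcommand):

```shell
# smart_health: print the value column of the "Health Status" row from
# saved `esxcli storage core device smart get` output.
smart_health() {
  awk '/Health Status/ {print $3}' "$1"
}

# Example: smart_health /tmp/smart_output.txt
```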
Running the command vdq -Hi will show the impacted disk(s) as a vSAN UUID instead of a device name:
Mappings:
DiskMapping[0]:
SSD: naa.################
MD: naa.################
MD: naa.################
MD: naa.################
MD: ########-####-####-####-############ <-- Instead of showing the NAA ID, the vSAN UUID of the disk is shown
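Such entries can be spotted quickly in vdq -Hi output by matching the UUID pattern (list_uuid_mds is an illustrative helper, not part of ESXi):

```shell
# list_uuid_mds: print MD entries that show a vSAN UUID rather than an
# naa.* device name, reading vdq -Hi output from stdin.
list_uuid_mds() {
  grep -E "MD: *[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}"
}

# On an ESXi host:
# vdq -Hi | list_uuid_mds
```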
Environment
VMware vSAN 8.x
VMware vSAN 9.x
Cause
Physical disk failures can occur due to hardware wear-out or failure, or due to accidental removal of a disk before it is properly decommissioned from the vSAN disk group.
If multiple disks show "Absent" across different disk groups on the same node, suspect a backend enclosure issue.
If the host cannot read the device, it is likely due to a persistent hardware error or an intermittent driver/firmware glitch.
Resolution
Please contact the hardware vendor to replace the faulty disk.
NOTE: Ensure the DISK_UUID is accurately cross-referenced before proceeding with the removal of any 'Absent' disk.
Get the UUID of the absent disk either from vCenter > Configure > vSAN > Disk Management or by running esxcli vsan storage list on the affected host.
If there is no data associated with the DISK_UUID, the check should display no results and return to the command prompt, indicating that no objects have data associated with that vSAN UUID.
If you have inaccessible objects, please open a case with Broadcom Support (see KB: Creating and managing Broadcom support request (SR) cases) for assistance in determining if the objects would be recoverable after replacement. Be especially mindful of this in multiple disk failure scenarios.
The following article explains how to replace the cache and capacity disks.
In the vCenter vSphere UI, if the above operation fails, follow the steps below to remove the disk from the disk group:
Replace the physical disk at the hardware level, powering off the ESXi host if required.
If the server hardware supports hot swap of the disk, the faulty disk can be replaced after putting the ESXi host in Maintenance Mode.
The newly added disk should be visible in Host and Cluster View -> ESXi Host -> Configure -> Storage Devices.
If hot swap is not supported, reboot the ESXi host; the new disk will then appear in the vCenter vSphere UI -> Host and Cluster View -> ESXi Host -> Configure -> Storage Devices.
To remove the failed disk from the vSAN cluster after it has been validated:
After the device is determined to be an empty reference, remove the DISK_UUID using the command below:
$ esxcli vsan storage remove -u DISK_UUID
Re-run the same command to verify if the listing for this volume is removed:
$ esxcli vsan storage list|grep DISK_UUID
Refresh the view in the vCenter vSphere UI and the volume will also be removed there (vCenter > Host and Cluster View > vSAN cluster > Configure > vSAN > Disk Management).
If the command to remove the drive fails with this error:
Unable to remove device: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.: Sysinfo error: Not found See VMkernel log for details.
Check whether the vSAN cluster is configured with deduplication and compression; if so, you need to remove and re-create the entire disk group that contains the affected disk UUID.
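Whether deduplication is in play can be confirmed from the esxcli vsan storage list output gathered earlier. A minimal sketch against a saved copy of that output (dedup_enabled is an illustrative helper name):

```shell
# dedup_enabled: report "yes" if any claimed disk in saved
# `esxcli vsan storage list` output has Deduplication set to true.
dedup_enabled() {
  if grep -qiE "Deduplication: *true" "$1"; then
    echo "yes"
  else
    echo "no"
  fi
}

# Example: dedup_enabled /tmp/vsan_storage_list.txt
```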
The hardware is identified at the BIOS level by the firmware installed on the server. SCSI code handling varies from vendor to vendor, so some vendors provide a hot-swappable HDD option for a failed disk.
However, when an existing failed disk is replaced in the same slot, ESXi requires a reboot of the host to identify the new disk (a hardware change in the system) and refresh the driver information in the ESXi kernel.
A newly added disk in an empty slot is identified in real time; however, this disk replacement and identification behavior varies from vendor to vendor.
In cases where no physical hardware failure is detected despite persistent read issues, a host reboot may restore functionality. Additionally, verify that the storage controller's firmware and driver versions are compatible.