vCenter is inaccessible and datastores have size of 0

Article ID: 413706


Products

VMware vCenter Server

Issue/Introduction

Multiple virtual machines are inaccessible, and a datastore appears to have a size of 0. Upon investigation, from the ESXi Host Web Client, it's confirmed that the datastore is inaccessible, and all connected virtual machines are down.

Environment

ESXi 7.x

ESXi 8.x

Cause

The root cause is a storage-level problem where all paths to the datastore are in an All Paths Down (APD) state. This condition prevents the ESXi hosts from seeing or accessing the storage LUN, rendering the datastore size as 0 and making its resident virtual machines unavailable.

Furthermore, performing a vMotion migration of unaffected virtual machines is not possible, as the management agents may be affected by the APD condition, and the ESXi host may become unmanaged. As a result, a reboot of an affected ESXi host forces an outage to all non-affected virtual machines on that host.

This condition is confirmed by the esxcli output, which shows the device status as "dead timeout".

  • SSH to the affected host and log in with root credentials, then run the following command against the suspect device's naa ID:

esxcli storage core device list -d naa.################

naa.################
   Display Name: IBM Fibre Channel Disk (naa.############...)
   Status: dead timeout
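When more than one device may be affected, the same output can be filtered for every device reporting "dead timeout". A minimal sketch; the sample output below uses placeholder naa IDs, and on a live host you would capture the real command output instead:

```shell
# Identify every device whose Status is "dead timeout" in the output of
# "esxcli storage core device list". Placeholder sample output; on a
# live ESXi host, capture the real output with:
#   esxcli_output=$(esxcli storage core device list)
esxcli_output='naa.111
   Display Name: IBM Fibre Channel Disk (naa.111)
   Status: on
naa.222
   Display Name: IBM Fibre Channel Disk (naa.222)
   Status: dead timeout'

# awk remembers the most recent device-id line, then prints it whenever
# a "Status: dead timeout" field follows.
dead_devices=$(printf '%s\n' "$esxcli_output" | awk '
  /^naa\./                  { dev = $1 }
  /^ *Status: dead timeout/ { print dev }')
echo "dead-timeout devices: $dead_devices"
```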

  • All paths to the device are reported as "dead":

[ESX:/vmfs/volumes] esxcfg-mpath -bd naa.################
naa.################ : IBM Fibre Channel Disk (naa.################)
   vmhba5:C0:T7:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba5:C0:T0:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba3:C0:T0:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba3:C0:T6:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
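A quick tally of dead versus total paths confirms the APD (rather than partial-path-loss) picture. A minimal sketch over sample output with a placeholder naa ID; on a live host the real esxcfg-mpath output would be captured instead:

```shell
# Count dead paths in esxcfg-mpath -b output. Placeholder sample; on a
# live ESXi host:
#   mpath_output=$(esxcfg-mpath -bd naa.<device-id>)
mpath_output='naa.xxx : IBM Fibre Channel Disk (naa.xxx)
   vmhba5:C0:T7:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba5:C0:T0:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba3:C0:T0:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable
   vmhba3:C0:T6:L40 LUN:40 state:dead fc Adapter: Unavailable Target: Unavailable'

dead=$(printf '%s\n' "$mpath_output" | grep -c 'state:dead')    # dead paths
total=$(printf '%s\n' "$mpath_output" | grep -c 'vmhba')        # all paths
echo "dead paths: $dead of $total"
```

When every path is dead, as here, the device is in APD; if only some paths were dead, the problem would instead be a partial path failure.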

  • The ESXi host vmkernel.log shows repeated "No connection" errors for the device:

2025-10-04T15:21:03.309Z In(182) vmkernel: cpu130:2098417)ScsiVmas: 1094: Inquiry for VPD page 00 to device naa.################ failed with error No connection

 

  • The ESXi host vobd.log confirms the APD timeout. Once the timeout fires, I/Os to the device are fast-failed, leaving the datastore and its virtual machines unavailable:

2025-10-04T13:59:04.601Z In(14) vobd[2098532]:  [APDCorrelator] 5296381555520us: [vob.storage.apd.timeout] Device or filesystem with identifier [naa.################] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
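The APD dwell time before the timeout fired can be pulled straight out of that log entry. A minimal sketch using the sample line above (placeholder naa ID); on a live host the matching line would be taken from /var/log/vobd.log:

```shell
# Extract the APD duration from a vobd.log timeout entry. Placeholder
# sample line; on a live ESXi host:
#   log_line=$(grep 'vob.storage.apd.timeout' /var/log/vobd.log | tail -1)
log_line='[APDCorrelator] 5296381555520us: [vob.storage.apd.timeout] Device or filesystem with identifier [naa.xxx] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.'

# Pull out the "... state for N seconds" value with sed.
secs=$(printf '%s\n' "$log_line" | sed -n 's/.*state for \([0-9]*\) seconds.*/\1/p')
echo "device was in APD for ${secs}s before the timeout fired"
```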

Resolution

Recovery from an All Paths Down (APD) state requires a coordinated effort between storage and virtualization teams:

  • Restore Storage Connectivity: Engage your storage array or fabric vendor to resolve the path status. Connectivity must be stable at the hardware layer before proceeding.
  • Identify Affected Hosts: Locate all ESXi hosts reporting the dead timeout status for the affected LUN.
  • Plan for Outage: Because the APD condition often causes management agents (hostd/vpxa) to hang, vMotion of unaffected VMs is generally not possible; rebooting an affected host therefore causes an outage for all virtual machines running on it.
  • Reboot ESXi Hosts: Perform a physical reboot of each affected ESXi host to clear residual references to the APD device and restore host management.
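After the reboot, device recovery can be verified by re-checking the device status. A minimal sketch over sample esxcli output (placeholder naa ID); on a live host the real command output would be captured instead:

```shell
# Confirm the device Status has returned to "on" after recovery.
# Placeholder sample; on a live ESXi host:
#   device_info=$(esxcli storage core device list -d naa.<device-id>)
device_info='naa.xxx
   Display Name: IBM Fibre Channel Disk (naa.xxx)
   Status: on'

status=$(printf '%s\n' "$device_info" | sed -n 's/^ *Status: *//p')
if [ "$status" = "on" ]; then
  echo "device recovered"
else
  echo "device still unavailable (Status: $status)"
fi
```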

Additional Information

  • If an I/O request is issued from a guest during the APD window, the guest operating system should time out and fail the I/O.
  • Due to the nature of an APD situation, there is no clean way to recover.
  • The APD situation needs to be resolved at the storage array/fabric layer to restore connectivity to the host.
  • All affected ESXi hosts may require a reboot to remove any residual references to the affected devices that are in an APD state.

See the KB article "Permanent Device Loss (PDL) and All-Paths-Down (APD) on host" for more details.