ESX Host Unresponsiveness and VM Inaccessibility Due to Storage Latency or Fabric Issues

Article ID: 392616


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

  • The ESX host may appear connected in the vCenter Server inventory but does not respond to management operations.

  • The host may also show as Unresponsive or Not Connected, and attempts to reconnect the host to vCenter through the vSphere UI fail.

  • Some or all virtual machines (VMs) on the affected host become inaccessible and may show as Disconnected in the vSphere UI.

  • Power operations such as Power On, Power Off, or Reset may fail for VMs residing on the host.

  • Restarting ESX management agents (hostd, vpxa) may not restore host or VM responsiveness.

  • Navigation in the host's Direct Console User Interface (DCUI, accessed via KVM, IPMI, etc.) may exhibit long delays when moving between or within menus, and accessing the shell through the DCUI may not function correctly.
  • In the /var/log/vmkernel.log file, the following warning messages may be seen:

    • "ALERT: hostd performance has degraded due to high system latency"

    • "Devices/volumes experiencing 'Internal Target Failure'"

    • Sense Code 0xB 44/00 = Aborted Command / Internal Target Failure
      *Note: Use the Broadcom Sense Code Decoder to interpret sense data.

    • Example:
      YYYY-MM-DDTHH:MM:SSZ cpu77:2101555)ALERT: hostd performance has degraded due to high system latency
      -----
      YYYY-MM-DDTHH:MM:SSZ cpu84:2098465)ScsiDeviceIO: 4115: Cmd(0x45b9e4484008) 0x2a, CmdSN 0x800e0032 from world 2114460 to dev "naa.600####" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x44 0x0
      YYYY-MM-DDTHH:MM:SSZ cpu70:2098465)NMP: nmp_ThrottleLogForDevice:3867: Cmd 0x2a (0x45d9c2651988, 2103527) to dev "naa.600####" on path "vmhba0:C0:T2:L49" Failed:
  • In the /var/log/vmkwarning.log file, messages such as "state in doubt; requested fast path state update..." may appear, along with messages stating that hostd was detected to be non-responsive and/or reporting PDL (permanent device loss):

    • Examples:
      YYYY-MM-DDTHH:MM:SSZ cpu104:2162384)WARNING: nfnic: <1>: fnic_abort_cmd: 3890: Abort for cmd tag: 0x3fc in pending state
      YYYY-MM-DDTHH:MM:SSZ cpu103:2098465)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.600####" state in doubt; requested fast path state update...
      -----
      YYYY-MM-DDTHH:MM:SSZ Al(###) vmkalert: cpu0:2102327)ALERT: hostd detected to be non-responsive
      -----
      YYYY-MM-DDTHH:MM:SSZ Wa(###) vmkwarning: cpu0:2097905)WARNING: NMP: nmp_PathDetermineFailure:3536: Cmd (0x1a) PDL error (0x5/0x25/0x0) - path vmhba2:C0:T3:L0 device naa.#### - triggering path failover
      YYYY-MM-DDTHH:MM:SSZ Wa(###) vmkwarning: cpu0:2097905)WARNING: NMP: nmp_DeviceRetryCommand:130: Device "naa.####": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
      
  • Hostd logs show increasing latency messages. This can happen even if the vmkernel logs do not show driver or SCSI I/O messages:
    • Examples:
      YYYY-MM-DDTHH:MM:SSZ Wa(###) Hostd[2099104] [Originator@6876 sub=IoTracker] In thread #######, stat("/vmfs/volumes/datastoreUUID/folderName/VMname-sesparse.vmdk") took over 43799 sec.

      YYYY-MM-DDTHH:MM:SSZ Wa(###) Hostd[2099104] [Originator@6876 sub=IoTracker] In thread #######, stat("/vmfs/volumes/datastoreUUID/folderName/VMname-sesparse.vmdk") took over 43809 sec

      This may result in "ALERT: hostd detected to be non-responsive" messages in the vmkernel logs. (A sketch for locating these messages from the ESXi Shell follows this list.)
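
If the host's ESXi Shell is still reachable (through SSH or the DCUI), the messages described above can be located with simple grep searches. The following is a minimal sketch, assuming the default log locations referenced in this article; the search patterns match the example messages shown above:

  # Latency alerts and SCSI sense-data failures in the vmkernel log
  grep -i "hostd performance has degraded" /var/log/vmkernel.log
  grep -i "Valid sense data: 0xb 0x44" /var/log/vmkernel.log

  # Path/device state warnings and hostd non-responsiveness in the vmkwarning log
  grep -i "state in doubt" /var/log/vmkwarning.log
  grep -iE "hostd detected to be non-responsive|PDL error" /var/log/vmkwarning.log

  # Slow stat() calls recorded by hostd's IoTracker
  grep "IoTracker" /var/log/hostd.log | grep "took over"

If storage latency is severe enough, the shell itself may respond slowly or not at all, as noted in the DCUI symptom above.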

Cause

This issue typically occurs when high system latency or storage-related delays impact the responsiveness of the ESX management service, hostd. As a result, the host becomes unresponsive to management operations while still appearing connected in vCenter. Contributing factors may include:

  • Storage array performance degradation

  • Fabric issues, such as SAN switch/zoning delays or intermittent path failures

  • SCSI command failures with sense key 0xB / ASC 44/00 indicating Internal Target Failure

  • Aborted commands observed due to path or array-level issues (path and device states can be reviewed as shown in the sketch after this list)
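
Path and device state can be reviewed from the ESXi Shell to help distinguish fabric or path problems from array-side failures. The following is a minimal sketch using standard esxcli namespaces; naa.600#### is the redacted device identifier from the log examples above and should be replaced with the actual device ID:

  # List all storage paths and their current states (active, dead, standby)
  esxcli storage core path list

  # Show NMP (multipathing) details for the device reporting errors
  esxcli storage nmp device list -d naa.600####

  # Review the device entry for state information such as permanent device loss
  esxcli storage core device list -d naa.600####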

 

Resolution

The issue may be caused by storage array performance degradation or a fabric-related problem. To resolve it:

  1. Engage the storage vendor to investigate latency at the storage array level.

  2. Check the storage fabric health, including SAN switches, zoning, and connectivity between the ESX host and storage array.

  3. Monitor storage response times to identify anomalies or bottlenecks in the data path (a sketch using esxtop follows this list).
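
Storage response times can be observed directly on the host with esxtop. The following is a minimal sketch; the sample interval, sample count, and output path are arbitrary examples:

  # Capture device latency counters (DAVG/KAVG/GAVG per command) in batch mode:
  # 12 samples at 10-second intervals, written to a CSV file for later review
  esxtop -b -d 10 -n 12 > /tmp/esxtop-storage.csv

  # Interactively, run esxtop and press 'u' for the disk device view.
  # Sustained high DAVG/cmd values indicate latency at the array or fabric,
  # while high KAVG/cmd values point to queuing within the host's storage stack.
  esxtop

Consistently elevated device latency supports engaging the storage and fabric teams as described in steps 1 and 2.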

Workaround

To temporarily recover from the unresponsive state and regain access to the affected virtual machines (VMs), perform the following:

  1. Hard reset the affected ESX host using the KVM/IPMI console.

  2. Upon reboot, if vSphere High Availability (HA) is configured, it may be triggered, restarting the affected VMs on other available hosts in the cluster.

Additional Information