VMs entered “Not Ready” state during network activity even though redundant uplinks were active.

Article ID: 414137

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

  • Virtual machines become unresponsive or enter the “Not Ready” state during scheduled network activity or switch maintenance.
  • vCenter may also report storage-related alarms such as:
    • "Lost access to volume due to connectivity issues."
    • "Device or filesystem entered All Paths Down (APD) state."




Environment

  • ESXi 7.x
  • ESXi 8.x

Cause

  • The issue occurs due to a misconfiguration in NVMe over TCP multipathing.
  • The ESXi host has active paths on only one NVMe over TCP adapter, even though multiple adapters are configured. When the uplink associated with that adapter becomes unavailable, all storage paths are lost.

Validate the number of active NVMe over TCP adapters

Run the following command to check the active adapters and their associated vmnics:

# localcli nvme adapter list

Adapter  Adapter Qualified Name                                         Transport Type  Driver     Associated Devices
-------  --------------------------------------------------------------  --------------  ---------  ------------------

vmhba66  aqn:nvmetcp:#-#-#-#-#-#-#                                       TCP             nvmetcp    vmnic3
vmhba65  aqn:nvmetcp:#-#-#-#-#-#-#                                       TCP             nvmetcp    vmnic5
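
As an additional check (not part of the output above), the NVMe controller inventory can be reviewed to confirm whether any controllers are connected through the second adapter. This is a general esxcli command on ESXi 7.x and 8.x; an empty result for vmhba65 indicates that no NVMe over TCP controllers are connected through it:

# esxcli nvme controller list | grep vmhba65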

Validate active storage paths

Run the following command to list the active storage paths:

# esxcfg-mpath -b

eui.#####################: NVMe TCP Disk (eui.##############################)
   vmhba66:C0:T0:L17 LUN:17 state:active Local HBA vmhba66 channel 0 target 0
   vmhba66:C0:T1:L17 LUN:17 state:active Local HBA vmhba66 channel 0 target 1
   vmhba66:C0:T2:L17 LUN:17 state:active Local HBA vmhba66 channel 0 target 2
   vmhba66:C0:T5:L17 LUN:17 state:active Local HBA vmhba66 channel 0 target 5

This output confirms that paths for vmhba65 are missing and that all active paths use vmhba66, creating a single-uplink dependency.
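
The same check can be performed per adapter with esxcli. This is a general-purpose command, not specific to this article; if the filter for vmhba65 returns nothing, that adapter carries no storage paths:

# esxcli storage core path list | grep "Adapter: vmhba65"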

/var/log/vobd.log shows NIC failures followed by path loss and APD events:

2025-10-06T04:10:12.119Z In(14) vobd[2098052]:  [netCorrelator] 2994208731026us: [vob.net.vmnic.linkstate.down] vmnic3 linkstate down


2025-10-06T04:10:12.121Z In(14) vobd[2098052]:  [psastorCorrelator] 2994227787838us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device eui.####################################. Path vmhba66:C0:T0:L17 is down. Affected datastores: "#############################".
2025-10-06T04:10:12.123Z In(14) vobd[2098052]:  [psastorCorrelator] 2994227789690us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device eui.##############################. Path vmhba66:C0:T2:L17 is down. Affected datastores: "#################################".
2025-10-06T04:10:12.125Z In(14) vobd[2098052]:  [psastorCorrelator] 2994227791555us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device eui.##########################. Path vmhba66:C0:T5:L17 is down. Affected datastores: "##################################".
2025-10-06T04:10:12.127Z In(14) vobd[2098052]:  [psastorCorrelator] 2994227793459us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device eui.###########################. Path vmhba66:C0:T1:L17 is down. Affected datastores: "###########################".
2025-10-06T04:10:12.119Z In(14) vobd[2098052]:  [APDCorrelator] 2994227786116us: [esx.problem.storage.apd.start] Device or filesystem with identifier [eui.################################] has entered the All Paths Down state

These entries confirm that when vmnic3, the uplink backing vmhba66, goes down, all storage paths are lost and the virtual machines become unresponsive.
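
The current link state of the uplinks backing both adapters, and any related events in vobd.log, can also be checked with the following general-purpose commands (not specific to this environment):

# esxcli network nic list
# grep -E "linkstate|apd|connectivity.lost" /var/log/vobd.log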

Resolution

Both the storage and network vendors need to review and correct the NVMe over TCP multipathing configuration so that active paths are established through both NVMe over TCP adapters, restoring path redundancy.
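
After the configuration has been corrected, redundancy can be re-validated from the ESXi host. A suggested verification sequence is shown below; the VMkernel interface (vmkX) and the storage target IP address are placeholders to be replaced with the values used for NVMe over TCP in the environment:

# localcli nvme adapter list
# esxcfg-mpath -b
# vmkping -I vmkX <storage-target-ip>

The adapter list should show both adapters with their associated vmnics, esxcfg-mpath -b should report active paths on both vmhba65 and vmhba66, and vmkping verifies basic IP reachability to the storage target from the selected VMkernel interface.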

Additional Information