The ESXi hosts are connected to shared storage via Fibre Channel (FC), and the LUNs are mapped to multiple ESXi hosts.
After a host reboot, one of the ESXi hosts is unable to access the shared datastores, while other hosts continue to have access.
The FC vmhba adapters appear online, and no configuration changes were made to the host.
The status of the HBA cards can be checked via the vSphere Client: Host > Configure > Storage Adapters
Alternatively, run the following commands via SSH to check adapter status:
esxcli storage core adapter list
esxcli storage san fc list
(In this example, vmhba3 and vmhba7 are the FC adapters in use. Adapter names can vary per configuration.)
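It can also help to record which storage devices, paths, and VMFS datastores the affected host currently sees, so the missing LUNs can be compared against a working host. A minimal check (output varies per environment):

esxcli storage core device list
esxcli storage core path list
esxcli storage filesystem list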
Ensure that the FC adapters in use are compatible with, and supported on, your ESXi version.
Refer to the VMware KB: “Determining Network/Storage firmware and driver version in ESXi” to validate the driver/firmware versions.
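As a quick sketch, the driver in use and its module version can be checked from the ESXi shell (qlnativefc is only an example module name; substitute the driver reported for your adapters, and see the referenced KB for firmware details):

esxcli storage core adapter list
esxcli system module get -m qlnativefc
vmkchdev -l | grep vmhba

The last command returns the PCI VID:DID:SVID:SSID values, which can be used to look the adapter up in the VMware Compatibility Guide.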
Execute a fabric login reset on the affected FC adapters, followed by a storage rescan on the host to check if the LUNs reappear. Refer to the VMware KB: “Forcing a Fabric Login reset on Fibre Channel and FCOE Adapters” for detailed steps.
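A sketch of the commands, assuming vmhba7 is the affected adapter (substitute your own adapter name; see the referenced KB for the authoritative steps):

esxcli storage san fc reset -A vmhba7
esxcli storage core adapter rescan -A vmhba7

If the LUNs still do not appear after the rescan, proceed with the log analysis described below.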
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x
The issue is caused by an incomplete or incorrect LUN mapping configuration on the storage array for the impacted ESXi host.
Analysis of the /var/log/boot.gz logs on the problematic host shows that the Fabric Login (FLOGI) completes successfully, indicating that the host is able to connect to the fabric. However, no targets are returned during the target discovery phase, suggesting a visibility issue at the storage layer.
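These boot-time driver messages can be reviewed directly on the host; a minimal example (the ql_fcoe prefix matches the QLogic driver seen in this environment, and the grep pattern is only illustrative):

zcat /var/log/boot.gz | grep ql_fcoe | less

The following excerpt is from the problematic host: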
2025-02-23T13:37:17.102Z cpu0:2098048)ql_fcoe:vmhba7:ql_fcoe_parse_link_service_fip_response:4236:Info: Search for ox_id 8eb
2025-02-23T13:37:17.102Z cpu0:2098048)ql_fcoe:vmhba7:FipFabricLoginCompletion:1838: Info: Called for Fabric: 2f:1f:00:de:fb:83:40:01
2025-02-23T13:37:17.102Z cpu0:2098048)ql_fcoe:vmhba7:FabricLoginCompletion:1378:Info: FabricLoginCompletion: Fabric 0x431344753070 FLOGI/FDISC status Success
2025-02-23T13:37:17.102Z cpu0:2098048) FabricLoginCompletion flags 0x109.
2025-02-23T13:37:17.102Z cpu0:2098048) FabricLoginCompletion setting NPIV SUPPORTED flag.
2025-02-23T13:37:17.102Z cpu0:2098048)ql_fcoe:vmhba7:FabricLoginCompletion:1572:Info: Fabric login completed Fabric (USE_FIP) 0x431344753070 Sess 0x43134474e320
......................
2025-02-23T13:37:17.106Z cpu0:2098048)ql_fcoe:vmhba7:QLFdmiPrintHbaInfo:5098:Info: NumberOfPortEntries: 1
2025-02-26T05:33:03.290Z cpu23:2097983)ql_fcoe:vmhba3:StartGPN_FF_Targets:3407: Info: Called
A comparison with logs from a working host shows that a list of targets is returned when the target information is queried:
2025-02-18T03:17:45.342Z cpu40:2098048)ql_fcoe:vmhba7:QLFdmiPrintPortInfo:5437:Info: OSDeviceName : vmhba7
2025-02-18T03:17:45.342Z cpu40:2098048)ql_fcoe:vmhba7:QLFdmiPrintPortInfo:5438:Info: HostName : rf-esxi-rp01-re01-22
2025-02-18T03:17:45.343Z cpu40:2098048)ql_fcoe:vmhba7:FDMIRegistrationCallback: 5061:Info: Start StartScr
2025-02-18T03:17:45.343Z cpu40:2098048)ql_fcoe:vmhba7:StartSCR: 689:Info: Called
2025-02-18T03:17:45.343Z cpu40:2098048)ql_fcoe:vmhba7:SCRCallback:598:Info: Called, status Success
2025-02-18T03:17:45.343Z cpu40:2098048)ql_fcoe:vmhba7:StartGPN_FF_Targets: 3407: Info: Called
2025-02-18T03:17:45.344Z cpu40:2098048)ql_fcoe:vmhba7:GPN_FTCallback:2815:Info: 0x43155fb4f780 0: 20:00:34:80:0d:8a:a3:04 9a0109 0
2025-02-18T03:17:45.344Z cpu40:2098048)ql_fcoe:vmhba7:GPN_FTCallback:2815:Info: 0x43155fb4f780 1: 23:fd:00:a0:98:ca:35:45 9a05d5 0
.................................................
2025-02-18T03:17:45.344Z cpu40:2098048)ql_fcoe:vmhba7:GPN_FTCallback:2815:Info: 0x43155fb4f798 10: 24:0d:00:a0::ca:35:45 9a06f2 0
2025-02-18T03:17:45.344Z cpu40:2098048)ql_fcoe:vmhba7:AddFcpSessionToSessionList:2519: Info: === FcpSess 0x43155fb7a6f0 Sess 0x43155fb76e10 9a0693 SessionsArray[11] added to FcpSessionsArray[0]
2025-02-18T03:17:45.344Z cpu40:2098048)ql_fcoe:vmhba7:AddFcpSessionToSessionList:2519: Info: === FcpSess 0x43155
To resolve this issue, engage your storage vendor to correct the mapping and re-present the affected LUNs to the impacted ESXi host.
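Once the mapping has been corrected on the array, a rescan should make the devices and datastores visible again; a minimal verification sketch:

esxcli storage core adapter rescan --all
esxcli storage filesystem list

Alternatively, rescan from the vSphere Client (Host > Configure > Storage Adapters > Rescan Storage) and confirm that the shared datastores are mounted on the host.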