Troubleshooting “Potentially Mismatched Fibre Channel HBA Pairs” in VMware ESXi


Article ID: 410551


Products

VMware vSphere ESXi

Issue/Introduction

When an ESXi host reports:

Potentially mismatched Fibre Channel HBA pairs.

#esxcli storage core path list | grep "Adapter: vmhba" | sort | uniq -c

    994    Adapter: vmhba3
    879    Adapter: vmhba4
    834    Adapter: vmhba5
   1039    Adapter: vmhba6



Environment

ESXi 7.x and above.


Cause

The host has detected a different number of storage paths on each of its Fibre Channel (FC) HBAs, which may indicate an FC configuration problem.

Large discrepancies usually point to zoning, cabling, or fabric-login issues in the SAN fabric.
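The imbalance can be quantified directly from the `sort | uniq -c` output shown above. A minimal sketch, using the sample counts from this article (on a live host, pipe the esxcli command itself into the `awk` stage instead of the `printf` sample data):

```shell
# Feed per-adapter path counts (as produced by "sort | uniq -c") into awk
# and report the spread between the busiest and the emptiest HBA.
printf '%s\n' \
  '    994    Adapter: vmhba3' \
  '    879    Adapter: vmhba4' \
  '    834    Adapter: vmhba5' \
  '   1039    Adapter: vmhba6' |
awk '{
    if (min == "" || $1 < min) min = $1   # track the smallest path count
    if ($1 > max) max = $1                # track the largest path count
} END {
    printf "min=%d max=%d spread=%d\n", min, max, max - min
}'
```

For this sample the spread is 205 paths, which is well beyond what multipath timing differences explain and warrants the fabric checks below.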


Resolution

1. Verify HBA status in ESXi

Run:

#esxcli storage core adapter list

#esxcli storage core adapter stats get -A vmhbaX

Check:

    • Link state (should be online).

    • Error counters (CRC errors, link failures).
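The link state of every FC adapter can be pulled from the `adapter list` output in one pass. A sketch against captured sample output (the column layout, UID values, and `link-up`/`link-n/a` states shown here are assumptions based on a typical ESXi host; on a live system, pipe the esxcli command itself into `awk`):

```shell
# Print "HBA Name" and "Link State" for each vmhba from sample
# "esxcli storage core adapter list" output (header rows are skipped
# because they do not start with "vmhba").
printf '%s\n' \
  'HBA Name  Driver   Link State  UID                   Description' \
  '--------  -------  ----------  --------------------  -----------' \
  'vmhba3    lpfc     link-up     fc.20000000c9abc001:  Emulex FC HBA' \
  'vmhba4    lpfc     link-n/a    fc.20000000c9abc002:  Emulex FC HBA' |
awk '/^vmhba/ { print $1, $3 }'
```

Any adapter reporting a state other than link-up should be investigated before looking at zoning.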

2. Review path inventory

List paths per HBA:

#esxcli storage core path list | grep vmhba3

#esxcli storage core path list | grep vmhba4

Look for:

    • Missing LUNs on one adapter.

    • Paths in dead or off state.
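Dead paths can be tallied per adapter with the same grep/awk pattern used above. A sketch on sample `path list` output (the records here are hypothetical and truncated to the two relevant lines; real output carries `Adapter:` and `State:` lines per path):

```shell
# Count paths in the "dead" state per adapter from sample
# "esxcli storage core path list" output.
printf '%s\n' \
  '   Adapter: vmhba3' \
  '   State: active' \
  '   Adapter: vmhba4' \
  '   State: dead' \
  '   Adapter: vmhba4' \
  '   State: dead' |
awk '/Adapter:/ { hba = $2 }        # remember which adapter owns the path
     /State: dead/ { dead[hba]++ }  # tally dead paths against that adapter
     END { for (h in dead) print h, dead[h] }'
```

A non-zero dead-path count concentrated on one adapter usually matches the fabric where the zoning or cabling fault lies.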

3. Inspect the SAN fabric

    • Confirm each HBA WWPN is logged in on the switches.

    • Validate zoning: each host HBA should be zoned to the same target ports as its peers.

    • Check for port errors or flapping.

4. Validate cabling and hardware

    • Ensure cables are connected to the correct switch and port.

    • Replace suspect SFPs or cables.

    • Confirm HBAs are on VMware’s Hardware Compatibility List (HCL) and running the recommended driver/firmware.

5. Rescan and compare results

After corrections:

#esxcli storage core adapter rescan --all

#esxcli storage core path list | grep vmhba

Path counts should now be balanced across HBAs.
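Balance can be verified mechanically after the rescan: every adapter should report the same path count. A sketch that exits non-zero on any mismatch (the sample counts are hypothetical; feed it the `sort | uniq -c` pipeline from above on a live host):

```shell
# Succeed only if every adapter reports the same number of paths.
printf '%s\n' \
  '   1039    Adapter: vmhba3' \
  '   1039    Adapter: vmhba4' \
  '   1039    Adapter: vmhba5' \
  '   1039    Adapter: vmhba6' |
awk 'NR == 1     { first = $1 }                       # baseline count
     $1 != first { print "mismatch on " $3; bad = 1 } # flag outliers
     END         { exit bad }' && echo "path counts balanced"
```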

When a mismatch is acceptable

Some configurations deliberately present storage through only one fabric (e.g., boot LUNs, RDMs, or isolated workloads).
If so, document the design to prevent future confusion.


Additional Information