ESXi network and storage disconnects during upstream interconnect module firmware upgrade

Article ID: 433255


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

  • During an external chassis interconnect firmware upgrade, VMware ESXi hosts enter a not responding state. Virtual machines lose network connectivity and datastore access is interrupted.
  • The ESXi vobd log (/var/log/vobd.log) reports concurrent uplink and storage redundancy degradation, even though several backup uplinks remain up:
    [vob.net.dvport.uplink.transition.down] Uplink: vmnic2 is down. Affected dvPort: ##/## ## ## ## ## ## ## ##-## ## ## ## ## ## ## ##. 3 uplinks up. Failed criteria: 128
    [vob.net.vmnic.linkstate.down] vmnic vmnic2 linkstate down
    [esx.problem.net.dvport.redundancy.degraded] Uplink redundancy degraded on DVPorts: "##/## ## ## ## ## ## ## ##-## ## ## ## ## ## ## ##". Physical NIC vmnic2 is down.
    [esx.problem.storage.redundancy.degraded] Path redundancy to storage device naa.#################### degraded. Path vmhba67:C0:T1:L1 is down. Affected datastores: "##########".
  • After the interconnect firmware upgrade completes, all uplinks (vmnics) come back up, but the hosts and virtual machines still experience network access issues.
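The events above can be tallied directly on the affected host to confirm that the uplink and storage-path degradation occurred in the same window. A minimal sketch, assuming the standard ESXi log location (adjust LOG if analyzing a copy taken off-host):

```shell
# Standard ESXi vobd log location; adjust if analyzing an offline copy.
LOG=/var/log/vobd.log

# Per-vmnic counts of link-state transitions:
grep -oE 'vmnic[0-9]+ linkstate (down|up)' "$LOG" | sort | uniq -c

# Storage-path degradation events from the same window:
grep 'esx.problem.storage.redundancy.degraded' "$LOG"
```

If every vmnic shows a matching down/up pair around the upgrade window, the flaps are consistent with the interconnect reboot rather than a host-side NIC problem.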

Cause

The issue is caused by a Layer 2 forwarding failure within the upstream interconnect fabric following a reboot sequence: the fabric drops switched frames while the Layer 1 physical link to the ESXi host remains active. Because the link stays up, the host's link-state based NIC teaming failover detection is not triggered, so traffic continues to be sent into the failed fabric.

Resolution

Engage the hardware vendor to analyze the interconnect diagnostic bundles to determine why the forwarding plane halted while the port state remained active.
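When opening the vendor case, it helps to supply the exact outage window from the ESXi side so it can be correlated with the interconnect module's own reboot and diagnostic logs. A minimal sketch, assuming vobd.log lines begin with a timestamp (the standard format):

```shell
# Standard ESXi vobd log location; adjust if analyzing an offline copy.
LOG=/var/log/vobd.log

# First and last redundancy-degraded events bound the outage window:
grep 'redundancy.degraded' "$LOG" | head -1
grep 'redundancy.degraded' "$LOG" | tail -1
```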

Additional Information

Network connectivity loss on ESXi hosts using Static Port Channels during silent physical switch failure