VOBD messages in /var/run/log/vobd.log confirm the loss of working uplinks:
[esx.problem.net.dvport.redundancy.lost] Lost uplink redundancy on DVPorts ### Physical NIC vmnic0 is down.
[esx.problem.net.dvport.connectivity.lost] Lost network connectivity on DVPorts ### Physical NIC vmnic1 is down.
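As a quick check from the ESXi Shell, these events and the current uplink link state can be confirmed with, for example (the vmnic names shown here are examples; substitute the affected uplinks):
# List the DVPort redundancy/connectivity VOBD events
grep 'esx.problem.net.dvport' /var/run/log/vobd.log
# Show the current link state of all physical uplinks
esxcli network nic list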
The VMkernel log (/var/run/log/vmkernel.log) contains entries similar to the following:
Note: The exact entries vary with the network interface card model and driver in use.
WARNING: bnxtnet: hwrm_send_msg:389: [vmnic0 : 0x452106aee000] HWRM cmd resp_len timeout, cmd_type 0x106(HWRM_CFA_FLOW_STATS) seq 63810
WARNING: bnxtnet: hwrm_send_msg:389: [vmnic1 : 0x45210a38a000] HWRM cmd resp_len timeout, cmd_type 0x106(HWRM_CFA_FLOW_STATS) seq 15810
WARNING: bnxtnet: hwrm_send_msg:389: [vmnic0 : 0x452106aee000] HWRM cmd resp_len timeout, cmd_type 0x0(HWRM_VER_GET) seq 63811
WARNING: bnxtnet: hwrm_get_version:2501: [vmnic0 : 0x452106aee000] VER_GET failed- FW_STATUS_REG: 0x89021
WARNING: bnxtnet: hwrm_snd_fw_msg:538: [vmnic0 : 0x452106aee000] Looks like FW is crashed/non-responsive.
WARNING: bnxtnet: hwrm_snd_fw_msg:540: [vmnic0 : 0x452106aee000] Dumping FW trace and reporting link down to OS
bnxtnet: bnxtnet_report_link_down_to_uplink:1527: [vmnic0 : 0x452106aee000] Reporting Link down
WARNING: bnxtnet: hwrm_fill_fw_msg:935: [vmnic0 : 0x452106aee000] Sending HWRM message failed
WARNING: bnxtnet: cmd_cmpl_wait:1164: [vmnic0 : 0x452106aee000] FW went bad, stop waiting for queue flush
WARNING: bnxtnet: hwrm_send_msg:389: [vmnic1 : 0x45210a38a000] HWRM cmd resp_len timeout, cmd_type 0x0(HWRM_VER_GET) seq 15811
WARNING: bnxtnet: hwrm_get_version:2501: [vmnic1 : 0x45210a38a000] VER_GET failed- FW_STATUS_REG: 0x89021
WARNING: bnxtnet: hwrm_snd_fw_msg:538: [vmnic1 : 0x45210a38a000] Looks like FW is crashed/non-responsive.
WARNING: bnxtnet: hwrm_snd_fw_msg:540: [vmnic1 : 0x45210a38a000] Dumping FW trace and reporting link down to OS
bnxtnet: bnxtnet_report_link_down_to_uplink:1527: [vmnic1 : 0x45210a38a000] Reporting Link down
WARNING: bnxtnet: hwrm_fill_fw_msg:935: [vmnic1 : 0x45210a38a000] Sending HWRM message failed
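These messages indicate that the adapter firmware stopped responding to the bnxtnet driver. The driver and firmware versions of the affected adapter can be captured for reference before engaging support; a minimal example, assuming vmnic0 is the affected uplink:
# Display driver name/version and adapter firmware version
esxcli network nic get -n vmnic0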
VMware vSphere 7.x
VMware vSphere 8.x
The network interface card may fail, resulting in a pause of network traffic on the ESXi host. Depending on the configuration, the management VMkernel interface may be left with no working uplinks, resulting in an outage.
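To assess the impact on a given host, the VMkernel interfaces and the uplinks backing them can be reviewed from the ESXi Shell; a sketch (output varies by environment):
# List VMkernel interfaces and the portgroups/switches they use
esxcli network ip interface list
# Show the distributed switch configuration, including its uplinks
esxcli network vswitch dvs vmware list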
A reboot may help restore the host to a normal state.
The hardware vendor must be engaged for further troubleshooting and investigation of the network interface card failure.
Note: It is a best practice to keep network interface card firmware and driver versions up to date. Use the Hardware Compatibility Guide to verify the supportability and availability of firmware/driver combinations for the network interface card, using the PCI IDs collected as shown below.
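To look the adapter up in the compatibility guide, the PCI vendor/device and sub-vendor/sub-device IDs can be collected from the host; vmnic0 is used here as an example:
# Print PCI IDs (VID:DID SVID:SSID) for the adapter backing vmnic0
vmkchdev -l | grep vmnic0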
For more information about troubleshooting network interface card failures and related outages, review the knowledge base article Network adapter (vmnic) is down or fails with a Failed Criteria Code.