To determine the cause of the failure or eliminate common NIC issues:
1. Check the current status of the vmnic from either the VMware vSphere Client or the ESXi shell:
esxcli network nic list
To manually bring a link down or back up for testing, run:
esxcli network nic down -n vmnicX
esxcli network nic up -n vmnicX
The Link Status column shows the state of the link between the physical network adapter and the physical switch: Up or Down. If there are several network adapters, some up and some down, you may need to verify that each adapter is connected to the intended physical switch port. You can do this by bringing down each of the ESXi host's ports on the physical switch one at a time and re-checking the status output to see which vmnic is affected.
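The Link Status check can also be scripted. A minimal sketch that filters `esxcli network nic list`-style output for down links; the sample output, NIC names, and column layout below are hypothetical stand-ins for a live host:

```shell
# Hypothetical sample of `esxcli network nic list` output.
# On a live ESXi host, capture the real output instead:
#   sample=$(esxcli network nic list)
sample='Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex
vmnic0  0000:02:00.0  ntg3    Up            Up           1000   Full
vmnic4  0000:04:00.0  ixgben  Up            Down         0      Half'

# Print the names of NICs whose Link Status column reads "Down"
echo "$sample" | awk 'NR>1 && $5=="Down" {print $1}'
```

On the sample above this prints only `vmnic4`, the adapter with a down link.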
2. Check that the vmnic referred to in the event message is still connected to the switch and configured properly:
- Make sure that the network cable is still connected to the switch and to the host.
- Check that the switch connected to the system is still functioning properly and has not been misconfigured. Refer to the switch documentation for details.
- Check for activity between the physical switch and the vmnic. This might be indicated either by a network trace or activity LEDs.
- Check that the NIC driver is up to date: Determining Network/Storage firmware and driver version in ESXi.
3. Search for the word "vmnic" in the vobd log file (/var/log/vobd.log).
If you see "vmnic down" or "vmnic up" messages, the NIC may be flapping. Note: Some NICs report only the link-up state, not the down state. If the NIC is repeatedly reported as "up" and the host was not rebooted, this indicates that the NIC is flapping and is not reporting the down state to ESXi.
Check for a failed criteria code with the vmnic messages. If a failed criteria code is listed, see step 4 below.
If there is no failed criteria code, and everything in step 2 above has been checked, contact the hardware vendor about the flapping.
4. In the vobd.log file, the vmnic failure may be classified with a Failed criteria code. This code explains the reason for the vmnic failure.
Example:
2020-11-17T15:37:00.330Z: [netCorrelator] 4836107000843us: [vob.net.dvport.uplink.transition.down] Uplink: vmnic4 is down. Affected dvPort: ##/50 24 e2 d9 41 e2 48 58-## ## ## ## ## ## ## ##. 3 uplinks up. Failed criteria: 128
The fields in this entry are: Time - Event - Uplink # - State - Port - vSwitch - # Active Uplinks left - Failed Criteria
Note: "# Active Uplinks left" is an indication of a failover; it identifies the number of active uplinks remaining in the teaming policy of the virtual switch.
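The Failed criteria value can be pulled out of an event line with a one-line sed expression. A minimal sketch using the example event from this article (the dvPort ID is shortened for readability):

```shell
# Example vobd.log event line from the article (dvPort ID abbreviated).
line='2020-11-17T15:37:00.330Z: [netCorrelator] 4836107000843us: [vob.net.dvport.uplink.transition.down] Uplink: vmnic4 is down. 3 uplinks up. Failed criteria: 128'

# Print only the numeric Failed criteria code
echo "$line" | sed -n 's/.*Failed criteria: \([0-9][0-9]*\).*/\1/p'
```

For this line the extracted code is 128, which the table below maps to "Link state reported by the driver."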
The following are the failed criteria codes.
1 – Link speed reported by the driver (exact match for compliance)
2 – Link speed reported by the driver (equal or greater for compliance)
4 – Link duplex reported by the driver
8 – Link LACP state down
32 – Beacon probing
64 – Errors reported by the driver or hardware
128 – Link state reported by the driver
256 – The port is blocked
512 – The driver has registered the device
Note: Failed criteria 128 means the driver reported the link state as down. This can be caused by unplugging the network cable or administratively shutting down the physical switch port. If the link outage was not intentional, the issue most likely lies with the driver, firmware, SFP+ module, cable, and/or the port on the physical switch. Check the driver version and contact the host hardware vendor for further troubleshooting when Failed criteria 128 entries are seen in the vobd log. For more information, see
Determining Network/Storage firmware and driver version in ESXi.
Note: The failure codes are cumulative; when multiple criteria are met, the codes are added together (they behave as a bitmask).
When there are multiple failures, you see entries similar to these in the vobd.log file:
2012-04-05T11:22:10.449Z: [netCorrelator] 1123644995238us: [vob.net.pg.uplink.transition.down] Uplink: vmnic3 is down. Affected portgroup: ########. 0 uplinks up. Failed criteria: 130
The failed criteria here is 130, which is 2 + 128. This is a combination of these two failure codes:
2 – Link speed reported by the driver (equal or greater for compliance)
128 – Link state reported by the driver
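Because the codes add together, any reported value can be decomposed by testing each bit. A minimal sketch, assuming bash (the function name is illustrative; the bit values mirror the list above):

```shell
# Decompose a Failed criteria value into its component codes.
decode_failed_criteria() {
  local code=$1 bit
  for bit in 1 2 4 8 32 64 128 256 512; do
    # Print each criterion bit that is set in the reported value
    if (( code & bit )); then
      printf '%s\n' "$bit"
    fi
  done
}

# The example from the article: 130 decomposes into 2 and 128
decode_failed_criteria 130
```

Running this on 130 prints 2 and 128, matching the breakdown above: link speed (equal or greater) plus link state, both reported by the driver.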