The ESXi host intermittently reports an "All Paths Down" (APD) condition and then recovers immediately.
All NFS datastores on the host disconnect and reconnect.
Log entries similar to the following are seen on the ESXi host:
In the vobd.log:
YYYY-MM-DDThh:mm:ss.ms: [APDCorrelator] 52674929398118us: [vob.storage.apd.timeout] Device or filesystem with identifier [########-########] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
YYYY-MM-DDThh:mm:ss.ms: [APDCorrelator] 52677564443591us: [esx.problem.storage.apd.timeout] Device or filesystem with identifier [########-########] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
YYYY-MM-DDThh:mm:ss.ms: [APDCorrelator] 52675036141243us: [vob.storage.apd.exit] Device or filesystem with identifier [########-########] has exited the All Paths Down state.
YYYY-MM-DDThh:mm:ss.ms: [vmfsCorrelator] 52675036141196us: [vob.vmfs.nfs.server.restored] Restored connection to the server ##.##.##.## mount point /NFS_Datastore_name, mounted as ########-########-0000-000000000000 ("NFS_Datastore_name")
YYYY-MM-DDThh:mm:ss.ms: [vmfsCorrelator] 52677671191328us: [esx.clear.vmfs.nfs.server.restored] ##.##.##.## /NFS_Share_path ########-########-0000-000000000000 NFS_Datastore_name
YYYY-MM-DDThh:mm:ss.ms: [APDCorrelator] 52677671191299us: [esx.clear.storage.apd.exit] Device or filesystem with identifier [########-########] has exited the All Paths Down state.
In the hostd.log:
YYYY-MM-DDThh:mm:ss.msZ info hostd[3036934] [Originator@6876 sub=SoapAdapter.HTTPService.HttpConnection opID=f75b1a3f] Failed to read header; <io_obj p:0x00000049ad7d1738, h:132, <TCP '127.0.0.1 : 8307'>, <TCP '0.0.0.0 : 0'>>: N7Vmacore15SystemExceptionE(Connection reset by peer: The connection is terminated by the remote end with a reset packet. Usually, this is a sign of a network problem, timeout, or service overload.)
YYYY-MM-DDThh:mm:ss.msZ info hostd[3036934] [Originator@6876 sub=IO.Connection opID=f75b1a3f] Failed to shutdown socket; <io_obj p:0x00000049ad7d1738, h:132, <TCP '127.0.0.1 : 8307'>, <TCP '0.0.0.0 : 0'>>, e: 104(shutdown: Connection reset by peer)
In the vmkernel.log:
YYYY-MM-DDThh:mm:ss.ms cpu84:268309480)WARNING: nenic: enic_isr_msix_err:194: [0000:62:00.5] Hit qerror. 1 in last 52634958 sec. Total:1
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2943: [0000:62:00.5] WQ[0] error_status 4 fetch_index 32 posted_index 82
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[34] : 00 80 1a d4 35 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[33] : 00 70 7a 5b 09 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[32] : 00 60 3a 14 4f 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[31] : 00 50 ba a8 2b 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[30] : 00 40 ca 2b 43 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[29] : 00 30 2a c8 02 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[28] : 00 20 0a 3e 45 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)WARNING: nenic: enic_wq_error:2959: [0000:62:00.5] WQ[0] desc[27] : 00 10 0a 16 44 00 00 00 00 10 00 00 00 00 00 00
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)nenic: enic_ext_cq:925: [0000:62:00.5] CMD_CQ_ENTRY_SIZE_SET not supported.
YYYY-MM-DDThh:mm:ss.ms cpu22:2098204)nenic: enic_ext_cq:941: [0000:62:00.5] CQ entry size set to 16 bytes
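While the condition is occurring, the state of the NFS datastores and the network path to the NFS server can be checked from the ESXi host. This is a general diagnostic sketch; the vmkernel interface name (vmk1) and the masked NFS server address are placeholders for your environment's values:

# List NFS mounts and confirm whether the affected datastore is currently marked accessible
esxcli storage nfs list

# Test reachability of the NFS server over the vmkernel interface that carries NFS traffic
vmkping -I vmk1 ##.##.##.##

# Review the APD and NFS server events recorded in vobd.log
grep -iE "APDCorrelator|nfs.server" /var/log/vobd.log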
Verify that the NIC driver and firmware are running the latest versions and that the driver and firmware releases are compatible with each other, as shown in the sketch below.
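The currently running driver and firmware versions for each NIC can be read directly from the ESXi host. A minimal sketch, assuming the Cisco VIC uplink is vmnic4 (adjust the vmnic name to match your environment):

# List all physical NICs with their driver names and link state
esxcli network nic list

# Show detailed driver information for one NIC, including the driver version and firmware version
esxcli network nic get -n vmnic4

# Confirm the version of the installed nenic driver package
esxcli software vib list | grep -i nenic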
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x
This issue is observed when the nenic NIC driver is running against an incompatible firmware version.
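Whether a given driver and firmware pairing is supported can be confirmed in the VMware Compatibility Guide using the adapter's PCI identifiers and the ESXi release. A sketch of how to collect these values on the host (vmnic names and IDs will differ per environment):

# List PCI devices with their vendor:device and sub-vendor:sub-device IDs and the associated vmnic names
vmkchdev -l | grep vmnic

# Record the ESXi version and build used when searching the compatibility guide
esxcli system version get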
In this case, the NIC driver and firmware versions were found to be incompatible with each other. Upgrade both the firmware and the driver to the latest supported versions that are compatible with each other; a sketch of the driver update is shown below.
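Once a compatible nenic driver release has been identified, it can be installed from the ESXi host using a downloaded offline bundle; the file path below is a placeholder, and the host should be placed in maintenance mode first. The adapter firmware itself is upgraded through Cisco management tooling (for example UCS Manager or CIMC), not from ESXi.

# Place the host in maintenance mode before updating the driver
esxcli system maintenanceMode set --enable true

# Install or update the nenic driver from an offline bundle (placeholder path and file name)
esxcli software component apply -d /vmfs/volumes/datastore1/Cisco-nenic-component.zip

# On releases that still ship the driver as a VIB, the equivalent command is:
# esxcli software vib update -d /vmfs/volumes/datastore1/Cisco-nenic-offline-bundle.zip

# Reboot the host so the updated driver is loaded
reboot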
If the issue persists after the upgrade, engage Cisco, the hardware vendor, for further diagnostics and resolution.