[Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 timing out command, waited 1080s
[Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=1080s
[Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 CDB: Read(16) 88 00 00 00 00 01 ff ff 88 01 00 00 00 01 00 00
2025-02-03T21:57:23.673Z info hostd[2103473] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 14110: An attached device naa.6########################6:1 may be offline. The file system [Datastore_Name, 61fd425a-#######-####-###########] is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.
2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215653206342us: [vob.vmfs.extent.offline] An attached device naa.6########################6:1 went offline. file system [Datastore_Name, 61fd425a-#######-####-###########]
2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215776852821us: [esx.problem.vmfs.extent.offline] An attached device naa.6########################6:1 may be offline. The file system [Datastore_Name, 61fd425a-#######-####-###########] is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.
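Events like the ones above can be located by searching the host logs for the extent-offline identifiers. The following is a minimal sketch; the sample log lines and the `/tmp` file are embedded here purely for illustration, and the device name is a placeholder. On a live ESXi host you would search the real logs instead (e.g. /var/log/vobd.log and /var/log/hostd.log).

```shell
# Sketch: count extent-offline events in a log file.
# The sample log is embedded for illustration; on a host, search the real
# logs instead, e.g.:
#   grep -i "extent.offline" /var/log/vobd.log /var/log/hostd.log
cat <<'EOF' > /tmp/vobd_sample.log
2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215653206342us: [vob.vmfs.extent.offline] An attached device naa.600000000000000000000000000000a1:1 went offline.
2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215776852821us: [esx.problem.vmfs.extent.offline] An attached device naa.600000000000000000000000000000a1:1 may be offline.
2025-02-08T22:54:56.001Z: [scsiCorrelator] 1215776853000us: [unrelated.event] Some other message.
EOF
grep -c "extent.offline" /tmp/vobd_sample.log   # → 2
```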
VMware vSphere ESXi 8.x
VMware vSphere ESXi 7.x
This is caused by one or more extents (LUNs) backing the datastore intermittently going offline.
Engage the storage vendor to investigate why the LUNs (extents) are intermittently going offline. Involving the storage vendor ensures that the root cause of the intermittent LUN offline events is thoroughly investigated and resolved.
To identify whether a specific datastore spans multiple extents, run the following command:
# vmkfstools -Ph /vmfs/volumes/Datastore_Name
The output will look similar to this:
VMFS-5.81 (Raw Major Version: 14) file system spanning 7 partitions.
File system label (if any): Datastore_Name
Mode: public ATS-only
Capacity 9734811811840 (9283840 file blocks * 1048576), 1160520925184 (1106759 blocks) avail, max supported file size 69201586814976
Volume Creation Time: Fri Feb 4 15:12:26 2022
Files (max/free): 130000/129830
Ptr Blocks (max/free): 64512/56503
Sub Blocks (max/free): 32000/31976
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/8177081/0
Ptr Blocks (overcommit/used/overcommit %): 0/8009/0
Sub Blocks (overcommit/used/overcommit %): 0/24/0
Volume Metadata size: 851214336
Disk Block Size: 512/512/0
UUID: 61fd425a-xxxxxxxx-xxxx-xxxxxxxxxxxx
Logical device: 61fd4259-xxxxxxxx-xxxx-xxxxxxxxxxxx
Partitions spanned (on "lvm"):
        naa.6##############################1:1
        naa.65##############################:1
        naa.68##############################:1
        naa.64##############################:1
        naa.65##############################:1
        naa.66##############################:1
        naa.67##############################:1
Unable to connect to vaai-nasd socket [No such file or directory]
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
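If the spanned device list needs to be collected programmatically (for example, to hand the exact extent identifiers to the storage vendor), the device names can be pulled out of the `vmkfstools -Ph` output with a simple pattern match. The following is a minimal sketch; the sample output and the `/tmp` file are embedded for illustration, and the device names are placeholders. On a live host you would pipe the real command output instead.

```shell
# Sketch: extract the spanned extent (device) identifiers from
# vmkfstools -Ph output. Sample output is embedded for illustration;
# on a live ESXi host you would run:
#   vmkfstools -Ph /vmfs/volumes/Datastore_Name | grep -Eo 'naa\.[0-9a-f]+:[0-9]+'
cat <<'EOF' > /tmp/vmkfstools_out.txt
VMFS-5.81 (Raw Major Version: 14) file system spanning 2 partitions.
Partitions spanned (on "lvm"):
        naa.600000000000000000000000000000a1:1
        naa.600000000000000000000000000000a2:1
EOF
grep -Eo 'naa\.[0-9a-f]+:[0-9]+' /tmp/vmkfstools_out.txt
```

Each printed entry is one extent; any device that appears here is a candidate for the intermittent offline events reported in the logs.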