Unable to mount disk in Linux guest OS


Article ID: 389038


Products

VMware vSphere ESXi

Issue/Introduction

Symptom 

  • An I/O error is observed in a Linux VM while trying to mount a disk in the guest OS. 

  • The following entries are seen in the /var/log/messages log file of the RHEL guest OS (a search sketch follows the excerpt):

 [ Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 timing out command, waited 1080s
 [ Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=1080s
 [ Feb 9 00:56:51 2025] sd 0:0:9:0: [sdi] tag#331 CDB: Read(16) 88 00 00 00 00 01 ff ff 88 01 00 00 00 01 00 00
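
To check whether the guest has logged these timeouts, the messages file can be searched directly; a quick sketch for RHEL-family guests (other distributions may log to the systemd journal instead):

# grep -E "timing out command|FAILED Result" /var/log/messages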


Validation

  • In the /var/run/log/hostd.log file of the ESXi host, messages like this are seen when one or more extents of the datastore go offline:

2025-02-03T21:57:23.673Z info hostd[2103473] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 14110: An attached device naa.6########################6:1 may be offline. The file system [Datastore_Name, 61fd425a-#######-####-###########] is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.

  • These messages are seen in the /var/log/vobd.log file of the ESXi host (a combined search of both logs is sketched after the excerpts):

2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215653206342us: [vob.vmfs.extent.offline] An attached device went offline. naa.6########################6:1 file system [Datastore_Name 61fd425a-#######-####-###########]
2025-02-08T22:54:55.711Z: [vmfsCorrelator] 1215776852821us: [esx.problem.vmfs.extent.offline] An attached device naa.6########################6:1 may be offline. The file system [Datastore_Name 61fd425a-#######-####-###########] is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.
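
Both host logs can be searched for these extent-offline events in one pass. A sketch using the log paths above (rotated copies of the logs may also need to be checked):

# grep -E "extent.offline|may be offline" /var/log/vobd.log /var/run/log/hostd.log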

 

Environment

VMware vSphere ESXi 8.x
VMware vSphere ESXi 7.x

Cause

This issue is caused by one or more extents (LUNs) backing the datastore intermittently going offline.
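
To confirm the current state of an affected device from the ESXi host, its device status and path states can be checked with esxcli (the naa ID below is a placeholder for the device reported in the log entries):

# esxcli storage core device list -d naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# esxcli storage core path list -d naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

While an outage is occurring, the device typically reports a Status other than "on", or one or more paths in a dead state.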

Resolution

The storage vendor needs to investigate why the LUNs (extents) backing the datastore are intermittently going offline; the root cause lies at the storage layer and must be resolved there.

When engaging the vendor, ESXi-side evidence of the offline events is typically required (see the sketch below).
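
A common way to collect that evidence is an ESXi support bundle, which includes both hostd.log and vobd.log. It can be generated from the host's shell:

# vm-support

The resulting archive can then be shared with the storage vendor along with the timestamps of the offline events.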

Additional Information

To identify whether a specific datastore spans multiple extents, run this command on the ESXi host:

# vmkfstools -Ph /vmfs/volumes/Datastore_Name

The output will look similar to this:

VMFS-5.81 (Raw Major Version: 14) file system spanning 7 partitions.
File system label (if any): Datastore_Name
Mode: public ATS-only
Capacity 9734811811840 (9283840 file blocks * 1048576), 1160520925184 (1106759 blocks) avail, max supported file size 69201586814976
Volume Creation Time: Fri Feb  4 15:12:26 2022
Files (max/free): 130000/129830
Ptr Blocks (max/free): 64512/56503
Sub Blocks (max/free): 32000/31976
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/8177081/0
Ptr Blocks  (overcommit/used/overcommit %): 0/8009/0
Sub Blocks  (overcommit/used/overcommit %): 0/24/0
Volume Metadata size: 851214336
Disk Block Size: 512/512/0
UUID: 61fd425a-xxxxxxxx-xxxx-xxxxxxxxxxxx
Logical device: 61fd4259-xxxxxxxx-xxxx-xxxxxxxxxxxx
Partitions spanned (on "lvm"):
        naa.6##############################1:1
        naa.6##############################5:1
        naa.6##############################8:1
        naa.6##############################4:1
        naa.6##############################5:1
        naa.6##############################6:1
        naa.6##############################7:1
Unable to connect to vaai-nasd socket [No such file or directory]
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
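
Alternatively, the extent-to-device mapping of all mounted VMFS datastores can be listed with esxcli, which shows the device and partition backing each extent:

# esxcli storage vmfs extent list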