Datastore unmounted from hosts and unable to mount back.

Article ID: 398798

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

  • In vCenter, the datastore is detected as unmounted from the hosts.
  • Run the command "esxcli storage filesystem list". The Mounted status shows as false.
    Mount Point                                        Volume Name                                 UUID                                 Mounted  Type    Size            Free
    -------------------------------------------------  ------------------------------------------  -----------------------------------  -------  ------  --------------  ----
                                                       Volume name                                 ########-########-####-############   false    VMFS-6               0               0
  • Mounting the datastore fails with the error: "An error occurred during host configuration".
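When many datastores are present, the unmounted ones can be picked out of the `esxcli` output. A minimal sketch, using a masked sample of the output above (on a live host, capture the real output instead of the heredoc):

```shell
# Masked sample of `esxcli storage filesystem list` output; on a live host,
# capture the real thing with:  esxcli storage filesystem list > /tmp/fslist.txt
cat <<'EOF' > /tmp/fslist.txt
Mount Point            Volume Name  UUID                                 Mounted  Type
/vmfs/volumes/aaaa...  datastore1   aaaaaaaa-bbbbbbbb-cccc-dddddddddddd  true     VMFS-6
                       Volume name  ########-########-####-############  false    VMFS-6
EOF
# Match the literal "false" token; awk-style column numbers are unreliable here
# because the Mount Point field is empty for unmounted volumes.
grep -w 'false' /tmp/fslist.txt
```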

Validation Step: 

  • Navigate to the affected datastore > Configure > Device Backing. Observe the number of extents recognized by the datastore.

    In this example, the Device Backing view confirms that the datastore recognizes 2 extents.

  • Run the command "vmkfstools -Ph -v10 /vmfs/volumes/volume_name".
    VMFS-6.81 (Raw Major Version: 24) file system spanning 3 partitions.
    File system label (if any): Volume_name
    Partitions spanned (on "lvm"):
            naa.##############################01:1
            naa.##############################02:1
            (device naa.##############################03:1 might be offline)
            (One or more partitions spanned by this volume may be offline)

    The above output confirms that the file system consists of 3 partitions and that one of them is missing.

  • Attempt to mount the datastore and observe the vmkernel logs at the same time. The vmkernel logs confirm that the missing extent was expanded.
    tail -f vmkernel.log
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu48:28525234)LVM: 4353: [naa.##############################03:1] Device expanded (actual size ###ABC blocks, stored size ###DEF blocks)
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu48:28525234)LVM: 4353: [naa.##############################03:1] Device expanded (actual size ###ABC blocks, stored size ###DEF blocks)
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu48:28525234)LVM: 4353: [naa.##############################03:1] Device expanded (actual size ###ABC blocks, stored size ###DEF blocks)

  • Storage vMotion fails with the error:
    "A fatal internal error occurred. See the virtual machine's log for more details. YYYY-MM-DDTHH:MM.SSSZ Failed waiting for data. Error #######. Not found. YYYY-MM-DDTHH:MM.SSSZ Failed to copy source (/vmfs/volumes/########-########-####-############/VM_Name/VM_Name.vmdk) to destination (/vmfs/volumes/########-########-####-############/VM_Name/VM_Name.vmdk): Address temporarily unmapped. Failed to copy one or more disks."

  • The /var/run/log/vmkernel.log file confirms that the Storage vMotion failed with the error message "Address temporarily unmapped".
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu61:13390995) FS3DM: 2375: status Address temporarily unmapped copying 1 extents between two files, bytesTrasferred = 0 extentsTransferred: 0
    YYYY-MM-DDTHH:MM.SSSZ Wa(180) vmkwarning: cpu61: 13390995)WARNING: SVM: 2891: scsi0:0 Failed SVMFDSIoctlMoveData: Address temporarily unmapped

    Perform a storage rescan. Refer to After a datastore is expanded from the ESXi host client or CLI, one or more hosts report "Device shrank" and the datastore becomes inaccessible for more information.
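The rescan itself can be driven from the ESXi shell on each affected host; a sketch of the usual commands:

```shell
# Rescan all storage adapters for device/path changes.
esxcli storage core adapter rescan --all
# Rescan for VMFS volume changes (picks up the expanded extent, if visible).
vmkfstools -V
```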

Environment

VMware vSphere ESXi 6.x
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x

Cause

  • The datastore is configured with multiple extents. One extent went missing after the size was expanded, causing the datastore to be unmounted and preventing it from being remounted.
  • Adding the extent back to the same datastore failed due to an existing lock.

Cause validation:

  • If the issue still persists after the storage rescan, proceed with the steps below.

  • Run a VOMA check. VOMA will confirm that the datastore is severely corrupted. Refer to Using vSphere On-disk Metadata Analyzer (VOMA) to check VMFS metadata consistency for more information.

    Module name is missing. Using "vmfs" as default
    Running VMFS Checker version 2.1 in check mode
    Initializing LVM metadata, Basic Checks will be done
             ERROR: Either lOffset (#########) or len (#######) in PE entry for peID 2 (index 2) is corrupted.
             ERROR: Failed to Initialize LVM Metadata
       VOMA failed to check device : Severe corruption detected

    Total Errors Found:           0
       Kindly Consult VMware Support for further assistance

  • VOMA confirmed that the datastore is severely corrupted. It is recommended to back up all VMs before proceeding with the procedure below.
  • Attempt to add the extent to the same datastore using the datastore expansion method; this will fail.
  • The /var/run/log/vmkernel.log file confirms that the volume is locked.
    YYYY-MM-DDTHH:MM.SSSZ Wa(180) vmkwarning: cpu1:2099496 opID=5fd78fc2)WARNING: LVM: 17704: The volume on the device naa.##############################03:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover
    YYYY-MM-DDTHH:MM.SSSZ Wa(180) vmkwarning: cpu1:2099496 opID=5fd78fc2)WARNING: LVM: 7493: If you are _sure_ this is the case, please break the devicelock with `vmkfstools -B /vmfs/devices/disks/naa.##############################03:1`
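The VOMA check referenced above is run from the ESXi shell against the datastore's head extent while the datastore is unmounted on all hosts. A sketch, with the device ID and partition number as placeholders:

```shell
# Check VMFS metadata consistency on the head extent (device:partition
# are placeholders for the datastore's first device).
# Run in check mode only; do not attempt fix mode without Support guidance.
voma -m vmfs -f check -d /vmfs/devices/disks/naa.##############################01:1
```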

Resolution

  • Break the lock on the volume using the command "vmkfstools -B /vmfs/devices/disks/naa.##############################03:1"

    vmkfstools -B /vmfs/devices/disks/naa.##############################03:1

    LVM lock on the device naa.##############################03:1 will be forcibly broken. See the vmkfstools or ESX documentation for information on breaking the LVM lock.

    Continue to break lock?

    0) _Yes
    1) _No

    Select a number from 0-1: 0
    Successfully broke LVM device lock for /vmfs/devices/disks/naa.##############################03:1

  • Perform a storage rescan.
  • The missing extent is added back to the datastore, and the datastore mounts to the hosts automatically after the rescan.
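Once the rescan completes, the fix can be verified from the ESXi shell; a sketch ("Volume name" is a placeholder for the actual datastore name):

```shell
# Refresh VMFS volumes, then confirm the Mounted column now reads "true".
vmkfstools -V
esxcli storage filesystem list | grep -w 'Volume name'
# Optionally confirm that all spanned partitions are visible again.
vmkfstools -Ph -v10 "/vmfs/volumes/Volume name"
```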