One or more VMs inaccessible after removing the NFS datastore backing one or more VMs or virtual disks

Article ID: 411411


Products

  • VMware vCenter Server
  • VMware vSphere ESXi
  • VMware vSphere ESXi 8.0

Issue/Introduction

Symptoms:

  • A VM shows as inaccessible in the vCenter inventory after an NFS datastore was unmapped from the host on the NFS server side.
  • The affected VM had recently been configured to access a file, such as a virtual disk (*.vmdk) file, located on the now inaccessible datastore, possibly as a result of restoring files from a backup stored on an NFS datastore.
  • One or more of the datastores used to store the VM configuration and/or virtual disks shows as inaccessible in the VM -> Datastores tab.

  • When you right-click the VM, the "Edit Settings" option is grayed out.

Environment

  • VMware vCenter (All versions)
  • VMware ESXi (All versions)

Cause

  • A few situations could have occurred:
    1. The VM that was located on the now inaccessible datastore was not removed from the host inventory or migrated to another datastore prior to unmapping the datastore from the host.
    2. The virtual disk(s) contained in the VM's configuration are located on the now inaccessible datastore and were not removed from the VM's configuration or migrated to another datastore prior to unmapping the datastore from the host.
    3. Possibly both of the above.

 

Verification:

  • If the VM configuration is located on an accessible datastore, you can check the VM's .vmx file to see if any of the virtual disks are referencing the inaccessible datastore:

1. Verify which host the VM is registered to.

        • Select Hosts and Clusters Inventory Tab on the left side of the window in the vSphere Client. 
        • Click on the VM
        • Choose the Summary Tab
        • Look for the "Host" field

2. Open an SSH session to the host identified above.

3. Run the following command to verify the location of the VM configuration:

esxcli vm process list | grep -A 1 -i vmname

 

You should see output similar to the following:

Display Name: vmname
Config File: /vmfs/volumes/XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXXXXX/vmname/vmname.vmx

 

4. Check the location of each vmdk in the VM's configuration

grep -i vmdk <Config_file_from_above_output>

 

You should see a line for each virtual disk in the VM's configuration similar to the following:

scsiX:Y.fileName = "/vmfs/volumes/XXXXXXXX-XXXXXXXX/vmname/vmname.vmdk"
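To see what this check looks like end to end without touching a live host, the following sketch builds a small sample .vmx file and runs the same grep against it. The file contents, paths, and datastore UUIDs below are placeholders for illustration only; on a real host you would point grep at the Config File path from step 3.

```shell
# Hypothetical sample .vmx with two disk references; UUIDs and paths are made up.
cat > /tmp/vmname.vmx <<'EOF'
scsi0:0.fileName = "/vmfs/volumes/11112222-33334444/vmname/vmname.vmdk"
scsi0:1.fileName = "/vmfs/volumes/aaaabbbb-ccccdddd/vmname/vmname_1.vmdk"
EOF

# Same check as step 4, pointed at the sample file:
grep -i vmdk /tmp/vmname.vmx
```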

 

5. Identify the NFS datastore from the UUIDs in the identified virtual disk files (*.vmdk files): 

 

The UUID for NFS volumes will be in the following format:  XXXXXXXX-XXXXXXXX. This is often sufficient to identify the volume (e.g. if you only had one NFS datastore).  Otherwise, you can run the following command to identify the datastore:

 

esxcli storage filesystem list

 

You should see output similar to:

 


Mount Point                                        Volume Name                                 UUID                                 Mounted  Type    Size            Free
-------------------------------------------------  ------------------------------------------  -----------------------------------  -------  ------  --------------  ----
/vmfs/volumes/XXXXXXXX-XXXXXXXX-0000-000000000000  nfs-datastore-01                            XXXXXXXX-XXXXXXXX-0000-000000000000  true     NFS41   17592186044416  17424337801216
/vmfs/volumes/XXXXXXXX-XXXXXXXX-0000-000000000000  nfs-datastore-02                            XXXXXXXX-XXXXXXXX-0000-000000000000  true     NFS41   17592186044416  17258287218688
/vmfs/volumes/AAAAAAAA-BBBBBBBB-CCCC-DDDDDDDDDDDD  myhostname-local-01                         AAAAAAAA-BBBBBBBB-CCCC-DDDDDDDDDDDD  true     VMFS-6    822486237184    820959510528

From this output you can confirm whether one or more of the UUIDs found in the vmdk file names identified in step 4 above matches the datastore that was unmapped from the ESXi host. PLEASE NOTE: UUIDs of NFS volumes contain all zeros in the last two segments, as shown above.
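Steps 4 and 5 can be tied together by cutting the datastore identifier out of each fileName path with sed, then looking for it in the UUID column of the esxcli storage filesystem list output. This is a sketch against a hypothetical sample file; the .vmx contents and the UUID are placeholders.

```shell
# Hypothetical sample .vmx; the datastore UUID is a placeholder.
cat > /tmp/vmname.vmx <<'EOF'
scsi0:0.fileName = "/vmfs/volumes/11112222-33334444/vmname/vmname.vmdk"
EOF

# Print the volume identifier (the path component after /vmfs/volumes/)
# for every disk reference in the file:
sed -n 's|.*fileName = "/vmfs/volumes/\([^/]*\)/.*|\1|p' /tmp/vmname.vmx
```

Each identifier printed can then be matched against the UUID column of the filesystem list to name the datastore.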

Resolution

  • If one or more virtual disks (but not the VM configuration itself) were located on the now inaccessible datastore -and- the virtual disk(s) are no longer needed, you can simply remove the virtual disk(s) from the VM's configuration, assuming the VM configuration is stored on an accessible datastore:

    • To remove a virtual disk from the VM's configuration, perform the following steps: 
      • NOTE: Please verify that the virtual disk(s) are no longer needed before proceeding. 
      • Change to the VM's directory on the datastore:

cd /vmfs/volumes/datastore/vmname

      • Backup the .vmx file

cp vmname.vmx vmname.vmx_backup 

      • Edit the .vmx file:
vi vmname.vmx
      • Look for lines similar to the following:

scsi0:2.present = "true"
scsi0:2.fileName = "/vmfs/volumes/XXXXXXXX-XXXXXXXX/vmname/vmname.vmdk"
scsi0:2.deviceType = "scsi-hardDisk"
sched.scsi0:2.shares = "normal"

      • Insert a # at the beginning of each of those lines, like this:

# scsi0:2.present = "true"
# scsi0:2.fileName = "/vmfs/volumes/XXXXXXXX-XXXXXXXX/vmname/vmname.vmdk"
# scsi0:2.deviceType = "scsi-hardDisk"
# sched.scsi0:2.shares = "normal"

NOTE: The # acts as a comment, effectively causing ESXi to ignore those lines. Adding the #, rather than simply deleting the line, makes it easier to recover the original state if needed, and reduces the possibility of accidentally deleting the wrong line.
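If you prefer not to edit the file by hand in vi, the same commenting-out can be sketched with a single sed expression. The device ID scsi0:2 and the sample file below are placeholders; adjust the device ID to match the disk you identified, and keep the .vmx_backup copy in case you need to revert. Whether sed supports in-place editing depends on the build (ESXi's busybox sed generally does), so verify on your host before relying on it.

```shell
# Hypothetical sample .vmx; device IDs, UUIDs, and paths are placeholders.
cat > /tmp/vmname.vmx <<'EOF'
scsi0:1.fileName = "/vmfs/volumes/aaaabbbb-ccccdddd/vmname/keep_this.vmdk"
scsi0:2.present = "true"
scsi0:2.fileName = "/vmfs/volumes/11112222-33334444/vmname/vmname.vmdk"
scsi0:2.deviceType = "scsi-hardDisk"
sched.scsi0:2.shares = "normal"
EOF

# Prefix '# ' on every line that mentions scsi0:2, leaving other lines alone:
sed -i '/scsi0:2/s/^/# /' /tmp/vmname.vmx
cat /tmp/vmname.vmx
```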

  • If the VM configuration and/or virtual disk(s) located on the now inaccessible datastore are still needed, remap the datastore to the host, then migrate the VM and/or virtual disk(s), as required:
    • Please work with your Storage Team or Storage vendor to remap the datastore to the host.
    • Once the VM is accessible again, you can then use Storage vMotion/Cold Migration to move the VM configuration and/or virtual disks to another datastore, as needed. See TechDocs: What is Migration with Storage vMotion
    • Once the migration is complete, you can unmap the datastore from the host.


  • If the entire VM is no longer needed, you can simply remove it from inventory or delete it, as needed.

Additional Information

  • This situation sometimes occurs with backup solutions that present backed-up files through an NFS datastore (e.g. Rubrik Live Mount) for data recovery purposes, with the datastore reclaimed later. It is not unique to any specific solution: it can occur with any VM whose configuration file or virtual disks are stored on an NFS datastore that is suddenly unmapped from the host without first migrating the VM or its virtual disks, or removing the reference to the virtual disk(s) from the VM's configuration.