Error: Datastore Conflicts with an Existing Datastore in the Datacenter That Has the Same URL When Adding/Reconnecting ESXi Host



Article ID: 316577


Products

VMware vSphere ESXi
VMware vCenter Server

Issue/Introduction

Impact/Risks:
This process interacts directly with the vCenter Server Appliance (VCSA) Postgres Database, and it is recommended to have a snapshot of the VCSA prior to proceeding.

When adding an ESXi host to a vCenter Server that was previously part of a datacenter or cluster from another vCenter Server, the following symptoms occur:

1. Adding the ESXi host fails.
2. Multiple ESXi hosts in the same cluster may become unresponsive or go into a Disconnected or Not Responding state.
3. The vCenter inventory displays a datastore with the same name in two different datacenters, with errors similar to:


Datastore 'datastore name(##)' conflicts with an existing datastore in the datacenter that has the same URL (ds://vmfs/volumes/UUID/), but is backed by different physical storage.

<esxi-hostname/ip> has a datastore that conflicts with an existing datastore in the datacenter.

Environment

VMware vSphere ESXi
VMware vCenter Server

Cause

  • This is a rare scenario that can occur when the same datastore is mounted using the original LUN in one datacenter and as a replica LUN/snapshot in another datacenter belonging to the same or a different vCenter Server. The problem is mostly seen in very large infrastructures and is uncommon in smaller vSphere environments.
  • This issue may also manifest, for the same underlying reason, as a duplicate UUID for the same datastore in the vCenter database.
  • Additionally, if the datastore has been removed from the vCenter inventory and is no longer visible but the error still occurs, the stale entry must be removed from the vCenter database.

Resolution


1. Check whether the ESXi hosts still see this datastore, using the CLI or the vCenter inventory.

[root@esxi1:~] df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-6     399.8G 165.3G    234.5G  41% /vmfs/volumes/VMFS
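As an alternative to df, the following command (a hedged suggestion, not part of the original output above) lists mounted filesystems together with their VMFS UUIDs, which is easier to correlate with the UUID shown in the error message:

```shell
# List mounted filesystems with volume name, UUID, and device backing (read-only).
esxcli storage filesystem list
```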

2. Get the naa ID from the CLI or the vCenter GUI.

NOTE: Here you see two different LUNs and VML IDs because one is the original and the other is a replica/snapshot. Both are backed by the same volume on the storage array and have the same size, but the replica is assigned a different UUID by vCenter because of the LUN resignature operation.

Troubleshooting LUNs detected as snapshot LUNs in vSphere

VMFS - naa.123#############################- ########-########-####-############ (Original LUN)

VMFS - naa.789################################- ########-########-####-############ (Replica LUN/snapshot)

[root@esxi2:~] ls -lah /dev/disks/ | grep -i naa.123############################# (Original LUN)
-rw-------    1 root     root      400.0G MMM DD HH:MM naa.123#############################
-rw-------    1 root     root      400.0G MMM DD HH:MM naa.123#############################:1
lrwxrwxrwx    1 root     root          36 MMM DD HH:MM vml.123###################################################-> naa.################################
lrwxrwxrwx    1 root     root          38 MMM DD HH:MM vml.123###################################################:1 -> naa.################################:1

[root@esxi1:~] ls -lah /dev/disks/ | grep -i naa.789############################# (Replica LUN/Snapshot)
-rw-------    1 root     root      400.0G MMM DD HH:MM naa.789#############################
-rw-------    1 root     root      400.0G MMM DD HH:MM naa.789#############################:1
lrwxrwxrwx    1 root     root          36 MMM DD HH:MM vml.789###################################################-> naa.################################
lrwxrwxrwx    1 root     root          38 MMM DD HH:MM vml.789###################################################:1 -> naa.################################:1

[root@esxi1:~] esxcfg-scsidevs -m  
naa.123#############################:1 /vmfs/devices/disks/naa.123#############################:1 ########-########-####-############  0 VMFS

[root@esxi2:~] esxcfg-scsidevs -m  
naa.789#############################:1 /vmfs/devices/disks/naa.789#############################:1 ########-########-####-############  0 VMFS

3. Run this command to check if the ESXi host is seeing any snapshots. 

[root@esxi1:~] esxcfg-volume -l
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 123#####-########-####-############/VMFS
Can mount: No (the original volume is still online)
Can resignature: Yes
Extent name: naa.789#############################:1     range: 0 - 6649855 (MB)

NOTE: You can also use esxcli storage vmfs snapshot list, but its output may not match that of esxcfg-volume -l.

[root@esxi3:~] vmkfstools -Ph -v1 /vmfs/volumes/VMFS
VMFS-6.82 (Raw Major Version: 24) file system spanning 1 partitions.
File system label (if any): VMFS
Mode: public ATS-only
Capacity 399.8 GB, 234.4 GB available, file block size 1 MB, max supported file size 64 TB
Volume Creation Time: MMM  DD HH:MM:SS 2024
Files (max/free): 16384/16248
Ptr Blocks (max/free): 0/0
Sub Blocks (max/free): 16384/16226
Secondary Ptr Blocks (max/free): 256/255
File Blocks (overcommit/used/overcommit %): 0/169280/0
Ptr Blocks  (overcommit/used/overcommit %): 0/0/0
Sub Blocks  (overcommit/used/overcommit %): 0/158/0
Large File Blocks (total/used/file block clusters): 800/222/192
Volume Metadata size: 1781989376
Disk Block Size: 512/16384/0
UUID: 123#####-########-####-############
Partitions spanned (on "lvm"):
        naa.123#############################:1
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0

NOTE: When a snapshot/replica LUN is mounted by resignaturing it (manually, through SRM, or through a third-party product such as a SAN vendor plug-in workflow), the VMFS volume is resignatured with a new UUID. In short, if a host has two LUNs attached with different UUIDs, you hit this error because vCenter sees two different UUIDs backed by two different LUNs that belong to the same volume on the storage array.

Datastore 'VMFS' conflicts with an existing datastore in the datacenter that has the same 
URL (ds:///vmfs/volumes/123#####-########-####-############/), but is backed by different physical storage.

Follow the procedure below to fix this error

Assuming you have data stored on this datastore and VMs running on it, follow the procedure in the sequence below. This requires downtime for the VMs.

1. Shut down the VMs running on the affected datastore
2. Unmount the datastore from all the hosts it is mounted to, across clusters or datacenters within the vCenter Server
3. Detach the replica/snapshot LUN from all the hosts in the vCenter Server
4. Perform a storage rescan at the datacenter object level
5. Mount the original (LUN) datastore on one of the hosts, then go to the datastore view to remount it on the remaining hosts in the cluster/datacenter.

This will eliminate the replica/snapshot LUN. 
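The unmount/detach/rescan steps above can be sketched with esxcli, repeated on every ESXi host that mounts the datastore. The datastore label "VMFS" and the naa ID are placeholders from the example earlier in this article; substitute your own values:

```shell
# Step 2: unmount the datastore (fails if VMs or files on it are still in use)
esxcli storage filesystem unmount -l VMFS

# Step 3: detach the replica/snapshot LUN (the naa ID below is a placeholder)
esxcli storage core device set --state=off -d naa.789#############################

# Step 4: rescan all storage adapters (or rescan from vCenter at the datacenter level)
esxcli storage core adapter rescan --all

# Step 5: on one host, remount the datastore backed by the original LUN
esxcli storage filesystem mount -l VMFS
```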

Follow the procedure below to fix this error for only one ESXi host

Assuming you have data stored on this datastore and VMs running on it, follow the procedure in the sequence below. This requires downtime for the VMs.

Repeat steps 1-3 for all datastores that have complaints when adding an ESXi host to vCenter.

1. Shut down and unregister the VMs running on the affected datastore on the ESXi host
2. Unmount the datastore from the ESXi host
3. Detach the LUN from the ESXi host (esxcli storage core device set --state=off -d NAA_ID)
4. Perform a storage rescan on the ESXi host
5. Add the ESXi host to vCenter Server

Note: This will fix the issue of not being able to add the ESXi host to vCenter, but it does not eliminate the replica/snapshot LUN.
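The single-host steps above can be sketched directly on the ESXi host. The VM ID (vmid) and NAA_ID below are placeholders you must look up on your own host:

```shell
# Step 1: list the VMs registered on this host, then power off and unregister
# the ones residing on the affected datastore (<vmid> is a placeholder)
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off <vmid>
vim-cmd vmsvc/unregister <vmid>

# Step 2: unmount the datastore by its label
esxcli storage filesystem unmount -l VMFS

# Step 3: detach the LUN (NAA_ID is a placeholder)
esxcli storage core device set --state=off -d NAA_ID

# Step 4: rescan all storage adapters
esxcli storage core adapter rescan --all
```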

The procedure below for removing a datastore from the vCenter database must be followed when the datastore named in the error is not visible in the vCenter inventory and does not go away after a vCenter reboot.

Note: Ensure a snapshot is taken of the vCenter Server VM. For vCenter Servers in linked mode, take offline snapshots of all the nodes: shut down all vCenter Servers and PSCs in the SSO domain at the same time, snapshot them, and then power them back on. If you later need to revert to one of these snapshots, shut down all the nodes and revert every node to its snapshot. Failure to follow these steps will lead to replication problems across the PSC databases.

To resolve this issue:

  1. Connect to the VCSA through SSH.
  2. Access the Bash shell with the command shell.set --enabled true, or just type shell.
  3. Stop the vpxd service with the command: service-control --stop vpxd
  4. Access the Postgres database using this command:
     /opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres
  5. For vCenter 6.5 or later, run the query: select id,name,storage_url from vpx_datastore;
  6. Alternatively, run:
     select * from vpx_datastore where storage_url='<storage URL from the error message>';
  7. Look for the affected datastore (DS) UUID and note its ID.
  8. To confirm the correct ID, run: select * from vpx_entity where id=<ID found in the previous step>;
  9. Delete the entries from vpx_ds_assignment, vpx_vm_ds_space, vpx_datastore, and vpx_entity. Sample where the ID = ##1:

     delete from vpx_ds_assignment where ds_id=##1;
     delete from vpx_vm_ds_space where ds_id=##1;
     delete from vpx_datastore where id=##1;
     delete from vpx_entity where id=##1;

  10. Quit the database with \q
  11. Start the vpxd service so the changes take effect: service-control --start vpxd
  12. Once the vpxd service is up again, add the ESXi host back to the vCenter Server.
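The cleanup queries can also be run non-interactively by piping them into psql. This is a hedged sketch: "1234" is a hypothetical datastore ID standing in for the real one, which must first be confirmed against vpx_entity, with vpxd stopped and a VCSA snapshot already taken:

```shell
# Run the four delete statements in one psql session (replace 1234 with the
# ID confirmed from vpx_datastore/vpx_entity before running).
/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres <<'SQL'
delete from vpx_ds_assignment where ds_id=1234;
delete from vpx_vm_ds_space where ds_id=1234;
delete from vpx_datastore where id=1234;
delete from vpx_entity where id=1234;
SQL
```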

Additional Information

This KB article can be referred to when datastores are shown as snapshots in the vCenter Server: Troubleshooting LUNs detected as snapshot LUNs in vSphere

If the issue is related to a local datastore, removing and re-adding the ESXi host to the vCenter inventory should help resolve it. Please note that removing the ESXi host from the vCenter will also disconnect it from the vDS (vSphere Distributed Switch). Once the host is re-added to vCenter, it will need to be reconnected to the vDS.

Steps to add host to a vDS (vSphere Distributed Switch) : Add Hosts to a vSphere Distributed Switch