Impact/Risks:
This process interacts directly with the vCenter Server Appliance (VCSA) Postgres Database, and it is recommended to have a snapshot of the VCSA prior to proceeding.
When adding an ESXi host to a vCenter Server that was previously part of a datacenter or cluster from another vCenter Server, these symptoms are experienced:
1. Adding the ESXi host fails.
2. Multiple ESXi hosts in the same cluster may become unresponsive or go to a Disconnected or Not Responding state.
3. vCenter inventory displays a datastore having the same name in 2 different datacenters.
Datastore 'datastore name(##)' conflicts with an existing datastore in the datacenter that has the same URL (ds:///vmfs/volumes/UUID/), but is backed by different physical storage. <esxi-hostname/ip> has a datastore that conflicts with an existing datastore in the datacenter.
VMware vSphere ESXi
VMware vCenter Server
1. Check whether the ESXi hosts still see this datastore, using the CLI or the vCenter inventory.
[root@esxi1:~] df -h
Filesystem   Size    Used  Available Use% Mounted on
VMFS-6     399.8G  165.3G     234.5G  41% /vmfs/volumes/VMFS
2. Get the naa ID from the CLI or vCenter GUI.
NOTE: Here you see two different LUN and VML IDs because one is the original and the other is a replica/snapshot. Both are backed by the same volume on the storage array and have the same size, but the replica is assigned a different UUID because of the LUN resignature operation.
Troubleshooting LUNs detected as snapshot LUNs in vSphere
VMFS - naa.123############################# - ########-########-####-############ (Original LUN)
VMFS - naa.789################################ - ########-########-####-############ (Replica LUN/snapshot)
[root@esxi2:~] ls -lah /dev/disks/ | grep -i naa.123#############################   (Original LUN)
-rw------- 1 root root 400.0G MMM DD HH:MM naa.123#############################
-rw------- 1 root root 400.0G MMM DD HH:MM naa.123#############################:1
lrwxrwxrwx 1 root root     36 MMM DD HH:MM vml.123################################################### -> naa.################################
lrwxrwxrwx 1 root root     38 MMM DD HH:MM vml.123###################################################:1 -> naa.################################:1
[root@esxi1:~] ls -lah /dev/disks/ | grep -i naa.789#############################   (Replica LUN/Snapshot)
-rw------- 1 root root 400.0G MMM DD HH:MM naa.789#############################
-rw------- 1 root root 400.0G MMM DD HH:MM naa.789#############################:1
lrwxrwxrwx 1 root root     36 MMM DD HH:MM vml.789################################################### -> naa.################################
lrwxrwxrwx 1 root root     38 MMM DD HH:MM vml.789###################################################:1 -> naa.################################:1
[root@esxi1:~] esxcfg-scsidevs -m naa.123#############################:1
/vmfs/devices/disks/naa.123#############################:1  ########-########-####-############  0  VMFS
[root@esxi2:~] esxcfg-scsidevs -m naa.789#############################:1
/vmfs/devices/disks/naa.789#############################:1  ########-########-####-############  0  VMFS
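The naa IDs and UUID-to-device mappings shown above can also be collected in one pass with esxcli, assuming SSH access to the host; a sketch:

```shell
# List each mounted VMFS datastore with its volume name, UUID, and the
# backing device (naa ID) and partition of every extent.
esxcli storage vmfs extent list

# Show the mapping of VMFS UUIDs to their console device paths for all
# mounted VMFS volumes (same data as the per-device queries above).
esxcfg-scsidevs -m
```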
3. Run this command to check whether the ESXi host is seeing any snapshot volumes.
[root@esxi1:~] esxcfg-volume -l
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 123#####-########-####-############/VMFS
Can mount: No (the original volume is still online)
Can resignature: Yes
Extent name: naa.789#############################:1     range: 0 - 6649855 (MB)
NOTE: You can also use esxcli storage vmfs snapshot list, but its output may not match that of esxcfg-volume -l.
[root@esxi3:~] vmkfstools -Ph -v1 /vmfs/volumes/VMFS
VMFS-6.82 (Raw Major Version: 24) file system spanning 1 partitions.
File system label (if any): VMFS
Mode: public ATS-only
Capacity 399.8 GB, 234.4 GB available, file block size 1 MB, max supported file size 64 TB
Volume Creation Time: MMM DD HH:MM:SS 2024
Files (max/free): 16384/16248
Ptr Blocks (max/free): 0/0
Sub Blocks (max/free): 16384/16226
Secondary Ptr Blocks (max/free): 256/255
File Blocks (overcommit/used/overcommit %): 0/169280/0
Ptr Blocks (overcommit/used/overcommit %): 0/0/0
Sub Blocks (overcommit/used/overcommit %): 0/158/0
Large File Blocks (total/used/file block clusters): 800/222/192
Volume Metadata size: 1781989376
Disk Block Size: 512/16384/0
UUID: 123#####-########-####-############
Partitions spanned (on "lvm"): naa.123#############################:1
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
NOTE: When a snapshot/replica LUN is mounted by resignaturing it, whether manually, by SRM, or by a third-party product such as a SAN plug-in workflow, the VMFS volume receives a new UUID. In a nutshell, if a host has two LUNs attached with different UUIDs that are backed by the same volume on the storage array, vCenter reports this error because it sees two different datastores behind what is really one volume.
Datastore 'VMFS' conflicts with an existing datastore in the datacenter that has the same
URL (ds:///vmfs/volumes/123#####-########-####-############/), but is backed by different physical storage.
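For context on how the duplicate UUID described in the note above comes about, these are the two standard esxcli subcommands for handling a detected snapshot volume; the label VMFS is a placeholder from the examples above, and this is a sketch, not part of the fix procedure:

```shell
# Mount the snapshot volume WITHOUT resignaturing. This keeps the original
# VMFS UUID and is only possible while the original volume is not online.
esxcli storage vmfs snapshot mount -l VMFS

# Resignature the snapshot volume. This writes a NEW UUID to the volume,
# which is exactly what creates the duplicate-datastore condition this
# article describes when both LUNs are later visible to the same vCenter.
esxcli storage vmfs snapshot resignature -l VMFS
```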
Follow the procedure below to fix this error:
Assuming data is stored on this datastore and VMs are running on it, follow the steps below in the listed order. This will require downtime for the VMs.
1. Shut down the VMs running on the concerned datastore.
2. Unmount this datastore from all hosts it is mounted to, across all clusters and datacenters in the vCenter.
3. Detach the replica/snapshot LUN from all hosts in the vCenter.
4. Perform a storage rescan at the datacenter object level.
5. Mount the original (LUN) datastore on one of the hosts, then go to the datastore view to remount it on the remaining hosts in the cluster/datacenter.
This eliminates the replica/snapshot LUN.
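The per-host CLI portion of the steps above can be sketched as follows (the same actions are available in the vSphere Client). The datastore label VMFS and the naa ID are placeholders from the earlier examples; repeat the unmount/detach on every host that mounts the volume:

```shell
# Step 1 (power off the VMs) is done in the vSphere Client beforehand.

# Step 2: unmount the datastore from this host.
esxcli storage filesystem unmount -l VMFS

# Step 3: detach the replica/snapshot LUN from this host.
esxcli storage core device set --state=off -d naa.789#############################

# Step 4: rescan the storage adapters (the GUI rescan at the datacenter
# object covers all hosts at once).
esxcli storage core adapter rescan --all

# Step 5: mount the original datastore again on one host, then remount it
# on the remaining hosts from the datastore view in the vSphere Client.
esxcli storage filesystem mount -l VMFS
```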
Follow the procedure below to fix this error for only one ESXi host:
Assuming data is stored on this datastore and VMs are running on it, follow the steps below in the listed order. This will require downtime for the VMs.
Repeat steps 1-3 for every datastore that is reported when adding the ESXi host to vCenter.
1. Shut down and unregister the VMs running on the concerned datastore from the ESXi host.
2. Unmount this datastore from the ESXi host.
3. Detach the LUN from the ESXi host (esxcli storage core device set --state=off -d NAA_ID).
4. Perform a storage rescan on the ESXi host.
5. Add the ESXi host to vCenter.
Note: This fixes the issue of not being able to add the ESXi host to vCenter, but it does not eliminate the replica/snapshot LUN.
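The single-host steps above can be sketched from the host's shell; this is a sketch, with the VM IDs, datastore label, and NAA_ID as placeholders:

```shell
# Step 1: find, power off, and unregister the VMs on the datastore.
vim-cmd vmsvc/getallvms          # note the Vmid of each VM on the datastore
vim-cmd vmsvc/power.off <vmid>
vim-cmd vmsvc/unregister <vmid>

# Step 2: unmount the datastore from this host.
esxcli storage filesystem unmount -l VMFS

# Step 3: detach the LUN (same command as listed in step 3 above).
esxcli storage core device set --state=off -d NAA_ID

# Step 4: rescan the storage adapters; then add the host to vCenter
# from the vSphere Client (step 5).
esxcli storage core adapter rescan --all
```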
The procedure for removing a datastore from the vCenter database must be followed when the datastore named in the error is not visible in the vCenter inventory and does not go away after a vCenter reboot.
Note: Ensure a snapshot is taken of the vCenter VM. For vCenter Servers in linked mode, take offline snapshots of all nodes: shut down all vCenter Servers and PSCs in the SSO domain at the same time, snapshot them, and power them back on. If you need to revert to one of these snapshots, shut down all the nodes and revert every node to its snapshot. Failure to perform these steps will lead to replication problems across the PSC databases.
To resolve this issue:
select * from vpx_datastore where storage_url='<storage URL from the error message>';
This KB article can be referred to when the datastores are shown as snapshots in the vCenter Server: Troubleshooting LUNs detected as snapshot LUNs in vSphere
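The lookup above is run against the embedded Postgres database on the VCSA. A sketch, assuming the default database name VCDB and the standard psql path on the appliance (take the vCenter snapshot described in the note above first):

```shell
# Open a psql session to the embedded vCenter database (run on the VCSA shell).
/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres

# Inside psql, locate the stale datastore row using the URL from the error:
#   select id, name from vpx_datastore
#   where storage_url='<storage URL from the error message>';
# Any removal of the row must follow the database-cleanup procedure
# referenced above; do not delete rows ad hoc.
```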
If the issue is related to a local datastore, removing and re-adding the ESXi host to the vCenter inventory should help resolve it. Please note that removing the ESXi host from the vCenter will also disconnect it from the vDS (vSphere Distributed Switch). Once the host is re-added to vCenter, it will need to be reconnected to the vDS.
Steps to add a host to a vDS (vSphere Distributed Switch): Add Hosts to a vSphere Distributed Switch