"Conflicting VMFSdatastores" error in vCenter Server 6.X


Article ID: 313984


Products

VMware vCenter Server VMware vSphere ESXi

Issue/Introduction

This article provides a workaround for when an ESXi host disconnects from vCenter Server after upgrading to 6.X.

Symptoms:
  • When upgrading ESXi from version 5.5 to 6.X, you experience these symptoms:
     
    • ESXi host disconnects from vCenter Server.
    • Unable to reconnect ESXi host to vCenter Server.
       
  • Removing the disconnected ESXi host from the Cluster and trying to add it back to the same Cluster fails with the error:

    Conflicting VMFSdatastores
     
  • vCenter Server allows the ESXi host to be added to another Cluster or directly to the Datacenter; however, removing a peer host from the previously affected Cluster and re-adding it to the newly created Cluster fails with the error:

    Conflicting VMFSdatastores
     
  • If the Datastore that vCenter Server complains about is unmounted from the ESXi host, trying to add the ESXi host to the same Cluster once again fails with the error:

    Error: Conflicting VMFSdatastores (url=ds:///vmfs/volumes/Datastore-Name/) one of which is backed by local disk.

    Note: The datastore name is different this time.
     
  • In the %ALLUSERSPROFILE%\VMWare\vCenterServer\logs\vpxd.log (Windows) file or /var/log/vmware/vpxd.log (Linux) file, you see entries similar to:

    2016-07-28T13:30:36.075-06:00 error vpxd[05760] [Originator@6876 sub=HostAccess opID=6EF4FDED-00025C9A-31] [CheckIfConflictingDatastores] Conflicting VMFSdatastores (url=ds:///vmfs/volumes/Datastore-UUID/), one of which is backed by local disk
    2016-07-28T13:30:36.128-06:00 warning vpxd[05760] [Originator@6876 sub=vmomi.soapStub[28018] opID=6EF4FDED-00025C9A-31] Terminating invocation: server=<cs p:0000000095885e10, TCP:ESXi-Host-Name:443>, moref=vim.SessionManager:ha-sessionmgr, method=logout
    2016-07-28T13:30:36.144-06:00 info vpxd[05760] [Originator@6876 sub=vpxLro opID=6EF4FDED-00025C9A-31] [VpxLRO] -- FINISH task-internal-7405266
    2016-07-28T13:30:36.144-06:00 info vpxd[05760] [Originator@6876 sub=Default opID=6EF4FDED-00025C9A-31] [VpxLRO] -- ERROR task-internal-7405266 -- datacenter-607 -- vim.Datacenter.queryConnectionInfo: vim.fault.ConflictingDatastoreFound:
    --> Result:
    --> (vim.fault.ConflictingDatastoreFound) {
    --> faultCause = (vmodl.MethodFault) null,
    --> name = "dadch001_ds003",
    --> url = "ds:///vmfs/volumes/Datastore-UUID/",
    --> msg = ""
    --> }
    --> Args:

     
  • No stale entries are present in the vCenter Server database for the reported Datastore.
  • You can access the ESXi host through a direct vSphere Client session and manage the VMs and the ESXi host without any issues.
  • LUNs are not detected as snapshots.

Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.
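To confirm you are hitting this condition, you can search vpxd.log for the conflict message and pull out the datastore URL that vpxd flagged. A minimal sketch; the sample log line written below is a stand-in taken from the excerpt above so the script is self-contained, and on a real system you would point it at your own vpxd.log instead:

```shell
#!/bin/sh
# Real paths: /var/log/vmware/vpxd.log (appliance) or
# %ALLUSERSPROFILE%\VMware\vCenterServer\logs\vpxd.log (Windows).
# A sample line is used here so the sketch runs anywhere.
VPXD_LOG=/tmp/vpxd-sample.log
cat > "$VPXD_LOG" <<'EOF'
2016-07-28T13:30:36.075-06:00 error vpxd[05760] [CheckIfConflictingDatastores] Conflicting VMFSdatastores (url=ds:///vmfs/volumes/Datastore-UUID/), one of which is backed by local disk
EOF

# Extract the datastore URL that vpxd reported as conflicting.
grep -o 'url=ds:///vmfs/volumes/[^)]*' "$VPXD_LOG"
```

The extracted URL identifies which datastore to unmount in the workaround below.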


Environment

VMware vCenter Server 6.0.x
VMware vCenter Server 6.7.x
VMware vCenter Server 6.5.x
VMware vSphere ESXi 6.7
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.0

Cause

While adding or reconnecting the host, vpxd checks for conflicting datastores; that is, it tries to identify datastores that have the same URL but are backed by different disks.
This issue occurs because the same disk is recognized differently by different versions of ESXi: ESXi 6.X treats the datastore as non-local, whereas pre-6.X versions treat the same datastore as local.

Older drivers set the default transport type to PSCSI, while newer ones set it to SAS, so the value of isLocal differs between 5.5 and 6.X.

This is by design: in 6.X, with updated OEM drivers, the default transport type is SAS, which sets isLocal to true only if the device is an SSD.
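You can see how each host classifies the disk by checking the "Is Local" and "Is SSD" fields of esxcli's device listing. A sketch, with illustrative sample output embedded (the device ID is hypothetical); on a live host you would capture the real output of `esxcli storage core device list -d <device>` instead:

```shell
#!/bin/sh
# Illustrative sample of what a 6.X host with an updated SAS driver reports
# for a non-SSD local disk. On a live host, replace the heredoc with:
#   OUT=$(esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx)
OUT=$(cat <<'EOF'
naa.600508b1001c0000000000000000
   Display Name: Local Disk (naa.600508b1001c0000000000000000)
   Is Local: false
   Is SSD: false
   Device Type: Direct-Access
EOF
)

# A 5.5 host would report "Is Local: true" for the same disk -- this
# mismatch is what the vpxd conflict check trips over.
echo "$OUT" | grep -E 'Is Local|Is SSD'
```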
 

Resolution

This is expected behavior.

Caution:

  • Virtual machines on the Datastore reported as conflicting are inaccessible while the workaround is being applied.
  • Storage vMotion the virtual machines after adding the ESXi host to vCenter Server as a standalone host or to a different Cluster, or schedule downtime for the VMs and shut them down.

 

To work around this issue:

  1. Connect directly to the ESXi host using vSphere client.
  2. Click Configuration > Storage > Datastore View.
  3. Right-click the Datastore that vCenter Server complains about when re-adding / re-connecting the ESXi host to the same Cluster and select Unmount.
  4. Go to Devices View in the same window.
  5. Right-click the LUN associated with the Datastore that you unmounted and select Detach.
  6. The Datastore in question now disappears from the Datastore list, and the status of the corresponding device shows as Unmounted in italics.

    Note: If required, disconnect the storage. Ref: https://kb.vmware.com/s/article/2004605
  7. Re-connect / re-add the ESXi host to the same Cluster as before.
  8. Click Configuration > Storage > Devices View.
  9. Right-click the Unmounted device and click Attach.

    Note: If required, reconnect the storage.
     
  10. Go to Datastore View in the same window.
  11. Right-click on the corresponding Datastore that was unmounted and select Mount.
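The vSphere Client steps above can also be performed with esxcli from an SSH session on the host. A sketch with a dry-run guard, so the commands are only printed until you substitute real values; the datastore label and device ID below are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical placeholders -- substitute your own datastore label and the
# naa ID of its backing LUN.
DS_LABEL="Datastore-Name"
DEVICE="naa.600508b1001c0000000000000000"

# With DRY_RUN=1 (the default here) commands are only printed; set
# DRY_RUN=0 on the ESXi host to actually run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Steps 3 and 5: unmount the datastore, then detach its device.
run esxcli storage filesystem unmount -l "$DS_LABEL"
run esxcli storage core device set -d "$DEVICE" --state=off

# (Re-connect / re-add the host to the Cluster here -- step 7.)

# Steps 9 and 11: re-attach the device, then mount the datastore again.
run esxcli storage core device set -d "$DEVICE" --state=on
run esxcli storage filesystem mount -l "$DS_LABEL"
```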

