com.vmware.vc.HA.RemediatedDupVMs event during/after vMotion with VMs running on NFS datastores
fdm.log:
2025-01-20T19:07:43.640Z verbose fdm[6111765] [Originator@6876 sub=Execution opID=clusterElection.cpp:1570-6fa78844] Execute remediation workflow locally; vm: /vmfs/volumes/<datastore-uuid>/<vm-directory>/<vm-name>.vmx, skipLockOwnerCheck: false
2025-01-20T19:07:43.641Z verbose fdm[7197555] [Originator@6876 sub=Cluster opID=WorkQueue-2a6f4069] Invoking GetLockOwnerForVmOnNfsDs on Duplicate VM locked file; path: /vmfs/volumes/<datastore-uuid>/<vm-directory>/<vm-name>.vmx.lck
2025-01-20T19:07:43.641Z verbose fdm[7197555] [Originator@6876 sub=Cluster opID=WorkQueue-2a6f4069] Nfs lock file name to read lock owner details; filename /vmfs/volumes/<datastore-uuid>/<vm-directory>/.lck-1c86140000000000
2025-01-20T19:07:43.641Z verbose fdm[7197555] [Originator@6876 sub=Cluster opID=WorkQueue-2a6f4069] Duplicate VM lock owner identified; path: /vmfs/volumes/<datastore-uuid>/<vm-directory>/<vm-name>.vmx.lck, owner: host-3098486
2025-01-20T19:07:43.641Z verbose fdm[7197555] [Originator@6876 sub=Invt opID=WorkQueue-2a6f4069] Adding /vmfs/volumes/<datastore-uuid>/<vm-directory>/<vm-name>.vmx to powering off set
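When reviewing a large fdm.log, the "Duplicate VM lock owner identified" messages above can be extracted programmatically. The following is a minimal sketch; the regular expression is an assumption derived from the log excerpt in this article, not an official log schema, and the sample line uses hypothetical placeholder values.

```python
import re

# Pattern for the FDM "Duplicate VM lock owner identified" message shown above.
# This regex is an assumption based on the excerpt, not a documented format.
OWNER_RE = re.compile(
    r"Duplicate VM lock owner identified; path: (?P<path>\S+), owner: (?P<owner>\S+)"
)

def find_lock_owners(log_lines):
    """Return (vmx_lock_path, owner_host) pairs found in fdm.log lines."""
    results = []
    for line in log_lines:
        match = OWNER_RE.search(line)
        if match:
            results.append((match.group("path"), match.group("owner")))
    return results

# Hypothetical sample line modeled on the excerpt above.
sample = [
    "2025-01-20T19:07:43.641Z verbose fdm[7197555] [Originator@6876 sub=Cluster "
    "opID=WorkQueue-2a6f4069] Duplicate VM lock owner identified; "
    "path: /vmfs/volumes/ds-uuid/vm-dir/vm-name.vmx.lck, owner: host-3098486",
]
print(find_lock_owners(sample))
```

Comparing the reported owner against the host where the VM is actually running can confirm whether FDM misidentified the lock owner during vMotion.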
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x
The split-brain remediation workflow was designed with the assumption that the VM lock (.lck) file is held only by the owner host of the VM. If the lockType is exclusive, it assumes that vMotion has finished and that the destination host has acquired the lock on the .lck file in EXCLUSIVE mode. This assumption holds for VMFS and vSAN. On an NFS datastore, however, the VM lock (.lck) file can be held by any host, so the split-brain workflow requires an alternate means to ensure that it does not invoke termination during vMotion and that the owner host is determined correctly.
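The behavioral difference described above can be sketched as follows. This is purely illustrative pseudologic under the assumptions stated in this article; the function and parameter names are invented and do not reflect FDM's actual implementation.

```python
# Illustrative sketch of the split-brain decision described above.
# All names are assumptions for illustration; FDM's real logic differs.

def should_terminate_duplicate(datastore_type, lock_type, lock_owner, local_host):
    """Decide whether the local duplicate VM instance may be terminated."""
    if datastore_type in ("VMFS", "vSAN"):
        # On VMFS/vSAN only the VM's owner host holds the lock, so an
        # EXCLUSIVE lock implies vMotion has already completed.
        return lock_type == "exclusive"
    if datastore_type == "NFS":
        # On NFS any host can hold the .lck file, so the owner must first be
        # read from the lock file (GetLockOwnerForVmOnNfsDs in the log above);
        # terminate only when another host is confirmed as the lock owner.
        return lock_owner is not None and lock_owner != local_host
    return False
```

The NFS branch shows why an extra owner lookup is needed: without it, an exclusive lock held transiently during vMotion would be misread as a completed migration.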
1. Log in to the VMware vSphere Web Client.
2. Take a snapshot of the vCenter Server VM.
3. Navigate to the cluster object: Home > vCenter > Clusters.
4. Select the cluster encountering the issue.
5. Select Manage.
6. Select vSphere HA.
7. Select Edit.
8. Select Advanced Options.
9. Click Add and enter the following option and value:
das.config.fdm.enableDupVmDetection = false
10. Disable vSphere HA on the cluster by deselecting "Turn ON vSphere HA".
11. Wait a few minutes for the task to complete.
12. Re-enable vSphere HA on the cluster.