Error - Cannot create a failover image for group 'GID-########-6d7d-47d4-af81-############' on vSphere Replication Server 'vrms.broadcom.com' (address '10.#.#.1'). No storage is found for datastore path '[vmware-vsan] ########-a666-795b-26a1-############/hbrcfg.GID-########-6d7d-47d4-af81-############.499773.vmx.42629'.
/opt/vmware/hms/logs/hms.log:
2024-11-30 10:37:57.000 INFO com.vmware.hms.i18n.class com.vmware.hms.response.filter.I18nActivationResponseFilter [tcweb-29] (..response.filter.I18nActivationResponseFilter) [operationID=########-30f2-4a02-8c03-############-HMS-35956639,sessionID=95003A2E] | The localized message is: Cannot create a failover image for group 'GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6' on vSphere Replication Server 'vrms.broadcom.com' (address '10.#.#.1').
2024-11-30 10:37:57.000 INFO com.vmware.hms.i18n.class com.vmware.hms.response.filter.I18nActivationResponseFilter [tcweb-29] (..response.filter.I18nActivationResponseFilter) [operationID=########-30f2-4a02-8c03-############-HMS-35956639,sessionID=95003A2E] | The localized message is: No storage is found for datastore path '[vmware-vsan] ########-a666-795b-26a1-############/hbrcfg.GID-########-6d7d-47d4-af81-############.499773.vmx.42629'.
/var/log/vmware/hbrsrv.log:
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] HbrError for (datastoreUUID: "vsan:52############29-6e############32"), (hostId: "host-7469"), (pathname: "########-a666-795b-26a1-############/hbrcfg.GID-########-6d7d-47d4-af81-############.499773.vmx.42629"), (flags: nfc-error) stack:
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [0] Class: NFC Code: 16
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [1] NFC error: NFC_FILE_MISSING
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [2] Code set to: Storage is not found.
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [3] Set error flag: nfc-error
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [4] Can't open remote file /vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/hbrcfg.GID-########-6d7d-47d4-af81-############.499773.vmx.42629
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [5] Copying config file from instance
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [6] Creating image of instance of GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6
2024-11-30T10:37:54.412Z error hbrsrv[01189] [Originator@6876 sub=Main groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] [7] Creating fail-over image of GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 with optimized reprotect
2024-11-30T10:37:54.412Z verbose hbrsrv[01189] [Originator@6876 sub=PropertyProvider groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] RecordOp ASSIGN: info.progress, Hbr.Replica.Task.5262e80e-f061-ab13-f45b-ae338565a1cc. Applied change to temp map.
2024-11-30T10:37:54.412Z info hbrsrv[01189] [Originator@6876 sub=Misc groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] Completing Replica::Task 5262e80e-f061-ab13-f45b-ae338565a1cc as 3
2024-11-30T10:37:54.412Z verbose hbrsrv[01189] [Originator@6876 sub=PropertyProvider groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] RecordOp ASSIGN: info, Hbr.Replica.Task.5262e80e-f061-ab13-f45b-ae338565a1cc. Applied change to temp map.
2024-11-30T10:37:54.412Z verbose hbrsrv[01189] [Originator@6876 sub=ReplicaTaskManager groupID=GID-3dfad3d5-6d7d-47d4-af81-69823c2942d6 opID=dd2cde30-0cc4-499a-950b-beaa495e0519-failover:3ccb:497b:3dbd:311d-HMS-35956596] Completed task 5262e80e-f061-ab13-f45b-ae338565a1cc. Cleanup after 2024-11-30 10:47:54 UTC.
/var/log/hostd.log:
2024-11-30T12:07:06.606Z In(166) Hostd[2111401]: [Originator@6876 sub=Vimsvc.TaskManager opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Task Created : haTask-ha-folder-vm-vim.Folder.registerVm-11860377
2024-11-30T12:07:06.606Z In(166) Hostd[2111397]: [Originator@6876 sub=Solo.HaVMFolder opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Register called: [vmware-vsan] ########-a666-795b-26a1-############/VMName.vmx
2024-11-30T12:07:06.645Z Db(167) Hostd[2111397]: [Originator@6876 sub=Vmsvc opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Registering virtual machine [86]: /vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx
2024-11-30T12:07:06.646Z Db(167) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Disk cache flushed
2024-11-30T12:07:06.647Z In(166) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Setting failover status to false
2024-11-30T12:07:06.649Z In(166) Hostd[2111397]: [Originator@6876 sub=Libs opID=esxui-896a-3ca1 sid=52fb0fcf user=root] VigorOffline_Init: Failed to initialize VIGOR offline: The configuration file of the virtual machine is corrupted.
2024-11-30T12:07:06.649Z In(166) Hostd[2111397]: [Originator@6876 sub=Libs opID=esxui-896a-3ca1 sid=52fb0fcf user=root]
2024-11-30T12:07:06.649Z Db(167) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] LoadFromConfig translated error to vim.fault.InvalidVmConfig
2024-11-30T12:07:06.649Z Db(167) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] LoadFromConfig message: The configuration file of the virtual machine is corrupted.
2024-11-30T12:07:06.649Z Db(167) Hostd[2111392]: -->
2024-11-30T12:07:06.650Z In(166) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Failed to load virtual machine
2024-11-30T12:07:06.650Z Wa(164) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Failed to load VM from vigor during register Fault cause: vim.fault.InvalidVmConfig
2024-11-30T12:07:06.650Z Wa(164) Hostd[2111392]: -->
2024-11-30T12:07:06.650Z In(166) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Marking VirtualMachine invalid
2024-11-30T12:07:06.650Z In(166) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] State Transition (VM_STATE_INITIALIZING -> VM_STATE_INVALID_CONFIG)
2024-11-30T12:07:06.651Z Db(167) Hostd[2111634]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx] GetResourcePool: not initialized.
2024-11-30T12:07:06.651Z Db(167) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] GetVigorCnx: VM not loaded
2024-11-30T12:07:06.652Z Wa(164) Hostd[2111397]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Unable to check for locked state: N3Vim5Fault12InvalidState9ExceptionE(Fault cause: vim.fault.InvalidState
2024-11-30T12:07:06.657Z Wa(164) Hostd[2111392]: --> )
2024-11-30T12:07:06.657Z Wa(164) Hostd[2111392]: --> [context]zKq7AVICAgAAAP////8QaG9zdGQAAANZQWxpYnZtYWNvcmUuc28AAR71U2hvc3RkAAGsIVwBmEbFATJexQGNos0Bcu3RARfViQFx2ImCwDoSAWxpYnZpbS10eXBlcy5zbwAB7/9WAI4xKACwTCgAS9lJA4J6AGxpYnB0aHJlYWQuc28uMAAE7y4PbGliYy5zby42AA==[/context]
2024-11-30T12:07:06.657Z In(166) Hostd[2111397]: [Originator@6876 sub=Vimsvc.ha-eventmgr opID=esxui-896a-3ca1 sid=52fb0fcf user=root] Event 986438 : Registered /vmfs/volumes/vsan:52############29-6e############32/########-a666-795b-26a1-############/VMName.vmx on host.broadcom.com in ha-datacenter
vSphere Replication
VMware Site Recovery Manager
VMware Live Site Recovery
If you encounter this error, several factors could be involved; the most prominent ones are listed here.
1. Network connectivity issues between the target replication appliance and the host(s) where the VM is being recovered.
2. Insufficient free space on the datastore where the recovery is happening (particularly with thin-provisioned datastores).
3. A corrupt VMX file in the replication datastore (a quick check is sketched after this list).
4. Required replication ports that are not open.
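Cause 3 can sometimes be spotted without registering the VM. Below is a minimal Python sketch that flags obvious VMX corruption (an empty file, NUL bytes, or lines that are not key = "value" pairs); the path is a placeholder, and a clean result does not prove the file is valid, since hostd performs much deeper validation (see the "configuration file of the virtual machine is corrupted" entries above):

#!/usr/bin/env python
# Minimal sketch: flag obvious corruption in a replica .vmx file.
# VMX_PATH is a placeholder; adjust it for your datastore layout.
import re

VMX_PATH = "/vmfs/volumes/datastore1/VMName/VMName.vmx"

with open(VMX_PATH, "rb") as fh:
    raw = fh.read()

if not raw:
    print("FAIL: file is empty")
elif b"\x00" in raw:
    print("FAIL: file contains NUL bytes (likely corrupt)")
else:
    # VMX files are key = "value" pairs; anything else is suspect.
    pattern = re.compile(r'^\s*[\w.:]+\s*=\s*".*"\s*$')
    bad = [line for line in raw.decode("latin-1").splitlines()
           if line.strip() and not pattern.match(line)]
    if bad:
        print("SUSPECT: %d malformed line(s)" % len(bad))
    else:
        print("OK: no malformed lines found")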
Troubleshoot this error in the order given below.
1. Replication status
Check the replication status of the VM in question:
1. Is the VM currently showing an OK status in the Replications tab of the SRM UI?
2. Is the VM still performing an Initial or Incremental Sync?
3. Has the VM been recovered at the target site?
If the replication status shows OK or the VM has been recovered, move on to step 2. If the VM is still performing an Initial/Incremental Sync, continue by checking the target replication appliance and host logs.
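If you have SSH access to the source ESXi host, you can also query the host-based replication state with vim-cmd. The sketch below assumes a hypothetical hostname and vmid (list vmids on the host with vim-cmd vmsvc/getallvms):

#!/usr/bin/env python
# Minimal sketch: query host-based replication state for one VM over SSH.
# HOST and VM_ID are placeholders for your environment.
import subprocess

HOST = "root@esxi01.example.com"
VM_ID = "86"

result = subprocess.run(
    ["ssh", HOST, "vim-cmd", "hbrsvc/vmreplica.getState", VM_ID],
    capture_output=True, text=True, check=True,
)
print(result.stdout)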
2. Datastore space
Check the target datastore where the failover image is being created and ensure it is accessible and has enough free space. This problem can arise with thin-provisioned datastores or thin LUNs, or when a datastore runs out of space.
If none of those checks reveals a problem, look for datastore-level issues in the VMkernel logs (/var/log/vmkernel.log) on one of the hosts connected to it.
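A quick way to verify accessibility and free space from a host connected to the datastore is sketched below (ESXi ships a Python interpreter); the mount path and the 10 GiB threshold are placeholders for your environment:

#!/usr/bin/env python
# Minimal sketch: report free space on a datastore mount.
# DATASTORE_MOUNT and the 10 GiB threshold are placeholders.
import shutil

DATASTORE_MOUNT = "/vmfs/volumes/vmware-vsan"

usage = shutil.disk_usage(DATASTORE_MOUNT)
free_gb = usage.free / 1024 ** 3
total_gb = usage.total / 1024 ** 3
print("%s: %.1f GiB free of %.1f GiB" % (DATASTORE_MOUNT, free_gb, total_gb))
if free_gb < 10:
    print("WARNING: datastore is low on free space")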
3. Network connectivity and ports
Ensure that all ports required for replication and recovery are open per the documentation and that no firewall is blocking them. Pay special attention if you are using the NSX firewall: its policies can be very granular, and enabling or disabling them safely requires an administrator with the right expertise.
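A basic TCP reachability test can quickly rule out blocked ports. The sketch below probes ports commonly used by vSphere Replication: 31031 (initial replication traffic) and 44046 (ongoing replication traffic) toward the appliance, and 902 (NFC) from the appliance toward the ESXi hosts, which is the traffic failing in the hbrsrv.log excerpt above. The hostnames are placeholders; treat the official documentation for your version as the authoritative port list:

#!/usr/bin/env python
# Minimal sketch: TCP reachability checks for common replication ports.
# Hostnames are placeholders; confirm the port list for your version.
import socket

CHECKS = [
    ("vrms.example.com", 31031),   # initial replication traffic
    ("vrms.example.com", 44046),   # ongoing replication traffic
    ("esxi01.example.com", 902),   # NFC traffic to the target host
]

for host, port in CHECKS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print("OPEN   %s:%d" % (host, port))
    except OSError as err:
        print("CLOSED %s:%d (%s)" % (host, port, err))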
4. Collect logs and open a support ticket
Lastly, if you want the SRM support team to determine the root cause of this problem, it is critical to collect log bundles from the right components at the right time.
Logs must be collected after a TEST recovery is run and before a CLEANUP is performed, from:
1. The target replication appliance (VRMS), plus the add-on replication server if the VM in question is being recovered via an add-on server instead of the VRMS.
2. The target host where the VM resides.
3. The recovery report.
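If you also stage copies of the individual log files quoted in this article, a timestamped archive helps support correlate them with the TEST recovery window. This convenience sketch runs wherever those copies live and does not replace the full support bundles generated through the appliance and host tooling:

#!/usr/bin/env python
# Minimal sketch: archive staged log copies with a collection timestamp.
# LOGS lists the files quoted in this article; missing files are skipped.
import pathlib
import tarfile
import time

LOGS = [
    "/opt/vmware/hms/logs/hms.log",
    "/var/log/vmware/hbrsrv.log",
    "/var/log/hostd.log",
]

stamp = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
bundle = pathlib.Path("sr-logs-%s.tar.gz" % stamp)
with tarfile.open(bundle, "w:gz") as tar:
    for log in LOGS:
        path = pathlib.Path(log)
        if path.exists():
            tar.add(path, arcname=path.name)
print("Wrote %s" % bundle)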