/common/log/admin/app.log
/var/log/vmware/hbrsrv.log
/tmp/Fleet-appliances/<Service-Mesh>/<IX-Appliance>/var/log/vmware/hbrsrv.log
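The appliance subdirectory names under the extracted tech bundle vary per deployment, so rather than guessing the `<Service-Mesh>/<IX-Appliance>` path, every copy of hbrsrv.log can be located with a single `find` pass. A minimal sketch, assuming the bundle was extracted under `/tmp/Fleet-appliances` as shown above:

```shell
# Locate every hbrsrv.log copy under an extracted HCX tech bundle.
# BUNDLE_ROOT is the extraction root from the paths above; adjust as needed.
BUNDLE_ROOT="${BUNDLE_ROOT:-/tmp/Fleet-appliances}"
if [ -d "$BUNDLE_ROOT" ]; then
    find "$BUNDLE_ROOT" -type f -name 'hbrsrv.log*'
fi
```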
From the HCX tech bundle:

2022-02-17T15:16:18.152Z info hbrsrv[6AB2AD852700] [Originator@6876 sub=Host opID=hs-285dd448] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
2022-02-17T15:16:18.202Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Host opID=hs-4ac4bf6f] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] Configured disks for group VRID-XXXXXX:
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-XXXXX
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-XXXXXX
2022-02-17T15:17:33.881Z info hbrsrv[6AB2ADA9B700] [Originator@6876 sub=Delta opID=hsl-10579a55] Full sync complete for disk RDID-XXXXXXX (198057984 bytes transferred, 209715200 bytes checksummed)
2022-02-17T15:17:55.078Z info hbrsrv[6AB2AD8D4700] [Originator@6876 sub=Delta opID=hsl-1057c4a8] Full sync complete for disk RDID-XXXXXXX (827564032 bytes transferred, 838860800 bytes checksummed)
2022-02-17T15:20:04.403Z info hbrsrv[6AB2AD9D8700] [Originator@6876 sub=Delta opID=hsl-1057c4bc] Instance complete for disk RDID-XXXXXXX
2022-02-17T15:20:04.738Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=Delta opID=hsl-1057c4ee] Instance complete for disk RDID-XXXXXXX
2022-02-17T15:20:14.508Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Creating image from group VRID-XXXXXXXX, instance 49, in XXXXXXX
2022-02-17T15:20:14.526Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=hs-4f3c1b62:hs-d5da:hs-4252] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk
2022-02-17T15:20:14.822Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=hs-4f3c1b62:hs-d5da:hs-4252] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk
2022-02-17T15:20:15.123Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.vmx.137 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.vmx
2022-02-17T15:20:15.410Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.vmxf.138 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.vmxf
2022-02-17T15:20:15.430Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.nvram.139 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk.nvram
2022-02-17T15:20:32.891Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk' (key=186) was cleaned up successfully.
2022-02-17T15:20:33.004Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmdk' (key=187) was cleaned up successfully.
2022-02-17T15:20:33.148Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmx.137' (key=189) was cleaned up successfully.
2022-02-17T15:20:33.220Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.vmxf.138' (key=190) was cleaned up successfully.
2022-02-17T15:20:33.291Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-XXXXXXXX.nvram.139' (key=191) was cleaned up successfully.
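The "Full sync complete" entries above are the quickest confirmation that each replica disk finished its initial sync. A small awk pass can also total the bytes transferred across all disks; this is a sketch that reads any hbrsrv.log copy (the `LOG` variable is a placeholder, not part of the product):

```shell
# Sum bytes transferred across all 'Full sync complete' entries in hbrsrv.log.
# LOG is a placeholder path; point it at any hbrsrv.log copy.
LOG="${LOG:-hbrsrv.log}"
if [ -f "$LOG" ]; then
    awk '/Full sync complete/ {
            for (i = 3; i <= NF; i++)
                if ($i == "transferred,") {    # "(N bytes transferred," pattern
                    n = $(i - 2); gsub(/\(/, "", n)
                    total += n; disks++
                }
         }
         END { printf "%d disk(s), %d bytes transferred\n", disks, total }' "$LOG"
fi
```

Run against the two sample entries above, this reports 2 disks and 1025622016 bytes transferred in total.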
[root@cia-vmc-esx-015:~] vim-cmd vmsvc/getallvms
852   VM_Name   rhel6_64Guest   vmx-14
[root@cia-vmc-esx-015:~] vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
(vim.fault.ReplicationVmFault) {
   faultCause = (vmodl.MethodFault) null,
   faultMessage = <unset>,
   reason = "notConfigured",
   state = <unset>,
   instanceId = <unset>,
   vm = 'vim.VirtualMachine:852'
   msg = "Received SOAP response fault from [<cs p:000000081ef64380, TCP:localhost:8307>]: getGroupState vSphere Replication operation error: Virtual machine is not configured for replication."
}
[root@cia-vmc-esx-015:~] vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state:
Group: VRID-XXXXX (generation=32459820918756983)
Group State: full sync (74% done: checksummed 614 MB of 1000 MB, transferred 569.3 MB of 593.8 MB)
DiskID RDID-XXXXXX State: full sync (checksummed 414 MB of 800 MB, transferred 380.4 MB of 404.9 MB)
DiskID RDID-XXXXXX State: inactive
[root@cia-vmc-esx-015:~] vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state:
Group: VRID-XXXXX (generation=32459820918756983)
Group State: lwd delta (instanceId=replica-XXXXXXXX) (0% done: transferred 0 bytes of 40 KB)
DiskID RDID-XXXXXXX State: lwd delta (transferred 0 bytes of 40 KB)
DiskID RDID-XXXXXXX State: lwd delta (transferred 0 bytes of 0 bytes)
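When polling replication progress repeatedly, the group-level percentage is the figure worth tracking, and it can be pulled out of the getState output with grep. A sketch, parsing captured output (on an ESXi host you would instead pipe `vim-cmd hbrsvc/vmreplica.getState <vmid>` straight into the grep; `STATE_FILE` is a placeholder for this illustration):

```shell
# Extract the group-level progress figure from vmreplica.getState output.
# STATE_FILE holds captured output; on ESXi, pipe vim-cmd output in directly.
STATE_FILE="${STATE_FILE:-getstate.txt}"
if [ -f "$STATE_FILE" ]; then
    # The first "N% done" occurrence is the group total; per-disk lines follow.
    grep -o '[0-9]*% done' "$STATE_FILE" | head -n 1
fi
```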
Note: If the bulk migration workflow fails and rolls back at the cutover stage, rescheduling the migrations causes the workflow to reuse the seed data already copied during the previous attempt.
Note: Do not perform a cleanup operation on the failed job, as doing so removes the seed data.
Note: In such cases, the recommendation is to relocate the source VM's compute to another ESXi host, ideally a lightly loaded one, using vCenter vMotion. This action does not impact the ongoing replication process and requires no changes to the migration workflow.
Alternatively:
Note: DR Protection Recovery is a more manual and lengthy process, but it offers a higher chance of success given infrastructure and network limitations.
IMPORTANT: A migration cannot be guaranteed under ANY circumstances. These and other considerations must therefore be applied to maximize the chance of a successful migration by minimizing the impact of infrastructure and network limitations.