Relevant logs:
/common/log/admin/app.log (on the HCX Manager)
/var/log/vmware/hbrsrv.log (on the IX appliance, reachable through ccli)
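A typical way to reach hbrsrv.log on the IX appliance is through the HCX Central CLI (ccli) from the HCX Manager admin shell. The sequence below is only a sketch; the appliance index shown by list varies per deployment:

ccli
list        # note the index of the IX appliance in the Service Mesh
go 0        # select the IX appliance (index 0 is only an example)
ssh         # open a shell on the selected appliance
tail -n 100 /var/log/vmware/hbrsrv.log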
Note: You can also find hbrsrv.log at /tmp/Fleet-appliances/<Service-Mesh>/<IX-Appliance>/var/log/vmware/hbrsrv.log in the HCX tech bundle.
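If you are working from an extracted tech bundle rather than a live appliance, a simple find locates the file under the path layout described above:

find /tmp/Fleet-appliances -name hbrsrv.log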
hbrsrv.log from the target/cloud IX appliance:
2022-02-17T15:16:18.152Z info hbrsrv[6AB2AD852700] [Originator@6876 sub=Host opID=hs-285dd448] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
2022-02-17T15:16:18.202Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Host opID=hs-4ac4bf6f] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] Configured disks for group VRID-######:
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-######
2022-02-17T15:17:18.919Z info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-######
2022-02-17T15:17:33.881Z info hbrsrv[6AB2ADA9B700] [Originator@6876 sub=Delta opID=hsl-10579a55] Full sync complete for disk RDID-####### (198057984 bytes transferred, 209715200 bytes checksummed)
2022-02-17T15:17:55.078Z info hbrsrv[6AB2AD8D4700] [Originator@6876 sub=Delta opID=hsl-1057c4a8] Full sync complete for disk RDID-####### (827564032 bytes transferred, 838860800 bytes checksummed)
2022-02-17T15:20:04.403Z info hbrsrv[6AB2AD9D8700] [Originator@6876 sub=Delta opID=hsl-1057c4bc] Instance complete for disk RDID-#######
2022-02-17T15:20:04.738Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=Delta opID=hsl-1057c4ee] Instance complete for disk RDID-#######
2022-02-17T15:20:14.508Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Creating image from group VRID-########, instance 49, in #######
2022-02-17T15:20:14.526Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=hs-4f3c1b62:hs-d5da:hs-4252] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk
2022-02-17T15:20:14.822Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=hs-4f3c1b62:hs-d5da:hs-4252] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk
2022-02-17T15:20:15.123Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdkvmx.137 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmx
2022-02-17T15:20:15.410Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmxf.138 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmxf
2022-02-17T15:20:15.430Z info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=hs-4f3c1b62:hs-d5da:hs-4252] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.nvram.139 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.nvram
2022-02-17T15:20:32.891Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk' (key=186) was cleaned up successfully.
2022-02-17T15:20:33.004Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk' (key=187) was cleaned up successfully.
2022-02-17T15:20:33.148Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmx.137' (key=189) was cleaned up successfully.
2022-02-17T15:20:33.220Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmxf.138' (key=190) was cleaned up successfully.
2022-02-17T15:20:33.291Z info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=hs-565f4eb] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.nvram.139' (key=191) was cleaned up successfully.
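To follow the milestones above for a single replication group or disk, a simple filter such as the one below works (substitute the real VRID/RDID values for the masked placeholders):

grep -E 'Full sync complete|Instance complete|cleaned up' /var/log/vmware/hbrsrv.log | grep 'RDID-#######'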
vim-cmd CLIs to verify the status of replication:
[root@cia-vmc-esx-015:~] vim-cmd vmsvc/getallvms
852 VM_Name rhel6_64Guest vmx-14
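The first column is the Vmid that the hbrsvc commands below expect. On a host with many VMs, piping through grep (VM_Name is a placeholder) narrows the output:

vim-cmd vmsvc/getallvms | grep VM_Name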
If the VM is not configured for replication, getState returns a ReplicationVmFault:
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
(vim.fault.ReplicationVmFault) {
   faultCause = (vmodl.MethodFault) null,
   faultMessage = <unset>,
   reason = "notConfigured",
   state = <unset>,
   instanceId = <unset>,
   vm = 'vim.VirtualMachine:852',
   msg = "Received SOAP response fault from [<cs p:000000081ef64380, TCP:localhost:8307>]: getGroupState
vSphere Replication operation error: Virtual machine is not configured for replication."
}
Once the VM is configured for replication, the same command reports group and per-disk progress. During the initial full sync:
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state: Group: VRID-##### (generation=32459820918756983)
Group State: full sync (74% done: checksummed 614 MB of 1000 MB, transferred 569.3 MB of 593.8 MB)
DiskID RDID-###### State: full sync (checksummed 414 MB of 800 MB, transferred 380.4 MB of 404.9 MB)
DiskID RDID-###### State: inactive
After the full sync completes, subsequent lightweight delta (lwd) syncs are reported per instance:
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state: Group: VRID-##### (generation=32459820918756983)
Group State: lwd delta (instanceId=replica-########) (0% done: transferred 0 bytes of 40 KB)
DiskID RDID-####### State: lwd delta (transferred 0 bytes of 40 KB)
DiskID RDID-####### State: lwd delta (transferred 0 bytes of 0 bytes)
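To watch progress without re-running the command by hand, a minimal polling loop in the ESXi shell works (Vmid 852 and the 60-second interval are examples):

while true; do vim-cmd hbrsvc/vmreplica.getState 852; sleep 60; done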
VMware HCX
Note: If the bulk migration workflow fails and rolls back at the cutover stage, rescheduling the migration allows the workflow to reuse the seed data already copied in the previous attempt.
Note: Do not perform a cleanup operation on the failed job, because cleanup removes that seed data.
Note: In such cases, the recommendation is to relocate the source VM's compute to another ESXi host (ideally a lightly loaded one) using vCenter vMotion. This does not impact ongoing replication and requires no changes to the migration workflow.
Alternatively:
Note: DR Protection/Recovery is a more manual and lengthy process, but it offers a higher chance of success under infrastructure and network limitations.
IMPORTANT: A migration cannot be guaranteed under ANY circumstances; these and other considerations help maximize the chance of a successful migration by minimizing the impact of infrastructure and network limitations.