The hbrdisk.RDID-*.vmdk files, called replica instance vmdk files, hold the replicated data on the target datastore.
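To confirm that seed data exists on the target, you can list these files directly from the target ESXi host. A minimal sketch, assuming placeholder datastore and VM folder names:
# List the replica instance disks in the VM folder (datastore and folder names are placeholders)
ls -lh /vmfs/volumes/<datastore>/<VM_folder>/hbrdisk.RDID-*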
Relevant logs:
/common/log/admin/app.log (on the HCX Manager)
/var/log/vmware/hbrsrv.log (on the IX appliance, reachable through ccli)
Note: You can also find /tmp/Fleet-appliances/<Service-Mesh>/<IX-Appliance>/var/log/vmware/hbrsrv.log in the HCX tech bundle.
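For reference, a typical ccli session from the HCX Manager console looks like the sketch below; the appliance index is an assumption, so use the value reported by list:
ccli                                  # start the Central CLI on the HCX Manager
list                                  # list the Service Mesh appliances and their node indexes
go 0                                  # select the IX appliance (index 0 is an assumption)
ssh                                   # open a shell on the selected appliance
tail -f /var/log/vmware/hbrsrv.log    # follow the replication server log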
hbrsrv.log on the target/cloud IX appliance:
<timestamp> info hbrsrv[6AB2AD852700] [Originator@6876 sub=Host opID=hs-285dd448] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
<timestamp> info hbrsrv[6AB2AD956700] [Originator@6876 sub=Host opID=hs-4ac4bf6f] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/VM_NAME
<timestamp> info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] Configured disks for group VRID-######:
<timestamp> info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-######
<timestamp> info hbrsrv[6AB2AD956700] [Originator@6876 sub=Delta] RDID-######
<timestamp> info hbrsrv[6AB2ADA9B700] [Originator@6876 sub=Delta opID=hsl-10579a55] Full sync complete for disk RDID-####### (198057984 bytes transferred, 209715200 bytes checksummed)
<timestamp> info hbrsrv[6AB2AD8D4700] [Originator@6876 sub=Delta opID=hsl-1057c4a8] Full sync complete for disk RDID-####### (827564032 bytes transferred, 838860800 bytes checksummed)
<timestamp> info hbrsrv[6AB2AD9D8700] [Originator@6876 sub=Delta opID=#######] Instance complete for disk RDID-#######
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=Delta opID=#######] Instance complete for disk RDID-#######
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=#######] Creating image from group VRID-########, instance 49, in #######
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=#######] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Host opID=#######] Getting disk type for /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=#######] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmx.137 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmx
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=#######] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmxf.138 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.vmxf
<timestamp> info hbrsrv[6AB2ADBE0700] [Originator@6876 sub=Image opID=#######] Copying cfg /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.nvram.139 to /vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk.nvram
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=#######] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk' (key=186) was cleaned up successfully.
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=#######] The disk '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmdk' (key=187) was cleaned up successfully.
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=#######] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmx.137' (key=189) was cleaned up successfully.
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=#######] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.vmxf.138' (key=190) was cleaned up successfully.
<timestamp> info hbrsrv[6AB2AD70D700] [Originator@6876 sub=PersistentCleanup opID=#######] The file '/vmfs/volumes/vsan:UUID/VM_UUID/hbrdisk.RDID-########.nvram.139' (key=191) was cleaned up successfully.
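To follow a single migration through these stages, the log can be filtered by its replication group or disk IDs. A sketch, using the placeholder IDs from the excerpt above:
# Trace one replication group through sync, instance, image and cleanup events
grep "VRID-######" /var/log/vmware/hbrsrv.log
# Watch for sync/instance completion across all groups
grep -E "Full sync complete|Instance complete" /var/log/vmware/hbrsrv.log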
vim-cmd CLIs to verify the status of replication:
[root@###-###-###-015:~] vim-cmd vmsvc/getallvms
852 VM_Name rhel6_64Guest vmx-14
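On a host with many VMs, it is quicker to filter the output for the VM name to obtain its Vmid (the first column); VM_Name is a placeholder:
vim-cmd vmsvc/getallvms | grep VM_Name    # the Vmid in the first column is used below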
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
(vim.fault.ReplicationVmFault) {
faultCause = (vmodl.MethodFault) null,
faultMessage = <unset>,
reason = "notConfigured",
state = <unset>,
instanceId = <unset>,
vm = 'vim.VirtualMachine:852',
msg = "Received SOAP response fault from [<cs p:000000081ef64380, TCP:localhost:8307>]: getGroupState
vSphere Replication operation error: Virtual machine is not configured for replication."
}
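This fault simply means that no replication group has been configured for the VM yet. Once replication is configured, getState returns the group and per-disk state (shown below), and the replication settings themselves can be dumped with vmreplica.getConfig; a sketch against the same Vmid:
vim-cmd hbrsvc/vmreplica.getConfig 852    # dump the replication settings for Vmid 852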
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state: Group: VRID-##### (generation=32459820918756983)
Group State: full sync (74% done: checksummed 614 MB of 1000 MB, transferred 569.3 MB of 593.8 MB)
DiskID RDID-###### State: full sync (checksummed 414 MB of 800 MB, transferred 380.4 MB of 404.9 MB)
DiskID RDID-###### State: inactive
vim-cmd hbrsvc/vmreplica.getState 852
Retrieve VM running replication state:
The VM is configured for replication. Current replication state: Group: VRID-##### (generation=32459820918756983)
Group State: lwd delta (instanceId=replica-########) (0% done: transferred 0 bytes of 40 KB)
DiskID RDID-####### State: lwd delta (transferred 0 bytes of 40 KB)
DiskID RDID-####### State: lwd delta (transferred 0 bytes of 0 bytes)
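To monitor progress without rerunning the command by hand, a simple poll loop on the ESXi host is enough; the 30-second interval is an arbitrary choice:
# Print the replication state every 30 seconds until interrupted
while true; do vim-cmd hbrsvc/vmreplica.getState 852; sleep 30; done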
VMware HCX
Note: If the bulk migration workflow fails and rolls back at the cutover stage, the workflow will try to reuse the seed data already copied during the previous attempt when the migration is rescheduled.
Note: The recommendation is not to perform a cleanup operation on the failed job, because cleanup removes the seed data.
Note: In such cases, the recommendation is to relocate the source VM's compute to another ESXi host (ideally a lightly loaded one) using vCenter vMotion. This action does not impact ongoing replication and requires no changes to the migration workflow.
Alternatively:
Note: DR Protection Recovery is a more manual and lengthy process, but it offers a higher chance of success given infrastructure and network limitations.
IMPORTANT: A migration cannot be guaranteed under ANY circumstances. These and other considerations must therefore be taken to maximize the chances of a successful migration by minimizing the impact of infrastructure and network limitations.