Cross vCenter vMotion is failing for VMs


Article ID: 423225



Products

VMware vSphere Standard

Issue/Introduction

Migration of VMs from one vCenter Server to another fails with the error:

VMotionStream timed out while waiting for disk # queue count to drop below the maximum limit of 32768 blocks. This could indicate either network or storage problems preventing proper block transfer.

hostd.log (source host)

yyyy-mm-ddT12:00:13.754Z In(166) Hostd[2102001]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx] VMotionStatusCb [287422213629634110] : Prepare task completed successfully
yyyy-mm-ddT12:00:15.051Z In(166) Hostd[2101966]: [Originator@6876 sub=Vcsvc.VMotion opID=xxxxxx-xxxxxxx-auto-xxxxx-xx:xxxxxxx-xx-xx-xx-xxxx sid=xxxxxx user=vpxuser:] InitiateSource WID = xxxxxxx
yyyy-mm-ddT12:00:15.052Z In(166) Hostd[2101966]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx.opID=xxxxxx-xxxxxxxx-auto-xxxxx-xx:xxxxxxxx-xx-xx-xx-xxxx sid=xxxxxxx user=vpxuser: VMotionInitiateSrc : wid=xxxxxxxxx
yyyy-mm-ddT12:00:15.052Z Db(167) Hostd[2101966]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx opID=xxxxxx-xxxxxxxx-auto-xxxxx-xx:xxxxxxxx-xx-xx-xx-xxxx sid=xxxxxxx user=vpxuser:VMotionInitiateSrc: begin ReadSynchronized
yyyy-mm-ddT12:00:15.052Z Db(167) Hostd[2101966]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx opID=xxxxxx-xxxxxxxx-auto-xxxxx-xx:xxxxxxxx-xx-xx-xx-xxxx sid=xxxxxxx user=vpxuser: VMotionInitiateSrc: end ReadSynchronized
yyyy-mm-ddT12:00:15.052Z Db(167) Hostd[2101966]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx opID=xxxxxx-xxxxxxxx-auto-xxxxx-xx:xxxxxxxx-xx-xx-xx-xxxx sid=xxxxxxx user=vpxuser: VMotionInitiateSrc: Done
yyyy-mm-ddT12:00:39.012Z Db(167) Hostd[2101993]: [Originator@6876 sub=Vigor.Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx VMotionInitiateSrc: Start message: A fatal internal error occurred. See the virtual machine's log for more details.
yyyy-mm-ddT12:00:39.012Z Db(167) Hostd[2101928]: --> VMotionStream timed out while waiting for disk 6's queue count to drop below the maximum limit of 32768 blocks. This could indicate either network or storage problems preventing proper block transfer.
yyyy-mm-ddT12:00:39.012Z Db(167) Hostd[2102000]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx VMotionStatusCb [287422213629634110]: Failed with error [N3Vim5Fault20GenericVmConfigFaultE:0x000000b9f286ccd0]
yyyy-mm-ddT12:00:39.012Z Db(167) Hostd[2102000]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx VMotionStatusCb: Firing ResolveCb
yyyy-mm-ddT12:00:39.012Z In(166) Hostd[2102000]: [Originator@6876 sub=Vcsvc.VMotionSrc.287422213629634110] ResolveCb: VMX reports needsUnregister = false for migrateType MIGRATE_TYPE_VMOTION
yyyy-mm-ddT12:00:39.012Z In(166) Hostd[2102000]: [Originator@6876 sub=Vcsvc.VMotionSrc.287422213629634110] ResolveCb: Failed with fault: (vim.fault.GenericVmConfigFault) {
yyyy-mm-ddT12:00:39.013Z In(166) Hostd[2101928]: -->          message = "VMotionStream timed out while waiting for disk 6's queue count to drop below the maximum limit of 32768 blocks. This could indicate either network or storage problems preventing proper block transfer.
yyyy-mm-ddT12:00:39.013Z In(166) Hostd[2101928]: --> VMotionStream [c0a80c1c:287422213629634110] timed out while waiting for disk 6's queue count to drop below the maximum limit of 32768 blocks. This could indicate either network or storage problems preventing proper block transfer.

The hostd.log on the destination ESXi host indicates connection reset errors.

yyyy-mm-ddT13:43:04.536Z Db(167) Hostd[2099716]: [Originator@6876 sub=Vigor.Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx VMotionPrepare: MigrateFromDest message: Failed waiting for data.  Error bad004b. Connection reset by peer.
yyyy-mm-ddT13:43:04.536Z Db(167) Hostd[2099724]: [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmx VMotionStatusCb [287422219775952989]: Failed with error [N3Vim5Fault20GenericVmConfigFaultE:0x0000008700760510]

 

The vmware.log of the VM indicates timeout errors.

yyyy-mm-ddT13:07:34.184Z Wa(03) vmx - Mirror: scsi0:6: Failed to copy disk: Timeout
yyyy-mm-ddT13:07:34.185Z Wa(03) worker-6178497 - SVMotionMirroredModeThreadDiskCopy: Found internal error when woken up on diskCopySemaphore. Aborting storage vmotion.
yyyy-mm-ddT13:07:34.185Z Wa(03) worker-6178497 - SVMotionCopyThread: disk copy failed. Canceling Storage vMotion.
yyyy-mm-ddT13:07:34.185Z In(05) worker-6178497 - SVMotionCopyThread: Waiting for SVMotion Bitmap thread to complete before issuing a stun during migration failure cleanup.2025-12-13T13:07:34.265Z In(05) vmx - Migrate: Caching migration error message list:
yyyy-mm-ddT13:07:34.265Z In(05) vmx - [msg.svmotion.fail.internal] A fatal internal error occurred. See the virtual machine's log for more details.
yyyy-mm-ddT13:07:34.265Z In(05) vmx - [msg.svmotion.disk.copyphase.failed] Failed to copy one or more disks.
yyyy-mm-ddT13:07:34.265Z In(05) vmx - [msg.mirror.disk.copyfailed] Failed to copy source (/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmdk) to destination (/vmfs/volumes/xxxxxxx-xxxxxx-xxxx-xxxxxxxxx/VM_Name/VM_Name.vmdk): Timeout.
yyyy-mm-ddT13:07:34.265Z In(05) vmx - [vob.vmotion.stream.check.block.mem.timed.out] VMotionStream [c0a80c1c:287422217649189303] timed out while waiting for disk 6's queue count to drop below the maximum limit of 32768 blocks. This could indicate either network or storage problems preventing proper block transfer.
yyyy-mm-ddT13:07:34.265Z In(05) vmx - Migrate: cleaning up migration state.

 

Environment

VMware vSphere 7.x

VMware vSphere 8.x

Cause

This issue occurs due to network configuration settings.

One common scenario is a mismatch in MTU values between the source and destination ESXi hosts.

Resolution

An MTU mismatch causes vMotion failures because large vMotion packets (often Jumbo Frames with an MTU of 9000) are dropped by intermediate network devices (physical switches, routers, or even vSwitches/VMkernel ports) configured with a smaller MTU (such as 1500), leading to timeouts or connection drops.
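Whether jumbo frames pass end-to-end between the vMotion VMkernel interfaces can be verified with a ping that disallows fragmentation. The interface name (vmk1) and destination IP below are placeholders for the actual vMotion VMkernel interface and the destination host's vMotion IP:

```shell
# From the source ESXi host, ping the destination host's vMotion
# VMkernel IP with an 8972-byte payload (9000-byte MTU minus the
# 20-byte IP header and 8-byte ICMP header), with the don't-fragment
# bit set (-d). vmk1 and 192.168.12.28 are example values.
vmkping -I vmk1 -d -s 8972 192.168.12.28

# If the jumbo-size ping fails but a standard-size ping succeeds,
# a device along the path is dropping jumbo frames:
vmkping -I vmk1 -d -s 1472 192.168.12.28
```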

If there is an MTU mismatch between the source and destination ESXi hosts, change the MTU so that both hosts (and the network path between them) use the same value.
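The current MTU values can be compared on both hosts, and corrected if needed, with esxcli. The vSwitch and interface names below (vSwitch0, vmk1) are examples; substitute the ones used for vMotion in your environment:

```shell
# List VMkernel interfaces with their MTU values; run on both the
# source and destination hosts and compare the vMotion interfaces:
esxcli network ip interface list

# List standard vSwitches and their configured MTU values:
esxcli network vswitch standard list

# Set a matching MTU on the vSwitch and on the vMotion VMkernel
# interface (vSwitch0 and vmk1 are example names):
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

For hosts using a vSphere Distributed Switch, the MTU is set on the distributed switch itself in the vSphere Client rather than with the standard-vSwitch command above.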

If the issue persists, engage the in-house network team to investigate possible network issues or latency in the environment.