Storage vMotion fails with error "Failed to copy one or more disks. Canceling Storage vMotion"

Article ID: 431786

Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

  • Storage vMotion fails with the below error on the vCenter UI for the task:

Failed to copy one or more disks. Canceling Storage vMotion

  • You may also notice the below error on the vCenter UI:

A general system error occurred: Storage vMotion failed to copy one or more of the VM's disks. Please consult the VM's log for more details, looking for lines with "svMotion" 

  • /vmfs/volumes/<Datastore for VM Home Directory>/<VM Home Directory>/vmware.log records the below warning, pointing to an intermittent data transfer failure for one VMDK during the disk copy:

2026-02-24T15:06:06.601Z Wa(03) vmx - SVMotion: scsi0:5: Disk transfer rate slow: 6549 kB/s over the last 10.01 seconds, copied total 20544 MB at 420568 kB/s.
2026-02-24T15:06:16.602Z Wa(03) vmx - SVMotion: scsi0:5: Disk transfer rate slow: 0 kB/s over the last 10.00 seconds, copied total 20544 MB at 350493 kB/s.
2026-02-24T15:06:26.604Z Wa(03) vmx - SVMotion: scsi0:5: Disk transfer rate slow: 0 kB/s over the last 10.00 seconds, copied total 20544 MB at 300427 kB/s.
2026-02-24T15:06:36.611Z Wa(03) vmx - SVMotion: scsi0:5: Disk transfer rate slow: 0 kB/s over the last 10.01 seconds, copied total 20544 MB at 262862 kB/s.

  • /vmfs/volumes/<Datastore for VM Home Directory>/<VM Home Directory>/vmware.log may also record successful disk copy for one or more VMDKs:

2026-02-24T15:05:01.171Z In(05) vmx - SVMotion: scsi0:14: Disk copy completed for total 10240 MB at 774424 kB/s.
2026-02-24T15:05:13.019Z In(05) vmx - SVMotion: scsi0:11: Disk copy completed for total 10240 MB at 885047 kB/s.
2026-02-24T15:05:14.791Z In(05) vmx - SVMotion: scsi0:9: Disk copy completed for total 51200 MB at 29594557 kB/s.
2026-02-24T15:05:16.580Z In(05) vmx - SVMotion: scsi0:1: Disk copy completed for total 51200 MB at 29311168 kB/s.

  • /vmfs/volumes/<Datastore for VM Home Directory>/<VM Home Directory>/vmware.log records the below errors for the Storage vMotion failure:

2026-02-24T15:11:36.716Z In(05) vmx - [msg.svmotion.fail.internal] A fatal internal error occurred. See the virtual machine's log for more details.
2026-02-24T15:11:36.716Z In(05) vmx - [msg.svmotion.disk.copyphase.failed] Failed to copy one or more disks.
2026-02-24T15:11:36.716Z In(05) vmx - Migrate: cleaning up migration state.
2026-02-24T15:11:36.712Z Wa(03) worker-2101749 - SVMotionMirroredModeThreadDiskCopy: Found internal error when woken up on diskCopySemaphore. Aborting storage vmotion.

  • /var/run/log/hostd.log will record the below error:

2026-02-24T15:11:36.705Z In(166) Hostd[2098989]: [Originator@6876 sub=Vcsvc.VMotionDst.5251263474880616244] ResolveCb: Failed with fault: (vim.fault.GenericVmConfigFault) {
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->    faultMessage = (vmodl.LocalizableMessage) [
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->       (vmodl.LocalizableMessage) {
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->          key = "msg.checkpoint.migration.noprogress",
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->          message = "Timed out waiting for migration data.
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: --> ",
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->       }
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->    ],
2026-02-24T15:11:36.705Z In(166) Hostd[2098957]: -->    reason = "Timed out waiting for migration data.
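
The signatures above can be pulled out of vmware.log with a short shell pipeline. The sketch below uses a stand-in sample file built from the excerpts in this article; in practice, point LOG at /vmfs/volumes/<Datastore for VM Home Directory>/<VM Home Directory>/vmware.log on the affected host:

```shell
# Stand-in sample; in practice set LOG to the VM's real vmware.log path.
LOG=/tmp/vmware.log.sample
cat > "$LOG" <<'EOF'
2026-02-24T15:06:16.602Z Wa(03) vmx - SVMotion: scsi0:5: Disk transfer rate slow: 0 kB/s over the last 10.00 seconds, copied total 20544 MB at 350493 kB/s.
2026-02-24T15:05:01.171Z In(05) vmx - SVMotion: scsi0:14: Disk copy completed for total 10240 MB at 774424 kB/s.
2026-02-24T15:11:36.716Z In(05) vmx - [msg.svmotion.disk.copyphase.failed] Failed to copy one or more disks.
EOF
# Disks whose copy stalled completely (0 kB/s over a sampling interval):
grep -o 'SVMotion: scsi[0-9]*:[0-9]*: Disk transfer rate slow: 0 kB/s' "$LOG" | sort -u
# Confirm the final copy-phase failure message is present:
grep -c 'msg.svmotion.disk.copyphase.failed' "$LOG"
```

A disk that repeatedly logs "0 kB/s" intervals, while sibling disks complete, is the one hitting the transfer stall described above.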

Environment

  • VMware vSphere ESXi
  • VMware Cloud Foundation

Cause

  • The issue occurs due to intermittent frame drops in the fabric:

2026-02-24T15:05:59.945Z In(182) vmkernel: cpu35:2099341)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2266:C0:T1:L0 - FCP command status: 0x15-0x0 (0x2) portid=1c0740 oxid=0x419 cdb=28004e len=65536 rspInfo=0x0 resid=0x0 fwResid=0x9000 host status = 0x2 device statu$
2026-02-24T15:05:59.945Z In(182) vmkernel: cpu35:2099341)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2116:(1:0) Dropped frame(s) detected (63488 of 65536 bytes).
2026-02-24T15:05:59.945Z In(182) vmkernel: cpu35:2099341)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2266:C0:T1:L0 - FCP command status: 0x15-0x0 (0x2) portid=1c0740 oxid=0x420 cdb=28004e len=65536 rspInfo=0x0 resid=0x0 fwResid=0xf800 host status = 0x2 device statu$
2026-02-24T15:06:40.000Z In(182) vmkernel: cpu35:2098370)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2266:C0:T7:L0 - FCP command status: 0x5-0x0 (0x8) portid=1902a1 oxid=0x55d cdb=2a0003 len=65536 rspInfo=0x0 resid=0x0 fwResid=0x0 host status = 0x8 device status = $
2026-02-24T15:07:20.467Z In(182) vmkernel: cpu10:2100460)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2266:C0:T7:L0 - FCP command status: 0x5-0x0 (0x8) portid=1902a1 oxid=0x240 cdb=2a0003 len=65536 rspInfo=0x0 resid=0x0 fwResid=0x0 host status = 0x8 device status = $
2026-02-24T15:08:00.712Z In(182) vmkernel: cpu10:2097873)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2266:C0:T7:L0 - FCP command status: 0x5-0x0 (0x8) portid=1902a1 oxid=0x4ed cdb=2a0003 len=65536 rspInfo=0x0 resid=0x0 fwResid=0x0 host status = 0x8 device status = $

  • The frame drops cause high latencies, retransmissions, aborts, and resets
  • XCOPY commands to the destination LUN will fail:

2026-02-24T15:06:40.201Z In(182) vmkernel: cpu11:2097926)ScsiDeviceIO: 4697: Cmd(0x45b93e1ab300) 0x83, CmdSN 0xa57d6 from world 2100463 to dev "naa.6005076813#################" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x26 0x0
2026-02-24T15:06:40.201Z In(182) vmkernel: cpu11:2097926)ScsiDeviceIO: 4697: Cmd(0x45b93e1aad00) 0x83, CmdSN 0xa57d7 from world 2100463 to dev "naa.6005076813#################" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x26 0x0
2026-02-24T15:06:40.201Z In(182) vmkernel: cpu11:2097926)ScsiDeviceIO: 4697: Cmd(0x45b93e1aa700) 0x83, CmdSN 0xa57d8 from world 2100463 to dev "naa.6005076813#################" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x26 0x0

  • You may also notice failures for Read and Write commands, along with fast path state update requests, on both the source and destination LUNs:

2026-02-24T15:05:59.945Z In(182) vmkernel: cpu2:2097919)ScsiDeviceIO: 4644: Cmd(0x45b92921f300) 0x28, CmdSN 0x345d2e from world 2100463 to dev "naa.600507680c#################" failed H:0x2 D:0x0 P:0x0
2026-02-24T15:05:59.945Z In(182) vmkernel: cpu2:2097919)ScsiDeviceIO: 4644: Cmd(0x45b929340500) 0x28, CmdSN 0x345d31 from world 2100463 to dev "naa.600507680c#################" failed H:0x2 D:0x0 P:0x0
2026-02-24T15:06:55.144Z In(182) vmkernel: cpu6:2097851)NMP: nmp_ResetDeviceLogThrottling:3854: last error status from device naa.600507680c################# repeated 77 times
2026-02-24T15:08:43.442Z In(182) vmkernel: cpu3:2097919)NMP: nmp_ThrottleLogForDevice:3893: Cmd 0x28 (0x45b93e05d100, 2100463) to dev "naa.600507680c#################" on path "vmhba1:C0:T1:L0" Failed:
2026-02-24T15:08:43.442Z Wa(180) vmkwarning: cpu3:2097919)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.600507680c#################" state in doubt; requested fast path state update...

  • As a result, the Storage vMotion process times out while reading from the source and copying data to the destination device
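
The fabric-side evidence above can be tallied from vmkernel.log with a short pipeline. The sketch below uses a stand-in sample built from the excerpts in this article; in practice, set LOG to /var/run/log/vmkernel.log (and its rotations) on the affected host:

```shell
# Stand-in sample; in practice set LOG to /var/run/log/vmkernel.log.
LOG=/tmp/vmkernel.log.sample
cat > "$LOG" <<'EOF'
2026-02-24T15:05:59.945Z In(182) vmkernel: cpu35:2099341)qlnativefc: vmhba1(81:0.0): qlnativefcStatusEntry:2116:(1:0) Dropped frame(s) detected (63488 of 65536 bytes).
2026-02-24T15:06:40.201Z In(182) vmkernel: cpu11:2097926)ScsiDeviceIO: 4697: Cmd(0x45b93e1ab300) 0x83, CmdSN 0xa57d6 from world 2100463 to dev "naa.600507" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x26 0x0
2026-02-24T15:05:59.945Z In(182) vmkernel: cpu2:2097919)ScsiDeviceIO: 4644: Cmd(0x45b92921f300) 0x28, CmdSN 0x345d2e from world 2100463 to dev "naa.600507" failed H:0x2 D:0x0 P:0x0
EOF
# Dropped-frame events, grouped by HBA:
grep 'Dropped frame(s) detected' "$LOG" | grep -o 'vmhba[0-9]*' | sort | uniq -c
# XCOPY (opcode 0x83) failures with sense 0x5 0x26
# (ILLEGAL REQUEST / INVALID FIELD IN PARAMETER LIST):
grep -c 'Cmd(0x[0-9a-f]*) 0x83.*0x5 0x26' "$LOG"
```

A rising dropped-frame count on one HBA, correlated in time with the 0x83 and Read/Write (0x28/0x2a) failures, is the pattern that points at the fabric rather than the host or the array.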

Resolution

  • The issue is outside the scope of VMware and needs to be investigated by the fabric team or the fabric vendor
  • Frame drops can be caused by, but are not limited to, congestion, physical-layer (Fibre Channel) degradation, or misconfigured flow control in the fabric
  • Engage the fabric team or the fabric vendor for a resolution to the issue
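
Before engaging the fabric team, the host-side Fibre Channel error counters can help corroborate the drops. A minimal sketch, assuming the adapter name vmhba1 from the logs above (adjust for your host); the esxcli portion only runs on an ESXi host:

```shell
# 'vmhba1' is an assumption taken from the log excerpts above; adjust as needed.
if command -v esxcli >/dev/null 2>&1; then
    # Per-adapter Fibre Channel statistics (frame and link error counters),
    # useful evidence to hand to the fabric team:
    esxcli storage san fc stats get -A vmhba1
else
    echo "esxcli not found: run this on the ESXi host itself"
fi
```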

Additional Information