Cross-vCenter migrations with different storage take hours or days to complete. This occurs when you migrate virtual machines between vCenter Server instances where the source and destination datastores are different (for example, VMFS to NFS or NFS to VMFS).
Applies to: ESXi 7.0 and newer
Cross-storage migrations between datastores that are not shared by both vCenter Server instances require the disk data to be transferred using the Network File Copy (NFC) protocol. NFC handles the file-level copying of virtual machine disk files during migrations where the source and destination storage are different.
NFC traffic uses a provisioning network when one is configured on the ESXi hosts. A provisioning network is a dedicated VMkernel adapter designated specifically for cold migration, cloning, and disk copy operations. When no provisioning network is configured, NFC traffic automatically falls back to using the management network.
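To check whether a provisioning network is already configured, you can list the VMkernel adapters on each host and then query the service tags on each adapter (vmk0 is shown as an example; repeat for each adapter returned by the first command):
esxcli network ip interface list
esxcli network ip interface tag get -i vmk0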
The management network often has bandwidth constraints applied through network infrastructure configurations such as Quality of Service (QoS) policies, rate limiting rules, firewall deep packet inspection, or WAN link capacity limitations. When NFC traffic traverses a bandwidth-limited management network, the disk transfer throughput is restricted to whatever bandwidth is available on that path.
For typical virtual machine disk sizes, this bandwidth restriction results in extended transfer times. For example, a 55 GB virtual disk transferred at 44 Mbps (approximately 5.5 MB/s) takes at least 2.8 hours for that single disk alone. VMs with multiple disks or larger disks accumulate these transfer times, resulting in migrations that take 24-48 hours or longer to complete.
Note: Migrations between VMFS and NFS storage types experience additional overhead due to storage format translation, which further extends transfer times beyond pure network throughput limitations.
Diagnostic Step: Test vMotion Network Throughput
Before implementing a resolution, test the vMotion network path to determine which option is appropriate:
Connect to both the source and destination ESXi hosts via SSH.
On both hosts, check the appDom security policy status:
localcli system secpolicy domain list | grep appDom
On both hosts, disable the appDom security policy:
localcli system secpolicy domain set -n appDom -l disabled
On both hosts, disable the ESXi firewall:
esxcli network firewall set --enabled false
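You can confirm the current firewall state before and after the test:
esxcli network firewall get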
Navigate to the iperf directory on both hosts:
cd /usr/lib/vmware/vsan/bin/
On the destination ESXi host, start iperf3 in server mode using the vMotion vmkernel IP address:
./iperf3 -s -B <destination_vMotion_vmk_IP>
On the source ESXi host, run iperf3 in client mode for a 60-second test that reports results at 5-second intervals:
./iperf3 -c <destination_vMotion_vmk_IP> -B <source_vMotion_vmk_IP> -t 60 -i 5
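Optionally, run the client a second time with the -R flag appended to measure throughput in the reverse direction (destination to source):
./iperf3 -c <destination_vMotion_vmk_IP> -B <source_vMotion_vmk_IP> -t 60 -i 5 -R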
Review the average throughput in the sender and receiver summary lines at the end of the test output.
After testing completes, re-enable the ESXi firewall on both hosts:
esxcli network firewall set --enabled true
Re-enable the appDom security policy on both hosts:
esxcli system secpolicy domain set -n appDom -l enforcing
Based on the test results, proceed with the appropriate option below.
Option A: Configure Provisioning Network (vSphere 8.0 and later, vMotion throughput 500+ Mbps)
Use this option when the vMotion network path shows adequate bandwidth and you are running vSphere 8.0 or later. This configuration enables Unified Data Transport (UDT), which provides significantly faster transfer speeds.
Verify TCP port 902 connectivity between the source and destination ESXi hosts on the vMotion network path:
nc -z <destination_vMotion_vmk_IP> 902
Note: A successful connection displays "Connection to [IP] 902 port [tcp/*] succeeded!"
If TCP 902 is blocked, work with your network team to allow this port between ESXi hosts on the vMotion network before proceeding.
Configure the provisioning service on the vMotion vmkernel adapter on both source and destination hosts. See KB 394773 for detailed configuration steps.
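If you prefer the command line, one possible way to tag an existing VMkernel adapter for the Provisioning service is shown below. The adapter name vmk1 is a placeholder for your vMotion adapter; confirm the exact procedure against KB 394773 before making changes:
esxcli network ip interface tag add -i vmk1 -t vSphereProvisioning
esxcli network ip interface tag get -i vmk1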
Perform a test migration to validate improved performance.
Important: If your vMotion vmkernel adapter uses a dedicated vMotion TCP/IP stack, you cannot enable provisioning service on the same vmkernel adapter. In this scenario, either reconfigure vMotion to use the default TCP/IP stack, or create a separate vmkernel adapter using the Provisioning TCP/IP stack.
Option B: Configure Provisioning TCP/IP Stack (vSphere 7.0 or if Option A is not viable)
Use this option when running vSphere 7.0 or when the vMotion vmkernel uses a dedicated vMotion TCP/IP stack.
Verify TCP port 902 connectivity between source and destination ESXi hosts on the target network path:
nc -z <destination_target_IP> 902
If TCP 902 is blocked, work with your network team to allow this port between ESXi hosts before proceeding.
Create a VMkernel adapter using the Provisioning TCP/IP stack on both source and destination hosts. Configure this adapter on a network path with adequate bandwidth.
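As a command-line sketch of this step (the adapter name, port group, and IP values are placeholders; confirm the netstack name on your build with esxcli network ip netstack list, and note that some builds create the stack automatically when the first adapter is assigned to it):
esxcli network ip netstack add -N vSphereProvisioning
esxcli network ip interface add -i vmk2 -p <provisioning_portgroup> -N vSphereProvisioning
esxcli network ip interface ipv4 set -i vmk2 -I <ip_address> -N <netmask> -t static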
Perform a test migration to validate improved performance.
Note: After you configure a VMkernel adapter with the provisioning TCP/IP stack, all cold migrations, cloning, and snapshot operations use only this stack. VMkernel adapters on the default TCP/IP stack are disabled for provisioning traffic.
Option C: Network Infrastructure Remediation (when all network paths show insufficient bandwidth)
Use this option when the vMotion network iperf test shows limited bandwidth similar to the management network path, indicating site-wide bandwidth constraints that prevent efficient migration performance.
Engage your network or firewall team to investigate bandwidth limitations on cross-site traffic.
Ask your network team to review configurations such as QoS policies, rate-limiting rules, firewall deep packet inspection, and WAN link capacity on the path between sites.
After your network team increases bandwidth or removes throttling, perform a test migration to validate improved performance.
Note: Network infrastructure troubleshooting and configuration changes are outside the scope of VMware support.
When this issue does not apply:
This issue does not affect migrations where the source and destination datastores are the same. When virtual machines are migrated between ESXi hosts that share access to the same datastore, only the VM memory state and CPU state are transferred using the vMotion protocol. No disk data transfer occurs because the VM disk files remain on the shared storage.
Understanding NFC and vMotion traffic:
Network File Copy (NFC) protocol handles disk data transfer during cross-storage migrations, cold migrations, cloning operations, and snapshot migrations. The vMotion protocol handles VM memory state, CPU state, and device state during live migrations. These are separate traffic types that can use different network paths.
Performance expectations:
Migration transfer times are directly proportional to disk size and available network bandwidth. For example, a 55 GB virtual disk transferred at 44 Mbps (approximately 5.5 MB/s) requires approximately 2.8 hours for that single disk. VMs with multiple disks accumulate these transfer times sequentially.
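As a rule of thumb, estimated transfer time in hours equals (disk size in GB x 8 x 1024) divided by (throughput in Mbps x 3600). A quick way to check the arithmetic from any shell, using the 55 GB and 44 Mbps example above:
awk 'BEGIN { gb=55; mbps=44; printf "%.1f hours\n", (gb*8*1024)/(mbps*3600) }'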
Additional overhead for storage format translation:
Migrations between VMFS and NFS storage types experience additional overhead due to storage format translation beyond pure network throughput. This adds to the total transfer time.