When Storage vMotion is performed on two virtual machines (VMs) of identical size, a significant performance discrepancy can be observed: one migration completes quickly, while the other is exceptionally slow.
Key Technical Context:
The source and destination datastores reside on the same storage array/server.
The storage array supports vSphere Storage APIs for Array Integration (VAAI).
VAAI is enabled on the ESXi host.
Log Analysis: The following entries in the vmware.log of the affected VM reveal a significant delay during the migration state transition:
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateWriteHostLog: Writing to log file took 8738 us.
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateDevStreamInit: Should stream device data: 0.
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigratePlatformInitMigration: init migration data, is_source: 0
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigratePlatformUpdateVnicBackingChange: numVnicBackingChange: 0, is_source: 0
YYYY-MM-DDTHH:MM:SS No(00) vmx - ConfigDB: Setting migration.vmxDisabled = "TRUE"
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateWaitForData: waiting for data.
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateSetState: Transitioning from state MIGRATE_FROM_VMX_INIT (8) to MIGRATE_FROM_VMX_WAITING (9).
YYYY-MM-DDTHH:MM:SS In(05) vmx - Migrate_RPCsReady: Wait for PrepareDestRPC from source to preallocate BusMem.
YYYY-MM-DDTHH:MM:SS In(05) ######-####### - MigrateEnableCapabilities: Based on capabilities exchanged, device vmotion version is 2, EarlyRestoreNoOffset mode is not enabled, 4Kn is supported, going to use CBRC digest version '1' for this relocate. This host's version was '1'
YYYY-MM-DDTHH:MM:SS In(05) ######-####### - MigrateBusMemPrealloc: BusMem preallocation begins.
YYYY-MM-DDTHH:MM:SS In(05) ######-####### - MigrateBusMemPrealloc: BusMem preallocation completes.
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateSetState: Transitioning from state MIGRATE_FROM_VMX_WAITING (9) to MIGRATE_FROM_VMX_PRECOPY (10).
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateWaitForData: Waited for xxxx.xx seconds.
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateRPC_DrainPendingWork: Draining pending remote user messages before restore...
YYYY-MM-DDTHH:MM:SS In(05) vmx - MigrateRPC_DrainPendingWork: All pending work completed.
Note the timestamp gap in the entries above: in this instance, the process remained in the MIGRATE_FROM_VMX_WAITING state for over xxxxx seconds before transitioning to MIGRATE_FROM_VMX_PRECOPY.
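To locate this wait quickly in a log bundle, the state transitions and the reported wait time can be filtered from the VM's log. The path below is illustrative; the actual datastore and VM folder names will differ:
# Show the migration state transitions and the total wait duration
grep -E "MigrateSetState|MigrateWaitForData" /vmfs/volumes/<datastore>/<VM-name>/vmware.log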
Environment: VMware ESXi 8.x
When Storage vMotion performance degrades despite VAAI being active, the investigation must shift to the storage array side.
When VAAI is enabled and supported, vSphere utilizes Hardware Accelerated Move (XCOPY) to offload data migration tasks directly to the storage array. In this scenario:
Storage vMotion traffic does not traverse the ESXi host network or CPU.
The data is copied internally within the storage processor of the array.
Any performance bottleneck at this stage is typically caused by array-level contention, storage processor (SP) load, or internal microcode delays within the array itself; the esxtop check below can help confirm that the offload is actually taking place.
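As a sanity check that the copy is genuinely being offloaded, the VAAI clone counters can be watched in esxtop while the migration runs. The steps below are a sketch based on the standard esxtop field options; key bindings may vary slightly between builds:
# From an SSH session on the ESXi host:
esxtop
# Press 'u' for the disk device view, then 'f' to edit fields and toggle
# 'O' (VAAISTATS). Increasing CLONE_RD/CLONE_WR counters on the source and
# destination devices indicate the array is servicing XCOPY requests;
# rising CLONE_F values indicate failed offload attempts.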
Please contact the storage hardware vendor to investigate internal storage performance.
To provide the storage vendor with sufficient data, please perform the following checks:
1. Identify Involved Storage Objects
Collect the following details for both the source and destination datastores (the esxcli commands after this list can be used to gather them):
Datastore Name
Device ID (naa.ID / eui.ID)
Volume UUID
LUN ID
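One way to gather these with esxcli is shown below; the naa. device IDs are placeholders to be replaced with the values reported on the host:
# Map each datastore name to its VMFS UUID and backing device ID (naa./eui.)
esxcli storage vmfs extent list
# Show detailed properties for a specific device identified above
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx
# List the paths for the device; the LUN field provides the LUN ID
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx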
2. Verify VAAI Support via ESXi CLI
Confirm that the specific LUNs correctly report VAAI support by running:
esxcli storage core device vaai status get
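For a VAAI-capable LUN, the output should resemble the following (the device ID shown is a placeholder). Clone Status corresponds to the XCOPY primitive used by Hardware Accelerated Move, so it must report "supported" on both the source and destination devices:
naa.xxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported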
3. Verify ESXi VAAI Configuration
Although VAAI is enabled by default in ESXi 8.x, verify the host settings. The following vmkernel.log entries confirm that the relevant parameters are set to 1 (enabled):
YYYY-MM-DDTHH:MM:SS In(182) vmkernel: cpu11:2097862)Config: 727: "HardwareAcceleratedLocking" = 1, Old Value: 1, (Status: 0x0)
YYYY-MM-DDTHH:MM:SS In(182) vmkernel: cpu11:2097862)Config: 727: "HardwareAcceleratedMove" = 1, Old Value: 1, (Status: 0x0)
YYYY-MM-DDTHH:MM:SS In(182) vmkernel: cpu11:2097862)Config: 727: "HardwareAcceleratedInit" = 1, Old Value: 1, (Status: 0x0)
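The same settings can also be read live with esxcli rather than from the log; in each case the Int Value field should report 1:
# Check the VAAI primitives on the host (XCOPY, Block Zeroing, ATS)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking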