Virtual Machine cannot be migrated from one NVMe/TCP datastore to another NVMe/TCP datastore


Article ID: 398589


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Storage vMotion fails when the source and destination datastores are in different ANA (Asymmetric Namespace Access) groups; the migration gets stuck at 35%. The issue is noticed on ESXi 8.x.

Environment

ESXi 8.x

IBM NVMe storage

Cause

  • Log messages indicate an issue related to the storage target.

The following messages are noticed in vmkernel.log:

2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu26:2098589)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934a93bc0, vmkPsaCmd: 0x45d93823dec0, cmdId.initiator=0x4306f261f100, CmdSN: 0x268, status: 0x302
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu13:2098598)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934a70bc0, vmkPsaCmd: 0x45d93823f0c0, cmdId.initiator=0x4306f261f100, CmdSN: 0x263, status: 0x302
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu45:2098617)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934a953c0, vmkPsaCmd: 0x45d93823e4c0, cmdId.initiator=0x4306f261f100, CmdSN: 0x265, status: 0x302
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu29:2098609)WARNING: NVMEIO:2645 command 0x45d934a757c0 failed: ctlr 257, queue 5, psaCmd 0x45d93823eac0, status 0x302, opc 0x19, cid 18, nsid 6
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu29:2098609)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934a757c0, vmkPsaCmd: 0x45d93823eac0, cmdId.initiator=0x4306f261f100, CmdSN: 0x264, status: 0x302
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu1:2098587)WARNING: NVMEIO:2645 command 0x45d934a815c0 failed: ctlr 257, queue 1, psaCmd 0x45d93823ccc0, status 0x302, opc 0x19, cid 24, nsid 6
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu1:2098587)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934a815c0, vmkPsaCmd: 0x45d93823ccc0, cmdId.initiator=0x4306f261f100, CmdSN: 0x267, status: 0x302
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu40:2098619)WARNING: NVMEIO:2645 command 0x45d934af97c0 failed: ctlr 257, queue 7, psaCmd 0x45d93823d8c0, status 0x302, opc 0x19, cid 15, nsid 6
2025-04-17T21:55:41.754Z Wa(180) vmkwarning: cpu40:2098619)WARNING: NVMEPSA:217 Complete vmkNvmeCmd: 0x45d934af97c0, vmkPsaCmd: 0x45d93823d8c0, cmdId.initiator=0x4306f261f100, CmdSN: 0x266, status: 0x302
2025-04-17T21:55:42.041Z In(182) vmkernel: cpu11:2098493)HPP: HppAttemptFailoverRequest:1071: Re-issuing first command for HPP device "eui.d17b50e232094d799dc1ff79fbf2c8c7" (NO_CONNECT_ON_APD = CLEAR) failoverState=2 blocked=1
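
The signature can be confirmed on the affected host by searching the vmkernel log for the failed NVMe commands. A minimal check, assuming the standard log location /var/log/vmkernel.log and the status code 0x302 seen in the messages above:

# Show NVMe command failures reported by the NVMEIO/NVMEPSA layers with status 0x302
grep -E "NVMEIO|NVMEPSA" /var/log/vmkernel.log | grep "0x302"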

 

 

Resolution

Engage storage vendor support to troubleshoot at the storage target level.

 

 Workaround:

Switch back to using the software data mover (as in the 7.0 release line) by disabling the hardware-accelerated move.

The NVMe copy that would have been handled by the target will instead be handled by the host, which performs reads from the source and writes to the destination. This is the same way migration worked in 7.0 U3. To take advantage of the performance gain from array offload, the target-side issue needs to be resolved by engaging storage vendor support.
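
As a minimal sketch of how the hardware-accelerated move can be toggled from the ESXi shell, assuming the standard DataMover.HardwareAcceleratedMove advanced option is the relevant setting (confirm the exact option for your environment with support before changing it):

# Check the current value of the assumed toggle (1 = hardware-accelerated move enabled)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Disable hardware-accelerated move so the host copies data itself (software data mover)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0

# Re-enable the offload (value 1) once the target-side issue has been resolved
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1

The same option can also be changed per host in the vSphere Client under Configure > System > Advanced System Settings.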

Additional Information