vMotion of a VM with an RDM fails in the vSphere Client with the following error:
The source detected that the destination failed to resume.
####-##-##T##:##:##.######Z The VM failed to resume on the destination during early power on.
Module Disk power on failed.
Cannot open the disk '/vmfs/volumes/<Datastore-UUID>/<VM-Name>/<VM-disk>.vmdk' or one of the snapshot disks it depends on.
19 (No such device)
File system specific implementation of OpenFile[file] failed ####-##-##T##:##:##.######Z
On the source host, where the VM is running and the RDM is accessible:
[<user>@<Hostname>:/vmfs/volumes/<Datastore-UUID>/<VM-Name>] vmkfstools -q <VM-disk>.vmdk
Disk <VM-disk>.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.0200000000<Naa-device-ID>4c554e20432d
VMware vSphere ESXi
The storage device backing the RDM disk has been detached or administratively turned off on the destination host, so the disk cannot be opened during power-on and the migration fails.
On the destination host, run the same query: vmkfstools -q <VM-disk>.vmdk
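A minimal walk-through from the destination host's shell, assuming the VM's files live on the same shared datastore path as on the source:

[<user>@<Hostname>:/] cd /vmfs/volumes/<Datastore-UUID>/<VM-Name>
[<user>@<Hostname>:/vmfs/volumes/<Datastore-UUID>/<VM-Name>] vmkfstools -q <VM-disk>.vmdk

If the backing device is detached, this query fails instead of reporting the "Maps to: vml...." line seen on the source host, and the vmkernel log records the warning below.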
ESXi: /var/run/log/vmkernel.log
####-##-##T##:##:##.###Z Wa(180) vmkwarning: cpu11:2489075)WARNING: RDM3: 871: Error opening device vml.0200000000<Naa-device-ID>4c554e20432d: No such target on adapter
To verify the status of the device:
localcli storage core device list -d naa.###############################
naa.###############################
   Display Name: NETAPP iSCSI Disk (naa.###############################)
   Has Settable Display Name: true
   Size: 0
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path:
   Vendor: NETAPP
   Model: LUN C-Mode
   Revision: 9121
   SCSI Level: 6
   Is Pseudo: false
   Status: off   <----------------- Device turned off
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: true
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: unknown
   Other UIDs: vml.0200000000################################4c554e20432d
   Is Shared Clusterwide: true
   Is SAS: false
   Is USB: false
   Is Boot Device: false
   Device Max Queue Depth: 16
   No of outstanding IOs with competing worlds: 16
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false
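To pull out just the attachment state, the same command can be filtered; a quick sketch, assuming the field lines are indented with three spaces as in the output above:

[<user>@<hostname>:/] localcli storage core device list -d naa.############################### | grep "^   Status:"
   Status: off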
To check from the CLI if the device has been administratively turned off:
[<user>@<hostname>:/] localcli storage core device detached list
Device UID State
------------------------------------ -----
naa.############################### off
To reattach the device, select the destination host -> Configure -> Storage Devices -> select the storage device backing the RDM -> Attach.
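The same can be done from the CLI; a sketch using the masked device ID from the output above, run on the destination host:

[<user>@<hostname>:/] esxcli storage core device set -d naa.############################### --state=on
[<user>@<hostname>:/] esxcli storage core device detached remove -d naa.###############################

The first command brings the device back online; the second removes its entry from the persistent detached list.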
The following log message is generated when a device is detached:
ESXi: /var/run/log/vmkernel*
####-##-##T##:##:##.###Z In(182) vmkernel: cpu52:2098195)ScsiDevice: 1831: Device naa.############################### has been turned off administratively.
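For reference, a device ends up in this state when it is detached administratively, for example from the CLI:

[<user>@<hostname>:/] esxcli storage core device set -d naa.############################### --state=off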
Detach a LUN device from ESXi hosts
This issue can also arise if the RDM LUN is presented to the destination host with a different LUN ID than on the source host.
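To compare, list the paths to the device on both hosts; the LUN ID is the trailing "L" number in the runtime name (for example, vmhba64:C0:T0:L5 is LUN 5):

[<user>@<hostname>:/] esxcli storage core path list -d naa.############################### | grep "Runtime Name"

If the IDs differ, correct the LUN presentation on the storage array so both hosts see the same LUN ID.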
Migrating VMs with attached RDMs fails with the error "Storage vMotion failed to create the destination disk /vmfs/volumes/DATASTORE_NAME/VM_NAME/VM_NAME.vmdk"