2024-10-11T06:02:48.113Z In(166) Hostd[2099101]: [Originator@6876 sub=Vcsvc.VMotionSrc.5321013267882353922] ResolveCb: Failed with fault: (vim.fault.GenericVmConfigFault) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> faultMessage = (vmodl.LocalizableMessage) [
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> (vmodl.LocalizableMessage) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> key = "msg.svmotion.fail.internal",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> message = "A fatal internal error occurred. See the virtual machine's log for more details.",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> },
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> (vmodl.LocalizableMessage) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> key = "msg.svmotion.disk.createphase.fail",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> message = "Failed to create one or more destination disks.",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> },
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> (vmodl.LocalizableMessage) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> key = "msg.svmotion.destdisk.createfail",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> arg = (vmodl.KeyAnyValue) [
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> (vmodl.KeyAnyValue) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> key = "1",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> value = "/vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk"
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> },
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> (vmodl.KeyAnyValue) {
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> key = "2",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> value = "The specified device is not a valid physical disk device"
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> }
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> ],
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> message = "Storage vMotion failed to create the destination disk /vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk (The specified device is not a valid physical disk device).
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> "
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> }
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> ],
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> reason = "A fatal internal error occurred. See the virtual machine's log for more details.",
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> msg = "A fatal internal error occurred. See the virtual machine's log for more details.
2024-10-11T06:02:48.113Z In(166) Hostd[2099069]: --> Failed to create one or more destination disks.
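The hostd entries above point to the virtual machine's own log for details. The corresponding failure is normally visible in the vmware.log in the VM folder and can be located with grep (available in the ESXi shell); the path below uses generic placeholders and the exact wording of the message may differ between builds:
grep -i "not a valid physical disk device" /vmfs/volumes/DATASTORE_NAME/VM_NAME/vmware.log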
Affected products:
VMware ESXi 7.0.x
VMware ESXi 8.0.x
Cause:
The vml ID under which the host sees a LUN can change after the RDM mapping file has been created, but the vml ID recorded inside the RDM mapping file is fixed when the mapping file is created and is never updated afterwards. During a Storage vMotion, the vml ID recorded in the mapping file is queried so that the destination disk can be written; if the ID has changed in the meantime, no matching LUN is found and the migration fails with the error above.
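As background, the host exposes each LUN's vml ID under /vmfs/devices/disks alongside its naa device node; on typical builds the vml entries appear as links to the naa entries, so the current vml-to-naa relationship for all attached LUNs can be listed with (a quick sketch; the listing format can vary between releases):
ls -l /vmfs/devices/disks/ | grep vml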
How to identify if there is an inconsistency:
1. To check the vml ID under which the host currently sees the LUN, run the following command with the naa ID of the LUN:
esxcli storage core device list -d naa.#######################
and look for the line beginning with "Other UIDs: vml.###########################################################"
For example:
esxcli storage core device list -d naa.600a09803830########5275574d7572
naa.600a09803830########5275574d7572:
Display Name: NETAPP iSCSI Disk (naa.600a09803830########5275574d7572)
Has Settable Display Name: true
Size: 20971520
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a09803830########5275574d7572
Vendor: NETAPP
Model: LUN C-Mode
Revision: 9141
SCSI Level: 6
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: true
Is VVOL PE: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
Other UIDs: vml.0200690100600a09803830########5275574d75724c554e20432d
Is Shared Clusterwide: true
Is SAS: false
Is USB: false
Is Boot Device: false
Device Max Queue Depth: 255
No of outstanding IOs with competing worlds: 32
Drive Type: unknown
RAID Level: unknown
Number of Physical Drives: unknown
Protection Enabled: false
PI Activated: false
PI Type: 0
PI Protection Mask: NO PROTECTION
Supported Guard Types: NO GUARD SUPPORT
DIX Enabled: false
DIX Guard Type: NO GUARD SUPPORT
Emulated DIX/DIF Enabled: false
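Because only one line of this output is needed, the command can be filtered with grep (available in the ESXi shell); using the masked naa ID from the example above:
esxcli storage core device list -d naa.600a09803830########5275574d7572 | grep "Other UIDs"
Other UIDs: vml.0200690100600a09803830########5275574d75724c554e20432d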
2. To check the vml ID recorded in the RDM descriptor file, browse into the VM folder and run:
vmkfstools -q /vmfs/volumes/DATASTORE_NAME/VM_NAME/VM_NAME.vmdk
For example:
vmkfstools -q /vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk
Disk /vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.02007e010600a09803830########5275574d75724c554e20432d
3. Compare the two vml IDs from the outputs above:
Other UIDs: vml.0200690100600a09803830########5275574d75724c554e20432d
Maps to: vml.02007e010600a09803830########5275574d75724c554e20432d
Note: In this case, the beginnings of the two vml IDs differ, which confirms the inconsistency.
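The comparison can also be scripted so the two strings do not have to be checked by eye. A sketch for the ESXi shell, using the masked naa ID and descriptor path from this example (substitute the real values):
HOST_VML=$(esxcli storage core device list -d naa.600a09803830########5275574d7572 | sed -n 's/.*Other UIDs: *//p')
RDM_VML=$(vmkfstools -q /vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk | sed -n 's/.*Maps to: *//p')
[ "$HOST_VML" = "$RDM_VML" ] && echo "vml IDs match" || echo "vml ID MISMATCH"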
To correct the vml ID inconsistency:
1. Shut down the virtual machine that uses the affected RDM disk(s).
2. Remove the RDM(s) from the virtual machine and delete the mapping file(s) from the datastore. Deleting an RDM removes only the mapping (descriptor) file; the data on the mapped LUN is not affected.
3. Re-add the RDM(s) to the virtual machine, pointing to the same LUN(s). This recreates the mapping file(s) with the vml ID currently seen by the host.
The vml ID in the RDM descriptor file will now match the vml ID currently seen by the host, and the Storage vMotion can be retried.
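To verify the fix, re-run the query from step 2 against the recreated descriptor and confirm that its "Maps to:" value now matches the "Other UIDs:" value reported by the host:
vmkfstools -q /vmfs/volumes/61####52-f1####90-6##2-00########3b/testVMname/testVMname_2.vmdk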