Unable to power on VM with RDM after disk expansion: "failed to lock" error


Article ID: 385727


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

After an RDM disk expansion, the VM fails to power on once the shared disk is added back to the cluster VM nodes.

The host is unable to lock the shared VMDK, so the I/O commit fails because disk controller bus sharing is not configured on the controller the disk was attached to.

Environment

VMware vSphere ESXi 7.x

VMware vSphere ESXi 8.x

Cause

The ESXi host cannot lock the shared RDM VMDK when the disk is added back to the VM on a SCSI controller that is not configured for bus sharing. A shared VM disk can only be locked correctly when it is assigned to a SCSI controller with bus sharing enabled.

For example, the shared disk is attached to scsi0:x (a controller without bus sharing) instead of scsi1:x (the controller configured for bus sharing).
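As a minimal illustration only (the controller numbers, virtual device type, and disk file name below are placeholders, not values taken from the affected environment), the relevant entries in the VM's .vmx file might look like this.

Shared disk attached to a controller without bus sharing (incorrect):

scsi0.present = "TRUE"
scsi0.sharedBus = "none"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "vm-xxx_4.vmdk"

Shared disk attached to a controller configured for physical bus sharing (correct):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1.sharedBus = "physical"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "vm-xxx_4.vmdk"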

Error snippet: 

File system specific implementation of Ioctl[file] failed
Failed to start the virtual machine.
Cannot open the disk '/vmfs/volumes/6401c851-6be774c5-60f5-0017a4773c40/vm-xxx/vm-xxx_4.vmdk' or one of the snapshot disks it depends on.
Error stack:
Cannot open the disk '/vmfs/volumes/6401c851-6be774c5-60f5-0017a4773c40/vm-xxx/vm-xxx_4_4.vmdk' or one of the snapshot disks it depends on.
Failed to lock the file
File system specific implementation of OpenFile[file] failed

Resolution

Power off both cluster VM nodes.

  • Edit the VM disks and reassign the shared disk to a SCSI controller configured for bus sharing. The disk must be attached to the same SCSI controller and device node on each cluster VM node (see the example configuration after this list).
    • Example: VM1 disk 2 and VM2 disk 2 are both configured on scsi1:1
  • Ensure the selected SCSI controller is configured for physical bus sharing.
  • Once completed, power on both cluster VM nodes.
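For reference, below is a minimal sketch of the resulting .vmx entries for the shared disk on both cluster nodes. The controller number and virtual device type are illustrative placeholders, and the disk path is reused from the error snippet above; all of these will differ in your environment. The essential points are that the controller uses physical bus sharing and that both VMs reference the same mapping file on the same device node (scsi1:1).

On both VM1 and VM2:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1.sharedBus = "physical"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "/vmfs/volumes/6401c851-6be774c5-60f5-0017a4773c40/vm-xxx/vm-xxx_4.vmdk"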

Additional Information