The nova-compute logs indicate that the resource tracker continuously attempts to lock and validate the instance, preventing its purge from the Placement service and the Nova database.

controller-ABC/nova/nova-compute.log
Feb 02 03:52:49 controller-ABC nova-compute[880]: 2026-02-02 03:52:49.671 1 DEBUG nova.compute.resource_tracker [req-xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx- - - - -] Instance xxxxx-xxxx-xxx-xxx-xxxx actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 100, 'MEMORY_MB': 32768, 'VCPU': 4}}. _remove_deleted_instances_allocations /usr/lib/python3.7/site-packages/nova/compute/resource_tracker.py:1591
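To confirm that Placement still holds allocations for the affected instance, the consumer's allocations can be listed directly. This is a sketch assuming the osc-placement CLI plugin is installed on the controller; the instance UUID is the allocation consumer ID.

```shell
# List the resource allocations Placement still holds for the instance.
# Requires the osc-placement plugin for python-openstackclient.
openstack resource provider allocation show <INSTANCE_UUID>
```

If this returns DISK_GB, MEMORY_MB, and VCPU entries matching the log excerpt above, the allocation is still live and the resource tracker will keep re-validating the instance.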
/var/log/vmware/vpxd.log
2026-02-06T03:42:14.733Z error vpxd[08081] [Originator@6876 sub=StateLock opID=n-cpu-xxxxxx-xxx-xxx-xxxx-xxxxxx-xx] VM svcabcdefegh (xxxxxx-xxxx-xxx-xxx-xxxxx): Current state: inaccessible, IsConnected = false, expected = true, cryptoLocked = false
2026-02-06T04:47:39.201Z error vpxd[08987] [Originator@6876 sub=StateLock opID=n-cpu-xxxxxx-xxx-xxx-xxxx-xxxxxx-xx] VM svcabcdefegh (xxxxx-xxxx-xxx-xxx-xxxx): Current state: inaccessible, IsConnected = false, expected = true, cryptoLocked = false
Because the backing VM is inaccessible in vCenter, the instance remains stuck in a deleting loop within VIO, as the compute resource tracker maintains the active allocation.

To resolve the issue, the orphaned VM record must be cleared at the vSphere layer before resetting the OpenStack task state.
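Before touching the vSphere inventory, it is worth confirming that the instance is in fact hung in a deletion task. A minimal check, assuming standard python-openstackclient output columns:

```shell
# Show the instance status and the in-progress task state.
# A hung deletion typically shows status ACTIVE or ERROR with
# OS-EXT-STS:task_state stuck at "deleting".
openstack server show <INSTANCE_UUID> -c status -c OS-EXT-STS:task_state
```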
Ensure valid backups and snapshots of the vCenter Server and VIO management nodes are available.
In the vSphere Client, locate the inaccessible VM in the inventory.
Right-click the VM and select Remove from Inventory. Do not select "Delete from Disk".
Log in to the VIO controller node via SSH.
Reset the OpenStack instance state to active to break the hung deletion task: openstack server set --state active <INSTANCE_UUID>
Re-attempt the deletion of the instance via the CLI: openstack server delete <INSTANCE_UUID>
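After the deletion completes, a quick verification confirms that both the Nova record and the Placement allocation are gone. This sketch again assumes the osc-placement plugin is available:

```shell
# The server lookup should now fail, indicating the Nova record is purged.
openstack server show <INSTANCE_UUID>

# The consumer should no longer have any allocations in Placement.
openstack resource provider allocation show <INSTANCE_UUID>
```

If the allocation listing is empty and the nova-compute log no longer reports the instance as "actively managed on this compute host", the cleanup is complete.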