The environment encounters a node drain failure. The drain blocks on a pod in “Completed” state that cannot be deleted. The Job that originally created the pod was deleted earlier, and the node object the pod was scheduled on has also been deleted, leaving an orphaned pod record persisting in etcd.
Cluster API does not complete the Machine deletion workflow until the drain finishes.
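A minimal sketch for confirming the orphaned state; <pod>, <pod-namespace>, and <node> are placeholders for the values in the environment:

  # Pod is still present and stuck terminating in the Completed (Succeeded) phase
  kubectl get pod <pod> -n <pod-namespace> -o wide

  # The parent Job no longer exists
  kubectl get job -n <pod-namespace>

  # The node the pod was scheduled on no longer exists
  kubectl get node <node>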
Machine Status from kubectl describe machine <machine> -n <namespace>:

  Message:               Drain not completed yet: * Pod <pod>: deletionTimestamp set, but still not removed from the Node
  Observed Generation:   4
  Reason:                DrainingNode
  Status:                True
  Type:                  Deleting
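The same blocked-drain message can also be read from the Machine's conditions. A sketch; the exact conditions layout varies with the Cluster API version, and <machine> and <namespace> are placeholders:

  kubectl get machine <machine> -n <namespace> -o yaml | grep -B 6 'type: Deleting'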
Impact: The Machine, VSphereMachine, VirtualMachine CR, and the backing VM in vSphere remain pending deletion until the node drain clears.
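To confirm which objects in the chain are stuck, a hedged example; the resource kinds assume Cluster API with the vSphere provider and VM Operator CRDs, and the names are placeholders:

  # Each custom resource still exists but carries a deletionTimestamp
  kubectl get machine,vspheremachine,virtualmachine -n <namespace>
  kubectl get machine <machine> -n <namespace> -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'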
Standard remediation methods, such as force pod deletion, patching out finalizers, deleting the pod directly through the API, rebooting the VM, and draining with eviction disabled, do not succeed.
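For reference, these are the kinds of commands that fail to clear the pod in this state; a sketch with placeholder names, not an exhaustive list:

  # Force deletion
  kubectl delete pod <pod> -n <pod-namespace> --grace-period=0 --force

  # Removing finalizers
  kubectl patch pod <pod> -n <pod-namespace> --type=merge -p '{"metadata":{"finalizers":null}}'

  # Drain with eviction disabled
  kubectl drain <node> --disable-eviction --ignore-daemonsets --delete-emptydir-data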
An orphaned pod record persists in etcd after its parent Job and the node it ran on have been deleted. The API server cannot complete the pod deletion flow, so the pod remains indefinitely with a deletionTimestamp set that never clears. Cluster API requires a successful node drain before it deletes a Machine, so the orphaned pod blocks the entire deletion chain.
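Inspecting the stuck pod reflects this state; a sketch with placeholder names:

  kubectl get pod <pod> -n <pod-namespace> -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.status.phase}{"\n"}{.spec.nodeName}{"\n"}'
  # deletionTimestamp is set, phase is Succeeded (shown as Completed), and spec.nodeName points at a node that no longer exists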
When this condition occurs, please open a Support ticket for further assistance.