
Workloads using dynamic PersistentVolumes (PVs) must be removed before deleting a cluster


Article ID: 331347



Products

VMware

Issue/Introduction

Symptoms:
Your tkgi delete-cluster operation hangs while draining a worker VM containing a Pod bound to one or more dynamic persistent volumes.

Resolution

Before deleting the cluster, remove all workloads.


If you have already attempted to delete a cluster and the operation has stalled on workloads with dynamic PVs, complete the following steps:


1. To fetch the VM CID for the affected worker VM:

  • To fetch the IDs of recent tasks on your deployment:
bosh -d DEPLOYMENT-ID tasks -r

Where DEPLOYMENT-ID is your deployment name.
  • To fetch the name of the worker VM that the BOSH task is blocked on:
bosh task TASK-ID

Where TASK-ID is the ID you fetched in the last step.

This will return a worker VM name in a format similar to the following: "worker/547a4c8d-a8b7-4640-ac8c-10712925987".
  • To fetch the VM CID for the affected worker VM:

bosh vms | grep WORKER-VM-ID


Where WORKER-VM-ID is the worker VM name you fetched in the last step.
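
As a quick illustration, the three commands above can be run in sequence from a shell where the BOSH CLI is configured. The deployment name, task ID, and worker VM name below are placeholders, not values from your environment:

# List recent tasks for the deployment (placeholder deployment name)
bosh -d service-instance_1234abcd tasks -r

# Inspect the hung task to see which worker VM it is draining
bosh task 501

# Look up the VM CID for that worker VM
bosh vms | grep worker/547a4c8d-a8b7-4640-ac8c-10712925987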


2. Determine the vmdk file paths for all of the dynamic PVs attached to the worker VM.


To determine the vmdk file paths for dynamic PVs using the vCenter webUI:

  • Select the worker VM.
  • Right-click the worker VM.
  • Select Edit Settings.
  • Select Yes.

The Edit Settings view lists all attached disks and their vmdk file paths.
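
If the open-source govc CLI is installed and configured against your vCenter, the attached disks can also be listed from the command line. This is a sketch outside the documented webUI procedure; WORKER-VM-CID is the VM CID you fetched in step 1:

# List the virtual disks attached to the worker VM; the File field shows each vmdk path
govc device.info -vm WORKER-VM-CID disk-*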

3. Power off the worker VM.

  • To power off the worker VM on vSphere, use the vCenter webUI.
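
If you prefer the command line and have govc configured, a sketch of the same power-off action follows; WORKER-VM-CID is the VM CID from step 1:

# Power off the worker VM (equivalent to the vCenter webUI action)
govc vm.power -off WORKER-VM-CID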

4. Delete the worker VM from disk.

To delete a worker VM using the vCenter webUI:

  • Select the worker VM.
  • Right-click the worker VM.
  • Select Delete from disk.
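
A govc sketch of the same Delete from disk action, assuming the CLI is configured; WORKER-VM-CID is the VM CID from step 1:

# Destroy the worker VM and delete its files from the datastore
govc vm.destroy WORKER-VM-CID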

5. Remove the vmdk files at the file paths you collected above.

To remove a vmdk file using the vCenter webUI:

  • Open the vCenter Storage page.
  • Remove the vmdk files for the dynamic PVs attached to the worker VM.

Warning: Remove only the vmdk files for the dynamic PVs attached to the problematic worker VM.
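
For reference, a govc sketch of the same cleanup; DATASTORE-NAME and PV-VMDK-PATH are placeholders for the datastore and file path you collected in step 2. Heed the warning above and remove only the vmdk files that back the dynamic PVs attached to the problematic worker VM:

# Delete a single dynamic PV vmdk file from the datastore
govc datastore.rm -ds DATASTORE-NAME PV-VMDK-PATH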

