Removing an orphaned PVC from a vSphere Supervisor

Article ID: 435858


Products

Tanzu Kubernetes Runtime

Issue/Introduction

A persistent volume claim (PVC) is left orphaned in a Bound state on a vSphere Supervisor after the corresponding PVC is deleted from a vSphere Kubernetes Service (VKS) workload cluster.

 

kubectl get pvc -A
NAMESPACE                      NAME                                                                              STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
<Namespace_ID>                 ########-####-####-####-############-########-####-####-####-#############       Bound         pvc-########-####-####-####-############   40Gi       RWO            <Storage-Policy-ID>   <unset>                 113d

 

On the workload cluster, the PVC is gone, but the PV still exists: 

 

kubectl get pv
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                              STORAGECLASS           VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-########-####-####-9454-############   40Gi       RWO            Delete           Released   new-namespace/#####-######-######-#####-######-#  <storage class>   <unset>                          113d
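
If the workload cluster has many volumes, the stuck PVs can be narrowed down by filtering on the Released phase (a simple illustrative filter, assuming a standard kubectl setup; adjust the match to your environment):

kubectl get pv | grep Released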

Environment

  • vSphere Supervisor

Cause

A persistent volume (PV) on a workload cluster has a corresponding persistent volume claim (PVC) on both the workload cluster and the Supervisor. Deleting a PV in a workload cluster depends on the deletion of the corresponding Supervisor PVC to complete properly.

In some instances, the Supervisor PVC fails to respond to the workload cluster PV deletion event.

Resolution

On the workload cluster:

Find the Supervisor PVC that corresponds to the stuck PV. Run the "kubectl get" command against the workload cluster; in the output, the "volumeHandle" field contains the name of the corresponding Supervisor PVC.

 

 kubectl get persistentvolume/pvc-########-####-####-9343-############ -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  creationTimestamp: "2025-10-29T19:11:24Z"
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/csi-vsphere-vmware-com
  name: pvc-########-####-####-9343-############ 
  resourceVersion: "5308"
  uid: ########-####-####-####-############ 
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 40Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: #####-######-######-#####-######-#
    namespace: <namespaceName>
    resourceVersion: "3253"
    uid: ########-####-####-9343-############
  csi:
    driver: csi.vsphere.vmware.com
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: #############-###-csi.vsphere.vmware.com
      type: vSphere CNS Block Volume
    volumeHandle: ########-####-####-c8d4-############-########-####-####-9343-############
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - domain-c10
  persistentVolumeReclaimPolicy: Delete
  storageClassName: dsm-test-latebinding
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2025-10-29T19:36:18Z"
  phase: Released


In the example above, the "volumeHandle" is "########-####-####-c8d4-############-########-####-####-9343-############".
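
The same value can also be read directly with a jsonpath query instead of scanning the full YAML (a minimal example; substitute the actual PV name from your cluster):

kubectl get pv pvc-########-####-####-9343-############ -o jsonpath='{.spec.csi.volumeHandle}'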

 

 

On the Supervisor:

Next, switch to the Supervisor context and delete the orphaned PVC from its vSphere Namespace.


kubectl delete pvc ########-####-####-c8d4-############-########-####-####-9343-############ -n <Namespace_ID>
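
To confirm the orphaned claim is gone, list the Supervisor PVCs again and verify that the entry shown in the Issue section no longer appears (a quick check; the grep pattern is illustrative):

kubectl get pvc -A | grep <volumeHandle>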

 

 

On the workload cluster:

Finally, restart the vsphere-csi-controller deployment on the workload cluster.

 

kubectl rollout restart deployment/vsphere-csi-controller  -n vmware-system-csi
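
Once the rollout completes, the controller should reconcile the stale volume and the Released PV should eventually disappear (verification commands, assuming the same deployment and namespace as above):

kubectl rollout status deployment/vsphere-csi-controller -n vmware-system-csi
kubectl get pv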