Error: "mount failed: exit status 32" for vSphere CNS volumes with Pods stuck in Creating state

Error: "mount failed: exit status 32" for vSphere CNS volumes with Pods stuck in Creating state


Article ID: 431186


Updated On:

Products

VMware Tanzu Kubernetes Grid

Issue/Introduction

  • Kubernetes pods using vSphere Container Storage Interface (CSI) volumes remain stuck in the "ContainerCreating" state.
  • The pod events show repeated "FailedMount" warnings with the errors "mount failed: exit status 32" and "No such file or directory".

Environment

VMware Tanzu Kubernetes Grid (TKGm) 2.x, including 2.1


Cause

The Kubernetes `VolumeAttachment` object is out of sync with the underlying node's storage state. The API reports the volume as attached to the worker node (`attached: true` in the object's status), which prevents the CSI driver from re-attaching it, but the actual NFS mount path on the node's operating system is missing, stale, or inaccessible.
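The API-side half of this discrepancy can be confirmed at a glance by listing every `VolumeAttachment` with its reported status. This is a minimal sketch (the custom-columns expression is one way to surface the relevant fields, not the only one); it must be run against the affected cluster:

```shell
# List all VolumeAttachments with the PV they reference, the node they
# claim to be attached to, and the API-reported attachment status.
# The entry for the affected PV will show ATTACHED=true even though the
# mount path is missing on the node.
if ! command -v kubectl >/dev/null 2>&1; then
  echo "kubectl not found; run this against the affected cluster" >&2
  exit 0
fi

kubectl get volumeattachment -o custom-columns=\
'NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached'
```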

Resolution

  1. Identify the Persistent Volume Claim (PVC) associated with the failing pod:
       kubectl describe pod <pod-name> -n <namespace> | grep -i claimName
  2. Retrieve the Persistent Volume (PV) name bound to the PVC:
       kubectl get pvc <pvc-name> -n <namespace>
  3. Locate the specific `VolumeAttachment` for that PV:
       kubectl get volumeattachment | grep <pv-name>
  4. Verify the attachment status and node assignment to confirm the discrepancy:
       kubectl get volumeattachment <volumeattachment-name> -o yaml
  5. Delete the desynchronized `VolumeAttachment` object:
       kubectl delete volumeattachment <volumeattachment-name>
  6. Verify the pod successfully mounts the volume and transitions to a Running state:
       kubectl get pods -n <namespace> | grep <pod-name>
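The six steps above can be sketched as a single script. This is an illustrative outline, not a supported tool: `POD` and `NS` are placeholders you must substitute, it assumes the pod mounts exactly one PVC and that exactly one `VolumeAttachment` references the PV, and it should be reviewed before running against a production cluster:

```shell
#!/usr/bin/env bash
# Sketch of the Resolution steps; substitute POD and NS for your environment.
set -euo pipefail

if ! command -v kubectl >/dev/null 2>&1; then
  echo "kubectl not found; run this against the affected cluster" >&2
  exit 0
fi

POD="<pod-name>"
NS="<namespace>"

# 1-2. PVC mounted by the failing pod, then the PV bound to that PVC.
PVC=$(kubectl get pod "$POD" -n "$NS" \
  -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}')
PV=$(kubectl get pvc "$PVC" -n "$NS" -o jsonpath='{.spec.volumeName}')

# 3. VolumeAttachment referencing that PV.
VA=$(kubectl get volumeattachment \
  -o jsonpath="{.items[?(@.spec.source.persistentVolumeName=='$PV')].metadata.name}")

# 4. Inspect the object to confirm the stale attached status and node name.
kubectl get volumeattachment "$VA" -o yaml

# 5. Delete the desynchronized VolumeAttachment so the CSI controller
#    performs a fresh attach.
kubectl delete volumeattachment "$VA"

# 6. Watch the pod transition to Running once the volume mounts.
kubectl get pod "$POD" -n "$NS" -w
```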

Additional Information

  • Deleting the `VolumeAttachment` forces the vSphere CSI driver controller to reconcile the missing object and initiate a fresh attach operation to the node, resolving the state discrepancy.