Error : "Failed to provision volume with StorageClass "##########": failed to create volume. Errors encountered: [Datastore: ds:///vmfs/volumes/######### specified in the storage class is not accessible"

Article ID: 425830


Products

VMware Tanzu Kubernetes Grid Management

Issue/Introduction

  • The application Pod fails to start and remains in a Pending state.
  • The associated PersistentVolumeClaim (PVC) is unable to bind to a volume.
  • A ProvisioningFailed error is observed in the PVC events.
  • Inspecting the PVC events with kubectl describe pvc reveals the following error from the vSphere CSI driver:
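The inspection command takes the general form below; the PVC name and namespace are placeholders:

```shell
# Show the PVC's status, bound volume, and recent events
kubectl describe pvc <PVC name> -n <namespace>
```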


Type     Reason              Message
----     ------              -------
Warning  ProvisioningFailed  failed to provision volume with StorageClass "<##########>": rpc error: code = Internal desc = failed to create volume. Errors encountered: [Datastore: ds:///vmfs/volumes/##########/ specified in the storage class is not accessible to all nodes in vCenter.

Logs

E0107 02:24:45.512865       1 controller.go:957] error syncing claim "#########################": failed to provision volume with StorageClass "########": rpc error: code = Internal desc = failed to create volume. Errors encountered: [Datastore: ds:///vmfs/volumes/#########################/ specified in the storage class is not accessible to all nodes in vCenter "#########################"]


I0107 02:24:45.512877       1 event.go:364] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"############", Name:"#########################", UID:"#########################", APIVersion:"v1", ResourceVersion:"380544", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "######": rpc error: code = Internal desc = failed to create volume. Errors encountered: [Datastore: ds:///vmfs/volumes/#########################/ specified in the storage class is not accessible to all nodes in vCenter "#########################".]

Environment

VMware Tanzu Kubernetes Grid Management 2.x

Cause

The datastore specified in the StorageClass is not mounted on the ESXi host(s) where the cluster's worker nodes reside, so the vSphere CSI driver cannot make the volume accessible to all nodes.
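As a quick check, datastore mounts can be listed directly on each ESXi host; the datastore UUID shown in the error should appear in this list on every host that runs a worker node:

```shell
# Run on each ESXi host (e.g. over SSH); lists mounted filesystems
# (including VMFS datastores) with their UUIDs and mount points
esxcli storage filesystem list
```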

Resolution

Option A: 

  1. Identify the datastore UUID specified in the StorageClass (it appears in the datastoreurl parameter):

    kubectl get sc <storage class name> -o yaml

  2. Mount the datastore specified in the StorageClass on the ESXi host(s) where the worker nodes are running.
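The datastoreurl value embeds the UUID. A minimal sketch of extracting it with plain shell parameter expansion; the URL below is a made-up placeholder in the same shape as the StorageClass parameter:

```shell
# Made-up placeholder value copied from the StorageClass datastoreurl parameter
url="ds:///vmfs/volumes/5f1e2a3b-abcd1234-0000-aabbccddeeff/"

# Strip the ds:///vmfs/volumes/ prefix and the trailing slash, leaving the UUID
uuid="${url#ds:///vmfs/volumes/}"
uuid="${uuid%/}"
echo "$uuid"   # → 5f1e2a3b-abcd1234-0000-aabbccddeeff
```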

Option B: 

If the datastore is permanently decommissioned or cannot be mounted on the ESXi host for any reason, follow the steps below to update the StorageClass with a new datastore UUID that is accessible to the ESXi host.

  1. Back up the existing StorageClass

    kubectl get sc vsphere-sc -o yaml > vsphere-sc-backup.yaml

  2. Delete the StorageClass

    kubectl delete sc vsphere-sc

  3. Force remove finalizers from the stuck PVC

    kubectl patch pvc <PVC name> -n <namespace> -p '{"metadata":{"finalizers":null}}' --type merge
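Before clearing finalizers, it can help to confirm which finalizers are actually set on the PVC; a JSONPath query prints them (names are placeholders):

```shell
# Print only the finalizers array from the PVC's metadata
kubectl get pvc <PVC name> -n <namespace> -o jsonpath='{.metadata.finalizers}'
```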

  4. Force remove finalizers from the StorageClass (if stuck)

    kubectl patch sc <storage class name> -p '{"metadata":{"finalizers":null}}' --type merge

  5. Apply the corrected StorageClass with the new datastore UUID

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <Storage Class name>
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    parameters:
      datastoreurl: ds:///vmfs/volumes/<New Datastore UUID>/
    provisioner: csi.vsphere.vmware.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    EOF


  6. Delete the PVC to retrigger provisioning

    kubectl delete pvc <PVC name> -n <namespace>
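Note that deleting the PVC only retriggers provisioning if something recreates the claim, for example a StatefulSet volumeClaimTemplate or the application's own manifest. If nothing recreates it automatically, reapply the PVC manifest and watch it bind; the file name and namespace below are placeholders:

```shell
# Recreate the claim, then watch until it reaches the Bound state
kubectl apply -f <pvc-manifest>.yaml
kubectl get pvc -n <namespace> -w
```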