Name:          <example-label>-pd-<example-label>-0
Namespace:     <example-namespace>
StorageClass:  <example-storage-class>
Status:        Pending
Volume:
Labels:        app=<example-label>
               release=<example-label>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
               volume.kubernetes.io/selected-node: <example-worker-node>
               volume.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <example-label>-0
Events:
  Type     Reason                Age               From                          Message
  ----     ------                ----              ----                          -------
  Normal   WaitForFirstConsumer  1m                persistentvolume-controller   waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    1m                csi.vsphere.vmware.com_vsphere-csi-controller-86d4f68d95-w6l2m_####-5fe0-4696-####-a1ba37aff452   failed to provision volume with StorageClass "<example-storage-class>": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-### [VirtualCenterHost: <example-vCenter-name>, UUID: 420cb357-####-2d6f-####-2f0e108749d9, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-####, VirtualCenterHost: <example-vCenter-name>]]
  Normal   ExternalProvisioning  62s (x4 over 1m)  persistentvolume-controller   waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
{"level":"info","time":"[timestamp]","caller":"syncer/fullsync.go:41","msg":"FullSync: start"}{"level":"warn","time":"[timestamp]","caller":"syncer/fullsync.go:433","msg":"could not find any volume which is present in both k8s and in CNS"}{"level":"info","time":"[timestamp]","caller":"syncer/fullsync.go:276","msg":"FullSync: fullSyncDeleteVolumes could not find any volume which is not present in k8s and needs to be checked for volume deletion."}{"level":"info","time":"[timestamp]","caller":"syncer/fullsync.go:160","msg":"FullSync: end"}
[timestamp] 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"example-namespace", Name:"example-pvc-name", UID:"d8dd60ee-####-422f-####-dd4276f2912e", APIVersion:"v1", ResourceVersion:"884038317", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "example-namespace/example-pvc-name"
[timestamp] 1 connection.go:186] GRPC response: {}
[timestamp] 1 connection.go:187] GRPC error: rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-#### [VirtualCenterHost: example-vCenter-FQDN, UUID: 42220481-####-5b62-####-17e69eb2a91d, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-####, VirtualCenterHost: example-vCenter-FQDN]]
In a multi-AZ environment where Pod and PVC placement is not restricted to a specific zone, the CSI driver must identify a datastore with shared visibility across all cluster nodes. In vSphere CSI 3.1.0 and later, topology-aware requests specifically require that a datastore be accessible to every host within a given topology segment (e.g., a tagged Cluster or Datacenter).
Provisioning fails when the vSphere CSI Plug-in lacks a Topology-Aware configuration. If the driver is not configured with the appropriate topology-categories, or if the underlying vSphere objects (Datacenters, Clusters, or Hosts) lack corresponding vSphere tags, the provisioner cannot determine which datastores are "shared" within the required boundary. Essentially, the driver cannot validate zonal accessibility because the mapping between Kubernetes nodes and the vSphere infrastructure topology has not been established.
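One way to establish that mapping is to create the tag categories named in topology-categories (here k8s-region and k8s-zone, matching the configuration shown further below) and attach the resulting tags to the Datacenter and Cluster objects that host the Kubernetes nodes. The govc commands below are only a minimal sketch under those assumptions; the object paths and the region and zone names are placeholders to adjust for your inventory:

# Create tag categories that match the names used in topology-categories
govc tags.category.create -t Datacenter -t ClusterComputeResource k8s-region
govc tags.category.create -t Datacenter -t ClusterComputeResource k8s-zone

# Create the tags and attach them to the inventory objects that define the topology boundary
govc tags.create -c k8s-region region-1
govc tags.create -c k8s-zone zone-a
govc tags.attach -c k8s-region region-1 /example-datacenter
govc tags.attach -c k8s-zone zone-a /example-datacenter/host/example-cluster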
To resolve this issue, apply the following configuration requirements for the topology-aware vSphere CSI Plug-in. These steps ensure that the CSI provisioner can identify datastores accessible to the nodes where workloads are scheduled.
The steps below provide a high-level overview. For detailed instructions, refer to the official documentation: Deploying vSphere Container Storage Plug-in with Topology.
In the [Labels] section of the vSphere configuration file used by the CSI driver, define the tag categories that describe your topology:

[Labels]
topology-categories = "k8s-region, k8s-zone"

Next, inspect the vsphere-csi-controller deployment and verify that the csi-provisioner container carries the required topology arguments (see the deployment fragment after the argument list below):

kubectl get deployment vsphere-csi-controller -n vmware-system-csi -o yaml
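For reference, the [Labels] entry shown above sits alongside the driver's existing [Global] and [VirtualCenter] sections. The following is only a minimal sketch of such a configuration file; the cluster ID, vCenter address, credentials, and datacenter name are placeholder assumptions:

[Global]
cluster-id = "example-cluster-id"

[VirtualCenter "example-vCenter-FQDN"]
user = "example-user@vsphere.local"
password = "example-password"
datacenters = "example-datacenter"

[Labels]
topology-categories = "k8s-region, k8s-zone"

After changing this file, the secret that carries it (typically vsphere-config-secret in the vmware-system-csi namespace) generally needs to be recreated and the controller pods restarted for the new settings to take effect.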
Required arguments (set on the csi-provisioner container):
--feature-gates=Topology=true
--strict-topology
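As an illustration, the relevant fragment of the vsphere-csi-controller Deployment would look roughly as follows; the image version and the remaining arguments vary by release and are shown here only as placeholders:

containers:
  - name: csi-provisioner
    image: registry.k8s.io/sig-storage/csi-provisioner:v3.x.x   # placeholder version
    args:
      - "--v=4"
      - "--timeout=300s"
      - "--csi-address=$(ADDRESS)"
      - "--feature-gates=Topology=true"   # enables topology-aware provisioning
      - "--strict-topology"               # limits provisioning to the selected node's topology segment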
kubectl get nodes --show-labels | grep topology.csi.vmware.com
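If the topology configuration has been picked up, each node should carry labels derived from the configured categories. The values below are placeholders for whichever region and zone tags are attached to the node's Datacenter or Cluster:

<example-worker-node>   Ready   <none>   45d   v1.26.x   ...,topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-a,...

If these labels are missing, the nodes may have registered with the CSI driver before the topology configuration was applied; in that case it may be necessary to restart the vsphere-csi-node pods so the nodes re-register with the new topology information.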