# kubectl logs -n vmware-system-csi vsphere-csi-controller-<ID> csi-provisioner
[...]
I0122 13:00:00.688094 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"<NAMESPACE-NAME>", Name:"<NAME>", UID:"<UID>", APIVersion:"v1", ResourceVersion:"<ID>", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "<STORAGE-CLASS-NAME>": rpc error: code = Internal desc = failed to create pvc with name: <UUID> on namespace: <NAMESPACE-NAME> in supervisorCluster. Error: admission webhook "validate-quota.k8s.io" denied the request: Operation denied due to insufficient storage quota for storage policy
VMware vCenter 8.0 Update 3
Tanzu Kubernetes Grid Service
As part of the vCenter 8.0 Update 3 release, storage quota management for PVCs is handled using custom Kubernetes resources such as StorageQuota, StoragePolicyQuota and StoragePolicyUsage. The quotas for namespaces are handled by the storage-quota-webhook and storage-quota-controller pods.
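For reference, these quota custom resources can be inspected directly on the Supervisor Cluster. The lowercase plural resource names may vary by build, so it is safest to confirm them first, for example:
kubectl api-resources | grep -i quota
kubectl -n <affected namespace name> get storagequotas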
However, in cases such as the CSI driver restarting abruptly, the quota calculation may drift, incorrectly increasing or decreasing the recorded quota usage. This drift is reflected in the previously mentioned quota-related custom resources, and the incorrect values are then used for quota validation of all future PVC storage requests. As a result, once the calculation has drifted, subsequent PVC requests may be rejected as exceeding the quota limit even though actual usage is within it, causing PVC creation/expand failures.
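To check whether the recorded usage has drifted, the usage reported in these custom resources can be compared with the storage actually requested by PVCs in the namespace. The field layout inside the quota resources may differ between builds, so inspecting the full YAML output is the safest approach, for example:
kubectl -n <affected namespace name> get storagequotas -o yaml
kubectl -n <affected namespace name> get pvc -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,REQUESTED:.spec.resources.requests.storage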
This issue has been addressed, along with further improvements, in vSphere 9.0. For vSphere 8.0, please follow the workaround presented below.
Workaround
Note: Please reach out to Broadcom Support if there are any further questions or if any assistance is needed before or after proceeding with the provided steps.
1. SSH into the vCenter appliance and connect to the affected Supervisor Cluster.
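One common way to do this, which may vary by environment, is to retrieve the Supervisor control plane address and credentials on the vCenter appliance and then SSH to a Supervisor control plane node. The script path below is the usual location on recent vCenter appliances; confirm it is present before use:
/usr/lib/vmware-wcp/decryptK8Pwd.py
ssh root@<SUPERVISOR-CONTROL-PLANE-IP>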
2. Run "kubectl -n <affected namespace name> get pvc" and check the list of PVCs provided. Delete any long-time-pending PVCs. This is to make sure "reserved" quota gets reduced to 0 in StoragePolicyQuotas/StorageQuotas custom resources.
Note: PVCs should be deleted from the cluster where they were created; in other words, PVCs created from Tanzu Kubernetes Clusters should be deleted from the TKC cluster, and any PVCs created directly on the Supervisor Cluster should be deleted from the Supervisor Cluster.
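For example, assuming <PENDING-PVC-NAME> is one of the long-pending PVCs identified in the listing (run against the cluster where the PVC was created, as per the note above):
kubectl -n <affected namespace name> get pvc
kubectl -n <affected namespace name> delete pvc <PENDING-PVC-NAME>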
3. From vCenter, unassign the storage policies that are associated with the affected namespace and show the incorrectly calculated quota usage. Please take note of the names of these storage policies and their limits, if any are set. This step will delete the StoragePolicyQuotas and StoragePolicyUsages custom resources associated with these policies and the namespace from the Supervisor Cluster.
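To confirm the removal, the corresponding custom resources can be listed again; the lowercase plural resource names below are assumed and can be verified with kubectl api-resources as shown earlier:
kubectl -n <affected namespace name> get storagepolicyquotas,storagepolicyusages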
4. Run "kubectl -n <affected namespace name> get storagequotas". If the quota usage is found incorrect in StorageQuotas custom resource for affected namespace as well, reset the namespace level storage quota limit in vCenter to "0". This will delete the StorageQuotas custom resource associated with that namespace. Please take note of the original limit.
5. Restart the storage-quota-webhook and storage-quota-controller pods to reset the in-memory quota usage values.
kubectl -n kube-system scale deploy storage-quota-controller-manager --replicas=0
kubectl -n kube-system scale deploy storage-quota-controller-manager --replicas=3
kubectl -n kube-system scale deploy storage-quota-webhook --replicas=0
kubectl -n kube-system scale deploy storage-quota-webhook --replicas=1
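Optionally, verify that the rollouts complete and the pods are back in Running state before continuing (deployment names as per the scale commands above):
kubectl -n kube-system rollout status deploy storage-quota-controller-manager
kubectl -n kube-system rollout status deploy storage-quota-webhook
kubectl -n kube-system get pods | grep -i storage-quota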
6. From vCenter, reassign to the namespace the same storage policies that were unassigned in step 3 above, and set the original storage quota limits as previously noted. This will create new instances of the StoragePolicyQuotas, StoragePolicyUsages and StorageQuotas custom resources.
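The re-created resources can be verified with a listing similar to the earlier checks (plural resource names assumed as before):
kubectl -n <affected namespace name> get storagequotas,storagepolicyquotas,storagepolicyusages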
7. Restart the CSI driver deployment to initiate a full sync. This will update the actual space used by PVCs in the StoragePolicyUsages custom resource, which is then propagated to the StorageQuotas and StoragePolicyQuotas custom resources by the storage-quota-controller.
kubectl -n vmware-system-csi scale deploy vsphere-csi-controller --replicas=0
kubectl -n vmware-system-csi scale deploy vsphere-csi-controller --replicas=3
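Once the CSI controller pods are back up and the full sync has run, the quota usage reported for the namespace should again match the actual PVC consumption; this can be re-checked with, for example:
kubectl -n vmware-system-csi get pods | grep vsphere-csi-controller
kubectl -n <affected namespace name> get storagequotas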