During a Supervisor upgrade, the process may become stuck with the following error:
The validatingwebhookconfigurations "storage-quota-validating-webhook-configuration" is invalid:
metadata.resourceVersion: Invalid value: 0x0: must be specified for an update.
The following message is also logged on the Supervisor control plane node in /var/log/vmware/upgrade-ctl-compupgrade.log:
2025-01-01T12:00:00.000Z INFO comphelper: Running ['/usr/local/bin/etcdctl', 'put', '/vmware/wcp/upgrade/components/status', '--command-timeout=30s', '--dial-timeout=5s', '{"controller_id": "4239a0e729b45b705922c9bfa5ed4353", "state": "error", "messages": [{"level": "error", "message": "Component StoragePolicyQuotaUpgrade failed: Failed to run command: [\'kubectl\', \'apply\', \'-f\', \'/usr/lib/vmware-wcp/objects/storage-policy-quota/manifests/storage_quota_webhook_deployment.yaml\', \'--record\'] ret=1 out=service/storage-quota-webhook-service unchanged\\ndeployment.apps/storage-quota-webhook configured\\ncertificate.cert-manager.io/storage-quota-serving-cert unchanged\\nissuer.cert-manager.io/storage-quota-selfsigned-issuer unchanged\\n err=Flag --record has been deprecated, --record will be removed in the future\\nThe validatingwebhookconfigurations \\"storage-quota-validating-webhook-configuration\\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update\\n"}, {"level": "error", "message": "Component upgrade failed."}], "started_at": "2025-01-01T12:00.00.000000Z",
This issue occurs when the storage quota webhook configuration has been modified manually; it does not occur if the object is left untouched. When you examine the webhook configuration, you will notice a mismatch between the resourceVersion value recorded in the kubectl.kubernetes.io/last-applied-configuration annotation and the object's current resourceVersion:
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io storage-quota-validating-webhook-configuration -o yaml
metadata:
  annotations:
    ...
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"admissionregistration.k8s.io/v1",
      "kind":"ValidatingWebhookConfiguration" ..........
      "name":"storage-quota-validating-webhook-configuration",
      "resourceVersion":"0000000000",
      "uid":"##########"}, ..........
  ...
  labels:
    control-plane: storage-quota-webhook
  name: storage-quota-validating-webhook-configuration
  resourceVersion: "1111111111"
In this example, the following values do not match:
metadata.annotations.kubectl.kubernetes.io/last-applied-configuration.resourceVersion: "0000000000"
metadata.resourceVersion: "1111111111"
This resourceVersion mismatch causes the Supervisor upgrade to fail.
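The mismatch can also be checked programmatically. The following is a minimal sketch, assuming jq is installed and the object has been exported with kubectl get ... -o json (the webhook.json filename and the compare_rv helper are illustrative, not part of the product):

```shell
# compare_rv prints the object's live resourceVersion next to the one recorded
# inside the kubectl.kubernetes.io/last-applied-configuration annotation.
# Export the object first, for example:
#   kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io \
#     storage-quota-validating-webhook-configuration -o json > webhook.json
compare_rv() {
  # Live value on the object itself.
  live=$(jq -r '.metadata.resourceVersion' "$1")
  # Value embedded in the last-applied-configuration annotation (a JSON string),
  # parsed with fromjson; prints "absent" if the field is not present.
  recorded=$(jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]
                    | fromjson | .metadata.resourceVersion // "absent"' "$1")
  echo "live=$live recorded=$recorded"
}
```

If the two printed values differ (as in the example above), the object is in the state that breaks the upgrade.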
To resolve the issue, first take a backup of the webhook configuration:
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io storage-quota-validating-webhook-configuration -o yaml > storage-quota-validating-webhook-configuration-backup.yaml
Next, edit the webhook configuration and remove the resourceVersion field from the annotation with the key kubectl.kubernetes.io/last-applied-configuration:
kubectl edit validatingwebhookconfigurations.admissionregistration.k8s.io storage-quota-validating-webhook-configuration
metadata:
  annotations:
    ...
    kubectl.kubernetes.io/last-applied-configuration: |
      ...
      "resourceVersion":"0000000000", <-- Remove this line
      "uid":"##########"}, ..........
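If you prefer a non-interactive fix, the same change can be sketched with jq. This is an assumption-laden alternative, not a VMware-documented procedure: it assumes jq is available, and the strip_rv helper name and webhook.json filename are illustrative.

```shell
# strip_rv removes the stale metadata.resourceVersion entry from the
# last-applied-configuration annotation of an exported object and prints the
# corrected JSON; everything else in the object is left unchanged.
strip_rv() {
  jq '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"] |=
        (fromjson | del(.metadata.resourceVersion) | tojson)' "$1"
}

# Example wiring (needs cluster access, so it is not executed here):
#   kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io \
#     storage-quota-validating-webhook-configuration -o json > webhook.json
#   strip_rv webhook.json | kubectl replace -f -
```

Using kubectl replace with the cleaned JSON applies the change in one step, at the cost of rewriting the whole object; kubectl edit achieves the same result interactively.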
Once the resourceVersion entry has been removed from the annotation, retry the Supervisor upgrade from the vCenter GUI. The upgrade should now proceed past this issue.