When new clusters are created, the provisioning of additional worker and control plane nodes is observed. Even after the cluster has stabilized, these nodes may enter a continuous recreation loop.
The cluster reports the "RollingUpdateInProgress" state, as shown below:
- lastTransitionTime: "####-##-####:##:###"
  message: Rolling 1 replicas with outdated spec (1 replicas up to date)
  reason: RollingUpdateInProgress
  severity: Warning
  status: "False"
  type: Ready
This issue arises when the label {managed-by: vmware-vRegistry} is incorrectly applied to both the internal TLS secrets and the harbor-tls secret, which holds the ca.crt data for the customer certificate. The label should be applied only to the harbor-tls secret: it is the secret that is filtered and appended to the cluster configuration during cluster creation when Harbor is used as a self-hosted registry.
As a result of this incorrect label assignment, the system triggers the creation of new KubeadmConfigTemplates and the cluster updates its spec, causing it to enter the "RollingUpdateInProgress" state.
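You can confirm the rollout loop from the management context. The cluster and namespace names below are illustrative; substitute your own:

```shell
# Inspect the cluster's Ready condition (cluster/namespace names are examples)
kubectl get cluster my-cluster -n my-namespace \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'

# A growing list of KubeadmConfigTemplates in the cluster namespace
# also indicates repeated spec rollouts
kubectl get kubeadmconfigtemplate -n my-namespace
```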
Fix:
Upgrade the Harbor Supervisor Service to a fixed version, v2.11.2+vmware.1-tkg.2 or later.
Workaround (recovery steps):
1. Identify the secrets that incorrectly carry the managed-by=vmware-vRegistry label (only harbor-tls should have it):
# kubectl get secrets -l managed-by=vmware-vRegistry -n svc-harbor-domain-c11
NAME                             TYPE                DATA   AGE
harbor-ca-key-pair               kubernetes.io/tls   3      3h9m
harbor-core-internal-tls         kubernetes.io/tls   3      6m57s
harbor-jobservice-internal-tls   kubernetes.io/tls   3      15m
harbor-portal-internal-tls       kubernetes.io/tls   3      15m
harbor-registry-internal-tls     kubernetes.io/tls   3      15m
harbor-tls                       kubernetes.io/tls   3      3h9m
harbor-token-service             kubernetes.io/tls   3      3h9m
harbor-trivy-internal-tls        kubernetes.io/tls   3      15m
2. Create an overlay YAML file named remove-internal-tls-label-overlay.yaml (the name used in step 3).
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=lambda indexOrKey, left, right: left["kind"] == "Certificate" and not left["metadata"]["name"].startswith("harbor-tls"), expects="1+"
---
spec:
  #@overlay/match when="1+"
  secretTemplate:
    labels:
      #@overlay/match when="1+"
      #@overlay/remove
      managed-by: vmware-vRegistry
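Optionally, you can sanity-check the overlay locally with the ytt CLI before creating the secret. The sample Certificate below is purely illustrative:

```shell
# Render the overlay against a hypothetical internal-TLS Certificate;
# the managed-by label should be absent from the output.
cat > /tmp/sample-cert.yaml <<'EOF'
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: harbor-core-internal-tls
spec:
  secretTemplate:
    labels:
      managed-by: vmware-vRegistry
EOF
ytt -f remove-internal-tls-label-overlay.yaml -f /tmp/sample-cert.yaml
```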
3. Create the overlay secret in the namespace of the Harbor Supervisor Service pkgi resource.
Locate the pkgi resource of the Harbor Supervisor Service:
# kubectl get pkgi -A | grep harbor
vmware-system-supervisor-services   svc-harbor.tanzu.vmware.com   harbor.tanzu.vmware.com   2.11.2+vmware.1-tkg.1   Reconcile succeeded   9m55s
# kubectl create secret generic remove-internal-tls-label-overlay -o yaml --from-file=remove-internal-tls-label-overlay.yaml -n vmware-system-supervisor-services
4. Annotate the Harbor package install (pkgi) with the overlay secret.
# kubectl annotate pkgi svc-harbor.tanzu.vmware.com ext.packaging.carvel.dev/ytt-paths-from-secret-name.0=remove-internal-tls-label-overlay -n vmware-system-supervisor-services
packageinstall.packaging.carvel.dev/svc-harbor.tanzu.vmware.com annotated
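You can confirm the annotation was recorded on the pkgi before waiting for reconciliation:

```shell
# Show the ytt-paths annotation added in the previous step
kubectl get pkgi svc-harbor.tanzu.vmware.com -n vmware-system-supervisor-services \
  -o jsonpath='{.metadata.annotations.ext\.packaging\.carvel\.dev/ytt-paths-from-secret-name\.0}'
```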
5. Check that the pkgi reconciles successfully.
# kubectl get pkgi -A | grep harbor
6. Validate the secret after the patch.
Once the patch is applied, only the harbor-tls secret should retain the label. Use the following command to validate:
# kubectl get secrets -l managed-by=vmware-vRegistry -n svc-harbor-domain-c11
NAME         TYPE                DATA   AGE
harbor-tls   kubernetes.io/tls   3      14m
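Finally, confirm that the rolling update completes and the cluster settles. The cluster and namespace names below are illustrative:

```shell
# The Ready condition should return to "True" once the rollout finishes
kubectl get cluster my-cluster -n my-namespace \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```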