VMware advises customers to review the issue and apply the workaround described below.
Notes on the workaround:
- The workaround does not persist after a patched Supervisor cluster is upgraded to a newer cluster version; you will need to re-apply the workaround after each upgrade.
- If you have multiple Supervisors with more than one embedded Harbor registry enabled, the workaround must be applied to each of them.
Workaround:
The following workaround steps replace the default key with a key that you generate. Steps #1 - #9 are manual steps. For step #9, a script is available and attached (see the end of step #9).
Log onto the Supervisor cluster as the root user. To do that, SSH to the vCenter Server and run the following commands:
root@sc1-10-168-193-150 [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py
Read key from file
Connected to PSQL
Cluster: domain-c9: *******-******-*****-a4ab-*******
IP: 10.**.***.****
PWD: *********
------------------------------------------------------------
root@sc1-10-**-***-*** [ ~ ]# ssh root@10.***.***.***
Welcome to Supervisor on vSphere Zones!
Password:
Last login: Tue Feb 28 02:16:28 2023 from 10.**.**.***
 20:58:15 up 3 days, 20:58, 0 users, load average: 2.29, 2.51, 2.54
root@************* [ ~ ]#
Check the "private_key.pem" file inside the Harbor core pod on the Supervisor cluster. First, find the Harbor namespace, which begins with "vmware-system-registry-" and ends with a sequence of digits. Next, find the Harbor core pod inside that namespace and print the contents of private_key.pem. If the contents are identical to tls.key, the embedded Harbor is affected by the security issue; otherwise, it is not.
# harbor_ns=`kubectl get ns | grep vmware-system-registry- | awk '{print $1}'`
# echo $harbor_ns
vmware-system-registry-*******
# harbor_core_pod=`kubectl get po -n $harbor_ns | grep harbor-core | awk '{print $1}'`
# echo $harbor_core_pod
harbor-******-harbor-core-*******
# kubectl exec $harbor_core_pod -n $harbor_ns -- cat /etc/core/private_key.pem
-----BEGIN RSA PRIVATE KEY-----
MIIJK******************************o
.......
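Comparing the two PEM files by eye is error-prone. A small helper like the following can compare RSA keys by a digest of their modulus instead, so formatting differences do not hide a match. This is a sketch, not part of the product; `key_fingerprint` and the paths in the usage comment are hypothetical names:

```shell
# Hypothetical helper: print a stable fingerprint for an RSA private key.
# Two keys with the same modulus are the same key, regardless of PEM
# formatting differences.
key_fingerprint() {
  openssl rsa -in "$1" -noout -modulus | sha256sum | awk '{print $1}'
}

# Illustrative usage (file names are examples only):
#   kubectl exec $harbor_core_pod -n $harbor_ns -- \
#     cat /etc/core/private_key.pem > /tmp/pod_key.pem
#   key_fingerprint /tmp/pod_key.pem
#   key_fingerprint tls.key
# Identical fingerprints mean the pod still uses the shared default key.
```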
If you want to use your own TLS key and certificate pair, convert tls.key to an RSA key and copy both files to the Supervisor. Make sure that tls.rsa begins with "-----BEGIN RSA PRIVATE KEY-----", because some older openssl versions behave differently.
$ openssl version
LibreSSL 2.8.3
$ openssl rsa -in tls.key -out tls.rsa
writing RSA key
$ head -1 tls.rsa
-----BEGIN RSA PRIVATE KEY-----
$ scp tls.rsa tls.crt root@10.***.***.****:~
Password:
$ ssh root@10.***.***.***
root@****************** [ ~ ]# ls
tls.crt tls.rsa
Alternatively, you can generate a self-signed tls.crt and tls.key pair directly on the Supervisor and then convert tls.key to an RSA key. Verify that tls.rsa begins with "-----BEGIN RSA PRIVATE KEY-----". Note: the default openssl package on Photon OS 3 is old and does not emit the RSA key in the expected format, so use nxtgn-openssl instead.
# tdnf install nxtgn-openssl
Installing:
nxtgn-openssl x86_64 1.1.1o-4.ph3 photon-updates 4.17M 4377562
Total installed size: 4.17M 4377562
Is this ok [y/N]: y
# nxtgn-openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key
...... (enter all the required fields)
# ls
tls.crt tls.key
# nxtgn-openssl rsa -in tls.key -out tls.rsa
writing RSA key
# ls
tls.crt tls.key tls.rsa
# head -1 tls.rsa
-----BEGIN RSA PRIVATE KEY-----
Create a secret from the files above inside the Harbor namespace:
# harbor_ns=`kubectl get ns | grep vmware-system-registry- | awk '{print $1}'`
# echo $harbor_ns
vmware-system-registry-*******
# kubectl create secret tls harbor-core-secret --key="tls.rsa" --cert="tls.crt" -n $harbor_ns
secret/harbor-core-secret created
To upgrade Harbor, the registry manager also needs the "patch" privilege on Harbor Persistent Volume Claims (PVCs). To grant it, edit the clusterrole:
# kubectl edit clusterrole vmware-registry-manager-clusterrole
Type "i" to insert, and add a list item "- patch" under the verbs for the persistentvolumeclaims resource. Type "ESC" followed by ":wq" to save the change.
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - create
  - watch
  - patch    <== add this line
Upgrade the Harbor instance with the new secret. The "helm upgrade" command must be run from the vmware-registry-controller-manager pod, so first find the registry-controller pod name and the installed Harbor release name.
# registry_pod=`kubectl get po -n vmware-system-registry -o=jsonpath='{.items[0].metadata.name}'`
# echo $registry_pod
vmware-registry-controller-manager-*********
# installed_harbor=`kubectl exec $registry_pod -n vmware-system-registry -it -c admin-agent -- /usr/local/helm3/linux-amd64/helm list | tail -n1 | awk '{print $1}'`
# echo $installed_harbor
harbor-975528592
Run the "helm upgrade" command from the registry-controller pod to upgrade Harbor with the newly created secret:
# kubectl exec $registry_pod -n vmware-system-registry -it -c admin-agent -- /usr/local/helm3/linux-amd64/helm upgrade $installed_harbor /usr/local/harbor-helm/ --set core.secretName=harbor-core-secret --reuse-values
Release "harbor-975528592" has been upgraded. Happy Helming!
NAME: harbor-975528592
......
Check the ReplicaSets in the Harbor namespace. There are duplicate ReplicaSets, and the old ones are not deleted automatically (reference: https://github.com/helm/charts/issues/3450). Manually delete the three duplicate ReplicaSets. Make sure you substitute the ReplicaSet names from your own environment in the command, and delete the old ones, that is, those with the larger AGE values.
# kubectl get replicaset -n $harbor_ns
NAME                                            DESIRED   CURRENT   READY   AGE
harbor-975528592-harbor-core-6b87549fd5         1         1         1       2m22s
harbor-975528592-harbor-core-6d484d56f6         0         0         0       20h     <=== delete this one
harbor-975528592-harbor-jobservice-754cf97498   1         1         0       2m22s
harbor-975528592-harbor-jobservice-cd8dfb687    1         1         1       20h     <=== delete this one
harbor-975528592-harbor-nginx-795dc6b5bf        1         1         1       20h
harbor-975528592-harbor-portal-6d6bf8b6d8       1         1         1       20h
harbor-975528592-harbor-registry-64b6f454f9     1         1         1       20h     <=== delete this one
harbor-975528592-harbor-registry-994c54776      1         1         0       2m21s
# kubectl delete replicaset harbor-975528592-harbor-core-6d484d56f6 harbor-975528592-harbor-jobservice-cd8dfb687 harbor-975528592-harbor-registry-64b6f454f9 -n $harbor_ns
replicaset.apps "harbor-975528592-harbor-core-6d484d56f6" deleted
replicaset.apps "harbor-975528592-harbor-jobservice-cd8dfb687" deleted
replicaset.apps "harbor-975528592-harbor-registry-64b6f454f9" deleted
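Picking the stale ReplicaSets out of the listing by eye is easy to get wrong. As a sketch only (the helper name `find_stale_replicasets` is hypothetical, and it assumes the fresh ReplicaSets are minutes old while the stale duplicates show hour- or day-scale AGE values, as in the listing above), the selection can be done mechanically by grouping rows by component name:

```shell
# Hypothetical helper: given `kubectl get replicaset --no-headers` output on
# stdin, print the older member of each duplicated component so it can be
# reviewed and then deleted.
find_stale_replicasets() {
  awk '{
    name = $1; age = $NF
    comp = name; sub(/-[a-z0-9]+$/, "", comp)   # strip the pod-template-hash
    if (comp in seen) {
      # Two ReplicaSets share one component: the hours/days-old one is stale.
      if (age ~ /^[0-9]+(h|d)/) print name; else print seen[comp]
    } else seen[comp] = name
  }'
}

# Review the output before deleting anything:
#   kubectl get replicaset -n "$harbor_ns" --no-headers | find_stale_replicasets
```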
If any pod in the Harbor namespace is stuck in the "Pending" state, you may delete it to force re-creation. After that, all ReplicaSets inside the Harbor namespace should be ready and the Harbor status should be "healthy":
# kubectl get po -n $harbor_ns | grep Pending
harbor-975528592-harbor-registry-994c54776-l4nsr   0/2   Pending   0   9m5s
# kubectl delete po harbor-975528592-harbor-registry-994c54776-l4nsr -n $harbor_ns
pod "harbor-975528592-harbor-registry-994c54776-l4nsr" deleted
# kubectl get replicaset -n $harbor_ns
NAME                                            DESIRED   CURRENT   READY   AGE
harbor-975528592-harbor-core-6b87549fd5         1         1         1       14m
harbor-975528592-harbor-jobservice-754cf97498   1         1         1       14m
harbor-975528592-harbor-nginx-795dc6b5bf        1         1         1       20h
harbor-975528592-harbor-portal-6d6bf8b6d8       1         1         1       20h
harbor-975528592-harbor-registry-994c54776      1         1         1       14m
# kubectl describe registry -A | grep -A 1 'Health Status'
  Health Status:
    Status: healthy
As part of the vRegistry integration, pull and push robot accounts are created for each user namespace; they are used by Supervisor pod deployments as well as guest clusters. These robot accounts are generated using the public/private key pair and need to be regenerated once the key pair has changed. To force a refresh of the robot accounts, make the following edits on every Project resource:
- remove the entire "ImagePullSecretName" and "ImagePushSecretName" lines
- change "currentMode" from "running" to "starting"
Then type "ESC" followed by ":wq" to save the change.
# kubectl get projects -A
NAMESPACE          NAME               AGE
gc-sso-rbac-test   gc-sso-rbac-test   5d
test-ns            test-ns            3d2h
# kubectl edit project test-ns -n test-ns
......
status:
  ImagePullSecretName: test-ns-default-image-pull-secret   <=== remove the entire line
  ImagePushSecretName: test-ns-default-image-push-secret   <=== remove the entire line
  currentMode: running                                     <=== modify to "starting"
  projectID: 4
  pullRobotAccountID: 14
  pushRobotAccountID: 15
  repoCount: 1
Verify that new secrets have been created:
# kubectl get secret -n test-ns
NAME                                TYPE                             DATA   AGE
test-ns-default-image-pull-secret   kubernetes.io/dockerconfigjson   1      18s
test-ns-default-image-push-secret   kubernetes.io/dockerconfigjson   1      18s
Customers can run the attached script, update_harbor_projects.sh, from the Supervisor Control Plane VM; it automates step #9 for all projects.
# chmod u+x update_harbor_projects.sh
# ./update_harbor_projects.sh
The script's output helps identify any projects for which the update failed. Even if the script fails to update a particular project, it continues processing the remaining ones; projects that the script could not update can then be updated manually.
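The attached script is the supported tool; purely as an illustration of the kind of loop it performs, the per-project edit from step #9 can also be expressed non-interactively with `kubectl patch`. This is a sketch under assumptions: the function names are hypothetical, the field names are taken from the example above, and patching these status fields this way works only if the Project CRD does not enable the status subresource:

```shell
# Sketch only -- the attached update_harbor_projects.sh is the supported tool.
refresh_project() {  # args: namespace, project name
  kubectl patch project "$2" -n "$1" --type=json -p '[
    {"op": "remove",  "path": "/status/ImagePullSecretName"},
    {"op": "remove",  "path": "/status/ImagePushSecretName"},
    {"op": "replace", "path": "/status/currentMode", "value": "starting"}
  ]'
}

refresh_all_projects() {
  kubectl get projects -A --no-headers |
  while read -r ns name _; do
    # Keep going even if one project fails, as the attached script does.
    refresh_project "$ns" "$name" || echo "failed to update $ns/$name" >&2
  done
}
```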