The following workaround replaces the default key with a key that you generate. Steps 1-9 are manual steps. For step 9, a script is available and attached (see the end of step 9).
Log on to the Supervisor cluster as the root user. To do so, SSH to the vCenter Server and run the following commands:
root@sc1-10-168-193-150 [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py
Read key from file

Connected to PSQL

Cluster: domain-c9:47295692-e9f1-4ab5-a4ab-ffb9eb9a48d7
IP: 10.168.203.96
PWD: kl4=J#BF4!{cc}9?
------------------------------------------------------------
root@sc1-10-168-193-150 [ ~ ]# ssh root@10.168.203.96

Welcome to Supervisor on vSphere Zones!
Password:
Last login: Tue Feb 28 02:16:28 2023 from 10.168.193.150
 20:58:15 up 3 days, 20:58, 0 users, load average: 2.29, 2.51, 2.54
root@422f7328bad0c503d4afd599e3b635a5 [ ~ ]#
Check the "private_key.pem" file inside the Harbor core pod on the Supervisor cluster. First, find the Harbor namespace, which begins with "vmware-system-registry-" and ends with a sequence of digits. Next, find the Harbor core pod inside that namespace and print the contents of private_key.pem. If the contents are identical to tls.key, the embedded Harbor is affected by the security issue; otherwise, it is not.
# harbor_ns=`kubectl get ns | grep vmware-system-registry- | awk '{print $1}'`
# echo $harbor_ns
vmware-system-registry-975528592
# harbor_core_pod=`kubectl get po -n $harbor_ns | grep harbor-core | awk '{print $1}'`
# echo $harbor_core_pod
harbor-975528592-harbor-core-6d484d56f6-h7785
# kubectl exec $harbor_core_pod -n $harbor_ns -- cat /etc/core/private_key.pem
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA3xlUJs2b/aI2NLoy4OIQ+dn/yMb/O99iKDRyZKpH8rSOmS+o
.......
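Rather than eyeballing the PEM text, the comparison can be scripted. This is a hedged helper sketch (not part of the KB; the function name keys_match is ours): it compares the key moduli, which is robust to formatting differences between "BEGIN PRIVATE KEY" and "BEGIN RSA PRIVATE KEY" encodings.

```shell
# keys_match: hypothetical helper. Exits 0 when the two PEM private keys are the
# same RSA key (identical modulus), non-zero otherwise.
keys_match() {
  m1=$(openssl rsa -in "$1" -noout -modulus 2>/dev/null) || return 2
  m2=$(openssl rsa -in "$2" -noout -modulus 2>/dev/null) || return 2
  # Both reads succeeded; the keys match only if the moduli are equal.
  [ -n "$m1" ] && [ "$m1" = "$m2" ]
}
```

For example, save the pod's key with `kubectl exec $harbor_core_pod -n $harbor_ns -- cat /etc/core/private_key.pem > pod_key.pem` and run `keys_match pod_key.pem tls.key`; success (exit 0) means the embedded Harbor is affected.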
If you want to use your own TLS key and certificate pair, convert tls.key to an RSA key and copy both files to the Supervisor. Make sure that tls.rsa begins with "-----BEGIN RSA PRIVATE KEY-----", because some older OpenSSL versions behave differently.
$ openssl version
LibreSSL 2.8.3
$ openssl rsa -in tls.key -out tls.rsa
writing RSA key
$ head -1 tls.rsa
-----BEGIN RSA PRIVATE KEY-----
$ scp tls.rsa tls.crt root@10.168.203.96:~
Password:
$ ssh root@10.168.203.96
root@422f7328bad0c503d4afd599e3b635a5 [ ~ ]# ls
tls.crt  tls.rsa
Alternatively, you can generate a self-signed tls.crt and tls.key pair directly on the Supervisor and then convert tls.key to an RSA key. Verify that tls.rsa begins with "-----BEGIN RSA PRIVATE KEY-----". Note: the default openssl package on Photon OS 3 is old and does not emit the RSA key in the expected format, so we use nxtgn-openssl instead.
# tdnf install nxtgn-openssl
Installing:
nxtgn-openssl    x86_64    1.1.1o-4.ph3    photon-updates    4.17M 4377562

Total installed size: 4.17M 4377562
Is this ok [y/N]: y
# nxtgn-openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key
...... (enter all the required fields)
# ls
tls.crt  tls.key
# nxtgn-openssl rsa -in tls.key -out tls.rsa
writing RSA key
# ls
tls.crt  tls.key  tls.rsa
# head -1 tls.rsa
-----BEGIN RSA PRIVATE KEY-----
Create a secret with the above files inside the Harbor namespace:
# harbor_ns=`kubectl get ns | grep vmware-system-registry- | awk '{print $1}'`
# echo $harbor_ns
vmware-system-registry-975528592
# kubectl create secret tls harbor-core-secret --key="tls.rsa" --cert="tls.crt" -n $harbor_ns
secret/harbor-core-secret created
To upgrade Harbor, we also need the "patch" privilege for Harbor Persistent Volume Claims (PVCs). To grant it, edit the clusterrole:
# kubectl edit clusterrole vmware-registry-manager-clusterrole
Type "i" to insert, add a list item "- patch" under the persistentvolumeclaims resource, then press ESC and type ":wq" to save the change.
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - create
  - watch
  - patch    <== add this line
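For a non-interactive alternative, the same change can be sketched as a JSON patch. This is an illustration only, and the rule index passed in is a placeholder: inspect the clusterrole first (`kubectl get clusterrole vmware-registry-manager-clusterrole -o yaml`) and substitute the zero-based index of the rule that lists persistentvolumeclaims.

```shell
# add_pvc_patch_verb: hypothetical helper that appends the "patch" verb to the
# verbs list of the rule at the given index. The rule index must be verified
# against the live clusterrole before use.
add_pvc_patch_verb() {
  rule_index=$1
  kubectl patch clusterrole vmware-registry-manager-clusterrole --type=json \
    -p "[{\"op\":\"add\",\"path\":\"/rules/${rule_index}/verbs/-\",\"value\":\"patch\"}]"
}
```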
Upgrade the Harbor instance with the new secret. The "helm upgrade" command must be run from the vmware-registry-controller-manager pod, so first find the registry-controller pod name and the installed Harbor release name.
# registry_pod=`kubectl get po -n vmware-system-registry -o=jsonpath='{.items[0].metadata.name}'`
# echo $registry_pod
vmware-registry-controller-manager-67c9cdb964-p7lz5
# installed_harbor=`kubectl exec $registry_pod -n vmware-system-registry -it -c admin-agent -- /usr/local/helm3/linux-amd64/helm list | tail -n1 | awk '{print $1}'`
# echo $installed_harbor
harbor-975528592
Run the "helm upgrade" command from the registry-controller pod to upgrade Harbor with the newly created secret.
# kubectl exec $registry_pod -n vmware-system-registry -it -c admin-agent -- /usr/local/helm3/linux-amd64/helm upgrade $installed_harbor /usr/local/harbor-helm/ --set core.secretName=harbor-core-secret --reuse-values
Release "harbor-975528592" has been upgraded. Happy Helming!
NAME: harbor-975528592
......
Check the ReplicaSets in the Harbor namespace. There are duplicate ReplicaSets, and the old ones are not deleted automatically (reference: https://github.com/helm/charts/issues/3450). Manually delete the three duplicate ReplicaSets. Make sure you substitute your own ReplicaSet names in the command, and delete the older ones (larger AGE values).
# kubectl get replicaset -n $harbor_ns
NAME                                            DESIRED   CURRENT   READY   AGE
harbor-975528592-harbor-core-6b87549fd5         1         1         1       2m22s
harbor-975528592-harbor-core-6d484d56f6         0         0         0       20h     <=== delete this one
harbor-975528592-harbor-jobservice-754cf97498   1         1         0       2m22s
harbor-975528592-harbor-jobservice-cd8dfb687    1         1         1       20h     <=== delete this one
harbor-975528592-harbor-nginx-795dc6b5bf        1         1         1       20h
harbor-975528592-harbor-portal-6d6bf8b6d8       1         1         1       20h
harbor-975528592-harbor-registry-64b6f454f9     1         1         1       20h     <=== delete this one
harbor-975528592-harbor-registry-994c54776      1         1         0       2m21s
# kubectl delete replicaset harbor-975528592-harbor-core-6d484d56f6 harbor-975528592-harbor-jobservice-cd8dfb687 harbor-975528592-harbor-registry-64b6f454f9 -n $harbor_ns
replicaset.apps "harbor-975528592-harbor-core-6d484d56f6" deleted
replicaset.apps "harbor-975528592-harbor-jobservice-cd8dfb687" deleted
replicaset.apps "harbor-975528592-harbor-registry-64b6f454f9" deleted
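To make the older duplicates easier to spot before deleting anything, the listing can be sorted oldest-first. A small sketch (the helper name is ours; --sort-by is a standard kubectl flag):

```shell
# list_rs_by_age: print the ReplicaSets in the given namespace oldest-first,
# so the stale duplicates (largest AGE) appear at the top for review.
list_rs_by_age() {
  kubectl get replicaset -n "$1" --sort-by=.metadata.creationTimestamp
}
```

For example: `list_rs_by_age $harbor_ns`.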
If any pod in the Harbor namespace is stuck in the "Pending" state, you may delete it to force a re-creation. After that, all the ReplicaSets inside the Harbor namespace should be ready and the Harbor status should be "healthy".
# kubectl get po -n $harbor_ns | grep Pending
harbor-975528592-harbor-registry-994c54776-l4nsr   0/2   Pending   0   9m5s
# kubectl delete po harbor-975528592-harbor-registry-994c54776-l4nsr -n $harbor_ns
pod "harbor-975528592-harbor-registry-994c54776-l4nsr" deleted
# kubectl get replicaset -n $harbor_ns
NAME                                            DESIRED   CURRENT   READY   AGE
harbor-975528592-harbor-core-6b87549fd5         1         1         1       14m
harbor-975528592-harbor-jobservice-754cf97498   1         1         1       14m
harbor-975528592-harbor-nginx-795dc6b5bf        1         1         1       20h
harbor-975528592-harbor-portal-6d6bf8b6d8       1         1         1       20h
harbor-975528592-harbor-registry-994c54776      1         1         1       14m
# kubectl describe registry -A | grep -A 1 'Health Status'
  Health Status:
    Status:  healthy
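The health check can also be polled until it succeeds. An optional sketch, assuming the `kubectl describe registry` output format shown above (the helper name and the ~5 minute timeout are ours):

```shell
# wait_registry_healthy: poll the registry health status every 10 seconds,
# returning 0 once it reports "healthy", or 1 after 30 attempts.
wait_registry_healthy() {
  for _ in $(seq 1 30); do
    if kubectl describe registry -A | grep -A 1 'Health Status' | grep -qw healthy; then
      return 0
    fi
    sleep 10
  done
  return 1
}
```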
As part of vRegistry's integration, pull and push robot accounts are created for each user namespace; they are used by Supervisor pod deployments as well as guest clusters. These robot accounts are generated using the public/private key pair and must be regenerated once the key pair has changed. To force a refresh of the robot accounts, make the following edits on every Project resource:
- remove the entire "ImagePullSecretName" and "ImagePushSecretName" lines
- change "currentMode" from "running" to "starting"

then press ESC and type ":wq" to save the change.
# kubectl get projects -A
NAMESPACE          NAME               AGE
gc-sso-rbac-test   gc-sso-rbac-test   5d
test-ns            test-ns            3d2h
# kubectl edit project test-ns -n test-ns
......
status:
  ImagePullSecretName: test-ns-default-image-pull-secret    <=== remove the entire line
  ImagePushSecretName: test-ns-default-image-push-secret    <=== remove the entire line
  currentMode: running    <=== modify to "starting"
  projectID: 4
  pullRobotAccountID: 14
  pushRobotAccountID: 15
  repoCount: 1
Verify that new secrets have been created:
# kubectl get secret -n test-ns
NAME                                TYPE                             DATA   AGE
test-ns-default-image-pull-secret   kubernetes.io/dockerconfigjson   1      18s
test-ns-default-image-push-secret   kubernetes.io/dockerconfigjson   1      18s
Customers can run the attached script update_harbor_projects.sh from the Supervisor Control Plane VM to automate step 9 for all projects.
# chmod u+x update_harbor_projects.sh
# ./update_harbor_projects.sh
The output printed by the script helps identify the projects for which the update failed. Even if the script fails to update the status of a particular project, it continues processing the remaining projects; any projects it could not update can then be updated manually.
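For illustration, the per-project update that the manual edits in step 9 perform can be sketched as a JSON patch; the attached update_harbor_projects.sh is authoritative, and this helper is hypothetical. The field paths come from the `kubectl edit project` output in step 9. Depending on the kubectl version and how the Project CRD serves .status, the patch may additionally require kubectl's --subresource=status flag.

```shell
# refresh_project: hypothetical sketch -- remove the two robot-account secret
# fields and flip currentMode back to "starting" so the controller regenerates
# the robot accounts. Prints a message and keeps going on failure.
refresh_project() {
  ns=$1; name=$2
  kubectl -n "$ns" patch project "$name" --type=json -p '[
    {"op":"remove","path":"/status/ImagePullSecretName"},
    {"op":"remove","path":"/status/ImagePushSecretName"},
    {"op":"replace","path":"/status/currentMode","value":"starting"}
  ]' || echo "failed to update $ns/$name"
}
```

Looping over every project would then look like: `kubectl get projects -A --no-headers | while read -r ns name _; do refresh_project "$ns" "$name"; done`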