Manually Renew Cluster Certificates


Article ID: 314182


Updated On:

Products

VMware Telco Cloud Automation

Issue/Introduction

The following procedure details the steps required to renew the certificates for Management and Workload Clusters deployed and managed by VMware Telco Cloud Automation (TCA) 2.1.x or 2.2. This procedure applies strictly to TCA 2.1.x and 2.2.

Environment

2.1.x, 2.2

Resolution

Prepare the Patch Scripts

  1. Download the cert-rotation-v1-tca-2.1.tar.gz patch file to a local machine.
  2. Unzip the patch file:
    tar -xzvf cert-rotation-v1-tca-2.1.tar.gz
  3. Move into the extracted cert-rotation-v1 folder and unzip the master-kubeconfig-script-main file:
    cd cert-rotation-v1
    tar -xzvf master-kubeconfig-script-main.tar.gz

 

Prepare the Telco Cloud Automation-Control Plane Appliances (TCA-CP)

The following steps must be applied to ALL TCA-CP appliances that were deployed and are being managed by TCA.

Note: Replace tca-cp-ip with the actual IP of the TCA-CP appliance in the upcoming commands. 

  1. SSH into the TCA-CP as the admin user:
    ssh admin@tca-cp-ip
  2. Run the following command to take a backup of the current appliance-management war file:
    cp /opt/vmware/hybridity-appliance-management-0.1.0.war /tmp/hybridity-appliance-management-0.1.0.war.old
  3. Switch to the root user:
    su -
  4. Replace the /opt/vmware/hybridity-appliance-management-0.1.0.war file with the one downloaded to the local machine. Run the following command from the parent directory of the cert-rotation-v1 folder on the local machine:
    scp cert-rotation-v1/master-kubeconfig-script-main/hybridity-appliance-management-0.1.0.war admin@tca-cp-ip:/opt/vmware
  5. Restart the appliance-management service:
    systemctl restart appliance-management
  6. Copy the cluster-cert-renew.tar.gz from the local machine:
    scp cert-rotation-v1/master-kubeconfig-script-main/cluster-cert-renew.tar.gz admin@tca-cp-ip:/home/admin/cluster-cert-renew.tar.gz
  7. Unzip the cluster-cert-renew.tar.gz file:
    cd /home/admin
    tar -xvf cluster-cert-renew.tar.gz

 

Prepare the TCA-Manager (TCA-M)

Note: Replace tca-m-ip with the actual IP of the TCA-M appliance in the upcoming commands. 

  1. Copy the cluster-kubeconfig-1.0-jar-with-dependencies.jar from the local machine to the TCA-M. From the parent directory of the cert-rotation-v1 folder, run:
    scp cert-rotation-v1/cluster-kubeconfig-1.0-jar-with-dependencies.jar admin@tca-m-ip:/opt/vmware/tools/cluster-kubeconfig-1.0-jar-with-dependencies.jar
  2. SSH into the TCA-M as the admin user.
  3. Create a backup of the mongo DB:
    mongodump --db hybridity --gzip --archive=/tmp/archive.gz

 

Identify Clusters with Expiring kubeconfig Certificates

  1. SSH into the TCA-M as the admin user.
  2. Generate a clusters.json file containing the kubeconfig certificate expiration dates for all the clusters deployed via TCA:
    /usr/java/jre/bin/java -cp /opt/vmware/tools/cluster-kubeconfig-1.0-jar-with-dependencies.jar GetAllCertExpiryData > clusters.json
    Optional: Filter out clusters with kubeconfig certificates expiring before a specific date by appending the date in yyyy-mm-dd format.
    Example:
    /usr/java/jre/bin/java -cp /opt/vmware/tools/cluster-kubeconfig-1.0-jar-with-dependencies.jar GetAllCertExpiryData 2025-07-21
  3. Review the kubeconfig run.log file to ensure there were no errors:
    cat /tmp/cluster-kubeconfig-tool/run.log | grep ERROR
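The log check above can be turned into a simple pass/fail gate. This is a minimal sketch; on the appliance you would point LOG at /tmp/cluster-kubeconfig-tool/run.log (the demo log written here is only so the snippet runs anywhere):

```shell
# Minimal sketch of a pass/fail gate on the tool's run.log.
# On the appliance: LOG=/tmp/cluster-kubeconfig-tool/run.log
LOG=${LOG:-/tmp/demo-run.log}

# Demo content only, so the snippet is runnable outside the appliance.
printf 'INFO fetched cluster list\nINFO wrote clusters.json\n' > "$LOG"

if grep -q ERROR "$LOG"; then
    echo "errors found in $LOG - review before continuing"
    exit 1
fi
echo "no errors in $LOG"
```

If any ERROR lines are present, the gate exits non-zero so a wrapping script can stop before the renewal steps.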

 

The clusters.json file lists each TCA-CP IP (tcaCp) along with its cluster's:
  expirationDate
  clusterId
  mgmtClusterTcaCp (the TCA-CP where the management cluster is deployed)
  mgmtClusterName

Refer to this list to identify clusters requiring certificate renewal and for the values used in the upcoming steps.
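As a sketch of how these fields can be consumed, the snippet below filters clusters whose kubeconfig certificate expires before a cutoff date. The sample file layout (a JSON array of objects with the fields listed above) is an assumption for illustration; the real clusters.json may differ:

```shell
# Hypothetical clusters.json using the fields described above (sample data).
cat > /tmp/clusters.json <<'EOF'
[
  {"clusterId": "c-100", "tcaCp": "10.0.0.5", "expirationDate": "2024-01-15",
   "mgmtClusterTcaCp": "10.0.0.5", "mgmtClusterName": "mgmt-1"},
  {"clusterId": "c-200", "tcaCp": "10.0.0.6", "expirationDate": "2026-03-01",
   "mgmtClusterTcaCp": "10.0.0.5", "mgmtClusterName": "mgmt-1"}
]
EOF

# List clusters expiring before the cutoff (ISO dates compare lexically).
cutoff="2025-07-21"
python3 - "$cutoff" <<'PY'
import json, sys
cutoff = sys.argv[1]
for c in json.load(open("/tmp/clusters.json")):
    if c["expirationDate"] < cutoff:
        print(c["clusterId"], c["tcaCp"], c["expirationDate"])
PY
```

With the sample data, only c-100 is printed, since its expirationDate falls before the cutoff.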

 

Renew Management Cluster Certificate

NOTE: Do not run the scripts in parallel, and do not run them concurrently with any other process.

  1. SSH into the TCA-CP as the admin user.
  2. Switch to the root user:
    su -
NOTE: Refer to the clusters.json file for the associated mgmt-cluster-name, control-plane-node-ip, and clusterId values in the upcoming commands.
  3. Obtain one of the control-plane node IPs by running the following commands:
    NOTE: This IP is different from the static cluster kube-vip IP.
    kubectl config use-context mgmt-cluster-name-admin@mgmt-cluster-name
    kubectl get nodes -owide | grep control-plane | awk '{print $6}' | head -n 1
  4. Renew the management cluster certificate:
    cd /home/admin/cluster-cert-renew
    bash cert-renew -mc mgmt-cluster-name -t management -ip control-plane-node-ip
    NOTE: This process can take several minutes.
  5. Print the updated kubeconfig:
    cat /opt/vmware/k8s-bootstrapper/clusterId/kubeconfig
  6. Copy the kubeconfig and save it to the TCA-M in a file named clusterName-kubeconfig.
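A quick way to confirm the renewal took effect is to inspect the expiry of the client certificate embedded in the kubeconfig. The sketch below assumes a standard kubeconfig with an inline client-certificate-data field; it builds a throwaway certificate so it can run anywhere, but on the appliance you would point KUBECONFIG_FILE at the kubeconfig printed in the preceding step:

```shell
# Throwaway cert + kubeconfig stub so the check is runnable anywhere (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null
printf 'users:\n- name: demo\n  user:\n    client-certificate-data: %s\n' \
  "$(base64 -w0 < /tmp/demo.crt)" > /tmp/demo-kubeconfig

# On the appliance: KUBECONFIG_FILE=/opt/vmware/k8s-bootstrapper/clusterId/kubeconfig
KUBECONFIG_FILE=/tmp/demo-kubeconfig

# Decode the embedded client certificate and print its expiry date.
grep 'client-certificate-data' "$KUBECONFIG_FILE" \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -enddate
```

The printed notAfter date should now be well in the future; if it still matches the old expirationDate from clusters.json, the renewal did not apply.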

 

Update the TCA-CP kubeconfig via the TCA-CP UI

  1. Log in to the TCA-CP Appliance UI (tcaCp:9443).
    NOTE: Refer to the clusters.json file for the associated tcaCp value.
  2. Go to Configuration > Kubernetes
  3. Edit the corresponding cluster record, paste the new kubeconfig (from preceding section), and save the changes.

Update the TCA-M kubeconfig

  1. SSH into the TCA-M as the admin user:
    ssh admin@tca-m-ip
  2. Update the kubeconfig in the TCA DB:
    /usr/java/jre/bin/java -cp /opt/vmware/tools/cluster-kubeconfig-1.0-jar-with-dependencies.jar ClusterKubeConfigUpdate "$(cat copied_kubeconfig_file_path)" cluster-name
    NOTE: Replace copied_kubeconfig_file_path with the path of the kubeconfig file saved in the preceding section.
    NOTE: Refer to the clusters.json file for the associated cluster-name value.

Renew the Workload Cluster Certificate

 

  1. SSH into the TCA-CP as the admin user:
    ssh admin@mgmtClusterTcaCp
    NOTE: Refer to the mgmtClusterTcaCp value in the clusters.json file. This is the TCA-CP where the corresponding management cluster is deployed.
  2. Switch to the root user:
    su -
  3. Renew the workload cluster certificate:
    cd /home/admin/cluster-cert-renew
    bash cert-renew -wc workload-cluster-name -mc mgmt-cluster-name -t workload
  4. Switch to the management cluster context:
    kubectl config use-context mgmt-cluster-name-admin@mgmt-cluster-name
  5. Print the updated kubeconfig:
    kubectl get secret cluster-name-kubeconfig -n cluster-name -ojsonpath='{.data.value}' | base64 -d
  6. Copy the kubeconfig and save it in a file on the TCA-M with the name format cluster-name-kubeconfig.
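The secret's .data.value field is base64-encoded, which is why the kubectl command above pipes through base64 -d. A minimal illustration with a stand-in value (no cluster required):

```shell
# Stand-in for the secret's .data.value field (base64-encoded kubeconfig).
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 -w0)

# Decoding recovers the kubeconfig YAML, ready to save on the TCA-M.
printf '%s' "$encoded" | base64 -d > /tmp/workload-cluster-kubeconfig
head -n 1 /tmp/workload-cluster-kubeconfig
```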
Update the kubeconfig via the TCA-CP UI
  1. Log in to the TCA-CP Appliance UI (tcaCp:9443).
  2. Go to Configuration > Kubernetes.
  3. Edit the corresponding cluster record, paste the new kubeconfig and save it.
     
Update kubeconfig via the TCA-M UI
  1. Log in to the TCA-M UI.
  2. Go to Virtual Infrastructure.
  3. Edit the workload cluster VIM and paste the new kubeconfig.
     
Restart TCA-CP services:
  1. SSH to TCA-CP as the admin user.
  2. Switch to the sudo user:
    su -
  3. Restart the primary TCA-CP services:
    systemctl restart app-engine
    systemctl restart web-engine
    systemctl restart helm-service
    
NOTE: Before renewing workload cluster certificates, ensure that the corresponding management cluster certificates have NOT expired; management cluster certificates must be renewed first.



Additional Information

How to Get the Management Cluster Context on the TCA-CP
In some corner cases, the management cluster kubeconfig is not present at /root/.kube/config, the default kubectl kubeconfig path. The following steps describe how to fetch the management cluster kubeconfig and merge it into /root/.kube/config.

  1. SSH into the TCA-CP as the admin user.
  2. Switch to the sudo user:
    su -
  3. Run the following command to log in to the management cluster. If a management cluster exists, a selection list is displayed.
    tanzu login --server={management_cluster_name}
  4. Get the management cluster kubeconfig:
    tanzu management-cluster kubeconfig get --admin

 

Attachments

cert-rotation-v1-tca-2.1.tar.gz