Manually sync the kube-config/cluster configuration
Symptoms:
Affected versions: 2.x
The cluster configuration stored inside the TCA database (TCA-DB) for the clusters has expired, causing the problem. In certain cases, the certificates have been renewed on the cluster, but the renewed certificates are not synced to the TCA appliances. To resolve this, the cluster certificates and/or the kubeconfig stored in the TCA database must be updated manually.
Workaround:
Follow these steps to update the cluster configuration for the affected clusters:
1. SSH to the Kubernetes cluster (use the cluster VIP) as the capv user.
2. Once on the cluster, navigate to the /etc/kubernetes/ directory.
3. View the contents of the admin.conf file:
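For example, from a machine that can reach the cluster VIP (root privileges are typically required to read admin.conf):

# Connect to the workload cluster control plane through the cluster VIP
ssh capv@<cluster-vip>

# Print the admin kubeconfig
sudo cat /etc/kubernetes/admin.conf

The output should resemble the sample below.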
---- sample kubeconfig ----
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [Certificate Authority Data]
    server: https://192.168.11.22:6443   # sample IP
  name: wc01
contexts:
- context:
    cluster: wc01
    user: kubernetes-admin
  name: kubernetes-admin@wc01
current-context: kubernetes-admin@wc01
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: [Client Certificate Data]
    client-key-data: [Client Key Data]
---- end of sample kubeconfig ----
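Optionally, before copying the file, you can confirm whether the certificates embedded in the kubeconfig have actually been renewed. These are standard openssl and kubectl checks, not TCA-specific commands:

# Print the expiry date of the embedded client certificate
sudo grep 'client-certificate-data' /etc/kubernetes/admin.conf \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate

# Print the expiry date of the embedded cluster CA certificate
sudo grep 'certificate-authority-data' /etc/kubernetes/admin.conf \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate

# Confirm the kubeconfig works against the cluster API
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes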
4. Copy the content of the admin.conf file.
5. In the TCA UI, go to Virtual Infrastructure and select the cluster.
6. Click the Edit button.
7. Update the Kubernetes Config field with the admin.conf (kubeconfig) content obtained in the previous step.
8. Click Update.
9. This action sets the VIM status to Pending for about 30-40 seconds; it should then return to the Connected status.
10. Open the TCA-CP appliance management portal (port 9443).
11. Locate the cluster, click Edit, and update the kubeconfig with the contents of admin.conf.
12. Repeat the same steps for any other affected clusters.
13. Restart the TCA Manager appliance.
14. Restart the TCA-CP appliance (see the example after this list).
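A minimal sketch of steps 13 and 14, assuming SSH access as the admin user is enabled on the appliances (the hostnames are placeholders); the appliances can also be restarted from the vSphere Client:

# Reboot the TCA Manager appliance
ssh admin@<tca-m-hostname> 'sudo reboot'

# Reboot the TCA-CP appliance
ssh admin@<tca-cp-hostname> 'sudo reboot'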
Verify the status of the Kubernetes VIMs in TCA-M, or the status of the workload cluster in the TCA-CP Appliance Management portal (port 9443). It should be in the Connected state.
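If the status does not return to Connected, a quick generic check is to verify that the TCA-CP appliance can reach the cluster API endpoint listed in the kubeconfig (the IP below matches the sample server field; substitute your cluster VIP):

# Run from the TCA-CP appliance; /version is typically served without authentication on kubeadm-based clusters
curl -k https://192.168.11.22:6443/version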
CNF installation is failing