Manually sync the kube-config/cluster configuration
While running TCP 3.0 (TCA 2.3) with Harbor 2.6, Harbor is registered in the TCA UI under Partner Systems and the Kubernetes cluster is connected as a VIM. The Harbor add-on is added to the cluster successfully, but the Partner Systems status still shows "Initiated" when it should show "Enabled". Because of this, CNF installation fails.
The cluster configuration stored in TCA-DB for these clusters has expired, causing the problem.
To resolve this, manually update the kubeconfig/cluster configuration.
Follow these steps to update the clusterConfig for the mentioned clusters:
1. SSH to the k8s cluster (use the cluster-vip) as capv.
2. Once on the cluster, navigate to the `.kube/` directory.
3. View the contents of the `config` file:
---- sample kubeconfig ----
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [Certificate Authority Data]
    server: https://192.168.111.15:6443  # sample IP
  name: wc01
contexts:
- context:
    cluster: wc01
    user: kubernetes-admin
  name: kubernetes-admin@wc01
current-context: kubernetes-admin@wc01
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: [Client Certificate Data]
    client-key-data: [Client Key Data]
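Before copying the config, it can help to confirm it contains the fields TCA needs (the API server endpoint and the current context). A minimal sketch; it uses a throwaway stand-in file whose values mirror the sample above, since on the cluster node the real file is `~/.kube/config`:

```shell
# Create a stand-in kubeconfig; on the cluster, use ~/.kube/config instead.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: v1
kind: Config
current-context: kubernetes-admin@wc01
clusters:
- cluster:
    server: https://192.168.111.15:6443
  name: wc01
EOF

# Extract the API server endpoint and the current context.
SERVER=$(awk '/server:/ {print $2}' "$CFG")
CONTEXT=$(awk '/^current-context:/ {print $2}' "$CFG")
echo "server:  $SERVER"
echo "context: $CONTEXT"
rm -f "$CFG"
```

If either value is empty, the file you are about to paste into TCA is incomplete and the VIM will not reconnect.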
4. Copy the content of the `config` file.
5. Go to the TCA-UI -> Virtual Infrastructure -> cluster.
Note: Add the cluster manually if it does not exist in the Virtual Infrastructure.
6. Click on the "Edit" button.
7. Update the Kubernetes Config with the clusterConfig obtained from the previous step.
8. Click on "Update."
9. This action sets the VIM status to Pending for about 30-40 seconds; it should then return to Connected.
Repeat the same steps for the other affected cluster.
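Because the root cause is an expired cluster configuration, you can confirm expiry before re-syncing by decoding the base64 `client-certificate-data` value from the kubeconfig and printing the certificate's end date with openssl. A self-contained sketch; a throwaway self-signed certificate stands in for the real data, which you would instead copy from `~/.kube/config`:

```shell
# Generate a throwaway certificate as a stand-in for the real
# client-certificate-data value (on the cluster, copy the base64
# string from ~/.kube/config instead of generating one).
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes-admin" \
  -keyout "$TMP/key.pem" -out "$TMP/cert.pem" 2>/dev/null
CERT_B64=$(base64 -w0 < "$TMP/cert.pem")

# Decode the base64 data and print the certificate expiry date.
ENDDATE=$(echo "$CERT_B64" | base64 -d | openssl x509 -noout -enddate)
echo "$ENDDATE"
rm -rf "$TMP"
```

If the `notAfter` date is in the past, the config held by TCA-DB is stale and must be replaced using the procedure above.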