In TCA, the management cluster status is shown as 'unknown':
# kbsctl show managementclusters
Count: 1
----------------------------------------
ID: 0######a-b7a2-####-####-a###########22
Name: tca1-mgmt-cluster1234
Status: unknown
TKG ID: #######-1##d-####-####-3###########a
This issue applies to TCA 2.3 or below.
The management cluster kubeconfig is usually valid for one year and can be renewed either automatically or manually by the user. Once renewed, the kubeconfig needs to be updated both on the file system and in the database. In TCA 2.3 and earlier releases, the automated poller updates the kubeconfig only in the database, not on the file system. When the kubeconfig on the file system is out of sync with the management cluster endpoint, users encounter the symptoms described above.
This issue occurs because the kubeconfig used to access the management cluster, located at /opt/vmware/k8s-bootstrapper/<mgmt-cluster-id>/kubeconfig, has expired.
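As a quick check, you can inspect the expiry of the client certificate embedded in the on-disk kubeconfig. This is a minimal sketch, assuming the kubeconfig embeds a base64-encoded client-certificate-data field (rather than a token) and that openssl is available on the appliance:
# grep 'client-certificate-data' /opt/vmware/k8s-bootstrapper/<mgmt-cluster-id>/kubeconfig \
    | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate
An expiry (notAfter) date in the past confirms the certificate in the on-disk kubeconfig is no longer valid.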
This issue is resolved in TCA 3.x and later versions.
The workaround below can be applied to TCA 2.3 or earlier versions. First, identify the management cluster ID:
# kbsctl show managementclusters
Count: 1
----------------------------------------
ID: 0######a-b7a2-####-####-a###########22
Name: tca1-mgmt-cluster1234
Status: unknown
TKG ID: #######-1##d-####-####-3###########a
The management cluster ID in this case is 0######a-b7a2-####-####-a###########22.
The kubeconfig is located at /opt/vmware/k8s-bootstrapper/0######a-b7a2-####-####-a###########22/kubeconfig.
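To confirm whether this on-disk kubeconfig still works against the management cluster, you can issue a simple read request with it. This is a minimal sketch, assuming kubectl is available on the TCA-CP appliance; an authentication or certificate error indicates the kubeconfig on the file system is stale:
# KUBECONFIG=/opt/vmware/k8s-bootstrapper/0######a-b7a2-####-####-a###########22/kubeconfig kubectl get nodes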
The management cluster status should come back to Running:
# kbsctl show managementclusters
Count: 1
----------------------------------------
ID: 0######a-b7a2-####-####-a###########22
Name: tca1-mgmt-cluster1234
Status: Running
TKG ID: #######-1##d-####-####-3###########a
Restart the app-engine on TCA-CP:
# systemctl restart app-engine
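Optionally, confirm the service came back up after the restart; this assumes the same app-engine unit name used above:
# systemctl status app-engine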