The CLI output shows the status of the management cluster as "unknown"

Article ID: 398909

Updated On:

Products

VMware Telco Cloud Automation

Issue/Introduction

In TCA, the management cluster status is shown as 'unknown'.

# kbsctl show managementclusters

Count: 1

----------------------------------------

ID: 0######a-b7a2-####-####-a###########22

Name: tca1-mgmt-cluster1234

Status: unknown

TKG ID: #######-1##d-####-####-3###########a

Environment

VMware Telco Cloud Automation 2.3 or below

Cause

The management cluster kubeconfig is typically valid for one year and can be renewed either automatically or manually by the user. Once renewed, the kubeconfig needs to be updated both on the file system and in the database. In release 2.3 and earlier, the automated poller updates the renewed kubeconfig only in the database, not on the file system. When the kubeconfig on the file system is out of sync with the cluster endpoint, users encounter the symptoms described above.

As a result, the kubeconfig used to access the management cluster (located at /opt/vmware/k8s-bootstrapper/<mgmt-cluster-id>/kubeconfig) has expired.
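
To confirm that the kubeconfig on the file system is the problem, you can optionally check the expiry date of the client certificate embedded in it. This is a sketch of one way to do so, assuming the kubeconfig embeds the certificate inline as client-certificate-data and that base64 and openssl are available on the TCA-CP appliance:

# grep 'client-certificate-data' /opt/vmware/k8s-bootstrapper/<mgmt-cluster-id>/kubeconfig | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate

If the notAfter date printed by openssl is in the past, the kubeconfig on the file system has expired.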

Resolution

The issue is resolved in TCA 3.x and later versions.

The following workaround can be applied to TCA 2.3 or lower versions.

  1. Find the management cluster ID by executing the following command:
    # kbsctl show managementclusters
    
    Count: 1
    
    ----------------------------------------
    
    ID: 0######a-b7a2-####-####-a###########22
    
    Name: tca1-mgmt-cluster1234
    
    Status: unknown
    
    TKG ID: #######-1##d-####-####-3###########a

    The management cluster ID in this case is 0######a-b7a2-####-####-a###########22.
    The kubeconfig is located at /opt/vmware/k8s-bootstrapper/0######a-b7a2-####-####-a###########22/kubeconfig.

  2. Update this kubeconfig by copying admin.conf from the cluster endpoint (a single-command sketch is shown after the steps below):
    - SSH into the management cluster using either the VIP or the IP of a control-plane node.
    - Copy /etc/kubernetes/admin.conf to /opt/vmware/k8s-bootstrapper/0######a-b7a2-####-####-a###########22/kubeconfig.

  3. Make sure to replace the content of the existing kubeconfig with the contents of admin.conf. 
  4. After replacing the kubeconfig, rerun the following command:
    # kbsctl show managementclusters
    
    Count: 1
    
    ----------------------------------------
    
    ID: 0######a-b7a2-####-####-a###########22
    
    Name: tca1-mgmt-cluster1234
    
    Status: Running
    
    TKG ID: #######-1##d-####-####-3###########a
    

    The management cluster status should return to Running.

  5. To make sure all services use the updated kubeconfig, restart the app-engine service on TCA-CP:
    
    # systemctl restart app-engine
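
As referenced in step 2, the following is a minimal sketch of copying admin.conf to the TCA-CP appliance in a single command. It assumes the management cluster nodes accept SSH as the capv user (the default for TKG-deployed nodes) with passwordless sudo; adjust the user, address, and cluster ID to match your environment:

    # ssh capv@<control-plane-vip-or-ip> 'sudo cat /etc/kubernetes/admin.conf' > /opt/vmware/k8s-bootstrapper/<mgmt-cluster-id>/kubeconfig

This overwrites the expired kubeconfig on the file system with the current admin.conf from the cluster endpoint, which is what steps 2 and 3 accomplish manually.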