How to set up a new key-pair to SSH into nodes in a Tanzu Kubernetes Grid Class-based cluster (Classy Cluster)


Article ID: 383359


Products

VMware Tanzu Kubernetes Grid Management

Issue/Introduction

 

Run the following command to confirm that the cluster is Class-based (its topology class will be returned):

# kubectl get cluster <cluster-name> -n <namespace> -o jsonpath="{.spec.topology.class}" | more

Ex: 

kubectl get cluster tkg-mgmt  -n tkg-system -o jsonpath="{.spec.topology.class}"  | more

tkg-vsphere-default-v1.2.0

NOTE: If the cluster is Plan-based, the command above will return nothing.
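The check above can be wrapped in a small helper that names the result. This is an illustrative sketch, not a TKG tool; it only interprets the output of the jsonpath query:

```shell
# Classify a cluster from the output of:
#   kubectl get cluster <cluster-name> -n <namespace> -o jsonpath="{.spec.topology.class}"
# A non-empty result means Class-based; an empty result means Plan-based.
cluster_class() {
  if [ -n "$1" ]; then
    echo "class-based: $1"
  else
    echo "plan-based"
  fi
}

cluster_class "tkg-vsphere-default-v1.2.0"   # prints: class-based: tkg-vsphere-default-v1.2.0
cluster_class ""                             # prints: plan-based
```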

Environment

VMware Tanzu Kubernetes Grid Management (TKGm)

Cause

  • The SSH private key that was used when creating the cluster is missing, preventing SSH access to the cluster nodes.
  • The SSH key-pair needs to be changed for security reasons.

Resolution

 

Prerequisites:

  • Run the following command to make sure that the management and workload clusters are in a healthy running state.

# tanzu cluster list --include-management-cluster -A

EX:

# tanzu cluster list --include-management-cluster -A
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN  TKR
  workload  default     running  1/1           2/2      v1.28.7+vmware.1  <none>      dev   v1.28.7---vmware.1-tkg.3
  tkg-mgmt  tkg-system  running  1/1           1/1      v1.28.7+vmware.1  management  dev   v1.28.7---vmware.1-tkg.3

# tanzu cluster get <cluster-name> -n <namespace>

# tanzu mc get 

Notes:

    • Do not update the SSH key-pair on a management cluster that is not in running state.
    • Do not update the SSH key-pair on a workload cluster that is not in running state.
    • Do not update the SSH key-pair on a workload cluster that is in running state while the management cluster is not in running state.

 

  • Make sure that the DHCP scope that the TKG cluster is using has enough free IP addresses to allocate.
  • If the cluster was created using node IPAM, make sure that the IPAM pool used to create the cluster has free IP addresses to be allocated.
    • Run the following command to get the ipaddressclaims used to create the cluster:

# kubectl get ipaddressclaims -A | grep <cluster-name>

EX:

# kubectl get ipaddressclaims -A | grep workload
default      workload-controlplane-j96hm-pznhb-0-0   workload-globalincluster-ippool   GlobalInClusterIPPool
default      workload-md-0-nwntw-jw44b-8rm8s-0-0     workload-globalincluster-ippool   GlobalInClusterIPPool
default      workload-md-0-nwntw-jw44b-glrh4-0-0     workload-globalincluster-ippool   GlobalInClusterIPPool

    • Run the following command to get the number of free IP addresses in the pool that was used to create the cluster:

# kubectl get <IPPool> -n <namespace> -o wide

EX:

# kubectl get GlobalInClusterIPPool -n default  -o wide
NAME                              ADDRESSES                           TOTAL   FREE   USED
workload-globalincluster-ippool   ["10.x.x.224-10.x.x.230"]   7       4      3
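A rollout temporarily needs addresses for both the old and new nodes, so the pool's FREE count should cover the nodes being replaced. A minimal sketch of that comparison (the helper name is ours, not part of TKG):

```shell
# Succeed only if the pool has at least as many free addresses as the number
# of nodes that will be rolled out.
have_free_ips() {
  # $1 = FREE column from the pool, $2 = number of nodes to be replaced
  [ "$1" -ge "$2" ]
}

# In the example pool above: FREE=4 and the cluster has 3 nodes.
if have_free_ips 4 3; then
  echo "enough free IPs for the rollout"
else
  echo "pool is too small - extend it before changing the key-pair"
fi
```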

Notes:

  • Changing the SSH key-pair will trigger a cluster rollout, which may take a while depending on the number of nodes in the cluster.
  • Enough IP addresses must be available to be allocated to the newly created cluster nodes.
  • The same steps can be used to update both management and workload cluster nodes with the new SSH key-pair.

 

Steps to update the key-pair used to SSH into nodes in a Tanzu Kubernetes Grid Class-based cluster (Classy Cluster)

In these steps we are updating the SSH key-pair on the management cluster.

 

  1. Create a new SSH key pair using the steps in the TKG documentation. (When prompted, you may want to enter a different file name in which to save the new key.)
  2. Open the new SSH public key file and make a note of its contents.
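A minimal sketch of this step, assuming the standard OpenSSH ssh-keygen tool (the scratch directory and comment string are placeholders; the TKG documentation describes the canonical procedure):

```shell
# Generate the new pair into a scratch directory for illustration; in practice
# you would answer the ssh-keygen prompt with a new path such as
# ~/.ssh/id_rsa-new so the existing key is not overwritten.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 4096 -N "" -C "[email protected]" -f "$KEYDIR/id_rsa-new"
ls "$KEYDIR"   # id_rsa-new  id_rsa-new.pub
```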

    cat /home/new-ssh-key-pair/id_rsa.pub-new

    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3aCkxAkDp+RAmZJ5dyJ0CIRweR086I3t7 <snip> mj37hS8lEqRzJZoMYjACOc7VEQ== [email protected]

  3. Change to the management cluster context.

    # kubectl config get-contexts

    # kubectl config use-context MGMT-CLUSTER-NAME-admin@MGMT-CLUSTER-NAME

  4. Run the following command to backup the k8s cluster object for the cluster you need to modify its SSH key-pair.

    # kubectl get cluster <cluster-name> -n <namespace> -o yaml > cluster-object-<cluster-name>-org.yaml

  5. Run the following command to edit the k8s cluster object for the cluster whose SSH key-pair we need to modify.

    # kubectl edit cluster <cluster-name> -n <namespace>

  6. Locate the sshAuthorizedKeys section.
  7. Type i to enter insert mode and replace the SSH public key with the new one.

    Before:

      - name: user
        value:
          sshAuthorizedKeys:
          - ssh-rsa OldAAAAB3NzaC1yc2EAAAADAQABAAACAQCwBGVuzlft6A1V4hSdfG4qt0/vxkH7dUpXe8yD/rVO+Fqp9Bkg1T9jgg664abvc2Y2ko8NYLcKs/r065HhFirhdpfMf3ZmrOn3+F47MJHGJx984BbUBRla8c5uyg4zE5owlJ5+C2b========================================###### EXAMPL##### ============================================= /pObPTVrmBucw6VnYefzvXILD233UrDsLeV1lBbjOEXdukMeoNlWNlVY1D9sTNOSzI9pCIXQSZJKqkxWuoNaHLPsY024s0NmC4QZlayBfrEpmvjhXLMULEnlgcvE3LsAeqe5nSNISQvBpMhsOBcGM+L4P8kfNcoMAhdnCywPtRUWyKGnGKx5s8zBJ/NxaQo4DL9lM84S9hKUCJiAIDZ+flKBnM8e+5/YowzWjHYkv67A484Yg5F8RV5Hk2wt2brTkmYIlRBbNFDthr27Vq5NFFVQSfQ== [email protected]
      - name: controlPlane


    After:

      - name: user
        value:
          sshAuthorizedKeys:
          - ssh-rsa NewAAAAB3NzaC1yc2EAAAADAQABAAACAQC3aCkxAkDp+RAmZJ5dyJ0CIRweR086I3t7Cz7SMcMrcvjGTqBg6MVflr8pcEB/bL+g9nXrzhMm8PuEDO1BwgKXQ30FjQJWg6Zdgb8Bf6Zd3JFmspn/ln+leXbYQDIgLd0S6RxujESnWo/MtQuS3awdY/Opb0TyXzUZ8ImTAwfp3oVfWH6bCSbgpJxeLiIkzI9WaTVEnQCVHibiu1YfkN4kcNb/cKef8VYO5a/oBBZzIDHaTOgya/e9wa3BqaLF2OyySm+f7k2CXGACzIHyBGlc1pFe5pk5wat/======================= ###### EXAMPL##### =========================================== fTdvc29sZpNQv5VHE7l0kFbl4BqMv9jLgn/4dTas21zoPYm1nUtPhxaCggNw9IYoSKQzKeVz27EHhk5lofTya5kcOZ101VDcZMHLQJpOeof1xxv/aLXO8tBzkHJbgSKdKn0QvvPLpByER3onMHQBKHg1DaYC3FNPWiKmiqm054mZtyzaaEqOYivQFjrOMKB4G62NCozXQuQsgyFBS49Ld2uRpL5+9fEckbgdCk60m2gKZP9ICSW3xmGjVW71+005nmj37hS8lEqRzJZoMYjACOc7VEQ== [email protected]
      - name: controlPlane

  8. Press Esc to exit insert mode.
  9. Type :wq to write, save, and quit the file.
  10. This edit of the k8s cluster object will trigger a cluster rollout, and new cluster nodes will be created.
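As a non-interactive alternative to the interactive edit in steps 5-9, the same change can be applied with kubectl patch. This is a sketch: build_ssh_key_patch is a hypothetical helper, and the variable index 0 is an assumption; locate the actual index of the "user" entry under .spec.topology.variables in your cluster object first.

```shell
# Emit an RFC 6902 JSON patch that replaces the first authorized key of the
# topology variable at the given index.
build_ssh_key_patch() {
  # $1 = index of the "user" variable, $2 = new public key (one line)
  printf '[{"op":"replace","path":"/spec/topology/variables/%s/value/sshAuthorizedKeys/0","value":"%s"}]' "$1" "$2"
}

# Then apply it (not run here; substitute your cluster name and namespace):
# kubectl patch cluster <cluster-name> -n <namespace> --type=json \
#   -p "$(build_ssh_key_patch 0 "ssh-rsa AAAA...new-key... [email protected]")"
build_ssh_key_patch 0 "ssh-rsa EXAMPLEKEY [email protected]"
```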


    Note: The following command can be used to monitor the cluster rollout:

    # kubectl get cluster,kcp,md -n tkg-system

    NAME                                PHASE         AGE   VERSION
    cluster.cluster.x-k8s.io/tkg-mgmt   Provisioned   13d   v1.28.7+vmware.1

    NAME                                                                            CLUSTER    INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/tkg-mgmt-controlplane-7qncf   tkg-mgmt   true          true                   2          1       1         1             13d   v1.28.7+vmware.1

    NAME                                                     CLUSTER    REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE   VERSION
    machinedeployment.cluster.x-k8s.io/tkg-mgmt-md-0-5fjmz   tkg-mgmt   2          1       1         1             Running   13d   v1.28.7+vmware.1

    # kubectl get machine  -n tkg-system

    NAME                                CLUSTER    NODENAME                            PROVIDERID                                       PHASE          AGE   VERSION
    tkg-mgmt-controlplane-7qncf-nqsmn   tkg-mgmt   tkg-mgmt-controlplane-7qncf-nqsmn   vsphere://421e8d64-69f4-8b29-ca5f-5753ab4b38ba   Running        13d   v1.28.7+vmware.1
    tkg-mgmt-controlplane-7qncf-vgz9n   tkg-mgmt                                                                                        Provisioning   43s   v1.28.7+vmware.1
    tkg-mgmt-md-0-5fjmz-btf52-g88kb     tkg-mgmt                                                                                        Provisioning   44s   v1.28.7+vmware.1
    tkg-mgmt-md-0-5fjmz-jgzs6-5szcb     tkg-mgmt   tkg-mgmt-md-0-5fjmz-jgzs6-5szcb     vsphere://421ed84b-1e1e-03de-794b-24ed39d7acfa   Running        13d   v1.28.7+vmware.1

    # tanzu mc get
     
    NAME      NAMESPACE   STATUS    CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN  TKR
    tkg-mgmt  tkg-system  updating  1/1           1/1      v1.28.7+vmware.1  management  dev   v1.28.7---vmware.1-tkg.3
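The monitoring commands above can be wrapped in a small wait loop. all_running is an illustrative helper, not TKG tooling; it treats the rollout as done when every machine phase is Running:

```shell
# Return success only when every whitespace-separated phase equals "Running".
all_running() {
  [ -n "$1" ] || return 1
  for phase in $1; do
    [ "$phase" = "Running" ] || return 1
  done
}

# Example poll (kubectl line not executed here):
# until all_running "$(kubectl get machine -n tkg-system -o jsonpath='{.items[*].status.phase}')"; do
#   echo "rollout still in progress..."; sleep 30
# done
all_running "Running Provisioning" || echo "rollout still in progress"
```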



  11. Once the new cluster nodes have been created, you can validate that the SSH key-pair replacement completed successfully by SSHing to one of the newly created cluster nodes using the new private key.

    # ssh -i id_rsa-new capv@<node-iP-address>
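If the SSH connection is rejected, first confirm that the private key you are offering matches the public key placed in sshAuthorizedKeys. A sketch using ssh-keygen -y (a throwaway pair is generated here for illustration; substitute your real id_rsa-new and id_rsa.pub-new paths):

```shell
# Throwaway pair, for illustration only.
d="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N "" -f "$d/id_rsa-new"

# Derive the public key from the private key and compare type + blob with the
# stored public key (the trailing comment field is ignored on purpose).
derived="$(ssh-keygen -y -f "$d/id_rsa-new" | awk '{print $1, $2}')"
stored="$(awk '{print $1, $2}' "$d/id_rsa-new.pub")"
[ "$derived" = "$stored" ] && echo "key pair matches"
```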

  12. If the cluster nodes get their IP addresses from a DHCP server, you will need to create DHCP reservations for the new nodes. See Configure Node DHCP Reservations and Endpoint DNS Record.