Scaling down a specific node in Tanzu Kubernetes Grid


Article ID: 368322


Products

  • Tanzu Kubernetes Grid
  • VMware Tanzu Kubernetes Grid
  • VMware Tanzu Kubernetes Grid 1.x
  • VMware Tanzu Kubernetes Grid Plus
  • VMware Tanzu Kubernetes Grid Plus 1.x
  • VMware Tanzu Kubernetes Grid Service (TKGs)

Issue/Introduction

This article describes the process of scaling down a specific node in TKG. The process can be used for both worker and control plane nodes. At a high level, the process consists of:

  • Identifying the node that needs to be scaled down
  • Identifying the corresponding machine object
  • Adding the "cluster.x-k8s.io/delete-machine"="yes" annotation to the machine object
  • Performing the scale-down operation using the Tanzu CLI
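The steps above can be sketched as a single command sequence. The names in angle brackets are placeholders to fill in for your environment, not values from this article:

```shell
kubectl get nodes                                    # 1. identify the node to remove
kubectl config use-context <mgmt-cluster-context>    # 2. switch to the management cluster
kubectl get machines                                 #    find the matching machine object
kubectl annotate machine <machine-name> "cluster.x-k8s.io/delete-machine"="yes"   # 3. mark it
tanzu cluster scale <cluster-name> -w <new-worker-count>                          # 4. scale down
```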

Resolution

Identify the node to scale down

  • Switch the context to the workload cluster where you need to perform the scale-down operation. In the example below, a cluster with one control plane node and three worker nodes is used.
  • This process will scale the cluster down to two worker nodes by removing oom-wld-rp02-md-0-7559b5578d-pxr54
kubectl get nodes

NAME                                 STATUS   ROLES                  AGE   VERSION
oom-wld-rp02-control-plane-qrnrb     Ready    control-plane,master   11h   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-62zfl   Ready    <none>                 26m   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-7tw9k   Ready    <none>                 39m   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-pxr54   Ready    <none>                 11h   v1.20.5+vmware.1


Identify the corresponding machine object

Switch Context to the management cluster

kubectl config use-context oom-mgmt-rp02-admin@oom-mgmt-rp02
Switched to context "oom-mgmt-rp02-admin@oom-mgmt-rp02"


Get the corresponding machine object

From the output below, node oom-wld-rp02-md-0-7559b5578d-pxr54 corresponds to the machine object oom-wld-rp02-md-0-7559b5578d-pxr54.

kubectl get machines

NAME                                 PROVIDERID                                       PHASE     VERSION
oom-wld-rp02-control-plane-qrnrb     vsphere://423823f9-4a51-ad85-e352-9b0c91767d92   Running   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-62zfl   vsphere://423873bf-f87d-1e35-02f0-71e3f5973d2b   Running   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-7tw9k   vsphere://4238131a-f866-d033-766f-56192a093a80   Running   v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-pxr54   vsphere://42382d27-aeb2-7c59-674a-06fc06e70fa2   Running   v1.20.5+vmware.1
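In this example the machine name happens to match the node name, but the two are not guaranteed to match. One way to map each machine to the node it backs is via the machine object's .status.nodeRef field:

```shell
# List each machine alongside the node it backs; the NODE column is read
# from the machine's .status.nodeRef, so it works even when names differ.
kubectl get machines -o custom-columns=MACHINE:.metadata.name,NODE:.status.nodeRef.name
```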


Add annotation to the machine object

As detailed in the cluster-api source code, the next step is to annotate the machine with the DeleteMachineAnnotation (cluster.x-k8s.io/delete-machine), which marks it for priority deletion on the next scale-down.

Annotate Object

kubectl annotate machine oom-wld-rp02-md-0-7559b5578d-pxr54 "cluster.x-k8s.io/delete-machine"="yes"
machine.cluster.x-k8s.io/oom-wld-rp02-md-0-7559b5578d-pxr54 annotated

Validate Annotation

kubectl get machine oom-wld-rp02-md-0-7559b5578d-pxr54 -o yaml | grep delete-mach
    cluster.x-k8s.io/delete-machine: "yes"
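If you change your mind before scaling, the annotation can be removed again; kubectl deletes an annotation when its key is given with a trailing dash:

```shell
# Remove the delete-machine annotation (the trailing "-" deletes it)
kubectl annotate machine oom-wld-rp02-md-0-7559b5578d-pxr54 cluster.x-k8s.io/delete-machine-
```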


Perform scale-down operation

tanzu cluster scale oom-wld-rp02 -w 2
Successfully updated worker node machine deployment replica count for cluster oom-wld-rp02
Workload cluster 'oom-wld-rp02' is being scaled

From the output below, it can be observed that the annotated machine is picked up for deletion during the scale-down operation.

kubectl get machines

NAME                                 PROVIDERID                                       PHASE      VERSION
oom-wld-rp02-control-plane-qrnrb     vsphere://423823f9-4a51-ad85-e352-9b0c91767d92   Running    v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-62zfl   vsphere://423873bf-f87d-1e35-02f0-71e3f5973d2b   Running    v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-7tw9k   vsphere://4238131a-f866-d033-766f-56192a093a80   Running    v1.20.5+vmware.1
oom-wld-rp02-md-0-7559b5578d-pxr54   vsphere://42382d27-aeb2-7c59-674a-06fc06e70fa2   Deleting   v1.20.5+vmware.1
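
To confirm completion, you can watch the machine list until the Deleting machine disappears, then switch back to the workload cluster context and re-check the nodes. The workload cluster context name below is an assumption based on the management cluster context naming in this example; verify yours with kubectl config get-contexts:

```shell
# Watch machines until oom-wld-rp02-md-0-7559b5578d-pxr54 is gone
kubectl get machines -w

# Back on the workload cluster, verify only two worker nodes remain
# (context name assumed; check `kubectl config get-contexts` for yours)
kubectl config use-context oom-wld-rp02-admin@oom-wld-rp02
kubectl get nodes
```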