Customizing Kubelet config file using custom ClusterClass


Article ID: 375460


Products

Tanzu Kubernetes Grid, VMware Tanzu Kubernetes Grid, VMware Tanzu Kubernetes Grid 1.x, VMware Tanzu Kubernetes Grid Plus, VMware Tanzu Kubernetes Grid Plus 1.x

Issue/Introduction

Certain Kubernetes components' settings can be customized through configuration files on the cluster nodes. This article provides the steps to create a class-based ("classy") cluster with a custom kubelet configuration. The same approach can be used to customize the settings of other components by patching the KubeadmControlPlaneTemplates and KubeadmConfigTemplates with kubeadm patches. The KubeadmControlPlaneTemplates and KubeadmConfigTemplates can also be customized to change the preKubeadmCommands/postKubeadmCommands run on the nodes of classy clusters.

The steps below were validated in the following environment:

Environment

TKGm v2.4 and above

Resolution

It is assumed that the Management Cluster was installed from the same jumpbox where the following steps will be run.

In the following steps, the kubelet configuration is customized by adding the settings "serializeImagePulls: false" and "maxParallelImagePulls: 1" to /var/lib/kubelet/config.yaml on the nodes.
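For context, per the upstream KubeletConfiguration documentation, "maxParallelImagePulls" may only be set when "serializeImagePulls" is false. Once the steps below are complete, the kubelet configuration on each node should contain an excerpt like the following (other fields omitted; shown only for illustration):

```yaml
# Excerpt of /var/lib/kubelet/config.yaml after the patch is applied.
# maxParallelImagePulls can only be set when serializeImagePulls is false.
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
serializeImagePulls: false
maxParallelImagePulls: 1
```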

The following CLI binaries are required on the jumpbox:

  • kubectl
  • tanzu
  • ytt 

Steps:

  1. In the jumpbox, create a “clusterclass” directory and sub-directories for this task.
    $ mkdir -p clusterclass/custom/overlays

  2. Create a Base ClusterClass Manifest by copying the manifest from the Management Cluster. Note that the manifest file name varies with the TKG version in use.
    $ cp ~/.config/tanzu/tkg/clusterclassconfigs/tkg-vsphere-default-v1.2.0.yaml clusterclass/ 
     
  3. Create an overlay file containing the additional settings to add to the kubelet configuration, and save it in the "clusterclass/custom/overlays" directory. In the following example, the file is named "custom-kubelet.yaml", and the settings "serializeImagePulls: false" and "maxParallelImagePulls: 1" are added to the kubelet configuration. Note that the custom clusterclass in this example is named "tkg-vsphere-default-v1.2.0-extended", which is referenced in the succeeding steps as well. The patch file name "kubeletconfiguration0+strategic.yaml" follows the kubeadm patches naming convention: the target component ("kubeletconfiguration"), an ordering suffix ("0"), and the patch type ("strategic").
    $ cat clusterclass/custom/overlays/custom-kubelet.yaml
    #@ load("@ytt:overlay", "overlay")

    #@overlay/match by=overlay.subset({"kind":"ClusterClass"})
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: ClusterClass
    metadata:
      name: tkg-vsphere-default-v1.2.0-extended
    spec:
      #@overlay/match expects=1
      patches:
      #@overlay/append
      - name: kcp-add-file
        definitions:
          - selector:
              apiVersion: controlplane.cluster.x-k8s.io/v1beta1
              kind: KubeadmControlPlaneTemplate
              matchResources:
                controlPlane: true
            jsonPatches:
              - op: add
                path: /spec/template/spec/kubeadmConfigSpec/files/-
                value:
                  path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
                  owner: "root:root"
                  permissions: "0644"
                  content: |
                    serializeImagePulls: false
                    maxParallelImagePulls: 1
              - op: add
                path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/patches
                value:
                  directory: /etc/kubernetes/patches
              - op: add
                path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/patches
                value:
                  directory: /etc/kubernetes/patches 
      - name: kubeadmconfig-add-file
        definitions:
          - selector:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              matchResources:
                machineDeploymentClass:
                  names:
                    - tkg-worker
            jsonPatches:
              - op: add
                path: /spec/template/spec/files/-
                value:
                  path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
                  owner: "root:root"
                  permissions: "0644"
                  content: |
                    serializeImagePulls: false
                    maxParallelImagePulls: 1
              - op: add
                path: /spec/template/spec/joinConfiguration/patches
                value:
                  directory: /etc/kubernetes/patches 
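    The jsonPatches entries above follow RFC 6902 (JSON Patch) "add" semantics, where a path ending in "/-" appends to a list and other paths set a value at that location. As a rough illustration only (this is not the Cluster API implementation, and the trimmed template spec below is hypothetical), the patches behave like this:

```python
# Minimal sketch of RFC 6902 "add" semantics as used by the overlay's
# jsonPatches. Not the Cluster API implementation; for illustration only.

def json_patch_add(doc, path, value):
    """Apply a single JSON Patch 'add' operation to nested dicts/lists."""
    parts = path.strip("/").split("/")
    target = doc
    for part in parts[:-1]:
        target = target[int(part)] if isinstance(target, list) else target.setdefault(part, {})
    last = parts[-1]
    if isinstance(target, list):
        # "/-" appends to the list; a numeric index inserts at that position.
        target.append(value) if last == "-" else target.insert(int(last), value)
    else:
        target[last] = value
    return doc

# Trimmed, hypothetical stand-in for a KubeadmControlPlaneTemplate spec.
tpl = {"spec": {"template": {"spec": {"kubeadmConfigSpec": {
    "files": [], "initConfiguration": {}}}}}}

json_patch_add(tpl, "/spec/template/spec/kubeadmConfigSpec/files/-",
               {"path": "/etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml",
                "content": "serializeImagePulls: false\nmaxParallelImagePulls: 1\n"})
json_patch_add(tpl, "/spec/template/spec/kubeadmConfigSpec/initConfiguration/patches",
               {"directory": "/etc/kubernetes/patches"})

cfg = tpl["spec"]["template"]["spec"]["kubeadmConfigSpec"]
print(cfg["files"][0]["path"])
print(cfg["initConfiguration"]["patches"])
```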

     


  4. Create another overlay file that removes every document other than the ClusterClass from the rendered output, so nothing else in the Base Manifest is changed. This file should be saved in the "clusterclass/custom/overlays" directory. In the following example, the file is named "filter.yaml".

    $ cat clusterclass/custom/overlays/filter.yaml
    #@ load("@ytt:overlay", "overlay")
    
    #@overlay/match by=overlay.not_op(overlay.subset({"kind": "ClusterClass"})),expects="0+"
    ---
    
    #@overlay/remove 

     

  5. Create and install the custom clusterclass.

    Change directory into "clusterclass/". Create the file "default_cc.yaml" from the Base Manifest and the filter overlay. This file is used only for reference and comparison.
    $ cd clusterclass
    $ ytt -f tkg-vsphere-default-v1.2.0.yaml -f custom/overlays/filter.yaml > default_cc.yaml

    Create the file "custom_cc.yaml" from the Base Manifest and all of the custom overlay files.
    $ ytt -f tkg-vsphere-default-v1.2.0.yaml -f custom/ > custom_cc.yaml

    Compare default_cc.yaml and custom_cc.yaml to verify the changes. The output should show only the patches added by the overlay file created in step 3.
    $ diff custom_cc.yaml default_cc.yaml

    In the Management Cluster context, install the custom clusterclass. Afterwards, it should be listed by "kubectl get clusterclass".

    $ kubectl apply -f custom_cc.yaml 

     

  6. Now that the custom clusterclass has been installed, the next steps create a cluster configuration file that uses it.


    Create a base workload cluster config file as “workload-1.yaml”.
    $ cat workload-1.yaml
    AVI_ENABLE: "false"
    CLUSTER_NAME: "custom-classy"
    CLUSTER_PLAN: dev
    CLUSTER_CIDR: 100.96.0.0/11
    CNI: "antrea"
    ENABLE_CEIP_PARTICIPATION: "true"
    ENABLE_MHC: "true"
    INFRASTRUCTURE_PROVIDER: vsphere
    SERVICE_CIDR: 100.64.0.0/13
    VSPHERE_CONTROL_PLANE_DISK_GIB: "60"
    VSPHERE_CONTROL_PLANE_ENDPOINT: "10.x.x.8"
    VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
    VSPHERE_CONTROL_PLANE_NUM_CPUS: "8"
    VSPHERE_DATASTORE: "/vc/datastore/vsan"
    VSPHERE_DATACENTER: "vc"
    VSPHERE_FOLDER: "/vc/vm/env"
    VSPHERE_NETWORK: "vc-env"
    VSPHERE_PASSWORD: "xxxxxxx"
    VSPHERE_RESOURCE_POOL: "/vc/host/vcc1/Resources/RPxx"
    VSPHERE_SERVER: "vc.example.vmware.com"
    VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2...0= tkg@jumpbox"
    VSPHERE_TLS_THUMBPRINT: "XX:F1:8A:54:D3:D8:03:xx:10:E6:D8:84:CE:DD:9E:71:61:49:20:XX"
    VSPHERE_WORKER_DISK_GIB: "60"
    VSPHERE_USERNAME: "envuser@localos"
    VSPHERE_WORKER_MEM_MIB: "8192"
    VSPHERE_WORKER_NUM_CPUS: "8"
    TKG_HTTP_PROXY_ENABLED: "false" 

     
  7. Create a classy cluster definition file "default_cluster.yaml" based on the "workload-1.yaml" file from step 6.
    $ tanzu cluster create --file workload-1.yaml --dry-run -v 6 > default_cluster.yaml

     

  8. Create another overlay file named “cluster_overlay.yaml” to reference the custom clusterclass that was installed in step 5.
    $ cat cluster_overlay.yaml
    #@ load("@ytt:overlay", "overlay")
    
    #@overlay/match by=overlay.subset({"kind":"Cluster"})
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    spec:
      topology:
        class: tkg-vsphere-default-v1.2.0-extended

     

  9. Create the final cluster config yaml file “custom_cluster.yaml”.
    $ ytt -f default_cluster.yaml -f cluster_overlay.yaml > custom_cluster.yaml 
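    Conceptually, the overlay from step 8 matches the generated manifest whose kind is Cluster and repoints spec.topology.class at the custom clusterclass, leaving all other documents untouched. A rough Python sketch of that merge (this is not ytt itself; the sibling document below is hypothetical):

```python
# Rough sketch (not ytt) of what cluster_overlay.yaml does during the merge:
# find the document whose kind is "Cluster" and repoint spec.topology.class.

docs = [
    # Hypothetical sibling document produced by the dry run; left unchanged.
    {"kind": "Secret", "metadata": {"name": "custom-classy"}},
    {"apiVersion": "cluster.x-k8s.io/v1beta1", "kind": "Cluster",
     "spec": {"topology": {"class": "tkg-vsphere-default-v1.2.0"}}},
]

for doc in docs:
    if doc.get("kind") == "Cluster":        # overlay.subset({"kind":"Cluster"})
        doc["spec"]["topology"]["class"] = "tkg-vsphere-default-v1.2.0-extended"

cluster = next(d for d in docs if d["kind"] == "Cluster")
print(cluster["spec"]["topology"]["class"])
```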

     

  10. Create the cluster with the tanzu CLI.
    $ tanzu cluster create -f custom_cluster.yaml -v 9

 

Once the cluster is created successfully, SSH into one of the cluster nodes and confirm that the additional kubelet settings are present in /var/lib/kubelet/config.yaml:

$ grep ImagePulls /var/lib/kubelet/config.yaml
maxParallelImagePulls: 1
serializeImagePulls: false