Certain Kubernetes components' settings can be customized through configuration files on the node. This KB describes the steps to create a class-based ("classy") cluster with a custom kubelet configuration. The same approach can be used to customize the settings of other components by patching the KubeadmControlPlaneTemplate and KubeadmConfigTemplate objects with kubeadm patches. These templates can also be customized to change the pre/postKubeadmCommands run on the nodes of classy clusters.
The steps here are based on the following:
TKGm v2.4 and above
It is assumed that the Management Cluster was installed from the same jumpbox where the following steps will be run.
In the following steps, the customization adds the kubelet settings serializeImagePulls: false and maxParallelImagePulls: 1 to /var/lib/kubelet/config.yaml on the nodes.
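The settings are delivered as a kubeadm patch file. kubeadm patch file names follow the convention target[suffix][+patchtype].yaml, so kubeletconfiguration0+strategic.yaml is a strategic-merge patch applied to the KubeletConfiguration that kubeadm generates, and the patches directory configured in initConfiguration/joinConfiguration tells kubeadm where to find it during init and join. The file written to each node will contain just:

```yaml
# /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
# Strategic-merge patch that kubeadm merges into the node's KubeletConfiguration
serializeImagePulls: false
maxParallelImagePulls: 1
```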
The following CLI binaries are required on the jumpbox:
kubectl
tanzu
ytt
Steps:
$ mkdir -p clusterclass/custom/overlays
$ cp ~/.config/tanzu/tkg/clusterclassconfigs/tkg-vsphere-default-v1.2.0.yaml clusterclass/
Create the following overlay, which extends the default ClusterClass with the kubelet patches:
$ cat clusterclass/custom/overlays/custom-kubelet.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"ClusterClass"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: tkg-vsphere-default-v1.2.0-extended
spec:
  #@overlay/match expects=1
  patches:
  #@overlay/append
  - name: kcp-add-file
    definitions:
    - selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
      jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/files/-
        value:
          path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
          owner: "root:root"
          permissions: "0644"
          content: |
            serializeImagePulls: false
            maxParallelImagePulls: 1
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/patches
        value:
          directory: /etc/kubernetes/patches
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/patches
        value:
          directory: /etc/kubernetes/patches
  - name: kubeadmconfig-add-file
    definitions:
    - selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - tkg-worker
      jsonPatches:
      - op: add
        path: /spec/template/spec/files/-
        value:
          path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
          owner: "root:root"
          permissions: "0644"
          content: |
            serializeImagePulls: false
            maxParallelImagePulls: 1
      - op: add
        path: /spec/template/spec/joinConfiguration/patches
        value:
          directory: /etc/kubernetes/patches
Create a filter overlay that removes every document except the ClusterClass; it is used to render the default ClusterClass for comparison:
$ cat clusterclass/custom/overlays/filter.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.not_op(overlay.subset({"kind": "ClusterClass"})),expects="0+"
---
#@overlay/remove
$ cd clusterclass
$ ytt -f tkg-vsphere-default-v1.2.0.yaml -f custom/overlays/filter.yaml > default_cc.yaml
$ ytt -f tkg-vsphere-default-v1.2.0.yaml -f custom/ > custom_cc.yaml
$ diff custom_cc.yaml default_cc.yaml
The diff should show only the extended ClusterClass name and the two appended patches.
In the Management Cluster context, install the custom ClusterClass:
$ kubectl apply -f custom_cc.yaml
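To confirm the management cluster accepted the custom ClusterClass, list the ClusterClass objects; both the default and the extended class should appear (namespaces may vary by environment):

```shell
# List ClusterClass objects across all namespaces; expect to see both
# tkg-vsphere-default-v1.2.0 and tkg-vsphere-default-v1.2.0-extended
kubectl get clusterclass -A
```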
$ cat workload-1.yaml
AVI_ENABLE: "false"
CLUSTER_NAME: "custom-classy"
CLUSTER_PLAN: dev
CLUSTER_CIDR: 100.96.0.0/11
CNI: "antrea"
ENABLE_CEIP_PARTICIPATION: "true"
ENABLE_MHC: "true"
INFRASTRUCTURE_PROVIDER: vsphere
SERVICE_CIDR: 100.64.0.0/13
VSPHERE_CONTROL_PLANE_DISK_GIB: "60"
VSPHERE_CONTROL_PLANE_ENDPOINT: "10.x.x.8"
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "8"
VSPHERE_DATASTORE: "/vc/datastore/vsan"
VSPHERE_DATACENTER: "vc"
VSPHERE_FOLDER: "/vc/vm/env"
VSPHERE_NETWORK: "vc-env"
VSPHERE_PASSWORD: "xxxxxxx"
VSPHERE_RESOURCE_POOL: "/vc/host/vcc1/Resources/RPxx"
VSPHERE_SERVER: "vc.example.vmware.com"
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2...0= tkg@jumpbox"
VSPHERE_TLS_THUMBPRINT: "XX:F1:8A:54:D3:D8:03:xx:10:E6:D8:84:CE:DD:9E:71:61:49:20:XX"
VSPHERE_WORKER_DISK_GIB: "60"
VSPHERE_USERNAME: "envuser@localos"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "8"
TKG_HTTP_PROXY_ENABLED: "false"
$ tanzu cluster create --file workload-1.yaml --dry-run -v6 > default_cluster.yaml
Create an overlay that points the generated Cluster at the extended ClusterClass:
$ cat cluster_overlay.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"Cluster"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  topology:
    class: tkg-vsphere-default-v1.2.0-extended
$ ytt -f default_cluster.yaml -f cluster_overlay.yaml > custom_cluster.yaml
$ tanzu cluster create -f custom_cluster.yaml -v 9
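Once creation starts, the Cluster object in the management cluster should already reference the extended class. A quick check (assuming the cluster was created in the default namespace; adjust -n as needed):

```shell
# Should print: tkg-vsphere-default-v1.2.0-extended
kubectl get cluster custom-classy -n default -o jsonpath='{.spec.topology.class}'
```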
Once the cluster is created successfully, SSH to a node and confirm that the additional configuration settings are present:
$ grep ImagePulls /var/lib/kubelet/config.yaml
maxParallelImagePulls: 1
serializeImagePulls: false
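The same check can be run from the jumpbox without an interactive session; on TKG vSphere node images the default node user is capv, and NODE_IP below is a placeholder for an actual node address:

```shell
# Run the grep remotely on a node (NODE_IP is a placeholder)
ssh capv@NODE_IP "grep ImagePulls /var/lib/kubelet/config.yaml"
```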