Creating Custom vSphere Machine Templates
For both Worker and Control Plane scaling, we first need to switch to our management cluster context using the commands below:
[root@CentOS7TestVM ~]# kubectl config get-contexts
CURRENT   NAME                         CLUSTER        AUTHINFO             NAMESPACE
*         mgmt-kubevip-###@###-#####   mgmt-kubevip   mgmt-kubevip-admin
[root@CentOS7TestVM ~]# kubectl config use-context mgmt-kubevip-###@####-kubevip
Switched to context "mgmt-kubevip-####@#####-kubevip".
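To confirm that kubectl is now pointing at the management cluster, we can optionally check the current context before continuing (the masked context name below matches the one we just switched to):
[root@CentOS7TestVM ~]# kubectl config current-context
mgmt-kubevip-####@#####-kubevip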
Next, we can run the command below to view the machine templates available in our deployment:
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io
NAME                        AGE
wld-kubevip-control-plane   110m
wld-kubevip-worker          110m
We will export these templates to files so that we can modify them with our own resource specifications:
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io wld-kubevip-control-plane -oyaml > new-wld-kubevip-control-plane.yaml
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io wld-kubevip-worker -oyaml > new-wld-kubevip-worker.yaml
The next step is to remove the object metadata from the exported templates. In this case, we can remove everything under "metadata" except for "name" and "namespace". The "name" field should be given a meaningful value that does not clash with an existing template, such as "wld-kubevip-control-plane-12cpus". With the metadata trimmed, values such as "diskGiB", "memoryMiB" and "numCPUs" can be modified to suit our requirements. Below is an example of the contents of the template before and after these changes have been made:
Before:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","kind":"VSphereMachineTemplate","metadata":{"annotations":{"vmTemplateMoid":"vm-79"},"name":"wld-kubevip-control-plane","namespace":"default"},"spec":{"template":{"spec":{"cloneMode":"fullClone","datacenter":"/Datacenter","datastore":"/Datacenter/datastore/vsanDatastore","diskGiB":60,"folder":"/Datacenter/vm/tkg","memoryMiB":8192,"network":{"devices":[{"dhcp4":true,"networkName":"/Datacenter/network/VLAN-60-PG"}]},"numCPUs":8,"resourcePool":"/Datacenter/host/Cluster/Resources/TKG","server":"192.168.10.125","storagePolicyName":"","template":"/Datacenter/vm/tkg/ubuntu-2004-kube-v1-23-8+vmware-2-tkg-1-85a434f93857371fccb566a414462981"}}}}
    vmTemplateMoid: vm-79
  creationTimestamp: "2023-03-01T09:22:49Z"
  generation: 1
  managedFields:
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:vmTemplateMoid: {}
      f:spec:
        .: {}
        f:template:
          .: {}
          f:spec:
            .: {}
            f:cloneMode: {}
            f:datacenter: {}
            f:datastore: {}
            f:diskGiB: {}
            f:folder: {}
            f:memoryMiB: {}
            f:network:
              .: {}
              f:devices: {}
            f:numCPUs: {}
            f:resourcePool: {}
            f:server: {}
            f:storagePolicyName: {}
            f:template: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-03-01T09:22:49Z"
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"4e3b838b-e964-49fc-9914-46263f46a325"}: {}
    manager: manager
    operation: Update
    time: "2023-03-01T09:22:49Z"
  name: wld-kubevip-control-plane
  namespace: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    name: wld-kubevip
    uid: 4e3b838b-e964-49fc-9914-46263f46a325
  resourceVersion: "5608"
  uid: 2cf5a81f-e8ef-4f1d-9f89-240e1ddb221f
spec:
  template:
    spec:
      cloneMode: fullClone
      datacenter: /Datacenter
      datastore: /Datacenter/datastore/vsanDatastore
      diskGiB: 60
      folder: /Datacenter/vm/tkg
      memoryMiB: 8192
      network:
        devices:
        - dhcp4: true
          networkName: /Datacenter/network/VLAN-60-PG
      numCPUs: 8
      resourcePool: /Datacenter/host/Cluster/Resources/TKG
      server: 192.168.10.125
      storagePolicyName: ""
      template: /Datacenter/vm/tkg/ubuntu-2004-kube-v1-23-8+vmware-2-tkg-1-85a434f93857371fccb566a414462981
After:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: wld-kubevip-control-plane-12cpus
  namespace: default
spec:
  template:
    spec:
      cloneMode: fullClone
      datacenter: /Datacenter
      datastore: /Datacenter/datastore/vsanDatastore
      diskGiB: 60
      folder: /Datacenter/vm/tkg
      memoryMiB: 8192
      network:
        devices:
        - dhcp4: true
          networkName: /Datacenter/network/VLAN-60-PG
      numCPUs: 12
      resourcePool: /Datacenter/host/Cluster/Resources/TKG
      server: 192.168.10.125
      storagePolicyName: ""
      template: /Datacenter/vm/tkg/ubuntu-2004-kube-v1-23-8+vmware-2-tkg-1-85a434f93857371fccb566a414462981
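If you prefer not to trim the metadata by hand, a one-liner along the lines of the following can produce a similar starting point. This is only a sketch and assumes yq v4 is installed; check the deleted fields against your own export, and remember that the "name" and resource values still need to be edited afterwards:
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io wld-kubevip-control-plane -oyaml | yq 'del(.metadata.annotations) | del(.metadata.creationTimestamp) | del(.metadata.generation) | del(.metadata.managedFields) | del(.metadata.ownerReferences) | del(.metadata.resourceVersion) | del(.metadata.uid)' > new-wld-kubevip-control-plane.yaml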
Using kubectl, apply the new templates and verify that they exist:
[root@CentOS7TestVM ~]# kubectl apply -f new-wld-kubevip-worker.yaml
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/wld-kubevip-worker-12cpus created
[root@CentOS7TestVM ~]# kubectl apply -f new-wld-kubevip-control-plane.yaml
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/wld-kubevip-control-plane-12cpus created
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io
NAME                               AGE
wld-kubevip-control-plane          139m
wld-kubevip-control-plane-12cpus   53s
wld-kubevip-worker                 139m
wld-kubevip-worker-12cpus          116s
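As a quick sanity check, we can also confirm that the new control plane template carries the intended CPU count. The jsonpath query below simply reads back the field we modified:
[root@CentOS7TestVM ~]# kubectl get vspheremachinetemplates.infrastructure.cluster.x-k8s.io wld-kubevip-control-plane-12cpus -o jsonpath='{.spec.template.spec.numCPUs}'
12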
Scaling Control Plane Nodes Vertically
The first step is to get the name of our KubeadmControlPlane (KCP) object:
[root@CentOS7TestVM ~]# kubectl get kubeadmcontrolplanes.controlplane.cluster.x-k8s.io
NAME                        CLUSTER       INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
wld-kubevip-control-plane   wld-kubevip   true          true                   1          1       1         0             154m   v1.23.8+vmware.2
To make use of the custom control plane vSphere machine template that we created earlier, we need to update the template name in the object's specification. To do this, edit the object, find the "spec.machineTemplate.infrastructureRef.name" field, and change its value to the name of our custom template.
kubectl edit kubeadmcontrolplanes.controlplane.cluster.x-k8s.io wld-kubevip-control-plane
machineTemplate:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    name: wld-kubevip-control-plane-12cpus
    namespace: default
  metadata: {}
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/wld-kubevip-control-plane edited
After we save the changes, the control plane nodes are rolled out again from the new machine template and pick up the updated resource configuration.
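The rollout can be followed from the management cluster. For example, watching the Cluster API machines shows a replacement control plane machine being provisioned while the old one is removed (illustrative commands; output will vary):
[root@CentOS7TestVM ~]# kubectl get machines -w
[root@CentOS7TestVM ~]# kubectl get kubeadmcontrolplanes.controlplane.cluster.x-k8s.io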
Scaling Worker Nodes Vertically
To edit the workload cluster's machine deployment, we must first get the MachineDeployment object that we want to modify:
[root@CentOS7TestVM tkg]# kubectl get md
NAME               CLUSTER       REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE    VERSION
wld-kubevip-md-0   wld-kubevip   1          1       1         0             Running   7h8m   v1.23.8+vmware.2
Lastly, edit the MachineDeployment and set its ".spec.template.spec.infrastructureRef.name" field to the name of the custom worker vSphere machine template, following the same procedure as for the control plane. After saving these changes, the worker nodes will automatically be replaced with nodes that reflect the newly configured settings.
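For reference, the change looks very similar to the control plane edit. Below is a sketch of the relevant part of the MachineDeployment after pointing it at the custom worker template created earlier; the surrounding YAML is abbreviated and all other fields are left untouched:
[root@CentOS7TestVM tkg]# kubectl edit md wld-kubevip-md-0
spec:
  template:
    spec:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: wld-kubevip-worker-12cpus
machinedeployment.cluster.x-k8s.io/wld-kubevip-md-0 edited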