When the kubectl get node command is run from the master node of the workload cluster, we observe a newly created control plane node (workload-control-plane-stm1a, AGE 5m in the output below) still reporting the old version:
capv@workload-control-plane-blg5r [ ~ ]$ kubectl get node
NAME                            STATUS   ROLES           AGE   VERSION
workload-np1-66b84fc879-rvhq8   Ready    <none>          27h   v1.23.16+vmware.1
workload-np2-5479c5d85c-7qglk   Ready    <none>          27h   v1.23.16+vmware.1
workload-control-plane-4tvtz    Ready    control-plane   27h   v1.23.16+vmware.1
workload-control-plane-98z9f    Ready    control-plane   27h   v1.23.16+vmware.1
workload-control-plane-blg5r    Ready    control-plane   27h   v1.23.16+vmware.1
workload-control-plane-stm1a    Ready    control-plane   5m    v1.23.16+vmware.1
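To locate the Machine object that backs the stale node before deleting it, the Machine resources can be listed first. This is a minimal sketch, assuming the workload cluster's Machine objects live in a namespace named after the cluster (as in the sample further down) and that kubectl points at the management cluster; the context name is a placeholder:

# Placeholder context name; switch kubectl to the cluster that hosts the Machine objects
kubectl config use-context <MANAGEMENT-CLUSTER-CONTEXT>
# List the Machines for the workload cluster; the stale control plane node appears under the same name
kubectl get machines -n workload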
This issue is addressed in TCA 3.0.
For TCA 2.3, the following workaround permanently resolves the issue.
Run the following command to delete the Machine object that backs the stale node:
kubectl delete machine -n <WORKLOAD-CLUSTER-NAME> <MACHINE-NAME>
Sample:
kubectl delete machine -n workload workload-control-plane-stm1a
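After the Machine is deleted, the control plane controller is expected to recreate it. A minimal verification sketch, assuming the replacement node registers on the workload cluster, is to watch the node list until the new control plane node shows Ready at the expected version:

# Watch the workload cluster's nodes until the replacement control plane node is Ready
kubectl get node -w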