The controlPlaneRef and infrastructureRef specs are removed from the Cluster object. Any further edits to the cluster do not take effect, as the nodes fail to get reconciled.

kubectl get kcp -n <namespace_name>
NAMESPACE          NAME              CLUSTER      INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
<namespace_name>   cluster123-KCP1   cluster123   true          true                   3          -       3                       2d    v1.30.1+vmware.1-fips
<namespace_name>   cluster123-KCP2   cluster123   true          true                   3          3       3         0             45d   v1.30.1+vmware.1-fips
kubectl get vspherecluster -n <namespace_name>

NAMESPACE          NAME                         AGE
<namespace_name>   cluster123-vspherecluster2   45d
<namespace_name>   cluster123-vspherecluster1   2d
Here, cluster123-KCP1 is the newer KCP object and cluster123-vspherecluster1 is the newer vspherecluster object; both were created unexpectedly in the cluster.
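To confirm which objects are the newer duplicates without relying on the AGE column, the objects can be listed in creation order. A minimal sketch using the standard kubectl --sort-by flag:

```shell
# List KCP and vspherecluster objects ordered oldest-first by creation time;
# the last entry in each list is the newer, duplicate object.
kubectl get kcp -n <namespace_name> --sort-by=.metadata.creationTimestamp
kubectl get vspherecluster -n <namespace_name> --sort-by=.metadata.creationTimestamp
```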
The capi-kubeadm-control-plane-controller-manager pod will show the following log trace:
EMMDD HH:MM:SS 1 controller.go:302] "KCP cannot reconcile" err="not all control plane machines are owned by this KubeadmControlPlane, refusing to operate in mixed management mode" controller="kubeadmcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="KubeadmControlPlane" KubeadmControlPlane="<Namespace_Name>/<KCP_Object_1>" namespace="<Namespace_Name>" name="<KCP_Object1>" reconcileID="j09axxxx-xxxx-4xxx-b7xxxxxxxxxxn89" Cluster="<Namespace_Name>/<Cluster_Name>"
IMMDD HH:MM:SS 1 controller.go:346] "Reconcile KubeadmControlPlane" controller="kubeadmcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="KubeadmControlPlane" KubeadmControlPlane="<Namespace_Name>/<KCP_Object_2>" namespace="<Namespace_Name>" name="<KCP_Object2>" reconcileID="k17axxxx-xxxx-3xxx-l3xxxxxxxxxxn67" Cluster="<Namespace_Name>/<Cluster_Name>"
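These messages can be pulled from the controller manager directly. A sketch, assuming you first locate the pod; the namespace hosting the CAPI controllers varies by release, so <capi_namespace> and <pod_name> below are placeholders, not fixed values:

```shell
# Find the controller pod (the namespace hosting CAPI controllers varies by
# environment and release; note the namespace from the output).
kubectl get pods -A | grep capi-kubeadm-control-plane-controller-manager

# Inspect its logs and filter for the mixed-management error shown above.
kubectl logs -n <capi_namespace> <pod_name> | grep "mixed management mode"
```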
The control plane VMs still reference the older KCP object, cluster123-KCP2:

virtualmachine.vmoperator.vmware.com/cluster123-KCP2-abcd1   PoweredOn   guaranteed-xlarge   vmi-54xxxxxxxxxxxxx7   <IP_Address_CP_Node>   45d
virtualmachine.vmoperator.vmware.com/cluster123-KCP2-abcd2   PoweredOn   guaranteed-xlarge   vmi-54xxxxxxxxxxxxx7   <IP_Address_CP_Node>   45d
virtualmachine.vmoperator.vmware.com/cluster123-KCP2-abcd3   PoweredOn   guaranteed-xlarge   vmi-54xxxxxxxxxxxxx7   <IP_Address_CP_Node>   45d
controlPlaneRef:
  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: KubeadmControlPlane
  name: cluster123-KCP1
  namespace: <namespace_name>
infrastructureRef:
  apiVersion: vmware.infrastructure.cluster.x-k8s.io/v1beta1
  kind: VSphereCluster
  name: cluster123-vspherecluster1
  namespace: <namespace_name>
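To check which objects the Cluster currently references without opening the full YAML, the two refs can be read with a JSONPath query (the field paths follow the Cluster API v1beta1 schema):

```shell
# Print the names of the KCP and infrastructure objects the Cluster points at.
kubectl get cluster <cluster_name> -n <namespace_name> \
  -o jsonpath='{.spec.controlPlaneRef.name}{"\n"}{.spec.infrastructureRef.name}{"\n"}'
```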
VMware vSphere with Tanzu
This problem can be caused if the controlPlaneRef and infrastructureRef specs are removed manually from the Cluster object, or if the cluster is updated by re-applying the local YAML file from which it was originally created. The local YAML file does not contain the controlPlaneRef and infrastructureRef specs, so re-applying it can lead to this issue.
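Before re-applying a locally saved manifest, a server-side diff shows whether fields such as the refs would be removed. A sketch using the standard kubectl diff subcommand; the filename cluster.yaml is an assumed example:

```shell
# Compare the local manifest against the live object before applying it.
# Removal of .spec.controlPlaneRef / .spec.infrastructureRef will show up here.
kubectl diff -f cluster.yaml
```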
Take a backup of the Cluster, KCP, and vspherecluster objects:

kubectl get cluster <cluster_name> -n <namespace> -o yaml > /root/TKC.yaml
kubectl get kcp cluster123-KCP1 -n <namespace> -o yaml > /root/KCP.yaml
kubectl get vspherecluster cluster123-vspherecluster1 -n <namespace> -o yaml > /root/vspherecluster.yaml

Edit the cluster and set the value of .spec.controlPlaneRef to point to the older KCP object and the value of .spec.infrastructureRef to the older vspherecluster object. In this case, the older KCP object is 'cluster123-KCP2' and the older vspherecluster object is 'cluster123-vspherecluster2'.

kubectl edit cluster <cluster_name> -n <namespace>

Delete the newer duplicate objects:

kubectl delete kcp cluster123-KCP1 -n <namespace_name>
kubectl delete vspherecluster cluster123-vspherecluster1 -n <namespace_name>
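After the cleanup, the following checks (a suggested verification sketch, not part of the original procedure) confirm that only the older objects remain and that the control plane reconciles again:

```shell
# Only the older objects (cluster123-KCP2 / cluster123-vspherecluster2) should remain.
kubectl get kcp -n <namespace_name>
kubectl get vspherecluster -n <namespace_name>

# Watch until READY/UPDATED/UNAVAILABLE converge to 3/3/0 for a three-node control plane.
kubectl get kcp cluster123-KCP2 -n <namespace_name> -w
```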