I0924 21:50:02.070872 1 vimmachine.go:147] "capv-controller-manager/vspheremachine-controller/<CLUSTER-NAMESPACE>/<CLUSTER-NAME>-control-plane-gnx88-htrkg: waiting for ready state"
I0924 21:50:02.071795 1 vimmachine.go:432] "capv-controller-manager/vspheremachine-controller/tkg-system/<MGMT-CLUSTER-NAME>-md-1-infra-g5jrq-txq7c: updated vm" vm="tkg-system/<MGMT-CLUSTER-NAME>-md-1-9fcbn-65kv57b5b-bdbbf"
I0924 21:50:02.071883 1 vimmachine.go:432] "capv-controller-manager/vspheremachine-controller/<CLUSTER-NAMESPACE>/<CLUSTER-NAME>-md-1-infra-kgl22-5w2wm: updated vm" vm="<CLUSTER-NAMESPACE>/<CLUSTER-NAME>-md-1-g96v4-998dfc6c7f9-mqh9k"
Restart the CAPV controller. There should be no impact on existing clusters:
In the TKG management cluster context, run the following command to collect the CAPV controller deployment information, and record the deployment's name and namespace:
kubectl get deployments -A | grep capv
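Depending on how CAPV was installed, the deployment typically lives in either the vmware-system-capv or capv-system namespace. The output should resemble the following (the namespace, ready counts, and age shown here are illustrative, not from a live cluster):

```shell
# List CAPV deployments across all namespaces; note the NAMESPACE and NAME
# columns for use in the restart command below.
kubectl get deployments -A | grep capv
# NAMESPACE            NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
# vmware-system-capv   capv-controller-manager   1/1     1            1           42d
```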
Restart the CAPV controller with one of the following commands, depending on the namespace recorded above:
kubectl rollout restart deployment -n vmware-system-capv capv-controller-manager
OR
kubectl rollout restart deployment -n capv-system capv-controller-manager
Validate that the CAPV controller pods are back up and in Ready status using the following command:
kubectl get pods -A | grep capv
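Instead of polling the pod list, you can also wait on the rollout itself, which blocks until the restarted deployment reports success (a sketch; adjust the namespace to match the one recorded earlier):

```shell
# Block until the restarted deployment finishes rolling out, or fail after 5 minutes
kubectl rollout status deployment capv-controller-manager -n vmware-system-capv --timeout=5m
```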
Validate in the vCenter UI that new VMs are being provisioned, or that existing VMs are being removed and replaced.
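Provisioning progress can also be observed from the CLI by watching the Cluster API Machine objects (a sketch; machine names and phases will differ per cluster):

```shell
# Watch Machine objects in all namespaces; replaced machines should progress
# from Provisioning to Running as the controller reconciles them.
kubectl get machines -A -w
```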