Sometimes after an unexpected reboot of the nodes or the cluster, you will see nodes in a NotReady state. The Machine resources in TKG may also show as failed or remain stuck in the Provisioning state.
2.X
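One way to confirm the symptom is to inspect the STATUS column of `kubectl get nodes` on the affected cluster. The helper below is a hedged sketch: it filters NotReady nodes from `kubectl get nodes` output fed on stdin. The sample output and the column positions are assumptions for illustration; real STATUS values can also be compound (e.g. `Ready,SchedulingDisabled`).

```shell
# Hypothetical filter: print names of nodes whose STATUS is not exactly "Ready".
# Column layout ($1 = NAME, $2 = STATUS) is an assumption about the default
# `kubectl get nodes` table output.
not_ready() {
  awk 'NR>1 && $2!="Ready" {print $1}'
}

# Canned sample for illustration; in practice pipe live output:
#   kubectl get nodes | not_ready
sample='NAME      STATUS     ROLES    AGE   VERSION
node-a    Ready      <none>   10d   v1.24.9
node-b    NotReady   <none>   10d   v1.24.9'
printf '%s\n' "$sample" | not_ready
```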
To resolve the issue, delete the Machine, VSphereMachine, and VSphereVM resources and let CAPI/CAPV recreate them:
kubectl delete machine MACHINENAME -n NAMESPACE
kubectl delete vspheremachine VSPHEREMACHINENAME -n NAMESPACE
kubectl delete vspherevm VSPHEREVMNAME -n NAMESPACE
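Before running the deletions, it can help to print the exact commands for review. The sketch below is a hypothetical dry-run helper that assumes the VSphereMachine and VSphereVM names match the Machine name, which is common but should be verified with `kubectl get` first; the machine and namespace names are placeholders.

```shell
# Hypothetical dry-run helper: print (do not execute) the delete commands for
# one stuck machine, so they can be reviewed before running.
# Assumption: the vspheremachine/vspherevm share the machine's name -- verify
# with `kubectl get vspheremachine,vspherevm -n NAMESPACE` first.
print_cleanup() {
  local machine=$1 ns=$2
  for kind in machine vspheremachine vspherevm; do
    echo "kubectl delete ${kind} ${machine} -n ${ns}"
  done
}

# Placeholder names for illustration:
print_cleanup wld01-md-0-abc123 wld01-ns
```

Piping the output to a shell (or running each printed line by hand) performs the actual cleanup once the names have been confirmed.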
After the resources are deleted from the CLI, the nodes may be recreated automatically, because the MachineHealthCheck is enabled.
To sync the exact replica count for the nodes, edit the cluster configuration from the TCA-M GUI with the correct node replica count and wait for the nodes to be provisioned.
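While waiting for provisioning, progress can be followed with `kubectl get machines -n NAMESPACE -w` until every Machine reports the Running phase. The sketch below counts Running machines from `kubectl get machines` output fed on stdin; the sample output and the column position of PHASE are assumptions for illustration and can differ between CAPI versions.

```shell
# Hypothetical check: count machines whose PHASE column reads "Running" from
# `kubectl get machines` output on stdin. The PHASE position ($3 here) is an
# assumption about the table layout; adjust for your CAPI version.
count_running() {
  awk 'NR>1 && $3=="Running" {n++} END {print n+0}'
}

# Canned sample for illustration; in practice pipe live output:
#   kubectl get machines -n NAMESPACE | count_running
sample='NAME        CLUSTER   PHASE
md-0-abc    wld01     Running
md-0-def    wld01     Provisioning'
printf '%s\n' "$sample" | count_running
```

When the printed count equals the replica count configured in TCA-M, reconciliation is complete.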