* Machine <machine-ID>:
  * NodeHealthy: Waiting for a Node with spec.providerID vsphere://<provider ID> to exist
  * Control plane components: Waiting for a Node with spec.providerID vsphere://<provider ID> to exist
  * EtcdMemberHealthy: Waiting for a Node with spec.providerID vsphere://<provider ID> to exist

```
Warning  FailedScheduling  <time in sec>s  default-scheduler  nodes are available: node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable, node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: nodes are available: Preemption is not helpful for scheduling.
```
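These conditions are reported on the Cluster API Machine object. A minimal sketch of how to surface them, assuming kubectl access to the Supervisor cluster where the Machine objects live; `<machine-ID>` and `<namespace>` are placeholders:

```
# Inspect the Machine conditions reported by CAPI (placeholders: <machine-ID>, <namespace>).
kubectl describe machine <machine-ID> -n <namespace>
# The Conditions section should show NodeHealthy / EtcdMemberHealthy waiting on spec.providerID.
```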
The spec.providerID field on the node has not yet been set. The vSphere CPI is responsible for setting the provider ID during node initialization: the CPI writes the provider ID into the `node` object, after which CAPI moves the node to `Running`. The taint is removed once the provider ID is successfully associated with the node.
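To check whether the provider ID has been set, the node can be queried directly; a minimal sketch, assuming kubectl access to the guest cluster, with `<node-name>` as a placeholder:

```
# Print spec.providerID for a node; empty output means the CPI has not initialized it yet.
kubectl get node <node-name> -o jsonpath='{.spec.providerID}'
# Expected once initialized: vsphere://<provider ID>
```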
NOTE: It is not recommended to remove the taint or to append the provider ID manually.
The vSphere Cloud Provider Interface runs as a pod inside the guest cluster. Below is how it looks under normal circumstances. Check inside the guest cluster for any issues with the Cloud Provider Interface assigning a provider ID to the node.
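For reference, a healthy state would resemble the following (illustrative output; exact counts and age will vary):

```
root@<Node-ID> [ ~ ]# k get deployment -n vmware-system-cloud-provider
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
guest-cluster-cloud-provider   1/1     1            1           <age>
```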
This also happens when the Cloud Provider Interface pods are missing inside the guest cluster. In the example below, the guest-cluster-cloud-provider deployment is scaled down to zero:

```
root@<Node-ID> [ ~ ]# k get deployment -A | grep -i cloud
vmware-system-cloud-provider   guest-cluster-cloud-provider   0/0
```
To fix the issue, scale the deployment back up:
```
root@<Node-ID> [ ~ ]# k scale deployment guest-cluster-cloud-provider -n vmware-system-cloud-provider --replicas=1
deployment/guest-cluster-cloud-provider scaled
```
```
root@<Node-ID> [ ~ ]# k get pods -A | grep -i cloud
vmware-system-cloud-provider   guest-cluster-cloud-provider-<ID>   1/1   Running   0
```
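Once the CPI pod is running again, the node should receive its provider ID and the scheduler taint should be cleared. An illustrative verification, assuming kubectl access to the guest cluster:

```
# PROVIDER-ID should show vsphere://<provider ID>, and the
# node.cloudprovider.kubernetes.io/uninitialized taint should no longer appear.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER-ID:.spec.providerID,TAINTS:.spec.taints
```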