Symptoms:
The vSphere Client (Web GUI) shows: "Node is not healthy and is not accepting pods. Details Kubelet stopped posting node status."
kubectl get nodes shows:

kubectl get nodes
NAME                                STATUS     ROLES    AGE     VERSION
XXXXXXXXXXXXX(Node-UUID)XXXXXXXXX   Ready      master   3d10h   v1.19.1+wcp.2
XXXXXXXXXXXXX(Node-UUID)XXXXXXXXX   Ready      master   3d10h   v1.19.1+wcp.2
XXXXXXXXXXXXX(Node-UUID)XXXXXXXXX   Ready      master   3d10h   v1.19.1+wcp.2
xxx-vmkernel-16.xxx.xxx.com         NotReady   agent    3d10h   v1.19.1-sph-496a80d
xxx-vmkernel-16.xxx.xxx.com         Ready      agent    3d10h   v1.19.1-sph-496a80d
xxx-vmkernel-16.xxx.xxx.com         NotReady   agent    3d10h   v1.19.1-sph-496a80d
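To quickly identify which nodes are unhealthy, the output can be filtered on the STATUS column (a minimal sketch; assumes kubectl is pointed at the Supervisor Cluster context):

# List only the nodes reporting NotReady (column 2 is STATUS)
kubectl get nodes --no-headers | awk '$2 == "NotReady" {print $1}'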
On the API server:
/var/log/vmware/fluentbit/consolidated.log:systemd.kubelet.service: [1610083381.475725, {"hostname":"XXXXXXXXXXXXX(Node-UUID)XXXXXXXXX","unit":"kubelet","pid":"425","exe":"/opt/kubernetes/k8s-1.19/bin/kubelet","cmdline":"/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --pod-infra-container-image=vmware/pause:1.19.0 --node-ip=10.244.0.2","log":"E0108 05:23:01.475657 425 remote_runtime.go:389] ExecSync 08844f408feadfa658a3f9eaff8e2ad2aed5c9c583e2ff5c6063614aaa397df4 '/bin/sh -c extender_reply=$(curl -k -s -o /dev/null -w %{http_code} https://x.x.x.x:12345/healthz); if [[ \"$extender_reply\" -lt 200 || \"$extender_reply\" -ge 400 ]]; then exit 1; fi; scheduler_healthy=false; for (( i=0; i<8; i++ )); do scheduler_reply=$(curl -k -s -o /dev/null -w %{http_code} http://x.x.x.x:10251/healthz); if [[ \"$scheduler_reply\" -ge 200 && \"$scheduler_reply\" -lt 400 ]]; then scheduler_healthy=true; break; fi; sleep 10; done; if [[ \"$scheduler_healthy\" = false ]]; then exit 1; fi;' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"f7885c0ce64223d8951fc2a420b5aa1c6f06683fdff12692d2bc64210a9409e5\": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown"}]
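For readability, this is the liveness-probe script embedded in the ExecSync call above, reformatted as a standalone shell script (the x.x.x.x addresses and ports are the redacted placeholders carried over from the log). Note that the probe does not fail because an endpoint is unhealthy; the exec itself fails because /bin/sh cannot be found in the container ("stat /bin/sh: no such file or directory"):

# Reformatted copy of the probe from the log entry above.
# Check the scheduler-extender health endpoint first ...
extender_reply=$(curl -k -s -o /dev/null -w '%{http_code}' https://x.x.x.x:12345/healthz)
if [[ "$extender_reply" -lt 200 || "$extender_reply" -ge 400 ]]; then exit 1; fi
# ... then poll the kube-scheduler health endpoint, up to 8 tries, 10 seconds apart.
scheduler_healthy=false
for (( i=0; i<8; i++ )); do
  scheduler_reply=$(curl -k -s -o /dev/null -w '%{http_code}' http://x.x.x.x:10251/healthz)
  if [[ "$scheduler_reply" -ge 200 && "$scheduler_reply" -lt 400 ]]; then
    scheduler_healthy=true
    break
  fi
  sleep 10
done
if [[ "$scheduler_healthy" = false ]]; then exit 1; fi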
In /var/log/vmware/wcp/wcpsvc.log:
vcenter.wcp.node.kubenotready","localized":{"OPTIONAL":"Node is not healthy and is not accepting pods. Details Kubelet stopped posting node status.."},"params":{"OPTIONAL":null}}}}},"severity":"ERROR"}}},{"STRUCTURE":{"com.vmware.vcenter.namespace_management.clusters.message":{"details":{"OPTIONAL":{"STRUCTURE":{"com.vmware.vapi.std.localizable_message":{"args":["Kubelet stopped posting node status."],"default_message":"Node is not healthy and is not accepting pods. Details Kubelet stopped posting node status..","id":"vcenter.wcp.node.kubenotready","localized":{"OPTIONAL":"Node is not healthy and is not accepting pods. Details Kubelet stopped posting node status.."}
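To confirm the same error from the WCP service side, the log can be filtered by the message ID shown above (a simple sketch using the path quoted in this article):

# Show all node-health errors raised by the WCP service
grep 'vcenter.wcp.node.kubenotready' /var/log/vmware/wcp/wcpsvc.log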
On the ESXi host, the spherelet service is not running:
[root@xxxx-xxxxxxx-16:/var/log] /etc/init.d/spherelet status
XXXX-XX-XX XX:XX:XX,303 init.d/spherelet Log fetcher support: True
XXXX-XX-XX XX:XX:XX,330 init.d/spherelet spherelet is not running
Environment:
VMware vSphere 7.0.x
Resolution:
This is a known issue in vSphere 7.0 and is resolved in vSphere 7.0 Update 2 (U2).
Upgrade to vSphere 7.0 U2.
Workaround:
1. Log in to each ESXi host in the cluster and run /etc/init.d/spherelet status.
2. If the spherelet service is not running, run /etc/init.d/spherelet start (see the sketch after this list for checking all hosts at once).
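For clusters with many hosts, the two workaround steps can be scripted over SSH. This is a hedged sketch, not an official tool: the host list is hypothetical, SSH must be enabled on the ESXi hosts, and the grep keys off the "is not running" status output shown above.

# Hypothetical host list; replace with the ESXi hosts in the affected cluster.
HOSTS="esx01.example.com esx02.example.com esx03.example.com"
for h in $HOSTS; do
  echo "== $h =="
  # Start spherelet only where the status output reports it is not running.
  ssh root@"$h" 'if /etc/init.d/spherelet status 2>&1 | grep -q "is not running"; then /etc/init.d/spherelet start; fi'
done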