A general system error occurred. Error message: context deadline exceeded. No node on Supervisor 'supervisor##' is accepting vSphere Pods. See Node specific messages for more details.

This issue has been observed on vCenter Server 8.0 U3.
On the ESXi host, in /var/run/log/spherelet.log, the following may be observed:
[YYYY-MM-DDTHH:MM:SS] No(13) spherelet[3272489]: E0115 07:00:14.977112 3272467 reflector.go:147] k8s.io/client-go/informers/factory.go:154: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.##.##.##:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosi6227.de-prod.dk&limit=500&resourceVersion=0": dial tcp 10.##.##.###:6443: i/o timeout

[YYYY-MM-DDTHH:MM:SS] No(13) spherelet[3272489]: W0115 07:00:14.976144 3272467 reflector.go:539] k8s.io/client-go/informers/factory.go:154: failed to list *v1.Service: Get "https://10.##.##.###:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.##.##.###:6443: i/o timeout

[YYYY-MM-DDTHH:MM:SS] No(13) spherelet[3272489]: Trace[203089729]: ---"Objects listed" error:Get "https://10.##.##.###:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.x.x.x:6443: i/o timeout 30006ms (07:00:14.976)
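The spherelet log can also be followed live on the affected host to confirm whether the timeouts are still occurring (same log path as above; tail is available in the ESXi shell):

$ tail -f /var/run/log/spherelet.log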
In the Supervisor Control Plane VM logs, located in the WCP log bundle at /var/log/pods/kube-system_kube-controller-manager-#####/kube-controller-manager/0.log, the following may be observed:
[YYYY-MM-DDTHH:MM:SS] stderr F E0109 07:54:27.349582 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://10.##.##.###:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 10.##.##.###:6443: connect: connection refused
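When working from an extracted WCP log bundle, a recursive grep from the bundle root is a quick way to surface these errors across all control plane pod logs (a sketch; the bundle layout is assumed to match the path above):

$ grep -rE "connection refused|i/o timeout" var/log/pods/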
A connection between the Supervisor and the ESXi host over TCP port 6443 cannot be established.
To complete Supervisor deployment, bidirectional network connectivity is required between the ESXi hosts and the Supervisor control plane VMs.

Check port connectivity between ESXi and the Supervisor:
1. Verify that the ESXi host can connect to port 6443 of the Supervisor floating IP (FIP):
$ openssl s_client -connect <Supervisor-FIP>:6443
If the connection is successful, the first line of the output begins with CONNECTED.
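For reference, a successful connection typically begins with output similar to the following (illustrative only; certificate details will vary by environment):

CONNECTED(00000003)

If the port is unreachable, the command instead hangs and eventually fails with an error such as "connect: Connection timed out".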
NOTE: To retrieve the Supervisor FIP in vCenter Server:
root@vcenter [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py
Read key from file
Connected to PSQL
Cluster: domain-c#: <supervisor cluster domain id>
IP: <Supervisor FIP>
PWD: <password>
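If shell access to a Supervisor control plane VM is needed for the next check, the IP and PWD values returned above can be used to log in over SSH (the IP shown is the FIP):

$ ssh root@<Supervisor FIP>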
2. Verify that both the management and workload networks of the Supervisor nodes can connect to port 10250 on the ESXi host, as shown in the check below. For related symptoms, see "vSphere Kubernetes Supervisor ESXi Host with Kubernetes status showing Node is not healthy and is not accepting pods. Details Kubelet stopped posting node status".
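A minimal check, assuming SSH access to the Supervisor control plane VM (see the NOTE above) and substituting the real ESXi host address for the <ESXi-Management-IP> placeholder:

$ openssl s_client -connect <ESXi-Management-IP>:10250

As in step 1, a first line of CONNECTED indicates that the port is reachable.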
If connectivity between ESXi and the Supervisor fails, the underlying networking (routing, firewalls, and switch configuration) needs to be checked. For other networking requirements, see Additional Information.
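On the ESXi host, the following standard commands can help narrow down where the path breaks (a sketch; adjust addresses to the environment):

$ vmkping <Supervisor-FIP>                      # basic reachability over the vmkernel stack
$ esxcli network ip route ipv4 list             # confirm a route exists to the Supervisor network
$ esxcli network firewall ruleset list          # confirm the ESXi firewall allows the traffic
$ esxcli network ip connection list | grep 6443 # look for attempted connections on port 6443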
The components and network paths listed below must be fully installed and configured before enabling the Supervisor.
| Port | Protocol | Source | Destination | Required? | Notes |
|---|---|---|---|---|---|
| 53 | UDP, TCP | ESXi Management IP | DNS | Mandatory | Must be enabled during initial infrastructure setup. |
| 123 | UDP | ESXi Management IP | NTP | Mandatory | Must be enabled during initial infrastructure setup. |
| 6443 | TCP | ESXi Management IP | Supervisor Management IP Pool (VIP)* | Mandatory | Supervisor Management IP Pool (VIP) is the floating IP in the Supervisor Management IP Pool. |
| 10250 | TCP | Supervisor Management and Primary Workload Network IP Pools | ESXi Management IP | Mandatory | Spherelet endpoint on the ESXi host (Supervisor Service); matches step 2 above. |
| 443 | TCP | vCenter | Internet | Optional | Egress Internet traffic. |
| 443, 902, 9080 | TCP | vCenter | ESXi Management IP | Mandatory | Must be enabled during initial infrastructure setup. |
| 443, 6443 | TCP | vCenter | Supervisor Management IP Pool | Mandatory | Supervisor Service. |
| 22, 443, 5000, 6443 | TCP | vCenter | Supervisor Management IP Pool (VIP)* | Mandatory | Supervisor Management IP Pool (VIP) is the floating IP in the Supervisor Management IP Pool. |
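To check several of the mandatory TCP ports at once from a Linux jump host (for example, the vCenter-to-VIP ports in the last row above), a small loop over bash's built-in /dev/tcp can be used. A minimal sketch; SUPERVISOR_FIP is a placeholder to be replaced with the FIP returned by decryptK8Pwd.py:

SUPERVISOR_FIP=10.0.0.10   # placeholder: substitute the real FIP
for port in 22 443 5000 6443; do
  # /dev/tcp/<host>/<port> is a bash built-in; timeout bounds each attempt at 3 seconds
  if timeout 3 bash -c "exec 3<>/dev/tcp/${SUPERVISOR_FIP}/${port}"; then
    echo "TCP ${port}: reachable"
  else
    echo "TCP ${port}: unreachable"
  fi
done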