kubectl get pods -n kube-system
...server is likely overloaded
...failed to send out heartbeat on time (exceeded the 100ms timeout for 17.346965944s, to 9a4c6c6012cbdb5a)
Unable to connect to the server: dial tcp: lookup vra-k8s.local on <DNS-IP>:53: no such host
Aria Automation 8.x
Aria Automation Orchestrator 8.x
VMware vRealize Automation 8.x (vRA)
VMware vRealize Orchestrator 8.x (vRO)
Disk or memory pressure exists on one of the appliances in the cluster, causing kube-system pods to be evicted. This places the prelude pods into a Pending state, rendering the system non-functional.
To confirm whether this is the case, review the kubelet journal with the following command:
journalctl -u kubelet
If the journal is very large, pipe the output to grep and search for entries relating to DiskPressure or MemoryPressure. If further log review is needed, add another grep to the pipeline and filter by date in the format "MMM DD" (e.g. "Mar 10"):
journalctl -u kubelet | grep -i pressure
journalctl -u kubelet | grep -i pressure | grep -i "Mar 10"
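As a quick illustration of how the filtering pipeline narrows the output, the sketch below runs the same grep chain against a few fabricated sample journal lines (the hostnames, PIDs, and messages are invented for illustration; on an appliance you would pipe `journalctl -u kubelet` instead of printf):

```shell
# Fabricated sample journal lines standing in for `journalctl -u kubelet` output
printf '%s\n' \
  'Mar 10 02:14:07 vra-node kubelet[1123]: eviction manager: attempting to reclaim ephemeral-storage' \
  'Mar 10 02:14:07 vra-node kubelet[1123]: Node condition DiskPressure is now: True' \
  'Mar 11 09:30:01 vra-node kubelet[1123]: Node condition DiskPressure is now: False' \
  | grep -i pressure | grep -i 'Mar 10'
# Only the "Mar 10 ... DiskPressure" line survives both filters
```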
The maximum supported storage latency is 20 ms per disk I/O operation from any Aria Automation node, per the System Requirements section of the official product documentation (techdocs.broadcom.com).
If disk pressure is the cause, increase the virtual disk size and then expand the partitions on each affected appliance:
vracli disk-mgr resize
Once the pressure condition is resolved, review the kube-system pods and delete any that remain in an Evicted or Error state:
kubectl get pods -n kube-system
kubectl delete pods -n kube-system <podName>
Finally, shut down the services and redeploy:
/opt/scripts/deploy.sh --shutdown
/opt/scripts/deploy.sh