Pods do not start on Aria Automation node

Article ID: 389553


Products

VMware Aria Suite

Issue/Introduction

The Aria Automation UI may be available intermittently or not at all, depending on whether the deployment is a three-node cluster or a single node. In a clustered deployment behind a load balancer, the behavior may also depend on how many nodes are affected.

Some interactions, such as deployments through the UI, may still work, while tools that use the API report connection errors.
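
One way to correlate these symptoms with pod state is to list the pods in the prelude namespace on each node (the same command used in the Resolution below). The -o wide flag is a standard kubectl option, added here only to show which node hosts each pod:

kubectl get pods -n prelude -o wide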

Environment

Aria Automation 8.x

Cause

Timeouts occur when the kubelet and the Docker client attempt to connect to the Docker daemon. This can be confirmed by the following entries in `journalctl`:

Feb 25 12:45:47 <node_name> kubelet[1890547]: E0225 12:45:47.593202 1890547 docker_service.go:265] Failed to execute Info() call to the Docker client: operation timeout: context deadline exceeded
Feb 25 12:47:47 <node_name> kubelet[1890547]: F0225 12:47:47.597520 1890547 server.go:269] failed to run Kubelet: failed to get docker info: operation timeout: context deadline exceeded

The accompanying kubelet stack trace and the subsequent service restarts can also be seen in the journal to confirm this issue:

Feb 25 12:47:47 <node_name> kubelet[1890547]: F0225 12:47:47.597520 1890547 server.go:269] failed to run Kubelet: failed to get docker info: operation timeout: context deadline exceeded
Feb 25 12:47:47 <node_name> kubelet[1890547]: goroutine 1 [running]:
Feb 25 12:47:47 <node_name> kubelet[1890547]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
Feb 25 12:47:47 <node_name> kubelet[1890547]:         /build/mts/release/bora-19631864/cayman_kubernetes/kubernetes/src/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0x8a
Feb 25 12:47:47 <node_name> kubelet[1890547]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6ee48e0, 0x3, {0x0, 0x0}, 0xc0009ba2a0, {0x5801a35, 0xc0000a2c00}, 0xc000480430, 0x0)
Feb 25 12:47:47 <node_name> kubelet[1890547]:         /build/mts/release/bora-19631864/cayman_kubernetes/kubernetes/src/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x569
Feb 25 12:47:47 <node_name> kubelet[1890547]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x5, 0x61eecfd8, {0x0, 0x0}, {0x0, 0x0}, 0xc000bbfcd8, {0xc000480430, 0x1, 0x1})
Feb 25 12:47:47 <node_name> kubelet[1890547]:         /build/mts/release/bora-19631864/cayman_kubernetes/kubernetes/src/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x191
Feb 25 12:47:47 <node_name> kubelet[1890547]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
Feb 25 12:47:47 <node_name> kubelet[1890547]:         /build/mts/release/bora-19631864/cayman_kubernetes/kubernetes/src/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
Feb 25 12:47:47 <node_name> kubelet[1890547]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)

 

Feb 25 12:53:13 <node_name> systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 25 12:53:13 <node_name> systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 25 12:53:13 <node_name> systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Feb 25 12:53:13 <node_name> kubelet[1908513]: ++ uname -n
Feb 25 12:53:13 <node_name> kubelet[1908512]: + node_name=<node_name>
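
To check for these entries and for daemon responsiveness directly, the following commands can be used. This is a minimal diagnostic sketch: journalctl -u kubelet filters the journal to the kubelet unit, and docker info issues the same Info() call that is timing out in the logs above, so it is expected to hang or fail with a similar timeout while the daemon is unresponsive.

# Kubelet journal entries since the last boot (adjust the window as needed)
journalctl -u kubelet -b --no-pager | grep -i "context deadline exceeded"

# The Info() call the kubelet is failing on
docker info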

Resolution

  1. Validate the status of the kubelet service:
    systemctl status kubelet
  2. If it is not running, start it:
    systemctl start kubelet
  3. Once started, monitor the pods; they should begin to start automatically:
    kubectl get pods -n prelude
  4. Once all pods are in either a "Running" or "Completed" state, the environment should be usable again. A consolidated sketch of these steps follows this list.
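
The steps above can be combined into a short shell sketch run on the affected node. This assumes root access and a working kubectl context on the appliance; the 15-second polling interval is arbitrary:

#!/bin/bash
# Minimal sketch of the resolution steps above.

# Start kubelet if it is not already running.
if ! systemctl is-active --quiet kubelet; then
    systemctl start kubelet
fi

# Poll pods in the prelude namespace until all are Running or Completed.
while kubectl get pods -n prelude --no-headers | grep -Ev 'Running|Completed' | grep -q .; do
    kubectl get pods -n prelude
    sleep 15
done
echo "All pods in the prelude namespace are Running or Completed."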