Kube-apiserver pod stuck in Unknown State on Tanzu workload cluster

Article ID: 425475

Products

VMware vSphere Kubernetes Service

Issue/Introduction

The kube-apiserver pod is stuck in an Unknown state, even after the pod is deleted:

kube-apiserver               1/1     Running   
kube-apiserver               0/1     Unknown   
kube-apiserver               1/1     Running   

The kubelet logs on the affected control plane node may report a MountVolume.SetUp failure for the "audit" volume, similar to:

Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/host-path/-audit") pod "kube-apiserver" (UID: "") : hostPath type check failed: /var/log/kubernetes is not a directory
 tkc-workload kubelet[1233432]: E1217 12:43:12.676310 1233432 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kube-apiserver" podUID=""
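
To confirm the symptom, the pod status and the mount failure events can be checked with standard kubectl commands; the pod name below is a placeholder for the actual kube-apiserver pod name on the affected control plane node:

kubectl -n kube-system get pods | grep kube-apiserver
kubectl -n kube-system describe pod <kube-apiserver podname>   # the Events section shows the FailedMount / MountVolume.SetUp error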




Environment

VMware vSphere Kubernetes Service

Cause

If a system-related issue or a manual creation/recreation has left the expected host directory missing, this behavior is by design:

The type: Directory option enforces that the path must already be a directory on the host; it will not create the directory for you.

If you want Kubernetes to create the directory automatically if it does not exist, you must use type: DirectoryOrCreate instead.
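
For reference, the audit volume in the kube-apiserver static pod manifest is defined as a hostPath volume similar to the sketch below; the exact manifest on a given cluster may differ, and the snippet is only illustrative of the difference between the two types:

volumes:
  - name: audit
    hostPath:
      path: /var/log/kubernetes
      type: Directory          # mount fails if /var/log/kubernetes does not already exist as a directory
# With type: DirectoryOrCreate, the kubelet would create the path automatically when it is missing:
#     type: DirectoryOrCreate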

Resolution

  1. SSH to the guest/workload cluster control plane node running the affected kube-apiserver pod.
  2. Check the directory and compare it to a working control plane node to see which directory is missing:
    cd /var/log
    ls -lah
  3. Create the missing directory:
    sudo mkdir <missing directory>
  4. Restart the pod by deleting it with the command kubectl delete pod <apiserver podname> -n kube-system (see the example sequence below).

    Wait for the pod to be initialized and running, then validate that the kube-apiserver pod is in a Running state.
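
As a sketch of the full sequence, assuming the missing path is /var/log/kubernetes (taken from the error message above); replace the pod name placeholder with the actual kube-apiserver pod name:

# On the affected control plane node (over SSH):
ls -lah /var/log                      # confirm whether the kubernetes directory is present
sudo mkdir /var/log/kubernetes        # create the missing directory reported in the error

# From a session with kubectl access to the workload cluster:
kubectl -n kube-system delete pod <apiserver podname>    # the kubelet recreates the static pod
kubectl -n kube-system get pods | grep kube-apiserver    # verify the pod returns to Running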