When deploying a Pod with a Persistent Volume, the Pod is stuck in the ContainerCreating state.
$ kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
pod/test-nginx   0/1     ContainerCreating   0          2m3s
The PV and PVC are in Bound status:
$ kubectl get pvc
NAME         STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nginx   Bound    pvc-xxxxxxx   1Gi        RWO            default        2m18s
$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-xxxxxxx   1Gi        RWO            Retain           Bound    default/test-nginx   default                 2m57s
$ kubectl describe pod/test-nginx
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        4m26s                 default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
  Normal   Scheduled               4m25s                 default-scheduler        Successfully assigned default/test-nginx to test-restore-md-0-xxxxx-xxxxxxxx-xxxxx
  Normal   SuccessfulAttachVolume  4m13s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-xxxxxxx"
  Warning  FailedMount             8s (x2 over 2m23s)    kubelet                  Unable to attach or mount volumes: unmounted volumes=[www], unattached volumes=[www kube-api-access-92dmw]: timed out waiting for the condition
  Warning  FailedMount             5s (x10 over 4m15s)   kubelet                  MountVolume.MountDevice failed for volume "pvc-xxxxxxx" : rpc error: code = NotFound desc = disk: 6000c294f746b8b2e5412b78cb4xxxxx not attached to node
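The final FailedMount event shows that the CSI node driver cannot find the backing disk on the worker node. As a quick check, you can look for the disk by its UUID on the node the Pod was scheduled to (a sketch; the Pod name, node name, and UUID are taken from the output above):

$ kubectl get pod test-nginx -o wide
# SSH to the worker node reported in the Scheduled event, then search for the disk by UUID:
$ ls -l /dev/disk/by-id/ | grep -i 6000c294f746b8b2e5412b78cb4
# No matching entry indicates the disk UUID is not exposed to the guest OS.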
TKC
TKGm
The kubelet logs report that it cannot find the disk with UUID "6000c294f746b8b2e5412b78cb4xxxxx".
journalctl -u kubelet:
Warning FailedMount 93s (x23 over 32m) kubelet MountVolume.MountDevice failed for volume "pvc-xxxxx" : rpc error: code = NotFound desc = disk: 6000c294f746b8b2e5412b78cb4xxxxx not attached to node
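To locate these messages on the affected worker node, the kubelet journal can be filtered for the failing mount (a sketch; the grep patterns are just examples):

$ journalctl -u kubelet --no-pager | grep -i "MountVolume.MountDevice"
$ journalctl -u kubelet --no-pager | grep -i "not attached to node"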
The disk UUID 6000c294f746b8b2e5412b78cb4xxxxx was not generated in the worker (agent) node OS under /dev/disk/by-id/* because the disk.EnableUUID parameter was removed from the VM's .vmx file.
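This can be confirmed by inspecting the VM configuration. For example, with SSH access to the ESXi host, grep the .vmx file directly (a sketch; the datastore and VM folder paths are placeholders):

# On the ESXi host running the worker VM (paths are placeholders):
$ grep -i disk.EnableUUID /vmfs/volumes/<datastore>/<vm-folder>/<vm-name>.vmx
# No output means the parameter is missing from the VM configuration.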
1. Validate in the worker VM OS whether disk UUIDs exist under /dev/disk/by-id/.
2. If disk UUIDs do not exist under /dev/disk/by-id/, power off the VM, set disk.EnableUUID to TRUE in the .vmx file, and power the VM back on (see the sketch after this list).
3. Validate that disk UUIDs have been generated under /dev/disk/by-id/.
4. Redeploy the Pod and validate that the Pod is now able to mount the PV.
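The same steps can also be performed with govc against vCenter instead of editing the .vmx file by hand (a minimal sketch; the worker VM name is a placeholder and it assumes govc is already configured with GOVC_URL and credentials):

# Power off the affected worker VM, add the parameter, and power it back on
$ govc vm.power -off <worker-vm-name>
$ govc vm.change -vm <worker-vm-name> -e disk.enableUUID=TRUE
$ govc vm.power -on <worker-vm-name>
# Inside the worker VM OS, confirm the disk UUIDs are now generated
$ ls -l /dev/disk/by-id/
# Redeploy the Pod (re-apply its manifest, or delete it so its owning controller recreates it)
$ kubectl delete pod test-nginx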
disk.EnableUUID must be set to TRUE for Tanzu Kubernetes VMs.