Pod Scaling Failure Due to Insufficient CPU and Resource Allocation


Article ID: 381977


Products

Tanzu Kubernetes Runtime

vSphere with Tanzu

VMware vSphere 7.0 with Tanzu

Issue/Introduction

Symptoms:

  • Scaling up pods may fail with the following error message:

   Warning  FailedScheduling  64s   default-scheduler  0/10 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 7 Insufficient cpu.

  • kubectl top node may show low utilization on the worker nodes even though scheduling fails (see the example after this list).

     
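To confirm the symptom, check the live resource usage that metrics-server reports. A minimal sketch (requires metrics-server to be running in the cluster):

   # Live CPU/memory usage per node
   kubectl top node

   # Live usage per pod across all namespaces, to see which workloads consume resources
   kubectl top pod -A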

Environment

vSphere with Tanzu 7.x

vSphere with Tanzu 8.x

Cause

  • Worker Node Overload: The worker node may already be running too many pods, consuming its available resources (such as CPU and memory) and preventing the scheduler from placing new pods. Check the worker node's allocated resources, as shown in the example after this list.
  • Insufficient Resources: The resources allocated to the worker node may not be sufficient for additional pods. Note that resource usage and resource allocation are different: even if CPU usage appears low on the worker node, the CPU requests of the pods already scheduled there may have consumed its allocatable capacity, leaving no free CPU to schedule new pods.

Both the allocated and the used resources can be checked using kubectl describe node <node-name>
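The Allocated resources section of the kubectl describe node output summarizes how much of the node's allocatable CPU and memory is already reserved by pod requests. The figures below are purely illustrative:

   kubectl describe node <node-name>
   ...
   Allocated resources:
     (Total limits may be over 100 percent, i.e., overcommitted.)
     Resource           Requests      Limits
     --------           --------      ------
     cpu                3800m (95%)   4 (100%)
     memory             6Gi (80%)     8Gi (106%)

A node whose cpu Requests are close to 100% cannot accept new pods that request CPU, even if kubectl top node reports low actual usage.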

Resolution

Adjust the VM Class of the worker nodes. A VM Class defines the CPU, memory, and resource reservations for the virtual machines that back both control plane and worker nodes.

To resolve resource-related scheduling issues, change the VM class of the worker nodes to a larger class with more CPU and memory. This provides the additional resources needed to accommodate new pods when scaling up.
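A minimal sketch of the change, run from the Supervisor cluster context (cluster, namespace, and class names are placeholders; the exact field path depends on the TanzuKubernetesCluster API version in use):

   # List the VM classes available in the vSphere Namespace
   kubectl get virtualmachineclass

   # Edit the cluster and set the worker node VM class to a larger class,
   # for example best-effort-small -> best-effort-large. For the v1alpha1 API
   # the field is spec.topology.workers.vmClass; for v1alpha2/v1alpha3 it is
   # spec.topology.nodePools[].vmClass. Saving the change triggers a rolling
   # replacement of the worker nodes.
   kubectl edit tanzukubernetescluster <cluster-name> -n <vsphere-namespace>

Worker nodes are replaced one at a time during the rollout, so confirm that workloads tolerate node replacement (for example, by defining PodDisruptionBudgets) before making the change.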