"eviction-hard" memory threshold limit exceeded but no pods are evicted in Tanzu Kubernetes Grid Integrated Edition


Article ID: 298488


Products

VMware Tanzu Kubernetes Grid Integrated Edition

Issue/Introduction

The configured eviction-hard memory threshold, which Kubelet uses to trigger pod eviction, is exceeded. However, no pods are evicted in Tanzu Kubernetes Grid Integrated Edition (TKGI).

The amount of memory available is calculated as:

memory.available = memTotal - kubeReserved - systemReserved - workingSet (from kubepods cgroup)
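As a back-of-the-envelope illustration of this formula, the sketch below plugs in made-up numbers (they are not TKGI defaults) and compares the result against a hypothetical eviction-hard threshold:

```python
# All values in MiB; the numbers are illustrative, not TKGI defaults.
mem_total = 8192          # total node memory (memTotal)
kube_reserved = 512       # reserved for Kube services (kubeReserved)
system_reserved = 256     # reserved for system daemons (systemReserved)
working_set = 7300        # working set of the kubepods cgroup

# kubelet's eviction signal:
memory_available = mem_total - kube_reserved - system_reserved - working_set

eviction_hard = 200       # e.g. an eviction-hard setting of memory.available<200Mi
print(memory_available)                   # 124
print(memory_available < eviction_hard)   # True -> below threshold, eviction expected
```

If the Kube services actually consume more than `kube_reserved`, the excess shows up in the kubepods working set and pushes `memory_available` down faster than expected.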


In general, the actual memory consumption of the Kube services increases as workloads increase, and that consumption is accounted for under kubeReserved. If the Kube services on a node consume more memory than has been reserved for them, the node experiences this issue.
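To get a rough picture of a node's memory headroom, you can start from /proc/meminfo on the worker node. Note that kubelet's memory.available signal is derived from the kubepods cgroup working set, not from the kernel's MemAvailable, so these numbers are only a starting point; the cgroup path in the comment assumes cgroup v1:

```shell
# Total and kernel-estimated available memory on the node (values in kiB):
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo

# Working set of all pods (cgroup v1 path; the layout differs on cgroup v2):
# cat /sys/fs/cgroup/memory/kubepods/memory.usage_in_bytes
```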


Resolution

As explained in Installing Tanzu Kubernetes Grid Integrated Edition on Azure, there is no explicit setting to customize kubeReserved, but you can customize systemReserved.

To work around this issue, increase the systemReserved value to make sure the memory limit is not reached.

Under Kubelet customization - system-reserved, enter resource values that Kubelet can use to reserve resources for system daemons. For example, memory=250Mi, cpu=150m. For more information about system-reserved values, refer to System Reserved.
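For comparison, outside the TKGI tile the equivalent settings in a standalone KubeletConfiguration would look like the sketch below (the evictionHard value is an illustrative example, not a recommendation):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  memory: "250Mi"   # matches the tile example above
  cpu: "150m"
evictionHard:
  memory.available: "100Mi"   # example threshold only
```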

Note: There is no single optimal value for kubeReserved and systemReserved, since the consumption of the Kube services varies with the workloads. You need to test and tune the values for your specific environment.