Unable to find System Pods' CPU and Memory Allocation Details with the Latest VKR Versions


Article ID: 416716


Updated On:

Products

Tanzu Kubernetes Runtime

Issue/Introduction

  • Customers want to know the CPU and memory allocation for each System Pod in the latest VKR version and have requested official documentation or fixed configuration limits for resource sizing.
  • Customers also ask whether Broadcom Support can advise how much CPU and memory can be assigned to each Pod when spinning up workloads in a VKR/TKC/VKC environment.

Environment

 

  • VMware Tanzu Kubernetes Grid (VKR / VKC / TKC / Guest Cluster)

  • vSphere with Tanzu (VKS)

Cause

There is no official document or fixed configuration limit for CPU and memory allocation per System Pod in VKR.
Such information pertains to internal system specifications that are not published or supported through the Technical Support channel.
Resource sizing depends heavily on each customer’s specific environment and workload type, and hence cannot be universally defined.

Resolution

VMware engineering confirms that there is currently no official documentation or fixed configuration limit available for System Pod CPU and memory allocation in VKR.

Internal specifications cannot be disclosed through the Support channel, as they fall outside the standard break/fix support scope.

For detailed environment design, capacity planning, or sizing guidance, customers are advised to engage Broadcom Professional Services, who specialize in such assessments and can provide validated recommendations.

As resource consumption varies based on workload and environment configuration, customers are encouraged to verify system resource usage in their test environments using the following commands:

# View node-level details
kubectl describe node <worker-node-name>

# View CPU and memory usage
kubectl top nodes

# Review namespace resource quotas and limits
kubectl get resourcequota -A
kubectl get limitrange -A

# On the worker node
crictl stats
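As a rough illustration, per-container usage reported by `kubectl top pods -A --containers` can be aggregated with a short shell pipeline. The sample data below is hypothetical (pod names and values are placeholders); in a live cluster, pipe the real command output into the same awk expression instead:

```shell
# Save a sample of `kubectl top pods -A --containers --no-headers` output.
# Columns: NAMESPACE  POD  CONTAINER  CPU(cores)  MEMORY(bytes)
# These rows are illustrative only, not real measurements.
cat <<'EOF' > /tmp/top-sample.txt
kube-system   antrea-agent-x2kfp   antrea-ovs    4m    67Mi
kube-system   coredns-7f8d         coredns       2m    14Mi
kube-system   kube-proxy-9zqm      kube-proxy    0m    12Mi
EOF

# Column 5 is memory; strip the "Mi" suffix and sum the values.
awk '{gsub(/Mi/,"",$5); sum += $5} END {printf "%dMi total\n", sum}' /tmp/top-sample.txt
```

Against live data this becomes, for example, `kubectl top pods -A --containers --no-headers | awk '{gsub(/Mi/,"",$5); sum += $5} END {print sum "Mi"}'`, giving a quick view of aggregate System Pod memory consumption.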

Example output from lab testing (for reference only):

Container Name          CPU %   Memory Usage
antrea-ovs              0.40    67.16 MB
node-driver-registrar   2.19    5.82 MB
secretgen-controller    0.09    9.87 MB
antrea-controller       0.53    55.60 MB
coredns                 0.22    14.32 MB
vsphere-csi-node        0.00    10.43 MB
kube-proxy              0.00    12.53 MB
metrics-server          0.49    23.70 MB

Additional Information

VKR follows standard Kubernetes resource allocation behavior, where a Pod can request up to the node’s allocatable CPU and memory, minus resources already used by other Pods and any Namespace-level constraints (e.g., ResourceQuota, LimitRange).
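To make this concrete, the headroom available to a new Pod on a node can be estimated by subtracting the sum of existing Pod requests from the node's allocatable capacity. The numbers below are hypothetical placeholders; substitute the Allocatable and "Allocated resources" figures reported by `kubectl describe node <worker-node-name>`:

```shell
# Hypothetical values, as would be read from `kubectl describe node`:
#   Allocatable:           cpu 3860m, memory 14336Mi
#   Existing Pod requests: cpu 1250m, memory 2560Mi
allocatable_cpu_m=3860
allocatable_mem_mi=14336
requested_cpu_m=1250
requested_mem_mi=2560

# Remaining schedulable headroom, before any Namespace-level
# ResourceQuota or LimitRange constraints are applied.
echo "CPU headroom:    $((allocatable_cpu_m - requested_cpu_m))m"
echo "Memory headroom: $((allocatable_mem_mi - requested_mem_mi))Mi"
```

With these sample values the node could still accommodate Pods requesting up to 2610m CPU and 11776Mi memory in total, further limited by any quota applied to the target Namespace.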