The inspection-extension pod in the vmware-system-tmc namespace shows errors similar to the following:

# kubectl logs -n vmware-system-tmc inspection-extension-#########-#####
8l4.183d935255bb2b75","namespace":"vmware-system-tmc","time":"2025-05-08T14:53:05Z"}
{"func":"ReconcileInspect.Reconcile","level":"info","msg":"Reconciling for request: vmware-system-tmc/inspection-d0############c0","time":"2025-05-08T14:53:05Z"}
{"error":"Inspect.intents.tmc.cloud.vmware.com \"inspection-d0############c0\" not found","func":"ReconcileInspect.Reconcile","level":"error","msg":"r.Get: object not found","time":"2025-05-08T14:53:05Z"}
In the vmware-system-tmc namespace, several sonobuoy-kube-bench-daemon-set pods are in the Pending state. Describing one of these pods shows:

# kubectl describe pods -n vmware-system-tmc sonobuoy-kube-bench-daemon-set-#######-####
"status": {
  "phase": "Pending",
  "conditions": [
    {
      "type": "PodScheduled",
      "status": "False",
      "lastProbeTime": null,
      "lastTransitionTime": "2025-06-19T19:26:02Z",
      "reason": "Unschedulable",
      "message": "0/20 nodes are available: 1 Too many pods. preemption: 0/20 nodes are available: 20 No preemption victims found for incoming pod.."
Describing the sonobuoy-kube-bench DaemonSet shows that some of its pods are still not available:

"status": {
  "currentNumberScheduled": 20,
  "numberMisscheduled": 0,
  "desiredNumberScheduled": 20,
  "numberReady": 16,
  "observedGeneration": 1,
  "updatedNumberScheduled": 20,
  "numberAvailable": 16,
  "numberUnavailable": 4

VMware vSphere Kubernetes Service (VKS)
sonobuoy-kube-bench-daemon-set pods are not getting into the Running state. The pods remain Pending because they are trying to schedule on nodes that have reached their maxPods per node capacity (110 pods per node by default).
The following steps validate whether some of the cluster nodes have reached the default maxPods capacity of 110 pods per node.
Note: the following commands need to be run from the guest cluster context.
# kubectl get nodes -o custom-columns=NAME:.metadata.name,Capacity:.status.capacity.pods,Allocatable:.status.allocatable.pods
NAME                                    Capacity   Allocatable
GuestCluster--worker-####-###jr-pbr2b   110        110
GuestCluster--worker-####-###r-wnv8g    110        110
GuestCluster--worker-####-###jr-z5ndw   110        110
GuestCluster-cp-###h6                   110        110
GuestCluster-cp-###q6                   110        110
GuestCluster-cp-###r4                   110        110

# kubectl describe nodes | grep -E 'HolderIdentity|Non-terminated Pods'
  HolderIdentity:  GuestCluster--worker-####-###j-pbr2b
Non-terminated Pods:          (110 in total)
  HolderIdentity:  GuestCluster--worker-####-###j-wnv8g
Non-terminated Pods:          (110 in total)
  HolderIdentity:  GuestCluster--worker-####-###j-z5ndw
Non-terminated Pods:          (80 in total)
  HolderIdentity:  GuestCluster-cp-###h6
Non-terminated Pods:          (40 in total)
  HolderIdentity:  GuestCluster-cp-###q6
Non-terminated Pods:          (50 in total)
  HolderIdentity:  GuestCluster-cp-###r4
Non-terminated Pods:          (30 in total)

# kubectl get pods -n vmware-system-tmc -o wide | grep sonobuoy-kube-bench-daemon-set | grep -v Running

Currently there is no supported way to modify the maxPods count in VMware vSphere Kubernetes Service (VKS).
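To spot the full nodes quickly in a long describe listing, the filtered output can be run through a short awk pipeline. This is a sketch: the heredoc below is stand-in sample data with hypothetical node names; in practice, pipe the real `kubectl describe nodes | grep -E 'HolderIdentity|Non-terminated Pods'` output into the same awk program.

```shell
# Print each node whose non-terminated pod count has reached the 110-pod default.
# The heredoc stands in for real "kubectl describe nodes | grep ..." output.
awk '
  /HolderIdentity:/      { node = $2 }            # remember the current node name
  /Non-terminated Pods:/ { gsub(/\(/, "", $3)     # strip the "(" from "(110"
                           if ($3 + 0 >= 110) print node, "is at", $3, "pods" }
' <<'EOF'
  HolderIdentity:  worker-a
Non-terminated Pods:          (110 in total)
  HolderIdentity:  worker-b
Non-terminated Pods:          (80 in total)
EOF
```

Any node printed by this pipeline has no room left for the sonobuoy-kube-bench-daemon-set pod pinned to it.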
Workaround
Option 1: Manually balance the number of pods across the nodes.
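The following is an illustrative sketch of Option 1; node and pod names in angle brackets are placeholders, and it assumes the pods evicted from the full node belong to Deployments or ReplicaSets that can safely reschedule elsewhere. These commands only apply against a live guest cluster.

```shell
# Cordon the node that is at 110 pods so evicted pods land on other nodes.
kubectl cordon <full-node-name>

# Delete one or more controller-managed pods on the full node; their
# controllers recreate them on nodes with spare pod capacity.
kubectl delete pod <pod-on-full-node> -n <namespace>

# Uncordon the node; the Pending sonobuoy-kube-bench-daemon-set pod
# pinned to it can now schedule into the freed slot.
kubectl uncordon <full-node-name>
```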
Option 2: Use Descheduler for Kubernetes
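As a sketch of Option 2, the Descheduler's LowNodeUtilization balance plugin can evict pods from overloaded nodes so their controllers reschedule them onto emptier ones. The policy below is an assumption-laden example, not a tested configuration: the API version, field names, and threshold values depend on the Descheduler release in use, so verify them against the Descheduler documentation before applying.

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: balance-pods
    pluginConfig:
      - name: "LowNodeUtilization"
        args:
          thresholds:
            pods: 50          # nodes below 50% pod utilization count as underutilized
          targetThresholds:
            pods: 90          # evict from nodes above 90% pod utilization
    plugins:
      balance:
        enabled:
          - "LowNodeUtilization"
```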