Several pods in the nsxi-platform namespace consume large amounts of memory. When these pods were initially scheduled, Kubernetes may not have been aware of how much memory they would eventually consume, which can result in an unbalanced distribution of memory usage across the worker nodes.
Per-pod memory usage can be captured by running the following command in the context of the napp Tanzu cluster:
kubectl top pods -n nsxi-platform --sort-by=memory
NAME                                                  CPU(cores)  MEMORY(bytes)
druid-middle-manager-0                                137m        11639Mi
druid-middle-manager-1                                347m        10618Mi
druid-middle-manager-2                                1442m       7621Mi
visualization-6779c4585b-abcde                        1404m       7119Mi
druid-historical-1                                    50m         6960Mi
druid-historical-0                                    60m         6940Mi
druid-config-historical-0                             156m        6678Mi
druid-broker-5b6f7fcd4-abcde                          6m          6465Mi
anomalydetectionstreamingjob-628bfc8a24232f76-exec-1  3m          4657Mi
anomalydetectionstreamingjob-628bfc8a24232f76-exec-2  2m          4612Mi
druid-config-broker-6db5b7759b-abcde                  43m         4330Mi
overflowcorrelator-62b6548a242594b9-exec-2            43m         3628Mi
overflowcorrelator-62b6548a242594b9-exec-1            60m         3595Mi
overflowcorrelator-62b6548a242594b9-exec-3            46m         3563Mi
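Because the scheduler places pods based on their declared memory requests rather than their eventual usage, it can help to compare the values above with what each pod actually requested. The following is a minimal sketch of one way to do this; druid-middle-manager-0 is used only as an example pod name, and the fields print empty if no request or limit is set:

# Print the memory request and limit configured for each container in the pod
kubectl get pod druid-middle-manager-0 -n nsxi-platform \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}requests={.resources.requests.memory}{"\t"}limits={.resources.limits.memory}{"\n"}{end}'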
The command kubectl get pods -n nsxi-platform -o wide displays which node each pod is scheduled to. Correlating this with the output of the kubectl top nodes command helps determine which pods to prioritize, as shown in the sketch below.
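A minimal sketch of that correlation, run in the same cluster context:

# List the pods along with the node each one is scheduled to
kubectl get pods -n nsxi-platform -o wide

# Show CPU and memory usage per node; on recent kubectl versions,
# --sort-by=memory can also be appended to rank the busiest nodes first
kubectl top nodes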