No NUMA nodes seen inside the guest operating system

Article ID: 338059

Products

VMware vSphere ESXi

Issue/Introduction

  • In the Windows OS Task Manager, the option to change the CPU performance graph view to NUMA nodes is greyed out.

  • Running coreinfo.exe from a command prompt inside the Windows OS shows that no NUMA nodes exist (a verification sketch follows this list).

  • The vmware.log of the impacted virtual machine contains entries similar to the following:


2021-08-20T17:14:51.588Z| vmx| I125: numaHost: NUMA config: consolidation= 1 preferHT= 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: 16 VCPUs 1 VPDs 1 PPDs
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 0 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 1 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 2 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 3 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 4 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 5 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 6 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 7 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 8 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 9 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 10 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 11 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 12 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 13 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 14 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: numaHost: VCPU 15 VPD 0 PPD 0
2021-08-20T17:14:51.589Z| vmx| I125: CreateVM: Swap: generating normal swap file name.
 

  • The number of vCPUs assigned to the virtual machine is less than the 'Cores per socket' value shown in the ESXi host's Summary tab (Hardware section).
  • The number of vCPUs assigned to the virtual machine is 9 or greater, and the ESXi version is 6.5 or later.
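
As a quick in-guest check for the coreinfo.exe symptom above, Coreinfo (part of the Microsoft Sysinternals suite) can be run from an elevated command prompt. This is a hedged sketch, assuming the standard Coreinfo switches; the folder path shown is a placeholder, not part of this article:

:: The -n switch restricts Coreinfo's dump to NUMA node information.
:: On an affected VM, no multi-node mapping is reported.
C:\Tools> coreinfo.exe -n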

 

Environment

VMware vSphere ESXi 6.x
VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x

Cause

  • The VPD/PPD log messages indicate that the VM has a single virtual NUMA topology: every vCPU maps to VPD 0 and PPD 0 in vmware.log.
  • If the number of vCPUs assigned to the virtual machine is less than the 'Cores per socket' value of the ESXi host that the VM resides on, then, by design, the VM fits within a single NUMA node. This is why no NUMA nodes are reflected inside the guest operating system (a quick way to read the host's values is sketched after this list).
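
To compare the two values from the Cause above, the host's physical topology can be read from the ESXi shell. A minimal sketch; output field names may vary slightly between ESXi versions:

# Prints CPU Packages (physical sockets), CPU Cores, and CPU Threads.
# Cores per socket = CPU Cores divided by CPU Packages.
esxcli hardware cpu global get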

Resolution

If possible, increase the vCPU count until it exceeds the 'Cores per socket' value of the host, then confirm that NUMA nodes are reflected correctly inside the guest OS.

After the change, vmware.log will contain both PPD 0 and PPD 1, and both VPD 0 and VPD 1 (see the log check sketched below).
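
One way to confirm this from the ESXi shell; <datastore> and <VM_folder> below are placeholders for your environment:

# List the vNUMA layout lines for the VM. Two or more distinct
# VPD/PPD values indicate multiple NUMA nodes are presented to the guest.
grep numaHost /vmfs/volumes/<datastore>/<VM_folder>/vmware.log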



Workaround:
If the current vCPU count cannot be increased, the numa.vcpu.maxPerMachineNode value in the virtual machine's .vmx file can be used to override the default behavior.

Note: Back up the .vmx file before making any changes. The virtual machine must be powered off to implement the change.

Example:
For a VM with 16 vCPUs on a host with a physical Cores per Socket value of 22, where the guest OS should see two vNUMA nodes, add the following parameter to the VM's .vmx file:

numa.vcpu.maxPerMachineNode = "8"

This places 8 vCPUs on one vNUMA node and the remaining 8 vCPUs on a second vNUMA node.

After the change, the vmware.log of the impacted virtual machine should show PPD 0 and PPD 1 as well as VPD 0 and VPD 1, indicating that more than one NUMA node is in use.

Reload the VMX configuration by following "Reloading a vmx file without removing the virtual machine from inventory"; a command sketch follows.
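
For reference, that reload is typically performed from the ESXi shell, as in the following sketch; <VM_name> and <vmid> are placeholders:

# Find the VM's inventory ID (Vmid) in the first column of the output.
vim-cmd vmsvc/getallvms | grep <VM_name>

# Reload the .vmx configuration without unregistering the VM.
vim-cmd vmsvc/reload <vmid>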

Additional Information

Reference: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/vsphere-resource-management-7-0.html

Impact/Risks:
This behavior depends on the ESXi host's CPU topology (specifically, its Cores per Socket value). If the virtual machine is migrated to another host with a different CPU topology, the values implemented as a resolution/workaround might have to be recalculated.