Setting corespersocket can affect guest OS topologies



Article ID: 340277


Updated On:


VMware vSphere ESXi


In vSphere 6.5 and later, changing the corespersocket value no longer influences the vNUMA topology. The corespersocket value can, however, affect guest OS topologies, which in turn can impact performance.


VMware vSphere ESXi 7.0.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7


The vSphere administrator changed the corespersocket setting to a value other than the default of 1.


The feature is working as designed.

To remedy the described situation and avoid any potential performance impact, you can either:
  • Change corespersocket back to its default value of 1.
  • Build a more advanced configuration using the explanation, example scenario, and tools below.
As the corespersocket setting no longer directly sets the vNUMA topology, some corespersocket values can result in sub-optimal guest OS topologies that are not efficiently mapped to the physical NUMA nodes, potentially resulting in reduced performance.

For example, if you create a 14-vCPU virtual machine on a dual-socket ESXi host with 12 physical cores per NUMA node, and leave corespersocket unset (in which case it defaults to 1 core per socket), ESXi would present to the guest OS two 7-vCPU vNUMA nodes. However, if you start with the same configuration, but set corespersocket to 2, ESXi wouldn't be able to present 7-vCPU vNUMA nodes (that would require 3.5 virtual sockets, and ESXi can't split a virtual socket across multiple vNUMA nodes); ESXi would thus present to the guest OS three vNUMA nodes: two 6-vCPU and one 2-vCPU, a topology which would likely not perform as well in this scenario as two 7-vCPU vNUMA nodes.

A further optimization in this scenario would be to set corespersocket to 7; this would still result in two 7-vCPU vNUMA nodes (just like setting corespersocket to 1, or leaving corespersocket unset), but would present to the guest OS two vNUMA nodes that align to virtual sockets, thus saving the guest OS the work of performing load balancing across non-existent socket boundaries.
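The arithmetic in the scenario above can be sketched in Python. Note that this is a simplified illustration built only from the constraints described in this article (an even split of vCPUs across the minimum number of vNUMA nodes, with no virtual socket split across nodes); it is not ESXi's actual placement algorithm, and the function name is hypothetical.

```python
from math import ceil

def sketch_vnuma_topology(vcpus, cores_per_socket, pnuma_cores):
    """Simplified sketch (not ESXi's actual algorithm) of how the
    corespersocket value can change the guest-visible vNUMA layout,
    given that a virtual socket is never split across vNUMA nodes."""
    # Minimum number of vNUMA nodes needed so that no node exceeds
    # the physical NUMA node's core count, then aim for an even split.
    min_nodes = ceil(vcpus / pnuma_cores)
    even_split = ceil(vcpus / min_nodes)
    # Round the per-node size down to a whole number of virtual
    # sockets; if a single socket is larger than the even split,
    # the node must grow to hold at least one whole socket.
    node_size = max((even_split // cores_per_socket) * cores_per_socket,
                    cores_per_socket)
    nodes = []
    remaining = vcpus
    while remaining > 0:
        nodes.append(min(node_size, remaining))
        remaining -= nodes[-1]
    return nodes

# 14-vCPU VM on a host with 12 physical cores per NUMA node:
print(sketch_vnuma_topology(14, 1, 12))  # [7, 7]
print(sketch_vnuma_topology(14, 2, 12))  # [6, 6, 2]
print(sketch_vnuma_topology(14, 7, 12))  # [7, 7]
```

The three calls reproduce the scenario above: corespersocket of 1 or 7 yields two 7-vCPU vNUMA nodes, while a corespersocket of 2 forces the sub-optimal three-node layout of two 6-vCPU nodes and one 2-vCPU node.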

The Virtual Machine vCPU and vNUMA Rightsizing best practices blog post provides more information on this topic; the Virtual Machine Compute Optimizer tool can provide guidance on configuring optimal topologies.


Additional Information