Administrators need guidance on configuring BIOS settings, specifically the NUMA and CCX options, on AMD EPYC servers running VMware ESXi to optimize performance and memory-access latency. AMD EPYC sockets use a Core Complex (CCX) design in which cache-access latency is significantly higher for inter-CCX accesses than for intra-CCX accesses.
VMware ESXi 8.x
VMware ESXi 9.x
AMD EPYC Processors (All Generations)
Information request. AMD EPYC processors feature a large total L3 cache distributed across CCXs, an architecture that introduces non-uniform cache-access (NUCA) latencies. Firmware settings dictate memory interleaving and how NUMA nodes are presented to the ESXi hypervisor.
Access the server BIOS/UEFI configuration during boot.
Locate the memory interleaving and NUMA configuration settings (typically under Processor or Memory Settings).
Evaluate and configure the NUMA Nodes Per Socket (NPS) setting. NPS changes the memory interleaving policy to present 1, 2, or 4 NUMA nodes per socket.
Evaluate the CCX-as-NUMA BIOS option. This setting presents each LLC/CCX as a NUMA node to operating systems but does not change the memory channel interleave policy dictated by the NPS setting.
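The interaction between these two options can be sketched as follows. This is an illustrative model, not firmware logic, and the CCX-per-socket count is a placeholder; actual counts vary by EPYC generation and SKU.

```python
# Illustrative sketch (not firmware logic): how the NPS and CCX-as-NUMA
# BIOS options determine the NUMA node count presented to ESXi.

def presented_numa_nodes(sockets: int, nps: int, ccx_per_socket: int,
                         ccx_as_numa: bool) -> int:
    """NUMA nodes exposed to the hypervisor for a given BIOS configuration."""
    if nps not in (1, 2, 4):
        raise ValueError("NPS must be 1, 2, or 4")
    if ccx_as_numa:
        # One NUMA node per LLC/CCX; the NPS memory-interleave policy
        # is unchanged, so it no longer matches the node count the OS sees.
        return sockets * ccx_per_socket
    return sockets * nps

# Dual-socket host with 8 CCXs per socket (illustrative values):
print(presented_numa_nodes(2, 4, 8, ccx_as_numa=False))  # 8
print(presented_numa_nodes(2, 4, 8, ccx_as_numa=True))   # 16
```

The key point the sketch captures: enabling CCX-as-NUMA multiplies the node count seen by the scheduler without changing where memory physically interleaves, which is why it can mislead the hypervisor.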
Disable the CCX-as-NUMA option for modern ESXi versions. While historically used to aid unoptimized OS schedulers, modern ESXi CPU schedulers are highly optimized for AMD EPYC topology. Enabling CCX-as-NUMA can skew the hypervisor's assumptions regarding physical memory architecture and degrade performance for workloads such as VMware View Planner and VMware VMmark.
Save the BIOS settings and reboot the ESXi host for the new memory interleaving and NUMA presentation to take effect.
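After the reboot, the presented topology can be checked from the ESXi shell with `esxcli hardware memory get`, which reports a NUMA node count. Below is a minimal parser sketch for that output; the sample text and field layout are approximations for illustration, not an authoritative format.

```python
# Sketch: extract the NUMA node count from `esxcli hardware memory get`
# output. SAMPLE approximates the command's output format (assumption).
import re

SAMPLE = """\
   Physical Memory: 549755813888 Bytes
   Reliable Memory: 0 Bytes
   NUMA Node Count: 8
"""

def numa_node_count(esxcli_output: str) -> int:
    """Return the NUMA node count reported in the command output."""
    match = re.search(r"NUMA Node Count:\s*(\d+)", esxcli_output)
    if match is None:
        raise ValueError("NUMA node count not found in output")
    return int(match.group(1))

print(numa_node_count(SAMPLE))  # 8, e.g. a dual-socket host at NPS4
```

A node count of sockets × NPS (with CCX-as-NUMA disabled) confirms the BIOS change took effect.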
For more information, see VMware vSphere CPU Scheduling for AMD EPYC Processors.