Powering on a VM with Latency Sensitivity set to High may report "Unable to apply latency-sensitivity setting to virtual machine"

Article ID: 380495


Products

  • VMware vSphere ESXi 7.0
  • VMware vSphere ESXi 8.0

Issue/Introduction

  • Powering on a VM (virtual machine) with "Latency Sensitivity" set to "High" may report "Unable to apply latency-sensitivity setting to virtual machine <VM name>. No valid placement on the host."
  • The issue is seen when multiple VMs with "Latency Sensitivity" set to "High" are powered on the same host
  • /var/log/vmkernel.log:

YYYY-MM-DDTHH:MM:SS cpu18:xxxxxxxx)WARNING: CpuSched: 311: vcpu xxxxxxxx is placed in adoption mode since all pcpus in its affinitySet 0xffffffffff have exclusive affinity to other vcpus
YYYY-MM-DDTHH:MM:SS cpu19:xxxxxxxx)WARNING: CpuSched: 311: vcpu xxxxxxxx is placed in adoption mode since all pcpus in its affinitySet 0xffffffffff have exclusive affinity to other vcpus
YYYY-MM-DDTHH:MM:SS cpu19:xxxxxxxx)WARNING: CpuSched: 311: vcpu xxxxxxxx is placed in adoption mode since all pcpus in its affinitySet 0xffffffffff have exclusive affinity to other vcpus
YYYY-MM-DDTHH:MM:SS cpu19:xxxxxxxx)WARNING: CpuSched: 311: vcpu xxxxxxxx is placed in adoption mode since all pcpus in its affinitySet 0x80000 have exclusive affinity to other vcpus
YYYY-MM-DDTHH:MM:SS cpu12:xxxxxxxx)WARNING: CpuSched: 1324: Unable to apply latency-sensitivity setting to virtual machine <vm name>. No valid placement on the host.
YYYY-MM-DDTHH:MM:SS cpu2:xxxxxxxx)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 2
YYYY-MM-DDTHH:MM:SS cpu18:xxxxxxxx)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 18
YYYY-MM-DDTHH:MM:SS cpu11:xxxxxxxx)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 11
YYYY-MM-DDTHH:MM:SS cpu1:xxxxxxxx)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 1

Cause

High Latency Sensitivity requires a full CPU reservation and a 100% memory reservation for the VM, and each virtual CPU is granted exclusive access to a physical core. If the cores configured for the VM are not all available within a single NUMA node, the latency-sensitivity setting falls back to Normal to ensure a successful VM power-on operation.
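
For reference, setting Latency Sensitivity to High corresponds to entries along the following lines in the VM's .vmx file (an illustrative sketch; the actual reservation values depend on the VM's vCPU count, host CPU clock speed, and configured memory size):

sched.cpu.latencySensitivity = "high"
sched.cpu.min = "<full CPU reservation in MHz>" <- full CPU reservation covering all vCPUs
sched.mem.min = "<VM memory size in MB>" <- 100% memory reservation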

This issue occurs when there are not enough non-exclusive cores left to grant to the VM as exclusive cores, because the VM's NUMA client(s) required more exclusive cores than were available on any single NUMA node. For example, on a host with 20 cores per socket, a 12-vCPU VM placed as a single NUMA client cannot receive exclusive affinity if other latency-sensitive VMs already hold exclusive affinity to more than 8 cores on every node.

Resolution

To resolve the issue, split the VM into multiple NUMA clients and place those clients on different NUMA nodes, so that each NUMA node has enough non-exclusive cores to grant exclusively to its NUMA client.

Perform the following steps to ensure that the VM can power on with exclusive core affinity spread across sockets:

  • Log in to the vCenter Server or the ESXi host via the UI
  • Right-click the desired VM and select Edit Settings
  • Navigate to the VM Options tab
  • Under Advanced, set the Latency Sensitivity value to High
  • Navigate to the Advanced Parameters tab
  • Add the following parameters under the Attribute and Value section

numa.vcpu.maxPerClient = "6" <- enforces NUMA clients to be capped to a certain size
numa.consolidate = "FALSE" <- enforces that NUMA clients are spread across NUMA nodes

Note: These values are based on the example host configuration below and must be adjusted to match the hardware present on the host. In this example, the VM's vCPUs are split into NUMA clients of 6 vCPUs each, and the clients are spread across the NUMA nodes so that one client is affinitized per NUMA node (socket). The socket and core counts can be confirmed with the commands shown after this procedure.

In general, numa.vcpu.maxPerClient = (number of vCPUs on the VM) / (number of sockets on the host); in this example, 12 vCPUs / 2 sockets = 6.

Host CPU:
Number of Sockets: 2
Number of cores (total): 40 (20 cores per socket) <- HyperThreading enabled

VM CPU: 
Number of vCPU: 12

  • Click OK to save the configuration
  • Proceed with powering on the VM
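
The socket and core counts used in the calculation above, and the newly added parameters, can be verified from an SSH session on the host. A minimal sketch; the datastore and VM paths below are placeholders to be replaced with the actual values:

[root@esxi:~] esxcli hardware cpu global get <- reports CPU Packages (sockets), CPU Cores, and Hyperthreading status

[root@esxi:~] grep -i "numa\." /vmfs/volumes/<datastore>/<VM name>/<VM name>.vmx <- confirms the parameters were saved to the VM configuration file
numa.vcpu.maxPerClient = "6"
numa.consolidate = "FALSE"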



Additional Information

To validate the CPU affinity, run the following command in an SSH session on the affected ESXi host:

  • When the CPU affinity configuration is successful, entries similar to the following are logged:

[root@esxi:~] tail -f /var/log/vmkernel.log | grep "exclusive affinity"

YYYY-MM-DDTHH:MM:SS vmkernel: cpu6:891115)CpuSched: vm xxxxxx: 2813: unset exclusive affinity to 6
YYYY-MM-DDTHH:MM:SS vmkernel: cpu8:891522)CpuSched: vm xxxxxx: 2813: set exclusive affinity to 8
YYYY-MM-DDTHH:MM:SS vmkernel: cpu15:892677)CpuSched: vm xxxxxx: 2813: set exclusive affinity to 15
YYYY-MM-DDTHH:MM:SS vmkernel: cpu10:893175)CpuSched: vm xxxxxx: 2813: set exclusive affinity to 10
YYYY-MM-DDTHH:MM:SS vmkernel: cpu11:893173)CpuSched: vm xxxxxx: 2813: set exclusive affinity to 11

  • If the affinity is not set during power-on, entries similar to the following are logged:

[root@esxi:~] tail -f /var/log/vmkernel.log | grep "exclusive affinity"

YYYY-MM-DDTHH:MM:SS cpu12:26997166)WARNING: CpuSched: 1324: Unable to apply latency-sensitivity setting to virtual machine <VM name>. No valid placement on the host.
YYYY-MM-DDTHH:MM:SS cpu2:26997157)CpuSched:  vm xxxxxxxx: 2764: unset exclusive affinity to 2
YYYY-MM-DDTHH:MM:SS cpu18:26997167)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 18
YYYY-MM-DDTHH:MM:SS cpu11:26997162)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 11
YYYY-MM-DDTHH:MM:SS cpu1:26997163)CpuSched:  vm xxxxxxxx: 2764: unset exclusive affinity to 1
YYYY-MM-DDTHH:MM:SS cpu34:26997161)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 34
YYYY-MM-DDTHH:MM:SS cpu39:26997153)CpuSched: vm xxxxxxxx: 2764: unset exclusive affinity to 39

The following commands can also be executed in an SSH session on the ESXi host for more detailed information:

[root@esxi:~] sched-stats -t cpu -z "name:group:cpu:mode:affinity" | awk 'NR==1 || /vmx-vcpu/'

[root@esxi:~] sched-stats -t numa-pnode

[root@esxi:~] sched-stats -t numa-clients
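
To map the world IDs shown in these logs and statistics back to a specific VM, the running VM processes can be listed as well:

[root@esxi:~] esxcli vm process list <- lists the World ID, VMX Cartel ID, and display name for each running VM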