Setting Latency Sensitivity to High on a virtual machine fails without warning

Article ID: 339990

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:
  • The %RUN value in esxtop, or the CPU utilization in the performance charts, for each vCPU is not 100%
  • The values in the CPU column in esxtop's CPU view (with the "SUMMARY STATS" fields enabled) change for the VM's vCPU worlds (with the GID expanded), indicating that the vCPUs are migrating between physical CPUs rather than being pinned
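
Before digging into esxtop, it can help to confirm what the VM is actually configured with. Below is a minimal pyVmomi sketch that reads the VM's Latency Sensitivity level and current CPU reservation; the vCenter address, the credentials, and the VM name "latency-vm" are placeholder assumptions, not values from this article.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name via a container view
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "latency-vm")
view.Destroy()

print("Latency Sensitivity:", vm.config.latencySensitivity.level)     # expect 'high'
print("CPU reservation (MHz):", vm.config.cpuAllocation.reservation)

Disconnect(si)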


Environment

VMware vSphere ESXi 5.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 6.5

Cause

If Latency Sensitivity is correctly configured to High, the absence of exclusive affinity can be explained by over-committed cores on the host or within the NUMA node.

For exclusive affinity to work, the ESXi host must be able to reserve a full core (not a PCPU/thread when Hyper-Threading is enabled) for each of the VM's vCPUs while still respecting other host reservations. For example, on an ESXi host with 16 cores, the maximum number of vCPUs for a VM that has to run with exclusive affinity is 13 (n - 3). This is a result of the system and vim resource pool reservations, which, depending on the host size and third-party plugins, will reserve (though not necessarily use) between 2 and 3 cores.
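
As a quick illustration of the n - 3 rule of thumb above, here is a small hypothetical helper; the reserved_cores default of 3 is an assumption taken from the 2-3 core range described in this article, not a fixed platform constant.

def max_exclusive_vcpus(host_cores: int, reserved_cores: int = 3) -> int:
    """Largest VM (in vCPUs) that can still receive exclusive affinity.

    reserved_cores models the system and vim resource pool reservations
    described above (2-3 cores depending on host size and plugins).
    """
    return host_cores - reserved_cores

print(max_exclusive_vcpus(16))  # the article's 16-core example: 13 vCPUs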

Resolution

This issue is resolved in vSphere ESXi 6.7: virtual machines with Latency Sensitivity set to High now require a full CPU reservation to power on.
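
A full CPU reservation here means the VM's vCPU count multiplied by the host's per-core clock speed in MHz. The following is a hedged pyVmomi sketch of that reconfiguration, not an official procedure; the vCenter address, the credentials, and the VM name "latency-vm" are placeholders, and the VM is assumed to be powered off while it is reconfigured.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "latency-vm")
view.Destroy()

# Full reservation = number of vCPUs x the host's per-core clock speed (MHz)
core_mhz = vm.runtime.host.summary.hardware.cpuMhz
full_reservation = vm.config.hardware.numCPU * core_mhz

spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=full_reservation)
task = vm.ReconfigVM_Task(spec=spec)  # apply the new CPU reservation

Disconnect(si)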


Additional Information

Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5