Frequent DRS Warnings: "Unable to apply DRS resource settings on host... The available Memory resources in the parent resource pool are insufficient for the operation" due to overhead estimation mismatch after VM Power Cycle



Article ID: 426967


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

In vSphere 8.0 environments (including Update 3g), you may observe repeated DRS configuration failure events in vCenter logs, with the error message:

Unable to apply DRS resource settings on host <hostname> in <cluster>. The available Memory resources in the parent resource pool are insufficient for the operation. This can significantly reduce the effectiveness of DRS.

/var/run/log/vpxa.log on ESXi:

YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2105676] [Originator@6876 sub=vpxLro opID=HB-host-###@239###-730e####-48] [VpxLRO] -- BEGIN lro-135576 -- overheadService -- vim.OverheadService.downloadVMXConfig -- 5267####-44##-41##-6a##-e3##########
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101825] [Originator@6876 sub=vmomi.soapStub[4] opID=HB-SpecSync-host-###@171###-2450####-73] SOAP request returned HTTP failure; <<io_obj p:0x0000002934566460, h:22, <TCP '127.0.0.1 : 51823'>, <TCP '127.0.0.1 : 8307'>>, /sdk>, method: updateChildResourceConfiguration; code: 500(Internal Server Error); fault: (vim.fault.InsufficientMemoryResourcesFault) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> faultCause = (vmodl.MethodFault) null,
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> faultMessage = (vmodl.LocalizableMessage) [
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.LocalizableMessage) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "vob.sched.group.mem.minlimit.lt.min.kb",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> arg = (vmodl.KeyAnyValue) [
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "1",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "vm.X"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> },
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "2",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "25593856"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> },
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "3",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "25598856"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> ],
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> message = "Group vm.X: Requested memory minLimit 25593856 KB insufficient to support effective reservation 25598856 KB"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> ],
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> unreserved = -1,
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> requested = -1
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> msg = "Received SOAP response fault from [<<io_obj p:0x0000002934566460, h:22, <TCP '127.0.0.1 : 51823'>, <TCP '127.0.0.1 : 8307'>>, /sdk>]: updateChildResourceConfiguration
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> The available Memory resources in the parent resource pool are insufficient for the operation."
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2105676] [Originator@6876 sub=vpxLro opID=HB-host-###@239###-730e####-48] [VpxLRO] -- FINISH lro-135576
YYYY-MM-DDT00:00:00:000Z Er(163) Vpxa[2101825] [Originator@6876 sub=vpxaVmomi opID=HB-SpecSync-host-###@171###-2450####-73] Got exception when invoking VMOMI method; <<last binding: <<TCP '127.0.0.1 : 59573'>, <TCP '127.0.0.1 : 8307'>> >, /sdk>, vim.ResourcePool.updateChildResourceConfiguration, N3Vim5Fault32InsufficientMemoryResourcesFault9ExceptionE(Fault cause: vim.fault.InsufficientMemoryResourcesFault

/var/log/vmware/vpxd.log on vCenter:

YYYY-MM-DDT00:00:00:000Z info vpxd[06495] [Originator@6876 sub=vpxLro opID=HB-SpecSync-host-###@171###-2450####] [VpxLRO] -- BEGIN lro-128621073 -- -- SpecSyncLRO.Synchronize --
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=InvtVm opID=HB-host-###@239###-730e####] VM [vim.VirtualMachine:vm-xxx] Fallback accept of larger value: changing vpxdLimit from 418 -> 425
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06495] [Originator@6876 sub=vpxLro opID=HB-SpecSync-host-###@171###-2450####] [VpxLRO] -- FINISH lro-128621073

/vmfs/volumes/<Datastore-UUID>/<VM-folder>/vmware.log on ESXi:

YYYY-MM-DDT00:00:00:000Z In(05) vmx - DICT vmx.reboot.PowerCycle = "TRUE"
YYYY-MM-DDT00:00:00:000Z In(05) vmx - DICT vmx.reboot.powerOffTriggered = "FALSE"

These warnings can appear frequently (e.g., every minute) on specific hosts or across multiple clusters, often disrupting automation workflows. Restarting the vpxa agent on the ESXi host temporarily clears the condition but does not prevent recurrence. Actual VM performance, migrations, and resource availability remain unaffected; the warnings are false positives.

This issue is most noticeable in high-churn environments such as VDI clusters with frequent VM resets or power cycles.

Environment

vSphere 8.x

 

Cause

The root cause is a temporary mismatch between DRS overhead memory estimation and the actual reservation enforced by the ESXi kernel during fast-path operations (VM power-on, spec-sync, vMotion).

  • When vmx.reboot.PowerCycle = TRUE is set, a guest OS reboot is converted to a full power cycle on ESXi (treated as a fresh power-on, creating a new VMX instance).
  • DRS uses the OMM (Overhead Memory Manager) scheme script to estimate VM overhead and adds a default 10 MB buffer, sending ~418 MB to ESXi.
  • ESXi enforces the actual overhead reservation calculated at power-on (~422.88 MB, including ~15 MB VMkernel overhead), which is slightly higher due to conservative initial allocation during fresh power-on.
  • ESXi rejects the DRS spec update because the requested minLimit is below the effective reservation, triggering an InsufficientMemoryResourcesFault.

This is a known timing/estimation discrepancy: DRS relies on the OMM scheme script to estimate overhead, and that estimate may differ from the actual consumption in the ESXi kernel. The mismatch self-corrects on the next host sync (within about a second), but with frequent resets (common in VDI due to login storms, updates, or container-like behavior), the failure repeats across multiple VMs, causing persistent UI warnings.

The discrepancy is typically ~15 MB and harmless (no memory corruption or instability), but visible as alarms when vmx.reboot.PowerCycle is used frequently.
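The arithmetic in the fault message can be checked directly. A minimal sketch in pure Python, using the example values from the vpxa.log excerpt above:

```python
# Values taken from the example fault message in vpxa.log:
# "Requested memory minLimit 25593856 KB insufficient to support
#  effective reservation 25598856 KB"
requested_min_limit_kb = 25593856    # what DRS asked ESXi to apply
effective_reservation_kb = 25598856  # what the ESXi kernel enforces

shortfall_kb = effective_reservation_kb - requested_min_limit_kb
shortfall_mb = shortfall_kb / 1024

print(f"Shortfall: {shortfall_kb} KB (~{shortfall_mb:.1f} MB)")
# In this example the requested minLimit falls 5000 KB (~4.9 MB) short,
# within the ~15 MB estimation gap described above, so a larger DRS
# buffer is enough to absorb it.
```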

Resolution

Workaround:

Set the DRS advanced option on affected clusters: MemOverheadGrowthMin = 4

  • Default value = 2 → 10 MB buffer
  • New value = 4 → 20 MB buffer
  • Effect: DRS sends ~428 MB overhead (exceeds ESXi's ~422.88 MB enforcement), preventing rejections.
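The effect of the option can be sketched numerically. This assumes the buffer scales linearly with the option value (inferred from the two documented data points: 2 → 10 MB, 4 → 20 MB) and uses the ~418 MB estimate seen in the vpxd.log excerpt:

```python
MB_PER_UNIT = 5  # inferred from: value 2 -> 10 MB buffer, value 4 -> 20 MB buffer

def drs_overhead_estimate_mb(base_estimate_mb, growth_min):
    """Overhead that DRS sends to ESXi = base estimate + configured buffer."""
    return base_estimate_mb + growth_min * MB_PER_UNIT

BASE_MB = 408               # ~418 MB total with the default 10 MB buffer
ESXI_ENFORCED_MB = 422.88   # actual reservation enforced at power-on

default_estimate = drs_overhead_estimate_mb(BASE_MB, 2)  # ~418 MB
tuned_estimate = drs_overhead_estimate_mb(BASE_MB, 4)    # ~428 MB

print(default_estimate < ESXI_ENFORCED_MB)  # True: spec update rejected
print(tuned_estimate > ESXI_ENFORCED_MB)    # True: spec update accepted
```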

Steps to Apply:

  1. In vCenter UI: Cluster → Configure → Services → vSphere DRS → Edit → Advanced Options. Add: Key = MemOverheadGrowthMin, Value = 4.
  2. Optionally set DRS to Manual temporarily (avoid migrations during change).
  3. Validate: Monitor cluster for warnings, test VM power-on/reset/spec-sync.
  4. Restore DRS automation level if changed.

Note: No vCenter service restart or host/vpxa restart is required; the change takes effect immediately and is non-disruptive.
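For scripted application of the same change through the vSphere API, the following is a minimal pyVmomi sketch. The hostname, credentials, and cluster name are placeholders, and the cluster lookup assumes a simple inventory-wide search; adapt as needed (this is not the provided change_drs_advance_option.py script):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_mem_overhead_growth_min(si, cluster_name, value="4"):
    """Set the MemOverheadGrowthMin DRS advanced option on one cluster."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo()
    spec.drsConfig.option = [
        vim.option.OptionValue(key="MemOverheadGrowthMin", value=value)
    ]
    # modify=True merges this option into the existing DRS configuration
    # instead of replacing the whole cluster spec.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        set_mem_overhead_growth_min(si, "VDI-Cluster-01")
    finally:
        Disconnect(si)
```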

Bulk Rollout (for multiple VDI clusters): Use the provided Python script (change_drs_advance_option.py):

  • Copy to VCSA: /usr/lib/vmware/site-packages/
  • Make executable: chmod +x change_drs_advance_option.py
  • Edit script: Update vc_ip and vc_password
  • Run:
    • Set: python3 change_drs_advance_option.py 4
    • Rollback/remove: python3 change_drs_advance_option.py

Additional Recommendations:

  • Minimize use of vmx.reboot.PowerCycle = TRUE unless required (e.g., for specific patching workflows).
  • Manually acknowledge persistent warnings in the vCenter UI instead of restarting hostd/vpxa; restarting does not fix the root cause.