In vSphere 8.0 environments (including Update 3g), you may observe repeated DRS configuration failure events in vCenter logs, with the error message:
Unable to apply DRS resource settings on host <hostname> in <cluster>. The available Memory resources in the parent resource pool are insufficient for the operation. This can significantly reduce the effectiveness of DRS.
/var/run/log/vpxa.log on ESXi:
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2105676] [Originator@6876 sub=vpxLro opID=HB-host-###@239###-730e####-48] [VpxLRO] -- BEGIN lro-135576 -- overheadService -- vim.OverheadService.downloadVMXConfig -- 5267####-44##-41##-6a##-e3##########
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101825] [Originator@6876 sub=vmomi.soapStub[4] opID=HB-SpecSync-host-###@171###-2450####-73] SOAP request returned HTTP failure; <<io_obj p:0x0000002934566460, h:22, <TCP '127.0.0.1 : 51823'>, <TCP '127.0.0.1 : 8307'>>, /sdk>, method: updateChildResourceConfiguration; code: 500(Internal Server Error); fault: (vim.fault.InsufficientMemoryResourcesFault) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> faultCause = (vmodl.MethodFault) null,
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> faultMessage = (vmodl.LocalizableMessage) [
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.LocalizableMessage) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "vob.sched.group.mem.minlimit.lt.min.kb",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> arg = (vmodl.KeyAnyValue) [
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "1",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "vm.X"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> },
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "2",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "25593856"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> },
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> (vmodl.KeyAnyValue) {
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> key = "3",
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> value = "25598856"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> ],
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> message = "Group vm.X: Requested memory minLimit 25593856 KB insufficient to support effective reservation 25598856 KB"
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> ],
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> unreserved = -1,
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> requested = -1
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> msg = "Received SOAP response fault from [<<io_obj p:0x0000002934566460, h:22, <TCP '127.0.0.1 : 51823'>, <TCP '127.0.0.1 : 8307'>>, /sdk>]: updateChildResourceConfiguration
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> The available Memory resources in the parent resource pool are insufficient for the operation."
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2101760] --> }
YYYY-MM-DDT00:00:00:000Z In(166) Vpxa[2105676] [Originator@6876 sub=vpxLro opID=HB-host-###@239###-730e####-48] [VpxLRO] -- FINISH lro-135576
YYYY-MM-DDT00:00:00:000Z Er(163) Vpxa[2101825] [Originator@6876 sub=vpxaVmomi opID=HB-SpecSync-host-###@171###-2450####-73] Got exception when invoking VMOMI method; <<last binding: <<TCP '127.0.0.1 : 59573'>, <TCP '127.0.0.1 : 8307'>> >, /sdk>, vim.ResourcePool.updateChildResourceConfiguration, N3Vim5Fault32InsufficientMemoryResourcesFault9ExceptionE(Fault cause: vim.fault.InsufficientMemoryResourcesFault
/var/log/vmware/vpxd.log on vCenter:
YYYY-MM-DDT00:00:00:000Z info vpxd[06495] [Originator@6876 sub=vpxLro opID=HB-SpecSync-host-###@171###-2450####] [VpxLRO] -- BEGIN lro-128621073 -- -- SpecSyncLRO.Synchronize --
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=InvtVm opID=HB-host-###@239###-730e####] VM [vim.VirtualMachine:vm-xxx] Fallback accept of larger value: changing vpxdLimit from 418 -> 425
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06481] [Originator@6876 sub=OMMVmInfo opID=HB-host-###@239###-730e####] Reset VMX config cache to null, type:0
YYYY-MM-DDT00:00:00:000Z info vpxd[06495] [Originator@6876 sub=vpxLro opID=HB-SpecSync-host-###@171###-2450####] [VpxLRO] -- FINISH lro-128621073
/vmfs/volumes/<Datastore-UUID>/<VM-folder>/vmware.log on ESXi:
YYYY-MM-DDT00:00:00:000Z In(05) vmx - DICT vmx.reboot.PowerCycle = "TRUE"
YYYY-MM-DDT00:00:00:000Z In(05) vmx - DICT vmx.reboot.powerOffTriggered = "FALSE"
These warnings appear frequently (e.g., every minute) on specific hosts or across multiple clusters, often disrupting automation workflows. Restarting the vpxa agent on the ESXi host temporarily clears the warnings but does not prevent recurrence. Actual VM performance, migrations, and resource availability remain unaffected; the warnings are false positives.
This issue is most noticeable in high-churn environments such as VDI clusters with frequent VM resets or power cycles.
vSphere 8.x
The root cause is a temporary mismatch between DRS overhead memory estimation and the actual reservation enforced by the ESXi kernel during fast-path operations (VM power-on, spec-sync, vMotion).
This is a known timing/estimation discrepancy: DRS relies on the overhead memory (OMM) schema script to estimate VM memory overhead, and that estimate can differ from the actual consumption enforced by the ESXi kernel. The mismatch self-corrects on the next host sync (<1 second), but with frequent resets (common in VDI due to login storms, updates, or container-like usage patterns), the failures recur across multiple VMs, causing persistent UI warnings.
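The transient nature of the failure can be illustrated with a minimal, stdlib-only Python model (this is illustrative code, not VMware's implementation; the function and variable names are hypothetical). It mimics the kernel-side check behind vob.sched.group.mem.minlimit.lt.min.kb: the apply fails while DRS holds a stale overhead estimate, then succeeds once the next host sync refreshes it.

```python
# Illustrative model of the transient estimate/actual mismatch.
# Not VMware code; names are hypothetical.

def apply_resource_settings(requested_min_limit_kb: int,
                            effective_reservation_kb: int) -> bool:
    """Mimic the scheduler check: the requested memory minLimit must
    cover the kernel's effective reservation, or the apply fails with
    vob.sched.group.mem.minlimit.lt.min.kb."""
    return requested_min_limit_kb >= effective_reservation_kb

# First pass: DRS's stale overhead estimate trails the kernel's value
# (numbers taken from the vpxa.log excerpt above), so the apply fails
# and vCenter raises the DRS warning.
stale_estimate_kb = 25_593_856
kernel_effective_kb = 25_598_856
first_try = apply_resource_settings(stale_estimate_kb, kernel_effective_kb)

# Next host sync (<1 second later) refreshes the estimate, so the retry
# succeeds; this is why the warnings are transient false positives.
refreshed_estimate_kb = kernel_effective_kb
second_try = apply_resource_settings(refreshed_estimate_kb, kernel_effective_kb)

print(first_try, second_try)  # False True
```

With frequent vmx.reboot.PowerCycle resets, this fail-then-succeed cycle repeats per VM, which is what keeps the warnings visible in the UI.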
The discrepancy is typically ~15 MB and harmless (no memory corruption or instability), but it surfaces as alarms when vmx.reboot.PowerCycle is triggered frequently.
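To quantify the gap in your own logs, the two KB values in the fault text can be extracted and subtracted. The following stdlib-only helper is a hypothetical sketch (not a VMware tool); in the excerpt above the shortfall works out to 5000 KB, roughly 5 MB.

```python
# Hypothetical helper to quantify the discrepancy from a vpxa fault message.
import re

def overhead_shortfall_kb(fault_message: str) -> int:
    """Extract the requested minLimit and effective reservation (both in
    KB) from a mem.minlimit fault message and return the shortfall in KB."""
    m = re.search(
        r"minLimit (\d+) KB\s+insufficient to support effective reservation (\d+) KB",
        fault_message,
    )
    if not m:
        raise ValueError("not a mem.minlimit fault message")
    requested_kb, effective_kb = map(int, m.groups())
    return effective_kb - requested_kb

# Message taken from the vpxa.log excerpt above.
msg = ("Group vm.X: Requested memory minLimit 25593856 KB "
       "insufficient to support effective reservation 25598856 KB")
shortfall = overhead_shortfall_kb(msg)
print(shortfall, "KB =", round(shortfall / 1024, 1), "MB")  # 5000 KB = 4.9 MB
```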
Workaround:
Set the DRS advanced option on affected clusters: MemOverheadGrowthMin = 4
Steps to Apply:
Note: No vCenter service restart or host/vpxa restart is required; the change takes effect immediately and is non-disruptive.
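Conceptually, the change amounts to merging MemOverheadGrowthMin = 4 into the cluster's existing DRS advanced-option list without disturbing other entries. The sketch below models this with plain dicts rather than the real vSphere API objects, so it stays self-contained; the function name is hypothetical and the actual cluster reconfiguration call is omitted.

```python
# Sketch of the option merge, modeled with plain dicts instead of the
# real vSphere API objects. Hypothetical helper; the cluster
# reconfiguration call itself is not shown.

def set_drs_advanced_option(options: list, key: str, value: str) -> list:
    """Return a new advanced-option list with key=value applied,
    replacing any existing entry for the same key (idempotent)."""
    updated = [dict(o) for o in options if o["key"] != key]
    updated.append({"key": key, "value": value})
    return updated

# Existing options on the cluster are preserved; only the target key
# is added or updated.
current = [{"key": "SomeExistingOption", "value": "1"}]  # example data
desired = set_drs_advanced_option(current, "MemOverheadGrowthMin", "4")
print(desired)
```

Because the merge is idempotent, re-running it against a cluster that already has the option set is a no-op, which matters when rolling the change out broadly.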
Bulk Rollout (for multiple VDI clusters): Use the provided Python script (change_drs_advance_option.py):
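For orientation, a bulk-rollout script of this kind typically takes a list of cluster names plus the option key/value and supports a dry run. The skeleton below is a hypothetical, stdlib-only sketch of such an interface; the actual change_drs_advance_option.py may accept different arguments, and the vCenter API calls are intentionally omitted.

```python
# Hypothetical CLI skeleton for a bulk rollout; the real script's
# interface may differ. vCenter API calls are omitted.
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser(
        description="Set a DRS advanced option on multiple clusters")
    p.add_argument("--clusters", required=True,
                   help="comma-separated cluster names")
    p.add_argument("--key", default="MemOverheadGrowthMin")
    p.add_argument("--value", default="4")
    p.add_argument("--dry-run", action="store_true",
                   help="report intended changes without applying them")
    return p.parse_args(argv)

# Example invocation against two hypothetical VDI clusters.
args = parse_args(["--clusters", "VDI-01,VDI-02", "--dry-run"])
for name in args.clusters.split(","):
    action = "would set" if args.dry_run else "setting"
    print(f"{action} {args.key}={args.value} on cluster {name}")
```

A dry-run mode is worth keeping in any bulk rollout so the list of affected clusters can be reviewed before the change is applied.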
Additional Recommendations: