Vpxd service in vCenter Server Appliance crashes with "Panic: Memory exceeds hard limit. Panic" due to ContainerViews clogging up memory

Article ID: 374206

Products

VMware vCenter Server

Issue/Introduction

  • VPXD service is crashing intermittently.
  • vCenter Appliance vpxd service will not start.
  • Error when logging into vCenter: "You have no privileges to view this object or it is deleted".
  • vCenter Server Appliance storage/core full due to vpxd-worker core dump files.
  • When reviewing the vCenter service logs (/var/log/vmware/vpxd/vpxd-<number>.log), backtraces similar to the following can be seen:
    From the vpxd.log:
    
    YYYY-MM-DDTHH:MM:SS.484Z error vpxd[08223] [Originator@6876 sub=Memory checker] Current value 11101224 exceeds hard limit 11099136. Shutting down process.
    YYYY-MM-DDTHH:MM:SS.581Z panic vpxd[08223] [Originator@6876 sub=Default]
    -->
    --> Panic: Memory exceeds hard limit. Panic
    --> Backtrace:
    --> [backtrace begin] product: VMware VirtualCenter, version: 7.0.3, build: build-24026615, tag: vpxd, cpu: x86_64, os: linux, buildType: release
    --> backtrace[00] libvmacore.so[0x0037DB8B]
    --> backtrace[01] libvmacore.so[0x002C79C5] Vmacore::System::Stacktrace::CaptureFullWork(unsigned int)
    --> backtrace[02] libvmacore.so[0x002D6C5B] Vmacore::System::SystemFactory::CreateBacktrace(Vmacore::Ref<Vmacore::System::Backtrace>&)
    --> backtrace[03] libvmacore.so[0x00370CD7]
    --> backtrace[04] libvmacore.so[0x00370DF3] Vmacore::PanicExit(char const*)
    --> backtrace[05] libvmacore.so[0x002C7827] Vmacore::System::ResourceChecker::DoCheck()
    --> backtrace[06] libvmacore.so[0x0023B390]
    --> backtrace[07] libvmacore.so[0x002349E7]
    --> backtrace[08] libvmacore.so[0x00239F75]
    --> backtrace[09] libvmacore.so[0x003765C0]
    --> backtrace[10] libpthread.so.0[0x00007F87]
    --> backtrace[11] libc.so.6[0x000F36BF]
    --> backtrace[12] (no module)
    --> [backtrace end]

 

  • Prior to the crash, a large number of "vim.view.ViewManager.createContainerView" tasks are logged, far more than "vim.view.View.destroy" tasks, or none of the latter are seen at all:
    egrep 'View.destroy|createContainerView' vpxd-???.log | awk -F " -- " '{print $4}' | sort | uniq -c
    477634 vim.view.ViewManager.createContainerView
     19314 vim.view.View.destroy

 

  • There are external applications running in the environment which access vCenter for various purposes (e.g., monitoring software).

Environment

  • VMware vCenter Server Appliance 7.x
  • VMware vCenter Server Appliance 8.x

Cause

Two primary conditions contribute to this issue:

  1. Excessive Concurrent Login Sessions
    Frequent and simultaneous client logins—often from automated systems such as monitoring, backup, or orchestration tools—generate significant memory load on vpxd-worker. Over time, this can exhaust memory resources allocated to the process.

  2. Unmanaged ContainerView Objects
    ContainerViews are memory-resident objects used by vCenter to represent inventory components such as hosts, virtual machines, clusters, datastores, and network entities.

    • These views are created in memory when clients query inventory data.

    • The size of each object can vary, ranging from minimal to several hundred megabytes, depending on the inventory's complexity.

    • While some ContainerViews are persistent by design, temporary views must be explicitly destroyed using the vim.view.View.destroy method (see the sketch after this list).

    • Failure to destroy unused views leads to memory bloat, further stressing the vpxd-worker process.
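
For reference, the following is a minimal pyVmomi sketch (host name and credentials are placeholders) of the ContainerView lifecycle a well-behaved client should follow: connect, create a temporary view, read the inventory it needs, and destroy the view even if the query fails. A client that skips the destroy call while polling in a loop produces exactly the create/destroy imbalance described in this article.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ssl_context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="user@vsphere.local",
                  pwd="********", sslContext=ssl_context)
try:
    content = si.RetrieveContent()
    # vim.view.ViewManager.createContainerView
    view = content.viewManager.CreateContainerView(
        container=content.rootFolder, type=[vim.VirtualMachine], recursive=True)
    try:
        for vm in view.view:
            print(vm.name)
    finally:
        # vim.view.View.destroy -- without this call the view stays resident
        # in vpxd memory for the lifetime of the session
        view.DestroyView()
finally:
    Disconnect(si)

Long-running tools should also reuse a single session rather than logging in for every poll, which addresses the first cause listed above.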

Resolution

To resolve the issue, identify which source is creating the ContainerViews by running the following command from the /var/log/vmware/vpxd/ directory:

# zcat vpxd-profiler*.gz |grep SessionStats | grep Container | cut -d '/' -f5-10 | sort | uniq -c | sort -nr

The output of the command will look like this:

  128 Id='52d6####-####-####-####-######e656c3'/Username='com.vmware.vcIntegrity'/ClientIP='127.0.0.1'/SessionView/Container/total 553
  128 Id='52c5####-####-####-####-######46dc12'/Username='VSPHERE.LOCAL\vpxd-extension-d557####-####-####-######b90a4e'/ClientIP='127.0.0.1'/SessionView/Container/total 32
  128 Id='52c4####-####-####-####-######895e4f'/Username='VSPHERE.LOCAL\Administrator'/ClientIP='10.x.y.zz'/SessionView/Container/total 123
  128 Id='522c####-####-####-####-######47e689'/Username='VirtualCenter'/ClientIP='127.0.0.1'/SessionView/Container/total 68
  128 Id='5219####-####-####-####-######646f1f'/Username='com.vmware.vsan.health'/ClientIP='127.0.0.1'/SessionView/Container/total 73
  127 Id='5220####-####-####-####-######59e190'/Username='VSPHERE.LOCAL\vpxd-extension-d557####-####-####-######b90a4e'/ClientIP='127.0.0.1'/SessionView/Container/total 534
  125 Id='52f6####-####-####-####-######b54da5'/Username='VSPHERE.LOCAL\Administrator'/ClientIP='10.x.10.zz'/SessionView/Container/total 471
   37 Id='5274####-####-####-####-######073eaf'/Username='<external_domain>\<account>'/ClientIP='10.x.10.z'/SessionView/Container/total 531
   16 Id='521d####-####-####-####-######c26d37'/Username='<external_domain>\<account>'/ClientIP='10.x.10.z'/SessionView/Container/total 502
    3 Id='5274####-####-####-####-######073eaf'/Username='<external_domain>\<account>'/ClientIP='10.x.10.z'/SessionView/Container/total 529

 

Most sessions where the ClientIP is 127.0.0.1 can normally be ignored; these are internal vCenter components.
For the others, take the session ID (the UUID in Id='52d6####-####-####-####-######e656c3', for example) from the output above and verify whether the source is actively destroying all of its ContainerViews, by comparing the number of views created with the number destroyed:

# zcat $(ls -1 vpxd*.gz | grep -v profiler) | grep <session_Id> | egrep "ContainerView|View.destroy" | awk -F " -- " '{print $4}' | sort | uniq -c

For example:

zcat $(ls -1 vpxd*.gz | grep -v profiler) | grep 52c4####-####-####-####-######895e4f | egrep "ContainerView|View.destroy" | awk -F " -- " '{print $4}' | sort | uniq -c
  13143 vim.view.ViewManager.createContainerView
  13142 vim.view.View.destroy

 

In this example, the source that created the session properly destroys its ContainerViews. However, if the count for vim.view.View.destroy is much smaller than the count for vim.view.ViewManager.createContainerView, or no vim.view.View.destroy tasks are shown at all, the source is not destroying its ContainerViews, for example:

zcat $(ls -1 vpxd*.gz | grep -v profiler) | grep 52f6####-####-####-####-######b54da5 | egrep "ContainerView|View.destroy" | awk -F " -- " '{print $4}' | sort | uniq -c
  1465 vim.view.ViewManager.createContainerView

 

If you see this for a specific source, identify which application is using the client IP of the session and temporarily stop it from accessing vCenter to prevent the vpxd service from crashing.
Then engage the application vendor's support for further troubleshooting.

Additional Information