Clarity - Hanging threads and high CPU utilization in process engine due to java.util.HashMap infinite loop problem


Article ID: 42239


Updated On:


Clarity PPM SaaS, Clarity PPM On Premise


The bg service running the process engine can intermittently exhibit high CPU utilization that persists until the service is restarted.

Thread dumps captured with the JDK jstack tool will reveal multiple threads stuck at the same point in java.util.HashMap.getEntry() without any synchronization locking present.

For example, searching a thread dump for the call yields multiple hits, each from a separate thread (a process action, pipeline, or custom script execution thread):
C:\HeapDumps\bg_threaddump_201602032203.txt (9 hits)
Line 79: at java.util.HashMap.getEntry(
Line 392: at java.util.HashMap.getEntry(
Line 408: at java.util.HashMap.getEntry(
Line 424: at java.util.HashMap.getEntry(
Line 440: at java.util.HashMap.getEntry(
Line 456: at java.util.HashMap.getEntry(
Line 472: at java.util.HashMap.getEntry(
Line 497: at java.util.HashMap.getEntry(
Line 535: at java.util.HashMap.getEntry(

A specific thread stack trace from the same thread dump file:

"Event Handler pool-3-thread-13" prio=10 tid=0x00000000088b1800 nid=0x657a runnable [0x0000000040266000]
 java.lang.Thread.State: RUNNABLE
at java.util.HashMap.getEntry(
at java.util.HashMap.get(
at com.niku.bpm.utilities.BpmUtils.getLoggedinSecurityIdentifier(
at com.niku.bpm.eventmgr.ObjectEventHandler.processEventToAutoStartProcesses(
at com.niku.bpm.eventmgr.ObjectEventHandler.fireEvent(
at com.niku.bpm.eventmgr.messageserver.BaseEventHandler.fireEvent(
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
 Locked ownable synchronizers:
- <0x00000007825e2770> (a java.util.concurrent.ThreadPoolExecutor$Worker)
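The underlying mechanism is that, in pre-JDK-8 HashMap, two threads resizing the map at the same time can create a cycle in a bucket's linked list; any later get() on that bucket then traverses the cycle forever, which matches the RUNNABLE threads above. The following minimal sketch (not Clarity's actual code; names are illustrative) shows the thread-safe alternative, ConcurrentHashMap, surviving the same concurrent write pattern that corrupts a plain HashMap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class SafeCacheDemo {
    // Hypothetical shared cache. ConcurrentHashMap tolerates concurrent
    // put/get without external locking; a plain java.util.HashMap under
    // the same load on JDK 7 can corrupt a bucket's linked list during
    // resize and leave readers spinning in getEntry().
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        int writers = 8;
        CountDownLatch done = new CountDownLatch(writers);
        for (int t = 0; t < writers; t++) {
            final int id = t;
            new Thread(() -> {
                // Many concurrent inserts force repeated internal
                // resizing, the scenario that breaks an unsynchronized map.
                for (int i = 0; i < 10_000; i++) {
                    cache.put("user-" + id + "-" + i, "sid-" + i);
                }
                done.countDown();
            }).start();
        }
        done.await();
        System.out.println(cache.size());  // 80000
    }
}
```

Replacing the shared HashMap with a ConcurrentHashMap (or wrapping access in synchronization) is the standard remedy for this class of hang; the example only illustrates the pattern, not a supported patch to the product.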

Steps to Reproduce:

  1. Create a large number of process instances for many (e.g. 100+) users in a short time frame.
  2. Monitor the threads (via thread dump) of the bg service and look for an accumulation of java.util.HashMap.getEntry() calls all stuck on the same line.
  3. Monitor the CPU utilization of the machine where the bg service runs.
  4. Capture a thread dump using the jstack command from the JDK at the command prompt: jstack -l <pid of hanging service>
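The scan in steps 2 and 4 can also be automated from inside the same JVM. The sketch below (class name and threshold are illustrative; it assumes you can run diagnostic code in the affected JVM) counts live threads that currently have a java.util.HashMap frame on their stack, the signature of this hang:

```java
import java.util.Map;

public class StuckThreadScan {
    // Counts live threads with a java.util.HashMap frame on their stack.
    // A persistently non-zero count across repeated scans, on RUNNABLE
    // threads, matches the symptom described in this article.
    static long countThreadsInHashMap() {
        long hits = 0;
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            for (StackTraceElement frame : e.getValue()) {
                if (frame.getClassName().equals("java.util.HashMap")) {
                    hits++;
                    break;  // count each thread at most once
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        System.out.println(countThreadsInHashMap());
    }
}
```

On a healthy, idle JVM this prints 0; a single sample can legitimately catch a thread mid-lookup, so only a count that stays elevated across several scans indicates the looping condition.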


Expected Result:  CPU utilization will not spike/thrash at 95% and higher, and threads will not appear stuck on the same line in java.util.HashMap.getEntry().

Actual Result: CPU intermittently spikes, threads hang, and the condition typically does not resolve without a service restart.


Release: All 


Caused by CLRT-79908



This defect will not be fixed at this time. If the problem occurs on any currently supported version of Clarity, please raise a support ticket referencing this defect (CLRT-79908) or this knowledge article so that the defect can be reviewed again.


Restart the bg services when possible. Little else is likely to run before the restart takes place anyway, as most CPU cycles are consumed by the infinite loop spin.

Additional Information:

In the stack traces, the following two lines at the top of multiple threads' stacks are the indicator of this problem:

at java.util.HashMap.getEntry(
at java.util.HashMap.get(

The java.util.HashMap line numbers shown correspond to Oracle JDK 1.7.0_21. Other JDK versions, and possibly other operating systems running the bg service, may show different line numbers, but the problem and cause are the same.