Wasp probe spiking CPU to 100% on UMP machine


Article ID: 127509




DX Infrastructure Management NIMSOFT PROBES CA Unified Infrastructure Management SaaS (Nimsoft / UIM) CA Unified Infrastructure Management for z Systems


The UMP console keeps failing and the CPU of the UMP server stays at 100% until the wasp probe is restarted.
We are seeing the following types of errors:
The portal.log contains many out-of-memory errors:
19 Feb 2019 11:34:49,071 ERROR [PortalInitAction:111] java.lang.OutOfMemoryError: GC overhead limit exceeded
19 Feb 2019 11:34:49,087 ERROR [PortalInitAction:111] java.lang.OutOfMemoryError: GC overhead limit exceeded

The wasp.log is also showing out-of-memory errors:
Feb 19 11:34:14:494 ERROR [http-bio-8080-exec-78, org.apache.catalina.core.ContainerBase.[wasp-engine].[localhost].[/nisapi].[resteasy-servlet]] org.jboss.resteasy.spi.UnhandledException: java.lang.OutOfMemoryError: GC overhead limit exceeded


The wasp probe is a Java application.
If it is starved for memory, the JVM spends most of its time in garbage collection, which drives up the system's CPU usage and causes a high-CPU situation.
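A quick way to confirm this condition is to count the GC-overhead errors in the wasp log. This is a sketch; the log path below is a typical install location and is an assumption, so adjust it for your environment:

```shell
# Typical UMP wasp log location (assumption) - adjust for your install.
WASP_LOG=/opt/nimsoft/probes/service/wasp/wasp.log

# Count how many times the JVM reported GC overhead exhaustion.
# A steadily growing count indicates the heap is too small.
grep -c "GC overhead limit exceeded" "$WASP_LOG"
```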


UIM 9.02 and earlier
UMP 9.02 and earlier


Increase the RAM allocated to the wasp probe:

The client had the default settings of:
options = -Dfile.encoding=UTF-8
java_mem_max = -Xmx2048m
java_mem_init = -Xms1024m
max_perm_size = -XX:MaxPermSize=512m

To resolve the issue, we changed them to:
options = -Dfile.encoding=UTF-8
java_mem_max = -Xmx8g
java_mem_init = -Xms2g
max_perm_size = -XX:MaxPermSize=512m

Support recommends a MINIMUM of 6 GB for production UMP and 4 GB for lab environments.
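For reference, these keys live in the startup section of the wasp probe's configuration file (wasp.cfg). The fragment below is a sketch showing only the keys discussed above; surrounding keys vary by release:

```
<startup>
   options = -Dfile.encoding=UTF-8
   java_mem_max = -Xmx8g
   java_mem_init = -Xms2g
   max_perm_size = -XX:MaxPermSize=512m
</startup>
```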

Additional Information

When you change the Java memory limits of the UMP wasp probe, you are actually changing the memory limits of the JVM running the wasp probe. The only physical restriction on the java_mem_max value you configure is the amount of physical memory available on the robot hosting the UMP wasp probe. Do not configure java_mem_max to a value that exceeds the memory available on the robot system; be sure to leave some memory free for the operating system and any other applications installed on the UMP robot. Check the memory available on the UMP robot first, then set java_mem_max to a value that fits within what is available.

It is also recommended that you keep no more than a 2 GB difference between the configured java_mem_max and java_mem_init key values, as a larger gap may affect how often garbage collection is executed. For example, if the java_mem_max key value is set to 8g, then set the java_mem_init key value to 6g.
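Following that guideline, an 8 GB heap would be configured as shown below (a sketch of the two keys only):

```
java_mem_max = -Xmx8g
java_mem_init = -Xms6g
```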

It is also recommended that when you configure the UMP wasp probe to use large amounts of memory, you also change the garbage collection method used by the JVM hosting the probe to G1 instead of the default collector. G1 is the preferred garbage collector for large-memory Java applications (anything over 20 GB is considered large, but if java_mem_max is set to a smaller value and you still see "java.lang.OutOfMemoryError: GC overhead limit exceeded" messages in the wasp logs, it is probably a good idea to configure the UMP wasp probe to use G1 anyway). To do this, open the wasp probe's Raw Configure GUI on the UMP robot, select the startup folder in the left-hand pane, and change the following key value in the right-hand pane:

options = -Dfile.encoding=UTF-8 -XX:+UseG1GC

If you want to enable logging of G1 performance, set the options key value to the following instead:

options = -Dfile.encoding=UTF-8 -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log

NOTE: We have seen multiple performance issues when java_mem_max is set to 24g or more for the UMP wasp probe. In practice, 12g seems to be the "sweet spot" for ideal UMP wasp performance, and 8g is the java_mem_max recommendation given in related KB articles.