The wasp probe on the Operator Console server (20.4.x) is consuming 99% of the system CPU, and the console becomes unresponsive. Restarting the robot fixes the problem temporarily, but it returns roughly every week.
Release: 20.4
There is a problem with the EMS probe. The following messages appear in wasp.log when the issue occurs:
Jul 20 00:09:46:762 INFO [http-nio-8080-exec-6, com.ca.uim.ems.api.CallbackServiceClientInvocationHandler] Unable to resolve typed exception from callback service call.
Jul 20 00:10:17:703 INFO [http-nio-8080-exec-7, com.ca.uim.ems.api.CallbackServiceClientInvocationHandler] Unable to resolve typed exception from callback service call.
Jul 20 00:10:48:548 INFO [http-nio-8080-exec-10, com.ca.uim.ems.api.CallbackServiceClientInvocationHandler] Unable to resolve typed exception from callback service call.
Jul 20 00:11:19:837 INFO [http-nio-8080-exec-1, com.ca.uim.ems.api.CallbackServiceClientInvocationHandler] Unable to resolve typed exception from callback service call.
Jul 20 00:11:50:555 INFO [http-nio-8080-exec-4, com.ca.uim.ems.api.CallbackServiceClientInvocationHandler] Unable to resolve typed exception from callback service call.
The EMS probe (observed via Task Manager) is using 100% of the memory allocated to it by the java_mem_max setting in its .cfg file.
- Stop (deactivate) the EMS probe and delete the folder /nimsoft/probes/service/ems/db/ from the primary hub machine.
- With EMS still deactivated, open Raw Configure on the EMS probe and check the amount of RAM allocated to it (java_mem_max). In a large environment with many alarms, setting this higher (e.g. 8 GB) may help performance; in any case, increase it from whatever is currently configured. Try a factor of 1.5x: if it is currently set to 2 GB, try 3 GB; if set to 4 GB, try 6 GB; and so on.
- Activate the EMS probe again.
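For reference, in UIM Java-based probes the java_mem_max key typically sits in the startup/opt section of the probe's .cfg file and carries a standard JVM -Xmx value. The fragment below is illustrative only; the exact section layout can vary by probe version, and the 6144m value is just an example of a 1.5x increase from 4 GB:

```
<startup>
   <opt>
      java_mem_max = -Xmx6144m
   </opt>
</startup>
```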
Once the EMS probe has sufficient memory to perform adequately in the environment, the high CPU and hanging symptoms in the Operator Console should be resolved.
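The steps above can be sketched from a shell on the primary hub. This is a sketch under several assumptions: the hub is Linux, the pu (probe utility) is on the PATH, the UIM address and credentials shown are placeholders, and the probe_deactivate/probe_activate controller callbacks should be verified in your environment before use (deactivating via Infrastructure Manager or Admin Console works equally well). The live commands are left commented out; only the 1.5x heap arithmetic actually runs:

```shell
# Illustrative procedure (placeholders: /Domain/Hub/robot and the credentials):
#   pu -u administrator -p <password> /Domain/Hub/robot/controller probe_deactivate ems
#   rm -rf /nimsoft/probes/service/ems/db
#   ... raise java_mem_max in the ems .cfg (Raw Configure), then ...
#   pu -u administrator -p <password> /Domain/Hub/robot/controller probe_activate ems

# The 1.5x heap sizing from the steps above, computed in MB:
current_mb=4096                   # current java_mem_max, e.g. -Xmx4096m
new_mb=$(( current_mb * 3 / 2 ))  # 1.5x the current allocation
echo "new java_mem_max: -Xmx${new_mb}m"
```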