The prediction_engine probe may gradually stop processing messages. In Task Manager, the probe's Java process can be seen consuming all of the RAM allocated to it, and the prediction_engine queue grows without draining until the probe is restarted, which restores processing for a time.
Release: Any supported UIM release
Component: prediction_engine
This is caused by a combination of an undersized Java heap and the configured garbage collection method: once the heap fills, the JVM spends most of its time in garbage collection instead of processing messages.
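Before changing anything, you can optionally confirm that the probe is stuck in garbage collection by adding GC logging flags to java_opts. This is a sketch that assumes the probe runs on a Java 8 JVM (where these logging flags are valid); the log file name is only an example:

java_opts = -XX:+UseConcMarkSweepGC -XX:+ScavengeBeforeFullGC -XX:+UseParNewGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:prediction_engine_gc.log

If the log shows back-to-back Full GC entries that reclaim little memory, the probe is spending its time collecting rather than processing.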
To resolve this issue, increase the amount of RAM allocated to the probe and switch the garbage collector to G1.
To do this, edit prediction_engine.cfg and locate this section:
<startup>
<opt>
java_mem_init = -Xms64m
java_mem_max = -Xmx512m
java_opts = -XX:+UseConcMarkSweepGC -XX:+ScavengeBeforeFullGC -XX:+UseParNewGC
</opt>
</startup>
Change it to look like this (note that a 4 GB heap requires the probe to run on a 64-bit JVM):
<startup>
<opt>
java_mem_init = -Xms64m
java_mem_max = -Xmx4096m
java_opts = -XX:+UseG1GC
</opt>
</startup>
Deactivate and then reactivate the probe so that it restarts with the new settings.
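After the restart, you can verify that a JVM actually picks up the new heap size and collector. The following standalone class is an illustrative sketch (not part of the probe): it prints the maximum heap and the names of the active collectors when run with the same options, e.g. java -Xmx4096m -XX:+UseG1GC JvmSettingsCheck.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmSettingsCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will grow to (-Xmx), reported in MiB
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMiB + " MiB");

        // Names of the active collectors; on a HotSpot JVM started with
        // -XX:+UseG1GC these are "G1 Young Generation" and "G1 Old Generation"
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("Collector: " + gc.getName());
        }
    }
}
```

If the reported max heap is still around 512 MiB, the edited file was not the one the probe read, or the probe was not fully restarted.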