DLP Endpoint : java.lang.OutOfMemoryError: Java heap space



Article ID: 242291



Data Loss Prevention Endpoint Prevent


Endpoint servers are maxing out the aggregator process.


Message:  Stack array is empty. The following exception does not have a proper stack trace.
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
 at com.symantec.dlp.communications.common.activitylogging.ConnectionLogger.getThrottler(ConnectionLogger.java:553)
 at com.symantec.dlp.communications.common.activitylogging.ConnectionLogger.shouldSuppressHSL(ConnectionLogger.java:506)
 at com.symantec.dlp.communications.common.activitylogging.ConnectionLogger.writeToLogFileIfNeeded(ConnectionLogger.java:473)
 at com.symantec.dlp.communications.common.activitylogging.ConnectionLogger.writeToLogs(ConnectionLogger.java:459)
 at com.symantec.dlp.communications.common.activitylogging.ConnectionLogger.onReplicatorException(ConnectionLogger.java:1161)
 at com.symantec.dlp.communications.common.activitylogging.AsynchronousConnectionLogger$ReplicatorExceptionTask.run(AsynchronousConnectionLogger.java:2414)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space




The Endpoint server's detection memory is insufficient for the size of the policy matrix or the number of agents connecting.


When you see this error, several issues can be in play.

Review the FileReader logs and look for entries from:

Class: com.vontu.policy.loader.execution.ExecutionMatrixGenerator 

Look for any policies that have over 60,000 rows. If the customer has recently made a policy change, such as adding an exception or a response rule, they may have doubled the number of rows and inadvertently created a scenario where the Endpoint servers are flooded with connections that time out.
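As an illustration only (the real FileReader log line format varies by release; the "rows" wording below is an assumption), here is a sketch that scans a FileReader log for ExecutionMatrixGenerator entries reporting large row counts:

```python
import re

# Hypothetical helper. Assumes ExecutionMatrixGenerator log lines mention a
# row count such as "120,000 rows"; adjust the pattern to match what your
# FileReader logs actually contain.
ROW_RE = re.compile(r"(\d[\d,]*)\s+rows", re.IGNORECASE)

def find_large_matrices(log_text: str, threshold: int = 60_000) -> list[str]:
    """Return ExecutionMatrixGenerator lines whose row count exceeds threshold."""
    hits = []
    for line in log_text.splitlines():
        if "ExecutionMatrixGenerator" not in line:
            continue
        match = ROW_RE.search(line)
        if match and int(match.group(1).replace(",", "")) > threshold:
            hits.append(line)
    return hits

sample_log = (
    "INFO Class: com.vontu.policy.loader.execution.ExecutionMatrixGenerator "
    "generated 120,000 rows for policy PCI\n"
    "INFO unrelated line\n"
)
print(find_large_matrices(sample_log))
```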

For example: 100,000 rows * 32 bytes per row (for deltas) * number of agents = the cost of pushing out the changes. 100,000 * 32 * 60,000 agents = 192 GB.
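That arithmetic can be sketched as follows (the 32-bytes-per-row delta figure comes from the example above):

```python
def delta_push_cost_gb(rows: int, bytes_per_row: int, agents: int) -> float:
    """Estimated cost, in decimal GB, of pushing policy deltas to all agents."""
    return rows * bytes_per_row * agents / 1e9

# 100,000 rows * 32 bytes per row * 60,000 agents
print(delta_push_cost_gb(100_000, 32, 60_000))  # → 192.0
```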

If this is the case, consolidate the policy as much as possible.

Then, on the Endpoint servers, open the aggregator.properties file, located by default at:

Program Files\Symantec\DataLossPrevention\DetectionServer\15.8.00000\Protect\config


Save a copy of the file, then edit this line:

# The maximum number of simultaneously connected agents
maxConnections = 10000

Lower the value, for example:

# The maximum number of simultaneously connected agents
maxConnections = 5000
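If you prefer to script the change, here is a hedged sketch (the helper name is hypothetical; it assumes the file uses plain `key = value` lines, and it demos against a throwaway file rather than the live config):

```python
import shutil
from pathlib import Path

def lower_max_connections(props_path: str, new_value: int) -> None:
    """Back up a properties file, then rewrite its maxConnections line.

    Hypothetical helper; on a real Endpoint server the file lives in the
    Protect/config directory shown above.
    """
    path = Path(props_path)
    # Save a copy before editing, as the article recommends.
    shutil.copy2(path, path.with_suffix(path.suffix + ".bak"))
    lines = []
    for line in path.read_text().splitlines():
        if line.strip().startswith("maxConnections"):
            line = f"maxConnections = {new_value}"
        lines.append(line)
    path.write_text("\n".join(lines) + "\n")

# Demo against a scratch file so the sketch is runnable anywhere:
demo = Path("aggregator.properties")
demo.write_text(
    "# The maximum number of simultaneously connected agents\n"
    "maxConnections = 10000\n"
)
lower_max_connections(str(demo), 5000)
print(demo.read_text())
```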

Then restart the endpoint server.

You may need to keep lowering this number until the aggregator stays running and the OutOfMemoryError no longer appears. Additionally, check how much memory the Endpoint server has, and raise its memory to the maximum possible without over-provisioning the box.

Change the -Xmx4096M value to match the available memory, then restart the server.

Continue to monitor aggregator to see if the service stays running. 

Additional Information

See also: DLP Endpoint : java.lang.OutOfMemoryError: Java heap space

Here is an example of how the policy matrix is calculated. 

number of rows = (number of detection rules) * (number of rules in exception 1) * (number of rules in exception 2) * ... * (number of rules in exception n)

Example Rules:

Detection Rules
matches - keywords "hello", "bye" AND keywords "what", "why"
matches - regex "[a-z]" 

Exceptions
#1 matches - keywords "root", "admin" AND ssn 111-99-3023
#2 matches - keyword "everyone" AND regex "99*"
#3 matches - keyword "abc" AND keyword "def" AND keyword "zyx"

Calculating the total rows:
Number of Detection rules = 2
number of rules in exception #1 = 2
number of rules in exception #2 = 2
number of rules in exception #3 = 3
Number of rows in the execution matrix for the policy = 2 * 2 * 2 * 3 for a total of 24 rows
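The calculation above can be sketched as follows (rule counts taken from the example):

```python
from math import prod

def execution_matrix_rows(detection_rules: int, exception_rule_counts: list[int]) -> int:
    """Rows = number of detection rules multiplied by the rule count of every exception."""
    return detection_rules * prod(exception_rule_counts)

# 2 detection rules; exceptions with 2, 2, and 3 rules each.
print(execution_matrix_rows(2, [2, 2, 3]))  # → 24
```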