The Data Loss Prevention (DLP) agent or the Java Services process (detection server) uses a high amount of memory when compound exceptions are used in policies.
A policy change can increase the size of the execution matrix, which in turn increases the amount of memory used by the agent.
The current design of the execution matrix is such that its size can increase by a factor of 2 or more simply by adding a single compound exception to a policy. A functional policy matrix of 15K rows can quickly grow to 45K rows and become nonfunctional. Because the symptom of this issue is high memory usage, it can go unnoticed for some time, making it impractical to identify the exact policy or configuration change. Other factors, such as the agent configuration, the channels monitored, and the CPU and physical memory on the client, affect how severely these symptoms manifest.
Optimizations made in version 16.0 will likely resolve this issue. If possible, upgrade the servers and agents to at least that version.
If staying on 15.x, see the following:
We recommend optimizing policies to best fit the environment. DLP admins may find that a policy matrix of 5K or 10K rows runs fine in their environment. Try to stay within that target in the future to avoid a recurrence of this issue.
Here is an example of how the policy matrix is calculated.
number of rows = (number of detection rules) * (number of rules in exception 1) * (number of rules in exception 2) * ... * (number of rules in exception n)
Example Rules:
Detection Rules
#1 matches - keywords "hello", "bye" AND keywords "what", "why"
#2 matches - regex "[a-z]"
Exceptions
#1 matches - keywords "root", "admin" AND ssn 111-99-3023
#2 matches - keyword "everyone" AND regex "99*"
#3 matches - keyword "abc" AND keyword "def" AND keyword "zyx"
Calculating the total rows:
Number of detection rules = 2
Number of rules in exception #1 = 2
Number of rules in exception #2 = 2
Number of rules in exception #3 = 3
Number of rows in the execution matrix for the policy = 2 * 2 * 2 * 3 = 24 rows
Using this formula, you can roughly estimate the impact of a policy in an environment.
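As a rough sketch, the calculation above can be scripted; the rule counts are taken from the example, and the function name is illustrative, not part of the product:

```python
from math import prod

def execution_matrix_rows(detection_rules, exception_rule_counts):
    """Estimate execution matrix rows: the number of detection rules
    multiplied by the rule count of every compound exception."""
    return detection_rules * prod(exception_rule_counts)

# Example from above: 2 detection rules; exceptions with 2, 2, and 3 rules.
rows = execution_matrix_rows(2, [2, 2, 3])
print(rows)  # 24
```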
We recommend tuning policies down to the smallest matrix practical. This lowers the impact of incident detection on machine resources and improves the response time of the detection process.
Note: Network Discover has higher resource requirements and can handle a larger matrix than endpoints.
Once the policy is on the agent, you can see the actual size of the execution matrix by installing the agent tools and running 'vontu_sqlite3.exe -db=ps.ead', then executing the following query:
select matrixDataLength from ExecutionMatrix;
A normal execution matrix size is up to 100K. 1M is very large but probably workable; 10M+ will likely not function. These boundaries are general guidelines, not absolutes, as memory usage also depends on the size of the data in each row.
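As a sketch, the same query can be run against a copy of the database with Python's standard sqlite3 module, assuming the copy is readable by standard SQLite (the live file may require vontu_sqlite3.exe). The path and helper names are assumptions, and the thresholds simply restate the guideline sizes above:

```python
import sqlite3

# Assumed path to a copy of the agent database; adjust for your environment.
DB_PATH = "ps.ead"

def matrix_size_guideline(length):
    """Map a matrixDataLength value to the rough guideline bands above."""
    if length <= 100_000:
        return "normal"
    if length <= 1_000_000:
        return "very large but probably workable"
    return "likely nonfunctional"

def check_execution_matrix(db_path=DB_PATH):
    """Return (length, guideline) pairs for each execution matrix row."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "select matrixDataLength from ExecutionMatrix"
        ).fetchall()
    finally:
        conn.close()
    return [(length, matrix_size_guideline(length)) for (length,) in rows]
```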
Please note:
A known issue that caused abnormally high CPU usage on agents was resolved in 15.8 MP2 Hotfix 3.
It is a best practice to keep agents up to date with the latest agent version to avoid issues as new versions of operating systems and other software are released.