Managing Event Purging in the Symantec Critical System Protection Server
By default, the database purge feature purges at most 100,000 events every 23 hours. If the incoming event flow is greater than this, the database will continue to grow. You can adjust both of these values by editing the sis-server.properties file (located in <Server>\tomcat\conf).
The file itself documents the two parameters:
#
# sisdbcleanup.runtime
# sisdbcleanup.event.purge.limit
#
# sisdbcleanup.runtime
# This tag represents how often database cleanup is performed.
# The value is specified in hours, ex 24 means that database
# cleanup is performed every 24 hours.
#
# default: 23
#
#
# sisdbcleanup.event.purge.limit
# This tag represents the max number of events purged each
# time the db cleanup is performed. This value will only be
# used when event purging is enabled in the console.
#
# default: 100000
#
#
#sisdbcleanup.runtime=23
#sisdbcleanup.event.purge.limit=100000
To change the values, uncomment either or both of the "runtime" and "event.purge.limit" lines and set the desired values. (You may have to remove the "read only" attribute on the file in order to edit it.) After making the changes, restart the "Symantec Critical System Protection Server" service; the changes take effect when the server starts up. The server runs a purge cycle shortly after startup and then again every "runtime" hours.
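For illustration, here is how the two lines might look after being uncommented and edited. The values shown are placeholders only, not recommendations; calculate your own using the sizing method described below.
# Placeholder values for illustration only
sisdbcleanup.runtime=12
sisdbcleanup.event.purge.limit=200000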
In order to set these numbers properly, you need to know roughly how many events are being added to your database each day. Also be aware that over time, as more agents are added to a server and as policies are changed, the average daily event inflow may increase, so you should review these purge settings periodically to make sure inflow has not outpaced purging.
The maximum number of events that can be purged each day is <sisdbcleanup.event.purge.limit> * ( 24 / <sisdbcleanup.runtime> ). You want to make sure that number is greater than the average number of events added to the database each day. For example, if you have an average of 1,000,000 events added each day, then you might set the "runtime" value to 2 hours and leave the "event.purge.limit" at the default of 100,000. These settings would purge up to 1,200,000 events/day ( 100,000 * ( 24 / 2 ) = 1,200,000 ).
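Written out as configuration lines, the settings from that example would be:
# Sized for an inflow of ~1,000,000 events/day:
# purge capacity = 100,000 * ( 24 / 2 ) = 1,200,000 events/day
sisdbcleanup.runtime=2
sisdbcleanup.event.purge.limit=100000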
If you have only recently turned on event purging in the Console, you may also want to change these settings for a short period to rapidly reduce the number of events in the database. Once the database is down to your desired size, you can set the purge settings to their steady-state values as determined using the method described above. For example, assume you have an average event inflow of 1,000,000 events/day and you want to keep events for 30 days. That is a steady state of roughly 30,000,000 events in the database. Further, assume that at the point you turn on event purging, your database has 150,000,000 events in it. That means you need to remove 120,000,000 events (the events over 30 days old) over and above the 1,000,000 you are adding every day. To do this, you might change the "runtime" to 1 hour and the "event.purge.limit" to 500,000. This would purge 500,000 * ( 24 / 1 ) = 12,000,000 events per day, a net reduction of about 11,000,000 events/day after the daily inflow, so your database would be at your desired size in about 2 weeks. Once you have reached your steady state, you could reset the values to 2 hours and 100,000 events.
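For that catch-up period, the corresponding configuration lines would be:
# Temporary catch-up settings:
# purge capacity = 500,000 * ( 24 / 1 ) = 12,000,000 events/day
sisdbcleanup.runtime=1
sisdbcleanup.event.purge.limit=500000
Remember to restore your steady-state values (for example, 2 hours and 100,000 events) and restart the service once the backlog has been cleared.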
You can monitor the purging activity by viewing Audit records in the Console. Each time the server purges events, it writes an audit record to the database with information about how many records it purged and how long the purge operation took. To find the Audit records, use the Event Search feature, specifying an "Event Category" of "Audit" and, in the Advanced Options, an "Object Name" of "REALTIME". Also select the period of time (past day, past month) that you want to review. The description of the audit record looks like the following:
Deleted 1789 REALTIME Events from database based on a 90 day limit and a purge limit of 100000 rows. Duration 1 seconds.
You can see how many events were deleted and how long the operation took, along with some of your purge settings. When the actual number of events deleted (1789 in this example) consistently remains below the purge limit, you have reached a steady state.
Some considerations to keep in mind when setting the purge parameters:
- While the purge operation is taking place, all event insertion into the database is blocked, so choose settings that keep each purge operation fairly short. You can monitor the length of the purge operation via the audit records. Based on this, purging fewer records more often may be a better choice than purging a larger number of records less frequently.
- The "runtime" setting does not let you control when the purge runs, only how often it runs. As noted above, the first purging cycle happens shortly after the server starts, and then every "runtime" hours after that. If you are setting a short "runtime", this probably does not matter. But if you are setting a long "runtime", you may want to set an odd number of hours so the purge operation rotates to different times of day. The default of 23 hours was chosen instead of 24 hours, to avoid having the purge always happen at a high activity time of the day.
These instructions are valid for all SCSP and DCS managers.