Optimizing SIEM Agent Job



Article ID: 207537


Updated On:


CASB Security Advanced CASB Security Premium CASB Security Standard


The Splunk SIEM agent pulls around 50,000 logs per run, but this is fewer than the number of new logs generated between runs. As a result, the SIEM export falls behind.




  1. The agent is pulling all CloudSOC logs, including information that is not essential for exporting.
  2. The syslog writing rate parameter is not configured correctly.
  3. The interval between jobs is too long.


  1. Use the filters described in the SIEM Agent Technote to import selective activity_type, severity, and object_type values, reducing the number of nonessential logs queried:
    • python <tool>_agent.py [--proxy <host_and_port> ] [-u <username> -p <password> ] [--severity <severity ...> ] [--app <app ...> ] [--object_type <object_type ...> ] [--activity_type <activity_type ...> ] [--elastica_app <elastica_app ...> ] [-c] [-r] [-v] [-d] [--rate] [-o/--output] [--start_date <start_date> ] [-s/--stream <stream> ] [-t/--target <socket> ] [--socket_type <udp_or_tcp> ] [-f/--filename <filename> ] [--max_bytes <maximum_bytes> ] [--backup_count <backup_count> ]
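    • For example, a run that exports only high-severity share and download activities might look like the sketch below. The filter values high, share, and download are illustrative; consult the SIEM Agent Technote for the values valid in your environment, and note that <tool>, <username>, and <password> remain placeholders:

```shell
# Hypothetical invocation: pull only high-severity "share" and "download"
# activities to cut down the number of nonessential logs per run.
python <tool>_agent.py -u <username> -p <password> \
    --severity high \
    --activity_type share download
```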
  2. Set up multiple agents, with each agent pulling a different Securlet's logs using the --app option.
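    • As a sketch, two agents splitting the load by Securlet could be scheduled as below. The app names Office365 and Box and the directory paths are illustrative; running each agent from its own working directory is an assumption made here so that the lock and status files mentioned in the agent logs do not collide:

```shell
# Hypothetical crontab entries: one agent instance per Securlet via --app,
# each in its own working directory.
*/5 * * * * cd /opt/siem_agent_o365 && python <tool>_agent.py --app Office365
*/5 * * * * cd /opt/siem_agent_box && python <tool>_agent.py --app Box
```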
  3. To check the agent's performance, first add -d (diagnostic mode) to the cron job.
  4. Check the time it takes to complete a job, then check the log writing rate. If the log writing takes more than 1 minute, change the --rate parameter.
    • For example, the logs below show a job that takes about six and a half minutes end to end, roughly four and a half of which are spent writing 51,861 logs to syslog. This indicates the syslog writing is not optimized. You can change this to --rate 5000 and monitor the behaviour, but do not set this parameter to an extremely large number (such as one million), because the system may not support it:

      yyyy-mm-dd hh:20:53,241-Log_Exporter_Client-INFO-Logging at the rate of 1000000 logs per second
      yyyy-mm-dd hh:20:53,242-Log_Exporter_Client-INFO-lock file not found, starting new process!
      yyyy-mm-dd hh:20:53,242-Log_Exporter_Client-INFO-Status file not found, starting from 1st step.
      yyyy-mm-dd hh:20:53,242-Log_Exporter_Client-INFO-Last investigate timestamp = yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:20:53,242-Log_Exporter_Client-INFO-Last detect timestamp = yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:20:53,242-Log_Exporter_Client-DEBUG-LAST_RUN_STATUS = 0
      yyyy-mm-dd hh:20:53,471-Log_Exporter_Client-INFO-Starting polling!
      yyyy-mm-dd hh:22:25,106-Log_Exporter_Client-INFO-status is complete
      yyyy-mm-dd hh:22:25,107-Log_Exporter_Client-INFO-Start downloading zipfile.
      yyyy-mm-dd hh:22:59,827-Log_Exporter_Client-INFO-File Downloaded, writing it to zip file.
      yyyy-mm-dd hh:22:59,831-Log_Exporter_Client-INFO-Removing the status file as we have downloaded logs.
      yyyy-mm-dd hh:22:59,832-Log_Exporter_Client-INFO-Extracting file of TEMP.zip
      yyyy-mm-dd hh:22:59,955-Log_Exporter_Client-INFO-Writing log to syslog.
      yyyy-mm-dd hh:27:22,829-Log_Exporter_Client-INFO-Wrote 51861 logs to syslog.
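    • To quantify the effective write throughput, subtract the "Writing log to syslog" timestamp from the "Wrote ... logs to syslog" timestamp. A minimal sketch of that arithmetic (the hour is masked as hh in the excerpt above, so a concrete hour of 14 is assumed here purely for illustration):

```python
from datetime import datetime

# Timestamps from the log excerpt (hour masked in the article; 14 assumed).
start = datetime.strptime("14:22:59,955", "%H:%M:%S,%f")  # Writing log to syslog.
end = datetime.strptime("14:27:22,829", "%H:%M:%S,%f")    # Wrote 51861 logs to syslog.
logs_written = 51861

elapsed = (end - start).total_seconds()
rate = logs_written / elapsed
print(f"elapsed: {elapsed:.0f}s, effective rate: {rate:.0f} logs/sec")
# → elapsed: 263s, effective rate: 197 logs/sec
```

    An effective rate near 200 logs/sec against a configured rate of 1,000,000 logs/sec suggests the configured value is not being honored, which is why a moderate value such as --rate 5000 is worth trying instead.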


  5. Check the time it takes to complete each job and reduce the interval between jobs accordingly. For example, if a job takes 2 minutes to complete, adjust the schedule so the agent runs every 3 minutes. For instance:
    1. yyyy-mm-dd hh:51:23,302-Log_Exporter_Client-INFO-Logging at the rate of 5000 logs per second
      yyyy-mm-dd hh:51:23,302-Log_Exporter_Client-INFO-lock file not found, starting new process!
      yyyy-mm-dd hh:51:23,303-Log_Exporter_Client-INFO-Status file not found, starting from 1st step.
      yyyy-mm-dd hh:51:23,303-Log_Exporter_Client-INFO-Last investigate timestamp = yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:51:23,303-Log_Exporter_Client-INFO-Last detect timestamp = yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:51:23,303-Log_Exporter_Client-DEBUG-LAST_RUN_STATUS = 0
      yyyy-mm-dd hh:51:23,552-Log_Exporter_Client-INFO-Starting polling!
      yyyy-mm-dd hh:53:15,922-Log_Exporter_Client-INFO-status is complete
      yyyy-mm-dd hh:53:15,923-Log_Exporter_Client-INFO-Start downloading zipfile.
      yyyy-mm-dd hh:53:48,126-Log_Exporter_Client-INFO-File Downloaded, writing it to zip file.
      yyyy-mm-dd hh:53:48,129-Log_Exporter_Client-INFO-Removing the status file as we have downloaded logs.
      yyyy-mm-dd hh:53:48,130-Log_Exporter_Client-INFO-Extracting file of TEMP.zip
      yyyy-mm-dd hh:53:48,262-Log_Exporter_Client-INFO-Writing log to syslog.
      yyyy-mm-dd hh:54:04,518-Log_Exporter_Client-INFO-Wrote 54584 logs to syslog.
      yyyy-mm-dd hh:54:04,521-Log_Exporter_Client-INFO-Investigate Logs fetch up until yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:54:04,521-Log_Exporter_Client-INFO-Detect Logs fetch up until yyyy-mm-ddTHH:MM:SS
      yyyy-mm-dd hh:54:04,521-Log_Exporter_Client-INFO-Removing file TEMP.zip
      yyyy-mm-dd hh:54:04,522-Log_Exporter_Client-INFO-Removing file TEMP.log
      yyyy-mm-dd hh:05:02,101-Log_Exporter_Client-INFO-Logging at the rate of 5000 logs per second
    2. The job only took about 3 minutes to complete, but then waited more than 10 minutes before the next job started. Change the crontab entry to run every 4 minutes, as shown below. Note that this means using */N (run every N-th minute) instead of * (run every minute) in the minute field of the crontab entry:
      1. */4 * * * * splunk_agent.py
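Putting the pieces together, a tuned crontab entry might combine the shorter interval with the diagnostic, rate, and filtering options discussed above. This is a sketch only; the working directory, severity value, and rate are illustrative, and <tool> remains a placeholder for your agent script:

```shell
# Hypothetical combined crontab entry: run every 4 minutes with diagnostic
# mode, a moderated syslog write rate, and severity filtering.
*/4 * * * * cd /opt/siem_agent && python <tool>_agent.py -d --rate 5000 --severity high
```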