BPM_ERRORS table filling up impacting performance

Article ID: 50586

Products

Clarity PPM SaaS, Clarity PPM On Premise

Issue/Introduction

The DBA noticed that queries against the BPM_ERRORS table perform a FULL TABLE SCAN every time they run, which impacts database performance. This may also impact application and process performance.

The table is currently very large. Every time the table is queried, Oracle has to flush data from its buffer cache to disk in order to load the BPM_ERRORS data for reading. Can the records in this table be deleted, or can the table be truncated?
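Before deciding on cleanup, a DBA can confirm the table's row count and on-disk size using standard Oracle dictionary views. This is a sketch, not a documented Clarity procedure; it assumes access to DBA_SEGMENTS and that the segment is named BPM_ERRORS in your schema:

```sql
-- How many rows does the table hold?
SELECT COUNT(*) AS row_count
FROM   bpm_errors;

-- How much space does the segment occupy? (Requires DBA privileges.)
SELECT bytes / 1024 / 1024 AS size_mb
FROM   dba_segments
WHERE  segment_name = 'BPM_ERRORS';
```

These numbers can be compared against the thresholds listed under Best Practices below before scheduling cleanup.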

Environment

Release: Any

Resolution

Run the Delete Process Instance job on a regular basis:

  1. Start with a lower (non-production) environment.
  2. Run the Delete Process Instance job. If there are many records to delete, the job may take a long time.
  3. Make sure that you enter date parameters (e.g., run month by month). Aim to remove no more than 5,000 process instances per run.
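The month-by-month batching in step 3 can be planned ahead of time. The helper below is purely illustrative (it is not part of Clarity; the actual date parameters are entered in the job's scheduling screen) and simply generates the half-open date windows for a series of job runs:

```python
from datetime import date

def month_windows(start: date, end: date):
    """Yield half-open (window_start, window_end) pairs, one per
    calendar month between start and end, oldest first. Each pair
    is a candidate date range for one Delete Process Instance run."""
    current = date(start.year, start.month, 1)
    while current < end:
        # First day of the following month, handling December rollover
        if current.month == 12:
            nxt = date(current.year + 1, 1, 1)
        else:
            nxt = date(current.year, current.month + 1, 1)
        yield current, min(nxt, end)
        current = nxt

# Example: plan runs covering January through March 2023
for lo, hi in month_windows(date(2023, 1, 1), date(2023, 4, 1)):
    print(f"Run Delete Process Instance with date range {lo} .. {hi}")
```

Working oldest-first keeps each batch small and lets you stop between runs if a window turns out to contain far more than 5,000 instances.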

Note:

  • The Delete Process Instance job deletes only Aborted or Completed processes. If you have many processes in an Error/Failed state, those must be aborted before they can be deleted.
  • If a specific process throws too many errors, work with the process developer to address the root cause.
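To see how many instances are eligible for deletion versus stuck in an Error/Failed state, a count by status can help. The table and column names below (BPM_RUN_PROCESSES, STATUS_CODE) are assumptions about the Clarity schema; verify them against your release before running:

```sql
-- Assumed Clarity schema objects; verify in your environment.
-- One row per process instance; STATUS_CODE distinguishes
-- aborted/completed/error states.
SELECT status_code, COUNT(*) AS instances
FROM   bpm_run_processes
GROUP  BY status_code
ORDER  BY instances DESC;
```

A large count in an error status indicates instances that must be aborted first, as noted above.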

Best Practices: 

  • As a best practice, ensure that the number of records in BPM_ERRORS does not exceed 1M.
  • The recommended number is under 500K rows.
  • To keep the numbers low, schedule separate instances of the Delete Process Instance job for process instances that generate different amounts of logging.
  • If you routinely have more than 10M records, consult Broadcom Support for additional guidance on best practices and configuration adjustments.
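The thresholds above lend themselves to a simple scheduled check. The sketch below is a hypothetical monitoring helper, not part of Clarity; how you obtain the row count (for example, a scheduled SELECT COUNT(*) against BPM_ERRORS) is environment-specific:

```python
def bpm_errors_status(row_count: int) -> str:
    """Classify a BPM_ERRORS row count against the best-practice
    thresholds: 500K recommended ceiling, 1M best-practice maximum,
    10M the point at which Broadcom Support should be engaged."""
    if row_count > 10_000_000:
        return "critical: engage Broadcom Support"
    if row_count > 1_000_000:
        return "warning: exceeds 1M best-practice ceiling"
    if row_count > 500_000:
        return "watch: above recommended 500K"
    return "ok"

print(bpm_errors_status(750_000))  # → watch: above recommended 500K
```

Wiring this into an existing monitoring tool makes it easy to trigger an extra Delete Process Instance run before the table grows large enough to hurt query performance.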

Additional Information

Also check whether you have any orphaned records, as described in: Delete Orphaned records from the BPM_ERRORS table