Why is my ITPAM database so large?



Article ID: 21862




CA Process Automation Base


The ITPAM database MDF file, along with its transaction log, can become very large.

ITPAM makes heavy use of the database. It is not unusual for production usage to consume gigabytes of database space in a relatively short amount of time; we have seen IT Process Automation use over 500 GB of disk space in production environments.

Understanding how ITPAM uses the database can help you prepare for this usage and minimize it as much as possible.


Release: 4.2.x / 4.3.x
Component: ITPAM


ITPAM has three distinct schemas.

  1. Library: (a) definitions (e.g., process definitions) and (b) runtime state (the state of running processes and their datasets)

  2. Queues: persistent queues that store requests for operations and the responses from those operations

  3. Reporting: data collected to support reports accessible from the management console.


  • The Library is by far the largest consumer of database space within ITPAM.

    1. Definition objects are stored as XML, which is compressed once it exceeds a certain size.

      The space consumed by definitions is directly related to the number of definitions (processes, datasets, calendars, custom operators, etc.) and their sizes.

      Every new version of an object creates a new copy. It is therefore important, when you are developing new content, to avoid creating new versions unnecessarily (take the option to overwrite the existing version when it is practical to do so).

    2. Process instances have the following lifecycle:

      1. Process instance is started: a copy of the definition version is created (to allow runtime modification and to avoid issues if the version is overwritten while the instance is running). That copy is broken down into multiple rows in c2oruntimeobjects and its child tables. These rows store the state and results of individual operators, so the system avoids loading, parsing, and serializing large amounts of data as operators start and complete.

      2. Process instance completes: after a couple of minutes, the instance is rolled up into a single row (one XML document), since it can no longer change.

      3. Periodically, the system checks the history settings defined in the Orchestrator policies (ITPAM Admin client configuration browser) and moves objects from the runtime tables to equivalent archive tables. The instances remain accessible, but you must explicitly request archived instances.

        At that point, their only effect on the system is the disk space they occupy.

        Archives are never automatically deleted by the system.

Administrators can delete them from the configuration browser or through direct SQL.
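To see which tables are actually consuming the space, a DBA can check the runtime table named above and list the largest tables in the database. The following is a SQL Server sketch; table names other than c2oruntimeobjects vary by release, so verify against your own schema:

```sql
-- Space used by the main runtime table (mentioned above)
EXEC sp_spaceused 'c2oruntimeobjects';

-- Top 10 tables by reserved space in the current database
SELECT TOP 10
    t.name AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;
```

If the archive tables dominate this list, that points to the archiving/purging settings discussed below; if the runtime tables dominate, look at long-running or numerous process instances.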

To keep things small, it is important to understand what is growing:

  • Definitions: Limit the number of unnecessary versions you create. Work within a single version and choose to 'overwrite' whenever and as often as possible.

  • Archive tables: Ensure that archiving and purging are enabled and tuned so that completed instances are cleaned up automatically. Please see Enable and Tune Archiving and Purging feature of Process Automation for details on enabling and tuning your archiving engine.

  • Transaction Logs: Depending on the recovery model and the database itself (SQL Server, MySQL, etc.), you may end up with more or less log growth. Every state change of any process instance is transactionally committed to the database; the only particularity about ITPAM is that its state data consists of BLOBs (XML), which can get quite large. The DBMS generates and maintains the transaction log to be able to recover.

    Ensure that the DBA closely monitors the transaction log as you begin working, deploy a new install or version, or work with new content, to understand the transaction log usage and to prevent stoppages.

    Support regularly receives calls where ITPAM has stopped working because the transaction log is full.
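To monitor log usage as suggested above, a DBA can use standard SQL Server commands. The database name ITPAM below is an assumption; substitute your own:

```sql
-- Report log file size and percentage used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Show the recovery model and why the log cannot currently be truncated
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ITPAM';  -- assumed database name; substitute your own
```

A log_reuse_wait_desc of LOG_BACKUP, for example, indicates the database is in full recovery and waiting on a log backup before space can be reused.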


Queues DB:

  • The queue tables store only pending requests and responses. These tables do not grow significantly.

Reporting DB:

  • Reporting data can live in an entirely separate database. The system inserts the data in bulk, asynchronously from the activities that progress running instances. To redirect reporting to a separate database, rerun the ITPAM installer and choose the "Reconfigure Existing Install" option.

  • There is no automatic cleaning of the reporting data.

  • If you do not want to use the reporting feature, you can turn it off entirely in the OasisConfig.properties file (under ..PAM/server/c2o/.config) and restart the Orchestrator(s).

    This saves both space and processing.

For SQL Server:

  • Please also see the SQL Server help topic "How Does Shrinking the Log File Work?". Discuss the steps below with your DBA first.
  • Change the database recovery model from full to simple.
  • Shrink the database log (using the shrink-file operation detailed in the SQL Server help topic mentioned above).
  • Issue a "CHECKPOINT" command (commit).
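The steps above might look like the following in T-SQL. The database name ITPAM and the logical log file name ITPAM_log are assumptions; substitute the values for your environment, and run this only after DBA review:

```sql
-- Step 1: change the recovery model from FULL to SIMPLE
-- (this breaks the log backup chain / point-in-time recovery -- get DBA sign-off first)
ALTER DATABASE ITPAM SET RECOVERY SIMPLE;
GO

USE ITPAM;
GO

-- Find the logical name of the log file before shrinking
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Step 2: shrink the log file
-- 'ITPAM_log' is a typical logical name; use the one returned by the query above
DBCC SHRINKFILE (ITPAM_log, 1024);  -- target size in MB
GO

-- Step 3: issue a checkpoint (commit) so committed work is flushed to the data file
CHECKPOINT;
GO
```

Under the simple recovery model, the log is truncated automatically at each checkpoint, so the file should stay near its target size afterward.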