Troubleshooting large table growth in vRealize Orchestrator 7.x and vRealize Automation 7.x

Article ID: 326006

Updated On:

Products

VMware Aria Suite

Issue/Introduction

Symptoms:
  • You are unable to run workflows to request new Deployments.
  • Running the commands for checking the top 10 database tables, as described in vRealize Orchestrator PostgreSQL database size is abnormally inflated, reports that the vmo_ tables are consuming a large amount of space, in some cases many gigabytes (an example query is shown after this output):
    RelationName | total_size
    ---------------------------------------------------------------+------------
    public.vmo_tokenreplay | 41 GB
    public.vmo_workflowtokencontent | 48 MB
    public.vmo_statistic | 12 MB
    public.vmo_workflowtokenstatistics | 5688 kB
    public.vmo_vroconfiguration | 4352 kB
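If you do not have the referenced article to hand, a query along the following lines (a standard PostgreSQL catalog query shown here for convenience, not necessarily the exact commands from that article) lists the ten largest tables by total size when run from psql against the vRO database:
    SELECT n.nspname || '.' || c.relname AS "RelationName",
           pg_size_pretty(pg_total_relation_size(c.oid)) AS "total_size"
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'                                        -- ordinary tables only
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')  -- skip system schemas
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 10;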


Environment

VMware vRealize Orchestrator 7.6.x

Cause

Workflow tokens may bloat in size when scriptable tasks or workflows run too many intensive operations. The platform allows an Orchestrator user to write code that attaches too much information to the workflow execution, and consequently to the database.

Resolution

  • Reduce your action scripts to follow the single responsibility principle.
  • When polling for large lists of objects from an Orchestrator plugin, use OData or XPath filtering for your searches:
Example (OData):
See Usage of OData Filtering for Catalog API for additional information on applying OData filters to HTTP requests made from your Orchestrator logic; a sketch of such a request is shown after this list.
Example (XPath):
JavaScript using an xpath filter with the vRealize Orchestrator vCenter plugin:
var vms = vcenter.getAllVirtualMachines(null, 'xpath:runtime[question]');
  • See the VMware vRealize Orchestrator 7.x Coding Design guide for additional information.
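As an illustration of the OData approach, a filtered Catalog API request made from the command line might look like the following. The host name, bearer token, and filter field are placeholders; consult Usage of OData Filtering for Catalog API for the endpoints and filterable fields that apply to your environment.

curl -k -G "https://vra-fqdn/catalog-service/api/consumer/resources" \
  -H "Authorization: Bearer $VRA_TOKEN" \
  -H "Accept: application/json" \
  --data-urlencode "\$filter=name eq 'my-deployment'" \
  --data-urlencode "\$top=10"

The same principle applies inside Orchestrator itself: request only the objects you need rather than retrieving the full list and filtering it in script.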


Workaround:
Prerequisites
  • Take a snapshot of each vRealize Orchestrator or vRealize Automation appliance in the cluster.

Procedure

  1. SSH to your vRA appliance as root.
  2. Once logged in, switch to the postgres user with the following command:
    su postgres
  3. Connect to the vRA database (vcac):
    psql vcac
  4. Delete the contents of the token replay table:
    DELETE FROM vmo_tokenreplay;
  • If the DELETE command fails with an out-of-space error, you can use the TRUNCATE command instead to remove all of the information in the table:
TRUNCATE TABLE vmo_tokenreplay;
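Note that with a plain DELETE, PostgreSQL does not immediately return the freed space to the operating system; it only becomes reusable after the table is vacuumed, which is why TRUNCATE is the practical option when the disk is already full (this is standard PostgreSQL behaviour rather than something specific to this article). You can confirm the table size from the same psql session with:
SELECT pg_size_pretty(pg_total_relation_size('vmo_tokenreplay'));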


Additional Information

In some cases, the DELETE command will fail due to a disk space issue and you may see errors such as:
ERROR:  could not access status of transaction 0
DETAIL:  Could not write to file "pg_subtrans/2C55" at offset 8192: No space left on device.
In that case, check the /storage/db/pgdata/pg_log/ directory. The postgresql* and .bz2 files there are log files; you can remove the older ones and keep only the file with the latest date.
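For example, a sequence along these lines (illustrative only; list the directory first and confirm which files you want to keep) removes everything except the newest compressed log:
ls -lt /storage/db/pgdata/pg_log/
ls -1t /storage/db/pgdata/pg_log/*.bz2 | tail -n +2 | xargs -r rm -f   # keep only the newest .bz2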

Because clearing the log files may only free up 4-5 MB of space, the DELETE command may still fail. In that case, run the TRUNCATE command below.
TRUNCATE TABLE vmo_tokenreplay;
Token replay is a debugging feature: it allows you to follow the inputs and outputs of each "token" (item) in a workflow run. Whether or not it is being used, it is additional information that does not affect the execution of workflows in any way.

For workflows that store many large objects as part of their execution runs, this results in a large number of vmo_tokenreplay entries.

If you do not want to use this feature, it is recommended to disable it to avoid excessive storage consumption. You can also monitor the size of the table; see the example command below.
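A non-interactive form of the size check, run as the postgres user on the appliance (shown here as a suggestion for periodic monitoring, not a command from the original procedure):
psql vcac -c "SELECT pg_size_pretty(pg_total_relation_size('vmo_tokenreplay'));"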

The feature can be disabled by running the following command:
mv /usr/lib/vco/app-server/extensions/tokenreplay-7.6.0.jar /usr/lib/vco/app-server/extensions/tokenreplay-7.6.0.disable

After running the command, restart the vCO services by stopping and then starting them:

service vco-server stop
service vco-configurator stop
service vco-server start
service vco-configurator start
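If you later want to re-enable token replay, the change can be reversed by renaming the file back and restarting the services in the same way (this is the inverse of the step above rather than a step from the original procedure):

mv /usr/lib/vco/app-server/extensions/tokenreplay-7.6.0.disable /usr/lib/vco/app-server/extensions/tokenreplay-7.6.0.jar

You can verify that the services have come back up with service vco-server status and service vco-configurator status.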