When browsing large workflow tokens, the Orchestrator process restarts, generating a 502/503 error in the UI

Article ID: 369686

Products

VMware Aria Suite

Issue/Introduction

  • You are using VMware Aria Automation Orchestrator.
  • When browsing large workflow tokens in the UI, the Aria Automation Orchestrator server process restarts, generating 502 or 503 errors in the dashboard.
  • In some cases, a heap dump is created.
  • In other cases, the following log message is seen in the journal:
    kernel: GC Thread#16 invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=941
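
To confirm that the Orchestrator process was terminated by the kernel's OOM killer, you can search the kernel messages in the journal. A minimal check (the exact message text may vary by thread and process):
    # search the kernel log for OOM-killer events
    journalctl -k | grep -i oom-killer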

Environment

  • VMware Aria Automation 8.x
  • VMware Aria Automation Orchestrator 8.x

Cause

  • Some workflow tokens are very large (> 10 MB), causing memory allocation issues when their content is decompressed.
    • The issue may also occur while the content is being compressed.

Impact:

  • Users are not able to browse or work with these large tokens.
  • When browsing large workflow tokens, the Orchestrator process restarts, generating a 502/503 error in the UI.

Resolution

This issue is resolved in VMware Aria Automation Orchestrator 8.18.1.

Workaround:

Prerequisites:

  • You have valid backups or temporary snapshots of the appliance(s) participating in the cluster.
  • You have the root username and password for the appliance(s).
  • You have access to an SSH tool or utility.

Procedure: Delete the largest tokens

You can delete the largest tokens to remove the oversized content from the database. The second procedure below reduces the maximum allowed size of token content that is retained going forward.

  1. SSH into one of the appliances in the cluster.
  2. Run the following commands to connect to PostgreSQL and the vco-db:
    vracli dev psql
    \c vco-db
  3. Copy the following queries and run them (a preview query is shown after this procedure):
    delete from vmo_workflowtoken where id in (select tokenid from vmo_workflowtokenstatistics s where (s.tokensize is null or s.tokensize >= 10000000));
    delete from vmo_workflowtokencontent where workflowtokenid in (select tokenid from vmo_workflowtokenstatistics s where (s.tokensize is null or s.tokensize >= 10000000));
    delete from vmo_workflowtokenstatistics where tokenid in (select tokenid from vmo_workflowtokenstatistics s where (s.tokensize is null or s.tokensize >= 10000000));
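
Before running the delete statements, you can preview which tokens will be removed. This read-only check queries the same statistics table with the same 10 MB (10000000 byte) threshold as the delete statements above:
    select tokenid, tokensize from vmo_workflowtokenstatistics where tokensize is null or tokensize >= 10000000;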

Procedure: Reduce the maximum allowed size of persisted token content

  1. Set the following system properties:
    com.vmware.o11n.token.content-size-hard-limit
    com.vmware.o11n.token.content-size-soft-limit
    • Content that exceeds the hard limit is not saved in the database; exceeding the soft limit only produces a warning.
    • The default values are 32 * 1024 * 1024 bytes (32 MB) and 4 * 1024 * 1024 bytes (4 MB), respectively.
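
As an illustration, the defaults suggest the properties take byte values; to lower the hard limit to 10 MB and the soft limit to 2 MB, you would set values such as the following (hypothetical values — choose limits that suit your environment):
    # Hypothetical example: 10 MB hard limit, 2 MB soft limit (values in bytes)
    com.vmware.o11n.token.content-size-hard-limit=10485760
    com.vmware.o11n.token.content-size-soft-limit=2097152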