Full Load DWH fails with "must be a member of the role whose process is being terminated or member of pg_signal_backend"


Article ID: 235340

Products

Clarity PPM SaaS

Issue/Introduction

On SaaS GCP, the Full Load Data Warehouse job intermittently fails with the following (or a similar) error related to an ALTER TABLE statement:

ERROR 2022-02-19 21:07:46,048 [dwh_fin_plan_summary_facts UUID: 2cc224bc-1b1f-4ea2-aa6c-2e9b8340c14d]
dwh.event Load Facts PG - An error occurred executing this job entry : Couldn't execute SQL: CALL DWH_FIN_PLAN_SUM_FACTS_LOAD();
ERROR: must be a member of the role whose process is being terminated or member of pg_signal_backend
Where: SQL statement "SELECT pg_terminate_backend(v_pid)"
PL/pgSQL function dwh_disable_constraints_sp(character varying,character varying) line 58 at PERFORM
SQL statement "CALL DWH_DISABLE_CONSTRAINTS_SP('DWH_FIN_PLAN_SUMMARY_FACTS','FK')"
PL/pgSQL function dwh_fin_plan_sum_facts_load() line 70 at CALL
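The permission model behind the error: PostgreSQL only allows `pg_terminate_backend()` against a backend owned by the caller's own role, unless the caller is a member of the predefined `pg_signal_backend` role. A minimal sketch of the failing pattern (the PID and role name below are illustrative, not actual Clarity objects):

```sql
-- Hypothetical sketch: pg_terminate_backend(pid) raises the error above when
-- the target backend (here 12345) is owned by a different role and the caller
-- is not a member of pg_signal_backend.
SELECT pg_terminate_backend(12345);

-- A superuser could grant the privilege to the calling role, though this is
-- not something customers can do on SaaS (dwh_user is an example role name):
GRANT pg_signal_backend TO dwh_user;
```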

Environment

Release : 16.0.0, 16.0.1 

Cause

DE64173

  • The Load DWH job conflicts with PostgreSQL autovacuum runs.
  • A running autovacuum locks the table and blocks any ALTER TABLE statements, such as dropping constraints.
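The conflict can be observed in `pg_stat_activity`, where autovacuum workers report queries of the form `autovacuum: VACUUM <table>`. A sketch, using the table name from the error above:

```sql
-- Sketch: check whether an autovacuum worker is currently processing the
-- DWH table that the Load DWH job is trying to ALTER.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE query ILIKE 'autovacuum:%'
  AND query ILIKE '%dwh_fin_plan_summary_facts%';
```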

 

Workaround: A subsequent 'Incremental' execution of the Load DWH job resolves the issue.

 

Resolution

DE64173 is resolved in 16.0.2

Prior Release:

  • Broadcom resolved this by reviewing PostgreSQL autovacuum activity and adjusting its configuration as appropriate.
  • This is resolved in all PROD environments.

Post 16.0.2, this may still happen if a user runs a long OData report. The Data Warehouse job cannot continue until the large report query finishes.

It is a best practice to schedule OData pulls so they do not overlap with the Load DWH job.
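When planning the schedule, long-running report sessions can be spotted beforehand with a query along these lines (the 5-minute threshold is an arbitrary example, not a product recommendation):

```sql
-- Sketch: list active queries on the Data Warehouse running longer than
-- 5 minutes, e.g. an OData pull that would block the Load DWH job's
-- ALTER TABLE steps.
SELECT pid, usename, now() - query_start AS runtime, left(query, 80) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```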

Additional Information

This may surface if a large query is running against a Data Warehouse table being updated by the Load Data Warehouse job; it is a timing issue.

Please ensure scheduled reports and OData pulls are not scheduled at the same time as the Load DWH job.

Another possible culprit is autovacuum taking too long, as described above.