gpupgrade fails with "[ERROR]:-execute: substep "UPGRADE_MASTER": upgrading master: exit status" in Greenplum

Article ID: 296569


Updated On:

Products

VMware Tanzu Greenplum

Issue/Introduction

The gpupgrade tool fails while upgrading Greenplum 5.x to Greenplum 6.x with the following errors:


gpupgrade_cli log:

20201203:18:10:56 gpupgrade_cli:gpadmin:ddlgpmdev11a.us.dell.com:014986-[ERROR]:-execute: substep "UPGRADE_MASTER": upgrading master: exit status 1
20201203:18:10:56 gpupgrade_cli:gpadmin:ddlgpmdev11a.us.dell.com:041563-[DEBUG]:-Execute took 9m44s
20201203:18:10:56 gpupgrade_cli:gpadmin:ddlgpmdev11a.us.dell.com:041563-[DEBUG]:-Execute:
    github.com/greenplum-db/gpupgrade/cli/commanders.Execute
        /tmp/build/80754af9/gpupgrade_src/cli/commanders/steps.go:136
  - rpc error: code = Unknown desc = substep "UPGRADE_MASTER": upgrading master: exit status 1


execute_<date>.log:

*failure*

Consult the last few lines of "pg_upgrade_dump_16387.log" for
the probable cause of the failure.

Failure, exiting

UPGRADE_MASTER took 7m20s


pg_upgrade_dump_16387.log:

pg_restore: creating VIEW gp_cluster
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 850; 1259 153374 VIEW gp_cluster gpadmin
pg_restore: [archiver (db)] could not execute query: ERROR: relation "pg_filespace_entry" does not exist
LINE 14: ...) AS m_dir FROM ((gp_segment_configuration c JOIN pg_filespa...
                               ^
  Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT binary_upgrade.set_next_pg_type_oid('153376'::pg_catalog.oid, '1531...


Environment

Product Version: 5.28

Resolution

The view "dba_work"."gp_cluster", which is failing to upgrade is dependent on relation pg_filespace_entry. Since file-space, has been deprecated from Greenplum 6.x, the pg_filespace_entry does not exist in Greenplum 6.x.

The workaround is to drop the affected view, continue the upgrade, and recreate the view on the Greenplum 6.x target cluster after the upgrade finishes.
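
A minimal sketch of the workaround, assuming the view lives in the dba_work schema as shown in the error above. The saved output of pg_get_viewdef is only a starting point: any reference to pg_filespace_entry must be rewritten before the view can be recreated on Greenplum 6.x.

-- On the Greenplum 5.x source cluster, before re-running gpupgrade:
-- save the current view definition for later reference.
SELECT pg_get_viewdef('dba_work.gp_cluster'::regclass, true);

-- Drop the view that blocks the UPGRADE_MASTER substep.
DROP VIEW dba_work.gp_cluster;

-- On the Greenplum 6.x target cluster, after the upgrade completes:
-- recreate the view with a definition that no longer references
-- pg_filespace_entry (for example, using gp_segment_configuration alone).
-- CREATE VIEW dba_work.gp_cluster AS <rewritten definition>;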