Greenplum 6.x segment recovery appears to fail immediately

Article ID: 296312

Updated On:

Products

VMware Tanzu Greenplum

Issue/Introduction

In Greenplum 6.0 and Greenplum 6.1, `gpstate -e` can show that segments are still `Unsynchronized` immediately after recovering a segment, which makes the recovery appear to have failed:
[gpadmin@mdw_lab2 ~]$ gpstate -e
20191212:11:28:07:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-Starting gpstate with args: -e
20191212:11:28:07:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.1.0 build commit:6788ca8c13b2bd6e8976ccffea07313cbab30560'
20191212:11:28:07:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.1.0 build commit:6788ca8c13b2bd6e8976ccffea07313cbab30560) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Nov  1 2019 22:06:07'
20191212:11:28:07:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-Obtaining Segment details from master...
20191212:11:28:07:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-Gathering data from segments...
.
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-----------------------------------------------------
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-Segment Mirroring Status Report
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-----------------------------------------------------
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-Unsynchronized Segment Pairs
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-   Current Primary   Port    Mirror      Port
20191212:11:28:08:008067 gpstate:mdw_lab2:gpadmin-[INFO]:-   sdw4_lab2         30677   sdw1_lab2   35677


However, no failures are reported in the logs.


Environment

Product Version: 6.1

Resolution

In Greenplum 6.x, `gpstate -e` does not display the summary details that it did in Greenplum 4.x and Greenplum 5.x. The segment may still be recovering in the background rather than having failed.
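
Whether WAL replication to the mirror is still in progress can also be confirmed directly from the master. The query below is a minimal sketch that assumes the Greenplum 6 `gp_stat_replication` system view; the exact column set can differ slightly between minor releases.

[gpadmin@mdw_lab2 ~]$ psql -c "
SELECT gp_segment_id,     -- content ID of the primary whose walsender is reporting
       state,             -- typically 'catchup' while the mirror is recovering, 'streaming' once in sync
       sent_location,     -- last WAL location sent by the primary
       flush_location,    -- last WAL location flushed on the mirror
       replay_location,   -- last WAL location replayed on the mirror
       sync_state
FROM   gp_stat_replication;"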

Run `gpstate -s` to identify the segment state.

20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-   Segment Info
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Hostname                          = sdw1_lab2
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Address                           = sdw1_lab2
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Datadir                           = /data/mirror/gp_6.1.0_201912101437_adam6.1_seg3
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Port                              = 35677
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-   Mirroring Info
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Current role                      = Mirror
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Preferred role                    = Mirror
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Mirror status                     = Catchup
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-   Replication Info
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      WAL Sent Location                 = 1/8A100000
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      WAL Flush Location                = 1/89C00000 (5242880 bytes left)
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      WAL Replay Location               = 1/1A5D1988 (1873995384 bytes left)
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-   Status
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      PID                               = 11431
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Configuration reports status as   = Up
20191212:11:27:58:007978 gpstate:mdw_lab2:gpadmin-[INFO]:-      Segment status                    = Up
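
The `bytes left` figures in the Replication Info section are simply the byte distance between the WAL location the primary has sent and the locations the mirror has flushed and replayed. As a sketch, they can be reproduced with the standard PostgreSQL 9.4 `pg_xlog_location_diff()` function using the LSNs from the output above:

[gpadmin@mdw_lab2 ~]$ psql -c "
SELECT pg_xlog_location_diff('1/8A100000', '1/89C00000') AS flush_bytes_left,    -- 5242880 bytes
       pg_xlog_location_diff('1/8A100000', '1/1A5D1988') AS replay_bytes_left;   -- 1873995384 bytes"

As the WAL Replay Location catches up to the WAL Sent Location, the `bytes left` values shrink toward zero. Once catchup completes, `gpstate -e` no longer lists the pair as `Unsynchronized`.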