[gpadmin@mdw_prod gpAdminLogs]$ gprecoverseg -a
20150305:14:03:33:006153 gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -a
20150305:14:03:33:006153 gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.4.1 build 2'
20150305:14:03:33:006153 gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.4.1 build 2) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 10 2015 14:15:10'
20150305:14:03:33:006153 gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
20150305:14:03:33:006153 gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20150305:14:03:36:006153 gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20150305:14:03:42:006153 gprecoverseg:mdw:gpadmin-[INFO]:-Performing persistent table check
20150305:14:03:49:006153 gprecoverseg:mdw:gpadmin-[ERROR]:-Persistent table check gp_persistent_relation_node <=> gp_global_sequence failed on host sdw8:50002.
20150305:14:03:49:006153 gprecoverseg:mdw:gpadmin-[ERROR]:-Persistent table check gp_persistent_relation_node <=> gp_global_sequence failed on host sdw7:50001.
20150305:14:03:49:006153 gprecoverseg:mdw:gpadmin-[ERROR]:-Persistent table check gp_persistent_relation_node <=> gp_global_sequence failed on host sdw1:50003.
20150305:14:03:49:006153 gprecoverseg:mdw:gpadmin-[CRITICAL]:-gprecoverseg failed. (Reason='Persistent tables check failed. Please fix the persistent tables issues before running recoverseg') exiting...
[gpadmin@mdw_prod gpAdminLogs]$
This check is a new feature of the gprecoverseg utility as of GPDB 4.3.4.0. Previously, a persistent table issue would only surface partway through the recovery, which could be hours into the process before the problem was actually detected.
With this enhancement in GPDB 4.3.4.0, gprecoverseg checks for persistent catalog issues before it actually starts the segment recovery.
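For reference, the check compares the serial numbers recorded in gp_persistent_relation_node on each segment against the global counter kept in gp_global_sequence. The query below is only a minimal sketch of that kind of comparison, run in utility mode against one of the reported segments; it assumes the gp_persistent_relation_node counter is stored at ctid '(0,1)' of gp_global_sequence (as in the gpcheckcat persistent checks) and is not necessarily the exact query gprecoverseg issues. The <dbname> value is a placeholder.

[gpadmin@mdw ~]$ PGOPTIONS="-c gp_session_role=utility" psql -h sdw8 -p 50002 <dbname>

-- Rows whose persistent serial number is ahead of the global counter point to
-- an inconsistency between the two tables (sketch only).
SELECT p.ctid, p.persistent_serial_num, s.sequence_num
FROM   gp_persistent_relation_node p, gp_global_sequence s
WHERE  s.ctid = '(0,1)'
AND    p.persistent_serial_num > s.sequence_num;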
When gprecoverseg is run on a live system, this error message can be misleading because objects may be created and dropped while the check runs. To skip the check, use the --skip-persistent-check option with gprecoverseg:
[gpadmin@mdw ~]$ gprecoverseg --skip-persistent-check
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: --skip-persistent-check
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.4.1 build 2'
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.4.1 build 2) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 10 2015 14:15:10'
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20150514:10:31:18:761299 gprecoverseg:mdw:gpadmin-[INFO]:-Skipping persistent table check
The above issue with false reporting is fixed in GPDB 4.3.4.2.
If you are on GPDB 4.3.4.2 or later, or you want to confirm there is no persistent table issue, use the steps below to check the status of the catalog.
-- Clean up any orphaned temp schemas first, using the steps mentioned in the article (a query sketch is shown after this list), and then retry gprecoverseg.
-- If the error still occurs, run gpcheckcat as indicated in the article. Once it completes, check whether any persistent issues were reported for the segments named in the gprecoverseg error message (see the workflow sketch after this list):
grep -i ERROR /home/gpadmin/gpAdminLogs/<gpcheckcat_logs>
If there are any, engage Pivotal Support.
-- If there is no issue for those segments, execute "gprecoverseg -a" to perform the recovery.
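For the first step above, the exact cleanup procedure is in the referenced article; the query below is only a minimal sketch of how orphaned temp schemas are commonly identified, assuming temp schemas are named pg_temp_<sess_id> and that gp_dist_random() and pg_stat_activity.sess_id are available as in GPDB 4.3.

-- List temp schemas on the master and on the segments whose creating session
-- is no longer in pg_stat_activity; these are the cleanup candidates.
SELECT nspname
FROM   (SELECT nspname FROM pg_namespace WHERE nspname LIKE 'pg_temp%'
        UNION
        SELECT nspname FROM gp_dist_random('pg_namespace') WHERE nspname LIKE 'pg_temp%') n
WHERE  nspname NOT IN (SELECT 'pg_temp_' || sess_id::varchar FROM pg_stat_activity);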
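For the remaining steps, the sequence below is a minimal sketch of the overall workflow. It assumes gpcheckcat in $GPHOME/bin/lib accepts -R persistent to run only the persistent table checks, as it does in GPDB 4.3; <dbname> and <gpcheckcat_logs> are placeholders.

[gpadmin@mdw ~]$ $GPHOME/bin/lib/gpcheckcat -R persistent <dbname>
[gpadmin@mdw ~]$ grep -i ERROR /home/gpadmin/gpAdminLogs/<gpcheckcat_logs>
[gpadmin@mdw ~]$ gprecoverseg -a     # only if no persistent errors were reported
[gpadmin@mdw ~]$ gpstate -e          # verify segment recovery/resync status afterwards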