How to recover GPText when a server has been replaced

Article ID: 296242

Products

VMware Tanzu Greenplum

Issue/Introduction

This article discusses how to recover GPText when a server in the cluster has been replaced.

A server has been replaced on a cluster that runs GPText, and the disks have been reformatted.

In this case, gptext-recover does not recover the failed nodes and gives the following error:
[INFO]:-Execute GPText cluster recover.
[INFO]:-Check zookeeper cluster state ...
[WARNING]:-On host: gpdb-gptext-1, node directory: /data/primary/solr2, missing: , logs, solr.in.sh, 
[WARNING]:-On host: gpdb-gptext-1, node directory: /data/primary/solr1, missing: , logs, solr.in.sh, 
[WARNING]:-Skip recover inconsistant nodes.
[WARNING]:-The data directories are in inconsistent state, some data directories have been manually 
[WARNING]:-Please provide log file: /home/gpadmin/gpAdminLogs/gptext-recover_20200207.log to support 
[INFO]:-Start down solr instances ...
[INFO]:-   Host            Solr Dir
[INFO]:-   gpdb-gptext-1   /data/primary/solr1
[INFO]:-Start command execute success, checking whether instances are working ...
[WARNING]:-Some of GPText's nodes may not started!
[WARNING]:-Check their's solr-*-console.log and solr.log under GPText {node directory}/logs/.
[WARNING]:-Please run 'gptext-recover -r/--index_replicas' to recover indexes' replicas after 
[ERROR]:-Error recovering GPText cluster: Start instances failed.

This issue is caused by a hardware failure.


Environment

Product Version: 3.1

Resolution

To recover the server, follow the steps listed below:


1. Replace the failed hardware and recover all Greenplum Database segments using a full recovery.
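For example, a full recovery is typically run from the master host as gpadmin (verify the options against the documentation for your Greenplum version):
gprecoverseg -F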

2. Make sure to use the same hostname as before, and update the SSH known hosts so that passwordless SSH to the new server is set up as per the Greenplum server replacement documentation.
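As a sketch, passwordless SSH can usually be re-established from the master with gpssh-exkeys; the hostname below is the replaced server from the log output above and is only an example:
gpssh-exkeys -h gpdb-gptext-1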

3. Bring the Greenplum Database up.
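For example, from the master host:
gpstart -a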

4. Copy the GPText and Solr directories and their contents from another server to the newly replaced server:
/usr/local/greenplum-solr
/usr/local/greenplum-text-<version>  e.g. /usr/local/greenplum-text-3.3.1
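A minimal sketch using scp, run on the newly replaced server, assuming gpdb-gptext-2 is a healthy host in the cluster (the hostname is an example) and GPText version 3.3.1:
scp -r gpadmin@gpdb-gptext-2:/usr/local/greenplum-solr /usr/local/
scp -r gpadmin@gpdb-gptext-2:/usr/local/greenplum-text-3.3.1 /usr/local/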
5. Make sure the base directory of the Solr instances is created successfully:
Run gptext-state -D from the master to get the node directory.
This is usually in the same directory as the primary or mirror data directories, e.g. /data/primary.
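If the base directory is missing on the new server, it can be created over SSH, for example (the hostname and path are the examples used above; mkdir -p is harmless if the directory already exists, and the directory should be owned by gpadmin):
ssh gpdb-gptext-1 'mkdir -p /data/primary'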
6. Check the state of the indexes on the cluster:
gptext-state -D
7. If any of the indexes are in a RED state, restart the GPText cluster to try and get it back into a YELLOW state:
gptext-stop
gptext-start
8. Once all indexes are in a YELLOW or GREEN state, run a forced GPText recovery. The -f (force) option creates a new node if the original node is unrecoverable; since the old node was wiped, a new node will be created in its place on the new server.

Note: do not run gptext-recover -f if any of the indexes are in a RED state.
gptext-recover -f
9. When the recovery is finished, confirm that all indexes are in a GREEN state:
gptext-state -D