Back up to a storage plugin

Backing up to a storage plugin is the best practice. In general, the --resize-cluster option works best when a --plugin-config is used (S3 or DDBoost): gpbackup then writes the whole backup set to the specified intermediate storage in a well-defined directory structure that is reachable from every host, so no backup files need to be moved by hand before the restore.
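For illustration, a minimal S3 plugin configuration might look like the sketch below; the region, bucket, folder, and credential values are placeholders, not values from this environment:

```
# s3_config.yaml -- hypothetical example values
executablepath: $GPHOME/bin/gpbackup_s3_plugin
options:
  region: us-east-1
  aws_access_key_id: <access_key>
  aws_secret_access_key: <secret_key>
  bucket: my-gpbackup-bucket
  folder: backuptest
```

The backup then targets the plugin instead of a local directory:

```
$ gpbackup --dbname backuptest --single-data-file --plugin-config /home/gpadmin/s3_config.yaml
```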
Back up to local storage or NFS

If there is no storage plugin, we have to fall back to --backup-dir. The better alternative is to back up to a shared mount that is accessible by all of the segment hosts, such as an NFS mount. If that is not possible and the backup has to go to local storage on each host, follow the instructions below.

1) Run the backup on the source cluster:

```
$ /usr/local/greenplum-db-6.26.2/bin/gpbackup --dbname backuptest --single-data-file --backup-dir /data/backuptest --single-backup-dir
```
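If a shared mount is available, the same command can simply point --backup-dir at it (/nfs/gpbackup below is a hypothetical mount point that must be mounted on the coordinator and on every segment host). All backup files then land in one location visible to every host, so the manual re-mapping in steps 3) and 4) below is not needed:

```
# Hypothetical NFS variant: /nfs/gpbackup must be mounted cluster-wide.
$ /usr/local/greenplum-db-6.26.2/bin/gpbackup --dbname backuptest --single-data-file \
    --backup-dir /nfs/gpbackup --single-backup-dir
```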
2) Check the backup files on each host:

```
$ gpssh -f hostfile
=> ls -ltrh /data/backuptest/backups/20240402/20240402111439/
[ mdw] total 28K
[ mdw] -r--r--r-- 1 gpadmin gpadmin 4.5K Apr 2 11:14 gpbackup_20240402111439_metadata.sql
[ mdw] -r--r--r-- 1 gpadmin gpadmin 8.7K Apr 2 11:14 gpbackup_20240402111439_toc.yaml
[ mdw] -r--r--r-- 1 gpadmin gpadmin  741 Apr 2 11:14 gpbackup_20240402111439_config.yaml
[ mdw] -r--r--r-- 1 gpadmin gpadmin 1.9K Apr 2 11:14 gpbackup_20240402111439_report
[sdw2] total 168K
[sdw2] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_5_20240402111439_toc.yaml
[sdw2] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_5_20240402111439.gz
[sdw2] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_2_20240402111439_toc.yaml
[sdw2] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_2_20240402111439.gz
[sdw2] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_3_20240402111439_toc.yaml
[sdw2] -rw-r--r-- 1 gpadmin gpadmin  49K Apr 2 11:14 gpbackup_3_20240402111439.gz
[sdw1] total 168K
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_4_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_4_20240402111439.gz
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_1_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  49K Apr 2 11:14 gpbackup_1_20240402111439.gz
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_0_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_0_20240402111439.gz
=> quit
```
3) Compare the segment configurations of the source and the target cluster:

```
// The source cluster segment configuration
$ psql -c "select content,hostname from gp_segment_configuration where role='p' and content<>'-1'"
 content | hostname
---------+----------
       3 | sdw2
       2 | sdw2
       1 | sdw1
       0 | sdw1
       5 | sdw2
       4 | sdw1
(6 rows)

// The target cluster segment configuration
$ psql -c "select content,hostname from gp_segment_configuration where role='p' and content<>'-1'"
 content | hostname
---------+----------------------
       1 | sdw1
       0 | sdw1
       2 | sdw2
       3 | sdw2
(4 rows)
```

The re-mapping is done in a round-robin way: each old segment is restored by the new segment whose content ID equals the old content ID modulo 4, so old seg4 is restored together with new seg0, and old seg5 with new seg1:
| New host (new cluster) | New segment | Old segments (round robin) |
|---|---|---|
| New seg host1 | seg0 | seg0, seg4 |
| New seg host1 | seg1 | seg1, seg5 |
| New seg host2 | seg2 | seg2 |
| New seg host2 | seg3 | seg3 |
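4) Move the extra backup files so that each host holds exactly the files for the segments it will restore. In this example only seg5's files have to move from sdw2 to sdw1 (seg4's files already sit on sdw1). A minimal sketch using scp, run as gpadmin on sdw2, with the paths taken from the listings above:

```
# Copy seg5's data file and TOC from sdw2 to sdw1, then remove the originals
# so that sdw2 is left with only the files it will restore (seg2 and seg3).
$ scp /data/backuptest/backups/20240402/20240402111439/gpbackup_5_20240402111439* \
      sdw1:/data/backuptest/backups/20240402/20240402111439/
$ rm /data/backuptest/backups/20240402/20240402111439/gpbackup_5_20240402111439*
```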
After the move, the directory on every host looks like this (note that gpbackup_5's files are now on sdw1):

```
$ gpssh -f hostfile
=> ls -ltrh /data/backuptest/backups/20240402/20240402111439/
[sdw2] total 112K
[sdw2] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_2_20240402111439_toc.yaml
[sdw2] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_2_20240402111439.gz
[sdw2] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_3_20240402111439_toc.yaml
[sdw2] -rw-r--r-- 1 gpadmin gpadmin  49K Apr 2 11:14 gpbackup_3_20240402111439.gz
[sdw1] total 224K
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_4_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_4_20240402111439.gz
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_1_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  49K Apr 2 11:14 gpbackup_1_20240402111439.gz
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:14 gpbackup_0_20240402111439_toc.yaml
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:14 gpbackup_0_20240402111439.gz
[sdw1] -rw-r--r-- 1 gpadmin gpadmin  50K Apr 2 11:29 gpbackup_5_20240402111439.gz
[sdw1] -r--r--r-- 1 gpadmin gpadmin   61 Apr 2 11:29 gpbackup_5_20240402111439_toc.yaml
[ mdw] total 28K
[ mdw] -r--r--r-- 1 gpadmin gpadmin 4.5K Apr 2 11:14 gpbackup_20240402111439_metadata.sql
[ mdw] -r--r--r-- 1 gpadmin gpadmin 8.7K Apr 2 11:14 gpbackup_20240402111439_toc.yaml
[ mdw] -r--r--r-- 1 gpadmin gpadmin  741 Apr 2 11:14 gpbackup_20240402111439_config.yaml
[ mdw] -r--r--r-- 1 gpadmin gpadmin 1.9K Apr 2 11:14 gpbackup_20240402111439_report
=> exit
```
If the backup files are not laid out correctly on the target hosts, gprestore fails with errors like the following:

```
Error 1:
[CRITICAL]:-Backup directories missing or inaccessible on 1 segment. See /home/gpadmin/gpAdminLogs/gprestore_20240312.log for a complete list of errors.

Error 2:
[DEBUG]:-Expected to find 2 file(s) on segment 3 on host XXXGP3, but found 1 instead.
[DEBUG]:-Expected to find 2 file(s) on segment 4 on host XXXGP3, but found 1 instead.
[DEBUG]:-Expected to find 2 file(s) on segment 5 on host XXXGP3, but found 1 instead.
[CRITICAL]:-Found incorrect number of backup files on 6 segments. See /home/gpadmin/gpAdminLogs/gprestore_20240314.log for a complete list of errors.
github.com/greenplum-db/gp-common-go-libs/cluster.LogFatalClusterError
```
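To catch such mismatches before running the restore, a quick per-host file count can help; a minimal sketch with gpssh (in this example we expect 4 data files on sdw1 and 2 on sdw2, while mdw holds only metadata):

```
# Count the compressed data files in the backup directory on every host.
$ gpssh -f hostfile "ls /data/backuptest/backups/20240402/20240402111439/ | grep -c '\.gz$'"
```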
5) Now we can run the restore on the target cluster with --resize-cluster:

```
$ dropdb backuptest
$ createdb backuptest
$ gprestore --resize-cluster --backup-dir /data/backuptest --timestamp 20240402111439
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Restore Key = 20240402111439
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Resize restore specified, will restore a backup set from a 6-segment cluster to a 4-segment cluster
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-gpbackup version = 1.30.2
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-gprestore version = 1.30.2
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Greenplum Database Version = 6.26.2 build commit:609ff2bb9ccb7d393d772b29770e757dbd2ecc79
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Restoring pre-data metadata
Pre-data objects restored: 6 / 6 [=====================================================] 100.00% 0s
20240402:11:33:10 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Pre-data metadata restore complete
Tables restored: 1 / 1 [==================================================================] 100.00%
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Data restore complete
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Restoring post-data metadata
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Post-data metadata restore complete
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Found neither /opt/greenplum_6.26.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Email containing gprestore report /data/backuptest/backups/20240402/20240402111439/gprestore_20240402111439_20240402113310_report will not be sent
20240402:11:33:11 gprestore:gpadmin:support-gpddb6-mdw:026863-[INFO]:-Restore completed successfully
$
```
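Optionally, confirm that the restored data is now spread across the 4 new segments; my_table below is a hypothetical name, substitute a table from the restored database:

```
# Hypothetical check: rows should be distributed across segments 0-3 only.
$ psql -d backuptest -c "SELECT gp_segment_id, count(*) FROM my_table GROUP BY 1 ORDER BY 1;"
```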