How to Change a Segment port in gp_segment_configuration

Article ID: 296952


Products

VMware Tanzu Greenplum, VMware Tanzu Greenplum / Gemfire, VMware Tanzu Data Suite

Issue/Introduction

Steps to safely update the segment port number in gp_segment_configuration.

Environment

Greenplum, all versions.

Resolution

To change a segment port (the "port" column in the gp_segment_configuration table), take the following steps:


1. Log into the database and view gp_segment_configuration, sorting the entries by content. This lets you see the primary and mirror pairing.

NOTE: PLEASE DO NOT STOP A PRIMARY AND MIRROR WITH THE SAME CONTENT ID. THIS WILL CREATE A DOUBLE FAULT. 
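The pairing check above can also be done mechanically as a safety net before stopping anything. A minimal sketch (the function name and the psql invocation in the usage note are illustrative, not part of the product): feed it `content|status` pairs and it prints any content id that has more than one down segment, i.e. a double fault.

```shell
# double_fault_check: reads "content|status" lines on stdin and prints any
# content id with more than one segment down (status 'd') -- a double fault.
double_fault_check() {
  awk -F'|' '$2 == "d" { down[$1]++ } END { for (c in down) if (down[c] > 1) print c }'
}
```

Usage: `psql -At -c "select content, status from gp_segment_configuration" | double_fault_check` — an empty result means no content id has both members down.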


2. In this example, the ports for the mirror segments on 6251greenplum-1 (/data/mirror/gp_6.25.1_202308040341203) and 6251greenplum-2 (/data/mirror/gp_6.25.1_202308040341201) will be changed.
 

gpadmin=# select * from gp_segment_configuration order by content;
 dbid | content | role | preferred_role | mode | status | port  |    hostname     |     address     |                 datadir
------+---------+------+----------------+------+--------+-------+-----------------+-----------------+-----------------------------------------
    1 |      -1 | p    | p              | n    | u      |  5432 | 6251greenplum-m | 6251greenplum-m | /data/master/gp_6.25.1_20230804034120-1
    6 |       0 | m    | m              | s    | u      | 35000 | 6251greenplum-2 | 6251greenplum-2 | /data/mirror/gp_6.25.1_202308040341200
    2 |       0 | p    | p              | s    | u      | 30000 | 6251greenplum-1 | 6251greenplum-1 | /data/primary/gp_6.25.1_202308040341200
    7 |       1 | m    | m              | s    | u      | 35001 | 6251greenplum-2 | 6251greenplum-2 | /data/mirror/gp_6.25.1_202308040341201
    3 |       1 | p    | p              | s    | u      | 30001 | 6251greenplum-1 | 6251greenplum-1 | /data/primary/gp_6.25.1_202308040341201
    8 |       2 | m    | m              | s    | u      | 35000 | 6251greenplum-1 | 6251greenplum-1 | /data/mirror/gp_6.25.1_202308040341202
    4 |       2 | p    | p              | s    | u      | 30000 | 6251greenplum-2 | 6251greenplum-2 | /data/primary/gp_6.25.1_202308040341202
    9 |       3 | m    | m              | s    | u      | 35001 | 6251greenplum-1 | 6251greenplum-1 | /data/mirror/gp_6.25.1_202308040341203
    5 |       3 | p    | p              | s    | u      | 30001 | 6251greenplum-2 | 6251greenplum-2 | /data/primary/gp_6.25.1_202308040341203
(9 rows)



3. SSH to the desired segment host(s), then stop the specific segment using pg_ctl: pg_ctl stop -D <data directory>
 

[gpadmin@6251greenplum-m ~]$ ssh 6251greenplum-1
Last login: Wed Jan 31 18:07:15 2024 from ##.##.##.##
 
[gpadmin@6251greenplum-1 ~]$ pg_ctl stop -D /data/mirror/gp_6.25.1_202308040341203
waiting for server to shut down.... done
server stopped
[gpadmin@6251greenplum-1 ~]$ exit
logout
 
[gpadmin@6251greenplum-m ~]$ ssh 6251greenplum-2
Last login: Mon Jan 29 19:04:54 2024 from ##.##.##.##
 
[gpadmin@6251greenplum-2 ~]$ pg_ctl stop -D /data/mirror/gp_6.25.1_202308040341201
waiting for server to shut down.... done
server stopped
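The two sessions above can be scripted when several segments need stopping. A sketch, assuming passwordless ssh between hosts (the helper name and the DRY_RUN switch are illustrative, not Greenplum features):

```shell
# stop_segment HOST DATADIR: stop one segment instance over ssh.
# With DRY_RUN=1 it only prints the command, which is handy for review
# before touching anything.
stop_segment() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "ssh $1 pg_ctl stop -D $2"
  else
    ssh "$1" "pg_ctl stop -D $2"
  fi
}
```

Usage: `stop_segment 6251greenplum-1 /data/mirror/gp_6.25.1_202308040341203` — and remember the warning above: never stop a primary and mirror with the same content id.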


4. Confirm the segment(s) is marked down in gp_segment_configuration table.
 

gpadmin=# select * from gp_segment_configuration where status='d';
 dbid | content | role | preferred_role | mode | status | port  |    hostname     |     address     |                datadir
------+---------+------+----------------+------+--------+-------+-----------------+-----------------+-----------------------------------------
    9 |       3 | m    | m              | n    | d      | 35001 | 6251greenplum-1 | 6251greenplum-1 | /data/mirror/gp_6.25.1_202308040341203
    7 |       1 | m    | m              | n    | d      | 35001 | 6251greenplum-2 | 6251greenplum-2 | /data/mirror/gp_6.25.1_202308040341201
(2 rows)


5. Run gprecoverseg -o <file_name> to generate an output file, for example: gprecoverseg -o port_switch.out


6. Modify the generated file, changing the port value. In the example below, the ports were changed from 35001 to 36001.
 

6251greenplum-2|35001|/data/mirror/gp_6.25.1_202308040341201 6251greenplum-2|36001|/data/mirror/gp_6.25.1_202308040341201
6251greenplum-1|35001|/data/mirror/gp_6.25.1_202308040341203 6251greenplum-1|36001|/data/mirror/gp_6.25.1_202308040341203
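The edit in step 6 can also be done mechanically. A sketch (the function name is illustrative): for each `host|port|dir` line produced by gprecoverseg -o, append a space and the same triple with the new port, which matches the `<failed> <recovery target>` line format shown above.

```shell
# add_recovery_triple FILE NEWPORT: for every "host|port|dir" line in FILE,
# append " host|NEWPORT|dir" -- same host and data directory, new port.
add_recovery_triple() {
  awk -F'|' -v np="$2" '{ printf "%s %s|%s|%s\n", $0, $1, np, $3 }' "$1"
}
```

Usage: `add_recovery_triple port_switch.out 36001 > port_switch.new`, then inspect port_switch.new before passing it to gprecoverseg -i.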


7. Run gprecoverseg -i port_switch.out. (Note: this performs a full recovery of the segment.)
 

20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-Heap checksum setting is consistent between master and the segments that are candidates for recoverseg
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-Greenplum instance recovery parameters
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-Recovery from configuration -i option supplied
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-Recovery 1 of 2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Synchronization mode                 = Full
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance host                 = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance address              = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance directory            = /data/mirror/gp_6.25.1_202308040341201
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance port                 = 35001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance host        = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance address     = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance directory   = /data/primary/gp_6.25.1_202308040341201
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance port        = 30001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance host        = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance address     = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance directory   = /data/mirror/gp_6.25.1_202308040341201
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance port        = 36001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-Recovery 2 of 2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Synchronization mode                 = Full
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance host                 = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance address              = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance directory            = /data/mirror/gp_6.25.1_202308040341203
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Failed instance port                 = 35001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance host        = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance address     = 6251greenplum-2
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance directory   = /data/primary/gp_6.25.1_202308040341203
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Source instance port        = 30001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance host        = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance address     = 6251greenplum-1
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance directory   = /data/mirror/gp_6.25.1_202308040341203
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:-   Recovery Target instance port        = 36001
20240201:19:49:58:018862 gprecoverseg:6251greenplum-m:gpadmin-[INFO]:----------------------------------------------------------
 
Continue with segment recovery procedure Yy|Nn (default=N):
> y


8. Once the recovery is done, validate the port change in the gp_segment_configuration table.
 

gpadmin=# select * from gp_segment_configuration where port=36001;
 dbid | content | role | preferred_role | mode | status | port  |    hostname     |     address     |                datadir
------+---------+------+----------------+------+--------+-------+-----------------+-----------------+----------------------------------------
    7 |       1 | m    | m              | s    | u      | 36001 | 6251greenplum-2 | 6251greenplum-2 | /data/mirror/gp_6.25.1_202308040341201
    9 |       3 | m    | m              | s    | u      | 36001 | 6251greenplum-1 | 6251greenplum-1 | /data/mirror/gp_6.25.1_202308040341203
(2 rows)




Reference: https://github.com/greenplum-db/gpdb/issues/14500