
Segment marked as down in Greenplum Command Center for mirrorless cluster


Article ID: 426325


Products

VMware Tanzu Data Intelligence

Issue/Introduction

When using a mirrorless cluster (primary instances only), if one segment goes down, Greenplum Command Center (GPCC) marks that segment as down. Even after a database restart or a GPCC restart, when all segments are back up, GPCC continues to show that segment as down.
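The discrepancy can be confirmed by checking the actual segment state in the database. A standard catalog query such as the following will show every segment with status 'u' even while GPCC displays one as down:

    SELECT dbid, content, role, status
    FROM   gp_segment_configuration
    ORDER  BY content;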

Environment

Greenplum Command Center 7.x

Greenplum Database 7.x

Cause

Before Greenplum Database 7.5.0, for a mirrorless cluster (primary segments only), a failed primary segment could be recovered simply by restarting the database. In addition, a down segment was not marked as down in the gp_segment_configuration table. For this reason, on mirrorless clusters Greenplum Command Center does not monitor gp_segment_configuration for the latest segment status; instead, it reads the most recent records in the gp_configuration_history table to determine whether a segment is up or down. However, because a mirrorless cluster does not run gprecoverseg, no "segment up" record is ever written to gp_configuration_history after the segment comes back, so GPCC keeps marking that segment as down.
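To see what GPCC is acting on, you can inspect the most recent history record for each segment. The query below is a minimal sketch of that check, assuming the standard gp_configuration_history columns (time, dbid, desc); the exact desc text varies by event:

    -- Latest gp_configuration_history record per segment (dbid).
    -- For mirrorless clusters, GPCC derives up/down status from records like these.
    SELECT DISTINCT ON (dbid)
           dbid,
           "time",
           "desc"
    FROM   gp_configuration_history
    ORDER  BY dbid, "time" DESC;

If the newest record for a segment is a "down" event with no later "up" event, GPCC will continue to report that segment as down.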

Resolution

Greenplum Database 7.5.0 includes an improvement for mirrorless clusters: when a segment goes down, a database restart recovers the failed segment and a "segment up" record is written to the gp_configuration_history table. This allows GPCC to display the segment status correctly.

For clusters running a Greenplum Database version earlier than 7.5.0, there is a workaround: either insert a new record into the gp_configuration_history table for the failed segment, or back up and then delete all records from the gp_configuration_history table.
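A sketch of both options follows. It assumes the standard gp_configuration_history columns (time, dbid, desc) and uses a hypothetical failed segment with dbid 2. Because gp_configuration_history is a system catalog, modifying it requires allow_system_table_mods and should be done with care, ideally under guidance from support. The desc text below is a placeholder; model the inserted record on an existing "up" record from your own history, since the exact wording GPCC parses may differ:

    -- Allow catalog modifications for this session (use with caution).
    SET allow_system_table_mods = true;

    -- Option 1: insert a "segment up" record for the failed segment.
    -- dbid 2 and the desc text are placeholders; copy the wording from a
    -- real "up" record in your own gp_configuration_history.
    INSERT INTO gp_configuration_history ("time", dbid, "desc")
    VALUES (now(), 2, 'FTS: update status for dbid 2 to u');

    -- Option 2: back up the history records, then clear the table.
    CREATE TABLE public.gp_configuration_history_backup AS
    SELECT * FROM gp_configuration_history;

    DELETE FROM gp_configuration_history;

After either change, restart GPCC so it re-reads the history and refreshes the segment status.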