Increasing "max_connections" over 1000 Leads to Errors

Article ID: 295883

Updated On:

Products

VMware Tanzu Greenplum

Issue/Introduction

Symptoms:

A Pivotal Greenplum (GPDB) administrator has increased the max_connections parameter in the postgresql.conf file to more than 1000. Because of this change, the database fails to start, and shared memory allocation errors appear in the log file.

 

Environment


Cause

Some database parameters control the size of the shared memory areas that the database allocates, and 'max_connections' is one of them. Increasing max_connections increases the amount of shared memory the instance requests at startup. If the requested amount exceeds what the OS kernel parameters (SHMMAX, SHMALL) allow, the allocation is refused and the instance cannot start.
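To check whether the kernel limits are the constraint, the current values can be inspected before the instance is started. The commands below are standard Linux tools; the values they print are system-specific:

```shell
# Inspect the kernel's shared memory limits (Linux).
sysctl kernel.shmmax          # max size of a single shared memory segment, in bytes
sysctl kernel.shmall          # total shared memory allowed system-wide, in pages
cat /proc/sys/kernel/shmmax   # same value, read directly from /proc
ipcs -l                       # summary of all IPC limits, including shared memory
```

Compare these values against the shared memory size reported in the database startup error message.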

GPDB was designed as a Data Warehouse (DW) type of database system. In DW systems, there is usually a handful of large queries running (as opposed to OLTP systems, where many quick statements run concurrently). Because of that, GPDB is not optimized for handling thousands of simultaneous connections.

 

Resolution

  • The immediate resolution is to decrease the value of "max_connections"; the instance will then be able to start again.
  • If "max_connections" really needs to be that high, increase the SHMMAX and SHMALL OS parameter values so that the shared memory allocation succeeds.
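If the higher limits are required, they can be made persistent in /etc/sysctl.conf. The sizes below are illustrative assumptions only; size them against the host's RAM and the shared memory amount reported in the startup error:

```shell
# /etc/sysctl.conf fragment -- illustrative values, not recommendations.
# kernel.shmmax: largest single shared memory segment, in bytes (here 8 GB).
kernel.shmmax = 8589934592
# kernel.shmall: total shared memory, in pages (here 8 GB / 4096-byte pages).
kernel.shmall = 2097152
```

Apply the new values without a reboot by running "sysctl -p", then try starting the instance again.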

Note: If the cluster really needs to handle thousands of connections, it is recommended to install a connection pooler in front of the database to handle the connections.
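As an illustration, a minimal configuration for a pooler such as PgBouncer might look like the sketch below. The database name, host, ports, and pool sizes are placeholders, not recommendations, and must be adapted to your environment:

```ini
; pgbouncer.ini -- minimal sketch with placeholder values
[databases]
; route client connections for "gpdb" to the Greenplum master
gpdb = host=127.0.0.1 port=5432

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; multiplex many client connections over a small pool of server connections
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 50
```

With a setup like this, thousands of clients connect to the pooler on port 6432, while the database itself only sees the much smaller server-side pool, keeping max_connections (and shared memory) low.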