Background -
+ The GUC max_appendonly_tables defines the maximum number of append-optimized tables that can be written to concurrently.
+ The GUC's value determines how much shared memory is reserved during server startup.
You might see this error -
ERROR: can't have more than 20000 different append-only tables open for writing data at the same time. if tables are heavily partitioned or if your workload requires, increase the value of max_appendonly_tables and retry (appendonlywriter.c:434)
+ Currently, there is no guidance available on how to assess what value should be set for a specific cluster workload.
+ Setting the GUC to a very high value might cause a cluster startup failure.
This article explains how much memory (in bytes) each unit of max_appendonly_tables reserves.
All Greenplum database versions.
If, at a given point in time, the number of append-only tables concurrently acted upon (written/modified) exceeds the threshold defined by max_appendonly_tables, the error below is thrown:
ERROR: can't have more than 20000 different append-only tables open for writing data at the same time. if tables are heavily partitioned or if your workload requires, increase the value of max_appendonly_tables and retry (appendonlywriter.c:434)
The calculation detailed here helps you determine the approximate value by which to increase max_appendonly_tables in order to avoid this error.
Note: All memory units are in bytes.
Attach to the primary postmaster using gdb:
(gdb) p sizeof(AORelHashEntryData)
$3 = 4104
(gdb) p sizeof(AppendOnlyWriterData)
$2 = 4
When max_appendonly_tables is set to 10000
(gdb) p hash_estimate_size((Size)MaxAppendOnlyTables, sizeof(AORelHashEntryData))
$4 = 41403272

/* The hash of append-only relations */
size = hash_estimate_size((Size)MaxAppendOnlyTables, sizeof(AORelHashEntryData));
/* The writer structure. */
size = add_size(size, sizeof(AppendOnlyWriterData));
/* safety margin */
size = add_size(size, size / 10);
When max_appendonly_tables is set to 20000
(gdb) p hash_estimate_size((Size)MaxAppendOnlyTables, sizeof(AORelHashEntryData))
$3 = 82804384
When max_appendonly_tables was increased from 10000 to 20000, the shared memory it required grew from 41403272 to 82804384 bytes.
82804384 - 41403272 = 41401112
41401112 / 4104 = 10087.99
Given the above facts, increasing max_appendonly_tables by 1 needs approximately 4104 more bytes per primary/mirror postmaster.
Therefore, if there are 10 segments (primary + mirror) on a server and the max_appendonly_tables value is increased from 10000 to 11000, it will need an additional (11000 - 10000) * 4104 * 10 = 41040000 bytes of shared memory (SHMMAX).