How to Calculate the Size of max_appendonly_tables in Greenplum - ERROR: can't have more than 20000 different append-only tables open for writing data at the same time
Article ID: 296168


Products

VMware Tanzu Greenplum, VMware Tanzu Data Suite

Issue/Introduction

Background

+ The GUC max_appendonly_tables defines the maximum number of append-optimized tables that can be written to or updated concurrently.

+ The value of this GUC determines how much shared memory is reserved during server startup.

 

You might see this error:

ERROR: can't have more than 20000 different append-only tables open for writing data at the same time. if tables are heavily partitioned or if your workload requires, increase the value of max_appendonly_tables and retry (appendonlywriter.c:434)

 

+ Currently, there is no guidance available on how to assess what value should be set for a specific cluster workload.

+ Setting the GUC to a very high value might cause a cluster startup failure.

This article explains how much shared memory (in bytes) each unit of max_appendonly_tables reserves.

Environment

All Greenplum database versions.

Cause

If, at a given point in time, the number of append-only tables being concurrently written to or modified exceeds the threshold defined by max_appendonly_tables, the following error is thrown:

ERROR: can't have more than 20000 different append-only tables open for writing data at the same time. if tables are heavily partitioned or if your workload requires, increase the value of max_appendonly_tables and retry (appendonlywriter.c:434)

Resolution

The calculation detailed here helps you determine the approximate value by which to increase max_appendonly_tables in order to avoid this error.

Note: All memory sizes below are in bytes.

  • A hash table for max_appendonly_tables is maintained in shared memory. Each entry of the table is of type AORelHashEntryData and is 4104 bytes in size. The size of this data structure can be verified as follows:
    Attach gdb to a primary postmaster and print the structure sizes:
    
    (gdb) p sizeof(AORelHashEntryData)
    $3 = 4104
    
    (gdb) p sizeof(AppendOnlyWriterData) 
    $2 = 4

When max_appendonly_tables is set to 10000

  • Estimated size of the hash table alone, obtained via gdb:
    (gdb) p  hash_estimate_size((Size)MaxAppendOnlyTables, sizeof(AORelHashEntryData))
    $4 = 41403272
  • The total reservation is then computed from this estimate as:
    /* The hash of append-only relations */
        size = hash_estimate_size((Size)MaxAppendOnlyTables,
                                  sizeof(AORelHashEntryData));
    
        /* The writer structure. */
        size = add_size(size, sizeof(AppendOnlyWriterData));
    
        /* safety margin */
        size = add_size(size, size / 10);
  • When max_appendonly_tables is set to 10000, the total size that this GUC reserves in shared memory is 41403272 + 4 + ((41403272 + 4) / 10) = 45543603.6 bytes.

When max_appendonly_tables is set to 20000

  • Estimated size of the hash table alone, obtained via gdb:
    (gdb) p hash_estimate_size((Size)MaxAppendOnlyTables, sizeof(AORelHashEntryData))
    $3 = 82804384
  • The total reservation is then computed from this estimate as:
    /* The hash of append only relations */
        size = hash_estimate_size((Size)MaxAppendOnlyTables,
                                  sizeof(AORelHashEntryData));
    
        /* The writer structure. */
        size = add_size(size, sizeof(AppendOnlyWriterData));
    
        /* safety margin */
        size = add_size(size, size / 10);
  • When max_appendonly_tables is set to 20000, the total size that this GUC reserves in shared memory is 82804384 + 4 + ((82804384 + 4) / 10) = 91084826.8 bytes. (A standalone cross-check of both totals follows below.)
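
As a sanity check, the arithmetic above can be reproduced outside the server. The following is a minimal standalone C sketch, with the hash_estimate_size() results and structure sizes hard-coded from the gdb sessions above; ao_shmem_reservation is an illustrative name, not an actual Greenplum function:

    #include <stdio.h>

    /* sizeof(AppendOnlyWriterData) as printed by gdb above. */
    #define AO_WRITER_DATA_SIZE 4

    /* Mirrors the reservation formula quoted above: hash table estimate,
     * plus the writer structure, plus a 10% safety margin on the total. */
    static double ao_shmem_reservation(double hash_estimate)
    {
        double size = hash_estimate;   /* hash_estimate_size(...) result */
        size += AO_WRITER_DATA_SIZE;   /* add_size(size, sizeof(AppendOnlyWriterData)) */
        size += size / 10.0;           /* add_size(size, size / 10) */
        return size;
    }

    int main(void)
    {
        /* hash_estimate_size() values captured in gdb for 10000 and 20000. */
        printf("max_appendonly_tables=10000: %.1f bytes\n", ao_shmem_reservation(41403272.0));
        printf("max_appendonly_tables=20000: %.1f bytes\n", ao_shmem_reservation(82804384.0));
        return 0;
    }

Running this prints 45543603.6 and 91084826.8 bytes, matching the totals computed above.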

When max_appendonly_tables was increased from 10000 to 20000, the shared memory needed by the hash table increased from 41403272 to 82804384 bytes.

82804384 - 41403272 = 41401112 bytes, and 41401112 / 4104 = 10087.99, i.e., roughly one 4104-byte entry for each unit added to max_appendonly_tables.

Given the above figures, increasing max_appendonly_tables by 1 requires approximately 4104 additional bytes per primary or mirror postmaster.

Therefore, if a server hosts 10 segment instances (primaries and mirrors combined) and max_appendonly_tables is increased from 10000 to 11000, that server will need approximately (11000 - 10000) * 4104 * 10 = 41040000 additional bytes of shared memory (shmmax).
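
To apply the same rule of thumb to other configurations, here is a minimal standalone C sketch; extra_shmem_bytes is an illustrative name, and the 4104-byte per-entry cost is the approximation derived above. Because of the 10% safety margin and hash-bucket overhead shown earlier, treat the result as a slight underestimate:

    #include <stdio.h>

    #define BYTES_PER_AO_TABLE 4104 /* approximate per-entry cost derived above */

    /* Approximate extra shared memory a host needs when raising
     * max_appendonly_tables, given the number of segment instances
     * (primaries plus mirrors) running on that host. */
    static long long extra_shmem_bytes(long old_value, long new_value,
                                       int segments_per_host)
    {
        return (long long)(new_value - old_value) * BYTES_PER_AO_TABLE
               * segments_per_host;
    }

    int main(void)
    {
        /* The worked example above: 10 segment instances, 10000 -> 11000. */
        printf("%lld additional bytes\n", extra_shmem_bytes(10000, 11000, 10));
        return 0;
    }

Compiled and run, this prints 41040000 additional bytes, matching the worked example. Note that because max_appendonly_tables sizes shared memory, a new value takes effect only after a full cluster restart.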