What is the difference between Buffer Pools and Covering in CA Datacom and are there any performance gains for using Buffer Pools versus Covering?


Article ID: 17489

Products

Datacom DATACOM - AD Ideal CIS COMMON SERVICES FOR Z/OS 90S SERVICES DATABASE MANAGEMENT SOLUTIONS FOR DB2 FOR Z/OS COMMON PRODUCT SERVICES COMPONENT Common Services CA ECOMETER SERVER COMPONENT FOC Easytrieve Report Generator for Common Services INFOCAI MAINTENANCE IPC UNICENTER JCLCHECK COMMON COMPONENT Mainframe VM Product Manager CHORUS SOFTWARE MANAGER CA ON DEMAND PORTAL CA Service Desk Manager - Unified Self Service PAM CLIENT FOR LINUX ON MAINFRAME MAINFRAME CONNECTOR FOR LINUX ON MAINFRAME GRAPHICAL MANAGEMENT INTERFACE WEB ADMINISTRATOR FOR TOP SECRET Xpertware

Issue/Introduction

  1. What is the difference between Covering and Buffer Pools?

    There are several significant differences between having a secondary "devoted" buffer pool and a "devoted" covered area.

    • The additional buffer pool works just like buffers in DATAPOOL (DATA and DATA2) and SYSPOOL (IXX and DXX) buffer pools.

    • Devoted buffer pools use all of the same re-use and high-performance buffer search logic that has been developed over the years for standard buffer pools.

      • Covered memory uses either a data block slot/memory slot mapping (Covered First) or an LRU mapping (Covered Active).

    • Devoted buffer pools handle selected tables' index/data blocks and are loaded and managed the same as standard buffer pools.

      • This differs from covering: covered areas hold data blocks that have been loaded into a buffer and are later used to restore those blocks to a buffer when requested.

      • In the case that a devoted buffer pool has the same memory as the covered area and is only managing the same area, the devoted buffer pool should be faster because it does not need to "pass through" a buffer the way a covered block does.

    • There are some limits on devoted buffer pools in V14:

      • Devoted buffer pools used for data buffers are restricted to 31 bit storage, the same as DATA and DATA2.

        • So a shop that is 31 bit memory constrained with the existing DATA and DATA2 pools would only be able to add a devoted data buffer pool if it shrank the DATA and DATA2 pools.

        • This is not the case with IXX and DXX as they can be set to 64 bit in V14.

          • Data buffer pools are allowed to be set to 64 bit in V15.

        • DBIDs using the devoted buffer pools must use ACCESS Optimized and URI

      • Devoted buffer pools currently (in V14 and V15) share the same 99,999-buffer limit per pool that the standard pools have.

        • In other words

          • DATA + DATA2 + FLEXPOOL Cannot exceed a total of 99,999

          • IXXNO + FLEXPOOL or DXXNO + FLEXPOOL cannot exceed a total of 99,999

          • Each devoted buffer pool (e.g., IXX01) cannot exceed a total of 99,999, and so on
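The shared per-pool limits above can be sketched as a small check. This is only an illustration; the pool names and buffer counts below are invented examples, not actual MUF parameters.

```python
# Illustrative check of the 99,999-buffer limit shared by standard and
# devoted (FLEXPOOL) buffer pools. Pool names/counts are hypothetical.
POOL_LIMIT = 99_999


def check_pools(groups):
    """Each group maps pool name -> buffer count; the combined count in
    each group must not exceed POOL_LIMIT. Returns {label: (total, ok)}."""
    results = {}
    for label, pools in groups.items():
        total = sum(pools.values())
        results[label] = (total, total <= POOL_LIMIT)
    return results


groups = {
    "data":          {"DATA": 60_000, "DATA2": 20_000, "FLEXPOOL": 15_000},
    "index":         {"IXX": 50_000, "FLEXPOOL": 40_000},
    "devoted IXX01": {"IXX01": 99_999},
}
for label, (total, ok) in check_pools(groups).items():
    print(f"{label}: {total} buffers, within limit: {ok}")
```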


    A devoted buffer pool does have certain advantages over covered memory when the devoted buffer pool can provide the same data block coverage as the Covered memory.
    However, Covered allows you to hold much larger amounts of data in Covered memory (9G maximum per index/data area). The limit on total covered areas is the system limit on the MUF region 64 bit size.
    Compare this with 99,999 4K data/index buffers, which can hold only about 390M; even at 32K buffers, the maximum data coverage is about 3G per devoted buffer pool.
    Each devoted buffer pool buffer does have a small footprint in 31 bit storage, so while the IXX/DXX devoted buffer pools can be set to 64 bit, there is still a 31 bit usage that may exceed (when combined with data buffers and other requirements) the region's 31 bit size (typically less than 2G). Covered memory allocated in 64 bit does not have this 31 bit footprint.
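The capacity figures above follow from simple arithmetic (99,999 buffers per pool times the buffer size); a quick sketch to verify them:

```python
# Maximum data coverage of one devoted buffer pool at different buffer
# sizes, versus the 9G-per-area covered-memory limit quoted above.
MAX_BUFFERS = 99_999  # per-pool buffer limit in V14/V15


def pool_capacity(buffer_size_bytes):
    """Bytes of data one fully-populated devoted buffer pool can hold."""
    return MAX_BUFFERS * buffer_size_bytes


mb = 1024 ** 2
gb = 1024 ** 3
print(f"4K pool:  {pool_capacity(4 * 1024) / mb:,.0f} MB")   # roughly 390M
print(f"32K pool: {pool_capacity(32 * 1024) / gb:,.2f} GB")  # roughly 3G
print("Covered area maximum: 9 GB per index/data area")
```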

    For most shops making significant use of COVERed memory, the use of Covered cannot be eliminated by using devoted buffer pools.
    It is more that the existing processing at the site may be enhanced by employing both covered and devoted buffer pools in certain cases.

    In shops where Covered memory is not used or is used only in small amounts, devoted buffer pools could be used instead of covered. This will be more feasible with V15, when data buffers can go to 64 bit.
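As a conceptual sketch only (not Datacom internals), the two covered-memory mappings mentioned earlier, a fixed block-to-slot mapping (Covered First) versus an LRU mapping (Covered Active), can be modeled like this:

```python
from collections import OrderedDict

# Conceptual models of the two covered-memory mappings. These are
# illustrative data structures, not Datacom's actual implementation.


class CoveredFirst:
    """Fixed block-number -> memory-slot mapping: only the first N
    blocks of the area are ever covered."""

    def __init__(self, slots):
        self.slots = slots
        self.memory = {}

    def store(self, block_no, data):
        if block_no < self.slots:       # block maps to a dedicated slot
            self.memory[block_no] = data
        # blocks beyond the covered range are simply not retained

    def fetch(self, block_no):
        return self.memory.get(block_no)


class CoveredActive:
    """LRU mapping: any block may be covered; the least recently used
    block is evicted when the area is full."""

    def __init__(self, slots):
        self.slots = slots
        self.memory = OrderedDict()

    def store(self, block_no, data):
        self.memory[block_no] = data
        self.memory.move_to_end(block_no)
        if len(self.memory) > self.slots:
            self.memory.popitem(last=False)  # evict LRU block

    def fetch(self, block_no):
        data = self.memory.get(block_no)
        if data is not None:
            self.memory.move_to_end(block_no)  # refresh LRU position
        return data
```

The design difference matters for skewed access patterns: Covered First only helps if the hot blocks happen to be low-numbered, while Covered Active adapts to whichever blocks are actually active.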

  2. Are there any performance gains for using Buffer Pools vs Covering?

    In certain cases, as described above, devoted buffer pools (when they provide the same data/index coverage) can provide some performance gain, because they eliminate the buffer-to-covered-to-buffer move and the buffer pools have highly efficient reuse code.

    Key areas where devoted buffer pools can be a major benefit:

    • Site has more than 2 data block sizes (also true with indexes but most sites use the standard 4k IXX size).

      • For example a site has all tables at 4K. However, they are running some old VSAM-T tables which have large record sizes that require a 32K data buffer. In this case the DATA pool is 4K and the DATA2 pool is 32K.

      • If this site chooses to move their normal data tables up to a larger size (say 4K to 8K), there is going to be a transition period with significant buffer size / block size mismatches.

        • Either the site changes the DATA pool to 8K, and while the tables are being converted the 4K blocks use the 8K data pool; or the site leaves the DATA pool at 4K and the new 8K blocks are mapped to the 32K buffer.

      • With a devoted buffer pool, a new 8K pool could be created for the transitioning tables. Once all tables are moved, you could decide whether to combine the devoted pool with the DATA pool.

    • Site has a large batch process that runs every night and does huge volumes. During the batch cycle the online system slows down because the batch jobs are monopolizing the standard buffer pools, and the batch tables are different from the ones used online. The number of times an online row is "in buffer" drops, causing more I/O than when the batch is not running.

      • Having a devoted pool for the online processing would provide consistent "buffer reuse" and more stable processing.

      • Having a devoted pool for batch (maybe smaller than standard pools) may slow the batch processing down, but would limit the effect on online.

    • Site has a small reference table that every application hits. By moving this table to a devoted buffer pool (large enough to hold active rows) the data is always present in the buffer.
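The batch-versus-online and reference-table scenarios above can be illustrated with a toy LRU simulation. This is not Datacom's actual buffer logic; the pool sizes, table names, and access patterns are invented for the sketch.

```python
from collections import OrderedDict

# Toy simulation: a batch scan flooding a shared LRU pool evicts the
# online reference table's blocks, while a small devoted pool keeps
# them resident. All names and sizes here are hypothetical.


class LruPool:
    """Minimal LRU buffer pool; read() returns True on a buffer hit."""

    def __init__(self, size):
        self.size = size
        self.blocks = OrderedDict()

    def read(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # refresh LRU position
            return True
        self.blocks[block] = True                # simulated physical I/O
        if len(self.blocks) > self.size:
            self.blocks.popitem(last=False)      # evict least recently used
        return False


shared = LruPool(100)   # shared pool used by both workloads
devoted = LruPool(10)   # devoted pool sized to hold the whole reference table

# Shared pool: each round, online touches its 10 hot blocks, then a
# batch burst of 200 distinct blocks flushes them out.
online_misses_shared = 0
for rnd in range(50):
    for i in range(10):
        if not shared.read(("REF", i)):
            online_misses_shared += 1
    for i in range(200):
        shared.read(("BIG", rnd * 200 + i))

# Devoted pool: the same online workload, isolated from the batch.
online_misses_devoted = 0
for rnd in range(50):
    for i in range(10):
        if not devoted.read(("REF", i)):
            online_misses_devoted += 1

print("online misses in shared pool:", online_misses_shared)    # every round
print("online misses in devoted pool:", online_misses_devoted)  # first round only
```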

Environment

Release: DATABB00200-14-Datacom/AD
Component: