Improving IDMSDBAN Elapsed Times

Article ID: 18528


Products

IDMS
IDMS - Database

Issue/Introduction

A major difficulty when running the IDMSDBAN utility against a large database is the elapsed time it can require. Much of this time is spent in program IDMSDBN2. This article describes a way to reduce the elapsed time of the IDMSDBN2 step by dividing the output file created by IDMSDBN1 and then executing multiple IDMSDBN2 steps, either serially or concurrently in separate jobs.

The Problem

Different types of records are written to the SYS002 file during the execution of IDMSDBN1, depending on whether information about a chain set or an index set is being processed. Index sets can generate a larger volume of intermediate records because the structures that make up an index set are more complex than those used by a chain set. The introduction of VLDB support has also increased the use of user-owned index sets, because chain sets are often replaced with user-owned indexes, which in turn increases the size of the file written to SYS002.

IDMSDBN2 reads this intermediate file through DDNAME SYS001 and generates additional records. After an initial sort, IDMSDBN2 performs a first pass through the data to accumulate statistics and verify the database's structural integrity. A single pass is usually insufficient, and subsequent sorts and passes of the data are required. Much of the elapsed time of the IDMSDBN2 step is spent sorting the data. Some sites have also hit record-count limits imposed by the various sort packages, which have kept IDMSDBN2 from completing when processing very large databases with large numbers of indexes defined.

Environment

Release: All supported releases.

Resolution

A procedural solution has been developed by adding a job step to the IDMSDBAN process that divides the various record types into separate files. These files can then be read individually by different executions of program IDMSDBN2. The extra job step is an execution of the COPY function available in most external sort packages. The following is an example of the JCL that can be used to run the first two IDMSDBAN job steps, including the new sort step.

//EXECDBN1 EXEC PGM=IDMSDBN1,REGION=4096K
//STEPLIB  DD DISP=SHR,DSN=......
//*   SYSMSG SEGMENT FILES
//DCMSG    DD DISP=SHR,DSN=......
//SYSJRNL  DD DUMMY
//SYSIDMS  DD *
DMCL=EMPTDMCL
//SYS002   DD DSN=&&SYS002,DISP=(NEW,PASS,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(20,5)),
//            DCB=(RECFM=VB,LRECL=400,BLKSIZE=9000)
//SYSOUT   DD SYSOUT=*
//SYSLST   DD SYSOUT=*
//SYSOUD   DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIPT   DD *
PROCESS SUBSCHEMA EMPSS01 DBNAME EMPDEMO UNLOCKED
REPORT SET ALL
/*
//SORTORIG EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SORTWK01 DD UNIT=SYSDA,SPACE=(TRK,(5,1))
//SORTWK02 DD UNIT=SYSDA,SPACE=(TRK,(5,1))
//SORTWK03 DD UNIT=SYSDA,SPACE=(TRK,(5,1))
//SORTIN   DD DSN=&&SYS002,DISP=SHR
//DBN1CH   DD DSN=######.EMPTDBCH,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(10,10),RLSE)
//DBN1IX   DD DSN=######.EMPTDBIX,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(10,10),RLSE)
//DBN1ID   DD DSN=######.EMPTDBID,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(10,10),RLSE)
//SYSIN    DD *
  OPTION COPY
  OUTFIL INCLUDE(5,2,BI,LT,4),FNAMES=DBN1CH
  OUTFIL INCLUDE(5,2,BI,GE,4,AND,5,2,BI,LT,100),FNAMES=DBN1IX
  OUTFIL INCLUDE(5,2,BI,GE,100),FNAMES=DBN1ID
/*
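For clarity, the SYSIN control statements in the SORTORIG step can be read as the annotated sketch below. The comments are an interpretation only: the field at position 5 appears to be a record-type indicator, and the boundary values (below 4 for chain set records, 4 through 99 for index set records, 100 and above for general identifying records) are inferred from the sample INCLUDE conditions. The exact control-statement syntax may differ slightly depending on the sort product installed at your site.

* Copy only - no sorting is performed in this step
  OPTION COPY
* Route each record by the two-byte binary field at position 5
* (assumed to be a record-type indicator):
*   values below 4       - chain set records      -> DBN1CH
*   values 4 through 99  - index set records      -> DBN1IX
*   values 100 and above - identifying records    -> DBN1ID
  OUTFIL INCLUDE(5,2,BI,LT,4),FNAMES=DBN1CH
  OUTFIL INCLUDE(5,2,BI,GE,4,AND,5,2,BI,LT,100),FNAMES=DBN1IX
  OUTFIL INCLUDE(5,2,BI,GE,100),FNAMES=DBN1ID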

Note that job step SORTORIG executes the site-specific sort package and uses its COPY function to divide the input file into three files. These files separate the records that define the chain sets (DBN1CH), the index sets (DBN1IX), and general identifying information (DBN1ID). These files are then read by two job steps executing program IDMSDBN2, as follows.

//DBN2CHN  EXEC PGM=IDMSDBN2,REGION=2048K,TIME=10
//STEPLIB  DD DISP=SHR,DSN=......
//SYS001   DD DSN=######.EMPTDBCH,DISP=SHR
//         DD DSN=######.EMPTDBID,DISP=SHR
//SYS002   DD UNIT=SYSDA,SPACE=(CYL,(200,50)),
//            DCB=(RECFM=VB,LRECL=400,BLKSIZE=9000)
//SORTWK01 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTWK02 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTWK03 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTMSG  DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSLST   DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//*
//DBN2IDX  EXEC PGM=IDMSDBN2,REGION=2048K,TIME=10
//STEPLIB  DD DISP=SHR,DSN=......
//SYS001   DD DSN=######.EMPTDBIX,DISP=SHR
//         DD DSN=######.EMPTDBID,DISP=SHR
//SYS002   DD UNIT=SYSDA,SPACE=(CYL,(200,50)),
//            DCB=(RECFM=VB,LRECL=400,BLKSIZE=9000)
//SORTWK01 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTWK02 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTWK03 DD UNIT=SYSDA,SPACE=(TRK,(500,50))
//SORTMSG  DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSLST   DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*

Job step DBN2CHN processes the records defining chain sets, while DBN2IDX analyzes the database's index sets. The file containing the general identifying information (DBN1ID) must be concatenated to SYS001 in both job steps.

Each job step produces all reports based on the subschema's definition. This means the step processing the chain set file will still produce reports for index sets, but those sets will show no set occurrences or members. The opposite is true for the step processing only the index set information.

The IDMSDBN2 job steps can be executed serially in the same job or concurrently in two separate operating system jobs. Tests have shown that a significant elapsed-time saving can occur even when the steps run serially. At one site that encountered problems with the size of the intermediate sort file created by IDMSDBN2, the IDMSDBN2 step ran for 6.25 hours before failing due to sort constraints. After the file from IDMSDBN1 was separated into its three components and the two IDMSDBN2 steps were run serially, it took 1.25 hours to process the chain file followed by 2.25 hours to process the index file. The extra sort step that separates the data records required 0.8 hours.
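To run the two analyses concurrently, the DBN2CHN and DBN2IDX steps can simply be placed in separate jobs. The following is a minimal sketch only: the job names and JOB statement parameters (JOBCHN, JOBIDX, the accounting field, CLASS, and MSGCLASS) are placeholders and would follow site standards, and the remaining DD statements are identical to the serial example above. Because the DBN1ID data set is allocated with DISP=SHR in both jobs, both can read it at the same time.

//JOBCHN   JOB (ACCT),'DBAN CHAIN SETS',CLASS=A,MSGCLASS=X
//DBN2CHN  EXEC PGM=IDMSDBN2,REGION=2048K,TIME=10
//STEPLIB  DD DISP=SHR,DSN=......
//SYS001   DD DSN=######.EMPTDBCH,DISP=SHR
//         DD DSN=######.EMPTDBID,DISP=SHR
//*  ... remaining DD statements as in the DBN2CHN step above

//JOBIDX   JOB (ACCT),'DBAN INDEX SETS',CLASS=A,MSGCLASS=X
//DBN2IDX  EXEC PGM=IDMSDBN2,REGION=2048K,TIME=10
//STEPLIB  DD DISP=SHR,DSN=......
//SYS001   DD DSN=######.EMPTDBIX,DISP=SHR
//         DD DSN=######.EMPTDBID,DISP=SHR
//*  ... remaining DD statements as in the DBN2IDX step above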