Image copies are internal-format DB2 backup datasets produced by a utility such as CA Quick Copy or the IBM COPY utility. An image copy dataset can only be used by another utility, such as a recovery or unload utility. A GDG (Generation Data Group) is a dataset storage structure that is often used for storing image copies: multiple prior versions, or generations, of the dataset can be retained without any name change in the JCL. RC/Extract is able to use CA Fast Unload to access the image copy.
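For example, a GDG base such as hlq.CPY might hold three generations of image copies (the generation numbers shown here are illustrative):

hlq.CPY.G0003V00     relative generation (0), the most recent
hlq.CPY.G0002V00     relative generation (-1)
hlq.CPY.G0001V00     relative generation (-2)

Each new image copy written to the GDG becomes generation (0), and the older generations shift down by one relative number.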
The data stored in an image copy dataset can be used to populate another target database. This is often done to obtain production data for testing purposes without hindering the operations of the online production database.
Db2 for z/OS
The first step is to create a source definition for the extract, selecting the required table as usual.
When the source definition is complete, the next step is to carry out the extract.
In this case the source data comes from an image copy. Only a UTILITY type extract is able to access an image copy, so CA RC/Extract calls CA Fast Unload to carry out the unload.
Having used the "X" line command on the source definition, the "Process Source Definition" panel is displayed.
Registry SSID: DT32 --------------------------------------------- AUTHID
RX032I: Enter O or B to execute. Press HELP for more info
Creator ===> AUTHID (Blank or pattern for list)
Name ===> TBTABLE (Blank or pattern for list)
Extract Object ===> HLQ.DATA.EXTOBJ
Process Mode ===> B (O - Online, B - Batch)
Extract Method ===> U (S - SQL, U - Utility)
Update Extended Options ===> Y (Y - Yes, N - No)
Update Allocations ===> Y (Y - Yes, N - No)
Intercept Errors ===> Y (Y - Yes, N - No)
Extract DDL ===> N (Y - Yes, N - No)
Overwrite EXTOBJ ===> N (Y - Yes, N - No)
Set Extract Parameters ===> N (Y - Yes, N - No)
On the panel above the Process Mode is "B" for batch and the Extract Method is "U" for Utility; Update Extended Options is set to "Y" so that the extended options can be updated next.
--------------- RC/Extract PFU Extract Options -------------- 2020/07/22 02:
Share level ==> I (C - Change, I - Ignore, R - Reference)
Use Image Copy ==> Y (Y-Yes, P-Parts, T-Template, N-No)
Keys mode ==> 0 (0-private, 1-1 dataspace, 2-n dataspaces)
Split extract ==> Y (N-No, Y-Yes, U-Update default allocations)
Dup elimination ==> U (N-None, X-Sort only, S-System only, U-User RI)
Unit count ==> 01 (01-59) This value is used when Split extract=Y
On the panel above, "Use Image Copy" is set to "Y", which produces an IMAGECOPY card in the generated JCL. This card will be modified for the purpose of this exercise, since the required image copy resides on a GDG.
After stepping through the panels to generate the batch JCL, the RC/Extract parms look like this:
EXTOBJ DDNAME(PTIXOBJ) +
The IMAGECOPY parm above must now be modified to suit the task at hand.
hlq.CPY.G0001V00 <----- the required source data, two generations old
The source data for this extract is on a GDG, and the required dataset is two generations old. The parms generated above would cause the unload simply to find the most recent registered image copy of the tablespace and use that for the extract, which is not the required source data for this exercise.
The IMAGECOPY parm can be modified to reference the image copy dataset directly by its absolute generation name, or by its relative generation name on the GDG.
Specify the full name of the current-generation GDG image copy dataset as follows: IMAGECOPY(extID-image.copy.dataset(0)), where extID is the extract number given to the table by RC/Extract.
Absolute generation name: hlq.CPY.G0001V00
Relative name: IMAGECOPY(0001-hlq.CPY(-2))
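The absolute generation name can be substituted into the parm in the same way. Assuming the same extract number, it would read:

IMAGECOPY(0001-hlq.CPY.G0001V00)

The relative form is usually preferable when the job is rerun periodically, since it does not need editing as new generations are created.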
The extract parms would look like this when a relative name is used.
EXTOBJ DDNAME(PTIXOBJ) +
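The continuation after the "+" is not shown above. A plausible completion, assuming extract number 0001 and the relative name from the previous step, would be:

EXTOBJ DDNAME(PTIXOBJ) +
  IMAGECOPY(0001-hlq.CPY(-2))

Only the IMAGECOPY line changes; the rest of the generated parms are left as RC/Extract produced them.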
In the extract sysout you will see a line like this reported:
IGD104I hlq.CPY.G0001V00 RETAINED, DDNAME=SYSIMAG
CA Fast Unload assigns the GDG dataset to the SYSIMAG DD, which is the DD used to reference image copy datasets.
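For illustration, the allocation reported in the IGD104I message above would correspond to a DD statement along these lines (a hypothetical sketch, since CA Fast Unload performs the allocation dynamically):

//SYSIMAG  DD DISP=SHR,DSN=hlq.CPY.G0001V00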
As the sysout shows, the extract proceeds in the normal way, as if the data were coming from a table in DB2.
This produces a normal extract object in hlq.sourcedef.EXTOBJ, which can then be loaded into a new table using the normal procedure of creating a target definition and performing a load.