Fast Unload for Db2 for z/OS (PFU) was executed with SHRLEVEL CHANGE,BP-LOOKUP concurrently
with an application that updates the same table. This resulted in duplicate rows being unloaded:
two rows with the same unique key. Why does this happen?
Release: r20
Component: Fast Unload for Db2 for z/OS
Fast Unload SHRLEVEL CHANGE,BP-LOOKUP should be used with caution because the results can be unpredictable.
Duplicate data can result from the use of either SHRLEVEL CHANGE or SHRLEVEL CHANGE,BP-LOOKUP.
Using SHRLEVEL CHANGE alongside updates, inserts, or deletes against the same table creates the possibility
of data integrity problems: not only can duplicate rows be unloaded, but rows can also be missing.
For example, PFU reads a row, and another application then updates that row. If the updated row no longer fits
on its original page, it is rewritten and externalized to a page further ahead in the VSAM file. When PFU later
reads that page, it unloads a second row with the same unique key.
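The race described above can be sketched as a small simulation. This is a hypothetical illustration, not PFU internals: the page layout, row keys, and function names are invented, and the "relocation" stands in for Db2 rewriting an enlarged row onto a later page.

```python
# Hypothetical sketch: a sequential page scan racing with a concurrent
# update that relocates a row to a later page. Names and layout invented.

def unload_scan(pages, relocate_at_page, relocate):
    """Scan pages in order; after reading page `relocate_at_page`,
    a concurrent update moves a row to a later page mid-scan."""
    unloaded = []
    for page_no, page in enumerate(pages):
        unloaded.extend(page)          # unload every row on the page
        if page_no == relocate_at_page:
            relocate(pages)            # concurrent update runs mid-scan
    return unloaded

def move_row(pages):
    # The update enlarges row "K1" so it no longer fits on page 0;
    # it is rewritten onto a later page. Page 0 was already scanned.
    pages[0].remove("K1")
    pages[2].append("K1")

pages = [["K1", "K2"], ["K3"], ["K4"]]
rows = unload_scan(pages, relocate_at_page=0, relocate=move_row)
# "K1" is unloaded twice even though the key is unique in the table:
assert rows.count("K1") == 2
```

The same mid-scan relocation in the opposite direction (to a page already scanned) would make the row disappear from the unload entirely, which is why missing rows are also possible.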
The Fast Unload documentation states:
Warning! BP-LOOKUP does not ensure that the most up-to-date data is unloaded. The log data sets are viewed once during the
execution to identify changed pages. Any page changes that occur during or after the log data set lookup are missed.
As a result, BP-LOOKUP can also process uncommitted data changes.
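The one-time log lookup described in the warning can also be sketched as a simulation. Again this is a hypothetical illustration, not PFU internals: the log, buffer pool, and page structures are invented stand-ins for the real mechanism.

```python
# Hypothetical sketch: the log is scanned once to decide which pages to
# fetch from the buffer pool; a page changed after that single lookup is
# unloaded from its stale on-disk image. Names and structures invented.

log = []                       # page numbers recorded as changed

def log_lookup():
    # One-time scan of the log data sets at the start of the job.
    return set(log)

def unload(disk_pages, bufferpool):
    changed = log_lookup()     # snapshot taken exactly once
    # A change lands in the buffer pool and the log AFTER the lookup:
    bufferpool[1] = "row-B-updated"
    log.append(1)
    out = []
    for page_no, row in enumerate(disk_pages):
        # Fetch from the buffer pool only for pages the snapshot flagged.
        out.append(bufferpool[page_no] if page_no in changed else row)
    return out

disk = ["row-A", "row-B"]
result = unload(disk, {0: "row-A", 1: "row-B"})
# Page 1 changed after the lookup, so the stale "row-B" is unloaded:
assert result == ["row-A", "row-B"]
```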
When SHRLEVEL REFERENCE cannot be used, the timing of the PFU job is vital. Execute the PFU job
when no concurrent activity is accessing the table, and the duplicates should not appear.
Other ways to obtain a more reliable unload:
1. Use an available Full Image Copy made with SHRLEVEL REFERENCE as input to the PFU job. The Image Copy can be created at
the specific point in time for which you want the unload. The Image Copy contains no duplicate rows, and it carries
a clear timestamp of when it was taken.
2. Use Merge/Modify for Db2 for z/OS (PMM) to create a Full Image Copy by merging Incremental Image Copies with a Full Image Copy
and applying changes from the archive and active Db2 logs. This produces a new Full Image Copy at the required point in time,
which can then be used as input to the PFU job.