Reviewed the Detector for Db2 for z/OS (PDT) data store and cannot explain why the numbers recorded in DSN_STATEMENT_CACHE_TABLE differ from those reported by PDT for CPU consumption and elapsed time.
On the other hand, the Average Getpages figures are nearly identical.
Statement 2099 :
DSN_STATEMENT_CACHE_TABLE :
Average CPU ==> 0.00002347 (stat_cpu/stat_execb)
Average Elapsed ==> 0.00002122 (stat_elap/stat_execb)
Average Getpages ==> 1.54 (stat_gpagb/stat_execb)
Detector :
Average CPU ==> 0.000070
Average Elapsed ==> 0.000094
Average Getpages ==> 1.44
Statement 25 :
DSN_STATEMENT_CACHE_TABLE :
Average CPU ==> 0.00004809 (stat_cpu/stat_execb)
Average Elapsed ==> 0.00003606 (stat_elap/stat_execb)
Average Getpages ==> 22.17 (stat_gpagb/stat_execb)
Detector :
Average CPU ==> 0.000072
Average Elapsed ==> 0.000058
Average Getpages ==> 22.22
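
For clarity, the cache-table averages above were computed from DSN_STATEMENT_CACHE_TABLE using the ratios shown in parentheses. A minimal query along these lines would reproduce them (a sketch only; it assumes the statement numbers 2099 and 25 correspond to the STMT_ID values of the cached statements and that the EXPLAIN table is reachable under the current schema):

    SELECT STMT_ID,
           STAT_CPU  / NULLIF(STAT_EXECB, 0)                  AS AVG_CPU,      -- average in-Db2 CPU per execution
           STAT_ELAP / NULLIF(STAT_EXECB, 0)                  AS AVG_ELAPSED,  -- average elapsed time per execution
           CAST(STAT_GPAGB AS FLOAT) / NULLIF(STAT_EXECB, 0)  AS AVG_GETPAGES  -- cast avoids integer division
      FROM DSN_STATEMENT_CACHE_TABLE
     WHERE STMT_ID IN (2099, 25);                             -- 2099 and 25 assumed to be the statement IDs shown above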
Detector provides SQL statistics at several levels of granularity: plan level, program level, and statement level.
At the statement level, the INDB2_TIME and INDB2_CPU reported for a statement are derived from the difference between the end time and the start time of that statement's execution, that is, the in-Db2 elapsed time and in-Db2 CPU consumed by that statement.
For the getpage count, Detector obtains the value from a Db2 control block; no start-time/end-time calculation is involved.
It appears that in DSN_STATEMENT_CACHE_TABLE the statistics are cumulative across executions of the same statement, and across threads, when the value of COLLID is DSNDYNAMICSQLCACHE. When the value of COLLID is DSNEXPLAINMODEYES, the values are for a single run of the statement only.
So the values you see in Detector can differ from those in DSN_STATEMENT_CACHE_TABLE.
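
As an illustration of the COLLID distinction, a query along these lines would show which rows hold cumulative cache statistics and which hold single-run values (again a sketch; the column list and the unqualified table name are assumptions about your environment):

    SELECT STMT_ID, COLLID, STAT_EXECB,
           STAT_CPU / NULLIF(STAT_EXECB, 0) AS AVG_CPU
      FROM DSN_STATEMENT_CACHE_TABLE
     WHERE COLLID IN ('DSNDYNAMICSQLCACHE',   -- statistics cumulative across executions and threads
                      'DSNEXPLAINMODEYES');   -- statistics for a single run of the statement only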