We run maintenance jobs every day; these jobs use the utilities ucybdbre and ucybdbun for each client.
For about a week now, the ucybdbun runs have been taking very long.
Currently, ucybdbun is running for client 0. Last week the same jobs finished in about 1 minute; now they consistently take several hours.
The ucybdbun log is full of these Time Critical DB call messages:
20250417/111538.277 - U00003524 UCUDB: ===> Time critical DB call! OPC: 'SLUC' time: '343:365.946.000'
20250417/111538.277 - U00003525 UCUDB: ===> 'select XAO_idnr from XAO where XAO_deleteflag = 1'
20250417/111539.444 - U00003524 UCUDB: ===> Time critical DB call! OPC: 'CLST' time: '1:166.787.000'
20250417/113515.753 - U00003524 UCUDB: ===> Time critical DB call! OPC: 'EXEC' time: '1176:267.008.999'
20250417/113515.753 - U00003525 UCUDB: ===> 'insert into DIVDB (DIVDB_PK) select distinct XAO_idnr from XAO where XAO_DeleteFlag = 1 limit 5000'
The database is PostgreSQL.
Analysis by the DBA shows that the following statement effectively scans the whole table:
EXPLAIN (ANALYZE, BUFFERS) select distinct XAO_idnr from XAO where XAO_DeleteFlag = 1 limit 5000 ;
What this EXPLAIN PLAN shows:
Index Scan with inefficient filter:
Filter: (xao_deleteflag = 1)
Rows Removed by Filter: 26480168
Problem: the index used (pk_xao) does not cover the condition xao_deleteflag = 1, so all 26 million rows are read through the index and more than 99.9% of them are then discarded by the filter.
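
As an illustration only (not a vendor-approved change to the Automation Engine schema, and the index name below is made up), a partial index on the flagged rows would let the planner locate XAO_DeleteFlag = 1 rows directly instead of filtering 26 million rows through pk_xao:

    -- Sketch: partial index covering only the flagged rows
    -- (column names taken from the statement in the log above).
    CREATE INDEX CONCURRENTLY idx_xao_deleteflag_1
        ON XAO (XAO_idnr)
        WHERE XAO_DeleteFlag = 1;

    -- Re-check the plan afterwards:
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT DISTINCT XAO_idnr FROM XAO WHERE XAO_DeleteFlag = 1 LIMIT 5000;

Any such index should be validated with Broadcom/Automic support before being added to the product schema.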
On the best-known cloud-hosted Postgres offerings, 'enable_seqscan = off' can be set to discourage sequential (full table) scans; the planner will then avoid them whenever an alternative plan exists.
enable_seqscan is listed among the supported database parameters.
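
For reference, a minimal sketch of how this parameter can be inspected and changed on a self-managed PostgreSQL instance; on managed cloud services the same setting is normally changed through the provider's parameter group or flags interface rather than ALTER SYSTEM:

    -- Show the current value of the planner setting.
    SHOW enable_seqscan;

    -- Disable sequential scans for the current session only (useful for testing a plan).
    SET enable_seqscan = off;

    -- Persist the setting instance-wide on a self-managed server
    -- (requires superuser and a configuration reload).
    ALTER SYSTEM SET enable_seqscan = off;
    SELECT pg_reload_conf();

Note that enable_seqscan = off only penalizes sequential scans in the planner's costing; it does not help here if the chosen plan is already an index scan that filters out most rows, as shown in the EXPLAIN output above.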