After running the garbageCollectTaskPersistence script, the size of the DB remained high and did not trim down as expected.
When the SPs run, the calls waterfall like this:
garbageCollectTaskPersistence
garbageCollectCompletedTasks
garbageCollectCmplTaskById
garbageCollectByRowId
garbageCollectRsdById
garbageCollectByTaskId
garbageCollectByRowId
garbageCollectById <--- this is the one that should clean up with DELETE FROM runtimeStatusDetailAttribute12
garbageCollectTasks
garbageCollectByTaskId
The garbageCollectById procedure pointed out above should clear the runtimeStatusDetailAttribute12 table, which accounts for much of the space noted above.
If the run is not making it this far in the waterfall, the cutoff time may be expiring before garbageCollectById is ever reached.
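A quick way to check whether garbageCollectById is actually doing its work is to compare the table's row count before and after a run; an unchanged count means the waterfall is stopping earlier (or the final step is deleting nothing). This is plain SQL against the table named above:

    -- Row count for the large attribute table; run before and after the
    -- garbage collection script and compare the results.
    SELECT COUNT(*) AS remaining_rows
    FROM runtimeStatusDetailAttribute12;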
-------------------
Please note that if you have multiple GB of data in your task persistence tables, it is because you have not been maintaining them properly.
Task persistence data is not intended to be used for auditing, reporting, or compliance, and should be kept to less than 100,000 rows per task-persistence-related table for best performance.
Set your Cleanup Submitted Tasks task to keep only enough data for verifying that tasks completed and troubleshooting failed or problematic tasks.
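To find which tables are over that 100,000-row threshold, a standard row-count query helps. The sketch below assumes the task persistence database is on SQL Server (the backend is not stated in this note); adjust for your DBMS:

    -- Approximate row count per user table (SQL Server catalog views);
    -- tables over ~100,000 rows are candidates for more aggressive cleanup.
    SELECT t.name AS table_name, SUM(p.rows) AS row_count
    FROM sys.tables t
    JOIN sys.partitions p
      ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    GROUP BY t.name
    ORDER BY row_count DESC;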
Set an index on all large tables, then configure the script to collect smaller amounts of data with a larger cutoff time and run it multiple times. In other words, the script is likely trying to delete more data in a single pass than it can handle; either improve performance or reduce the amount deleted per run and rerun the script until the tables are trimmed, as in the sketch below.
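As a rough illustration of that pattern, the sketch below uses SQL Server syntax and a hypothetical cutoff column named createdDate; the columns the real stored procedures filter on are not shown in this note, so substitute whatever timestamp they actually use:

    -- If the cutoff column is not indexed, every delete batch scans the
    -- whole table; an index makes each batch cheap.
    CREATE INDEX IX_rsda12_createdDate
        ON runtimeStatusDetailAttribute12 (createdDate);

    -- Delete in small batches so each transaction stays short and the
    -- run can simply be repeated until the table is trimmed down.
    DECLARE @batchSize INT = 5000;
    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batchSize)
        FROM runtimeStatusDetailAttribute12
        WHERE createdDate < DATEADD(DAY, -14, GETDATE());  -- keep ~2 weeks

        IF @@ROWCOUNT < @batchSize BREAK;  -- last partial batch: done
    END;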
There is still no guarantee that the stored procedure will be able to clean up multi-GB tables quickly enough to keep up with task processing. If you are unable to reclaim significant amounts of table space, you may need to drop the task tables and recreate them empty.
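Where the schema allows it, TRUNCATE TABLE is a commonly used alternative to dropping and recreating a table by hand: it empties the table while keeping its definition and indexes. Note that it is just as destructive (all task history in that table is lost) and will fail if foreign keys reference the table:

    -- Last resort (destructive: every row in the table is lost).
    -- TRUNCATE keeps the table definition and indexes but removes all
    -- rows with minimal logging, deallocating the pages immediately.
    TRUNCATE TABLE runtimeStatusDetailAttribute12;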