Subscription looping

Article ID: 220875

Products

CA Workload Automation iDash for CA 7

Issue/Introduction

We ran around 90k batch updates today, which caused deltas to be created and massive delays on subscription. We stopped iDash and cleared the delta directory. All carried on fine at that time, but later tonight, while it was still processing updates and creating deltas, the mainframe task was restarted. iDash then went dead, so we stopped the service and cleared the deltas again. This time iDash keeps doing a data extract with 1000 deltas in the directory. How can we resolve this?

 

Environment

Release : 12.1

Component :

Resolution

iDash is going to go through all of the events to get caught up. Job update events are the slowest ones to process. If it has to go through XX thousand job update events, that is going to take time, and only then will the normal activity events begin to catch up.

There are two options: (1) wait until the job update events are processed and the activity events are caught up, then pull a new seed file (probably several hours of wait time); or (2) delete the instance and recreate it to skip past all of the job update events, pull new seed data, and accept that there will be a gap in the job runs captured by iDash.

That many deltas are going to cause problems with the access time for job definitions during instance data processing. The delta directory will certainly need to be cleaned up with a new seed data file. The decision is whether to wait for the events to catch up and have no gap, or to get a current picture from iDash faster and miss out on some history.
 
Assuming it takes 1 second per job delta, which is a reasonable assumption, it will take 16+ hours just to process the deltas; then it will take additional time to process all of the events that were backed up behind those 16 hours of nothing but job updates.
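As a rough back-of-the-envelope check, the wait time scales linearly with the backlog size. The sketch below is illustrative only: the per-delta cost is the 1-second estimate above, and the delta count is a hypothetical backlog, not a measured value from this environment.

```python
# Back-of-the-envelope estimate of delta processing time.
# Both inputs are assumptions: SECONDS_PER_DELTA is the 1-second
# estimate from the article; delta_count is a hypothetical backlog.
SECONDS_PER_DELTA = 1.0
delta_count = 60_000

total_seconds = delta_count * SECONDS_PER_DELTA
hours = total_seconds / 3600
print(f"{delta_count} deltas at {SECONDS_PER_DELTA:.0f}s each is about {hours:.1f} hours")
```

A backlog in the tens of thousands of deltas therefore lands in the 16+ hour range before any of the queued activity events are even touched, which is why deleting and recreating the instance can be the faster path.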
 
For future reference, we have recommended in the past that customers delete the instance in iDash first, then perform the mass updates, then redefine the instance and request a new seed. You may miss some activity, but far less harm is caused that way than letting iDash go into a multi-hour process trying to keep up with all of the job updates. This requires some coordination with the CA 7 administrators for planned mass updates.