Task persistence was never meant for reporting or historical data usage. It reports on tasks that have been submitted, are in progress, or have completed (with success or with error).
Every time you submit a task to IM, an OID is generated for that task. The OID must be unique, so before the task is written to task persistence, the task persistence tables are searched to verify the OID does not already exist. The less data in these tables, the faster those searches run.
Identity Manager and Virtual Appliance (IM) 14.x
Beginning the Investigation:
View Submitted Tasks and the Task Run-Time Management Task Persistence Monitor provide valuable insight into the extent of the problem and should point to an initial area of focus.
For example, if only tasks related to Active Directory are stuck In Progress, the focus should quickly move to the provisioning/endpoint layer. If all tasks are hung In Progress, look at more global areas such as the JMS queue or the Task Persistence database. If all tasks are In Progress, or the overall user interface is performing poorly, this might point to overall engine tuning or to the database itself, such as index statistics.
Main causes of In-progress tasks:
Patch Level
Numerous causes of In-Progress tasks have been identified and patched out of the software.
JMS Health
- JMS is the messaging engine through which tasks are processed by the application server and ultimately written into the database. This cause is listed first because it is one of the simplest to locate, using the Task Persistence Monitor feature, and the simplest to resolve.
Load / Environmental performance tuning
- The second most common cause is an environment that was not properly tuned initially, or new load added to an existing environment without adjusting the tuning configuration.
Database Health
- Generally, the most common cause of In-Progress tasks is too much information in the Task Persistence database tables. The Task Persistence database contains the runtime tables of the Identity Manager product. The Task Persistence tables are where all task work is stored throughout the lifetime of a task's execution, and they are constantly being written, read, and updated. A large row count in these tables means each update takes longer, which over time slows all task execution and leads to stoppages that leave tasks in the In-Progress state.
Provisioning / Endpoint issues
- Problems such as unavailable endpoints, administrator password changes, or stopped underlying services can all prevent tasks from completing and, in many cases, leave them in the In Progress state.
Cross-cluster communications
- Multiple default cluster configurations running on the same subnet.
Support and Engineering continuously identify and resolve potential causes of In-Progress tasks. These fixes are rolled into our publicly available Cumulative Patches and Cumulative Hotfix packs. Ensure that your deployment is patched to the most current Cumulative Patch and Cumulative Hotfix pack to avoid these known, resolved causes of In-Progress tasks.
Please review your installed version's Release Notes on the Broadcom Product Documentation Site for the current publicly available patches.
JMS is the messaging engine through which tasks are processed by the application server and ultimately written into the database. JMS is an application server feature on which Identity Manager relies.
Check Java Messaging Service (JMS) processing for problems
In Task Run-Time Management, check Task Message Health: does the synthetic test complete?
This check creates dummy tasks that are pushed through the JMS queue into the database to check JMS queue performance.
If the result is not 100% and returned within a few seconds, clear the JMS queue and restart the engine.
How to restart the JMS queue:
>Non-VAPP deployments:
For JBoss / WildFly: stop the application server, then back up and delete the contents of standalone/data/ and standalone/tmp/.
Then restart the app server.
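The steps above can be sketched as a shell script. This is a minimal sketch: the JBOSS_HOME path and the service name are assumptions to be adjusted for your installation.

```shell
#!/bin/sh
# Sketch of clearing the JMS journal on a standalone JBoss/WildFly node.
# JBOSS_HOME and the service name are assumptions -- adjust to your install.
JBOSS_HOME="${JBOSS_HOME:-/opt/CA/wildfly-idm}"
BACKUP_DIR="/tmp/jms-backup-$(date +%Y%m%d%H%M%S)"

# 1. Stop the application server first (uncomment for your environment).
# systemctl stop wildfly-idm

# 2. Back up, then delete, the JMS journal and temp directories.
mkdir -p "$BACKUP_DIR"
cp -a "$JBOSS_HOME/standalone/data" "$BACKUP_DIR/" 2>/dev/null || true
cp -a "$JBOSS_HOME/standalone/tmp"  "$BACKUP_DIR/" 2>/dev/null || true
rm -rf "$JBOSS_HOME/standalone/data/"* "$JBOSS_HOME/standalone/tmp/"*

# 3. Restart the application server.
# systemctl start wildfly-idm
echo "JMS data cleared; backup saved to $BACKUP_DIR"
```

In a cluster, repeat this on every node before restarting any of them, so no node replays stale messages.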
WebSphere / WebLogic:
Please see your application server administrator for details on clearing JMS in WebLogic or WebSphere.
>For VAPP Deployments:
VAPP includes an Alias to accomplish this: deleteIDMJMSqueue
Deletes the Identity Manager JMS queue (/opt/CA/wildfly-idm/standalone/data/*).
This should be completed on all nodes.
Configure Journal Size
This applies only to 14.3 environments; the configuration below does not apply to 14.4.
For Standalone IM and standalone-full-ha.xml:
By default, the journal file size and the minimum number of files are set to values that may not be adequate under heavy load.
Recommended values:
<journal-file-size>25485760</journal-file-size>
<journal-min-files>20</journal-min-files>
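For context, these elements sit inside the messaging subsystem of standalone-full-ha.xml. The sketch below is illustrative: the subsystem namespace version varies by release, and the surrounding configuration is elided.

```xml
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <!-- Larger, more numerous journal files reduce journal churn under heavy load. -->
        <journal-file-size>25485760</journal-file-size>
        <journal-min-files>20</journal-min-files>
        <!-- ...remaining server configuration unchanged... -->
    </hornetq-server>
</subsystem>
```

Restart the application server after changing the journal configuration.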
Configuring journal size for Virtual Appliance:
https://knowledge.broadcom.com/external/article?articleId=214890
Issues at the database, primarily not cleaning up completed records in a timely manner, are the most frequent cause of In-Progress tasks.
Start by reviewing resource usage on the DB server:
Is the CPU or memory pegged at 100%?
Has the server run out of disk space?
Get the DBA / server team involved.
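The resource checks above can be sketched on a Linux database server as follows; the commands and the 90% threshold are illustrative, not product recommendations.

```shell
#!/bin/sh
# Load average (Linux): sustained values above the CPU core count
# suggest CPU pressure.
cat /proc/loadavg

# Disk usage: a full data or transaction-log volume will stall task writes.
df -h

# Flag any filesystem over 90% full (illustrative threshold).
df -P | awk 'NR > 1 && $5+0 > 90 { print "WARNING: " $6 " is " $5 " full" }'
```

Share the output with the DBA / server team when escalating.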
Review View Submitted Tasks: is there a pattern? Are only specific tasks against one endpoint having issues? If the issue seems isolated to one endpoint, open Provisioning Manager, right-click the endpoint, and check whether you can access user account information and perform CRUD operations (Create, Read, Update, Delete) in Provisioning Manager.
Can you test against other endpoints to ensure they are accessible?
If endpoint issues are clearly present, focus on resolving them first, then use the built-in Resubmit Task option to retry the specific problem tasks.
https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/identity-manager/14-4/configuring/resubmit-stuck-in-progress-tasks.html
Review endpoints for failures and resolve endpoint issues
Check the provisioning logs (etatrans, etanotify, JCS).
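A simple sketch for scanning the provisioning logs for recent failures; the log directory and file-name patterns are typical defaults and may differ in your installation.

```shell
#!/bin/sh
# PROV_LOG_DIR is an assumption -- point it at your Provisioning Server logs.
PROV_LOG_DIR="${PROV_LOG_DIR:-/opt/CA/IdentityManager/ProvisioningServer/logs}"

# Show the 20 most recent lines mentioning errors or failures across the
# etatrans and etanotify logs, if present.
grep -ih -E "error|fail" "$PROV_LOG_DIR"/etatrans*.log \
                         "$PROV_LOG_DIR"/etanotify*.log 2>/dev/null | tail -n 20
```

Timestamps on the matching lines can then be correlated with the submission times of the stuck tasks.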
Multiple default cluster configurations running on the same network can prevent tasks from completing. Shutting down all but one cluster resolves the issue until the clusters are configured to be isolated from each other.
Isolate JBoss EAP clusters running on the same network:
https://access.redhat.com/solutions/274263
A Red Hat account is required to access the link above. Contact your JBoss or WildFly support for further assistance.
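One common approach, sketched here with example values, is to give each cluster its own JGroups multicast address in standalone-full-ha.xml so cluster traffic does not cross on the shared subnet:

```xml
<!-- Cluster A: leave the default, or set an explicit address. -->
<socket-binding name="jgroups-udp" port="55200"
                multicast-address="${jboss.default.multicast.address:230.0.10.1}"
                multicast-port="45688"/>

<!-- Cluster B: a different multicast address (example value). -->
<socket-binding name="jgroups-udp" port="55200"
                multicast-address="${jboss.default.multicast.address:230.0.10.2}"
                multicast-port="45688"/>
```

Equivalently, the address can be supplied at startup with standalone.sh -u &lt;address&gt; on each node of a cluster.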
# of Provisioning servers