Beginning the Investigation:
View Submitted Tasks and the Task Run-Time Management Task Persistence Monitor provide valuable insight into the extent of the problem and should point to an initial area of focus.
For example:
- If only tasks related to Active Directory are stuck In Progress, focus quickly on the Provisioning/endpoint layer.
- If all tasks are hung In Progress, look at more global areas such as the JMS queue or the Task Persistence database.
- If all tasks are In Progress, or the overall user interface is performing poorly, this may point to overall engine tuning, or to the database itself, such as index statistics.
Main causes of In Progress tasks
JMS
- JMS is the messaging engine through which tasks are processed by the application server and ultimately written into the database. This is listed first because it is one of the simplest problems to locate, using the Task Persistence Monitor feature, and the simplest to resolve.
Load / Environmental performance tuning
- The second most common cause is an environment that was not properly tuned initially, or new load added to an existing environment without adjusting the tuning configuration.
Task Persistence database size
- Another common cause of In Progress tasks is too much data in the Task Persistence database tables. The Task Persistence database contains the runtime tables of the Identity Manager product; it is where all task work is stored throughout the lifetime of a task's execution, and it is constantly written, read, and updated. A large row count in these tables means each update takes longer, which over time slows all task execution and can ultimately stop task execution, leaving tasks in the In Progress state.
Provisioning / Endpoint issues
- Problems such as unavailable endpoints, administrator password changes, or stopped underlying services can all prevent tasks from completing and, in many cases, leave them in the In Progress state.
Identity Manager and Virtual Appliance (IM) 14.x
JMS is the messaging engine through which tasks are processed by the application server and ultimately written into the database. JMS is an application server feature on which Identity Management relies.
Check Java Messaging Service (JMS) processing for problems
Under Task Run-Time Management, run the Task Message Health check. Does the synthetic test complete?
This creates dummy tasks which are pushed through the JMS queue into the database to check JMS queue performance.
If the test does not return 100% within a few seconds, clear the JMS queue and restart the engine.
To restart the JMS queue:
For JBoss / WildFly: stop the application server, then back up and delete the contents of standalone/data/ and standalone/tmp/.
Then restart the app server.
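The back-up-and-delete step above can be sketched as a small shell function. This is a sketch, not a supported utility: the function name is ours, the backup location (/tmp) is an assumption, and stopping/restarting the application server (commands vary by environment) must happen around it.

```shell
#!/bin/sh
# Sketch: clear the JMS queue files for a JBoss/WildFly install.
# Stop the application server BEFORE running this, and restart it after.

clear_jms_queue() {
    jboss_home="$1"
    backup="/tmp/jms-backup-$(date +%Y%m%d%H%M%S).tar.gz"
    # Back up standalone/data and standalone/tmp before deleting anything.
    tar -czf "$backup" -C "$jboss_home/standalone" data tmp
    # Delete the CONTENTS of both directories (the directories remain).
    rm -rf "$jboss_home/standalone/data/"* "$jboss_home/standalone/tmp/"*
    echo "Backed up to $backup"
}

# A real run would use the actual install path (assumption; varies by install),
# e.g.:
#   clear_jms_queue /opt/CA/wildfly-idm
```

On a cluster, remember this must be done on every node, as the VAPP section below notes.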
For WebSphere / WebLogic: consult your application server administrator for details on clearing JMS.
For VAPP deployments:
The VAPP includes an alias to accomplish this: deleteIDMJMSqueue
Deletes the Identity Manager JMS queue (/opt/CA/wildfly-idm/standalone/data/*).
This should be completed on all nodes.
Configure Journal Size
For standalone IM, edit ca-standalone-full-ha.xml:
The journal file size and minimum number of files default to values that may not be adequate under heavy load.
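As an illustration, the journal settings live in the messaging subsystem of ca-standalone-full-ha.xml. The element names below come from the JBoss/WildFly messaging (HornetQ) subsystem schema, and the values shown are illustrative, not tuning recommendations:

```xml
<!-- Inside the messaging subsystem's <hornetq-server> element -->
<!-- (values are illustrative; size is in bytes) -->
<journal-file-size>10485760</journal-file-size>  <!-- 10 MB per journal file -->
<journal-min-files>10</journal-min-files>        <!-- minimum number of journal files -->
```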
Configuring journal size for Virtual Appliance:
Check for Provisioning / Endpoint issues
Review View Submitted Tasks. Is there a pattern? Are only specific tasks against one endpoint having issues? If the issue seems isolated to one endpoint, open Provisioning Manager, right-click the endpoint, and verify that you can access user account information and perform CRUD operations (Create, Read, Update, Delete) in Provisioning Manager.
Can you test against other endpoints to ensure they are accessible?
If endpoint issues are clearly present, focus on and resolve the endpoint issues, then use the built-in Resubmit Task option to retry the specific problem tasks.
Review endpoints for failures and resolve endpoint issues
Check Prov logs (etatrans, etanotify, JCS)
Check the number of Provisioning Servers deployed.
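A quick way to scan the Provisioning Server logs mentioned above for recent failures is a recursive grep. This is a generic sketch: the function name is ours, the example path is an assumption (log locations vary by install and platform), and the search patterns are deliberately broad.

```shell
#!/bin/sh
# Sketch: scan Provisioning Server logs (etatrans, etanotify, JCS)
# for lines that look like errors or failed operations.

scan_prov_logs() {
    log_dir="$1"
    # -r recurse, -h omit filenames, -i ignore case, -E extended regex;
    # show only the 20 most recent matches.
    grep -rhiE 'FAIL|ERROR' "$log_dir" 2>/dev/null | tail -n 20
}

# A real run would point at the install's log directory, for example
# (path is an assumption and varies by install):
#   scan_prov_logs /opt/CA/IdentityManager/ProvisioningServer/logs
```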