
Batch process job PollItemsETLJob mis-fired due to long-running job


Article ID: 202570


Products

  • CA Infrastructure Management
  • CA Performance Management - Usage and Administration
  • DX NetOps

Issue/Introduction

What does an Event in Performance Management with the following Description mean, and how can it be resolved?

"Batch process job PollItemsETLJob mis-fired due to long-running job."

Environment

Performance Management releases r3.7.13 and earlier

Performance Management releases r20.2.2 and earlier

Cause

An insufficiently sized thread pool causes a WARN message to be logged for a mis-fired Poll Item or Group ETL job every hour while the DIM Item ETL is running at the same time.
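The mechanism can be illustrated with a small, purely hypothetical Python sketch (not Performance Management code): with only two worker threads, two long-running ETL jobs occupy the whole pool, so a third scheduled job cannot start within the misfire threshold and is reported as mis-fired. The job names and the 0.5-second stand-in threshold below are illustrative only.

```python
# Illustrative sketch: why a 2-thread scheduler pool misfires when
# long-running jobs occupy both threads. Not product code.
import time
from concurrent.futures import ThreadPoolExecutor

MISFIRE_THRESHOLD_S = 0.5  # stand-in for misfireThresholdMS=5000

log = []

def run_job(name, duration, scheduled_at):
    # A job "mis-fires" if it starts later than the threshold allows.
    if time.monotonic() - scheduled_at > MISFIRE_THRESHOLD_S:
        log.append(f"{name} mis-fired")
    else:
        log.append(f"{name} ok")
    time.sleep(duration)  # simulate the job's run time

with ThreadPoolExecutor(max_workers=2) as pool:  # schedulerThreadCount=2
    now = time.monotonic()
    pool.submit(run_job, "DimItemETL", 2.0, now)       # long-running job
    pool.submit(run_job, "GroupETL", 2.0, now)         # occupies second thread
    pool.submit(run_job, "PollItemsETLJob", 0.1, now)  # queued until a thread frees

print(log)
```

The third job only gets a thread after roughly two seconds, well past the threshold, so it lands in the log as mis-fired even though nothing is actually broken.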

Resolution

This is resolved in the following releases of Performance Management.

  • r3.7.16 and newer releases
  • r20.2.3 and newer releases

 

Per the r20.2.x documentation Fixed Issues list, we see this entry:

  • Symptom: An insufficiently sized thread pool leads to a WARN message to be logged for a mis-fired Poll Item or Group ETL job every hour when the DIM Item ETL is running at the same time.
  • Resolution: The ETL scheduler's thread pool size was increased and made configurable.
  • (20.2.3, DE474462)

 

The same entry is found in the r3.7.x Fixed Issues list.

The solution implemented two changes to resolve the issue.

  1. The first change increased the default schedulerThreadCount value from 2 to 3, allowing one additional job to run concurrently.
  2. The second change made the previously hard-coded schedulerThreadCount and misfireThresholdMS values configurable.

In Performance Management releases r3.7.13 or r20.2.2 and earlier, the hard-coded values defaulted to:

  • schedulerThreadCount=2
  • misfireThresholdMS=5000

 

In releases r3.7.16 or r20.2.3 and newer, they default to:

  • schedulerThreadCount=3
  • misfireThresholdMS=5000
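Why one extra thread is enough can be shown with a small, hypothetical Python sketch (not product code): with a three-thread pool, a short job can still start promptly even while two long-running jobs hold the other threads. The job names and the 0.5-second stand-in threshold are illustrative assumptions.

```python
# Illustrative sketch: a 3-thread pool leaves a thread free for the
# hourly job, so it starts within the misfire threshold. Not product code.
import time
from concurrent.futures import ThreadPoolExecutor

MISFIRE_THRESHOLD_S = 0.5  # stand-in for misfireThresholdMS=5000

results = {}

def run_job(name, duration, scheduled_at):
    # Record whether the job started within the misfire threshold.
    results[name] = (time.monotonic() - scheduled_at) <= MISFIRE_THRESHOLD_S
    time.sleep(duration)  # simulate the job's run time

with ThreadPoolExecutor(max_workers=3) as pool:  # schedulerThreadCount=3
    now = time.monotonic()
    pool.submit(run_job, "DimItemETL", 1.0, now)       # long-running job
    pool.submit(run_job, "GroupETL", 1.0, now)         # long-running job
    pool.submit(run_job, "PollItemsETLJob", 0.1, now)  # gets the third thread

assert results["PollItemsETLJob"]  # started within threshold: no misfire
```

With the same workload that misfires under a two-thread pool, the third thread absorbs the short job and no WARN-level misfire would be reported.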

 

If upgrading to a release of Performance Management that contains the solution (with schedulerThreadCount increased from 2 to 3) does not resolve the issue, please open a new support case and reference this Knowledge Base article. Support can assist in determining a root cause and solution, possibly by customizing the default values.