Clarity PPM - "Timeout: Pool empty. Unable to fetch a connection"

Article ID: 210726


Products

Clarity PPM On Premise, Clarity PPM SaaS

Issue/Introduction

Clarity reports a "Timeout: Pool empty" error together with any of the symptoms below:

  • Users are unable to log into Clarity.
  • Leaked connections, stuck jobs, or hanging processes
  • Application slowness and database errors related to the pool connection limit of 1000 being exceeded
  • Issues with time slices and general slowness

Error in app-ca/bg-ca log:

ERROR 2021-03-10 00:13:36,700 [https-jsse-nio2-8444-exec-27] niku.security (clarity:unknown:xxxxxx:none) UserSessionCache.get:PMD error
com.niku.union.persistence.PersistenceException: Error getting a DB connection
              at com.niku.union.persistence.PersistenceController.doProcessRequest(PersistenceController.java:620)
              at com.niku.union.persistence.PersistenceController.processRequest(PersistenceController.java:311)
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [https-jsse-nio2-8444-exec-27]Timeout: Pool empty.
Unable to fetch a connection in 30 seconds, none available[size:1000; busy:1000; idle:0; lastwait:30000].

Note: This article also applies if you see a large number of fast-growing connections in the database and the DBA kills them before they reach the 1000 limit.

Environment

Release: Any

Resolution

  • The issue is that the database ran out of connections: all 1000 connections allowed by the pool were in use, which caused the error.
  • This is usually caused by a connection leak, either from custom processes that do not close their connections or from something else in the product (see the sketch after this list).
  • It could also be REST API integrations that flooded the application in a short burst.
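If a custom process opens JDBC connections, the classic leak pattern looks like the sketch below. This is a minimal, hypothetical illustration, not Clarity code: the method names and query are made up for the example.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class CustomProcess {

    // Leak: if the query throws, or the method returns, the connection
    // is never returned to the pool. Repeat this often enough and the
    // pool ends up at size:1000; busy:1000; idle:0, as in the error above.
    static int countInvestmentsLeaky(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM inv_investments");
        rs.next();
        return rs.getInt(1); // con, st, and rs are never closed
    }

    // Fix: try-with-resources returns the connection to the pool on
    // every exit path, including exceptions.
    static int countInvestments(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM inv_investments")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}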

To remedy the problem, restart the services; this should free up the connections.

Consider gathering the information below before restarting so that the root cause can be determined, or plan to do so the next time this happens if you have already restarted:

  1. Take a heap dump of the application (instructions here), for example with the JDK's jcmd <pid> GC.heap_dump <file>.hprof, and provide it to Broadcom Support for analysis.
  2. Review the /niku/apache page of the application and, after the restart, monitor what types of connections are created: 
    http://SERVERNAME:PORT/niku/apache
  3. Ask the DBA to provide data from the database server side on open connections and running queries, in case they are not leaked connections but long-running blocked or stuck queries (for example, from v$session on Oracle or sys.dm_exec_sessions on Microsoft SQL Server).
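As background for leak hunting: the pool that throws the error above is Tomcat's JDBC pool (org.apache.tomcat.jdbc.pool), which can flag connections that were borrowed but never returned and log the stack trace of the borrowing code. Clarity configures its pool internally, so the standalone sketch below is illustrative only; the driver, URL, and credentials are placeholders.

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolDiagnostics {
    public static void main(String[] args) {
        PoolProperties p = new PoolProperties();
        p.setDriverClassName("oracle.jdbc.OracleDriver");  // placeholder driver
        p.setUrl("jdbc:oracle:thin:@dbhost:1521/clarity"); // placeholder URL
        p.setUsername("user");                             // placeholder
        p.setPassword("secret");                           // placeholder
        p.setMaxActive(1000); // matches size:1000 in the error message
        p.setMaxWait(30000);  // matches "Unable to fetch a connection in 30 seconds"

        // Treat connections held longer than 60 seconds as abandoned,
        // reclaim them, and log the stack trace of the code that
        // borrowed them - which points at the leak.
        p.setRemoveAbandoned(true);
        p.setRemoveAbandonedTimeout(60);
        p.setLogAbandoned(true);

        DataSource ds = new DataSource(p);
        // Connections obtained from ds.getConnection() and not closed
        // within 60 seconds are now reported in the log.
    }
}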