After applying the dradis-10.6.0.jar patch to our Enterprise Dashboard server, we are running into an issue. Our database team received alerts on high CPU usage, which were caused by deadlocks on a table in the SQL Server 2012 database used by the Enterprise Dashboard.
The following is a description of what our DBA is seeing:
"The high CPU symptoms are directly related to the deadlock/blocking on the database 'Devtest_EntDashboard'. There is update/insert/select running in table 'access_event_log' concurrently meanwhile the next set of transactions come in and creating a deadlock situation. During this time the CPU occupied by the transaction stays till a victim process end."
Another team member said:
"What we found was that there were blockings on the database side whenever there is an update command running, and it blocks other sessions for a minute or two before clearing out."
Can you please share your thoughts and help us investigate this?
Release : 10.6 and above
Component : CA Service Virtualization
The error that is seen in the enterprisedashboard.log is as follows:
2021-04-09 04:00:28,633Z (00:00) [pool-5-thread-3] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Transaction (Process ID 78) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
2021-04-09 04:00:28,633Z (00:00) [pool-5-thread-3] WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 1205, SQLState: 40001
According to Microsoft, SQL Error: 1205, SQLState: 40001 is defined as:
Error 1205
Transaction (Process ID n) was deadlocked on generic waitable object resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
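Since the documented remedy for error 1205 is to rerun the transaction, one common application-side mitigation is to retry the unit of work when the driver reports SQLState 40001. The following is a minimal JDBC sketch of that pattern; the connection URL, the column being updated, and the retry limit are illustrative assumptions and are not taken from the Enterprise Dashboard code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DeadlockRetryExample {

    // SQLState 40001 / error 1205 means this session was chosen as the deadlock victim.
    private static final String DEADLOCK_SQLSTATE = "40001";
    private static final int MAX_ATTEMPTS = 3; // illustrative retry limit

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=Devtest_EntDashboard";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            con.setAutoCommit(false);
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE access_event_log SET processed = 1 WHERE id = ?")) {
                    ps.setLong(1, 42L);
                    ps.executeUpdate();
                    con.commit();
                    break; // success, stop retrying
                } catch (SQLException e) {
                    con.rollback();
                    // Rerun the transaction only if we were the deadlock victim.
                    if (!DEADLOCK_SQLSTATE.equals(e.getSQLState()) || attempt == MAX_ATTEMPTS) {
                        throw e;
                    }
                    Thread.sleep(200L * attempt); // simple backoff before retrying
                }
            }
        }
    }
}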
I found that Microsoft provided a fix for this in SQL Server 2014 - KB3052167 (https://support.
In the vast majority of cases, you can identify the cause of deadlock events and remedy the situation through either a structural change in the database schema or a logical change to the code responsible for the deadlock.
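One way to see exactly which statements and resources are involved is to read the xml_deadlock_report events that SQL Server captures by default in the system_health Extended Events session. The sketch below pulls those events via JDBC so the deadlock graph can be reviewed with the DBA; the connection details are hypothetical and the query is a generic system_health ring-buffer read, not something specific to the Enterprise Dashboard.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeadlockGraphReader {

    // Pulls xml_deadlock_report events from the system_health ring buffer.
    private static final String DEADLOCK_QUERY =
        "SELECT xed.event_data.query('.') AS deadlock_event " +
        "FROM (SELECT CAST(target_data AS xml) AS target_data " +
        "      FROM sys.dm_xe_session_targets st " +
        "      JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address " +
        "      WHERE s.name = 'system_health' AND st.target_name = 'ring_buffer') AS t " +
        "CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name=\"xml_deadlock_report\"]') " +
        "  AS xed(event_data)";

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=Devtest_EntDashboard";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(DEADLOCK_QUERY)) {
            while (rs.next()) {
                // Each row is a deadlock report naming the victim, the competing
                // statements, and the locked resources (e.g. keys in access_event_log).
                System.out.println(rs.getString("deadlock_event"));
            }
        }
    }
}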
SV Engineering reviewed this and agrees that the issue is likely at the database level, and that a configuration change to the database is required to resolve the deadlock.
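As an example of the kind of database-level change that is often evaluated for SQL Server deadlocks (an assumption for illustration, not a documented Enterprise Dashboard setting), enabling READ_COMMITTED_SNAPSHOT lets readers use row versioning instead of shared locks, which can reduce the blocking that leads to deadlocks. Any such change should be validated with the DBA and Broadcom Support before applying it. A minimal sketch, with hypothetical connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ReadCommittedSnapshotExample {
    public static void main(String[] args) throws Exception {
        // Connect to master so the target database can be altered; details are illustrative.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=master";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            // Row versioning lets readers avoid blocking writers, reducing the lock
            // contention that produces deadlocks on busy tables such as access_event_log.
            // WITH ROLLBACK IMMEDIATE drops other sessions so the option can be applied.
            st.execute("ALTER DATABASE [Devtest_EntDashboard] "
                    + "SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE");
        }
    }
}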
A similar issue was recorded for a deadlock found in a MySQL database (32239806/DE478691):
ERROR:
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Deadlock found when trying to get lock; try restarting transaction
Error Code: 1213
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commitInternal(EntityTransactionImpl.java:102)
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:63)
at com.itko.lisa.vse.metrics.MetricPersister.processWork(MetricPersister.java:382)
at com.itko.lisa.vse.metrics.MetricPersister.run(MetricPersister.java:321)
at java.lang.Thread.run(Thread.java:748)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Deadlock found when trying to get lock; try restarting transaction
Error Code: 1213
In that case the exception was fixed by changing the default value of 'innodb_buffer_pool_size', as described in the following link:
https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool-resize.html
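For reference, the sketch below shows one way to check and resize 'innodb_buffer_pool_size' on MySQL 5.7 or later, where the buffer pool can be resized online. The connection details and the 2 GB target value are illustrative assumptions only; the appropriate size depends on the host's memory and the linked MySQL documentation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InnodbBufferPoolResizeExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        String url = "jdbc:mysql://dbhost:3306/registry";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            // Show the current buffer pool size in bytes.
            try (ResultSet rs = st.executeQuery(
                    "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
            // MySQL 5.7+ can resize the buffer pool online; requires a privileged account.
            // 2 GB here is an illustrative value, not a recommendation for any particular host.
            st.execute("SET GLOBAL innodb_buffer_pool_size = 2147483648");
        }
    }
}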
A reworked patch, dradis-10.6.0.zip, was generated and provided to the client. This new patch eliminated the deadlocks.