Your distributed system (DS) is unresponsive in some way. Perhaps a gfsh command is not returning. Perhaps you just started a new node to join the cluster and are now encountering issues that cascade across the DS. Whatever the scenario, the goal is to get out of the situation with minimal business impact. Sometimes it may appear that all nodes are affected and that restarting the entire cluster is the only option; in some cases that is true. Often, however, you can determine a course of action that allows the bulk of the DS to keep running and recover to a stable state, for example when log messages indicate that only one node or a subset of nodes is unresponsive. This article is intended to help you transition your system back to a healthy state.
Once the symptoms suggest that a deadlock is possible, the first and most important step is to gather thread dumps across the entire cluster. It may not seem like the most urgent action, but if you want to improve the long-term stability of your environment, a full understanding of what happened is essential. Take 2 to 3 thread dumps per member, 30 seconds to 1 minute apart. With only a single thread dump as a snapshot in time, it is impossible to tell whether a thread shown as BLOCKED is blocked momentarily during the natural course of processing, which is normal, or whether it remains blocked for a minute or more, which is not and almost certainly indicates a problem in the system. Add this step to any runbook you maintain so that whoever is managing the system understands that gathering the dumps is vital to finding the root cause.
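Thread dumps are most often captured by running jstack <pid> or kill -3 <pid> against each member's JVM. If you prefer to script the capture from inside a member, the following is a minimal sketch using the standard java.lang.management API; the number of dumps and the interval match the recommendation above, and anything else is a placeholder to adapt to your runbook.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: capture several thread dumps of the current JVM, spaced apart,
// so that threads stuck in BLOCKED/WAITING across consecutive dumps stand out.
public class ThreadDumpCapture {
  public static void main(String[] args) throws InterruptedException {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    int dumps = 3;                // 2 to 3 dumps per member
    long intervalMillis = 30_000; // 30 seconds to 1 minute apart

    for (int i = 1; i <= dumps; i++) {
      System.out.println("==== Thread dump " + i + " at " + new java.util.Date() + " ====");
      for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
        System.out.printf("\"%s\" id=%d state=%s%n",
            info.getThreadName(), info.getThreadId(), info.getThreadState());
        for (StackTraceElement frame : info.getStackTrace()) {
          System.out.println("    at " + frame);
        }
        System.out.println();
      }
      if (i < dumps) {
        Thread.sleep(intervalMillis);
      }
    }
  }
}
```

Comparing the same thread across consecutive dumps is what tells you whether a BLOCKED thread is transient or truly stuck.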
There are a number of symptoms that can give the impression of a blockage in the system. As stated earlier, a gfsh command might not return; this alone warrants taking thread dumps across the system as described. There are also log messages that generally indicate a problem. Here are some examples of messages that should prompt you to examine the DS closely to determine whether an issue exists in the cluster.
There are a few different flavors of the "15 seconds have elapsed while waiting for replies" message. The "15" comes from the ack-wait-threshold property, which defaults to 15 seconds; if you have lowered the setting to 10, you will see "10 seconds have elapsed..." instead. If you see such messages repeatedly and the replies never arrive, you should gather thread dumps and try to take action before the issue cascades and impacts other members of the cluster. The same holds true if you repeatedly see messages about outstanding DLock requests.
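For reference, here is a minimal sketch of where that number comes from, assuming the Geode-style package names used by current releases (older GemFire versions use the com.gemstone.gemfire equivalents); in most deployments the property is set in gemfire.properties rather than in code.

```java
import java.util.Properties;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class AckWaitThresholdExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Equivalent to ack-wait-threshold=15 in gemfire.properties.
    // After this many seconds without a reply, the member starts logging
    // "15 seconds have elapsed while waiting for replies ...".
    props.setProperty("ack-wait-threshold", "15");

    Cache cache = new CacheFactory(props).create();
    // ... application code ...
    cache.close();
  }
}
```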
Another common symptom of an issue is hitting the max-connections limit. If you do not normally encounter this in your environment, the system may be reacting to recent events such as a member starting, the loss of a member, or repeated client failures driving up the number of connections in use. The CacheServerMXBean interface exposes a getThreadQueueSize() method you can monitor; if it returns values above your normal range, you may be experiencing an issue. You could raise an alert when the value passes some threshold, such as half of your max-connections limit, and potentially resolve the problem before it cascades through the system. See the CacheServerMXBean Javadocs for details.
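As a rough sketch of that kind of monitoring, the following polls ThreadQueueSize on the cache server MBeans through the cluster's JMX manager and warns when the value crosses a threshold. The JMX host and port, the GemFire:service=CacheServer,* ObjectName pattern, and the alert threshold are assumptions you should adjust for your environment.

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: poll ThreadQueueSize on each cache server MBean via the JMX manager
// and flag values above an assumed alert threshold.
public class ThreadQueueMonitor {
  public static void main(String[] args) throws Exception {
    // Assumed JMX manager endpoint; adjust host/port for your cluster.
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
    int alertThreshold = 400; // e.g. half of a max-connections setting of 800

    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection conn = connector.getMBeanServerConnection();
      // Assumed ObjectName pattern for the cache server MBeans.
      Set<ObjectName> servers =
          conn.queryNames(new ObjectName("GemFire:service=CacheServer,*"), null);

      for (ObjectName server : servers) {
        int queued = ((Number) conn.getAttribute(server, "ThreadQueueSize")).intValue();
        System.out.println(server + " ThreadQueueSize=" + queued);
        if (queued > alertThreshold) {
          System.out.println("WARNING: thread queue above threshold on " + server);
        }
      }
    } finally {
      connector.close();
    }
  }
}
```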
Sometimes things are blocked but not deadlocked, and it is often difficult to tell the difference. One area where customers frequently ask for assistance is when startup appears to hang and persistence is part of the configuration. In such cases, region initialization may be blocked waiting for the member holding the latest copy of the data to come online. The following message indicates that you may need to make sure all necessary nodes have been started so that initialization can unblock and startup can complete.
In such cases, read the log message carefully, determine which members need to be started to unblock recovery of the persistent disk store, and proceed until all such messages have been eliminated from the logs.
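If you would rather check programmatically than scan logs, the management API's DistributedSystemMXBean also reports which persistent members the cluster is waiting for; gfsh's show missing-disk-stores command surfaces the same information. The sketch below assumes that bean exposes a listMissingDiskStores operation in your release (verify against the Javadocs for your version), and the JMX endpoint and GemFire:service=System,* ObjectName pattern are likewise assumptions.

```java
import java.util.Arrays;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: ask the distributed system MBean which persistent members it is
// still waiting for before region initialization can complete.
public class MissingDiskStoreCheck {
  public static void main(String[] args) throws Exception {
    // Assumed JMX manager endpoint; adjust for your cluster.
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");

    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection conn = connector.getMBeanServerConnection();
      // Assumed ObjectName pattern for the DistributedSystemMXBean.
      Set<ObjectName> names =
          conn.queryNames(new ObjectName("GemFire:service=System,*"), null);

      for (ObjectName name : names) {
        Object missing =
            conn.invoke(name, "listMissingDiskStores", new Object[0], new String[0]);
        System.out.println("Missing disk stores reported by " + name + ": "
            + Arrays.deepToString((Object[]) missing));
      }
    } finally {
      connector.close();
    }
  }
}
```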
Environment
GemFire 7 and above