I have the following vApp node deployment:
vApp node 1 : IM, US, PS, CS
vApp node 2 : IM, US, PS, CS
vApp node 3 : IM, US, PS, CS
vApp node 4 : IM, US, PS, CS
External MS SQL database machine
I removed vApp node 3 and node 4 from the above deployment using the following steps:
1. Log in to node 1's vApp dashboard and go to the Setup screen
2. Remove node 3 and node 4 by clicking the X button at the top right of each node box
3. Click [Deploy] to redeploy
The redeployment did not complete and was stuck at the "Start Services" stage. I found that IM would not start on vApp node 1, and the IM server log showed the following WARN messages repeatedly:
2020-06-03 21:32:19,693 WARN [org.jgroups.protocols.TCP] (Timer-2,shared=tcp) JGRP000032: null: no physical address for ##########, dropping message
2020-06-03 21:32:20,192 WARN [org.jgroups.protocols.TCP] (Timer-3,shared=tcp) JGRP000032: null: no physical address for ##########, dropping message
Release : vApp 14.3, 14.4
Component : IdentityMinder (Identity Manager), Identity Suite
When vApp nodes are removed, the subsequent redeployment process can no longer reach those removed nodes, so no reconfiguration happens on them. After reconfiguration, node 1 and node 2 know that the WildFly cluster members are now node 1 and node 2 only, while the removed node 3 and node 4 still believe that the cluster members are all four nodes. If WildFly is still running on node 3 and node 4, it keeps attempting to communicate with WildFly on node 1 and node 2, which manifests as the repetitive WARN messages in the WildFly server log on node 1:
2020-06-03 21:32:19,693 WARN [org.jgroups.protocols.TCP] (Timer-2,shared=tcp) JGRP000032: null: no physical address for #############, dropping message
2020-06-03 21:32:20,192 WARN [org.jgroups.protocols.TCP] (Timer-3,shared=tcp) JGRP000032: null: no physical address for #############, dropping message
These unnecessary communications appear to interfere with WildFly on node 1 and cause overhead.
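To make the mechanism concrete, here is a minimal standalone sketch of how a JGroups member tracks cluster membership. It assumes the JGroups 4.x API, a local tcp.xml stack definition, and a hypothetical cluster name; it is illustrative only and not part of the vApp tooling. Each member acts on the last view it accepted, so a removed node that is never reconfigured keeps its stale four-member view and keeps addressing members whose physical addresses the surviving nodes have already dropped.

import org.jgroups.JChannel;
import org.jgroups.Receiver;
import org.jgroups.View;

public class ViewWatcher {
    public static void main(String[] args) throws Exception {
        // TCP-based stack, matching the "shared=tcp" threads in the log lines above.
        JChannel channel = new JChannel("tcp.xml");
        channel.setReceiver(new Receiver() {
            @Override
            public void viewAccepted(View view) {
                // A reconfigured node receives a new view listing only the
                // surviving members. A removed node that never receives a new
                // view keeps sending to stale members, which the survivors
                // report as JGRP000032 "no physical address" warnings.
                System.out.println("Cluster members now: " + view.getMembers());
            }
        });
        channel.connect("identity-cluster"); // hypothetical cluster name
        try {
            Thread.sleep(60_000); // observe membership changes for a minute
        } finally {
            channel.close();
        }
    }
}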
To avoid this issue, it is essential to power off vApp node 3 and node 4 (or at least stop IM on them) before removing them from the cluster and redeploying. After the redeployment completes, destroy the removed nodes or disconnect them from the network.
Another workaround is to remove all components from node 3 and node 4, i.e. drag every component out of their node boxes instead of removing the vApp nodes themselves on the Setup screen, and then redeploy. This kind of redeployment still reaches the nodes being removed and reconfigures them with empty component sets.