The Aria Lifecycle UI shows that one of the nodes in the Aria Automation cluster has 54 GB of memory and 12 CPUs (medium profile values), whereas the other two nodes in the cluster show 96 GB and 24 CPUs (XL profile values).
This difference can also be observed in vCenter when viewing the appliance-specific details.
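The assigned resources can also be confirmed from the guest operating system of each appliance over SSH, for example with standard Linux tools (illustrative; any equivalent commands work). The affected node reports the medium profile values, while the other nodes report the XL values:
nproc
free -h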
There are also multiple service health entries for pods on the node with the lower resources showing "Not healthy" when running the command:
vracli service status
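If the full status output is long, the unhealthy entries can be isolated with a simple filter, for example (illustrative; the exact status wording may vary between versions):
vracli service status | grep -i "not healthy"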
There are also several pod restarts occurring, which can be observed with the command:
kubectl get pods -n prelude
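To confirm that the restarting pods are scheduled on the node with the reduced resources, the node column and restart counts can be included in the output, for example (standard kubectl options; the JSONPath sort field reads the first container's restart count):
kubectl get pods -n prelude -o wide
kubectl get pods -n prelude --sort-by='.status.containerStatuses[0].restartCount'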
Environment
Aria Automation 8.18.x
Resolution
To assign the proper memory and CPU resources back to the affected node, follow the vertical scale-up procedure described in this document, performed from Aria Lifecycle Manager.
Downtime is expected for this procedure, as the environment is fully stopped, the appliances are reconfigured, and the services are then started back up.
XL resource profiles are recommended for production environments, so select the XL profile when choosing a size option in the wizard.
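After the scale-up completes and the services are running again, the node resources and service health can be re-checked from any appliance in the cluster, for example (the custom-columns field paths are standard Kubernetes Node fields):
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory
vracli service status
All three nodes should now report the XL profile values, and all services should report as healthy.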