In a deployment of Apache and the SiteMinder Web Agent running in OpenShift containers, memory in all of the containers sits at 93% of the limit at all times, whereas CPU looks fine and only reaches 45% during peak hours.
The question is what would be recommended in this case, as the memory allocation looks too low.
CA SiteMinder 12.9 running in OpenShift containers
Other versions and implementations will likely have a similar use case
In this scenario, the primary issue may be the lack of memory headroom.
For instance, let's imagine a configuration where the container resources are set as follows:
  resources:
    limits:
      cpu: 1200m
      memory: 1200Mi
    requests:
      cpu: "1"
      memory: 1Gi
With usage consistently at 93%, any slight, unexpected traffic spike or temporary memory leak will cause a container to exceed its 1200Mi memory limit and be terminated immediately by the Out-Of-Memory (OOM) killer.
At 93% usage of a 1200Mi limit, only about 84Mi of memory remains free (1200Mi * 0.07), which is not enough of a buffer for normal operation, let alone for unexpected spikes.
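When a container is killed this way, the pod's status records the termination reason, so the symptom is easy to confirm. A minimal sketch of what this typically looks like (the container name and restart count are hypothetical):

  status:
    containerStatuses:
      - name: apache-siteminder-agent   # hypothetical container name
        restartCount: 4                 # climbs with each OOM kill
        lastState:
          terminated:
            reason: OOMKilled
            exitCode: 137               # 128 + SIGKILL(9)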
A general guideline is to set the memory limit with a safety margin (e.g., 20-30%) above the expected peak usage. Since the current peak is 93% of 1200Mi (roughly 1116Mi), consider increasing the limit to around 1.5Gi (1536Mi) initially.
Ideally, the ratio between requests and limits should be close to 1.0, especially for performance-sensitive applications, to prevent CPU throttling and keep the resources effectively guaranteed. Consider raising the memory request to around 1.2Gi or 1.3Gi.
In this example, CPU usage reaches only 45% of the 1200m limit during peak hours.
The CPU request can safely be kept at 1 (1000m), and the CPU limit can stay in the 1.2-1.5 core range: with peak usage around 540m, the current limit already provides sufficient headroom, so there is no need to raise it alongside the memory increase if cost optimization is a concern.
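Putting the memory and CPU recommendations together, a revised resources stanza for this example could look like the sketch below. The values are illustrative starting points based on the figures above, not tuned numbers, and should be validated against observed usage after the change:

  resources:
    requests:
      cpu: "1"
      memory: 1280Mi    # ~1.25Gi, within the suggested 1.2-1.3Gi range
    limits:
      cpu: 1200m        # unchanged; 45% peak usage leaves ample headroom
      memory: 1536Mi    # 1.5Gi, a comfortable margin above the ~1116Mi observed peak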