Siteminder: Troubleshoot High CPU utilization on the Access Gateway



Article ID: 377941

Products

CA Single Sign On Agents (SiteMinder) CA Single Sign On Federation (SiteMinder) CA Single Sign On Secure Proxy Server (SiteMinder) CA Single Sign On SOA Security Manager (SiteMinder) SITEMINDER

Issue/Introduction

Troubleshoot High CPU utilization on the Access Gateway.

Environment

SiteMinder Access Gateway 12.8.x

Cause

To identify the cause of high CPU utilization, collect pkgapp output and logs as described in the Resolution below.

Resolution

1. Check the PID of the Access Gateway Proxy Engine's Java process.

Run "ps -ef|grep java |grep secure-proxy" to find the Proxy Engine's PID.
[root@www apps]# ps -ef|grep java |grep secure-proxy

nobody     23384       1  6 15:40 ?        00:01:24 /opt/jdk-11.0.21+9-jre/bin/java -ms256m -mx1024m -server -XX:MaxMetaspaceSize=256M -Dcatalina.base=/opt/CA/secure-proxy/Tomcat -Dcatalina.home=/opt/CA/secure-proxy/Tomcat -Djava.io.tmpdir=/opt/CA/secure-proxy/Tomcat/temp -DHTTPClient.log.mask=0 -DHTTPClient.Modules=HTTPClient.RetryModule|org.tigris.noodle.NoodleCookieModule|HTTPClient.DefaultModule -Dlogger.properties=/opt/CA/secure-proxy/Tomcat/properties/logger.properties -Dfile.encoding=UTF8 -DIWACONFIGHOME=/opt/CA/secure-proxy/proxy-engine/conf/sts-config/globalconfig -DNETE_WA_ROOT= -DPWD=/opt/CA/secure-proxy -classpath /opt/CA/secure-proxy/Tomcat/bin/proxybootstrap.jar:/opt/CA/secure-proxy/Tomcat/properties:/opt/CA/secure-proxy/resources:/opt/CA/secure-proxy/Tomcat/bin/bootstrap.jar:/opt/CA/secure-proxy/Tomcat/endorsed/jakarta.xml.bind-api.jar:/opt/CA/secure-proxy/Tomcat/endorsed/jsr105-api-1.0.1.jar:/opt/CA/secure-proxy/Tomcat/endorsed/jakarta.xml.ws-api-2.3.2.jar:/opt/CA/secure-proxy/Tomcat/endorsed/jakarta.activation.jar:/opt/CA/secure-proxy/Tomcat/lib/smi18n.jar:/opt/CA/secure-proxy/agentframework/java/bc-fips-1.0.2.4.jar com.netegrity.proxy.ProxyBootstrap -config /opt/CA/secure-proxy/proxy-engine/conf/server.conf

The PID of the Proxy Engine is 23384.
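
If more than one Java process is running, a shorter check can also be used. This is a sketch that assumes the pgrep utility (from procps) is installed; it returns only the matching PID:

[root@www apps]# pgrep -f "java.*secure-proxy"
23384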

 

2. run "top" command and see if the secure-proxy's java process(can find "secure-proxy" in the command-line path) is consuming high CPU and if it stays that way.

The following output shows PID 23384 consuming 0.7% CPU; in your environment the value may be considerably higher.

[root@www apps]# top
top - 16:01:57 up  4:45,  1 user,  load average: 0.03, 0.04, 0.08
Tasks: 259 total,   1 running, 258 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.5 id,  0.0 wa,  0.2 hi,  0.1 si,  0.0 st
MiB Mem :   2748.1 total,    119.9 free,   1971.8 used,    821.9 buff/cache
MiB Swap:   2116.0 total,    893.3 free,   1222.7 used.    776.3 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  14604 smuser    20   0 5636220 879328  11592 S   1.0  31.2   2:38.09 java
  23384 nobody    20   0 4303872 866136 389632 S   0.7  30.8   1:23.94 java
    846 root      20   0  457488   4992   4480 S   0.3   0.2   0:13.99 vmtoolsd
  24121 root      20   0  226028   4224   3328 R   0.3   0.2   0:00.01 top
      1 root      20   0  174232   9456   5728 S   0.0   0.3   0:02.75 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:00.03 kthreadd
      3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp
      5 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 slub_flushwq
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 netns
      8 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-events_highpri
     10 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
     12 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_kthre
     13 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_rude_
     14 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_trace
     15 root      20   0       0      0      0 S   0.0   0.0   0:00.06 ksoftirqd/0
     16 root      20   0       0      0      0 I   0.0   0.0   0:01.07 rcu_preempt
     17 root      rt   0       0      0      0 S   0.0   0.0   0:00.01 migration/0
     18 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 idle_inject/0
     20 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/0
     21 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/1
     22 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 idle_inject/1
     23 root      rt   0       0      0      0 S   0.0   0.0   0:00.28 migration/1
     24 root      20   0       0      0      0 S   0.0   0.0   0:00.07 ksoftirqd/1
     26 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/1:0H-events_highpri
     27 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/2
     28 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 idle_inject/2
     29 root      rt   0       0      0      0 S   0.0   0.0   0:00.29 migration/2
     30 root      20   0       0      0      0 S   0.0   0.0   0:00.41 ksoftirqd/2
     32 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/2:0H-events_highpri
     33 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/3
     34 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 idle_inject/3
     35 root      rt   0       0      0      0 S   0.0   0.0   0:00.28 migration/3
     36 root      20   0       0      0      0 S   0.0   0.0   0:00.10 ksoftirqd/3
     38 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/3:0H-events_highpri
     43 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kdevtmpfs
     44 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 inet_frag_wq
     45 root      20   0       0      0      0 S   0.0   0.0   0:00.01 kauditd
     47 root      20   0       0      0      0 S   0.0   0.0   0:00.02 khungtaskd
     48 root      20   0       0      0      0 S   0.0   0.0   0:00.00 oom_reaper
     49 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 writeback
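
To preserve evidence of the condition over time (for example, to attach to a support case), top can also be run non-interactively against the Proxy Engine PID. This is a sketch assuming the standard procps top and the PID found in step 1; the output file name is only an example:

[root@www apps]# top -b -H -d 5 -n 12 -p 23384 > /tmp/ag_top.txt

Here -b runs top in batch mode, -H shows individual threads, -d 5 sets a 5-second interval, and -n 12 collects 12 samples (about one minute of data).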

 

 

3. Run "ps -p <Access Gateway Java PID> -L -o pcpu,pid,tid,time,tname,stat,psr | sort -n -k1 -r".

This lists the threads of the process, sorted by CPU consumption.

[root@www apps]# ps -p 23384 -L -o pcpu,pid,tid,time,tname,stat,psr |sort -n -k1 -r
 5.9   23384   23385 00:00:40 ?        Sl     3
 3.6   23384   23396 00:00:24 ?        Sl     2
 0.3   23384   23425 00:00:02 ?        Sl     2
 0.3   23384   23397 00:00:02 ?        Sl     0
 0.2   23384   23426 00:00:01 ?        Sl     3
 0.2   23384   23402 00:00:01 ?        Sl     1
 0.2   23384   23401 00:00:01 ?        Sl     2
 0.2   23384   23386 00:00:01 ?        Sl     1
 0.1   23384   23403 00:00:01 ?        Sl     3
%CPU     PID     TID     TIME TTY      STAT PSR
 0.0   23384   23463 00:00:00 ?        Sl     0
 0.0   23384   23462 00:00:00 ?        Sl     3
 0.0   23384   23461 00:00:00 ?        Sl     0
 0.0   23384   23460 00:00:00 ?        Sl     1
 0.0   23384   23459 00:00:00 ?        Sl     0

The output above shows that TID 23385 is consuming 5.9% CPU and TID 23396 is consuming 3.6% CPU.

In your environment these values may be significantly higher.
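
To relate a hot thread to a Java thread name, the decimal TID can be converted to hexadecimal and matched against the nid value in a JVM thread dump (for example, one triggered by sending SIGQUIT to the process, which the JVM writes to its console/stdout log). This is a sketch using the TID found above:

[root@www apps]# printf '0x%x\n' 23385
0x5b59
[root@www apps]# kill -3 23384

Search the resulting thread dump for nid=0x5b59 to see which Java thread it corresponds to.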

 

4. Capture a core dump manually (repeat 3 times).

#gcore 23384

Rename core.23384 so it is not overwritten by the next capture.

About 30 seconds later, run the command again and rename the new core file.

After another 30 seconds, run the command once more and rename the core file, so that three core files are collected.
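
The capture-and-rename cycle can also be scripted. This is a minimal sketch, assuming gcore (shipped with gdb) is available, PID 23384 from step 1, and enough free disk space under /tmp for three core files:

for i in 1 2 3; do
    gcore -o /tmp/ag_core 23384                    # writes /tmp/ag_core.23384
    mv /tmp/ag_core.23384 /tmp/ag_core.23384.$i    # keep each capture under a unique name
    sleep 30                                       # wait ~30 seconds between captures
done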

 

5. Run pkgapp against the three core files.

pkgapp -ir -p <Current active pid of Java> -a <Path to Java process of AG> -c <Path to the core file> -s <Path to store the output>
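
For example (illustrative values only; substitute the actual Java PID, the JVM path shown in the process command line from step 1, each renamed core file from step 4, and an output directory of your choice):

pkgapp -ir -p 23384 -a /opt/jdk-11.0.21+9-jre/bin/java -c /tmp/ag_core.23384.1 -s /tmp/pkgapp_output

Run the command once for each of the three core files.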