Low Level Agent failures in Kubernetes OpenShift Web Agent

Article ID: 418013


Products

SITEMINDER
CA Single Sign On Agents (SiteMinder)
CA Single Sign-On
CA Single Sign On Secure Proxy Server (SiteMinder)

Issue/Introduction

The Web Agent runs in containers on Kubernetes, and the TCP keepalive has been tuned by setting the sysctl net.ipv4.tcp_retries2 to a value of 5.

Unfortunately, the OpenShift configuration cannot be modified.

Is there anything else at the Web Agent or at the Policy Server level that could have the same impact?

  • Are there any supported Agent/ACO/SmHost timeout or retry knobs to detect and reset stale Policy Server connections faster (application-level rather than kernel sysctl)?
  • Can the Web Agent tune TCP keepalive behavior per process (e.g., via supported env vars/parameters) so there's no need to rely on node-wide sysctls?
  • Any Policy Server side settings that help reduce Agent socket wait times or accelerate failover/reconnect logic?
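For context on the second question: on Linux, TCP keepalive can be tuned per socket at the application level, independently of the node-wide net.ipv4.tcp_keepalive_* sysctls. The sketch below is a minimal illustration of those socket options in Python; it is not the Web Agent's internal code, and the timing values are arbitrary examples:

```python
import socket

# Per-socket TCP keepalive tuning (Linux). These options override the
# node-wide keepalive sysctls for this one connection, which is the
# application-level mechanism a process can use instead of relying on
# cluster-wide kernel settings.
def make_keepalive_socket(idle=30, interval=10, probes=3):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # seconds of idle before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)      # failed probes before the kernel resets the connection
    return s

s = make_keepalive_socket()
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))  # 30
```

With these settings, a dead peer is detected after roughly idle + interval * probes seconds on this socket alone, which is the effect SM_ENABLE_TCP_KEEPALIVE is meant to enable inside the agent process.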

The Web Agent reports numerous errors such as:

LLA: SiteMinder Agent Api function failed - 'Sm_AgentApi_LoginEx' returned '-1'.
HLA: Component reported fatal error: 'Low Level Agent'.
HLA: Component reported fatal error: 'Authentication Manager'.
LLA: SiteMinder Agent Api function failed - 'Sm_AgentApi_AuthorizeEx' returned '-1'.
HLA: Component reported fatal error: 'Low Level Agent'.
HLA: Component reported fatal error: 'Authorization Manager'.

Environment

Policy Server 12.8SP8CR01 on Linux (on-premises);
Web Agent 12.8 on Apache on OpenShift 4.16 with Kubernetes 1.29;

Resolution

  • Ensure that any firewall or load balancer between the Web Agent and the Policy Server does not block the TCP RST packets sent by the Policy Server (1)(3);
  • Set on the Policy Server this registry to a lower value (2):

    HKEY_LOCAL_MACHINE\SOFTWARE\Netegrity\SiteMinder\CurrentVersion\PolicyServer=471740537
    Tcp Idle Session Timeout=0xa; REG_DWORD

  • Ensure that the Web Agent traces report a log line similar to the following, showing that the TCP keepalive is in use:

    [10/06/2025][02:57:25.474][15959][139700659029824][][SmClient.cpp:1292][CreateConnection][SM_ENABLE_TCP_KEEPALIVE is enabled

  • Set AgentWaitTime in WebAgent.conf;
  • Set sysctl net.ipv4.tcp_retries2;
  • Set SM_ENABLE_TCP_KEEPALIVE environment variable on the Web Agent and on the Policy Server.
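The AgentWaitTime step above would be a one-line change in WebAgent.conf. The fragment below is illustrative only; the 60-second value is an example, not a recommendation from this article, and should be tuned per environment:

```
# WebAgent.conf fragment (illustrative value)
AgentWaitTime="60"
```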

 
As long as OpenShift allows configuring the TCP keepalive, this approach should work.

Note that the sysctl net.ipv4.tcp_retries2 is recommended to be configured at the pod level in OpenShift (4).
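A pod-level sketch of the approach in reference (4) might look like the following. The pod name and image are placeholders, the values are illustrative, and net.ipv4.tcp_retries2 is a namespaced but "unsafe" sysctl, so the kubelet must explicitly allowlist it (e.g. via allowedUnsafeSysctls) before this securityContext entry is accepted; the SM_ENABLE_TCP_KEEPALIVE value format is an assumption to be checked against the product documentation:

```yaml
# Illustrative pod spec fragment (OpenShift/Kubernetes)
apiVersion: v1
kind: Pod
metadata:
  name: webagent            # placeholder name
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.tcp_retries2   # requires kubelet allowlisting (unsafe sysctl)
      value: "5"
  containers:
  - name: webagent
    image: example/webagent:latest  # placeholder image
    env:
    - name: SM_ENABLE_TCP_KEEPALIVE # value format is an assumption; verify in product docs
      value: "YES"
```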

Additional Information

  1. Error: Sm_AgentApi_IsProtectedEx, Sm_AgentApi_LoginEx in Web Agent log

  2. Error: Agent Api function failed with Web Agent and Load balancer

  3. Error 500: Web Agent Failing to Connect to Policy Server

  4. Configure tcp_retries2 for all the pods in the OpenShift cluster