On-Prem Agents fail to communicate with Policy Servers in the Kubernetes Cluster

Article ID: 406257


Updated On:

Products

SITEMINDER

Issue/Introduction

A Kubernetes cluster is set up and SiteMinder is deployed in it.

On-prem agents need to cut over to the Kubernetes Policy Servers, but they are getting handshake failures.

The agents go through an AWS NLB to communicate with the Kubernetes Policy Servers.

Environment

On-Prem Agents

AWS NLB

Kubernetes Policy Server

Cause

When agents connect through an AWS NLB, the load balancer must first pass its health check against the Policy Servers in the cluster.

If the health check fails, the NLB marks the targets as unhealthy and does not forward traffic, so the agent handshake with the Policy Server fails.

Resolution

Follow the SiteMinder documentation on how to configure the health probe.

Configure the AWS NLB using the following sample:

service:
  type: LoadBalancer
  annotations:
    enabled: true
    values:
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=true"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8181"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/health"
   
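Before cutting the agents over, it can help to confirm that the health endpoint configured above actually responds and that the NLB sees the targets as healthy. The commands below are a sketch: the pod name and target group ARN are placeholders, and the port and path match the sample annotations (8181, /health).

```shell
# Verify the health endpoint from inside a Policy Server pod.
# <policy-server-pod> is a placeholder for an actual pod name
# (list pods with: kubectl get pods).
kubectl exec <policy-server-pod> -- curl -sf http://localhost:8181/health

# Check what the NLB itself reports for its targets.
# <target-group-arn> is a placeholder; find it with:
#   aws elbv2 describe-target-groups
aws elbv2 describe-target-health --target-group-arn <target-group-arn>
```

If the targets show as unhealthy here, fix the health probe configuration before troubleshooting the agent handshake itself.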

If the following annotation exists, remove it; otherwise the handshake error persists.

service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"