How does PAM's internal VIP distribute load to other nodes


Article ID: 368484


Products

CA Privileged Access Manager (PAM)

Issue/Introduction

It is important to understand how the PAM cluster VIP manages connections when deciding whether to use the built-in internal VIP or an external third-party load balancer.

Environment

Release 4.x

Resolution

The PAM Cluster VIP is assigned to the primary cluster node at the time the cluster is started. Specifically, this IP is assigned as a secondary IP on the network card defined in the cluster configuration. This requires that the VIP be defined with an IP that is routable on the network card it is assigned to. When planning the cluster site configuration, it is important to ensure that all nodes within a specific cluster site can use that network IP on the network device defined.

 

Depending on the type of network traffic coming into the PAM VIP, the built-in load balancer may route traffic to any node within the cluster site. The built-in VIP cannot balance traffic to any other site; if you need to load balance client connections across sites in a multi-site cluster, use a third-party load balancer configured separately from the internal VIP address. All client connections coming into the VIP address are routed to the local address of one of the cluster nodes within the site after determining the least utilized node. Utilization is determined solely by the number of active "xcd_spfd" processes on each node; an xcd_spfd child process is started for every client connection plus each active device connection. Once a client connection is made to a PAM appliance, all communication from that client remains on that appliance until the client logs off. This is true regardless of whether the built-in VIP or a third-party load balancer VIP is used.
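The least-utilized-node selection described above can be sketched roughly as follows. This is an illustrative sketch only, not PAM code: the node names and counts are hypothetical, and PAM's actual tie-breaking is inferred from the example below, where equal counts favor the primary node.

```python
def pick_least_utilized(spfd_counts):
    """Return the node with the fewest active xcd_spfd processes.

    spfd_counts maps node name -> active xcd_spfd process count.
    On a tie, the first node listed wins (modeled here as the primary
    cluster node, listed first)."""
    return min(spfd_counts, key=spfd_counts.get)

# Hypothetical snapshot: node1 has 1 process, node2 and node3 have 4 each.
counts = {"node1": 1, "node2": 4, "node3": 4}
print(pick_least_utilized(counts))  # node1
```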

 

Example:

The first connection made to a 3-node cluster site will go to the primary cluster node, since all nodes have an equal number of connections. Suppose 2 more connections come in and are distributed to the other 2 nodes, and each of those 2 connections then starts 3 connections to other devices. When the next 3 client connections come in, all 3 will go to the first node. Since the number of xcd_spfd processes is not displayed, it is not possible to determine where the next client connection will be sent without console access to each node.
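The walkthrough above can be traced with a small simulation. This is a hypothetical model for illustration, assuming each client connection and each device connection adds one xcd_spfd process, and ties favor the primary node (node1):

```python
# Active xcd_spfd processes per node; all start at zero.
counts = {"node1": 0, "node2": 0, "node3": 0}

def route(counts):
    node = min(counts, key=counts.get)  # least utilized; ties favor node1
    counts[node] += 1                   # the client connection's own process
    return node

print(route(counts))                    # node1 (all equal, primary wins)
print(route(counts), route(counts))     # node2 node3
# The connections on node2 and node3 each open 3 device connections:
counts["node2"] += 3
counts["node3"] += 3
# The next 3 clients all land on node1 (counts 1, 2, 3 vs. 4 and 4):
print([route(counts) for _ in range(3)])  # ['node1', 'node1', 'node1']
```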

 

External load balancers can be configured to balance the load by the number of external client connections, but they cannot replicate the internal VIP's methodology because they cannot determine the number of active xcd_spfd processes on each node. Most third-party load balancers use a round-robin with source persistence methodology to manage the load in a simple manner.
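Round-robin with source persistence, as typically offered by third-party load balancers, can be sketched as follows. This is a simplified illustration, not any vendor's implementation; node names and IPs are hypothetical, and real load balancers usually expire persistence entries after a timeout, which is omitted here:

```python
import itertools

class RoundRobinPersistent:
    """New source IPs are assigned nodes in rotation (round robin);
    repeat source IPs stick to their previously assigned node
    (source persistence)."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)
        self._sticky = {}  # source IP -> assigned node

    def route(self, source_ip):
        if source_ip not in self._sticky:
            self._sticky[source_ip] = next(self._cycle)
        return self._sticky[source_ip]

lb = RoundRobinPersistent(["node1", "node2", "node3"])
print(lb.route("10.0.0.5"))  # node1
print(lb.route("10.0.0.6"))  # node2
print(lb.route("10.0.0.5"))  # node1 (persistent: same client, same node)
```

Note that, as stated above, this distributes by client connection count only; it is blind to the device connections each client subsequently opens.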

 

Other than client connections, no other types of connections through port 443 are load balanced by the built-in VIP. Internal cluster communication, A2A password requests, and API or CLI commands each spawn a new xcd_spfd child process, but these are not redirected to any other node in the cluster site. Because these additional communications remain on the node holding the VIP, the primary cluster node may already carry a higher client load than the other nodes.