Overview of Check Point Load Sharing vs. High Availability

Article ID: 167891

Products

XOS

Issue/Introduction

Overview of Check Point Load Sharing vs. High Availability

How does Load Sharing compare to High Availability?

Cause

When using a third-party High Availability or Load Sharing solution, the firewall cluster behaves differently depending on whether High Availability or Load Sharing is chosen. The differences are as follows:

1) NAT port distribution

When choosing Load Sharing, the FireWall will automatically turn on the NAT Port Distribution mechanism.

Consider a two-node cluster with an external virtual address. This address is configured as the Hide NAT address behind which all connections from network A are hidden. A situation can occur in which two clients from the same network access the same web server, and each client's connection is routed through a different cluster node. Both nodes can then assign the same source IP address to their connection. The returning packets from the server will therefore arrive at the firewall with identical source and destination IPs and ports, making it difficult for the third-party load sharing application to decide which cluster node should receive which connection.

The NAT Port Distribution mechanism solves this problem by assigning a specific range of ports to each cluster node (for example, node A: 10,000 to 11,000; node B: 11,000 to 12,000). In the above case, the returning packet for each connection will have a different destination port, making the third-party application's decision easier.
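The following is a minimal sketch of the idea, assuming illustrative node names and port ranges (these are not Check Point's actual values and this is not its implementation): each member draws Hide NAT source ports only from its own range, so the destination port of a return packet identifies the member that owns the connection.

# Sketch of the NAT Port Distribution idea (illustrative values only).
PORT_RANGES = {
    "node_A": range(10000, 11000),
    "node_B": range(11000, 12000),
}

def allocate_hide_nat_port(node, used_ports):
    """Pick a free Hide NAT source port from the range reserved for this node."""
    for port in PORT_RANGES[node]:
        if port not in used_ports:
            used_ports.add(port)
            return port
    raise RuntimeError(f"{node} has exhausted its NAT port range")

def node_for_return_packet(dest_port):
    """The return packet's destination port identifies which node hid the connection."""
    for node, ports in PORT_RANGES.items():
        if dest_port in ports:
            return node
    return None

# Each node hides a client connection behind the same virtual IP, but with a
# source port taken from its own range, so return traffic is unambiguous.
used = set()
port_a = allocate_hide_nat_port("node_A", used)   # e.g. 10000
port_b = allocate_hide_nat_port("node_B", used)   # e.g. 11000
assert node_for_return_packet(port_a) == "node_A"
assert node_for_return_packet(port_b) == "node_B"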

2) Cluster Fold

In cases where a client contacts the FireWall's external virtual address (for example, a VPN Client), the connection has to be folded to one of the nodes' physical external addresses. When Load Sharing is checked in the High Availability tab, the Cluster Fold mechanism is turned on, so the FireWall knows that such a connection needs to be folded to the relevant node.
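As a rough illustration of the folding step (the addresses below are made up for the example, and this is not Check Point's implementation), a connection addressed to the cluster's virtual IP is mapped onto the physical external address of the member that handles it:

# Sketch of the Cluster Fold idea (illustrative addresses only).
VIRTUAL_IP = "192.0.2.10"            # the cluster's external virtual address
PHYSICAL_IP = {
    "node_A": "192.0.2.11",
    "node_B": "192.0.2.12",
}

def fold_connection(dest_ip, handling_node):
    """If the client addressed the virtual IP, fold the connection onto the
    physical external address of the member that is actually handling it."""
    if dest_ip == VIRTUAL_IP:
        return PHYSICAL_IP[handling_node]
    return dest_ip

# A VPN client that connected to the virtual address terminates on node B's
# own external address when node B handles the connection.
print(fold_connection("192.0.2.10", "node_B"))   # -> 192.0.2.12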

3) Flush and ACK

There are cases in which a packet from the client to the server is handled on member A while the return packet from the server back to the client is handled on member B. This usually happens with encrypted connections, with static NAT configurations, or in other asymmetric cases. When this mechanism is enabled, member A adds an extra synchronization step and waits for an acknowledgement before sending the packet to the server. This guarantees that the connection information arrives at member B before the return packet from the server does. This process, called "Flush and ACK", is performed only when there is reason to believe that the synchronization information is needed to handle the return packet correctly; the most typical case is the first packet of a TCP connection (the SYN). The process adds some latency but is essential.
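The ordering guarantee can be pictured with a small sketch, assuming simplified member objects and an invented connection tuple (this is only an illustration of the idea, not Check Point's synchronization code): the member that sees the SYN pushes the new connection entry to its peer and waits for the acknowledgement before forwarding the SYN to the server.

# Sketch of the "Flush and ACK" ordering guarantee (illustrative only).
class Member:
    def __init__(self, name):
        self.name = name
        self.connections = set()

    def receive_sync(self, conn):
        """Install the synced connection entry and acknowledge it."""
        self.connections.add(conn)
        return "ACK"

def handle_syn(conn, local, peer):
    """Member 'local' handles the client's SYN. Before forwarding it to the
    server, it flushes the new connection entry to 'peer' and waits for the
    acknowledgement, so the peer can match the server's return packet."""
    local.connections.add(conn)
    ack = peer.receive_sync(conn)     # flush the connection entry ...
    assert ack == "ACK"               # ... and wait for the acknowledgement
    return "SYN forwarded to server"

member_a = Member("A")
member_b = Member("B")
conn = ("10.0.0.5", 33000, "203.0.113.7", 443)   # client ip/port, server ip/port (made up)
handle_syn(conn, member_a, member_b)
# When the server's reply reaches member B, the connection is already known there.
assert conn in member_b.connections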

To enable this feature, check Load Sharing in the High Availability tab of the cluster node object.

In FireWall-1 NG FP3, you will then need to change the 'use_limited_Flushnack' property in objects_5_0.C to "true". Use Check Point's dbedit utility to do this.

In FireWall-1 NG with Application Intelligence, this can be done through SmartDashboard in the High Availability tab of the cluster node object, by checking "Support non-Sticky connections".

In cases where the third-party application ensures that a connection is always routed through the same node (that is, connections are sticky), this feature is not needed.