Denial of Service Mitigation

Article ID: 42873


Updated On:

Products

STARTER PACK-7 CA Rapid App Security CA API Gateway

Issue/Introduction

Solution

The SecureSpan Gateway has several mechanisms that address this scenario. The concurrency settings (httpMaxConcurrency and httpCoreConcurrency) ensure that even if the Gateway is flooded with connections, the on-box resources cannot be overtaxed to the point of crashing the node. Additional protection comes from a stale connection process that releases unused connections at a set interval, private thread pools configurable per listener, rate limiting, and IP range restriction. All of these work within the cluster so that the configured levels are respected on each node.

By default, the stale connection levels are set as follows:
- io.staleCheckCount: number of stale connections that the SSG will check for every 5 seconds (default 1)
- io.staleCheckHosts: number of stale hosts that the SSG will check for every 5 seconds (default 10)
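To put these settings in one place, the following is an illustrative sketch only: the property names are shown as referenced above (on some Gateway versions the concurrency properties carry an io. prefix), the values are examples rather than defaults, and both should be verified against the documentation for your version before being applied through the cluster-wide properties dialog in the Policy Manager.

httpMaxConcurrency = 500     # example ceiling on concurrent request-processing threads
httpCoreConcurrency = 250    # example number of threads kept ready under normal load
io.staleCheckCount = 1       # stale connections checked every 5 seconds (default noted above)
io.staleCheckHosts = 10      # stale hosts checked every 5 seconds (default noted above)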

Within policy, we recommend that the first thing checked be a rate limit to control the number of requests from any one source, which in turn can be controlled by IP range limits for known and unknown entities. In addition, version 5.2 SP2 and higher include caching functionality that can be used to track events such as failed authentication requests, which can then inform decisions on new incoming connections. The caching assertion is bound per node in the cluster, but in most instances connections from a specific client IP will continue to use the same node across consecutive requests. A simplified ordering is sketched below.
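The outline below is a rough sketch of that ordering, not a definitive policy; the assertion names are approximate and may differ slightly between Gateway versions, and the cache keys and limits are assumptions you would tailor to your own services.

1. Restrict Access to IP Address Range   # allow or deny known ranges before doing any other work
2. Apply Rate Limit                      # cap the requests per second accepted from any one source
3. Look Up in Cache                      # check whether this client IP has recently failed authentication
4. Authenticate and route as usual
5. Store to Cache (on authentication failure)   # record the failure so later requests can be rejected early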

Aside from that, built-in mechanisms can also limit the size and composition of a message to prevent this (see io.xmlPartMaxBytes, XML Schema Validation, and SOAP Service WSDL Validation). Connections carrying messages that do not meet these criteria are dropped immediately.
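For example, the per-part size ceiling could be lowered with a cluster-wide property along the following lines; the value is illustrative only, and the exact property name, units, and default should be confirmed against the documentation for your Gateway version.

io.xmlPartMaxBytes = 5242880   # example: reject any message part larger than roughly 5 MB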

When leveraging these items together, you should be able to protect against both size-based and concurrency-based denial-of-service attacks.

On top of those items, SSL with mutual authentication, or any other type of user authentication, should assist at the application level as well.

If you would like, you can go a level lower and fight this sort of scenario with mitigation methods such as lowering the number of queued connections once maxConcurrency is reached, by modifying the advanced listener port properties (example: acceptCount=100), or using iptables to drop connections above a given count from any one host (example: iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 100 -j DROP). This can also be heavily mitigated through delayed binding or TCP splicing on a hardware load balancer.
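As a minimal sketch of the network-level option above, assuming a standard iptables setup on a RHEL-based Gateway appliance; adjust the port to the listen port actually in use (for example 8080 or 8443 on a default installation) and the connection limit to your own requirements.

# Lower the listener backlog via the advanced listen port properties, e.g. acceptCount=100,
# then drop new connections from any single host that already holds more than 100
# open connections to the listener port:
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 100 -j DROP
# Persist the rule across restarts on a RHEL-based appliance:
service iptables save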

The main issue here is security versus accessibility: determining internal requirements that balance ease of access against how far you want to go to combat abuse, from both a resource-usage and an end-user "hoops to jump through" perspective. Regarding internal versus external configuration, you can create new listen ports for internal access (leveraging the pre-existing Ethernet configuration), restrict their interface properties to allow incoming traffic only from a single network segment, and adjust them as you see fit for internal versus external policy.

Environment

Release:
Component: APIGTW