This article discusses how read and connect timeouts work at the network level, and what is observed at the Gateway application level, because the use of timeouts can cause confusion and create misguided expectations. It focuses on the Route via HTTP(S) assertion specifically, but the concepts generally apply to any assertion that uses a timeout value. This includes CWPs (cluster-wide properties) as well, such as io.outTimeout (read timeout) and io.outConnectionTimeout (connect timeout).
This article applies to all supported API Gateway appliances, but it can also apply to unsupported ones, as these are network-level concepts rather than application-level ones.
As an example, let's say an admin of the API Gateway wants a connection to end after 3 seconds if the backend either fails to connect or takes too long to respond once the connection is established. This is generally possible for the connect timeout, but not for the read timeout.
The reason for this is that the read timeout resets on each packet received from the backend. Once the connection is established, the connect timeout no longer applies and the read timeout takes over. Because the read timeout counter resets on every packet received, a backend that responds slowly, but quickly enough to stay under the 3-second read timeout in this example, will keep the connection alive much longer than the 3 seconds that was set.
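This per-read behavior can be demonstrated with a plain TCP socket. The sketch below uses a hypothetical slow backend (not a Gateway component) that sends one byte every 0.3 seconds; even with a 1-second read timeout, the connection survives roughly 1.5 seconds, because each individual read completes well within the timeout window.

```python
import socket
import threading
import time

def slow_backend(server_sock):
    """Toy backend: responds slowly but steadily, one byte every 0.3s."""
    conn, _ = server_sock.accept()
    for _ in range(5):
        time.sleep(0.3)
        conn.sendall(b"x")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=slow_backend, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname(), timeout=1.0)
client.settimeout(1.0)  # "read timeout": applies to EACH recv, not the whole exchange

start = time.monotonic()
received = b""
while len(received) < 5:
    received += client.recv(1)  # the timeout counter effectively restarts per packet
elapsed = time.monotonic() - start
client.close()

# The full exchange takes ~1.5s even though the read timeout is 1s,
# because no single recv ever had to wait longer than ~0.3s.
print(f"received {received!r} in {elapsed:.2f}s")
```

The same principle applies to the Gateway's read timeout: it bounds the gap between packets, not the total duration of the response.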
There is no way to force a connection to end after a fixed amount of time; TCP/IP was simply not designed with such an overall deadline. Some applications implement one themselves, but it is not part of the specification, and the API Gateway adheres as closely as possible to the RFCs it is built upon. Unfortunately, this mix of standardized RFC behavior and individual applications adding their own handling creates confusion about how connect and read timeouts are really supposed to work at the TCP/IP level.
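For illustration, the sketch below shows one way an application could impose an overall deadline on top of TCP by shrinking the per-read timeout toward a total budget. This is a hypothetical helper for demonstration only, not a Gateway feature, and it reflects the kind of application-specific handling described above rather than anything in the TCP specification.

```python
import socket
import time

def recv_with_deadline(sock, nbytes, deadline_seconds):
    """Read up to nbytes, aborting once TOTAL elapsed time exceeds the deadline.

    Unlike a plain read timeout, the budget here does not reset per packet:
    each recv is only allowed whatever time remains of the overall deadline.
    """
    start = time.monotonic()
    data = b""
    while len(data) < nbytes:
        remaining = deadline_seconds - (time.monotonic() - start)
        if remaining <= 0:
            raise TimeoutError("overall deadline exceeded")
        sock.settimeout(remaining)  # shrink the per-read timeout toward the deadline
        try:
            chunk = sock.recv(nbytes - len(data))
        except socket.timeout:
            raise TimeoutError("overall deadline exceeded") from None
        if not chunk:
            break  # peer closed the connection
        data += chunk
    return data
```

A fast backend completes normally, while a stalled one raises TimeoutError once the total budget is spent, regardless of how the individual reads are paced.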
If a backend is responding slowly, but quickly enough to reset the timeout counter with each packet, then the focus should be on why the backend is responding so slowly rather than on forcing the Gateway to end the connections. A temporary workaround is to lower the timeout values while the backend is performing poorly, but this is not generally recommended.