When using the Cloud/SaaS offering, multi-continent companies tend to deploy monitoring agents at different locations, often spread across continents, all reporting to the same central collector.
In such an environment, the network path between an agent on one continent and the collector on another may not allow the agent to send its collected data fast enough. The outgoing data queue then stacks up on the agent's local disk: near real-time data is no longer visible on the analysis side, and the sender can eventually run into a full-disk condition.
The impacting factors to take into account:
- Latency: communication links between two sites located on different continents are typically subject to high round-trip times (RTT).
- Bandwidth-Delay Product (BDP): the amount of data that can be in transit in the network, i.e. the product of the available bandwidth and the latency (RTT).
- Packet size: the allowed packet size (MTU) and the size of the payload to be transmitted.
Once all these elements have been gathered, the achievable throughput between two locations can easily be computed.
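As a quick sanity check, the BDP can be computed in a few lines of Python. This is a minimal sketch using the same link figures as the example below (4 Gbit/s, 280 ms RTT):

```python
# Bandwidth-Delay Product: how much data can be "in flight" on the link.
bandwidth_bps = 4000 * 10**6   # 4 Gbit/s link
rtt_s = 0.280                  # 280 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s
print(f"BDP: {bdp_bits:.0f} bit ({bdp_bits / 8 / 10**6:.0f} MByte)")
# A TCP receive window smaller than BDP/8 bytes cannot keep the pipe full.
```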
The website https://wintelguy.com/wanperf.pl performs this calculation from the provided elements.
This tool estimates TCP throughput and file transfer time based on network link characteristics and TCP/IP parameters.
Link bandwidth (Mbit/s): 4000
RTT (millisecond): 280
Packet loss (%): 0.0001
MTU (Byte): 1448
L1/L2 frame overhead (Byte): 38
TCP/IP (v4) header overhead (Byte): 40
TCP window (RWND) size (Byte): 66560
File size (MByte): 0.0004
The previous example uses a 4 Gbps link with an MTU of 1448 bytes, an average payload size of 4 KB, and a TCP sliding window configuration allowing at most 65 KB (66560 bytes).
Results:
Link bandwidth (Mbit/s): 4000
Max achievable TCP throughput limited by TCP overhead (Mbit/s): 3790.0404
Bandwidth-Delay Product (BDP) (bit): 1120000000
Minimum required TCP RWND (Byte): 140000000
Max TCP throughput limited by packet loss (Mathis et.al. formula) (Mbit/s): 40.228571
Max TCP throughput limited by TCP RWND (Mbit/s): 1.901714
Expected maximum TCP throughput (Mbit/s): 1.901714
Minimum transfer time for a 0.0004 MByte file (d:h:m:s): 0:00:00:00
In this specific case, the expected throughput under best conditions would be ~1.9 Mbit/s.
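The two limiting values in the results above can be reproduced directly: the Mathis et al. formula bounds throughput by packet loss, and the window/RTT bound limits it to one receive window per round trip. A minimal Python sketch with the same inputs:

```python
import math

mss_bytes = 1448 - 40      # MTU minus TCP/IP (v4) header overhead
rtt_s = 0.280              # round-trip time in seconds
loss = 0.0001 / 100        # 0.0001 % packet loss as a fraction
rwnd_bytes = 66560         # configured TCP receive window

# Mathis et al.: throughput <= MSS / (RTT * sqrt(loss))
mathis_bps = mss_bytes * 8 / (rtt_s * math.sqrt(loss))

# Window-limited: at most one full window per round trip
rwnd_bps = rwnd_bytes * 8 / rtt_s

print(f"Loss-limited:   {mathis_bps / 10**6:.6f} Mbit/s")  # ~40.23
print(f"Window-limited: {rwnd_bps / 10**6:.6f} Mbit/s")    # ~1.90
# The expected maximum throughput is the smaller of the two bounds.
```

Here the window limit dominates by a wide margin, which is why the next step targets the TCP window size.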
Reduce the number of ACKs that must be sent back by increasing the TCP window size, allowing the sliding window to better optimize the data transfers.
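To size the window for a given goal, the same bound can be inverted: required window = target throughput x RTT / 8. A small sketch (the 100 Mbit/s target is illustrative, not a recommendation):

```python
# Receive window needed to sustain a target throughput at a given RTT.
target_bps = 100 * 10**6   # hypothetical 100 Mbit/s goal
rtt_s = 0.280              # round-trip time in seconds

rwnd_bytes = target_bps * rtt_s / 8
print(f"Required TCP window: {rwnd_bytes / 1024:.0f} KiB")
```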
The following data will be required:
Note: we assume all devices between sender and receiver honor the DF (Don't Fragment) flag.
localserver ~# ping remoteserver
PING remoteserver (<IP>) 56(84) bytes of data.
64 bytes from remoteserver (<IP>): icmp_seq=1 ttl=53 time=281 ms
64 bytes from remoteserver (<IP>): icmp_seq=2 ttl=53 time=282 ms
64 bytes from remoteserver (<IP>): icmp_seq=3 ttl=53 time=281 ms
64 bytes from remoteserver (<IP>): icmp_seq=4 ttl=53 time=281 ms
So we have an average ping RTT of ~281 ms.
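The measured RTT can be combined with the 65536-byte TCP window observed in the capture that follows to estimate the effective throughput ceiling on this path (a minimal sketch; values taken from the outputs in this article):

```python
# Throughput ceiling implied by the measured RTT (~281 ms) and the
# 65536-byte receive window seen in the packet capture.
rtt_s = 0.281
window_bytes = 65536

ceiling_bps = window_bytes * 8 / rtt_s
print(f"Window/RTT ceiling: {ceiling_bps / 10**6:.2f} Mbit/s")  # ~1.87
```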
No. Time Source Destination St SrcPt DstPt MSS W.Size Sc Seq. TTL
5480 2022-08-22 14:25:32,653342 <IP> <IP> 6 38182 61619 1448 65536 256 3220610 53
5481 2022-08-22 14:25:32,653348 <IP> <IP> 6 38182 61619 1448 65536 256 3222058 53
5482 2022-08-22 14:25:32,653349 <IP> <IP> 6 61619 38182 44 65536 256 95703 64
5483 2022-08-22 14:25:32,653350 <IP> <IP> 6 38182 61619 1448 65536 256 3223506 53
5484 2022-08-22 14:25:32,653379 <IP> <IP> 6 61619 38182 0 63488 256 95747 64
5485 2022-08-22 14:25:32,653395 <IP> <IP> 6 61619 38182 44 63488 256 95747 64
5486 2022-08-22 14:25:32,653418 <IP> <IP> 6 61619 38182 0 61696 256 95791 64
5487 2022-08-22 14:25:32,653488 <IP> <IP> 6 61619 38182 44 65536 256 95791 64
5488 2022-08-22 14:25:32,653540 <IP> <IP> 6 61619 38182 44 65536 256 95835 64
5489 2022-08-22 14:25:32,653682 <IP> <IP> 6 61619 38182 44 65536 256 95879 64
5490 2022-08-22 14:25:32,692866 <IP> <IP> 6 38182 61619 0 65536 256 3224954 53
5491 2022-08-22 14:25:32,934546 <IP> <IP> 6 38182 61619 1448 65536 256 3224954 53
In the previous packet capture: