Define network throughput between two distant locations


Article ID: 249285


Products

CA Performance Management - Usage and Administration

Issue/Introduction

With Cloud/SaaS deployments, companies operating on multiple continents tend to add monitoring agents at different locations, sometimes spread across continents, all reporting to the same central collector.


In such a Cloud/SaaS environment, with agents deployed on different continents, the network characteristics can prevent an agent from sending its collected data to the collector fast enough. The outbound data queue then accumulates on the local disk: near real-time data is no longer available on the analysis side, and the backlog can eventually cause a full-disk condition on the sender side.

Environment

Communication link subject to high latency, as often seen between two sites located on different continents.

Cause

The Bandwidth-Delay Product (BDP) determines the amount of data that can be in transit on the network at any given time. It is the product of the available bandwidth and the latency (RTT).
The allowed packet size, as well as the size of the payload to be transmitted, also has an impact.

The factors to take into account:

  1. Available link speed
  2. MTU/MSS in use
  3. Size of the transferred payload
  4. TCP window settings (sliding window)
  5. Average RTT (Round Trip Time) latency from A to B, in ms

Once all these elements have been gathered, the available throughput between the two locations can be computed.
The website https://wintelguy.com/wanperf.pl performs the calculation from these elements.
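The BDP itself is a one-line calculation. A minimal Python sketch, using the same illustrative values as the example below (4 Gbit/s link, 280 ms RTT):

```python
# Bandwidth-Delay Product: the amount of data "in flight" on the path.
# Illustrative values matching the example below: 4 Gbit/s link, 280 ms RTT.
link_bps = 4000 * 1_000_000   # available link speed in bit/s
rtt_s = 0.280                 # round-trip time in seconds

bdp_bits = link_bps * rtt_s   # bits in transit when the link is kept full
print(int(bdp_bits))          # 1120000000 bit
```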


This tool estimates TCP throughput and file transfer time based on network link characteristics and TCP/IP parameters.

Link bandwidth (Mbit/s):             4000
RTT (millisecond):                    280
Packet loss (%):                        0.0001
MTU (Byte):                          1448
L1/L2 frame overhead (Byte):           38
TCP/IP (v4) header overhead (Byte):    40
TCP window (RWND) size (Byte):      66560
File size (MByte):                      0.0004



The previous example uses a 4 Gbit/s link with an MTU of 1448 bytes, an average payload size of 0.0004 MByte (~400 bytes) and a TCP sliding window configuration allowing at most 65 KB (66560 bytes).

Results:

Link bandwidth (Mbit/s):                                                           4000
Max achievable TCP throughput limited by TCP overhead (Mbit/s):                    3790.0404
Bandwidth-Delay Product (BDP) (bit):                                         1120000000
Minimum required TCP RWND (Byte):                                             140000000
Max TCP throughput limited by packet loss (Mathis et.al. formula) (Mbit/s):          40.228571
Max TCP throughput limited by TCP RWND (Mbit/s):                                      1.901714
Expected maximum TCP throughput (Mbit/s):                                             1.901714
Minimum transfer time for a 0.0004 MByte file (d:h:m:s):                     0:00:00:00

In this specific case, the expected throughput under best conditions would be ~1.9 Mbit/s.
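The two limiting figures from the calculator can be reproduced by hand. A short Python sketch, assuming the calculator derives the MSS as MTU minus 40 bytes of TCP/IPv4 header overhead:

```python
import math

# Reproduce the calculator's limiting figures (illustrative values from the example above).
mtu = 1448               # bytes, from the capture
overhead = 40            # TCP + IPv4 header bytes (assumption: calculator uses MSS = MTU - 40)
mss = mtu - overhead     # 1408 bytes of payload per segment
rtt_s = 0.280            # round-trip time in seconds
loss = 0.0001 / 100      # 0.0001 % packet loss, as a fraction
rwnd = 66560             # bytes, TCP receive window

# Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(loss))
mathis_bps = mss * 8 / (rtt_s * math.sqrt(loss))
# Window limit: at most one full window can be sent per round trip
rwnd_bps = rwnd * 8 / rtt_s

print(round(mathis_bps / 1e6, 6))  # 40.228571 Mbit/s
print(round(rwnd_bps / 1e6, 6))    # 1.901714 Mbit/s
```

The smaller of the two limits wins: here the 65 KB window, not packet loss, is the bottleneck.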

 

 

Resolution

Reduce the number of ACKs that must be sent back by increasing the TCP window size, allowing the sliding window to optimize the data transfers.
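The window size to aim for is the BDP expressed in bytes, i.e. the "Minimum required TCP RWND" from the calculation above. A sketch with the illustrative 4 Gbit/s / 280 ms values:

```python
# Window needed to keep the link full: the BDP expressed in bytes.
# Illustrative values: 4 Gbit/s link, 280 ms RTT (as in the example above).
link_bps = 4000 * 1_000_000   # link speed in bit/s
rtt_s = 0.280                 # round-trip time in seconds

required_rwnd = link_bps * rtt_s / 8   # bytes that must fit in one round trip
print(int(required_rwnd))              # 140000000 Byte
```

Note that window sizes above 65535 bytes require the TCP window scale option (RFC 7323) to be negotiated by both peers; how the window is actually tuned (OS settings, socket buffer sizes) depends on the operating systems involved.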

Additional Information

Gather the required data

The following data will be required:

  • RTT: ping to the remote site
  • Window size and MTU: analyze a packet capture taken on the interface for the traffic between both peers
  • Payload size: can only be determined on unencrypted pathways, so any way to gather this information is suitable
  • The "official" link speed between the two sites as per the contract/telco. AppNeta can also be used, as it is able to determine the end-to-end link speed; this, however, requires the installation of two Monitoring Points, one at each endpoint.

Note: we assume all devices between sender and receiver honor the DF (Don't Fragment) flag.

Ping to a remote location

localserver ~# ping remoteserver
PING remoteserver (<IP>) 56(84) bytes of data.
64 bytes from remoteserver (<IP>): icmp_seq=1 ttl=53 time=281 ms
64 bytes from remoteserver (<IP>): icmp_seq=2 ttl=53 time=282 ms
64 bytes from remoteserver (<IP>): icmp_seq=3 ttl=53 time=281 ms
64 bytes from remoteserver (<IP>): icmp_seq=4 ttl=53 time=281 ms

So we have an average ping RTT of about 281 ms.

Packet capture between both peers

No.   Time                        Source  Destination  St  SrcPt  DstPt  MSS   W.Size  Sc   Seq.     TTL
5480  2022-08-22 14:25:32,653342  <IP>    <IP>         6   38182  61619  1448  65536   256  3220610  53
5481  2022-08-22 14:25:32,653348  <IP>    <IP>         6   38182  61619  1448  65536   256  3222058  53
5482  2022-08-22 14:25:32,653349  <IP>    <IP>         6   61619  38182  44    65536   256  95703    64
5483  2022-08-22 14:25:32,653350  <IP>    <IP>         6   38182  61619  1448  65536   256  3223506  53
5484  2022-08-22 14:25:32,653379  <IP>    <IP>         6   61619  38182  0     63488   256  95747    64
5485  2022-08-22 14:25:32,653395  <IP>    <IP>         6   61619  38182  44    63488   256  95747    64
5486  2022-08-22 14:25:32,653418  <IP>    <IP>         6   61619  38182  0     61696   256  95791    64
5487  2022-08-22 14:25:32,653488  <IP>    <IP>         6   61619  38182  44    65536   256  95791    64
5488  2022-08-22 14:25:32,653540  <IP>    <IP>         6   61619  38182  44    65536   256  95835    64
5489  2022-08-22 14:25:32,653682  <IP>    <IP>         6   61619  38182  44    65536   256  95879    64
5490  2022-08-22 14:25:32,692866  <IP>    <IP>         6   38182  61619  0     65536   256  3224954  53
5491  2022-08-22 14:25:32,934546  <IP>    <IP>         6   38182  61619  1448  65536   256  3224954  53


In the previous packet capture:

  • The maximum MSS/MTU in use can be read in the MSS column (Wireshark field: tcp.len). Maximum seen: 1448.
  • The window size can be read in the W.Size column (Wireshark field: tcp.window_size). Maximum seen: 65536.
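Plugging the captured values into the window-limit formula gives the expected throughput for this path. A sketch using the 65536-byte window from the capture and the ~281 ms measured RTT:

```python
# Expected throughput for this path, limited by the observed TCP window.
rwnd = 65536     # bytes, max window seen in the capture
rtt_s = 0.281    # measured average ping RTT in seconds

est_mbps = rwnd * 8 / rtt_s / 1e6   # one window per round trip, in Mbit/s
print(round(est_mbps, 2))           # 1.87 Mbit/s
```

This lines up with the ~1.9 Mbit/s figure produced by the calculator above.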