Predicting TCP throughput

Andrew Smith andrew.william.smith at gmail.com
Thu May 28 05:56:29 UTC 2015


You need to account for the window size as well. You should also account for
the specifics of the TCP stack implementation you are dealing with if you
truly need a deterministic result.
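
If it helps to make that concrete, here is a rough back-of-the-envelope
sketch (Python). The window size and MSS are assumed values, since they were
not given; the RTT and loss figures are the ones from the question. It takes
the minimum of the window-per-RTT limit and the Mathis et al. loss-based
formula, so it is only a steady-state bound, not a model of any particular
stack: expect real numbers to move with the congestion control algorithm,
buffer auto-tuning, and competing traffic.

# Back-of-the-envelope TCP throughput estimate: the achievable rate is
# roughly the minimum of what the window allows per RTT and what steady
# loss allows under the Mathis et al. model.  Parameter values below are
# illustrative assumptions, not measurements.
import math


def window_limited_bps(window_bytes, rtt_s):
    # At most one full window can be in flight per round trip.
    return window_bytes * 8.0 / rtt_s


def mathis_limited_bps(mss_bytes, rtt_s, loss_rate):
    # Mathis/Semke/Mahdavi/Ott steady-state estimate: (MSS/RTT) * C/sqrt(p),
    # with C = sqrt(3/2) for the simple Reno-style derivation.
    c = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8.0 / rtt_s) * c / math.sqrt(loss_rate)


def estimate_throughput_bps(window_bytes, mss_bytes, rtt_s, loss_rate):
    return min(window_limited_bps(window_bytes, rtt_s),
               mathis_limited_bps(mss_bytes, rtt_s, loss_rate))


if __name__ == "__main__":
    # Figures from the question (150 ms RTT, 0.01% loss) plus an assumed
    # 1460-byte MSS and a 4 MB window for a tuned long-fat-pipe path.
    est = estimate_throughput_bps(window_bytes=4 * 1024 * 1024,
                                  mss_bytes=1460,
                                  rtt_s=0.150,
                                  loss_rate=0.0001)
    print("rough estimate: %.1f Mbit/s" % (est / 1e6))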

On Wed, May 27, 2015 at 8:15 PM, Glen Kent <glen.kent at gmail.com> wrote:

> Hi,
>
> I am looking at deterministic ways (perhaps employing data science) to
> predict the TCP throughput that I can expect between two endpoints. I am
> using the latency (RTT) and the packet loss as the parameters. Is there
> anything else that I can use to predict the throughput?
>
> A related question to this is:
>
> If I see an RTT of 150ms and packet loss of 0.01% between points A and B,
> and the maximum throughput between them is, say, 250Mbps, then can I say
> that I will *always* get the same (or ballpark) throughput no matter what
> time of day I run these tests?
>
> My points A and B can be virtual machines spawned in two different data
> centers, say Amazon Virginia and Amazon Tokyo. So we're talking about long
> distances here.
>
> What else besides the RTT and packet loss can affect my TCP throughput
> between two endpoints? I am assuming that the effects of a virtual machine
> overload would have a direct bearing on the RTT and packet loss, and hence
> should cancel out. What I mean by this is that if a VM is busy, that might
> induce larger losses and increased RTT, and that would affect my TCP
> throughput. But I already know what TCP throughput I get for a given RTT
> and loss, and hence should be able to predict it.
>
> Is there something that I am missing here?
>
> Thanks, Glen
>


