10g residential CPE
baldur.norddahl at gmail.com
Sat Dec 26 19:02:16 UTC 2020
On Sat, Dec 26, 2020 at 7:28 PM Mikael Abrahamsson <swmike at swm.pp.se> wrote:
> On Sat, 26 Dec 2020, Baldur Norddahl wrote:
> > I demonstrated that it is about buffers by showing the same download
> > from a server that paces the traffic indeed gets the full 930 Mbps with
> > exactly the same settings, including starting window size, and the same
> > path (Copenhagen to Stockholm).
> You demonstrated that it's about which TCP algorithm they use, probably.
All (virtual) machines used in the experiment are identical: they are NLNOG
RING managed machines, all running the exact same Ubuntu 16.04.7 LTS.
If you have access to the NLNOG RING or an equivalent setup, you should try
the experiment yourself. You will find that as latency increases, TCP
throughput goes down, and this cannot be explained by congestion. You will
also find that some servers show this effect much less than others, and that
those servers usually have 1G network interfaces. The effect is the same no
matter what time of day you try it (i.e. it is not congestion related).
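The latency dependence described above is what you would expect from the
bandwidth-delay product: a flow limited by a fixed buffer or window can keep
at most one window of data in flight per round trip, so its ceiling is
window / RTT. A minimal sketch of that arithmetic (the window size and RTT
values below are illustrative assumptions, not measurements from the
experiment):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for a window/buffer-limited TCP flow:
    at most one window of data can be in flight per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# Illustrative: a 1 MB effective window at increasing RTTs.
window = 1_000_000  # bytes
for rtt in (5, 10, 20, 40):  # milliseconds
    print(f"RTT {rtt:2d} ms -> {max_throughput_mbps(window, rtt):7.1f} Mbps")
```

The ceiling halves every time the RTT doubles, which matches the pattern of
throughput falling with distance even on an uncongested path.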
Before you panic: I am not advocating for more buffers. We need smart
buffers. Bufferbloat is bad, but having no buffers is also bad. A home-made
debloating solution will probably not recover the missing TCP performance I
am describing here, but FQ-CoDel in the ISP switch would probably help a lot.
Or we could have TCP with pacing and that will be widely deployed around
the same time as IPv6.
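For what it is worth, sender-side pacing is already available on Linux via
the fq qdisc (sch_fq), and BBR paces by design; deploying FQ-CoDel inside an
ISP switch is the harder part. A hedged sketch of the host-side knobs (the
interface name eth0 is an assumption, and BBR availability depends on the
kernel build):

```shell
# On the sending server: replace the root qdisc with fq, which paces
# TCP flows instead of releasing bursts at line rate.
tc qdisc replace dev eth0 root fq

# Optionally use BBR, which models the path rather than filling buffers.
sysctl -w net.ipv4.tcp_congestion_control=bbr

# On a CPE/router, fq_codel is the alternative: it keeps queues short
# and isolates flows without requiring any change at the sender.
# tc qdisc replace dev eth0 root fq_codel
```

None of this fixes an undersized buffer in a 10G-to-1G switch hop, but it
reduces how bursty the traffic arriving at that hop is.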