Google's peering, GGC, and congestion management

Patrick W. Gilmore patrick at
Thu Oct 15 20:00:33 UTC 2015

On Oct 15, 2015, at 3:50 PM, Baldur Norddahl <baldur.norddahl at> wrote:
> On 15 October 2015 at 16:35, Patrick W. Gilmore <patrick at> wrote:

>> The 100% number is silly. My guess? They’re at 98%.
>> That is easily do-able because all the traffic is coming from them.
>> Coordinate the HTTPd on each of the servers to serve traffic at X bytes per
>> second, ensure you have enough buffer in the switches for micro-bursts,
>> check the NICs for silliness such as jitter, and so on. It is non-trivial,
>> but definitely solvable.
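Coordinating the servers to serve at X bytes per second, as the quoted paragraph describes, amounts to pacing each HTTPd's output. A minimal token-bucket sketch of that idea — the class name, rate, and burst values are illustrative assumptions, not from any real CDN stack:

```python
import time

class Pacer:
    """Token-bucket pacer: caps output at rate_bps bytes/second while
    allowing bursts of up to `burst` bytes (illustrative values only)."""

    def __init__(self, rate_bps, burst):
        self.rate = rate_bps          # long-term byte rate
        self.burst = burst            # bucket depth; bounds micro-bursts
        self.tokens = burst
        self.last = time.monotonic()

    def send(self, nbytes):
        """Block until nbytes (<= burst) may be sent without exceeding the rate."""
        assert nbytes <= self.burst
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the bucket depth.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough to accumulate the missing tokens.
            time.sleep((nbytes - self.tokens) / self.rate)
```

Each worker would call `send(len(chunk))` before writing a chunk to the socket; sizing `burst` is the knob that trades pacing accuracy against how much switch buffer the micro-bursts consume.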
> You would not need to control the servers to do this. All you need is the
> usual hash function of src+dst ip+port to map sessions into buckets and
> then dynamically compute how big a fraction of the buckets to route through
> a different path.
> A bit surprising that this is not a standard feature on routers.
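The bucket scheme Baldur describes could be sketched roughly as follows — hash the flow tuple into a fixed number of buckets, then steer some fraction of the buckets onto an alternate path. The function names and bucket count are hypothetical, not any router's actual implementation:

```python
import hashlib

NUM_BUCKETS = 256  # illustrative; real ECMP tables vary

def bucket(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Map a flow's tuple to a bucket index; stable for the life of the flow."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_BUCKETS

def choose_path(flow, alt_fraction):
    """Send the first `alt_fraction` of buckets via the alternate path,
    the rest via the primary path."""
    return "alternate" if bucket(*flow) < alt_fraction * NUM_BUCKETS else "primary"
```

Because the hash is deterministic, a given flow stays on one path as long as `alt_fraction` is unchanged; moving the fraction re-homes only the flows in the affected buckets.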

The reason routers do not do that is that what you suggest would not work.

First, you make the incorrect assumption that inbound will never exceed outbound. Almost all CDN nodes have far more capacity between the servers and the router than the router has to the rest of the world. And CDN nodes are probably the least complicated example in large networks. The only way to ensure A < B is to control A or B - and usually A.

Second, the router has no idea how much traffic is coming in at any particular moment. Unless you are willing to move streams mid-flow, you can’t guarantee this will work even if sum(in) < sum(out). Your idea would put Flow N on Port X when the SYN (or SYN/ACK) hits. How do you know how many Mbps that flow will be? You do not, therefore you cannot do it right. And do not say you’ll wait for the first few packets and then move the flow. Flows are not static.
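A toy simulation of the point above — flows get pinned to a port by hash when the SYN arrives, but their eventual size is unknown, so equal flow counts do not mean equal bytes per port. The addresses and the heavy-tailed size distribution are assumed for illustration:

```python
import random
import zlib

random.seed(1)
port_bytes = {"X": 0, "Y": 0}
port_flows = {"X": 0, "Y": 0}

for i in range(1000):
    flow = f"10.0.0.{i}:{49152 + i}->203.0.113.1:443"
    # Port chosen by hash when the SYN hits; never moved afterwards.
    port = "X" if zlib.crc32(flow.encode()) % 2 == 0 else "Y"
    port_flows[port] += 1
    # Flow size only becomes known later: mostly mice, a few elephants
    # (assumed distribution, not measured data).
    size = 1_000_000_000 if random.random() < 0.01 else 10_000
    port_bytes[port] += size

print(port_flows, port_bytes)
```

Because a handful of large flows dominate the totals, the two ports' byte counts can diverge widely even though each carries roughly half the flows — which is why hashing at SYN time cannot guarantee the per-port rates the scheme needs.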

Third…. Actually, since reasons 1 and 2 are each sufficient to show why it doesn’t work, I’m not sure I need to go through the next N reasons. But there are plenty more.


More information about the NANOG mailing list