External BGP Controller for L3 Switch BGP routing

Tore Anderson tore at fud.no
Mon Jan 16 14:53:28 UTC 2017


* Saku Ytti

> On 16 January 2017 at 14:36, Tore Anderson <tore at fud.no> wrote:
>
> > Put it another way, my «Internet facing» interfaces are typically
> > 10GEs with a few (kilo)metres of dark fibre that x-connects into my
> > IP-transit providers' routers sitting in nearby rooms or racks
> > (worst case somewhere else in the same metro area). Is there any
> > reason why I should need deep buffers on those interfaces?  
> 
> Imagine a content network having a 40Gbps connection and a client
> having a 10Gbps connection, with the network between them lossless
> and with an RTT of 200ms. To achieve a 10Gbps rate the receiver needs
> a 10Gbps*200ms = 250MB window; in the worst case a 125MB window could
> grow into a 250MB window, and the sender could send that 125MB as a
> 40Gbps burst.
> This means the port the receiver is attached to needs to store the
> 125MB, as it is only serialising it at 10Gbps. If it cannot store it,
> the window will shrink and the receiver cannot get 10Gbps.
> 
> This is quite a pathological example, but you can try it with much
> less pathological numbers, remembering that the Trident II has 12MB
> of buffers.
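
(For concreteness, here is a quick back-of-the-envelope sketch of that
arithmetic in Python. The 200ms RTT, 10Gbps drain rate and ~12MB
Trident II buffer are the figures from the example above; everything
else is just derived from them.)

# Rough arithmetic for the quoted scenario.
RTT_S = 0.200            # round-trip time from the example
RECEIVER_BPS = 10e9      # rate the port serialises towards the client
TRIDENT2_BUF = 12e6      # approx. shared packet buffer, in bytes

# Bandwidth-delay product: window needed to keep 10Gbps busy over 200ms.
bdp = RECEIVER_BPS * RTT_S / 8
print(f"window needed for 10Gbps at 200ms RTT: {bdp / 1e6:.0f} MB")  # 250 MB

# Worst case from the example: the window doubles from 125MB to 250MB
# and the extra 125MB arrives faster than the port can drain it.
burst = bdp / 2
print(f"worst-case burst to absorb: {burst / 1e6:.0f} MB")           # 125 MB
print(f"Trident II buffer: {TRIDENT2_BUF / 1e6:.0f} MB, i.e. about "
      f"{TRIDENT2_BUF / burst:.0%} of that burst")                   # ~10%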

I totally get why the receiving device needs bigger buffers if it's
going to shuffle that data out another interface at a slower speed.

But when you're a data centre operator you're (usually, anyway) mostly
transmitting data. And you can easily ensure that the interface speed
facing the servers is the same as the interface speed facing the ISP.

So if you consider this typical spine/leaf data centre network topology
(essentially the same one I posted earlier this morning):

(Server) --10GE--> (T2 leaf X) --40GE--> (T2 spine) --40GE-->
(T2 leaf Y) --10GE--> (IP-transit/"the Internet") --10GE--> (Client)

If I understand you correctly, you're saying this is a "suspect"
topology that cannot achieve a 10G transmission rate from server to
client (or from client to server, for that matter) because of the
small buffers on my "T2 leaf Y" switch (i.e., the one with the
Internet-facing interface)?
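
(To put numbers on that scenario, here is a small sketch with assumed
figures: the 40GE spine link, the 10GE Internet-facing port, and a
~12MB Trident II-class buffer on "T2 leaf Y".)

# How fast the queue on leaf Y's 10GE port builds during a burst
# arriving over the 40GE spine link (assumed figures, see above).
ARRIVAL_BPS = 40e9       # leaf-spine link speed
DRAIN_BPS = 10e9         # Internet-facing port speed
BUFFER_BYTES = 12e6      # shared packet buffer on the leaf switch

growth = (ARRIVAL_BPS - DRAIN_BPS) / 8          # queue growth, bytes/s
fill_time = BUFFER_BYTES / growth               # time until drops start
burst_that_fits = ARRIVAL_BPS / 8 * fill_time   # data received in that time
print(f"queue grows at {growth / 1e9:.2f} GB/s")
print(f"buffer full after {fill_time * 1e3:.1f} ms")
print(f"i.e. after roughly {burst_that_fits / 1e6:.0f} MB arriving at 40Gbps")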

If so, would simply replacing "T2 leaf Y" with, say, a Juniper MX or
something else with deeper buffers solve the problem?

Or would it also help to use (4x)10GE instead of 40GE for the links
between the leaf and spine layers, so that there would be no change in
interface speed along the path through the data centre towards the
handoff to the IPT provider?

Tore


