10G switch recommendation

George Bonser gbonser at seven.com
Fri Jan 27 19:48:47 UTC 2012


> -----Original Message-----
> From: Fabien Delmotte 
> Sent: Friday, January 27, 2012 2:20 AM
> To: Grant Ridder
> Cc: nanog list
> Subject: Re: 10G switch recommendation
> 
> I worked for Extreme, and I deployed a lot of X650s (24 10G ports) in
> data center environments. The box is really good.
> In fact, if you use the box at layer 2 it is perfect, BUT DON'T use
> their BGP code; they never understood what BGP is :)
> 
> Regards
> 
> Fabien

A place I worked around 2000-ish was an Extreme shop.  My perception at the time was that they made probably the best layer 2 switch in the world.  I used BGP on the 1i and 5i products.  The problem we had with them came when I asked when they were going to support multipath BGP (as in the maximum-paths command on Cisco / Brocade).  They told me at the time that they had no plans to support that option, it wasn't on the roadmap, and frankly, BGP was not a priority for them, as they were concentrating on layer 2 metro and data center features.
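
For anyone not familiar with it, the knob in question is a one-liner on platforms that support it.  Roughly, on a Cisco-style CLI (the AS number and path count here are just placeholders, not from any particular deployment):

    router bgp 65000
     ! install up to 4 equal-cost eBGP paths instead of a single best path
     maximum-paths 4
     ! some platforms use a separate knob for iBGP multipath
     maximum-paths ibgp 4

Without something like that, the router installs a single best path and all the traffic rides one link no matter how many equal-cost exits you have.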

That meeting resulted in a call to Foundry and the eventual purchase of several BigIron switches.  As the application was just plain IP routing, they worked great.  I haven't used Extreme since, so I can't attest to their current BGP feature set, but my gut feeling is still the same ... great gear at layer 2, but layer 3 seems to be a back-burner priority for them.  I would have no problem using their gear in an office or data center but would have to take a good long look at it for internet peering/transit.

Arista is really good gear, and I use them for 10G aggregation from top-of-rack switches in an application where pods of connectivity are scattered about in various leased cages in a commercial data center.  The TOR switches link to the Aristas in an MLAG configuration, which might look like an "end of row" design.  Those uplink to the core in another bit of space in the data center to keep the number of cross-connects down.  Performance has so far been perfect, not so much as a glitch from those units.  I've also recently deployed them as TOR switches for a 10G cluster of machines and would have chosen TurboIrons if they stacked or had MCT features.  The benefit of the TurboIron, if it will work for you, is the lifetime warranty.  No annual support cost is a huge deal.  Arista was also lagging in layer 3 and IPv6 features the last time I looked at them at layer 3; that might have changed recently.  They had only recently come out with OSPF support on their chassis units.
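
On the MLAG side, the Arista config is pretty minimal: a peer link between the two aggregation switches and one "mlag" statement on each downlink port-channel.  A rough sketch from memory (VLAN, addresses, and names are made up, and exact syntax may vary by EOS release):

    vlan 4094
       trunk group mlag-peer
    interface Port-Channel10
       description peer link to the other Arista
       switchport mode trunk
       switchport trunk group mlag-peer
    interface Vlan4094
       ip address 10.0.0.1/30
    mlag configuration
       domain-id agg-pod1
       local-interface Vlan4094
       peer-address 10.0.0.2
       peer-link Port-Channel10
    interface Port-Channel201
       description downlink to a TOR switch
       mlag 201

The TOR switch just sees a normal LACP port-channel split across the two Aristas, so it doesn't need to know anything about MLAG.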

One question I would have is re: deep buffers.  It wouldn't seem to me to make much difference whether you are buffering on the TOR switch or buffering on the host.  If flow control is giving you problems, maybe you just need more buffering on the host, or maybe you should just let TCP back off a bit and mitigate the congestion using the protocol.  More buffering can sometimes cause more performance problems than it solves, but that depends on the application.  If I have a lot of "fan in", such as several front end hosts talking to a few back end hosts, I generally try to ease that congestion by giving the back end hosts considerably more bandwidth, such as GigE from the front end hosts and 2x10G to the back end servers.  For example, an Intel X520-T2 card with 2x10G RJ-45 ports to a pair of Aristas in an MLAG configuration works pretty well, provided you use the latest Intel driver for the cards.
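
For the host side of that, the knobs I have in mind are the usual Linux ones, roughly like this (interface name and values are placeholders, and whether pause frames help or hurt depends on the workload):

    # see whether the NIC is honoring/sending ethernet pause frames
    ethtool -a eth0

    # turn link-level flow control off and let TCP do the backing off
    ethtool -A eth0 rx off tx off

    # give TCP more room to buffer on the host (min/default/max in bytes)
    sysctl -w net.ipv4.tcp_rmem="4096 262144 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 262144 16777216"
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216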




