MPLS in the campus Network?

Jason Lixfeld jason+nanog at lixfeld.ca
Thu Oct 20 15:12:42 UTC 2016


Hi,

> On Oct 20, 2016, at 9:43 AM, steven brock <ef14019 at gmail.com> wrote:
> 
> Compared to MPLS, an L2 solution with 100 Gb/s interfaces between
> core switches and a 10G connection for each building looks so much
> cheaper. But we worry about future trouble using TRILL, SPB, or other
> technologies, not only the "open" ones, but specifically the proprietary
> ones based on a central controller and lots of magic (some colleagues feel
> the debugging nightmares are guaranteed).

From my perspective, in this day and age, no service provider or campus should really be using any sort of layer 2 protection mechanism in their backbone, if they can help it.

> If you had to make such a choice recently, did you choose an MPLS design
> even at lower speed ?

Yup.  5 or so years ago, and never looked back.  Actually, this was in conjunction with upgrading our 1G backbone to a 10G backbone, so it was an upgrade for us in all senses of the word.

> How would you convince your management that MPLS is the best solution for
> your campus network ?

You already did:

<snip>
> We are not satisfied with the current backbone design; we had our share
> of problems in the past:
> - high CPU load on the core switches due to multiple instances of spanning
> tree slowly converging when a topology change happens (somehow fixed
> with a few instances of MSTP)
> - spanning tree interoperability problems and spurious port blocking
> (fixed by BPDU filtering)
> - loops at the edge and broadcast/multicast storms (fixed with traffic
> limits and port blocking based on thresholds)
> - some small switches at the edge are overloaded with large numbers of
> MAC addresses (fixed by reducing broadcast domain size and subnetting)
> 
> This architecture doesn't feel very solid.
> Even if the service provisioning seems easy from an operational point
> of view (create a VLAN and it is immediately available at any point of the
> L2 backbone), we feel the configuration is not always consistent.
> We have to rely on scripts pushing configuration elements and human
> discipline (and lots of duct-tape, especially for QoS and VRFs).

</snip>

> How would you justify the cost or speed difference ?

It’s only more expensive the more big-vendor products you use.  Sometimes you need to (e.g., boxes with big RIBs/FIBs for the DFZ, or deep buffers), but more and more, people are looking to OCP/white-box switches [1][2].
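
To put that RIB/FIB caveat in rough numbers (both figures below are loose assumptions for illustration, not datasheet values; check the current BGP table size and your ASIC's specs): a full IPv4 DFZ feed is on the order of 600K prefixes, while merchant-silicon FIBs in this class hold somewhere in the low hundreds of thousands of routes, so a white box only flies where you can keep the full table off it.  A quick back-of-the-envelope check in Python:

# Rough sanity check: does a full DFZ table fit in a merchant-silicon FIB?
# Both numbers are assumptions for illustration only.
DFZ_IPV4_ROUTES = 600_000        # approximate global IPv4 table size, late 2016
MERCHANT_FIB_CAPACITY = 200_000  # assumed LPM capacity, Trident II class

if DFZ_IPV4_ROUTES <= MERCHANT_FIB_CAPACITY:
    print("Full table fits: fine to take a full feed on the white box")
else:
    print("Full table does not fit: keep the DFZ off the box "
          "(default route, filtered feed, or MPLS labels only in the core)")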

For example, on the cost side: assuming a BCM Trident II-based board with 48 SFP+ cages and 6 QSFP+ cages, you get a line-rate, MPLS-capable 10G port for $65.  Or, if you’re like me and hate the idea of breakout cables, you’re at about $100/SFP+ cage, at which point the QSFP+ cages are pretty much free.
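
For the curious, here’s that per-port arithmetic spelled out.  The board price below is an assumption backed out of the $65/port figure, not a quote:

# Per-10G-port cost for a 48xSFP+ / 6xQSFP+ Trident II-class white box,
# counted two ways.  BOARD_PRICE is an assumed street price, not a quote.
BOARD_PRICE = 4_700      # USD, assumed
SFP_PLUS_CAGES = 48      # native 10G ports
QSFP_PLUS_CAGES = 6      # 40G ports, each giving 4x10G with a breakout cable

# Counting breakouts, every cage contributes 10G ports:
ports = SFP_PLUS_CAGES + QSFP_PLUS_CAGES * 4                       # 72 ports
print(f"with breakout: ${BOARD_PRICE / ports:.0f} per 10G port")   # about $65

# Ignoring breakouts, amortize the whole board over the SFP+ cages only:
print(f"SFP+ only: ${BOARD_PRICE / SFP_PLUS_CAGES:.0f} per 10G port")  # about $98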

Software-wise, there are lots of vendors.  One that I like is IP Infusion’s OcNOS [3] codebase.  They are putting a lot of resources into building a service provider feature set (full-blown MPLS/VPLS/EVPN, etc.) for OCP switches.  There are others, but the last time I looked, a couple of years ago, they were less focused on MPLS and more on SDN: Cumulus Networks [4], PICA8 [5], Big Switch Networks [6].

> Thanks for your insights!

[1] https://www.linx.net/communications/press-releases/lon2-revolutionary-development
[2] http://www.ipinfusion.com/about/press/london-internet-exchange-use-ip-infusion’s-ocnos™-network-operating-system-new-london-in
[3] http://www.ipinfusion.com/products/ocnos
[4] https://cumulusnetworks.com
[5] http://www.pica8.com
[6] http://www.bigswitch.com

