few big monolithic PEs vs many small PEs

Mark Tinka mark.tinka at seacom.mu
Fri Jun 21 12:26:53 UTC 2019



On 21/Jun/19 10:32, adamv0025 at netconsultings.com wrote:

> Well yes, but if, say, I compare the cost of just a single line card to a standalone fixed-format 1RU router of similar capacity, the card will always be cheaper; then, as I start adding cards on the left-hand side of the equation, things should start to even out gradually (the problem is that this gradual increase is just a theoretical exercise; there are no fixed PE products to step through it with).
> Yes, I can compare an MPC7 with an MX204, or an ASR9901 with some Tomahawk card(s), but that's probably not apples to apples? 

Yes, you can't always do that, because not many vendors create 1U
router versions of their line cards. The MX204 is probably one of the
few that comes reasonably close.
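
As for the "evening out" arithmetic, a toy model makes the crossover
easy to see. Every figure below is a made-up placeholder, not a quote
for any of the boxes mentioned in this thread:

    # Toy CAPEX model: one chassis filled with line cards vs. N fixed
    # 1RU routers of similar per-unit capacity. All prices are
    # hypothetical placeholders, not vendor list prices.

    CHASSIS_COST = 50_000   # empty chassis + REs + fabric (assumed)
    CARD_COST = 30_000      # one line card (assumed)
    FIXED_COST = 40_000     # one fixed-format 1RU router (assumed)

    for n in range(1, 9):   # n units of forwarding capacity
        modular = CHASSIS_COST + n * CARD_COST
        fixed = n * FIXED_COST
        if modular == fixed:
            verdict = "break-even"
        elif modular < fixed:
            verdict = "modular cheaper"
        else:
            verdict = "fixed cheaper"
        print(f"{n} units: {modular:>7,} vs {fixed:>7,} -> {verdict}")

With these made-up numbers, the chassis only starts winning once
enough slots are populated, which is exactly the gradual evening-out
being described; the crossover point moves with whatever real prices
you plug in.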

I'm not sure deciding between an MPC7 line card and an MX204 will be
a meaningful exercise; you need to determine what fits your use-case.
For example, rather than buy MPC7 line cards to support 100Gbps
customers in our MX480's, it is easier to buy an MX10003. That way, we
can keep the MPC2 line cards in the MX480 chassis to support up to N x
10Gbps of customer links (aggregated to an Ethernet switch, of course)
and not pay the cost of trying to run 100Gbps services through the MX480.

The MX10003 would then be dedicated for 100Gbps customers (and 40Gbps),
meaning we can manage the ongoing operational costs of each type of
customer for a specific box.

We have thought about using MX204's to support 40Gbps and 100Gbps
customers, but there aren't enough ports on it for that to make
sense, particularly since those types of customers will expect some
kind of physical redundancy in the routers they connect to, which the
MX204 does not offer.

Our use-case for the MX204 is:

    - Peering.
    - Metro-E deployments for customers needing 10Gbps in the Access.


> Also, one interesting CAPEX factor to consider is the connectivity back to the core: with many small PEs in a POP, one would need a lot of ports on the core routers, and once again the aggregation factor is somewhat lost in doing so. Where I'd have had just a couple of PEs with 100G back to the core, I'd now need a bunch of bundled 10s or 40s, and would probably need additional cards in the core routers to accommodate the need for PE ports in the POP.

Yes, that's no small issue, and you raise a valid concern that could
easily be overlooked if you adopted several smaller edge routers in
the data centre in place of a few large ones.

That said, you could do what we do and run a Layer 2 core switching
network, where you aggregate all routers in the data centre so that
you are not running point-to-point links between routers and your
core boxes. Because of this, we still have plenty of slots left in
our CRS-8 chassis five years after deploying them, even though we are
supporting several hundred Gbps worth of downstream router capacity.
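
As a back-of-envelope sketch of why the switching layer saves core
slots (all of the counts below are illustrative assumptions, not our
actual numbers):

    # Core-facing ports: point-to-point links vs. an aggregation
    # switch layer. Every count here is an illustrative assumption.

    edge_routers = 12        # small PEs in the PoP (assumed)
    uplinks_per_router = 2   # redundant uplinks each (assumed)

    # Point-to-point: every edge uplink burns a core router port.
    p2p_core_ports = edge_routers * uplinks_per_router

    # L2 aggregation: edge uplinks land on switches, and each switch
    # hands one high-speed bundle to each core router.
    switches, core_routers = 2, 2
    agg_core_ports = switches * core_routers

    print(f"core ports, point-to-point : {p2p_core_ports}")  # 24
    print(f"core ports, via L2 switches: {agg_core_ports}")  # 4

The port count on the switches grows instead, but switch ports are a
lot cheaper than core router slots.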


> Well, playing devil's advocate, having the metro rings built as dumb L1 or L2 with a pair of PEs at the top is cheaper, although not by much nowadays; the economics in this sector have changed significantly over the past years. 

A dumb Metro-E access with all the smarts in the core is cheap to build,
but expensive to operate.

You can't run away from the costs. You just have to decide whether
you want to pay them in initial cash or in long-term operational
headache.
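
A crude way to see that trade-off, with every cost figure below being
a hypothetical number in arbitrary units:

    # Toy TCO comparison: dumb access (cheap to build, costly to
    # operate) vs. smart access. All figures are hypothetical.

    dumb_capex, dumb_opex = 100, 40    # assumed, arbitrary units
    smart_capex, smart_opex = 180, 15  # assumed, arbitrary units

    for year in range(1, 8):
        dumb = dumb_capex + year * dumb_opex
        smart = smart_capex + year * smart_opex
        cheaper = "dumb" if dumb < smart else "smart"
        print(f"year {year}: dumb {dumb} vs smart {smart}"
              f" -> {cheaper} cheaper so far")

With these made-up figures, the dumb access wins for the first three
years and loses every year after that; the real question is where
your own crossover sits.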

> So this particular case, the major POPs, is actually where we ran into the problem of the RE/RP becoming full (too many VRFs/routes/BGP sessions) with the chassis only half populated.
> Hence I'm considering whether it's actually better to go with multiple small chassis and/or fixed-form PEs in the rack, as opposed to half-rack/full-rack chassis. 

Are you saying that even the fastest and biggest control plane on the
market for your chassis is unable to support your requirements
(assuming its cost did not stop you from looking at it in the first
place)?
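
If so, here is a back-of-envelope way to size how many smaller PEs
would absorb that control plane load; every limit and count below is
an assumption to be replaced with your real figures:

    # Rough control plane sizing: how many small PEs replace one big
    # one. All limits and counts are assumptions, and this presumes
    # the VRFs and sessions can actually be split across boxes.

    RE_ROUTE_LIMIT = 2_000_000   # routes one RE/RP holds comfortably
    RE_SESSION_LIMIT = 2_000     # BGP sessions it holds comfortably

    vrfs = 1_500                 # VRFs in the PoP (assumed)
    routes_per_vrf = 3_000       # average routes per VRF (assumed)
    sessions = 4_000             # PE-CE + infrastructure sessions

    total_routes = vrfs * routes_per_vrf
    by_routes = -(-total_routes // RE_ROUTE_LIMIT)    # ceiling div
    by_sessions = -(-sessions // RE_SESSION_LIMIT)

    print(f"PEs needed by route scale  : {by_routes}")    # 3
    print(f"PEs needed by session scale: {by_sessions}")  # 2
    print(f"PEs to deploy              : {max(by_routes, by_sessions)}")

Whichever dimension runs out first sets the number of boxes, which is
really what the big-vs-small question comes down to in your case.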

Mark.



