Serious Juniper Hardware EoL Announcements

Saku Ytti saku at ytti.fi
Wed Jun 15 06:08:47 UTC 2022


On Tue, 14 Jun 2022 at 21:42, Eric Kuhnke <eric.kuhnke at gmail.com> wrote:

> I think the more common solution for something like that would be to use one 100GbE port as a trunk on a MX204 or MX304 to a directly adjacent 1U 48-port SFP+ switch in a purely L2 role used as a port expander, with dwdm/bidi/other unique types of SFP+ optics inserted in that.

It is disappointing that we are getting faceplates that are
exclusively cloud optimised, while service providers are scratching
their heads asking 'how can I use this?'. But it may be that there
simply isn't a business case to build models with different faceplates
or to design yet another set of linecards.
Of course the fab doesn't charge different prices for different Trios;
from a cost POV the chips always cost roughly the same, whether they
go into an MX80 or an MX304 (except the MX304 has up to three of
them). So there isn't any real reason why you couldn't massively
underutilise the chips and deliver faceplates that are optimised for
different use-cases. However, JNPR seems to see ACX more in this role.

Now VLAN aggregation isn't without its problems:
   a) the termination router must be able to do QoS under a shaper:
you need to shape every VLAN to the access rate and then apply QoS
inside that shaper. There are a lot of problems here; even if the
termination router does support proper HQoS, it may not support burst
values small enough for the access device to handle.
   b) you lose link-state information at the termination, so you need
to either accept slower convergence (e.g. no BGP external fast
fallover) or invest in CFM or BFD, where BFD would require active
participation from the customer, which is usually not a reasonable ask
   c) your pseudowire products will become worse: you may have MAC
learning (you might be able to turn it off) limiting MAC scale, and
you will likely eat a bunch of frames which previously were passed.
You may be able to fix that with L2PT (rewrite the MAC on L2 ingress,
rewrite it back on L2 egress). And some things might become outright
impossible; for example, the Paradise chipset will drop ISIS packets
with VLAN headers on the floor (it treats 802.3 plus a VLAN tag as
technically impossible), so if your termination is Paradise, your
pseudowire customers can't use ISIS.
   d) most L2 devices have exceedingly small buffers, and this
solution implies many=>one traffic flows, so you're going to have to
understand how much buffering you will need and how many ports you can
attach there
   e) provisioning and monitoring complexity: you need a model where
the termination port and the access port are decoupled. If you don't
already do this, it can be quite complicated to add; there are a
number of complexities, like how to associate the two ports for data
collection and rendering, and where and how to draw the VLANs
   f) if you dual-attach the L2 aggregation, you can create loops for
both simple and complex reasons. The termination may not have a
per-VLAN MAC filter, so adding one pseudowire VLAN may disable MAC
filtering for the whole interface. And if you run default MAC/ARP
timers (a misconfig; the defaults are always wrong, as the ARP timer
needs to be lower than the MAC timer, but this is universally not
understood), the primary PE may send packets to an L3 address which is
still in ARP but no longer in the MAC table (host down?), the backup
PE will receive them due to the lack of MAC filtering and will forward
them to the primary PE, which will forward them back to L2.
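The burst-value mismatch in point a) is easy to put numbers on. A
minimal sketch in Python; the 100 Mb/s access rate, 5 ms burst window
and 512 KB platform floor are illustrative assumptions, not from any
vendor datasheet:

```python
def shaper_burst_bytes(shaping_rate_bps: float, burst_ms: float) -> int:
    """Token-bucket burst depth in bytes for a given rate and burst window."""
    return int(shaping_rate_bps / 8 * burst_ms / 1000)

# Shape a 100 Mb/s access VLAN with a 5 ms burst window:
burst = shaper_burst_bytes(100e6, 5)
print(burst)  # 62500 bytes

# If the termination router's minimum configurable burst were, say,
# 512 KB (hypothetical), it cannot shape tightly enough for this
# access port: it will burst far more than the access can absorb.
MIN_BURST = 512 * 1024
print(MIN_BURST > burst)  # True
```

The point being: the required burst shrinks linearly with the access
rate, so slow access ports are exactly where big-router shaper
granularity runs out.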
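For point b), the gap between hold-timer-based and BFD-based failure
detection can be quantified. A sketch with illustrative values (the
300 ms interval, multiplier of 3 and 90 s hold time are common
examples, not recommendations):

```python
def bfd_detection_time_ms(tx_interval_ms: int, multiplier: int) -> int:
    """Worst-case detection for asynchronous-mode BFD: the session is
    declared down after `multiplier` consecutive missed packets."""
    return tx_interval_ms * multiplier

def bgp_holdtime_detection_ms(hold_time_s: int) -> int:
    """Without link state or fast fallover, BGP only notices the
    failure when the hold timer expires."""
    return hold_time_s * 1000

print(bfd_detection_time_ms(300, 3))   # 900 ms
print(bgp_holdtime_detection_ms(90))   # 90000 ms, two orders slower
```

This is why losing link state at the termination hurts: the fast
option (BFD) needs the customer to run it, and the slow option is
measured in minutes of blackholing.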
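Point d) can be made concrete with back-of-the-envelope incast
arithmetic; the port counts and burst duration below are illustrative
assumptions:

```python
def incast_buffer_bytes(n_senders: int, port_rate_bps: float,
                        burst_ms: float) -> int:
    """Queue buildup when n same-rate ports burst simultaneously into
    one egress port: arrivals minus drain over the burst window."""
    arriving = n_senders * port_rate_bps / 8 * burst_ms / 1000
    drained = port_rate_bps / 8 * burst_ms / 1000
    return int(arriving - drained)

# 8 x 10G SFP+ ports bursting for just 1 ms toward one 10G uplink:
print(incast_buffer_bytes(8, 10e9, 1))  # 8750000 bytes, ~8.75 MB
```

A shallow-buffered L2 port expander shares a buffer on the order of
single-digit megabytes across all ports, so even a millisecond of
synchronised burst can exceed it; that bounds how many access ports
you can reasonably hang off one trunk.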
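The timer invariant in point f) is simple to state and routinely
violated. A sketch; the 4-hour ARP and 300-second MAC aging values are
typical defaults on many platforms, used here as an assumption rather
than any specific vendor's numbers:

```python
def timers_safe(arp_timeout_s: int, mac_aging_s: int) -> bool:
    """The ARP entry must expire before the MAC entry. If ARP outlives
    the MAC table entry, a router keeps sending to a resolved next hop
    whose MAC has aged out, and that traffic gets flooded as unknown
    unicast, which is what lets the backup PE bounce it into a loop."""
    return arp_timeout_s < mac_aging_s

print(timers_safe(4 * 3600, 300))  # False: flooding window of hours
print(timers_safe(240, 300))       # True: ARP refresh precedes MAC expiry
```

With the safe ordering, every ARP refresh re-learns the MAC before it
can age out, so the unknown-unicast flooding window never opens.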

This was just what immediately occurred to me; I'm sure I could
remember more issues if I spent a bit more time thinking about it. L2
is hard, be it L2 LAN or L2 aggregation, and it is almost invariably
configured incorrectly, as L2 stuff usually appears to work
out-of-the-box but is full of bees.

Now the common solution vendors offer to address many of these
problems is a 'satellite', where the vendor applies HW and SW
workarounds to reduce the problems caused by VLAN aggregation.
Unfortunately the satellite story is regressing as well: Cisco doesn't
have it for the Cisco8k, and Juniper wants to kill Fusion.
Nokia and Huawei still seem to have love for provider faceplates.

-- 
  ++ytti

