400G forwarding - how does it work?

Vasilenko Eduard vasilenko.eduard at huawei.com
Tue Jul 26 11:28:50 UTC 2022

Pipeline stages are like separate computers (each with its own ALU) sharing the same memory.
In the ASIC case, the computers are of different types (different capabilities).

From: Etienne-Victor Depasquale [mailto:edepa at ieee.org]
Sent: Tuesday, July 26, 2022 2:05 PM
To: Saku Ytti <saku at ytti.fi>
Cc: Vasilenko Eduard <vasilenko.eduard at huawei.com>; NANOG <nanog at nanog.org>
Subject: Re: 400G forwarding - how does it work?

How do you define a pipeline?

For what it's worth, with just a cursory look through this email, and without wishing to offend anyone's knowledge:

a pipeline in processing is the division of the instruction cycle into a number of stages. General-purpose RISC processors are often organized into five such stages. Under optimal conditions, which can be fairly, albeit loosely, interpreted as "one instruction does not affect its peers already in one of the stages", a pipeline can increase the number of instructions retired per second (often quoted as MIPS, millions of instructions per second) by a factor equal to the number of stages in the pipeline.
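The ideal-speedup claim above can be sketched with a toy throughput model (an illustration only; it ignores hazards, stalls, and branch effects):

```python
# Toy model of pipeline throughput. With S stages of equal latency t, an
# unpipelined CPU retires one instruction every S*t; a full pipeline retires
# one every t after its fill time, so the ideal speedup approaches S.

def time_to_retire(n_instructions, n_stages, stage_time=1.0, pipelined=True):
    """Total time to retire n instructions under the ideal model."""
    if pipelined:
        # Fill the pipeline once (n_stages cycles), then one retires per cycle.
        return (n_stages + n_instructions - 1) * stage_time
    return n_instructions * n_stages * stage_time

scalar = time_to_retire(1_000_000, 5, pipelined=False)
piped = time_to_retire(1_000_000, 5, pipelined=True)
speedup = scalar / piped  # approaches 5x as n grows
```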



On Tue, Jul 26, 2022 at 10:56 AM Saku Ytti <saku at ytti.fi> wrote:

On Tue, 26 Jul 2022 at 10:52, Vasilenko Eduard <vasilenko.eduard at huawei.com> wrote:

Juniper is pipeline-based too (like any ASIC). They just invented one special stage in 1996 for lookup (sequential search by nibble in a big external memory tree) – it was public information until around the year 2000. It is a different principle from a TCAM search – performance is traded for flexibility/simplicity/cost.

How do you define a pipeline? My understanding is that fabric and WAN connections are in a chip called MQ; the 'head' of the packet, some 320B or so (a bit less on more modern Trio, I didn't measure specifically), is then sent to the LU complex for lookup.
LU then sprays packets to one of many PPEs, but once a packet hits a PPE, it is processed until done; it doesn't jump to another PPE.
Reordering will occur, which is later restored within flows, but outside flows reorder may remain.
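That per-flow restore step can be sketched like this (my reading of the description, not Juniper documentation; the data shapes are invented):

```python
# Packets are sprayed across many run-to-completion PPEs and may finish out
# of order; order is then restored WITHIN each flow, while ordering ACROSS
# flows is not guaranteed.
import heapq

def restore_flow_order(completed):
    """completed: list of (flow_id, seq_in_flow) in completion order.
    Re-emit packets so each flow's sequence numbers come out ascending."""
    pending = {}   # flow_id -> min-heap of seqs that arrived early
    next_seq = {}  # flow_id -> next sequence number expected
    out = []
    for flow, seq in completed:
        heapq.heappush(pending.setdefault(flow, []), seq)
        next_seq.setdefault(flow, 0)
        heap = pending[flow]
        # Drain every packet of this flow that is now in order.
        while heap and heap[0] == next_seq[flow]:
            out.append((flow, heapq.heappop(heap)))
            next_seq[flow] += 1
    return out
```

Note that a packet of flow B can legitimately be emitted before an earlier-arriving packet of flow A, which is exactly the "outside flows reorder may remain" behaviour.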

I don't know what the cores are, but I'm comfortable betting money they are not ARM. I know Cisco used to use EZchip in the ASR9k but is now jumping to their own NPU called Lightspeed, and Lightspeed, like the CRS-1 and ASR1k, uses Tensilica cores, which are decidedly not ARM.

Nokia, as mentioned, kind of has a pipeline, because a single packet hits every core in the line, and each core does a separate thing.

Network Processors emulate stages on general-purpose ARM cores. It is a pipeline too (different cores for different functions, many cores for every function); it is just a virtual pipeline.


-----Original Message-----
From: NANOG [mailto:nanog-bounces+vasilenko.eduard=huawei.com at nanog.org] On Behalf Of Saku Ytti
Sent: Monday, July 25, 2022 10:03 PM
To: James Bensley <jwbensley+nanog at gmail.com>
Cc: NANOG <nanog at nanog.org>
Subject: Re: 400G forwarding - how does it work?

On Mon, 25 Jul 2022 at 21:51, James Bensley <jwbensley+nanog at gmail.com> wrote:

> I have no frame of reference here, but in comparison to Gen 6 Trio of
> NP5, that seems very high to me (to the point where I assume I am
> wrong).

No, you are right, FP has many more PPEs than Trio.

For a fair calculation, you compare how many lines FP has to how many PPEs Trio has, because in Trio a single PPE handles the entire packet, and all PPEs run identical ucode, performing the same work.

In FP each PPE in a line has its own function: the first PPE in the line could be parsing the packet and extracting keys from it, the second could be doing ingress ACL, the third ingress QoS, the fourth ingress lookup, and so forth.

Why choose this NP design instead of the Trio design, I don't know. I don't understand the upsides.

The downside is easy to understand: picture yourself as a ucode developer, and you get the task to 'add this magic feature in the ucode'.

Implementing it in Trio seems trivial: add the code in the ucode, rock on.

On FP, you might have to go 'aww shit, I need to do this before PPE5 but after PPE3 in the pipeline, but the instruction cost it adds isn't in the budget that I have in PPE4; crap, now I need to shuffle around and figure out which PPE in the line runs what function, to keep the PPS we promise to customers'.
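The budgeting problem here comes down to one number: a staged line forwards at the rate of its slowest stage. A toy model (all cycle counts invented for illustration):

```python
# In a staged line, packet rate is bounded by the MOST expensive stage, so
# adding instructions to one PPE can drop PPS for the whole line unless the
# work is re-balanced across stages.

def line_pps(stage_cycles, clock_hz=1_000_000_000):
    """Packets per second of a staged line: one packet leaves per
    max(stage_cycles) cycles in steady state."""
    return clock_hz / max(stage_cycles)

before = line_pps([100, 120, 110, 115])      # budget roughly balanced
after = line_pps([100, 120, 160, 115])       # feature added to stage 3: PPS drops
rebalanced = line_pps([120, 130, 125, 120])  # same total cycles, spread out
```

The `rebalanced` case does the same total work per packet (495 cycles) as `after`, but recovers most of the PPS by flattening the bottleneck, which is exactly the shuffling exercise described above.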

Let's look at it from another vantage point: let's cook up an IPv6 header with a crapton of EHs. In Trio, the PPE keeps churning it out, taking a long time, but eventually it gets there, or raises an exception and gives up.

Every other PPE in the box is fully available to perform work.

Same thing in FP? You have head-of-line blocking (HOLB): the PPEs in the line after this PPE are not doing anything and can't do anything, until the PPE before them in the line is done.
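The two failure modes can be contrasted with a toy simulation (an illustration of the argument, not a model of real hardware; all costs are invented):

```python
# One pathological packet costs 100x a normal one. We measure when the last
# WELL-BEHAVED packet finishes under each scheme.

def run_to_completion(costs, n_engines=10):
    """Trio-style: each packet grabs the least-busy engine and runs to
    completion; the pathological packet ties up only its own engine."""
    engines = [0.0] * n_engines
    done = []
    for c in costs:
        i = engines.index(min(engines))
        engines[i] += c
        done.append((c, engines[i]))
    return max(t for c, t in done if c == 1.0)

def staged_line(costs):
    """FP-style single line with head-of-line blocking: every packet queued
    behind the slow one waits for it (stage overlap ignored for simplicity)."""
    t, done = 0.0, []
    for c in costs:
        t += c
        done.append((c, t))
    return max(t for c, t in done if c == 1.0)

costs = [100.0] + [1.0] * 99  # one crafted packet (say, a pile of IPv6 EHs)
```

With 10 engines, the last normal packet under run-to-completion finishes at t=11, since only one engine is stuck; in the blocked line it finishes at t=199, because everything waited behind the crafted packet.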

Today Cisco and Juniper do 'proper' CoPP, that is, they do ingress ACL both before and after lookup. The before-lookup pass is what ingress ACL normally needs, but the after-lookup ingress ACL is needed for CoPP (we only know after lookup whether it is a control-plane packet). Nokia doesn't do this at all, and I bet they can't, because if they added it in the core where it needs to be in the line, total PPS would go down, as there is no budget for an additional ACL. Instead, all control-plane packets from the ingress FP are sent to the control-plane FP, and inshallah we don't congest that connection, or the control-plane FP itself.
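The ordering constraint being described can be sketched as follows (a reading of this email, not any vendor's documented pipeline; all names are invented):

```python
# The ACL that matters for CoPP can only run AFTER lookup, because only the
# lookup result tells us whether the packet is punted to the control plane.

def ingress_path(packet, acl, copp_acl, lookup):
    """One packet through the sketched ingress pipeline.
    acl / copp_acl are predicates; lookup returns a destination string."""
    if not acl(packet):           # pre-lookup ingress ACL
        return "dropped"
    dest = lookup(packet)         # forwarding lookup
    if dest == "control-plane":
        if not copp_acl(packet):  # post-lookup ACL acting as the CoPP hook
            return "copp-dropped"
        return "punted"
    return f"forwarded:{dest}"
```

A packet aimed at the router itself but failing the CoPP policy gets "copp-dropped"; transit traffic never consults the CoPP ACL at all, which is why the second ACL pass has to sit after the lookup stage.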


> Cheers,
> James.




Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale
