constant FEC errors juniper mpc10e 400g

Tom Beecher beecher at beecher.cc
Thu Apr 18 18:17:17 UTC 2024


FEC is occurring at the PHY, below the PCS.

Even if you're not sending any traffic, all the low-level Ethernet signaling
juju (idles and such) is still going back and forth, which FEC may have to correct.
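
Quick back-of-the-envelope, just to show why a non-zero corrected counter on an
idle port is unremarkable. The numbers below are my assumptions (RS(544,514)
"KP4" codewords, ~425 Gb/s encoded PMA rate for 400GBASE-R), plus the 2347/sec
corrected rate from Aaron's output further down. I'm also not certain whether
Junos counts corrected symbols or corrected codewords here, so treat it as
order-of-magnitude only:

  # rough sketch, not gospel
  line_rate_bps     = 425e9        # FEC-encoded 400GBASE-R PMA rate (assumption)
  bits_per_codeword = 544 * 10     # RS(544,514): 544 ten-bit symbols
  codewords_per_sec = line_rate_bps / bits_per_codeword   # ~7.8e7 codewords/s

  corrected_per_sec = 2347         # "FEC Corrected Errors Rate" from the output below
  approx_pre_fec_ber = corrected_per_sec / line_rate_bps  # ~5.5e-9, if ~1 bit per event

A pre-FEC BER in the 1e-9 ballpark is orders of magnitude below the ~2.4e-4
that KP4 FEC is commonly quoted as being able to clean up, which is why TAC
doesn't get excited about a few thousand corrections per second.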

I *think* (but am not 100% sure) that anything that requires FEC by spec has a
default RS-FEC type that will be used, which *may* be changeable on the device.
It might also be fixed; I honestly can't remember.
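
For what it's worth, on boxes and releases where Junos exposes the knob, the
FEC mode hangs off gigether-options. Whether you can actually change it depends
on the speed and the optic; for 400GBASE-R the standard mandates RS(544,514),
so I'd expect anything other than fec119 to be refused there. Syntax from
memory, so double-check it against your platform and release:

  show interfaces et-7/1/4 extensive | match FEC         (current mode and counters, where reported)
  set interfaces et-7/1/4 gigether-options fec fec119    (RS(544,514), aka KP4)
  set interfaces et-7/1/4 gigether-options fec fec91     (RS(528,514), aka KL4, 100G-era)
  set interfaces et-7/1/4 gigether-options fec none      (only where the standard allows disabling FEC)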

On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould <aaron1 at gvtc.com> wrote:

> Not to belabor this, but so interesting... I need a FEC-for-Dummies or FEC-for-IP/Ethernet-Engineers...
>
> Shown below, my 400G interface with NO config at all... The interface has no traffic at all, no packets at all... BUT lots of FEC hits. Interesting, this FEC thing. I'd love to have a fiber splitter and see if Wireshark could read it and show me what FEC looks like... but something tells me I would need a 400G sniffer to read it, lol
>
> It's like FEC (fec119 in this case) is this automatic thing running between interfaces (in hardware, I guess), with no protocols and nothing needed at all in order to function.
>
> -Aaron
>
>
> {master}
> me at mx960> show configuration interfaces et-7/1/4 | display set
>
> {master}
> me at mx960>
>
> {master}
> me at mx960> clear interfaces statistics et-7/1/4
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep packet
>     Input packets : 0
>     Output packets: 0
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate     : 0 bps (0 pps)
>   Output rate    : 0 bps (0 pps)
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
>     Bit errors                             0
>     Errored blocks                         0
>   Ethernet FEC statistics              Errors
>     FEC Corrected Errors                28209
>     FEC Uncorrected Errors                  0
>     FEC Corrected Errors Rate            2347
>     FEC Uncorrected Errors Rate             0
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep packet
>     Input packets : 0
>     Output packets: 0
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate     : 0 bps (0 pps)
>   Output rate    : 0 bps (0 pps)
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
>     Bit errors                             0
>     Errored blocks                         0
>   Ethernet FEC statistics              Errors
>     FEC Corrected Errors                45153
>     FEC Uncorrected Errors                  0
>     FEC Corrected Errors Rate              29
>     FEC Uncorrected Errors Rate             0
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep packet
>     Input packets : 0
>     Output packets: 0
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate     : 0 bps (0 pps)
>   Output rate    : 0 bps (0 pps)
>
> {master}
> me at mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
>     Bit errors                             0
>     Errored blocks                         0
>   Ethernet FEC statistics              Errors
>     FEC Corrected Errors                57339
>     FEC Uncorrected Errors                  0
>     FEC Corrected Errors Rate            2378
>     FEC Uncorrected Errors Rate             0
>
> {master}
> me at mx960>
>
>
> On 4/18/2024 7:13 AM, Mark Tinka wrote:
>
>
>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> Well, JTAC just said that it seems OK, and that 400G is going to show 4x
> more than 100G: "This is due to having to synchronize much more to support
> higher data."
>
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, etc. TAC said it's a non-issue and to be expected,
> and shared the same KBs.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's not
> material.
>
> Mark.
>
> --
> -Aaron
>
>