Devil's Advocate - Segment Routing, Why?

Mark Tinka mark.tinka at seacom.mu
Sun Jun 21 11:27:34 UTC 2020


On 21/Jun/20 12:10, Masataka Ohta wrote:

>  
> It was implemented, and some of the technology was used in a commercial
> router from Furukawa (a Japanese vendor now selling optical
> fiber, not routers).

I won't lie, never heard of it.


> GMPLS, which you are using, is the mechanism to guarantee QoS by
> reserving wavelength resources. It is impossible for GMPLS
> not to offer QoS.

That is/was the idea.

In practice (at least in our Transport network), deploying capacity as
an offline exercise is significantly simpler. In such a case, we
wouldn't use GMPLS for capacity reservation, just path re-computation in
failure scenarios.

Our Transport network isn't overly meshed. It's just stretchy. Perhaps
if one was trying to build a DWDM backbone into, out of and through
every city in the U.S., capacity reservation in GMPLS may be a use-case.
But unless someone is willing to pipe up and confess to implementing it
in this way, I've not heard of it.


>
> Moreover, as some people say they offer QoS with MPLS, they
> should be using some prioritized queueing mechanisms, perhaps
> not poor WFQ.

It would be a combination - PQ and WFQ depending on the traffic type and
how much customers want to pay.

But carrying an MPLS EXP code point does not make MPLS unscalable. It's
no different to carrying a DSCP or IPP code point in plain IP. Or even
an 802.1p code point in Ethernet.
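To illustrate the point that EXP is just a small field in the shim header, here is a sketch (not from the original post) of the MPLS label stack entry layout per RFC 3032; the 3-bit EXP/traffic-class field sits alongside the label, much as DSCP sits in the IP header or PCP in an 802.1Q tag. The sample value is made up.

```python
def parse_mpls_entry(entry: int) -> dict:
    """Split a 32-bit MPLS label stack entry into its RFC 3032 fields."""
    return {
        "label": (entry >> 12) & 0xFFFFF,  # 20-bit label value
        "exp":   (entry >> 9) & 0x7,       # 3-bit traffic class ("EXP")
        "s":     (entry >> 8) & 0x1,       # bottom-of-stack flag
        "ttl":   entry & 0xFF,             # 8-bit time to live
    }

# Example entry: label 100, EXP 5, bottom of stack, TTL 64.
entry = (100 << 12) | (5 << 9) | (1 << 8) | 64
fields = parse_mpls_entry(entry)
```

Carrying that 3-bit code point costs nothing extra per packet; it is part of the fixed 32-bit shim either way.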


> They are different, of course. But, GMPLS is to reserve bandwidth
> resource.

In theory. What are people doing in practice? I just told you our story.


> MPLS, in general, is to reserve label values, at least.

MPLS is the forwarding paradigm. Label reservation/allocation can be
done manually or with a label distribution protocol. MPLS doesn't care
how labels are generated and learned. It will just push, swap and pop as
it needs to.
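A toy sketch of the three data-plane operations described above (my illustration, not from the post); how the label bindings were learned, whether statically, via LDP, RSVP-TE, or BGP, never enters into it. The label values are made-up examples.

```python
def push(stack, label):
    """Ingress LER imposes a new top label."""
    return [label] + stack

def swap(stack, new_label):
    """Transit LSR replaces the top label."""
    return [new_label] + stack[1:]

def pop(stack):
    """Penultimate or egress hop removes the top label."""
    return stack[1:]

stack = push([], 100)     # ingress: impose label 100
stack = swap(stack, 200)  # transit: swap 100 for 200
stack = pop(stack)        # egress: strip the label, stack is empty again
```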


> I didn't say scaling problem caused by QoS.
>
> But, as you are avoiding extensive use of MPLS, I think you
> are aware that extensive use of MPLS needs management of a
> lot of labels, which does not scale.
>
> Or, do I misunderstand something?

I'm not avoiding extensive use of MPLS. I want extensive use of MPLS.

In IPv4, we forward 100% of traffic in MPLS. In IPv6, we forward 80% in
MPLS. This is due to vendor nonsense. We're trying to fix that.



> No. IntServ specifies format to carry QoS specification in RSVP
> packets without assuming any specific model of QoS.

Then I'm failing to understand your point, especially since it doesn't
sound like any operator is deploying such a model, or if so, publicly
suffering from it.



> No. As experimental switches were working years ago and making
> it work >10Tbps is not difficult (switching is easy, generating
> 10Tbps packets needs a lot of parallel equipment), there is little
> remaining for research.

We'll get there. This doesn't worry me so much :-). Either horizontally
or vertically. I can see a few models to scale IP/MPLS carriage.


>    
> SDN, maybe. Though I'm not saying SDN scales, it should be no
> worse than MPLS.

I still can't tell you what SDN is :-). I won't suffer it in this
decade, thankfully.


> I did some retrospective research.
>
>    https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching
>    History
>    1994: Toshiba presented Cell Switch Router (CSR) ideas to IETF BOF
>    1996: Ipsilon, Cisco and IBM announced label switching plans
>    1997: Formation of the IETF MPLS working group
>    1999: First MPLS VPN (L3VPN) and TE deployments
>    2000: MPLS traffic engineering
>    2001: First MPLS Request for Comments (RFCs) released
>
> as I was a co-chair of 1994 BOF and my knowledge on MPLS is
> mostly on 1997 ID:
>
>    https://tools.ietf.org/html/draft-ietf-mpls-arch-00
>
> there seems to be a lot of terminology changes.

My comment to that was in reference to your text, below:

    "What if, an inner label becomes invalidated around the
    destination, which is hidden, for route scalability,
    from the equipment around the source?"

I've never heard of such an issue in 16 years.


>
> I'm saying that, if some failure occurs and IGP changes, a
> lot of LSPs must be recomputed, which does not scale
> if # of LSPs is large, especially in a large network
> where IGP needs hierarchy (such as OSPF area).

That happens every day, already. Links fail, the IGP re-converges, LDP
keeps humming. RSVP-TE too, albeit all that state does need some
consideration, especially if code is buggy.

Particularly, where you have LFA/IP-FRR in both the IGP and LDP, I've
not come across any issue where IGP re-convergence caused LSPs to fail.

In practice, IGP hierarchy (OSPF Areas or IS-IS Levels) doesn't help
much if you are running MPLS. FECs are formed against /32 and /128
addresses. Yes, as with everything else, it's a trade-off.
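To spell out the trade-off: LDP (RFC 5036) binds a label to a host FEC only when an exactly matching route exists in the RIB, so if an area border router summarizes loopbacks into a shorter prefix, the binding becomes unusable and the LSP breaks. Here's a sketch of that exact-match check; the addresses are examples from the documentation range, not real network data.

```python
import ipaddress

def has_exact_route(rib, fec):
    """LDP-style check: the FEC prefix must match a RIB entry exactly,
    not merely be covered by a less-specific route."""
    return any(route == fec for route in rib)

loopback = ipaddress.ip_network("192.0.2.1/32")

detailed_rib = [ipaddress.ip_network("192.0.2.1/32")]   # no summarization
summarized_rib = [ipaddress.ip_network("192.0.2.0/24")]  # covers it, but not exact

has_exact_route(detailed_rib, loopback)    # True: label binding usable
has_exact_route(summarized_rib, loopback)  # False: no LSP to that loopback
```

This is why MPLS networks typically leak the /32 and /128 loopbacks across area/level boundaries, giving up much of the state reduction the hierarchy was meant to buy.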

Mark.



More information about the NANOG mailing list