bfd-like mechanism for LANPHY connections between providers

Richard A Steenbergen ras at e-gerbil.net
Wed Mar 16 14:28:15 CDT 2011


On Wed, Mar 16, 2011 at 02:55:14PM -0400, Jeff Wheeler wrote:
> 
> This is often my topology as well.  I am satisfied with BGP's 
> mechanism and default timers, and have been for many years.  The 
> reason for this is quite simple: failures are relatively rare, my 
> convergence time to a good state is largely bounded by CPU, and I do 
> not consider a slightly improved convergence time to be worth an 
> atypical configuration.  Case in point, Richard says that none of his 
> customers have requested such configuration to date; and you indicate 
> that Level3 will provision BFD only if you use a certain vendor and 
> this is handled outside of their normal provisioning process.

There are still a LOT of platforms where BFD doesn't work reliably 
(without false positives), doesn't work as advertised, doesn't work 
under every configuration (e.g. on SVIs), or doesn't scale very well 
(i.e. it would fall over if you had more than a few neighbors 
configured). The list of caveats is huge, the list of vendors which 
support it well is small, and there should be giant YMMV stickers 
everywhere. But Juniper (M/T/MX series at any rate) is definitely one of 
the better options, though not without its flaws: the inability to 
configure BFD at the group level and selectively disable it per peer, 
and the lack of group-level support when any IPv6 neighbor is 
configured, come to mind.
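The practical workaround on Junos is to hang bfd-liveness-detection off each neighbor individually rather than the group. A rough sketch (addresses and timers are illustrative, not a recommendation):

```
protocols {
    bgp {
        group transit {
            neighbor 192.0.2.1 {
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                }
            }
        }
    }
}
```

With minimum-interval 300 and multiplier 3, detection time works out to roughly 900ms, which is already far faster than BGP hold timers.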

Running BFD with a transit provider is USUALLY the least interesting use 
case, since you're typically connected either directly, or via a metro 
transport service which is capable of passing link state. One possible 
exception to this is when you need to bundle multiple links together, 
but link-agg isn't a good solution, and you need to limit the number of 
EBGP paths to reduce load on the routers. The typical solution for this 
is loopback peering, but that takes away link state as your mechanism 
for tearing down BGP during a failure, which is where BFD starts to 
make sense.
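A hedged sketch of what that looks like on Junos: eBGP multihop between loopbacks (reachable over the parallel links via static/IGP routes, not shown), with BFD doing the failure detection that link state no longer can. All addresses and timers here are made up:

```
protocols {
    bgp {
        group ebgp-loopback {
            multihop {
                ttl 2;
            }
            local-address 192.0.2.1;
            neighbor 198.51.100.1 {
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                }
            }
        }
    }
}
```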

For IX's, where you have an active L2 switch in the middle and no link 
state, BFD makes the most sense. Unfortunately it's the area where we've 
seen the least traction among peers, with "zomg why are you sending me 
these udp packets" complaints outnumbering people interested in 
configuring BFD 10:1.
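For what it's worth, the "udp packets" in question are BFD control packets (UDP destination port 3784 for single-hop sessions, per RFC 5880). A rough Python sketch of the 24-byte mandatory header, just to show what a peer is actually receiving; the discriminators and intervals below are made-up example values:

```python
import struct

# Mandatory section of a BFD control packet, RFC 5880 section 4.1 (24 bytes):
# vers/diag, state/flags, detect mult, length, my disc, your disc,
# desired min TX, required min RX, required min echo RX (intervals in usec).
BFD_FMT = "!BBBBIIIII"

def build_bfd(my_disc, your_disc, state=3, detect_mult=3,
              min_tx=300000, min_rx=300000):
    """Build a minimal BFD control packet (state 3 = Up, no flags set)."""
    vers_diag = (1 << 5) | 0           # version 1 in top 3 bits, diag 0
    sta_flags = state << 6             # state in top 2 bits
    return struct.pack(BFD_FMT, vers_diag, sta_flags, detect_mult,
                       struct.calcsize(BFD_FMT),
                       my_disc, your_disc, min_tx, min_rx, 0)

def parse_bfd(pkt):
    """Decode the mandatory header fields back out of the wire format."""
    (vd, sf, mult, length, my_d, your_d,
     min_tx, min_rx, echo_rx) = struct.unpack(BFD_FMT, pkt[:24])
    return {"version": vd >> 5, "state": sf >> 6,
            "detect_mult": mult, "my_disc": my_d, "your_disc": your_d,
            "min_tx_us": min_tx, "min_rx_us": min_rx}

pkt = build_bfd(my_disc=0x11111111, your_disc=0x22222222)
print(parse_bfd(pkt))
```

Detection time is required-min-rx times detect-mult as negotiated, so the 300ms/3 values above would give sub-second failure detection across the IX fabric.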

-- 
Richard A Steenbergen <ras at e-gerbil.net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)




More information about the NANOG mailing list