Is multihoming hard? [was: DNS amplification]

Jimmy Hess mysidia at gmail.com
Sat Mar 23 19:12:48 UTC 2013


On 3/23/13, Owen DeLong <owen at delong.com> wrote:
> A reliable cost-effective means for FTL signaling is a hard problem without
> a known solution.

Faster-than-light signalling is not merely a hard problem.
Special relativity does not allow information to travel faster than
the maximum speed, c.  If you want to signal faster than light, then
slow down the light.

> An idiot-proof simple BGP configuration is a well known solution. Automating
> it would be relatively simple if there were the will to do so.

Logistical problems...  if it's a multihomed connection, which of the
two or three providers manages it (and gets to blame the other
provider(s) when anything goes wrong)?  Or do you rely on the
customer to manage it?

Someone might be able to design a protocol that makes this work; it
would need to detect performance and connectivity issues on a
per-route basis, but that is not any known implementation of BGP.
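For reference, the "simple" configuration in question is roughly the
following (a minimal sketch in Cisco-style syntax, using made-up ASNs
and RFC 5737 documentation prefixes; the real addresses and AS numbers
would come from the providers):

```
router bgp 64512
 network 203.0.113.0 mask 255.255.255.0   ! the customer's own prefix
 neighbor 192.0.2.1 remote-as 65001       ! session to provider A
 neighbor 198.51.100.1 remote-as 65002    ! session to provider B
```

Reachability-wise that is complete, but note that nothing in it
measures performance: BGP only reacts to routes being withdrawn, not
to a path that stays up while lossy or slow.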


> 1.	ISPs are actually motivated to prevent customer mobility, not enable it.

> 2.	ISPs are motivated to reduce, not increase the number of multi-homed
> 	sites occupying slots in routing tables.

    This is not some insignificant thing.  The ISPs have to maintain
    routing tables as well; ultimately the ISPs' customers are in bad
    shape if too many slots are consumed.

How about
   3.  Increased troubleshooting complexity when there are potential
       issues or complaints.

The concept of a "foolproof" BGP configuration is clearly a new sort of myth:

the idea that the protocol on its own, with a very basic config, never
requires any additional attention to achieve expected results, where
expected results include isolation from any faults on the path through
one of the user's two, three, or four providers, and balancing for
optimal throughput and best latency/loss to every destination.

BGP multihoming doesn't prevent users from having issues:

      o Connectivity issues that are the responsibility of one of
         their providers, which they might have expected multihoming
         to protect them against (latency, packet loss).

      o Very poor performance of one of their links; or poor
         performance of one of their links to their favorite
         destination.

      o Asymmetric paths, which mean that when latency or loss is
         poor, the customer doesn't necessarily know which provider
         to blame, or whether both are at fault, and the providers
         can spend a lot of time blaming each other.
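To make the asymmetric-path point concrete, here is a toy calculation
with invented one-way delays: outbound traffic leaves via provider A,
return traffic comes back via provider B, and the round-trip time the
customer measures cannot distinguish the two directions.

```python
# Toy one-way delays in ms (invented numbers, purely illustrative).
forward_via_provider_a = 20    # outbound: customer -> destination, via provider A
return_via_provider_b = 180    # inbound: destination -> customer, via provider B

# ping-style tools only ever see the sum of the two directions.
rtt_ms = forward_via_provider_a + return_via_provider_b
print(rtt_ms)  # 200 -- is A or B the slow one?  The RTT alone can't say.
```

The customer sees a 200 ms ping and has no way, from that number
alone, to tell whether the outbound leg through A or the return leg
through B is at fault; measuring each direction separately requires
cooperation from the far end.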

These are all solvable problems, but at a cost, and therefore not for
mass-market lowest-cost ISP service.

It's not as if they can have
    "Hello, DSL technical support...  did you try shutting off your
other peers and retesting?"

The average end user won't have a clue -- they will need one of the
providers, or someone else, to be managing that for them, and to
understand how each provider is connected.

I don't see large ISPs training up their support reps for $60/month
DSL services to handle BGP troubleshooting and multihoming
management/repair.

> In addition, most of the consumers that could benefit from such a solution
> do not have enough knowledge to know what they should be demanding
> from their vendors, so they don't demand it.

> Owen
--
-JH

More information about the NANOG mailing list