Geographic routing hack
woody at zocalo.net
Mon Aug 2 21:01:28 UTC 1999
> Some weeks ago I noticed that 184.108.40.206/32
> (www.digisle.net) appears to reach web servers
> located in physically different places broadly
> dependent on where you see it from.
> I presume this is done by advertising the same
> prefix from border routers which are in separate
> IGP domains or something
Don't need to do anything complicated, actually... Just make sure that
your IGP hop-count represents your internal costs reasonably well; when
your border router picks up an inbound packet, it'll forward it to the
topologically closest server.
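A minimal sketch of that selection, assuming a hypothetical five-router topology and plain hop count as the IGP metric (the names and graph are made up for illustration): each border router simply forwards to whichever server instance is closest by hop count, since both instances advertise the same prefix.

```python
# Hypothetical sketch: each border router forwards an anycast-addressed
# packet to whichever server instance is closest by IGP metric -- here,
# plain hop count over a small made-up graph.
import heapq

def hop_counts(graph, source):
    """Dijkstra with unit edge costs: hop count from source to every node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor in graph[node]:
            if d + 1 < dist.get(neighbor, float("inf")):
                dist[neighbor] = d + 1
                heapq.heappush(heap, (d + 1, neighbor))
    return dist

# Hypothetical IGP topology: routers A-E in a chain, with server
# instances attached at A and E (both advertising the same /32).
graph = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D"],
}
anycast_instances = ["A", "E"]

def nearest_instance(border_router):
    dist = hop_counts(graph, border_router)
    return min(anycast_instances, key=lambda s: dist[s])

# A packet entering at B reaches the instance at A; one entering
# at D reaches the instance at E -- no extra machinery needed.
print(nearest_instance("B"))  # A
print(nearest_instance("D"))  # E
```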
> but I wonder what people's views on the concept are,
> since it could potentially be quite confusing in
> certain circumstances (e.g. debugging routing
> problems) ?
Nah, the per-logical-server shared addresses are virtual-hosted on
machines that have per-physical-server unique addresses, which you switch
to for everything after the initial page/connection/whatever. Otherwise you
can't do any stateful connections with clients, since the connection with
the client might get re-routed to a different server after it was
established. So your debugging is always on the per-physical-server unique
addresses, just like it is now.
> Superficially it seems like a 'cool hack' for
> geographic content-distribution
Well, I wouldn't call it a hack; it's just straightforward routing. And
it's really important to realize that it's not geographic distribution,
which is useless, but _topological_ load-balancing, which actually saves
money and latency.
> but up until now I've always
> seen this sort of thing done by exploiting NS
> record sorting order properties with the kludge
> of different A records in the various zonefiles,
Maybe by folks who haven't thought about the issue, or aren't trying to
distribute across servers in multiple locations. But that won't actually
do any geographic or topological load-balancing, since you have no way of
knowing which DNS server somebody's going to reach, without applying this
same trick to your DNS servers.
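The A-record kludge amounts to something like this (zone data invented for illustration): each location's DNS server carries a different A record for the same name, so the answer depends entirely on which DNS server the resolver happens to reach, and nothing ties that to the client's network location unless the DNS servers themselves are anycast.

```python
# Hypothetical per-location zone data: same name, different A record.
ZONEFILES = {
    "ns-east.example.net": {"www.example.net": "192.0.2.10"},
    "ns-west.example.net": {"www.example.net": "198.51.100.10"},
}

def resolve(dns_server, name):
    """The answer is a function of which DNS server was reached,
    not of where the querying client sits in the topology."""
    return ZONEFILES[dns_server][name]

print(resolve("ns-east.example.net", "www.example.net"))  # 192.0.2.10
print(resolve("ns-west.example.net", "www.example.net"))  # 198.51.100.10
```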
Which I recall suggesting wrt the roots, back at the Montreal IEPG, and
getting roundly boo'ed. :-)
Anyway, doing this type of topological load balancing has been reasonably
common, to the best of my knowledge, for at least three years. I know the
first time we tried it was in 1996, on a project for Oracle's corporate
> and I wondered if doing it with routing policy in
> this way is strictly RFC compliant (or for that
> matter if anyone cares if it isn't) ?
I sure _hope_ nobody's been drafting RFCs that tell me how I can route my
internal traffic. :-)