Extra latency at ATT exchange for UVerse
Richard A Steenbergen
ras at e-gerbil.net
Thu Nov 11 21:19:35 UTC 2010
On Thu, Nov 11, 2010 at 03:39:42PM -0500, Srikanth Sundaresan wrote:
> Can anyone explain why ATT's UVerse adds significant delay to packets
> compared to their ADSL service?
>
> For example, pinging 8.8.8.8 from an ADSL gateway shows a latency of
> ~10ms. From a UVerse gateway, it's about 40ms. Of the extra 30ms,
> about 10ms can be explained by the fact that the UVerse last hop is
> interleaved. ADSL seems to have Fastpath enabled more often than not
> (at least in my city).
>
> The extra 20ms is more interesting. By pinging each hop obtained by
> tracerouting to 8.8.8.8, the extra latency seems to be added at the
> exchange between ATT and Google. It's not just for 8.8.8.8. The same
> holds for other hosts too. ATT seems to add 20ms when it hands off a
> (UVerse) packet at an exchange.
First off, this thread is useless without actual traceroutes. :)
Whenever you see the latency change significantly at the boundary between
networks, the two most obvious things to look for are congestion and an
asymmetric reverse path.
Congestion is usually pretty easy to spot: if you're seeing it with high
latency, you'll usually find that latency to be pretty jittery (as TCP
windows probe for more capacity, then back off), and you'll see the
associated packet loss starting at the link in question.
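To make the "jittery latency plus loss" signature concrete, here's a minimal sketch of how you might flag it from a batch of ping samples. The thresholds and the function name are illustrative assumptions, not any standard; real tooling (smokeping, mtr) does this continuously and far more carefully.

```python
# Hypothetical sketch: flag likely congestion from ping RTT samples (ms).
# Thresholds below are made-up illustrations, not operational standards.

def looks_congested(rtt_ms, lost, jitter_threshold_ms=15.0, loss_threshold=0.01):
    """Return True if samples show the jittery-latency-plus-loss signature."""
    if not rtt_ms:
        return True  # total loss
    jitter = max(rtt_ms) - min(rtt_ms)        # crude peak-to-peak jitter
    loss_rate = lost / (len(rtt_ms) + lost)   # fraction of probes lost
    return jitter > jitter_threshold_ms and loss_rate > loss_threshold

# A clean link: tight RTTs, no loss.
print(looks_congested([10.1, 10.3, 10.2, 10.4, 10.2], lost=0))  # False
# A congested link: RTTs swing as TCP probes and backs off, plus loss.
print(looks_congested([12.0, 45.0, 80.0, 30.0, 95.0], lost=2))  # True
```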
Asymmetric reverse paths are responsible for a lot of other issues too.
Traceroute measures the round-trip latency but only shows you the path
in a single direction, leaving the entire return trip completely
invisible. There is no guarantee that the packet will come back to you
the same way that you sent it, so what you may be seeing is the traffic
returning via a different exit between networks. The best way to
troubleshoot something like this is to get a copy of a traceroute in the
opposite direction. For more information, see:
http://www.nanog.org/meetings/nanog47/presentations/Sunday/RAS_Traceroute_N47_Sun.pdf
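A toy model (with entirely made-up numbers) shows why a per-hop "latency jump" can live wholly on the invisible return path: traceroute reports forward plus return latency, so if replies from hops past the network boundary come home via a longer exit, the RTT jumps at the hand-off even though the forward path barely changed.

```python
# Toy model, made-up numbers: per-hop RTT = forward latency to the hop
# plus that hop's return latency back to the prober. Hops past the
# border reply via a different, longer exit between the two networks.
forward = {"hop1": 1, "hop2": 2, "border": 3, "remote": 4}   # one-way, ms
reverse = {"hop1": 1, "hop2": 2, "border": 3, "remote": 24}  # one-way, ms

rtts = {hop: forward[hop] + reverse[hop] for hop in forward}
for hop, rtt in rtts.items():
    print(f"{hop}: {rtt} ms")
# The ~22ms jump appears "at" the remote network, yet the forward
# path only grew by 1ms -- all the extra latency is on the return trip.
```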
One other thing to keep in mind is that a company like Google may be
more interested in keeping their servers located somewhere with ample
(and cheap) space and power, than they are with ensuring close proximity
to an Internet interconnection point. For example, Google is well known
for building a datacenter in The Dalles, Oregon, which is a significant
distance away from ANY network interconnection. From Chicago, directly
connected to Google, 8.8.8.8 is actually located an RTT of 12ms away:
1 core1-2-2-0.ord.net.google.com (206.223.119.21) 1.509 ms 1.769 ms 1.409 ms
2 72.14.236.176 (72.14.236.176) 1.677 ms 1.579 ms 1.878 ms
3 72.14.232.141 (72.14.232.141) 12.555 ms
209.85.241.22 (209.85.241.22) 12.150 ms 12.013 ms
4 209.85.241.37 (209.85.241.37) 11.974 ms
209.85.241.35 (209.85.241.35) 12.591 ms
209.85.241.37 (209.85.241.37) 12.125 ms
5 209.85.240.49 (209.85.240.49) 12.944 ms
72.14.239.189 (72.14.239.189) 21.509 ms
209.85.240.45 (209.85.240.45) 25.000 ms
6 google-public-dns-a.google.com (8.8.8.8) 12.890 ms 12.487 ms 12.770 ms
This would put the fiber distance at 500+ miles, i.e. this
datacenter could actually be in Kansas City, MO for all you know. Without
the original traceroute to verify your assumptions about where the
interconnection point between networks is, it's entirely possible that
you could be seeing something like this too.
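The back-of-the-envelope arithmetic behind that estimate can be sketched as follows. Light in fiber travels at roughly 2/3 the speed of light in vacuum, about 200 km per millisecond, so each millisecond of RTT buys at most ~100 km of one-way fiber; the actual straight-line distance is smaller still, since fiber routes are not straight and routers add delay.

```python
# Back-of-the-envelope bound: one-way fiber distance implied by an RTT.
# Light in fiber ~ 2/3 c, roughly 200 km/ms; halve for the round trip.
KM_PER_MS_ONE_WAY = 100.0
KM_PER_MILE = 1.609

def max_fiber_miles(rtt_ms):
    """Upper bound on one-way fiber miles for a given RTT, ignoring
    router and serialization delay (which only shrink the bound)."""
    return rtt_ms * KM_PER_MS_ONE_WAY / KM_PER_MILE

print(round(max_fiber_miles(12)))  # ~746 fiber miles, tops
```

With router delay and non-straight fiber paths, a 12ms RTT is comfortably consistent with a great-circle distance of 500 or so miles, e.g. Chicago to Kansas City.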
--
Richard A Steenbergen <ras at e-gerbil.net> http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)