Possible explanations for a large hop in latency
Frank Bulk - iNAME
frnkblk at iname.com
Fri Jun 27 20:30:15 CDT 2008
Just to close this issue on the list: a (top) engineer from AT&T contacted
me offline and helped us out.
Turns out that 188.8.131.52 is located in Kansas City and
tbr1.sl9mo.ip.att.net (184.108.40.206) is in St. Louis. AT&T has two L1
connections to that site for redundancy, but traffic was flowing over the
longer loop. The engineer tweaked route weights so that the traffic prefers
to flow over the shorter link to tbr2.sl9mo.ip.att.net (220.127.116.11),
shaving about 12 msec.
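A metric tweak like the one described might look roughly like the following IOS-style sketch. This is purely illustrative: the interface names and costs are hypothetical, the post doesn't say whether the "route weights" were IGP metrics or something else, and a large carrier like AT&T may well run IS-IS rather than OSPF.

```
! Hypothetical sketch: make the shorter link to tbr2.sl9mo.ip.att.net
! the preferred IGP path by giving it the lower cost.
interface TenGigabitEthernet0/1
 description shorter link toward tbr2.sl9mo.ip.att.net
 ip ospf cost 10
!
interface TenGigabitEthernet0/2
 description longer loop toward tbr1.sl9mo.ip.att.net
 ip ospf cost 20
```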
He also explained that the jump of ~70 msec is due to how ICMP traffic
within MPLS tunnels is handled. It wasn't until I ran a traceroute from a
Cisco router that I even saw the MPLS labels (which were included in the
ICMP responses) for each of the hops within the tunnel. Apparently each
ICMP response generated within an MPLS tunnel (where TTL decrementing is
allowed) is carried to the *end* of the tunnel and back again, so my next
"hop" to tbr1.sl9mo.ip.att.net (18.104.22.168) was really showing the RTT
to the end of the tunnel, Los Angeles.
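The RTT pattern this produces can be sketched in a few lines: every hop inside the tunnel reports roughly twice the one-way delay to the tunnel end, which is why the trace shows a single large jump followed by a run of nearly identical RTTs. The hop delays below are made up for illustration (loosely shaped like a 15 msec hop followed by an 85 msec plateau), not measured values.

```python
def traceroute_rtts(one_way_to_hop, tunnel_start, tunnel_end):
    """Model per-hop traceroute RTTs across an MPLS tunnel.

    Inside the tunnel, a TTL-exceeded reply is label-switched to the
    tunnel end before being routed back to the sender, so the reported
    RTT collapses to the round trip to the tunnel end.
    """
    rtts = []
    for hop, d in enumerate(one_way_to_hop):
        if tunnel_start <= hop < tunnel_end:
            # forward delay to this hop, then onward to the tunnel end,
            # then all the way back: d + (D - d) + D == 2 * D
            rtts.append(2 * one_way_to_hop[tunnel_end])
        else:
            rtts.append(2 * d)  # ordinary hop: simple round trip
    return rtts

# Hypothetical one-way delays (msec): local, ingress, then hops en
# route to a Los Angeles tunnel end at index 5.
delays = [2, 7.5, 12, 20, 30, 42.5, 44]
print(traceroute_rtts(delays, tunnel_start=2, tunnel_end=5))
# Hops 2-4 all report ~85 msec even though they are physically closer.
```

Note that the jump is an artifact of where the ICMP reply travels, not of extra forwarding delay on the data path itself.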
From: Frank Bulk [mailto:frnkblk at iname.com]
Sent: Thursday, June 26, 2008 5:52 PM
To: nanog list
Subject: Possible explanations for a large hop in latency
Our upstream provider has a connection to AT&T (22.214.171.124) where I
consistently measure an RTT of 15 msec, but the next hop
(126.96.36.199) comes in with an RTT of 85 msec. Unless AT&T is sending
that traffic over a cable modem or to Europe and back, I can't see a
reason for a consistent ~70 msec jump in RTT. Hops farther along the
route add just a few msec each, so it doesn't appear that 188.8.131.52 is
doing some kind of ICMP rate-limiting.
Is this a real performance issue, or is there some logical explanation?