ntp config tech note

Michael Sinatra michael at rancid.berkeley.edu
Fri May 21 23:00:45 UTC 2004


Hi John:

John Kristoff wrote:
> On Thu, 20 May 2004 21:08:43 -0700
> Michael Sinatra <michael at rancid.berkeley.edu> wrote:
> 
> 
>>I run two stratum-1 servers and a few stratum-2s and I provide time via 
>>multicast (224.0.0.1), but I don't use it for my servers, except for 
> 
> 
> Presumably you meant 224.0.1.1.

Yep, sorry.

>>testing and verification.  I am also providing anycast ntp, and, if the 
>>belt and suspenders weren't enough, I am experimenting with manycast. 
> 
> 
> Noting that NTP uses more than a request/response message exchange.  No
> concerns about session breakage?  SNTP would certainly be a very
> viable candidate for anycast.

Yes it is.  Session breakage doesn't seem to be a problem, as long as 
you're doing per-flow load sharing for equal-cost routes across your 
backbone.  I can see where per-packet switching would make a mess of 
things, but that's true of any anycast service.  Also, if the flow 
table entry ages out before the next NTP poll, I can see how a client 
running a full-blown ntpd would probably perceive an otherwise 
transparent switch back and forth between two servers as excess 
jitter.  If you're worried about this, one way around it is to adjust 
the costs of your injected routes so that one server is always 
preferred over another.  In that case, you're not buying load 
distribution across servers; you're buying backup for clients where 
ease of configuration is more important than accuracy, but where 
reliability (the ability to poll at least one server all the time) is 
still important and multicast may or may not provide the level of 
reliability you need.  (I am a fan of multicast, BTW.)

In our case it's useful, as you note, for pointing an increasing number 
of SNTP clients (including network equipment) to one address that's 
reliable and redundant.
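
For what it's worth, the client side of an anycast setup is about as 
simple as it gets.  A rough sketch (the address below is out of the 
documentation range, not one of our real anycast addresses, and the 
driftfile path will vary by OS):

    # point at the shared anycast address; to the client this is just
    # an ordinary unicast server line
    server 192.0.2.123
    driftfile /etc/ntp.drift

The same couple of lines can be pasted onto every client, and the 
routing system decides which physical server actually answers.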

I know there are others who have experience with NTP anycast, at least 
within an enterprise and perhaps as a service provider, who can 
probably comment.

> Except in the extreme case such as wisc.edu's unfortunate experience,
> does multicast buy much?  Traffic loads for properly running clients
> and distributed servers tend to be relatively low in my experience.

Yes.  The main "buy" is ease of configuration, and that holds for 
multicast, manycast, and anycast.  I get a lot of requests to provide 
NTP in such a way that sysadmins and users can use a really simple 
bootstrap configuration that they can replicate across their 
enterprise.  I'll also note that some OS vendors ship a stock multicast 
config for their (x)ntpd.
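
The stock multicast client config I'm talking about boils down to 
something like this (a sketch, not any particular vendor's file; 
depending on the daemon you may also need keys, or to disable 
authentication for broadcast/multicast associations):

    # listen for NTP served to the standard multicast group
    multicastclient 224.0.1.1
    driftfile /etc/ntp.drift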

I have found that load can get higher when you start hammering servers 
with thousands of SNTP clients; it can be something of an issue 
considering that NTP servers tend to be hand-me-down hardware.  (BTW: Is 
rackety.udel.edu still a Sun IPC?)  So, my "problem" is, how can I 
properly distribute the load across servers, increase reliability, and 
still give users simplicity in their configuration?  Each of the three 
solutions has its own set of pros and cons (rough config sketches 
follow the list):

manycast - probably the best solution; it solves the authentication and 
possible session issues that come with anycast and is more accurate 
than multicast, since true associations are established with 
(hopefully) several participating servers.  The downside is that it's 
only supported in v4, which not all vendor OSes ship as their stock 
daemon, and it doesn't work with SNTP.

anycast - probably the best solution for SNTP clients; it may also be 
useful for non-v4 clients, which can't do manycast.  It may break 
authentication, depending on how keys are managed, but I haven't 
actually tested this.  (If all anycast servers have the same key pair, 
will authentication break when a client switches servers due to a 
routing change?)

multicast - Still a good solution for v3 clients that want simple 
configuration.  A lot of people still *ask* about multicast NTP, since 
that's the config some OS vendors ship.
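
Here are the rough client-side config sketches I mentioned; the group 
addresses, key number, and file paths are placeholders, and 
192.0.2.123 again stands in for an anycast address:

    # multicast (v3 or v4): just listen on the standard group
    multicastclient 224.0.1.1

    # manycast (v4 only): discover servers on an administratively
    # scoped group, then mobilize real authenticated associations
    manycastclient 239.192.0.1 key 1
    keys /etc/ntp.keys
    trustedkey 1

    # anycast: indistinguishable from a normal unicast server line
    server 192.0.2.123

On the servers, the corresponding lines are roughly "broadcast 
224.0.1.1 ttl 4" and "manycastserver 239.192.0.1"; the anycast address 
is just a loopback alias plus a route injected into the IGP.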

Then of course, there's the good old configure-4+-ntp-servers-manually, 
which is good for important boxes that really need good time.  (I have 
one box that does all three of the above, plus has manually configured 
servers.)
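
For completeness, the manual version, with placeholder hostnames 
rather than real servers:

    # four manually configured servers, enough for the clock-selection
    # algorithm to out-vote a single falseticker
    server ntp1.example.edu
    server ntp2.example.edu
    server ntp3.example.edu
    server ntp4.example.edu
    driftfile /etc/ntp.drift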

Any thoughts or comments on the advantages and disadvantages of the 
above techniques are welcome, as well as corrections.

michael


