Dyn DDoS this AM?

LHC large.hadron.collider at gmx.com
Mon Oct 24 08:25:09 UTC 2016


All this TTL talk makes me think.

Why not have two TTLs: a 'must-recheck' (does not expire the record, but forces a recheck and updates the record if the server replies and the serial has incremented) and a 'must-delete' (at which point the cached record is stale and must be dropped)?
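
For illustration, a rough sketch of what a cache entry with those two timers could look like (hypothetical names, not taken from any real resolver; a real implementation would use RFC 1982 serial arithmetic and push the timers forward after a successful recheck):

    import time

    class TwoTTLEntry:
        """Hypothetical cache entry with the two timers described above."""

        def __init__(self, rrset, serial, recheck_ttl, delete_ttl):
            now = time.monotonic()
            self.rrset = rrset
            self.serial = serial
            self.recheck_at = now + recheck_ttl  # 'must-recheck': still usable, but ask upstream
            self.delete_at = now + delete_ttl    # 'must-delete': treat as gone after this

        def lookup(self, refresh_from_auth):
            """refresh_from_auth() should return (rrset, serial), or None on failure."""
            now = time.monotonic()
            if now >= self.delete_at:
                return None                      # hard expiry: behave like a cache miss
            if now >= self.recheck_at:
                fresh = refresh_from_auth()      # re-query the authoritative server
                if fresh is not None and fresh[1] > self.serial:
                    self.rrset, self.serial = fresh
            return self.rrset                    # serve current data either way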

On October 23, 2016 3:42:58 PM PDT, Mark Andrews <marka at isc.org> wrote:
>
>In message <CADJJukkadFbOYvWVan_8pdR=fxenqGRsyisiKBH6vpyDse6JrQ at mail.gmail.com>, Masood Ahmad Shah writes:
>> >
>> > > On Oct 21, 2016, at 6:35 PM, Eitan Adler <lists at eitanadler.com> wrote:
>> > >
>> > > [...]
>> > >
>> > > In practice TTLs tend to be ignored on the public internet. In past
>> > > research I've been involved with, browser[0] behavior was effectively
>> > > random despite the TTL set.
>> > >
>> > > [0] more specifically, the chain of DNS resolution and caching down
>> > > to the browser.
>> >
>> >
>> > Yes, but that it can be both better and worse than your TTLs does not
>> > mean that you can ignore properly working implementations.
>> >
>> > If the other end's device chain breaks you, that's their fault and out
>> > of your control.  If your own settings break you, that's your fault.
>> >
>>
>> +1 to what George wrote: we should make efforts to improve our part of
>> the network. There are ISPs that ignore TTL settings and only update
>> their cached records every two to three days or even more (particularly
>> the smaller ones). OTOH, this results in your DNS data being
>> inconsistent, but it's very common to cache DNS records at multiple
>> levels. It's an effort that everyone needs to contribute to.
>
>For TTL there is a tension between being able to update with new
>data and resilience when servers are unreachable.  For zone transfers
>we have three timers, refresh, retry and expire, to deal with this
>tension.  If we were doing DNS from scratch there would be at least two
>TTL values: one for freshness and one for "don't use past".
>
>Additionally, a lot of the need for small TTLs is because clients
>don't fail over to a second address in a reasonable amount of time.
>There is no reason for this other than poorly designed clients.  A
>client can fail over using sub-second timers.  We do this for Happy
>Eyeballs.  This strategy is viable for ALL connection attempts.
>
>Mark
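
On the failover point, a minimal sketch of sub-second failover across a host's addresses, in the spirit of (but much simpler than) Happy Eyeballs; the 0.3 s per-attempt timeout is only an illustrative value, not a recommendation:

    import socket

    def connect_any(host, port, attempt_timeout=0.3):
        """Try each of the host's addresses in turn with a sub-second budget."""
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(attempt_timeout)    # fail fast instead of the OS default
            try:
                s.connect(sockaddr)
                s.settimeout(None)           # hand back a normal blocking socket
                return s
            except OSError as err:
                last_err = err
                s.close()
        raise OSError("all addresses for %s failed" % host) from last_err

    # e.g. sock = connect_any("www.example.com", 443)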

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


