update

Pete Carah pete at altadena.net
Mon Sep 29 05:32:49 UTC 2014


On 09/28/2014 04:50 PM, Valdis.Kletnieks at vt.edu wrote:
> On Sun, 28 Sep 2014 15:06:18 -0600, "Keith Medcalf" said:
>
>>
>> Sorry to disappoint, but those are not changes that make the system more
>> vulnerable.  They are externalities that may change the likelihood of
>> exploitation of an existing vulnerability, but they do not create any new
>> vulnerability.  Again, if the new exploit were targeting a vulnerability
>> which was fully mitigated already and thus could not be exploited, there
>> has not even been a change in likelihood of exploit or risk.
> So tell us Keith - since you said earlier that properly designed systems will
> already have 100% mitigations against these attacks _that you don't even know
> about yet_, how exactly did you design these mitigations?  (Fred Cohen's thesis,
> where he shows that malware detection is equivalent to the Turing halting
> problem, is actually relevant here.)
>
> In particular, how did you mitigate attacks that are _in the data stream
> that you're charging customers to carry_? (And yes, there *have* been
> fragmentation attacks and the like - and I'm not aware of a formal proof
> that any currently shipping IP stack is totally correct, either, so there
> may still be unidentified attacks).
>
>
For that matter, has the *specification* of TCP/IP been proven "correct"
in any complete way?  In particular, is it correct in any useful sense in
the face of attackers who are ready and willing to spoof any possible
feature of TCP/IP and its applications?  (E.g. spam protection involves
turning several MUST clauses from RFC 821/822 into MUST NOT; the specs
themselves were not reasonable in the face of determined attackers.  And
unfortunately Postel's famous maxim also fails against determined
attackers, or at least the definition of "liberal" has to be tempered a
lot.)

The main serious effects of Slammer, for example, came not directly from
the worm itself, but from the fact that the worm generated random target
addresses (at about 30k pps on a 100 Mbit Ethernet) without regard to
"address family", and most default routers' CPUs were overwhelmed issuing
"network unreachable" ICMP messages (which I think were a MUST at the
time).  Now that we've run out of spare v4 addresses, we wouldn't get so
many unreachables :-)  Fortunately most of our vulnerable customers back
then were behind T1s, so it didn't hit us too badly.  The only real
workaround at the time was to disable unreachables, which has other
undesirable side effects when the customers are mostly behaving.
(Unplugging worked best.  And at least that worm didn't write anything to
disk, so a reboot fixed it for a while...)
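A quick back-of-envelope check supports that ~30k pps figure.  This sketch
assumes the commonly cited ~404-byte on-the-wire Slammer packet (376-byte
payload plus UDP/IP/Ethernet headers) and ignores framing overhead
(preamble, inter-frame gap), so it is a rough upper bound, not a measured
number:

```python
# Rough check on the ~30k pps figure for Slammer saturating 100 Mbit
# Ethernet.  PKT_BYTES is the commonly cited on-the-wire packet size
# (an assumption here, not from this thread).

LINK_BPS = 100_000_000   # 100 Mbit/s Ethernet
PKT_BYTES = 404          # ~376-byte payload + UDP/IP/Ethernet headers

pps = LINK_BPS / (PKT_BYTES * 8)
print(f"~{pps:,.0f} packets/sec")

# Nearly every packet aimed at an unrouted random address asks the edge
# router to generate an ICMP "network unreachable" - which is the load
# that melted router CPUs.
```

At roughly 31k pps per infected host, even a handful of infections behind
a router could demand tens of thousands of ICMP responses per second.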

I know that there are features in the spec (e.g. fragmentation, source
routing, even host/net unreachables) that fall down badly in the face of
purposeful attacks.  And there are things like the AS7007 incident, where
data passing between only a few nodes caused a very large-scale cascading
failure across the larger internet.  Jay mentioned another; there were
approximations to 7007 a few times a year for several years after that
incident (one involved routing the whole internet through Sweden).
Mitigations for things like that are time-varying (e.g. the IRR, if
enough people actually used it - and in the face of TCP/IP forgery, that
isn't sufficient either), and indeed one has to include the entire
internet (and all of the people running it) as part of the system for
that kind of analysis to mean anything.  And can anyone prove whether any
one of the 7007-like incidents was really an accident?  For that matter,
maybe they were "normal accidents" (see Perrow - recommended reading for
anyone involved with complex systems, which the internet certainly is;
the internet is *much* more complex than the phone system, and there must
be people on NANOG who remember the fallout from the Hinsdale fire - fast
busy, anyone?  Even in Los Angeles...).  If one believes that 7007 was
accidental, then it was induced by a combination of a misfeature in IOS
(11.x?) - mitigated by some of the "address-family" features in 12.x,
though I don't know if that was the intent - and a serious lack of
documentation in matters concerning BGP, plus a lack of error-checking
outside that site (though the IRR was the only source of check-data at
the time, and was (still is?) woefully incomplete).
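The mechanics of why a leak like 7007 propagates so destructively come
down to longest-prefix-match forwarding: a leaked more-specific route
beats the legitimate shorter aggregate no matter how bogus its origin.  A
minimal sketch (the prefixes and AS numbers below are illustrative
documentation values, not the actual 1997 table):

```python
# Why a route leak of more-specifics hijacks traffic: routers always
# prefer the longest matching prefix, regardless of AS path.
# Prefixes/ASNs are made up for illustration (RFC 5737 / RFC 5398 ranges).
import ipaddress

rib = {
    ipaddress.ip_network("203.0.113.0/24"): "AS7007 (leaked more-specific)",
    ipaddress.ip_network("203.0.0.0/16"):   "AS64500 (legitimate aggregate)",
}

def best_route(dst: str) -> str:
    """Longest-prefix-match lookup over the toy RIB."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in rib if addr in net]
    return rib[max(matches, key=lambda n: n.prefixlen)]

print(best_route("203.0.113.9"))   # the leaked /24 wins
print(best_route("203.0.200.1"))   # only the aggregate covers this one
```

Since 7007 deaggregated huge swaths of the table into /24s, nearly every
destination suddenly had a "better" (longer) route pointing the wrong way.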

The halting problem comes up in connection with _data_ handling on any
computer with even a language interpreter.  (E.g. is browser-based
JavaScript Turing-complete enough for the halting problem to apply to it?
I think so.  Java certainly is, though most browser-based Java is
supposed to be sandboxed.  Perl, Python, Ruby, and PHP all are.)  And on
any real piece of equipment: can any of these make permanent state
changes to the retained memory (flash or disk) in the computer?  If so,
then this halting-problem equivalence gives us trouble even if no changes
are made to any executable programs (including shared libs) that came
with the computer.  (Especially true if the box is a router with any sort
of dynamic routing, even ARP/NDP.)
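The undecidability argument mentioned upthread can be sketched in a few
lines.  The detector below is a hypothetical stand-in (a naive signature
match, my invention for illustration); Cohen's point is that *any* claimed
perfect detector can be fed a program that inverts its verdict:

```python
# Sketch of the undecidability argument for perfect malware detection.
# `is_malicious` is a hypothetical stand-in detector (naive signature
# match); no total, always-correct version can exist.

def is_malicious(source: str) -> bool:
    # Stand-in "detector": flags anything mentioning do_harm().
    return "do_harm()" in source

# A contrary program: it misbehaves only if the detector calls it benign.
contrary_source = """
if not is_malicious(contrary_source):
    do_harm()   # runs only when the detector said "benign"
"""

# Whatever the detector answers, it is wrong about this program:
# "benign" -> the program misbehaves; "malicious" -> it does nothing bad.
verdict = is_malicious(contrary_source)
print("detector verdict:", "malicious" if verdict else "benign")
```

Here the naive detector says "malicious", yet given that verdict the
program never executes the harmful branch - a guaranteed false positive;
flip the detector and you get a guaranteed false negative instead.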

How, for example, am I to know a priori, for all possible <script src=
strings, the difference between a third-party download of jQuery and a
malicious downloader *before running the download* (for that matter, even
after)?  Whitelisting does not help here in the face of attackers who
spoof protocol features - in this case filenames and/or DNS.  NoScript
and related tools (e.g. Chrome's script protection) whitelist whole
sites, which is nowhere near granular enough, yet even then is a pain for
the user to manage.  If the user is the proverbial grandmother, it is
probably impossible to manage.
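To make the filename-spoofing point concrete, here is a toy whitelist
check (the hosts are made up; "evil.example" stands for any attacker host
that can serve a file named jquery.min.js, or spoof DNS for a real CDN):

```python
# Why filename-based whitelisting of <script src=...> fails: the
# filename carries no trust information.  Hosts below are illustrative.
from urllib.parse import urlparse

WHITELISTED_FILES = {"jquery.min.js"}

def allowed_by_filename(src: str) -> bool:
    """Naive check: permit the script if its filename is whitelisted."""
    filename = urlparse(src).path.rsplit("/", 1)[-1]
    return filename in WHITELISTED_FILES

legit = "https://code.jquery.com/jquery.min.js"
spoofed = "https://evil.example/libs/jquery.min.js"

print(allowed_by_filename(legit), allowed_by_filename(spoofed))  # both pass
```

Both URLs pass, and whitelisting by host instead just moves the problem
to DNS, which the attacker may also control.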

-- Pete
