What could have been done differently?

Iljitsch van Beijnum iljitsch at muada.com
Tue Jan 28 19:14:17 UTC 2003


Sean Donelan wrote:
> Many different companies were hit hard by the Slammer worm, some with
> better than average reputations for security awareness.  They bought
> the finest firewalls, they had two-factor biometric locks on their data
> centers, they installed anti-virus software, they paid for SAS70
> audits by the premier auditors, they hired the best managed security
> consulting firms.  Yet, they still were hit.

> It's not as simple as don't use Microsoft, because worms have hit other
> popular platforms too.

As a former boss of mine was fond of saying when someone made a stupid 
mistake: "It can happen to anyone. It just happens more often to some 
people than others."

> Are there practical answers that actually work in the real world with
> real users and real business needs?

As this is still a network operators' forum, let's get this out of the 
way: any time you put a 10 Mbps Ethernet port in a box, expect that it 
will have to deal with 14 kpps at some point. 100 Mbps -> 148 kpps, 
1000 Mbps -> 1488 kpps. And each packet is a new flow. There are still 
routers being sold that have the interfaces but can't handle the 
maximum traffic. Unfortunately, router vendors prefer to lure customers 
toward boxes that can forward these amounts of traffic at wire speed 
rather than implement features in their lower-end products that would 
let a box drop the excess traffic in a reasonable way.
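
Those per-interface numbers follow directly from the minimum Ethernet 
frame size; a quick back-of-the-envelope sketch (assuming 64-byte 
frames plus 8 bytes of preamble and a 12-byte inter-frame gap):

    # Wire-speed packet rates for minimum-size Ethernet frames.
    # On the wire each packet occupies 64 bytes of frame plus 8 bytes
    # of preamble and a 12-byte inter-frame gap: 84 bytes = 672 bits.
    BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8

    for mbps in (10, 100, 1000):
        pps = mbps * 1000000 // BITS_PER_MIN_FRAME
        print("%4d Mbps -> %7d packets/second" % (mbps, pps))

    #   10 Mbps ->   14880 packets/second
    #  100 Mbps ->  148809 packets/second
    # 1000 Mbps -> 1488095 packets/second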

But then there is the real source of the problem. Software can't be 
trusted. It doesn't mean much that 1,000,000 lines of code are correct: 
if one line is incorrect, something really bad can happen. Since we 
obviously can't make software do what we want it to do, we should focus 
on making it not do what we don't want it to do. This means every piece 
of software must be encapsulated inside a layer of restrictive measures 
that operate with sufficient granularity. In Unix, this is 
traditionally done per-user. Regular users can do a few things, but the 
super-user can do everything. If a user must do something that regular 
users can't do, the user must obtain super-user privileges and then 
refrain from using these absolute privileges for anything other than 
the intended purpose. This doesn't work. If I want to run a web server, 
I should be able to give a specific piece of web serving software 
access to port 80, and not also to every last bit of memory or disk 
space.
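
To make that concrete, here is a minimal sketch (purely illustrative 
Python; the uid/gid values are placeholders) of the traditional model: 
the process needs full super-user privileges just to touch port 80, and 
is then trusted to give them up voluntarily:

    import os
    import socket

    # Traditional Unix model: binding a port below 1024 requires being
    # the super-user, so the whole process starts with absolute power.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 80))    # needs root; there is no "just port 80" right
    sock.listen(5)

    # Afterwards the process is expected to drop to an unprivileged
    # account on its own. 65534 ("nobody") is a placeholder value.
    os.setgid(65534)
    os.setuid(65534)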

Another thing that could help is having software ask permission from 
some central authority before it gets to do dangerous things, such as 
running services on UDP port 1434. The central authority can then keep 
track of what's going on and revoke permissions when it turns out the 
server software is insecure. Essentially, we should firewall on 
software versions as well as on the traditional TCP/IP variables.
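
A hypothetical sketch of such a check (every name and the revocation 
list below are invented for illustration, not an existing mechanism):

    # Hypothetical "permission broker": a host could consult this before
    # allowing a piece of software to open a service port, and the list
    # of revoked versions could be updated centrally.
    REVOKED = {
        ("exampled", "1.0"): "buffer overflow in request parser",
    }

    def may_listen(software, version, port, proto):
        """Grant or refuse a service by software version as well as by
        the traditional TCP/IP variables (port, protocol)."""
        if (software, version) in REVOKED:
            return False
        # Ordinary port/protocol policy checks would go here too.
        return True

    print(may_listen("exampled", "1.0", 1434, "udp"))   # False
    print(may_listen("exampled", "1.1", 1434, "udp"))   # True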

And it seems parsing protocols is a very difficult thing to do right 
with today's tools. The SNMP fiasco of not long ago shows as much, as 
does the new worm. It would probably be a good thing if the IETF built 
a good protocol parsing library so implementors no longer have to do 
this "by hand" and skip over all that pesky bounds checking. Generating 
and parsing headers for a new protocol would then no longer require new 
code, but could be done by defining a template of some sort. The 
implementors could then focus on the functionality rather than on which 
bit goes where. Obviously there would be a performance impact, but the 
same goes for coding in higher-level languages rather than assembly. 
Moore's law and optimizers are your friends.
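
As a toy illustration of the template idea (the header layout is 
invented, and Python's struct module stands in for the hypothetical 
library):

    import struct

    # The header layout is a declarative template; the library does the
    # size and bounds checking instead of hand-written pointer tricks.
    HEADER = struct.Struct("!BBHI")   # version, type, length, request id

    def parse_header(packet):
        if len(packet) < HEADER.size:        # refuse truncated packets
            raise ValueError("truncated header")
        version, msg_type, length, request_id = HEADER.unpack_from(packet)
        if length > len(packet):             # declared length must fit
            raise ValueError("declared length exceeds packet size")
        return version, msg_type, length, request_id

    print(parse_header(b"\x01\x02\x00\x08\x00\x00\x00\x2a"))
    # -> (1, 2, 8, 42)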



