What if it doesn't affect the ISP? (was Re: What do you want your ISP to block today?)

Iljitsch van Beijnum iljitsch at muada.com
Sun Aug 31 09:43:07 UTC 2003


On zaterdag, aug 30, 2003, at 20:54 Europe/Amsterdam, Sean Donelan 
wrote:

>> Only if it impacts the ISP, which it doesn't most of the time unless
>> they buy an unfortunate brand of dial-up concentrators.

> Bits are bits, very few of them actually impact the ISP itself. Most
> ISPs protect their own infrastructure. Routers are very good at
> forwarding bits.  Routers have problems filtering bits. Whether it is
> spam, viruses or other attacks; it's mostly customers or end-users that
> bear the brunt of the impact, not the ISP.

Impact can be more than ISP equipment getting into trouble. It can also 
be congestion or excessive bandwidth use caused by incoming abusive 
traffic or by infected customers.

> The recurring theme is: I don't want my ISP to block anything I do, but
> ISPs should block other people from doing things I don't think they
> should do.

Actually, this doesn't have to be the paradox it seems to be. If we can 
find a way to make sure at the source that the destination welcomes the 
communication, we can have both.

> So how long is reasonable for an ISP to give a customer to fix an
> infected computer; when you have cases like Slammer where it takes only
> a few minutes to infect the entire Internet?  Do you wait 72 hours?
> or until the next business day? or block the traffic immediately?

> Or some major ISPs seem to have the practice of letting the infected
> computers continue attacking as long as it doesn't hurt their
> network.

Let's first look at the reverse situation: infectious traffic coming 
in. Customers may take the position that it is in their best interest 
for their ISP to filter this traffic forever, so that they can't get 
infected regardless of whether they patch their systems. But it isn't 
realistic to expect ISPs to do this.

First of all, in many cases the vulnerability is in a service that 
also has legitimate uses. Sometimes this isn't much of a problem: with 
the Slammer worm, for instance, blocking the affected port didn't 
really impact the SQL service, and with Blaster filtered, Windows file 
sharing no longer works, but since that isn't a public service, the 
people who need it can run it over a secure tunnel of some kind. 
However, shutting down port 80 because an HTTP implementation has a 
vulnerability wouldn't be acceptable because of the collateral damage.
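
To make this concrete: a block like this is just a deny entry or two 
per worm port. Below is a rough sketch that generates IOS-style 
access-list lines; it is purely illustrative, the ACL number and exact 
syntax are assumptions on my part, and the ports are just the ones 
commonly cited for these worms. Note that it deliberately leaves port 
80 alone.

# Illustrative sketch only: the port numbers are the commonly cited
# ones (Slammer: 1434/udp, Blaster: 135/tcp) and the ACL number is
# arbitrary; real deployments vary per platform and per policy.

WORM_PORTS = [
    ("udp", 1434, "SQL Slammer"),
    ("tcp", 135, "Blaster / MS RPC"),
]

def acl_lines(acl_number=150):
    lines = []
    for proto, port, name in WORM_PORTS:
        lines.append("access-list %d remark block %s" % (acl_number, name))
        lines.append("access-list %d deny %s any any eq %d"
                     % (acl_number, proto, port))
    # deliberately no "deny tcp any any eq 80": blocking HTTP would
    # break far too much legitimate traffic
    lines.append("access-list %d permit ip any any" % acl_number)
    return lines

print("\n".join(acl_lines()))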

Then there are the questions of whether ISPs can even do this in the 
first place, and of how effective it would be. If ISPs were to filter 
everything, everywhere, forever, maybe this would be effective, but 
nearly all equipment takes a performance hit when it has to filter, 
the hit usually gets worse as the filters get bigger, and there are 
limits to how long a filter can be. On top of that, there is the 
management issue: with 100k ADSL customers, you need to apply filters 
to 100k interfaces on hundreds of boxes. So in reality ISPs can only 
maintain a limited number of filter rules in a limited number of 
places. While this gets rid of most of the infectious traffic for as 
long as the filter is in place, it doesn't really protect customers: 
when one customer is infected, the infection can still spread to other 
customers (most worms are optimized for this) unless the ISP has put 
filters on all customer ports. And we've seen that worms are often 
carried from location to location in infected laptops.
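
Some back-of-envelope arithmetic, with made-up but plausible numbers, 
shows why the management side hurts:

# Back-of-envelope sketch of the management problem; the numbers are
# illustrative assumptions, not measurements from any real network.

customers = 100000       # ADSL ports, one filter per customer interface
boxes = 300              # aggregation routers ("hundreds of boxes")
rules_per_filter = 20    # entries in the per-port worm filter

total_entries = customers * rules_per_filter
print("%d filter entries to install and keep consistent" % total_entries)
print("%d router configs to touch for every new worm" % boxes)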

And then, when the filter rules have to go (for instance because 
there is a new worm du jour), experience shows there is still some 
infectious traffic around, however long after the initial outbreak, so 
at some point a vulnerable system WILL be infected.

Last but not least: if ISPs filter X worms, and worm X+1 then 
presents itself and proves unfilterable, things get really bad, 
because users were depending on ISP action to prevent infection rather 
than taking their own measures. This could even lead to legal problems 
for ISPs.

Bottom line: unless ISPs explicitly want to take on this responsibility 
and invest in heavier equipment and very advanced network management, 
the best they can do is take the edge off by implementing some 
filtering that allows their users a little more time to patch their 
systems.

Then there is the other side of the coin: infected customers. I 
mostly work for content hosters these days, and there the situation is 
slightly different from the one access ISPs are facing, as the number 
of customers is much smaller and the bandwidth per customer is much 
larger. So one customer can do much more damage, either by causing 
congestion in the local network or by driving up the bandwidth use on 
external connections (which is expensive because of the usual 95th 
percentile billing). There have been several cases over the past year 
where my customers shut down the ports of infected customers of theirs 
(sometimes lowering the port speed to 10 Mbps is a good compromise). 
But since this leads to many phone calls, I can imagine that doing 
this for every infected customer may be a problem for ISPs with many 
dial/ADSL/cable customers. Also, if the bandwidth use isn't too 
excessive, it may not always be apparent that a customer is infected.
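
To illustrate why that is expensive: here is a rough sketch of the 
usual 95th percentile calculation, with made-up numbers. A worm that 
saturates a port for even a few days ends up setting the billable rate 
for the whole month.

# Rough sketch of 95th percentile billing; sample values are made up.

def ninety_fifth_percentile(samples):
    # samples: 5-minute traffic averages (Mbps) for the billing month
    ordered = sorted(samples)
    # throw away the top 5% of samples, bill on the highest one left
    index = int(len(ordered) * 0.95) - 1
    return ordered[max(index, 0)]

# a month of 5-minute samples at a ~30 Mbps baseline...
month = [30.0] * (12 * 24 * 30)
# ...with three days of a worm saturating a 100 Mbps port
month[:12 * 24 * 3] = [100.0] * (12 * 24 * 3)

print("billable rate: %.0f Mbps" % ninety_fifth_percentile(month))
# prints 100, not 30: three days is about 10% of the month, well above
# the 5% of samples the billing method discards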



