Two Tiered Internet

Steve Gibbard scg at gibbard.org
Wed Dec 14 19:47:23 UTC 2005


On Wed, 14 Dec 2005, Marshall Eubanks wrote:

> To me, this seems likely to lead to massive consumer dissatisfaction,
> and a disaster of the magnitude of the recent Sony CD rootkit fiasco.
>
> Typical Pareto distribution models for usage mean that no matter
> how popular "tier 1" sites are, a substantial part of the user time will be 
> spent on degraded "tier 2" sites.
>
> If these don't work, people will complain. Just imagine for a second
> that cable providers started a service that meant that every channel
> not owned by, say, Disney, had a bad picture and sound. Would this be
> good for the cable companies? Would their customers be happy?
>
> Of course, based on some recent experience, this probably means that
> this will be adopted enthusiastically.

I'm seeing a lot of comments here that appear to treat this as a binary 
issue -- either it's OK, or it will cause customers to defect en masse 
to the competition.  This seems to ignore questions of how it would be 
implemented, and what the competition's offering would be.

If I've got a choice between two providers, both of which are offering a 
3 Mb/s pipe, but one of them restricts services from other networks to 
half of that pipe, then one provider is effectively offering half the 
Internet bandwidth the other offers.

On the other hand, there could be a scenario in which one network offered 
a 3 Mb/s unrestricted pipe, while the other offered a 6 Mb/s pipe, with 
prioritized traffic potentially eating 2 Mb/s of it.  That would still 
be 4 Mb/s of unrestricted traffic vs. the other provider's 3 Mb/s.
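
To make the arithmetic concrete, here's a quick back-of-the-envelope 
sketch in Python (the figures are just the hypothetical ones above, not 
anyone's actual offering):

    def unrestricted_capacity(pipe_mbps, reserved_mbps):
        """Bandwidth left over for non-prioritized traffic."""
        return pipe_mbps - reserved_mbps

    # Scenario 1: equal 3 Mb/s pipes; one provider caps traffic from
    # other networks at half the pipe.
    print(unrestricted_capacity(3.0, 0.0))  # 3.0 Mb/s
    print(unrestricted_capacity(3.0, 1.5))  # 1.5 Mb/s -- half the Internet

    # Scenario 2: a 6 Mb/s pipe with prioritized traffic eating up to
    # 2 Mb/s still leaves more unrestricted capacity than a 3 Mb/s pipe.
    print(unrestricted_capacity(6.0, 2.0))  # 4.0 Mb/s vs. 3.0 Mb/s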

In other words, a provider with sufficiently better last mile technology 
than the competition should be able to do lots of stuff like this and 
still come out ahead.  Providers in markets that are technologically more 
even might have more trouble.

That assumes rate limiting in the last mile.  If what's instead being 
talked about is QoS tagging of last mile packets, that should be 
completely irrelevant to those who don't use the services that are 
prioritized.
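
For what it's worth, "QoS tagging" in this sense just means setting bits 
in the packet header (the IP ToS/DSCP field), not dropping or shaping 
anything.  A minimal sketch of application-side marking, assuming a 
Linux host (routers are free to honor or ignore the mark, which is why 
it's a no-op for everyone else's traffic):

    import socket

    # DSCP "Expedited Forwarding" is 46; the DSCP value occupies the
    # upper six bits of the old ToS byte, hence the shift.
    EF_TOS = 46 << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    # Subsequent packets from this socket carry the EF mark; on links
    # with no QoS policy the mark changes nothing.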

Of course, if they're restricting capacity in the backbone and using QoS 
there, that may be a different story, but that seems unlikely to be what's 
being talked about.  Backbone congestion doesn't tend to happen much in 
major American cities these days, but individual DSL lines saturate pretty 
easily.

-Steve



