Transaction Based Settlements Encourage Waste (was Re: BBN/GTEI)

Dave Rand dlr at bungi.com
Sat Aug 22 15:36:21 UTC 1998


[In the message entitled "Re: Transaction Based Settlements Encourage Waste (was Re: BBN/GTEI)" on Aug 22,  7:55, Michael Dillon writes:]
> On Sat, 22 Aug 1998, Dave Rand wrote:
> 
> > All I can say is that if settlements are based on who is receiving
> > the traffic, there will be a huge increase in the amount of money
> > that networks pay to attract everyone's favorite smurf target,
> > the IRC servers.  A 2-day-long 100 Mbps smurf will be _encouraged_ :-)
> 
> The peer whose network the SMURF originates in will trace that back to the
> source so fast you'll smell the rubber burning. DoS based on large numbers
> of packets will be a thing of the past.
> 

_Sure_ they will.  Someday.  But they won't be able to contact the
customer/university until Monday morning, the two days of smurf fun will
have elapsed, and that will adjust the 95th percentile data for the month
appropriately.
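
For the arithmetic-inclined, here is a rough sketch (Python, numbers
invented) of why a 48-hour burst survives 95th-percentile sampling: the
percentile throws away only the top 5% of samples, about 36 hours in a
30-day month, so two days of attack traffic is exactly what ends up on
the bill.

# Rough sketch, assuming 5-minute samples and a 30-day month.
# The 5 Mbps baseline and the 100 Mbps attack rate are made-up numbers.
samples_per_month = 30 * 24 * 12        # 8640 five-minute samples
usage = [5.0] * samples_per_month       # Mbps, hypothetical normal load
smurf_samples = 2 * 24 * 12             # 48 hours of smurf = 576 samples
for i in range(smurf_samples):
    usage[i] = 100.0                    # 100 Mbps inbound for two days
usage.sort()
p95 = usage[int(len(usage) * 0.95)]     # the billable 95th-percentile rate
print(p95)                              # 100.0 -- the smurf sets the bill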

But the fact is that the vast majority of networks do not charge for
bandwidth used - and _that_ (not the direction) is the core of the problem.
If you bill customers for what they _use_, versus the size of the pipe, you
have:

1) A positive incentive for them to control the amount of data they use.
   If they are sourcing data, they will make sure that they adjust their
   graphics and so on to be efficient.  I _know_ that customers do
   this, on at least one major network.
   If they are sinking data, they have a reason to put in a cache and
   better manage their users.  I _know_ that customers do this, on at
   least one major network.

2) A positive incentive for the network owner to keep ahead of bandwidth
   demand.  Currently, if a company buys a T1, the NSP has _no_ incentive
   to ever permit the customer to use more than 300 bps over it.  In fact,
   there is a negative incentive to provide adequate bandwidth.  You make
   _more_ money by _not_ upgrading the backbone, and overselling the crap
   out of it.  I _know_ that NSPs do this, on at least one major network.
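
To make the incentive difference concrete, here is a toy comparison
(Python, all prices invented).  Under flat pipe pricing the NSP's revenue
is the same whether the customer pushes 300 bps or fills the T1, so every
extra bit delivered is pure cost; under usage pricing, revenue tracks the
traffic the customer actually moves.

# Toy incentive comparison; the prices are hypothetical.
T1_FLAT_PRICE = 1200.0        # $/month for the pipe, however little is used
PER_MBPS_PRICE = 300.0        # $/month per Mbps of 95th-percentile usage

def flat_bill(mbps):
    # Revenue is fixed, so every bit delivered is nothing but backbone cost.
    return T1_FLAT_PRICE

def usage_bill(mbps):
    # Revenue grows with what the customer actually moves.
    return PER_MBPS_PRICE * mbps

for mbps in (0.0003, 0.5, 1.5):   # 300 bps, half a T1, a full T1
    print(mbps, flat_bill(mbps), usage_bill(mbps))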

Currently, there is a strong incentive for networks that source a lot of
data to better interconnect to networks that sink a lot of data.  As a
classic example, the data rate to AOL doubled the day a large network added
a private interconnect to AOL instead of going through ANS.  Was this
because ANS was underprovisioned, because the TCP implementation's slow
start algorithm sucked, or because the latency was reduced?  Under the new
model you propose, who cares?  Private interconnects like this would just
Not Happen.  That makes the net worse for everyone.

Large-scale interconnections between networks are a good thing.
Settlement-based peering may be a way to get there, but the metrics for
how to do it are not clear.  Strictly looking at the size of the flows
between peers does not appear to be the correct way to measure this.
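
As one illustration of why raw flow size is a poor metric, consider a toy
settlement (Python, rate invented) where the net receiver of traffic gets
paid per gigabyte of imbalance.  Nothing in the byte count distinguishes
requested traffic from a smurf, so the two-day 100 Mbps attack above
(roughly 2.1 TB) simply raises the payout to the network hosting the target.

# Toy volume-based settlement; the rate is hypothetical.
RATE_PER_GB = 0.05                    # $ per GB of traffic imbalance

def settlement(sent_gb, received_gb):
    # Positive -> this network collects; negative -> it pays.
    return (received_gb - sent_gb) * RATE_PER_GB

print(settlement(2000, 9000))         # ordinary traffic imbalance
print(settlement(2000, 9000 + 2100))  # add ~2.1 TB of smurf: bigger payout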

-- 
Dave Rand
dlr at bungi.com
http://www.bungi.com


