PPP multilink help

Matthew Huff mhuff at ox.com
Mon May 11 16:32:03 UTC 2009


I would also think the problem is flow control not allowing the maximum bandwidth to be used. Trying multiple FTP streams in parallel and seeing whether that maxes out the links would help.
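One generic way to check that from the router side (standard show commands, not from the original thread) is to watch the 30-second rates on the bundle and member links during a single transfer, then again with several parallel streams:

! generic check; interface names taken from the config quoted below
show interfaces Multilink1 | include 30 second
show interfaces Serial0/0/0:1 | include 30 second
show interfaces Serial0/0/1:1 | include 30 second

If several parallel streams fill the T1s but a single stream doesn't, the limit is per-flow behavior rather than the circuits.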

I would think you would want to add WRED to the class-default entry to prevent global TCP synchronization:

...
class class-default
  fair-queue 4096
  random-detect dscp-based
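For context, the full policy-map from the config below would then look something like this (a sketch only; WRED thresholds left at their defaults):

! sketch: the existing QWAS policy with WRED added to class-default
policy-map QWAS
 class VoIP
    priority 100
 class interactive
    bandwidth 500
 class class-default
    fair-queue 4096
    random-detect dscp-based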

----
Matthew Huff       | One Manhattanville Rd
OTA Management LLC | Purchase, NY 10577
http://www.ox.com  | Phone: 914-460-4039
aim: matthewbhuff  | Fax:   914-460-4139


-----Original Message-----
From: Rodney Dunn [mailto:rodunn at cisco.com] 
Sent: Monday, May 11, 2009 12:06 PM
To: Andrey Gordon
Cc: nanog at nanog.org
Subject: Re: PPP multilink help

On Mon, May 11, 2009 at 10:37:25AM -0400, Andrey Gordon wrote:
> Hey folks, I'm sure to you it's peanuts, but I'm a bit puzzled (most likely
> because of my lack of knowledge, I bet).
> 
> I'm buying an IP backbone from VNZ (presumably MPLS). I get an MLPPP hand-off
> at all sites, so I don't do the actual labeling and switching; for practical
> purposes, I have no physical control over the other side of my MLPPP links.
> 
> When I transfer a large file over FTP (or CIFS, or anything else), I'd
> expect it to max out either one or both T1s,

Most MLPPP implementations don't hash the flows at the IP layer to an
individual MLPPP member link. The bundle is a virtual L3 interface and
the packets themselves are distributed over the member links. Some people
refer to it as a "load balancing" scenario vs. "load sharing", as the
traffic is given to the link that isn't currently "busy".
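For contrast, a hypothetical non-bundled setup (not something suggested in this thread; addresses made up) behaves differently: with two routed T1s and equal-cost routes, CEF's default per-destination load sharing keeps a single flow on one T1, while the MLPPP bundle spreads that flow's packets across both members.

! hypothetical alternative: two routed T1s instead of one MLPPP bundle
interface Serial0/0/0:1
 ip address 192.0.2.1 255.255.255.252
!
interface Serial0/0/1:1
 ip address 192.0.2.5 255.255.255.252
!
! two equal-cost default routes; CEF per-destination (per-flow) load
! sharing pins a single TCP session to one T1 (~1.5 Mbps max per flow)
ip route 0.0.0.0 0.0.0.0 192.0.2.2
ip route 0.0.0.0 0.0.0.0 192.0.2.6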

> but instead utilization on the
> T1s is hovering at 70% on both and sometimes MLPPP link utilization even
> drops below 50%. What am I not getting here?

If you have Multilink fragmentation disabled it sends a packet down each
path. It could be a reordering delay causing just enough variance in
the packet stream that the application throttles back. If you have a bunch
of individual streams going you would probably see higher throughput.
Remember there is additional overhead for the MLPPP.
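If reordering is what's throttling the transfer, one thing worth experimenting with (a sketch only; results depend on what the provider runs on its end) is re-enabling fragmentation on the bundle so both T1s carry pieces of the same packet and stay in lockstep, at the cost of some extra overhead:

interface Multilink1
 ! re-enable fragmentation (the config below currently disables it)
 no ppp multilink fragment disable
 ! fragment to roughly 10 ms of serialization delay per member link
 ppp multilink fragment delay 10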

Rodney


> 
> Tx,
> Andrey
> 
> Below is a snip of my config.
> 
> controller T1 0/0/0
>  cablelength long 0db
>  channel-group 1 timeslots 1-24
> !
> controller T1 0/0/1
>  cablelength long 0db
>  channel-group 1 timeslots 1-24
> !
> ip nbar custom rdesktop tcp 3389
> ip cef
> !
> class-map match-any VoIP
>  match  dscp ef
> class-map match-any interactive
>  match protocol rdesktop
>  match protocol telnet
>  match protocol ssh
> !
> policy-map QWAS
>  class VoIP
>     priority 100
>  class interactive
>     bandwidth 500
>  class class-default
>     fair-queue 4096
> !
> interface Multilink1
>  description Verizon Business MPLS Circuit
>  ip address x.x.x.150 255.255.255.252
>  ip flow ingress
>  ip nat inside
>  ip virtual-reassembly
>  load-interval 30
>  no peer neighbor-route
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink links minimum 1
>  ppp multilink group 1
>  ppp multilink fragment disable
>  service-policy output QWAS
> !
> interface Serial0/0/0:1
>  no ip address
>  ip flow ingress
>  encapsulation ppp
>  load-interval 30
>  fair-queue 4096 256 0
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink group 1
> !
> interface Serial0/0/1:1
>  no ip address
>  ip flow ingress
>  encapsulation ppp
>  load-interval 30
>  fair-queue 4096 256 0
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink group 1
> 
> 
> 
> 
> -----
> Andrey Gordon [andrey.gordon at gmail.com]




