PPP multilink help
Rodney Dunn
rodunn at cisco.com
Mon May 11 16:06:02 UTC 2009
On Mon, May 11, 2009 at 10:37:25AM -0400, Andrey Gordon wrote:
> Hey folks, I'm sure to you it's peanuts, but I'm a bit puzzled (most likely
> because of the lack of knowledge, I bet).
>
> I'm buying an IP backbone from VNZ (presumably MPLS). I get a MLPPP hand off
> on all sites, so I don't do the actual labeling and switching, so I guess
> for practical purposes what I'm trying to say is that I have no physical
> control over the other side of my MLPPP links.
>
> When I transfer a large file over FTP (or CIFS, or anything else), I'd
> expect it to max out either one or both T1,
Most MLPPP implementations don't hash flows at the IP layer to an
individual MLPPP member link. The bundle is a virtual L3 interface and
the packets themselves are distributed over the member links. Some people
describe it as "load sharing" rather than "load balancing," since each
packet is given to whichever link isn't currently "busy".
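A hypothetical sketch (not Cisco code) of that distinction, using the member link names from the config below:

```python
# Per-flow load balancing vs. MLPPP-style per-packet load sharing.
import hashlib
from itertools import cycle

LINKS = ["Serial0/0/0:1", "Serial0/0/1:1"]  # member links from the config

def per_flow_link(src_ip: str, dst_ip: str) -> str:
    """Per-flow hashing: every packet of one flow lands on one link."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# MLPPP-style sharing: successive packets of the SAME flow alternate links.
per_packet = cycle(LINKS)

# One FTP flow always hashes to a single member link...
links_used = {per_flow_link("10.0.0.1", "10.0.0.2") for _ in range(10)}
# ...while per-packet distribution touches every member link.
shared = {next(per_packet) for _ in range(10)}
```

That's why a single FTP transfer can exceed one T1's worth of throughput over the bundle, where a per-flow-hashed setup would pin it to one link.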
> but instead utilization on the
> T1s is hovering at 70% on both and sometimes MLPPP link utilization even
> drops below 50%. What am I not getting here?
If you have Multilink fragmentation disabled it sends whole packets down
each path. It could be a reordering delay causing just enough variance in
the packet stream that the application throttles back. If you had a bunch
of individual streams going you would probably see higher throughput.
Remember there is additional overhead for the MLPPP.
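A quick back-of-the-envelope on that overhead; the header sizes here are illustrative assumptions, not measured values:

```python
# Rough MLPPP encapsulation overhead estimate; figures are assumptions.
T1_BPS = 1_536_000        # usable T1 rate: 24 timeslots x 64 kb/s
ENCAP_BYTES = 10          # assumed PPP + multilink header bytes per packet
PACKET_BYTES = 1500       # full-size IP packet

overhead = ENCAP_BYTES / (PACKET_BYTES + ENCAP_BYTES)
bundle_ceiling_bps = 2 * T1_BPS * (1 - overhead)

print(f"encap overhead ~{overhead:.1%}")                          # ~0.7%
print(f"2xT1 bundle ceiling ~{bundle_ceiling_bps/1e6:.2f} Mb/s")  # ~3.05
```

The fixed encapsulation overhead is small; reordering-induced TCP backoff is the more likely reason a single stream sits well below the bundle ceiling.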
Rodney
>
> Tx,
> Andrey
>
> Below is a snip of my config.
>
> controller T1 0/0/0
> cablelength long 0db
> channel-group 1 timeslots 1-24
> !
> controller T1 0/0/1
> cablelength long 0db
> channel-group 1 timeslots 1-24
> !
> ip nbar custom rdesktop tcp 3389
> ip cef
> !
> class-map match-any VoIP
> match dscp ef
> class-map match-any interactive
> match protocol rdesktop
> match protocol telnet
> match protocol ssh
> !
> policy-map QWAS
> class VoIP
> priority 100
> class interactive
> bandwidth 500
> class class-default
> fair-queue 4096
> !
> interface Multilink1
> description Verizon Business MPLS Circuit
> ip address x.x.x.150 255.255.255.252
> ip flow ingress
> ip nat inside
> ip virtual-reassembly
> load-interval 30
> no peer neighbor-route
> ppp chap hostname R1
> ppp multilink
> ppp multilink links minimum 1
> ppp multilink group 1
> ppp multilink fragment disable
> service-policy output QWAS
> !
> interface Serial0/0/0:1
> no ip address
> ip flow ingress
> encapsulation ppp
> load-interval 30
> fair-queue 4096 256 0
> ppp chap hostname R1
> ppp multilink
> ppp multilink group 1
> !
> interface Serial0/0/1:1
> no ip address
> ip flow ingress
> encapsulation ppp
> load-interval 30
> fair-queue 4096 256 0
> ppp chap hostname R1
> ppp multilink
> ppp multilink group 1
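Not from the original thread, but to see how the bundle is actually distributing and queueing traffic, a few standard IOS show commands are worth running (exact output varies by IOS version):

```
show ppp multilink
show interfaces Multilink1
show policy-map interface Multilink1
```

`show ppp multilink` lists the member links and bundle state, and `show policy-map interface` shows whether the QWAS policy is dropping or queueing in any class.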
>
>
>
>
> -----
> Andrey Gordon [andrey.gordon at gmail.com]