Linux Router: TCP slow, UDP fast

Lee ler762 at gmail.com
Sat Feb 14 12:51:22 UTC 2009


Try enabling window scaling
  echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
or, if you really want it disabled, configure a larger minimum window size
  net.ipv4.tcp_rmem = 64240 87380 16777216
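With scaling off the window is capped at 64KB, and a single TCP stream
can't go faster than window / RTT.  Back of the envelope: 65535 bytes
is ~524kbit, so a 100ms path tops out around 5.2Mbps and a 300ms path
around 1.7Mbps - right in the neighbourhood of the ~1.54Mbps spikes
you're seeing before it backs off.

Rough sketch for making the change stick across reboots (the
sysctl.conf location may vary by distro):

  sysctl -w net.ipv4.tcp_window_scaling=1
  echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
  sysctl -p    # reload sysctl.conf and verify the value took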

HTH,
Lee


On 2/14/09, Chris <chris at ghostbusters.co.uk> wrote:
> Hi All,
>
> I'm losing the will to live with this networking headache! Please feel free
> to point me at a Linux list if NANOG isn't suitable. I'm at a loss where
> else to ask.
>
> I've diagnosed some traffic oddities and after lots of head-scratching,
> reading and trial and error I can say with certainty that:
>
> With and without shaping and over different bandwidth providers using the
> e1000 driver for an Intel PRO/1000 MT Dual Port Gbps NIC (82546EB) I can
> replicate full, expected throughput with UDP but consistently only get
> 300kbps - 600kbps throughput _per connection_ for outbound TCP (I couldn't
> find a tool I trusted to replicate ICMP traffic). Multiple connections
> are cumulative, each adding roughly another 300kbps - 600kbps. Inbound
> is slightly erratic at holding a consistent speed but manages the
> expected 15Mbps, a far cry from 300kbps - 600kbps.
>
> The router is a quad-core box sitting at no load and there's very little traffic
> being forwarded back and forth. The NIC's kernel parameters are set at
> default as 'built-in'. NAPI is not enabled though (enabling it requires a
> reboot which is a problem as this box is in production).
>
> The only other change to the box is that over Christmas IPtables
> (ip_conntrack and its associated modules mainly) was loaded into the kernel
> as 'built-in'. There's no sign of packet loss on any tests and I upped the
> conntrack table size (ip_conntrack_max) suitably for the amount of RAM. Has
> anyone come across IPtables without any rules loaded causing throughput issues?
>
> I've also changed the following kernel parameters with no luck:
>
>   net.core.rmem_max = 16777216
>   net.core.wmem_max = 16777216
>
>   net.ipv4.tcp_rmem = 4096 87380 16777216
>   net.ipv4.tcp_wmem = 4096 65536 16777216
>
>   net.ipv4.tcp_no_metrics_save = 1
>
>   net.core.netdev_max_backlog = 2500
>
>   echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
>
> It feels to me like a buffer limit is being reached 'per connection'. The
> throughput spikes at around 1.54Mbps and TCP backs off to about 300kbps -
> 600kbps or so. What am I missing? Is NAPI that essential for such low
> traffic ? A very similar build moved far higher throughput on cheap NICs.
> MTU is at 1500, txqueuelen is 1000.
>
> Any help would be massively appreciated!
>
> Chris
>
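
P.S. iperf is handy for isolating this - compare one TCP stream against
parallel streams and UDP from the same box.  Rough sketch (assumes iperf
on both ends; the hostname is made up):

  iperf -s                                 # on the far end
  iperf -c remote.example.net -t 30        # single TCP stream
  iperf -c remote.example.net -t 30 -P 4   # four parallel TCP streams
  iperf -c remote.example.net -u -b 15M    # UDP at 15Mbps for comparison

If one stream crawls but four streams add up to roughly 4x, that points
at a per-connection window limit rather than conntrack or the NIC.  And
to rule conntrack out, check that
  wc -l < /proc/net/ip_conntrack
is nowhere near ip_conntrack_max.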



