TCP time_wait and port exhaustion for servers

Kyrian kyrian at ore.org
Thu Dec 6 13:25:28 UTC 2012


On  5 Dec 2012, rps at maine.edu wrote:

> > Where there is no way to change this through /proc

...
> Those netfilter connection tracking tunables have nothing to do with the
> kernel's TCP socket handling.
>
No, but these do...

net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 90
net.ipv4.tcp_fin_timeout = 30
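
For what it's worth, something like this is how I'd set them on a live
box (same values as above, adjust to taste); drop the key = value lines
into /etc/sysctl.conf and reload with 'sysctl -p' to make them stick
across reboots:

sysctl -w net.ipv4.tcp_keepalive_time=90
sysctl -w net.ipv4.tcp_keepalive_intvl=15
sysctl -w net.ipv4.tcp_keepalive_probes=3
sysctl -w net.ipv4.tcp_fin_timeout=30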

I think the OP may have missed something, though.

I'm no TCP/IP expert, but IME the tunables above do bear on how long
sockets hang around: the keepalive settings govern how long an idle
connection sits before the kernel starts probing it (X probes at
Y-second intervals before the remote end is declared dead and the
socket torn down), and tcp_fin_timeout caps how long an orphaned
socket lingers in FIN_WAIT2 before it is finally killed off. TIME_WAIT
itself is a fixed 2*MSL (60 seconds, hard-coded in the Linux kernel
rather than exposed in /proc), which I suspect is what the quoted
comment was getting at. Either way, those tunables certainly seem to
have worked in the real world for me; whether they are right "in
theory" or not is possibly another matter.

Broadly speaking I agree with the other posters who've suggested
adding more IP addresses and opening up the available local port range.
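
On that note, the ephemeral range on Linux lives in
net.ipv4.ip_local_port_range and defaults to something fairly narrow
(32768 to 61000 on the kernels I've seen), so opening it up along
these lines buys far more outbound ports per source IP, just keep it
clear of anything you have listening down there:

net.ipv4.ip_local_port_range = 1024 65535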

I'm assuming the talk of 30k connections is because the OP's proxy has
a 'one in, one out' situation going on with connections, so each
client connection is paired with an outbound one, and that's why the
~65k pool of ports is effectively halved.
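
Rough arithmetic, as I read it (assuming a fully opened local port
range and two sockets per proxied session):

~64k usable ports / 2 sockets per session = ~32k concurrent sessions

which is about where that 30k figure lands; more source IPs or a wider
port range pushes the ceiling up accordingly.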

K.




