

Re: TCP time_wait and port exhaustion for servers

daemon@ATHENA.MIT.EDU (Kyrian)
Thu Dec 6 08:26:14 2012

Date: Thu, 06 Dec 2012 13:25:28 +0000
From: Kyrian <kyrian@ore.org>
To: nanog@nanog.org
In-Reply-To: <mailman.9298.1354783150.12739.nanog@nanog.org>
X-orenet-MailScanner-From: kyrian@ore.org
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org

On  5 Dec 2012, rps@maine.edu wrote:

> > Where there is no way to change this through /proc

...
> Those netfilter connection tracking tunables have nothing to do with the
> kernel's TCP socket handling.
>
No, but these do...

net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 90
net.ipv4.tcp_fin_timeout = 30
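
For what it's worth, this is roughly how I'd apply them on a live box
(same values as above, not a recommendation; set with sysctl -w they
only last until reboot, so persist them in /etc/sysctl.conf as well):

# apply the values quoted above at runtime
sysctl -w net.ipv4.tcp_keepalive_intvl=15
sysctl -w net.ipv4.tcp_keepalive_probes=3
sysctl -w net.ipv4.tcp_keepalive_time=90
sysctl -w net.ipv4.tcp_fin_timeout=30
# or put them in /etc/sysctl.conf and reload with: sysctl -p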

I think the OP was wrong, and missed something.

I'm no TCP/IP expert, but IME connections go into TIME_WAIT for a
period governed by the above tunables (X number of probes at Y
interval until the remote end is declared likely dead and gone), and
then move into FIN_WAIT and IIRC FIN_WAIT2 or some other state like
that before they are finally killed off. Those tunables have certainly
worked for me in the real world; whether they are right "in theory" or
not is possibly another matter.

Broadly speaking, I agree with the other posters who've suggested
adding more local IP addresses and widening the available local port
range.
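
Something like the following is the sort of thing I mean; the
interface name and addresses are just examples, adjust to taste:

# widen the ephemeral (source) port range available per local IP
sysctl -w net.ipv4.ip_local_port_range="1024 65000"
# and/or add extra local addresses for the proxy to source from, so
# each one gets its own ~64k pool of ports (eth0 and the 192.0.2.x
# addresses are placeholders)
ip addr add 192.0.2.10/24 dev eth0
ip addr add 192.0.2.11/24 dev eth0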

I'm assuming the talk of 30k connections is because the OP's proxy has
a 'one in one out' situation going on with connections, and that's why
the ~65k pool of ports for connections is halved.
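
(Rough arithmetic, if I've read the setup right: each proxied session
ties up one socket facing the client and one facing the origin on the
same box, so the ~65,535 ports per local IP only buy you about 32k
concurrent sessions, minus whatever is reserved or already bound.)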

K.


