[134078] in North American Network Operators' Group


Re: TCP congestion control and large router buffers

daemon@ATHENA.MIT.EDU (Carsten Bormann)
Thu Dec 23 13:02:12 2010

From: Carsten Bormann <cabo@tzi.org>
In-Reply-To: <4D122BD6.5070503@freedesktop.org>
Date: Thu, 23 Dec 2010 19:00:52 +0100
To: Jim Gettys <jg@freedesktop.org>
Cc: nanog list <nanog@nanog.org>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org

Some more historical pointers:

If you want to look at the early history of the latency discussion,
look at Stuart Cheshire's famous rant "It's the Latency, Stupid"
(http://rescomp.stanford.edu/~cheshire/rants/Latency.html).  Then look
at Matt Mathis's 1997 TCP equation (and the 1998 Padhye-Firoiu version
of it): the throughput is proportional to the inverse square root of
the packet loss rate and to the inverse RTT -- so as the RTT starts
growing due to increasing buffers, the packet loss must grow to keep
equilibrium!
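The RTT/loss trade-off in the Mathis relation can be sketched
numerically; the function name, constant, and example values below are
illustrative, not from the papers:

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=sqrt(3.0 / 2.0)):
    """Approximate steady-state TCP throughput in bytes/s, per the
    Mathis et al. (1997) model: rate ~ (MSS / RTT) * C / sqrt(p).
    C = sqrt(3/2) is the commonly quoted constant for periodic loss."""
    return (mss_bytes / rtt_s) * c / sqrt(loss_rate)

# Doubling the RTT (e.g. queueing delay from a bloated buffer) halves
# the achievable rate at a fixed loss rate -- or, to hold the rate,
# the loss rate would have to fall to a quarter of its former value.
base = mathis_throughput(1460, 0.050, 1e-3)     # 50 ms RTT
bloated = mathis_throughput(1460, 0.100, 1e-3)  # 100 ms RTT
print(base / bloated)  # -> 2.0
```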

By the late 1990s we understood pretty well that you have to drop
packets in order to limit queueing.  E.g., RFC 3819 contains an
explicit warning against keeping packets for too long (section 13).

But, as you notice, on faster networks the bufferbloat effect can be
limited by intelligent window-size management; the then-dominant
Windows XP was not intelligent, just limited in its widely used
default configuration.  So the first ones to fully see the effect were
those with many TCP connections, i.e. BitTorrent users.  The modern
window-size "tuning" schemes in Windows 7 and Linux break a lot of
things -- you are just describing the tip of the iceberg here.  The
IETF working group LEDBAT (motivated by the BitTorrent observations)
has been working on a scheme to run large transfers without triggering
humungous buffer growth.

Gruesse, Carsten


