[3420] in linux-net channel archive
BW limiting/simulation
daemon@ATHENA.MIT.EDU (dennis)
Sun Jun 23 12:21:10 1996
Date: Sun, 23 Jun 1996 12:17:16 -0400
To: alan@lxorguk.ukuu.org.uk (Alan Cox)
From: dennis@etinc.com (dennis)
Cc: linux-net@vger.rutgers.edu
A. Cox wrote...
>> >It is counter-intuitive in some ways but the way to slow a remote TCP down
>> >is to properly model a slow link including discarding packets
>>Dennis wrote..
>> If you discard packets then you are not modelling a slow(er) link.
>> Packets that are discarded change the characteristics of the connection
>> for those packets with a result that is not acceptable to end user
>> applications. Anyone who's been on a flaky link knows that a T1 line
>> that drops 10% of its traffic yields unacceptable throughput due to the
>> timeouts, certainly nowhere near 90% of T1. More like 8k.
>
>Wrong...
>
>The properties of modelling a slower link are precisely those of dropping frames
>once you pass the point at which the real link would drop packets. By dropping
>packets you cause the TCP layer to do congestion control and lower its end to
>end send rate. Thus you don't end up dropping 10% of traffic, you drop a few
>packets occasionally and the other end figures out the real throughput and
>adjusts. The end result is almost no packets lost.
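For concreteness, the drop-based model described above amounts to a bounded FIFO with tail drop. The sketch below is illustrative only (class and parameter names are invented, not from any real implementation): frames beyond the buffer depth are discarded, and it is that occasional loss which signals the sender's TCP to back off.

```python
from collections import deque

class SlowLinkModel:
    """Model a slow link as a bounded FIFO: frames that arrive when
    the buffer is full are tail-dropped, just as a congested router
    would drop them. (Illustrative sketch, not real kernel code.)"""

    def __init__(self, max_queue):
        self.queue = deque()
        self.max_queue = max_queue  # buffers the modelled link can hold
        self.dropped = 0

    def enqueue(self, frame):
        if len(self.queue) >= self.max_queue:
            self.dropped += 1   # tail drop: the sender sees the loss
            return False        # and lowers its end-to-end send rate
        self.queue.append(frame)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
```

With a depth of 3, offering 5 back-to-back frames drops the last 2; a well-behaved TCP would then slow down so that steady-state loss stays near zero.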
I disagree. The property of a slower link is that it takes a greater amount
of time to get data from here to there. You must simulate the average delay
for the connection on a continuous basis. With many connections from many
different sources, your result will be that some connections have much better
than average throughput and others have terrible throughput, or in the worst
case that most of the connections have terrible throughput. This method is much
like trying to maintain 55 MPH (that's miles per hour, for you non-US folks)
with a stopwatch rather than a speedometer: you'll do more or less than the
average, with great variance, and rarely exactly what you want. This is NOT
simulation.
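The delay-based approach argued for here can be sketched as pacing each packet by its serialization time on the modelled link, so throughput is capped by delay alone and nothing is dropped. This is a minimal illustration (function and parameter names are invented for the sketch):

```python
def schedule_packets(sizes_bytes, link_bps, start=0.0):
    """Delay-based shaping: each packet may depart only after its
    serialization time on the modelled link has elapsed. No packet
    is ever dropped; the link rate is enforced purely by delay.
    (Illustrative sketch of the continuous-delay approach.)"""
    t = start
    departures = []
    for size in sizes_bytes:
        t += size * 8 / link_bps   # seconds to clock this packet's bits out
        departures.append(t)       # earliest departure time for the packet
    return departures
```

For example, two 1500-byte packets on a 12 kbit/s modelled link each take exactly one second to serialize, so they depart at t=1.0 and t=2.0 regardless of how fast they arrived.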
>
>The TCP layer with traditional flow control sets out to find
>
>1. How many packets can be "in flight". This measures the available buffers
>that can be committed along the link without loss.
>2. The round trip time of the link. This allows the TCP to tell when to
>retransmit a frame.
>
>The flow is regulated solely by #1 above assuming the windows are large enough.
>A slow hop on a link causes less buffers to be "in flight" at once without loss
>as it will buffer only a small number of frames (smoothing noise in the traffic
>flow).
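The two quantities in Alan's list combine into the bandwidth-delay product: how many segments can be "in flight" is roughly the link rate times the round-trip time, divided by the segment size. A small worked example (the T1 rate and RTT below are illustrative numbers, not from the thread):

```python
def bandwidth_delay_product(link_bps, rtt_s, mss_bytes=1460):
    """Rough count of TCP segments that can be 'in flight' at once:
    bandwidth-delay product in bytes divided by the segment size.
    (Illustrative arithmetic for points 1 and 2 above.)"""
    bdp_bytes = link_bps / 8 * rtt_s   # bytes the pipe holds end to end
    return bdp_bytes / mss_bytes
```

On a T1 (1.536 Mbit/s) with a 100 ms round trip, the pipe holds 19200 bytes, or about 13 full-size segments; a slower hop shrinks that number, which is exactly what regulates the flow.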
You're assuming that every TCP implementation in the world works the way you
think it does. Believe me, this is a very dangerous assumption. You also have
to account for other types of traffic (other than TCP), so this alone is not
adequate, even if it works.

Try telling your customers that their throughput is unpredictable because the
TCP implementation they are using doesn't adjust properly, or worse, that
someone ELSE's TCP implementation doesn't work properly and is hogging the
link. The problem with standards is that sometimes when you do it correctly,
it just doesn't work right. Sometimes you have to be a bit more intuitive.
Dennis
----------------------------------------------------------------------------
Emerging Technologies, Inc. http://www.etinc.com
Synchronous PC Cards and Routers For Discriminating
Tastes. 56k to T1 and beyond. Frame Relay, PPP, HDLC,
and X.25 for BSD/OS, FreeBSD and LINUX.