[3409] in linux-net channel archive


Re: shaper or whatever

daemon@ATHENA.MIT.EDU (Dennis)
Sat Jun 22 16:10:55 1996

Date: 	Sat, 22 Jun 1996 15:44:54 -0400
To: "Eric Schenk" <schenk@cs.toronto.edu>
From: dennis@etinc.com (Dennis)
Cc: linux-net@vger.rutgers.edu, linux-isp@lightning.com

Eric Schenk writes.....


>Dennis <dennis@etinc.com> writes:
>>>There is a difference between a persistent 10% drop rate that
>>>is independent of the sender's transmission speed, and dropping
>>>packets due to congestion. TCP reacts to packets getting dropped
>>>by shrinking its congestion window. This slows down its transmission rate. 
>>>If slowing down transmission results in no packet drops we win.
>>>[This would be the case in the proposed traffic shaper.]
>>
>>If the drops are truly persistent, then it is not much different, as TCP
>>reacts much the same.  A packet that takes a long time to arrive and 
>>a packet that never arrives are quite different.
>
>I suspect we are talking at slight cross purposes here.
>As you say, delaying packets is not the same as throwing them away.
>I don't advocate a traffic shaper that delays traffic or has any knowledge
>of the TCP internals. I don't believe such a strategy will work.
>
>The traffic shaper would have to drop any packet that crosses the
>box that exceeds the bandwidth assigned to the connection.
>The contents of the packet (ACK, data whatever) are irrelevant to
>the decision to drop. The only knowledge the shaper needs is how to
>determine if the packet exceeds the bandwidth assigned to the connection.
>The connection a packet belongs to would have to be determined from
>the addresses in the headers. Note that this approach should work
>with a router just as well as a box that is acting as an end point.
>There is some information overhead that would not be present on a
>standard router. You need to track every active pair of end points
>that are sending packets through your router. This need not look
>too deeply into the protocol structure; a simple timeout to remove
>the endpoints from the data structure should suffice.
>
>>linux box as a router. An ISP, for example, has an indeterminate number
>>of simultaneous connections, so "drops" become somewhat random.
>
>I'm not quite sure what you are getting at here. If you mean that
>on a large router who gets a packet dropped is somewhat random, then
>this is of course true. I don't see how it is relevant to the issue at hand.
>
>As an aside, a machine that is acting as a router may have better
>aggregate behavior if it randomly chooses packets to drop from the queue
>when it needs to make room for incoming data. [As opposed to simply
>dropping the incoming data.]
>There is some pretty serious research into using this type of gateway
>to smooth the congestion behavior of TCP networking.
>
>>It also depends on how you define congestion. An ethernet connection that
>>is limited to 56k could be in a constant state of congestion, which
>>complicates your potential algorithm...particularly when, with, say, 100
>>active connections, you could have a window availability many times the
>>available bandwidth.
>
>Sure. But the combined congestion window should not be confused with the
>maximum window available on the combined TCPs open across a link.
>In the long run the combined congestion windows should reflect the
>real bandwidth of the link. In reality TCP is good at reacting to
>increased congestion, but not very good at quickly taking advantage
>of extra bandwidth, so the combined congestion windows could often
>be smaller than the real available bandwidth.
>
>>You also can't control anything at the TCP level for traffic passing THROUGH
>>the box (rather than originating in the box). A router can't withhold or
>>delay an ACK, unless you plan on having the router learn about every
>>connection that passes through it.
>
>As noted above a router would need to track the bandwidth usage for each
>pair of endpoints that is currently transmitting. This would need a small
>structure for each pair of end points. Say 12 bytes to specify the
>endpoints, and say another 8 to specify the bandwidth limit and the
>current traffic load on the connection. (I'm not sure what
>the "best" way to characterize load would be. I suspect some sort of
>smoothed, exponentially decaying average would be best, but I haven't
>given this any deep thought.)
>Add in a bit more overhead for a fast lookup data structure (say Patricia
>trees or some such thing) and we should still be within feasible memory
>usage even for thousands of simultaneous active connections.
>

You academics really crack me up! :-)

And why do you call it a "shaper"? What's up with that?

I suspect that what you propose above wouldn't work well and would have
too much overhead. But it would make a good master's project. Perhaps you
will prove me wrong.

It really ain't that difficult, but I can't say any more...(corporate BS, you
know).
Anyone with FreeBSD or BSD/OS can try our demo, which will be up
next week....it simulates virtually any physical speed (up to the actual
physical speed of the media, of course) practically to the byte. It also keeps
statistics and reports performance on a per-address or per-network basis over a 
user-selectable interval, as well as current performance.

Judging from the discussion last week...it seems unlikely that we can port
it to Linux without violating the GPL. Not easily, anyway.


Dennis
----------------------------------------------------------------------------
Emerging Technologies, Inc.      http://www.etinc.com

Synchronous Communications Cards and Routers For
Discriminating Tastes. 56k to T1 and beyond. Frame
Relay, PPP, HDLC, and X.25 for BSD/OS, FreeBSD 
and LINUX


