[460] in linux-net channel archive
[Project] Managing Available Bandwidth
daemon@ATHENA.MIT.EDU (Pete)
Mon Jun 12 16:19:03 1995
To: submit-linux-dev-net@ratatosk.yggdrasil.com
From: pete@I_should_put_my_domain_in_etc_NNTP_INEWS_DOMAIN (Pete)
Date: 12 Jun 1995 18:38:56 GMT
I'm working on some development now to allow Linux to act as a throttling
router. I need to get some input on the best way to do this, and some
answers to a couple of questions I have.
The basic idea is that I can use Linux to both limit and prioritize the
bandwidth on its interfaces. For instance, I could have a 56k, 128k, and
10Mb connection all running over 10BaseT lines to various machines or
customers on my network. This is of primary interest where a limited
amount of outgoing bandwidth (like an ISDN or T1 line) has to be
shared, but the internal medium it is shared over is much faster (like
a fiber or 10BaseT network).
This project would bring some of the concepts of frame-relay-style
networking to Linux: committed information rate, discard
eligibility, and bursting. It would allow an administrator to manage the
available bandwidth much more effectively than simple priorities.
Some of the features I'm planning on putting in are:
- Committed Information Rate: at any time, traffic from specified
addresses will be guaranteed bandwidth up to the CIR
- Bursting: any connection can at any time use up any additional
available bandwidth, as long as it doesn't infringe on someone else's CIR
usage (the burst ceiling can be set arbitrarily)
- All parameters configurable on an IP-address or IP-group level, for
in-bound and/or out-bound traffic
- Accounting to track all CIR/bursting usage by address
- Time-sensitive parameters (for instance, CIR could be effective during
business hours only for some customers, or CIR could be adjusted to
different levels to allow for special events, etc)
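For concreteness, here's a minimal user-space sketch of how CIR plus
bursting could be modeled with a token bucket. All the names, and the
token-bucket formulation itself, are just my illustration, not code
that exists anywhere in the kernel:

```c
#include <assert.h>

/* Hypothetical token-bucket model of CIR + bursting. */
struct bucket {
    double tokens;      /* bytes of credit currently available */
    double cir_bps;     /* committed information rate, bits/sec */
    double burst_bytes; /* burst ceiling: maximum accumulated credit */
};

/* Accrue credit for `elapsed` seconds, capped at the burst ceiling. */
static void bucket_refill(struct bucket *b, double elapsed)
{
    b->tokens += elapsed * b->cir_bps / 8.0;
    if (b->tokens > b->burst_bytes)
        b->tokens = b->burst_bytes;
}

/* Admit a packet if enough credit exists; spend the credit if so.
 * A refusal here is where one would defer the packet or mark it
 * discard-eligible rather than drop it outright. */
static int bucket_admit(struct bucket *b, double pkt_bytes)
{
    if (b->tokens < pkt_bytes)
        return 0;
    b->tokens -= pkt_bytes;
    return 1;
}
```

With a 56k CIR and a 7000-byte ceiling, one second of idle time earns
exactly 7000 bytes of credit: a 1500-byte packet is admitted, but a
6000-byte packet right behind it is refused until more credit accrues.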
I'm starting out on a very low level right now. I'm trying to build a
kernel that will limit all traffic to 56k. This is what I need to get
some suggestions and help on.
First, I've started by putting in delays in the net/inet/ip.c code, right
before the calls to ip_fw_chk(). I've got two ways to do this. One is to
keep track of the last time a packet was sent/received and the size of
the packet, and refuse to accept any packets until an appropriate amount
of time has elapsed (packet_size/rate < elapsed_time).
The only problem with this is that the latency of Ethernet is pretty
much nil, so if I ping, a packet is sent out and immediately replied
to. However, since the reply comes back long before the appropriate
amount of time has elapsed, it is refused. Of course, the reply will
not be retransmitted, so ping will not work. So I don't think this is
the way to go.
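To make that failure mode concrete, the first approach amounts to a
gate like this user-space sketch (the identifiers are mine, not from
ip.c); the silent refusal in gate_check() is exactly what loses the
ping reply:

```c
#include <assert.h>

/* Sketch of the elapsed-time gate: accept a packet only if enough
 * time has passed since the last one to have "paid for" it at the
 * target rate. */
struct gate {
    double last_time;   /* when the previous packet was accepted */
    double rate_bps;    /* target rate, e.g. 56000.0 */
};

/* Returns 1 (and updates state) if the packet may pass, 0 to refuse.
 * The refusal is silent; an ICMP echo reply arriving "too soon" is
 * refused and never retransmitted, so ping breaks. */
static int gate_check(struct gate *g, double now, int pkt_bytes)
{
    double needed = (pkt_bytes * 8.0) / g->rate_bps;
    if (now - g->last_time < needed)
        return 0;
    g->last_time = now;
    return 1;
}
```

An 84-byte ping packet "costs" 12 ms at 56k, so a reply arriving 1 ms
after the request is refused even though the link is idle.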
Another direction I've tried is, on sending or receiving a packet, to
busy-wait for the equivalent amount of time it would take that packet
to be sent at 56k (packet_size/rate). This seems to work better, but
I'm worried about the repercussions of having busy-wait code in ip.c.
I don't know if it'd be permissible to busy-wait with yield() (is
there such a kernel call?), or maybe have some kind of call-back,
where I calculate the required time to send, then have the IP process
sleep until that time has elapsed (I don't know how I'd do this,
unfortunately).
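In user space the sleep-instead-of-spin idea looks something like the
sketch below. nanosleep() here just stands in for whatever kernel
timer or callback mechanism would be appropriate; I'm not suggesting
nanosleep itself is available in the kernel:

```c
#include <assert.h>
#include <time.h>

/* Compute how long it would take to serialize pkt_bytes at rate_bps. */
static struct timespec tx_delay(int pkt_bytes, double rate_bps)
{
    double secs = (pkt_bytes * 8.0) / rate_bps;
    struct timespec ts;
    ts.tv_sec  = (time_t)secs;
    ts.tv_nsec = (long)((secs - (double)ts.tv_sec) * 1e9);
    return ts;
}

/* Pace a packet by sleeping for its serialization time, yielding the
 * CPU instead of spinning in a busy-wait loop. */
static void pace_packet(int pkt_bytes, double rate_bps)
{
    struct timespec ts = tx_delay(pkt_bytes, rate_bps);
    nanosleep(&ts, NULL);
}
```

A 1500-byte packet at 56k works out to about 214 ms of pacing delay,
which is far too long to spend spinning inside ip.c.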
This is very preliminary, so any input I can get now would be very
helpful. A third approach I'm thinking about is to attach a send_time
variable to the sk_buff structure, which would be calculated by the ip.c
code, then the device would monitor this variable until the appropriate
time was reached. This would actually be a cleaner way to implement
multiple CIRs and burst rates, because higher-CIR traffic wouldn't
be held up by lower-CIR traffic.
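Here's a user-space sketch of that send_time idea, assuming a per-flow
virtual clock (my naming throughout; these are not actual sk_buff
fields): the IP layer stamps each packet with the earliest time the
device may transmit it, and the device only emits packets whose stamp
has come due.

```c
#include <assert.h>

/* Stand-in for an sk_buff carrying the proposed send_time field. */
struct fake_skb {
    int    len;         /* packet length in bytes */
    double send_time;   /* earliest allowed transmit time */
};

/* IP-layer side: stamp the packet, then advance the flow's clock by
 * the packet's own serialization time at that flow's rate. Separate
 * clocks per flow are what keep a low-CIR flow from holding up a
 * high-CIR one. */
static double stamp_packet(struct fake_skb *skb, double *flow_clock,
                           double now, double rate_bps)
{
    double start = (now > *flow_clock) ? now : *flow_clock;
    skb->send_time = start;
    *flow_clock = start + (skb->len * 8.0) / rate_bps;
    return skb->send_time;
}

/* Device side: transmit only once the stamp has come due. */
static int dev_ready(const struct fake_skb *skb, double now)
{
    return now >= skb->send_time;
}
```

Back-to-back 1500-byte packets on a 56k flow get stamps roughly 214 ms
apart, while a second flow with its own clock is stamped independently.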
Am I approaching this right? Is the ip.c code the right place to do this,
or should I look into a network device driver implementation? I think
that a device-driver implementation might be easier, but only if it could
be a device on top of the actual network device driver, so that every
device driver wouldn't have to be modified. Can Linux even do this? Isn't
that how the IP-tunnel driver works in 1.3.0?
Well, I look forward to hearing what people have to say about this. I'm
working on this 56k solution right now, so any input you can give me on
it would be appreciated. Actually, any input at all about this project
would be appreciated.
Thanks.
Pete Kruckenberg
pete@ut.rockymt.net
pete@dsw.com