[30] in Commercialization & Privatization of the Internet
re: federal seed money for more bandwidth?
daemon@ATHENA.MIT.EDU (Craig Partridge)
Tue Oct 23 11:43:38 1990
To: tmn!cook@uunet.uu.net
Cc: com-priv@psi.com
From: Craig Partridge <craig@NNSC.NSF.NET>
Date: Tue, 23 Oct 90 11:14:22 -0400
There are several threads in this question -- I'd like to comment on the
very high-bandwidth issue -- is there a consensus about gigabit networking?
I think the answer is, to first order, yes, there's a consensus, and it
is a different consensus from last year's -- we've made progress.
The consensus I'm seeing (as a researcher, not a policy maker) is
the following:
* Just going fast isn't a big issue. In fact, if you want gigabits
(+/- 20%) you can get them now. Ultra Technologies is very close
with their technology. NSC was demoing an 800-Mbit-per-channel,
8-channel HPPI switch at Interop (that's 6.4 gigabits of bandwidth
through the box). Recent protocol analysis work done by several
folks has shown that you could scale TCP/IP to go at gigabit speeds,
if all you were worried about was an increased clock rate.
Dave Borman at Cray is hard at work getting his Cray TCP/IP to
send in the high 100s of Mbits range, and I expect he'll get
there by this spring (loopback tests a year ago yielded 500+
Mbits/sec). Note this attitude is a change from two years ago, or even
18 months ago, when people were saying "current protocols won't work
at gigabits." We've narrowed the problem space a lot.
* However, it is clear that by the time you've reached gigabit speeds,
current protocols may not be what you want to run. There are several
reasons for this; here are a few:
Latency vs. bandwidth gets out of whack. Right now we work in a regime
where the cross-network latency is measured in a few bits, or a few
packets' worth of bits. But latency is limited by the speed of light,
and we already send pretty close to the speed of light in fiber/copper.
So faster networks will give us more bandwidth, but the latency
to get a single bit across the country will remain just about constant.
As a result, the latency is measured in hundreds or thousands of packets
and millions or billions of bits in flight (a rough back-of-envelope
sketch of these numbers appears after this list). Congestion control
becomes *very* interesting. So too does distributed data access (trying
to make sure that you don't end up sipping your data too slowly through
a long straw, or gulping from a gigabit firehose).
New applications appear. HDTV requires hundreds of Mbits of bandwidth
(gigabits if you don't use compression). Gigabit networks provide
that kind of bandwidth. As a result, one wants to think more about
protocols designed to deliver real-time voice and video. Other graphics
applications also become possible. The transfer of large amounts of
telemetry data, in real time, becomes possible. The space station is
supposed to have a 200 Mbit ground link, last I heard. With a gigabit
network, a scientist or team of scientists doesn't have to fly to
JSC to see all the data coming back from their hour of experimental
time on the space station -- they can sit in their offices. (Note
this requires real reliability in the network -- if you tell too many
scientists that you're sorry, but their precious one-hour slot, booked
10 months in advance and scheduled to coincide with a once-every-two-years
astronomical phenomenon, has been wiped out because a router went
down, you're not gonna have a popular network very long.)
Sizing issues. Networks are becoming more and more popular, and if
a gigabit network with nifty new technologies appears, lots of folks
will want to use it. So you've just bought into building very fast,
very big (bigger than today's) networks. Urp.
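As a rough illustration of the bits-in-flight and raw-video numbers
above, here is a small back-of-envelope sketch (in Python). The link
rate, round-trip time, packet size, and video format are all assumed
values, chosen only to show the arithmetic -- they aren't figures from
any particular network or HDTV proposal.

    # Back-of-envelope arithmetic for the latency-vs-bandwidth point above.
    # All numbers are illustrative assumptions: a 1 Gbit/s link, a ~60 ms
    # coast-to-coast round trip (light in fiber over ~4,000 km each way),
    # and 1500-byte packets.
    LINK_RATE_BPS = 1_000_000_000       # assumed: 1 gigabit per second
    RTT_SECONDS = 0.060                 # assumed: ~60 ms round trip
    PACKET_BITS = 1500 * 8              # assumed: 1500-byte packets

    bits_in_flight = LINK_RATE_BPS * RTT_SECONDS
    packets_in_flight = bits_in_flight / PACKET_BITS
    print(f"bits in flight:    {bits_in_flight:,.0f}")     # ~60 million
    print(f"packets in flight: {packets_in_flight:,.0f}")  # ~5,000

    # The same arithmetic for uncompressed video, using one plausible
    # HDTV-like format (1920x1080 pixels, 24 bits/pixel, 30 frames/s) --
    # again an assumption for illustration, not any specific standard.
    raw_video_bps = 1920 * 1080 * 24 * 30
    print(f"raw video rate:    {raw_video_bps / 1e9:.2f} Gbit/s")  # ~1.5

Either number makes the point: one round trip at gigabit rates holds
thousands of packets, and a single uncompressed video stream can fill
the pipe by itself.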
If you mix all these new variables together, the type of network you
get may be very different from that of today. The key research problems
in front of us involve looking at what's different, and how we might
approach it.
Note that much of this argument is inspired by talks by Dave Clark, although I
take all responsibility for what's said here.
Craig