[532] in Humor


Bandwidth

daemon@ATHENA.MIT.EDU (jered@MIT.EDU)
Wed Nov 9 16:59:47 1994

From: jered@MIT.EDU
Date: Wed, 9 Nov 1994 16:55:13 +0500
To: humor@MIT.EDU



Arrrgh!!! This person is implying that if we throw billions of dollars
at researchers, we can create a lossless 100:1 compression algorithm for
arbitrary data. 

At least they didn't suggest putting the compressed data through the
compression algorithm again. (I found it in Fishwrap.)
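For the record, the objection isn't just crankiness; it's a counting
argument. A lossless compressor has to give every distinct input a
distinct output, and there are vastly more long inputs than short
outputs, so no scheme can shrink *all* data 100:1. Here's a minimal
Python sketch of that pigeonhole count (my illustration, nothing from
the article below):

    def can_compress_all(n_bits, ratio=100):
        # Number of distinct inputs of exactly n_bits bits.
        inputs = 2 ** n_bits
        # Longest output a ratio-to-1 compressor may emit for such an input.
        max_out = n_bits // ratio
        # All bit strings of length 0 through max_out bits.
        outputs = 2 ** (max_out + 1) - 1
        # A lossless mapping needs at least one distinct output per input.
        return outputs >= inputs

    print(can_compress_all(1000))   # False: ~1e301 inputs, only 2047 outputs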

--------------------

`bandwidth' problem

By Robin Stacy

Knight-Ridder Newspapers

``Everybody wants to go to heaven, but nobody wants to die.''

The notion inherent in the words of that old gospel tune is that you
can't get something for nothing -- you have to give up something, in
this case, your life, to get the ultimate something great, in this
case, heaven.

But, in the words of another old song, ``'Tain't necessarily so.''

The computer world these days is spending billions of dollars in
research and capital investment on what broadly falls under the
category of ``increasing bandwidth.''

While, for a few hundred dollars today, you can connect computers
together using Ethernet network interface cards and either
twisted-pair copper wire (telephone cords) or thin coaxial cable
(shortwave antenna wire), and achieve 10-megabit-per-second
connections between the machines, you can also spend a few thousand
dollars for any of several exotic networking schemes, such as ATM,
that connect your computers together using fiber-optic cable (beach
sand) and get connections capable of pumping about 10 times the volume
of data through them per second.

Is it worth the money?

Clearly, much of the world thinks so. Pick up almost any magazine
article about computer networking, and discussion of the newer,
high-speed networking technologies dominates.

If you want to travel the Information Superhighway, they imply, you
have to spend about a thousand dollars per workstation to get
100-megabit-per-second network interface cards, cables and routing
hubs.

Using the superhighway metaphor we in computers have all become so
fond of, imagine that your old 10-megabit-per-second network is a
six-lane road. What the 100-megabit-per-second networks do, then, is
to widen the road. Instead of six lanes, you now have 60.

That ought to bump up the throughput to a point where everything's
fast enough, right?

Well, think of this. Imagine that, on your old six-lane network,
what's happening is that, in the three northbound lanes, three cars
are traveling side by side at 45 mph. The same thing's happening in
the southbound lanes. You've got a lot of potential throughput on your
highway, but that potential isn't being realized because of the idiots
blocking the road, right? There are lots of cars behind them revving
their engines, but they have nowhere to go.

So you expand your highway to 60 lanes, and now you've got 30 cars
traveling side by side at 45 mph in each direction. You've still got
lots of cars behind them, revving their engines with nowhere to go. In
fact, you've now got 10 times the blocked cars. You've increased your
throughput 10 times, all right, but at a tremendous expense.

What if, instead, you could get all the cars traveling at their
fastest possible speed in each of your original six lanes, and could
pack them so that there's no more than six inches between the cars'
bumpers?

Likely, you could move as many cars per hour through your old six-lane
network if the cars were six inches apart and traveling at their
maximum speed as you could through the 60-lane network where the cars
were being blocked by uncourteous drivers. Maybe even more.

Maybe a lot more.

What you've done, effectively, is compress your traffic.

Moving back from metaphor to real computers, compression, I think, is
where the future lies.

Right now, multimedia is largely driving the information explosion.
We're trying to create networks with enough bandwidth to carry the
digital video and audio information that makes multimedia possible.

That's a lot of information, no joke. If you take your multimedia PC
and use it to play an audio CD, then use a utility to transfer that
audio data to a file, it uses about 100 kilobytes per second of
playing time. That's about a megabyte every 10 seconds, about six
megabytes per minute, about 360 megabytes per hour.

Add in video, and the amount of information increases dramatically, by
a multiplier that varies depending on the resolution of the video.
Let's say, though, that it's a 10-fold increase. For every hour of
multimedia, then, you've got about three-and-a-half gigabytes of
information.
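As a quick check of those round numbers (a sketch using the column's
own assumptions of 100 kilobytes per second of audio and a 10-fold
video multiplier; real CD audio actually runs closer to 176 kilobytes
per second):

    audio_kb_per_sec = 100                         # the column's figure
    mb_per_minute = audio_kb_per_sec * 60 / 1000   # 6.0 MB per minute
    mb_per_hour = mb_per_minute * 60               # 360.0 MB per hour
    gb_per_hour_video = mb_per_hour * 10 / 1000    # 3.6 GB per hour with video
    print(mb_per_minute, mb_per_hour, gb_per_hour_video)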

Let's say, though, that your 3.5 gigabytes of multimedia displays one
long movie scene of a seduction, say of Mrs. Robinson torturously
pleading with young Benjamin.  The scene ends as they head to bed.

What if, instead, Mrs. Robinson just gives Ben a single, telling
two-second glance?

When Ben sees the two-second glance, his mind uncompresses it into the
full hour-long seduction, and they head to bed. Effectively, you've
compressed your 3.5-gigabyte file into one of no more than, say, two
megabytes.

Back to where we started, billions of dollars are being spent right
now on increasing bandwidth. The telephone companies alone are
spending Bill Gates' fortune many times over each year replacing their
hardware, installing high-speed switches and routers and running new
fiber-optic cables.

What if, however, they took ``just'' a couple billion dollars, and
divided it up among the world's top mathematicians to come up with
more efficient compression algorithms?

If they were able to find one that would compress everything just 10
times, then the phone companies' existing hardware would be equal to
what they're replacing it with.

Really, that's not some pie-in-the-sky kind of dream. Right now,
people working on new compression programs for the PC are achieving
results near that level. Hobbyists working at home, for the most part,
are writing programs that are fractional improvements on existing
compression, which can already reduce the size of data to a fifth or
tenth its original size.
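That fifth-or-a-tenth figure is about right for ordinary redundant
data, but the ratio comes entirely from redundancy; a short sketch with
Python's standard zlib module (my example, not anything the column
describes):

    import os
    import zlib

    # A highly repetitive phrase squeezes dramatically...
    phrase = b"Everybody wants to go to heaven, but nobody wants to die. " * 200
    print(len(phrase), len(zlib.compress(phrase, 9)))   # ~11,800 bytes down to ~100

    # ...but random bytes (no redundancy) barely shrink at all, which is why
    # a universal 100:1 lossless compressor for arbitrary data can't exist.
    noise = os.urandom(len(phrase))
    print(len(noise), len(zlib.compress(noise, 9)))     # essentially unchanged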

Put the world's top minds on the problem, and compensate them richly,
and it's probably not unreasonable to expect whole new breakthroughs
in the field. If they were able to find some scheme to compress
everything to a hundredth its original size, say, then the phone
companies' existing networks would already far outstrip any potential
hardware upgrade we can do today.

If we all had this compression algorithm, then our 100-megabyte hard
drives would effectively become 10-gigabyte hard drives. Our
10-megabit-per-second Ethernet networks would become
100-megabyte-per-second networks.

Expensive hardware schemes such as ATM could be left sitting on the
shelves. The savings we realized could become profits, or could be
spent on research and development, funding other fundamental
breakthroughs to revolutionize the world.

X X X

(Robin Stacy is special projects editor of The Macon (Ga.) Telegraph,
120 Broadway, Macon, GA, 31201-3444.)


