
Perl-Users Digest, Issue: 6982 Volume: 10

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Thu Sep 9 14:11:10 2004

Date: Thu, 9 Sep 2004 11:10:07 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Thu, 9 Sep 2004     Volume: 10 Number: 6982

Today's topics:
    Re: Xah Lee's Unixism <firstname@lastname.pr1v.n0>
    Re: Xah Lee's Unixism <firstname@lastname.pr1v.n0>
    Re: Xah Lee's Unixism <albalmer@att.net>
    Re: Xah Lee's Unixism <firstname@lastname.pr1v.n0>
    Re: Xah Lee's Unixism <lynn@garlic.com>
    Re: Xah Lee's Unixism <SPAMhukolauTRAP@SPAMworldnetTRAP.att.net>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Thu, 9 Sep 2004 17:08:31 +0200
From: Morten Reistad <firstname@lastname.pr1v.n0>
Subject: Re: Xah Lee's Unixism
Message-Id: <flrphc.tlk1.ln@via.reistad.priv.no>

In article <4140688e$0$6912$61fed72c@news.rcn.com>,  <jmfbahciv@aol.com> wrote:
>In article <413F43AC.9D2088AF@yahoo.com>,
>   CBFalconer <cbfalconer@yahoo.com> wrote:
>>jmfbahciv@aol.com wrote:
>>> Alan Balmer <albalmer@att.net> wrote:
>>>> CBFalconer <cbfalconer@yahoo.com> wrote:
>>>>> Alan Balmer wrote:
>>>>>>
>>>>>... snip ...

[snip: Rush Limbaugh's talk show mentioned]

>>I deplore your tast in radio talk shows.
>
>Oh!  Taste in talk shows.

Ah, then I have deplorable tastes in your opinion. I find Rush
greatly entertaining, but wouldn't use him as a data point.

I wish the left could dig up someone as entertaining as Rush.

>> ..  It doesn't take much to
>>create a rabble rousing poll to increase ratings.
>
>I listen to them for data about how the rabble is thinking
>and the logic they use to form their opinions.  I also
>watch those religious cable TV shows to gather the same kinds
>of information; note that I can only manage to listen to these
>about 10 minutes and not more than once/year.  I also listen
>to Rushie to see what kinds of lies that half of the world is
>listening to.  I watch CSPAN, which never cuts away for commercials,
>doesn't edit too much, and tends to leave the mike on after the
>meetings break up.

With most of these you miss the point if you listen for content
at all. The media IS the message. And you are the product, to
be entertained enough so you can be sold to advertisers. 

>>There is no need, nor cause, to impute Bush & Co. with
>>intrinsically evil intentions.  It is quite enough to point to
>>their lack of capability, and bull headed 'revenge for daddy'
>>propensities.  The state of the economy, unemployment, poverty
>>rate, medical care, deficit, death rate in Iraq (both of Americans
>>and Iraqis), abandonment of the Bin Laden hunt, abridgement of
>>civil liberties (as in the Patriot Act and the Gitmo gulag), poor
>>choice of companions (Halliburton and other political donors and
>>trough feeders, and the 'plausible deniability' of the Swiftboat
>>gang), irritation of allies, inability to deal with North Korea
>>(due to involvement with useless adventures), abandonment of
>>efforts towards a Palestinian peace, all spring to immediate mind.

A lack of focus on world politics has been a characteristic of
US presidents since Eisenhower. Bush is not special; he just got
the mess in his lap and had to deal with it, just as Nixon inherited
the Vietnam war.

>Well, your Bush-hater campaign is working beyond all your 
>expectations.  One day, you will have to live it.

-- mrr


------------------------------

Date: Thu, 9 Sep 2004 17:17:15 +0200
From: Morten Reistad <firstname@lastname.pr1v.n0>
Subject: Re: Xah Lee's Unixism
Message-Id: <r5sphc.a4l1.ln@via.reistad.priv.no>

In article <u1xhbv9s3.fsf@mail.comcast.net>,
Anne & Lynn Wheeler  <lynn@garlic.com> wrote:
>Morten Reistad <firstname@lastname.pr1v.n0> writes:
>> It was an upgrade from 56k. The first versions of NSFnet were not
>> really scalable either; no one knew quite how to design a really
>> scalable network, so that came as we went.
>
>we had a project that i called HSDT
>http://www.garlic.com/~lynn/subnetwork.html#hsdt 
>
>for high-speed data transport ... to differentiate from a lot of stuff
>at the time that was communication oriented ... and had real T1 (in
>some cases clear-channel T1 w/o the 193rd bit) and higher speed
>connections. It had an operational backbone ... and we weren't allowed
>to directly bid NSFNET1 .... although my wife went to the director of
>NSF and got a technical audit. The technical audit summary said
>something to the effect that what we had running was at least five
>years ahead of all NSFNET1 bid submissions to build something new.

In 1987, T1s (or E1s on this end of the pond) were pretty normal;
T3s were state of the art. But it is not very difficult to design
interfaces that shift the data into memory, and 1987-ish computers
could handle a few hundred megabits worth of data pipe without too
much trouble; but you needed direct DMA access, not some of the
then-standard busses or channels.

IBM always designed stellar hardware for such things; what was
normally missing was the software. What Cisco got away with in
terms of lousy hardware (the GS series) is astonishing.

There was a large job to be done to handle routing and network
management issues. BGP4 didn't come out until 1994, nor did 
a decent OSPF or SNMP. 

>one of the other nagging issues was that all links on the internal
>network
>http://www.garlic.com/~lynn/subnetwork.html#internalnet
>
>had to be encrypted. at the time, not only were there not a whole lot
>of boxes that supported full T1 and higher speed links ... but there
>also weren't a whole lot of boxes that support full T1 and higher
>speed encryption.

With hardware assist you could do T1-rate encryption in 1987; in
software you would have had serious problems.

>a joke i like to tell ... which occurred possibly two years before the
>NSFNET1 RFP announcement ... was about a posting defining "high-speed"
>.... earlier tellings:
>http://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
>http://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
>http://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
>http://www.garlic.com/~lynn/2003m.html#59 SR 15,15
>http://www.garlic.com/~lynn/2004g.html#12 network history

-- mrr


------------------------------

Date: Thu, 09 Sep 2004 08:47:55 -0700
From: Alan Balmer <albalmer@att.net>
Subject: Re: Xah Lee's Unixism
Message-Id: <ldu0k01fffmkfevuhpmuu19ufn8a70jrjp@4ax.com>

On Wed, 08 Sep 2004 23:36:49 GMT, iddw@hotmail.com (Dave Hansen)
wrote:

>On Tue, 07 Sep 2004 10:29:04 -0700, Alan Balmer <albalmer@att.net>
>wrote:
>
>>On Sat, 04 Sep 2004 00:49:18 GMT, gwschenk@fuzz.socal.rr.com (Gary
>>Schenk) wrote:
>>
>[...]
>>>Don't you dittoheads ever get your facts right?
>>
>>
>>What's a "dittohead"? Are you trying to convey a personal insult of
>>some kind? Please let me know, so I can call you a name, too.
>
>A "dittohead" is someone who regularly listens to and agrees with Rush
>Limbaugh (popular conservative U.S. radio talk show host).  It is a
>tradition that callers on his show (at least those that agree with
>him) start their call with something like "Country redneck dittos to
>you, Rush," or "Hey, Rush, blues-pickin' Cajun dittos" before
>launching into the subject of their call.  It is intended to be an
>insult implying the "dittoheads" don't have any thoughts of their own,
>but merely are told what to think (probably by Rush), and do so.  The
>"dittoheads" have embraced the moniker but not the implication, seeing
>the insult as an act of desperation attacking the person (ad hominem)
>rather than addressing the issues.
>
Ah, I see. Under the circumstances, that last observation may be
correct, especially when extended to those who cover their lack of
knowledge by accusing others of not having their facts right.

I have seen most of a TV interview with Mr. Limbaugh, when he was in
the news for prescription drug abuse, and I have heard him on the
radio briefly a couple of times. I find it distasteful and switch to
Tony Snow <G>.

-- 
Al Balmer
Balmer Consulting
removebalmerconsultingthis@att.net


------------------------------

Date: Thu, 9 Sep 2004 18:21:52 +0200
From: Morten Reistad <firstname@lastname.pr1v.n0>
Subject: Re: Xah Lee's Unixism
Message-Id: <0vvphc.8nl1.ln@via.reistad.priv.no>

In article <uwtz3trhy.fsf@mail.comcast.net>,
Anne & Lynn Wheeler  <lynn@garlic.com> wrote:
>Morten Reistad <firstname@lastname.pr1v.n0> writes:
>>
>> SMD filters were used at a quite high rate; even inside well
>> filtered rooms. ISTR 6 months was a pretty long interval between
>> PM's.
>
>360s, 370s, etc differentiated between smp ... which was either

smD  the TLA that represents a washing-machine size disk. Mountable. 
  ^  Made impressive head crashes from time to time.

But I won't interfere with this lovely thread drift with lots
of relevant facts. 

>symmetrical multiprocessing or shared memory (multi-)processing
>... and loosely-coupled multiprocessing (clusters).
>http://www.garlic.com/~lynn/subtopic.html#smp
>
>in the 70s, my wife did a stint in POK responsible for loosely-coupled
>multiprocessing architecture and came up with peer-coupled shared
>data
>http://www.garlic.com/~lynn/subtopic.html#shareddata
>
>also in the 70s, i had done a re-org of the virtual memory
>infrastructure for vm/cms. part of it was released as something called
>discontiguous shared memory ... and other pieces of it were released
>as part of the resource manager having to do with page migration
>(moving virtual pages between different backing store devices).
>http://www.garlic.com/~lynn/subtopic.html#fairshare
>http://www.garlic.com/~lynn/subtopic.html#wsclock
>http://www.garlic.com/~lynn/subtopic.html#mmap
>http://www.garlic.com/~lynn/subtopic.html#adcon
>
>in the mid-70s, one of the vm/cms timesharing service bureaus
>http://www.garlic.com/~lynn/subtopic.html#timeshare
>
>was starting to offer 7x24 service to customers around the world; one
>of the issues was being able to still schedule PM .... when there
>was never a time that there wasn't anybody using the system. they
>were already providing support for loosely-coupled, similar to
>HONE
>http://www.garlic.com/~lynn/subtopic.html#hone
>
>for scalability & load balancing. what they did in the mid-70s was to
>expand the "page migration" ... to include all control blocks ...  so
>that processes could be migrated off one processor complex (in a
>loosely-coupled environment) to a different processor complex ...  so
>a processor complex could be taken offline for PM.
>
>in the late '80s, we started the high availability, cluster multiprocessing
>project:
>http://www.garlic.com/~lynn/subtopic.html#hacmp
>
>of course the airline res system had been doing similar things on 360s
>starting in the 60s.
>
>totally random references to airline res systems, tpf, acp, and/or pars:
>http://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
>http://www.garlic.com/~lynn/99.html#17 Old Computers
>http://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
>http://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
>http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
>http://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
>http://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
>http://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
>http://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
>http://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
>http://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE?  Big Iron
>http://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
>http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
>http://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems.  Disk history...people forget
>http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
>http://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
>http://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
>http://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
>http://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
>http://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
>http://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
>http://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
>http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
>http://www.garlic.com/~lynn/2001n.html#0 TSS/360
>http://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
>http://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
>http://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
>http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
>http://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
>http://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
>http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
>http://www.garlic.com/~lynn/2002i.html#83 HONE
>http://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
>http://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
>http://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
>http://www.garlic.com/~lynn/2002o.html#28 TPF
>http://www.garlic.com/~lynn/2002p.html#58 AMP  vs  SMP
>http://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
>http://www.garlic.com/~lynn/2003c.html#30 diffence between itanium and alpha
>http://www.garlic.com/~lynn/2003d.html#67 unix
>http://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
>http://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
>http://www.garlic.com/~lynn/2003g.html#37 Lisp Machines
>http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
>http://www.garlic.com/~lynn/2003k.html#3 Ping:  Anne & Lynn Wheeler
>http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
>http://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
>http://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
>http://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
>http://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
>http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
>http://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
>http://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
>http://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
>http://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
>http://www.garlic.com/~lynn/2004g.html#14 Infiniband - practicalities for small clusters
>
>-- 
>Anne & Lynn Wheeler | http://www.garlic.com/~lynn/




------------------------------

Date: Thu, 09 Sep 2004 10:29:16 -0600
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xah Lee's Unixism
Message-Id: <usm9rtnhf.fsf@mail.comcast.net>

Morten Reistad <firstname@lastname.pr1v.n0> writes:
> In 1987, T1s (or E1s on this end of the pond) were pretty normal;
> T3s were state of the art. But it is not very difficult to design
> interfaces that shift the data into memory, and 1987-ish computers
> could handle a few hundred megabits worth of data pipe without too
> much trouble; but you needed direct DMA access, not some of the
> then-standard busses or channels.
>
> IBM always designed stellar hardware for such things; what was
> normally missing was the software. What Cisco got away with in
> terms of lousy hardware (the GS series) is astonishing.
>
> There was a large job to be done to handle routing and network
> management issues. BGP4 didn't come out until 1994, nor did 
> a decent OSPF or SNMP. 

even in mid-80s .... t1/e1 ... the only (ibm) support was the really
old 2701 and the special zirpel card in the Series/1 that had been
done for FSD.

in fall 1986, there was a technology project out of la gaude that was
looking at a T1 card for the 37xx ... however, the communication
division wasn't really planning on T1 until at least 1991. They had
done a customer survey. since ibm (mainframe) didn't have any T1
support ... they looked at customers that were using 37xx "fat pipe"
support that allowed ganging of multiple 56kbit links into a single
logical unit. they plotted the number of ganged 56kbit links that
customers had installed .... 2-56kbit links, 3-56kbit links, 4-56kbit
links, 5-56kbit links. However, they found no customers with more than
five ganged 56kbit links in a single fat-pipe. Based on that they
weren't projecting any (mainframe) T1 usage before 1991.

what they didn't appear to realize was that the (us) tariffs at the
time had a cross-over where five or six 56kbit links were about the
same price as a single T1. so what was happening ... customers that hit
five or six 56kbit links ... were making the transition directly to T1
and then using non-IBM hardware to drive the link (which didn't show up
on the communication division's 37xx high-speed communication
survey). hsdt easily identified at least 200 customers with T1
operation (using non-ibm hardware support) at the time the
communication division wasn't projecting any mainframe T1 support
before 1991.
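
A back-of-the-envelope sketch of the bandwidth side of that cross-over
(the tariff prices aren't given here, so this only compares aggregate
bandwidth; the 56kbit/sec per link and 1.544Mbit/sec per T1 figures are
the standard line rates, not numbers from the survey):

    #!/usr/bin/perl
    # Aggregate bandwidth of ganged 56kbit links vs. a single T1.
    # Prices are deliberately left out; only the standard line rates
    # (56kbit/sec per link, 1.544Mbit/sec per T1) are assumed here.
    use strict;
    use warnings;

    my $link_kbit = 56;
    my $t1_kbit   = 1544;

    for my $n (2 .. 6) {
        my $agg = $n * $link_kbit;
        printf "%d x 56kbit = %4d kbit (a T1 is %.1fx that)\n",
            $n, $agg, $t1_kbit / $agg;
    }
    # At five or six ganged links -- roughly the price of a T1 under
    # the tariffs described above -- a T1 delivers about 4.6x-5.5x the
    # bandwidth, which is why customers jumped straight to T1.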

the lack of T1 support (other than the really old 2701 and
the fairly expensive zirpel-series/1 offering) was one of the
reasons that the NSFNET1 response went with (essentially) a pbx
multiplexor on the point-to-point telco T1 links ... with the actual
computer links running over the pc/rt 440kbit/sec cards.

hsdt
http://www.garlic.com/~lynn/subnetwork.html#hsdt

had several full-blown T1 links since the early 80s ... and was
working with a project for a full-blown ISA 16-bit T1 card ... with
some neat crypto tricks.

I think it was supercomputing 1990 (or 1991?) in austin where they
were demo'ing T3 links to offsite locations.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/


------------------------------

Date: Thu, 09 Sep 2004 17:21:15 GMT
From: Nick Landsberg <SPAMhukolauTRAP@SPAMworldnetTRAP.att.net>
Subject: Re: Xah Lee's Unixism
Message-Id: <fi00d.336018$OB3.40405@bgtnsc05-news.ops.worldnet.att.net>

jmfbahciv@aol.com wrote:

> In article <20040908192913.67c07e7d.steveo@eircom.net>,
>    Steve O'Hara-Smith <steveo@eircom.net> wrote:
> 
>>On Wed, 08 Sep 04 11:48:36 GMT
>>jmfbahciv@aol.com wrote:
>>
>>
>>>In article <p9qdnTnxTYDJR6PcRVn-pw@speakeasy.net>,
>>>   rpw3@rpw3.org (Rob Warnock) wrote:
>>
>>>>*Only* a month?!?  Here's the uptime for one of my FreeBSD boxes
>>>>[an old, slow '486]:
>>>>
>>>>   %  uptime
>>>>    2:44AM  up 630 days, 21:14, 1 user, load averages: 0.06, 0.02, 0.00
>>>>   % 
>>>>
>>>>That's over *20* months!!
>>>
>>>I bet we can measure the youngster's age by the uptimes he boasts.
>>
>>	The Yahoo! server farm ran to very long uptimes last time I had
>>any details. The reason being that they commission a machine, add it to
>>the farm and leave it running until it is replaced two or three years
>>later.
> 
> 
> Sure.  But regular users of such computing services never get an
> uptime report.  Hell, they have no idea how many systems their
> own webbit has used, let alone all the code that was executed
> to paint that pretty picture on their TTY screen.
> 
> I bet, if we start asking, we might even get some bizarre
> definitions of uptime.

Well, there are lies, damn lies and statistics, don't
you know? :)

I have absolutely no idea of the size of Yahoo's "server
farm," but let's assume that it's roughly 100 servers
to make the arithmetic easier.  Let's further assume
that the MTBF (Mean Time Between Failures) is roughly
2000 hours (a bit under three months).

Given these numbers (which are not real, I remind you,
just made up), it is likely that on any given day
one of those servers suffers some kind of failure.
However, one can argue, quite legitimately, that
the service which Yahoo! provides is still "up and
running."  1% of the users may not be able to access
their mail for a few hours, for example, but Yahoo! is
still running.
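
A quick sketch of that arithmetic, using the same made-up numbers
(100 servers, 2000-hour MTBF -- not real Yahoo! figures):

    #!/usr/bin/perl
    # Expected failures per day across a farm of identical servers,
    # using the made-up numbers from the paragraph above.
    use strict;
    use warnings;

    my $servers    = 100;     # assumed farm size
    my $mtbf_hours = 2000;    # assumed per-server MTBF (~83 days)

    my $per_server = 24 / $mtbf_hours;        # failures per server per day
    my $farm_wide  = $servers * $per_server;  # failures per day, whole farm

    printf "Per-server failures/day: %.3f\n", $per_server;   # 0.012
    printf "Farm-wide failures/day:  %.1f\n", $farm_wide;    # 1.2
    # i.e. on an average day one server somewhere fails, yet the
    # service as a whole is still "up and running".
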

> 
> I do know that the definition of CPU runtime is disappearing.
> 

Not everywhere, Steve.  There are still shops
which do measure CPU time for transactions
and base their sizing computations on that.
The better ones actually start from the requirements
and derive the CPU budget, disk I/O budget, LAN budget, etc.
for each transaction based on that!

(Examples: "Hmmm... an in-memory dbms access takes about 150 usec,
my dbms schema requires 12 reads for this query.  That's
1.8 msec.  My CPU budget is 750 usec.  Maybe I should
redesign something here?" ... or ... "Hmm... my CPU
budget is 3 ms. for this transaction, and I'm constrained
to use a particular XML parser.  Time to measure.  Whoops,
parsing takes around 6 ms for the average message on
my box.  Maybe we shouldn't be using this particular
parser just because it's cheap?  Or maybe we throw
more hardware at the problem and bid twice the number
of servers if we can't find a better XML parser.")
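
A minimal sketch of that kind of budget check, using the figures from
the examples above (the subroutine and layout are illustrative only,
not any particular shop's tooling):

    #!/usr/bin/perl
    # Per-transaction CPU budget check with the figures quoted above.
    use strict;
    use warnings;

    sub check_budget {
        my ($name, $budget_us, $estimate_us) = @_;
        my $verdict = $estimate_us <= $budget_us ? "OK" : "OVER - redesign?";
        printf "%-10s budget %5d us, estimate %5d us => %s\n",
            $name, $budget_us, $estimate_us, $verdict;
    }

    # 12 in-memory dbms reads at ~150 usec each vs. a 750 usec budget
    check_budget("db query",  750, 12 * 150);    # 1800 us: over budget

    # XML parsing measured at ~6 ms vs. a 3 ms transaction budget
    check_budget("xml parse", 3000, 6000);       # also over budget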

-- 
"It is impossible to make anything foolproof
because fools are so ingenious"
  - A. Bloch


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V10 Issue 6982
***************************************

