[24828] in Perl-Users-Digest
Perl-Users Digest, Issue: 6979 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Thu Sep 9 09:06:24 2004
Date: Thu, 9 Sep 2004 06:05:07 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Thu, 9 Sep 2004 Volume: 10 Number: 6979
Today's topics:
Re: Object Oriented Perl : Query (Anno Siegel)
Re: parsing XML using a regular expression <HelgiBriem_1@hotmail.com>
Re: parsing XML using a regular expression <jurgenex@hotmail.com>
Re: parsing XML using a regular expression <incoming@trainscan.com>
Re: parsing XML using a regular expression <tadmc@augustmail.com>
Re: parsing XML using a regular expression <matternc@comcast.net>
Re: Perl and Inheritance strangeness. <tadmc@augustmail.com>
Perl web automation question ref javascript:dt_pop <dunkirk_phil@hotmail.com>
Re: RE-Redirecting STDOUT <usenet@morrow.me.uk>
Re: Socket holding pattern (Anno Siegel)
Re: web services... <mark.clements@kcl.ac.uk>
Re: Xah Lee's Unixism <firstname@lastname.pr1v.n0>
Re: Xah Lee's Unixism <firstname@lastname.pr1v.n0>
Re: Xah Lee's Unixism <Brian.Inglis@SystematicSW.Invalid>
Re: Xah Lee's Unixism <Brian.Inglis@SystematicSW.Invalid>
Re: Xah Lee's Unixism <Brian.Inglis@SystematicSW.Invalid>
Re: Xah Lee's Unixism <Brian.Inglis@SystematicSW.Invalid>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: 9 Sep 2004 11:13:36 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: Object Oriented Perl : Query
Message-Id: <chpdt0$qbn$1@mamenchi.zrz.TU-Berlin.DE>
Peter J. Acklam <pjacklam@online.no> wrote in comp.lang.perl.misc:
> anno4000@lublin.zrz.tu-berlin.de (Anno Siegel) wrote:
>
> > The shortcut of calling ->new as an object method is
> > inherently unclear. Unless there are massive advantages
> > in allowing it (there aren't in most classes), I think it
> > is better to leave it out. Its use as a matter of course
> > smacks of cargo cult.
>
> The "$class = ref($obj) || $obj;" construction is used many
> places in the Perl docs and the Perl standard modules, so
Yes. Much to the regret of some of us. It's a pretty idiom,
and it has been propagated by well-renowned people. (I'm
tempted to say, people who should know better :)
> it's not strange that people choose to use it.
No, it isn't. It is one of the harder design decisions *not* to
do something on behalf of the user that seems to offer itself.
If it is easy to do, and can as well be done outside the routine,
think twice before doing it inside (and forcing it upon the user).
You wouldn't sort a list result just because some users may want
it sorted and others won't care. In this case, the performance
penalty is obvious, so experienced programmers won't do it.
The disadvantage of making ->new object-callable is "only" that
the reader can't guess what exactly the object call does. That's a
less visible drawback, and so the idiom had a chance to spread.
> Anyway, if its use is disallowed, an appropriate error
> message should be given. Maybe something like this:
>
> sub new {
>     my $class = shift;
>     croak "new(): not an instance method" if ref $class;
>     ...
> }
Perhaps, but only because users may have come to expect it to be
callable that way. Basically, ->new is a class method and the
user has no business calling it through an object. If they
do, they may as well deal with the consequences.
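To make that concrete, here is a minimal sketch of a constructor that is
strictly a class method, with the croak Peter suggested (the Counter class
and its count field are invented for illustration):

```perl
package Counter;
use strict;
use warnings;
use Carp;

# A constructor that is strictly a class method: no ref($obj) || $obj.
sub new {
    my $class = shift;
    croak "new(): not an instance method" if ref $class;
    my $self = { count => 0, @_ };
    return bless $self, $class;
}

package main;

my $obj = Counter->new(count => 5);
print $obj->{count}, "\n";    # prints 5
# $obj->new would croak here: the caller gets a clear error instead of
# having to guess whether the object call clones, copies, or ignores $obj.
```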
Anno
------------------------------
Date: Thu, 09 Sep 2004 10:35:56 +0000
From: Helgi Briem <HelgiBriem_1@hotmail.com>
Subject: Re: parsing XML using a regular expression
Message-Id: <77c0k0teunehh7481s7vrs04lrhgpd58v8@4ax.com>
On 9 Sep 2004 00:44:18 -0700, "Leif Wessman" <leifwessman@hotmail.com>
wrote:
Don't top-post. It annoys the regulars and severely damages
your chances of receiving useful answers to your questions.
>I was trying to find a general solution to parsing both HTML and
>xml-files. And I didn't know that regular expressions was such a bad
>idea when parsing XML. Now I know, and now I will build a solution
>using regular expressions for HTML and an XML-parser for the XML-files.
Using regular expressions to parse HTML is just as bad
as using them to parse XML; real-world HTML is, if
anything, even less regular. Use the appropriate
modules to parse HTML.
For details on why this is a bad idea, read the FAQ:
perldoc -q "remove HTML"
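For the record, the module route is short. A sketch using HTML::TokeParser
from the HTML::Parser distribution on CPAN (the sample document is invented):

```perl
use strict;
use warnings;
use HTML::TokeParser;    # part of the HTML::Parser distribution

# Invented sample document, for illustration only.
my $html = '<p>See <a href="http://example.com/">this page</a>.</p>';
my $p    = HTML::TokeParser->new(\$html);

# Walk the <a> tags, pulling out each href attribute and its link text.
while (my $tag = $p->get_tag('a')) {
    my $href = $tag->[1]{href};
    my $text = $p->get_trimmed_text('/a');
    print "$text => $href\n";    # prints "this page => http://example.com/"
}
```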
--
Helgi Briem hbriem AT simnet DOT is
Never worry about anything that you see on the news.
To get on the news it must be sufficiently rare
that your chances of being involved are negligible!
------------------------------
Date: Thu, 09 Sep 2004 11:13:39 GMT
From: "Jürgen Exner" <jurgenex@hotmail.com>
Subject: Re: parsing XML using a regular expression
Message-Id: <DVW%c.7337$Q44.2470@trnddc09>
[Would you PLEASE stop top-posting and blindly full-quoting?]
Leif Wessman wrote:
> I was trying to find a general solution to parsing both HTML and
> xml-files.
That general solution would be to use a proper parser.
If you want to write your own parser, or even a parser that can handle both
XML and HTML, please be my guest. But I highly recommend laying some ground
work first, e.g. attending a class on compiler construction or on formal
languages. There are several good books in that area, too.
Otherwise this is very likely to become an exercise in futility.
> And I didn't know that regular expressions was such a bad
> idea when parsing XML. Now I know, and now I will build a solution
> using regular expressions for HTML and an XML-parser for the
> XML-files.
Arrrrgggggg! Have you been reading _any_ previous postings or even the FAQ
about parsing HTML?
"Contrary to popular belief, parsing HTML correctly is close to rocket
science, and while it might be theoretically possible to parse HTML using
extended REs, no sane person would ever attempt to do so."
For further information please see the FAQ (perldoc -q "remove HTML") and
the numerous postings about this topic on google.
jue
------------------------------
Date: Thu, 09 Sep 2004 12:10:19 GMT
From: Tim Green <incoming@trainscan.com>
Subject: Re: parsing XML using a regular expression
Message-Id: <Xns955F3E8FB3CA5incomingtrainscancom@24.70.95.211>
"Leif Wessman" <leifwessman@hotmail.com> wrote in
news:chn2ah$ilg@odbk17.prod.google.com:
> Hi!
>
> I'm trying to parse some xml with a regular expression (yes, i know
> that there is several XML modules that I can use).
See <http://www.cs.sfu.ca/~cameron/REX.html>
XML Shallow Parsing with Regular Expressions
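The gist of Cameron's approach, in a much-simplified sketch (the sample
input is invented, and the real REX pattern also recognizes comments,
CDATA sections and processing instructions, which this one does not):

```perl
use strict;
use warnings;

# One alternation splits a document into markup tokens and text tokens,
# which is what "shallow parsing" means here: no tree, just a token list.
my $xml    = '<a href="x">hi</a> there';
my @tokens = $xml =~ /(<[^>]*>|[^<]+)/g;
print "$_\n" for @tokens;    # <a href="x"> / hi / </a> /  there
```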
--
###### |\^/| Timothy C. Green, CD, PEng, MEng
###### _|\| |/|_ incoming@TrainsCan.com
###### > < TrainsCan, Train Scan News
###### >_./|\._< http://www.TrainsCan.com
------------------------------
Date: Thu, 9 Sep 2004 07:06:33 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: parsing XML using a regular expression
Message-Id: <slrnck0hq9.5tl.tadmc@magna.augustmail.com>
Tintin <tintin@invalid.invalid> wrote:
> "Leif Wessman" <leifwessman@hotmail.com> wrote in message
> news:chp1ki$rpa@odbk17.prod.google.com...
>> using regular expressions for HTML
> That's like saying "I know that heating my room with a jet engine is a bad
> idea, but I'm going to do it anyway."
You CAN cool your beer with a jet engine though!
"The World's First Jet Powered Beer Cooler"
http://www.asciimation.co.nz/beer/
Some people have too much time on their hands...
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: Thu, 09 Sep 2004 08:38:11 -0400
From: Chris Mattern <matternc@comcast.net>
Subject: Re: parsing XML using a regular expression
Message-Id: <3KCdnfM7g-Mu093cRVn-uQ@comcast.com>
Leif Wessman wrote:
>
> Hi!
>
> I was trying to find a general solution to parsing both HTML and
> xml-files. And I didn't know that regular expressions was such a bad
> idea when parsing XML. Now I know, and now I will build a solution
> using regular expressions for HTML and an XML-parser for the XML-files.
>
> THANKS for all your input!!
>
Argh. Regexps are no more a good idea for HTML than they are for XML.
--
Christopher Mattern
"Which one you figure tracked us?"
"The ugly one, sir."
"...Could you be more specific?"
------------------------------
Date: Thu, 9 Sep 2004 07:11:52 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: Perl and Inheritance strangeness.
Message-Id: <slrnck0i48.5tl.tadmc@magna.augustmail.com>
Anthony Roy <news@ant-roy.co.uk> wrote:
> Four example files are at the bottom of this email
^^^^^^^^^^
^^^^^^^^^^
Usenet is not email!
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: Thu, 9 Sep 2004 11:38:16 +0100
From: "phil court" <dunkirk_phil@hotmail.com>
Subject: Perl web automation question ref javascript:dt_pop
Message-Id: <chpbqv$eek1@cvis05.marconicomms.com>
Hi all,
I am trying to write a script to retrieve a web page. The script is detailed
below. My problem is as follows.
The script can successfully obtain web pages such as http://news.bbc.co.uk
and http://www.dreamteamfc.com
However it fails on the following URL:
http://www.dreamteamfc.com/dtfc04/servlet/PostPlayerList?catidx=1&title=GOALKEEPERS&gameid=167
The returned web page (saved in myOUT.txt) contains:
<HTML><HEAD><SCRIPT LANGUAGE="JAVASCRIPT">location.replace("http://www.dreamteamfc.com");</SCRIPT></HEAD></HTML>
The above URL is valid: I have pasted it into my browser and it displays OK.
It is part of the http://www.dreamteamfc.com page and is reached via a
javascript:dt_pop link (whatever that is).
Anyway, here is the script. Any ideas? Thanks
#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();
$ua->proxy('http', 'http://128.87.251.250:8080');
# Note: LWP::Simple's get() knows nothing about $ua, so it would ignore
# the proxy set above.  Fetch through the UserAgent object instead.
#my $response = $ua->get("http://news.bbc.co.uk");
my $response = $ua->get("http://www.dreamteamfc.com/dtfc04/servlet/PostPlayerList?catidx=1&title=GOALKEEPERS&gameid=167");
#my $response = $ua->get("http://www.dreamteamfc.com");
my $script = "myOUT.txt";
unlink $script;
open(my $out, '>>', $script) || die "cannot open $script: $!";
if ($response->is_success)
{
#The response body is the html associated with the url mentioned above.
print $out $response->content;
}
else
{
#If an error occurs, report the status line.
print "Error: Get stuffed (", $response->status_line, ")\n";
}
close $out;
------------------------------
Date: Wed, 8 Sep 2004 10:56:08 +0100
From: Ben Morrow <usenet@morrow.me.uk>
Subject: Re: RE-Redirecting STDOUT
Message-Id: <oi1312-2g4.ln1@osiris.mauzo.dyndns.org>
Quoth aisarosenbaum@gmail.com (aisarosenbaum):
> Brian McCauley <nobull@mail.com> wrote in message news:<ch7r2j$q1e$1@slavica.ukpost.com>...
> > aisarosenbaum@yahoo.com wrote:
> > > No I'm not a stuttering typist. ;^)
> > >
> > > I'm in the peculiar position of working in an environment in which
> > > STDOUT has been rudely redirected by a script (A) that runs my
> > > scripts (B). I want to take STDOUT back without knowing the
> > > handle to which it was redirected. I've found a lot of advice
> > > on redirecting STDOUT, but I need to re-redirect it back to the
> > > console from 'B' without hacking 'A'.
> > >
> > > (This is on Solaris)
> >
> > On a Unix-like OS, opening the virtual device /dev/tty opens the
> > current session's controlling terminal.
> >
> > This, of course, has nothing to do with Perl.
>
> Thanks, this works:
>
> open( STDOUT, ">/dev/tty" );
>
> Any ideas for a MS-portable solution?
IIRC you can open CON: under MS to write to the current console window. You
could try suggesting to the maintainers of File::Spec that they add this to
that module.
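Putting the two suggestions together, a sketch (device names as discussed
above; the Windows branch is untested here):

```perl
use strict;
use warnings;

# Pick the console device by platform: /dev/tty on Unix-likes,
# CON on Windows.
my $console = $^O eq 'MSWin32' ? 'CON' : '/dev/tty';

# Reopening may fail if there is no controlling terminal (e.g. under
# cron), so warn rather than die.
open(STDOUT, '>', $console)
    or warn "cannot reopen STDOUT on $console: $!";
print "back on the console\n";
```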
Ben
--
Heracles: Vulture! Here's a titbit for you / A few dried molecules of the gall
From the liver of a friend of yours. / Excuse the arrow but I have no spoon.
(Ted Hughes, [ Heracles shoots Vulture with arrow. Vulture bursts into ]
/Alcestis/) [ flame, and falls out of sight. ] ben@morrow.me.uk
------------------------------
Date: 9 Sep 2004 10:07:25 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: Socket holding pattern
Message-Id: <chpa0t$o02$1@mamenchi.zrz.TU-Berlin.DE>
Gordon <clemmons@gmail.com> wrote in comp.lang.perl.misc:
> > If the duration of each connection is limited, an "overlapping restart"
> > is a possibility.
> >
> > On a signal (or something), the server forks and execs a new copy.
> > It also ceases to accept new connections, but continues serving
> > the old ones. When the last connection is gone, it dies. All
> > new requests are served by the new process.
> >
> > If you have unlimited connection time, and can't afford to break
> > long-lasting ones, this simple scheme won't work. Sketchy as it
> > is, it may well not work for other reasons.
> >
> > Anno
>
> Thanks for the message, it's actually for unlimited connections
> though. It's a mud / online game server.
I'd still go a long way to avoid having to hand over multiple
connections from one process to another. Apart from the technicalities
of doing that, you may run into compatibility problems when the
new server must handle connections that were established under the
old one.
You could just leave the old server running (until the next
system reboot -- nothing survives that anyway). Bug the players
still logged in on the old server with messages ("New features!
Log out and back in to use them!") until they do.
Another possibility is to open the relevant file handles (sockets)
so that they are immune to close-on-exec. (Set $^F to something
huge while they are opened). The old server process would have
to tell the new one which file descriptors are in use, so the new
one can take them over. Besides the compatibility issues already
mentioned, there may be synchronization problems when both processes
hold file handles to active connections.
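The $^F trick looks roughly like this (a sketch; SERVER_FD is an invented
handover mechanism, and the exec of the new server is commented out):

```perl
use strict;
use warnings;
use Socket;

my $listener;
{
    # File descriptors above $^F are closed across exec(); making $^F
    # huge while the socket is created exempts it from close-on-exec.
    local $^F = 1_000_000;
    socket($listener, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
        or die "socket: $!";
}

# SERVER_FD is an invented handover channel: the old server tells the
# new one which descriptor to adopt, here via the environment.
$ENV{SERVER_FD} = fileno($listener);
# exec '/path/to/new-server';   # new process reopens fd $ENV{SERVER_FD}
print "listener survives exec on fd $ENV{SERVER_FD}\n";
```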
Anno
------------------------------
Date: Thu, 09 Sep 2004 12:40:17 +0200
From: Mark Clements <mark.clements@kcl.ac.uk>
Subject: Re: web services...
Message-Id: <41403312@news.kcl.ac.uk>
Prasad Gadgil wrote:
> hi,
>
> Web services is very interesting. How does one code this using perl?
> Any famous examples ?
google for it. give us a break.
------------------------------
Date: Thu, 9 Sep 2004 12:18:36 +0200
From: Morten Reistad <firstname@lastname.pr1v.n0>
Subject: Re: Xah Lee's Unixism
Message-Id: <slaphc.beh1.ln@via.reistad.priv.no>
In article <uisaoutfz.fsf@mail.comcast.net>,
Anne & Lynn Wheeler <lynn@garlic.com> wrote:
>Morten Reistad <firstname@lastname.pr1v.n0> writes:
>> Since I am on a roll with timelines; just one off the top of my head :
>>
>> Project start : 1964
>> First link : 1969
>> Transatlantic : 1972 (to Britain and Norway)
>> Congested : 1976
>> TCP/IP : 1983 (the effort started 1979) (sort of a 2.0 version)
>> First ISP : 1983 (uunet, EUnet followed next year)
>> Nework Separation : 1983 (milnet broke out)
>> Large-scale design: 1987 (NSFnet, but still only T3/T1's)
>> Fully commercial : 1991 (WIth the "CIX War")
>> Web launced : 1992
>> Web got momentum : 1994
>> Dotcom bubble : 1999 (but it provided enough bandwith for the first time)
>> Dotcom burst : 2001
>
[i'll snip the excellent references you always come up with]
>was for backbone between regional locations ... it was suppose to be
>T1 links. What was installed was IDNX boxes that supported
>point-to-point T1 links between sites ... and multiplexed 440kbit
>links supported by racks & racks of PC/RTs with 440kbit boards ... at
>the backbone centers.
It was an upgrade from 56k. The first version of NSFnet was not really
scalable either; no one quite knew how to design a really scalable
network, so that came as we went.
>the t3 upgrades came with the nsfnet2 backbone RFP
For the grand timeline I'll treat the two NSFnets as a continuing
development.
>my wife and i somewhat got to be the red team design for both nsfnet1
>and nsfnet2 RFPs.
>
>note that there was commercial internetworking protocol use long
>before 1991 ... in part evidence the heavy commercial turn-out at
>interop '88
>http://www.garlic.com/~lynn/subnetwork.html#interop88
Yes, commercial internet offerings were available as early as
1983-84; but until Cisco, IBM, Wellfleet and Proteon made real router
gear (1986?) it was a little lame. I remember lamenting the software
of the IBM routers ca. 1988, because they were light years ahead
of the competition in the actual hardware design.
But until 1991 (Gordon Cook has the gory details) you had to accept the
NSFnet AUP if you wanted full connectivity (academic use only, in
principle, although dissemination of Open Source products was probably
acceptable). A lot of the important servers and sites were only reachable
through this "full connectivity"; so uunet, EUnet, PSInet and others
collaborated to build around NSFnet. The first 'IX was born
to exchange traffic: the CIX.
It didn't go smoothly, though. Some institutions had to be threatened
with retribution, and with an "inverse AUP", before they accepted
connectivity. But the "CIX war" was won by the good guys, and the
Internet became a commercial endeavour.
In other jurisdictions it took a little longer. In Norway it took a
parliamentary debate to make it crystal clear that a soggy half-commercial
model was unacceptable, and the threat of legislation was used.
We had plans for a fully commercial ISP ready, in practice, since 1986;
and in 1992 we ran to implement them.
>the issue leading up to the cix war was somewhat whether commercial
>traffic could be carried over the nsf funded backbone .... the
>internetworking protocol enabling the interconnection and heterogenous
>interoperability of large numbers of different "internet" networks.
>
>part of the issue was that increasing commercial use was starting to
>bring down the costs (volume use) .... so that a purely nsfnet
>operation was becoming less and less economically justified (the cost
>for a nsfnet only operation was more costly and less service than what
>was starting to show up in the commercial side).
It was the pains of the Internet growing out of academia, without a
good model to regulate it.
>part of the issue was that there was significant dark fiber in the
>ground by the early 80s and the telcos were faced with a significant
>dilemma .... if they dropped the bandwidth price by a factor of 20
>and/or offered up 20 times the bandwidth at the same cost .... it
>would be years before the applications were available to drive the
>bandwidth usage to the point where they were taking in sufficient
>funds to cover their fixed operating costs. so some of the things you
>saw happening were controlled bandwidth donations (in excess of what
>might be found covered by gov. RFPs) to educational institutions by
>large commercial institutions .... for strictly non-commercial use.
>Such enormous increases in bandwidth availability in a controlled
>manner for the educational market would hopefully promote the
>development of bandwidth hungry applications. They (supposedly) got
>a tax-deduction for their educational-only donations .... and it
>wouldn't be made available for the commercial paying customers.
But this could not be enforced without firewalls; and these institutions
didn't want to erect those; they wanted the policy hammered into the
Internet itself. That would have killed the Internet. Fortunately
the "second internet", a commercial Internet on purely commercially
obtained hardware and circuits, was built around the NSFnet. But the
two needed to interconnect. For a while there were two internets,
one commercial and one academic, that only half-way interconnected.
It was finally resolved in 1991; and from then on the Internet as
such was a fully commercial internetwork, where AUPs only applied to
local networks.
-- mrr
------------------------------
Date: Thu, 9 Sep 2004 12:22:15 +0200
From: Morten Reistad <firstname@lastname.pr1v.n0>
Subject: Re: Xah Lee's Unixism
Message-Id: <nsaphc.beh1.ln@via.reistad.priv.no>
In article <2tjvj0ttc99io295ecg2l86lc2h4tug1jc@4ax.com>,
Reynir Stefánsson <reynirhs@mi.is> wrote:
>So spake Anne & Lynn Wheeler:
>
>>OSI can support x.25 packet switching and/or even the arpanet packet
>>switching from the 60s & 70s .... but it precludes internetworking
>>protocol. internetworking protocol (aka internet for short) is a
>(non-existent) layer in an OSI protocol stack between
>>layer3/networking and layer4/transport. misc. osi (& other) comments
>>http://www.garlic.com/~lynn/subnetwork.html#xtphsp
>
>Wasn't the idea behind ISO/OSI that there should be One Network for
>everybody, instead of today's lot of interconnected nets?
There were provisions for many networks, but it was a design that
required (large) service providers, aka phone companies, to provide
service.
Self-provisioning, as we do all the time on the Internet, was difficult.
-- mrr
------------------------------
Date: Thu, 09 Sep 2004 11:48:17 GMT
From: Brian Inglis <Brian.Inglis@SystematicSW.Invalid>
Subject: Re: Xah Lee's Unixism
Message-Id: <bkg0k0ht24p9d1oo2bsb4tatumu177st82@4ax.com>
On 08 Sep 04 18:50:12 -0800 in alt.folklore.computers, "Charlie Gibbs"
<cgibbs@kltpzyxm.invalid> wrote:
>In article <413f6044.512285562@News.individual.net>, iddw@hotmail.com
>(Dave Hansen) writes:
>>ObUnix: Max OS X has a "ditto" command that's the same as "cp" only
>>different.
>
>Wasn't "ditto" the name of one of those console-driven mainframe
>utilities that would copy anything to anything? (Another version
>was known as DEBE, which stood for "Does Everything But Eat".)
>I got my hands on some source code and got one working on the Univac
>9400 and 90/30. Thanks to our convention of prefixing such utility
>program names with "UV" (for Univac Vancouver), it wound up being
>called UVDITO (so that it would fit into the 6-character name limit).
IBM DOS/VSE Data Interfile Transfer, Testing, and Operations utility
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian.Inglis@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
------------------------------
Date: Thu, 09 Sep 2004 11:57:10 GMT
From: Brian Inglis <Brian.Inglis@SystematicSW.Invalid>
Subject: Re: Xah Lee's Unixism
Message-Id: <p6h0k0ds97g1ti7jl7jjmmac39bmgd0l6a@4ax.com>
On Thu, 09 Sep 2004 04:05:31 +0000 in alt.folklore.computers, Reynir
Stefánsson <reynirhs@mi.is> wrote:
>So spake Anne & Lynn Wheeler:
>
>>OSI can support x.25 packet switching and/or even the arpanet packet
>>switching from the 60s & 70s .... but it precludes internetworking
>>protocol. internetworking protocol (aka internet for short) is a
>(non-existent) layer in an OSI protocol stack between
>>layer3/networking and layer4/transport. misc. osi (& other) comments
>>http://www.garlic.com/~lynn/subnetwork.html#xtphsp
>
>Wasn't the idea behind ISO/OSI that there should be One Network for
>everybody, instead of today's lot of interconnected nets?
A common network run by PTTs with ISDN terminal links IIRC.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian.Inglis@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
------------------------------
Date: Thu, 09 Sep 2004 12:04:35 GMT
From: Brian Inglis <Brian.Inglis@SystematicSW.Invalid>
Subject: Re: Xah Lee's Unixism
Message-Id: <qbh0k0p80gc4j0dilf3329lc3oahq3voae@4ax.com>
On Tue, 07 Sep 2004 22:04:52 +0100 (BST) in alt.folklore.computers,
bhk@dsl.co.uk (Brian {Hamilton Kelly}) wrote:
>On Tuesday, in article
> <qsdrj0dl4qi558bopev159fg4m7rn6mfoq@4ax.com>
> Brian.Inglis@SystematicSW.Invalid "Brian Inglis" wrote:
>
>> I was never aware that DEC offered TCP/IP.
>
>You'll have seen my later post about "TCP/IP Services for Vax/VMS"
>(which, a niggle tells me, had a different name, either before or after).
>This was written by the Unix developers at DEC, and consequently was very
>kludgy and astonishingly badly-documented (for those of us used to the
>high quality of VMS documentation).
>
>Did you never see a
> UCX>
>prompt?
>
>> Politics and not timing was why TCP/IP didn't get into VMS:
>> d|i|g|i|t|a|l backed the European horse that never ran as it fitted
>> better with their network hardware capabilities and DECnet plans.
>> It also meant they did not have to deal with those BBN guys that had
>> developed a competing OS and network.
>> They had whole suites of products layered on top of DECnet that were
>> sold to European governments and contractors.
>
>Can you say "Colour Book Software"? :-(
I thought it was "Colouring Book Networking" ;^>
>(Mind you, unattended file transfer running overnight beats FTP hands
>down.)
Until you measure the transfer rate. Reliable unattended FTP file
transfer is doable with some work (mainly due to FTP not always
returning useful error codes), and finishes much faster.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian.Inglis@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
------------------------------
Date: Thu, 09 Sep 2004 12:11:32 GMT
From: Brian Inglis <Brian.Inglis@SystematicSW.Invalid>
Subject: Re: Xah Lee's Unixism
Message-Id: <prh0k0pgqga9cfg4847uugu38hfcvpq70a@4ax.com>
On 07 Sep 04 09:44:24 -0800 in alt.folklore.computers, "Charlie Gibbs"
<cgibbs@kltpzyxm.invalid> wrote:
>In article <20040904.0140.57670snz@dsl.co.uk>, bhk@dsl.co.uk
>(Brian {Hamilton Kelly}) writes:
>
>>On Thursday, in article
>><slrncjf52a.oa.amajorel@vulcain.knox.com> amajorel@teezer.fr
>>"Andre Majorel" wrote:
>>
>>> Are you arguing that the stability comes from the API, not from
>>> the implementation ? If so, why has NT become more stable over
>>> the years, since its API has not changed ?
>>
>>I'd like to imagine that it's because there are fewer fuckwits using
>>it; BICBW....
>
>Does this mean that XP is getting less stable?
Well, MS touted their SP2 security upgrade, then backed down rather
quickly, as it created as many new bugs and holes as it fixed, and
also broke a large number of third-party applications, possibly
because they were coded to work with the way XP actually behaved
rather than as it was documented to work.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian.Inglis@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 6979
***************************************