Perl-Users Digest, Issue: 6401 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Apr 14 14:10:56 2004
Date: Wed, 14 Apr 2004 11:10:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Wed, 14 Apr 2004 Volume: 10 Number: 6401
Today's topics:
Earthquake and tornado data evaluation Perl program Ap <edgrsprj@ix.netcom.com>
Re: eval running external script (Anno Siegel)
Re: eval running external script <jwillmore@remove.adelphia.net>
Re: fixed the bbs... <me@privacy.net>
Re: import() override - comments? (Anno Siegel)
Is there any perl script for converting XML file to tab (gongwuming@hotmail.com)
Re: Is there any perl script for converting XML file to <ZedGama3@nospam.com>
Re: Many filename arguments: EOF in "while(@slurp=<>){. <fma@doe.carleton.ca>
Re: Many filename arguments: EOF in "while(@slurp=<>){. <tadmc@augustmail.com>
Re: searching a structured text data base <jwillmore@remove.adelphia.net>
Re: Where To Go For Classroom Training in CGI? <jwcorpening@verizon.net>
Re: Where To Go For Classroom Training in CGI? <jurgenex@hotmail.com>
Re: Where To Go For Classroom Training in CGI? <ittyspam@yahoo.com>
Re: Where To Go For Classroom Training in CGI? <spamtrap@dot-app.org>
Re: Where To Go For Classroom Training in CGI? <tadmc@augustmail.com>
Re: Why are arrays and hashes this way? (Xavier Noria)
Re: Why are arrays and hashes this way? (Malcolm Dew-Jones)
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Wed, 14 Apr 2004 15:42:48 GMT
From: "edgrsprj" <edgrsprj@ix.netcom.com>
Subject: Earthquake and tornado data evaluation Perl program Apr. 14, 2004
Message-Id: <YZcfc.7030$zj3.3864@newsread3.news.atl.earthlink.net>
Summary: If all goes according to plan, the earthquake and tornado data
evaluation Perl program discussed in this report will be made available to
researchers, governments, disaster mitigation personnel, and computer
programmers around the world, possibly later this week. This report has
been posted here in an effort to see if anyone would like to comment on a
demonstration version of that computer program.
The information in this report represents expressions of personal opinion.
Posted by E.D.G. Professional Analyst April 14, 2004
http://www.freewebz.com/eq-forecasting/index.html
Summary: A small demonstration version of an earthquake and tornado data
evaluation Perl program can now be downloaded for free from the following
address:
http://www.freewebz.com/eq-forecasting/311.zip (About 400,000 bytes)
That file contains only ASCII text and HTML files, including one Perl program
in an ASCII text file format. However, to be safe, that downloaded zip
file should of course be checked for viruses before it is opened. I
don't run that Web server. The zip file expands to about 11 files for a
total of roughly 3 million bytes.
Support documentation is contained in a Read-Me file in that zip file and on
the following Web pages:
http://www.freewebz.com/eq-forecasting/301.html
http://www.freewebz.com/eq-forecasting/302.html
http://www.freewebz.com/eq-forecasting/303.html
If you would like to see the type of data the program generates and examine
its source code before downloading the zip file then that information can be
found on that 303.html Web page. That page and the 302.html Web page also
contain information regarding how to interpret the data which the program
generates. The numbers in the PR (a probability rating) column are the most
important. They show how well the program's test routines indicate a
warning signal matched some past earthquake.
The small demonstration version of the program will complete a run in just a
few seconds. Then, if it is being run with Windows XP, it will try to have a
text editor automatically open a data output file named "results.out". You
may have to tell Windows which text editor should open files with that
extension. Several commands near the bottom of the program can be changed
to have it automatically open the output file with Windows 98. And Windows
98 Progman.exe will then have to be closed before another run is started.
With other operating systems and perhaps even with Windows XP and 98 it
might be necessary to go to the directory where the program resides and
manually open the output file with a text editor.
I would be interested in hearing if the program runs Ok with Unix and other
operating systems.
Plans are for the full program to be made available for download at that Web
site as soon as all of the documentation has been finalized and some
adjustments have been made to a few of the program data files.
For the following reason and others I believe that this program has the
potential to revolutionize the science of earthquake forecasting and perhaps
also move the science of tornado forecasting forward:
1. It actually works for at least some earthquakes. There have been many
attempts to develop such a data processing program over the centuries. But I
am not aware of any others which have been successful. This one works in
part because of two important discoveries which I made regarding how sun and
moon gravitational pull data need to be combined and regarding the existence
of an earthquake triggering and precursor signal symmetry effect.
Information regarding those discoveries can be found on the following Web
page and others at that Web site:
http://www.freewebz.com/eq-forecasting/90-05.html
The program has not yet been used to do much tornado research. But I
believe that its data processing routines might work for evaluating what
appear to me to be certain types of tornado warning signals.
In its present form it does not actually predict earthquakes or tornados.
Rather it attempts to tell where fault zones are located which are
generating the precursor signals which people are detecting. Once they know
where those fault zones are located they can then use different tests to
check the areas for other signs of approaching earthquakes. I have seen the
program produce some amazingly accurate data. Along those lines that
303.html file contains some information which the program generated which I
believe pointed to the approaching destructive California, USA earthquake
which occurred on December 22, 2003.
2. The most important reason that it has so much potential may be the fact
that it should, or at least I hope that it will, open the door to these
sciences to both professional and amateur researchers around the world
including computer programmers who I expect might be doing quite a bit of
the initial computer program expansion work.
Normally if you want to do any work in these areas you need a laboratory
filled with expensive scientific equipment. With this computer program all
you now really need is a sufficiently powerful computer. And any personal
computer built in the past 10 years should probably be adequate. One of the
downloadable files will contain records of more than 20,000 earthquakes
going back to the start of 1990 along with quite a few records of earthquake
warning signals and what are believed to be tornado warning signals which
were detected during the past decade. Researchers can use the program or
some future version of it to study both the earthquakes and warning signals
in that file. They don't need any expensive instrumentation. And they
don't even need Internet access if they can get the program files from some
other source. Computer programmers should be able to dramatically improve
both the data processing routines in the program and also its structure, how
it collects, processes, generates, and displays data.
My discussions with other researchers around the world indicate to me that
it should be possible for people to obtain new, reasonably high quality
earthquake precursor data for free, without too much trouble, from numerous
sources around the world.
The program has virtually unlimited room for improvement.
It relies heavily on sun, moon, ocean tide, and Solid Earth Tide data. And
it needs to have subroutines added to it which would enable it to generate
those types of data as they are needed. I am generating them myself with
programs such as the commercially available MICA program. But for the
foreseeable future, downloadable versions of the program will have to
extract those types of data from enormous data files which will be in the
download files. One of them has already been stored at my Web site:
http://www.freewebz.com/eq-forecasting/313.zip (About 4 million bytes)
That file does not have to be downloaded for the demonstration version of
the program to work.
The program presently uses relatively simple tests to compare data regarding
things such as the locations of the moon in the sky when earthquakes
occurred with that same type of data for the times when warning signals were
detected. It needs to have more sophisticated comparison tests added.
Last but not least, some organization is going to have to keep track of
where development efforts stand and try to keep them moving forward. I will
not have the resources to do that myself if very many computer programmers
and researchers around the world get involved with program development. I
am planning to contact one international organization that I have in mind in
an effort to see if they would like to take charge of the effort.
It is my expectation that this effort will eventually become sufficiently
advanced and complex that only specialists in geophysics and atmospheric and
space sciences will be able to make any significant discoveries. But I
believe that at the moment this represents a relatively new area of science.
This is not a sophisticated or even well written Perl program. As the
documentation explains, one of my primary goals was to get something running
as quickly as possible. I needed the program for my own forecasting
efforts. My older data evaluation programs written in different versions of
Basic and other languages ran out of computing power some time last year.
Also, this new program was written with a very simple structure so that
other people could easily add new data processing routines to it without
having to be Perl programming experts and so that it could be easily
translated into other programming languages, an effort which is actually
already underway.
If you do not like the program code then just remember this: Earthquakes
claim a reported 10,000 lives a year. On the average that is one life every
hour, 24 hours a day, 365 days a year. If this program has the potential to
reduce those casualty numbers then more could be accomplished by producing
better looking versions of the program than by moaning and groaning about
its present code style.
In a day or two after people reading this report have had a chance to
comment on the program I am planning to let Pascal, Ruby, Basic, C, and
Fortran programmers know about the existence of the program. And then when
the full version is ready for downloading I will be circulating information
regarding it to researchers, governments, and disaster mitigation groups
around the world.
FINAL COMMENTS
With many technologies which have some lifesaving potential, progress can be
slow because the work relies on expensive lab instruments, highly trained
personnel, and research programs which can take years to plan and develop.
That is in my opinion no longer the case for the science of earthquake
forecasting and hopefully to some extent tornado forecasting. From this
point on the limiting factors in the development of this particular type of
technology might be how quickly this computer program can be expanded, and
the amount of success that researchers have with interpreting and
understanding the data that it is generating and using that knowledge to add
more powerful data processing routines to the program.
When will deadly earthquakes become a thing of the past?
Throughout history we have waited for scientists (I am one myself) and other
researchers to provide us with an answer to that question. A door might now
have opened which will make it possible for computer programmers around the
world to answer the question.
------------------------------
Date: 14 Apr 2004 08:27:49 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: eval running external script
Message-Id: <c5ism5$a70$1@mamenchi.zrz.TU-Berlin.DE>
Shailesh <shailesh@nothing.but.net> wrote in comp.lang.perl.misc:
> In an eval statement, I am executing an external perl script, and I
> want to trap its errors. However, when I do this, the errors are not
> trapped in the $@ variable.
Of course not! Eval doesn't even know that the external script is
in Perl; it could be anything.
> Is there a better way to do this? Do I
> need to read in the perl script into a string, and then execute it?
You want to catch the external program's stderr (and probably stdout too).
See "perldoc perlipc" for how to do that. Also look at the module
IPC::Open2.
Reading the script into a string and executing it is what "do FILE"
does. That is another possibility.
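For the record, one way to catch the child's stderr together with its exit status is the shell's 2>&1 redirection through backticks; IPC::Open3 gives finer control if you need the streams separately. A minimal sketch (the inline one-liner just stands in for the external script):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run a child, merging its stderr into the captured output.
# The inline child here is a stand-in for the external script.
my $output = `perl -e 'warn "oops\\n"; exit 2' 2>&1`;
my $status = $? >> 8;    # child's exit code, per perlvar

print "child exited with $status, said: $output";
```

Note that $@ is never involved here; eval only traps exceptions raised in the current interpreter, not failures of a separate process.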
Anno
------------------------------
Date: Wed, 14 Apr 2004 11:15:31 -0400
From: James Willmore <jwillmore@remove.adelphia.net>
Subject: Re: eval running external script
Message-Id: <pan.2004.04.14.15.15.26.157656@remove.adelphia.net>
On Wed, 14 Apr 2004 03:02:08 +0000, Shailesh wrote:
> In an eval statement, I am executing an external perl script, and I
> want to trap its errors. However, when I do this, the errors are not
> trapped in the $@ variable. Is there a better way to do this? Do I
> need to read in the perl script into a string, and then execute it?
> The problem with that is I don't want any of the scripts' variables to
> be shared. My goal is for the task_scheduler to be simple, reliable,
> and run forever running tasks, while the run_task script may change
> from time to time. (Only one parameter needs to be passed to the
> run_task script via the command line--not shown below.) The exact
> code and output of my two scripts is below.
[ ... ]
Try reading ...
`perldoc -q 'How can I capture STDERR from an external command'`
HTH
--
Jim
Copyright notice: all code written by the author in this post is
released under the GPL. http://www.gnu.org/licenses/gpl.txt
for more information.
a fortune quote ...
Minnie Mouse is a slow maze learner.
------------------------------
Date: Wed, 14 Apr 2004 20:29:16 +1200
From: "Tintin" <me@privacy.net>
Subject: Re: fixed the bbs...
Message-Id: <c5isni$26bji$1@ID-172104.news.uni-berlin.de>
"Robin" <robin @ infusedlight.net> wrote in message
news:c5idnk$nl6$4@reader2.nmix.net...
> I think I figured out taint mode, so try to hack me now...
> http://www.infusedlight.net/cgi-bin/bbs.pl
> thanks in advance.
Are you sure you're not the Alaskan electrician in a new life?
------------------------------
Date: 14 Apr 2004 09:28:08 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: import() override - comments?
Message-Id: <c5j078$edg$1@mamenchi.zrz.TU-Berlin.DE>
Matthew Braid <mb@uq.net.au.invalid> wrote in comp.lang.perl.misc:
> Anno Siegel wrote:
>
> >>BEGIN {
> >> require Functions; # One of the library packages
> >> my $old_x = \&Functions::x;
> >> sub x {
> >> print "OVERRIDE!\n";
> >> $old_x->(@_);
> >
> >
> > This may be a place for "goto &$old_x". You'd avoid building up a new
> > stack frame for each override, and the caller remains the same.
>
> Oh definitely. I stripped this down from an original that did something before
> and after the call to $old_x. Quite a few of the real subs do end with a goto.
>
> >>However, it also means that use'ing a library package like:
> >>
> >>use Functions qw/x/;
> >>
> >>can be problematic and often loses overrides. At first I fixed this by using:
> >
> >
> > I don't see how. "use" loads the file only once. Since overriding
> > changes what &Functions::x points to, standard exportation should export
> > the overridden version of "x".
>
> I thought this too, but there does seem to be a difference in practice. I think
> it has to do with how the use'd functions are imported (which is why I changed
> the import function).
I get it now... it's just a matter of timing. Importing a function
copies a coderef to *whatever the function is at the moment* to the
importing package. Later changing the definition in the exporting
package won't change the behavior of the already exported function.
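A minimal sketch of that timing effect, with made-up package and sub names, doing by hand what Exporter's import effectively does (copying a code ref into the importing package):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Functions;
sub x { "original" }

package main;

# Import-style copy: alias main::x to whatever &Functions::x
# is *right now*.
*main::x = \&Functions::x;

# Later, override the sub in the exporting package.
{
    no warnings 'redefine';
    *Functions::x = sub { "overridden" };
}

# The fully qualified call sees the override; the earlier
# "import" still points at the code ref copied at import time.
print Functions::x(), "\n";
print main::x(), "\n";
```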
You can redefine import semantics to change that. A simpler way would
be to delay imports until all modifications have been done:
use T1; # no imports here. loading may actually not be necessary
# at this point.
# load behavior-modifying modules
use T2;
use T3_a_III;
# etc...
# now import modified functions
use T1 qw( t1 t2 s1 s2);
This would import the functions as modified by T1 and T3_a_III. If
that is a possibility, I'd go with that and leave Exporter alone.
[example snipped]
Anno
------------------------------
Date: 14 Apr 2004 09:17:14 -0700
From: gongwuming@hotmail.com (gongwuming@hotmail.com)
Subject: Is there any perl script for converting XML file to tab-delimited file ?
Message-Id: <8846e5eb.0404140817.7a40fbb3@posting.google.com>
Hello..
I want to load a very big XML file to MySQL database. Because I have
already written a script for loading a tab-delimited file to database
(specific column to specific field of table) before, I just need to
convert the XML file to tab-delimited format.
Is there a perl script for solving this kind of problem?
I tried to write one this afternoon, but I felt it is not a very easy
problem... Can someone give me some hints on it, and what XML::* module
should I use? (I used XML::SimpleObject, but I think it is a little
light-weight.)
Thanks in advance..
------------------------------
Date: Wed, 14 Apr 2004 17:15:19 GMT
From: "ZedGama3" <ZedGama3@nospam.com>
Subject: Re: Is there any perl script for converting XML file to tab-delimited file ?
Message-Id: <Hkefc.143710$JO3.84721@attbi_s04>
I believe XML::Parser is what you're looking for.
<gongwuming@hotmail.com> wrote in message
news:8846e5eb.0404140817.7a40fbb3@posting.google.com...
> Hello..
> I want to load a very big XML file to MySQL database. Because I have
> already written a script for loading a tab-delimited file to database
> (specific column to specific field of table) before, I just need to
> convert the XML file to tab-delimited format.
> Is there a perl script for solving this kind of problem?
> I tried to write one this afternoon, but i felt it is not a very easy
> problem...Can someone give a some hits on it and what XML::* module
> should I use? ( I used XML::SimpleObject, but I think it is a little
> light-weighted.)
> Thanks in advance..
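A rough sketch of that approach with XML::Parser (from CPAN), assuming a flat per-record layout; the element names and sample data here are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Parser;

# Made-up sample; a real script would parse the big file instead.
my $xml = <<'XML';
<records>
  <record><name>alpha</name><value>1</value></record>
  <record><name>beta</name><value>2</value></record>
</records>
XML

my (@rows, %row, $text);
my $parser = XML::Parser->new(Handlers => {
    Start => sub { $text = '' },              # reset text buffer
    Char  => sub { $text .= $_[1] },          # accumulate content
    End   => sub {
        my (undef, $tag) = @_;
        if ($tag eq 'name' or $tag eq 'value') {
            $row{$tag} = $text;               # finished a field
        }
        elsif ($tag eq 'record') {            # finished a record
            push @rows, join("\t", @row{qw(name value)});
            %row = ();
        }
    },
});
$parser->parse($xml);
print "$_\n" for @rows;
```

For a really big file, this event-driven style matters: the whole document never has to fit in memory, unlike tree-building modules such as XML::SimpleObject.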
------------------------------
Date: 14 Apr 2004 08:01:31 GMT
From: Fred Ma <fma@doe.carleton.ca>
Subject: Re: Many filename arguments: EOF in "while(@slurp=<>){..}"?
Message-Id: <407CEFD3.AA657DB9@doe.carleton.ca>
Tad McClellan wrote:
>
> >> > This was in order to understand how perl recognizes different
> >> > files so that it can rename the originals, the case of
> >> > "-i.bak".
#!/bin/perl -w
my @slurp;
while (@slurp=<>) {
printf "LINE: @slurp"
} continue {
printf "DOG\n" or die "-p destination: $!\n";
}
> >> Why a while() rather than the more usual foreach() ?
> >
> > Haven't gotten to foreach yet. I just wanted to understand at the
> > fundamental level the simple examples I encountered in the perldoc.
>
> But putting "@slurp=<>" in the while condition makes a simple
> example into a complicated example.
> The complications can get in the way of figuring out what is
> going on...
Yes. I can see that. However, it did force me to delve into the
intricacies of this thread, which I should be aware of if only to
avoid doing more damage than good.
> > I already have 4 xterms open to various pieces of perldoc, so I'm
> > taking a bite at a time.
>
> Have you read the "I/O Operators" section in perlop.pod yet? In
> particular, the part that starts with "The null filehandle <> is
> special".
No. I wish there was an index that tells you what perldoc sections to
look in. I wouldn't have guessed that I/O would be under "operators",
though I've figured to use the expression search of "less" to find
"I/O" in perltoc. Kind of tricky to find potential whitespace in that
expression, since the \+ doesn't seem to be recognized in our build of
less. I've been visiting www.perldoc.com, but there doesn't seem to
be an index there, at least I haven't rummaged into it yet. Searching
perltoc does the trick, though. Thanks a lot for pointing me to the
right section.
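Incidentally, the list-context behavior in the subject line can be sketched without any external files; an in-memory filehandle stands in for <>:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# In list context, readline drains the handle in one go, so a
# while (@slurp = <$fh>) loop body runs exactly once for all the
# input, then sees an empty list (false) and stops.
my $data = "one\ntwo\nthree\n";
open my $fh, '<', \$data or die "open: $!";

my $passes = 0;
while (my @slurp = <$fh>) {
    $passes++;
    print "got ", scalar(@slurp), " lines in one pass\n";
}
print "loop body ran $passes time(s)\n";
```

The loop condition is a list assignment, which in boolean context yields the number of elements read, hence one pass with three lines, then zero.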
Fred
--
Fred Ma
Dept. of Electronics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario
Canada, K1S 5B6
------------------------------
Date: Wed, 14 Apr 2004 08:48:44 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: Many filename arguments: EOF in "while(@slurp=<>){..}"?
Message-Id: <slrnc7qg9s.b2b.tadmc@magna.augustmail.com>
Fred Ma <fma@doe.carleton.ca> wrote:
> I wish there was an index that tells you what perldoc sections to
> look in.
I make my own index:
perldoc -l perlfunc # find out where the .pods are
cd .../pod # go to where the pods are
perl -ne 'print "$ARGV: $_" if /^=/' *.pod >all.heads
(or: grep ^= *.pod >all.heads)
Then I search it with grep:
grep SOMETHING all.heads
or with grep written in Perl:
perl -ne 'print if /SOMETHING/' all.heads
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: Wed, 14 Apr 2004 10:59:44 -0400
From: James Willmore <jwillmore@remove.adelphia.net>
Subject: Re: searching a structured text data base
Message-Id: <pan.2004.04.14.14.59.38.258802@remove.adelphia.net>
On Fri, 09 Apr 2004 10:50:02 -0400, Michael Friendly wrote:
> I have a LaTeX document composed of historical items with structured
> fields (on the history of data visualization,
> http://www.math.yorku.ca/SCS/Gallery/milestone/)
>
> I'd like to create a web-based facility to provide searching of these
> items. As a first step, I've written a perl script to translate the
> LaTeX stuff into various formats: tagged, CSV, HTML, XML. But I don't
> know how to choose a data format and appropriate software tools to
> accomplish this most easily.
>
> There's a bewildering array of perl modules for databases, XML, etc. but
> I'm not sure what would be most useful in this context. Can anyone help
> point me in useful directions? I'm doing this on a debian linux system,
> and a solution involving software other than perl is possible.
[ ... ]
You might want to think about parsing the documents and putting them into
a database instead of using the XML files *as* a database.
XML::Simple might work for you. However, I don't use XML that much (right
now, at least :-) ).
You could try looking over the various XML modules and see which might fit
the bill for you.
http://search.cpan.org/
HTH
--
Jim
Copyright notice: all code written by the author in this post is
released under the GPL. http://www.gnu.org/licenses/gpl.txt
for more information.
a fortune quote ...
"It's a summons." "What's a summons?" "It means summon's in
trouble." -- Rocky and Bullwinkle
------------------------------
Date: Wed, 14 Apr 2004 14:18:21 GMT
From: JC <jwcorpening@verizon.net>
Subject: Re: Where To Go For Classroom Training in CGI?
Message-Id: <NKbfc.52560$1y1.1255@nwrdny03.gnilink.net>
You are nit-picking, and you're wrong.
PERL = Practical Extraction & Report Language
Why do folks have a problem with others using PERL as the acronym that
it is? I understand that PERL has evolved to be used as a word, and,
probably for that reason, I've never seen a PERL person complain about
the use of Perl, but too many of the Perl folks NEED (not an acronym) to
complain about the use of PERL. In the same way that we may be in the
norm by writing Radar, Sonar, and Laser in other contexts, we are
justified in using RADAR, SONAR, and LASER. Lighten up.
-jc
Sherm Pendley wrote:
> I'd be wary of those. The proper name of the language is Perl, not PERL. It
> might seem like a trivial thing, and in general I don't bother nit-picking
> such things.
------------------------------
Date: Wed, 14 Apr 2004 14:29:09 GMT
From: "Jürgen Exner" <jurgenex@hotmail.com>
Subject: Re: Where To Go For Classroom Training in CGI?
Message-Id: <VUbfc.30179$hd3.1864@nwrddc03.gnilink.net>
[Please don't post jeopardy style]
JC wrote:
> You are nit-picking, and you're wrong.
> PERL = Practical Extraction & Report Language
You must have missed the FAQ "What's the difference between "perl" and
"Perl"?"
> Why do folks have a problem with others using PERL as the acronym that
> it is?
Maybe because it is not an acronym, see FAQ?
> I understand that PERL has evolved to be used as a word, and,
Actually it has never been an acronym, see FAQ.
> probably for that reason, I've never seen a PERL person complain about
> the use of Perl, but too many of the Perl folks NEED (not an acronym)
> to complain about the use of PERL. In the same way that we may be in
> the norm by writing Radar, Sonar, and Laser in other contexts, we are
> justified in using RADAR, SONAR, and LASER. Lighten up.
Which actually used to be acronyms and became words. Much the opposite with
"Perl".
jue
------------------------------
Date: Wed, 14 Apr 2004 10:50:09 -0400
From: Paul Lalli <ittyspam@yahoo.com>
Subject: Re: Where To Go For Classroom Training in CGI?
Message-Id: <20040414104735.N27629@dishwasher.cs.rpi.edu>
On Wed, 14 Apr 2004, JC wrote:
> You are nit-picking, and you're wrong.
> PERL = Practical Extraction & Report Language
That 'expansion' was created long after Larry Wall invented the language -
and name - "Perl". That is why PERL is incorrect. It's not an acronym.
Please read the FAQ:
perldoc -q difference
> Why do folks have a problem with others using PERL as the acronym that
> it is?
Because it's not.
> I understand that PERL has evolved to be used as a word
No, you don't understand. It's a word that has 'evolved' to be
occasionally used as an acronym.
> , and,
> probably for that reason, I've never seen a PERL person complain about
> the use of Perl, but too many of the Perl folks NEED (not an acronym) to
> complain about the use of PERL. In the same way that we may be in the
> norm by writing Radar, Sonar, and Laser in other contexts, we are
> justified in using RADAR, SONAR, and LASER. Lighten up.
We're perfectly light. You, however, are quite misinformed.
Paul Lalli
------------------------------
Date: Wed, 14 Apr 2004 11:09:06 -0400
From: Sherm Pendley <spamtrap@dot-app.org>
Subject: Re: Where To Go For Classroom Training in CGI?
Message-Id: <Ve2dnWRTz8-YyeDd4p2dnA@adelphia.com>
JC wrote:
> You are nit-picking
Yes, and like I said, I don't normally criticize ordinary beginners for
making such a simple mistake - but I *do* judge self-styled "experts"
according to the benchmark that they've set for themselves.
When a school that claims to teach something doesn't even know enough about
it to correctly spell the name of the subject, that's nothing short of
fraud. They're representing themselves as experts who are qualified to
teach a subject, when in fact they're ignorant of even the bare basics.
I would likewise criticise a school that claimed to teach "HTML Programming"
or "Deisel Mechanics." When someone makes such a simple, beginner-level
mistake, it calls into doubt the level of expertise they've claimed for
themselves.
> , and you're wrong.
I'm not, but it would be silly to argue with you about how to spell a word,
when there's a dictionary within arm's reach. Just look it up: perldoc -q
"What's the difference"
Found in /System/Library/Perl/5.8.1/pods/perlfaq1.pod
What's the difference between "perl" and "Perl"?
... snip ...
But never write "PERL",
because perl is not an acronym, apocryphal folklore and post-
facto expansions notwithstanding.
sherm--
--
Cocoa programming in Perl: http://camelbones.sourceforge.net
Hire me! My resume: http://www.dot-app.org
------------------------------
Date: Wed, 14 Apr 2004 11:08:53 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: Where To Go For Classroom Training in CGI?
Message-Id: <slrnc7qogl.bj3.tadmc@magna.augustmail.com>
Jürgen Exner <jurgenex@hotmail.com> wrote:
> [Please don't post jeopardy style]
> JC wrote:
>> You are nit-picking, and you're wrong.
>> PERL = Practical Extraction & Report Language
>
> You must have missed the FAQ "What's the difference between "perl" and
> "Perl"?"
I doubt that, as you yourself pointed him to it the last time
he made this same claim.
Message-ID: <myU%b.2284$C65.1352@nwrddc01.gnilink.net>
Many people in the Perl community object to "PERL".
A training that calls it "PERL" is either not in tune with the
community or doesn't care; they don't even know the secret handshake.
So, the use of "PERL" _does_ reveal useful info about the quality
of training that could be expected.
Whether PERL is right or wrong doesn't matter in the context that
it was used in upthread here; it still is a useful "indicator".
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: 14 Apr 2004 00:52:59 -0700
From: fxn@hashref.com (Xavier Noria)
Subject: Re: Why are arrays and hashes this way?
Message-Id: <31a13074.0404132352.59a7b546@posting.google.com>
Thank you very much for your response Xho!
Nevertheless both your post and Uri's seem to answer "why structures
cannot be nested in Perl 5". That's helpful, but it is not the real
question. The argument more or less goes: "Those semantics in Perl 5
are the most reasonable choice because otherwise how could you achieve
backwards compatibility?".
But since that's a consequence of history, to answer the question "why
structures cannot be nested today" we have in turn to answer why they
didn't nest in previous versions of Perl, and why they didn't nest from
the start, being first-class citizens and containers. I don't mean to
read Larry's mind, but maybe somebody around just knows it.
-- fxn
------------------------------
Date: 14 Apr 2004 08:23:35 -0800
From: yf110@vtn1.victoria.tc.ca (Malcolm Dew-Jones)
Subject: Re: Why are arrays and hashes this way?
Message-Id: <407d5777@news.victoria.tc.ca>
Xavier Noria (fxn@hashref.com) wrote:
: When I introduce references the first thing I mention is that they
: allow us to build nested structures. However, the importance of that
: feature is a consequence of the fact that structures cannot be nested
: themselves.
: Does anybody know why structures were designed so that they could just
: hold scalars?
It's conceptually simple, it's consistent, it allows for all the necessary
functionality of nested data structures, and the references themselves are
a general purpose mechanism that provides a lot more than just nested
structures.
Anything else would be more complicated in virtually every situation other
than doing deep copies of nested structures.
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 6401
***************************************