[23731] in Perl-Users-Digest
Perl-Users Digest, Issue: 5937 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sat Dec 13 21:05:39 2003
Date: Sat, 13 Dec 2003 18:05:05 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Sat, 13 Dec 2003 Volume: 10 Number: 5937
Today's topics:
Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
Re: Caching results (?) of lengthy cgi process <spamfilter@dot-app.org>
Re: Caching results (?) of lengthy cgi process <syscjm@gwu.edu>
Re: Caching results (?) of lengthy cgi process <flavell@ph.gla.ac.uk>
Re: Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
Re: LWP install MacOS X <spamfilter@dot-app.org>
Re: LWP install MacOS X <henryn@zzzspacebbs.com>
Re: Newsgroup Searching Program <asu1@c-o-r-n-e-l-l.edu>
Re: Newsgroup Searching Program <seawolf@attglobal.net>
Perl 5.8.0 foreach & glob BUG <recycle@bin.com>
Re: Perl 5.8.0 foreach & glob BUG <abigail@abigail.nl>
Re: Perl 5.8.0 foreach & glob BUG <krahnj@acm.org>
redo undefines loop variable (Chris Charley)
Re: redo undefines loop variable (Jay Tilton)
Re: Tutorials/examples for SAX2 with Perl <shiva.blacklist@sewingwitch.com>
Re: Tutorials/examples for SAX2 with Perl (Tad McClellan)
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Sat, 13 Dec 2003 19:48:07 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Caching results (?) of lengthy cgi process
Message-Id: <BC00ACF7.1943B%henryn@zzzspacebbs.com>
Folks:
I'd look this up in the Perl docs, if I could figure out what to call it.
Consider: Perl "results.cgi" generates a perfectly good page of juicy
results, except it takes much too long in human time to do it, over 30
seconds -- and it is likely to get longer, not shorter.
In fact, the results don't change very often, so the hard work only needs to
be done periodically, maybe every couple of weeks. So, most of the time,
there's no need to have the user wait for a full update.
I can see two obvious, possibly naïve, ways to minimize computation time for
most accesses to the results:
1) Arrange for "update_results.cgi" to run every few weeks and write
plain-old "results.html" which replaces "results.cgi", or
2) Write "update_results.cgi" to compute the data and put it in a safe place
(e.g. file "results.dat") and have "results.cgi" grab the stuff from there.
Is there a better way?
Is there any special terminology/methodology or Perl language
features/modules to support this kind of design?
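For concreteness, a minimal sketch of scheme 2 (the file name, the hash contents, and the stand-in for the slow work are all placeholder assumptions):

```perl
#!/usr/bin/perl
# Sketch of scheme 2: an updater writes the expensive results to a data
# file, and results.cgi just reads that file. Names here are made up.
use strict;
use warnings;
use Storable qw(store retrieve);

my $cache = "results.dat";

# update_results: does the slow work (stubbed out here) and saves it.
# Writing to a temp file and renaming means results.cgi never sees a
# half-written cache.
sub update_results {
    my %results = ( 37 => 'x', 292 => 'y' );   # stand-in for 30+ seconds of work
    store(\%results, "$cache.tmp");
    rename "$cache.tmp", $cache or die "rename: $!";
}

# results.cgi: the fast path -- load and display precomputed data.
sub load_results {
    return retrieve($cache);
}

update_results();
my $r = load_results();
print "Result 37 is $r->{37}\n";
```

The rename-into-place step is what lets results.cgi read the cache at any time without locking.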
Thanks,
Henry
henryn@zzzspacebbs.com remove 'zzz'
------------------------------
Date: Sat, 13 Dec 2003 21:13:34 GMT
From: Sherm Pendley <spamfilter@dot-app.org>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <2iLCb.737$xH2.577933@news1.news.adelphia.net>
Henry wrote:
> 1) Arrange for "update_results.cgi" to run every few weeks and write
> plain-old "results.html" which replaces "results.cgi", or
Why would you need to use a CGI at all, in this case? Couldn't you
simply schedule an ordinary script to run every few weeks? That would
save you the bother of having to write a client script and worry about
network time-outs.
sherm--
------------------------------
Date: Sat, 13 Dec 2003 17:16:38 -0500
From: Chris Mattern <syscjm@gwu.edu>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <3FDB8FC6.5010601@gwu.edu>
Henry wrote:
> Folks:
>
> I'd look this up in the Perl docs, if I could figure out what to call it.
>
> Consider: Perl "results.cgi" generates a perfectly good page of juicy
> results, except it takes much too long in human time to do it, over 30
> seconds -- and it is likely to get longer, not shorter.
>
> In fact, the results don't change very often, so the hard work only needs to
> be done periodically, maybe every couple of weeks. So, most of the time,
> there's no need to have the user wait for a full update.
>
> I can see two obvious, possibly naïve, ways to minimize computation time for
> most accesses to the results:
>
> 1) Arrange for "update_results.cgi" to run every few weeks and write
> plain-old "results.html" which replaces "results.cgi", or
Well, yes. Except why does update_results have to be cgi? Make it a
plain ol' script that runs out of your crontab.
>
> 2) Write "update_results.cgi" to compute the data and put it in a safe place
> (e.g. file "results.dat") and have "results.cgi" grab the stuff from there.
Don't run cgi for every page fetch if you don't need to. Serving static
pages is always better if you can do that.
>
> Is there a better way?
Er, why are you using cgi at all? The correct solution to your problem is
to serve up a static page that is periodically updated by a script in a
crontab.
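A minimal sketch of such a script (the crontab line and all paths are made-up examples, and the page body is a placeholder):

```perl
#!/usr/bin/perl
# update_results.pl -- an ordinary (non-CGI) script meant to run from
# cron, e.g. with a crontab entry like:
#   0 4 1,15 * *  /home/henry/bin/update_results.pl
# It writes a static results.html for the web server to serve as-is.
use strict;
use warnings;

my $page = "results.html";

open my $fh, '>', "$page.tmp" or die "open $page.tmp: $!";
print $fh "<html><body>\n";
print $fh "<p>Last updated: ", scalar localtime, "</p>\n";
print $fh "<p>(the expensive results would be computed and printed here)</p>\n";
print $fh "</body></html>\n";
close $fh or die "close: $!";

# rename() is atomic, so visitors never download a half-written page.
rename "$page.tmp", $page or die "rename: $!";
```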
Chris Mattern
------------------------------
Date: Sat, 13 Dec 2003 22:08:43 +0000
From: "Alan J. Flavell" <flavell@ph.gla.ac.uk>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <Pine.LNX.4.53.0312132204240.31846@ppepc56.ph.gla.ac.uk>
On Sat, 13 Dec 2003, Sherm Pendley wrote:
> Henry wrote:
>
> > 1) Arrange for "update_results.cgi" to run every few weeks and write
> > plain-old "results.html" which replaces "results.cgi", or
>
> Why would you need to use a CGI at all, in this case? Couldn't you
> simply schedule an ordinary script to run every few weeks?
A makefile with suitable dependency definitions would seem to be
the solution of choice for this kind of requirement, with "make"
being run independently of the web server, as you say (although I have
been known, in the case of remote web servers, to kick off the
maintenance task by poking an unadvertised CGI script on the remote
server, using a web browser or other client action from a machine close
to me).
ObPerl: there's more than one way to do it. ;-)
------------------------------
Date: Sat, 13 Dec 2003 23:39:57 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <BC00E347.1945D%henryn@zzzspacebbs.com>
Sherm (and Chris...who gives essentially the same response):
Thanks for your response on this thread:
in article 2iLCb.737$xH2.577933@news1.news.adelphia.net, Sherm Pendley at
spamfilter@dot-app.org wrote on 12/13/03 1:13 PM:
> Henry wrote:
>
>> 1) Arrange for "update_results.cgi" to run every few weeks and write
>> plain-old "results.html" which replaces "results.cgi", or
>
> Why would you need to use a CGI at all, in this case? Couldn't you
> simply schedule an ordinary script to run every few weeks? That would
> save you the bother of having to write a client script and worry about
> network time-outs.
Could do that.
My general approach is to keep as much interactivity as possible, even if
there's some extra cost.
In most basic terms this means having the time-since-last-update visible in
a base-page prompt: "It's been 'n' days since the last update, hit 'submit'
to re-synch." That is a bit different from an automagic, crontab-scheduled
update.
The re-synch process will certainly be visible, e.g. "checking result
'37'..." and may offer choices, "Old results for #292 are 'x', new results
are 'y' -- do you really want to update?" Of course, a log file would take
care of visibility in the case of a crontab-scheduled update. But not
interactive choices.
I'm imagining that it would also be good to be able to choose when to do the
updates from a compute load point of view. First, I must admit I sleep my
machine at night -- for which I've already taken a lot of static.
Anyway, it seems like a good option to allow me to start a long update just
before I go to lunch. Right -- I could most likely figure out a way to do
that with the crontab method, too; it's just a matter of a bit more interaction.
By the way, the results changes are completely out of my control and there's
no information at all about when or how they will change -- no history. It
may be that a passive update method will work just fine, or I may discover
that I really need to offer interactive choices. It seems sensible to prepare
for the worst.
Network timeouts are currently not a big issue; the server is my own
machine.
The cgi code I have prints to stdout and what I want appears in the browser.
How much more difficult could it be to print to a file that's served up
later?
This would seem to be an issue of "Why do in a complex environment what you
can do in a simple one?" Right? How many times have I, myself, given
_that_ warning! How many times have people listened?
Thanks,
Henry
henryn@zzzspacebbs.com remove 'zzz'
>
> sherm--
------------------------------
Date: Sat, 13 Dec 2003 21:00:57 GMT
From: Sherm Pendley <spamfilter@dot-app.org>
Subject: Re: LWP install MacOS X
Message-Id: <d6LCb.734$xH2.574284@news1.news.adelphia.net>
Henry wrote:
> impressive that you have taken this level of precaution.
It's necessary for what I do. I maintain a Cocoa/Perl bridge that links
against libperl. Since 5.6 & 5.8 aren't binary compatible, I need to
keep both systems available for testing purposes, and I have only one
machine.
To think, I switched to get away from having to dual-boot between
Windows & Linux... :-)
> "By hand..." Sure, no problem. I can do that, assuming there are few or no
> dependencies to resolve.
As far as I know, the CPAN.pm dependencies are "soft." That is, there
are some modules it *can* use to enhance its operation - i.e. verify
downloads with MD5 checksums, use Perl's "Archive::Tar" module instead
of making a system call to "tar", etc - but they're optional.
> Actually, it's a lot less intimidating to me than
> using the CPAN shell.
You can use the CPAN shell to manage the downloads only, if you'd prefer
to handle the build process yourself. For some modules this is a
requirement - for example, when installing DBD::mysql, parameters need
to be passed to "perl Makefile.PL" that tell it what user/password to
use for running its tests.
In the CPAN shell, running 'look Module::Name' will download the latest
version of the module, unpack it, and open up a subshell in the
directory where it was unpacked. You can then run the 'perl Makefile.PL;
make; make test; make install' sequence by hand.
Also, you can view a module's readme with 'readme Module::Name'.
> OK, so I'll ask explicitly: Given version 'n' of a particular module is
> installed on your system, and you've found new module 'm', how can one
> figure out the importance--necessity-- advisability-- safety--chances of
> success of installing the newer one?
Well, there are no guarantees. There's always a chance that you'll be
the first to discover a particular bug. On the other hand, a surprising
number of Perl's core developers are using Macs, so that's not very likely.
As far as the importance of upgrading goes, I tend to be conservative
and follow the "if it ain't broke, don't fix it" rule. I delay any and
all upgrades until they're truly necessary. For example, if what I'm
writing doesn't work on a client's server because they're using a newer
module, and the API has been changed, I'll install the newer version on
my machine.
The chances for installation success vary. Pure Perl modules rarely have
problems installing. Modules that provide wrappers around C libraries
can be problematic, or more accurately, compiling and installing the
libraries they wrap can be. Fink can help, but if you use it you'll
often have to pass additional parameters to Makefile.PL to help it find
the libraries you've installed - /sw/lib isn't one of the traditional
places to look for them.
If you're uncertain of success, and production scripts rely on a
particular module, you can always use the "PREFIX" option to Makefile.PL
to install the module in an out-of-the-way location, and test your
script against it by using 'use lib' to add that location to the module
search path.
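For example, assuming a hypothetical PREFIX of /some/test/area passed to Makefile.PL, the test script would start like this:

```perl
#!/usr/bin/perl
# Testing a script against a module installed under a private PREFIX.
# The directory below is a hypothetical example; in practice it matches
# whatever was given as:  perl Makefile.PL PREFIX=/some/test/area
use strict;
use warnings;
use lib '/some/test/area/lib/perl5/site_perl';   # searched before the defaults

# use Some::Module;   # would now be picked up from the private tree first

print "first search dir: $INC[0]\n";
```

Because `use lib` prepends to @INC, the private copy shadows any version installed in the standard locations.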
> Same question for the next dot revision of Perl itself.
I'm constantly amazed by the number of folks who claim to need the
latest version, but can't give a single reason beyond a vague notion
that it's "better" somehow.
The fact is, if you're just learning the language, 5.8 doesn't offer
much that you truly need. Better Unicode support is useful if you need
to process European or Asian text, and a number of standard modules have
been updated. But, hardly any changes were made to the core language,
and the changes that were made are fairly esoteric.
In short, if you're beginning to learn Perl now, then by the time you
actually need 5.8, you'll probably be running Panther anyway.
If you *do* find yourself installing a newer Perl, for whatever reason,
the one major, golden rule to follow is: Don't overwrite Apple's Perl.
Doing so leaves you no margin of safety if something goes wrong.
Configure the new one with a prefix of /opt, or /usr/local, or
/whatever, and you're golden. No matter how badly you mis-configure it -
and you will, the first few times, count on it - it won't cause any
fatal damage.
sherm--
------------------------------
Date: Sat, 13 Dec 2003 22:33:24 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: LWP install MacOS X
Message-Id: <BC00D3B1.1944E%henryn@zzzspacebbs.com>
Sherm:
Thanks for your email:
in article d6LCb.734$xH2.574284@news1.news.adelphia.net, Sherm Pendley at
spamfilter@dot-app.org wrote on 12/13/03 1:00 PM:
> Henry wrote:
>
>> impressive that you have taken this level of precaution.
>
> It's necessary for what I do. I maintain a Cocoa/Perl bridge that links
> against libperl. Since 5.6 & 5.8 aren't binary compatible, I need to
> keep both systems available for testing purposes, and I have only one
> machine.
I guess that would be CamelBones...
I'm pretty sure I don't understand the uses of this technology, but I'm an
embedded systems designer; we don't usually need high-level object-oriented
stuff. We're glad if we can get the bare machine to run some instructions.
>
> To think, I switched to get away from having to dual-boot between
> Windows & Linux... :-)
You got away from Win, which sounds like an advantage to me. The safest
execution mode for Win, I've found, is to never boot into it at all.
>
>> "By hand..." Sure, no problem. I can do that, assuming there are few or no
>> dependencies to resolve.
>
> As far as I know, the CPAN.pm dependencies are "soft." That is, there
> are some modules it *can* use to enhance its operation - i.e. verify
> downloads with MD5 checksums, use Perl's "Archive::Tar" module instead
> of making a system call to "tar", etc - but they're optional.
Right, that accounts for the rather indefinite language I've already seen
when I tried to get LWP.
I've managed to manually download and install the latest CPAN without any
real problem.
>
>> Actually, it's a lot less intimidating to me than
>> using the CPAN shell.
>
> You can use the CPAN shell to manage the downloads only, if you'd prefer
> to handle the build process yourself. For some modules this is a
> requirement - for example, when installing DBD::mysql, parameters need
> to be passed to "perl Makefile.PL" that tell it what user/password to
> use for running its tests.
OK.
I looked for a way to ask the CPAN shell to tell me what it _would_ do, like
the "make -n" option, without actually doing anything. Couldn't find
anything. Seems like that would be helpful.
>
> In the CPAN shell, running 'look Module::Name' will download the latest
> version of the module, unpack it, and open up a subshell in the
> directory where it was unpacked. You can then run the 'perl Makefile.PL;
> make; make test; make install' sequence by hand.
Aha! What a great idea!
I _did_ look at the CPAN shell help, but its short description "open
subshell in these dists' directories" at a glance seemed like it would open
a CPAN subshell, not a unix subshell -- and that didn't seem very
interesting.
>
> Also, you can view a module's readme with 'readme Module::Name'.
Thanks. I have been keeping cpan.org open and looking at the full
descriptions there.
>
>> OK, so I'll ask explicitly: Given version 'n' of a particular module is
>> installed on your system, and you've found new module 'm', how can one
>> figure out the importance--necessity-- advisability-- safety--chances of
>> success of installing the newer one?
>
> Well, there are no guarantees. There's always a chance that you'll be
> the first to discover a particular bug. On the other hand, a surprising
> number of Perl's core developers are using Macs, so that's not very likely.
Sounds like a good reason to pick fully released versions, and to avoid
development releases.
I'm known for my capability to find obscure bugs, so I need to be especially
careful.
>
> As far as the importance of upgrading goes, I tend to be conservative
> and follow the "if it ain't broke, don't fix it" rule. I delay any and
> all upgrades until they're truly necessary. For example, if what I'm
> writing doesn't work on a client's server because they're using a newer
> module, and the API has been changed, I'll install the newer version on
> my machine.
Right. Similar reason: I'm delaying upgrading to Panther.
>
> The chances for installation success vary. Pure Perl modules rarely have
> problems installing. Modules that provide wrappers around C libraries
> can be problematic, or more accurately, compiling and installing the
> libraries they wrap can be. Fink can help, but if you use it you'll
> often have to pass additional parameters to Makefile.PL to help it find
> the libraries you've installed - /sw/lib isn't one of the traditional
> places to look for them.
I think I'll stick with the mainline, standard modules as much as possible.
> If you're uncertain of success, and production scripts rely on a
> particular module, you can always use the "PREFIX" option to Makefile.PL
> to install the module in an out-of-the-way location, and test your
> script against it by using 'use lib' to add that location to the module
> search path.
I started to do that, then forgot.
But what I'm working with right now is pretty reliable -- I have to believe
that updating the CPAN shell and adding Net::FTP, LWP, and the like are
pretty robust operations.
If I feel the need to try 'sleazebag.pm' I'll definitely install it as far
away as possible from the standard stuff. (Maybe on one of my NT boxes.)
>
>> Same question for the next dot revision of Perl itself.
>
> I'm constantly amazed by the number of folks who claim to need the
> latest version, but can't give a single reason beyond a vague notion
> that it's "better" somehow.
If we're talking about, say, MacOS, I think there's definitely a fall-off in
support for version 'n' when version 'n+1' is in general release. It's
also true, for commercial packages, that a marketing department's major task
is to make you feel at least vaguely uneasy about using an older version.
These motivations might be less significant for a product like Perl. For
one --as far as I know-- no marketing department.
>
> The fact is, if you're just learning the language, 5.8 doesn't offer
> much that you truly need. Better Unicode support is useful if you need
> to process European or Asian text, and a number of standard modules have
> been updated. But, hardly any changes were made to the core language,
> and the changes that were made are fairly esoteric.
'Nuf said; I did the right thing to stay with the existing 5.6.
>
> In short, if you're beginning to learn Perl now, then by the time you
> actually need 5.8, you'll probably be running Panther anyway.
Right.
>
> If you *do* find yourself installing a newer Perl, for whatever reason,
> the one major, golden rule to follow is: Don't overwrite Apple's Perl.
> Doing so leaves you no margin of safety if something goes wrong.
> Configure the new one with a prefix of /opt, or /usr/local, or
> /whatever, and you're golden. No matter how badly you mis-configure it -
> and you will, the first few times, count on it - it won't cause any
> fatal damage.
You're preaching to the choir. I live in fear of losing something that
works tolerably well and spending days trying to recover that functionality.
I have a bad feeling that messing with the distributed version would lead
inevitably to a complete re-install of the OS. --Maybe that's a carry-over
from working in NT, where even minor glitches routinely require re-installs.
To bring you up to date: I've got LWP installed and I can at least do "use
LWP" without error. I guess I won't need to do the wget hack after all!
Thanks for your handholding!
Thanks,
Henry
henryn@zzzspacebbs.com remove 'zzz'
>
> sherm--
------------------------------
Date: 13 Dec 2003 21:38:17 GMT
From: "A. Sinan Unur" <asu1@c-o-r-n-e-l-l.edu>
Subject: Re: Newsgroup Searching Program
Message-Id: <Xns9450A940AACBDasu1cornelledu@132.236.56.8>
Les Hazelton <seawolf@attglobal.net> wrote in
news:pan.2003.12.13.17.23.49.63381@attglobal.net:
> On Sat, 13 Dec 2003 16:11:23 +0000, A. Sinan Unur wrote:
...
>> I am going to suggest changing the line above to:
>>
>> or die "Can't connect to server $SERVER: ", $!, $@;
>>
>> just in case there is a better error message in $@ coming from the
>> IO::Socket module.
...
> Did as you suggested, with the following results:
...
> lrh@Farpoint:~> ./read-news-new.pl
> Can't connect to server inetnews.worldnet.att.net: [Bad file
> descriptor]-[] lrh@Farpoint:~>
OK, it was a shot in the dark. Sorry.
Sinan.
--
A. Sinan Unur
asu1@c-o-r-n-e-l-l.edu
Remove dashes for address
Spam bait: mailto:uce@ftc.gov
------------------------------
Date: Sat, 13 Dec 2003 22:01:57 GMT
From: Les Hazelton <seawolf@attglobal.net>
Subject: Re: Newsgroup Searching Program
Message-Id: <pan.2003.12.13.22.01.46.829961@attglobal.net>
On Sat, 13 Dec 2003 21:38:17 +0000, A. Sinan Unur wrote:
> Les Hazelton <seawolf@attglobal.net> wrote in
> news:pan.2003.12.13.17.23.49.63381@attglobal.net:
>
>> On Sat, 13 Dec 2003 16:11:23 +0000, A. Sinan Unur wrote:
> ...
>>> I am going to suggest changing the line above to:
>>>
>>> or die "Can't connect to server $SERVER: ", $!, $@;
>>>
>>> just in case there is a better error message in $@ coming from the
>>> IO::Socket module.
> ...
>> Did as you suggested, with the following results:
> ...
>> lrh@Farpoint:~> ./read-news-new.pl
>> Can't connect to server inetnews.worldnet.att.net: [Bad file
>> descriptor]-[] lrh@Farpoint:~>
>
> OK, it was a shot in the dark. Sorry.
>
> Sinan.
No need to be sorry. I *need* and appreciate all the help I can get. My
Perl skills are not yet of great price :)
--
Les Hazelton
--- Registered Linux user # 272996 ---
With all the fancy scientists in the world, why can't they just once
build a nuclear balm?
------------------------------
Date: Sun, 14 Dec 2003 02:35:07 +0100
From: "Craig Manley" <recycle@bin.com>
Subject: Perl 5.8.0 foreach & glob BUG
Message-Id: <3fdbbe4b$0$41998$e4fe514c@dreader10.news.xs4all.nl>
Hi all,
Please check out the code below, which shows that when I iterate over an array
of values for use in a glob() function, glob() returns an empty string
for every odd array index. This is really weird. I don't see how 'foreach'
and 'glob' can be connected here in any way. In all my years of Perl
programming, I've never come across a single Perl bug, so even though this
really does seem like a bug, perhaps I'm overlooking something. Perl version
is 5.8.0.
-bash-2.05b$ perl
use strict;
foreach my $value (1,2,3,4,5,6,7) {
my $filename = glob("~/var/util/import/cdr/$value.csv");
print "Value: $value\tFilename: $filename\n";
}
#### ctrl - D pressed here ###
Value: 1 Filename:
/var/www/html/vhosts/mijnbel/.//var/util/import/cdr/1.csv
Value: 2 Filename:
Value: 3 Filename:
/var/www/html/vhosts/mijnbel/.//var/util/import/cdr/3.csv
Value: 4 Filename:
Value: 5 Filename:
/var/www/html/vhosts/mijnbel/.//var/util/import/cdr/5.csv
Value: 6 Filename:
Value: 7 Filename:
/var/www/html/vhosts/mijnbel/.//var/util/import/cdr/7.csv
-bash-2.05b$
-Craig Manley
------------------------------
Date: 14 Dec 2003 01:52:20 GMT
From: Abigail <abigail@abigail.nl>
Subject: Re: Perl 5.8.0 foreach & glob BUG
Message-Id: <slrnbtngik.les.abigail@alexandra.abigail.nl>
Craig Manley (recycle@bin.com) wrote on MMMDCCLVII September MCMXCIII in
<URL:news:3fdbbe4b$0$41998$e4fe514c@dreader10.news.xs4all.nl>:
`` Hi all,
``
`` Please check out the code below, which shows that when I iterate over an array
`` of values for use in a glob() function, glob() returns an empty string
`` for every odd array index. This is really weird. I don't see how 'foreach'
`` and 'glob' can be connected here in any way. In all my years of Perl
`` programming, I've never come across a single Perl bug, so even though this
`` really does seem like a bug, perhaps I'm overlooking something. Perl version
`` is 5.8.0.
Documented in 'perlop' ('perldoc -f glob' points to it):
A (file)glob evaluates its (embedded) argument only when
it is starting a new list. All values must be read before
it will start over. In list context, this isn't important
because you automatically get them all anyway. However,
in scalar context the operator returns the next value each
time it's called, or "undef" when the list has run out.
As with filehandle reads, an automatic "defined" is generated
when the glob occurs in the test part of a "while",
because legal glob returns (e.g. a file called 0) would
otherwise terminate the loop. Again, "undef" is returned
only once. So if you're expecting a single value from a
glob, it is much better to say
($file) = <blurch*>;
than
$file = <blurch*>;
because the latter will alternate between returning a
filename and returning false.
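A self-contained demonstration of that difference (the temp directory and files exist only so the pattern actually matches something):

```perl
#!/usr/bin/perl
# Scalar-context glob keeps an iterator per call site, so successive
# loop passes alternate between a filename and undef; list context
# evaluates the pattern fresh and fully each time.
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);
for my $n (1 .. 3) {
    open my $fh, '>', "$dir/$n.csv" or die "open: $!";
    close $fh;
}

for my $n (1 .. 3) {
    my $scalar_ctx = glob("$dir/$n.csv");    # alternates: file, undef, file
    my ($list_ctx) = glob("$dir/$n.csv");    # always the matching file
    printf "n=%d scalar=%s list=%s\n",
        $n, defined $scalar_ctx ? $scalar_ctx : 'undef', $list_ctx;
}
```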
Abigail
--
perl -weprint\<\<EOT\; -eJust -eanother -ePerl -eHacker -eEOT
------------------------------
Date: Sun, 14 Dec 2003 01:57:34 GMT
From: "John W. Krahn" <krahnj@acm.org>
Subject: Re: Perl 5.8.0 foreach & glob BUG
Message-Id: <3FDBC384.5283D8CC@acm.org>
Craig Manley wrote:
>
> Please check out the code below, which shows that when I iterate over an array
> of values for use in a glob() function, glob() returns an empty string
> for every odd array index. This is really weird. I don't see how 'foreach'
> and 'glob' can be connected here in any way. In all my years of Perl
> programming, I've never come across a single Perl bug, so even though this
> really does seem like a bug, perhaps I'm overlooking something. Perl version
> is 5.8.0.
>
> -bash-2.05b$ perl
> use strict;
> foreach my $value (1,2,3,4,5,6,7) {
> my $filename = glob("~/var/util/import/cdr/$value.csv");
> print "Value: $value\tFilename: $filename\n";
> }
> #### ctrl - D pressed here ###
> Value: 1 Filename:
> /var/www/html/vhosts/mijnbel/.//var/util/import/cdr/1.csv
> Value: 2 Filename:
> Value: 3 Filename:
> /var/www/html/vhosts/mijnbel/.//var/util/import/cdr/3.csv
> Value: 4 Filename:
> Value: 5 Filename:
> /var/www/html/vhosts/mijnbel/.//var/util/import/cdr/5.csv
> Value: 6 Filename:
> Value: 7 Filename:
> /var/www/html/vhosts/mijnbel/.//var/util/import/cdr/7.csv
> -bash-2.05b$
That is not a bug, it is the defined behaviour for using glob in scalar context.
perldoc perlop
[snip]
A (file)glob evaluates its (embedded) argument only when
it is starting a new list. All values must be read before
it will start over. In list context, this isn't important
because you automatically get them all anyway. However,
in scalar context the operator returns the next value each
time it's called, or `undef' when the list has run out. As with filehandle reads,
an automatic `defined' is generated when the glob occurs
in the test part of a `while', because legal glob returns
(e.g. a file called 0) would otherwise terminate the loop.
Again, `undef' is returned only once. So if you're
expecting a single value from a glob, it is much better to
say
($file) = <blurch*>;
than
$file = <blurch*>;
because the latter will alternate between returning a
filename and returning false.
John
--
use Perl;
program
fulfillment
------------------------------
Date: 13 Dec 2003 15:56:11 -0800
From: charley@pulsenet.com (Chris Charley)
Subject: redo undefines loop variable
Message-Id: <4f7ed6d.0312131556.359125b5@posting.google.com>
A small piece of code (below) demonstrates the bug.
If the code ran without the bug, it would be an infinite loop.
Instead, the bug produces warnings about the use of an uninitialized value...
#####################################################################
################### code #######################################
#!/usr/bin/perl
use strict;
use warnings;
while (my $data = <DATA>) {
print $data;
redo if $data == 2;
}
__DATA__
1
2
3
4
################### end code ##################################
#####################################################################
Note: I also found mention of this bug in a Google search in Groups,
(with demonstration of the bug).
############################################
From: Bart Lateur (bart.lateur@pandora.be)
Subject: redo and scope bug
Newsgroups: comp.lang.perl.misc
Date: 2002-01-25 03:49:38 PST
############################################
I tried to use the perlbug program to submit the report, but I need
Mail::Send and can't find it in any of Active State's repositories.
Chris
------------------------------
Date: Sun, 14 Dec 2003 00:28:53 GMT
From: tiltonj@erols.com (Jay Tilton)
Subject: Re: redo undefines loop variable
Message-Id: <3fdbab97.588960710@news.erols.com>
charley@pulsenet.com (Chris Charley) wrote:
: A small piece of code (below) to demonstrate the bug.
: If the code would run without a bug, it would be an infinite loop.
: But the bug produces warnings about the use of an uninitialized value. . .
:
: #!/usr/bin/perl
: use strict;
: use warnings;
:
: while (my $data = <DATA>) {
: print $data;
: redo if $data == 2;
: }
: __DATA__
: 1
: 2
: 3
: 4
As a workaround, you could add another set of curlies to create an interior
block that the redo() will re-execute without clobbering $data.
while (my $data = <DATA>) {{
print $data;
redo if $data == 2;
}}
It probably deserves an explanatory comment so future maintainers of your
code will know what's going on and won't bork it up.
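A self-contained version of the workaround, with the data inlined instead of read from DATA, and a counter added so the demo terminates instead of looping forever on "2":

```perl
#!/usr/bin/perl
# The redo restarts the inner bare block, not the while loop, so $data
# keeps its value across redos.
use strict;
use warnings;

my @input = ("1\n", "2\n", "3\n");
my $redos = 0;
my @seen;

while (defined(my $data = shift @input)) {{
    push @seen, $data;
    redo if $data == 2 && $redos++ < 2;   # $data is still defined here
}}

print "saw ", scalar @seen, " lines\n";   # "2" was processed three times
```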
: Note: I also found mention of this bug in a Google search in Groups,
: (with demonstration of the bug).
Nice detective work.
: ############################################
: From: Bart Lateur (bart.lateur@pandora.be)
: Subject: redo and scope bug
: Newsgroups: comp.lang.perl.misc
: Date: 2002-01-25 03:49:38 PST
: ############################################
The article's Message-ID header is a more convenient way to cite a specific
article or thread. That one is:
Message-ID: <e6h25u8bq8uea8mq0nbebrk16v74nmhbl0@4ax.com>
: I tried to use the perlbug program to submit the report, but I need
: Mail::Send and can't find it in any of Active State's repositories.
You might first check the bug tracker at http://rt.perl.org/perlbug/ to see
if it was submitted back then and what its disposition was.
------------------------------
Date: Sat, 13 Dec 2003 13:18:43 -0600
From: Kenneth Porter <shiva.blacklist@sewingwitch.com>
Subject: Re: Tutorials/examples for SAX2 with Perl
Message-Id: <Xns94507312B65D5shivawellcom@216.196.97.136>
Kenneth Porter <shiva.blacklist@sewingwitch.com> wrote in
news:Xns94505F917A932shivawellcom@216.196.97.136:
> Can anyone direct me to some examples that show XML::SAX in action?
Doh. Right after posting that I found some good stuff in the perldoc pages
for the modules. Being relatively new to Perl, I'm not yet used to having
useful documentation. ;)
------------------------------
Date: Sat, 13 Dec 2003 15:53:14 -0600
From: tadmc@augustmail.com (Tad McClellan)
Subject: Re: Tutorials/examples for SAX2 with Perl
Message-Id: <slrnbtn2ia.1qi.tadmc@magna.augustmail.com>
Kenneth Porter <shiva.blacklist@sewingwitch.com> wrote:
> Kenneth Porter <shiva.blacklist@sewingwitch.com> wrote in
> news:Xns94505F917A932shivawellcom@216.196.97.136:
>
>> Can anyone direct me to some examples that show XML::SAX in action?
>
> Doh. Right after posting that I found some good stuff in the perldoc pages
> for the modules. Being relatively new to Perl, I'm not yet used to having
> useful documentation. ;)
There is also a mailing list for doing XML stuff using Perl:
http://lists.perl.org/showlist.cgi?name=perl-xml
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc. For subscription or unsubscription requests, send
the single line:
subscribe perl-users
or:
unsubscribe perl-users
to almanac@ruby.oce.orst.edu.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.
For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 5937
***************************************