[30253] in Perl-Users-Digest
Perl-Users Digest, Issue: 1496 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Thu May 1 00:09:43 2008
Date: Wed, 30 Apr 2008 21:09:07 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Wed, 30 Apr 2008 Volume: 11 Number: 1496
Today's topics:
Re: cperl-mode.el (was: Re: FAQ 3.12 Where can I get pe <nospam-abuse@ilyaz.org>
Re: cperl-mode.el (Ben Bullock)
Re: Devel::SmallProf claims "return 1" needs much time xhoster@gmail.com
Re: Devel::SmallProf claims "return 1" needs much time <w.c.humann@arcor.de>
Re: Devel::SmallProf claims "return 1" needs much time xhoster@gmail.com
File Help Please <Ramroop@gmail.com>
Re: File Help Please <1usa@llenroc.ude.invalid>
Re: File Help Please <noreply@gunnar.cc>
Frequency in large datasets <XXjbhuntxx@white-star.com>
Re: Frequency in large datasets <noreply@gunnar.cc>
Re: Frequency in large datasets <1usa@llenroc.ude.invalid>
Re: Frequency in large datasets xhoster@gmail.com
Re: Frequency in large datasets xhoster@gmail.com
Re: Frequency in large datasets <XXjbhuntxx@white-star.com>
Re: Frequency in large datasets <jurgenex@hotmail.com>
Re: pop langs website ranking <xahlee@gmail.com>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Wed, 30 Apr 2008 20:49:24 +0000 (UTC)
From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: cperl-mode.el (was: Re: FAQ 3.12 Where can I get perl-mode for emacs?)
Message-Id: <fvam0k$1fft$1@agate.berkeley.edu>
[A complimentary Cc of this posting was sent to
Ben Bullock
<benkasminbullock@gmail.com>], who wrote in article <fv8e6n$rs2$1@ml.accsnet.ne.jp>:
> PerlFAQ Server <brian@stonehenge.com> wrote:
> >
> > In the Perl source directory, you'll find a directory called "emacs",
> > which contains a cperl-mode that color-codes keywords, provides
> > context-sensitive help, and other nifty things.
>
> This advice is outdated, because 'cperl-mode.el' is now part of the
> Emacs distribution, so it's not necessary to look in the Perl source
> directory.
??? Emacs distribution contains a broken, unsupported version of
cperl-mode.el. The source directory contains a working (but
somewhat outdated) supported version.
Enough said,
Ilya
------------------------------
Date: Thu, 1 May 2008 00:30:48 +0000 (UTC)
From: benkasminbullock@gmail.com (Ben Bullock)
Subject: Re: cperl-mode.el
Message-Id: <fvb2vo$hl3$1@ml.accsnet.ne.jp>
Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:
>> PerlFAQ Server <brian@stonehenge.com> wrote:
>> >
>> > In the Perl source directory, you'll find a directory called "emacs",
>> > which contains a cperl-mode that color-codes keywords, provides
>> > context-sensitive help, and other nifty things.
>>
>> This advice is outdated, because 'cperl-mode.el' is now part of the
>> Emacs distribution, so it's not necessary to look in the Perl source
>> directory.
>
> ??? Emacs distribution contains a broken, unsupported version of
> cperl-mode.el. The source directory contains a working (but
> somewhat outdated) supported version.
The current Emacs distribution (22.1) contains a version with a
copyright of 2007, which works very well - it isn't broken. I remember
that cperl mode was broken in the previous Emacs version, and the
Emacs distribution contained an old version which had to be
replaced. The 5.10.0 Perl sources contain a version of cperl-mode.el
with a copyright of 2006. The version numbers tell a different story,
with the one in the Emacs source having a version number of 5.22 and
the one in the Perl source having a version number of 5.23. A diff
produces a large number of hunks which indicate that a lot of
comments have been removed from the Emacs source tree version of
cperl-mode.el, and the output of wc (the Unix line / word / byte
count command) tells a similar story:
$ wc *cperl*
2535 14198 105876 cperl-mode-diff
9041 36268 331296 emacs-source-cperl-mode.el
10441 44057 393631 perlsource-cperl-mode.el
22017 94523 830803 total
What it looks like is a fork. But the above comment about
cperl-mode.el in the Emacs tree being out of date and broken is
itself now out of date.
> Enough said,
Maybe not.
------------------------------
Date: 30 Apr 2008 20:27:20 GMT
From: xhoster@gmail.com
Subject: Re: Devel::SmallProf claims "return 1" needs much time !?
Message-Id: <20080430162722.890$ux@newsreader.com>
Wolfram Humann <w.c.humann@arcor.de> wrote:
> On Apr 30, 6:38 pm, xhos...@gmail.com wrote:
> >
> > This just seems weird.  My 3GHz machine does an if defined test 32 times
> > faster, so unless you have an old computer I would say that this casts
> > doubt on the entire reliability of the SmallProf output.
>
> I think you got something wrong here. The profiler runs on the compiled
> and optimized code, where several source lines may have become one. If
> you look at the "count" column, you will see that line 110 includes
> the time for seek() in the next line.
While I can't rule that out, I've never seen that done in this setting.
I've seen the condition of an elsif reported as if it were the condition of
the preceding if, but I've never seen that type of conflation happen
between the if condition and if block (except when the block contents are
on the same line as the condition). And I can't reproduce it with test
cases.
I assumed the behavior of the count column was simply because $loc is never
defined in the use-case you used, and therefore line 111 is never executed.
But I could be wrong.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Wed, 30 Apr 2008 13:40:27 -0700 (PDT)
From: Wolfram Humann <w.c.humann@arcor.de>
Subject: Re: Devel::SmallProf claims "return 1" needs much time !?
Message-Id: <998a05ac-3403-4fbe-98dc-e8981f20c33a@l42g2000hsc.googlegroups.com>
On Apr 30, 10:27 pm, xhos...@gmail.com wrote:
>
> I assumed the behavior of the count column was simply because $loc is never
> defined in the use-case you used, and therefore line 111 is never executed.
> But I could be wrong.
Well, I'm also not *that* sure about my claim; I'd better check
that :-)
Wolfram
------------------------------
Date: 30 Apr 2008 21:52:32 GMT
From: xhoster@gmail.com
Subject: Re: Devel::SmallProf claims "return 1" needs much time !?
Message-Id: <20080430175234.699$eB@newsreader.com>
Wolfram Humann <w.c.humann@arcor.de> wrote:
>
> Every script that imports heavily in DBM::Deep will do. Mine looks
> like this:
>
> BEGIN
> {
> $DB::profile = 0;
> %DB::packages = ( 'DBM::Deep::Engine' => 1,
> 'DBM::Deep::Engine::Sector' => 1, 'DBM::Deep::File' => 1 );
> }
I'm not sure, but I think that by restricting your packages that way, all
the time spent in a non-monitored package will get attributed to the
most recently executed statement which is in one of the monitored packages.
That statement is likely to be a "return".
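If that's what's happening, one quick experiment would be to widen the
monitored set so the callers' time lands on their own lines; something
like this (whether DBM::Deep itself is the right package to add is a
guess):

```perl
# Hypothetical widening of the monitored-package list: also monitor
# the top-level DBM::Deep package, so time spent there is attributed
# to its own statements rather than to the last "return" executed in
# one of the monitored sub-packages.
BEGIN
{
    $DB::profile  = 0;
    %DB::packages = ( 'DBM::Deep'                 => 1,
                      'DBM::Deep::Engine'         => 1,
                      'DBM::Deep::Engine::Sector' => 1,
                      'DBM::Deep::File'           => 1 );
}
```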
I tried profiling your program, but the profiled code looked nothing like
yours. I realized I have an old DBM::Deep. I installed the current
version in a test directory, and holy cow is it slow compared to the old
version. If it ever finishes, I'll see what the profile looks like.
It looks like DBM::Deep is trying to change from a module that ties
hashes to disk with as few differences as possible (behavior-wise) from
a regular Perl hash into a full-fledged ACID database. I think that is
unfortunate. Perhaps a code fork is in order.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Wed, 30 Apr 2008 14:15:17 -0700 (PDT)
From: Andy <Ramroop@gmail.com>
Subject: File Help Please
Message-Id: <d8bedb0c-5a45-4694-8ea3-6ef4dfdb6ddf@w7g2000hsa.googlegroups.com>
Hiya Guys
Well I am new, and still trying to learn perl...while at work on 10
different things....sheesh is there ever enough time to learn
something..
Needless to say
I need to accomplish the following.
Data Below
155073040~06/04/1998
155073040~04/28/1998
155073040~04/29/1998
255256040~04/29/1998
255293040~05/27/1999
255322040~12/09/1999
55322040~12/08/1999
755379040~04/30/1998
755383040~04/30/1998
755412040~01/19/1999
755612040~04/19/2000
755633040~04/26/1999
755763040~06/04/1998
This is an example file that gets updated.
Basically I need to be able to pull the latest data. For instance
155073040~06/04/1998
155073040~04/28/1998
155073040~04/29/1998
has the same ID number on three lines. If the IDs are the same, I need
to pull only the latest date, so from the lines above I would get just
155073040~06/04/1998
Any help would be appreciated.
------------------------------
Date: Wed, 30 Apr 2008 22:37:30 GMT
From: "A. Sinan Unur" <1usa@llenroc.ude.invalid>
Subject: Re: File Help Please
Message-Id: <Xns9A90BD75BF9A8asu1cornelledu@127.0.0.1>
Andy <Ramroop@gmail.com> wrote in news:d8bedb0c-5a45-4694-8ea3-
6ef4dfdb6ddf@w7g2000hsa.googlegroups.com:
> Well I am new, and still trying to learn perl...while at work on 10
> different things....sheesh is there ever enough time to learn
> something..
Still, not much of an excuse not to have tried anything. Please read the
posting guidelines for this group before you post again.
...
<Data snipped here for brevity>
...
> for instance
>
> 155073040~06/04/1998
> 155073040~04/28/1998
> 155073040~04/29/1998
>
> Has 3 Id Numbers for the same data.
>
> If Id's are the same Pull Latest Data?
As you read the identifiers, separate the date from the data set id. Use
a hash keyed by the data set id to store an array of dates. Sort the
dates.
There are many other ways of doing this.
#!/usr/bin/perl
use strict;
use warnings;
my %dataset;

while ( my $id = <DATA> ) {
    $id =~ s/^\s+//;
    $id =~ s/\s+$//;
    last unless length $id;
    my ($set, $date) = split /~/, $id;
    my ($m, $d, $y) = split '/', $date;
    push @{ $dataset{$set} }, "$y/$m/$d";
}

print "Sets / dates (sorted by set)\n";

for my $set ( sort keys %dataset ) {
    my @dates = sort { $b cmp $a } @{ $dataset{$set} };
    $dataset{$set} = [ @dates ];
    my $most_recent = shift @dates;
    print(
        join("\t", $set, $most_recent),
        ' ( ', join(',', @dates), ' ) ',
        "\n",
    );
}

my @sorted_sets = sort {
    $dataset{$b}->[0] cmp $dataset{$a}->[0]
} keys %dataset;

print "Sets / dates (sorted by date of set)\n";

for my $set ( @sorted_sets ) {
    my $most_recent = $dataset{$set}->[0];
    print "$set\t$most_recent\n";
}
__DATA__
155073040~06/04/1998
155073040~04/28/1998
155073040~04/29/1998
255256040~04/29/1998
255293040~05/27/1999
255322040~12/09/1999
55322040~12/08/1999
755379040~04/30/1998
755383040~04/30/1998
755412040~01/19/1999
755612040~04/19/2000
755633040~04/26/1999
755763040~06/04/1998
--
A. Sinan Unur <1usa@llenroc.ude.invalid>
(remove .invalid and reverse each component for email address)
comp.lang.perl.misc guidelines on the WWW:
http://www.rehabitation.com/clpmisc/
------------------------------
Date: Thu, 01 May 2008 04:12:57 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: File Help Please
Message-Id: <67sn9lF2qqlmmU1@mid.individual.net>
Andy wrote:
> Hiya Guys
>
> Well I am new, and still trying to learn perl...
Do not multi-post!!
http://www.mail-archive.com/beginners%40perl.org/msg93766.html
--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl
------------------------------
Date: Thu, 01 May 2008 02:15:51 GMT
From: Cosmic Cruizer <XXjbhuntxx@white-star.com>
Subject: Frequency in large datasets
Message-Id: <Xns9A90C3D86EFCEccruizermydejacom@207.115.17.102>
I've been able to reduce my dataset by 75%, but it still leaves me with a
file of 47 gigs. I'm trying to find the frequency of each line using:
open(TEMP, "< $tempfile") || die "cannot open file $tempfile: $!";
foreach (<TEMP>) {
    $seen{$_}++;
}
close(TEMP) || die "cannot close file $tempfile: $!";
My program keeps aborting after a few minutes because the computer runs out
of memory. I have four gigs of RAM and the total paging file is 10 megs,
but Perl does not appear to be using it.
How can I find the frequency of each line using such a large dataset? I
tried to have two output files where I kept moving the data back and
forth each time I grabbed the next line from TEMP instead of using
$seen{$_}++, but I did not have much success.
------------------------------
Date: Thu, 01 May 2008 04:24:51 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: Frequency in large datasets
Message-Id: <67so01F2nertiU1@mid.individual.net>
Cosmic Cruizer wrote:
> I've been able to reduce my dataset by 75%, but it still leaves me with a
> file of 47 gigs. I'm trying to find the frequency of each line using:
>
> open(TEMP, "< $tempfile") || die "cannot open file $tempfile:
> $!";
> foreach (<TEMP>) {
> $seen{$_}++;
> }
> close(TEMP) || die "cannot close file
> $tempfile: $!";
>
> My program keeps aborting after a few minutes because the computer runs out
> of memory.
This line:
> foreach (<TEMP>) {
reads the whole file into memory. You should read the file line by line
instead by replacing it with:
while (<TEMP>) {
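Spelled out, the corrected loop looks like this (an in-memory
filehandle stands in for the real file, just to make the sketch
self-contained):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Corrected loop: while() reads one record at a time, so memory use is
# proportional to the number of unique lines, not to the file size.
# The in-memory filehandle below stands in for the real 47 GB file.
my %seen;
open my $temp, '<', \"foo\nbar\nfoo\n" or die "cannot open: $!";
while ( my $line = <$temp> ) {
    $seen{$line}++;
}
close $temp or die "cannot close: $!";
print "$seen{$_} $_" for sort keys %seen;
```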
--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl
------------------------------
Date: Thu, 01 May 2008 02:31:40 GMT
From: "A. Sinan Unur" <1usa@llenroc.ude.invalid>
Subject: Re: Frequency in large datasets
Message-Id: <Xns9A90E528BEACCasu1cornelledu@127.0.0.1>
Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote in
news:Xns9A90C3D86EFCEccruizermydejacom@207.115.17.102:
> I've been able to reduce my dataset by 75%, but it still leaves me
> with a file of 47 gigs. I'm trying to find the frequency of each line
> using:
>
> open(TEMP, "< $tempfile") || die "cannot open file
> $tempfile:
> $!";
> foreach (<TEMP>) {
Well, that is simply silly. You have a huge file yet you try to read all
of it into memory. Ain't gonna work.
How long is each line and how many unique lines do you expect?
If the number of unique lines is small relative to the number of total
lines, I do not see any difficulty if you get rid of the boneheaded for
loop.
> $seen{$_}++;
> }
> close(TEMP) || die "cannot close file
> $tempfile: $!";
my %seen;

open my $TEMP, '<', $tempfile
    or die "Cannot open '$tempfile': $!";

++ $seen{ $_ } while <$TEMP>;

close $TEMP
    or die "Cannot close '$tempfile': $!";
> My program keeps aborting after a few minutes because the computer
> runs out of memory. I have four gigs of ram and the total paging files
> is 10 megs, but Perl does not appear to be using it.
I don't see much point to having a 10 MB swap file. To make the best use
of 4 GB physical memory, AFAIK, you need to be running a 64 bit OS.
> How can I find the frequency of each line using such a large dataset?
> I tried to have two output files where I kept moving the databack and
> forth each time I grabbed the next line from TEMP instead of using
> $seen{$_}++, but I did not have much success.
If the number of unique lines is large, I would periodically store the
current counts, clear the hash, keep processing the original file. Then,
when you reach the end of the original data file, go back to the stored
counts (which will have multiple entries for each unique line) and
aggregate the information there.
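For what it's worth, the idea can be sketched like this (the file
names, the spill format, and the tiny sample input are all made up for
illustration; in real use $limit would be in the millions):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Write a tiny sample input, just to make the sketch runnable.
open my $make, '>', 'big_file.tmp' or die "open: $!";
print $make "x\ny\nx\nz\ny\nx\n";
close $make;

my $limit = 2;    # absurdly small here, to force several flushes
my %seen;

# First pass: count lines, but whenever the hash holds $limit unique
# lines, append the partial counts to a spill file and empty it.
open my $in,    '<', 'big_file.tmp' or die "open: $!";
open my $spill, '>', 'counts.tmp'   or die "open: $!";
while ( my $line = <$in> ) {
    chomp $line;
    $seen{$line}++;
    if ( keys %seen >= $limit ) {
        print $spill "$_\t$seen{$_}\n" for keys %seen;
        %seen = ();
    }
}
print $spill "$_\t$seen{$_}\n" for keys %seen;    # final flush
close $spill or die "close: $!";

# Second pass: merge the partial counts (each unique line may appear
# several times in the spill file, once per flush).
my %total;
open my $agg, '<', 'counts.tmp' or die "open: $!";
while ( my $rec = <$agg> ) {
    chomp $rec;
    my ( $line, $n ) = split /\t/, $rec;
    $total{$line} += $n;
}
print "$total{$_}\t$_\n" for sort keys %total;
unlink 'big_file.tmp', 'counts.tmp';
```

This assumes the unique lines of the spill file, after merging, do fit
in memory; if not, the same trick can be applied recursively.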
Sinan
--
A. Sinan Unur <1usa@llenroc.ude.invalid>
(remove .invalid and reverse each component for email address)
comp.lang.perl.misc guidelines on the WWW:
http://www.rehabitation.com/clpmisc/
------------------------------
Date: 01 May 2008 02:38:25 GMT
From: xhoster@gmail.com
Subject: Re: Frequency in large datasets
Message-Id: <20080430223829.229$zg@newsreader.com>
Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote:
> I've been able to reduce my dataset by 75%, but it still leaves me with a
> file of 47 gigs. I'm trying to find the frequency of each line using:
>
> open(TEMP, "< $tempfile") || die "cannot open file
> $tempfile: $!";
> foreach (<TEMP>) {
> $seen{$_}++;
> }
> close(TEMP) || die "cannot close file
> $tempfile: $!";
If each line shows up a million times on average, that shouldn't
be a problem. If each line shows up twice on average, then it won't
work so well with 4G of RAM. We don't know which of those is closer to
your case.
> My program keeps aborting after a few minutes because the computer runs
> out of memory. I have four gigs of ram and the total paging files is 10
> megs, but Perl does not appear to be using it.
If the program is killed due to running out of memory, then I would
say that the program *does* appear to be using the available memory. What
makes you think it isn't using it?
> How can I find the frequency of each line using such a large dataset?
I probably wouldn't use Perl, but rather the OS's utilities. For example
on linux:
sort big_file | uniq -c
> I
> tried to have two output files where I kept moving the databack and forth
> each time I grabbed the next line from TEMP instead of using $seen{$_}++,
> but I did not have much success.
But in line 42.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: 01 May 2008 02:39:23 GMT
From: xhoster@gmail.com
Subject: Re: Frequency in large datasets
Message-Id: <20080430223926.784$rP@newsreader.com>
Gunnar Hjalmarsson <noreply@gunnar.cc> wrote:
> Cosmic Cruizer wrote:
> > I've been able to reduce my dataset by 75%, but it still leaves me with
> > a file of 47 gigs. I'm trying to find the frequency of each line using:
> >
> > open(TEMP, "< $tempfile") || die "cannot open file
> > $tempfile: $!";
> > foreach (<TEMP>) {
> > $seen{$_}++;
> > }
> > close(TEMP) || die "cannot close file
> > $tempfile: $!";
> >
> > My program keeps aborting after a few minutes because the computer runs
> > out of memory.
>
> This line:
>
> > foreach (<TEMP>) {
>
> reads the whole file into memory. You should read the file line by line
> instead by replacing it with:
>
> while (<TEMP>) {
Duh, I completely overlooked that.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Thu, 01 May 2008 03:32:45 GMT
From: Cosmic Cruizer <XXjbhuntxx@white-star.com>
Subject: Re: Frequency in large datasets
Message-Id: <Xns9A90D0E1FD16ccruizermydejacom@207.115.17.102>
Gunnar Hjalmarsson <noreply@gunnar.cc> wrote in
news:67so01F2nertiU1@mid.individual.net:
> Cosmic Cruizer wrote:
>> I've been able to reduce my dataset by 75%, but it still leaves me
>> with a file of 47 gigs. I'm trying to find the frequency of each line
>> using:
>>
>> open(TEMP, "< $tempfile") || die "cannot open file
>> $tempfile:
>> $!";
>> foreach (<TEMP>) {
>> $seen{$_}++;
>> }
>> close(TEMP) || die "cannot close file
>> $tempfile: $!";
>>
>> My program keeps aborting after a few minutes because the computer
>> runs out of memory.
>
> This line:
>
>> foreach (<TEMP>) {
>
> reads the whole file into memory. You should read the file line by
> line instead by replacing it with:
>
> while (<TEMP>) {
>
<sigh> As both you and Sinan pointed out... I'm using foreach. Everywhere
else I used the while statement to get me to this point. This solves the
problem.
Thank you.
------------------------------
Date: Thu, 01 May 2008 03:44:30 GMT
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Frequency in large datasets
Message-Id: <s0fi14thpkq4a2khf0hh5go7c2r04pqgmj@4ax.com>
Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote:
>I've been able to reduce my dataset by 75%, but it still leaves me with a
>file of 47 gigs. I'm trying to find the frequency of each line using:
>
> open(TEMP, "< $tempfile") || die "cannot open file $tempfile:
>$!";
> foreach (<TEMP>) {
This slurps the whole file (yes, all 47 GB) into a list and then iterates
over that list. Read the file line by line instead:
while (<TEMP>){
This should work unless you have a lot of different data points.
jue
------------------------------
Date: Wed, 30 Apr 2008 17:53:17 -0700 (PDT)
From: "xahlee@gmail.com" <xahlee@gmail.com>
Subject: Re: pop langs website ranking
Message-Id: <2819d214-1208-4552-9b59-122e980e01d4@t12g2000prg.googlegroups.com>
I have updated the computing sites popularity ranking, based on both
alexa.com and quantcast.com.
The whole report nicely formatted in HTML is here:
http://xahlee.org/lang_traf/lang_sites.html
The following is a summary of some highlights.
the relative popularity of the following sites is roughly this:
sun.com
java.com
php.net
slashdot.com
Mysql.com
gnu.org
wolfram.com
Python.org
cpan.org
xahlee.org
Perl.org
Perl.com
paulgraham.com
haskell.org
novig.com
emacswiki.org
franz.com
lispworks.com
Gigamonkeys.com
schemers.org
These are just sites I'm familiar with, have used, or that come to
mind. There are of course many other popular computing sites.
Alexa's data is more reliable than Quantcast's. Quantcast often has
very bad info at present. However, Alexa's data does not seem reliable
in any absolute sense.
The one question that puzzles me is why gnu.org is ranked so high,
since as far as I know it isn't that active a site for news, blogs,
wikis, or anything. The only reason I can think of is that lots of
software points to it for the GPL, but that can't explain all of it.
Note that if you add together the traffic of cpan.org, perl.com, and
perl.org, they come out a bit higher than python.org, as expected.
I'm not clear on the relative popularity of sun.com, java.com, and
php.net. Before Alexa changed their ranking algorithm recently,
php.net was ranked high above sun.com (a ranking of 500 vs 900), but
now Alexa shows that sun.com has about 10 times more visitors than
php.net. Quantcast's data on these 3 sites is more bewildering. For
example, it estimates that java.com has 5 million unique visitors per
month, while giving sun.com only 1.5 M, and php.net only 78 k. I think
Quantcast's data here is quite fucked.
Xah
xah@xahlee.org
∑ http://xahlee.org/
☄
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 1496
***************************************