[31045] in Perl-Users-Digest
Perl-Users Digest, Issue: 2290 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sun Mar 22 21:09:48 2009
Date: Sun, 22 Mar 2009 18:09:11 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Sun, 22 Mar 2009 Volume: 11 Number: 2290
Today's topics:
Re: How to separate a big text file (say 400 news stori <tadmc@seesig.invalid>
Re: How to separate a big text file (say 400 news stori <tadmc@seesig.invalid>
Re: How to separate a big text file (say 400 news stori <huxiankui@gmail.com>
Re: How to separate a big text file (say 400 news stori <huxiankui@gmail.com>
Re: How to separate a big text file (say 400 news stori <noreply@gunnar.cc>
Re: How to separate a big text file (say 400 news stori <noreply@gunnar.cc>
Re: How to separate a big text file (say 400 news stori <RedGrittyBrick@SpamWeary.foo>
i have a question <Broli00@gmail.com>
Re: i have a question <tadmc@seesig.invalid>
Re: i have a question <Broli00@gmail.com>
Re: i have a question <tadmc@seesig.invalid>
Re: i have a question <jurgenex@hotmail.com>
Re: software design question <rvtol+usenet@xs4all.nl>
Re: software design question <hjp-usenet2@hjp.at>
Re: software design question <rvtol+usenet@xs4all.nl>
Re: software design question <hjp-usenet2@hjp.at>
Re: software design question <rvtol+usenet@xs4all.nl>
Re: software design question <nospam-abuse@ilyaz.org>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Sun, 22 Mar 2009 07:58:10 -0500
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <slrngscdf2.sd8.tadmc@tadmc30.sbcglobal.net>
william <huxiankui@gmail.com> wrote:
> It would be neat to have the date converted to the format you
> mentioned. Since I'm not a Perl expert, I may have to do it later
> using SAS.
You do not need to be an expert, as the expert-level work has
probably already been done and packaged up for others to use.
There are many modules on CPAN (perldoc -q CPAN) that can parse
dates for you such as DateTime::Format::Natural.
Using such a module, it should take about 3 lines of code to
convert "December 5, 2008" to "20081205".
> open IN,"dell" or die "could not open $!";
You should use the 3-argument form of open() and a lexical filehandle
like I did in my code.
open my $IN, '<', 'dell' or die "could not open $!";
> while ( <IN> ) {
while ( <$IN> ) {
> chop($date);
Don't use chop() for removing newlines.
chomp($date); # much safer than chop()
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sun, 22 Mar 2009 19:03:36 -0500
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <slrngsdkeo.40p.tadmc@tadmc30.sbcglobal.net>
william <huxiankui@gmail.com> wrote:
> I want to
But are you _allowed_ to do what you "want"?
> search news stories from Lexis-Nexis database through my
> library server (ezproxy). Then download the news stories
http://www.lexisnexis.com/terms/
...
You may not decompile, reverse engineer, disassemble, rent,
lease, loan, sell, sublicense, or create derivative works
from this Web Site or the Content. Nor may you use any
network monitoring or discovery software to determine the
site architecture, or extract information about usage,
individual identities or users. You may not use any robot,
spider, other automatic software or device, or manual process
to monitor or copy our Web Site or the Content without
Provider's prior written permission.
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sun, 22 Mar 2009 15:07:46 -0700 (PDT)
From: william <huxiankui@gmail.com>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <c2c1247a-05ea-4e47-8e81-e176e571795e@y13g2000yqn.googlegroups.com>
Tad and Gunnar,
Thank you very much for your replies. I have largely achieved one of
my goals: to divide a big file into separate small files. But I still
have a huge task before this step. Here is a little background.
I want to search news stories from Lexis-Nexis database through my
library server (ezproxy). Then download the news stories and separate
them. The last step would be to classify them into various groups.
Right now, I have some clue on how to finish the last two steps. It is
the first step that is giving me a lot of trouble. I tried to write
perl script to automatically login to lexis-nexis through my
university library using www::mechanize, Crypt::SSLeay, http::cookies,
and lwp::userAgent. I think that I can log into Lexis Nexis. But to do
the news search, Lexis Nexis uses javascripts. I still need to figure
out how to handle these queries through javascripts.
Anyway, thank you again for your code and suggestions!
William
------------------------------
Date: Sun, 22 Mar 2009 17:30:15 -0700 (PDT)
From: william <huxiankui@gmail.com>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <4dc9c155-a043-49f6-9bd2-b444c50d5653@c9g2000yqm.googlegroups.com>
Tad,
I think that you have a very good point. Although my purpose is to
analyze news effects on the financial markets, purely for academic
use, I'd rather be a bit careful and maybe discard my first step of
crawling the Lexis-Nexis database.
Gunnar,
Your suggestion of hiring a consultant is fine with me. When I get
permission from Lexis-Nexis to data-mine their news database, I
will definitely need some help from experts like you and Tad. It would
be too big a project for me to manage on my own.
Thanks again for all your suggestions for solving my step 2 problem.
Best,
William
------------------------------
Date: Sun, 22 Mar 2009 15:10:46 +0100
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <72mv7gFqq29eU1@mid.individual.net>
Tad J McClellan wrote:
> There are many modules on CPAN (perldoc -q CPAN) that can parse
> dates for you such as DateTime::Format::Natural.
>
> Using such a module, it should take about 3 lines of code to
> convert "December 5, 2008" to "20081205".
I prefer Date::Parse over a heavy-weight module like that.
$ perl -MDate::Parse -e '
($d, $m, $y) = (strptime "December 5, 2008")[3..5]; # 1
printf "%d%02d%02d\n", $y+1900, $m+1, $d; # 2
'
20081205
$
--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl
------------------------------
Date: Sun, 22 Mar 2009 23:57:53 +0100
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <72nu3rFr878uU1@mid.individual.net>
william wrote:
> Tad and Gunnar,
>
> Thank you very much for your replies. I have largely achieved one of
> my goals: to divide a big file into separate small files. But I still
> have a huge task before this step. Here is a little background.
>
> I want to search news stories from Lexis-Nexis database through my
> library server (ezproxy). Then download the news stories and separate
> them. The last step would be to classify them into various groups.
> Right now, I have some clue on how to finish the last two steps. It is
> the first step that is giving me a lot of trouble. I tried to write
> perl script to automatically login to lexis-nexis through my
> university library using www::mechanize, Crypt::SSLeay, http::cookies,
> and lwp::userAgent. I think that I can log into Lexis Nexis. But to do
> the news search, Lexis Nexis uses javascripts. I still need to figure
> out how to handle these queries through javascripts.
Please note that this is a Perl group, not a group for JavaScript, and
its purpose is to discuss Perl, possibly answering specific questions,
not writing programs from vague specifications.
Sounds to me as if you need a consultant.
--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl
------------------------------
Date: Sun, 22 Mar 2009 23:06:52 +0000
From: RedGrittyBrick <RedGrittyBrick@SpamWeary.foo>
Subject: Re: How to separate a big text file (say 400 news stories) to many small text files?
Message-Id: <1-mdnVOFQOETWVvUnZ2dnUVZ8uudnZ2d@bt.com>
william wrote:
> Tad and Gunnar,
>
> Thank you very much for your replies. I have largely achieved one of
> my goals: to divide a big file into separate small files. But I still
> have a huge task before this step. Here is a little background.
>
> I want to search news stories from Lexis-Nexis database through my
> library server (ezproxy). Then download the news stories and separate
> them. The last step would be to classify them into various groups.
> Right now, I have some clue on how to finish the last two steps. It is
> the first step that is giving me a lot of trouble. I tried to write
> perl script to automatically login to lexis-nexis through my
> university library using www::mechanize, Crypt::SSLeay, http::cookies,
> and lwp::userAgent. I think that I can log into Lexis Nexis. But to do
> the news search, Lexis Nexis uses javascripts. I still need to figure
> out how to handle these queries through javascripts.
Could you use a proper API instead of emulating a browser?
http://www.lexisnexis.com/webserviceskit/
--
RGB
------------------------------
Date: Sun, 22 Mar 2009 13:51:37 -0700 (PDT)
From: pereges <Broli00@gmail.com>
Subject: i have a question
Message-Id: <fb0ea94a-4aec-42c3-a8fb-ec1c9ce55a42@w35g2000yqm.googlegroups.com>
I have written the code for an order form in html and there is an
order number associated with an order. I have written some code in
perl for form handling and what this basically does is display a table
with items purchased, print the total cost and the order number
associated with the order.
This order number is initialized to some arbitrary number and must be
incremented every time a new order is made. How can I do this? Should
I store the order number in a file or something (retrieve the last
stored order number and increment by 1) ?
------------------------------
Date: Sun, 22 Mar 2009 19:09:52 -0500
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: i have a question
Message-Id: <slrngsdkqg.40p.tadmc@tadmc30.sbcglobal.net>
pereges <Broli00@gmail.com> wrote:
> Subject: i have a question
Please put the subject of your article in the Subject of your article.
> I have written the code for an order form in html and there is an
> order number associated with an order. I have written some code in
> perl
s/perl/Perl/;
> for form handling and what this basically does is display a table
> with items purchased, print the total cost and the order number
> associated with the order.
>
> This order number is initialized to some arbitrary number and must be
> incremented every time a new order is made. How can I do this?
Using an RDBMS of some sort would be the most expedient way of
dealing with the locking problem associated with a multitasking
environment such as the CGI.
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sun, 22 Mar 2009 14:08:54 -0700 (PDT)
From: pereges <Broli00@gmail.com>
Subject: Re: i have a question
Message-Id: <6f321f65-b87b-4e2e-ac46-9085918293de@h20g2000yqj.googlegroups.com>
BTW, this is how I tried to do it, but it's not working:
...
open (FPTR,"data.txt");
$order_num = <FPTR>;
$new_order_num = $order_num + 1;
close (FPTR);
my $datafile = '/home2/s09/webber36/apache2/cgi-bin/data.txt';
open (FPTR, ">$datafile");
print MYFILE $new_order_num ;
close (MYFILE);
print $order_num;
print "<br />";
print $new_order_num;
....
The file data.txt contains the most recent order number; we have
initialized it with some arbitrary value.
------------------------------
Date: Sun, 22 Mar 2009 19:19:01 -0500
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: i have a question
Message-Id: <slrngsdlbl.40p.tadmc@tadmc30.sbcglobal.net>
pereges <Broli00@gmail.com> wrote:
> open (FPTR,"data.txt");
You should always, yes *always*, check the return value from open().
You should use the 3-argument form of open().
You should use a lexical filehandle.
You should use single quotes unless you need one of the two extra
things that double quotes gives you.
open my $FPTR, '<', 'data.txt' or die "could not open 'data.txt' $!";
I am guessing that there is some money involved here somewhere?
If so, you really should hire someone who knows what they're doing
to handle this for you.
Do you know what file locking is?
Do you know why you need file locking for this?
Do you know what can happen if you proceed without proper locking?
(Those are all rhetorical questions, as the answers are obvious.)
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sun, 22 Mar 2009 17:33:52 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: i have a question
Message-Id: <gjlds4tkqtla1h4veimoc7m5ptmlciiapg@4ax.com>
pereges <Broli00@gmail.com> wrote:
Please put the subject of your article into the Subject of your article.
"I have a question" is about as useless as it gets.
>open (FPTR,"data.txt");
Most people would suggest using the 3-argument form of open() instead
of the 2-argument form.
Most people would suggest using lexical file handles.
You should always check the success of any open() statement.
>$order_num = <FPTR>;
Most people would strongly suggest to
use strict;
use warnings;
which would require you to declare $order_num
>$new_order_num = $order_num + 1;
Same for $new_order_num.
The assignment works only by chance, because Perl automatically uses the
leading numerical portion of a string when a string is used in numeric
context. It is far better to remove the trailing newline explicitly via
chomp().
>close (FPTR);
>
>my $datafile = '/home2/s09/webber36/apache2/cgi-bin/data.txt';
>
>open (FPTR, ">$datafile");
Again:
- three argument form
- use lexical file handle
- provide error checking and handling
>print MYFILE $new_order_num ;
You never declared or opened MYFILE.
You do realize that your code, when used in a CGI application, contains
a race condition, don't you?
jue
------------------------------
Date: Sun, 22 Mar 2009 12:12:51 +0100
From: "Dr.Ruud" <rvtol+usenet@xs4all.nl>
Subject: Re: software design question
Message-Id: <49c61d33$0$191$e4fe514c@news.xs4all.nl>
Uri Guttman wrote:
>Ruud:
>> Uri:
>>> [printing late] but scalar refs
>>> solves the passing around big strings. you still need at least one large
>>> buffer for it though.
>>
>> Or store the strings in an array (and pass around its ref) because
>>
>> print LIST
>
> arrays take up more storage than a large scalar.
I don't see that as an important factor here.
> and i like to process
> whole files with regexes vs looping over an array of lines.
I like to build parsers by combining small regexps. Often you need a
state machine (which can lead to a while loop) because the file contains
both data and metadata. I also like (nested) state machines a lot.
--
Ruud
------------------------------
Date: Sun, 22 Mar 2009 13:16:05 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: software design question
Message-Id: <slrngscb06.a4m.hjp-usenet2@hrunkner.hjp.at>
On 2009-03-22 11:12, Dr.Ruud <rvtol+usenet@xs4all.nl> wrote:
> Uri Guttman wrote:
>>Ruud:
>>> Uri:
>>>> [printing late] but scalar refs
>>>> solves the passing around big strings. you still need at least one large
>>>> buffer for it though.
>>>
>>> Or store the strings in an array (and pass around its ref) because
>>>
>>> print LIST
>>
>> arrays take up more storage than a large scalar.
>
> I don't see that as an important factor here.
Remember that the problem referred to above was memory requirements. So
it may not be an important factor *elsewhere*, but it is *the* important
factor *here*.
I wrote a small test program (see below) and used it to read the
contents of a 6.8 MB e-mail message into memory and measure the VM
usage:
mode=scalarref, filesize=6.7997MB, before = 5.08203MB, after = 11.8828MB, diff= 6.80078MB
mode=scalar, filesize=6.7997MB, before = 5.08203MB, after = 18.6836MB, diff=13.6016MB
mode=scalarcopy, filesize=6.7997MB, before = 5.08203MB, after = 18.6836MB, diff=13.6016MB
mode=arrayloop, filesize=6.7997MB, before = 5.08203MB, after = 15.4023MB, diff=10.3203MB
mode=array, filesize=6.7997MB, before = 5.08203MB, after = 26.1133MB, diff=21.0312MB
so in this example using an array with the common idiom "@lines = <$fh>"
has an overhead of about 200%. No problem if you are dealing with a few
MB, but if your files are a few hundred MB, that may just make the
difference between feasible and infeasible. If your file has very short
lines, it gets worse. If you split the lines into fields (say, you want
to represent a comma-separated file as an AoA), it gets much worse.
#!/usr/bin/perl
use warnings;
use strict;

use constant M => 1024*1024;

my $size0 = getvmsize();
my $contents = readfile(@ARGV);
my $size1 = getvmsize();
printf "before = %gMB, after = %gMB, diff=%gMB\n",
       $size0/M, $size1/M, ($size1-$size0)/M;
exit(0);

sub readfile {
    my ($filename, $mode) = @_;
    open (my $fh, '<', $filename) or die "cannot open $filename: $!";
    if ($mode eq 'scalar') {
        local $/;
        return <$fh>;
    } elsif ($mode eq 'scalarcopy') {
        local $/;
        my $contents = <$fh>;
        return $contents;
    } elsif ($mode eq 'scalarref') {
        local $/;
        my $contents = <$fh>;
        return \$contents;
    } elsif ($mode eq 'array') {
        my @lines = <$fh>;
        return \@lines;
    } elsif ($mode eq 'arrayloop') {
        my @lines;
        while (<$fh>) {
            push @lines, $_;
        }
        return \@lines;
    }
}

sub getvmsize {
    # from linux/Documentation/filesystems/proc.txt
    # size     total program size
    # resident size of memory portions
    # shared   number of pages that are shared
    # trs      number of pages that are 'code'
    # drs      number of pages of data/stack
    # lrs      number of pages of library
    # dt       number of dirty pages
    open (my $fh, '<', "/proc/$$/statm");
    my $line = <$fh>;
    my ($size, $resident, $shared, $trs, $drs, $lrs, $dt) = split(/\s+/, $line);
    return $size * 4096; # XXX
}
__END__
__END__
hp
------------------------------
Date: Sun, 22 Mar 2009 13:33:11 +0100
From: "Dr.Ruud" <rvtol+usenet@xs4all.nl>
Subject: Re: software design question
Message-Id: <49c63007$0$191$e4fe514c@news.xs4all.nl>
Peter J. Holzer wrote:
> Dr.Ruud:
>> Uri Guttman:
>>> Ruud:
>>>> Uri:
>>>>> [printing late] but scalar refs
>>>>> solves the passing around big strings. you still need at least one large
>>>>> buffer for it though.
>>>> Or store the strings in an array (and pass around its ref) because
>>>>
>>>> print LIST
>>> arrays take up more storage than a large scalar.
>> I don't see that as an important factor here.
>
> Remember that the problem referred to above was memory requirements. So
> it may not be an important factor *elsewhere*, but it is *the* important
> factor *here*.
That is a different context than I assumed.
I was only talking about storing the strings *to print later* in an
array, instead of concatenating them.
--
Ruud
------------------------------
Date: Sun, 22 Mar 2009 17:14:17 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: software design question
Message-Id: <slrngscouq.cg6.hjp-usenet2@hrunkner.hjp.at>
On 2009-03-22 12:33, Dr.Ruud <rvtol+usenet@xs4all.nl> wrote:
> Peter J. Holzer wrote:
>> Dr.Ruud:
>>> Uri Guttman:
>>>> Ruud:
>>>>> Uri:
>
>>>>>> [printing late] but scalar refs
>>>>>> solves the passing around big strings. you still need at least one large
>>>>>> buffer for it though.
>>>>> Or store the strings in an array (and pass around its ref) because
>>>>>
>>>>> print LIST
>>>> arrays take up more storage than a large scalar.
>>> I don't see that as an important factor here.
>>
>> Remember that the problem referred to above was memory requirements. So
>> it may not be an important factor *elsewhere*, but it is *the* important
>> factor *here*.
>
> That is a different context than I assumed.
>
> I was only talking about storing the strings *to print later* in an
> array, instead of concatenating them.
Same thing. An array has a quite substantial overhead per element. In
the example I just posted it's about 40 bytes (perl 5.10.0 on Linux/i386),
and I think that's typical (about 100 bytes for hash elements, IIRC). So
if you produce output in small chunks and put them into an array you
waste an additional 40 bytes of memory for each chunk compared to
concatenating them to a string.
hp
------------------------------
Date: Sun, 22 Mar 2009 19:13:43 +0100
From: "Dr.Ruud" <rvtol+usenet@xs4all.nl>
Subject: Re: software design question
Message-Id: <49c67fd7$0$194$e4fe514c@news.xs4all.nl>
Peter J. Holzer wrote:
> Ruud:
>> Peter:
>>> Dr.Ruud:
>>>> Uri:
>>>>> Ruud:
>>>>>> Uri:
>>>>>>> [printing late] but scalar refs
>>>>>>> solves the passing around big strings. you still need at least one large
>>>>>>> buffer for it though.
>>>>>>
>>>>>> Or store the strings in an array (and pass around its ref) because
>>>>>> print LIST
>>>>>
>>>>> arrays take up more storage than a large scalar.
>>>>
>>>> I don't see that as an important factor here.
>>>
>>> Remember that the problem referred to above was memory requirements. So
>>> it may not be an important factor *elsewhere*, but it is *the* important
>>> factor *here*.
>>
>> That is a different context than I assumed.
>>
>> I was only talking about storing the strings *to print later* in an
>> array, instead of concatenating them.
>
> Same thing. An array has a quite substantial overhead per element. In
> the example I just posted it's about 40 bytes (perl 5.10.0 on Linux/i386),
> and I think that's typical (about 100 bytes for hash elements, IIRC). So
> if you produce output in small chunks and put them into an array you
> waste an additional 40 bytes of memory for each chunk compared to
> concatenating them to a string.
Concatenation needs resources too, so as always just use what is
appropriate for your situation. Once your print array, or your print
buffer, is full enough, you can print and reset it. Etc. Etc.
--
Ruud
------------------------------
Date: Sun, 22 Mar 2009 20:42:31 GMT
From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: software design question
Message-Id: <slrngsd8ln.ae5.nospam-abuse@chorin.math.berkeley.edu>
On 2009-03-22, Peter J. Holzer <hjp-usenet2@hjp.at> wrote:
> mode=scalarref, filesize=6.7997MB, before = 5.08203MB, after = 11.8828MB, diff= 6.80078MB
> mode=array, filesize=6.7997MB, before = 5.08203MB, after = 26.1133MB, diff=21.0312MB
>
> so in this example using an array with the common idiom "@lines = <$fh>"
> has an overhead of about 200%.
On my system:
scalar: 8.6M
arrayref: 28.1M 5.8.8
arrayref: 20.8M 5.6.1
So there is an additional bug in 5.8.8...
Mail.old>env PERL_DEBUG_MSTATS=1 perl -we "sub f{my @a=<>; print scalar @a; \@a} $a=f" eprints
Name "main::a" used only once: possible typo at -e line 1.
Memory allocation statistics after execution: (buckets 4(4)..1052668(1048576)
9947924 free: 1085 567 8680 20926 55898 15 2 4 0 0 0 0 0 0 0 0 0 0
17903 2631 11329 19275 908
17705532 used: 1210 703 8708 21048 5174 17 6 2 1 1655 3 0 0 0 0 0 0 3
17867 2639 11876 19407 51642
Total sbrk(): 28143616/975:1124. Odd ends: pad+heads+chain+tail: 0+479920+0+10240.
138811
And arrayref is about 9x slower... I suspect that the list-<> operator is
not marking its results as TEMP, and/or array-copy operator is not
granting TEMP status of SVs...
Hope this helps,
Ilya
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 2290
***************************************