Perl-Users Digest, Issue: 2657 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Oct 28 18:14:17 2009
Date: Wed, 28 Oct 2009 15:14:07 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Wed, 28 Oct 2009 Volume: 11 Number: 2657
Today's topics:
Re: software engineering, program construction <cartercc@gmail.com>
Re: software engineering, program construction <ben@morrow.me.uk>
Re: software engineering, program construction <cartercc@gmail.com>
Re: software engineering, program construction <uri@StemSystems.com>
Re: software engineering, program construction <jurgenex@hotmail.com>
Re: software engineering, program construction <jurgenex@hotmail.com>
Re: software engineering, program construction <ben@morrow.me.uk>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Wed, 28 Oct 2009 11:27:35 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: software engineering, program construction
Message-Id: <9a140923-b0ec-4e04-a1e5-739b61f458f2@y32g2000prd.googlegroups.com>
On Oct 28, 1:27 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
> If function f() computes a data item x, and function g() needs
> information from this data item, then f() needs to return this data item
> and g() needs to receive it:
>
>         g(f(....), ....);
> or
>         my $thisresult = f(...);
>         g($thisresult);
> or
>         f(..., $thisresult);
>         g($thisresult);
>
> jue
Or, maybe...
my %information_hash;
%build_hash;
%test_hash;
&use_hash;
... where %information_hash is a data structure that contains tens of
thousands of records four layers deep, like this:
$information_hash{$level}{$site}{$term}
... and
sub use_hash
{
foreach my $level (keys %information_hash)
{
foreach my $site (keys %{$information_hash{$level}})
{
foreach my $term (keys %{$information_hash{$level}{$site}{$term}})
{
print "Dear $information_hash{$level}{$site}{$term}{'name'} ...";
}
}
}
}
Frankly, it seems a lot easier to use one global hash than to either
pass a copy to a function or pass a reference to a function.
Yesterday, I completed a task that used an input file of approx 300,000
records, analyzed the data, created 258 charts (as gifs) and printed
the charts to a PDF document for distribution. In this case, I created
several subroutines to shake and bake the data, and used just one
global hash throughout. Is this so wrong?
CC.
------------------------------
Date: Wed, 28 Oct 2009 19:27:48 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: software engineering, program construction
Message-Id: <kiimr6-7bs2.ln1@osiris.mauzo.dyndns.org>
Quoth ccc31807 <cartercc@gmail.com>:
> On Oct 28, 1:27 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
> > If function f() computes a data item x, and function g() needs
> > information from this data item, then f() needs to return this data item
> > and g() needs to receive it:
> >
> > g(f(....), ....);
> > or
> > my $thisresult = f(...);
> > g($thisresult);
> > or
> > f(..., $thisresult);
> > g($thisresult);
>
> Or, maybe...
>
> my %information_hash;
> %build_hash;
> %test_hash;
> &use_hash;
Oh, for God's sake! How long have you been here?
DON'T CALL SUBS WITH & UNLESS YOU KNOW YOU NEED TO.
Or, at any rate, don't *post* code that does so. Writing in a peculiar
style in your own code is one thing, but you should not be encouraging
it in others.
> ... where %information_hash is a data structure that contains tens of
> thousands of records four layers deep, like this:
> $information_hash{$level}{$site}{$term}
>
> ... and
>
> sub use_hash
> {
> foreach my $level (keys %information_hash)
> {
> foreach my $site (keys %{$information_hash{$level}})
> {
> foreach my $term (keys %{$information_hash{$level}{$site}{$term}})
> {
> print "Dear $information_hash{$level}{$site}{$term}{'name'} ...";
Yuck yuck yuck. At the very least, can you not rewrite that as
for my $level (values %information_hash) {
for my $site (values %$level) {
for my $term (values %$site) {
print "Dear $term->{name} ...";
}
}
}
? If you need the keys as well then a
while (my ($level, $level_hash) = each %information_hash) {
loop might be more appropriate.
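For illustration, a minimal self-contained sketch of that pattern (the
sample data and keys here are invented) showing how `each` hands you both
the key and the nested hashref at every layer:

```perl
use strict;
use warnings;

# Invented sample data, shaped like %information_hash in the post.
my %information_hash = (
    UG => { main => { f09 => { name => 'Alice' } } },
    GR => { main => { f09 => { name => 'Bob'   } } },
);

while ( my ($level, $level_hash) = each %information_hash ) {
    while ( my ($site, $site_hash) = each %$level_hash ) {
        while ( my ($term, $record) = each %$site_hash ) {
            print "Dear $record->{name} ($level/$site/$term) ...\n";
        }
    }
}
```

This keeps $level, $site and $term available for "other things" without
repeating the full dereference chain at each depth.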
> Frankly, it seems a lot easier to use one global hash than to either
> pass a copy to a function or pass a reference to a function.
This is only because you've never been bitten by using globals when you
shouldn't have; probably because you've only ever written relatively
small programs, and never come back to a program six months later to add
a new feature.
> Yesterday, I completed a task that used an input file of approx 300,000
> records, analyzed the data, created 258 charts (as gifs) and printed
> the charts to a PDF document for distribution. In this case, I created
> several subroutines to shake and bake the data, and used just one
> global hash throughout. Is this so wrong?
If it works, then no, it isn't 'wrong'. It is bad style, though. If you
had written (say) the chart-creating code as a module with functions
that took parameters, then when you need another set of charts tomorrow
you could reuse it. As it is you have to copy/paste and modify it for
your new set of global data structures.
That may be practical when your programs are only ever run once, but
quickly becomes less so when you have many programs in long-term use
with almost-but-not-quite the same subroutine in: when you find a bug,
how are you going to find all the places you've copy/pasted it to
correct it?
Ben
------------------------------
Date: Wed, 28 Oct 2009 13:17:23 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: software engineering, program construction
Message-Id: <5d5b6191-4f6f-4b91-b72f-e1d71471cccf@r24g2000prf.googlegroups.com>
On Oct 28, 3:27 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> ? If you need the keys as well then a
>
>     while (my ($level, $level_hash) = each %information_hash) {
>
> loop might be more appropriate.
I am using the keys to do other things, so yes, I need the keys, but
thanks for your suggestion. I find myself doing this a lot, so I'm
open to making it easier.
> This is only because you've never been bitten by using globals when you
> shouldn't have; probably because you've only ever written relatively
> small programs, and never come back to a program six months later to add
> a new feature.
Okay, let's consider an evolving programming style. Suppose you wrote
a very short script that looks like this:
my %hash;
#step_one
open IN, '<', 'in.dat';
while (<IN>)
{
chomp;
my ($val1, $val2, $val3 ...) = split /,/;
$hash($val1} = (name => $val2, $id => $val3 ...);
}
close IN;
#step_two
open OUT, '>', 'out.csv';
foreach my $key (sort keys %hash)
{
print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
}
close OUT;
exit(0);
Now, suppose you rewrote it like this:
my %hash;
step_one();
step_two();
exit(0);
sub step_one
{
open IN, '<', 'in.dat';
while (<IN>)
{
chomp;
my ($val1, $val2, $val3 ...) = split /,/;
$hash($val1} = (name => $val2, $id => $val3 ...);
}
close IN;
}
sub step_two
{
open OUT, '>', 'out.csv';
foreach my $key (sort keys %hash)
{
print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
}
close OUT;
}
exit(0);
Ben, I could make the case that the second version is clearer and
easier to maintain than the first version, even though the second
version breaks the rules and the first version doesn't. What's the
REAL difference between the two versions? And why should the
decomposition of code into subroutines NECESSARILY require a
functional style and variable localization?
> If it works, then no, it isn't 'wrong'. It is bad style, though. If you
> had written (say) the chart-creating code as a module with functions
> that took parameters, then when you need another set of charts tomorrow
> you could reuse it. As it is you have to copy/paste and modify it for
> your new set of global data structures.
You are 100 percent correct. I don't know if I will ever run this
script again. If I do, I'll certainly revise it (as I wrote it like
version one above).
> That may be practical when your programs are only ever run once, but
> quickly becomes less so when you have many programs in long-term use
> with almost-but-not-quite the same subroutine in: when you find a bug,
> how are you going to find all the places you've copy/pasted it to
> correct it?
Again, I agree totally. However, I'm a lot more interested in the
architecture of a script than the other issues that have been
mentioned. With this particular issue, I try my best to follow the DRY
practice, and the second or third time I write the same thing, I often
will place it in a function and call it from there.
CC.
------------------------------
Date: Wed, 28 Oct 2009 16:36:31 -0400
From: "Uri Guttman" <uri@StemSystems.com>
Subject: Re: software engineering, program construction
Message-Id: <87bpjr8ddc.fsf@quad.sysarch.com>
>>>>> "c" == ccc31807 <cartercc@gmail.com> writes:
c> On Oct 28, 3:27 pm, Ben Morrow <b...@morrow.me.uk> wrote:
>> ? If you need the keys as well then a
>>
>> while (my ($level, $level_hash) = each %information_hash) {
>>
>> loop might be more appropriate.
c> I am using the keys to do other things, so yes, I need the keys, but
c> thanks for your suggestion. I find myself doing this a lot, so I'm
c> open to making it easier.
>> This is only because you've never been bitten by using globals when you
>> shouldn't have; probably because you've only ever written relatively
>> small programs, and never come back to a program six months later to add
>> a new feature.
c> Okay, let's consider an evolving programming style. Suppose you wrote
c> a very short script that looks like this:
c> my %hash;
c> step_one();
c> step_two();
you pass no args to those subs. they are using the file global %hash
c> Ben, I could make the case that the second version is clearer and
c> easier to maintain than the first version, even though the second
c> version breaks the rules and the first version doesn't. What's the
c> REAL difference between the two versions? And why should the
c> decomposition of code into subroutines NECESSARILY require a
c> functional style and variable localization?
you didn't listen to the rules. it isn't about just globals or passing
args. it is WHEN and HOW do you choose to do either. a single global
hash is FINE in some cases as are a few top level file lexicals. doing
it ALL the time with every variable is bad. you need to learn the
balance of when to choose globals. the issue is blindly using globals
all over the place and using too many of them vs judicious use of
globals. you just about can't write any decent sized program without
file level globals so it isn't a hard and fast rule. the goal is to keep
the number of globals to a nice and easy to understand/maintain
minimum. sometimes that minimum can be zero.
>> That may be practical when your programs are only ever run once, but
>> quickly becomes less so when you have many programs in long-term use
>> with almost-but-not-quite the same subroutine in: when you find a bug,
>> how are you going to find all the places you've copy/pasted it to
>> correct it?
c> Again, I agree totally. However, I'm a lot more interested in the
c> architecture of a script than the other issues that have been
c> mentioned. With this particular issue, I try my best to follow the DRY
c> practice, and the second or third time I write the same thing, I often
c> will place it in a function and call it from there.
it is easier to get into the habit of writing subs for all logical
sections. loading/parsing a file is a logical section. processing that
data is a logical section, etc. then you can pass in file names for
args, or the ref to the hash for an arg, etc. one way to avoid file
lexicals (not that i do this all the time) is to use a top level driver
sub
main() ;
exit ;
sub main {
my $file = shift @ARGV || 'default_name' ;
my $parsed_data = parse_file( $file ) ;
my $results = process_data( $parsed_data ) ;
output_report( $results ) ;
}
etc.
isolation is the goal. now no one can mess with those structures by
accident or even by ill will. they will be garbage collected when the
sub main exits which can be a good thing too in some cases. the logical
steps are clear and easy to follow. it is easy to add more steps or
modify each step. the subs could be reused if needed with data coming
from other places as they aren't hardwired to the file level
lexicals. the advantages of that style of code are major and the losses
for using too many globals are also big. there is a reason this style
has been developed, taught and espoused for years. it isn't a random
event. small programs develop into large ones all the time. bad habits
in small programs don't get changed when the scale of the program
grows. bad habits will kill you in larger programs so it is best to
practice good habits at all program scales, small and large.
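a runnable sketch of that driver-sub shape (the sub names and the
in-memory input are invented here so it runs without a real file; real
code would take a filename from @ARGV as above):

```perl
use strict;
use warnings;

main();

sub main {
    # invented in-memory input standing in for a parsed file
    my @lines  = ( "alice,100", "bob,200" );
    my $parsed = parse_lines(\@lines);
    my $report = process_data($parsed);
    print "$_\n" for @$report;
}

# one logical section: parse raw lines into records
sub parse_lines {
    my ($lines) = @_;
    return [ map { [ split /,/ ] } @$lines ];
}

# another logical section: turn records into report lines
sub process_data {
    my ($records) = @_;
    return [ map { "$_->[0] => id $_->[1]" } @$records ];
}
```

nothing outside main() can touch $parsed or $report, and each sub can be
reused with data from anywhere since nothing is hardwired to a file
level lexical.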
uri
--
Uri Guttman ------ uri@stemsystems.com -------- http://www.sysarch.com --
----- Perl Code Review , Architecture, Development, Training, Support ------
--------- Gourmet Hot Cocoa Mix ---- http://bestfriendscocoa.com ---------
------------------------------
Date: Wed, 28 Oct 2009 13:38:18 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: software engineering, program construction
Message-Id: <p7ahe5db3eorvrtkisss0hqc072iab2t1c@4ax.com>
ccc31807 <cartercc@gmail.com> wrote:
>On Oct 28, 1:27 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
>> If function f() computes a data item x, and function g() needs
>> information from this data item, then f() needs to return this data item
>> and g() needs to receive it:
>>
>> g(f(....), ....);
>> or
>> my $thisresult = f(...);
>> g($thisresult);
>> or
>> f(..., $thisresult);
>> g($thisresult);
>>
>> jue
>
>Or, maybe...
>
>my %information_hash;
>%build_hash;
>%test_hash;
>&use_hash;
Most definitely not. For one, the second and third lines will give you
syntax errors.
And even if you meant to write
my %information_hash;
&build_hash;
&test_hash;
&use_hash;
then
1: why on earth are you passing @_ to those functions?
2: why aren't you passing the hash to those functions instead:
build_hash(\%information_hash);
&test_hash(\%information_hash);
&use_hash(\%information_hash);
Then it would be obvious what data those functions are processing.
Otherwise you don't know.
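A minimal sketch of that reference-passing version (the record layout and
sample values are invented for illustration):

```perl
use strict;
use warnings;

sub build_hash {
    my ($info) = @_;                      # expects a hash reference
    $info->{UG}{main}{f09} = { name => 'Alice' };   # invented record
}

sub test_hash {
    my ($info) = @_;
    die "build_hash left no records" unless %$info;
}

sub use_hash {
    my ($info) = @_;
    for my $level (sort keys %$info) {
        for my $site (sort keys %{ $info->{$level} }) {
            for my $term (sort keys %{ $info->{$level}{$site} }) {
                print "Dear $info->{$level}{$site}{$term}{name} ...\n";
            }
        }
    }
}

my %information_hash;
build_hash(\%information_hash);
test_hash(\%information_hash);
use_hash(\%information_hash);
```

The call sites now say exactly which data each function touches, and the
reference is cheap to pass no matter how large the hash grows.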
>... where %information_hash is a data structure that contains tens of
>thousands of records four layers deep, like this:
>$information_hash{$level}{$site}{$term}
>
>... and
>
>sub use_hash
Just do
my %information_hash = %{$_[0]};
and the rest of your code remains unchanged except that now you are not
operating on a global variable.
>{
> foreach my $level (keys %information_hash)
> {
> foreach my $site (keys %{$information_hash{$level}})
> {
> foreach my $term (keys %{$information_hash{$level}{$site}{$term}})
> {
> print "Dear $information_hash{$level}{$site}{$term}{'name'} ...";
> }
> }
> }
>}
>
>Frankly, it seems a lot easier to use one global hash than to either
>pass a copy to a function or pass a reference to a function.
As long as you are just installing a new shower head you can do that.
Once you start designing the plumbing for a high rise or a city block it
will bite you in your extended rear. Better get used to good practices
early. Unlearning bad habits is very hard.
>Yesterday, I completed a task that used an input file of approx 300,000
>records, analyzed the data, created 258 charts (as gifs) and printed
>the charts to a PDF document for distribution. In this case, I created
>several subroutines to shake and bake the data, and used just one
>global hash throughout. Is this so wrong?
In general: yes. If a student of mine did that we would have a very
serious talk about very basic programming principles.
jue
------------------------------
Date: Wed, 28 Oct 2009 14:03:24 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: software engineering, program construction
Message-Id: <gtbhe5t87buiu9k6sjaurfet9b6ifd8cu2@4ax.com>
ccc31807 <cartercc@gmail.com> wrote:
>On Oct 28, 3:27 pm, Ben Morrow <b...@morrow.me.uk> wrote:
>> ? If you need the keys as well then a
>>
>> while (my ($level, $level_hash) = each %information_hash) {
>>
>> loop might be more appropriate.
>
>I am using the keys to do other things, so yes, I need the keys, but
>thanks for your suggestion. I find myself doing this a lot, so I'm
>open to making it easier.
>
>> This is only because you've never been bitten by using globals when you
>> shouldn't have; probably because you've only ever written relatively
>> small programs, and never come back to a program six months later to add
>> a new feature.
>
>Okay, let's consider an evolving programming style. Suppose you wrote
>a very short script that looks like this:
>
>my %hash;
>#step_one
>open IN, '<', 'in.dat';
>while (<IN>)
>{
> chomp;
> my ($val1, $val2, $val3 ...) = split /,/;
> $hash($val1} = (name => $val2, $id => $val3 ...);
>}
>close IN;
>#step_two
>open OUT, '>', 'out.csv';
>foreach my $key (sort keys %hash)
>{
> print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
>}
>close OUT;
>exit(0);
>
>Now, suppose you rewrote it like this:
>
>my %hash;
>step_one();
>step_two();
>exit(0);
>sub step_one
>{
> open IN, '<', 'in.dat';
> while (<IN>)
> {
> chomp;
> my ($val1, $val2, $val3 ...) = split /,/;
> $hash($val1} = (name => $val2, $id => $val3 ...);
> }
> close IN;
>}
>sub step_two
>{
> open OUT, '>', 'out.csv';
> foreach my $key (sort keys %hash)
> {
> print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
> }
> close OUT;
>}
>exit(0);
I wouldn't. I would write this as
my %data; #no point in naming a hash hash
%data = get_data();
sort_data(%data);
print_data(%data);
Maybe with references as appropriate.
Then
- I know what data items those functions are working on
- I know which data items those functions are _NOT_ working on (if it is
not in the parameter list then they don't touch them)
- and I can use the same functions to process a second or third or
fourth set of data, which maybe has a different input format and
therefore requires a different get_other_data() sub, but my internal
representation is the same such that I can reuse the sort_data() and
print_data() functions.
>Ben, I could make the case that the second version is clearer and
>easier to maintain than the first version,
I wouldn't even say that.
>You are 100 percent correct. I don't know if I will ever run this
>script again. If I do, I'll certainly revise it (as I wrote it like
>version one above).
It is very hard to unlearn bad habits and even harder to refactor poorly
written code.
jue
------------------------------
Date: Wed, 28 Oct 2009 21:34:25 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: software engineering, program construction
Message-Id: <10qmr6-kat2.ln1@osiris.mauzo.dyndns.org>
Quoth ccc31807 <cartercc@gmail.com>:
> On Oct 28, 3:27 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> > ? If you need the keys as well then a
> >
> > while (my ($level, $level_hash) = each %information_hash) {
> >
> > loop might be more appropriate.
>
> I am using the keys to do other things, so yes, I need the keys, but
> thanks for your suggestion. I find myself doing this a lot, so I'm
> open to making it easier.
As a general rule when you find yourself doing the same
$great{long}{dereference}{expression}
over and over, you should find a way to put it in a variable. This is a
simple application of DRY.
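For instance (structure and values invented for illustration):

```perl
use strict;
use warnings;

# Invented structure with a deep path, as in the example expression.
my %great = (
    long => { dereference => { expression => { name => 'Alice', id => 7 } } },
);

# Grab the inner hashref once instead of spelling out the full path
# every time it is used:
my $person = $great{long}{dereference}{expression};
print "Dear $person->{name} (id $person->{id}) ...\n";
```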
> > This is only because you've never been bitten by using globals when you
> > shouldn't have; probably because you've only ever written relatively
> > small programs, and never come back to a program six months later to add
> > a new feature.
>
> Okay, let's consider an evolving programming style. Suppose you wrote
> a very short script that looks like this:
>
> my %hash;
> #step_one
> open IN, '<', 'in.dat';
> while (<IN>)
> {
> chomp;
> my ($val1, $val2, $val3 ...) = split /,/;
> $hash($val1} = (name => $val2, $id => $val3 ...);
> }
> close IN;
> #step_two
> open OUT, '>', 'out.csv';
> foreach my $key (sort keys %hash)
> {
> print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
> }
> close OUT;
> exit(0);
>
> Now, suppose you rewrote it like this:
>
> my %hash;
> step_one();
> step_two();
> exit(0);
> sub step_one
> {
> open IN, '<', 'in.dat';
> while (<IN>)
> {
> chomp;
> my ($val1, $val2, $val3 ...) = split /,/;
> $hash($val1} = (name => $val2, $id => $val3 ...);
> }
> close IN;
> }
> sub step_two
> {
> open OUT, '>', 'out.csv';
> foreach my $key (sort keys %hash)
> {
> print OUT qq("$hash{$key}{name}","$hash{$key}{name}"\n);
> }
> close OUT;
> }
> exit(0);
>
> Ben, I could make the case that the second version is clearer and
> easier to maintain than the first version, even though the second
> version breaks the rules and the first version doesn't.
I think you'd have a hard time making that case. Your sub declarations
aren't doing any more for you than the '#step 1' comments in the first
example, since the subs are still scribbling on global state.
The whole point of a subroutine is that you should be able to understand
the sub *without* looking at the rest of the code. Wherever possible,
subs should avoid touching global state, because that means that you
have to look at everything *else* in the program that touches that state
to understand what's going on. As Uri pointed out, it is often useful to
have a small amount of global state, simply because passing it around
all the time becomes tedious, but this should be kept to a minimum.
I might write that program something like this:
#!/usr/bin/perl
# ALWAYS. Even when you don't think you need to.
use warnings;
use strict;
# Give the subs meaningful names, that describe what they do.
# Pass the file to read as a parameter, so you can (say) change the
# code to read 3 files in succession if you need to.
# Have the sub return the data structure, so that when you *do* call
# it several times you can keep the results separate.
# Consider taking the input and output filenames from @ARGV, so you
# can easily rerun the program on a different data set.
my $data = read_data("in.dat");
write_csv($data, "out.csv");
sub read_data {
my ($file) = @_;
# Use a lexical instead of a global filehandle. That way you
# don't need to worry about other subs in the program that might
# be using the IN filehandle.
# Check for errors, even when you think you don't need to. Good
# habits require practice.
open my $IN, "<", $file
or die "can't read '$file': $!";
my %hash;
while (<$IN>) {
# I hope you *know* this file can be parsed like this. If
# this is supposed to be a CSV file you should use a module,
# because parsing CSV is not as trivial as it seems.
chomp;
my ($val1, $name, $id) = split /,/;
$hash{$val1} = {name => $name, id => $id};
}
return \%hash;
}
sub write_csv {
my ($data, $file) = @_;
open my $OUT, ">", $file
or die "can't write to '$file': $!";
# OK, so this *is* supposed to be CSV. Stop wasting time and
# just use Text::CSV_XS already. It'll correctly handle the
# cases like 'a name with a comma in' that you haven't thought
# about.
for my $key (sort keys %$data) {
my $person = $data->{$key};
print $OUT qq("$person->{name}","$person->{id}"\n);
}
# We have written to this filehandle, so we need to check the
# return value of close.
close $OUT or die "writing to '$file' failed: $!";
}
> What's the
> REAL difference between the two versions? And why should the
> decomposition of code into subroutines NECESSARILY require a
> functional style and variable localization?
What I wrote above isn't in 'a functional style'. Doing non-idempotent
actions like IO in a functional style is *hard*, and requires concepts
like 'monad'. See Haskell.
The point here is that there's no *point* decomposing your code into
subs unless you're going to make those subs self-contained. You aren't
gaining anything.
> > If it works, then no, it isn't 'wrong'. It is bad style, though. If you
> > had written (say) the chart-creating code as a module with functions
> > that took parameters, then when you need another set of charts tomorrow
> > you could reuse it. As it is you have to copy/paste and modify it for
> > your new set of global data structures.
>
> You are 100 percent correct. I don't know if I will ever run this
> script again. If I do, I'll certainly revise it (as I wrote it like
> version one above).
Hmm, you're still thinking about 'this script'. You need to be thinking
'What will I be doing tomorrow, next week, next year? Can I take some of
this and make it reusable, to save myself some work tomorrow?'.
> > That may be practical when your programs are only ever run once, but
> > quickly becomes less so when you have many programs in long-term use
> > with almost-but-not-quite the same subroutine in: when you find a bug,
> > how are you going to find all the places you've copy/pasted it to
> > correct it?
>
> Again, I agree totally. However, I'm a lot more interested in the
> architecture of a script than the other issues that have been
> mentioned. With this particular issue, I try my best to follow the DRY
> practice, and the second or third time I write the same thing, I often
> will place it in a function and call it from there.
What about the second or third time you write a given function? Do you
place it in a module and import it from there? Once you start doing that
regularly, you'll start to see why relying on global state just makes
things harder in the long run.
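A sketch of the kind of thing I mean (the package and sub names are
invented; in real use the package would live in its own ReportUtils.pm
file and be loaded with 'use ReportUtils', and Text::CSV_XS is still the
right tool for real CSV):

```perl
use strict;
use warnings;

package ReportUtils;

# Quote each field and double any embedded quotes, then join with commas.
sub csv_line {
    my @fields = @_;
    return join(',', map { (my $f = $_) =~ s/"/""/g; qq("$f") } @fields) . "\n";
}

package main;

# The function takes parameters and touches no global state, so any
# program that imports it can reuse it unchanged:
print ReportUtils::csv_line('a name, with a comma', 42);
```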
Ben
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 2657
***************************************