[30710] in Perl-Users-Digest
Perl-Users Digest, Issue: 1955 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Oct 31 21:09:50 2008
Date: Fri, 31 Oct 2008 18:09:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Fri, 31 Oct 2008 Volume: 11 Number: 1955
Today's topics:
Re: crisis Perl sln@netherlands.com
Re: How to overwrite or mock -e for testing? <nospam-abuse@ilyaz.org>
Re: How to overwrite or mock -e for testing? <bik.mido@tiscalinet.it>
Re: out of memory <glex_no-spam@qwest-spam-no.invalid>
Re: out of memory <cwilbur@chromatico.net>
Re: out of memory <smallpond@juno.com>
Re: out of memory xhoster@gmail.com
Re: out of memory <jurgenex@hotmail.com>
Re: out of memory <jurgenex@hotmail.com>
Re: out of memory sln@netherlands.com
Re: out of memory sln@netherlands.com
Re: out of memory <jwkenne@attglobal.net>
Re: out of memory <jwkenne@attglobal.net>
Re: Profiling using DProf <brian.d.foy@gmail.com>
Re: sharing perl code between directories <tim@burlyhost.com>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Fri, 31 Oct 2008 21:56:08 GMT
From: sln@netherlands.com
Subject: Re: crisis Perl
Message-Id: <4kumg4hr8iojg6mk5vlpddjfoimg3kfn8h@4ax.com>
On Fri, 31 Oct 2008 16:27:03 GMT, Juha Laiho <Juha.Laiho@iki.fi> wrote:
>Charlton Wilbur <cwilbur@chromatico.net> said:
>>>>>>> "cc" == cartercc <cartercc@gmail.com> writes:
>>
>> cc> How do you deal with a manager who tells you to leave a script
>> cc> alone, when you know good and well that it's such poorly written
>> cc> code that it will be extremely hard to maintain, and perhaps
>> cc> will be buggy as well? Getting another job isn't an option, and
>> cc> firing the manager isn't an option, either.
>>
>>Educate the manager. Keeping shoddy code in production is a gamble:
>>you're gambling that the cost of fixing the code *now* is higher than
>>the cost of fixing it when it breaks or when there's a crisis.
>
>Further, making modifications to a clean code base is cheaper than
>making the same modifications to a degraded code base.
>
>The common term for this is "design debt"; see articles at
>http://jamesshore.com/Articles/Business/Software%20Profitability%20Newsletter/Design%20Debt.html
>http://www.ri.fi/web/en/technology-and-research/design-debt
>
>As long as the code does what it is intended to do now, it is, strictly
>speaking, not a technical issue (which, I guess, would make it belong to
>your domain), but instead it is an economic issue (and as such belongs
>to your manager's domain). What you could do would be to try and see
>whether your manager understands the "debt" aspect of the situation
>his decision is creating, and how much of this debt he is willing
>to carry (and also pay the running interest for -- in more laborious
>changes to the code).
^^^^^^^
This is not a good argument. The best you can do is what was intended.
Nobody can get paranoid about it and pathologically program placeholders
for every what-if.
Most cost-conscious companies don't hire or keep working program managers,
i.e. somebody who wants to program, let alone someone who will lay out the
details for the real workers.
As a result, technical knowledge usually works out to be:
programmer > manager > director > vice > president > CEO > Bill Gates
Unfortunately, the ideas and the blame run in the opposite direction.
Shit does roll downhill.
Lastly, the pay grade runs in the opposite direction as well, where
it is cheaper to toss the programmer and justify starting all over again.
If you work in such a place (I have, in many), shut your mouth, never admit
a problem, and fix their fuck-up fast. If you have a programmer who blames
everything on you, beat the crap out of them and see if that helps next time.
sln
------------------------------
Date: Fri, 31 Oct 2008 20:50:28 +0000 (UTC)
From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: How to overwrite or mock -e for testing?
Message-Id: <gefr2k$1m1b$1@agate.berkeley.edu>
[A complimentary Cc of this posting was NOT [per weedlist] sent to
Michele Dondi
<bik.mido@tiscalinet.it>], who wrote in article <ebtlg4lhbkhl0g5rueoornbjhjderlkvqu@4ax.com>:
> >that the inverse implication does not hold[*].
... And where is [*]?
When I (first) implemented prototype() on CORE::***, I used an existing
table in the lexer, and just translated the semantics of this table into
the semantics of prototype(). I did only a very quick scan through the
table to check its validity. The lexer has too many special cases
that massage the argument before the table is consulted, and I could
have missed some...
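(For the curious, the user-visible side of that table can be probed
directly; a small sketch, and the output of course depends on your perl:

    use strict;
    use warnings;

    for my $name (qw(open require readpipe system)) {
        my $proto = eval { prototype("CORE::$name") };
        if    ($@)             { print "CORE::$name: $@" }
        elsif (defined $proto) { print "CORE::$name has prototype ($proto)\n" }
        else                   { print "CORE::$name has no expressible prototype\n" }
    }

system() is documented to yield undef here, since its arguments cannot
be expressed as a prototype.)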
> >Anyway, as they say, the proof of the pudding is in the eating: the
> >above in fact would imply that -X functions are *not* overridable.
This is what I would like to change if I ever work on Perl again: it
must have a concept of IFS in the core...
> somebody made me notice (see <http://perlmonks.org/?node_id=720682>,)
> that qw// is now overridable.
You managed to give me several very deep breaths... You meant qx()
here, right?
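For the archives, a minimal sketch of what that buys you, assuming a
perl new enough that qx// and backticks dispatch through readpipe():

    # Mock qx// and backticks for testing by overriding readpipe().
    # The override has to be installed before any qx// is compiled.
    BEGIN {
        *CORE::GLOBAL::readpipe = sub {
            my ($cmd) = @_;
            return "pretend output of: $cmd\n";   # canned result, no shell is run
        };
    }

    print `ls -l`;        # prints the canned string
    print qx/hostname/;   # ditto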
Yours,
Ilya
------------------------------
Date: Fri, 31 Oct 2008 22:13:34 +0100
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: How to overwrite or mock -e for testing?
Message-Id: <p2tmg49ohjh6u3rrtmu9o7oobrh4btdp6e@4ax.com>
On Fri, 31 Oct 2008 20:50:28 +0000 (UTC), Ilya Zakharevich
<nospam-abuse@ilyaz.org> wrote:
>> >that the inverse implication does not hold[*].
>
>... And where is [*]?
It was in the twice quoted message. Pasted hereafter:
: [*] E.g. require() returns undef() but I have *seen* it duly
: overridden.
>> >Anyway, as they say, the proof of the pudding is in the eating: the
>> >above in fact would imply that -X functions are *not* overridable.
>
>This is what I would like to change if I ever work on Perl again: it
>must have a concept of IFS in the core...
And... What is IFS supposed to mean?
>> somebody made me notice (see <http://perlmonks.org/?node_id=720682>,)
>> that qw// is now overridable.
>
>You managed to give me several very deep breaths... You meant qx()
>here, right?
Oops! Well, of course. Apologies for the "several very deep breaths,"
or... was it a *positive* experience, perhaps? ;)
Michele
--
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
.'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,
------------------------------
Date: Fri, 31 Oct 2008 13:38:26 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: out of memory
Message-Id: <490b50a2$0$89392$815e3792@news.qwest.net>
friend.05@gmail.com wrote:
> On Oct 31, 1:41 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
> wrote:
>> On Oct 31, 1:22 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
>>
>>> "friend...@gmail.com" <hirenshah...@gmail.com> wrote:
>>>> if I output as text file and read it again later on will be able to
>>>> search based on key. (I mean when read it again I will be able to use
>>>> it as hash or not )
>>> That depends upon what you do with the data when reading it in again. Of
>>> course you can construct hash, but then you wouldn't have gained
>>> anything. Why would this hash be any smaller than the one you were
>>> trying to construct the first time?
>>> Your current approach (put everything into a hash) and your current
>>> hardware are incompatible.
>>> Either get larger hardware (expensive) or rethink your basic approach,
>>> e.g. use a database system or compute your desired results on the fly
>>> while parsing through the file or write intermediate results to a file
>>> in a format that later can be processed line by line or by any other of
>>> the gazillion ways of preserving RAM. Don't you learn those techniques
>>> in basic computer science classes any more?
>>> jue
>> output to a file and using it again will take lot of time. It will be
>> very slow.
>>
>> will be helpful in speed if I use DB_FILE module
>>
>
> here is what I am trying to do.
>
> I have two large files. I will read one file and see if that is also
> present in second file. I also need count how many time it is appear
> in both the file. And according I do other processing.
>
> so if I process line by line both the file then it will be like (eg.
> file1 has 10 line and file2 has 10 line. for each line file1 it will
> loop 10 times. so total 100 loops.) I am dealing millions of lines so
> this approach will be very slow.
Maybe you shouldn't do your own math. It'd be 10 reads for each file,
so 20 in total.
>
>
> this is my current code. It runs fine with small file.
>
use strict;
use warnings;
>
>
> open ($INFO, '<', $file) or die "Cannot open $file :$!\n";
open( my $INFO, ...
> while (<$INFO>)
> {
> (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id,
> undef) = split('\|');
my( $time, $cli_ip, $ser_ip, $id ) = (split( /\|/ ))[3,4,5,7];
> push @{$time_table{"$cli_ip|$dns_id"}}, $time;
> }
close( $INFO );
>
>
> open ($INFO_PRI, '<', $pri_file) or die "Cannot open $pri_file :$!
> \n";
open( my $INFO_PRI, ...
> while (<$INFO_PRI>)
> {
> (undef, undef, undef, $pri_time, $pri_cli_ip, undef, undef,
> $pri_id, undef, $query, undef) = split('\|');
my( $pri_time, $pri_cli_ip, $pri_id, $query ) = (split( /\|/ ))[3,4,7,9];
> $pri_ip_id_table{"$pri_cli_ip|$pri_id"}++;
> push @{$pri_time_table{"$pri_cli_ip|$pri_id"}}, $pri_time;
> }
Read one file into memory (a hash), if possible. Then, as you're
processing the second one, either push some data aside to process later
or process it on the spot whenever a line matches your criteria. There's
no need to store both files in memory.
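Something along these lines (an untested sketch; the field positions and
the "do something on a match" part are guesses based on your code):

  use strict;
  use warnings;

  my ($file, $pri_file) = @ARGV;

  my %seen;   # count of "cli_ip|id" keys from file 1

  open my $INFO, '<', $file or die "Cannot open $file: $!";
  while (<$INFO>) {
      my ($time, $cli_ip, $ser_ip, $id) = (split /\|/)[3, 4, 5, 7];
      $seen{"$cli_ip|$id"}++;
  }
  close $INFO;

  open my $INFO_PRI, '<', $pri_file or die "Cannot open $pri_file: $!";
  while (<$INFO_PRI>) {
      my ($pri_time, $pri_cli_ip, $pri_id, $query) = (split /\|/)[3, 4, 7, 9];
      my $key = "$pri_cli_ip|$pri_id";
      next unless $seen{$key};     # only keys that also appeared in file 1
      # ... do the "other processing" for $key here, one line at a time ...
  }
  close $INFO_PRI;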
>
> @pri_ip_id_table_ = keys(%pri_ip_id_table);
>
> for($i = 0; $i < @pri_ip_id_table_; $i++) #file 2
Ugh.. the keys of %pri_ip_id_table are 'something|somethingelse';
how that works with that C-style for loop is probably not what one
would expect.
> {
> if($time_table{"$pri_ip_dns_table_[$i]"}) #chk if it
> is there in file 1
Really? Where is pri_ip_dns_table_ defined?
> so for above example which I approach will be best ?
------------------------------
Date: Fri, 31 Oct 2008 14:32:05 -0400
From: Charlton Wilbur <cwilbur@chromatico.net>
Subject: Re: out of memory
Message-Id: <86tzasobtm.fsf@mithril.chromatico.net>
>>>>> "JE" == Jürgen Exner <jurgenex@hotmail.com> writes:
JE> Don't you learn those techniques in basic computer science
JE> classes any more?
The assumption that someone who is getting paid to program has had -- or
even has had any interest in -- computer science classes gets less
tenable with each passing day.
Charlton
--
Charlton Wilbur
cwilbur@chromatico.net
------------------------------
Date: Fri, 31 Oct 2008 12:27:06 -0700 (PDT)
From: smallpond <smallpond@juno.com>
Subject: Re: out of memory
Message-Id: <ed14965e-e361-4fda-8843-5810ccb61d2d@d42g2000prb.googlegroups.com>
On Oct 31, 1:59 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
wrote:
>
> here is what I am trying to do.
>
> I have two large files. I will read one file and see if that is also
> present in second file. I also need count how many time it is appear
> in both the file. And according I do other processing.
>
> so if I process line by line both the file then it will be like (eg.
> file1 has 10 line and file2 has 10 line. for each line file1 it will
> loop 10 times. so total 100 loops.) I am dealing millions of lines so
> this approach will be very slow.
>
This problem was solved 50 years ago. You sort the two files and then
take one pass through both, comparing records. Why are you reinventing
the wheel?
--S
------------------------------
Date: 31 Oct 2008 19:58:33 GMT
From: xhoster@gmail.com
Subject: Re: out of memory
Message-Id: <20081031155903.714$7U@newsreader.com>
"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
> > > Either get larger hardware (expensive) or rethink your basic
> > > approach, e.g. use a database system or compute your desired results
> > > on the fly while parsing through the file or write intermediate
> > > results to a file in a format that later can be processed line by
> > > line or by any other of the gazillion ways of preserving RAM. Don't
> > > you learn those techniques in basic computer science classes any
> > > more?
> >
> > > jue
> >
> > output to a file and using it again will take lot of time. It will be
> > very slow.
That depends on how you do it.
> >
> > will be helpful in speed if I use DB_FILE module
That depends on what you are comparing it to. Compared to an in memory
hash, DB_File makes things slower, not faster. Except in the sense that
something which runs out of memory and dies before completing the job is
infinitely slow, so preventing that is, in a sense, faster. One exception
I know of would be if one of the files is constant, so it only needs to be
turned into a DB_File once, and if only a small fraction of the keys are
ever probed by the process driven by the other file. Then it could be faster.
Also, DB_File doesn't take nested structures, so you would have to flatten
your HoA. Once you flatten it, it might fit in memory anyway.
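If you do go the DB_File route, flattening the HoA can be as simple as
joining the timestamps into one string per key. A rough sketch (the
modules are real, the field layout is guessed from the posted code):

  use strict;
  use warnings;
  use DB_File;
  use Fcntl qw(O_RDWR O_CREAT);

  # On-disk hash: key is "cli_ip|id", value is a comma-joined list of times.
  tie my %time_table, 'DB_File', 'times.db', O_RDWR | O_CREAT, 0644, $DB_HASH
      or die "Cannot tie times.db: $!";

  while (<>) {
      my ($time, $cli_ip, $ser_ip, $id) = (split /\|/)[3, 4, 5, 7];
      my $key = "$cli_ip|$id";
      $time_table{$key} = defined $time_table{$key}
                        ? "$time_table{$key},$time"
                        : $time;
  }

  # Later: my @times = split /,/, $time_table{$some_key};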
>
> here is what I am trying to do.
>
> I have two large files. I will read one file and see if that is also
> present in second file. I also need count how many time it is appear
> in both the file. And according I do other processing.
If you *only* need to count, then you don't need the HoA in the first
place.
> so if I process line by line both the file then it will be like (eg.
> file1 has 10 line and file2 has 10 line. for each line file1 it will
> loop 10 times. so total 100 loops.) I am dealing millions of lines so
> this approach will be very slow.
I don't think anyone was recommending that you do a Cartesian join on the
files. You could break the data up into files by hashing on IP address and
making a separate file for each hash value. For each hash bucket you would
have two files, one from each starting file, and they could be processed
together with your existing script. Or you could reformat the two files
and then sort them jointly, which would group all the like keys together
for you for later processing.
>
> @pri_ip_id_table_ = keys(%pri_ip_id_table);
For a very large hash, when you have memory issues, you should iterate
over it with "each" rather than building a list of its keys.
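I.e., roughly (using the hash names from the posted code):

  while ( my ($key, $count) = each %pri_ip_id_table ) {
      if ( $time_table{$key} ) {
          # do some processing with $key and $count,
          # without ever building the full key list in memory
      }
  }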
>
> for($i = 0; $i < @pri_ip_id_table_; $i++) #file 2
> {
> if($time_table{"$pri_ip_dns_table_[$i]"})
> {
> #do some processing.
Could you "do some processing" incrementally, as each line from file 2 is
encountered, rather than having to load all keys of file2 into memory
at once?
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Fri, 31 Oct 2008 13:09:23 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: out of memory
Message-Id: <ajomg4pn7tnqfkhntv0tp1b9a9n56pe5tf@4ax.com>
"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>I have two large files. I will read one file and see if that is also
>present in second file.
The way you wrote this means you are checking if file A is a subset of
file B. However, I have a strong feeling you are talking about the
records in each file, not the files themselves.
>I also need count how many time it is appear
>in both the file. And according I do other processing.
>so if I process line by line both the file then it will be like (eg.
>file1 has 10 line and file2 has 10 line. for each line file1 it will
>loop 10 times. so total 100 loops.) I am dealing millions of lines so
>this approach will be very slow.
So you need to pre-process your data.
One possibility: read only the smaller file into a hash. Then you can
compare the larger file line by line against this hash. This is a linear
algorithm. Of course this only works if at least the relevant data from
the smaller file will fit into RAM.
Another approach: sort both input files. There are many sorting
algorithms around, including those that sort completely on disk and
require very little RAM; they were very popular back when 32kB was a
lot of memory. Then you can walk through both files line by line in
parallel, requiring only a tiny bit of RAM.
Depending upon the sorting algorithm this would be O(n log n) or
somewhat worse.
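A sketch of that parallel walk, assuming each file has already been
reduced to one sort key per line and sorted (e.g. with the system sort):

  use strict;
  use warnings;

  my ($file_a, $file_b) = @ARGV;
  open my $A, '<', $file_a or die "Cannot open $file_a: $!";
  open my $B, '<', $file_b or die "Cannot open $file_b: $!";

  my $line_a = <$A>;
  my $line_b = <$B>;
  while (defined $line_a and defined $line_b) {
      chomp(my $ka = $line_a);
      chomp(my $kb = $line_b);
      if    ($ka lt $kb) { $line_a = <$A>; }
      elsif ($ka gt $kb) { $line_b = <$B>; }
      else {
          print "in both: $ka\n";   # key present in both files;
                                    # if keys can repeat, count the run
                                    # on each side here instead
          $line_a = <$A>;
          $line_b = <$B>;
      }
  }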
Yet another option: put your relevant data into a database and use
database operators to extract the information you want, in your case a
simple intersection: all records that are in both A and B. Database
systems are optimized to handle large sets of data efficiently.
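For the database route, a rough sketch with DBI and DBD::SQLite (both on
CPAN; the table layout and field positions are made up to match the
split() in the posted code):

  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
                         { RaiseError => 1, AutoCommit => 0 });

  $dbh->do('CREATE TABLE IF NOT EXISTS a (k TEXT)');
  $dbh->do('CREATE TABLE IF NOT EXISTS b (k TEXT)');

  # Load file A (do the same for file B into table b).
  my $ins = $dbh->prepare('INSERT INTO a (k) VALUES (?)');
  open my $A, '<', 'file_a.txt' or die "file_a.txt: $!";
  while (<$A>) {
      my ($cli_ip, $id) = (split /\|/)[4, 7];
      $ins->execute("$cli_ip|$id");
  }
  $dbh->commit;

  # Intersection, with per-key counts from each side.
  my $sth = $dbh->prepare(q{
      SELECT a.k,
             COUNT(DISTINCT a.rowid) AS in_a,
             COUNT(DISTINCT b.rowid) AS in_b
      FROM a JOIN b ON a.k = b.k
      GROUP BY a.k
  });
  $sth->execute;
  while (my ($key, $in_a, $in_b) = $sth->fetchrow_array) {
      # do the per-key processing here
  }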
>this is my current code. It runs fine with small file.
Well, that is great. But it seems you still don't believe me when I say
that your problem cannot be fixed by a little tweak to your existing
code. Any gain you may get by storing a smaller data item or similar
will very soon be eaten up by larger data sets.
THIS IS NOT GOING TO WORK. YOU HAVE TO RETHINK YOUR APPROACH AND CHOOSE
A DIFFERENT STRATEGY/ALGORITHM!
jue
------------------------------
Date: Fri, 31 Oct 2008 13:18:46 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: out of memory
Message-Id: <qrpmg45btv3q1kk1v0l4r246m68oenngp1@4ax.com>
Jürgen Exner <jurgenex@hotmail.com> wrote:
>"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>>I have two large files. I will read one file and see if that is also
>>present in second file.
>
>The way you wrote this means you are checking if file A is a subset of
>file B. However, I have a strong feeling you are talking about the
>records in each file, not the files themselves.
>
>>I also need count how many time it is appear
>>in both the file. And according I do other processing.
>
>>so if I process line by line both the file then it will be like (eg.
>>file1 has 10 line and file2 has 10 line. for each line file1 it will
>>loop 10 times. so total 100 loops.) I am dealing millions of lines so
>>this approach will be very slow.
>
>So you need to pre-process your data.
>
>One possibility: read only the smaller file into a hash. Then you can
>compare the larger file line by line against this hash. This is a linear
>algorithm. Of course this only works if at least the relevant data from
>the smaller file will fit into RAM.
>
>Another approach: sort both input files. There are many sorting
>algorithms around, including those that sort completely on disk and
>require very little RAM; they were very popular back when 32kB was a
>lot of memory. Then you can walk through both files line by line in
>parallel, requiring only a tiny bit of RAM.
>Depending upon the sorting algorithm this would be O(n log n) or
>somewhat worse.
>
>Yet another option: put your relevant data into a database and use
>database operators to extract the information you want, in your case a
>simple intersection: all records that are in both A and B. Database
>systems are optimized to handle large sets of data efficiently.
I forgot one other common approach: bucketize your data.
Create buckets of IPs or IDs or whatever criterion works for your case.
Then sort the data into 20 or 50 or 100 individual buckets (i.e. files)
for each of your input files, and then compare bucket x from file A with
bucket x from file B.
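A sketch of the splitting step (the bucket count and the key field are
arbitrary choices here):

  use strict;
  use warnings;

  my $buckets = 50;
  my @out;
  for my $n (0 .. $buckets - 1) {
      open $out[$n], '>', "fileA.bucket.$n" or die "bucket $n: $!";
  }

  open my $IN, '<', 'fileA.txt' or die "fileA.txt: $!";
  while (my $line = <$IN>) {
      my $ip = (split /\|/, $line)[4];           # same field as the posted code
      $ip = '' unless defined $ip;
      my $n = unpack('%32C*', $ip) % $buckets;   # cheap, stable checksum of the key
      print { $out[$n] } $line;
  }
  close $_ for $IN, @out;

  # Repeat for fileB, then compare fileA.bucket.$n against fileB.bucket.$n
  # with the existing hash-based script, one bucket pair at a time.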
jue
------------------------------
Date: Fri, 31 Oct 2008 22:08:10 GMT
From: sln@netherlands.com
Subject: Re: out of memory
Message-Id: <mc0ng497gq9q6agh6uh9mirgg2i1mqd1ir@4ax.com>
On Fri, 31 Oct 2008 14:32:05 -0400, Charlton Wilbur <cwilbur@chromatico.net> wrote:
>>>>>> "JE" == Jürgen Exner <jurgenex@hotmail.com> writes:
>
> JE> Don't you learn those techniques in basic computer science
> JE> classes any more?
>
>The assumption that someone who is getting paid to program has had -- or
>even has had any interest in -- computer science classes gets less
>tenable with each passing day.
>
>Charlton
Well said.. that should be its own thread.
sln
------------------------------
Date: Fri, 31 Oct 2008 23:49:19 GMT
From: sln@netherlands.com
Subject: Re: out of memory
Message-Id: <ss5ng49gaa63ct7j3q4ucetp1fkc0esksi@4ax.com>
On Fri, 31 Oct 2008 13:09:23 -0700, Jürgen Exner <jurgenex@hotmail.com> wrote:
>"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>>I have two large files. I will read one file and see if that is also
>>present in second file.
>
>The way you wrote this means you are checking if file A is a subset of
>file B. However, I have a strong feeling you are talking about the
>records in each file, not the files themselves.
>
>>I also need count how many time it is appear
>>in both the file. And according I do other processing.
>
>>so if I process line by line both the file then it will be like (eg.
>>file1 has 10 line and file2 has 10 line. for each line file1 it will
>>loop 10 times. so total 100 loops.) I am dealing millions of lines so
>>this approach will be very slow.
>
>So you need to pre-process your data.
>
>One possibility: read only the smaller file into a hash. Then you can
>compare the larger file line by line against this hash. This is a linear
>algorithm. Of course this only works if at least the relevant data from
>the smaller file will fit into RAM.
>
>Another approach: sort both input files. There are many sorting
>algorithms around, including those that sort completely on disk and
>require very little RAM; they were very popular back when 32kB was a
>lot of memory. Then you can walk through both files line by line in
>parallel, requiring only a tiny bit of RAM.
>Depending upon the sorting algorithm this would be O(n log n) or
>somewhat worse.
>
>Yet another option: put your relevant data into a database and use
>database operators to extract the information you want, in your case a
>simple intersection: all records that are in both A and B. Database
>systems are optimized to handle large sets of data efficiently.
>
>>this is my current code. It runs fine with small file.
>
>Well, that is great. But it seems you still don't believe me when I say
>that your problem cannot be fixed by a little tweak to your existing
>code. Any gain you may get by storing a smaller data item or similar
>will very soon be eaten up by larger data sets.
>THIS IS NOT GOING TO WORK. YOU HAVE TO RETHINK YOUR APPROACH AND CHOOSE
>A DIFFERENT STRATEGY/ALGORITHM!
>
>jue
He cannot get past the idea of 'millions' of lines in a file, even
though he states the items of interest. He won't think in terms of
items, just the millions of lines.
In today's large-scale data mining, there are billions of lines to
consider. Of course the least common denominator reduces that down
to billions of items.
Like a hash, the data can be separated into alphabetically sequenced
files, sized to the available memory (usually 16 gigabytes), then
reduced exponentially until the desired form is achieved.
But his outlook is panicky and without resolve. The world is coming
to an end for him and he would like to share it with the world.
sln
------------------------------
Date: Fri, 31 Oct 2008 20:24:55 -0400
From: John W Kennedy <jwkenne@attglobal.net>
Subject: Re: out of memory
Message-Id: <490ba1d7$0$4970$607ed4bc@cv.net>
smallpond wrote:
> On Oct 31, 1:59 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
> wrote:
>
>> here is what I am trying to do.
>>
>> I have two large files. I will read one file and see if that is also
>> present in second file. I also need count how many time it is appear
>> in both the file. And according I do other processing.
>>
>> so if I process line by line both the file then it will be like (eg.
>> file1 has 10 line and file2 has 10 line. for each line file1 it will
>> loop 10 times. so total 100 loops.) I am dealing millions of lines so
>> this approach will be very slow.
>>
>
>
> This problem was solved 50 years ago.
At least 80; the IBM 077 punched-card collator came out in 1937.
--
John W. Kennedy
"Only an idiot fights a war on two fronts. Only the heir to the
throne of the kingdom of idiots would fight a war on twelve fronts"
-- J. Michael Straczynski. "Babylon 5", "Ceremonies of Light and Dark"
------------------------------
Date: Fri, 31 Oct 2008 20:32:04 -0400
From: John W Kennedy <jwkenne@attglobal.net>
Subject: Re: out of memory
Message-Id: <490ba384$0$4965$607ed4bc@cv.net>
John W Kennedy wrote:
> smallpond wrote:
>> On Oct 31, 1:59 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
>> wrote:
>>
>>> here is what I am trying to do.
>>>
>>> I have two large files. I will read one file and see if that is also
>>> present in second file. I also need count how many time it is appear
>>> in both the file. And according I do other processing.
>>>
>>> so if I process line by line both the file then it will be like (eg.
>>> file1 has 10 line and file2 has 10 line. for each line file1 it will
>>> loop 10 times. so total 100 loops.) I am dealing millions of lines so
>>> this approach will be very slow.
>>>
>>
>>
>> This problem was solved 50 years ago.
>
> At least 80; the IBM 077 punched-card collator came out in 1937.
Arrggghhhh! Make that 70 (or 71), of course.
--
John W. Kennedy
"Only an idiot fights a war on two fronts. Only the heir to the
throne of the kingdom of idiots would fight a war on twelve fronts"
-- J. Michael Straczynski. "Babylon 5", "Ceremonies of Light and Dark"
------------------------------
Date: Fri, 31 Oct 2008 15:12:30 -0500
From: brian d foy <brian.d.foy@gmail.com>
Subject: Re: Profiling using DProf
Message-Id: <311020081512305465%brian.d.foy@gmail.com>
In article
<5b468b5e-d500-4204-a6cc-9526a411b0d5@p31g2000prf.googlegroups.com>,
howa <howachen@gmail.com> wrote:
> I am using the command :
>
> perl -d:DProf index.cgi
DProf is rather long in the tooth. The new hotness is Devel::NYTProf.
It has much nicer (and layered) reports that record at a much finer
granularity. :)
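Rough usage, assuming you install Devel::NYTProf from CPAN:

    perl -d:NYTProf index.cgi    # writes ./nytprof.out
    nytprofhtml                  # renders it as HTML under ./nytprof/

Then point a browser at nytprof/index.html.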
------------------------------
Date: Fri, 31 Oct 2008 11:54:35 -0700
From: Tim Greer <tim@burlyhost.com>
Subject: Re: sharing perl code between directories
Message-Id: <LvIOk.1582$TK2.153@newsfe01.iad>
adam.at.prisma wrote:
> I have a directory tree in our code repository containing Perl code.
> AppOne and AppTwo both use some of the same functions and as the copy-
> paste method will get out of hand really soon, I want to create a
> "Common" directory that is visible to the other two (and likely more
> than 2 in the future).
>
> I've used O'Reillys excellent "Programming Perl" but I don't get how I
> make the code in the "Common" directory available to the other two?
> Btw, this is Windows and I am using "Strawberry Perl".
>
> +---admin
> |
> +---AppOne
> | AppOne.pl
> |
> +---AppTwo
> | AppTwoFileOne.pl
> | AppTwoFileTwo.pl
> |
> |
> \---Common
> CommonCode.pm
>
> BR,
> Adam
So, you want to include the module, perhaps, from the AppOne and AppTwo
directories? Or you want to share code between the .pl files? Or? By
the sound of it, you need to run perldoc on lib and/or perlmod.
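Something along these lines usually does it (a sketch; it assumes
Common/CommonCode.pm starts with "package CommonCode;" and ends with a
true value, and the function name is made up):

  # In AppOne/AppOne.pl (same idea for the AppTwo scripts):
  use strict;
  use warnings;
  use FindBin qw($Bin);         # directory holding the running script
  use lib "$Bin/../Common";     # make ../Common visible to 'use'
  use CommonCode;               # loads Common/CommonCode.pm

  CommonCode::some_function();  # call whatever the module provides

perldoc lib and perldoc perlmod cover the details.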
--
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting. 24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 1955
***************************************