[30709] in Perl-Users-Digest
Perl-Users Digest, Issue: 1954 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Oct 31 14:09:58 2008
Date: Fri, 31 Oct 2008 11:09:19 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Fri, 31 Oct 2008 Volume: 11 Number: 1954
Today's topics:
__DATA__ filehandle, duplicate reads. <shrike@cyberspace.org>
Re: crisis Perl <Juha.Laiho@iki.fi>
Re: How to overwrite or mock -e for testing? <bik.mido@tiscalinet.it>
out of memory <hirenshah.05@gmail.com>
Re: out of memory <jurgenex@hotmail.com>
Re: out of memory <Juha.Laiho@iki.fi>
Re: out of memory <hirenshah.05@gmail.com>
Re: out of memory xhoster@gmail.com
Re: out of memory <jurgenex@hotmail.com>
Re: out of memory <hirenshah.05@gmail.com>
Re: out of memory <hirenshah.05@gmail.com>
Re: Perl - Gnuplot Program Oct. 29, 2008 <edgrsprj@ix.netcom.com>
Re: Perl - Gnuplot Program Oct. 29, 2008 <sfeam@users.sourceforge.net>
Profiling using DProf <howachen@gmail.com>
Re: Profiling using DProf xhoster@gmail.com
Re: sharing perl code between directories <rkb@i.frys.com>
Re: sharing perl code between directories <tadmc@seesig.invalid>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Fri, 31 Oct 2008 08:25:38 -0700 (PDT)
From: "shrike@cyberspace.org" <shrike@cyberspace.org>
Subject: __DATA__ filehandle, duplicate reads.
Message-Id: <c1ea1450-93e0-4e30-9638-9823901dbf4d@g17g2000prg.googlegroups.com>
This is perl, v5.8.8 built for cygwin-thread-multi-64int
The following code is presenting some problems. It reads the
__DATA__ filehandle of the inheriting class into a value, which is
accessed as a reference. The reference is contained within a
property of the object.
This is being used in conjunction with Net::Server::PreForkSimple.
When this code is used without sockets, it works normally. When it is
used with sockets, I get a duplicate read of the DATA filehandle, such
that if __DATA__ contained "asdf" I would get:
asdf
asdf
This only happens during the first execution, so:
$self->loadfh() ; # produces a warning:
asdf
asdf
$self->loadfh() ;
$self->loadfh() ;
produces warnings:
asdf
asdf
asdf
Note that the function checks the value of $$Tref before reading it,
so there should never be a second read, period.
__________________________________________
sub loadfh {
    my $self = shift ;
    my $Tref = $self->{'__TEMPLATE__'} ;
    return 1 if ($$Tref =~ /\w+/) ;
    my $class = ref($self) ;
    my $fh ;
    my $localDATA = $class . '::DATA';
    warn(fileno($localDATA)) ; # ---------------------> warns "6" once
    my $fhblock = '$fh = ' . '*' . $localDATA . ';' ;
    eval($fhblock);
    while(<$fh>) {
        $$Tref .= $_ ;
    }
    warn $$Tref ; # ------> contains the content of __DATA__, twice
    close($fh) ;
    return 0 ;
}
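For comparison, here is a sketch of the same routine without the string
eval, pinning the start of the DATA section with tell/seek so repeated
calls read from the same offset (the __TEMPLATE__ key and class layout are
assumed from the code above; this is not a diagnosis of the duplicate read
under Net::Server::PreForkSimple):
my %data_start;   # remember where each class's DATA section begins
sub loadfh {
    my $self  = shift;
    my $Tref  = $self->{'__TEMPLATE__'};
    return 1 if $$Tref =~ /\w+/;
    my $class = ref $self;
    my $fh    = do { no strict 'refs'; \*{ $class . '::DATA' } };
    # Record the starting offset once, then seek back to it on every call,
    # so forked children and repeated calls read identical content.
    $data_start{$class} = tell $fh unless exists $data_start{$class};
    seek $fh, $data_start{$class}, 0;
    local $/;                 # slurp the rest of the handle
    $$Tref = <$fh>;
    return 0;
}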
------------------------------
Date: Fri, 31 Oct 2008 16:27:03 GMT
From: Juha Laiho <Juha.Laiho@iki.fi>
Subject: Re: crisis Perl
Message-Id: <gefbdt$2rn$1@ichaos2.ichaos-int>
Charlton Wilbur <cwilbur@chromatico.net> said:
>>>>>> "cc" == cartercc <cartercc@gmail.com> writes:
>
> cc> How do you deal with a manager who tells you to leave a script
> cc> alone, when you know good and well that it's such poorly written
> cc> code that it will be extremely hard to maintain, and perhaps
> cc> will be buggy as well? Getting another job isn't an option, and
> cc> firing the manager isn't an option, either.
>
>Educate the manager. Keeping shoddy code in production is a gamble:
>you're gambling that the cost of fixing the code *now* is higher than
>the cost of fixing it when it breaks or when there's a crisis.
Further, making modifications to a clean code base is cheaper than
making the same modifications to a degraded code base.
The common term for this is "design debt"; see articles at
http://jamesshore.com/Articles/Business/Software%20Profitability%20Newsletter/Design%20Debt.html
http://www.ri.fi/web/en/technology-and-research/design-debt
As long as the code does what it is intended to do now, it is, strictly
speaking, not a technical issue (which, I guess, would make it belong to
your domain), but an economic one (and as such belongs to your manager's
domain). What you could do is try to see whether your manager understands
the "debt" aspect of the situation his decision is creating, and how much
of this debt he is willing to carry (and also pay the running interest
for -- in more laborious changes to the code).
--
Wolf a.k.a. Juha Laiho Espoo, Finland
(GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)
------------------------------
Date: Fri, 31 Oct 2008 13:13:51 +0100
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: How to overwrite or mock -e for testing?
Message-Id: <ebtlg4lhbkhl0g5rueoornbjhjderlkvqu@4ax.com>
On Fri, 31 Oct 2008 10:40:14 +0100, Michele Dondi
<bik.mido@tiscalinet.it> wrote:
>that the inverse implication does not hold[*]. Now, I tried to see
>what happens with -e() and it turns out that it gives a run-time error
>I had *never* seen:
>
> whisky:~ [09:58:08]$ perl -E 'say prototype "CORE::$_" // "undef"
> > for qw/rand require -e/'
> ;$
> undef
> Can't find an opnumber for "-e" at -e line 2.
>
>Anyway, as they say, the proof of the pudding is in the eating: the
>above in fact would imply that -X functions are *not* overridable.
BTW: I brought this up on PerlMonks. Incidentally, someone there
pointed me to <http://perlmonks.org/?node_id=584078> and in particular
to the *second footnote*, which may be interesting for the OP: it seems
that filetest operators are definitely not overridable in any way, and
that a patch was submitted to p5p to enable that, but it was refused.
Still, the node is not very recent... In the meanwhile, somebody
pointed out to me (see <http://perlmonks.org/?node_id=720682>) that
qw// is now overridable.
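For the OP's original goal (mocking -e in tests), one workaround is to
route the filetest through a small wrapper sub and mock that instead,
since the operator itself cannot be overridden. A rough sketch, with
made-up package and sub names:
package My::Files;
use strict;
use warnings;
sub file_exists { return -e $_[0] }   # production code calls this, not -e
package main;
# In a test, temporarily swap out the wrapper:
{
    no warnings 'redefine';
    local *My::Files::file_exists = sub { 1 };   # pretend every file exists
    # ... exercise the code under test here ...
}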
Michele
--
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
.'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,
------------------------------
Date: Fri, 31 Oct 2008 09:00:27 -0700 (PDT)
From: "friend.05@gmail.com" <hirenshah.05@gmail.com>
Subject: out of memory
Message-Id: <9b5599d9-9a51-4b17-82eb-0ee9e7ab84f8@c36g2000prc.googlegroups.com>
Hi,
I want to parse large log files (GBs in size),
and I am reading 2-3 such files into a hash of arrays.
But since it will be a very big hash, it is running out of memory.
What other approaches can I take?
Example code:
open ($INFO, '<', $file) or die "Cannot open $file: $!\n";
while (<$INFO>)
{
    (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id, undef)
        = split('\|');
    push @{$time_table{"$cli_ip|$id"}}, $time;
}
close $INFO;
In the above code $file is very big (GBs in size), so I am running out
of memory!
------------------------------
Date: Fri, 31 Oct 2008 09:25:30 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: out of memory
Message-Id: <5rbmg4hp9p2uvsg4e6kb6cc44lvp95nkor@4ax.com>
"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>I want to parse large log file (in GBs)
>
>and I am readin 2-3 such files in hash array.
>
>But since it will very big hash array it is going out of memory.
>
>what are the other approach I can take.
"Doctor, it hurts when I do this."
"Well, then don't do it."
Simple: don't read them into RAM but process them line by line.
>Example code:
>
>open ($INFO, '<', $file) or die "Cannot open $file :$!\n";
>while (<$INFO>)
Oh, you are processing them line by line,
>{
> (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id,
>undef) = split('\|');
> push @{$time_table{"$cli_ip|$id"}}, $time;
>}
>close $INFO;
If for whatever reason your requirement (sic!!!) is to create an array
with all this data, then you need better hardware and probably a 64bit
OS and Perl.
Of course a much better approach would probably be to trade time for
space and find a different algorithm to solve your original problem
(which you didn't tell us about) by using less RAM in the first place. I
personally don't see any need to store more than one data set in RAM for
"parsing log files", but of course I don't know what kind of log files
you are talking about and what information you want to compute from
those log files.
Another common solution is to use a database to handle large sets of
data.
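For example, if the original problem only needs a count per client/id
rather than every timestamp, a running aggregate keeps one integer per
key (a sketch; the field positions are taken from your code, the "count"
requirement is an assumption):
my %count;
open my $INFO, '<', $file or die "Cannot open $file: $!\n";
while (<$INFO>) {
    my (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id) = split /\|/;
    $count{"$cli_ip|$id"}++;    # one integer per key, not a list of times
}
close $INFO;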
jue
------------------------------
Date: Fri, 31 Oct 2008 16:37:03 GMT
From: Juha Laiho <Juha.Laiho@iki.fi>
Subject: Re: out of memory
Message-Id: <gefc6r$2rn$2@ichaos2.ichaos-int>
"friend.05@gmail.com" <hirenshah.05@gmail.com> said:
>I want to parse large log file (in GBs)
>
>and I am readin 2-3 such files in hash array.
>
>But since it will very big hash array it is going out of memory.
Do you really need to have the whole file available in order to
extract the data you're interested in?
>Example code:
>
>open ($INFO, '<', $file) or die "Cannot open $file :$!\n";
>while (<$INFO>)
>{
> (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id,
>undef) = split('\|');
> push @{$time_table{"$cli_ip|$id"}}, $time;
>}
>close $INFO;
>
>In above code $file is very big in size(in Gbs); so I am getting out
>of memory !
So, you're storing times based on client ip and id, if I read correctly.
How about not keeping that data in memory, but writing it out as you
gather it?
- to a text file, to be processed further in a next stage of the script
- to a database format file (via DB_File module, or one of its sister
modules), so that you can do fast indexed searches on the data
- to a "real" database in a proper relational structure, to allow
you to do any kind of relational reporting rather easily
Also, where $time above apparently is a string containing some kind of
a timestamp, you could convert that timestamp into something else
(number of seconds from epoch comes to mind) that takes a lot less
memory than a string representation such as "2008-10-31 18:33:24".
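A rough sketch of the DB_File route (key layout as in your code; the file
name and flags are illustrative):
use Fcntl;
use DB_File;
tie my %time_table, 'DB_File', 'time_table.db', O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "Cannot tie time_table.db: $!";
open my $INFO, '<', $file or die "Cannot open $file: $!\n";
while (<$INFO>) {
    my (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id) = split /\|/;
    # DB_File values must be flat strings, so append rather than push:
    $time_table{"$cli_ip|$id"} .= "$time|";
}
close $INFO;
untie %time_table;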
--
Wolf a.k.a. Juha Laiho Espoo, Finland
(GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)
------------------------------
Date: Fri, 31 Oct 2008 10:09:08 -0700 (PDT)
From: "friend.05@gmail.com" <hirenshah.05@gmail.com>
Subject: Re: out of memory
Message-Id: <2fcb5960-1774-4793-9d19-de06509be2d1@b38g2000prf.googlegroups.com>
On Oct 31, 12:37 pm, Juha Laiho <Juha.La...@iki.fi> wrote:
> "friend...@gmail.com" <hirenshah...@gmail.com> said:
>
> >I want to parse large log file (in GBs)
>
> >and I am readin 2-3 such files in hash array.
>
> >But since it will very big hash array it is going out of memory.
>
> Do you really need to have the whole file available in order to
> extract the data you're interested in?
>
> >Example code:
>
> >open ($INFO, '<', $file) or die "Cannot open $file :$!\n";
> >while (<$INFO>)
> >{
> >        (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id,
> >undef) = split('\|');
> >        push @{$time_table{"$cli_ip|$id"}}, $time;
> >}
> >close $INFO;
>
> >In above code $file is very big in size(in Gbs); so I am getting out
> >of memory !
>
> So, you're storing times based on client ip and id, if I read correctly.
>
> How about not keeping that data in memory, but writing it out as you
> gather it?
> - to a text file, to be processed further in a next stage of the script
> - to a database format file (via DB_File module, or one of its sister
>   modules), so that you can do fast indexed searches on the data
> - to a "real" database in a proper relational structure, to allow
>   you to do any kind of relational reporting rather easily
>
> Also, where $time above apparently is a string containing some kind of
> a timestamp, you could convert that timestamp into something else
> (number of seconds from epoch comes to mind) that takes a lot less
> memory than a string representation such as "2008-10-31 18:33:24".
> --
> Wolf a.k.a. Juha Laiho     Espoo, Finland
> (GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
>          PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
> "...cancel my subscription to the resurrection!" (Jim Morrison)
Thanks.
If I output it as a text file and read it again later, will I be able
to search based on a key? (I mean, when I read it again, will I be able
to use it as a hash or not?)
------------------------------
Date: 31 Oct 2008 17:19:08 GMT
From: xhoster@gmail.com
Subject: Re: out of memory
Message-Id: <20081031131938.405$Vb@newsreader.com>
"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
> Hi,
>
> I want to parse large log file (in GBs)
>
> and I am readin 2-3 such files in hash array.
>
> But since it will very big hash array it is going out of memory.
>
> what are the other approach I can take.
The other approaches you can take depend on what you are trying to
do.
>
> Example code:
>
> open ($INFO, '<', $file) or die "Cannot open $file :$!\n";
> while (<$INFO>)
> {
> (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id,
> undef) = split('\|');
> push @{$time_table{"$cli_ip|$id"}}, $time;
> }
> close $INFO;
You could get some improvement by having just a hash rather than a hash of
arrays. Replace the push with, for example:
$time_table{"$cli_ip|$id"} .= "$time|";
Then you would have to split the hash values into a list/array one at a
time as they are needed.
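For example, when a given key's times are needed later (same key format
as above):
my @times = split /\|/, $time_table{"$cli_ip|$id"};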
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Fri, 31 Oct 2008 10:22:08 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: out of memory
Message-Id: <u6fmg4hq8c4o229347ie35ju1viuq1pd6t@4ax.com>
"friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>if I output as text file and read it again later on will be able to
>search based on key. (I mean when read it again I will be able to use
>it as hash or not )
That depends upon what you do with the data when reading it in again. Of
course you can construct a hash, but then you wouldn't have gained
anything. Why would this hash be any smaller than the one you were
trying to construct the first time?
Your current approach (put everything into a hash) and your current
hardware are incompatible.
Either get larger hardware (expensive) or rethink your basic approach,
e.g. use a database system, or compute your desired results on the fly
while parsing through the file, or write intermediate results to a file
in a format that can later be processed line by line, or use any other of
the gazillion ways of preserving RAM. Don't you learn those techniques
in basic computer science classes any more?
jue
------------------------------
Date: Fri, 31 Oct 2008 10:41:00 -0700 (PDT)
From: "friend.05@gmail.com" <hirenshah.05@gmail.com>
Subject: Re: out of memory
Message-Id: <135723aa-883b-472c-9ba5-8d44d8977e28@i18g2000prf.googlegroups.com>
On Oct 31, 1:22 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
> "friend...@gmail.com" <hirenshah...@gmail.com> wrote:
> >if I output as text file and read it again later on will be able to
> >search based on key. (I mean when read it again I will be able to use
> >it as hash or not )
>
> That depends upon what you do with the data when reading it in again. Of
> course you can construct hash, but then you wouldn't have gained
> anything. Why would this hash be any smaller than the one you were
> trying to construct the first time?
>
> Your current approach (put everything into a hash) and your current
> hardware are incompatible.
>
> Either get larger hardware (expensive) or rethink your basic approach,
> e.g. use a database system or compute your desired results on the fly
> while parsing through the file or write intermediate results to a file
> in a format that later can be processed line by line or by any other of
> the gazillions ways of preversing RAM. Don't you learn those techniques
> in basic computer science classes any more?
>
> jue
Outputting to a file and using it again will take a lot of time; it will
be very slow.
Will it help with speed if I use the DB_File module?
------------------------------
Date: Fri, 31 Oct 2008 10:59:29 -0700 (PDT)
From: "friend.05@gmail.com" <hirenshah.05@gmail.com>
Subject: Re: out of memory
Message-Id: <badc9300-9027-4f65-abeb-111256dd5e3f@n33g2000pri.googlegroups.com>
On Oct 31, 1:41 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
wrote:
> On Oct 31, 1:22 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
>
> > "friend...@gmail.com" <hirenshah...@gmail.com> wrote:
> > >if I output as text file and read it again later on will be able to
> > >search based on key. (I mean when read it again I will be able to use
> > >it as hash or not )
>
> > That depends upon what you do with the data when reading it in again. Of
> > course you can construct hash, but then you wouldn't have gained
> > anything. Why would this hash be any smaller than the one you were
> > trying to construct the first time?
>
> > Your current approach (put everything into a hash) and your current
> > hardware are incompatible.
>
> > Either get larger hardware (expensive) or rethink your basic approach,
> > e.g. use a database system or compute your desired results on the fly
> > while parsing through the file or write intermediate results to a file
> > in a format that later can be processed line by line or by any other of
> > the gazillions ways of preversing RAM. Don't you learn those techniques
> > in basic computer science classes any more?
>
> > jue
>
> output to a file and using it again will take lot of time. It will be
> very slow.
>
> will be helpful in speed if I use DB_FILE module
Here is what I am trying to do.
I have two large files. I will read one file and see if each entry is
also present in the second file. I also need to count how many times it
appears in both files, and do other processing accordingly.
So if I process both files line by line, it will be like this: e.g. if
file1 has 10 lines and file2 has 10 lines, then for each line of file1
it will loop 10 times over file2, so 100 loops in total. I am dealing
with millions of lines, so this approach will be very slow.
This is my current code. It runs fine with small files.
open ($INFO, '<', $file) or die "Cannot open $file: $!\n";
while (<$INFO>)
{
    (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id, undef)
        = split('\|');
    push @{$time_table{"$cli_ip|$id"}}, $time;
}
close $INFO;

open ($INFO_PRI, '<', $pri_file) or die "Cannot open $pri_file: $!\n";
while (<$INFO_PRI>)
{
    (undef, undef, undef, $pri_time, $pri_cli_ip, undef, undef,
     $pri_id, undef, $query, undef) = split('\|');
    $pri_ip_id_table{"$pri_cli_ip|$pri_id"}++;
    push @{$pri_time_table{"$pri_cli_ip|$pri_id"}}, $pri_time;
}
close $INFO_PRI;

@pri_ip_id_table_ = keys(%pri_ip_id_table);
for ($i = 0; $i < @pri_ip_id_table_; $i++)            # file 2
{
    if ($time_table{"$pri_ip_id_table_[$i]"})         # check if it is there in file 1
    {
        # do some processing.
    }
}
So for the above example, which approach will be best?
Thanks for your help.
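Would something like this variant help? It builds only the counts from the
second file and then streams the big file once, checking each key as it
goes (a sketch only; field positions and key format are as in the code
above):
my %pri_ip_id_table;
open my $INFO_PRI, '<', $pri_file or die "Cannot open $pri_file: $!\n";
while (<$INFO_PRI>) {
    my (undef, undef, undef, $pri_time, $pri_cli_ip, undef, undef,
        $pri_id) = split /\|/;
    $pri_ip_id_table{"$pri_cli_ip|$pri_id"}++;
}
close $INFO_PRI;
open my $INFO, '<', $file or die "Cannot open $file: $!\n";
while (<$INFO>) {
    my (undef, undef, undef, $time, $cli_ip, $ser_ip, undef, $id) = split /\|/;
    if (exists $pri_ip_id_table{"$cli_ip|$id"}) {
        # do the per-match processing here, line by line,
        # instead of collecting every $time into a hash of arrays
    }
}
close $INFO;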
------------------------------
Date: Fri, 31 Oct 2008 05:13:51 -0500
From: "E.D.G." <edgrsprj@ix.netcom.com>
Subject: Re: Perl - Gnuplot Program Oct. 29, 2008
Message-Id: <m-ednUqGnbXDR5fUnZ2dnUVZ_g2dnZ2d@earthlink.com>
"Jim Gibson" <jimsgibson@gmail.com> wrote in message
news:301020081104491635%jimsgibson@gmail.com...
> vector maps. The U.S. Census bureau provides such data in their
> TIGER/Line series <http://www.census.gov/geo/www/tiger/>
Thanks for that information.
>> Is there some type of program that can import a drawing such as a GIF
>> file
>> and reproduce its major structures in a Gnuplot compatible file?
>
> Not likely. A GIF file is a rasterized image: a set of pixels, each
> with its own color and intensity. Gnuplot works with (x,y) values that
> define points and lines, commonly called "vector" data. While it is
> possible to extract line data from a raster image, it is not easy.
I have a program that will trace shapes on a map but am still learning how
to use it. But I don't think that it will produce the types of files that
Gnuplot reads.
>> Something I suggested in the past is that commands be built into Gnuplot
>> that enable it to import a picture file such as a GIF file and use it as
>> the
>> computer screen background instead of just having solid color
>> backgrounds.
>
> You are mistaking Gnuplot for a general-purpose drawing and
> presentation program. It is not. It is a scientific plotting program
> and Gnuplot users have little need for fancy graphics.
What we are doing is displaying things like world maps and then plotting
relatively simple shapes and text on top of them. This is an interactive
application that at times refreshes the entire screen perhaps two times a
second. Using a picture or drawing for the background could eliminate the
need to redraw everything.
>> crash on regular basis. The Perl - Gnuplot pipe changes everything.
>> Gnuplot now keeps running almost no matter what happens. It never seems
>> to
>> crash regardless of how fast Perl sends it commands.
>
> Did you try using the Gnuplot modules available on CPAN?
No. I did not know that there are any. I largely relied on the
documentation that came with Gnuplot. In any case the code that I am now
working with is just a few lines of text.
>> took too long to load. Perl kept running. But one of the programs
>> generated an error message saying that Gnuplot was not ready. And some
>> manual steps had to be taken to get it running again.
>
> I am using Perl to run and control Gnuplot under Windows XP and have
> not encountered this problem. What code do you use to start Gnuplot?
The pipe initialization code is just a few lines of text. For example,
open gnuplot, '|pgnuplot.exe';
use FileHandle;
gnuplot->autoflush(1);
print gnuplot 'sin(x)', "\n";
sleep 4;
close gnuplot;
I am running this software on two computers, one fairly new and fast, the
other at least five years old and pretty slow. This timing problem never
occurs on the faster one. However, it happens all the time on the slower
one, probably because it takes a long time for it to read and process files
from the internal hard drive. This would not be a problem if I were the
only person running the program. But since plans are to make it available
to researchers around the world, many of whom will probably be using slower
computers, it was necessary to find a way to get around the problem.
Having Perl use its system command to tell Gnuplot to start a simple .gnu
program running the first time it is called takes care of the matter with
perhaps a three-second delay. After that the Perl-to-Gnuplot pipe
initializes without any problems. And I expect that this procedure would
work with other programming languages that are being used to create pipes
to Gnuplot.
------------------------------
Date: Fri, 31 Oct 2008 09:54:20 -0700
From: sfeam <sfeam@users.sourceforge.net>
Subject: Re: Perl - Gnuplot Program Oct. 29, 2008
Message-Id: <gefd7v$lh3$1@registered.motzarella.org>
E.D.G. wrote:
>
> The pipe initialization code is just a few lines of text. For example,
>
> open gnuplot, '|pgnuplot.exe';
> use FileHandle;
> gnuplot->autoflush(1);
> print gnuplot 'sin(x)', "\n";
> sleep 4;
> close gnuplot;
>
> I am running this software on two computers, one fairly new and fast, the
> other at least five years old and pretty slow. This timing problem never
> occurs on the faster one. However it happens all the time on the slower
> one.
The code is fundamentally broken. You should never try to synchronize
two programs by assuming that you know the timing in advance.
Why 4 seconds?
Instead, you should synchronize via a lock file or use _two_ pipes,
one in each direction.
For instance, to use a lock file:
open gnuplot, '|pgnuplot.exe';
use FileHandle;
gnuplot->autoflush(1);
print gnuplot 'plot sin(x)', "\n";
print gnuplot "set print 'lock.0'; print 'done'; unset print", "\n";
sleep 1 until -e 'lock.0';   # loop waiting for the lock file to appear
unlink 'lock.0';
close gnuplot;
As a bonus, you can actually have gnuplot write useful information
into the lock file and have the perl script read it back.
> probably because it takes a long time for it to read and process files
> from
> the internal hard drive. This would not be a problem if I were the only
> person running the program. But since plans are to make it available to
> researchers around the world, many of who will probably be using slower
> computers, it was necessary to find a way to get around the problem.
>
> Having Perl use its system command to tell Gnuplot to start a simple .gnu
> program running the first time it is called takes care of the matter with
> perhaps a three second delay. After that the Perl to Gnuplot pipe
> initializes without any problems. And I expect that this procedure would
> work with other programming languages that are being use to create pipes
> to Gnuplot.
------------------------------
Date: Fri, 31 Oct 2008 03:18:15 -0700 (PDT)
From: howa <howachen@gmail.com>
Subject: Profiling using DProf
Message-Id: <5b468b5e-d500-4204-a6cc-9526a411b0d5@p31g2000prf.googlegroups.com>
I am using the command :
perl -d:DProf index.cgi
Using "dprofpp", it give me...
Total Elapsed Time = 0.270455 Seconds
User+System Time = 0.120455 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 41.5   0.050  0.089      7   0.0071 0.0127 MyModule::BEGIN
 16.6   0.020  0.020     48   0.0004 0.0004 Exporter::import
 8.30   0.010  0.010      4   0.0025 0.0025 utf8::SWASHNEW
 8.30   0.010  0.010      4   0.0025 0.0025 Data::Dumper::BEGIN
 8.30   0.010  0.010      2   0.0050 0.0050 lib::BEGIN
 8.30   0.010  0.010      5   0.0020 0.0020 IO::Seekable::BEGIN
 8.30   0.010  0.010    159   0.0001 0.0001 XML::Twig::Elt::set_gi
 7.47   0.009  0.008     89   0.0001 0.0001 XML::Twig::_twig_end
 0.00   0.000  0.000      1   0.0000 0.0000 File::Glob::GLOB_BRACE
 0.00   0.000  0.000      1   0.0000 0.0000 File::Glob::GLOB_NOMAGIC
 0.00   0.000  0.000      1   0.0000 0.0000 File::Glob::GLOB_QUOTE
 0.00   0.000  0.000      1   0.0000 0.0000 File::Glob::GLOB_TILDE
 0.00   0.000  0.000      1   0.0000 0.0000 File::Glob::GLOB_ALPHASORT
 0.00   0.000  0.000      1   0.0000 0.0000 Exporter::Heavy::heavy_export_to_level
 0.00   0.000  0.000      1   0.0000 0.0000 LWP::Simple::_get
However, what is "MyModule::BEGIN"? Can I see in more detail how the
time was spent?
Thanks.
------------------------------
Date: 31 Oct 2008 15:26:57 GMT
From: xhoster@gmail.com
Subject: Re: Profiling using DProf
Message-Id: <20081031112727.836$NA@newsreader.com>
howa <howachen@gmail.com> wrote:
> I am using the command :
>
> perl -d:DProf index.cgi
>
> Using "dprofpp", it give me...
>
> Total Elapsed Time = 0.270455 Seconds
> User+System Time = 0.120455 Seconds
> Exclusive Times
> %Time ExclSec CumulS #Calls sec/call Csec/c Name
> 41.5 0.050 0.089 7 0.0071 0.0127 MyModule::BEGIN
>
> However, what is "MyModule::BEGIN"?
The union of all BEGIN blocks encountered in package MyModule.
> Can I see more detail how to time
> was spent?
You can look at the options for dprofpp, like -S, and probably -r, as
more than half the time in your program seems to be spent off the CPU.
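For example (assuming the default tmon.out file that DProf writes in the
current directory):
dprofpp -S tmon.out    # merged subroutine call tree
dprofpp -r tmon.out    # report elapsed (real) times instead of user+system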
There is also SmallProf, and some newer profiling modules I haven't looked
into yet myself.
But don't underestimate the usefulness of just looking at the code and
using your judgment about what parts are likely to be slow.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
------------------------------
Date: Fri, 31 Oct 2008 05:16:04 -0700 (PDT)
From: Ron Bergin <rkb@i.frys.com>
Subject: Re: sharing perl code between directories
Message-Id: <0687d29d-6570-4044-9904-b7549bb410be@s1g2000prg.googlegroups.com>
On Oct 31, 1:15 am, "adam.at.prisma" <adam.at.pri...@gmail.com> wrote:
> I have a directory tree in our code repository containing Perl code.
> AppOne and AppTwo both use some of the same functions and as the copy-
> paste method will get out of hand really soon, I want to create a
> "Common" directory that is visible to the other two (and likely more
> than 2 in the future).
>
> I've used O'Reillys excellent "Programming Perl" but I don't get how I
> make the code in the "Common" directory available to the other two?
> Btw, this is Windows and I am using "Strawberry Perl".
>
> +---admin
> |
> +---AppOne
> |       AppOne.pl
> |
> +---AppTwo
> |       AppTwoFileOne.pl
> |       AppTwoFileTwo.pl
> |
> |
> \---Common
>         CommonCode.pm
>
> BR,
> Adam
perldoc -q lib
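For the layout above, that boils down to something like this at the top of
AppOne.pl (and the AppTwo scripts); the relative path is assumed from the
tree shown:
use FindBin;
use lib "$FindBin::Bin/../Common";   # add ../Common to @INC
use CommonCode;                      # loads Common/CommonCode.pm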
------------------------------
Date: Fri, 31 Oct 2008 06:07:44 -0500
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: sharing perl code between directories
Message-Id: <slrngglpo0.vqv.tadmc@tadmc30.sbcglobal.net>
adam.at.prisma <adam.at.prisma@gmail.com> wrote:
> I have a directory tree in our code repository containing Perl code.
> AppOne and AppTwo both use some of the same functions and as the copy-
> paste method will get out of hand really soon, I want to create a
> "Common" directory that is visible to the other two (and likely more
> than 2 in the future).
>
> I've used O'Reillys excellent "Programming Perl" but I don't get how I
> make the code in the "Common" directory available to the other two?
perldoc -q module
How do I keep my own module/library directory?
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 1954
***************************************