[23260] in Perl-Users-Digest
Perl-Users Digest, Issue: 5480 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Sep 10 18:10:45 2003
Date: Wed, 10 Sep 2003 15:10:19 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Wed, 10 Sep 2003 Volume: 10 Number: 5480
Today's topics:
Defeating OS buffering? <emschwar@pobox.com>
Re: Defeating OS buffering? <minceme@start.no>
Re: Defeating OS buffering? <emschwar@pobox.com>
Re: Defeating OS buffering? <minceme@start.no>
Re: Defeating OS buffering? <tcurrey@no.no.no.i.said.no>
Re: Defeating OS buffering? <tcurrey@no.no.no.i.said.no>
Re: Defeating OS buffering? <emschwar@pobox.com>
Re: Defeating OS buffering? <emschwar@pobox.com>
Re: Detecting duplicate keys in a hash I am "requiring" (Gupit)
Re: Detecting duplicate keys in a hash I am "requiring" (Anno Siegel)
Download a file in Perl <founder@pege.org>
Re: Download a file in Perl <glex_nospam@qwest.net>
Re: exit() argument evaluation <tcurrey@no.no.no.i.said.no>
Re: exit() argument evaluation (Quantum Mechanic)
File Locking Follow-up <ducott@hotmail.com>
Is this a perl problem? <mdudley@execonn.com>
Re: Is this a perl problem? <jurgenex@hotmail.com>
Re: Is this a perl problem? <mdudley@execonn.com>
Re: Is this a perl problem? <mdudley@execonn.com>
List Hash 'Field' Names <paanwa@hotmail.com>
Re: looping every x seconds (Anno Siegel)
Mime::Parser::Filer problemo (Chris Larsen)
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Wed, 10 Sep 2003 12:54:12 -0600
From: Eric Schwartz <emschwar@pobox.com>
Subject: Defeating OS buffering?
Message-Id: <etollswzd7v.fsf@wormtongue.emschwar>
I'm trying to use Perl to automate a specific test process, but OS
buffering is defeating much of its utility. I run a
program like this:
open(TEST, "$testcmd |") or die "bah-- $!";
while(<TEST>) {
# do stuff
}
close(TEST);
Part of the problem here is that the test referred to in $testcmd is a
bash script that execs a C program, which I do not have source to.
This C program doesn't print a lot of output, so as a consequence, I
don't see any of it until the C program exits. Is there any way,
short of modifying the C program, that I can defeat the OS buffering
of the C program's output?
-=Eric, fully expecting the answer to be "no". :(
--
Come to think of it, there are already a million monkeys on a million
typewriters, and Usenet is NOTHING like Shakespeare.
-- Blair Houghton.
------------------------------
Date: Wed, 10 Sep 2003 19:13:38 +0000 (UTC)
From: Vlad Tepes <minceme@start.no>
Subject: Re: Defeating OS buffering?
Message-Id: <bjnt52$705$1@troll.powertech.no>
Eric Schwartz <emschwar@pobox.com> wrote:
> I'm trying to use Perl to automate a specific test process, but OS
> buffering is defeating much of its utility. I run a
> program like this:
>
> open(TEST, "$testcmd |") or die "bah-- $!";
> while(<TEST>) {
> # do stuff
> }
> close(TEST);
>
> Part of the problem here is that the test referred to in $testcmd is a
> bash script that execs a C program, which I do not have source to.
> This C program doesn't print a lot of output, so as a consequence, I
> don't see any of it until the C program exits. Is there any way,
> short of modifying the C program, that I can defeat the OS buffering
> of the C program's output?
>
> -=Eric, fully expecting the answer to be "no". :(
How about reading single bytes from the program?
open(TEST, "/home/t/c/a.out |") or die "bah-- $!";
$|++;
while( 0 != read TEST, $_, 1 ) {
print;
}
close(TEST);
HTH,
--
Vlad
------------------------------
Date: Wed, 10 Sep 2003 13:29:29 -0600
From: Eric Schwartz <emschwar@pobox.com>
Subject: Re: Defeating OS buffering?
Message-Id: <etod6e8zbl2.fsf@wormtongue.emschwar>
Vlad Tepes <minceme@start.no> writes:
> How about reading single bytes from the program?
>
> open(TEST, "/home/t/c/a.out |") or die "bah-- $!";
To reproduce this properly, you'd need a bash program that just did:
exec /home/t/c/a.out
and you'd invoke the bash program there.
> $|++;
This unbuffers STDOUT, which doesn't help when the problem is that the
OS is buffering the TEST filehandle.
> while( 0 != read TEST, $_, 1 ) {
> print;
> }
> close(TEST);
How is reading input that doesn't show up a character at a time going
to help? Either it's there, or it isn't; reading it in smaller chunks
is just making things more inefficient without actually changing
anything.
> HTH,
I'm afraid not. Thanks for the effort, anyhow.
-=Eric
--
Come to think of it, there are already a million monkeys on a million
typewriters, and Usenet is NOTHING like Shakespeare.
-- Blair Houghton.
------------------------------
Date: Wed, 10 Sep 2003 20:10:31 +0000 (UTC)
From: Vlad Tepes <minceme@start.no>
Subject: Re: Defeating OS buffering?
Message-Id: <bjo0fn$8gf$1@troll.powertech.no>
Eric Schwartz <emschwar@pobox.com> wrote:
> Vlad Tepes <minceme@start.no> writes:
>
>> open(TEST, "/home/t/c/a.out |") or die "bah-- $!";
>
> To reproduce this properly, you'd need a bash program that just did:
>
> exec /home/t/c/a.out
OK, now I have created a bash script that execs the a.out, and used
that in the open statement above. It worked nicely, doing what you
wanted.
Maybe I'm missing something?
The script I used:
#!/bin/bash
exec "/home/t/c/a.out";
The C-program:
#include <stdio.h>
#include <unistd.h>
int main ( void ) {
printf("Some text...");
fflush(NULL);
sleep(3);
printf("some more text...\n");
return(0);
}
The perl program:
#!/usr/bin/perl
use warnings;
use strict;
open(TEST, "/tmp/t.sh |") or die "bah-- $!";
$|++;
while( 0 != read TEST, $_, 1 ){
print;
}
close(TEST);
Hope this helps,
--
Vlad
------------------------------
Date: Wed, 10 Sep 2003 13:55:30 -0700
From: "Trent Curry" <tcurrey@no.no.no.i.said.no>
Subject: Re: Defeating OS buffering?
Message-Id: <bjo38a$1oa$1@news.astound.net>
"Vlad Tepes" <minceme@start.no> wrote in message
news:bjo0fn$8gf$1@troll.powertech.no...
> Eric Schwartz <emschwar@pobox.com> wrote:
> > Vlad Tepes <minceme@start.no> writes:
>
[Other code]
> The perl program:
>
> #!/usr/bin/perl
> use warnings;
> use strict;
>
> open(TEST, "/tmp/t.sh |") or die "bah-- $!";
> $|++;
> while( 0 != read TEST, $_, 1 ){
> print;
> }
> close(TEST);
Or, as an FYI, you could rewrite the while as:
print while (0 != read TEST, $_, 1);
------------------------------
Date: Wed, 10 Sep 2003 14:06:46 -0700
From: "Trent Curry" <tcurrey@no.no.no.i.said.no>
Subject: Re: Defeating OS buffering?
Message-Id: <bjo3tf$21n$1@news.astound.net>
"Eric Schwartz" <emschwar@pobox.com> wrote in message
news:etod6e8zbl2.fsf@wormtongue.emschwar...
> Vlad Tepes <minceme@start.no> writes:
> > How about reading single bytes from the program?
> >
> > open(TEST, "/home/t/c/a.out |") or die "bah-- $!";
>
> To reproduce this properly, you'd need a bash program that just did:
>
> exec /home/t/c/a.out
>
> and you'd invoke the bash program there.
>
> > $|++;
>
> This unbuffers STDOUT, which doesn't help when the problem is that the
> OS is buffering the TEST filehandle.
From perldoc perlvar:
$OUTPUT_AUTOFLUSH
$|
If set to nonzero, forces a flush right away and after every write
or print on the currently selected output channel. [Snipped rest]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So in this case, TEST is the currently selected output handle.
------------------------------
Date: Wed, 10 Sep 2003 15:26:04 -0600
From: Eric Schwartz <emschwar@pobox.com>
Subject: Re: Defeating OS buffering?
Message-Id: <eto1xuoz66r.fsf@wormtongue.emschwar>
Vlad Tepes <minceme@start.no> writes:
<snip>
> The C-program:
>
> #include <stdio.h>
> #include <unistd.h>
>
> int main ( void ) {
> printf("Some text...");
> fflush(NULL);
^^^^^^^^^^^^^
<snip>
That's the problem. Take that out, and it won't work. As I said, I
don't have control over the C program, so I can't make it fflush.
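(One workaround often suggested for exactly this situation, though not
tried in this thread, is to run the program under a pseudo-terminal, so
that the child's C library switches stdout to line buffering and output
arrives line by line. A rough sketch, assuming the CPAN module IO::Pty
and the /tmp/t.sh wrapper from earlier in the thread:)

use strict;
use warnings;
use IO::Pty;   # CPAN module; not tested against this thread's program

my $pty = IO::Pty->new;
my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                          # child: stdout/stderr -> slave pty
    $pty->make_slave_controlling_terminal;
    my $slave = $pty->slave;
    open(STDOUT, ">&" . fileno($slave)) or die "dup stdout: $!";
    open(STDERR, ">&" . fileno($slave)) or die "dup stderr: $!";
    close $pty;
    exec '/tmp/t.sh' or die "exec failed: $!";
}

$pty->close_slave;                        # parent reads the master side
while (defined(my $chunk = <$pty>)) {     # lines show up as they are printed
    print $chunk;
}
waitpid $pid, 0;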
-=Eric
--
Come to think of it, there are already a million monkeys on a million
typewriters, and Usenet is NOTHING like Shakespeare.
-- Blair Houghton.
------------------------------
Date: Wed, 10 Sep 2003 15:28:17 -0600
From: Eric Schwartz <emschwar@pobox.com>
Subject: Re: Defeating OS buffering?
Message-Id: <etowucgxrim.fsf@wormtongue.emschwar>
"Trent Curry" <tcurrey@no.no.no.i.said.no> writes:
> "Eric Schwartz" <emschwar@pobox.com> wrote in message
> news:etod6e8zbl2.fsf@wormtongue.emschwar...
>> Vlad Tepes <minceme@start.no> writes:
>> > $|++;
>>
>> This unbuffers STDOUT, which doesn't help when the problem is that the
>> OS is buffering the TEST filehandle.
>
> From perldoc perlvar:
>
> $OUTPUT_AUTOFLUSH
> $|
>
> If set to nonzero, forces a flush right away and after every write
> or print on the currently selected output channel. [Snipped rest]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> So in this case, TEST is the currently selected output handle.
No, TEST is open for *INPUT*. Hint: if you can read from it, it's
open for input. If you can write to it, it's open for output. And
even if it were open for output, $| only works on filehandles that you
select with the one-arg select(), which Vlad did not do.
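For the record, a sketch of the two usual ways to make a specific
*output* handle autoflush (the handle and path here are hypothetical):

use strict;
use warnings;
use IO::Handle;

open(my $out, '>', '/tmp/out.log') or die "open: $!";

# 1) The classic idiom: one-arg select() makes $out the "currently
#    selected output channel", so assigning to $| applies to it.
my $old = select($out);
$| = 1;
select($old);                 # restore the previous default handle

# 2) Equivalent, as a method call:
$out->autoflush(1);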
-=Eric
--
Come to think of it, there are already a million monkeys on a million
typewriters, and Usenet is NOTHING like Shakespeare.
-- Blair Houghton.
------------------------------
Date: 10 Sep 2003 14:11:42 -0700
From: gupit@yahoo.com (Gupit)
Subject: Re: Detecting duplicate keys in a hash I am "requiring"
Message-Id: <1ea0ec40.0309101311.2ca32d01@posting.google.com>
Unfortunately duplicate keys may have been specified in the lower
level hash structure as well. Replacing only the top level hash
structure won't help there.
e.g. see key7 in the following example
$hash = {
key1 => { key2 => val2,
key3 => val3,
},
key4 => [ vala,valb,],
key1 => {key5 => val5,
key6 => {
key7 => val7,
key8 => val8,
key7 => val9,
}
},
};
(I don't call this structure $hash in real life; I have simplified the
actual data structure here for ease of understanding and to retain the
focus of my question :))
regards,
G
anno4000@lublin.zrz.tu-berlin.de (Anno Siegel) wrote in message news:<bjmqhk$2ce$1@mamenchi.zrz.TU-Berlin.DE>...
> Gupit <gupit@yahoo.com> wrote in comp.lang.perl.misc:
> > Hi,
> > I have a configuration file which is in a perl hash (of hashes and
> > arrays) format.
> >
> > $hash = {
> > key1 => { key2 => val2,
> > key3 => val3,
> > },
> > key4 => [ vala,valb,],
> > key1 => {key5 => val5,
> > key6 => val6
> > },
> > };
> >
> > In my script I do
> > use vars qw($hash);
> > if (eval "require config_file") {
> > print "Required config_file";
> > } else {
> > print "Couldn't require config_file\n";
> > }
> >
> > Now if you notice in the config_file, $hash has two duplicate keys
> > i.e. key1 has been specified twice.
>
> There are no duplicate keys in a hash; hash keys are unique by
> construction. Every entry for the same key overwrites the previous
> one.
>
> > When using require, perl simply
> > keeps the last value it encountered for key1.
>
> Not only with require -- the hash doesn't ever contain the duplicate.
>
> > I would like to detect these duplicates. Is there a stricter mode in
> > perl that will print warnings when perl encounters duplicate keys in
> > hashes. I could turn it on for the eval require bit.
>
> Then specify your configuration structure as a list(ref), not a hash.
> Replace the outermost pair of "{}" with "[]" (and change the name from
> $hash to something descriptive). Then read "perldoc -q duplicate"
> to see how to detect duplicates in the list you are importing.
>
> > Is there any other way to detect these duplicates? Unfortunately the
> > config_file is really huge and uses hash of hashes of hashes and
> > arrays and so on and writing a script to parse the file would be
> > painful.
>
> Whatever the structure of the hash, the top level structure is one
> of key/value pairs. That isn't so hard to parse.
>
> Anno
------------------------------
Date: 10 Sep 2003 21:52:47 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: Detecting duplicate keys in a hash I am "requiring"
Message-Id: <bjo6ff$1f0$1@mamenchi.zrz.TU-Berlin.DE>
Gupit <gupit@yahoo.com> wrote in comp.lang.perl.misc:
Please don't top-post. To restore context, I have to recapitulate.
My suggestion, WRT the data structure below, was to use a list (of pairs)
instead of the top-level hash to allow duplicate keys. The list can
trivially be converted to the hash, but can also be checked for duplicates.
> Unfortunately duplicate keys may have been specified in the lower
> level hash structure as well. Replacing only the top level hash
> structure won't help there.
>
> e.g. see key7 in the following example
>
> $hash = {
> key1 => { key2 => val2,
> key3 => val3,
> },
> key4 => [ vala,valb,],
> key1 => {key5 => val5,
> key6 => {
> key7 => val7,
> key8 => val8,
> key7 => val9,
> }
> },
> };
>
> (I don't call this structure $hash in real life; I have simplified the
> actual data structure here for ease of understanding and to retain the
> focus of my question :))
Well, if the problem re-appears at a lower level, re-apply the solution.
In other words, re-design the data structure so that every hash that may
have a key specified more than once is replaced with a list of pairs.
Mind you, I don't know if this is a good solution in your case -- I know
too little about what use you will make of the structure. It's just
the first thing that came to mind.
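A minimal sketch of the pair-list idea, using the illustrative keys from
upthread:

use strict;
use warnings;

# Pairs in a list(ref) instead of a hash, so duplicates survive intact:
my $config = [
    key1 => { key2 => 'val2', key3 => 'val3' },
    key4 => [ 'vala', 'valb' ],
    key1 => { key5 => 'val5' },       # duplicate key, preserved here
];

# Detect duplicates while converting the pair list to a hash:
my (%hash, %seen);
my @pairs = @$config;
while (@pairs) {
    my ($key, $value) = splice(@pairs, 0, 2);
    warn "duplicate key '$key'\n" if $seen{$key}++;
    $hash{$key} = $value;
}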
[snip TOFU]
Anno
------------------------------
Date: Wed, 10 Sep 2003 22:21:26 +0200
From: =?Windows-1252?Q?Roland_M=F6sl?= <founder@pege.org>
Subject: Download a file in Perl
Message-Id: <3f5f8866$0$18312$91cee783@newsreader02.highway.telekom.at>
Each month I have to download all the logfiles from
my web server as GZ-compressed files:
the access and referrer logs for each domain.
The download is only possible over HTTP,
and a username and password are required.
The name the server proposes for
the download has to be changed.
I have worked in Perl since 1997, but I have no idea
how to perform this task.
A nice extra would be to decompress the files
as well, because all the log files are later stored in
one single zip file.
--
Roland Mösl
http://www.pege.org Clear targets for a confused civilization
http://web-design-sutie.com Web Design starts at the search engine
------------------------------
Date: Wed, 10 Sep 2003 15:47:41 -0500
From: "J. Gleixner" <glex_nospam@qwest.net>
Subject: Re: Download a file in Perl
Message-Id: <J3M7b.236$aT.29292@news.uswest.net>
Roland Mösl wrote:
> Each month I have to download all the logfiles from
> my web server as GZ-compressed files:
> the access and referrer logs for each domain.
>
> The download is only possible over HTTP,
> and a username and password are required.
>
> The name the server proposes for
> the download has to be changed.
>
> I have worked in Perl since 1997, but I have no idea
> how to perform this task.
>
> A nice extra would be to decompress the files
> as well, because all the log files are later stored in
> one single zip file.
>
Use LWP to "get" a file using http:
perldoc lwpcook
To unzip, you could use system, backticks, or one of many modules
available on CPAN.
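A rough sketch of the LWP part; the host, realm, credentials, and
filenames here are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# Basic-auth credentials; host:port and realm are whatever the server
# actually uses.
$ua->credentials('www.example.com:80', 'Logfiles', 'username', 'password');

my $url  = 'http://www.example.com/logs/access.log.gz';
my $file = 'access-2003-09.log.gz';     # your own name, not the server's

my $res = $ua->get($url);
die 'GET failed: ', $res->status_line, "\n" unless $res->is_success;

open(my $fh, '>', $file) or die "open $file: $!";
binmode $fh;
print $fh $res->content;
close $fh or die "close $file: $!";

Decompression could then be handled with Compress::Zlib (or by shelling
out to gzip -d) before the logs go into the single zip file.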
------------------------------
Date: Wed, 10 Sep 2003 12:58:32 -0700
From: "Trent Curry" <tcurrey@no.no.no.i.said.no>
Subject: Re: exit() argument evaluation
Message-Id: <bjnvtg$tm$1@news.astound.net>
Dr. Peter Dintelmann wrote:
> Hi John,
>
> "John W. Krahn" <krahnj@acm.org> wrote in message
> news:3F5C4F58.A0C4CD38@acm.org...
>
> [snip]
>
>> exit (a named unary operator) has higher precedence than == while
>> print and die (list operators (rightward)) have lower precedence.
>
>
> ok; this was a fairly stupid question, sorry. It seems that I
> expected unary builtins to behave like list operators called with
> a one element list. Is this counter-intuitive ;-)?
>
> Peter
Well, I'd say that seems to be how it (read: a named unary) is supposed
to behave. If you use it as you've written it in your subject line
(with the parens), then that solves it.
------------------------------
Date: 10 Sep 2003 14:36:47 -0700
From: quantum_mechanic_1964@yahoo.com (Quantum Mechanic)
Subject: Re: exit() argument evaluation
Message-Id: <f233f2f0.0309101336.3f047013@posting.google.com>
"Dr. Peter Dintelmann" <peter.dintelmann@dresdner-bank.com> wrote:
> ok; this was a fairly stupid question, sorry. It seems that I expected
> unary builtins to behave like list operators called with a one element
> list. Is this counter-intuitive ;-)?
This behavior was most likely chosen to follow the DWIM philosophy --
more often than not, you want named unary operators to grab just one
thing. If they behaved like list operators, and grabbed more than one
thing, what would they do with the extra?
Also, when it doesn't DWIM, it is more likely to generate an obvious
error, rather than eliminate parts or the whole of the following
expression. [Odd behavior is more noticeable than missing behavior.]
The result of a DDWIM [Doesn't DWIM] is more likely to be easily
associated [by the programmer] with the first "list" element, and
using parens should follow.
It might be interesting to have a pipe syntax, so you could have
written:
4==7 ? 0 : 1 | exit
[But then when 4 *does* equal 7, is the result 0, exit(0), or exit(1)?
You'd still need parens.]
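To make the precedence concrete, a small sketch:

# A named unary binds tighter than ==, so this line:
exit 4 == 7 ? 0 : 1;
# parses as ((exit 4) == 7) ? 0 : 1 and always exits with status 4.
# Parens say what was meant:
exit(4 == 7 ? 0 : 1);    # exits with status 1, since 4 != 7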
------------------------------
Date: Wed, 10 Sep 2003 21:17:26 GMT
From: "\"Dandy\" Randy" <ducott@hotmail.com>
Subject: File Locking Follow-up
Message-Id: <GxM7b.927764$ro6.18579193@news2.calgary.shaw.ca>
Hey, after going through several responses to my previous enquiries about
file locking, I have opted to go with the following. Please have a look and
advise me if I'm on the right track. Basically the script is sort of a hit
counter, but with a redirect. The main criticism of my previous scripts was
that the read and write opens were not contained within a single lock
sequence. So ... at least in my mind ... the following script groups the
read and write within a single lock.
THE CODE
#!/usr/bin/perl
use Fcntl qw(:DEFAULT :flock);
# opens a simple dummy file that contains no data
open (LOCKIT, ">data/lockfile.txt") or die "Can't open: $!";
# starts main lock
flock (LOCKIT, LOCK_EX) or die "Can't lock: $!";
# open the real data file for READ
open (DATA, "<data/data.txt") or die "Can't open file: $!";
# starts data READ lock
flock (DATA, LOCK_EX) or die "Can't lock: $!";
# get data contents
$data=<DATA>;
# closes data READ lock and closes data file
close(DATA);
# advance data
$data = $data + 1;
# open the real data file for WRITE
open (DATA, ">data/data.txt") or die "Can't open file: $!";
# starts data WRITE lock
flock (DATA, LOCK_EX) or die "Can't lock: $!";
# writes the advanced data to file
print DATA $data;
# closes data WRITE lock and closes data file
close(DATA);
# closes the main lock
close(LOCKIT);
# redirects user to the required page
$webview="http://www.mysite.com";
print "Location: $webview\n\n";
exit;
Now both the read and write opens are controlled by a parent lock ... what
do you think? Using this method, will my data remain protected in the event
multiple users happen to run the script at the same time? Thanks for all
your help.
Randy
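(For comparison, a common variant keeps one handle open and does the
whole read-modify-write under a single lock, with no close-and-reopen.
A sketch, using the same illustrative path:)

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);

# Open read/write, creating the file if needed, and hold one lock
# across the entire read-modify-write cycle.
sysopen(my $fh, 'data/data.txt', O_RDWR | O_CREAT) or die "Can't open: $!";
flock($fh, LOCK_EX) or die "Can't lock: $!";

my $count = <$fh> || 0;   # empty file counts as zero
$count++;

seek($fh, 0, 0)  or die "Can't seek: $!";
truncate($fh, 0) or die "Can't truncate: $!";
print $fh $count;

close($fh) or die "Can't close: $!";   # closing releases the lock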
------------------------------
Date: Wed, 10 Sep 2003 12:48:17 -0400
From: Marshall Dudley <mdudley@execonn.com>
Subject: Is this a perl problem?
Message-Id: <3F5F55D1.29D0BE0D@execonn.com>
When exec is used to shell out to tar, with tar's output piped back to
stdout, and the download is cancelled after the Apache timeout, tar
locks up and uses 100% of the free CPU time.
I have posted the problem in several forums, but cannot get any ideas on
it. At this point I don't know if this is an Apache problem, FreeBSD
core problem, gtar problem or perl problem.
Code line is:
exec("/usr/bin/tar cflz - $store_path
/home/kingcart/html/store/$username");
Anyone have any ideas on at least program/system I should be working on
to try and resolve this?
Thanks,
Marshall
------------------------------
Date: Wed, 10 Sep 2003 18:17:19 GMT
From: "Jürgen Exner" <jurgenex@hotmail.com>
Subject: Re: Is this a perl problem?
Message-Id: <PUJ7b.18564$98.1510@nwrddc03.gnilink.net>
Marshall Dudley wrote:
> When exec is used to shell out to tar, with tar's output piped back to
> stdout, and the download is cancelled after the Apache timeout, tar
> locks up and uses 100% of the free CPU time.
>
> I have posted the problem in several forums, but cannot get any ideas
> on it. At this point I don't know if this is an Apache problem,
> FreeBSD core problem, gtar problem or perl problem.
>
> Code line is:
>
> exec("/usr/bin/tar cflz - $store_path
> /home/kingcart/html/store/$username");
>
> Anyone have any ideas on at least program/system I should be working
> on to try and resolve this?
Have you tried using Archive::Tar instead of shelling out to /usr/bin/tar?
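(A rough sketch of that suggestion. Gzip output needs Compress::Zlib
installed, and note that, unlike tar(1), Archive::Tar does not recurse
into directories by itself, so a File::Find pass may be needed to build
the file list:)

use strict;
use warnings;
use Archive::Tar;

# Placeholder values standing in for the variables in the original post.
my $store_path = '/some/store/path';
my $username   = 'someuser';
my @files = ($store_path, "/home/kingcart/html/store/$username");

# create_archive(FILE, COMPRESSED, LIST): a true second argument
# gzip-compresses the result (requires Compress::Zlib).
Archive::Tar->create_archive('store.tar.gz', 9, @files)
    or die 'tar failed: ', Archive::Tar->error;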
jue
------------------------------
Date: Wed, 10 Sep 2003 17:26:49 -0400
From: Marshall Dudley <mdudley@execonn.com>
Subject: Re: Is this a perl problem?
Message-Id: <3F5F9719.882367DA@execonn.com>
No, but I will research it.
Thanks,
Marshall
"Jürgen Exner" wrote:
> Marshall Dudley wrote:
> > When exec is used to shell out to tar, with tar's output piped back to
> > stdout, and the download is cancelled after the Apache timeout, tar
> > locks up and uses 100% of the free CPU time.
> >
> > I have posted the problem in several forums, but cannot get any ideas
> > on it. At this point I don't know if this is an Apache problem,
> > FreeBSD core problem, gtar problem or perl problem.
> >
> > Code line is:
> >
> > exec("/usr/bin/tar cflz - $store_path /home/kingcart/html/store/$username");
> >
> > Anyone have any ideas on at least program/system I should be working
> > on to try and resolve this?
>
> Have you tried using Archive::Tar instead of shelling out to /usr/bin/tar?
>
> jue
------------------------------
Date: Wed, 10 Sep 2003 17:33:20 -0400
From: Marshall Dudley <mdudley@execonn.com>
Subject: Re: Is this a perl problem?
Message-Id: <3F5F98A0.1E6CC434@execonn.com>
It appears not to support compression.
Marshall
"Jürgen Exner" wrote:
> Marshall Dudley wrote:
> > When exec is used to shell out to tar, with tar's output piped back to
> > stdout, and the download is cancelled after the Apache timeout, tar
> > locks up and uses 100% of the free CPU time.
> >
> > I have posted the problem in several forums, but cannot get any ideas
> > on it. At this point I don't know if this is an Apache problem,
> > FreeBSD core problem, gtar problem or perl problem.
> >
> > Code line is:
> >
> > exec("/usr/bin/tar cflz - $store_path /home/kingcart/html/store/$username");
> >
> > Anyone have any ideas on at least program/system I should be working
> > on to try and resolve this?
>
> Have you tried using Archive::Tar instead of shelling out to /usr/bin/tar?
>
> jue
------------------------------
Date: Wed, 10 Sep 2003 17:31:28 -0400
From: "Paanwa" <paanwa@hotmail.com>
Subject: List Hash 'Field' Names
Message-Id: <3f5f9830$0$12611$a0465688@nnrp.fuse.net>
I am trying to write more automated code - either I am a lazy typist or am
just trying to get more with it!
I have a hash I define; I have a form with textboxes for each hash element.
When the form is submitted the user gets a chance to review their data prior
to finalizing (writing to file).
The form is running from the same piece of code; I would like it to use the
hash values if it is the first time through (the hidden form object named
cycle is set to 0), but use the submitted (either changed or left alone)
data for the second time through - which is the 'review' part mentioned
above (cycle is set to 1).
I really don't want to set my form values manually:
(summary, not actual code)
if (param('cycle') == 0)
{
set Address1 = $hash{$key}{Address1};
}
else
{
set Address1 = param('Address1');
}
I would like to loop through the names of the hash elements like this:
(summary, not actual code)
foreach element_name (keys %hash)
{
if (param('cycle') == 0)
{
set form element_name value = $hash{$key}{element_name};
}
else
{
set form element_name value = param('element_name');
}
}
Note: I name each form object after the corresponding hash name.
So, can this be done?
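One way it could be done, sketched with CGI.pm; the field names and
defaults here are illustrative, following the post's convention of
naming each form field after its hash key:

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# Illustrative defaults; in the real script these come from the hash.
my %defaults = (
    Address1 => '123 Main St',
    City     => 'Anytown',
);

my $cycle = param('cycle') || 0;

print header, start_form;
for my $name (sort keys %defaults) {
    # First pass: seed the field from the hash.
    # Review pass: reuse whatever the user submitted.
    my $value = $cycle == 0 ? $defaults{$name} : param($name);
    print "$name: ",
          textfield(-name => $name, -value => $value, -override => 1),
          br;
}
print hidden(-name => 'cycle', -value => 1, -override => 1),
      submit, end_form;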
------------------------------
Date: 10 Sep 2003 21:29:45 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: looping every x seconds
Message-Id: <bjo549$4l$1@mamenchi.zrz.TU-Berlin.DE>
Jürgen Exner <jurgenex@hotmail.com> wrote in comp.lang.perl.misc:
> kernal32 wrote:
> > In PERL how do I execute a block of code every "x" seconds?
> > I'm talking a loop or similar but it runs every x seconds, where x is
> > given.
>
> That is not possible from Perl.
Why on earth not?
> You can put the Perl program to rest for x seconds using sleep(). But
> this is different from running the loop every x seconds.
> Just imagine it takes y seconds to run the loop. Then the next start of the
> loop will happen x+y seconds after the previous start.
What's stopping you from measuring how long the loop took and subtracting
that from the sleep time? Or something equivalent and more robust? Within
the precision of the timer, it is quite possible to run a Perl loop
at specified intervals.
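A sketch of that, using Time::HiRes (in the core as of Perl 5.8) for
sub-second precision:

use strict;
use warnings;
use Time::HiRes qw(time sleep);

my $x    = 5;            # run the block every $x seconds
my $next = time();
while (1) {
    # ... the block of code to repeat ...

    $next += $x;
    my $remaining = $next - time();       # subtract how long the block took
    sleep($remaining) if $remaining > 0;  # skip the sleep if we overran
}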
Anno
------------------------------
Date: 10 Sep 2003 13:18:32 -0700
From: clarsen@chicagogear-dojames.com (Chris Larsen)
Subject: Mime::Parser::Filer problemo
Message-Id: <7c1d85a.0309101218.9886c18@posting.google.com>
Hi,
Trying to run MIME::Parser from amavisd (in chroot jail) and am
getting a writing/reading tempfile error. Permissions are 777 and
amavis ownership on any possible tmp dir. Still getting the same
error. This issue strays from the realm of the amavis newsgroup, so
I'm hoping y'all can give a brother some love.
anyone??
thanks
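(In case it helps: MIME::Parser can be pointed at an explicit output
directory for its temp files. A sketch; the paths are illustrative, and
inside a chroot jail the directory must exist and be writable as seen
from *inside* the jail:)

use strict;
use warnings;
use MIME::Parser;

my $parser = MIME::Parser->new;
# Where extracted parts and temp files go; must be valid post-chroot.
$parser->output_dir('/var/amavis/tmp');

my $entity = $parser->parse_open('/var/amavis/tmp/msg.eml')
    or die "parse failed: $!";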
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc. For subscription or unsubscription requests, send
the single line:
subscribe perl-users
or:
unsubscribe perl-users
to almanac@ruby.oce.orst.edu.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.
For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address; I don't have time to
answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 5480
***************************************