[24497] in Perl-Users-Digest
Perl-Users Digest, Issue: 6677 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Thu Jun 10 18:10:45 2004
Date: Thu, 10 Jun 2004 15:10:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Thu, 10 Jun 2004 Volume: 10 Number: 6677
Today's topics:
Re: Looking for Perl work? <dha@panix2.panix.com>
Re: Looking for Perl work? <postmaster@castleamber.com>
Re: Looking for Perl work? <usenet@morrow.me.uk>
Re: Object oriented form parsing <1usa@llenroc.ude>
Re: parsing file through an array <jgibson@mail.arc.nasa.gov>
Re: parsing file through an array <dwall@fastmail.fm>
Re: parsing file through an array <andrea@spitaleri.fsnet.co.uk>
Re: Passing custom sessionID to cookie 'value' <Juha.Laiho@iki.fi>
perl IF DBI::errsrt <javier@t-online.de>
Re: perl IF DBI::errsrt <usenet@morrow.me.uk>
Re: perl IF DBI::errsrt <javier@t-online.de>
Re: Perl TIMOUT <emschwar@pobox.com>
Re: Print a section of a text file. (Roy)
Reading chunks from file? <bryan@akanta.com>
Re: Reading chunks from file? <ittyspam@yahoo.com>
Re: Reading chunks from file? ctcgag@hotmail.com
Re: Reading chunks from file? <nobull@mail.com>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Thu, 10 Jun 2004 18:36:20 +0000 (UTC)
From: "David H. Adler" <dha@panix2.panix.com>
Subject: Re: Looking for Perl work?
Message-Id: <slrncchah4.s83.dha@panix2.panix.com>
On 2004-06-10, John Bokma <postmaster@castleamber.com> wrote:
> And no, you don't get job offers via this group (or at least I never
> had), it was a joke, hence the smiley.
More to the point, you shouldn't try...
As I've been known to say:
You have posted a job posting or a resume in a technical group.
Longstanding Usenet tradition dictates that such postings go into
groups with names that contain "jobs", like "misc.jobs.offered", not
technical discussion groups like the ones to which you posted.
Had you read and understood the Usenet user manual posted frequently to
"news.announce.newusers", you might have already known this. :) (If
n.a.n is quieter than it should be, the relevant FAQs are available at
http://www.faqs.org/faqs/by-newsgroup/news/news.announce.newusers.html)
Another good source of information on how Usenet functions is
news.newusers.questions (information from which is also available at
http://www.geocities.com/nnqweb/).
Please do not explain your posting by saying "but I saw other job
postings here". Just because one person jumps off a bridge, doesn't
mean everyone does. Those postings are also in error, and I've
probably already notified them as well.
If you have questions about this policy, take it up with the news
administrators in the newsgroup news.admin.misc.
http://jobs.perl.org may be of more use to you.
Yours for a better usenet,
dha
--
David H. Adler - <dha@panix.com> - http://www.panix.com/~dha/
It didn't bother me before there was a for modifier, and now that
there is one, it still doesn't bother me. I'm just not very easy to
bother. - Larry Wall
------------------------------
Date: Thu, 10 Jun 2004 14:17:26 -0500
From: John Bokma <postmaster@castleamber.com>
Subject: Re: Looking for Perl work?
Message-Id: <40c8b3c7$0$198$58c7af7e@news.kabelfoon.nl>
David H. Adler wrote:
> On 2004-06-10, John Bokma <postmaster@castleamber.com> wrote:
>
>
>>And no, you don't get job offers via this group (or at least I never
>>had), it was a joke, hence the smiley.
>
> More to the point, you shouldn't try...
>
> As I've been known to say:
>
> You have posted a job posting or a resume in a technical group.
LOL, sure. Let's break this group down into 3 categories:
1 - shoppers - they badly need help with a script, sometimes downloaded
from a CGI resource site. They need help, for free. They don't read the
documentation. They see this group as a helpdesk.
2 - contributors - people who ask questions only after reading the
documentation; sometimes they help others. Most have some knowledge
of Perl.
3 - lurkers - they (try to) learn from reading solutions or just read
for fun.
Can you please explain which category is going to offer me a project?
The first are in it for the free ride, the second can do it themselves,
and the third most likely can too.
Yeah, I have seen project offers now and then. They fall into the no. 1
category. People drop an ad, just sit and wait, and most often pick the
cheapest programmer.
I am seriously looking for a good site that offers Perl projects. Here I
will probably find none, and jobs.perl.org doesn't work for me. And all
those bidding sites are loaded with $5/hour people.
--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
------------------------------
Date: Thu, 10 Jun 2004 19:26:22 +0000 (UTC)
From: Ben Morrow <usenet@morrow.me.uk>
Subject: Re: Looking for Perl work?
Message-Id: <caacku$2e1$1@wisteria.csv.warwick.ac.uk>
Quoth John Bokma <postmaster@castleamber.com>:
> David H. Adler wrote:
> > On 2004-06-10, John Bokma <postmaster@castleamber.com> wrote:
> >
> >>And no, you don't get job offers via this group (or at least I never
> >>had), it was a joke, hence the smiley.
> >
> > More to the point, you shouldn't try...
> >
> > As I've been known to say:
> >
> > You have posted a job posting or a resume in a technical group.
>
> LOL, sure. Let's break this group down into 3 categories:
>
<snip>
>
> I am seriously looking for a good site that offers Perl projects. Here I
> probably find none, jobs.perl.org doesn't work for me. And all those
> bidding sites are loaded with $5/hour people.
The point is that posts about jobs are OT in this group, and you have
made the same request several times recently, and received what answers
you are going to get here (jobs.perl.org). If that answer is not to your
satisfaction, for whatever reason, please ask elsewhere. Asking here
again will not give you more answers, it will merely irritate people.
Ben
--
'Deserve [death]? I daresay he did. Many live that deserve death. And some die
that deserve life. Can you give it to them? Then do not be too eager to deal
out death in judgement. For even the very wise cannot see all ends.'
ben@morrow.me.uk
------------------------------
Date: 10 Jun 2004 19:39:04 GMT
From: "A. Sinan Unur" <1usa@llenroc.ude>
Subject: Re: Object oriented form parsing
Message-Id: <Xns95049F3684F86asu1cornelledu@132.236.56.8>
"David K. Wall" <dwall@fastmail.fm> wrote in
news:Xns95046E947FA0Bdkwwashere@216.168.3.30:
> A. Sinan Unur <1usa@llenroc.ude> wrote:
>
>> I am testing out an idea and trying to see whether it makes sense.
>>
>> I have a pretty basic CGI application that has multiple forms with
>> a number of common elements on them. But depending on application
>> state some elements may appear on a given form and others may not.
>>
>> I thought it might make sense to encapsulate the validation and
>> untainting of input in a simple object.
>
> Have you looked at CGI::FormBuilder?
I did, but all the HTML stuff is in HTML::Template and I want to stick
with that. OTOH, I may well be missing something.
> I've been playing with a little module for setting up a quick form for
> searching a single database table (or view), where each field is an
> object, and a form/request is handled by an array of those objects.
> It's *very* crude at the moment, as I'm pretty new to OOP, but I may
> eventually use it for something real. (at the very least I'm getting
> some practice, anyway) If you're interested I can send you a copy --
> it's only a few hundred lines.
I would appreciate that, actually. I am afraid I can't figure out from the
description above exactly what you are doing and how, and it would be
useful for me to see it.
If de-munging my email address below causes problems, you might want to
try:
perl -e "print pack q{H*}, q{73696e616e40756e75722e636f6d}"
Thank you.
Sinan.
--
A. Sinan Unur
1usa@llenroc.ude (reverse each component for email address)
------------------------------
Date: Thu, 10 Jun 2004 08:13:56 -0700
From: Jim Gibson <jgibson@mail.arc.nasa.gov>
Subject: Re: parsing file through an array
Message-Id: <100620040813565567%jgibson@mail.arc.nasa.gov>
In article <ca9je8$d3f$3@wisteria.csv.warwick.ac.uk>, Ben Morrow
<usenet@morrow.me.uk> wrote:
> Quoth spiritelllo@interfree.it (Andrea Spitaleri):
> > Hi
> > I have a file and I would like to remove from it the lines that match
> > the values from an array.
> > Here is the code that I unsuccessfully tried:
[ initial part of program and good advice from Ben snipped ]
>
> LINE: while (<>) {
> for my $i (@h) {
> next if / +\Q$i/;
--------------^
I think that needs to be
next LINE if / +\Q$i/;
[ rest of program and more good advice snipped ]
==================================================================
And just for kicks and because I am trying to learn how to use the
Benchmark module, I ran the following on my Mac G4 using perl 5.8.2:
Jim 56% cat spitaleri.pl
#!/usr/local/bin/perl
use strict;
use warnings;
use Benchmark qw(:all);
my @h = qw/H N/;
my @lines = <DATA>;
print scalar(@lines), " lines read from input\n";
my $re = join '|', map { qr/ +\Q$_/ } @h;
print "re=/$re/\n";
my $count = 10000;
timethese( $count, {
'Regex' => \&do_regex,
'Grep' => \&do_grep,
'Loop' => \&do_loop
});
sub do_grep
{
my @out;
for my $line (@lines) {
push(@out,$line) unless grep($line =~ / +\Q$_/,@h);
}
1;
}
sub do_regex
{
my @out;
for (@lines) {
next if /$re/;
push(@out,$_);
}
1;
}
sub do_loop
{
my @out;
LINE: for(@lines) {
for my $i (@h) {
next LINE if / +\Q$i/;
}
push(@out,$_);
}
1;
}
__END__
2 C2 -0.4158 0.5051 -0.2805 C.3 1 UNK 0.2800
3 H3 -0.0655 0.8795 0.6861 H 1 UNK 0.0000
13 H13 -2.5997 0.4032 -0.4902 H 1 UNK 0.0000
14 N14 -2.0421 -0.8226 1.0724 N.2 1 UNK 0.0476
15 C15 -1.9418 -2.1366 1.4487 C.2 1 UNK 0.0365
1 O1 -0.4981 1.6455 -1.1635 O.3 1 UNK -0.6800
2 C2 -0.4158 0.5051 -0.2805 C.3 1 UNK 0.2800
Jim 57% spitaleri.pl
8 lines read from input
re=/(?-xism: +H)|(?-xism: +N)/
Benchmark: timing 10000 iterations of Grep, Loop, Regex...
Grep: 20 wallclock secs ( 4.54 usr + 0.00 sys = 4.54 CPU) @
2202.64/s (n=10000)
Loop: 17 wallclock secs ( 3.73 usr + 0.00 sys = 3.73 CPU) @
2680.97/s (n=10000)
Regex: 19 wallclock secs ( 3.52 usr + 0.00 sys = 3.52 CPU) @
2840.91/s (n=10000)
------------------------------
Date: Thu, 10 Jun 2004 15:19:42 -0000
From: "David K. Wall" <dwall@fastmail.fm>
Subject: Re: parsing file through an array
Message-Id: <Xns9504733CFF400dkwwashere@216.168.3.30>
Jim Gibson <jgibson@mail.arc.nasa.gov> wrote:
> In article <4de1519a.0406100239.6a902668@posting.google.com>,
> Andrea Spitaleri <spiritelllo@interfree.it> wrote:
>
>> Hi
>> I have a file and I would like to remove from it the lines that
>> match the values from an array.
>> Here is the code that I unsuccessfully tried:
>> #!/usr/bin/perl
>>
>> use warnings;
>> use strict;
>>
>> open (FILE,"<$ARGV[0]") or die "$!";
>>
>>
>> my @h = ("H","N");
>>
>> while (my $line=<FILE>){
>> chomp $line;
>> foreach my $i (@h){
>> next if ($line=~ / +$i/);
>> print "$line\n";
>> }
>> }
>
> And to add to what Anno and Ben have already suggested (putting me
> in august company, indeed), you may also use grep with the array
> of strings to match:
>
> my @h = qw/H N/;
> while (my $line = <FILE>){
> print $line unless grep($line =~ / +\Q$_/,@h);
>}
>
> I am not claiming this is better or faster, just different. :)
>
Or use Tie::File, as the FAQ suggests:
use strict;
use warnings;
use Tie::File;
my @lines;
tie @lines, 'Tie::File', 'filename' or die "Error tieing file: $!";
@lines = grep { !/ H| N/ } @lines;
untie @lines;
------------------------------
Date: Thu, 10 Jun 2004 18:27:09 +0100
From: xspirix <andrea@spitaleri.fsnet.co.uk>
Subject: Re: parsing file through an array
Message-Id: <caa5u4$4et$1@newsg2.svr.pol.co.uk>
Thanks again for the clear and exhaustive explanations. :D
I am learning a lot with you... :))
thanks again
and
------------------------------
Date: Thu, 10 Jun 2004 19:22:03 GMT
From: Juha Laiho <Juha.Laiho@iki.fi>
Subject: Re: Passing custom sessionID to cookie 'value'
Message-Id: <caacbn$rq8$1@ichaos.ichaos-int>
"Robert TV" <ducott_99@yahoo.com> said:
>Hello, i'm writing a script to generate a md5 sessionID and then set it to a
>cookie. I'll show you the script first:
>
>#!/usr/bin/perl
>
>use CGI;
>use CGI::Cookie;
>use CGI::Carp qw(fatalsToBrowser);
>use Digest::MD5 'md5_hex';
A nit, but use something that is not predictable as the session id.
MD5 of something that is predictable is still predictable, and localtime
is rather easily predictable. Output of 'rand' should be ok; so, instead of
>$timeID = localtime;
>$sessionID = md5_hex("$timeID"); #create sessionID out of localtime
have
$sessionID = md5_hex(rand());
And while you're at it, read the perl FAQ to find the answer to the question
'What's wrong with always quoting "$vars"?'. Also, running your code
without warnings and without strictures (so, neither "use warnings;" nor
"use strict;" lines at the beginning of your code) is asking for trouble.
Enabling these will require slightly more work when writing your scripts,
but it will also most probably save you from enough error situations to
more than make up for the added effort.
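A minimal sketch along those lines (illustrative only: mixing rand(), the process id, and the time is harder to guess than localtime alone, but still not cryptographically strong; a dedicated random-token module would be the robust choice for anything security-sensitive):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 'md5_hex';

# Mix several hard-to-predict inputs rather than localtime alone.
# This is better-than-localtime, not "secure".
my $session_id = md5_hex( rand() . $$ . time() );
print "$session_id\n";
```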
--
Wolf a.k.a. Juha Laiho Espoo, Finland
(GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)
------------------------------
Date: Thu, 10 Jun 2004 18:20:03 +0200
From: Xaver Biton <javier@t-online.de>
Subject: perl IF DBI::errsrt
Message-Id: <caa1ng$s65$06$1@news.t-online.com>
Hi,
I'm writing a program which will be used to migrate a mysql DB to another
mysql DB.
Because the data quantity is relatively big, if an error occurs while
inserting a record in the new db I would like to deviate this record
error table.
How can I achieve that?
If someone could make a little example I would be grateful.
Cheers.
Xaver
------------------------------
Date: Thu, 10 Jun 2004 19:08:54 +0000 (UTC)
From: Ben Morrow <usenet@morrow.me.uk>
Subject: Re: perl IF DBI::errsrt
Message-Id: <caabk6$1kl$2@wisteria.csv.warwick.ac.uk>
Quoth Xaver Biton <javier@t-online.de>:
>
> I'writing a program which will be used to migrate a mysql DB to another
> mysql DB.
>
> Because the data quantity is relative big, if an arror occur while
> inserting a record in the new db I would like to deviate this record
> error table.
I'm afraid I don't understand you here: do you mean 'if an error
occurred I would like to insert the record into a separate table of
errors'? That shouldn't be hard; I'm not sure what it would gain you,
though. In particular, what do you want to do if the insert into the
error table fails as well?
Ben
--
'Deserve [death]? I daresay he did. Many live that deserve death. And some die
that deserve life. Can you give it to them? Then do not be too eager to deal
out death in judgement. For even the very wise cannot see all ends.'
ben@morrow.me.uk
------------------------------
Date: Thu, 10 Jun 2004 23:49:36 +0200
From: Xaver Biton <javier@t-online.de>
Subject: Re: perl IF DBI::errsrt
Message-Id: <caal1d$cr2$04$1@news.t-online.com>
Ben Morrow wrote:
> I'm afraid I don't understand you here: do you mean 'if an error
> occurred I would like to insert the record into a separate table of
> errors'? That shouldn't be hard; I'm not sure what it would gain you,
> though. In particular, what do you want to do if the insert into the
> error table fails as well?
hi;
Sorry if I didn't explain the concept clearly enough. What I mean is, for
example, the program has to insert 30000 records into a table; now if
record 25550 causes an error then the whole process breaks. I want to
avoid this.
I would like to insert the records which fail into a separate table and
let the program finish the work.
The biggest problem is that I must check the referential integrity of the
data in the new database; the old database has no referential integrity
and many records have no reference. Also, I must transfer about 20 GB of
data and I can't stop and go with such a volume.
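A hedged sketch of that divert-on-error idea with DBI (the DSN, credentials, table and column names, and fetch_next_row() are placeholders for illustration, not from this thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection; RaiseError is off so a failed insert
# returns false instead of killing the whole migration.
my $dbh = DBI->connect('dbi:mysql:newdb', 'user', 'password',
                       { RaiseError => 0, PrintError => 1 })
    or die "connect failed: $DBI::errstr";

my $ins = $dbh->prepare('INSERT INTO target (id, data) VALUES (?, ?)');
my $err = $dbh->prepare(
    'INSERT INTO failed_rows (id, data, errmsg) VALUES (?, ?, ?)');

while (my $row = fetch_next_row()) {    # placeholder for the source reader
    unless ($ins->execute($row->{id}, $row->{data})) {
        # Divert the failing row into the error table and keep going.
        $err->execute($row->{id}, $row->{data}, $dbh->errstr);
    }
}
```

As Ben notes, this still leaves open what to do if the insert into the error table itself fails; logging to a flat file is one fallback.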
regards
Xaver
------------------------------
Date: Thu, 10 Jun 2004 11:39:51 -0600
From: Eric Schwartz <emschwar@pobox.com>
Subject: Re: Perl TIMOUT
Message-Id: <etozn7bmhw8.fsf@fc.hp.com>
wwfpalmaria@libero.it (Achille) writes:
> I run a PERL script from a browser (Internet Explorer) but the execution
> is very slow so IIS stops my perl.exe
I can't be arsed to look it up, but Randal Schwartz has a column or
two in his WebTechniques series on how to track and report progress on
long-running jobs started from a CGI script. You could do that
instead of starting a long-running job and forcing the user to wait
until you're done.
-=Eric
--
Come to think of it, there are already a million monkeys on a million
typewriters, and Usenet is NOTHING like Shakespeare.
-- Blair Houghton.
------------------------------
Date: 10 Jun 2004 09:17:05 -0700
From: roy@colibase.bham.ac.uk (Roy)
Subject: Re: Print a section of a text file.
Message-Id: <6bcfdab6.0406100817.876da86@posting.google.com>
> Tut tut... only if $/="\n" :)
Only if $/ eq "\n", surely? 8^)
Roy.
------------------------------
Date: Thu, 10 Jun 2004 16:35:54 GMT
From: Bryan <bryan@akanta.com>
Subject: Reading chunks from file?
Message-Id: <K50yc.69038$EF6.63618@newssvr29.news.prodigy.com>
Hi, I'm reading in a file in fasta format:
>header
DATADATADATA
DATADATA
>header
DATA
I have been doing this:
open (INFILE, "< $filename") or die "Cannot open $filename for read\n\n";
undef $/;
my @chunks = split(/>/, <INFILE>);
$/ = "\n";
close INFILE;
This works, but this split loses the '>' from the header part of the
file, which I would rather keep for identifying header info later. So
first, why do I lose the '>' on this particular split, is there
something I can do to keep it? Second, is there a better way to split
this file into chunks than I am doing?
Thanks,
Bryan
------------------------------
Date: Thu, 10 Jun 2004 13:05:39 -0400
From: Paul Lalli <ittyspam@yahoo.com>
Subject: Re: Reading chunks from file?
Message-Id: <20040610130257.U8971@dishwasher.cs.rpi.edu>
On Thu, 10 Jun 2004, Bryan wrote:
> Hi, I'm reading in a file in fasta format:
> >header
> DATADATADATA
> DATADATA
>
> >header
> DATA
>
> I have been doing this:
> open (INFILE, "< $filename") or die "Cannot open $filename for read\n\n";
> undef $/;
> my @chunks = split(/>/, <INFILE>);
> $/ = "\n";
> close INFILE;
>
> This works, but this split loses the '>' from the header part of the
> file, which I would rather keep for identifying header info later. So
> first, why do I lose the '>' on this particular split, is there
> something I can do to keep it?
Have you read the documentation for split? The answer to both questions
is found within.
perldoc -f split
> Second, is there a better way to split
> this file into chunks than I am doing?
Do you need to store the whole file in memory at once? Might it be a
better idea to read one record at a time? Rather than undefining the
input record separator, maybe you want to set that variable to the actual
string which separates your records, and then read a file in one record at
a time.
perldoc perlop
for info on $/
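A minimal sketch of that record-at-a-time approach (the sample data is inlined via a scalar reference for illustration; a real script would open the actual file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample FASTA-style data; open-on-scalar-ref needs perl 5.8+.
my $data = ">header1\nDATADATA\n>header2\nDATA\n";
open my $fh, '<', \$data or die "open: $!";

local $/ = "\n>";      # each read ends just before the next '>' header
my @records;
while (my $rec = <$fh>) {
    chomp $rec;        # strip the trailing "\n>" separator, if present
    $rec =~ s/^>//;    # the first record still carries its leading '>'
    push @records, $rec;
}
close $fh;
print scalar(@records), " records\n";
```

Each element of @records is then one header line plus its data, with the '>' markers already stripped.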
Hope this helps,
Paul Lalli
------------------------------
Date: 10 Jun 2004 17:06:34 GMT
From: ctcgag@hotmail.com
Subject: Re: Reading chunks from file?
Message-Id: <20040610130634.443$Bu@newsreader.com>
Bryan <bryan@akanta.com> wrote:
> Hi, I'm reading in a file in fasta format:
> >header
> DATADATADATA
> DATADATA
>
> >header
> DATA
>
> I have been doing this:
> open (INFILE, "< $filename") or die "Cannot open $filename for
> read\n\n"; undef $/;
> my @chunks = split(/>/, <INFILE>);
> $/ = "\n";
> close INFILE;
>
> This works, but this split loses the '>' from the header part of the
> file, which I would rather keep for identifying header info later. So
> first, why do I lose the '>' on this particular split, is there
> something I can do to keep it?
You lose the '>' because that is what split does.
You could keep it by using a look-ahead assertion.
split /(?=>)/ , <DATA>
This will probably produce an empty string or a string containing just
whitespace as the first element.
> Second, is there a better way to split
> this file into chunks than I am doing?
If the file is big, it would probably be better not to slurp it all
at once. You could set $/ = '>', but then you would have a '>' at the
end of every record (except the last), and not one at the beginning of
every record. (You would also have a blank record as the first one read.)
This is kind of ugly, but what you gonna do?
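For the look-ahead variant, a small self-contained sketch (sample data inlined for illustration; the grep defensively drops any empty leading field):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $text = ">header1\nDATA\n>header2\nDATA\n";

# Split on a zero-width look-ahead so each chunk keeps its '>' marker;
# grep { length } discards an empty first field if one is produced.
my @chunks = grep { length } split /(?=>)/, $text;
print scalar(@chunks), " chunks\n";
```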
Xho
--
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service $9.95/Month 30GB
------------------------------
Date: 10 Jun 2004 18:16:48 +0100
From: Brian McCauley <nobull@mail.com>
Subject: Re: Reading chunks from file?
Message-Id: <u9fz93e3jz.fsf@wcl-l.bham.ac.uk>
ctcgag@hotmail.com writes:
> If the file is big, it would probably be better not to slurp it all
> at once. You could set $/ ='>', but then you would have an '>' at the
> end of every record (except the last), and not one at the beginning if
> every record. (You would also have a blank record as the first one read).
> This is kind of ugly, but what you gonna do?
Perhaps File::Stream would help?
--
\\ ( )
. _\\__[oo
.__/ \\ /\@
. l___\\
# ll l\\
###LL LL\\
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 6677
***************************************