[31619] in Perl-Users-Digest
Perl-Users Digest, Issue: 2878 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Mar 17 14:09:26 2010
Date: Wed, 17 Mar 2010 11:09:06 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Wed, 17 Mar 2010 Volume: 11 Number: 2878
Today's topics:
array into hash array with qw <slick.users@gmail.com>
Re: array into hash array with qw <m@rtij.nl.invlalid>
Re: array into hash array with qw <spamtrap@shermpendley.com>
Re: array into hash array with qw (Jens Thoms Toerring)
Controlling traversal depth using File::Find <google@markginsburg.com>
Re: Controlling traversal depth using File::Find (Randal L. Schwartz)
Re: Controlling traversal depth using File::Find <google@markginsburg.com>
Re: Controlling traversal depth using File::Find <ben@morrow.me.uk>
Re: FAQ 5.21 Why can't I just open(FH, ">file.lock")? <kst-u@mib.org>
Re: FAQ 5.21 Why can't I just open(FH, ">file.lock")? <brian.d.foy@gmail.com>
Re: Fast way to fill memory <newsojo@web.de>
Re: Fast way to fill memory <derykus@gmail.com>
Re: Fast way to fill memory <cartercc@gmail.com>
Re: to RG - Lisp lunacy and Perl psychosis <nick_keighley_nospam@hotmail.com>
Re: Well, that's the most obscure Perl bug I've ever se <KBfoMe@realdomain.net>
Word count program that can remove of common words and <pengyu.ut@gmail.com>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Wed, 17 Mar 2010 00:06:09 -0700 (PDT)
From: Slickuser <slick.users@gmail.com>
Subject: array into hash array with qw
Message-Id: <cf3edc4e-9975-4034-af4c-1a077c0a8156@z1g2000prc.googlegroups.com>
my @ar = ('x','1','y','z');
my %hashVar =
(
'key1' => [qw(1 2 3 4 5 5 6 7 8 9)],
'key2' => [qw(a b c d e) ]
);
The values in @ar are populated by parsing a file and pushed in.
How do I add @ar to 'key2'
so that it will contain a b c d e x 1 y z?
Thanks.
------------------------------
Date: Wed, 17 Mar 2010 08:17:26 +0100
From: Martijn Lievaart <m@rtij.nl.invlalid>
Subject: Re: array into hash array with qw
Message-Id: <69c677-cj6.ln1@news.rtij.nl>
On Wed, 17 Mar 2010 00:06:09 -0700, Slickuser wrote:
> my @ar = ('x','1','y','z');
> my %hashVar =
> (
> 'key1' => [qw(1 2 3 4 5 5 6 7 8 9)],
> 'key2' => [qw(a b c d e) ]
> );
>
> @ar values are being populate my parsing a file and get push in.
>
> How do I add @ar into 'key2'?
> so it will contain a b c d e x 1 y z
push @{$hashVar{key2}}, @ar;
Or more clearly:
my $arrayref = $hashVar{key2};
push @$arrayref, @ar;
HTH,
M4
------------------------------
Date: Wed, 17 Mar 2010 03:22:19 -0400
From: Sherm Pendley <spamtrap@shermpendley.com>
Subject: Re: array into hash array with qw
Message-Id: <m2634vxvis.fsf@shermpendley.com>
Slickuser <slick.users@gmail.com> writes:
> my @ar = ('x','1','y','z');
> my %hashVar =
> (
> 'key1' => [qw(1 2 3 4 5 5 6 7 8 9)],
> 'key2' => [qw(a b c d e) ]
> );
>
> @ar values are being populate my parsing a file and get push in.
>
> How do I add @ar into 'key2'?
> so it will contain a b c d e x 1 y z
push @{$hashVar{'key2'}}, @ar;
See also:
perldoc perlref
perldoc perllol
sherm--
------------------------------
Date: 17 Mar 2010 07:57:07 GMT
From: jt@toerring.de (Jens Thoms Toerring)
Subject: Re: array into hash array with qw
Message-Id: <80bgajFhbaU1@mid.uni-berlin.de>
Slickuser <slick.users@gmail.com> wrote:
> my @ar = ('x','1','y','z');
> my %hashVar =
> (
> 'key1' => [qw(1 2 3 4 5 5 6 7 8 9)],
> 'key2' => [qw(a b c d e) ]
> );
> @ar values are being populate my parsing a file and get push in.
> How do I add @ar into 'key2'?
> so it will contain a b c d e x 1 y z
Others have pointed out how to stuff it into the array pointed
to by the 'key2' element when it already exists, but you can do
it also when creating the hash, using
my %hashVar =
(
key1 => [ qw( 1 2 3 4 5 5 6 7 8 9 ) ],
key2 => [ qw( a b c d e ), @ar ]
);
Regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
------------------------------
Date: Tue, 16 Mar 2010 16:10:32 -0700 (PDT)
From: Mark <google@markginsburg.com>
Subject: Controlling traversal depth using File::Find
Message-Id: <a4d53c21-e82b-4ae9-9057-ee764a769580@i25g2000yqm.googlegroups.com>
I am using File::Find to process files in a specified directory tree.
I would like to have find() only traverse the top-level directory
(controlling how deep to go would also be helpful).
This is the best I could do. This code breaks if a relative directory
path is specified. It seems that File::Find should provide a way to
specify the maximum depth but I could not find anything.
use strict ;
use warnings;
use File::Find;
$| = 1;
my $TopDir = '/temp/sub';
find({wanted => \&wanted},$TopDir);
sub wanted {
if (-d $File::Find::name && $File::Find::name ne $TopDir) {
$File::Find::prune = 1 ;
return;
}
return if -d;
# Processing for the file goes here
print "file: $File::Find::name\n";
}
------------------------------
Date: Tue, 16 Mar 2010 18:06:02 -0700
From: merlyn@stonehenge.com (Randal L. Schwartz)
To: Mark <google@markginsburg.com>
Subject: Re: Controlling traversal depth using File::Find
Message-Id: <86r5njn4ed.fsf@blue.stonehenge.com>
>>>>> "Mark" == Mark <google@markginsburg.com> writes:
Mark> I am using File::Find to process files in a specified directory tree.
Mark> I would like to be have find() only traverse a top level directory
Why not just glob it then?
The whole point of File::Find is that glob goes only one level deep!
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.vox.com/ for Smalltalk and Seaside discussion
------------------------------
Date: Tue, 16 Mar 2010 19:05:24 -0700 (PDT)
From: Mark <google@markginsburg.com>
Subject: Re: Controlling traversal depth using File::Find
Message-Id: <8f40d5a0-1fb5-4ff2-9eea-cf4ddb4a97c3@q15g2000yqj.googlegroups.com>
> Why not just glob it then?
>
> The whole point of File::Find is because glob is only one level!
The user will specify, as an input parameter, how deep to traverse a
directory tree. I was hoping to be able to use File::Find for all
depths, including the depth of 0. I suppose I could treat depth 0 as
a special condition and use glob in that case but I would like to
avoid that if File::Find is capable.
------------------------------
Date: Wed, 17 Mar 2010 02:33:31 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Controlling traversal depth using File::Find
Message-Id: <rkr577-gs22.ln1@osiris.mauzo.dyndns.org>
Quoth Mark <google@markginsburg.com>:
> I am using File::Find to process files in a specified directory tree.
> I would like to be have find() only traverse a top level directory
> (controlling how deep to go would also be helpful).
File::Find::Rule provides a ->maxdepth method, which does exactly this.
It also provides an interface much more like find(1), which is generally
easier to work with.
Ben
------------------------------
Date: Tue, 16 Mar 2010 12:38:15 -0700
From: Keith Thompson <kst-u@mib.org>
Subject: Re: FAQ 5.21 Why can't I just open(FH, ">file.lock")?
Message-Id: <lnfx40f460.fsf@nuthaus.mib.org>
PerlFAQ Server <brian@theperlreview.com> writes:
> 5.21: Why can't I just open(FH, ">file.lock")?
>
> A common bit of code NOT TO USE is this:
>
> sleep(3) while -e "file.lock"; # PLEASE DO NOT USE
> open(LCK, "> file.lock"); # THIS BROKEN CODE
>
> This is a classic race condition: you take two steps to do something
> which must be done in one. That's why computer hardware provides an
> atomic test-and-set instruction. In theory, this "ought" to work:
>
> sysopen(FH, "file.lock", O_WRONLY|O_EXCL|O_CREAT)
> or die "can't open file.lock: $!";
>
> except that lamentably, file creation (and deletion) is not atomic over
> NFS, so this won't work (at least, not every time) over the net. Various
> schemes involving link() have been suggested, but these tend to involve
> busy-wait, which is also less than desirable.
Should this be updated to use the more modern form of open()?
open($LCK, '>', 'file.lock');
And likewise for the sysopen call, which should probably use a scalar
object rather than a bareword ($FH rather than FH).
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
------------------------------
Date: Wed, 17 Mar 2010 10:45:38 -0500
From: brian d foy <brian.d.foy@gmail.com>
Subject: Re: FAQ 5.21 Why can't I just open(FH, ">file.lock")?
Message-Id: <170320101045384098%brian.d.foy@gmail.com>
In article <lnfx40f460.fsf@nuthaus.mib.org>, Keith Thompson
<kst-u@mib.org> wrote:
> PerlFAQ Server <brian@theperlreview.com> writes:
> > 5.21: Why can't I just open(FH, ">file.lock")?
>
> Should this be updated to use the more modern form of open()?
Yep. I'll make the fix.
------------------------------
Date: 17 Mar 2010 10:00:58 GMT
From: Oliver 'ojo' Bedford <newsojo@web.de>
Subject: Re: Fast way to fill memory
Message-Id: <4ba0a85a$0$6981$9b4e6d93@newsspool4.arcor-online.net>
On Mon, 15 Mar 2010 16:00:42 +0000, bugbear wrote:
> Jürgen Exner wrote:
>> Oliver 'ojo' Bedford <newsojo@web.de> wrote:
>>> For testing purposes I would like to fill chunks of memory (say 20M)
>>> with arbitrary data (say bytes with values 1,2,...255,1,...).
>>>
>>> What would be the fastest method?
>>
>> The easiest method would probably be to define a string with say 256
>> bytes and then use the x operator to repeat it. I would guess it would
>> also be quite fast because it uses perl interna only and doesn't
>> involve a user-level loop.
>
> If speed is really important (it often isn't) you may be able to
> repeatedly "double up" by copying the head of the 20Meg space to itself.
Thanks for the help. I'll try both methods.
Speed is not important in my case - it's just a matter of convenience
(and impatience on my side).
Oliver
------------------------------
Date: Wed, 17 Mar 2010 06:05:11 -0700 (PDT)
From: "C.DeRykus" <derykus@gmail.com>
Subject: Re: Fast way to fill memory
Message-Id: <15cbef1b-f4f7-496e-a0c5-f1a398b30b2e@s25g2000prd.googlegroups.com>
On Mar 17, 3:00 am, Oliver 'ojo' Bedford <news...@web.de> wrote:
> On Mon, 15 Mar 2010 16:00:42 +0000, bugbear wrote:
>
> > Jürgen Exner wrote:
> >> Oliver 'ojo' Bedford <news...@web.de> wrote:
> >>> For testing purposes I would like to fill chunks of memory (say 20M)
> >>> with arbitrary data (say bytes with values 1,2,...255,1,...).
>
> >>> What would be the fastest method?
>
> >> The easiest method would probably be to define a string with say 256
> >> bytes and then use the x operator to repeat it. I would guess it would
> >> also be quite fast because it uses perl internals only and doesn't
> >> involve a user-level loop.
>
> > If speed is really important (it often isn't) you may be able to
> > repeatedly "double up" by copying the head of the 20Meg space to itself.
>
> Thanks for the help. I'll try both methods.
>
> Speed is not important in my case - it's just a matter of convenience
> (and impatience on my side).
>
You may want to reconsider speed on Win32
though if you've seen the recent thread
"Help on string to array":
http://groups.google.com/group/comp.lang.perl.misc/browse_thread/thread/ddc58df14fe6e965/9c713aab4e254fdf?hl=en&lnk=gst&q=malloc+Win32#
One speed-up possibility is to grow the string
by doubling:
$max = 20_000_000;
$str = pack "C*", 0..255;
do { $str .= $str; } until length $str >= $max;
substr($str, $max) = '';
--
Charles DeRykus
------------------------------
Date: Wed, 17 Mar 2010 07:00:13 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: Fast way to fill memory
Message-Id: <b385b8b5-1156-454f-b23c-2fe81b576b2d@d27g2000yqf.googlegroups.com>
On Mar 15, 5:22 am, Oliver 'ojo' Bedford <news...@web.de> wrote:
> Hi!
>
> For testing purposes I would like to fill chunks of memory (say 20M) with
> arbitrary data (say bytes with values 1,2,...255,1,...).
>
> What would be the fastest method?
>
> Oliver
Use the naively recursive Fibonacci function (not tail recursive). It
calls itself twice on each call, so it clogs up memory even for small
values.
use strict;
use warnings;
sub fib
{
my $n = shift;
if ($n <= 1) { return 1; }
else { return (fib($n - 1) + fib($n - 2)); }
}
my $N = $ARGV[0];
my $f = fib($N);
print "$f\n";
exit(0);
------------------------------
Date: Wed, 17 Mar 2010 06:08:52 -0700 (PDT)
From: Nick Keighley <nick_keighley_nospam@hotmail.com>
Subject: Re: to RG - Lisp lunacy and Perl psychosis
Message-Id: <a66a4e5f-509e-4b0c-8a75-3c72a26386d6@q16g2000yqq.googlegroups.com>
On 15 Mar, 23:23, RG <rNOSPA...@flownet.com> wrote:
> But FWIW, there are sound theoretical reasons to believe that Lisp
> programs are easier to debug than Perl programs, mainly because Lisp has
> a REPL and Perl (normally) does not.
Why would the presence of a REPL theoretically make debugging
something easier? Whose theory? I've debugged small Perl and Scheme
programs.
------------------------------
Date: Wed, 17 Mar 2010 12:01:37 -0500
From: "Kyle T. Jones" <KBfoMe@realdomain.net>
Subject: Re: Well, that's the most obscure Perl bug I've ever seen
Message-Id: <hnr1th$kje$1@news.eternal-september.org>
Peter J. Holzer wrote:
> On 2010-03-12 16:46, Kyle T. Jones <KBfoMe@realdomain.net> wrote:
>> Peter J. Holzer wrote:
>>> On 2010-03-11 23:17, Kyle T. Jones <KBfoMe@realdomain.net> wrote:
>>>> I guess instead of a snark, I could have offered this (it's ugly,
>>>> there's undoubtedly an easier way to do it, I invite criticism, but I'm
>>>> 95% it'll work just fine):
>>>>
>>> [script snipped]
>>>
>>> I seem to be missing the part of the script which fixes all the errors
>>> and warnings ...
>> So add in a section that splits the script into subs and main then
>> parses each for the first instance of declared variables, and append my
>> to the front of each. That's likely to be a huge percentage of the
>> errors you'll hit adding in use strict to scripts authored by folks that
>> don't use, well, use strict.
>>
>> You know what though - I did think he had said he had "thousands of
>> scripts" - but what he said was "several scripts of a thousand lines of
>> code". So, that changes things.
>>
>> Tell you what, though - you go and create something that "fixes all the
>> errors and warnings" for any perl script (or any language for that
>> matter). You'll be rich, @#$%^.
>
> Well, that was sort of my point. Just mechanically adding a line "use
> strict;" is easy, but that almost certainly isn't what keeps the OP from
> doing it.
Especially given that it was just a couple scripts, what I suggested
wouldn't be helpful at all.
> Wading through the plethora of warnings and errors and
> deciding how to fix them is the real work and that cannot be easily
> automated. Automatically generating my() or our() declarations isn't
> quite trivial due to the irregular syntax of Perl but should be possible
> - but I think the danger of covering up or creating bugs is greater than
> the benefit.
>
> I'd do it like this:
>
> When you need to change anything in a script the very first time, you
> have to spend some time anyway to read and understand it. So this is the
> perfect time to add "use warnings; use strict" and fix all the resulting
> errors and warnings as you go. (This is also an advantage of "use
> warnings" over "-w" - it affects only the source file, not any used
> modules, so you can add it one file at a time)
>
> hp
>
Check.
Cheers.
------------------------------
Date: Tue, 16 Mar 2010 15:51:33 -0700 (PDT)
From: Peng Yu <pengyu.ut@gmail.com>
Subject: Word count program that can remove of common words and consider the different forms?
Message-Id: <7158161a-89e2-40b2-ac4f-013d766f3766@q21g2000yqm.googlegroups.com>
The following word count program is kind of primitive. It doesn't
remove common words like 'is' and 'and'.
Also, it doesn't consider different forms of words. For example,
'tree' and 'trees' should be counted as the same word.
http://foundationstone.com.au/FoundationStone.html?./HtmlSupport/WebPage/wordcount.html
Could somebody let me know if there is a Perl script that does a
better word-count job, at least for English text?
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 2878
***************************************