
Perl-Users Digest, Issue: 1975 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Nov 10 18:09:55 2008

Date: Mon, 10 Nov 2008 15:09:19 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 10 Nov 2008     Volume: 11 Number: 1975

Today's topics:
    Re: [OT]: maximum memory <nospam-abuse@ilyaz.org>
    Re: Huge files manipulation xhoster@gmail.com
    Re: Huge files manipulation <rcaputo@pobox.com>
    Re: Huge files manipulation <bugbear@trim_papermule.co.uk_trim>
    Re: Huge files manipulation <jurgenex@hotmail.com>
    Re: Huge files manipulation <tim@burlyhost.com>
    Re: Huge files manipulation <cartercc@gmail.com>
        Perl -> My SQL - Tutorial or Guide? <noemail@nothere.com>
    Re: Perl -> My SQL - Tutorial or Guide? <kkeller-usenet@wombat.san-francisco.ca.us>
    Re: Perl -> My SQL - Tutorial or Guide? <cartercc@gmail.com>
    Re: Perl -> My SQL - Tutorial or Guide? xhoster@gmail.com
    Re: Should sophisticated Regexp solutions be pay as you <cwilbur@chromatico.net>
    Re: SUBSTR() with replacement or lvalue performance iss sln@netherlands.com
    Re: The end of dynamic scope xhoster@gmail.com
    Re: The end of dynamic scope <xueweizhong@gmail.com>
    Re: The end of dynamic scope xhoster@gmail.com
        Using unreferenced labels <pgodfrin@gmail.com>
        {JOB} Calling all Perl Monks - My client has been looki <patricklarge@gmail.com>
    Re: {JOB} Calling all Perl Monks - My client has been l <tadmc@seesig.invalid>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Mon, 10 Nov 2008 21:59:34 +0000 (UTC)
From:  Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: [OT]: maximum memory
Message-Id: <gfaas6$s2g$1@agate.berkeley.edu>

[A complimentary Cc of this posting was sent to
John W. Krahn
<jwkrahn@shaw.ca>], who wrote in article <JmQRk.408$e54.337@newsfe21.iad>:
> > Hmm, all navel researches I saw were done in (super?) 35mm;

> Maybe you are thinking of Super 8 (8mm)?

Nope, in the years I was using Super 8, navels were not at the top of my
priority list.

For 35mm, see, e.g., http://www.imdb.com/title/tt0114134/technical

> > never in (15-sprocket) IMAX format.
> 
> AFAIK IMAX is in 70mm, but I don't know how many sprockets the projector 
> has, or if that matters.

With 70mm, the stuff is tricky.  First, it is 65mm when shot, 70mm
when projected.  Then the usual mess strikes: how you put the actual
frames on a strip of film [*].  

In fact, the mess is much nicer than with 35mm film.  IIRC, during the
last 40 or so years mostly two formats were used: 65mm/15 sprockets
(=IMAX) and 65mm/5 sprockets.  (One is 3 times larger than the other! [**])
It is the same difference as landscape/portrait printing: on one the film
travels vertically, so you need to fit the width of the frame into 65mm
minus perforation, and on the other the film travels horizontally, so you
fit the height of the frame.

Yours,
Ilya

  [*] For 35mm, yesterday I found these gems:

http://www.cinematography.net/edited-pages/Shooting35mm_Super35mmFor_11.85.htm
http://www.arri.de/infodown/cam/ti/format_guide.pdf

  [**] I expect that for most applications, current digicams (e.g.,
       Red One) exceed the capabilities of 65/5 [***].  However, 65/15 at
       60fps should be significantly better than what Red One gives.
       I think one must have something like 4K x 3K x 3ccd x 48fps to
       get a comparable experience.  (Which leads to the bandwidth I
       mentioned before.  ;-)

  [***] On the other hand, this calculation is based on interpolation
	from digital photos.  With photos, the very high extinction
	resolution of the film does not enter the equation (since the eye
	does not care about *extinction* resolution, only about
	noise-limited resolution - the one where the S/N ratio drops below
	about 3 - and digital has an order of magnitude better noise).

	But with movies, the eye averages out the noise very effectively,
	so the higher extinction resolution of film may have some
	relevance...  I do not think I saw it investigated anywhere; I
	may want to create some digital simulations...

	On the gripping hand, some people in the film industry believe
	that HD video (2K x 1K) is comparable to (digitized) Super 35,
	and this ratio is quite similar to what one gets from photos.
	With photos, the best-processed color 36mm x 24mm is
	approximately equivalent in resolution to a digital shot
	rescaled down to 4 or 5MPix (only that the digital shot would
	have practically no noise).


------------------------------

Date: 10 Nov 2008 15:42:30 GMT
From: xhoster@gmail.com
Subject: Re: Huge files manipulation
Message-Id: <20081110104305.504$HQ@newsreader.com>

klashxx <klashxx@gmail.com> wrote:
> Hi , i need a fast way to delete duplicates entrys from very huge
> files ( >2 Gbs ) , these files are in plain text.
>
> ..To clarify, this is the structure of the file:
>
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0005655,00|||+0000000000000,00
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0000000000000,00|||+0000000000000,00
> 30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|
> 1804|
> 00|||+0000000000000,00|||+0000000000000,00
>
> Having a key formed by the first 7 fields i want to print or delete
> only the duplicates( the delimiter is the pipe..).

Given the line wraps, it is hard to figure out what the structure
of your file is.  Every line has from 7 to infinity fields, with the
first one being 30xx?  When you say "print or delete", which one?  Do you
want to do both in a single pass, or have two different programs, one for
each use-case?

>
> I tried all the usual methods ( awk / sort /uniq / sed /grep .. ) but
> it always ended with the same result (out of memory!)

Which of those programs was running out of memory?  Can you use sort
to group lines according to the key without running out of memory?
That is what I do: first use the system sort to group the keys, then use
Perl to finish up.
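
A minimal sketch of that pattern (assuming the input has already been
grouped with the system sort so that lines sharing a key are adjacent,
and that every line has at least 7 fields; file names are made up):

#!/usr/bin/perl
# dedup.pl - keep only the first line of each key group; run as e.g.
#   sort big_file | perl dedup.pl > big_file.nodup
use strict;
use warnings;

my $prev_key = '';
while (my $line = <STDIN>) {
    # the key is the first 7 pipe-delimited fields
    my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
    print $line unless $key eq $prev_key;
    $prev_key = $key;
}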

How many duplicate keys do you expect there to be?  If the number of
duplicates is pretty small, I'd come up with the list of them:

cut -d\| -f1-7 big_file | sort | uniq -d > dup_keys

And then load dup_keys into a Perl hash, then step through big_file
comparing each line's key to the hash.
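
Something along these lines (a rough sketch; file names are placeholders,
and every line is assumed to have at least 7 fields):

#!/usr/bin/perl
# Load the (hopefully small) list of duplicated keys, then make one pass
# over big_file keeping only the first occurrence of each duplicated key.
use strict;
use warnings;

open my $dk, '<', 'dup_keys' or die "dup_keys: $!";
my %dup;
while (my $key = <$dk>) {
    chomp $key;
    $dup{$key} = 1;
}
close $dk;

open my $in, '<', 'big_file' or die "big_file: $!";
my %seen;
while (my $line = <$in>) {
    my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
    next if $dup{$key} && $seen{$key}++;   # drop repeats of duplicated keys
    print $line;
}
close $in;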

> I 'm very new to perl, but i read somewhere tha Tie::File module can
> handle very large files ,

Tie::File has substantial per-line overhead.  So unless the lines
are quite long, Tie::File doesn't increase the size of the file you
can handle by all that much.  Also, it isn't clear how you would use
it anyway.  It doesn't help you keep huge hashes, which is what you need
to group keys efficiently if you aren't pre-sorting.  And while it makes it
*easy* to delete lines from the middle of large files, it does not make
it *efficient* to do so.

> i tried but cannot get the right code...

We can't very well comment on code we can't see.

 ...
> PD:I do not want to split the files.

Why not?

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 10 Nov 2008 15:44:16 GMT
From: Rocco Caputo <rcaputo@pobox.com>
Subject: Re: Huge files manipulation
Message-Id: <slrnghgm0r.1ct.rcaputo@eyrie.homenet>

On Mon, 10 Nov 2008 02:24:53 -0800 (PST), klashxx wrote:
> Hi , i need a fast way to delete duplicates entrys from very huge
> files ( >2 Gbs ) , these files are in plain text.
>
> ..To clarify, this is the structure of the file:
>
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0005655,00|||+0000000000000,00
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0000000000000,00|||+0000000000000,00
> 30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|
> 1804|
> 00|||+0000000000000,00|||+0000000000000,00
>
> Having a key formed by the first 7 fields i want to print or delete
> only the duplicates( the delimiter is the pipe..).
>
> I tried all the usual methods ( awk / sort /uniq / sed /grep .. ) but
> it always ended with the same result (out of memory!)

> PD:I do not want to split the files.

Exactly how are you using awk/sort/uniq/sed/grep?  Which part of the
pipeline is running out of memory?

Depending on what you're doing, and where you're doing it, you may be
able to tune sort() to use more memory (much faster) or a faster
filesystem for temporary files.

Are the first seven fields always the same width?  If so, you needn't
bother with the pipes.

Must the order of the files be preserved?

-- 
Rocco Caputo - http://poe.perl.org/


------------------------------

Date: Mon, 10 Nov 2008 16:45:08 +0000
From: bugbear <bugbear@trim_papermule.co.uk_trim>
Subject: Re: Huge files manipulation
Message-Id: <FKedndmd4cSJ-IXUnZ2dnUVZ8vWdnZ2d@posted.plusnet>

klashxx wrote:
> Hi , i need a fast way to delete duplicates entrys from very huge
> files ( >2 Gbs ) , these files are in plain text.
> 
> ..To clarify, this is the structure of the file:
> 
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0005655,00|||+0000000000000,00
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0000000000000,00|||+0000000000000,00
> 30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|
> 1804|
> 00|||+0000000000000,00|||+0000000000000,00
> 
> Having a key formed by the first 7 fields i want to print or delete
> only the duplicates( the delimiter is the pipe..).

Hmm. If you're blowing RAM, I suggest a first pass that generates
a data structure containing, for each line, a signature (a hash
of your 7 key fields; I suggest this key should be 8-12 bytes),
a file offset, and a size.

This can be considered an "index" into your huge file.
Hopefully this index is sufficiently digested to fit in RAM.

The data structure is now sorted (and grouped) by the signatures.
The grouped signatures are then sorted by earliest file offset.

The file is now processed again, using the index; each set of fields
against a signature is simply read in (seek/read), and checked
"long hand".

Matches are output, which was (after all) the point.

I would suggest that a large RAM cache be used for this stage, to minimise
IO.
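
A rough sketch of that two-pass scheme (assumptions: a 16-byte MD5 as the
signature, the whole index fits in RAM, and made-up file names):

#!/usr/bin/perl
# Pass 1 builds an in-memory index: signature of the 7 key fields ->
# list of [offset, length].  Pass 2 re-reads only the groups with more
# than one entry and compares the keys "long hand" before reporting
# the duplicates.
use strict;
use warnings;
use Digest::MD5 qw(md5);

my $file = 'big_file';
open my $fh, '<', $file or die "$file: $!";

my %index;
while (1) {
    my $offset = tell $fh;
    my $line   = <$fh>;
    last unless defined $line;
    my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
    push @{ $index{ md5($key) } }, [ $offset, length $line ];
}

for my $group (grep { @$_ > 1 } values %index) {
    my %seen;
    for my $rec (@$group) {
        seek $fh, $rec->[0], 0 or die "seek: $!";
        read $fh, my $line, $rec->[1];
        my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
        print $line if $seen{$key}++;    # the duplicates are the output
    }
}
close $fh;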

   BugBear


------------------------------

Date: Mon, 10 Nov 2008 09:25:33 -0800
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Huge files manipulation
Message-Id: <81rgh41el9rev22eae5agr97hk5drjseut@4ax.com>

klashxx <klashxx@gmail.com> wrote:
>Hi , i need a fast way to delete duplicates entrys from very huge
>files ( >2 Gbs ) , these files are in plain text.
>
>Having a key formed by the first 7 fields i want to print or delete
>only the duplicates( the delimiter is the pipe..).

Hmmm, what is the ratio of unique lines to total lines? I.e. are there
many duplicate lines or only a few?

If the number of unique lines is small then a standard approach with
recording each unique line in a hash may work. Then you can simply check
if a line with that content already exists() and delete/print the
duplicate as you encounter it further down the file.
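
In Perl, that first variant boils down to something like this (a sketch
only, using the OP's first-7-fields key rather than the whole line):

#!/usr/bin/perl
# Print only the duplicates (the 2nd and later occurrence of each key).
# Workable only while the set of keys seen so far fits in memory.
use strict;
use warnings;

my %seen;
while (my $line = <>) {
    my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
    print $line if $seen{$key}++;
}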

If the number of unique lines is large then that will no longer be
possible and you will have to trade speed and simplicity for memory.
For each line I'd compute a checksum and record that checksum together
with the exact position of each matching line in the hash. 
Then in a second pass those lines with unique checksums are unique while
lines with the same checksum (more than one line was recorded for a
given checksum) are candidates for duplicates and need to be compared
individually.

jue


------------------------------

Date: Mon, 10 Nov 2008 10:26:11 -0800
From: Tim Greer <tim@burlyhost.com>
Subject: Re: Huge files manipulation
Message-Id: <71%Rk.2292$sl.373@newsfe23.iad>

klashxx wrote:

> Hi , i need a fast way to delete duplicates entrys from very huge
> files ( >2 Gbs ) , these files are in plain text.
> 
> ..To clarify, this is the structure of the file:
> 
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0005655,00|||+0000000000000,00
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0000000000000,00|||+0000000000000,00
> 30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|
> 1804|
> 00|||+0000000000000,00|||+0000000000000,00
> 
> Having a key formed by the first 7 fields i want to print or delete
> only the duplicates( the delimiter is the pipe..).
> 
> I tried all the usual methods ( awk / sort /uniq / sed /grep .. ) but
> it always ended with the same result (out of memory!)

What is the code you're using now?

> 
> PD:I do not want to split the files.

That could help solve the problem in its current form, potentially. 
Have you considered using a database solution, if nothing more than
just for this type of task, even if you want to continue storing the
fields/data in the files?
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: Mon, 10 Nov 2008 11:17:56 -0800 (PST)
From: cartercc <cartercc@gmail.com>
Subject: Re: Huge files manipulation
Message-Id: <87e2a78a-094b-45e5-aa3f-cb0ea5721d19@c36g2000prc.googlegroups.com>

On Nov 10, 5:24 am, klashxx <klas...@gmail.com> wrote:
> Hi , i need a fast way to delete duplicates entrys from very huge
> files ( >2 Gbs ) , these files are in plain text.
>
> ..To clarify, this is the structure of the file:
>
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0005655,00|||+0000000000000,00
> 30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|
> 01|F|0207|00|||+0000000000000,00|||+0000000000000,00
> 30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|
> 1804|
> 00|||+0000000000000,00|||+0000000000000,00
>
> Having a key formed by the first 7 fields i want to print or delete
> only the duplicates( the delimiter is the pipe..).
>
> I tried all the usual methods ( awk / sort /uniq / sed /grep .. ) but
> it always ended with the same result (out of memory!)
>
> In using HP-UX large servers.
>
> I 'm very new to perl, but i read somewhere tha Tie::File module can
> handle very large files , i tried but cannot get the right code...
>
> Any advice will be very well come.
>
> Thank you in advance.
>
> Regards
>
> PD:I do not want to split the files.

1. Create a database with a data table, in two columns, ID_PK and
DATA.
2. Read the file line by line and insert each row into the database,
using the first seven fields as the key. This will ensure that you
have no duplicates in the database. As each PK must be unique, your
insert statement will fail for duplicates.
3. Do a select statement from the database and print out all the
records returned.
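
A bare-bones sketch of those three steps (assuming DBI with DBD::SQLite
just for illustration; the file, database, and table names are invented,
and any RDBMS with a primary key behaves the same way):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=dedup.db', '', '',
                       { RaiseError => 1, PrintError => 0, AutoCommit => 0 });
$dbh->do('CREATE TABLE IF NOT EXISTS lines (id_pk TEXT PRIMARY KEY, data TEXT)');

my $ins = $dbh->prepare('INSERT INTO lines (id_pk, data) VALUES (?, ?)');

open my $fh, '<', 'big_file' or die "big_file: $!";
while (my $line = <$fh>) {
    my $key = join '|', (split /\|/, $line, 8)[0 .. 6];
    eval { $ins->execute($key, $line) };   # duplicate key -> insert fails
}
close $fh;
$dbh->commit;

# step 3: print the de-duplicated records
my $rows = $dbh->selectall_arrayref('SELECT data FROM lines');
print $_->[0] for @$rows;
$dbh->disconnect;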

Splitting the file is irrelevant as virtually 100% of the time you
have to split all files to take a gander at the data.

I still don't understand why you can't use a simple hash rather than a
DB, but my not understanding that point is irrelevant as well.

CC


------------------------------

Date: Mon, 10 Nov 2008 14:31:06 -0500
From: me <noemail@nothere.com>
Subject: Perl -> My SQL - Tutorial or Guide?
Message-Id: <uo2hh41vpbno9b00974l1s3omrv9d5riud@4ax.com>

I'm looking for some recommended tutorials and/or guide to basic
access to MySQL from Perl. I'm lightly experienced in Perl, almost
ignorant of MySQL, and need to jumpstart. I have skills with DB access
and core SQL in other languages/DB's. 

I'll need to be able to do basic access, update, as well as monitor
for common errors (and conditions/returns I'm too newbie to know of). 

Thanks, 




------------------------------

Date: Mon, 10 Nov 2008 11:53:15 -0800
From: Keith Keller <kkeller-usenet@wombat.san-francisco.ca.us>
Subject: Re: Perl -> My SQL - Tutorial or Guide?
Message-Id: <b2gmu5x7pv.ln2@goaway.wombat.san-francisco.ca.us>

On 2008-11-10, me <noemail@nothere.com> wrote:
> I'm looking for some recommended tutorials and/or guide to basic
> access to MySQL from Perl. I'm lightly experienced in Perl, almost
> ignorant of MySQL, and need to jumpstart. I have skills with DB access
> and core SQL in other languages/DB's. 
>
> I'll need to be able to do basic access, update, as well as monitor
> for common errors (and conditions/returns I'm too newbie to know of). 

I'm sure Google works in your universe, but here's a basic intro:

http://www.perl.com/pub/a/1999/10/DBI.html

And of course the DBI docs are authoritative:

http://dbi.perl.org/

It links to the man page, other docs (including MJD's article above), and
the Perl DBI book (which is also good).

--keith

-- 
kkeller-usenet@wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information



------------------------------

Date: Mon, 10 Nov 2008 12:08:38 -0800 (PST)
From: cartercc <cartercc@gmail.com>
Subject: Re: Perl -> My SQL - Tutorial or Guide?
Message-Id: <9335e938-69b2-4a3c-a737-bf9c30599dfe@s1g2000prg.googlegroups.com>

On Nov 10, 2:31 pm, me <noem...@nothere.com> wrote:
> I'm looking for some recommended tutorials and/or guide to basic
> access to MySQL from Perl. I'm lightly experienced in Perl, almost
> ignorant of MySQL, and need to jumpstart. I have skills with DB access
> and core SQL in other languages/DB's.
>
> I'll need to be able to do basic access, update, as well as monitor
> for common errors (and conditions/returns I'm too newbie to know of).
>
> Thanks,

I highly recommend 'MySQL and Perl for the Web' by Paul DuBois.

I have some criticisms of the book, which I won't discuss, but I can
tell you that I've been reading it for six years and I haven't
exhausted it. I pick it up about once a year, and it amazes me what I
still can learn, after rereading it now all the way through three or
four times, and in bits and pieces a lot more than that.

A caution: I'd make sure you understand Perl references before reading
it. It's okay for SQL, but you really need to be beyond a beginning
stage for Perl.

CC


------------------------------

Date: 10 Nov 2008 22:51:33 GMT
From: xhoster@gmail.com
Subject: Re: Perl -> My SQL - Tutorial or Guide?
Message-Id: <20081110175208.553$bY@newsreader.com>

me <noemail@nothere.com> wrote:
> I'm looking for some recommended tutorials and/or guide to basic
> access to MySQL from Perl.

The general recommended way to access RDBMS from Perl is to use DBI.
See the perldoc for DBI.

For peculiarities specific to MySQL, see the perldoc for DBD::mysql.
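
The skeleton looks roughly like this (a sketch only; the database name,
credentials, table, and columns here are invented):

#!/usr/bin/perl
# Minimal DBI + DBD::mysql usage.  RaiseError makes DBI die on errors,
# which covers much of the "monitor for common errors" part; wrap calls
# in eval {} where you want to handle a failure yourself.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    'DBI:mysql:database=test;host=localhost',
    'user', 'password',
    { RaiseError => 1, PrintError => 0, AutoCommit => 1 },
);

# an update
my $changed = $dbh->do(
    'UPDATE widgets SET price = price * 1.1 WHERE id = ?',
    undef, 42,
);

# a query
my $sth = $dbh->prepare('SELECT id, name FROM widgets WHERE price > ?');
$sth->execute(10);
while (my ($id, $name) = $sth->fetchrow_array) {
    print "$id\t$name\n";
}

$dbh->disconnect;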

> I'm lightly experienced in Perl, almost
> ignorant of MySQL, and need to jumpstart. I have skills with DB access
> and core SQL in other languages/DB's.

I liked "MySQL" by DuBois, which of course is mostly about MySQL itself
but also has chapters on using Perl with MySQL.  Since you are already
familiar with SQL from other DBs, the first half may not be as useful to
you as it was to me, but it is probably still quite handy for the Perl
chapters and as a reference.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 10 Nov 2008 16:44:35 -0500
From: Charlton Wilbur <cwilbur@chromatico.net>
Subject: Re: Should sophisticated Regexp solutions be pay as you go requested?
Message-Id: <86d4h3w918.fsf@mithril.chromatico.net>

>>>>> "TJMcC" == Tad J McClellan <tadmc@seesig.invalid> writes:

    TJMcC> sln@netherlands.com <sln@netherlands.com> wrote:

    >> I've noticed alot of complex regular expression problems posted
    >> here that require alot of thought.
    >> 
    >> Should there be monetary compensation for such complex solutions?

    TJMcC> No. That would violate the non-commercial nature of Usenet.

I think *expecting* compensation is a problem.

On the other hand, if the querent is so appreciative of an answer that
he or she wants to throw some money at a helpful person, I see nothing
wrong with that.

(Yeah, I know.  I'll believe it when I see it.)

Charlton


-- 
Charlton Wilbur
cwilbur@chromatico.net


------------------------------

Date: Mon, 10 Nov 2008 22:23:53 GMT
From: sln@netherlands.com
Subject: Re: SUBSTR() with replacement or lvalue performance issues
Message-Id: <tichh4h9dd8f16g13gd2tu6c01hsmr619f@4ax.com>

On Fri, 07 Nov 2008 11:41:21 +0100, Michele Dondi <bik.mido@tiscalinet.it> wrote:

>On Fri, 07 Nov 2008 02:17:58 GMT, sln@netherlands.com wrote:
>
>>Apart from like, copy from the start of a matched position, to a 
>>file (as opposed to another buffer), then catenating the modification
>>to the file, then continue on with the next match, is the substr
>>(lvalue or replacement) a viable option?
>>
>>I have to consider performance on such large operations.
>
>ISTR that the lvaluedness of substr()'s return value, as long as the
>fact that you can EVEN take references of it and modify the string
>with a sort of action-at-distance was put there specifically for
>performance issues. At some point there were problems with
>substitutions having a lenght larger than the substituted IalsoIRC,
>but they should be solved in recent enough perls.
>
>See: <http://perlmonks.org/?node_id=498434>
>
>
>Michele

            ^^^^^^^^^^^^

Being able to get segment references while not altering the
string works pretty well.  Altering the string through the refs is
possible, but I wouldn't trust it, and the string would still shrink/expand.

In the simplest example usage, something like the code below would seem to
address the performance issues.  Thanks for the link!


sln
---------------------------------

use strict;
use warnings;

my $bigstring = \"some big big big scalar string";
my $lastpos = 0;
my @segrefs = ();

while ($$bigstring =~ /big/g)
{
	my ($offset, $curpos) = ($-[0], pos($$bigstring));

	 # modify part (local copy) of the big string; declared inside the
	 # loop so each reference pushed below keeps its own copy
	my $modstring = substr $$bigstring, $offset, ($curpos - $offset);
	$modstring .= "-huge";

	 # cache the interval (read only) and modstring references
	push @segrefs, \substr $$bigstring, $lastpos, ($offset - $lastpos);
	push @segrefs, \$modstring;

	$lastpos = $curpos;
}

 # print the new string (to a file maybe)

if ($lastpos)
{
	push @segrefs, \substr $$bigstring, $lastpos;
	for (@segrefs) {
		print $$_;
	}
}



------------------------------

Date: 10 Nov 2008 16:00:47 GMT
From: xhoster@gmail.com
Subject: Re: The end of dynamic scope
Message-Id: <20081110110122.705$25@newsreader.com>

Todd <xueweizhong@gmail.com> wrote:
> The dynamic scope is defined slightly differently compared with the
> lexical scope when considering only the end boundary of the scope. Let's
> check the code below:
>
>   #! /bin/perl -l
>
>   my $a1 = "old value";
>   if (my $a1 = "new value") {};
>   print "case `my' after <if block>: $a1";
>
>   our $a2 = "old value";
>   if (our $a2 = "new value") {};
>   print "case `our' after <if block>: $a2";

Re-"our"ing a variable should be a no-op, shouldn't it?

>
>   local $a3 = "old value";
>   if (local $a3 = "new value") {};
>   print "case `local' after <if block>: $a3";
>
>   local our $a4 = "old value";
>   if (local our $a4 = "new value") {};
>   print "case `local our' after <if block>: $a4";
>
>   __END__
>
>   case `my' after <if block>: old value
>   case `our' after <if block>: new value
>   case `local' after <if block>: new value
>   case `local our' after <if block>: new value
>
> So here only the `my' variable declared in the condition expr of
> <if block> totally disappeared after the if block.
>
> May some one here give any hints on this?

What kind of hint are you looking for?  Yes, lexical variables have
somewhat weird scoping rules when declared in conditionals.  But you seem
to have a good handle on what those are.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 10 Nov 2008 08:24:15 -0800 (PST)
From: Todd <xueweizhong@gmail.com>
Subject: Re: The end of dynamic scope
Message-Id: <155adc5a-3d0b-47a0-8335-15a56af6d2e2@a26g2000prf.googlegroups.com>

> What kind of hint are you looking for?  Yes, lexical variables have
> somewhat weird scoping rules when declared in conditionals.  But you seem
> to have a good handle on what those are.

My question is why the value bound by `local' is still there even after
the block has ended. As you see, in the case of the `my' variable, the
scope has ended.

So if you can answer the question, please go ahead.

-Todd


------------------------------

Date: 10 Nov 2008 17:09:26 GMT
From: xhoster@gmail.com
Subject: Re: The end of dynamic scope
Message-Id: <20081110121001.877$6p@newsreader.com>

Todd <xueweizhong@gmail.com> wrote:
>
> My question is why the local binded value is still there even after
> the block is ended. As you see, in the case of my variable, the scope
> is ended.

From "local"s perspective, the conditional is not part of that block.  And
purely lexically, it is not part of that block--the conditional is located
before the opening "{", not between the "{" and "}".

In an attempt to DWIM, "lexical" variables are effectively pushed down to
the block scope (or the scope is pulled up to meet them).  I vaguely recall
something from the guts that there is some kind of pseudo-scope which
encompasses both the conditional and the block for those constructs which
have both, and that the pseudo-scope applies only lexically not
dynamically.
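
A tiny illustration of that (a sketch; $x is just a made-up package
variable):

#!/usr/bin/perl
# local() in an if-condition lasts until the *enclosing* scope ends,
# not until the end of the if block.
use strict;
use warnings;

our $x = "old value";
sub show { print "$x\n" }

{
    if (local $x = "new value") { show() }   # prints "new value"
    show();   # still "new value": the if block has ended, the local has not
}
show();       # prints "old value", restored when the enclosing block ends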

The basic answer to "why is it that way?" is "That is the way the
implementors implemented it."  And thinking about the implications of the
alternatives, it seems to me like the only reasonable way to do it.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 10 Nov 2008 13:35:51 -0800 (PST)
From: pgodfrin <pgodfrin@gmail.com>
Subject: Using unreferenced labels
Message-Id: <0bece4d1-122c-4a1d-a24c-60dc099b16d1@h23g2000prf.googlegroups.com>

Greetings,
I've been told in the past to not use labels (MAIN:) like this:

#!/usr/bin/perl
use strict;
use warnings;
use lib '/home/modules';
use Pgt;

MAIN:
#   $ENV{TESTCD}=1;
    run_code();
exit;

and I foolishly ignored that advice. Mainly 'cause the advice was not
explained and I never had any problems with using a label like this.
Now I've come across a goofy little error. When run_code() croaks, it
returns the line of the MAIN label:

Test Code not set
Use %ENV in your code at rjt line 7

The module Pgt is:

package      Pgt;
use strict;
use warnings;
use Carp;
our ($VERSION, @ISA, @EXPORT, @EXPORT_OK);
require      Exporter;
@ISA       = qw(Exporter);
@EXPORT    = qw(run_code);

$VERSION="1.0" ;

sub run_code()
{
    unless(exists $ENV{'TESTCD'})
      {croak "Test Code not set\nUse %ENV in your code"}
    print $ENV{'TESTCD'};
}   # end run_code

If I comment out the MAIN label, then croak correctly reports the line
number of the failed run_code() call. Can anyone explain why?
pg


------------------------------

Date: Mon, 10 Nov 2008 12:44:58 -0800 (PST)
From: plarge10 <patricklarge@gmail.com>
Subject: {JOB} Calling all Perl Monks - My client has been looking for you...
Message-Id: <7ecd714f-15fe-424b-8cd9-44ba0ef5058e@s1g2000prg.googlegroups.com>

My client is very simply looking for the best and brightest perl
developers in New York City.

Technology Requirement
• Perl 5 (Object Oriented Programming is a plus)
• MySQL
• Ability to code without the use of WYSIWYG editors (vi(m) or emacs)
• Strong knowledge of xHTML, CSS and JavaScript is a big plus
• Intermediate understanding of administering a Unix / Apache / MySQL
environment is required (MySQL replication administration is a big
plus)

Additional skills that are a big plus to this position
• PHP 4 and 5 (Zend Framework experience is a big plus)
• Familiarity with RSS/XML standards
• Experience building web-based tools & applications
• Database design & optimization experience

Requirements:
• Ability to work at least 40 hours on site in our offices in downtown
New York City
• Ability to put in a bit of extra time nights/weekends (from
anywhere) when crises arise.
• Can work independently with little direct supervision
• Maintain project timelines and meet all project deadlines.
• Willingness to tackle unfamiliar problems and take the time to
discover "best practice" solutions
• Excellent communication skills, both written & verbal
• Familiarity with the basics of building ecommerce & content sites.

If interested please send resumes and salary requirements as well as
history to: patrick@mavstaff.com

--
Patrick M Large
Principal IT Recruiter
Maverick Staffing Solutions
New York, New York
Tel: 917-921-4294
http://www.linkedin.com/in/patricklarge


------------------------------

Date: Mon, 10 Nov 2008 15:47:32 -0600
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: {JOB} Calling all Perl Monks - My client has been looking for you...
Message-Id: <slrnghhavk.n4s.tadmc@tadmc30.sbcglobal.net>

plarge10 <patricklarge@gmail.com> wrote:
> My client is very simply looking for the best and brightest perl
> developers in New York City.


They should consider hiring a recruiter who understands
the social mores of Perl programmers, then.

Job postings in discussion newsgroups are spam.

You sir, are therefore a spammer.


(Perl jobs should be posted to the Perl jobs list http://jobs.perl.org )

(perl programmers write code in C, I doubt that that is what
 the client is looking for...
)


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 1975
***************************************

