[31907] in Perl-Users-Digest
Perl-Users Digest, Issue: 3170 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue Oct 12 16:09:25 2010
Date: Tue, 12 Oct 2010 13:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Tue, 12 Oct 2010 Volume: 11 Number: 3170
Today's topics:
Re: becoming superuser <hjp-usenet2@hjp.at>
Re: Communication across Perl scripts <rvtol+usenet@xs4all.nl>
Re: Communication across Perl scripts <m@rtij.nl.invlalid>
Re: Communication across Perl scripts (Randal L. Schwartz)
Re: Communication across Perl scripts <jl_post@hotmail.com>
Re: Date difference in days <use-net@tinita.de>
Re: Date difference in days <RedGrittyBrick@spamweary.invalid>
Re: Date difference in days <jurgenex@hotmail.com>
Re: Date difference in days <glex_no-spam@qwest-spam-no.invalid>
Re: Date difference in days <hjp-usenet2@hjp.at>
Re: How can I check if two refs point to the same objec <jl_post@hotmail.com>
Re: Looping on "if" statement? <zihav@yahoo.com>
real time log parser? <tch@nospam.wpkg.org>
Re: real time log parser? <RedGrittyBrick@spamweary.invalid>
Re: real time log parser? <tch@nospam.wpkg.org>
Re: real time log parser? <RedGrittyBrick@spamweary.invalid>
Re: real time log parser? <RedGrittyBrick@spamweary.invalid>
Re: real time log parser? <tzz@lifelogs.com>
Re: real time log parser? <tch@nospam.wpkg.org>
suitable key for a hash <cartercc@gmail.com>
Re: suitable key for a hash <RedGrittyBrick@spamweary.invalid>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Tue, 12 Oct 2010 20:45:26 +0200
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: becoming superuser
Message-Id: <slrnib9b66.q1l.hjp-usenet2@hrunkner.hjp.at>
On 2010-10-09 21:19, John Smith <john@example.invalid> wrote:
> Peter J. Holzer wrote:
>
> [snipped a lot; I'll get to that part later]
>
>> A declaration is a construct which declares that a "thing" exists and has
>> a certain name. For example,
>>
>> my $x;
>>
>> is a declaration: It declares that there is a lexical, scalar variable
>> named "$x" (or just "x" - it is debatable whether the dollar sign is
>> part of the name).
>>
>> sub double {
>> my ($x) = @_;
>> return $x * 2;
>> }
>>
>> is also a declaration. It declares a subroutine named "double", which
>> is also a declaration. It declares a subroutine named "double", which
>> returns the value of its first argument multiplied by 2. Finally,
>>
>> sub half;
>>
>> is a declaration which declares that a subroutine "half" exists, but it
>> doesn't define what "half" does - that would be elsewhere.
>
> So, I'd like to figure out the qq equivalents, but I don't understand
> why the compiler couldn't just optimize the whole thing away:
A compiler could theoretically notice that your function "myfunc"
doesn't do anything and optimize away all calls to myfunc. However, the
perl compiler is called every time you invoke a perl script. You really
don't want it to perform complicated control flow analysis (and in the
case of Perl this is a lot more complicated than it looks at first
glance because functions can be redefined at run time).
> sub myfunc{};
[...]
> Why would anybody on god's green earth write this function, in any of
> its forms?
I have no idea. You wrote it. Why did you write it?
hp
------------------------------
Date: Tue, 12 Oct 2010 12:44:46 +0200
From: "Dr.Ruud" <rvtol+usenet@xs4all.nl>
Subject: Re: Communication across Perl scripts
Message-Id: <4cb43c1e$0$81475$e4fe514c@news.xs4all.nl>
On 2010-10-11 18:25, Jean wrote:
> I have two scripts; Script 1 generates some data. I want my
> script two to be able to access that information. The easiest/dumbest
> way is to write the data generated by script 1 as a file and read it
> later using script 2. Is there any other way than this ?
I normally use a database for that. Script-1 can normally be scaled up
by making it do things in parallel (by chunking the input in an
obviously non-interdependent way).
Script-2 can also just be a phase in script-1. Once all children are
done processing, there normally is a reporting phase.
> There is no guarantee that Script 2 will be run after Script 1. So
> there should be some way to free that memory using a watchdog timer.
When the intermediate data is in temporary database tables, they
disappear automatically with the close of the connection.
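A minimal sketch of the temporary-table idea, assuming DBD::SQLite is installed (table and column names are made up; other DBDs behave similarly for CREATE TEMPORARY TABLE):

```perl
use strict;
use warnings;
use DBI;   # assumes DBD::SQLite is installed; other DBDs work similarly

my $dbh = DBI->connect('dbi:SQLite:dbname=shared.db', '', '',
                       { RaiseError => 1 });

# A TEMPORARY table is private to this connection and is dropped
# automatically when the connection closes - no watchdog timer needed.
$dbh->do('CREATE TEMPORARY TABLE interim (k TEXT, v INTEGER)');
$dbh->do('INSERT INTO interim VALUES (?, ?)', undef, 'hits', 12);

my ($v) = $dbh->selectrow_array(
    'SELECT v FROM interim WHERE k = ?', undef, 'hits');
print "$v\n";   # 12

$dbh->disconnect;   # the temporary table vanishes here
```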
--
Ruud
------------------------------
Date: Tue, 12 Oct 2010 13:55:38 +0200
From: Martijn Lievaart <m@rtij.nl.invlalid>
Subject: Re: Communication across Perl scripts
Message-Id: <quudo7-2l5.ln1@news.rtij.nl>
On Tue, 12 Oct 2010 10:05:57 +0200, Peter Makholm wrote:
> "jl_post@hotmail.com" <jl_post@hotmail.com> writes:
>
>> That may be easiest, but I don't think it's the dumbest. And if
>> you use this approach, I highly recommend using the "Storable" module
>> (it's a standard module so you should already have it).
>
> As long as you just use it for a single host for very temporary files,
> Storable is fine. But I have been bitten by Storable not being
> compatible between versions or different installations one time too many
> to call it 'highly recommended'.
Another way might be Data::Dumper.
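For example (a sketch; the file name is made up, and the trusted-input caveat is worth taking seriously):

```perl
use strict;
use warnings;
use Data::Dumper;

# Script 1: dump the structure as plain Perl text (human-readable and
# version-independent, unlike Storable's binary format).
my %data = (count => 42, items => [1, 2, 3]);
open my $out, '>', 'data.pl' or die "cannot write data.pl: $!";
print {$out} Data::Dumper->Dump([\%data], ['data']);
close $out;

# Script 2: read it back.  This is a string eval, so only do it with
# files you trust.
my $data;
{
    open my $in, '<', 'data.pl' or die "cannot read data.pl: $!";
    local $/;
    eval <$in>;          # the dump assigns to $data
    die $@ if $@;
}
print $data->{count}, "\n";   # 42
```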
M4
------------------------------
Date: Tue, 12 Oct 2010 08:15:35 -0700
From: merlyn@stonehenge.com (Randal L. Schwartz)
Subject: Re: Communication across Perl scripts
Message-Id: <86eibva1d4.fsf@red.stonehenge.com>
>>>>> "Jean" == Jean <alertjean@gmail.com> writes:
Jean> I am searching for efficient ways of communication across two Perl
Jean> scripts. I have two scripts; Script 1 generates some data. I want my
Jean> script two to be able to access that information.
Look at DBM::Deep for a trivial way to store structured data, including
having transactions so the data will change "atomically".
And despite the name... DBM::Deep has no XS components... so it can even
be installed in a hosted setup with limited ("no") access to compilers.
Disclaimer: Stonehenge paid for part of the development of DBM::Deep,
because yes, it's *that* useful.
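A sketch of what that looks like, assuming DBM::Deep is installed from CPAN (file name invented):

```perl
use strict;
use warnings;
use DBM::Deep;   # CPAN module, pure Perl (no XS)

# Script 1 writes a nested structure; it is persisted to disk as you go.
my $db = DBM::Deep->new('shared.db');
$db->{results} = { run => 1, values => [1, 2, 3] };

# Script 2 (a separate process, later) opens the same file and sees it.
my $db2 = DBM::Deep->new('shared.db');
print $db2->{results}{values}[2], "\n";   # 3
```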
print "Just another Perl hacker,"; # the original
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.posterous.com/ for Smalltalk discussion
------------------------------
Date: Tue, 12 Oct 2010 11:14:55 -0700 (PDT)
From: "jl_post@hotmail.com" <jl_post@hotmail.com>
Subject: Re: Communication across Perl scripts
Message-Id: <e5045309-e9e5-4212-bcc9-9f38fb41193b@z28g2000yqh.googlegroups.com>
On Oct 12, 2:05 am, Peter Makholm <pe...@makholm.net> wrote:
>
> As long as you just use it for a single host for very temporary files,
> Storable is fine. But I have been bitten by Storable not being
> compatible between versions or different installations one time too
> many to call it 'highly recommended'.
I was under the impression that Storable::nstore() was cross-
platform compatible (as opposed to Storable::store(), which isn't).
"perldoc Storable" has this to say about it:
> You can also store data in network order to allow easy
> sharing across multiple platforms, or when storing on a
> socket known to be remotely connected. The routines to
> call have an initial "n" prefix for *network*, as in
> "nstore" and "nstore_fd".
Unfortunately, it doesn't really specify the extent of what was
meant by "multiple platforms". I always thought that meant any
platform could read data written out by nstore(), but since I've never
tested it, I can't really be sure.
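For what it's worth, a minimal nstore/retrieve round trip looks like this (on a single platform, which is exactly the case that doesn't test the cross-platform claim):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);   # core module

my %data = (count => 42, items => [1, 2, 3]);

# nstore() writes in network (big-endian) byte order, so the file should
# be readable on a machine with a different native byte order; plain
# store() uses the writer's native order.
nstore(\%data, 'data.nstore');

# retrieve() handles both formats transparently.
my $copy = retrieve('data.nstore');
print $copy->{count}, "\n";   # 42
```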
When you said you were "bitten" by Storable, were you using
Storable::store(), or Storable::nstore()?
-- Jean-Luc
------------------------------
Date: 12 Oct 2010 10:58:31 GMT
From: Tina Müller <use-net@tinita.de>
Subject: Re: Date difference in days
Message-Id: <8hitanFrs4U1@mid.individual.net>
Paul E. Schoen <paul@pstech-inc.com> wrote:
>
> BTW this webpage gets about 12 hits per day. Not much chance of a collision,
> and probably not much damage if it happens. Losing count of a hit is not a
> real problem,
That's not the biggest problem.
This can happen if
process 1 opens file and reads counter n, increments n
process 2 opens file and reads counter n, increments n
process 1 writes file and writes n+1
process 2 writes file and writes n+1
So the hit from the second process gets lost. You say, ok, but
that's just not very probable, so don't care if there are some
lost hits.
The *real* problem is that this code can truncate your counter to zero.
(happened to me a long time ago with my first counter =)
process 1 opens file and reads counter n, increments n
process 1 opens file with ">" mode, file is truncated
process 2 opens file and reads - nothing. oops! n=0, increments n
process 1 gets lock, writes n+1, closes
process 2 writes file with a count of 1
That's because in between the opening (and truncate) and the
flock there can be another process opening the file.
hth,
tina
--
http://www.perl-community.de/
http://perlpunks.de/
------------------------------
Date: Tue, 12 Oct 2010 13:53:42 +0100
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: Date difference in days
Message-Id: <4cb45a56$0$2518$db0fefd9@news.zen.co.uk>
On 12/10/2010 07:58, Paul E. Schoen wrote:
>
> "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid> wrote in message
>>
>> perldoc -q "I still don't get locking."
>
> Yes, I read that (although I had to log in to my server with telnet,
> which is inconvenient from a Win Vista machine).
* http://perldoc.perl.org/
* Install Putty on Vista
* Install perl on Vista
--
RGB
------------------------------
Date: Tue, 12 Oct 2010 07:39:50 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Date difference in days
Message-Id: <ons8b65ooqci7i0ronqep57n95ah7mptmg@4ax.com>
"Paul E. Schoen" <paul@pstech-inc.com> wrote:
>"J. Gleixner" <glex_no-spam@qwest-spam-no.invalid> wrote in message
>> perldoc -q "I still don't get locking."
>
>Yes, I read that (although I had to log in to my server with telnet, which
>is inconvenient from a Win Vista machine).
How so? Perl including perldoc runs just fine on Vista.
jue
------------------------------
Date: Tue, 12 Oct 2010 09:59:19 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: Date difference in days
Message-Id: <4cb477c7$0$89398$815e3792@news.qwest.net>
Paul E. Schoen wrote:
>
> "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid> wrote in message
> news:4cb31756$0$2511$815e3792@news.qwest.net...
>> Paul E. Schoen wrote:
>> [...]
>>> The script can be seen in action as the hit counter in:
>> [...]
>>
>> perldoc -q "I still don't get locking."
>
> Yes, I read that (although I had to log in to my server with telnet,
> which is inconvenient from a Win Vista machine).
What does using Telnet have to do with anything???
Perl can be installed on virtually any OS and it comes with the
documentation.
[...]
> I think the argument against hit counters is really splitting hares, and
> that's pretty rough on the rabbits.
Yes. That reference was about how useless it is to display a
counter.
You can get all statistics from the logs or even using Google Analytics,
without any of your "customers" seeing how little the site is being
used. A counter on a Web page screams, "Hey, this is my first web
page and I just started reading an HTML book from 1996."
If you want a less amateur looking site, drop the counter. Oh, and
before you get to the page in the book on blinking text, avoid that
one too. :-)
------------------------------
Date: Tue, 12 Oct 2010 21:12:49 +0200
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Date difference in days
Message-Id: <slrnib9cpj.q1l.hjp-usenet2@hrunkner.hjp.at>
On 2010-10-12 06:58, Paul E. Schoen <paul@pstech-inc.com> wrote:
>
> "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid> wrote in message
> news:4cb31756$0$2511$815e3792@news.qwest.net...
>> Paul E. Schoen wrote:
>> [...]
>>> The script can be seen in action as the hit counter in:
>> [...]
>>
>> perldoc -q "I still don't get locking."
>
> Yes, I read that (although I had to log in to my server with telnet, which
> is inconvenient from a Win Vista machine).
>
> This is the relevant code.
>
> open (LOG, '<', "$logpath");
> my @file = <LOG>; # an array of the file contents
> close(LOG);
>
> my $count = $file[0]; # the count value is the first line in the file
> $count++; # increments the counter value
>
> open (LOG, ">$logpath"); # opens the log file for writing
> flock(LOG, 2); # file lock set
> print LOG "$count\n"; # prints out the new counter value to the file
> flock(LOG, 8); # file lock unset
> close(LOG);
>
> This was from an existing script.
Wherever you got that script from, don't get any more scripts from
there. That's just awful.
As Tina mentioned, the flock there is useless. It doesn't protect the
two obvious race conditions. All it does protect is a single printf,
which almost certainly doesn't change the file anyway (the count is only
written at close, *after* you release the lock).
There are two ways to do this.
The safe way:
use Fcntl qw(:flock :seek); # for LOCK_EX, LOCK_UN, SEEK_SET
use IO::Handle; # for flush
open (my $log_fh, '+<', $logpath) or die "cannot open $logpath: $!";
# the file is now open for reading and writing, so we can lock it
flock($log_fh, LOCK_EX) or die "cannot lock $logpath: $!";
# read current counter and increment it
my $count = <$log_fh>;
$count++;
# rewind to the beginning of the file and write the new counter
seek($log_fh, 0, SEEK_SET);
print $log_fh $count;
$log_fh->flush() or die "cannot flush $logpath: $!";
# after flush we know that the file counter has been written
# (at least to the OS disk cache, not necessarily the disk),
# so we can release the lock
flock($log_fh, LOCK_UN);
# done - close the file
close($log_fh) or die "cannot close $logpath: $!";
(actually, in this simple case, flush and flock($log_fh, LOCK_UN) are
redundant - close will automatically flush any pending writes and unlock
the file (in this order)).
> BTW this webpage gets about 12 hits per day. Not much chance of a collision,
> and probably not much damage if it happens.
If you don't mind losing a hit every now and then you can do it safely
without locks:
open (my $log_fh, '<', $logpath) or die "cannot open $logpath: $!";
my $count = <$log_fh>;
$count++;
close($log_fh);
open (my $log_fh, '>', "$logpath.$$") or die "cannot open $logpath.$$: $!";
print $log_fh $count;
close($log_fh) or die "cannot close $logpath: $!";
rename("$logpath.$$", $logpath) or die "cannot rename $logpath.$$ to $logpath: $!";
If a second hit happens between the open and the rename, it won't be
counted. But the counter will never be accidentally reset to zero.
hp
------------------------------
Date: Tue, 12 Oct 2010 07:56:15 -0700 (PDT)
From: "jl_post@hotmail.com" <jl_post@hotmail.com>
Subject: Re: How can I check if two refs point to the same object?
Message-Id: <92330aa8-9d95-496d-9001-19099a242ab3@g17g2000yqo.googlegroups.com>
> jl_p...@hotmail.com <jl_p...@hotmail.com> wrote:
>
> > Is there a way to see which references refer to the same
> > object, even when the '==' and 'eq' operators are overloaded?
On Oct 11, 5:28 pm, pac...@kosh.dhis.org (Alan Curry) replied:
>
> Get their addresses with Scalar::Util::refaddr and compare.
Thanks, Alan! That's exactly what I'm looking for!
-- Jean-Luc
------------------------------
Date: Tue, 12 Oct 2010 05:39:14 -0700 (PDT)
From: T <zihav@yahoo.com>
Subject: Re: Looping on "if" statement?
Message-Id: <ba2a35f2-7499-42a7-9f5c-15a99cd01d4b@30g2000yqm.googlegroups.com>
Greetings,
Just wanted to post an update in case someone else runs into this
problem. It appears to be a Red Hat 5.5 issue with the following bit
of code and XML:
grep { m/[<>]/ } @out
@out contains the XML output from an AccuRev CLI command. Here's what
we found:
grep { m/<>/ } @array
* infinite loop, but only when run on RH5.5 in the overall context of
our program
* runs fine when run by itself in a test program with the same data
* also runs fine in the context of the perl debugger
grep m/<>/, @array
* works as expected
We've switched to the second form. Not sure why RH5.5 has this
problem. Hope this helps someone else.
Thanks for the help
Tom
------------------------------
Date: Tue, 12 Oct 2010 14:33:04 +0200
From: Tomasz Chmielewski <tch@nospam.wpkg.org>
Subject: real time log parser?
Message-Id: <8hj2s1F3fgU1@mid.uni-berlin.de>
I would like to write a log parser which would work "in real time".
Meaning, it will read e.g. /var/log/mail.info and append interesting
entries it finds to a database, according to some criteria.
What should I look at / read to achieve it? I'm OK to create a "static"
parser like this, where the file is parsed once, but I don't have much
experience with continuous processing of files which grow (and at times,
are rotated/removed/truncated).
--
Tomasz Chmielewski
http://wpkg.org
------------------------------
Date: Tue, 12 Oct 2010 14:06:16 +0100
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: real time log parser?
Message-Id: <4cb45d48$0$2508$db0fefd9@news.zen.co.uk>
On 12/10/2010 13:33, Tomasz Chmielewski wrote:
> I would like to write a log parser which would work "in real time".
>
> Meaning, it will read i.e. /var/log/mail.info and append interesting
> entries it finds to a database, according to some criteria.
>
>
> What should I look at / read to achieve it? I'm OK to create a "static"
> parser like this, where the file is parsed once, but I don't have much
> experience with continuous processing of files which grow (and at times,
> are rotated/removed/truncated).
>
>
(tail -f /var/log/mail.info | ./parser.pl) &
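Where parser.pl could be as simple as this sketch (the match pattern and the database step are placeholders):

```perl
#!/usr/bin/perl
# parser.pl - consumes log lines piped in by "tail -f" on STDIN.
use strict;
use warnings;

$| = 1;   # unbuffered output, since we're following a live stream
while (my $line = <STDIN>) {
    next unless $line =~ /status=bounced/;   # "interesting" criterion (assumed)
    print $line;   # a real parser would INSERT into the database here
}
```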
--
RGB
------------------------------
Date: Tue, 12 Oct 2010 15:23:38 +0200
From: Tomasz Chmielewski <tch@nospam.wpkg.org>
Subject: Re: real time log parser?
Message-Id: <8hj5qqFjsiU1@mid.uni-berlin.de>
On 12.10.2010 15:06, RedGrittyBrick wrote:
>> What should I look at / read to achieve it? I'm OK to create a "static"
>> parser like this, where the file is parsed once, but I don't have much
>> experience with continuous processing of files which grow (and at times,
>> are rotated/removed/truncated).
>>
>>
>
>
> (tail -f /var/log/mail.info | ./parser.pl) &
And then, logrotate / restart syslog
echo blah >> /var/log/mail.info
Oops, nothing new gets to the parser!
Besides, it'd be interesting to get rid of the tail binary, too.
--
Tomasz Chmielewski
http://wpkg.org
------------------------------
Date: Tue, 12 Oct 2010 14:33:42 +0100
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: real time log parser?
Message-Id: <4cb463b5$0$12173$fa0fcedb@news.zen.co.uk>
On 12/10/2010 14:23, Tomasz Chmielewski wrote:
> On 12.10.2010 15:06, RedGrittyBrick wrote:
>
>>> What should I look at / read to achieve it? I'm OK to create a "static"
>>> parser like this, where the file is parsed once, but I don't have much
>>> experience with continuous processing of files which grow (and at times,
>>> are rotated/removed/truncated).
>>>
>>>
>>
>>
>> (tail -f /var/log/mail.info | ./parser.pl) &
>
> And then,
Yes, sorry.
> logrotate
kill and restart the parser using logrotate's postrotate feature?
> / restart syslog
???
>
> Besides, it'd be interesting to get rid of the tail binary, too.
"This is the Unix philosophy: Write programs that do one thing and do it
well. Write programs to work together. Write programs to handle text
streams, because that is a universal interface." -- Doug McIlroy
--
RGB
------------------------------
Date: Tue, 12 Oct 2010 14:40:36 +0100
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: real time log parser?
Message-Id: <4cb46553$0$2531$da0feed9@news.zen.co.uk>
On 12/10/2010 14:33, RedGrittyBrick wrote:
> On 12/10/2010 14:23, Tomasz Chmielewski wrote:
>> On 12.10.2010 15:06, RedGrittyBrick wrote:
>>
>>>> What should I look at / read to achieve it? I'm OK to create a "static"
>>>> parser like this, where the file is parsed once, but I don't have much
>>>> experience with continuous processing of files which grow (and at
>>>> times,
>>>> are rotated/removed/truncated).
>>>>
>>>>
>>>
>>>
>>> (tail -f /var/log/mail.info | ./parser.pl) &
>>
>> And then,
>
> yes sorry.
>
>
>> logrotate
>
> kill and restart the parser using logrotate's postrotate feature?
>
>
>> / restart syslog
>
> ???
>
>
>>
>> Besides, it'd be interesting to get rid of the tail binary, too.
>
> "This is the Unix philosophy: Write programs that do one thing and do it
> well. Write programs to work together. Write programs to handle text
> streams, because that is a universal interface." -- Doug McIlroy
>
I should have mentioned File::Tail. TIMTOWTDI after all.
--
RGB
------------------------------
Date: Tue, 12 Oct 2010 08:56:35 -0500
From: Ted Zlatanov <tzz@lifelogs.com>
Subject: Re: real time log parser?
Message-Id: <87r5fv4ir0.fsf@lifelogs.com>
On Tue, 12 Oct 2010 15:23:38 +0200 Tomasz Chmielewski <tch@nospam.wpkg.org> wrote:
TC> On 12.10.2010 15:06, RedGrittyBrick wrote:
>> (tail -f /var/log/mail.info | ./parser.pl) &
TC> And then, logrotate / restart syslog
TC> echo blah >> /var/log/mail.info
TC> Oops, nothing new gets to the parser!
That's why we use "tail -F" ("same as --follow=name --retry") if it's
available :)
TC> Besides, it'd be interesting to get rid of the tail binary, too.
Well, not necessarily.
Ted
------------------------------
Date: Tue, 12 Oct 2010 16:12:24 +0200
From: Tomasz Chmielewski <tch@nospam.wpkg.org>
Subject: Re: real time log parser?
Message-Id: <8hj8m8F81mU1@mid.uni-berlin.de>
On 12.10.2010 15:56, Ted Zlatanov wrote:
> That's why we use "tail -F" ("same as --follow=name --retry") if it's
> available :)
Nice tip, thanks.
> TC> Besides, it'd be interesting to get rid of the tail binary, too.
>
> Well, not necessarily.
...because I'd like the parser to parse only the lines it didn't parse
before. And other featuritis like that.
Say the parser crashed for some reason, or didn't run for half a day?
Start it again and it will figure out where it last ended.
Certainly, there are more ways to do it, but I don't think tail helps
here a lot, quite the contrary. File::Tail is a good one, too, thanks
for the tips.
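A minimal sketch of that remember-where-you-were idea (file names and the parse step are placeholders): keep the byte offset in a small state file, and start over if the log shrank.

```perl
use strict;
use warnings;

sub parse_new_lines {
    my ($log, $state) = @_;

    my $offset = 0;
    if (open my $s, '<', $state) { chomp($offset = <$s> // 0); }

    open my $fh, '<', $log or die "cannot open $log: $!";
    $offset = 0 if $offset > -s $fh;   # file shrank: rotated/truncated
    seek $fh, $offset, 0;

    my @new = <$fh>;                   # ... parse / insert into the DB here ...

    open my $save, '>', $state or die "cannot write $state: $!";
    print {$save} tell($fh);
    return @new;
}

# Demo: two runs over a growing file; only new lines show up the second time.
open my $w, '>', 'demo.log' or die $!; print {$w} "one\n"; close $w;
my @first = parse_new_lines('demo.log', 'demo.state');    # ("one\n")
open $w, '>>', 'demo.log' or die $!; print {$w} "two\n"; close $w;
my @second = parse_new_lines('demo.log', 'demo.state');   # ("two\n")
print @second;   # two
```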
--
Tomasz Chmielewski
http://wpkg.org
------------------------------
Date: Tue, 12 Oct 2010 09:19:54 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: suitable key for a hash
Message-Id: <9a9c9ce1-b08f-4781-a055-8af8cca793ae@28g2000yqm.googlegroups.com>
I have a data file to process that consists of about 25K rows and
about 30 columns. This file contains no column with unique values,
that is, every column contains duplicate values. I am placing the data
in a hash to process it (so I can access the data values by name
rather than position), and the only 'key' I can come up with is the $.
variable for the input line numbers.
Surely someone must have dealt with this problem before. Is there a
better solution?
The processing requires dumping the data into discrete categories,
e.g., level, state, person's name, status, for the purpose of
generating reports, e.g., by level, by state, by name, by status, and
not having a unique key isn't an issue.
CC.
------------------------------
Date: Tue, 12 Oct 2010 18:03:07 +0100
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: suitable key for a hash
Message-Id: <4cb494ca$0$12170$fa0fcedb@news.zen.co.uk>
On 12/10/2010 17:19, ccc31807 wrote:
> I have a data file to process that consists of about 25K rows and
> about 30 columns. This file contains no column with unique values,
> that is, every column contains duplicate values. I am placing the data
> in a hash to process it (so I can access the data values by name
> rather than position), and the only 'key' I can come up with is the $.
> variable for the input line numbers.
>
> Surely someone must have dealt with this problem before. Is there a
> better solution?
A better solution than
... $name{$index} ...
must surely be
... $name[$index] ...
I don't see any point in using hashes if the key is an integer in the
range 1..25000 with no gaps.
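Concretely, an array of hashes gives positional identity plus named columns (column names and sample data here are invented):

```perl
use strict;
use warnings;

# Sample data and column names are assumptions for illustration.
my $csv = "2,VA,Alice,active\n1,MD,Bob,inactive\n2,VA,Carol,active\n";

# Rows with no natural unique key fit an array of hashes: the array
# index gives identity, the hash keys give column names.
my @rows;
open my $fh, '<', \$csv or die $!;
while (my $line = <$fh>) {
    chomp $line;
    my %row;
    @row{qw(level state name status)} = split /,/, $line;
    push @rows, \%row;
}
close $fh;

# Reporting "by state" is then a grouping pass - no unique key needed:
my %by_state;
push @{ $by_state{ $_->{state} } }, $_ for @rows;
print scalar @{ $by_state{VA} }, "\n";   # 2
```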
> The processing requires dumping the data into discrete categories,
> e.g., level, state, person's name, status, for the purpose of
> generating reports, e.g., by level, by state, by name, by status, and
> not having a unique key isn't an issue.
An SSCCE would help.
--
RGB
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 3170
***************************************