[23734] in Perl-Users-Digest


Perl-Users Digest, Issue: 5940 Volume: 10

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Dec 15 03:05:39 2003

Date: Mon, 15 Dec 2003 00:05:08 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 15 Dec 2003     Volume: 10 Number: 5940

Today's topics:
    Re: Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
    Re: Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
    Re: Caching results (?) of lengthy cgi process <sbryce@singlepoint.net>
    Re: Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
    Re: Caching results (?) of lengthy cgi process <henryn@zzzspacebbs.com>
    Re: Caching results (?) of lengthy cgi process <flavell@ph.gla.ac.uk>
    Re: editing challenge: Perl vs. cfengine <perl@my-header.org>
    Re: encrypt email address to a string <test@test.com>
    Re: encrypt email address to a string (Randal L. Schwartz)
        Hash to a 2d array? <blrow@hotmail.com>
    Re: Hash to a 2d array? <invalid-email@rochester.rr.com>
    Re: Hash to a 2d array? <thepoet_nospam@arcor.de>
    Re: Looking for script to monitor http and https sites (John W.)
        Mo Money. <fabian_h@sbcglobal.net>
        Perl equivalent of "On error resume" (Rajesh)
    Re: Perl equivalent of "On error resume" <mgjv@tradingpost.com.au>
    Re: Perl equivalent of "On error resume" <uri@stemsystems.com>
    Re: Planning for maintenance <derek.moody@clara.net>
    Re: Planning for maintenance <dmcbride@naboo.to.org.no.spam.for.me>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Mon, 15 Dec 2003 00:49:07 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <BC0244FA.19629%henryn@zzzspacebbs.com>

Alan:

Thanks for your response on this thread:

in article Pine.LNX.4.53.0312132204240.31846@ppepc56.ph.gla.ac.uk, Alan J.
Flavell at flavell@ph.gla.ac.uk wrote on 12/13/03 2:08 PM:

> On Sat, 13 Dec 2003, Sherm Pendley wrote:
> 
>> Henry wrote:
>> 
>>> 1)  Arrange for "update_results.cgi" to run every few weeks and write
>>> plain-old "results.html" which replaces "results.cgi", or
>> 
>> Why would you need to use a CGI at all, in this case? Couldn't you
>> simply schedule an ordinary script to run every few weeks?
> 
> A "make file" with suitable dependency definitions would seem to be
> the solution of choice for this kind of requirement.  With "make"
> being run independently of the web server, as you say (although I have
> been known, in the case of remote web servers, to kick-off the
> maintenance task by poking an unadvertised CGI script at the remote
> server, using a web browser or other client action at a computer close
> to myself).

Hmmmm, interesting, I would have never considered 'make', though I have from
time to time used it for tasks other than conventional dependency
management/project building in a source tree.

In this case, I _think_ there are no dependencies.  Everything that
"update_results.cgi" might do is similar, independent, and without
precursors other than a specific set of remote files.   So a straight
procedural approach is probably sufficient, probably doing something as
simple as a 'foreach'.
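
Something like this minimal sketch, where fetch_page() and render_index() are
hypothetical stand-ins for the real fetch and page-building code (the first
might really use LWP::Simple::get):

```perl
use strict;
use warnings;

# fetch_page() and render_index() are hypothetical stand-ins: the first
# might really fetch over HTTP, the second builds the derivative page.
sub fetch_page   { my ($url)  = @_; return "<html>stub for $url</html>" }
sub render_index { my ($html) = @_; return "INDEX: $html\n" }

my @remote_urls = ('http://example.com/a.html', 'http://example.com/b.html');

foreach my $url (@remote_urls) {
    my $html = fetch_page($url);    # grab the remote index page
    print render_index($html);      # emit the rebuilt index
}
```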

If there's a failure, it's because something has gone horribly wrong:
someone has changed the remote data in radical ways, and I'll have to
re-engineer the guts to follow.

Those are the reasons I can think of to use 'make'.  Did I understand your
recommendation?

One more question:  if you "kick off" the maintenance task via an
unadvertised CGI script, do the resulting processes run at page-load time,
with the usual permissions?  Or did you figure out a way of unlinking the
processes?  
> 
> ObPerl: there's more than one way to do it.  ;-)

Right, and --as I'm discovering in this thread-- there are lots of different
viewpoints about how to use the tools.  Perhaps these could be called
"meta-ways of doing it".  I'm learning a lot.

Thanks,

Henry

henryn@zzzspacebbs.com    remove 'zzz'


> 



------------------------------

Date: Mon, 15 Dec 2003 01:23:57 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <BC024D2A.1962E%henryn@zzzspacebbs.com>

Randal:

Thank you for your response on this thread:

in article c745137a42619932140433ef144416b8@news.teranews.com, Randal L.
Schwartz at merlyn@stonehenge.com wrote on 12/13/03 8:56 PM:

>>>>>> "Henry" == Henry  <henryn@zzzspacebbs.com> writes:
> 
> Henry> In most basic terms this means having the
> Henry> time-since-last-update visible in a base-page prompt: "It's
> Henry> been 'n' days since the last update, hit 'submit' to re-synch."
> Henry> That is a bit different from an automagic, crontab-scheduled
> Henry> update.
> 
> Consider the code at <http://www.stonehenge.com/merlyn/LinuxMag/col20.html>.

Wow, thanks!   I've reviewed the narrative and code a couple of times. I'm
not yet good enough at Perl to grasp the more subtle points, but I get the
drift.   

I don't need this level of rigor because, for one thing, the details of the
update process might actually contain some interesting results. (Watching
the processing go by may also turn out to be incredibly boring.  I don't
know yet.)  The other is there will be only one person accessing this stuff
at a time, for the foreseeable future.

The key area for me has to do with detecting changes in the remote data,
which --as I've said-- isn't under my control.

These are index pages to a bunch of other files. I need to sniff the
delivered HTML to see if it has changed.  If it has, I want to generate a
derivative index which will be valid on my own server indefinitely, until the
next time someone updates the index pages.

I was thinking of doing a simple CRC check to detect changes, since the HTML
source as delivered shows no time/date info at all.   Given a generalized
web page, where no one is particularly worried about informing users of
changes, are there more sophisticated methods I might use?
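
One variant of that check, using a stored digest rather than a CRC
(Digest::MD5 is a core module; the stamp-file name is whatever suits):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Compare a digest of the freshly fetched HTML against the digest saved
# from last time; update the stamp file and report whether it changed.
sub page_changed {
    my ($html, $stamp_file) = @_;
    my $new = md5_hex($html);
    my $old = '';
    if (open my $in, '<', $stamp_file) {
        chomp($old = <$in> // '');
        close $in;
    }
    open my $out, '>', $stamp_file or die "$stamp_file: $!";
    print {$out} "$new\n";
    close $out;
    return $new ne $old;
}
```
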
> 
> And why didn't you ask this in a CGI group instead of a Perl group?

Ummm, gosh, well, mainly because I haven't found one.  I've only been doing
this for about a month.   I found alt.comp.perlcgi.freelance, but it doesn't
seem the right place.  Can you recommend a good CGI group?
> 
> print "Just another Perl hacker,"

Thanks,

Henry

henryn@zzzspacebbs.com   remove 'zzz'



------------------------------

Date: Sun, 14 Dec 2003 18:35:54 -0700
From: Scott Bryce <sbryce@singlepoint.net>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <3FDD0FFA.4060507@singlepoint.net>

Henry wrote:

> Can you recommend a good CGI group?

comp.infosystems.www.authoring.cgi



------------------------------

Date: Mon, 15 Dec 2003 01:38:42 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <BC02509F.19632%henryn@zzzspacebbs.com>

P:

Thanks for your post on this thread:

in article pkent77tea-E0BDB9.15324614122003@ptb-nnrpp01.plus.net, pkent at
pkent77tea@yahoo.com.tea wrote on 12/14/03 7:32 AM:

> In article <BC00ACF7.1943B%henryn@zzzspacebbs.com>,
> Henry <henryn@zzzspacebbs.com> wrote:
> 
>> Consider: Perl "results.cgi" generates a perfectly good page of juicy
>> results, except it takes much too long in human time to do it, over 30
>> seconds -- and it is likely to get longer, not shorter.
>> 
>> In fact, the results don't change very often, so the hard work only needs to
>> be done periodically, maybe every couple of weeks.   So, most of the time,
>> there's no need to have the user wait for a full update.
> 
> If I understand your situation correctly, what I'd do is:
> 
> 1) run your perl program to generate foo.html
> 2) don't point people to results.cgi, point them to foo.html
> 3) periodically run your perl program to generate an updated foo.html
> (you might do this as a cron job, or on an ad-hoc basis)

Sure, can do.  I guess I'm trying to avoid this because I'd rather have nice
HTML formatting control over the update's progress information than grab it
from the terminal log -- a flimsy reason.
> 
> Another option is to write to disk the _results_ of the query, not the
> HTML that is finally sent to the browser. You can then have a relatively
> lightweight CGI that reads in the results from disk and presents them to
> the user in the way that the user wants. This might be useful where you
> have a large number of results and users can choose "10 results per
> page" or "give me page 4 of the results".

Yes, that's closer to the issue I'm chasing: presenting the data in the best
possible format for use.    There may eventually be multiple alternatives.
Whatever is useful in structuring the documents.   See also just below.
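
pkent's split above (heavy job writes the raw result records to disk, light
CGI serves one page of them) might paginate like this sketch, where the page
size and record source are placeholders:

```perl
use strict;
use warnings;

# Slice one "page" of cached result records.  $page counts from 1; the
# page size is whatever the user picked ("10 results per page").
sub page_of {
    my ($records, $page, $per_page) = @_;
    my $first = ($page - 1) * $per_page;
    return () if $first > $#$records;
    my $last = $first + $per_page - 1;
    $last = $#$records if $last > $#$records;
    return @{$records}[ $first .. $last ];
}
```
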
> 
> In parallel you might also want to find out why the query is taking so
> long, if only for future reference when you want (or are asked) to have
> another query where the results _do_ change.

I _think_ the query is taking so long because there is a lot to do.     The
pages I'm indexing contain standard outline-style tables of contents; my
test case contains some 1800 lines of stuff like this

    I. Major Division
      a. next division
    II. Major Division
      a. next division
         A. Getting Deeper
           i. Deeper still
    ...

which (with the help of some contributed JS) I'm turning into a much more
user-friendly collapsing list.  It seems that churning through that many
lines generating the JS for each TOC line just takes a long time.   My
server machine is only a PPC running at 450MHz.

I don't think the time to actually get the stuff from the remote site (via
DSL) is a significant issue.  Right now I'm running test cases in which the
raw data is pre-downloaded so the time required should be purely a matter of
local processing.

Thanks,

Henry

henryn@zzzspacebbs.com  remove 'zzz'
> 
> P



------------------------------

Date: Mon, 15 Dec 2003 01:50:39 GMT
From: Henry <henryn@zzzspacebbs.com>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <BC02536B.19638%henryn@zzzspacebbs.com>

Eric:

Thanks for your response on this thread:

in article Xns945175C7752B2sdn.comcast@216.196.97.136, Eric J. Roode at
REMOVEsdnCAPS@comcast.net wrote on 12/14/03 8:35 AM:

> merlyn@stonehenge.com (Randal L. Schwartz) wrote in
> news:c745137a42619932140433ef144416b8@news.teranews.com:
> 
>> And why didn't you ask this in a CGI group instead of a Perl group?
> 
> Because Perl and CGI are the same thing, of course.... right?

Yeah, well, it's a bit of a mystery to me....

As I responded to that issue, my main excuse is that I haven't yet found the
right CGI group.

I don't want to drive this into the ground, but it would be helpful to me to
understand this point a _little_ better, if you have patience to do it.

I understand CGI as the venue and Perl as the technology.   Thus, for
example, someone attempting to learn pastry-making might ask advice from a
French pastry chef, a generic industrial baker, or a home cook.  The
processes will vary by venue, but the fundamental technology  is the same.

So, have I grasped the situation, or missed it by light years?

Thanks,

Henry

henryn@zzzspacebbs.com








------------------------------

Date: Mon, 15 Dec 2003 01:27:10 +0000
From: "Alan J. Flavell" <flavell@ph.gla.ac.uk>
Subject: Re: Caching results (?) of lengthy cgi process
Message-Id: <Pine.LNX.4.53.0312150054440.8536@ppepc56.ph.gla.ac.uk>

On Mon, 15 Dec 2003, Henry wrote:

> In this case, I _think_ there are no dependencies.

At least: the published web files are dependent on the data from
which they are built.

[I fear that there may not be very much Perl in this answer, but I beg
the group's indulgence for dealing with it as briefly as I can here,
rather than trying to chase the questioner off to some other group...]

> precursors other than a specific set of remote files.   So a straight
> procedural approach is probably sufficient, probably doing something as
> simple as a 'foreach'.

Sure enough, you can, if you don't object to rebuilding all the web
files even though for some of them their data is unchanged (with
consequent modification of the last-change date which the server will
advertise to clients, unless you take extra actions to adjust that).
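
The make-style freshness test can even be imitated in Perl itself, if a full
makefile feels like overkill; a sketch:

```perl
use strict;
use warnings;

# A make-style dependency test: rebuild the target when it is missing
# or older than its source.  Field 9 of stat() is the mtime.
sub needs_rebuild {
    my ($source, $target) = @_;
    return 1 unless -e $target;
    return (stat $source)[9] > (stat $target)[9] ? 1 : 0;
}
```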

> If there's a failure, it's because something has gone horribly wrong:
> someone has changed the remote data in radical ways, and I'll have to
> re-engineer the guts to follow.

(I suspect you're looking for more complexity than I had intended.
We aren't aiming to rebuild a Linux kernel or the Apache server - the
makefile can be quite a simple thing.)

I don't know your situation in detail, so I'll describe a scenario
with which I was familiar, and see how it might throw light on yours.

Authors deliver source files in latex format, plus diagrams in
encapsulated postscript (.eps) format.  These have to be latex'ed
against a style file to produce a dvi file, which then is run through
dvips to produce a postscript file.  The latex files also have to be
packaged together with their embedded diagrams into .tar.gz archives.
They also have to be converted into an HTML format with embedded
images, using a conversion template.

Each of these components (the author's source, the style file, the
conversion template...) can change, and each change would call for
some part of the publishing process to be repeated on a subset of the
files.  The whole thing can be driven by a makefile, based on the
last-change datestamps of the various components, and recipes for what
process to apply to each component to produce its derivative
component.

Seems to me that your task is perhaps a simpler form of that class of
requirement, no?

> One more question:  if you "kick off" the maintenance task via an
> unadvertised CGI script, do the resulting processes run at page-load time,

By "page load" you mean the loading of the page corresponding to the
maintenance task, performed by the administrator(1)?  Or you mean the
loading of web pages by the end user(2)?

Certainly I didn't mean (2), no.  The whole idea here is to separate
the maintenance task from the delivery of web pages.  The use of a
maintenance script that's run on the web server is purely a
convenience for the administrator.  As for (1), then if the
maintenance task can be completed in acceptable time (say 30-60secs)
then yes, that can be done, with confirmatory results written back to
the admin's browser.

But if it takes much longer (say 10-20 minutes or more), it would be
better to daemonize the maintenance task from the original CGI
process; the CGI process itself can then terminate, and the
spawned-off maintenance task can send back its maintenance log later.
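
A rough sketch of that daemonizing step (Unix-ish assumptions; the command to
run is a placeholder):

```perl
use strict;
use warnings;
use POSIX qw(setsid);

# Fork the long-running maintenance job away from the CGI request so
# the CGI can finish immediately.  The command to exec is a placeholder.
sub spawn_maintenance {
    my (@cmd) = @_;
    defined(my $pid = fork) or die "fork: $!";
    return if $pid;                 # parent (the CGI) carries on at once
    setsid() or die "setsid: $!";   # child: detach from the session
    open STDIN,  '<', '/dev/null' or die $!;
    open STDOUT, '>', '/dev/null' or die $!;
    open STDERR, '>&', \*STDOUT   or die $!;
    exec @cmd or die "exec: $!";
}
```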

> with the usual permissions?

Since the maintenance task is run in the web server's environment,
then it has the same permissions (uid, gid, file access etc.) by
default as any other CGI script in that situation would have.

> Or did you figure out a way of unlinking the processes?

Sorry, I didn't quite get that?

good luck.  As you say, there's almost too many different ways of
doing this ;-)


------------------------------

Date: Mon, 15 Dec 2003 00:10:52 +0100
From: Matija Papec <perl@my-header.org>
Subject: Re: editing challenge: Perl vs. cfengine
Message-Id: <99rptv8f6qguamold68qsksf0bq3jqrsa3@4ax.com>

X-Ftn-To: Ted Zlatanov 

Ted Zlatanov <tzz@lifelogs.com> wrote:
>>>My own attempt used an array-file tie and was definitely not
>>>as elegant.
>> 
>> IMHO, you don't have much choice; you'll need some module with nice
>> interface to stay elegant.
>
>I think a module would be OK to answer this challenge, as long as
>it's not written just to answer this challenge, but is really useful :)

Well, one could start with building methods specific to cfengine,

my $e = Text::PEditor->open($file);
$e->LocateLineMatching(".*. /etc/init.d/functions.*");
$e->IncrementPointer(-3);
 ...

and extend things later (if there is a strong will :))
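
A toy version of that hypothetical Text::PEditor interface, just to show the
cursor idea (the module and its method names are invented in this thread, not
anything on CPAN):

```perl
use strict;
use warnings;

package Text::PEditor;

# A toy line editor with a movable cursor, matching the invented
# interface above.  open() reads the file; the "pointer" is a line index.
sub open {
    my ($class, $file) = @_;
    CORE::open(my $fh, '<', $file) or die "$file: $!";
    chomp(my @lines = <$fh>);
    close $fh;
    return bless { lines => \@lines, ptr => 0 }, $class;
}

sub LocateLineMatching {
    my ($self, $re) = @_;
    for my $i (0 .. $#{ $self->{lines} }) {
        if ($self->{lines}[$i] =~ /$re/) {
            return $self->{ptr} = $i;
        }
    }
    return -1;
}

sub IncrementPointer {
    my ($self, $n) = @_;
    return $self->{ptr} += $n;
}

sub InsertLine {                       # insert before the cursor line
    my ($self, $text) = @_;
    splice @{ $self->{lines} }, $self->{ptr}, 0, $text;
}

package main;
```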

>In particular, Perl does not have good facilities for a logical
>line-based cursor for insertion and deletion of lines.  Emacs Lisp,
>for example, is very good at this.  Are there any CPAN modules that
>would do this?  I know only of the one that ties a file to an array,
>but that doesn't really have a logical cursor.

I did some searching on CPAN but no luck; maybe I didn't look hard enough.



-- 
Matija


------------------------------

Date: Mon, 15 Dec 2003 00:08:38 GMT
From: "Fengshui" <test@test.com>
Subject: Re: encrypt email address to a string
Message-Id: <aY6Db.196176$Ec1.7205574@bgtnsc05-news.ops.worldnet.att.net>

>
> > domain rule is: caseless letters, numbers and -
> >
> > username rule is letters, numbers, -, _ and .
> >
> > so if I want to convert/encrypt an email into a letter/number-only
> > string, how do I do this without adding length to the string? I don't
> > want to just exchange ASCII code for each one.
>
> How strong do you want the encryption to be?  Unless you have an
> extreeeemely simple scheme, you're going to add length.  Plus, you're
> going to have to do annoying things like Huffman-encode bit
> sequences.  Ick.
>
> What's this for, anyhow?
>

What I need to do is pretty simple. I need to add an email address as a
referral, but I want it to look like something different, so that spammers
will not know it is an email address. I don't want to keep a database to map
member1 = abc@abc.com, member2 = abe@abe.us.
Example:
abc@abc.com can be changed to abcAabcB
A=@
B=.com
but I also want to make the string shorter, because an email can be a pretty
long string. My idea is that the domain is made of 26+10+1=37 items, and if
I add capital letters then I will have 37+26=63 items; then I do a mapping to
change a base-37 number to a base-63 number, using digits such as
1234567890abcdefghi...
then
11 will be 1+37=38 which is A
aa will be 11+37*11=418
111 will be 1+37+37*37=1407...

so in the long run, I will make the string much shorter by mapping a 38-item
set to a 63-item set. If I add the dot '.', then I will map 39 to 64.

it is a two way string, so that if I look at abcAabcB, I will know what that
is..

I have this idea, but I don't know what is the largest integer Perl can
handle, and I want to have an algorithm to convert base 39 to base 64 easily.
How do you convert base 10 to base 16 without all those calculations? I
remember there is something I learned in school, but it was 20 years ago and
I could not remember. My English is poor and I don't know how to search for
it on Google. What do you call base 10 and base 16? Hex? Dec?
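
(For reference: base 10 is called "decimal", base 16 "hexadecimal"; Perl's
plain integers are exact to at least 2**53 even on builds that store them as
doubles, and Math::BigInt covers anything larger.)  A sketch of converting
to and from an arbitrary base; the digit alphabet below is a placeholder
covering bases up to 62, so a 63- or 64-symbol scheme would need one or two
more characters:

```perl
use strict;
use warnings;

# Convert a non-negative integer to and from an arbitrary base.  The
# digit alphabet is a placeholder covering bases up to 62; a 63- or
# 64-symbol scheme just needs a longer alphabet (e.g. add '-' and '.').
my @digits = ('0' .. '9', 'a' .. 'z', 'A' .. 'Z');

sub to_base {
    my ($n, $base) = @_;
    my $s = '';
    do {
        $s = $digits[$n % $base] . $s;
        $n = int($n / $base);
    } while $n > 0;
    return $s;
}

sub from_base {
    my ($s, $base) = @_;
    my %val;
    @val{@digits} = 0 .. $#digits;
    my $n = 0;
    $n = $n * $base + $val{$_} for split //, $s;
    return $n;
}
```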






------------------------------

Date: 14 Dec 2003 18:31:51 -0800
From: merlyn@stonehenge.com (Randal L. Schwartz)
Subject: Re: encrypt email address to a string
Message-Id: <86smjmvmjs.fsf@blue.stonehenge.com>


>>>>> "Fengshui" == Fengshui  <test@test.com> writes:

Fengshui> username rule is letters, numbers, -, _ and .

You realize this is not the general case?  Email "usernames"
(called "local-part" in RFC terminology) can include every possible
character.

For example, the address <fred&barney@stonehenge.com> is a perfectly
legal address.  (Go ahead and email it!  It's an autoresponder.)

As is <*@qz.to>, my acquaintance "Eli the bearded".  (And he is
definitely bearded.)

Please don't be so narrowminded about email addresses.

print "Just another Perl hacker,"

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!




------------------------------

Date: Mon, 15 Dec 2003 01:45:29 GMT
From: "SpaceCowboy" <blrow@hotmail.com>
Subject: Hash to a 2d array?
Message-Id: <Zm8Db.47440$HH.46828@fe1.texas.rr.com>

I just started programming perl recently, but I have experience with similar
scripting languages.  I wanted to read in a directory, and the filenames of
the directory into a text file.  I accomplished this quickly, then wanted to
generalize it.  To make a long story short, I spent the next 6 hours trying
to figure out how to write and read a 2d array from a hash.

From one file, I read a data specification such as:
extension = bmp
Image
{
   name = ***FILENAME***
   x = 190
   y = 0
}...

Wherever I recognize the *** as meaning to insert the filename there, and
do this for all file names.  This allows me to do some initial data creation
very quickly.

So I wanted to create a hash <extensions => 2D Array>
Each element of the outer most array contains a field name and a possible
keyword
e.g.
{bmp=>[[name, FILENAME],[x,190],[y,0]]}
I hash to a 2D array because a hash to a hash might change the order of
fields as they are written out.

I cannot figure out how to write/read this data.  I've read the Flintstones
example on the web and tried this
@new_field = [x, 190];
push @{my_hash{key}} $new_field; # I believe this is what I tried, but the
code is at work

But then I don't know how to iterate through it to print out my fields.  The
closest I've come is displaying the array memory address on my print.  I'm
not sure if I'm inserting correctly, or if I'm just iterating incorrectly.
Could anyone post some example of how to hash into a 2d array?  Inserting,
and then iterating would be appreciated.

SpaceCowboy




------------------------------

Date: Mon, 15 Dec 2003 02:49:30 GMT
From: Bob Walton <invalid-email@rochester.rr.com>
Subject: Re: Hash to a 2d array?
Message-Id: <3FDD1FFC.7090403@rochester.rr.com>

SpaceCowboy wrote:

> I just started programming perl recently, but I have experience with similar
> scripting languages.  I wanted to read in a directory, and the filenames of
> the directory into a text file.  I accomplished this quickly, then wanted to
> generalize it.  To make a long story short, I spent the next 6 hours trying
> to figure out how to write and read a 2d array from a hash.
> 
> From one file, I read a data specification that such as
> extension = bmp
> Image
> {
>    name = ***FILENAME***


Well, first, let's start by agreeing to write Perl code, not some sort of 
pseudocode.  So that might be something like:

my $Image={
     name=> 'filename.jpg',
     x   => 190,
     y   => 0,
};


>    x = 190
>    y = 0
> }...
> 
> Whereever I recognize the *** as meaning to insert the filename there, and
> do this for all file names.  This allows me to do some initial data creation
> very quickly.
> 
> So I wanted to create a hash <extensions => 2D Array>
> Each element of the outer most array contains a field name and a possible
> keyword
> e.g.
> {bmp=>[[name, FILENAME],[x,190],[y,0]]}


So in Perl, that would be:

my $Image={
    bmp => [
              ['name', 'filename.jpg'],
              ['x',190],
              ['y',0],
           ],
};

You don't mention where 'name', 'x', and 'y' come from.  I assume they 
are just hardcoded items in your program.  In that case, you really 
needn't bother storing them in the array, as their presence there won't 
convey any information you don't already know (if they were hash keys, 
they would be needed to retrieve the proper fields).  So:

my $Image={
    bmp => [
              'filename.jpg',
              190,
              0,
           ],
};

would adequately convey the information, while dispensing with a 
one-element anonymous array that isn't needed.


> I hash to a 2D array because a hash to a hash might change the order of
> fields as they are written out.
> 
> I cannot figure out how to write/read this data.  I've read the flinstones
> example on the web and tried this
> @new_field = [x, 190];


The construction [x, 190] first off will give a warning about a bareword 
(you *are* using warnings, right?).  Secondly, it assigns a scalar value 
(a reference to an anonymous array) to an array.  Perl will do what you 
probably meant and create a one-element array, but this would be far 
better and much clearer:

   my $new_field = ['x',190];


> push @{my_hash{key}} $new_field; # I believe this is what I tried, but the


Again, a bareword problem.  And also note that @new_field and $new_field 
are totally unrelated variables, having nothing whatever in common other 
than sharing the same typeglob name.  Maybe:

   push @{$my_hash{key}},$new_field;

is what you want?  Note that it is OK to have a bareword hash key as 
long as it is like a Perl identifier (alphanumeric etc), but that it is 
not OK to have a bareword hash name -- and the comma separating the 
arguments for push is required.


> code is at work
> 
> But then I don't know how to iterate through it to print out my fields.  The
> closest I've come is displaying the array memory address on my print.  I'm
> not sure if I'm inserting correctly, or if I'm just iterating incorrectly.
> Could anyone post some example of how to hash into a 2d array?  Inserting,
> and then iterating would be appreciated.
> 
> SpaceCowboy

Well, I'm not sure exactly what it is you want to end up with, nor 
exactly what it is you are starting with.  Let's suppose you have input 
data consisting of whitespace separated fields containing a filename and 
two integers in that order.  You want to add these to a hash keyed by 
the filename, and then print the result out.  If that is remotely close 
to what you want, try:

use strict;
use warnings;
my %myhash;
while(<DATA>){
    chomp;
    my @array=split;
    $myhash{$array[0]}=[@array[1,2]];
}
for(sort keys %myhash){
    print "Filename: $_, int1: $myhash{$_}[0], int2: $myhash{$_}[1]\n";
}
__END__
filename.jpg 190 0
asdf.fsda 42 23
lastone.txt 123 321

If you actually do have what you described (a hash containing 2D 
arrays), you could add values to it with something like:

   $hash{$key}[$i][$j]='value';

and reference existing values with:

   $value=$hash{$key}[$i][$j];

That's not so bad, is it?
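
And to round out the insert-then-iterate request, a self-contained walk over
the exact structure from the original post:

```perl
use strict;
use warnings;

# The structure from the question: a hash keyed by extension, each value
# a 2D array of [field, value] rows (order preserved, unlike a hash).
my %spec = (
    bmp => [ [ 'name', 'FILENAME' ], [ 'x', 190 ], [ 'y', 0 ] ],
);

for my $ext (sort keys %spec) {
    print "extension = $ext\n";
    for my $row (@{ $spec{$ext} }) {   # each $row is an array reference
        my ($field, $value) = @$row;
        print "   $field = $value\n";
    }
}
```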

If you don't know what you have,

   use Data::Dumper;

or use the Perl debugger:

   perl -d programfilename.pl

to find out.

HTH
-- 
Bob Walton
Email: http://bwalton.com/cgi-bin/emailbob.pl



------------------------------

Date: Mon, 15 Dec 2003 07:56:38 +0100
From: "Christian Winter" <thepoet_nospam@arcor.de>
Subject: Re: Hash to a 2d array?
Message-Id: <3fdd5b26$0$19077$9b4e6d93@newsread2.arcor-online.net>

"SpaceCowboy" wrote:
[a lot about a question on nested data]

Did you already have a look at the output of
"perldoc perlref" and "perldoc perlreftut" (also
available at http://www.perldoc.com)? Those two
should explain all aspects of nested data structures:
how to construct, access and modify them.

HTH
-Christian



------------------------------

Date: 14 Dec 2003 21:49:08 -0800
From: jwagenleitner@yahoo.com (John W.)
Subject: Re: Looking for script to monitor http and https sites
Message-Id: <a4c755ee.0312142149.55f08fe7@posting.google.com>

tdang1@slb.com (perlnovice) wrote in message news:<8c66c403.0312111131.25cdbb5@posting.google.com>...
> Hello,
> 
> I am looking for a perl script that can help me monitor about 15 http and https
> websites. 
> 
> The script will:
> 
>     o Download the webpage then verify if the page contains some text.
>       1. If it does the script goes back to sleep.
>       2. If not, it will send out an email.
>     o If the website is down. The script will time out in 10 seconds and send 
>       out an email.
> 
> Should you have a script doing a similar job, please send me a copy.
> 
> Thank you very much in advance,
> 
> perlnovice

Use the LWP::Simple and Net::SMTP modules and write your own script. 
You can get the contents of a URL and use a regex to verify some
information on that page.  It's a trivial exercise and perfect for a
novice.

Believe me, writing your own will be much easier than trying to find
one already written as the ones you will find will likely be much more
than what you require.
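
The skeleton might look like this; fetch() and alert() below are stand-ins
for LWP::Simple::get and a Net::SMTP mail-out, and the site list is a
placeholder:

```perl
use strict;
use warnings;

# fetch() stands in for LWP::Simple::get (undef on failure); alert()
# stands in for a Net::SMTP mail-out.  The site list is a placeholder.
sub fetch { my ($url) = @_; return "<html>Example Domain</html>" }
sub alert { my ($msg) = @_; warn "ALERT: $msg\n" }

sub check_site {
    my ($url, $must_match) = @_;
    my $html = fetch($url);
    return 1 if defined $html && $html =~ /$must_match/;
    alert("$url failed its content check");
    return 0;
}

my %sites = ( 'http://example.com/' => qr/Example Domain/ );
check_site($_, $sites{$_}) for keys %sites;
```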

John


------------------------------

Date: Mon, 15 Dec 2003 06:31:37 GMT
From: "News" <fabian_h@sbcglobal.net>
Subject: Mo Money.
Message-Id: <dzcDb.214$uS7.46362738@newssvr30.news.prodigy.com>

Get Money Legally & Quickly!!!

Follow the directions below and in two weeks you'll have up to $20000.00
in your PayPal account. There is a very high rate of participation in
the program because of its low investment and high rate of return. Just
$5.00 to one person!
THAT'S ALL !!!
If you are a skeptic and don't think the program will work, I urge you
to give it a try anyway! It REALLY WORKS! Why do you think so many
people are promoting it ?
LOOK AT IT THIS WAY: If the Program is a total failure for you and you
never get even $1.00 in return, your total loss will be the $5.00! If
you are not yet a paypal member, there is no risk at all!!! If the
Program is only moderately successful for you, your PayPal account will
have several hundred dollars deposited into it within the next few days!
If you actively participate in the Program, you could have up to
$20,000.00 in your PayPal account within two weeks!
Now let me tell you the simple details.
Getting Started!!
If you're not already a user of PayPal, the very first thing you need to
do is go to PayPal and sign up. It takes two minutes and Pay Pal will
deposit $5.00 in your account just for becoming a member. That makes
this program's total cost $0!!! Follow this link to open your PayPal
account:
https://www.paypal.com
Now log into your PayPal account, and send the PayPal account of the
person listed in Position 1 $5.00 PayPal will ask you to select type.
(Select "service" and put "$5.00 donation" for subject.) When person in
Position 1 receives notification of your payment, you can simply copy
this page and change the names in position #1 & #2 & #3 as instructed.
Remember, only the person in Position 1 on the list gets your $5.00
donation. Send them a donation then remove #1 PayPal account from the
list. Move the other two accounts up & add your Paypal account to #3
position. After you have retyped the names in the new order,
IMMEDIATELY send the revised message to as many people as possible.
PROMOTE! PROMOTE! The more you promote the Program, the more you will
receive in donations!! That's all there is to it. You are reading this
message in usenet, and usenet is the best way to spread the word about
the program. Post this message to AT LEAST 200 groups in usenet (there
are over 20,000), after you send the 5 dollars to the person at #1.
This will guarantee you a profit from this program. The more groups you
post it to, the more money you will make!!!
When your name reaches Position 1 (usually in less than a week) it will
be your turn to receive the cash. $5.00 will be sent to your PayPal
account by people just like you who are willing to send a $5.00 donation
and receive up to $20,000 in less than two weeks. Because there are only
(3) names on the list you can anticipate 80% of your cash within two
weeks.
Anytime you find yourself short on cash just take out your $5.00
donation program and send it to 50 prospects. Imagine if you sent it to
100 or even more. Most people spend more than $5 on the lottery every
week with no real hope of ever winning.
IMPORTANT!!! IN ORDER FOR THIS PROGRAM TO WORK, YOU MUST BE HONEST. DO
NOT ADD YOUR EMAIL IMMEDIATELY TO THE #1 POSITION, AND MAKE SURE TO SEND
YOUR 5 DOLLARS TO THE PERSON AT #1. IF EVERYONE WHO TRIED THIS DIDN'T
SEND THE MONEY, THEN NOBODY WOULD MAKE A DIME. 5 DOLLARS IS A VERY SMALL
INVESTMENT, ESPECIALLY WHEN YOU'RE ABOUT TO MAKE MANY MANY MANY
TIMES THAT AMOUNT. IF WE ARE ALL HONEST, THEN WE ALL MAKE LOTS OF
MONEY!!
REMEMBER, you add your email to the #3 spot, and then move everyone else
up 1 (deleting the person who was formerly in the #1 spot, and who you
should have sent your money to). DO NOT add your email to #1 when you
start this program. If everyone did this, then NO ONE would make a cent.
THIS PROGRAM WORKS - JUST TRY IT

POSITION # 1 PAYPAL ACCOUNT: fabian_h@sbcglobal.net
POSITION # 2 PAYPAL ACCOUNT: jackvelen@yahoo.com
POSITION # 3 PAYPAL ACCOUNT: s.paiva@sympatico.ca

Integrity and honesty make this plan work. Participants who actively
promote this program will average between $8000 and $12000 and receive
the donations within two weeks.
This is not a chain letter. You are simply making a donation of $5.00 to
another person. The Program does not violate title 18 section 1302 of
the Postal and lottery code.
Remember -TIME is of the essence. YOU can choose to live
Paycheck-to-Paycheck or live FREE from FINANCIAL BONDAGE. Become a part
of the donation program and help people help people.
This program is about helping each other!
Success is a journey - Not a destination!
Start Your Journey TODAY!!!!






------------------------------

Date: 14 Dec 2003 21:21:13 -0800
From: raj1_iit@yahoo.com (Rajesh)
Subject: Perl equivalent of "On error resume"
Message-Id: <bb6dafde.0312142121.2dab3295@posting.google.com>

Hi,

Can anybody tell me if there is any equivalent of "On error resume" in
Perl? If not, how can u implemnt it?


------------------------------

Date: 15 Dec 2003 05:34:15 GMT
From: Martien Verbruggen <mgjv@tradingpost.com.au>
Subject: Re: Perl equivalent of "On error resume"
Message-Id: <slrnbtqhuq.amq.mgjv@verbruggen.comdyn.com.au>

On 14 Dec 2003 21:21:13 -0800,
	Rajesh <raj1_iit@yahoo.com> wrote:
> Hi,
> 
> Can anybody tell me if there is any equivalent of "On error resume" in
> Perl?

I think you probably want eval BLOCK, which can be used to trap
exceptions.

$ perldoc -f eval
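To sketch that (a minimal example; the risky() sub here is made up,
standing in for whatever operation might fail):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A sub that may die, standing in for the risky operation
sub risky {
    my ($n) = @_;
    die "can't divide by zero\n" if $n == 0;
    return 10 / $n;
}

# eval BLOCK traps the die; $@ holds the error message afterwards
my $result = eval { risky(0) };
if ($@) {
    warn "caught: $@";    # execution resumes here instead of aborting
    $result = 0;          # fall back, roughly like On Error Resume Next
}
print "result is $result\n";    # prints "result is 0"
```

Unlike Basic's statement-level resume, eval gives you block-level
trapping: you decide where to pick up and with what fallback value.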

But there is not enough context in your question to be certain. Where
does this "On error resume" thing come from (Visual Basic or so?),
what does it do, and, more importantly, what do _you_ want to use it
for?

>       If not, how can u implemnt it?

I don't know how 'u' can implement it, since I don't know 'u'.

Martien
-- 
                        | 
Martien Verbruggen      | 
Trading Post Australia  | values of Beta will give rise to dom!
                        | 


------------------------------

Date: Mon, 15 Dec 2003 05:36:09 GMT
From: Uri Guttman <uri@stemsystems.com>
Subject: Re: Perl equivalent of "On error resume"
Message-Id: <x7y8teskvr.fsf@mail.sysarch.com>

>>>>> "R" == Rajesh  <raj1_iit@yahoo.com> writes:

  > Can anybody tell me if there is any equivalent of "On error resume" in
  > Perl? If not, how can u implemnt it?

it would help if you explained what that did in whatever language it
came from.

uri

-- 
Uri Guttman  ------  uri@stemsystems.com  -------- http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs  ----------------------------  http://jobs.perl.org


------------------------------

Date: Sun, 14 Dec 2003 22:20:54 +0000
From: "Derek.Moody" <derek.moody@clara.net>
Subject: Re: Planning for maintenance
Message-Id: <ant14225406cBxcK@half-baked-idea.co.uk>

In article <Xns94517B265C1A1sdn.comcast@216.196.97.136>, Eric J. Roode
<URL:mailto:REMOVEsdnCAPS@comcast.net> wrote:

> "Derek.Moody" <derek.moody@clara.net> wrote in
> news:ant141409bc8BxcK@half-baked-idea.co.uk: 
> 
> > The question:
> > For ease of future maintenance, especially for (s)he who may come
> > after me - * Do I import all subroutines and data so that the call
> > might be 
> >     &dosomething($parameter);
> > * Or do I use full references
> >     &Foo::dosomething($Bar::parameter);
> > Especially as some variable names will be duplicated in alternate
> > packages. 
> > 
> > Opinions solicited:
> > If you were hired to maintain my legacy (say 50 scripts, 8
> modules)
> > which form would you prefer?
> > As you may guess I favour the second version - am I overlooking
> any
> > potential pitfalls?

<snip reminder about prototyping issues, ta>

> Second, imho, by fully-qualifying all of the function calls, you're
> making a lot of extra typing for yourself, and limiting future
> flexibility.  If you choose to move half of the function calls to a
> different module someday, you'll have to edit all of the function
> call invocations to have the new module name, as opposed to simply
> changing a declaration or two at the top of each program.

I'm not really worried about the amount of typing -I- have to do - most of
the early part of this job is search and replace in any case.

The question is really whether or not maintainability is improved by these
explicit package calls.
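For readers following along, the two styles from the thread look
roughly like this (Foo and $parameter are the thread's own
placeholders, not real module names):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder module, as in the thread's example
package Foo;
our $parameter = 21;
sub dosomething { my ($p) = @_; return $p * 2 }

package main;

# Fully-qualified style: no imports needed, and a later grep for
# "Foo::" finds every use site -- but if dosomething() moves to
# another module someday, every call must be edited.
my $result = Foo::dosomething($Foo::parameter);
print "$result\n";    # prints 42

# The imported style would instead have Foo use Exporter and the
# caller write:
#   use Foo qw(dosomething);
#   dosomething($parameter);
# so only the "use" line changes if the function moves.
```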

Cheerio,

-- 
>> derek.moody@clara.net



------------------------------

Date: Mon, 15 Dec 2003 04:07:22 GMT
From: Darin McBride <dmcbride@naboo.to.org.no.spam.for.me>
Subject: Re: Planning for maintenance
Message-Id: <_raDb.705688$6C4.416439@pd7tw1no>

Alan J. Flavell wrote:

> On Sun, 14 Dec 2003, Eric J. Roode wrote:
> 
>> Second, imho, by fully-qualifying all of the function calls, you're
>> making a lot of extra typing for yourself, and limiting future
>> flexibility.  If you choose to move half of the function calls to a
>> different module someday, you'll have to edit all of the function
>> call invocations to have the new module name, as opposed to simply
>> changing a declaration or two at the top of each program.
> 
> Indeed.  On the other hand, if you'd had the bad-fortune to choose a
> function name which later turned out to clash with some other needed
> module...

And thus my personal preference: OO perl :-)  Name clashing goes down
to zero, although if everyone had a function (method?) foo(), it may
get confusing for the programmer...
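A minimal sketch of that point (class names invented for
illustration): both packages define foo(), and the invocant's class
picks the right one, so there is no clash.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Logger;
sub new { return bless {}, shift }
sub foo { return "Logger::foo" }

package Mailer;
sub new { return bless {}, shift }
sub foo { return "Mailer::foo" }

package main;

# Same method name in two classes, no collision: dispatch follows
# the class the object was blessed into.
my $log  = Logger->new;
my $mail = Mailer->new;
print $log->foo,  "\n";   # prints Logger::foo
print $mail->foo, "\n";   # prints Mailer::foo
```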


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc.  For subscription or unsubscription requests, send
the single line:

	subscribe perl-users
or:
	unsubscribe perl-users

to almanac@ruby.oce.orst.edu.  

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V10 Issue 5940
***************************************

