[24748] in Perl-Users-Digest
Perl-Users Digest, Issue: 6903 Volume: 10
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue Aug 24 03:05:57 2004
Date: Tue, 24 Aug 2004 00:05:06 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Tue, 24 Aug 2004 Volume: 10 Number: 6903
Today's topics:
"Perlish Patterns" by Phil Crow <dog@dog.dog>
Re: Clean Quarantine Script <dave@nospam.com>
Re: Confusion of Win32::ODBC under CygWin. Please help. <usa1@llenroc.ude.invalid>
Re: Confusion of Win32::ODBC under CygWin. Please help. <ceo@nospam.on.net>
converting a directory path into a hash <zebee@zip.com.au>
Re: converting a directory path into a hash <thepoet_nospam@arcor.de>
Re: finding directory sizes <zebee@zip.com.au>
Heal the rift <vvosen@cpinternet.com>
Re: How to randomize foreach (Anno Siegel)
Re: How to randomize foreach <mb@uq.net.au.invalid>
Perl Search & Replace Script For Website (Michael)
Re: Placeholder in an sql query (dn_perl@hotmail.com)
xml::parser <getfun@btamail.net.cn>
Re: xml::parser <postmaster@castleamber.com>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Tue, 24 Aug 2004 08:22:02 +0200
From: "Peter Michael" <dog@dog.dog>
Subject: "Perlish Patterns" by Phil Crow
Message-Id: <cgel3e$pfs1@news-1.bank.dresdner.net>
Hi,
I ordered "Perlish Patterns" (Apress) by Phil Crow in advance via
amazon. Now they told me that the book is "no longer available".
Does anybody know what happened to it?
TIA,
Peter
------------------------------
Date: Mon, 23 Aug 2004 22:57:37 -0400
From: Dave <dave@nospam.com>
Subject: Re: Clean Quarantine Script
Message-Id: <412aaeb7$0$21738$61fed72c@news.rcn.com>
Matt wrote:
> The following script will not work to delete files with weird characters such
> as:
>
> coords .scr
> DVDR- MOVIES.txt.scr
> href .exe
> null) .rar
> acct_backup.2004-08-11.tar.gz
> acct_backup.2004-08-12.tar.gz
>
> It won't delete any of those files. Can anyone tell me how to fix that?
Since your post identifies your news client as "Microsoft Outlook
Express," I hope you're not trying to use that script on a Windows
platform, because it contains *nix specific code.
As another reader suggested, use Perl's internal unlink() function. You
also might consider using the File::Spec module if you want a
platform-independent way to handle paths.
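A minimal sketch of that suggestion, assuming you want to delete every plain file in a quarantine directory (the subroutine name and the example path are my own, not Matt's script):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

# Delete every plain file in a directory using Perl's built-in
# unlink(), which has no trouble with spaces or other odd
# characters in file names and is portable across platforms.
sub clean_dir {
    my ($dir) = @_;
    opendir my $dh, $dir or die "Cannot open $dir: $!";
    for my $name ( readdir $dh ) {
        my $path = File::Spec->catfile( $dir, $name );
        next unless -f $path;    # skip subdirectories, '.' and '..'
        unlink $path or warn "Could not delete $path: $!";
    }
    closedir $dh;
}

# e.g. clean_dir('/var/quarantine');    # hypothetical path
```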
Dave
<snip script contents>
------------------------------
Date: 23 Aug 2004 22:25:31 GMT
From: "A. Sinan Unur" <usa1@llenroc.ude.invalid>
Subject: Re: Confusion of Win32::ODBC under CygWin. Please help...
Message-Id: <Xns954EBD57218CEasu1cornelledu@132.236.56.8>
Chris <ceo@nospan.on.net> wrote in
news:TbrWc.5880$Y94.4465@newssvr33.news.prodigy.com:
> MartinM wrote:
>> Hi,
>>
>> I use Perl under Cygwin. I need to use Win32::ODBC. It is located
>> in
>>
>> c:\cygwin\lib\perl5\site_perl\5.8.2\cygwin-thread-multi-64int\Win32\OD
>> BC.pm
>>
>> but this path is not in @INC. There are just links 5.8.5 not 5.8.2
>> in the @INC and (to my surprise), the 5.8.5/cygwin-thread-multi-64int
>> is completely empty.
>>
>> The @INC includes:
>> /usr/lib/perl5/5.8.5/cygwin-thread-multi-64int
>> /usr/lib/perl5/5.8.5
>> /usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int
>> /usr/lib/perl5/site_perl/5.8.5
>> /usr/lib/perl5/site_perl
>> /usr/lib/perl5/vendor_perl/5.8.5/cygwin-thread-multi-64int
>> /usr/lib/perl5/vendor_perl/5.8.5
>> /usr/lib/perl5/vendor_perl
>>
>> So the execution of code with use Win32::ODBC; ends up with
>> Can't locate Win32/ODBC...
>>
>> If I copy content of 5.8.2/ into 5.8.5 it results with error
>>
>> Can't load the
>> /usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int/auto/Win32/OD
>> BC/ODBC.dll for module Win32/ODBC: dlopen: Win32 error 126 at
>> /usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int/Dynaloader.pm
>> line 230
>>
>> Please, can you send a piece of advice to me?
>
> My advice is to not try to mix and marry Cygwin Perl and ActiveState
> Perl
That is not what he is doing at all.
> (assuming that is where you are getting Win32::ODBC from?).
No. He is getting that from the lib directory of a previous version of perl
(from the Cygwin distribution).
You seem to be unaware of the fact that the cygwin setup program includes
the Win32 modules as an extra download that can be selected separately.
> My personal solution to this, since both Perls are actually native
> Win32 images, is to place a symlink in /usr/bin under Cygwin that
> points to ActiveState Perl and use THAT in my Perl scripts under
> Cygwin that require specific Win32 capabilities:
This is kinda nuts. You say not to mix various perls and then you go ahead
and do that.
The solution to the OP's problem is to install (using the Cygwin setup
program) the Cygwin versions of the Win32 modules.
Sinan.
------------------------------
Date: Tue, 24 Aug 2004 01:55:09 GMT
From: ChrisO <ceo@nospam.on.net>
Subject: Re: Confusion of Win32::ODBC under CygWin. Please help...
Message-Id: <1exWc.7006$FV3.6288@newssvr17.news.prodigy.com>
A. Sinan Unur wrote:
> Chris <ceo@nospan.on.net> wrote in
> news:TbrWc.5880$Y94.4465@newssvr33.news.prodigy.com:
>
>
>>MartinM wrote:
>>
>>>Hi,
>>>
>>>I use Perl under Cygwin. I need to use Win32::ODBC. It is located
>>>in
>>>
>>>c:\cygwin\lib\perl5\site_perl\5.8.2\cygwin-thread-multi-64int\Win32\OD
>>>BC.pm
>>>
>>>but this path is not in @INC. There are just links 5.8.5 not 5.8.2
>>>in the @INC and (to my surprise), the 5.8.5/cygwin-thread-multi-64int
>>>is completely empty.
>>>
>>>The @INC includes:
>>> /usr/lib/perl5/5.8.5/cygwin-thread-multi-64int
>>> /usr/lib/perl5/5.8.5
>>> /usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int
>>> /usr/lib/perl5/site_perl/5.8.5
>>> /usr/lib/perl5/site_perl
>>> /usr/lib/perl5/vendor_perl/5.8.5/cygwin-thread-multi-64int
>>> /usr/lib/perl5/vendor_perl/5.8.5
>>> /usr/lib/perl5/vendor_perl
>>>
>>>So the execution of code with use Win32::ODBC; ends up with
>>>Can't locate Win32/ODBC...
>>>
>>>If I copy content of 5.8.2/ into 5.8.5 it results with error
>>>
>>>Can't load the
>>>/usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int/auto/Win32/OD
>>>BC/ODBC.dll for module Win32/ODBC: dlopen: Win32 error 126 at
>>>/usr/lib/perl5/site_perl/5.8.5/cygwin-thread-multi-64int/Dynaloader.pm
>>>line 230
>>>
>>>Please, can you send a piece of advice to me?
>>
>>My advice is to not try to mix and marry Cygwin Perl and ActiveState
>>Perl
>
>
> That is not what he is doing at all.
>
>
>>(assuming that is where you are getting Win32::ODBC from?).
>
>
> No. He is getting that from the lib directory of a previous version of perl
> (from the Cygwin distribution).
>
> You seem to be unaware of the fact that the cygwin setup program includes
> the Win32 modules as an extra download that can be selected separately.
>
Indeed, I was not aware of that, and it certainly clouded my response. I
would be quite surprised, however, to see that the Win32 modules under
Cygwin compare well to the support in AS. I'll have to check it out.
>
>>My personal solution to this, since both Perls are actually native
>>Win32 images, is to place a symlink in /usr/bin under Cygwin that
>>points to ActiveState Perl and use THAT in my Perl scripts under
>>Cygwin that require specific Win32 capabilities:
>
>
> This is kinda nuts. You say not to mix various perls and then you go ahead
> and do that.
>
This is not mixing them. They are *quite* separate using this method
(which you call "nuts".) In fact, the separation is the point. I think
you would find this a sensible solution if you tried it. It works
terrifically well.
> The solution to the OP's problem is to install (using the Cygwin setup
> program) the Cygwin versions of the Win32 modules.
>
My last point remains moot in light of your revealing the presence of
the Win32 mods under Cygwin. I'll have to check these out before
commenting further.
-ceo
------------------------------
Date: Tue, 24 Aug 2004 03:13:50 GMT
From: Zebee Johnstone <zebee@zip.com.au>
Subject: converting a directory path into a hash
Message-Id: <slrncilc93.4l4.zebee@zeus.zipworld.com.au>
I have a unix directory path, say /home/user/mail
Not knowing in advance how long it will be (that is, how many elements it
has), how can I convert it into a chain of nested hashes:
$ref->{'home'}->{'user'}->{'mail'}
zebee
--
Zebee Johnstone (zebee@zip.com.au), proud holder of
aus.motorcycles Poser Permit #1.
"Motorcycles are like peanuts... who can stop at just one?"
------------------------------
Date: Tue, 24 Aug 2004 08:43:56 +0200
From: Christian Winter <thepoet_nospam@arcor.de>
Subject: Re: converting a directory path into a hash
Message-Id: <412ae3ac$0$6639$9b4e6d93@newsread4.arcor-online.net>
Zebee Johnstone schrieb:
> I have a unix directory path, say /home/user/mail
>
> Not knowing in advance how long it will be (that is, how many elements it
> has), how can I convert it into a chain of nested hashes:
> $ref->{'home'}->{'user'}->{'mail'}
You could use eval() for that, like:
----------------------------------------------------
#!perl
use strict;
use warnings;
use Data::Dumper;
sub hashify {
my $var;
my $p = shift;
$p =~ s#^/##; # remove leading slash
$p =~ s#/$##; # remove trailing slash
$p =~ s|/|"}->{"|g; # build hashref syntax
eval '$var->{"'.$p.'"}="somevalue"'; # ...and eval into $var
return $var; # which is returned
}
my $retval = hashify("/home/user/mail");
print Data::Dumper->Dump( [$retval] );
print $retval->{'home'}->{'user'}->{'mail'},$/;
------------------------------------------------------
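For completeness, the same result can be had without string eval: split the path and autovivify one hash level at a time through an intermediate reference (a sketch; the hard-coded "somevalue" mirrors the version above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub hashify {
    my ($path) = @_;
    # split drops leading/trailing empty fields once we grep them out
    my @parts = grep { length } split m{/}, $path;
    my $leaf  = pop @parts;
    my $ref   = {};
    my $node  = $ref;
    for my $part (@parts) {      # walk down, creating levels as needed
        $node->{$part} ||= {};
        $node = $node->{$part};
    }
    $node->{$leaf} = "somevalue";
    return $ref;
}

my $retval = hashify("/home/user/mail");
print $retval->{home}{user}{mail}, $/;
```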
HTH
-Christian
------------------------------
Date: Tue, 24 Aug 2004 01:52:18 GMT
From: Zebee Johnstone <zebee@zip.com.au>
Subject: Re: finding directory sizes
Message-Id: <slrncil7g7.jv.zebee@zeus.zipworld.com.au>
In comp.lang.perl.misc on 23 Aug 2004 04:46:12 -0700
Jim Keenan <jkeen_via_google@yahoo.com> wrote:
>
> Unless you can demonstrate through benchmarking that this is a faster
> approach than another such as using 'stat', I don't see why you need
> to open a filehandle connection to read a file when you are simply
> interested in the file's name and size.
I'm not running du on each file, only on directories.
If I use File::Find, I have to go through every single file, then later
work out how to decide which directories to keep together and which to
split.
Using du on directories I can go:
Start at root.
Check all directories one level below root and get their size.
If one of them is too big to fit on a CD, then go down one level and
do it again. Recurse if necessary, though it usually isn't.
This gives me the smallest number of directories to then fit on CD.
It won't be the most efficient use of CD space, but then the efficient
use of human time to find things and get them back is more important
than a few meg here and there.
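The recursion just described might be sketched like this (the CD capacity, the reliance on an external `du -sk`, and the subroutine name are all my assumptions):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $CD_KB = 700 * 1024;    # assumed CD capacity, in KB

# Return a list of directories that each fit on a CD, starting one
# level below $root and descending only into directories that are
# too big to keep whole.
sub dirs_for_cds {
    my ($root) = @_;
    my @keep;
    for my $dir ( grep { -d } glob "$root/*" ) {
        my ($kb) = split ' ', `du -sk "$dir"`;    # first field is size in KB
        if ( $kb <= $CD_KB ) {
            push @keep, $dir;                     # fits: keep it together
        }
        else {
            push @keep, dirs_for_cds($dir);       # too big: split it up
        }
    }
    return @keep;
}
```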
If I use File::Find to look at every single file, then I have to do some
kind of later munging to work out that directory split so as to keep as
much as possible of the directories below root together.
So root might have /web /home /other and /web might have 15 sites all
smaller than a CD (some quite small, some quite large), plus there's
/web/web2 which has a similar mix of sites below it, but /web/web2
itself is larger than a CD. Meanwhile /home has at least one
directory below it that is too big to fit on a CD, so it has to be
split, and the directories below do too.
But I don't know in advance which will have to be, and which won't. If
/web/web2/website1 is big enough to take up a CD on its own, I don't
want to split it.
Yes, if I have to recurse, then I have to re-run du on that directory,
so if there's a reasonable way to record the info for each file only
once and then do the splitting, that might be better.
Zebee
------------------------------
Date: Mon, 23 Aug 2004 17:41:21 -0500
From: G_cowboy_is_that_a_Gnu_Hurd? <vvosen@cpinternet.com>
Subject: Heal the rift
Message-Id: <10ikskh7l2rfue2@corp.supernews.com>
edgrsprj wrote:
>bottom posted<
> August 16, 2004 Posted to sci.geo.earthquakes and other Newsgroups
>
> Newsgroup Readers: I recommend that you send copies of this report to any
> government officials and researchers that you know who are interested in
> earthquake forecasting science.
>
> The information in this report represents expressions of personal opinion.
>
> This is an introductory report regarding a Perl language earthquake
> forecasting computer program which is now generating good results. I
> personally believe that the program has the potential to enable us to
> accurately forecast the approach of a reasonable percentage of the
> destructive earthquakes which are occurring around the world. This report
> will explain how the program works.
>
> That program is presently already fully operational and in use. But
> for it to be effectively used by governments etc. around the world it will
> probably be necessary for some organization to "fine tune" the program
> code and the output data format and take charge of circulating the program
> and instructing government personnel regarding how to use it.
>
> Note: I don't have time to get into a lengthy discussion regarding this
> subject matter. If you post a response which asks questions which require
> lengthy answers, which are too difficult to answer, or if you ask too many
> questions I am not going to be able to respond to your note. Also, it
> might take me a few days to respond to any questions.
>
> This subject matter involves earthquakes, geophysics, celestial mechanics,
> and Perl language computer programming. That is why this report has been
> posted to a number of Newsgroups. If you wish to respond in just one of
> them I would recommend sci.geo.earthquakes.
>
> Present Theories:
>
> *** Earthquakes occur when fault zones collect enough strain energy as
> the result of tectonic plate movements etc. that their rock layers
> fracture.
>
> *** The actual times when many earthquakes occur and when certain types
> of fault zone process related electromagnetic energy field fluctuations
> (EM signals) are being generated are controlled by forces which are
> directly and/or indirectly related to the positions of the sun and the
> moon in the
> sky. The sun and moon gravities are controlling those forces. They add
> enough additional strain energy to the fault zone to trigger the
> earthquake.
>
> *** A given fault zone will tend to fracture when those forces bend,
> stretch, and compress its rock layers in certain directions. Years,
> decades, and even centuries may pass before the fault zone has stored
> enough energy for another earthquake to occur and it is once again bent
> etc. in
> such a manner that one is triggered. But this process is sufficiently
> repetitive and reliable that it can be used to forecast earthquakes.
>
> A computer program can be used to compare the times when those EM signals
> are generated with the times when earthquakes occurred in the past. And
> when matches are found between the signals and past earthquakes then that
> can serve as an indicator that a new earthquake is about to occur in the
> general area where the past one occurred.
>
> With the program I am presently using:
>
> About 500 EM signals are detected each year. They last from 0.25 to 20
> seconds. That short duration time can make it a little difficult to
> detect
> them. But it makes them ideal for use with this type of computer program.
>
> For each data point, about 3 months worth of signals, usually between 100
> and 200, are compared one by one with the roughly 25,000 earthquakes I
> have
> in my database. They occurred during the years 1990 through the present.
>
> To present a somewhat simplified picture, when signal # 1 is compared with
> the first earthquake in the database the earthquake is given a rating on a
> 0
> to 100 scale for how well the signal matched the earthquake. Then signal
> # 1 is compared with earthquake # 2.
>
> When all of the earthquakes have been checked the same steps are taken for
> signal # 2. And when those checks are finished the results of the first
> series of tests are added to the results of the second series of tests.
> This then continues for each of the signals.
>
> When all of the comparisons are done the result is a list of 25,000
> earthquakes which has a single probability rating for each earthquake
> which shows how it matched all of the signals combined.
>
> Some of those 150 or so signals will have preferentially matched one
> earthquake. And as a result it will have a higher rating. Some will have
> preferentially matched another etc.
>
> The list is adjusted so that the earthquake which was the best match for
> all of the signals (the highest results rating) is multiplied by some
> factor so
> that it has a final rating of 100. And all of the other earthquakes are
> then multiplied by that factor. They then have final ratings which range
> from 0 to 100.
>
> The next steps provide the information regarding when and where an
> earthquake is going to occur.
>
> A complete run like that takes about 30 minutes on a 700 MHz computer.
> And that series of tests is repeated about every 3 days.
>
> With each new test date the signals for the oldest 3 day period are
> removed
> from the test list. And signals detected during the latest 3 day period
> are
> added to it. So the list of signals gradually changes with time.
>
> If an earthquake at say 122W and 36N has a final rating of:
>
> 50 on day 1
> 60 on day 4
> 80 on day 7
> 93 on day 10
> 97 on day 13
> 99 on day 16
>
> That gradual increase in final ratings would then indicate that another
> earthquake could be about to occur near 122W and 36N.
>
> If and when it did occur, say on day 17, the final ratings for the 122W
> and 36N earthquake in the list would usually begin to go down as the
> warning signals which pointed to it were gradually removed from the test
> signal list.
>
> Because some 25,000 earthquakes which occurred around the world are given
> ratings with each series of tests, and because some of the signals during
> that 90 day test signal period will point to one earthquake and some will
> point to another, the final results list which is generated every 3 days
> shows where earthquakes around the world might be about to occur.
>
> An example for a single earthquake:
>
> On December 22, 2003 the following destructive earthquake occurred in
> California, U.S.A.
>
> 2003/12/22 19:15:56 35.71N 121.10W 7.6 6.5 CENTRAL CALIFORNIA
> (NEIS data)
>
> My data processing computer program was not fully operational until April
> of
> 2004. And so I could not use it to watch for that earthquake at the time
> that it occurred. However I recently used the program to test the signals
> which were detected back around then to see how well it might have done if
> it had existed last December. And it generated the following data.
>
> These were the final ratings for the following earthquake which is one of
> the ones in my 25,000 earthquake data table. It was one of the
> geographically closest ones to that December 22 earthquake.
>
> 1989/10/18 00:04:15 37.03N 121.88W 19.0 7.1 SAN FRANCISCO, CALIFORNIA
>
> The dates in this list are the end dates for each 3 month test signal
> period. The Pa: numbers represent final ratings for that San Francisco
> earthquake when it was compared with all of the 25,000 earthquakes in the
> database. The Pd: numbers represent final rating for that earthquake when
> it was compared with only those earthquakes in the database which produced
> fatalities.
>
> Test Date Pa: Pd:
>
> 03/10/30 71: 75
> 03/11/25 87: 89:
> 03/12/07 92: 94:
> 03/12/15 91: 96:
> 03/12/22 93: 97:
> 03/12/25 90: 95:
> 03/12/30 91: 94:
> 04/01/15 91: 91:
> 04/02/19 74: 81:
> 04/03/20 65: 69:
> 04/04/13 72: 77:
> 04/05/18 66: 67:
>
> Both the Pa: and Pd: final results numbers peaked beginning around
> December
> 7, continuing through December 30. And if that computer program had been
> operational back in December, and had government officials here in the
> U.S. and earthquake forecasters seen those numbers reach a peak for that
> San Francisco earthquake which occurred at 37N and 122W, then they might
> have been able to check that general area for other signs of an
> approaching earthquake such as large, fresh cracks in building
> foundations, radon gas
> releases, abrupt changes in ground water table levels etc. And perhaps it
> would have been possible to determine that the December 22 earthquake
> which occurred at 36N and 121W was getting ready to occur.
>
> PRESENT STATUS OF THIS FORECASTING PROGRAM
>
> This is not a futuristic computer program which will become available 10
> years from now. It is already fully operational. And several other
> people
> that I know of have it running on their computers at this time. Signal
> data
> are being collected each day. And for the past few months I have been
> storing final results data at one of my Web sites for examination and use
> by
> government officials etc. around the world. Anyone who can run a Perl
> language program on his or her computer can confirm the results himself or
> herself.
>
> The computer program and its enormous database files were also until
> recently available as free downloads from my Web site. However, that site
> allows just a limited number of visits each month (bandwidth allowance).
> And a few weeks ago it had become so popular that the monthly bandwidth
> was
> exceeded and the site temporarily shut down. When that happened I removed
> most of the large download files etc. from the site so that it would not
> crash immediately after it started running again. Additionally, I am in
> the process of producing a new final results Web page which has a format
> which makes it easier to interpret the data.
>
> I recently received the formal U.S. copyright ownership papers for one
> version of that Perl language data processing computer program. And, if I
> do not receive more requests than I can respond to I can send people
> e-mail copies of a version which is presently several months old but which
> produces
> the same type of data as the latest version. The final results data are
> simply formatted a little differently with the latest version. To run the
> program you need to first install a Perl interpreter on your computer.
> That takes just a few minutes. The following Web page contains
> instructions for how to obtain a free copy of Perl:
>
> http://www.freewebz.com/eq-forecasting/Perl.html
>
> The following file contains a copy of my earthquake forecasting Perl
> program. However that file does not contain the very large support files
> which are presently needed to run it.
>
> http://www.freewebz.com/eq-forecasting/311.zip
>
> That Perl program was designed to run with Windows XP. It will also run
> with Windows 98. To get it to run with other operating systems such as
> UNIX
> it is probably necessary to change some of its internal statements. I
> myself would not know what changes needed to be made. However, this is a
> text program. And such changes should be easy to make with any text
> editor if you are familiar with Perl language computer programming.
>
> The computer program works quite well with the EM signal data that I
> myself
> am collecting. It remains to be seen how well it will work with other
> types of data including EM signal data being collected by other people
> around the world.
>
> The next planned step is to create a new version of the final results data
> Web page and store it at my Web site for demonstration purposes. That
> might
> take a few days or perhaps as much as a few weeks. And at that time a
> program update note will probably be posted to the Newsgroups.
>
> Longer range, additional data need to be generated and evaluated. The
> Perl program should be rewritten using a more conventional programming
> format. It presently simply consists of a group of subroutines which have
> been
> linked together. Perhaps a different programming language should be used.
>
> The program contains scores of software settings which can be optimized
> for
> better performance. More sophisticated data processing routines should be
> added. And geophysicists should try to determine why the program actually
> works. It relies on an unusual geophysical theory model for how
> earthquakes
> are being triggered. And although the computer program produces good data
> indicating that the theory model has some value, the actual earthquake
> triggering mechanism is not yet understood.
>
> This is not a funded research effort. It is not being run in connection
> with any formal research group. And I myself have only limited amounts of
> time to devote to the effort. Progress is therefore not as rapid as I
> would
> like. But it is being slowly made.
>
> This first report was intended to simply let people know that the
> earthquake forecasting computer program which was described here now
> exists and to provide a preliminary explanation of how it works.
>
> COMMENT: THE PERL COMPUTER LANGUAGE
>
> Without being an expert on this subject I believe that the Perl language
> was probably created in part so that people who are not professional
> computer programmers could have a versatile language for use which was
> inexpensive (it is actually free), widely available, well supported,
> powerful, and not
> too difficult to learn. I would say that if that was the case then this
> application demonstrates that this goal was achieved. And the development
> of an effective, inexpensive, and easily used computer program for
> forecasting earthquakes and doing advanced research on them would
> certainly have to be considered an important application for any computer
> language.
We might as well try figuring out how to heal the Pacific mid-continental
ridge if we're building all this infrastructure that is earthquake
resistant and trying to figure out how to predict earthquakes. Also, the
social aspects of earthquake prediction haven't been thought through. What
happens when a million people are evacuated? Looters, yep, that's right,
LOOTERS. Those people too poor to go traipsing across the country who see
the opportunity to sell people goods that they stole from them. And what's
more, predicting earthquakes is nigh impossible. You can find them, but
you're going to need to set up earthquake stations across the whole western
coast and along the entire Pacific rim. Furthermore, you're not just going
to be placing seismometers along the faults; you're going to have to drill
subsurface stations monitoring seismic indicators (pressure, temperature,
seismic information, stress in other words), which are all going to have to
be connected via a wireless intranet. Then all these substations are going
to have to be replaced every time they get squished in order to get a
reasonable forecast of the tectonic situation. Until we get tectonic radar
it's not ever going to be worth it, nor very 'big picture'. Practically
speaking, we'd be better off abandoning it. We could then turn the entire
west coast into a big solar farm/state-park. With low investment in
infrastructure and fewer different things to fix in case of an earthquake,
Californians might stop building their houses on sand.
-gc
------------------------------
Date: 24 Aug 2004 00:04:56 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: How to randomize foreach
Message-Id: <cge0n8$b70$1@mamenchi.zrz.TU-Berlin.DE>
Abigail <abigail@abigail.nl> wrote in comp.lang.perl.misc:
> Anno Siegel (anno4000@lublin.zrz.tu-berlin.de) wrote on MMMMX September
> MCMXCIII in <URL:news:cgd5et$o5f$1@mamenchi.zrz.TU-Berlin.DE>:
> || Gunnar Hjalmarsson <noreply@gunnar.cc> wrote in comp.lang.perl.misc:
> || > Anno Siegel wrote:
> || > > One time-linear shuffler swaps list elements instead of extracting
> || > > them. That saves the time splice needs to shift the array tight:
> || > >
> || > > sub swapping {
> || > > for ( reverse 1 .. $#_ ) {
> || > > my $r = rand( 1 + $_);
> || > > @_[ $r, $_] = @_[ $_, $r];
> || > > }
> || > > @_;
> || > > }
> || > >
> || > > Benchmarking this against
> || > >
> || > > sub splicing {
> || > > map splice( @_, rand $_, 1), reverse 1 .. @_;
> || > > }
> || > >
> || > > shows splicing() about twice as fast for arrays shorter than 1000.
> || > > The break-even point is with lists of length 12_000; from then on
> || > > swapping() wins out by an increasing margin. At 30_000, swapping is
> || > > twice as fast as splicing. On my machine.
> || >
> || > The benchmark I posted showed that the execution time of
> || > List::Util::shuffle() is much faster than that. Is the explanation,
> || > then, that I was actually comparing apples and oranges, since
> || > List::Util is a compiled module?
> ||
> || It is. It also contains a pure Perl implementation, but by default it
> || loads an XS version. That's why I wanted to add a benchmark against
> || a Perl implementation of the in-place shuffle.
>
>
> So, it's fair to compare a splice who is doing its work in C with a
> shuffle written in pure Perl?
I didn't want to imply any unfairness, just to add a data point.
> I'd say comparing the XS shuffle with the C splice is more fair.
Replacing n calls to a C-level function by a single one is fair?
We know that the overhead swamps out a lot of what may be happening
low level, and these benchmarks prove it again. So setting one
swap operation against one call to splice seems fair enough from
that point of view.
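For anyone who wants to reproduce the comparison, a Benchmark harness along these lines should do (the fixed iteration count and list length here are arbitrary choices; pass a negative count to cmpthese to time in CPU seconds instead):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use List::Util qw(shuffle);

# In-place swap shuffle from upthread.
sub swapping {
    for ( reverse 1 .. $#_ ) {
        my $r = rand( 1 + $_ );
        @_[ $r, $_ ] = @_[ $_, $r ];
    }
    @_;
}

# Splice-based shuffle from upthread.
sub splicing {
    map splice( @_, rand $_, 1 ), reverse 1 .. @_;
}

my @data = 1 .. 100;
cmpthese( 10_000, {
    swapping => sub { my @a = @data; () = swapping(@a) },
    splicing => sub { my @a = @data; () = splicing(@a) },
    xs       => sub { my @a = @data; () = shuffle(@a) },
});
```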
Anno
------------------------------
Date: Tue, 24 Aug 2004 10:33:36 +1000
From: Matthew Braid <mb@uq.net.au.invalid>
Subject: Re: How to randomize foreach
Message-Id: <cge2d0$5hj$1@bunyip.cc.uq.edu.au>
Abigail wrote:
> Matthew Braid (mb@uq.net.au.invalid) wrote on MMMMX September MCMXCIII in
> <URL:news:cgbd5q$j40$1@bunyip.cc.uq.edu.au>:
> ''
> '' Odd. I just tried the same thing (well, I added a third option - sort
> '' {int(rand(3)) - 1} @arr - to double check:
>
> I've seen variations of the third option before, being presented as a
> way to shuffle a list. But I've yet to see any proof that this gives a
> fair shuffle (fair being every possible permutation of the list has the
> same chance of being produced, given a truely random 'rand()' function).
>
>
> Abigail
I only included the rand() method out of curiosity (I knew it was gonna be
slow, I just put it there as a comparison). I'd never actually _use_ that
method in my code :)
MB
------------------------------
Date: 23 Aug 2004 18:30:10 -0700
From: Michael4172@hotmail.com (Michael)
Subject: Perl Search & Replace Script For Website
Message-Id: <9dcd9df6.0408231730.f586eb9@posting.google.com>
Anyone know of a script where, whenever a page is published, it's
scanned, and if a keyword I define in another area is spotted, it
automatically becomes a link? Then any other occurrences of it
afterwards in the same article stay regular text?
If that didn't make sense, let me show an example.
"Johnny (<-- that would be a link since I predefined it, in a script
or what not; upon clicking it would go to his profile) did something
good. Johnny Estrada (<-- this would be plain text since it's the 2nd
occurrence in the same article) did something else."
Hope that gives you an idea of what I'm trying to do. Any help would be
appreciated :)
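One way to get that "first occurrence only" behaviour is simply to substitute without the /g modifier, so s/// stops after one match. A sketch, where the keyword table and URL are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical keyword-to-URL table; in practice this would come
# from wherever the keywords are "defined in another area".
my %link_for = ( 'Johnny' => 'profiles/johnny.html' );

# Link only the FIRST occurrence of each keyword in the text;
# without /g, the substitution stops after one match.
sub linkify {
    my ($text) = @_;
    for my $kw ( keys %link_for ) {
        $text =~ s{\b\Q$kw\E\b}{<a href="$link_for{$kw}">$kw</a>};
    }
    return $text;
}

print linkify("Johnny did something good. Johnny did something else.\n");
```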
------------------------------
Date: 23 Aug 2004 22:19:44 -0700
From: dn_perl@hotmail.com (dn_perl@hotmail.com)
Subject: Re: Placeholder in an sql query
Message-Id: <97314b5b.0408232119.24c6eab@posting.google.com>
(Heinrich Mislik) wrote -
>
> If the type of a field is CHAR(n), placeholders sometimes do not work,
> because the driver uses VARCHAR for placeholders by default. The reason
> is that comparing CHAR(n) to CHAR(m) implicitly pads with blanks,
> whereas comparing CHAR(n) with VARCHAR doesn't pad. And constants are
> considered CHAR.
> As a workaround try padding yourself:
> WHERE part = RPAD(?,10)
>------------------
Thanks! Your suggestion worked. But a problem remains still.
Say the table is : students(name CHAR(8))
Entries in Students are : ' ' (8 blanks),
'bob ' and 'dave ' .
===> case A) If the query is :
    my $st_name = "dave" ;    # non-blank name
    my $dstmt = $dbh->prepare("select count(*) from students
                               where name = RPAD(?,8)") ;
    $dstmt->execute($st_name) or die "sql call failed" ;
    my $num_entries = $dstmt->fetchrow() ;
    $dstmt->finish ;
Things work well for case A.
===> but case B) If the query is :
    my $st_name = " " ;       # one blank in name
    my $dstmt = $dbh->prepare("select count(*) from students
                               where name = RPAD(?,8)") ;
    $dstmt->execute($st_name) or die "sql call failed" ;
    my $num_entries = $dstmt->fetchrow() ;
    $dstmt->finish ;
I get $num_entries = 0. WHY? If I don't use a placeholder,
I get $num_entries = 1, which is proper. It looks to me that
if query_field = ' ' (one blank), I am running into problems.
In fact, instead of using RPAD, I would prefer using trim(),
since with RPAD I must find out the lengths of all fields.
But :
    my $dstmt = $dbh->prepare("select count(*) from students
                               where trim(name) = trim(?)") ;
doesn't work either when $st_name is ' ' (one blank).
Please help.
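One more client-side option, offered only as a sketch: pad the value in Perl before binding it, so the VARCHAR the driver sends already matches the blank-padded CHAR(8) column (the width 8 comes from the students table above; whether this fixes the one-blank case will depend on your driver):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Left-justify $value in a field of $width characters, blank-padded,
# mirroring what a CHAR(n) column stores.
sub pad_char {
    my ( $value, $width ) = @_;
    return sprintf "%-*s", $width, $value;
}

# Usage with the statement from the post ($dbh and $st_name as above):
#   my $dstmt = $dbh->prepare(
#       "select count(*) from students where name = ?" );
#   $dstmt->execute( pad_char( $st_name, 8 ) );
```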
-------------
------------------------------
Date: Tue, 24 Aug 2004 04:47:51 +0000 (UTC)
From: oLo <getfun@btamail.net.cn>
Subject: xml::parser
Message-Id: <Xns954F8226C3C3Egetfun@202.108.36.140>
How do I get the C library (expat) that XML::Parser needs?
------------------------------
Date: 24 Aug 2004 05:36:16 GMT
From: John Bokma <postmaster@castleamber.com>
Subject: Re: xml::parser
Message-Id: <Xns954F6263D5F2castleamber@130.133.1.4>
oLo <getfun@btamail.net.cn> wrote in news:Xns954F8226C3C3Egetfun@
202.108.36.140:
> how to get c lib about XML::Parser ?
See http://www.catb.org/~esr/faqs/smart-questions.html
--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V10 Issue 6903
***************************************