
Perl-Users Digest, Issue: 27 Volume: 9

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue Jul 6 21:47:13 1999

Date: Tue, 6 Jul 1999 18:36:48 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Tue, 6 Jul 1999     Volume: 9 Number: 27

Today's topics:
        faster grep? <jjens@primenet.com>
    Re: faster grep? <jukka.juslin@cern.ch>
    Re: faster grep? (elephant)
    Re: faster grep? <jukka.juslin@cern.ch>
    Re: faster grep? (elephant)
    Re: faster grep? <gellyfish@gellyfish.com>
    Re: faster grep? <jukka.juslin@cern.ch>
    Re: faster grep? <jhi@alpha.hut.fi>
    Re: faster grep? (elephant)
    Re: faster grep? <jjens@primenet.com>
        Fetching HTTP documents <office@algorithm.hypermart.net>
    Re: Fetching HTTP documents <jukka.juslin@cern.ch>
    Re: Fetching HTTP documents (Eric Bohlman)
    Re: Fetching HTTP documents (Abigail)
    Re: Fetching HTTP documents <ptimmins@itd.sterling.com>
    Re: Fetching HTTP documents <john@add.com>
        file properties trouble <office@asc.nl>
    Re: file properties trouble (Iain Chalmers)
        File uploading script! bababozorg@aol.com
    Re: File uploading script! (Abigail)
    Re: File uploading script! <gellyfish@gellyfish.com>
        Find function param question <pvorishatesspam@earthlink.net>
        finding relative path between two files <nmorison@ozemail.com.au>
    Re: finding relative path between two files (Abigail)
    Re: finding relative path between two files <nmorison@ozemail.com.au>
    Re: finding relative path between two files (elephant)
    Re: finding relative path between two files <nmorison@ozemail.com.au>
    Re: finding relative path between two files <nmorison@ozemail.com.au>
        Digest Administrivia (Last modified: 1 Jul 99) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: 5 Jul 1999 16:48:42 GMT
From: John Jensen <jjens@primenet.com>
Subject: faster grep?
Message-Id: <7lqnla$lq0$1@nnrp02.primenet.com>

Hello,

As my first perl program, I'm trying to do some word-frequency analysis on
some text files.  My first approach was to do a while (<file>) and
index($_,"foo").  This is a little slow, and I don't actually need the fine
control of the search process from perl.  All I need is a word count for a
file.  So, I wonder what the best approach would be ... perhaps to fire a
command line grep, or ... ?

Thanks,

John



------------------------------

Date: Mon, 05 Jul 1999 19:59:49 +0200
From: Jukka Juslin <jukka.juslin@cern.ch>
Subject: Re: faster grep?
Message-Id: <3780F295.3EBC7D16@cern.ch>

John Jensen wrote:
> 
> Hello,
> 
> As my first perl program, I'm trying to do some word-frequency analysis on
> some text files.  My first approach was to do a while (<file>) and
> index($_,"foo").  This is a little slow, and I don't actually need the fine
> control of the search process from perl.  All I need is a word count for a
> file.  So, I wonder what the best approach would be ... perhaps to fire a
> command line grep, or ... ?

Why not use wc command?

++Jukka


------------------------------

Date: Tue, 6 Jul 1999 04:32:34 +1000
From: e-lephant@b-igpond.com (elephant)
Subject: Re: faster grep?
Message-Id: <MPG.11eba54e46b826ae989af5@news-server>

Jukka Juslin writes ..
>Why not use wc command?

I think you'll find that 'wc' makes a count of all whitespace delimited 
words in a file .. kind of useless for search indexing a given word .. if 
any shell command were to be used then it would be 'grep' with the '-c' 
option (which would be a good option for you btw John .. grep is very 
quick)
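
something like this might do it from inside Perl (an untested sketch; 'foo' 
stands in for the word being counted, and it assumes a Unix-style grep on 
the PATH .. note that grep -c counts matching *lines*, not total occurrences):

```perl
use strict;

# Count lines matching a word by shelling out to grep -c.
# Naive quoting -- fine for simple words and filenames.
sub grep_count {
    my ($word, $file) = @_;
    chomp(my $count = `grep -c '$word' '$file'`);
    return $count;
}

print grep_count('foo', $_), "\t$_\n" for @ARGV;
```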

-- 
 jason - remove all hyphens for email reply -


------------------------------

Date: Mon, 05 Jul 1999 21:10:47 +0200
From: Jukka Juslin <jukka.juslin@cern.ch>
Subject: Re: faster grep?
Message-Id: <37810337.4CCB1341@cern.ch>

elephant wrote:
> 
> Jukka Juslin writes ..
> >Why not use wc command?
> 
> I think you'll find that 'wc' makes a count of all whitespace delimited
> words in a file .. kind of useless for search indexing a given word .. if
> any shell command were to be used then it would be 'grep' with the '-c'
> option (which would be a good option for you btw John .. grep is very
> quick)

egrep is faster still.

++Jukka


------------------------------

Date: Tue, 6 Jul 1999 06:29:41 +1000
From: e-lephant@b-igpond.com (elephant)
Subject: Re: faster grep?
Message-Id: <MPG.11ebc0c15f3d0a60989afa@news-server>

Jukka Juslin writes ..
>egrep is faster still.

really ? .. I would have expected that for the extended regexps .. but 
for the standard ones I would have expected grep to be faster

-- 
 jason - remove all hyphens for email reply -


------------------------------

Date: 5 Jul 1999 21:32:07 -0000
From: Jonathan Stowe <gellyfish@gellyfish.com>
Subject: Re: faster grep?
Message-Id: <7lr88n$5eh$1@gellyfish.btinternet.com>

On 5 Jul 1999 16:48:42 GMT John Jensen wrote:
> Hello,
> 
> As my first perl program, I'm trying to do some word-frequency analysis on
> some text files.  My first approach was to do a while (<file>) and
> index($_,"foo").  This is a little slow, and I don't actually need the fine
> control of the search process from perl.  All I need is a word count for a
> file.  So, I wonder what the best approach would be ... perhaps to fire a
> command line grep, or ... ?
> 

Well in Perl of course you could do something like this (reads from STDIN):

#!/usr/bin/perl -w

use strict;

my %words;

undef $/;

$words{$_}++ foreach (split ' ',<>);

print "$_->[0]\t:\t$_->[1]\n" foreach ( sort { $b->[1] <=> $a->[1] }
                                        map { [$_, $words{$_} ] }
                                        keys %words);

(I'm sure one of the golfing fraternity will remove half the code there
 but you get the gist ... )

Of course it is pretty dumb about what it considers to be a word but
that is left to the reader to improve ...

/J\
-- 
Jonathan Stowe <jns@gellyfish.com>
Some of your questions answered:
<URL:http://www.btinternet.com/~gellyfish/resources/wwwfaq.htm>
Hastings: <URL:http://www.newhoo.com/Regional/UK/England/East_Sussex/Hastings>


------------------------------

Date: Tue, 06 Jul 1999 12:22:36 +0200
From: Jukka Juslin <jukka.juslin@cern.ch>
Subject: Re: faster grep?
Message-Id: <3781D8EC.5307DA4B@cern.ch>

elephant wrote:
> 
> Jukka Juslin writes ..
> >egrep is faster still.
> 
> really ? .. I would have expected that for the extended regexps .. but
> for the standard ones I would have expected grep to be faster

juslin@pcephc23 ~ > time egrep patch *
0.140u 0.060s 0:00.44 

juslin@pcephc23 ~ > time grep patch *
0.130u 0.080s 0:02.68 

Not too hard to test it out yourself.

BR,
Jukka


------------------------------

Date: 06 Jul 1999 14:49:12 +0300
From: Jarkko Hietaniemi <jhi@alpha.hut.fi>
Subject: Re: faster grep?
Message-Id: <oeebtdqj8w7.fsf@alpha.hut.fi>


Timings for Digital UNIX grep and egrep, GNU grep (ggrep) and egrep (gegrep),
and the University of Arizona agrep, run in a Perl source directory:

grep
r 0,56s u 0,30s s 0,10s 71% "grep hunk *.[hc] > /dev/null"
r 0,46s u 0,30s s 0,10s 86% "grep hunk *.[hc] > /dev/null"
r 0,52s u 0,30s s 0,10s 76% "grep hunk *.[hc] > /dev/null"
r 0,48s u 0,30s s 0,12s 86% "grep hunk *.[hc] > /dev/null"
r 0,49s u 0,30s s 0,10s 81% "grep hunk *.[hc] > /dev/null"
egrep
r 0,46s u 0,28s s 0,08s 80% "egrep hunk *.[hc] > /dev/null"
r 0,54s u 0,32s s 0,12s 80% "egrep hunk *.[hc] > /dev/null"
r 0,47s u 0,28s s 0,10s 81% "egrep hunk *.[hc] > /dev/null"
r 0,45s u 0,28s s 0,10s 85% "egrep hunk *.[hc] > /dev/null"
r 0,56s u 0,30s s 0,10s 71% "egrep hunk *.[hc] > /dev/null"
ggrep
r 0,22s u 0,07s s 0,08s 68% "ggrep hunk *.[hc] > /dev/null"
r 0,18s u 0,05s s 0,08s 72% "ggrep hunk *.[hc] > /dev/null"
r 0,20s u 0,05s s 0,08s 66% "ggrep hunk *.[hc] > /dev/null"
r 0,22s u 0,07s s 0,08s 67% "ggrep hunk *.[hc] > /dev/null"
r 0,28s u 0,07s s 0,08s 53% "ggrep hunk *.[hc] > /dev/null"
gegrep
r 0,21s u 0,05s s 0,08s 64% "gegrep hunk *.[hc] > /dev/null"
r 0,21s u 0,05s s 0,10s 72% "gegrep hunk *.[hc] > /dev/null"
r 0,26s u 0,05s s 0,08s 51% "gegrep hunk *.[hc] > /dev/null"
r 0,17s u 0,07s s 0,07s 78% "gegrep hunk *.[hc] > /dev/null"
r 0,25s u 0,07s s 0,08s 60% "gegrep hunk *.[hc] > /dev/null"
agrep
r 0,38s u 0,08s s 0,12s 52% "agrep hunk *.[hc] > /dev/null"
r 0,27s u 0,08s s 0,10s 68% "agrep hunk *.[hc] > /dev/null"
r 0,36s u 0,08s s 0,13s 60% "agrep hunk *.[hc] > /dev/null"
r 0,47s u 0,10s s 0,10s 42% "agrep hunk *.[hc] > /dev/null"
r 0,29s u 0,10s s 0,12s 74% "agrep hunk *.[hc] > /dev/null"

-- 
$jhi++; # http://www.iki.fi/jhi/
        # There is this special biologist word we use for 'stable'.
        # It is 'dead'. -- Jack Cohen


------------------------------

Date: Tue, 6 Jul 1999 22:36:06 +1000
From: e-lephant@b-igpond.com (elephant)
Subject: Re: faster grep?
Message-Id: <MPG.11eca34380809437989b05@news-server>

Jukka Juslin writes ..
>elephant wrote:
>> 
>> Jukka Juslin writes ..
>> >egrep is faster still.
>> 
>> really ? .. I would have expected that for the extended regexps .. but
>> for the standard ones I would have expected grep to be faster
>
>juslin@pcephc23 ~ > time egrep patch *
>0.140u 0.060s 0:00.44 
>
>juslin@pcephc23 ~ > time grep patch *
>0.130u 0.080s 0:02.68 

thanks for that .. I wasn't doubting your claim - just expressing my 
surprise

>Not too hard to test it out by yourself.

you assume that I'm using UNIX

-- 
 jason - remove all hyphens for email reply -


------------------------------

Date: 6 Jul 1999 13:53:32 GMT
From: John Jensen <jjens@primenet.com>
Subject: Re: faster grep?
Message-Id: <7lt1os$gfv$1@nnrp03.primenet.com>

Jonathan Stowe <gellyfish@gellyfish.com> wrote:

: On 5 Jul 1999 16:48:42 GMT John Jensen wrote:

: > So, I wonder what the best approach would be ... perhaps to fire a
: > command line grep, or ... ?

: Well in Perl of course you could do something like this (reads from STDIN):

: #!/usr/bin/perl -w
: [...]

Thanks for the code.  I dug in yesterday and did a version in C.  For the
analysis part of my program that might be a better answer: I get to
compile my (~50) regular expressions once, and then apply them to each
line read (~300,000 lines).  In C that runs in 2 minutes on my 233MHz K6.

Like that makes a difference ;-) ... it makes a lot of sense to run a
computer 7x24 just to do a 2 minute job every night!

As I move on to the display aspect of my problem I think I will be using
Perl CGIs.  That's the nice thing about having lots of tools in your
toolbox - choosing the one that feels right.
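
For what it's worth, the compile-once trick works in Perl too, via qr//
(available since 5.005).  A rough, untested sketch, with two stand-in words
for the ~50 real patterns:

```perl
use strict;

# Compile each pattern once with qr//, then apply all of them to
# every line; returns word => match-count pairs.
sub count_words {
    my ($words, @lines) = @_;
    my %pattern = map { $_ => qr/\b\Q$_\E\b/ } @$words;   # compiled once
    my %count;
    for my $line (@lines) {
        for my $word (@$words) {
            $count{$word}++ while $line =~ /$pattern{$word}/g;
        }
    }
    return %count;
}

if (@ARGV) {
    my %count = count_words([qw(foo bar)], <>);
    print "$_\t$count{$_}\n" for sort keys %count;
}
```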

John


------------------------------

Date: Mon, 5 Jul 1999 22:46:09 +0300
From: "Magnetic Media" <office@algorithm.hypermart.net>
Subject: Fetching HTTP documents
Message-Id: <7lr27g$c2d$1@main.rls.roknet.ro>

Hi!

Please, tell me how I can retrieve the contents of a document through HTTP,
to use its content in a CGI script? Is there any library function or
subroutine available?

Thanks, Cuki.




------------------------------

Date: Mon, 05 Jul 1999 22:34:41 +0200
From: Jukka Juslin <jukka.juslin@cern.ch>
Subject: Re: Fetching HTTP documents
Message-Id: <378116E1.B3142CEB@cern.ch>

Magnetic Media wrote:
> 
> Hi!
> 
> Please, tell me how can I retrieve the contents of a document through HTTP,
> to use it's content in a CGI script? Is there any library function or
> subroutine available?

The subject should probably be something like 'Fetching HTML documents'.

#!/usr/bin/perl

use LWP::UserAgent;

$URL = 'http://www.cnn.com';
$request = new HTTP::Request('GET', "$URL");

 ...

This isn't a library function or a subroutine. It doesn't help to just
throw programming-related words around to make a good impression
(or maybe you were a C programmer).

This is a module.
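
A complete version of the fetch, as an untested sketch (error handling kept
minimal; any URL could replace the CNN one above):

```perl
use strict;
use LWP::UserAgent;
use HTTP::Request;

# Fetch a URL and return the document body, or undef on failure.
sub fetch {
    my $url = shift;
    my $ua  = LWP::UserAgent->new;
    my $response = $ua->request(HTTP::Request->new(GET => $url));
    return $response->is_success ? $response->content : undef;
}

# e.g.  print fetch('http://www.cnn.com') || "fetch failed\n";
```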

--
++Jukka


------------------------------

Date: 5 Jul 1999 21:23:13 GMT
From: ebohlman@netcom.com (Eric Bohlman)
Subject: Re: Fetching HTTP documents
Message-Id: <7lr7o1$9va@dfw-ixnews11.ix.netcom.com>

Magnetic Media (office@algorithm.hypermart.net) wrote:
: Please, tell me how can I retrieve the contents of a document through HTTP,
: to use it's content in a CGI script? Is there any library function or
: subroutine available?

Yep, use the LWP series of modules.



------------------------------

Date: 5 Jul 1999 18:04:56 -0500
From: abigail@delanet.com (Abigail)
Subject: Re: Fetching HTTP documents
Message-Id: <slrn7o2eg4.h6v.abigail@alexandra.delanet.com>

Magnetic Media (office@algorithm.hypermart.net) wrote on MMCXXXIV
September MCMXCIII in <URL:news:7lr27g$c2d$1@main.rls.roknet.ro>:
%% 
%% Please, tell me how can I retrieve the contents of a document through HTTP,
%% to use it's content in a CGI script? Is there any library function or
%% subroutine available?


use LWP::UserAgent;



Abigail
-- 
package Just_another_Perl_Hacker; sub print {($_=$_[0])=~ s/_/ /g;
                                      print } sub __PACKAGE__ { &
                                      print (     __PACKAGE__)} &
                                                  __PACKAGE__
                                            (                )


  -----------== Posted via Newsfeeds.Com, Uncensored Usenet News ==----------
   http://www.newsfeeds.com       The Largest Usenet Servers in the World!
------== Over 73,000 Newsgroups - Including  Dedicated  Binaries Servers ==-----


------------------------------

Date: Tue, 06 Jul 1999 00:45:03 GMT
From: Patrick Timmins <ptimmins@itd.sterling.com>
Subject: Re: Fetching HTTP documents
Message-Id: <7lrjid$2h$1@nnrp1.deja.com>

In article <7lr27g$c2d$1@main.rls.roknet.ro>,
  "Magnetic Media" <office@algorithm.hypermart.net> wrote:

> Hi!
>
> Please, tell me how can I retrieve the contents of a document through
> HTTP, to use it's content in a CGI script? Is there any library
> function or subroutine available?

use LWP::Simple;
and invoke its 'get' function.

Look for the libwww modules at a CPAN near you, e.g.:
ftp://ftp.rge.com/pub/languages/perl//modules/by-module/LWP/

$monger{Omaha}[0]
Patrick Timmins
ptimmins@itd.sterling.com


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.


------------------------------

Date: Tue, 6 Jul 1999 14:22:21 -0700
From: "John" <john@add.com>
Subject: Re: Fetching HTTP documents
Message-Id: <931296259.452.77@news.remarQ.com>

or a shortcut:

$url=`lynx -source www.foo.bar`;

Magnetic Media wrote in message <7lr27g$c2d$1@main.rls.roknet.ro>...
>Hi!
>
>Please, tell me how can I retrieve the contents of a document through HTTP,
>to use it's content in a CGI script? Is there any library function or
>subroutine available?
>
>Thanks, Cuki.
>
>




------------------------------

Date: Mon, 5 Jul 1999 10:33:54 +0200
From: "Bastiaan S van den Berg" <office@asc.nl>
Subject: file properties trouble
Message-Id: <7lpqpq$7tf$1@zonnetje.NL.net>

i've written this piece of code, but somehow, all files come out as
undefined

it needs to look for directories and normal files, but it doesn't work

does anyone have an idea what could be wrong?

[code]
opendir(DIR,$somedir) or die "error! $!";
@meuk= readdir(DIR);
closedir DIR;

foreach $entry (@meuk) {
 if (-d $entry) {
  print '[',$entry,']',"\n";
 } elsif (-f $entry) {
  print $entry,"\n";
 } else {
  print '? ',$entry,"\n";
 }
};
[/code]

tnx in advance!

cul8r
--
        .-.      |
    .-.(._ ).-.  | Bastiaan v/d Berg ; aka buZz
   (  /   `(   ) | Internet Specialist
   .-;     .-.'  | Account Software Consultancy
  (   ).-.(   )  | www.asc.nl =|= office@asc.nl
   '-'(   )'-'   | huizen.ddsw.nl/bewoners/buzz =|= buzz@ddsw.nl
       '-'       |




------------------------------

Date: Mon, 05 Jul 1999 18:49:08 +1000
From: bigiain@mightymedia.com.au (Iain Chalmers)
Subject: Re: file properties trouble
Message-Id: <bigiain-0507991849080001@bigman.mighty.aust.com>

In article <7lpqpq$7tf$1@zonnetje.NL.net>, "Bastiaan S van den Berg"
<buzz@ddsw.nl> wrote:

> i've written this peace of code , but somehow , all files come out as
> undefined
> 
> it needs to look for directories and normal files , but it doesn't work
> 
> does anyone has an idea what could be wrong?

yup, the people who wrote the documentation for readdir have already
guessed you'd make this mistake, that's why they wrote this in perlfunc:

readdir DIRHANDLE 

Returns the next directory entry for a directory opened by opendir(). If
used in a list context, returns all the rest of the entries in the
directory.  If there are no more entries, returns an undefined value in a
scalar context or a null list in a list context.

If you're planning to filetest the return values out of a readdir(), you'd
better prepend the directory in question.  Otherwise, because we didn't
chdir() there, it would have been testing the wrong file.
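
In other words, something like this (an untested sketch of the posted loop
with the directory prepended before each filetest):

```perl
use strict;

# Same loop as the original post, but the filetests get the full
# path, since readdir() returns bare entry names.
sub list_dir {
    my $somedir = shift;
    opendir DIR, $somedir or die "error! $!";
    my @meuk = readdir DIR;
    closedir DIR;

    my @out;
    foreach my $entry (@meuk) {
        my $path = "$somedir/$entry";      # the crucial fix
        if    (-d $path) { push @out, "[$entry]" }
        elsif (-f $path) { push @out, $entry }
        else             { push @out, "? $entry" }
    }
    return @out;
}

print "$_\n" for list_dir('.');
```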

cheers

Iain


------------------------------

Date: Sun, 04 Jul 1999 14:40:55 GMT
From: bababozorg@aol.com
Subject: File uploading script!
Message-Id: <7lnrpm$tbe$1@nnrp1.deja.com>

hi
can anyone tell me what is wrong with this script, it doesnt work,
actually it makes a file, but it doesnt write to the file, it just
makes empty file!!
i was looking at the other free scripts which they did the same, but
their script is working properbly!
please correct me if you see any error i am kinda new to perl! :)
#!C:\Perl\5.00502\bin\MSWin32-x86-object\perl.exe
use CGI qw(:standard);
$q = new CGI;
$dir = ".";
$file = $q->param(file);
$file =~ s/\s+//g;
$file =~ s/\\/\//g;
$file =~ /([^\/\\]+)$/; #i got this from the jeff's file upload :)
$filename = $1;
open(FILE, ">$dir/$filename");
undef $reads;
undef $buffer;
while ($bytes = read($file,$buffer,1024)) {
$reads += $bytes;
print FILE $buffer;
}
close($file);
close(FILE);
print $q->header;
print "done\n $reads";
exit;

thanks
hamed




------------------------------

Date: 4 Jul 1999 10:20:16 -0500
From: abigail@delanet.com (Abigail)
Subject: Re: File uploading script!
Message-Id: <slrn7nuust.31h.abigail@alexandra.delanet.com>

bababozorg@aol.com (bababozorg@aol.com) wrote on MMCXXXIII September
MCMXCIII in <URL:news:7lnrpm$tbe$1@nnrp1.deja.com>:
<> hi
<> can anyone tell me what is wrong with this script, it doesnt work,
<> actually it makes a file, but it doesnt write to the file, it just
<> makes empty file!!
<> i was looking at the other free scripts which they did the same, but
<> their script is working properbly!
<> please correct me if you see any error i am kinda new to perl! :)

Zeroth mistake: you don't use capital letters.

<> #!C:\Perl\5.00502\bin\MSWin32-x86-object\perl.exe

First mistake:  there is no -w.
Second mistake: there is no -T.
Third mistake:  there is no use strict;

<> use CGI qw(:standard);
<> $q = new CGI;
<> $dir = ".";
<> $file = $q->param(file);
<> $file =~ s/\s+//g;
<> $file =~ s/\\/\//g;
<> $file =~ /([^\/\\]+)$/; #i got this from the jeff's file upload :)
<> $filename = $1;
<> open(FILE, ">$dir/$filename");

Fourth mistake: you don't check the return value of open().

<> undef $reads;
<> undef $buffer;
<> while ($bytes = read($file,$buffer,1024)) {
<> $reads += $bytes;

Fifth mistake: you don't indent.

<> print FILE $buffer;
<> }
<> close($file);
<> close(FILE);
<> print $q->header;
<> print "done\n $reads";
<> exit;



Abigail
-- 
perl -e '* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
         / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / 
         % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %;
         BEGIN {% % = ($ _ = " " => print "Just Another Perl Hacker\n")}'




------------------------------

Date: 4 Jul 1999 20:35:52 -0000
From: Jonathan Stowe <gellyfish@gellyfish.com>
Subject: Re: File uploading script!
Message-Id: <7logj8$48e$1@gellyfish.btinternet.com>

On Sun, 04 Jul 1999 14:40:55 GMT bababozorg@aol.com wrote:
> hi
> can anyone tell me what is wrong with this script, it doesnt work,
> actually it makes a file, but it doesnt write to the file, it just
> makes empty file!!
> i was looking at the other free scripts which they did the same, but
> their script is working properbly!
> please correct me if you see any error i am kinda new to perl! :)
> #!C:\Perl\5.00502\bin\MSWin32-x86-object\perl.exe

No -w on shebang line.  You should also start to use 'strict' as this
will make you more honest in the way you use variables.

> use CGI qw(:standard);
> $q = new CGI;

You don't need to import the symbols like that if you are using the
OO interface to the CGI module as you are.

> $dir = ".";
> $file = $q->param(file);

You should have quotes around that parameter there.

> $file =~ s/\s+//g;
> $file =~ s/\\/\//g;
> $file =~ /([^\/\\]+)$/; #i got this from the jeff's file upload :)
> $filename = $1;
> open(FILE, ">$dir/$filename");

You are not checking whether the open succeeded.

If you are on a Windows system you should also use binmode() on both the
read and write filehandles.

> undef $reads;
> undef $buffer;

You would be better off declaring these with 'my'.

> while ($bytes = read($file,$buffer,1024)) {
> $reads += $bytes;
> print FILE $buffer;
> }
> close($file);
> close(FILE);
> print $q->header;

You probably want to set your content-type to 'text/plain' here.

> print "done\n $reads";
> exit;

You don't need this exit here - at best it is unnecessary at this point.

Beyond that there's not much wrong with the program: I think your problem
may be with the binmode() as it appears you are on a Windows machine.

/J\
-- 
Jonathan Stowe <jns@gellyfish.com>
Some of your questions answered:
<URL:http://www.btinternet.com/~gellyfish/resources/wwwfaq.htm>
Hastings: <URL:http://www.newhoo.com/Regional/UK/England/East_Sussex/Hastings>


------------------------------

Date: Fri, 02 Jul 1999 20:40:21 -0700
From: Phil Voris <pvorishatesspam@earthlink.net>
Subject: Find function param question
Message-Id: <377D8625.67F9BEB1@earthlink.net>

Say I have a find call as follows:

find(\&find_files, @ARGV);

sub find_files{

    my($cwd) = cwd();
    
    # if the file is not a directory:	
    if (!-d){
	# but is binary
	if (-B){
	    push (@long_binary_paths, join("/", $cwd, $_));
	} 
	# but is ascii
	else {	    
	    push (@long_ascii_paths, join("/", $cwd, $_));
	}
    } 
    return 1;
}

I find myself forced to use global arrays to push onto because I can't
define a function -- for use by the find -- that takes args.  I can't
pass by ref and it gets quite ugly.  Any thoughts?

P
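
One possible way out (an untested sketch; classify_files() is a made-up
name here) is to build the wanted routine as a closure over array
references passed in, so nothing has to live in globals:

```perl
use strict;
use File::Find;

# The wanted() callback closes over the array refs to fill.
sub classify_files {
    my ($dirs, $binary, $ascii) = @_;     # $binary/$ascii are array refs
    find(sub {
        return if -d;                     # skip directories
        if (-B) { push @$binary, $File::Find::name }
        else    { push @$ascii,  $File::Find::name }
    }, @$dirs);
}

my (@long_binary_paths, @long_ascii_paths);
classify_files([@ARGV], \@long_binary_paths, \@long_ascii_paths) if @ARGV;
```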



------------------------------

Date: Tue, 06 Jul 1999 17:44:04 +1000
From: Neale Morison <nmorison@ozemail.com.au>
Subject: finding relative path between two files
Message-Id: <3781B3C4.638C@ozemail.com.au>

Hello all. I have been trying to write a sensible bit of code to find
relative paths between two files, for use in a Web project. 
What I have written works, with an incredible number of provisos, and is
about as elegant as cat-sick.
I was wondering if anybody knew of a module that handles this or has any
suggestions for turning this into two lines of taut, lean code.
Regards, Neale

	$p1 = "a/b/c/d/e/f/g";
	$p2 = "a/b/c/d/h/i/j/k";
	print "p1: $p1<BR>";
	print "p2: $p2<BR>";

	print find_relative_path($p1, $p2);

#----------------------------------------------------
#
# find_relative_path($path1, $path2)
#
# finds relative path from file1 to file2 - 
# case sensitive comparison to handle Unix systems
# assumes Unix path separators / (works in ActivePerl on Windows)
# assumes both paths terminate in files, not directories
# assumes paths are normalised, so the earlier parts of them match
# this may not be true - one may contain a drive letter, one may not
# two consecutive /s might be treated as one by file system or browser
# but would not work in this algorithm
#
#----------------------------------------------------

sub find_relative_path {

	my $p1 = shift;
	my $p2 = shift;
	my ($shared, $diff_p1, $diff_p2, $rel_path, $char);
	$shared = $p2;
	
	while ($shared ne "" ) {
		if ($p1 =~ m/$shared/) {
			last;
		}
		chop $shared;
		while ( ($shared ne "") and ($char = chop($shared)) ne '/') {};
		$shared .= '/';
	}
	($diff_p1 = $p1) =~ s/$shared//;
	($diff_p2 = $p2) =~ s/$shared//;
	
	while ($diff_p1 =~ m{/}g) {
		$rel_path .= '../'; 
	}
	$rel_path .= $diff_p2;

	return $rel_path;
	

} # END SUB find_relative_path($path1, $path2)


------------------------------

Date: 6 Jul 1999 03:21:06 -0500
From: abigail@delanet.com (Abigail)
Subject: Re: finding relative path between two files
Message-Id: <slrn7o3f2s.h6v.abigail@alexandra.delanet.com>

Neale Morison (nmorison@ozemail.com.au) wrote on MMCXXXV September
MCMXCIII in <URL:news:3781B3C4.638C@ozemail.com.au>:
<> Hello all. I have been trying to write a sensible bit of code to find
<> relative paths between two files, for use in a Web project. 
<> What I have written works, with an incredible number of provisos, and is
<> about as elegant as cat-sick.
<> I was wondering if anybody knew of a module that handles this or has any
<> suggestions for turning this into two lines of taut, lean code.
<> Regards, Neale
<> 
<> 	$p1 = "a/b/c/d/e/f/g";
<> 	$p2 = "a/b/c/d/h/i/j/k";
<> 	print "p1: $p1<BR>";
<> 	print "p2: $p2<BR>";
<> 
<> 	print find_relative_path($p1, $p2);


sub find_relative_path ($$) {
    my @f = split m =/==> shift, -1; pop @f;
    my @s = split m =/==> shift, -1;

    while (@f && @s && $f [0] == $s [0]) {shift @f; shift @s}

    join q =/==> (q q..q) x @f, @s or q q.q;
}



Abigail
-- 
perl -e '* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
         / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / 
         % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %;
         BEGIN {% % = ($ _ = " " => print "Just Another Perl Hacker\n")}'




------------------------------

Date: Tue, 6 Jul 1999 19:19:15 +1000
From: "Neale Morison" <nmorison@ozemail.com.au>
Subject: Re: finding relative path between two files
Message-Id: <RTjg3.3014$A55.21943@ozemail.com.au>

Abigail wrote in message ...
>sub find_relative_path ($$) {
>    my @f = split m =/==> shift, -1; pop @f;
>    my @s = split m =/==> shift, -1;
>
>    while (@f && @s && $f [0] == $s [0]) {shift @f; shift @s}
>
>    join q =/==> (q q..q) x @f, @s or q q.q;
>}
That's 5 lines.





------------------------------

Date: Tue, 6 Jul 1999 19:17:24 +1000
From: e-lephant@b-igpond.com (elephant)
Subject: Re: finding relative path between two files
Message-Id: <MPG.11ec74ab2480904989b03@news-server>

Abigail writes ..
><> I was wondering if anybody knew of a module that handles this or has any
><> suggestions for turning this into two lines of taut, lean code.
>
>    while (@f && @s && $f [0] == $s [0]) {shift @f; shift @s}

of course she meant 'eq' not '=='

-- 
 jason - remove all hyphens for email reply -


------------------------------

Date: Tue, 6 Jul 1999 22:17:14 +1000
From: "Neale Morison" <nmorison@ozemail.com.au>
Subject: Re: finding relative path between two files
Message-Id: <yumg3.3106$A55.22711@ozemail.com.au>

Abigail wrote in message ...
>sub find_relative_path ($$) {
>    my @f = split m =/==> shift, -1; pop @f;
>    my @s = split m =/==> shift, -1;
>
>    while (@f && @s && $f [0] eq $s [0]) {shift @f; shift @s}
>
>    join q =/==> (q q..q) x @f, @s or q q.q;
>}


Thank you very much, Abigail, and Jason for the crucial edit. This works
fine. I was expecting it to print "Just another Perl hacker" but you fooled
me. What took me hours seems to be a matter of moments for you. Now I can
spend more hours working out what in heaven's name this code does. Best
regards, Neale




------------------------------

Date: Tue, 6 Jul 1999 23:45:29 +1000
From: "Neale Morison" <nmorison@ozemail.com.au>
Subject: Re: finding relative path between two files
Message-Id: <cNng3.3153$A55.21785@ozemail.com.au>

Abigail wrote in message ...
! print find_relative_path2('/a/b/c/d/e/f','/a/b/c/q');
!
! sub find_relative_path ($$) {
!   my @f = split m =/==> shift, -1; pop @f;
!    my @s = split m =/==> shift, -1;
!    while (@f && @s && $f [0] eq $s [0]) {shift @f; shift @s}
!    join q =/==> (q q..q) x @f, @s or q q.q;
!  }

I think I understand the code above now, Abigail, and may I say there is
much poetry in it. In fact it's 'q't. Can I impose on you for one more
question?
Is there anything substantially different to your code in the code below?

sub find_relative_path2 ($$) {
    my @f = split m {/}, shift;
    pop @f;
    my @s = split m {/}, shift;
    while (@f && @s && $f [0] eq $s [0]) {shift @f; shift @s}
    join q {/}, ('..') x @f, @s or ('.');
}
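
For comparison, a plainly spelled-out version (sketch): the one substantive
difference from Abigail's appears to be the missing -1 limit on split,
which she used so trailing empty fields - paths ending in '/' - survive:

```perl
use strict;

# The same algorithm, spelled out plainly.  The -1 limit on split
# keeps trailing empty fields, matching Abigail's version.
sub find_relative_path {
    my ($from, $to) = @_;
    my @f = split m{/}, $from, -1;
    pop @f;                                   # drop the filename itself
    my @s = split m{/}, $to, -1;
    while (@f && @s && $f[0] eq $s[0]) { shift @f; shift @s }
    return join('/', ('..') x @f, @s) || '.';
}

print find_relative_path('a/b/c/d/e/f', 'a/b/c/d/h/i/j/k'), "\n";
# prints ../h/i/j/k
```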






------------------------------

Date: 1 Jul 99 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 1 Jul 99)
Message-Id: <null>


Administrivia:

The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc.  For subscription or unsubscription requests, send
the single line:

	subscribe perl-users
or:
	unsubscribe perl-users

to almanac@ruby.oce.orst.edu.  

To submit articles to comp.lang.perl.misc (and this Digest), send your
article to perl-users@ruby.oce.orst.edu.

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.

The Meta-FAQ, an article containing information about the FAQ, is
available by requesting "send perl-users meta-faq". The real FAQ, as it
appeared last in the newsgroup, can be retrieved with the request "send
perl-users FAQ". Due to their sizes, neither the Meta-FAQ nor the FAQ
are included in the digest.

The "mini-FAQ", which is an updated version of the Meta-FAQ, is
available by requesting "send perl-users mini-faq". It appears twice
weekly in the group, but is not distributed in the digest.

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V9 Issue 27
************************************

