

Perl-Users Digest, Issue: 591 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Jun 27 18:09:58 2007

Date: Wed, 27 Jun 2007 15:09:07 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Wed, 27 Jun 2007     Volume: 11 Number: 591

Today's topics:
    Re: "Out of memory!" ??? <joe@inwap.com>
    Re: "Out of memory!" ??? <jiehuang001@gmail.com>
    Re: "Out of memory!" ??? xhoster@gmail.com
    Re: "Out of memory!" ??? <wahab@chemie.uni-halle.de>
    Re: "Out of memory!" ??? <jiehuang001@gmail.com>
    Re: "Out of memory!" ??? <jurgenex@hotmail.com>
    Re: "Out of memory!" ??? <jiehuang001@gmail.com>
    Re: "Out of memory!" ??? <kkeller-usenet@wombat.san-francisco.ca.us>
        help with wintegrate scripting  cartercc@gmail.com
        Read File into multi-dimensional array ignore whitespac  vorticitywolfe@gmail.com
    Re: Read File into multi-dimensional array ignore white <simon.chao@fmr.com>
    Re: Read File into multi-dimensional array ignore white  vorticitywolfe@gmail.com
    Re: skip first N lines when reading file <No_4@dsl.pipex.com>
    Re: The Modernization of Emacs: terminology buffer and  <blmblm@myrealbox.com>
    Re: The Modernization of Emacs: terminology buffer and  <blmblm@myrealbox.com>
    Re: The Modernization of Emacs: terminology buffer and  <blmblm@myrealbox.com>
        using wildcards with -e <dummymb@hotmail.com>
    Re: using wildcards with -e <peter@makholm.net>
    Re: using wildcards with -e <dummymb@hotmail.com>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Wed, 27 Jun 2007 11:30:24 -0700
From: Joe Smith <joe@inwap.com>
Subject: Re: "Out of memory!" ???
Message-Id: <eJqdnefG_PRfMR_bnZ2dnUVZ_iydnZ2d@comcast.com>

Jie wrote:
>  Even though my Mac has 6G memory,

Take a look at `perl -V` to see if the version of perl you're running
was compiled with the right options to handle more than 4G memory.

> Also, for another program, when I use "foreach <IN-FILE>", it runs out
> of memory. but when I change to "while<IN-FILE>", it works fine...
> Strange....

Not strange.  foreach(<>) creates a huge anonymous array by reading
the entire file into memory all at once.  while(<>) processes one
line at a time; only a small part of the file is in a buffer at a time.
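A side-by-side sketch (using a throwaway demo file, not the poster's data) of why the two loops behave so differently:

```perl
use strict;
use warnings;

# Write a small demo file (a stand-in for the real, huge input).
open my $out, '>', 'demo.txt' or die "demo.txt: $!";
print $out "line $_\n" for 1 .. 5;
close $out;

# foreach evaluates <$in> in list context: the WHOLE file is read
# into a temporary list before the first iteration even starts.
open my $in, '<', 'demo.txt' or die "demo.txt: $!";
my $n_foreach = 0;
foreach my $line (<$in>) { $n_foreach++ }
close $in;

# while evaluates <$in> in scalar context: one line per iteration,
# so only a small buffer of the file is in memory at any time.
open $in, '<', 'demo.txt' or die "demo.txt: $!";
my $n_while = 0;
while (my $line = <$in>) { $n_while++ }
close $in;

print "$n_foreach $n_while\n";   # same count, very different memory use
unlink 'demo.txt';
```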
	-Joe


------------------------------

Date: Wed, 27 Jun 2007 11:45:16 -0700
From:  Jie <jiehuang001@gmail.com>
Subject: Re: "Out of memory!" ???
Message-Id: <1182969916.120396.284360@n2g2000hse.googlegroups.com>


Hi, Joe:

then is there a way to loop over an array by using "while" and
eliminating the use of "foreach"?

this is the output for "perl -V", http://www.humanbee.com/CHE/out.txt
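For the record, an array can be walked with "while"; a sketch with made-up data (note that switching loop style does not shrink the array's memory footprint):

```perl
use strict;
use warnings;

my @items = ('a' .. 'e');

# Destructive: shift consumes one element per iteration, so the
# array shrinks as you go (and is empty afterwards).
while (defined(my $item = shift @items)) {
    print "$item\n";
}

# Non-destructive: a C-style loop over the indices.
my @again = ('a' .. 'e');
for (my $i = 0; $i < @again; $i++) {
    print "$again[$i]\n";
}
```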


On Jun 27, 1:30 pm, Joe Smith <j...@inwap.com> wrote:
> Jie wrote:
> >  Even though my Mac has 6G memory,
>
> Take a look at `perl -V` to see if the version of perl you're running
> was compiled with the right options to handle more than 4G memory.
>
> > Also, for another program, when I use "foreach <IN-FILE>", it runs out
> > of memory. but when I change to "while<IN-FILE>", it works fine...
> > Strange....
>
> Not strange.  foreach(<>) creates a huge anonymous array by reading
> the entire file into memory all at once.  while(<>) processes one
> line at a time; only a small part of the file is in a buffer at a time.
>         -Joe




------------------------------

Date: 27 Jun 2007 18:57:30 GMT
From: xhoster@gmail.com
Subject: Re: "Out of memory!" ???
Message-Id: <20070627145732.996$Eu@newsreader.com>

Jie <jiehuang001@gmail.com> wrote:
> Hi, I have this script trying to do something for an array of files.
> The script is posted at http://www.humanbee.com/CHE/MERGE.txt
>
> As you can see, for each file in
> ("1","2","3","4","5","6","7","8","9"....
> some processing will be carried out. Even though my Mac has 6G memory,
> the program will give this following message after processing file
> "4". My question is, if there is enough memory to process each one
> individually, why there is not enough memory to process the array in a
> loop by using "foreach". I assume within this foreach loop, when one
> file is finished processing and the output file is written, the memory
> will be released, to begin processing the next one. Isn't that true??

No.  You make heavy use of symbolic references.  They aren't automatically
cleaned up, as Perl has no way of knowing when they won't be used again.
This is one of the reasons (probably a lesser one of them) why symbolic
references are usually not a good thing.
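As a tiny illustration (with hypothetical names): a symbolic reference resolves a string to a package variable, and package variables live in the global symbol table for the life of the program:

```perl
use strict;
use warnings;
no strict 'refs';   # needed to allow the symbolic deref below

our @chr1;              # a package variable: an entry in the symbol table
my $name = 'chr1';
push @$name, 'rs123';   # @$name resolves, by string, to @main::chr1
print "@chr1\n";        # prints: rs123

# @main::chr1 stays in the symbol table until the program exits;
# a lexical (my) array, by contrast, is freed when its scope ends.
```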

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service                        $9.95/Month 30GB


------------------------------

Date: Wed, 27 Jun 2007 21:08:14 +0200
From: Mirco Wahab <wahab@chemie.uni-halle.de>
Subject: Re: "Out of memory!" ???
Message-Id: <f5ucj4$268f$1@nserver.hrz.tu-freiberg.de>

Jie wrote:
> Hi, Joe:
> 
> then is there a way to loop over an array by using "while" and
> eliminating the use of "foreach"?

"foreach" is probably not the problem here.  Xhoster
already identified the problem, you create (probably huge)
arrays and hashes(!) from symbols just read by
the program from files.

If your snp-files are large (more than one million lines)
and you are adding stuff like:

        ($sample_ID, @cols) = split;
        ...
        ${$sample_ID}{$marker} = $Allele[$A];
        ...
        push @allSamples, $sample_ID;

in the loop, it will fill up memory somewhere -
and, what makes the fun hotter:

        while( ... ) {
            if ( ... {regexp} ... ) {
               push @$1, $2; ## for each chromosome, create an array of its SNPs
            }
        }

         ...

        @files = ("1","2","3", ... "X","XY","Y");
        ...
        foreach $f (@files) {
            ...
            foreach $marker (@$f) { ## here @$f means the array of desired SNP list in the map file
            ...
        }

renders this code completely unmaintainable, unpredictable,
error-prone; so it has to be rewritten if it's really used
for something.

my €0,02

Regards

M.


------------------------------

Date: Wed, 27 Jun 2007 12:47:52 -0700
From:  Jie <jiehuang001@gmail.com>
Subject: Re: "Out of memory!" ???
Message-Id: <1182973672.993667.281870@q75g2000hsh.googlegroups.com>


Hi,

my naive question is that if there is enough memory to handle the
biggest file, which is "1", why could it not do the rest in a
loop? ..... I realize there is too much computation for each file,
that is why I write the output for each file into a temp file under
the "OUTPUT" folder..

Yes, this code is definitely real. Actually, it is for a quite
important project.
I put the complete code and all the input files here http://www.humanbee.com/perl/

If you could offer 2 more cents to take a peek and give me some hint
on how to re-write it, I would deeply deeply appreciate!!!

Jie


On Jun 27, 2:08 pm, Mirco Wahab <w...@chemie.uni-halle.de> wrote:
> Jie wrote:
> > Hi, Joe:
>
> > then is there a way to loop over an array by using "while" and
> > eliminating the use of "foreach"?
>
> "foreach" is probably not the problem here.  Xhoster
> already identified the problem, you create (probably huge)
> arrays and hashes(!) from symbols just read by
> the program from files.
>
> If your snp-files are large (more than one million lines)
> and you are adding stuff like:
>
>         ($sample_ID, @cols) = split;
>         ...
>         ${$sample_ID}{$marker} = $Allele[$A];
>         ...
>         push @allSamples, $sample_ID;
>
> in the loop, it will fill up memory somewhere -
> and, what makes the fun hotter:
>
>         while( ... ) {
>             if ( ... {regexp} ... ) {
>                push @$1, $2; ## for each chromosome, create an array of its SNPs
>             }
>         }
>
>          ...
>
>         @files = ("1","2","3", ... "X","XY","Y");
>         ...
>         foreach $f (@files) {
>             ...
>             foreach $marker (@$f) { ## here @$f means the array of desired SNP list in the map file
>             ...
>         }
>
> renders this code completely unmaintainable, unpredictable,
> error-prone; so it has to be rewritten if it's really used
> for something.
>
> my €0,02
>
> Regards
>
> M.




------------------------------

Date: Wed, 27 Jun 2007 20:10:04 GMT
From: "Jürgen Exner" <jurgenex@hotmail.com>
Subject: Re: "Out of memory!" ???
Message-Id: <wuzgi.7505$G85.1857@trndny08>

Jie wrote:
> my naive question is that if there is enough memory to handle the
> biggest file, which is "1", why could it not do the rest in a
> loop? ..... I realize there is too much computation for each file,
> that is why i write the output for each file into a temp file under
> the "OUTPUT" folder..

Did you read what xhoster wrote?
Symbolic references create an entry in the global symbol table and perl has 
no way of knowing when they go out of scope, therefore the memory used by 
them will never be recycled. And your code is infested with symbolic 
references.

Solution: rewrite your code in a clean way that does not use symbolic 
references (which are evil to begin with).

jue 




------------------------------

Date: Wed, 27 Jun 2007 13:51:20 -0700
From:  Jie <jiehuang001@gmail.com>
Subject: Re: "Out of memory!" ???
Message-Id: <1182977480.679974.7730@q69g2000hsb.googlegroups.com>

I do not know if symbolic reference means Global reference. And
therefore I should try to define local variable. This might be a very
naive question. Do I just add "my" to define a local variable such as
below:

foreach my $f (@files) {
 my @marker_array = ();
 my $map ....
 my ....
}



On Jun 27, 3:10 pm, "Jürgen Exner" <jurge...@hotmail.com> wrote:
> Jie wrote:
> > my naive question is that if there is enough memory to handle the
> > biggest file, which is "1", why could it not do the rest in a
> > loop? ..... I realize there is too much computation for each file,
> > that is why i write the output for each file into a temp file under
> > the "OUTPUT" folder..
>
> Did you read what xhoster wrote?
> Symbolic references create an entry in the global symbol table and perl has
> no way of knowing when they go out of scope, therefore the memory used by
> them will never be recycled. And your code is infested with symbolic
> references.
>
> Solution: rewrite your code in a clean way that does not use symbolic
> references (which are evil to begin with).
>
> jue




------------------------------

Date: Wed, 27 Jun 2007 14:22:56 -0700
From: Keith Keller <kkeller-usenet@wombat.san-francisco.ca.us>
Subject: Re: "Out of memory!" ???
Message-Id: <h21bl4x48g.ln2@goaway.wombat.san-francisco.ca.us>

On 2007-06-27, Jie <jiehuang001@gmail.com> wrote:
> I do not know if symbolic reference means Global reference. And
> therefore I should try to define local variable. This might be a very
> naive question. Do I just add "my" to define a local variable such as
> below:
>
> foreach my $f (@files) {
>  my @marker_array =();
>  my $map ....
>  my ....
> }

What you should really do is include

use strict;

at the top of your code, which will produce fatal errors on any symbolic
reference.  Fix those errors, and you will have eliminated use of
symrefs.
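A sketch of the symref-free shape (hypothetical SNP ids, not the original script): instead of variables named after each chromosome, keep one lexical hash of array refs keyed by chromosome:

```perl
use strict;
use warnings;

# Stand-in for lines read from a map file (hypothetical data).
my @lines = ("1 rs100", "1 rs101", "X rs200");

# Instead of  push @$1, $2;  (an array named after each chromosome),
# key a single lexical hash of array refs by chromosome:
my %snps_by_chr;
for my $line (@lines) {
    my ($chr, $snp) = split ' ', $line;
    push @{ $snps_by_chr{$chr} }, $snp;
}

for my $chr (sort keys %snps_by_chr) {
    print "$chr: @{ $snps_by_chr{$chr} }\n";
}
# %snps_by_chr is a lexical: its memory can be reclaimed when it
# goes out of scope, unlike entries in the global symbol table.
```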

Since we're talking about bad coding practice, all of your calls to open
should be something like

open MAP, "< ../SNP_Map_All.map" or die "couldn't open ../SNP_Map_All.map: $!";

Speaking more generally, have you looked to see whether Bioperl has some
modules or scripts you can use for this task?  See for example

http://doc.bioperl.org/releases/bioperl-1.4/Bio/Variation/SNP.html

--keith

-- 
kkeller-usenet@wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information



------------------------------

Date: Wed, 27 Jun 2007 11:57:31 -0700
From:  cartercc@gmail.com
Subject: help with wintegrate scripting
Message-Id: <1182970651.664388.150140@k79g2000hse.googlegroups.com>

I'm a database manager with primary responsibility for querying and
reporting. Our big database is UniData and I use a lot of Perl and
Java, and also use PostgreSQL and a lot of Access (for reporting, not
for a data store).

Our environment is in the process of rapid change. A number of my Perl
scripts are totally broken, and I have been told to stop using
UniObjects (an IBM package we use for Uniquery). We've been told to
use wintegrate, a product that is totally new to me.

I've looked at wintegrate for a week or so, looked at a number of wis
script files, and have concluded that it must be some variant of
BASIC. (Not a problem ... hey, I'm easy.) My problem is that I don't
have a clue where to start.

What I need to do is write some scripts that we can schedule to run on
off hours and download the data. I would be very much appreciative if
anyone could give me a couple of pointers to get started.

Thanks, CC.



------------------------------

Date: Wed, 27 Jun 2007 11:26:19 -0700
From:  vorticitywolfe@gmail.com
Subject: Read File into multi-dimensional array ignore whitespace
Message-Id: <1182968779.005415.195050@k29g2000hsd.googlegroups.com>

Hello, a newbie question...

I have a file that looks like this:
########test.txt#########
28  0742 1715  0656 1800  0602 1839  0506 1920  0430 1956  0427  2011
29  0741 1716           0600 1840  0505 1922  0429 1957  0427 2011
0454
30  0740 1718           0558 1842  0503 1923  0429 1958  0428 2011
0455
31  0739 1719           0556 1843                   0428
1959                   0457
#######################

and I read it with this code:

###########test.pl###########
while(my $line = <INPUT>){
 push @AoA, [ split ' ', $line ];
}
###########################

I am trying to read it into a multi dimensional array, much like the
perldoc perllol; however, I'm running into a couple of problems.

1.) What does this mean in the documentation, i.e. I don't get what
function the line should be calling , and it is not explained? I see
that it is trying to index $AoA with row $x and column $y, but the
function is throwing me off.

 for $x (1 .. 10) {
	for $y (1 .. 10) {
	    $AoA[$x][$y] = func($x, $y);
	}
    }

2) the push@AoA, [ split ]; thinks the 3rd column is blank space and
combines the array into 8 elements (for the last line ) as opposed to
13 for the first).

3.) If I print @AoA; I get the: ARRAY(BLAH BLAH) ARRAY(BLAH BLAH). I
know this is because I am not printing a scalar. But the question here
is: Is there a way to print an array without for looping through each
element?

Thanks for all your help in advance!

Jonathan



------------------------------

Date: Wed, 27 Jun 2007 11:51:14 -0700
From:  it_says_BALLS_on_your forehead <simon.chao@fmr.com>
Subject: Re: Read File into multi-dimensional array ignore whitespace
Message-Id: <1182970274.663891.219060@o61g2000hsh.googlegroups.com>

On Jun 27, 2:26 pm, vorticitywo...@gmail.com wrote:
> Hello, a newbie question...
>
> I have a file that looks like this:
> ########test.txt#########
> 28  0742 1715  0656 1800  0602 1839  0506 1920  0430 1956  0427  2011
> 29  0741 1716           0600 1840  0505 1922  0429 1957  0427 2011
> 0454
> 30  0740 1718           0558 1842  0503 1923  0429 1958  0428 2011
> 0455
> 31  0739 1719           0556 1843                   0428
> 1959                   0457
> #######################
>
> and I read it with this code:
>
> ###########test.pl###########
> while(my $line = <INPUT>){
>  push @AoA, [ split ];
> }
>
> ###########################
>
> I am trying to read it into a multi dimensional array, much like the
> perldoc perllol; however, I'm running into a couple of problems.
>
> 1.) What does this mean in the documentation, i.e. I don't get what
> function the line should be calling , and it is not explained? I see
> that it is trying to index $AoA with row $x and column $y, but the
> function is throwing me off.
>
>  for $x (1 .. 10) {
>         for $y (1 .. 10) {
>             $AoA[$x][$y] = func($x, $y);
>         }
>     }

Don't worry about the function call. That's just an example to show
what's possible.

> 2) the push@AoA, [ split ]; thinks the 3rd column is blank space and
> combines the array into 8 elements (for the last line ) as opposed to
> 13 for the first).
>

right, you're creating jagged arrays because split, when called with
no explicit delimiter, splits on any run of whitespace; it can't
tell which whitespace character it hit or how many. there's no way
for perl to know how many columns there SHOULD be. you need to be
more precise in your description of your data. is it only one space
character between each value? is it a tab?

> 3.) If I print @AoA; I get the: ARRAY(BLAH BLAH) ARRAY(BLAH BLAH). I
> know this is because I am not printing a scalar. But the question here
> is: Is there a way to print an array without for looping through each
> element?
>

you can always use Data::Dumper.
e.g.
use Data::Dumper;
print Dumper( \@AoA ), "\n";

> Thanks for all your help in advance!
>
> Jonathan




------------------------------

Date: Wed, 27 Jun 2007 15:00:51 -0700
From:  vorticitywolfe@gmail.com
Subject: Re: Read File into multi-dimensional array ignore whitespace
Message-Id: <1182981651.349634.274410@o61g2000hsh.googlegroups.com>

On Jun 27, 6:51 pm, it_says_BALLS_on_your forehead
<simon.c...@fmr.com> wrote:
> On Jun 27, 2:26 pm, vorticitywo...@gmail.com wrote:
>
>
>
> > Hello, a newbie question...
>
> > I have a file that looks like this:
> > ########test.txt#########
> > 28  0742 1715  0656 1800  0602 1839  0506 1920  0430 1956  0427  2011
> > 29  0741 1716           0600 1840  0505 1922  0429 1957  0427 2011
> > 0454
> > 30  0740 1718           0558 1842  0503 1923  0429 1958  0428 2011
> > 0455
> > 31  0739 1719           0556 1843                   0428
> > 1959                   0457
> > #######################
>
> > and I read it with this code:
>
> > ###########test.pl###########
> > while(my $line = <INPUT>){
> >  push @AoA, [ split ];
> > }
>
> > ###########################
>
> > I am trying to read it into a multi dimensional array, much like the
> > perldoc perllol; however, I'm running into a couple of problems.
>
> > 1.) What does this mean in the documentation, i.e. I don't get what
> > function the line should be calling , and it is not explained? I see
> > that it is trying to index $AoA with row $x and column $y, but the
> > function is throwing me off.
>
> >  for $x (1 .. 10) {
> >         for $y (1 .. 10) {
> >             $AoA[$x][$y] = func($x, $y);
> >         }
> >     }
>
> Don't worry about the function call. That's just an example to show
> what's possible.
>
> > 2) the push@AoA, [ split ]; thinks the 3rd column is blank space and
> > combines the array into 8 elements (for the last line ) as opposed to
> > 13 for the first).
>
> right, you're creating jagged arrays because the split function
> doesn't care what whitespace character to split on or how many if
> called with no explicit delimiter. there's no way for perl to know how
> many columns there SHOULD be. you need to be more precise in your
> description of your data. is it only one space character between each
> value? is it a tab?
>
> > 3.) If I print @AoA; I get the: ARRAY(BLAH BLAH) ARRAY(BLAH BLAH). I
> > know this is because I am not printing a scalar. But the question here
> > is: Is there a way to print an array without for looping through each
> > element?
>
> you can always use Data::Dumper.
> e.g.
> print Dumper( \@AoA ), "\n";
>
> > Thanks for all your help in advance!
>
> > Jonathan

Thanks for your help, I'm kind of stuck with the jagged arrays,
because the files I'm reading are created elsewhere, and I just want
to chunk through them getting each value. I was trying to parse it
with split since the format is XX--XXXX-XXXX--XXXX-XXXX--
where the X are numbers and the - are whitespace. So I used a
split(/ \s?\s?/) command, but even then I can't get around the
single then double space syntax. what a mess.
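For fixed-column data like this, unpack with a width template can beat split, because blank columns come back as empty strings instead of disappearing. A sketch (the widths below are guesses from the sample; adjust them to the real file):

```perl
use strict;
use warnings;

# Template guess: a 2-char day, then six "HHMM HHMM" pairs, each
# pair preceded by two spaces. Adjust widths to match the real file.
my $template = 'A2' . ('x2A4xA4' x 6);

my $full  = "28  0742 1715  0656 1800  0602 1839  0506 1920  0430 1956  0427 2011";
my $gappy = "29  0741 1716" . (' ' x 13)
          . "0600 1840  0505 1922  0429 1957  0427 2011";

for my $line ($full, $gappy) {
    my $padded = $line . (' ' x 80);  # unpack must not run off a short line
    my @fields = unpack $template, $padded;
    # Blank columns come back as empty strings, so the field count
    # is always 13: no more jagged rows.
    print scalar(@fields), ": @fields\n";
}
```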

The function thing makes sense now, I thought it may have been some
other syntax that I was unaware of. So thanks! I wish it were a csv
file :-) I have much more experience with those, though in IDL... oh
well.



------------------------------

Date: Wed, 27 Jun 2007 21:47:28 +0100
From: Big and Blue <No_4@dsl.pipex.com>
Subject: Re: skip first N lines when reading file
Message-Id: <6pqdnfTy4cl8UR_bnZ2dnUVZ8s_inZ2d@pipex.net>

Peter Makholm wrote:
> Jie <jiehuang001@gmail.com> writes:
> 
> My take:
> 
> while(<IN>) {
>     next if 1 .. N;
> 
>     do something;
> }

    Why bother doing the test once you've skipped?

  while (<IN>) { last if $. == $skip }
  while (<IN>) {
    ...code to run after $skip lines...
  }



-- 
              Just because I've written it doesn't mean that
                   either you or I have to believe it.


------------------------------

Date: 27 Jun 2007 18:41:36 GMT
From: blmblm@myrealbox.com <blmblm@myrealbox.com>
Subject: Re: The Modernization of Emacs: terminology buffer and keybinding
Message-Id: <5efpavF392efmU1@mid.individual.net>

In article <1182811569.871702.207010@n2g2000hse.googlegroups.com>,
Twisted  <twisted0n3@gmail.com> wrote:
> On Jun 25, 5:32 pm, blm...@myrealbox.com <blm...@myrealbox.com> wrote:
> > To me it's similar to "memorizing" a phone number by dialing
> > it enough times that it makes its way into memory without
> > conscious effort.  I suspect that not everyone's brain works
> > this way, and some people have to look it up every time.
> > For those people, I can understand why something without a
> > GUI could be a painful experience.  "YMMV", maybe.
> 
> You'll be happy to know then that my opinion is that the phone system
> is archaic too. 

Happy?  Not especially.  Amused?  Yes.  One more, um, "eccentric"?
opinion, from someone with many of them.

[ snip ]

-- 
B. L. Massingill
ObDisclaimer:  I don't speak for my employers; they return the favor.


------------------------------

Date: 27 Jun 2007 18:59:33 GMT
From: blmblm@myrealbox.com <blmblm@myrealbox.com>
Subject: Re: The Modernization of Emacs: terminology buffer and keybinding
Message-Id: <5efqclF37fmubU1@mid.individual.net>

In article <1182905443.074236.128290@w5g2000hsg.googlegroups.com>,
Twisted  <twisted0n3@gmail.com> wrote:

[ snip ]

> I'm wondering if getting your head around unix arcana is also
> dependent on an iffy "knack" where you "get it" and somehow know where
> to look for documentation and problem fixes, despite everything having
> its own idiosyncratic way, and "get" some sort of workflow trick
> going, or you don't. 

Well ....  For me I think the crucial thing was having Unix experts
on tap when I was learning -- someone to answer my questions
patiently, and also to show me things I might not have found on
my own.  Some combination of Usenet groups and books might have
served the same purpose.

I find Windows and its tools as frustrating as you seem to find
Unix, but I strongly suspect that being shown the ropes by someone
who understands and likes the system would help a lot.  <shrug>

> Personally, the thing I always found most
> irritating was the necessary frequent trips to the help. Even when the
> help was easy to use (itself rare) that's a load of additional task
> switching and crap. Of course, lots of the time the help was not easy
> to use. Man pages and anything else viewed on a console, for example
> -- generally you could not view it side by side with your work, but
> instead interrupt the work, view it, try to memorize the one next
> step, go back to your work, perform that next step, back to the help
> to memorize another step ... that has all the workflow of a backed-up
> sewer, yet until and unless the commands become second nature it's
> what you're typically forced to do without a proper GUI. 

[ I'm trying to imagine circumstances in which I would say "proper
GUI" and .... not succeeding.  "Proper command line", now that
I say sometimes ....  :-)? ]

About not being able to view help side by side with one's work,
though ....   You probably haven't heard the joke about how a
window manager is a mechanism for having multiple xterms (terminal
windows) on the screen at the same time, and a mouse is a device
for selecting which one should have the focus?  Well, I like it.

[ snip ]

> Maybe the thing I really, REALLY deplore is simply having 99% of my
> attention forced to deal with the mechanics of the job and the
> mechanics of the help viewer and only 1% with the actual content of
> the job, instead of the other way around.

Exactly my experience of trying to use MS Office tools to do quick
edits under time pressure.  

-- 
B. L. Massingill
ObDisclaimer:  I don't speak for my employers; they return the favor.


------------------------------

Date: 27 Jun 2007 19:24:38 GMT
From: blmblm@myrealbox.com <blmblm@myrealbox.com>
Subject: Re: The Modernization of Emacs: terminology buffer and keybinding
Message-Id: <5efrrmF37rfioU1@mid.individual.net>

In article <87r6nzc9nu.fsf@mail.eng.it>,
Gian Uberto Lauri  <saint@spammer.impiccati.it> wrote:
> >>>>> "n" == nebulous99  <nebulous99@gmail.com> writes:
> 
> n> On Jun 22, 6:32 pm, Cor Gest <c...@clsnet.nl> wrote:
> >> > HOW IN THE BLOODY HELL IS IT SUPPOSED TO OCCUR TO SOMEONE TO
> >> ENTER > THEM, GIVEN THAT THEY HAVE TO DO SO TO REACH THE HELP THAT
> >> WOULD TELL > THEM THOSE ARE THE KEYS TO REACH THE HELP?!
> >> 
> >> What's your problem ?
> >> 
> >> Ofcourse a mere program-consumer would not look what was being
> >> installed on his/her system in the first place ...  So after some
> >> trivial perusing what was installed and where : WOW Look, MA !
> >> .... it's all there!
> >> 
> >> lpr /usr/local/share/emacs/21.3/etc/refcard.ps or your
> >> install-dir........^ ^ or your
> >> version.............................^
> 
> n> So now we're expected to go on a filesystem fishing expedition
> n> instead of just hit F1? One small step (backwards) for a man; one
> n> giant leap (backwards) for mankind. :P
> 
> Warning, possible ID TEN T detected!
> 
> There's a program called find, not that intuitive but worth learning
> 
> It could solve the problem from the root with something like
> 
> find / -name refcard.ps -exec lpr {} \; 2> /dev/null

Let's not make this any worse than it has to be ....  

If I wanted to find files that might have documentation on emacs,
I wouldn't look for filename refcard.ps; I'd try either

locate emacs

(Linux only AFAIK, won't find recently-added files because it
searches against a database usually rebuilt once a day)

or

find / -name "*emacs*" 


You are so right that "find" is worth learning, but I'm not sure
it's a tool I'd have mentioned in this particular discussion,
because it *does* take a bit of learning.  I wasn't surprised at
all that you got the reply you did.  :-)?

And as you mention downthread, any time you use "find" to execute
a command that might be costly (in terms of paper and ink in this
case), it's smart to first do a dry run to be sure the files it's
going to operate on are the ones you meant.

Whether "find" is better or worse than the GUI-based thing Windows
provides for searching for files ....  I guess this is really
not the time or place to rant about the puppy, is it?  (Yes,
I know you can make the animated character go away, something I
discovered just in time to avoid -- well, I'm not sure, but it
wouldn't have been good.)


[ snip ]

-- 
B. L. Massingill
ObDisclaimer:  I don't speak for my employers; they return the favor.


------------------------------

Date: Wed, 27 Jun 2007 21:07:40 -0000
From:  DMB <dummymb@hotmail.com>
Subject: using wildcards with -e
Message-Id: <1182978460.302199.68150@m36g2000hse.googlegroups.com>

I have the following code:

if (-e $done_dir."BOB.DOC") {
	print "found it\n";
} else {
	print "nope\n";
}

where $done_dir contains the directory path of the file I'm looking
for.

I'd like to make it so that it doesn't matter what the filename is.
I'm only interested in the suffix.  I was thinking along the lines
of...

if (-e $done_dir."*.DOC") {
	print "found it\n";
} else {
	print "nope\n";
}

but of course this does not work in perl.  I'm sure there is an easy
solution to this, but I can't seem to find a example online or in any
of my books of using -e and wildcards.

Thank you for your assistance.



------------------------------

Date: Wed, 27 Jun 2007 21:13:01 +0000
From: Peter Makholm <peter@makholm.net>
Subject: Re: using wildcards with -e
Message-Id: <87r6nxumn6.fsf@makholm.net>

DMB <dummymb@hotmail.com> writes:

> I'd like to make it so that it doesn't matter what the filename is.
> I'm only interested in the suffix.  I was thinking along the lines
> of...
>
> if (-e $done_dir."*.DOC") {
> 	print "found it\n";
> } else {
> 	print "nope\n";
> }

You might want to look at the glob function which expands these kinds
of wildcards into a list of files. So something like

if (grep { -e $_ } glob("$done_dir/*.DOC") ) {
    print "Found it";
}

//Makholm




------------------------------

Date: Wed, 27 Jun 2007 21:20:21 -0000
From:  DMB <dummymb@hotmail.com>
Subject: Re: using wildcards with -e
Message-Id: <1182979221.282799.222790@w5g2000hsg.googlegroups.com>

On Jun 27, 4:13 pm, Peter Makholm <p...@makholm.net> wrote:
> DMB <dumm...@hotmail.com> writes:
> > I'd like to make it so that it doesn't matter what the filename is.
> > I'm only interested in the suffix.  I was thinking along the lines
> > of...
>
> > if (-e $done_dir."*.DOC") {
> >    print "found it\n";
> > } else {
> >    print "nope\n";
> > }
>
> You might want to look at the glob function which expands these kinds
> of wildcards into a list of files. So something like
>
> if (grep { -e $_ } glob("$done_dir/*.DOC") ) {
>     print "Found it";
>
> }
>
> //Makholm

Great!  I'm reading on glob now.  I'm not sure I fully understand it
yet, but your example worked perfectly when I tested it.

Thank you.



------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 591
**************************************

