
Perl-Users Digest, Issue: 4366 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Feb 9 03:09:21 2015

Date: Mon, 9 Feb 2015 00:09:05 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 9 Feb 2015     Volume: 11 Number: 4366

Today's topics:
    Re: Am I adding an unnecessary step? <rweikusat@mobileactivedefense.com>
    Re: First time installing a CPAN module; questions on s <news@lawshouse.org>
        Traversing through sub dirs and read file contents smilesonisamal@gmail.com
    Re: Traversing through sub dirs and read file contents <news@lawshouse.org>
    Re: Traversing through sub dirs and read file contents <gravitalsun@hotmail.foo>
    Re: Traversing through sub dirs and read file contents <jurgenex@hotmail.com>
    Re: Traversing through sub dirs and read file contents <gamo@telecable.es>
    Re: Traversing through sub dirs and read file contents <jurgenex@hotmail.com>
    Re: Traversing through sub dirs and read file contents smilesonisamal@gmail.com
    Re: Traversing through sub dirs and read file contents <gamo@telecable.es>
    Re: Traversing through sub dirs and read file contents <rweikusat@mobileactivedefense.com>
    Re: Traversing through sub dirs and read file contents <rweikusat@mobileactivedefense.com>
    Re: Traversing through sub dirs and read file contents <bauhaus@futureapps.invalid>
    Re: Traversing through sub dirs and read file contents <jurgenex@hotmail.com>
    Re: Traversing through sub dirs and read file contents <news@lawshouse.org>
    Re: Traversing through sub dirs and read file contents <gravitalsun@hotmail.foo>
    Re: Traversing through sub dirs and read file contents <bauhaus@futureapps.invalid>
        Whitespace in code    (was: First time installing a CPA <hjp-usenet3@hjp.at>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sun, 08 Feb 2015 18:11:12 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Am I adding an unnecessary step?
Message-Id: <87vbjci88v.fsf@doppelsaurus.mobileactivedefense.com>

"Peter J. Holzer" <hjp-usenet3@hjp.at> writes:
> On 2015-02-05 18:09, Rainer Weikusat <rweikusat@mobileactivedefense.com> wrote:
>> Mart van de Wege <mvdwege@gmail.com> writes:
>>> The main stumbling block is the fact that Perl5 does not have
>>> multidimensional arrays,
[...]

>> the obvious solution (AFAIK also used by every programming language
>> under the sun) is to use an 1D array whose size is the product of the
>> sizes of all dimensions and then linearize my n-D coordinates onto
>> that.

[...]

>> but instead of doing this

[...]

>> people usually come up with is use arrays to the nth power

[...]

>> end up with memory management code eating all their hair

[...]

> For some languages, like Perl, because the latter is already directly
> supported and the hair-eating memory management code has already been
> written by Other People(TM). Why should I write some code allocating
> arrays and multiplying indexes if I can just write $a->[$i][$j][$k]?

Because you really can't, at least not all of the time: While the code
freeing such a data structure already exists, the code creating one
doesn't. Eg, assuming a 5 x 6 x 731 x 13 structure with all elements
initialized to -43 is to be created, this can be open-coded as long list
of assignments of the form

$m[0][0][0][0] = -43
$m[0][0][0][1] = -43
$m[0][0][0][2] = -43

 .
 .
 .

$m[4][5][730][12] = - 43;

but more likely, it will end up as set of nested loops and the perl
runtime will infer when an actual memory allocation routine has to be
called based on the index changes (which will likely end up wasting a
lot of memory beyond what the access arrays already waste because of
overallocation based on assumptions about future array growth).

For a linearized representation, it could be just

push(@$aref, -43) for 1 .. 5 * 6 * 731 * 13;
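For the record, a small self-contained sketch of that linearization (dimension
sizes from the example above; the `offset` helper and its names are mine, not
from any posted code):

```perl
#!/usr/bin/perl
# Sketch of linearized n-D indexing for the 5 x 6 x 731 x 13 example
# above.  One flat array plus row-major index arithmetic.
use strict;
use warnings;

my @dims = (5, 6, 731, 13);
my $total = 1;
$total *= $_ for @dims;

# every element initialized in a single pass, no nested loops needed
my @m = (-43) x $total;

# map ($i, $j, $k, $l) onto a single offset, row-major order
sub offset {
    my ($i, $j, $k, $l) = @_;
    return (($i * $dims[1] + $j) * $dims[2] + $k) * $dims[3] + $l;
}

print $m[ offset(4, 5, 730, 12) ], "\n";   # last element, still -43
```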


------------------------------

Date: Sat, 07 Feb 2015 10:06:07 +0000
From: Henry Law <news@lawshouse.org>
Subject: Re: First time installing a CPAN module; questions on skipped tests.
Message-Id: <xc-dnZRURMQ-fkjJnZ2dnUVZ8vydnZ2d@giganews.com>

On 05/02/15 18:41, Peter J. Holzer wrote:
> are there trailing spaces in the source
> code, ...

I hope this doesn't constitute thread hijacking, but I'd like to ask why 
this is considered to be a bad thing.

Personally I hate it, and when I am editing code which contains trailing 
white space I take it out, but I thought it was just a foible of mine. 
Can you explain, Peter?

-- 

Henry Law            Manchester, England


------------------------------

Date: Fri, 6 Feb 2015 21:58:33 -0800 (PST)
From: smilesonisamal@gmail.com
Subject: Traversing through sub dirs and read file contents
Message-Id: <1ecc25b0-224b-47dd-b6ed-d162f1ffd8af@googlegroups.com>

Hi all,
   I have written a small program to recursively go through the sub
dirs (~400) and read the file contents from each of the sub-dirs.
Currently the program runs very slow in win7 32 bit. Is there a way I
can speed up the execution?

Regards
Pradeep


#! /usr/bin/perl
use strict;
use warnings;
use File::Basename;
use File::Find;
use Data::Dumper;

#variable/array declarations
my ($fh,$line,$dir,$fp,$base_dir,$dh,$file);
my (@dir,@dir_names,@filenames);

#path declarations
$base_dir = 'C:\perl\bin';

@dir = $base_dir;


#read through the given directory path to look for dir/subdir's
while (@dir) {
   $dir = pop(@dir);
   opendir($dh, $dir);

   while($file = readdir($dh)) {
      next if $file eq '.';
      next if $file eq '..';
      
      $file = "$dir/$file";
      
      print "system(pwd)"
      if (-d $file) {
         push(@dir, $file);
         push(@dir_names, basename($file));
         } 
        elsif (-f $file) {
         $file =~ s/.*\///;
         $file =~ s/\.[^.]+$//;
         push(@filenames, $file);
         print $file;
         my $filename = 'ppm-shell.bat';
         open(my $fh,$filename) or die "Could not open file '$filename' $!";
         while (my $row = <$fh>) {
           chomp $row;
           print "$row\n";
        }
     } 
   } 
}
        


------------------------------

Date: Sat, 07 Feb 2015 10:28:46 +0000
From: Henry Law <news@lawshouse.org>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <NMudnRyjaYdvdUjJnZ2dnUVZ7sSdnZ2d@giganews.com>

On 07/02/15 05:58, smilesonisamal@gmail.com wrote:

>    I have written a small program to recursively go through the sub dirs(~400) and read
> the file contents from each of the sub-dirs. Currently the program runs very slow in win7
> 32 bit . Is there a way I can speed up the execution?


>
> #! /usr/bin/perl
> use strict;
> use warnings;
> use File::Basename;
> use File::Find;
> use Data::Dumper;

> #read through the given directory path to look for dir/subdir's
> while (@dir) {
>     $dir = pop(@dir);
>     opendir($dh, $dir);

Why have you imported File::Find and then not used it?  I'd suggest you 
remake the code to use File::Find and only then start optimising 
performance.

untested ...

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $base_dir = shift or die "Usage: $0 base_dir\n";
die "'$base_dir' isn't a directory\n" unless -d $base_dir;

find( \&process_it, $base_dir );

sub process_it {
   return if -d;       # skip directories ($_ is the current entry)
   print "$File::Find::name\n";
   open my $FH, '<', $_    # find() has chdir'd here, so open by basename
     or die "Couldn't open '$File::Find::name': $!\n";
   print while <$FH>;  # Or whatever you really want to do ...
   close $FH;
}


-- 

Henry Law            Manchester, England


------------------------------

Date: Sat, 07 Feb 2015 18:43:21 +0200
From: George Mpouras <gravitalsun@hotmail.foo>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb5fb8$6be$1@news.ntua.gr>

On 7/2/2015 7:58 AM, smilesonisamal@gmail.com wrote:
> Hi all,
>     I have written a small program to recursively go through the sub dirs(~400) and read the file contents from each of the sub-dirs. Currently the program runs very slow in win7 32 bit . Is there a way I can speed up the execution?
>
> Regards
> Pradeep
>

Your code has some errors; also it is not clear whether you want to
read a specific file or all of them.

What exactly do you want to do for every file ?
What exactly do you want to do for every directory ?



------------------------------

Date: Sat, 07 Feb 2015 11:35:42 -0800
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <40qcda96kcp8t8b9s3dmrneh4gk8j5fjls@4ax.com>

smilesonisamal@gmail.com wrote:
>Hi all,
>   I have written a small program to recursively go through the sub dirs(~400) and read the file contents from each of the sub-dirs. Currently the program runs very slow in win7 32 bit . Is there a way I can speed up the execution?

Please limit your line length to ~75 characters as has been a proven
custom in Usenet for the past 3 decades.

>use File::Find;

You "use File::Find" but you are not using it. Instead you are
using your own home-brewed version of a linearized algorithm which would
be much easier to understand and less error-prone if it were written
recursively. Why are you doing that?

Learn how to take advantage of File::Find, test if it is fast enough (I
bet it is), and be happy.

jue


------------------------------

Date: Sat, 07 Feb 2015 23:44:18 +0100
From: gamo <gamo@telecable.es>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb64g5$9gl$2@speranza.aioe.org>

On 07/02/15 at 20:35, jurgenex@hotmail.com wrote:
> smilesonisamal@gmail.com wrote:
>> Hi all,
>>   I have written a small program to recursively go through the sub
>> dirs(~400) and read the file contents from each of the sub-dirs.
>> Currently the program runs very slow in win7 32 bit . Is there a way
>> I can speed up the execution?
> 
> Please limit your line length to ~75 characters as has been a proven
> custom in Usenet for the past 3 decades.
> 
>> use File::Find;
> 
> You "use File::Find" but you are not using it. Instead you are
> using your own home-brewed version of a linearized algorithm which would
> be much easier to understand and less error-prone if it were written
> recursively. Why are you doing that?
> 
> Learn how to take advantage of File::Find, test if it is fast enough (I
> bet it is), and be happy.
> 
> jue
> 

There is nothing wrong with not using File::Find. It's ugly.
However, you must first take into account whether you are dealing
with absolute or relative paths. The easy choice is to use only
relative paths (starting at '.') and do the thing recursively,
as Jurgen suggested. 'chdir' is your friend.
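An untested sketch of that approach (recursive descent with chdir and names
relative to the current directory; `walk` and its callback parameter are my
own illustrative additions, not posted code):

```perl
#!/usr/bin/perl
# Recursive traversal using chdir and relative paths, as suggested
# above.  A sketch only; names are illustrative.
use strict;
use warnings;
use Cwd qw(getcwd);

sub walk {
    my ($dir, $cb) = @_;
    my $old = getcwd();
    chdir $dir or die "chdir '$dir': $!";
    opendir my $dh, '.' or die "opendir '$dir': $!";
    for my $entry (readdir $dh) {
        next if $entry eq '.' or $entry eq '..';
        if    (-d $entry) { walk($entry, $cb) }  # recurse into subdir
        elsif (-f $entry) { $cb->($entry) }      # plain file: hand it over
    }
    closedir $dh;
    chdir $old or die "chdir back to '$old': $!";
}

# usage: script.pl <start-dir>; prints every file name found
walk($ARGV[0], sub { print "$_[0]\n" }) if @ARGV;
```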

Best regards.

-- 
http://www.telecable.es/personales/gamo/
The generation of random numbers is too important to be left to chance


------------------------------

Date: Sat, 07 Feb 2015 16:11:29 -0800
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <19addad8dsgopqrffr10b4krug8bh5nuai@4ax.com>

gamo <gamo@telecable.es> wrote:
>On 07/02/15 at 20:35, jurgenex@hotmail.com wrote:
>> Learn how to take advantage of File::Find, test if it is fast enough (I
>> bet it is), and be happy.
>
>There is nothing wrong for not using File::Find. It's ugly.

Maybe. 
But it is doing 95% of the leg work for you and it has been used by many
people for many years, therefore chances are high that it has less
errors than any home-cooked hack.

jue


------------------------------

Date: Sun, 8 Feb 2015 05:16:43 -0800 (PST)
From: smilesonisamal@gmail.com
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <e92418b1-afcf-464a-8f14-d95baa4a6ab4@googlegroups.com>

On Saturday, February 7, 2015 at 10:13:27 PM UTC+5:30, George Mpouras wrote:
> On 7/2/2015 7:58 AM, smilesonisamal@gmail.com wrote:
> > Hi all,
> >     I have written a small program to recursively go through the sub
> > dirs(~400) and read the file contents from each of the sub-dirs.
> > Currently the program runs very slow in win7 32 bit . Is there a way
> > I can speed up the execution?
> >
> > Regards
> > Pradeep
> >
> 
> Your code has some errors; also it is not clear if you want to read a
> specific file or all of them.
> 
> What exactly do you want to do for every file ?
> What exactly do you want to do for every directory ?

Sorry for not being clear.
Each sub-directory has a lot of files, so for simplicity I want to read
at least one file to check whether the files are readable or not.
Reading all the files inside the sub-directories would take a lot of
time, as there are a lot of files, and would require sophisticated
algorithms for fast performance.

I don't want to perform any special operations on the files/dirs except
reading a file after finding it.

For example:
Base-dir -> e.g. C:\Perl\Bin
Let's say there is a file inside a directory called
C:\Perl\bin\ppm\ppm-shell.bat.

So my program should traverse all the files starting from the
C:\Perl\bin directory, find the file ppm-shell.bat, and read it.


------------------------------

Date: Sun, 08 Feb 2015 14:43:49 +0100
From: gamo <gamo@telecable.es>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb7p6q$pff$1@speranza.aioe.org>

On 08/02/15 at 01:11, jurgenex@hotmail.com wrote:
> gamo <gamo@telecable.es> wrote:
>> On 07/02/15 at 20:35, jurgenex@hotmail.com wrote:
>>> Learn how to take advantage of File::Find, test if it is fast enough (I
>>> bet it is), and be happy.
>>
>> There is nothing wrong for not using File::Find. It's ugly.
> 
> Maybe. 
> But it is doing 95% of the leg work for you and it has been used by many
> people for many years, therefore chances are high that it has less
> errors than any home-cooked hack.
> 
> jue
> 

Maybe.
But it's a sub that is easy to check by adding print "$filename\n";

-- 
http://www.telecable.es/personales/gamo/
The generation of random numbers is too important to be left to chance


------------------------------

Date: Sun, 08 Feb 2015 14:02:29 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <87lhk833ii.fsf@doppelsaurus.mobileactivedefense.com>

smilesonisamal@gmail.com writes:
>    I have written a small program to recursively go through the sub
>    dirs(~400) and read the file contents from each of the
>    sub-dirs. Currently the program runs very slow in win7 32 bit . Is
>    there a way I can speed up the execution?

JFTR: This is not a recursive traversal algorithm but an iterative one
utilizing a stack of directories to visit.

[...]

> #read through the given directory path to look for dir/subdir's
> while (@dir) {
>    $dir = pop(@dir);
>    opendir($dh, $dir);
>
>    while($file = readdir($dh)) {
>       next if $file eq '.';
>       next if $file eq '..';
>       
>       $file = "$dir/$file";
>       
>       print "system(pwd)"

You shouldn't ever actually execute this as it's a very expensive
operation (requires traversing from the current directory back to the
filesystem root) and the program already knows all of the information.


>       if (-d $file) {
>          push(@dir, $file);
>          push(@dir_names, basename($file));
>          } 
>         elsif (-f $file) {
>          $file =~ s/.*\///;
>          $file =~ s/\.[^.]+$//;

Why do you first prepend the directory to the name in order to strip it
away again afterwards?


------------------------------

Date: Sun, 08 Feb 2015 14:09:41 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <87h9uw336i.fsf@doppelsaurus.mobileactivedefense.com>

gamo <gamo@telecable.es> writes:
> On 08/02/15 at 01:11, jurgenex@hotmail.com wrote:
>> gamo <gamo@telecable.es> wrote:
>>> On 07/02/15 at 20:35, jurgenex@hotmail.com wrote:
>>>> Learn how to take advantage of File::Find, test if it is fast enough (I
>>>> bet it is), and be happy.
>>>
>>> There is nothing wrong for not using File::Find. It's ugly.
>> 
>> Maybe. 
>> But it is doing 95% of the leg work for you and it has been used by many
>> people for many years, therefore chances are high that it has less
>> errors than any home-cooked hack.

That's a non-sequitur[*] and it tries to plaster over this with 'marketing
lingo'. If there's an error in the directory-traversal part of the
posted code, how about pointing it out?

File::Find is a walking horror.

[*] Code doesn't change by being used hence, lots of users doesn't equal
bugs getting fixed and 'many people' have been using all kinds of 'buggy
software' 'for many years', even voluntarily so (sometimes, even bugs
are intentional ...).


------------------------------

Date: Sun, 08 Feb 2015 16:36:31 +0100
From: Georg Bauhaus <bauhaus@futureapps.invalid>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb7vor$2sr$1@dont-email.me>

On 08.02.15 15:09, Rainer Weikusat wrote:
> File::Find is a walking horror.

+1




------------------------------

Date: Sun, 08 Feb 2015 09:20:24 -0800
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <0q5fdall9qg4amf7rnadkmklon1re67gg3@4ax.com>

Rainer Weikusat <rweikusat@mobileactivedefense.com> wrote:
>gamo <gamo@telecable.es> writes:
>> On 08/02/15 at 01:11, jurgenex@hotmail.com wrote:
>>> gamo <gamo@telecable.es> wrote:
>>>> On 07/02/15 at 20:35, jurgenex@hotmail.com wrote:
>>>>> Learn how to take advantage of File::Find, test if it is fast enough (I
>>>>> bet it is), and be happy.
>>>>
>>>> There is nothing wrong for not using File::Find. It's ugly.
>>> 
>>> Maybe. 
>>> But it is doing 95% of the leg work for you and it has been used by many
>>> people for many years, therefore chances are high that it has less
>>> errors than any home-cooked hack.
>
>That's a non-sequitur[*] and it tries to plaster over this with 'marketing
>lingo'.

If you insist on splitting hairs let me rephrase:
It has been used by many people for many years, therefore chances are
good that any error in the code has been exposed by now. And because the
Perl community is known to care about core modules chances are high that
any error found has been fixed. And therefore chances are high it will
have less errors than any home-cooked hack.

>File::Find is a walking horror.

If it does in 2 lines what the OP needed 25 lines for, including
implementing his own stack of directories, then _I_ know which one _I_
would choose. YMMV.

>[*] Code doesn't change by being used hence, lots of users doesn't equal
>bugs getting fixed 

But it equals to bugs getting found.

>and 'many people' have been using all kinds of 'buggy
>software' 'for many years', even voluntarily so (sometimes, even bugs
>are intentional ...).

Oh, go do your nit-picking on your own.


------------------------------

Date: Sun, 08 Feb 2015 21:01:27 +0000
From: Henry Law <news@lawshouse.org>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <UoadnYoXPNMjU0rJnZ2dnUVZ8smdnZ2d@giganews.com>

On 08/02/15 14:09, Rainer Weikusat wrote:
> File::Find is a walking horror

Well, IANPH but I use it all the time for doing what it's designed to 
do: follow a tree and let me do whatever I want with directories and 
files it finds.  It may have some exotic fault that I wot not of, but 
for the base use case it works flawlessly.

-- 

Henry Law            Manchester, England


------------------------------

Date: Sun, 08 Feb 2015 23:10:10 +0200
From: George Mpouras <gravitalsun@hotmail.foo>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb8jbj$1o0a$1@news.ntua.gr>

> Each sub-directory has a lot of files, so for simplicity I want to
> read at least one file to check whether the files are readable or
> not. Reading all the files inside the sub-directories would take a
> lot of time, as there are a lot of files, and would require
> sophisticated algorithms for fast performance.
>
> I don't want to perform any special operations on the files/dirs
> except reading a file after finding it.
>
> For example:
> Base-dir -> e.g. C:\Perl\Bin
> Let's say there is a file inside a directory called
> C:\Perl\bin\ppm\ppm-shell.bat.
>
> So my program should traverse all the files starting from the
> C:\Perl\bin directory, find the file ppm-shell.bat, and read it.


Ok, if so, the following code will be very fast for what you want.
From every directory it picks one file and reads it in binary mode to
decide whether it is readable. Hope you like it.




#! /usr/bin/perl
use strict;
use warnings;

my $threads = 4;
my %One_file_per_directory;
my @WorkLoad_for_every_thread;

CollectSampleFiles('C:/Perl/Bin');

# Lets press the turbo button !
foreach (keys %One_file_per_directory)
{
	push @WorkLoad_for_every_thread, $One_file_per_directory{$_};

	if ($threads == scalar @WorkLoad_for_every_thread)
	{
		CheckFilesInParallel(@WorkLoad_for_every_thread);
		@WorkLoad_for_every_thread = ();
	}
}
CheckFilesInParallel(@WorkLoad_for_every_thread);


sub CheckFilesInParallel
{
	$| = 1;
	my @Tids = ();

	foreach my $file (@_)
	{
		my $tid = fork;
		die "Could not fork because \"$^E\"\n" unless defined $tid;

		if (0 == $tid)
		{
			print "Thread $$ says that file \"$file\" is ",
			      TestFileCorruption($file), "\n";
			exit;
		}
		else
		{
			push @Tids, $tid;
		}
	}

	for (@Tids) { waitpid $_, 0 }
}


sub CollectSampleFiles
{
	opendir my $dh, $_[0] or die "Error reading \"$_[0]\" , \"$!\"\n";

	foreach (grep ! /^\.{1,2}$/, readdir $dh)
	{
		$_ = "$_[0]/$_";

		if (-f $_)
		{
			$One_file_per_directory{$_[0]} = $_
			  unless exists $One_file_per_directory{$_[0]};
		}
		else
		{
			CollectSampleFiles($_);
		}
	}

	closedir $dh;
}


sub TestFileCorruption
{
	open FILE, '<:raw', $_[0]
	  or die "Could not read file \"$_[0]\" because \"$!\"\n";
	my $buffer;
	until (eof FILE) { read FILE, $buffer, 16000 or return 'corrupt' }
	close FILE;
	return 'ok';
}



------------------------------

Date: Mon, 09 Feb 2015 09:00:10 +0100
From: Georg Bauhaus <bauhaus@futureapps.invalid>
Subject: Re: Traversing through sub dirs and read file contents
Message-Id: <mb9pd6$as3$1@dont-email.me>

On 08.02.15 22:01, Henry Law wrote:
> On 08/02/15 14:09, Rainer Weikusat wrote:
>> File::Find is a walking horror
>
> Well, IANPH but I use it all the time for doing what it's designed to do: follow a tree and let me do whatever I want with directories and files it finds.  It may have some exotic fault that I wot not of, but for the base use case it works flawlessly.
>

One thing the OP mentioned was "a lot of files" per directory.
Then, looking at

   @filenames = readdir DIR;

in File::Find, isn't that indicative of a classic design choice
that may not be optimal (or not working at all), in the OP's case?



------------------------------

Date: Sun, 8 Feb 2015 11:12:19 +0100
From: "Peter J. Holzer" <hjp-usenet3@hjp.at>
Subject: Whitespace in code    (was: First time installing a CPAN module; questions on skipped tests.)
Message-Id: <slrnmdedk3.b2m.hjp-usenet3@hrunkner.hjp.at>

On 2015-02-07 10:06, Henry Law <news@lawshouse.org> wrote:
> On 05/02/15 18:41, Peter J. Holzer wrote:
>> are there trailing spaces in the source
>> code, ...
>
> I hope this doesn't constitute thread hijacking, but I'd like to ask why 
> this is considered to be a bad thing.
>
> Personally I hate it, and when I am editing code which contains trailing 
> white space I take it out, but I thought it was just a foible of mine. 
> Can you explain, Peter?

No. As I wrote further down:

>> I'm not sure what problems Test::TrailingSpace is supposed to help
>> preventing

"I'm not sure" is a euphemism for "I have no idea". Perl doesn't care
about trailing spaces. Trailing space within multiline string constants
is significant, of course, and that may affect some applications, but in
this case you probably shouldn't use multiline strings anyway.

There may be some reasons which have nothing to do with the code but
with the programming environment. For example, some editors mark
trailing spaces (in fact, I have configured vim to do this), and that
looks ugly, and may cause problems when copying from the terminal (the
terminal doesn't know that the marker is really supposed to be a space).

But that doesn't seem sufficient reason to me to add a test checking for
white space. If a line looks ugly in my editor because of trailing
whitespace I can remove that space right then. 

There are two other kinds of whitespace I care about: Line endings and
(leading) tabs.

Perl generally doesn't care about line endings, but the OS may: for
example, a CR in the shebang line can prevent execution on Linux. Also,
embedded line endings in string constants may mess up the output.

Tabs, especially leading tabs, are a problem if several people work on
the code with different preferences on how tabs are to be treated: Some
people have tab stops set to every 4 or even every 2 characters, some
have their editor configured to convert tabs to spaces or - even worse -
spaces to tabs (and some editors convert the whole file and some only
lines you edit). The result is a huge mess. Unless you have a very
strict policy on how to use tabs it is best to avoid them completely.
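A tiny checker along those lines (my sketch; Test::TrailingSpace's actual
interface may differ, and `check_line` is an illustrative name):

```perl
#!/usr/bin/perl
# Report trailing whitespace and leading tabs, in the spirit of the
# discussion above.  A sketch; not the real Test::TrailingSpace module.
use strict;
use warnings;

sub check_line {
    my ($line) = @_;
    my @issues;
    push @issues, 'trailing whitespace' if $line =~ /[ \t]+\r?\n?\z/;
    push @issues, 'leading tab'         if $line =~ /^\t/;
    return @issues;
}

for my $file (@ARGV) {
    open my $fh, '<', $file or die "Can't open '$file': $!";
    while (my $line = <$fh>) {
        print "$file:$.: $_\n" for check_line($line);
    }
    close $fh;
}
```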

        hp


-- 
   _  | Peter J. Holzer    | Fluch der elektronischen Textverarbeitung:
|_|_) |                    | Man feilt solange an seinen Text um, bis
| |   | hjp@hjp.at         | die Satzbestandteile des Satzes nicht mehr
__/   | http://www.hjp.at/ | zusammenpaßt. -- Ralph Babel


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 4366
***************************************

