
Perl-Users Digest, Issue: 2948 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon May 17 14:09:26 2010

Date: Mon, 17 May 2010 11:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 17 May 2010     Volume: 11 Number: 2948

Today's topics:
    Re: determining whether a server supports secure authen <nospam-abuse@ilyaz.org>
    Re: determining whether a server supports secure authen sln@netherlands.com
    Re: FAQ 5.4 How do I delete the last N lines from a fil sln@netherlands.com
    Re: FAQ 5.4 How do I delete the last N lines from a fil <ralph@happydays.com>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <willem@turtle.stack.nl>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <uri@StemSystems.com>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <willem@turtle.stack.nl>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <willem@turtle.stack.nl>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <ralph@happydays.com>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <uri@StemSystems.com>
    Re: FAQ 5.4 How do I delete the last N lines from a fil <uri@StemSystems.com>
    Re: how do i find the max value out of an array? sln@netherlands.com
    Re: MinGW and Perl 5.12 - Windows 64 bits ActiveState sln@netherlands.com
    Re: MinGW and Perl 5.12 - Windows 64 bits ActiveState sln@netherlands.com
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sun, 16 May 2010 23:08:53 +0000 (UTC)
From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: determining whether a server supports secure authentication
Message-Id: <slrnhv0uo5.qoq.nospam-abuse@powdermilk.math.berkeley.edu>

On 2010-05-16, Uno <merrilljensen@q.com> wrote:
> I was "sure" that I was using SSL, and in my head it sounded right that 
> a secure socket layer would employ secure authentication.  They are 
> completely separate notions.

Secure connection makes absolutely no sense without secure
authentication (well, "almost" - one can invent a FEW types of attacks
which may be stopped by "just SSL" - but why would the attackers
restrict themselves?).

The standard analogy of secure connection is sending a parcel guarded
by a policeman on route.  The standard analogy of having no secure
authentication is leaving a package on a bench in a public park so
that the other party may come and pick it up.  Now imagine doing
both...

> So, problem solved by unchecking a box.

Hardly.

Hope this helps,
Ilya


------------------------------

Date: Sun, 16 May 2010 17:34:39 -0700
From: sln@netherlands.com
Subject: Re: determining whether a server supports secure authentication
Message-Id: <6o31v5dpip7fqj0786qpjsp3lvaejpk66k@4ax.com>

On Sun, 16 May 2010 23:08:53 +0000 (UTC), Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:

>On 2010-05-16, Uno <merrilljensen@q.com> wrote:
>> I was "sure" that I was using SSL, and in my head it sounded right that 
>> a secure socket layer would employ secure authentication.  They are 
>> completely separate notions.
>
>Secure connection makes absolutely no sense without secure
>authentication (well, "almost" - one can invent a FEW types of attacks
>which may be stopped by "just SSL" - but why would the attackers
>restrict themselves?).
>
>The standard analogy of secure connection is sending a parcel guarded
>by a policeman on route.  The standard analogy of having no secure
>authentication is leaving a package on a bench in a public park so
>that the other party may come and pick it up.  Now imagine doing
>both...
>
>> So, problem solved by unchecking a box.
>
>Hardly.
>

This all sounds very criminal.

-sln


------------------------------

Date: Sun, 16 May 2010 17:26:02 -0700
From: sln@netherlands.com
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <r321v5dlo9svjf653qpplv8s1d2nr8hg6f@4ax.com>

On Sun, 16 May 2010 04:00:02 GMT, PerlFAQ Server <brian@theperlreview.com> wrote:

>This is an excerpt from the latest version perlfaq5.pod, which
>comes with the standard Perl distribution. These postings aim to 
>reduce the number of repeated questions as well as allow the community
>to review and update the answers. The latest version of the complete
>perlfaq is at http://faq.perl.org .
>
>--------------------------------------------------------------------
>
>5.4: How do I delete the last N lines from a file?
>
>    (contributed by brian d foy)
>
>    The easiest conceptual solution is to count the lines in the file then
>    start at the beginning and print the number of lines (minus the last N)
>    to a new file.
>
>    Most often, the real question is how you can delete the last N lines
>    without making more than one pass over the file, or how to do it with a
>    lot of copying. The easy concept is the hard reality when you might have
>    millions of lines in your file.

I believe "or how to do it with a lot of copying." was meant to be
"or how to do it without a lot of copying."

And I'm not so sure you're not conflating "making more than one pass over the file"
with reading/writing the file more than one time.

>
>    One trick is to use "File::ReadBackwards", which starts at the end of

Is this really a trick?

I can't remember if there is a truncate at file position primitive.
If I take a guess one way, I would say this approach would work as fast
as any:

create a line stack, the size of N
read each line, store line in stack, increment a counter
when the counter exceeds N, drop the oldest line into a new file and push the newest line onto the stack.
repeat until end of old file
close new file
delete old file
rename new file to old

voila, truncation
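The steps above can be sketched in a few lines of Perl. This is a rough
sketch, not a tested production tool; the file names and N below are made-up
demo values, and the script builds its own sample input for illustration:

```perl
use strict;
use warnings;

# Demo input (hypothetical file names and N, for illustration only)
my ( $old, $new, $n ) = ( 'puke.demo', 'puke.demo.new', 3 );
open my $w, '>', $old or die "open $old: $!";
print $w "line $_\n" for 1 .. 10;
close $w;

# Single pass: buffer the most recent $n lines; once the buffer is full,
# every new line read pushes the oldest buffered line out into the new file.
open my $in,  '<', $old or die "open $old: $!";
open my $out, '>', $new or die "open $new: $!";
my @stack;
while ( my $line = <$in> ) {
    push @stack, $line;
    print {$out} shift @stack if @stack > $n;
}
close $out or die "close $new: $!";
close $in;

# The last $n lines are still sitting in @stack, unwritten.
rename $new, $old or die "rename: $!";
```

Unlike the truncate-in-place approaches discussed elsewhere in this thread,
this copies the whole file, but it needs only one pass and constant memory.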

-sln


------------------------------

Date: Mon, 17 May 2010 12:43:33 -0400
From: Ralph Malph <ralph@happydays.com>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <a7cb1$4bf17236$40779ac3$16412@news.eurofeeds.com>

On 5/16/2010 12:00 AM, PerlFAQ Server wrote:
> This is an excerpt from the latest version perlfaq5.pod, which
> comes with the standard Perl distribution. These postings aim to
> reduce the number of repeated questions as well as allow the community
> to review and update the answers. The latest version of the complete
> perlfaq is at http://faq.perl.org .
>
> --------------------------------------------------------------------
>
> 5.4: How do I delete the last N lines from a file?
>
>      (contributed by brian d foy)
>
>      The easiest conceptual solution is to count the lines in the file then
>      start at the beginning and print the number of lines (minus the last N)
>      to a new file.
>
>      Most often, the real question is how you can delete the last N lines
>      without making more than one pass over the file, or how to do it with a
>      lot of copying. The easy concept is the hard reality when you might have
>      millions of lines in your file.
>
>      One trick is to use "File::ReadBackwards", which starts at the end of
>      the file. That module provides an object that wraps the real filehandle
>      to make it easy for you to move around the file. Once you get to the
>      spot you need, you can get the actual filehandle and work with it as
>      normal. In this case, you get the file position at the end of the last
>      line you want to keep and truncate the file to that point:
>
>              use File::ReadBackwards;
>
>              my $filename = 'test.txt';
>              my $Lines_to_truncate = 2;
>
>              my $bw = File::ReadBackwards->new( $filename )
>                      or die "Could not read backwards in [$filename]: $!";
>
>              my $lines_from_end = 0;
>              until( $bw->eof or $lines_from_end == $Lines_to_truncate )
>                      {
>                      print "Got: ", $bw->readline;
>                      $lines_from_end++;
>                      }
>
>              truncate( $filename, $bw->tell );
>
>      The "File::ReadBackwards" module also has the advantage of setting the
>      input record separator to a regular expression.
>
>      You can also use the "Tie::File" module which lets you access the lines
>      through a tied array. You can use normal array operations to modify your
>      file, including setting the last index and using "splice".
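A minimal sketch of the Tie::File variant mentioned just above. The file name
and N here are placeholder demo values, and the script creates its own sample
input for illustration:

```perl
use strict;
use warnings;
use Tie::File;

my ( $file, $n ) = ( 'tie.demo', 2 );             # placeholder name and N
open my $w, '>', $file or die "open $file: $!";   # demo input
print $w "row $_\n" for 1 .. 5;
close $w;

# The tied array is a live view of the file's lines; array ops edit the file.
tie my @lines, 'Tie::File', $file or die "tie $file: $!";
splice @lines, -$n;                               # drop the last N lines in place
untie @lines;
```

It is the shortest to write, though Tie::File pays per-record overhead, so it
is unlikely to compete with File::ReadBackwards on large files.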

Feeling bored I compared the code in the faq with
some bash code that would achieve the same results.
I also ran some generic perl that did basically the same
thing as the shell script (code at bottom).
The test file was named 'puke'. Contents are the integers 0 through
999999. 1 million rows total. The test is to exclude the last 10000
lines. perl 5.10.1 on cygwin. machine has 4gb ram. dual core Intel.
Anyway, in this not really scientific test the faq method using
Uri's File::ReadBackwards module is the winner. I suppose this is the
expected result but I thought the shell code would be more
competitive.

$ time perl faq.pl > top_n-10000

real    0m0.219s
user    0m0.093s
sys     0m0.061s

$ time cat puke | wc -l | xargs echo -10000 + | bc \
   | xargs echo head puke -n | sh > top_n-10000

real    0m0.312s
user    0m0.090s
sys     0m0.121s

$ time perl temp.pl > top_n-10000

real    0m0.858s
user    0m0.701s
sys     0m0.062s

-----------------
temp.pl
-----------------
use strict;
use warnings;

my $num_lines_exclude=10000;

open(FH, '<', "puke") or die $!;
my $line_count=0;
while(<FH>){
	$line_count++;
}
seek(FH, 0, 0);
my $lines_to_read=$line_count-$num_lines_exclude;
while($lines_to_read>0){
	my $line=<FH>;
	print $line;
	$lines_to_read--;
}


------------------------------

Date: Mon, 17 May 2010 17:10:42 +0000 (UTC)
From: Willem <willem@turtle.stack.nl>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <slrnhv2u4i.3in.willem@turtle.stack.nl>

Ralph Malph wrote:
) Feeling bored I compared the code in the faq with
) some bash code that would achieve the same results.
) I also ran some generic perl that did basically the same
) thing as the shell script(code at bottom).
) The test file was named 'puke'. Contents are the integers 0 through
) 999999. 1 million rows total. The test is to excluded the last 10000
) lines. perl 5.10.1 on cygwin. machine has 4gb ram. dual core Intel.
) Anyway, in this not really scientific test the faq method using
) Uri's File::ReadBackwards module is the winner. I suppose this is the
) expected result but I thought the shell code would be more
) competitive.

Why ?  AIUI, ReadBackwards never touches the beginning of the file, so that
should clearly lead to a lot less disk I/O.

I'm assuming the tests you ran may have had the file still in disk cache,
though, so that would make the difference a lot less significant, but
still ReadBackwards takes time proportional to the size of the removed bit,
while the rest take time proportional to the size of the whole file.

Have you also tried removing 10 lines from a million-line file ?
And for giggles, you could try a hand-rolled one that uses the functions
seek(), sysread() and truncate() to accomplish the job.


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT


------------------------------

Date: Mon, 17 May 2010 13:34:57 -0400
From: "Uri Guttman" <uri@StemSystems.com>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <87tyq6zb26.fsf@quad.sysarch.com>

>>>>> "W" == Willem  <willem@turtle.stack.nl> writes:

  W> Have you also tried removing 10 lines from a million-line file ?
  W> And for giggles, you could try a hand-rolled one that uses the functions
  W> seek(), sysread() and truncate() to accomplish the job.

ahem. that is what file::readbackward does! it may be possible to hand
roll optimize it by removing some overhead, etc. but it was designed to
be very fast. your earlier point about how much to remove or skip is the
important one. truncating most of a large file will be slower but you
still need to count lines from the end. since you don't need to read
each line for this you could read large blocks, scan for newlines and
count them and then truncate to the desired point. readbackwards has the
overhead of splitting the blocks into lines and returning each one for
counting. but you always need to read the part you are truncating if you
are counting lines from the end.

uri

-- 
Uri Guttman  ------  uri@stemsystems.com  --------  http://www.sysarch.com --
-----  Perl Code Review , Architecture, Development, Training, Support ------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------


------------------------------

Date: Mon, 17 May 2010 17:41:27 +0000 (UTC)
From: Willem <willem@turtle.stack.nl>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <slrnhv2vu7.3sm.willem@turtle.stack.nl>

Uri Guttman wrote:
)>>>>> "W" == Willem  <willem@turtle.stack.nl> writes:
)
)  W> Have you also tried removing 10 lines from a million-line file ?
)  W> And for giggles, you could try a hand-rolled one that uses the functions
)  W> seek(), sysread() and truncate() to accomplish the job.
)
) ahem. that is what file::readbackward does!

I know.

) it may be possible to hand
) roll optimize it by removing some overhead, etc. but it was designed to
) be very fast. your earlier point about how much to remove or skip is the
) important one. truncating most of a large file will be slower but you
) still need to count lines from the end. since you don't need to read
) each line for this you could read large blocks, scan for newlines and
) count them and then truncate to the desired point. readbackwards has the
) overhead of splitting the blocks into lines and returning each one for
) counting. but you always need to read the part you are truncating if you
) are counting lines from the end.

I just tried a hand-rolled sysseek/sysread/truncate version, and either
that overhead is very significant, or my machine is a lot faster.

> time perl trunc.pl tenmillion.txt 10000

real	0m0.013s
user	0m0.011s
sys	0m0.003s

That's removing ten thousand lines from a ten-million line file
that is over 600mb large.

I'll try a ReadBackwards version now.


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT


------------------------------

Date: Mon, 17 May 2010 17:49:56 +0000 (UTC)
From: Willem <willem@turtle.stack.nl>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <slrnhv30e4.3uf.willem@turtle.stack.nl>

Willem wrote:
)> time perl trunc.pl tenmillion.txt 10000
)
) real	0m0.013s
) user	0m0.011s
) sys	0m0.003s
)
) That's removing ten thousand lines from a ten-million line file
) that is over 600mb large.
)
) I'll try a ReadBackwards version now.


> time perl readb.pl million.txt 10000

real	0m0.036s
user	0m0.035s
sys	0m0.002s

Almost a factor of 3:1.
Removing a hundred thousand lines holds the same 3:1 pattern.

readb.pl:

use warnings;
use strict;

use File::ReadBackwards;

my $filename = shift;
my $Lines_to_truncate = shift;

my $bw = File::ReadBackwards->new( $filename )
  or die "Could not read backwards in [$filename]: $!";

my $lines_from_end = 0;
until( $bw->eof or $lines_from_end == $Lines_to_truncate )
{
  $bw->readline;    # must actually read a line, or tell() never moves from EOF
  $lines_from_end++;
}

truncate( $filename, $bw->tell );


trunc.pl:

use warnings;
use strict;

my $blocksize = 4096;

my ($file, $lines) = @ARGV;
open (my $fh, '+<', $file)
  or die "Failed to open '$file' for r/w: $!";
my $pos = sysseek($fh, 0, 2)
  or die "Failed to seek to EOF of '$file': $!";

while (($pos -= ($pos - 1) % $blocksize + 1) >= 0) {
  sysseek($fh, $pos, 0)
    or die "Failed to seek backwards in '$file':$!";
  sysread($fh, my $block, $blocksize)
    or die "Failed to read from '$file': $!";
  my $spos = rindex($block, "\n"); 
  while ($spos >= 0) {
    if (--$lines < 0) {
      truncate($fh, $pos + $spos + 1)  # +1 keeps the newline ending the last retained line
        or die "Failed to truncate '$file':$!";
      exit(0);
    }
    $spos = rindex($block, "\n", $spos - 1);
  }
}
die "Not enough lines in file";



SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT


------------------------------

Date: Mon, 17 May 2010 13:54:24 -0400
From: Ralph Malph <ralph@happydays.com>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <e7d50$4bf182d0$40779ac3$25945@news.eurofeeds.com>

On 5/17/2010 12:43 PM, Ralph Malph wrote:
[snip]
> $ time cat puke | wc -l | xargs echo -10000 + | bc \
> | xargs echo head puke -n | sh > top_n-10000
>
> real 0m0.312s
> user 0m0.090s
> sys 0m0.121s

a non-pipelined shell script
does better
$ time ./temp.sh > top_n-10000
real    0m0.266s
user    0m0.015s
sys     0m0.076s

------------
temp.sh
------------
lines=$(wc -l < puke)
num_lines=$((lines-10000))
head -n $num_lines puke


------------------------------

Date: Mon, 17 May 2010 13:55:47 -0400
From: "Uri Guttman" <uri@StemSystems.com>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <871vdaza3g.fsf@quad.sysarch.com>

>>>>> "W" == Willem  <willem@turtle.stack.nl> writes:

  W> Uri Guttman wrote:

  W> ) it may be possible to hand
  W> ) roll optimize it by removing some overhead, etc. but it was designed to
  W> ) be very fast. your earlier point about how much to remove or skip is the
  W> ) important one. truncating most of a large file will be slower but you
  W> ) still need to count lines from the end. since you don't need to read
  W> ) each line for this you could read large blocks, scan for newlines and
  W> ) count them and then truncate to the desired point. readbackwards has the
  W> ) overhead of splitting the blocks into lines and returning each one for
  W> ) counting. but you always need to read the part you are truncating if you
  W> ) are counting lines from the end.

  W> I just tried a hand-rolled sysseek/sysread/truncate version, and either
  W> that overhead is very significant, or my machine is a lot faster.

  W> I'll try a ReadBackwards version now.

did you try my suggested algorithm? it isn't too much work reading large
blocks from the end, counting newlines and then doing a truncate at the
desired point. i see it at about 30 lines of code or so.
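A rough sketch of that algorithm, under the stated assumptions: the file name,
N, and block size below are placeholders, the script builds its own demo
input, and the file is assumed to end in a newline:

```perl
use strict;
use warnings;

my ( $file, $n ) = ( 'blocks.demo', 3 );   # placeholder name and N
my $bs = 4096;                             # block size

# demo input, for illustration only
open my $w, '>', $file or die "open $file: $!";
print $w "line $_\n" for 1 .. 10;
close $w;

open my $fh, '+<', $file or die "open $file: $!";
my $pos  = -s $fh;    # start at EOF
my $seen = 0;         # newlines counted so far, from the end

while ( $pos > 0 ) {
    my $len = $pos >= $bs ? $bs : $pos;
    $pos -= $len;
    seek $fh, $pos, 0 or die "seek: $!";
    read( $fh, my $block, $len ) == $len or die "short read on $file";

    my $count = ( $block =~ tr/\n// );     # bulk count: one tr/// per block
    if ( $seen + $count > $n ) {
        # the (N+1)th newline from EOF ends the last line we keep;
        # walk back to it inside this one block only
        my $idx = length $block;
        $idx = rindex( $block, "\n", $idx - 1 ) for 1 .. $n + 1 - $seen;
        truncate $fh, $pos + $idx + 1 or die "truncate: $!";
        last;
    }
    $seen += $count;
}
close $fh;
```

The point of the tr/// pass is that rindex() only runs inside the single block
that contains the cut point, instead of once per newline over everything read.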

uri

-- 
Uri Guttman  ------  uri@stemsystems.com  --------  http://www.sysarch.com --
-----  Perl Code Review , Architecture, Development, Training, Support ------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------


------------------------------

Date: Mon, 17 May 2010 13:58:57 -0400
From: "Uri Guttman" <uri@StemSystems.com>
Subject: Re: FAQ 5.4 How do I delete the last N lines from a file?
Message-Id: <87wrv2xvdq.fsf@quad.sysarch.com>

>>>>> "W" == Willem  <willem@turtle.stack.nl> writes:

  W>   my $spos = rindex($block, "\n"); 

ahh, here is your bottleneck. use tr/// to count the newlines of each
block. if you haven't read enough then read another. you don't need to
use rindex for each newline. also when you find the block which has the
desired ending, you can use a forward regex or something else to find
the nth newline in one call. perl is slow doing ops in a loop but fast
doing loops internally. so always use perl ops which do more work for you.

  W>   while ($spos >= 0) {
  W>     if (--$lines < 0) {
  W>       truncate($fh, $pos + $spos)
  W>         or die "Failed to truncate '$file':$!";
  W>       exit(0);
  W>     }
  W>     $spos = rindex($block, "\n", $spos - 1);

that is a slow perl loop calling rindex over and over. 

uri

-- 
Uri Guttman  ------  uri@stemsystems.com  --------  http://www.sysarch.com --
-----  Perl Code Review , Architecture, Development, Training, Support ------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------


------------------------------

Date: Sun, 16 May 2010 18:37:42 -0700
From: sln@netherlands.com
Subject: Re: how do i find the max value out of an array?
Message-Id: <5b71v5hrebl1nucd0l2ufd6h5o4shi9fjs@4ax.com>

On Thu, 13 May 2010 22:01:16 -0500, John Bokma <john@castleamber.com> wrote:

>Xho Jingleheimerschmidt <xhoster@gmail.com> writes:
>
>> John Bokma wrote:
>>> Xho Jingleheimerschmidt <xhoster@gmail.com> writes:
>>>
>>>> Jürgen Exner wrote:
>>>>> "sopan.shewale@gmail.com" <sopan.shewale@gmail.com> wrote:
>>>>>> Once you have array, how about?
>>>>>> my $max = (sort { $b <=> $a } @array)[0];
>>>>> If you insist on being inefficient, 
>>>> We are discussing Perl, aren't we?
>>>
>>> Yup, and my experience is that good Perl runs fast enough. 
>>
>> Well, that really depends on fast enough for what.
>
>For the stuff I do with it, hence "my experience is". ;-). (Mostly
>processing a lot of data)

Regardless of what the OP literally said,
you would recommend sort to find the min or max value?

-sln


------------------------------

Date: Sun, 16 May 2010 18:43:48 -0700
From: sln@netherlands.com
Subject: Re: MinGW and Perl 5.12 - Windows 64 bits ActiveState
Message-Id: <5k71v517s9cqk3fmciuoqoeignk2d1d059@4ax.com>

On Sun, 16 May 2010 21:36:07 +0100, Ben Morrow <ben@morrow.me.uk> wrote:

>
>Quoth Dilbert <dilbert1999@gmail.com>:
>> 
>> I have downloaded and unzipped "mingw-w64-bin_x86_64-
>> mingw_20100515_sezero.zip" (that's "...20100515..." and not "...
>> 20100428...") from http://sourceforge.net/projects/mingw-w64/files.
>> 
>> Then I added the "mingw64\bin" directory to the path.
>> 
><snip>
>> 
>> Now I want to download and make Text::CSV_XS, but unfortunately there
>> is an error "gcc.exe: CreateProcess: No such file or directory"
>> 
>> C:\Users\CK\Documents\PerlModules\Text-CSV_XS\Text-CSV_XS-0.73>dmake
>> cp CSV_XS.pm blib\lib\Text\CSV_XS.pm
>> C:\Perl64\bin\perl.exe C:\Perl64\lib\ExtUtils\xsubpp  -typemap C:
>> \Perl64\lib\ExtUtils\typemap  CSV_XS.xs > CSV_XS.xsc && C:\Perl64\bin\
>> perl.exe -MExtUtils::Command -e "mv" -- CSV_XS.xsc CSV_XS.c
>> C:/Users/CK/DOCUME~1/PROGRA~2/MINGW6~1/gcc.exe -c       -DNDEBUG -
>              ^^^^^^^^^^^^^^^^^^^^^^^^^^
>Is this a 'spaces in the path' or 'path too long' problem? Can you
>install the compiler somewhere like c:\mingw64 and try again?
>

Has anyone actually benchmarked 64-bit over 32 in processing Perl
mechanics? 

What are the performance deltas on each function?

-sln


------------------------------

Date: Sun, 16 May 2010 18:47:42 -0700
From: sln@netherlands.com
Subject: Re: MinGW and Perl 5.12 - Windows 64 bits ActiveState
Message-Id: <7s71v5lngdjajsi0k8kced8arrjuiqlr19@4ax.com>

On Sun, 16 May 2010 18:43:48 -0700, sln@netherlands.com wrote:

>On Sun, 16 May 2010 21:36:07 +0100, Ben Morrow <ben@morrow.me.uk> wrote:
>
>>
>>Quoth Dilbert <dilbert1999@gmail.com>:
>>> 
>>> I have downloaded and unzipped "mingw-w64-bin_x86_64-
>>> mingw_20100515_sezero.zip" (that's "...20100515..." and not "...
>>> 20100428...") from http://sourceforge.net/projects/mingw-w64/files.
>>> 
>>> Then I added the "mingw64\bin" directory to the path.
>>> 
>><snip>
>>> 
>>> Now I want to download and make Text::CSV_XS, but unfortunately there
>>> is an error "gcc.exe: CreateProcess: No such file or directory"
>>> 
>>> C:\Users\CK\Documents\PerlModules\Text-CSV_XS\Text-CSV_XS-0.73>dmake
>>> cp CSV_XS.pm blib\lib\Text\CSV_XS.pm
>>> C:\Perl64\bin\perl.exe C:\Perl64\lib\ExtUtils\xsubpp  -typemap C:
>>> \Perl64\lib\ExtUtils\typemap  CSV_XS.xs > CSV_XS.xsc && C:\Perl64\bin\
>>> perl.exe -MExtUtils::Command -e "mv" -- CSV_XS.xsc CSV_XS.c
>>> C:/Users/CK/DOCUME~1/PROGRA~2/MINGW6~1/gcc.exe -c       -DNDEBUG -
>>              ^^^^^^^^^^^^^^^^^^^^^^^^^^
>>Is this a 'spaces in the path' or 'path too long' problem? Can you
>>install the compiler somewhere like c:\mingw64 and try again?
>>
>
>Has anyone actually benchmarked 64-bit over 32 in processing Perl
>mechanics? 
>
>What are the performance delta's on each function?
>

Back in the day, I guess the ibm370 was 128 bit.
Can Perl be written for risc processors? But Perl gets cornered
in the OS implementation details. That will eventually be the death
of it. But that's not Perl's fault; it tries to please too many, too
much.

-sln


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 2948
***************************************

