
Perl-Users Digest, Issue: 3180 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Oct 18 14:09:25 2010

Date: Mon, 18 Oct 2010 11:09:08 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 18 Oct 2010     Volume: 11 Number: 3180

Today's topics:
        CGI, multiple page data input. <justin.1010@purestblue.com>
    Re: CGI, multiple page data input. <mathematisch@gmail.com>
    Re: CGI, multiple page data input. <jurgenex@hotmail.com>
    Re: perl curl get data from website <tadmc@seesig.invalid>
    Re: perl curl get data from website <nospam-abuse@ilyaz.org>
    Re: perl curl get data from website <emailsrvr-groups@yahoo.com>
        please help with creating a special iterator <mathematisch@gmail.com>
    Re: please help with creating a special iterator <glex_no-spam@qwest-spam-no.invalid>
        reading LWP in chunks <klaus03@gmail.com>
    Re: reading LWP in chunks <ben@morrow.me.uk>
    Re: reading LWP in chunks <klaus03@gmail.com>
    Re: reading LWP in chunks <ben@morrow.me.uk>
    Re: reading LWP in chunks <klaus03@gmail.com>
    Re: reading LWP in chunks <jimsgibson@gmail.com>
    Re: where to install cpan modules <mijoryx@yahoo.com>
    Re: where to install cpan modules <justin.1010@purestblue.com>
    Re: why does this happen? <whynot@pozharski.name>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Mon, 18 Oct 2010 15:11:08 +0100
From: Justin C <justin.1010@purestblue.com>
Subject: CGI, multiple page data input.
Message-Id: <s41uo7-ml9.ln1@zem.masonsmusic.co.uk>

All of my (in-house) web stuff until now has passed previously input data
back to the browser in hidden form fields when more data needs to be
collected. For example, my script that calculates shipping rates for
customer orders collects weight, number of items, and country of
destination on the first page. When this page is submitted, a new page is
returned asking for the dimensions of each box. This second form depends
on input from the first page (each box can have different dimensions),
but I also need some of the original page's input for my calculations
later, so the original data is passed back in hidden form fields.

I'd like to move my coding forward a little and find a better way. I'm
guessing that cookies are the correct way to progress with this, but I
want to be sure, or hear of other ways, before I commit to a cookies
approach for my new web page.

Any suggestions of reading matter, or other ways of doing this, ... am I
barking up the wrong tree? Or just barking?!

Thanks for your suggestions.

   Justin.


------------------------------

Date: Mon, 18 Oct 2010 08:15:21 -0700 (PDT)
From: Mathematisch <mathematisch@gmail.com>
Subject: Re: CGI, multiple page data input.
Message-Id: <3fb551ea-63b4-4eae-a0a1-20a4d61c484b@a37g2000yqi.googlegroups.com>


> Any suggestions of reading matter, or other ways of doing this, ... am I
> barking up the wrong tree? Or just barking?!

Try using: CGI::Session

Regards, F
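For the archives, a minimal sketch of what that might look like (the
File driver and the /tmp/sessions directory here are illustrative
choices, not requirements of the module):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use CGI::Session;

my $cgi = CGI->new;

# Load an existing session from the browser's cookie, or create a
# new one.  '/tmp/sessions' is a made-up storage directory.
my $session = CGI::Session->new( 'driver:File', $cgi,
                                 { Directory => '/tmp/sessions' } );

# Page 1 submitted: stash its fields in the session instead of
# echoing them back as hidden form fields.
if ( defined $cgi->param('weight') ) {
    $session->param( $_, scalar $cgi->param($_) )
        for qw(weight items country);
}

# On any later page the first page's data is still available:
my $weight = $session->param('weight');

# The session id travels in a cookie with every page we print.
print $session->header;
```

The win over hidden fields is that only the session id round-trips
through the browser; the data itself stays on the server.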


------------------------------

Date: Mon, 18 Oct 2010 08:25:33 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: CGI, multiple page data input.
Message-Id: <gipob6trp13pt4hhsec932lh0kn5gdf1na@4ax.com>

Justin C <justin.1010@purestblue.com> wrote:
>All of my web stuff (inhouse) until now has passed previously input data
>back to the browser in hidden form fields when more data needs to be
>collected. For example, my script that calculates shipping rates for
>customer orders collects weight, number of items, country of destination
>on the first page. When this page is submitted a new page is returned
>asking for the dimensions of each box, this second page form is
>dependent on input from the first page (each box can be different
>dimensions), but I also need some of the original page's input for my
>calculations later, so the original data is passed back in hidden form
>fields.
>
>I'd like to move my coding forward a little and find a better way. I'm
>guessing the cookies is the correct way to progress with this, but I
>want to be sure, or hear of other ways before I commit to a cookies
>approach for my new web-page.

You are looking for sessions.

Of course this has nothing to do with Perl whatsoever because the
concept of sessions is totally independent of whatever programming
language you happen to be using to implement them.

jue


------------------------------

Date: Sun, 17 Oct 2010 16:48:27 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: perl curl get data from website
Message-Id: <slrnibms0l.gl0.tadmc@tadbox.sbcglobal.net>

Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:
> On 2010-10-17, Tad McClellan <tadmc@seesig.invalid> wrote:
>> You might want to try it with the Web Scraping Proxy:
>>
>>     http://www2.research.att.com/sw/tools/wsp/
>>
>> which is nice because it logs the traffic in the form of
>> Perl code that you can copy/paste/modify to suit your needs.
>
>    A username and password are being requested by
>    http://www2.research.att.com. The site says: "Enter Password"
>
> Now what?


Don't overlook the BOLD text on the page I linked to.

It's easy to do; I overlooked it at first too  :-)


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.


------------------------------

Date: Sun, 17 Oct 2010 21:58:09 +0000 (UTC)
From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: perl curl get data from website
Message-Id: <slrnibmsbh.k4d.nospam-abuse@powdermilk.math.berkeley.edu>

On 2010-10-17, Tad McClellan <tadmc@seesig.invalid> wrote:
>>    A username and password are being requested by
>>    http://www2.research.att.com. The site says: "Enter Password"

>> Now what?

> Don't overlook the BOLD text on the page I linked to.
>
> It's easy to do, I overlooked it at first too  :-)

A wonderful example of steganography.  And very well balanced - if it
were 4 times longer, I would run a "Search" on the page...

Thanks,
Ilya


------------------------------

Date: Mon, 18 Oct 2010 05:58:42 -0700 (PDT)
From: SVCitian <emailsrvr-groups@yahoo.com>
Subject: Re: perl curl get data from website
Message-Id: <33f04e34-c200-4680-acd3-eda1a6408674@j4g2000prm.googlegroups.com>

On Oct 17, 10:21 pm, Tad McClellan <ta...@seesig.invalid> wrote:
> SVCitian <emailsrvr-gro...@yahoo.com> wrote:
> > I even tried to user "tamper data" firefox add to get behind the
> > scenes of GET, POST, etc... but I can't proceed any further than the
> > URLs given above.
>
> > why? that may be something to do with ajax, cookie, user agent, or
> > whatever.
>
> You might want to try it with the Web Scraping Proxy:
>
>     http://www2.research.att.com/sw/tools/wsp/
>
> which is nice because it logs the traffic in the form of
> Perl code that you can copy/paste/modify to suit your needs.
>
> --
> Tad McClellan
> email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
> The above message is a Usenet post.
> I don't recall having given anyone permission to use it on a Web site.

I tried this a long time ago and could not make it work.

That in itself set off a whole round of searching the forums for a way
to make it work.

If anyone out there has used wsp (and still has it on their computer),
could you run my site through it and report your findings? I think it
would take only a few minutes of your time if you have already made
wsp work for you.

I will appreciate your assistance.

Thank you.




------------------------------

Date: Mon, 18 Oct 2010 07:50:17 -0700 (PDT)
From: Mathematisch <mathematisch@gmail.com>
Subject: please help with creating a special iterator
Message-Id: <0f2cbc17-f41a-451a-8ad0-6015360adf7c@a37g2000yqi.googlegroups.com>

Hi,

The problem: I would like to create an iterator to iterate through a
csv file with the following structure:


field_1,field_2,...field_14
field_1,field_2,...field_14
(...)



Note that this is a csv file with 14 fields and it is already sorted
by field_1 and then by field_2. There are usually only 5-10 lines
having the same field_1 and field_2 value.

There could be up to hundreds of millions of lines in the file. The
desired iterator should work like this: At each "next_entry" call, the
iterator should return a reference to an array of the lines having the
identical field_1 and field_2 values.

Because of my lack of understanding of the iterator concept, I have
not been able to come up with a solution yet. The file is too big to
use field_1 and field_2 as a hash key to achieve the same goal of
grouping the entries.

Thank you very much for any help on this. I hope I can learn from the
eventual proposed solutions.

Kind regards.
F.





------------------------------

Date: Mon, 18 Oct 2010 11:09:11 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: please help with creating a special iterator
Message-Id: <4cbc7128$0$87065$815e3792@news.qwest.net>

Mathematisch wrote:
> Hi,
> 
> The problem: I would like to create an iterator to iterate through a
> csv file with the following structure:
> 
> 
> field_1,field_2,...field_14
> field_1,field_2,...field_14
> (...)
> 
> 
> 
> Note that this is a csv file with 14 fields and it is already sorted
> by field_1 and then by field_2. There are usually only 5-10 lines
> having the same field_1 and field_2 value.
> 
> There could be up to hundreds of millions of lines in the file. The
> desired iterator should work like this: At each "next_entry" call, the
> iterator should return a reference to an array of the lines having the
> identical field_1 and field_2 values.
> 
> Because of my lack of understanding the iterator concept, I could not
> come up with a solution yet. The file is too big to use the field_1
> and field_2 as a hash key to achieve the same goal of grouping the
> entries.

You don't say what you want to do with the data; however, you could
store everything in a database and then, using GROUP BY and ORDER BY,
process your data easily.

However, since you say that everything is already sorted by those
keys, you could process things as you read the file, keeping track of
when those fields change.  Wrapping a next_entry around this, and
having it return the data instead of calling process_data, would be
simple enough.  I rarely bother with creating an 'iterator'.. but
that's just me.. :-)

Hopefully you're using Text::CSV or some other module to parse the
CSV file.

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1 } )
	or die "Cannot use Text::CSV: " . Text::CSV->error_diag;

my ( $prev_f1, $prev_f2, @data );
while( my $line = <> )
{
	chomp( $line );
	$csv->parse( $line ) or die "Bad CSV: $line";
	my ( $f1, $f2, @fields ) = $csv->fields;

	if( defined $prev_f1 && $f1 eq $prev_f1 && $f2 eq $prev_f2 )
	{
		push( @data, \@fields );
	}
	else
	{
		# keys changed (or first line): flush the previous group
		process_data( $prev_f1, $prev_f2, \@data ) if @data;
		$prev_f1 = $f1;
		$prev_f2 = $f2;
		@data = ( \@fields );
	}
}
process_data( $prev_f1, $prev_f2, \@data ) if @data;

sub process_data
{
	my ( $f1, $f2, $data_aref ) = @_;

	# do whatever you want...
}
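To answer the 'iterator' part of the question directly: a next_entry
can also be built as a closure that keeps one look-ahead line between
calls.  A sketch (plain split is used for brevity; real CSV with
quoted fields needs Text::CSV):

```perl
use strict;
use warnings;

# Build a next_entry iterator over a filehandle of sorted CSV lines.
# Each call returns a reference to an array of field-arrays for all
# consecutive lines sharing the same field_1 and field_2, or undef
# when the input is exhausted.
sub make_group_iterator {
    my ($fh) = @_;
    my $pending;                       # one line of look-ahead
    return sub {
        my ( @group, $k1, $k2 );
        while (1) {
            my $line = defined $pending ? $pending : <$fh>;
            undef $pending;
            last unless defined $line;
            chomp $line;
            my @f = split /,/, $line;
            if ( !@group ) {
                ( $k1, $k2 ) = @f[ 0, 1 ];
            }
            elsif ( $f[0] ne $k1 || $f[1] ne $k2 ) {
                $pending = $line;      # keys changed: save for next call
                last;
            }
            push @group, \@f;
        }
        return @group ? \@group : undef;
    };
}
```

Used as "my $next_entry = make_group_iterator($fh); while (my $group =
$next_entry->()) { ... }", this holds only one group in memory at a
time, which is what the hundreds-of-millions-of-lines constraint
calls for.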


------------------------------

Date: Mon, 18 Oct 2010 05:49:11 -0700 (PDT)
From: Klaus <klaus03@gmail.com>
Subject: reading LWP in chunks
Message-Id: <5015a27e-6c26-420f-91e7-322c2e3eb71a@a36g2000yqc.googlegroups.com>

Hi Perl programmers,

I am trying to write a Module (its name will be LWP::Chunk) to
read arbitrarily big http-files sequentially in small chunks.

Let me give an example:

With the existing module LWP::UserAgent, you can say:

use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;
my $response = $ua->get('http://search.cpan.org/');

With my new module LWP::Chunk (this module still needs to be written),
you would be able to say:

use LWP::Chunk;
my $ck = LWP::Chunk->new('http://search.cpan.org/', {csize => 1024,
timeout => 10});
my $container = '';
while ($ck->read_chunk) {
    $container .= $ck->buffer;
    # do whatever you want to do here,
    # you are even allowed to go last;
}
if ($ck->there_was_an_error) {
    die "There has been an error (code=".$ck->errcode.")";
}
# here we have the data in $container

The problem is that I am stuck with writing method $ck->read_chunk. (I
want to read the next chunk of 1024 bytes).

I had a look at LWP::Simple and at LWP::UserAgent, but I could not
find any code that allows reading the next 1024 bytes from 'http://
search.cpan.org/'. (I don't want to read the whole data in one go; I
would rather read it in smaller chunks.)

Can anybody point me to the LWP-internals (maybe LWP::UserAgent,
HTTP::Request, HTTP::Response, etc... ???) which reads a chunk of
data ?

Thanks in advance.


------------------------------

Date: Mon, 18 Oct 2010 14:12:28 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: reading LWP in chunks
Message-Id: <smtto7-t2j.ln1@osiris.mauzo.dyndns.org>


Quoth Klaus <klaus03@gmail.com>:
> Hi Perl programmers,
> 
> I am trying to write a Module (its name will be LWP::Chunk) to
> read arbitrarily big http-files sequentially in small chunks.

This already exists :). See the ->add_handler method of LWP::UserAgent
and the :content_cb and :read_size_hint parameters to ->get.

Of course, you may still want to wrap the callback API in the more
straightforward API you describe, especially if you need to be sure your
chunks are no larger than you specify.

Ben
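For the archives, here is roughly what that callback API looks like
(the URL is taken from the earlier example; error handling is minimal,
and :read_size_hint is only a hint, so actual chunk sizes may vary):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 10 );

my $container = '';
my $response  = $ua->get(
    'http://search.cpan.org/',
    ':read_size_hint' => 1024,            # ask for ~1k chunks
    ':content_cb'     => sub {
        my ( $chunk, $res, $protocol ) = @_;
        $container .= $chunk;             # or process each chunk here
    },
);
die $response->status_line unless $response->is_success;
# $container now holds the full body
```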



------------------------------

Date: Mon, 18 Oct 2010 06:47:05 -0700 (PDT)
From: Klaus <klaus03@gmail.com>
Subject: Re: reading LWP in chunks
Message-Id: <efc4279d-4a30-4131-b4d8-ba3ac0af30e1@t20g2000yqa.googlegroups.com>

On 18 oct, 15:12, Ben Morrow <b...@morrow.me.uk> wrote:
> Quoth Klaus <klau...@gmail.com>:
>
> > Hi Perl programmers,
>
> > I am trying to write a Module (its name will be LWP::Chunk) to
> > read arbitrarily big http-files sequentially in small chunks.
>
> This already exists :). See the ->add_handler method of LWP::UserAgent
> and the :content_cb and :read_size_hint parameters to ->get.

Yes, I can see the add_handler method in LWP::UserAgent (and I can
also see the response_data => sub {...} section which, I think, is
most interesting for my purposes):

>> response_data => sub { my($response,
>> $ua, $h, $data) = @_; ... }
>> This handlers is called for each chunk of data received
>> for the response. The handler might croak to abort the
>> request.
>> This handler need to return a TRUE value to be called
>> again for subsequent chunks for the same request.

There ** must ** be a loop somewhere deep inside LWP::UserAgent->get()
that says "...while (read_chunk(:read_size)) { &response_data->(...); } ..."

At this point, I want to go ** deep ** into the guts of
LWP::UserAgent->get ( -- that could be inside LWP::UserAgent, inside
HTTP::Request, inside HTTP::Response, etc... -- ) to find that loop,
rip out that line that says "read_chunk()" and stick it into my new
module "LWP::Chunk" -- of course, I make sure to read the license
document before ripping out anything.

I have dived into LWP::UserAgent, HTTP::Request, HTTP::Response, but I
can't find that elusive
"...while (read_chunk(:read_size)) { &response_data->(...); } ..."

Can anybody cure my blindness ?


------------------------------

Date: Mon, 18 Oct 2010 15:09:18 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: reading LWP in chunks
Message-Id: <e11uo7-lfn.ln1@osiris.mauzo.dyndns.org>


Quoth Klaus <klaus03@gmail.com>:
> On 18 oct, 15:12, Ben Morrow <b...@morrow.me.uk> wrote:
> > Quoth Klaus <klau...@gmail.com>:
> >
> > > I am trying to write a Module (its name will be LWP::Chunk) to
> > > read arbitrarily big http-files sequentially in small chunks.
> >
> > This already exists :). See the ->add_handler method of LWP::UserAgent
> > and the :content_cb and :read_size_hint parameters to ->get.
> 
> Yes, I can see the add_handler method in LWP::UserAgent (and I can
> also see the response_data => sub {...} section which, I think is most
> interesting for my purposes):
> 
> >> response_data => sub { my($response,
> >> $ua, $h, $data) = @_; ... }
> >> This handlers is called for each chunk of data received
> >> for the response. The handler might croak to abort the
> >> request.
> >> This handler need to return a TRUE value to be called
> >> again for subsequent chunks for the same request.
> 
> There ** must ** be a loop somewhere deep inside LWP::UserAgent->get()
> that says "...while (read_chunk(:read_size)) { &response_data->(...); } ..."
> 
> At this point, I want to go ** deep ** into the guts of
> LWP::UserAgent->get ( -- that could be inside LWP::UserAgent, inside
> HTTP::Request, inside HTTP::Response, etc... -- ) to find that loop,
> rip out that line that says "read_chunk()" and stick it into my new
> module "LWP::Chunk"

Why? What is wrong with using the existing API?

> I have dived into LWP::UserAgent, HTTP::Request, HTTP::Response, but I
> can't find that elusive
> "...while (read_chunk(:read_size)) { &response_data->(...); } ..."

That loop is in LWP::Protocol::collect, which is called from
LWP::Protocol::http::request (which passes a callback to do the actual
reading).

Ben



------------------------------

Date: Mon, 18 Oct 2010 07:35:21 -0700 (PDT)
From: Klaus <klaus03@gmail.com>
Subject: Re: reading LWP in chunks
Message-Id: <03b109e7-41b3-4471-95cb-72a6e7833fda@i21g2000yqg.googlegroups.com>

> > > Quoth Klaus <klau...@gmail.com>:
> > > > I am trying to write a Module (its name will be LWP::Chunk) to
> > > > read arbitrarily big http-files sequentially in small chunks.

> Quoth Klaus <klau...@gmail.com>:
> > At this point, I want to go ** deep ** into the guts of LWP::UserAgent-
> > >get ( -- that could be inside LWP::UserAgent, inside HTTP::Request,
> > inside HTTP::Response, etc... -- ) to find that loop, rip out that
> > line that says "read_chunk()" and stick it into my new module
> > "LWP::Chunk")

On 18 oct, 16:09, Ben Morrow <b...@morrow.me.uk> wrote:
> Why? What is wrong with using the existing API?

Nothing, it's just yet another TIMTOWTDI for reading LWP. I personally
prefer reading in chunks using my own while-construct, while others
might prefer a simple call to LWP::UserAgent->get(...) using
callbacks.

I simply want to provide an LWP-module for those who prefer writing
their own while-construct, but to be honest, apart from myself, I
don't know how many there are amongst the perl user community who
prefer writing their own while-construct.

> > I have dived into LWP::UserAgent, HTTP::Request, HTTP::Response, but I
> > can't find that elusive "...while (read_chunk(:read_size))
> > { &response_data->(...); } ..."
>
> That loop is in LWP::Protocol::collect, which is called from
> LWP::Protocol::http::request (which passes a callback to do the actual
> reading).

Thank you very much for this nugget of information. This is most
useful for my future module LWP::Chunk.



------------------------------

Date: Mon, 18 Oct 2010 10:28:14 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: reading LWP in chunks
Message-Id: <181020101028141623%jimsgibson@gmail.com>

In article
<03b109e7-41b3-4471-95cb-72a6e7833fda@i21g2000yqg.googlegroups.com>,
Klaus <klaus03@gmail.com> wrote:

> > > > Quoth Klaus <klau...@gmail.com>:
> > > > > I am trying to write a Module (its name will be LWP::Chunk) to
> > > > > read arbitrarily big http-files sequentially in small chunks.
> 
> > Quoth Klaus <klau...@gmail.com>:
> > > At this point, I want to go ** deep ** into the guts of LWP::UserAgent-
> > > >get ( -- that could be inside LWP::UserAgent, inside HTTP::Request,
> > > inside HTTP::Response, etc... -- )  to find that loop, rip out that
> > > line that says "read_chunk()" and stick it into my new module
> > > "LWP::Chunk")
> 
> On 18 oct, 16:09, Ben Morrow <b...@morrow.me.uk> wrote:
> > Why? What is wrong with using the existing API?
> 
> Nothing, it's just yet another TIMTOWTDI for reading LWP. I personally
> prefer reading in chunks using my own while-construct, while others
> might prefer a simple call to LWP::UserAgent->get(...) using
> callbacks.

I believe Ben is suggesting that you implement your LWP::Chunk module
and its while-construct by using the existing add_handler method of
LWP::UserAgent, rather than extracting the code from there and putting
it into your own module. This is known as adding a "layer", and is
commonly done to make a complicated interface easier to use for some
common purpose.

The advantage of using an existing module is that you take advantage of
work already done. You also get to use any improvements or bug fixes of
the module you are using.

The disadvantage is that you are then dependent upon that module. If
the API ever changes, you will need to change your module. If support
for the module is ever dropped, you may need to rewrite your own
module. However, since LWP::UserAgent is a widely-used, mature module,
neither of these circumstances is likely.

> I simply want to provide an LWP-module for those who prefer writing
> their own while-construct, but to be honest, apart from myself, I
> don't know how many there are amongst the perl user community who
> prefer writing their own while-construct.

Nobody is suggesting that you do not write a module, just that you use
existing code as is, rather than extracting it and copying it.
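Concretely, such a layer can be little more than a wrapper sub over
the callback interface Ben pointed out (the sub name and signature
here are hypothetical, not an existing API):

```perl
use strict;
use warnings;
use LWP::UserAgent;

# A thin layer over LWP::UserAgent's documented callback API: call
# $on_chunk for every chunk of roughly $csize bytes, and return the
# HTTP::Response so the caller can check for errors.
sub get_in_chunks {
    my ( $url, $csize, $on_chunk ) = @_;
    my $ua = LWP::UserAgent->new;
    return $ua->get( $url,
        ':read_size_hint' => $csize,
        ':content_cb'     => sub { $on_chunk->( $_[0] ) },
    );
}
```

Improvements and bug fixes in LWP then come along for free, which is
exactly the advantage Jim describes.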

-- 
Jim Gibson


------------------------------

Date: Mon, 18 Oct 2010 00:16:56 -0700 (PDT)
From: luser-ex-troll <mijoryx@yahoo.com>
Subject: Re: where to install cpan modules
Message-Id: <3836c6ee-7a1c-446c-a5bf-c203f9375ded@i5g2000yqe.googlegroups.com>

On Oct 16, 4:16 pm, John Smith <j...@example.invalid> wrote:
> John Smith wrote:
> > $ pwd
> > /usr/share/perl
> > $ cd ..
> > $ ls -ald perl
> > drwxr-xr-x 3 root root 4096 2009-04-20 07:59 perl
> > $ sudo chmod o=rw perl
> > [sudo] password for ron:
> > $ ls -ald perl
> > drwxr-xrw- 3 root root 4096 2009-04-20 07:59 perl
> > $
>
> > So now I have write privileges, but now that I do, when I run perl -V,
> > my interpreter can't discover the meaning of strict, so I think this
> > evidences the above alluded-to pooch-screw:
>
> > $ perl -V
> > Can't locate strict.pm in @INC (@INC contains: /etc/perl
> > /usr/local/lib/perl/5.10.0 /usr/local/share/perl/5.10.0 /usr/lib/perl5
> > /usr/share/perl5 /usr/lib/perl/5.10 /usr/share/perl/5.10
> > /usr/local/lib/site_perl .) at /usr/lib/perl/5.10/Config.pm line 5.
> > BEGIN failed--compilation aborted at /usr/lib/perl/5.10/Config.pm line 5.
> > Compilation failed in require.
> > BEGIN failed--compilation aborted.
> > $
>
> > So, what great thing have I done here?
>
> I believe I have changed the one thing I did back to its original state:
>
>   pwd
> /usr/share
> # ls -ald perl
> drwxr-xr-x 3 root root 4096 2009-04-20 07:59 perl
>
> (compares to:)
>  > drwxr-xr-x 3 root root 4096 2009-04-20 07:59 perl
>
> So Ben asks me what happened when I changed this permission.  I've got a
> bit of a learning curve here, and I've been trying to figure out what my
> OS is telling me, but I have trouble with some of it.
>
> First, how am I to understand the output of this command?
>
> # ls -a perl
> .  ..  5.10  5.10.0
> # man ls
>
> (yes I manned it already)
>
> Second, what is the difference between these two?
> # ls -l perl
> total 12
> lrwxrwxrwx  1 root root     6 2009-12-13 18:06 5.10 -> 5.10.0
> drwxr-xr-x 50 root root 12288 2010-10-08 23:11 5.10.0

5.10 is a symbolic link (a shortcut) to 5.10.0
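For completeness, the same thing can be demonstrated from Perl itself
(the /tmp/symlink-demo directory here is a made-up scratch location):

```perl
use strict;
use warnings;
use File::Path qw(make_path);

my $dir = '/tmp/symlink-demo';        # hypothetical scratch directory
make_path("$dir/5.10.0");

unlink "$dir/5.10";                   # remove a stale link, if any
symlink '5.10.0', "$dir/5.10"
    or die "symlink failed: $!";

print readlink("$dir/5.10"), "\n";    # the link's target: 5.10.0
print -d "$dir/5.10" ? "directory\n" : "not a directory\n";
```

Note that -d follows the link, which is why 5.10 behaves exactly like
the real 5.10.0 directory in @INC paths.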



------------------------------

Date: Mon, 18 Oct 2010 13:36:39 +0100
From: Justin C <justin.1010@purestblue.com>
Subject: Re: where to install cpan modules
Message-Id: <njrto7-3n5.ln1@zem.masonsmusic.co.uk>

On 2010-10-17, Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:
> On 2010-10-16, John Smith <john@example.invalid> wrote:
>> Einstein defined insanity as doing the same thing repeatedly and 
>> expecting different results.
>
> Each time I try to define insanity, I get different results...  Now what?

You must be doing it wrong, please post an example of your code :)

#!/usr/bin/perl

use strict;
use warnings;

my $insanity = "true";

die "\$insanity undefined" unless defined $insanity;

__END__


   Justin.


------------------------------

Date: Mon, 18 Oct 2010 20:17:26 +0300
From: Eric Pozharski <whynot@pozharski.name>
Subject: Re: why does this happen?
Message-Id: <slrnibp096.3sl.whynot@orphan.zombinet>

with <slrnibm07u.thh.hjp-usenet2@hrunkner.hjp.at> Peter J. Holzer wrote:
> On 2010-10-17 07:13, Eric Pozharski <whynot@pozharski.name> wrote:
>> In fact, I don't have a clear understanding what that Save-As thingy
>> is.  I have ':w' (writes the buffer back), ':w >>filename' (writes
>> the buffer with other filename), and ':w !cmd' (pipes the buffer to
>> command).  Which one is Save-As?  All of them are ':write' for me.
> You forgot ':w filename', which is likely what the OP meant if he's
> using a vi clone. If he's using vim in particular, he might have meant
> ':saveas filename' which is like ':w filename' followed by ':r
> filename'.

Me's corrected.

> I was taking "Save as" (did he even use that term? I don't remember)

And this time, our prize goes toooooooooooooo...  Uri!!!!

	Message-ID: <8739sqsh48.fsf@quad.sysarch.com>

Fsck, 20 days before!

> as a generic name for the editor's "save the current buffer under a
> different file name" functionality. Since that functionality is common
> to just about every editor I've used in the past 25 years I don't see
> much value in bringing editor-specific commands into the discussion.

I'll better skip redescribing differences between ':w' and ':sav'.  The
only thing I have to note is that the 2nd command isn't ':r' it's
':filetype detect'.  And let me focus on this point.

I've thought a lot about this, and I've come to the conclusion that
Save-As never makes files executable by itself.  (Sorry for bringing
back the toy-OS strawman again.)  In the toy OS, the file type comes
from the suffix.  Whenever a filetype is associated with some
executable, we could describe such a file as executable (someone could
act on that by double-clicking).  Thus '*.txt' is a kind of
executable.  However, for the purpose of this quarrel, we'll stay away
from that.  Let's say that if a file's suffix ends up in 'query.dll'
(I've checked, '*.exe' ends up there) (pity, I forgot to check where
'*.dll' ends up), then it's executable; if not (there are many other
ways (such as a legacy-aware system, btw)), then it's not.

Thus, if someone Save-As-es to an executable type with whatever editor
(really, whatever; it doesn't matter whether that's notepad, oowriter
or edlin), then that user makes the file executable *explicitly*.  And
vice versa.  However, if the filetype isn't changed, then the
just-created file takes its type from the current one (most probably
not executable).

I think that if the OP had double-clicked the problematic file then,
most probably, it would have been executed.  There are mime-types for
this, aren't there?  Thus it's the shell (probably whatever shell)
that doesn't give a fsck about mime-types and pays attention to the
executable bit only.  (Not that the shell couldn't be subverted into
that perverted behaviour.  And you know, it most probably will be.)

However (and I'll check this tomorrow), what if '*.bat' doesn't go to
'query.dll'?  Then we have to agree that '*.txt' is executable too.

In short, my point is that executability doesn't spring from a corner.
It's set explicitly, manually or not.  What executability *is*,
though, is the shell's business, be it a console shell or a GUI.
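On Unix, at least, the point is easy to demonstrate from Perl: merely
writing a file never turns the executable bit on; chmod sets it
explicitly (the file name below is made up):

```perl
use strict;
use warnings;

my $file = '/tmp/exec-demo.pl';       # hypothetical scratch file
open my $fh, '>', $file or die $!;    # the editor's "Save-As" moment
print $fh qq{#!/usr/bin/perl\nprint "hi\\n";\n};
close $fh or die $!;

printf "mode before: %04o\n", ( stat $file )[2] & 07777;  # e.g. 0644
chmod 0755, $file or die $!;          # executability set explicitly
printf "mode after:  %04o\n", ( stat $file )[2] & 07777;  # 0755
```

The mode before chmod is 0666 minus the umask, never executable; only
the explicit chmod (by hand, or by an install tool doing it for you)
makes it so.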


-- 
Torvalds' goal for Linux is very simple: World Domination
Stallman's goal for GNU is even simpler: Freedom


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 3180
***************************************

