[31630] in Perl-Users-Digest
Perl-Users Digest, Issue: 2889 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Mar 26 16:09:28 2010
Date: Fri, 26 Mar 2010 13:09:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Fri, 26 Mar 2010 Volume: 11 Number: 2889
Today's topics:
equivalent <hendedav@gmail.com>
Re: equivalent <jurgenex@hotmail.com>
Re: File formatting <RedGrittyBrick@spamweary.invalid>
Re: File formatting <tadmc@seesig.invalid>
Re: File formatting <jurgenex@hotmail.com>
Re: File formatting <cartercc@gmail.com>
logic question for text file updates <cartercc@gmail.com>
Re: logic question for text file updates sln@netherlands.com
Re: logic question for text file updates <cartercc@gmail.com>
Re: logic question for text file updates sln@netherlands.com
Re: logic question for text file updates <ben@morrow.me.uk>
Re: Perl / cgi / include file a la #include <noemail@nothere.com>
Re: Perl / cgi / include file a la #include <noemail@nothere.com>
Re: Perl / cgi / include file a la #include <jurgenex@hotmail.com>
Re: Perl / cgi / include file a la #include <uri@StemSystems.com>
Re: Perl / cgi / include file a la #include <ben@morrow.me.uk>
Re: Perl / cgi / include file a la #include <cartercc@gmail.com>
Re: Perl / cgi / include file a la #include <ben@morrow.me.uk>
Re: s///gsi; with a wildcard <glex_no-spam@qwest-spam-no.invalid>
socket transmission <ron.eggler@gmail.com>
Re: socket transmission (Jens Thoms Toerring)
using Print << marker with require statement? <noemail@nothere.com>
Re: using Print << marker with require statement? sln@netherlands.com
Re: using Print << marker with require statement? sln@netherlands.com
Re: using Print << marker with require statement? <ben@morrow.me.uk>
Re: Why doesn't %+ match %- for this regex? sln@netherlands.com
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Fri, 26 Mar 2010 12:00:25 -0700 (PDT)
From: Dave <hendedav@gmail.com>
Subject: equivalent
Message-Id: <257778be-146a-4612-b2f7-f8e5de9f5846@z11g2000yqz.googlegroups.com>
Gang,
I'm looking for an equivalent in perl to the php syntax:
$data = file_get_contents('php://input');
I've read some posts about IO::File::String and LWP::Simple possibly
being used instead of 'file_get_contents'. The php line accepts raw
input and assigns it to a variable. This input is coming from the
output of another script and not a file. If anyone has any ideas or
modules to look into, please feel free to chime in!
Thanks,
Dave
------------------------------
Date: Fri, 26 Mar 2010 12:22:23 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: equivalent
Message-Id: <542qq59arm9dnufambdfhe9ca5i2j10j64@4ax.com>
Dave <hendedav@gmail.com> wrote:
> I'm looking for an equivalent in perl to the php syntax:
>
>$data = file_get_contents('php://input');
>
>I've read some posts about IO::File::String and LWP::Simple possibly
>being used instead of 'file_get_contents'. The php line accepts raw
>input
I have no idea what raw input is supposed to mean.
> and assigns it to a variable. This input is coming from the
>output of another script and not a file.
Are you trying to capture the output of another program?
'perldoc -q output':
Why can't I get the output of a command with system()?
There are other ways for special circumstances, too, but in general you
want backticks or qx.
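For example, capturing the complete output of another program looks like this (a minimal sketch; the "other script" is simulated here with a perl one-liner so the example is self-contained):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# qx() (backticks) runs a command and returns everything it wrote to
# STDOUT -- roughly what the PHP file_get_contents() call collects.
# $^X is the path of the currently running perl binary.
my $data = qx($^X -e "print 'raw input from the other script'");
die "child exited with status $?" if $?;

print length($data), " bytes captured: $data\n";
```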
jue
------------------------------
Date: Fri, 26 Mar 2010 11:07:40 +0000
From: RedGrittyBrick <RedGrittyBrick@spamweary.invalid>
Subject: Re: File formatting
Message-Id: <4bac957e$0$2536$da0feed9@news.zen.co.uk>
On 26/03/2010 07:46, Ninja Li wrote:
> Hi,
>
> I have a file with two fields, country and city and "|" delimiter.
> Here are the sample formats:
>
> USA | Boston
> USA | Chicago
> USA | Seattle
> Ireland | Dublin
> Britain | London
> Britain | Liverpool
>
> I would like to have the output like the following:
> USA | Boston, Chicago, Seattle
> Ireland | Dublin
> Britain | London, Liverpool
>
> I tried to open the file, use temp variables to store and compare
> the countries and it looks very cumbersome. Is there an easier way to
> tackle this?
>
I'd use split and push onto an arrayref stored in a hash. Or just append
to a hash value with an initial comma if the hash key exists.
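That second variant can be sketched in a few lines (the DATA block stands in for the real file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %out;
while (my $line = <DATA>) {
    chomp $line;
    my ($country, $city) = split / \| /, $line;
    # Append with ", " only when we have already seen this country.
    $out{$country} = exists $out{$country} ? "$out{$country}, $city" : $city;
}
print "$_ | $out{$_}\n" for sort keys %out;

__DATA__
USA | Boston
USA | Chicago
Ireland | Dublin
```

This prints each country once, e.g. "USA | Boston, Chicago".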
--
RGB
------------------------------
Date: Fri, 26 Mar 2010 06:25:01 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: File formatting
Message-Id: <slrnhqp64r.5t1.tadmc@tadbox.sbcglobal.net>
Ninja Li <nickli2000@gmail.com> wrote:
> I have a file with two fields, country and city and "|" delimiter.
You have a " | " *separator*, not a delimiter.
> I would like to have the output like the following:
> USA | Boston, Chicago, Seattle
> Ireland | Dublin
> Britain | London, Liverpool
-----------------------
#!/usr/bin/perl
use warnings;
use strict;
my %cities;
while ( <DATA> ) {
chomp;
my($country, $city) = split / \| /;
push @{ $cities{$country} }, $city;
}
foreach my $country ( sort keys %cities ) {
print "$country | ", join(', ', @{ $cities{$country} }), "\n";
}
__DATA__
USA | Boston
USA | Chicago
USA | Seattle
Ireland | Dublin
Britain | London
Britain | Liverpool
-----------------------
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.
------------------------------
Date: Fri, 26 Mar 2010 06:48:29 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: File formatting
Message-Id: <0cepq59slar8hhs5ni6vgrk0u207eo7ckf@4ax.com>
Ninja Li <nickli2000@gmail.com> wrote:
> I have a file with two fields, country and city and "|" delimiter.
>Here are the sample formats:
>
> USA | Boston
> USA | Chicago
> USA | Seattle
> Ireland | Dublin
> Britain | London
> Britain | Liverpool
>
> I would like to have the output like the following:
> USA | Boston, Chicago, Seattle
> Ireland | Dublin
> Britain | London, Liverpool
>
> I tried to open the file, use temp variables to store and compare
>the countries and it looks very cumbersome. Is there an easier way to
>tackle this?
As the cities are obviously grouped, there is no need to construct a
complex data structure. Instead just keep one $current_country and an
array @cities into which you push() all cities for as long as the country
doesn't change. While reading the file line by line, whenever a
line contains a new country just do a
print "$current_country | @cities\n";
then reset $current_country to the new country and initialize @cities
with the first city of that country.
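Something like this (a sketch; the DATA block stands in for the real file, and it relies on the countries being grouped as in the sample):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my ($current_country, @cities);

sub flush_group {
    print "$current_country | ", join(', ', @cities), "\n" if @cities;
}

while (my $line = <DATA>) {
    chomp $line;
    my ($country, $city) = split / \| /, $line;
    if (defined $current_country && $country ne $current_country) {
        flush_group();          # country changed: emit the finished group
        @cities = ();
    }
    $current_country = $country;
    push @cities, $city;
}
flush_group();                  # don't forget the last group

__DATA__
USA | Boston
USA | Chicago
Britain | London
```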
jue
------------------------------
Date: Fri, 26 Mar 2010 08:23:37 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: File formatting
Message-Id: <4f97197f-63de-44fc-a227-d5a2268c524c@k13g2000yqe.googlegroups.com>
On Mar 26, 7:25 am, Tad McClellan <ta...@seesig.invalid> wrote:
> #!/usr/bin/perl
> use warnings;
> use strict;
>
> my %cities;
> while ( <DATA> ) {
>     chomp;
>     my($country, $city) = split / \| /;
>     push @{ $cities{$country} }, $city;
>
> }
>
> foreach my $country ( sort keys %cities ) {
>     print "$country | ", join(', ', @{ $cities{$country} }), "\n";
>
> }
>
>
> __DATA__
> USA | Boston
> USA | Chicago
> USA | Seattle
> Ireland | Dublin
> Britain | London
> Britain | Liverpool
> -----------------------
I think this is the ideal solution. You might want to check how
city names with spaces (like 'New York') look in the array.
The only thing I would add is that your data structure (in memory)
looks like this:
%cities = (
USA => [qw(Boston Chicago Seattle)],
Ireland => [qw(Dublin)],
Britain => [qw(London Liverpool)],
);
CC.
------------------------------
Date: Fri, 26 Mar 2010 11:14:36 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: logic question for text file updates
Message-Id: <962eeae2-b0b5-498f-85c1-0b5d0a87dd26@f8g2000yqn.googlegroups.com>
We have a CSV source file of many thousands of records, with two
columns, an ID and a status field. It has very recently come to my
attention that occasionally the status of a record will change, with
the change being significant enough that the record must be updated
before the process runs. The update files consist of a small subset,
sometimes a very small subset, of the records in the source file. (The
update file has a number of other fields that can change also, but I'm
only concerned with the status field.)
My first inclination is to open the update file, create a hash with
the ID as the key and the status as value, then open the source file,
read each line, update the line if it exists in the hash, and write
each line to a new output file. However, I can think of several
different ways to do this -- I just don't know which way would be
best. I don't particularly want to read every line and write every
line of a source file when only a few lines (if any) need to be
modified.
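That first inclination can be sketched as a small filter; the demo below uses in-memory handles so it is self-contained, and assumes the two fields are pipe-separated as in the 0059485|Current example mentioned later in the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Copy source lines to the output, replacing the status (2nd field) for
# any ID found in %$new_status; everything else passes through untouched.
sub apply_updates {
    my ($new_status, $in, $out) = @_;
    while (my $line = <$in>) {
        chomp $line;
        my ($id, $status) = split /\|/, $line, 2;
        $status = $new_status->{$id} if exists $new_status->{$id};
        print {$out} "$id|$status\n";
    }
}

# Demo on in-memory handles; real code would open the source file, write
# to a temporary file, and rename it over the original on success.
my $src = "0059485|Current\n0059486|Current\n";
open my $in,  '<', \$src       or die "open: $!";
open my $out, '>', \my $result or die "open: $!";
apply_updates({ '0059486' => 'Withdrawn' }, $in, $out);
print $result;
```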
My second inclination would be to use a database and write an update
query for the records in the update file. But this seems a heavyweight
solution to a lightweight problem -- I would only be using the
database to modify records, not to do any of the things we ordinarily
use databases for.
I've never had to do a small number of updates to a large file before,
and it seems too trivial a task to use a database for. Any suggestions
on a better way to do this?
Thanks, CC.
P.S. - The end product of this process is a data file with
approximately 20 fields, written as comma separated, double quote
delimited text, designed to be imported into Excel and Access by end
users in performance of their duties.
------------------------------
Date: Fri, 26 Mar 2010 11:35:57 -0700
From: sln@netherlands.com
Subject: Re: logic question for text file updates
Message-Id: <mmupq51v4p5ttef08qo2ijnv2t97f02uq2@4ax.com>
On Fri, 26 Mar 2010 11:14:36 -0700 (PDT), ccc31807 <cartercc@gmail.com> wrote:
>We have a csv source file of many thousands of records, with two
>columns, the ID and a
>status field. It has very recently come to my attention that
>occasionally the status of a record will change, with the change being
>significant enough that the record must be updated before the process
>runs. The update files consist of a small subset, sometimes a very
>small subset, of the records in the source file. (The update file has
>a number of other fields that can change also, but I'm only concerned
>with the status field.)
[snip]
You don't have to write a new file back out to disk for a small change.
You could design a disk that can grow or shrink its magnetic material
on the fly, and just insert/remove metal sectors as needed.
But, I think they trashed that idea when they invented
fragmentation capabilities.
To circumvent fragments on a data level, you could re-design
the file record so that a particular field of a record is
fixed width relative to surrounding fields, sufficient enough
to hold the largest variable data that field could possibly
encounter.
And if the field is small enough to accommodate all possible
values, there is not that much "air" involved in relation to
the overall file size.
Since the field is fixed, the offset into the record to the
field can be surmised and added to the location of the last
record end position, allowing you to write a new fixed width
value, guaranteeing not to overwrite the next field in that
record.
-sln
------------------------------
Date: Fri, 26 Mar 2010 12:00:17 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: logic question for text file updates
Message-Id: <a050ff36-6643-424a-9070-50556a9e8991@o30g2000yqb.googlegroups.com>
On Mar 26, 2:35 pm, s...@netherlands.com wrote:
> On Fri, 26 Mar 2010 11:14:36 -0700 (PDT), ccc31807 <carte...@gmail.com> wrote:
[snip]
>
> You don't have to write a new file back out to disk for a small change.
> You could design a disk that can grow or shrink its magnetic material
> on the fly, and just insert/remove metal sectors as needed.
>
> But, I think they trashed that idea when they invented frag-
> mentation capabilities.
>
> To circumvent fragments on a data level, you could re-design
> the file record so that a particular field of a record is
> fixed width relative to surrounding fields, sufficient enough
> to hold the largest variable data that field could possibly
> encounter.
>
> And if the field is small enough to accomodate all possible
> values, there is not that much "air" involved in relation to
> the overall file size.
>
> Since the field is fixed, the offset into the record to the
> field can be surmised and added to the location of the last
> record end position, allowing you to write a new fixed width
> value, guaranteeing not to overwrite the next field in that
> record.
>
> -sln
The key will always be a seven-character integer. The value will
always be a string of fewer than 20 characters. I COULD use a fixed
width format, but my current format (for the source file) is pipe
separated (e.g. 0059485|Current) and all my logic splits input on the
pipe symbol.
The keys are not consecutive, not ordered, and have large skips, i.e.,
for several million records I might have ten thousand records in the
source file which are randomly ordered (is that an oxymoron?).
Treating the source file as an array would require many more array
elements than records in the file.
CC.
------------------------------
Date: Fri, 26 Mar 2010 12:42:17 -0700
From: sln@netherlands.com
Subject: Re: logic question for text file updates
Message-Id: <i42qq5pb7drgvlp02kd8lkemvld81fvbau@4ax.com>
On Fri, 26 Mar 2010 12:00:17 -0700 (PDT), ccc31807 <cartercc@gmail.com> wrote:
>On Mar 26, 2:35 pm, s...@netherlands.com wrote:
>> On Fri, 26 Mar 2010 11:14:36 -0700 (PDT), ccc31807 <carte...@gmail.com> wrote:
[snip]
>> >My first inclination is to open the update file, create a hash with
>> >the ID as the key and the status as value, then open the source file,
>> >read each line, update the line if it exists in the hash, and write
>> >each line to a new output file.
>> >P.S. - The end product of this process is a data file with
>> >approximately 20 fields, written as comma separated, double quote
>> >delimited text, designed to be imported into Excel and Access by end
>> >users in performance of their duties.
>>
>> Since the field is fixed, the offset into the record to the
>> field can be surmised and added to the location of the last
>> record end position, allowing you to write a new fixed width
>> value, guaranteeing not to overwrite the next field in that
>> record.
>>
>
>The key will always be a seven character integer. The value will
>always be a string with fewer than 20 characters. I COULD use a fixed
>width format, but my current format (for the source file) is pipe
>separated (e.g. 0059485|Current) and all my logic splits input on the
>pipe symbol.
>
>The keys are not consecutive, not ordered, and have large skips, i.e.,
>for several million records I might have ten thousand records in the
>source file which are randomly ordered (is that an oxymoron?).
>Treating the source file as an array would require many more array
>elements than records in the file.
>
>CC.
So, if you have a source file, delimited by '|', from which you eventually make
a double-quoted, comma-separated CSV file, you could make that status field fixed
width (what, 20 chars tops?) in the source. When you generate the final CSV
file, just strip whitespace from the beginning and end of the field
before you double-quote it.
Source file:
- fields all dynamic width except status (which is fixed 20 char).
- format
<field1>|<field2>|<field3>|<field4>|<- status, 20 char ->|<field6>|<field7>|<field_last>\n
You know the file position of the previous EOR. Use index() to find the pipe '|' char
of the status field of the current record (4th in the example), add that to the previous
EOR to get the write() position for the new status (if it changed).
To find out whether the status changed, do your split /\|/ to get all the fields, check
the ID/status against the update file, then write out the new "fixed width" status (format it
with printf or something) to the source file.
When it comes time to generate the csv from the source, just trim spaces before you
write it out.
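The in-place write can be sketched like this, using an in-memory file so it is self-contained; the 20-character width and the offsets are illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Overwrite a fixed-width status field in place: seek to the record's
# start plus the field's offset and print exactly $width bytes, so the
# following fields are guaranteed not to be clobbered.
sub patch_status {
    my ($fh, $record_start, $field_offset, $new_status) = @_;
    my $width = 20;
    die "status too wide\n" if length($new_status) > $width;
    seek $fh, $record_start + $field_offset, 0 or die "seek: $!";
    printf {$fh} '%-*s', $width, $new_status;   # left-justified, space-padded
}

my $file = '0059485|' . sprintf('%-20s', 'Current') . "\n";
open my $fh, '+<', \$file or die "open: $!";
patch_status($fh, 0, 8, 'Withdrawn');           # 8 = length of "0059485|"
close $fh;
print $file;   # status now reads Withdrawn, record length unchanged
```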
-sln
------------------------------
Date: Fri, 26 Mar 2010 19:38:03 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: logic question for text file updates
Message-Id: <r1fv77-m601.ln1@osiris.mauzo.dyndns.org>
Quoth ccc31807 <cartercc@gmail.com>:
> On Mar 26, 2:35 pm, s...@netherlands.com wrote:
> > On Fri, 26 Mar 2010 11:14:36 -0700 (PDT), ccc31807 <carte...@gmail.com> wrote:
> > >We have a csv source file of many thousands of records, with two
> > >columns, the ID and a
> > >status field. It has very recently come to my attention that
> > >occasionally the status of a record will change, with the change being
> > >significant enough that the record must be updated before the process
> > >runs. The update files consist of a small subset, sometimes a very
> > >small subset, of the records in the source file. (The update file has
> > >a number of other fields that can change also, but I'm only concerned
> > >with the status field.)
> >
> > >My first inclination is to open the update file, create a hash with
> > >the ID as the key and the status as value, then open the source file,
> > >read each line, update the line if it exists in the hash, and write
> > >each line to a new output file. However, I can think of several
> > >different ways to do this -- I just don't know which way would be
> > >best. I don't particularly want to read every line and write every
> > >line of a source file when only a few lines (if any) need to be
> > >modified.
> >
<snip sln>
>
> The key will always be a seven character integer. The value will
> always be a string with fewer than 20 characters. I COULD use a fixed
> width format, but my current format (for the source file) is pipe
> separated (e.g. 0059485|Current) and all my logic splits input on the
> pipe symbol.
>
> The keys are not consecutive, not ordered, and have large skips, i.e.,
> for several million records I might have ten thousand records in the
> source file which are randomly ordered (is that an oxymoron?).
> Treating the source file as an array would require many more array
> elements than records in the file.
Is there any way of 'blanking' a record? Normal CSV doesn't support
comments, and if you're importing into Excel you can't extend it to do
so; what does Excel do if you give it a file like
one|two|three
||||||||||||||
four|five|six
? If it does something approximately sensible, or if you can find some
other way to get it to ignore a line without changing its length, then
you can
- read the update file(s) into a hash,
- open the source file read/write,
- go through it looking for the appropriate records,
- when you find one, wipe it out without changing the length or
removing the newline,
- add the changed records onto the end of the file, since the
records weren't in order anyway.
You would need to be careful about buffering and the seek pointer.
Probably you would need to tell/seek whenever switching between reading
and writing (I'm not actually sure how PerlIO handles this; opening
files RW is not something I do very often :)). Alternatively you could
use sys{read,write,seek}, but then you would just need to do your own
buffering so I don't think it wins you anything except potential bugs.
It's generally not worth messing around with approaches like this,
though. Rewriting a file of a few MB doesn't exactly take long, and it's
much easier to get right.
Ben
------------------------------
Date: Fri, 26 Mar 2010 13:46:58 -0400
From: me <noemail@nothere.com>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <4jspq5551dfk2d35vhsosj27cv4achh776@4ax.com>
On Fri, 26 Mar 2010 00:10:02 -0700, Jürgen Exner
<jurgenex@hotmail.com> wrote:
>me <noemail@nothere.com> wrote:
>>I have a perl script generating HTML code. I'd like to include some
>>existing HTML files (e.g. header.htm, footer.htm) in the output,
>>while the perl program will generate the majority of the html code.
>
>A simple File::slurp and print() should do nicely.
>
>>I know I can do a "require" statement and pull the files in while the
>>perl program runs. I have a couple questions though:
>
>Aehmmm, probably not. Or how would perl interpret the require()d
>header.htm?
Yes... I discovered a little gotcha in that process. Open / Read works
but I will check out Slurp.
>>1. Is it possible to get the web server to parse the perl output and
>>simply have a standard #include in the generated html that would
>>include the header and footer just before delivery to the client? Or
>>will the server always deliver the program output directly with no
>>further parsing?
>
>That really has nothing to with Perl at all. And I can't see what would
>be stopping you from calling e.g. 'foobar.pl | cpp' instead of just a
>plain 'foobar.pl'
I don't think I can do what you are suggesting as a cgi program.
>>2. If it's possible to put the #include in there (item 1 above), is
>>this a more or less efficient way to deliver the code? In other words,
>>does it make more sense to have the perl code run once and include all
>>that's needed, or will the server be more efficient at doing the
>>include?
>
>I would guess this very much depends upon what server you are using,
>what features it has, and how it is configured. But in general terms
>starting additional processes is always expensive, so you want to avoid
>doing that as much as possible.
>
>On the other hand, maybe all you are looking for is really a simple
>templating system.
Yes... that might work better although "templating" is overkill for
this site as there is only one page that needs to be programmatically
created.
------------------------------
Date: Fri, 26 Mar 2010 13:47:40 -0400
From: me <noemail@nothere.com>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <pospq51qcpfg6gvho1v7cco5vj72pb2tkg@4ax.com>
On Fri, 26 Mar 2010 03:39:42 -0400, "Uri Guttman"
<uri@StemSystems.com> wrote:
>and since my File::Slurp has been mentioned, i might as well bring up
>Template::Simple. as you said, the OP really is asking for a templater
>and doesn't know it. that module can do includes, substitutions and more
>and is very easy to learn and use. don't generate html directly from
>perl as it is noisy and annoying to handle all sorts of things.
>
Thanks, I will look into the Template module.
------------------------------
Date: Fri, 26 Mar 2010 10:59:32 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <57tpq5liu6l5l5mjc1aqj1r45on8jf906j@4ax.com>
me <noemail@nothere.com> wrote:
>On Fri, 26 Mar 2010 00:10:02 -0700, Jürgen Exner
[...]
>>>will the server always deliver the program output directly with no
>>>further parsing?
>>
>>That really has nothing to with Perl at all. And I can't see what would
>>be stopping you from calling e.g. 'foobar.pl | cpp' instead of just a
>>plain 'foobar.pl'
>
>I don't think I can do what you are suggesting as a cgi program.
Again, this has nothing to do with the CGI program.
Somewhere in the web server configuration there must be a description
for what the server is supposed to do when the resource http:/..... is
being requested. If the web server doesn't allow that description to be
a shell command with a pipe, then still you could write a 1/2 line
wrapper script which does nothing but call 'foobar.pl | cpp'.
Now, I am not saying this is a good solution for your original problem,
not at all. But it sure is possible.
jue
------------------------------
Date: Fri, 26 Mar 2010 14:12:39 -0400
From: "Uri Guttman" <uri@StemSystems.com>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <874ok3hrzc.fsf@quad.sysarch.com>
>>>>> "m" == me <noemail@nothere.com> writes:
m> Yes... that might work better although "templating" is overkill for
m> this site as there is only one page that needs to be programmatically
m> created.
'programmatically created' means template. you can do it by breaking up
the html into parts and build it up in the template. you may still need
code to mung the data with perl to work with the template but it will be
easier than generating the html in perl. there are other advantages to
using a template including allowing a designer to do the html/css and a
coder to work on the perl. rarely can one person be very good at both.
uri
--
Uri Guttman ------ uri@stemsystems.com -------- http://www.sysarch.com --
----- Perl Code Review , Architecture, Development, Training, Support ------
--------- Gourmet Hot Cocoa Mix ---- http://bestfriendscocoa.com ---------
------------------------------
Date: Fri, 26 Mar 2010 18:15:28 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <07av77-aiv.ln1@osiris.mauzo.dyndns.org>
Quoth me <noemail@nothere.com>:
> On Fri, 26 Mar 2010 03:39:42 -0400, "Uri Guttman"
> <uri@StemSystems.com> wrote:
>
> >and since my File::Slurp has been mentioned, i might as well bring up
> >Template::Simple. as you said, the OP really is asking for a templater
> >and doesn't know it. that module can do includes, substitutions and more
> >and is very easy to learn and use. don't generate html directly from
> >perl as it is noisy and annoying to handle all sorts of things.
> >
>
> Thanks, I will look into the Template module.
While Template (usually referred to as Template Toolkit, or TT2) is an
extremely useful module, Uri was talking about Template::Simple. T::S is
smaller, faster, and easier to learn, and if it will do what you want it
is a better choice.
Ben
------------------------------
Date: Fri, 26 Mar 2010 11:23:12 -0700 (PDT)
From: ccc31807 <cartercc@gmail.com>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <f3e13128-943c-4c8d-92dc-fe31c39f1d48@g28g2000yqh.googlegroups.com>
On Mar 25, 10:34 pm, me <noem...@nothere.com> wrote:
> I have a perl script generating HTML code. I'd like to include some
> existing HTML files (e.g. header.htm, footer.htm) in the output,
> while the perl program will generate the majority of the html code.
This is a very, very common need.
I create Perl modules which output HTML, and call the functions that
return the HTML in my scripts. Like this:
-----------HTML.pm------------------
package HTML;
...
sub print_header
{
my $title = shift;
print qq(<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>$title</title>
<link type="text/css" rel="stylesheet" href="http://site.css" />
</head>
<body>
);
}
...
---------------index.cgi-----------------
use HTML;
...
HTML::print_header($page);
...
exit(0);
CC
------------------------------
Date: Fri, 26 Mar 2010 19:26:19 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Perl / cgi / include file a la #include
Message-Id: <rbev77-m601.ln1@osiris.mauzo.dyndns.org>
Quoth ccc31807 <cartercc@gmail.com>:
> On Mar 25, 10:34 pm, me <noem...@nothere.com> wrote:
> > I have a perl script generating HTML code. I'd like to include some
> > existing HTML files (e.g. header.htm, footer.htm) in the output,
> > while the perl program will generate the majority of the html code.
>
> This is a very, very common need.
>
> I create Perl modules which output HTML, and call the functions that
> return the HTML in my scripts. Like this:
Don't do that. Use a templating module.
Ben
------------------------------
Date: Fri, 26 Mar 2010 10:42:48 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: s///gsi; with a wildcard
Message-Id: <4bacd5f8$0$89870$815e3792@news.qwest.net>
Jason Carlton wrote:
[...]
>>>>>> The fonts and all that are different for each post; the only
>>>>>> consistency seems to be that it starts with "Normal 0 false false
>>>>>> false", and it ends with a "}".
>>>>>> Would something as simple as this be enough to consistently
>>>>>> remove it?
[...]
> J, should that first "}" be a "{"? Like:
> $str =~ s/Normal 0 false false false[^{]*}//gsi;
Before asking if it's not correct, why not try it?
[^}]* - match everything until it sees '}'
} - include '}' in the pattern. -- without that you'll
have '}' in your results.
I gave example text, and the output it generates; if that
doesn't match what you want, then please be a little
more verbose. Provide a -short- example of the text before,
and what you want the text to be after doing something to it.
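For the record, here is the pattern on a made-up sample (the "Normal 0 false false false ... }" residue described earlier in the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $str = 'keep this Normal 0 false false false mso-style junk} and keep that';

# [^}]* matches up to (but not including) the first '}', and the literal
# '}' at the end of the pattern removes the brace itself as well.
(my $clean = $str) =~ s/Normal 0 false false false[^}]*}//gs;

print "$clean\n";   # 'keep this  and keep that'
```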
------------------------------
Date: Fri, 26 Mar 2010 09:24:54 -0700 (PDT)
From: cerr <ron.eggler@gmail.com>
Subject: socket transmission
Message-Id: <b9aa0cf5-6020-4a78-893a-5523571aeecd@g28g2000prb.googlegroups.com>
Hi there,
I have a piece of code in C and wanted to write a little simulator for
it in Perl. I basically want to be listening on port 16001 and on
reception I want to send back an acknowledgement "POK\0". I thought I
had this accomplished, but for some reason my client doesn't seem to
understand the acknowledgement string.
What I got in perl:
while (1) {
    my $client = $prssock->accept();
    print "Got client\n";
    while (<$client>) {
        print "$_\n";
        $prgstr .= $_;
        print $client "POK";
        print "POK\n";
    }
    print "lost client!\n";
}
and in C:
rc = write(_prg->prg_bus.bus_sock, response, strlen(response)); //
where response = "POK\0"
I'm not sure what I'm doing wrong here... :( Any suggestions? Would it
be because of the null character? I don't think so, eh?
The client side just checks if "POK" exists in the received
string...and for some reason it can't see the one coming back from my
perl script.... that puzzles me..
Thanks,
--
roN
------------------------------
Date: 26 Mar 2010 17:28:38 GMT
From: jt@toerring.de (Jens Thoms Toerring)
Subject: Re: socket transmission
Message-Id: <814966Fq7sU1@mid.uni-berlin.de>
cerr <ron.eggler@gmail.com> wrote:
> I have a piece of code in C and wanted to write a little simulator for
> it in Perl. I basically want to be listening on port 16001 and on
> reception I want to send back an acknowledgement "POK\0". I thought I
> had this accomplished but for some reason my client doesn't seem to
> understand the acknowledgement string.
> What I got in perl:
> while (1) {
>     my $client = $prssock->accept();
>     print "Got client\n";
>     while (<$client>) {
>         print "$_\n";
>         $prgstr .= $_;
>         print $client "POK";
>         print "POK\n";
>     }
>     print "lost client!\n";
> }
> and in C:
> rc = write(_prg->prg_bus.bus_sock, response, strlen(response)); //
> where response = "POK\0"
That means that you just send the three letters 'P', 'O' and 'K'.
> I'm not sure what I'm doing wrong here... :( Any suggestions? Would it
> be because of the null character? I don't think so, eh?
> The client side just checks if "POK" exists in the received
> string...and for some reason it can't see the one coming back from my
> perl script.... that puzzles me..
Well, you're definitely not sending a '\0' character over to
the client. But if that's no problem (since the client doesn't
expect it - but then, how does the client figure out that it
got to the end of a message? Or does it expect exactly three
characters?) there's still the issue of buffering. My guess
is that the data never gets sent over to the client since you
get buffered I/O by default. You could do e.g.
$old_handle = select $client;
$| = 1;
select $old_handle;
or perhaps just
$client->autoflush();
(depending on what you're using) to switch buffering off (it's
the same in C - I/O is line- or fully buffered by default and
this must be switched off if not wanted, using e.g. setvbuf() -
you may have to do that in the C client as well). And if you
decide you want to send the '\0' character anyway, try
print $client "POK\000";
On reading from the client I also see a problem with sending
just the three letters 'P', 'O' and 'K' to your server. Why
should the input operation '<$client>' stop waiting for more
data after that? It only stops when it either finds a '\n' or
the socket gets closed by the other side. If you want to accept
three-character messages you would need to use e.g. read() or
sysread() and tell it explicitly how many chars you want.
I would recommend that you change your approach and instead
have the messages passed between client and server terminated
with a '\n' character; otherwise I guess you will need to do
quite a bit more to get it to work.
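Putting the two fixes together - autoflush on, and newline-terminated messages - a sketch of the server side might look like this (port and variable names taken from the original post; untested against the actual C client):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $prssock = IO::Socket::INET->new(
    LocalPort => 16001,
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

my $prgstr = '';
while (1) {
    my $client = $prssock->accept() or next;
    $client->autoflush(1);          # switch off output buffering
    print "Got client\n";
    while (my $line = <$client>) {  # requires the client to send "\n"
        chomp $line;
        print "$line\n";
        $prgstr .= $line;
        print $client "POK\n";      # newline-terminated acknowledgement
    }
    print "lost client!\n";
}
```

The C side would then terminate its messages with '\n' as well, and read up to a '\n' when waiting for the acknowledgement.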
Regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
------------------------------
Date: Fri, 26 Mar 2010 14:37:27 -0400
From: me <noemail@nothere.com>
Subject: using Print << marker with require statement?
Message-Id: <62tpq5p2cenlnfcgk6q8dqbtbgoc6m67fq@4ax.com>
A noob question about using a require statement. I tried including a
file that looks like this:
print << "endOfText";
Sample Text
endOfText
This works fine in the main program but if I put it in a separate file
and include it with a require statement, it fails with the message
"Can't find string terminator "endOfText" anywhere before EOF at
test-require.pl line 1.
There must be some Perl subtlety that I am missing.
Thanks,
------------------------------
Date: Fri, 26 Mar 2010 11:47:52 -0700
From: sln@netherlands.com
Subject: Re: using Print << marker with require statement?
Message-Id: <020qq5p38fsjotrtd41i0h3ae3nc14eii8@4ax.com>
On Fri, 26 Mar 2010 14:37:27 -0400, me <noemail@nothere.com> wrote:
>A noob question about using a require statement. I tried including a
>file that looks like this:
>
> print << "endOfText";
> Sample Text
> endOfText
>
>This works fine in the main program but if I put it in a separate file
>and include it with a require statement, it fails with the message
>"Can't find string terminator "endOfText" anywhere before EOF at
>test-require.pl line 1.
>
>There must be some Perl subtlety that I am missing.
>
>Thanks,
Try this; it might be looking for a newline-embedded marker,
i.e. "\nendOfText\n":
use strict;
use warnings;
print << "endOfText";
Sample Text
endOfText
endOfText
__END__
-sln
------------------------------
Date: Fri, 26 Mar 2010 12:00:02 -0700
From: sln@netherlands.com
Subject: Re: using Print << marker with require statement?
Message-Id: <gq0qq51n66ovjf6rchq8ave84tovr3m9bk@4ax.com>
On Fri, 26 Mar 2010 14:37:27 -0400, me <noemail@nothere.com> wrote:
>A noob question about using a require statement. I tried including a
>file that looks like this:
>
> print << "endOfText";
> Sample Text
> endOfText
>
>This works fine in the main program but if I put it in a separate file
>and include it with a require statement, it fails with the message
>"Can't find string terminator "endOfText" anywhere before EOF at
>test-require.pl line 1.
>
>There must be some Perl subtlety that I am missing.
>
>Thanks,
It's looking for a newline-delimited terminator string.
In your body it is
\n' endOfText'\n
If you want to leave it that way, then put
any body spaces after the first newline and
before the last newline:
print << " endOfText";
Sample Text
endOfText
-sln
------------------------------
Date: Fri, 26 Mar 2010 19:40:52 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: using Print << marker with require statement?
Message-Id: <47fv77-m601.ln1@osiris.mauzo.dyndns.org>
Quoth me <noemail@nothere.com>:
> A noob question about using a require statement. I tried including a
> file that looks like this:
>
> print << "endOfText";
> Sample Text
> endOfText
>
> This works fine in the main program but if I put it in a separate file
> and include it with a require statement, it fails with the message
> "Can't find string terminator "endOfText" anywhere before EOF at
> test-require.pl line 1.
>
> There must be some Perl subtlety that I am missing.
Works for me, so there must be something you're not telling us. I
presume your example above is indented for the benefit of Usenet, and
'endOfText' is actually at the start of the line? It needs to be.
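A short self-contained illustration of the requirement (assigning to a variable rather than printing, so the result is easy to check):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The terminator must appear at the very start of its line
# with the plain  << "endOfText"  form:
my $text = << "endOfText";
Sample Text
endOfText

print $text;   # prints "Sample Text\n"

# If the terminator were indented, e.g.
#     endOfText
# perl would keep scanning past it and die with
# "Can't find string terminator ... anywhere before EOF".
```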
If you are trying to use this for templating, there are better
approaches. Search the group (or search.cpan.org) for 'template' to get
some suggestions.
Ben
------------------------------
Date: Fri, 26 Mar 2010 05:00:31 -0700
From: sln@netherlands.com
Subject: Re: Why doesn't %+ match %- for this regex?
Message-Id: <i58pq55qg1ksfti6r64ctiuc5pk0nltbtf@4ax.com>
On Thu, 25 Mar 2010 10:24:35 -0400, Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> wrote:
>I'm doing a match using named captures, and the results are puzzling. The
>match succeeds, and %- has what I expect, but %+ is empty. I interpolate
>the same variable in another pattern and %+ is set as I expect.
>
>The relevant code and output are:
There is no relevant code here. Without complete variable information
(i.e. running code) its meaning can't even be extrapolated, let alone
what your objective is.
-sln
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 2889
***************************************