[32855] in Perl-Users-Digest
Perl-Users Digest, Issue: 4123 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sat Jan 25 23:05:09 2014
Date: Fri, 24 Jan 2014 08:09:05 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Fri, 24 Jan 2014 Volume: 11 Number: 4123
Today's topics:
[OT] sys call length limitation <sun_tong_001@users.sourceforge.net>
Re: [OT] sys call length limitation (Tim McDaniel)
Re: [OT] sys call length limitation <sun_tong_001@users.sourceforge.net>
Re: [OT] sys call length limitation <ben@morrow.me.uk>
Re: [OT] sys call length limitation <hjp-usenet3@hjp.at>
Re: [OT] sys call length limitation <sun_tong_001@users.sourceforge.net>
Re: [OT] sys call length limitation <rweikusat@mobileactivedefense.com>
Re: [OT] sys call length limitation <rweikusat@mobileactivedefense.com>
Re: [OT] sys call length limitation <gamo@telecable.es>
Re: [OT] sys call length limitation <rweikusat@mobileactivedefense.com>
Re: [OT] sys call length limitation <ben@morrow.me.uk>
Re: [OT] sys call length limitation <rweikusat@mobileactivedefense.com>
Re: Regex replacement via external command <rweikusat@mobileactivedefense.com>
Re: Regex replacement via external command <rweikusat@mobileactivedefense.com>
Re: Regex replacement via external command <rweikusat@mobileactivedefense.com>
Re: Replacement for CGI.pm (Marc Espie)
Re: Replacement for CGI.pm <rweikusat@mobileactivedefense.com>
SEO Company Hyderabad rahul.webcolortech@gmail.com
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Thu, 23 Jan 2014 05:46:59 GMT
From: * Tong * <sun_tong_001@users.sourceforge.net>
Subject: [OT] sys call length limitation
Message-Id: <n12Eu.321366$Uv.259062@fx30.iad>
[Sorry that this is getting OT] because I'm thinking the easiest way to
solve my problem is just to raise the "ceiling" -- recompile my kernel.
On Wed, 22 Jan 2014 06:25:18 +0000, Tim McDaniel wrote:
> To be more precise, not OK within *your* Perl. On the system I'm on at
> the moment, it works.
Hi Tim, did you compile your kernel yourself? How did you get it to
accept such huge arguments?
On Wed, 22 Jan 2014 14:55:43 +0000, Rainer Weikusat wrote:
> The perl-variant can deal with at most 131,067 bytes (getconf ARG_MAX
> returns the same value) and strace -f perl ... shows that
>
> [pid 2123] execve("/bin/sh", ["sh", "-c", "echo
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;"...], [/* 26 vars */]) = -1 E2BIG (Argument
> list too long)
>
> see also
>
> "Limits on size of arguments and environment" section in the Linux
> execve(2) manpage, in particular,
>
>     Additionally, the limit per string is 32 pages (the kernel
>     constant MAX_ARG_STRLEN)
As per http://lists.gnu.org/archive/html/bug-make/2009-07/msg00012.html,
,-----
| Linux >= 2.6.23 has removed the in-kernel hardwired command line length
| limit[3]. So, once stack size has been increased sufficiently with
|
| ulimit -s 32768
|
| or so, one could hope to use longer commands in makefiles. However,
| there is also a new limit on the length of each individual command line
| argument[4], MAX_ARG_STRLEN (131072).
`-----
[3] http://www.in-ulm.de/~mascheck/various/argmax/#linux
[4] http://www.in-ulm.de/~mascheck/various/argmax/#maximum_number
On reading [3] and [4], it seems that ARG_MAX is not used in the kernel
code itself pre-2.6.23. Post Linux 2.6.23, ARG_MAX is not hardcoded any
more. But "since 2.6.23, one argument must not be longer than
MAX_ARG_STRLEN (131072)". This confuses me. So Tim, did you touch
ARG_MAX at all when compiling your kernel yourself? How about
MAX_ARG_STRLEN?
All in all, how does one recompile the Linux kernel so as to increase
the maximum size of command-line arguments?
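(For reference, the relevant limits can at least be queried at runtime before touching the kernel; a sketch, with the MAX_ARG_STRLEN figure assumed from the execve(2) manpage rather than queried, since the kernel does not export it:)

```perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_ARG_MAX _SC_PAGESIZE);

# total space available for argv + environment strings
my $arg_max = sysconf(_SC_ARG_MAX);
print "ARG_MAX: $arg_max bytes\n";

# per-string limit on Linux >= 2.6.23 is MAX_ARG_STRLEN = 32 pages;
# this figure comes from the execve(2) manpage, it cannot be queried
my $page = sysconf(_SC_PAGESIZE);
print "MAX_ARG_STRLEN (assumed): ", 32 * $page, " bytes\n";
```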
Thanks
------------------------------
Date: Thu, 23 Jan 2014 06:02:25 +0000 (UTC)
From: tmcd@panix.com (Tim McDaniel)
Subject: Re: [OT] sys call length limitation
Message-Id: <lbqb9g$erj$1@reader1.panix.com>
In article <n12Eu.321366$Uv.259062@fx30.iad>,
* Tong * <sun_tong_001@users.sourceforge.net> wrote:
>[Sorry that this is getting OT] because I'm thinking the easiest way
>to solve my problem is just to raise the "ceiling" -- recompile my
>kernel.
It strikes me that that solution is like finding that it's too hard to
push a pile of bricks down the road, and fixing it by getting more
people to push, when you'd be much better advised to put them on a
wagon. I suggest using temp file(s) and/or a pipe (at most one, to
avoid deadlock, as already mentioned) to pass data to and from an
external process.
>On Wed, 22 Jan 2014 06:25:18 +0000, Tim McDaniel wrote:
>> To be more precise, not OK within *your* Perl. On the system I'm
>> on at the moment, it works.
>
>Hi Tim, did you compile your kernel yourself? How did you get it to
>accept such huge arguments?
I didn't do anything. Panix, my ISP, the first UNIX-based ISP in New
York City, has command-line shells for its users. Whether they run
stock NetBSD 6.1.2 or have modified it, I don't know.
--
Tim McDaniel, tmcd@panix.com
------------------------------
Date: Thu, 23 Jan 2014 14:16:23 GMT
From: * Tong * <sun_tong_001@users.sourceforge.net>
Subject: Re: [OT] sys call length limitation
Message-Id: <Xu9Eu.285994$Wm1.243108@fx17.iad>
On Thu, 23 Jan 2014 06:02:25 +0000, Tim McDaniel wrote:
>>[Sorry that this is getting OT] because I'm thinking the easiest way to
>>solve my problem is just to raise the "ceiling" -- recompile my kernel.
>
> It strikes me that that solution is like finding that it's too hard to
> push a pile of bricks down the road, and fixing it by getting more
> people to push, when you'd be much better advised to put them on a
> wagon. I suggest using temp file(s) and/or a pipe (at most one, to
> avoid deadlock, as already mentioned) to pass data to and from an
> external process.
True, but that's beyond me, and even if it is doable, it'll be more than
10x more complicated than merely:
s,(x+),`echo $1 | wc -c`,eg
From the maintenance point of view, I think we have only one clear winner,
especially since this is only a personal task -- KISS is paramount for me.
------------------------------
Date: Thu, 23 Jan 2014 14:59:47 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: [OT] sys call length limitation
Message-Id: <3cn7ra-s6a.ln1@anubis.morrow.me.uk>
Quoth * Tong * <sun_tong_001@users.sourceforge.net>:
> On Thu, 23 Jan 2014 06:02:25 +0000, Tim McDaniel wrote:
>
> >>[Sorry that this is getting OT] because I'm thinking the easiest way to
> >>solve my problem is just to raise the "ceiling" -- recompile my kernel.
> >
> > It strikes me that that solution is like finding that it's too hard to
> > push a pile of bricks down the road, and fixing it by getting more
> > people to push, when you'd be much better advised to put them on a
> > wagon. I suggest using temp file(s) and/or a pipe (at most one, to
> > avoid deadlock, as already mentioned) to pass data to and from an
> > external process.
>
> True, but that's beyond me, and even if it is doable, it'll be more
> than 10x more complicated than merely:
>
> s,(x+),`echo $1 | wc -c`,eg
If that is actually all you're doing, then
s,(x+),length $1,eg
Otherwise you will have to explain a bit more about how the external
program is invoked. If it's just a Unix filter (input on stdin, output
on stdout) then use tempfiles:
use File::Slurp qw/read_file write_file/;
use File::Temp qw/tempfile/;

sub filter {
    my ($data) = @_;

    my ($IN, $in) = tempfile;
    my ($OUT, $out) = tempfile;

    write_file $IN, $data;
    system "wc -c <$in >$out";
    return read_file $OUT;
}

s,(x+),filter $1,eg
Ben
------------------------------
Date: Thu, 23 Jan 2014 22:29:15 +0100
From: "Peter J. Holzer" <hjp-usenet3@hjp.at>
Subject: Re: [OT] sys call length limitation
Message-Id: <slrnle32db.j5k.hjp-usenet3@hrunkner.hjp.at>
On 2014-01-23 14:16, * Tong * <sun_tong_001@users.sourceforge.net> wrote:
> On Thu, 23 Jan 2014 06:02:25 +0000, Tim McDaniel wrote:
>> It strikes me that that solution is like finding that it's too hard to
>> push a pile of bricks down the road, and fixing it by getting more
>> people to push, when you'd be much better advised to put them on a
>> wagon. I suggest using temp file(s) and/or a pipe (at most one, to
>> avoid deadlock, as already mentioned) to pass data to and from an
>> external process.
>
> True, but that's beyond me, and even if it is doable, it'll be more
> than 10x more complicated than merely:
>
> s,(x+),`echo $1 | wc -c`,eg
>
> From the maintenance point of view, I think we have only one clear winner,
I think we have a clear loser. This will break as soon as $1 contains
some characters which have a special meaning for the shell. In your
original posting you called the variable "httpbody" - this can of course
be anything, but it seems likely that it's html or xml - which typically
contains "<", ">", "&", ";" as well as single and/or double quotes,
which makes quoting the stuff "interesting".
> especially this is only a personal task -- KISS is paramount for me.
IMHO writing a few lines more is a lot simpler than fixing this when it
inevitably breaks.
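To illustrate (a contrived sketch): the naive backticks hand the data to a shell, which will happily execute it, while a list-form pipe open never involves a shell at all:

```perl
use strict;
use warnings;

my $body = q{a; echo pwned};   # stand-in for HTML full of metacharacters

# naive version: the shell parses $body, so it runs TWO commands
# instead of counting one 13-byte string
my $naive = `echo $body | wc -c`;
print "naive: $naive";         # "a\n6\n", not "14\n"

# safe version: list-form pipe open, no shell anywhere, wc gets the
# bytes verbatim on its stdin
open my $wc, '|-', 'wc', '-c' or die "cannot start wc: $!";
print $wc $body;
close $wc;                     # wc prints 13 to our stdout
```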
hp
--
 _ | Peter J. Holzer | The curse of electronic word processing:
|_|_) | | you keep polishing away at your text until
| | | hjp@hjp.at | the sentence parts of the sentence no longer
__/ | http://www.hjp.at/ | fits together. -- Ralph Babel
------------------------------
Date: Fri, 24 Jan 2014 00:40:52 GMT
From: * Tong * <sun_tong_001@users.sourceforge.net>
Subject: Re: [OT] sys call length limitation
Message-Id: <oEiEu.295593$O25.93045@fx21.iad>
On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>> s,(x+),`echo $1 | wc -c`,eg
>
> . . . If it's just a Unix filter (input on stdin, output
> on stdout) then use tempfiles:
>
> use File::Slurp qw/read_file write_file/;
> use File::Temp qw/tempfile/;
>
> sub filter {
> my ($data) = @_;
>
> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>
> write_file $IN, $data; system "wc -c <$in >$out"; return
> read_file $OUT;
> }
>
> s,(x+),filter $1,eg
Nice, neat, clear & elegant. We have a winner.
Thanks
------------------------------
Date: Fri, 24 Jan 2014 01:55:19 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: [OT] sys call length limitation
Message-Id: <87zjmmcfeg.fsf@sable.mobileactivedefense.com>
* Tong * <sun_tong_001@users.sourceforge.net> writes:
> On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>
>>> s,(x+),`echo $1 | wc -c`,eg
>>
>> . . . If it's just a Unix filter (input on stdin, output
>> on stdout) then use tempfiles:
>>
>> use File::Slurp qw/read_file write_file/;
>> use File::Temp qw/tempfile/;
>>
>> sub filter {
>> my ($data) = @_;
>>
>> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>>
>> write_file $IN, $data; system "wc -c <$in >$out"; return
>> read_file $OUT;
>> }
>>
>> s,(x+),filter $1,eg
>
> Nice, neat, clear & elegant. We have a winner.
Make that "Oh my God I got it!".
------------------------------
Date: Fri, 24 Jan 2014 02:09:21 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: [OT] sys call length limitation
Message-Id: <87vbxacer2.fsf@sable.mobileactivedefense.com>
Rainer Weikusat <rweikusat@mobileactivedefense.com> writes:
> * Tong * <sun_tong_001@users.sourceforge.net> writes:
>> On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>>
>>>> s,(x+),`echo $1 | wc -c`,eg
>>>
>>> . . . If it's just a Unix filter (input on stdin, output
>>> on stdout) then use tempfiles:
>>>
>>> use File::Slurp qw/read_file write_file/;
>>> use File::Temp qw/tempfile/;
>>>
>>> sub filter {
>>> my ($data) = @_;
>>>
>>> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>>>
>>> write_file $IN, $data; system "wc -c <$in >$out"; return
>>> read_file $OUT;
>>> }
>>>
>>> s,(x+),filter $1,eg
>>
>> Nice, neat, clear & elegant. We have a winner.
>
> Make that "Oh my God I got it!".
There's, unfortunately, "Scared of People" in front of that, which, while
a great song, doesn't fit in here, but it can be regarded as a reverse
bonus track.
http://www.youtube.com/watch?v=X1KD131sXDs
"Legally imposed CULTURE-reduction is cabbage brained!"
------------------------------
Date: Fri, 24 Jan 2014 10:42:47 +0100
From: gamo <gamo@telecable.es>
Subject: Re: [OT] sys call length limitation
Message-Id: <lbtciv$2i4$1@speranza.aioe.org>
On 24/01/14 01:40, * Tong * wrote:
> On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>
>>> s,(x+),`echo $1 | wc -c`,eg
>>
>> . . . If it's just a Unix filter (input on stdin, output
>> on stdout) then use tempfiles:
>>
>> use File::Slurp qw/read_file write_file/;
>> use File::Temp qw/tempfile/;
>>
>> sub filter {
>> my ($data) = @_;
>>
>> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>>
>> write_file $IN, $data; system "wc -c <$in >$out"; return
>> read_file $OUT;
>> }
>>
>> s,(x+),filter $1,eg
>
> Nice, neat, clear & elegant. We have a winner.
>
> Thanks
>
Just to be a bit more elegant, don't forget to *unlink*
those tempfiles or you will not like how /tmp looks.
--
http://www.telecable.es/personales/gamo/
------------------------------
Date: Fri, 24 Jan 2014 13:38:11 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: [OT] sys call length limitation
Message-Id: <87k3dppkjg.fsf@sable.mobileactivedefense.com>
gamo <gamo@telecable.es> writes:
> On 24/01/14 01:40, * Tong * wrote:
>> On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>>
>>>> s,(x+),`echo $1 | wc -c`,eg
>>>
>>> . . . If it's just a Unix filter (input on stdin, output
>>> on stdout) then use tempfiles:
>>>
>>> use File::Slurp qw/read_file write_file/;
>>> use File::Temp qw/tempfile/;
>>>
>>> sub filter {
>>> my ($data) = @_;
>>>
>>> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>>>
>>> write_file $IN, $data; system "wc -c <$in >$out"; return
>>> read_file $OUT;
>>> }
>>>
>>> s,(x+),filter $1,eg
>>
>> Nice, neat, clear & elegant. We have a winner.
>>
>> Thanks
>>
>
> Just to be a bit more elegant, don't forget to *unlink*
> those tempfiles or you will not like how /tmp looks.
Or consider using Perl. It's not that difficult:
---------------
sub filter
{
    my $fh = File::Temp->new();  # file is unlinked automatically on destruction

    print $fh (@_);
    $fh->flush();                # flush before wc reads the file
    `wc -c < $fh`;               # File::Temp stringifies to the filename
}
---------------
The clumsy procedure of writing the input to a temporary file in order to
"fix" the programmer ("Why do you tell me all this complicated stuff! I
friggin' don't care!! I have to prepare for my exams to get a well-paid
developer job in the tech industry!!!") doesn't have to be implemented
that clumsily.
------------------------------
Date: Fri, 24 Jan 2014 14:20:56 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: [OT] sys call length limitation
Message-Id: <8f9ara-ln31.ln1@anubis.morrow.me.uk>
Quoth gamo <gamo@telecable.es>:
> On 24/01/14 01:40, * Tong * wrote:
> > On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
> >
> >> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
> >>
> >> write_file $IN, $data; system "wc -c <$in >$out"; return
> >> read_file $OUT;
> >> }
> >>
> >> s,(x+),filter $1,eg
>
> Just to be a bit more elegant, don't forget to *unlink*
> those tempfiles or you will not like how /tmp looks.
I had forgotten tempfile doesn't do that by default if you request a
filename.
my $IN = File::Temp->new;
my $OUT = File::Temp->new;
write_file $IN, $data;
system sprintf "wc -c <%s >%s", $IN->filename, $OUT->filename;
return read_file $OUT;
Ben
------------------------------
Date: Fri, 24 Jan 2014 15:52:28 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: [OT] sys call length limitation
Message-Id: <87fvod9y2r.fsf@sable.mobileactivedefense.com>
Ben Morrow <ben@morrow.me.uk> writes:
> Quoth gamo <gamo@telecable.es>:
>> On 24/01/14 01:40, * Tong * wrote:
>> > On Thu, 23 Jan 2014 14:59:47 +0000, Ben Morrow wrote:
>> >
>> >> my ($IN, $in) = tempfile; my ($OUT, $out) = tempfile;
>> >>
>> >> write_file $IN, $data; system "wc -c <$in >$out"; return
>> >> read_file $OUT;
>> >> }
>> >>
>> >> s,(x+),filter $1,eg
>>
>> Just to be a bit more elegant, don't forget to *unlink*
>> those tempfiles or you will not like how /tmp looks.
>
> I had forgotten tempfile doesn't do that by default if you request a
> filename.
>
> my $IN = File::Temp->new;
> my $OUT = File::Temp->new;
>
> write_file $IN, $data;
> system sprintf "wc -c <%s >%s", $IN->filename, $OUT->filename;
> return read_file $OUT;
Something it does do by default is stringify to a
filename. Considering that `` undergoes double-quote interpolation, the
->filename calls, read_file, and actually the output file altogether can
be omitted via
`wc -c < $IN`
There's little point in making a copy of the first argument just so that
a copy has been made, and little reason why the subroutine should ignore
all other arguments instead of dealing with them. And then, of course,
the whole hullabaloo of File::Slurp isn't really useful for anything
here, as the built-in print operator/subroutine can do that as well, ie,
print $IN (@_);
Should someone consider the overhead of funneling the input data through
the perl I/O subsystem prohibitively expensive, there's always
syswrite($fh, $_[0])
when dealing only with the first argument or
syswrite($fh, $_) for @_;
That's 380 LOC (in a 1261-line text file) of a spurious external
dependency, with no tangible effect save slowing down compilation of the
code somewhat, avoided at the clearly intolerable expense of having to
write less code.
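Put together, a dependency-free version of the filter along these lines might look like this (a sketch; wc -c stands in for the OP's real pipeline):

```perl
use strict;
use warnings;
use File::Temp ();

# the whole filter with no File::Slurp: the File::Temp object
# stringifies to its filename inside the backticks, and the file is
# unlinked automatically when $fh goes out of scope
sub filter {
    my $fh = File::Temp->new;
    print $fh @_;
    $fh->flush;                 # make sure wc sees the data, not an empty file
    return `wc -c < $fh`;
}

my $n = filter('x' x 1000);
print $n;                       # 1000
```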
------------------------------
Date: Thu, 23 Jan 2014 14:15:31 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Regex replacement via external command
Message-Id: <87zjmmn5rw.fsf@sable.mobileactivedefense.com>
Tim Landscheidt <tim@tim-landscheidt.de> writes:
> Rainer Weikusat <rweikusat@mobileactivedefense.com> wrote:
>
>>>>> Don't do `...` when there may be a lot of output. Please see the
>>>>> perlopentut man page, specifically "pipe open", and don't try to pass
>>>>> long values on the command line.
>
>>>> The problem I was dealing with is, I need to pick out a big chunk of
>>>> input string (>200K, by regex), feed it to external program (which is
>>>> pipe after pipe after pipe), then replace the matching string with the
>>>> processed result. what's the proper way to do it (for big matching
>>>> chunks)?
>
>>>> A thousand time over-simplified version is:
>
>>>> perl -e 'print("aa". "x" x 238565 . "bb", "\n")' > HttpBody
>>>> <HttpBody perl -n000e 's,(x+),`echo $1 | wc -c`,eg; print'
>
>>>> The problem is that I not only need to process this big chunk of
>>>> matching string via the external program, but I also need to replace the
>>>> matching string with the result of the external process. Putting two
>>>> together is where the problem for me.
>
>>> You could replace (in this example) the call of `echo $1 |
>>> wc -c` with a double-sided pipe where you feed $1 on stdin
>>> to wc and collect wc's stdout. You need to look at
>>> IPC::Open2 & Co. on how to achieve that; see "perldoc -q
>>> 'How can I open a pipe both to and from a command?'" for
>>> pointers.
>
>> That's almost certainly a recipe for disaster for 'large amounts of
>> data' because the process writing to the input pipe will block once the
>> 'input' pipe buffer is full and the external command will block once the
>> 'output' pipe buffer is full, ie, the whole thing will deadlock.
>
>> [...]
>
> "& Co.". Personally, I prefer IPC::Run and:
In this case, you should refer the OP to that. You can even laugh as he
falls into the pit nevertheless as soon as he gets hit by the difference
between _exit (correct) and exit (Documentation? Only newbies read
documentation!).
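For reference, the hand-rolled shape of such a two-sided pipe, including the _exit detail mentioned above (a sketch of the technique, not IPC::Run's actual internals): a separate writer process feeds the command's stdin while the parent does nothing but read its stdout, so neither side can deadlock on a full pipe buffer.

```perl
use strict;
use warnings;
use POSIX ();

sub run_filter {
    my ($data, @cmd) = @_;
    pipe my $rin,  my $win  or die "pipe: $!";
    pipe my $rout, my $wout or die "pipe: $!";

    my $cmd_pid = fork // die "fork: $!";
    if ($cmd_pid == 0) {                   # child: becomes the filter command
        open STDIN,  '<&', $rin  or die "dup: $!";
        open STDOUT, '>&', $wout or die "dup: $!";
        close $_ for $rin, $win, $rout, $wout;
        exec @cmd or die "exec @cmd: $!";
    }

    my $writer_pid = fork // die "fork: $!";
    if ($writer_pid == 0) {                # child: feeds the filter's stdin
        close $_ for $rin, $rout, $wout;
        print $win $data;
        close $win;
        POSIX::_exit(0);                   # _exit, not exit: no END blocks,
    }                                      # no double-flushed stdio buffers

    close $_ for $rin, $win, $wout;        # parent keeps only the read end
    my $out = do { local $/; <$rout> };
    close $rout;
    waitpid $_, 0 for $cmd_pid, $writer_pid;
    return $out;
}

print run_filter("hello\n", 'wc', '-c');   # 6
```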
------------------------------
Date: Thu, 23 Jan 2014 16:13:44 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Regex replacement via external command
Message-Id: <87a9emn0av.fsf@sable.mobileactivedefense.com>
* Tong * <sun_tong_001@users.sourceforge.net> writes:
> On Wed, 22 Jan 2014 16:27:56 +0000, Rainer Weikusat wrote:
>>> Another approach (as always) would be temporary files.
>>
>> In case the program is really supposed to work as a filter, a possible
>> other aproach would be to use a 'Perl lexer', eg, for the example above,
>> assuming the input is in $s (untested)
>>
>> for ($s) {
>> /\G(x+)/gc and do {
>> my $fh;
>>
>> open($fh, '|command');
>> print $fh ($1);
>> $fh = undef;
>>
>> redo;
>> };
>>
>> /\G([^x]+)/gc and print($1), redo;
>> }
>>
>> and simply let the output of the external command appear 'in the right
>> place' of the stdout output of the perl script (since they'll share the
>> same stdout).
>
> This would be quite a mouthful for me. I have to try it out so as to
> understand exactly what's going on.
>
> But first, a quick question: this looks to me like a filter program in
> Perl, that writes out matches ($1) itself, and uses an external command
> to further process the matches ($1) as well. The external command takes
> the match string from pipe input, and writes out its result in the right
> place of the stdout output, correct? I.e., both the Perl script and the
> external command write their output to stdout, correct?
>
> The problem for me is that I not only need to process this big chunk of
> matching string via the external program, but I also need to replace the
> matching string with the result of the external process as well. The
> above code doesn't take care of grabbing the output of the external
> command and using it as the replacement, correct?
Contrived example which can actually be executed:
-----------------
undef $/;
my $input = <STDIN>;
for ($input) {
    /\G(x+)/gc and do {
        my $fh;

        open($fh, '| tr x y');
        print $fh ($1);
        redo;
    };

    /\G([^x]+)/g and print($1), redo;
}
-----------------
This will read data from stdin until EOF. Any x in the input is replaced
with a y by piping it through 'tr x y'; anything else is just printed
as-is. The for-'loop' is an extremely simple example of how to write a
lexical analyzer in Perl:
The first regex looks for a sequence of x at the current matching
position (\G, initially 0). If it matches, a pipe to tr is opened and
the matched text written to that. The 'redo' then causes the loop body to
be executed again. The /g flag means 'match globally', ie, find all
occurrences, not just the first one, and the /c means 'don't reset the
current matching position in case the match fails'.
The second regex matches a sequence of 'something other than x': if it
matches, the matched text is printed and the loop body re-executed via
redo. The /c flag is not needed for the 2nd match because if neither the
first nor the second regex matched, the end of the input has obviously
been reached, and the loop terminates.
------------------------------
Date: Thu, 23 Jan 2014 16:21:48 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Regex replacement via external command
Message-Id: <8761pamzxf.fsf@sable.mobileactivedefense.com>
Rainer Weikusat <rweikusat@mobileactivedefense.com> writes:
[...]
> Contrived example which can actually be executed:
>
> -----------------
> undef $/;
> my $input = <STDIN>;
>
> for ($input) {
> /\G(x+)/gc and do {
> my $fh;
>
> open($fh, '| tr x y');
This is a sort-of silly example pipe because the output would be
identical if everything was just piped through tr. A slightly less
silly example would be
open($fh, '| sed "s/./y/g"');
This uses sed to replace every input byte with a y. As can easily be
verified, it is only run for the x's in the input, not for any other
characters.
------------------------------
Date: Thu, 23 Jan 2014 19:35:28 +0000 (UTC)
From: espie@lain.home (Marc Espie)
Subject: Re: Replacement for CGI.pm
Message-Id: <lbrqu0$18bj$1@saria.nerim.net>
In article <slrnl6n4ff.6u5.hjp-usenet3@hrunkner.hjp.at>,
Peter J. Holzer <hjp-usenet3@hjp.at> wrote:
>["Followup-To:" header set to comp.lang.perl.misc.]
>On 2013-10-24 23:39, Ben Morrow <ben@morrow.me.uk> wrote:
>> Quoth Joachim Pense <snob@pense-mainz.eu>:
>>>
>>> I wonder if an "HTML disassembler" is available - a program that takes
>>> HTML code as input and produces CGI.pm output.
>>
>> Given a single input, Peter's answer is actually the only correct one.
>
>I don't think so. CGI.pm provides a function for every HTML element, so
>it should be possible to parse an HTML input into a DOM tree and then
>serialize that as nested calls to CGI functions.
>
>So, this small HTML file:
>
> <html>
> <head>
> <title>hello</title>
> <link rel="stylesheet" href="hello.css" type="text/css" />
> </head>
> <body>
> <h1>hello, world</h1>
> <p>and goodbye</p>
> </body>
> </html>
>
>could be converted into:
>
> html(
> head(
> title('hello'),
> Link({-rel => 'stylesheet', type => 'text/css', -href =>
>'hello.css'}),
> ),
> body(
> h1('hello, world'),
> p('and goodbye')
> )
> )
>
>I think this is what Joachim meant.
>
>I just don't think that would be terribly useful. But then I think the
>html-generating functions in CGI.pm aren't very useful in general, so
>that't not a big surprise ;-).
You could probably do that using the HTML::Tree parser and then writing
a proper printer...
Chiming in a bit late in the discussion: CGI.pm is terrible, terrible for
anything facing the internet in general. Good luck sanitizing any kind
of input, or quoting stuff correctly in the generated html.
That's one area where template systems like TT shine: they have everything
needed to enlighten you about the multiple problems of properly encoding
stuff to write non-broken pages with no XSS issues...
(big fan of perl-dancer + template toolkit these days... the RESTful nature
of the engine being a big plus)
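To make "quoting stuff correctly" concrete, here is a minimal hand-rolled escaping helper (illustrative only; a real page should lean on TT's auto-escaping or HTML::Entities rather than this sketch):

```perl
use strict;
use warnings;

# minimal HTML-escaping helper: maps the five characters that can
# break out of element content or attribute values to entities
sub escape_html {
    my ($s) = @_;
    my %ent = ('&' => '&amp;', '<' => '&lt;', '>' => '&gt;',
               '"' => '&quot;', "'" => '&#39;');
    $s =~ s/([&<>"'])/$ent{$1}/g;
    return $s;
}

print escape_html(q{<script>alert("xss")</script>}), "\n";
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```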
------------------------------
Date: Thu, 23 Jan 2014 19:39:17 +0000
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Replacement for CGI.pm
Message-Id: <87eh3ylc7u.fsf@sable.mobileactivedefense.com>
espie@lain.home (Marc Espie) writes:
[...]
> Chiming in a bit late in the discussion, CGI.pm is terrible terrible for
> anything facing the internet in general. Good luck sanitizing any kind
> of input, or quoting stuff correctly in the generated html.
That this is something you consider an almost insurmountable obstacle
doesn't mean everybody shares your opinion on that.
------------------------------
Date: Fri, 24 Jan 2014 01:53:47 -0800 (PST)
From: rahul.webcolortech@gmail.com
Subject: SEO Company Hyderabad
Message-Id: <2f25f779-0d5b-4d30-b566-777d9d11572a@googlegroups.com>
Webcolor Technologies is a leading professional Search Engine Marketing
company for the E-Business Industry. Webcolor Technologies are providing
the best SEO services in Hyderabad. SEO is mainly used to increase the
visibility of a website. Whatever the website is, Webcolor Technologies
are ready to give cent percent results for that website. Nowadays
business is increasing rapidly, and as a result competitors are also
increasing. In this scenario Webcolor Technologies are ready to provide
you the best SEO services in Hyderabad to compete with your competitors
and to develop your business.
We Offer: web designing and Web development
Content Writing Services
Search Engine Optimization services
Social Media Optimization Services
Pay Per Click Services
Affiliate Marketing Services
Online Reputation Management Services
For More Details
Please contact:9052698932
Email : info[@]webcolortech[.]com
Web: www.webcolortech.com
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 4123
***************************************