
Perl-Users Digest, Issue: 1267 Volume: 8

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Nov 3 21:08:01 1997

Date: Mon, 3 Nov 97 18:00:26 -0800
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 3 Nov 1997     Volume: 8 Number: 1267

Today's topics:
     @ARGV Limits or Attempts to avoid "glob: Too many argum gshibona@unconfigured.xvnews.domain
     Re: @ARGV Limits or Attempts to avoid "glob: Too many a (Tad McClellan)
     Re: @ARGV Limits or Attempts to avoid "glob: Too many a <rootbeer@teleport.com>
     Re: Best way to comma seperate a number? (Tad McClellan)
     Re: Creating a new file each day?? (Mike Stok)
     Re: Did an open for append create the file? (Tad McClellan)
     Re: Dynamically creating filenames <rootbeer@teleport.com>
     embedded perl--memory leak? <freehill@austin.ibm.com>
     Help on upgrading? <AMarcus@ix.netcom.com>
     Making script wait (Andrew D. Arenson)
     Re: Man pages for Activestate port on Win32 (Pete Barker)
     Re: open() and pipes (Charles DeRykus)
     Re: Performance question <rootbeer@teleport.com>
     Re: Perl debug (Ilya Zakharevich)
     Re: Perl question: Subroutine (Andrew M. Langmead)
     Re: Perl, Sendmail, CGI <tigger@sgi.com>
     PTY in IRIX6.2 (kuehne)
     Re: Q: Precise Timestamps in Perl? (Andrew M. Langmead)
     Q: Safely opening file? xyzhenrygxyz@wpi.edu
     Right way to execute commands from within perl <jflowers@ezo.net>
     Re: Right way to execute commands from within perl (Tad McClellan)
     Seeking Year 2000 checker OR comment stripper for C/C++ (Bob Weissman)
     Re: suid problem. <rootbeer@teleport.com>
     Re: Taking notice of "Use of uninitialisaed value" warn <rootbeer@teleport.com>
     Re: THE POP module! (Sean Dowd)
     Trying to compile extensions faster. <vandevegt@lucent.com>
     Digest Administrivia (Last modified: 8 Mar 97) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: 3 Nov 1997 19:02:13 GMT
From: gshibona@unconfigured.xvnews.domain
Subject: @ARGV Limits or Attempts to avoid "glob: Too many arguments" message
Message-Id: <63l73l$rrc@butch.lmms.lmco.com>

I created two "do nothing" scripts; a while and a for, each operating on
script command line input i.e. @ARGV.

--------------------------------------------------------
#! /usr/local2/bin/perl
# this one is called wtst

while ( $file = <@ARGV> ) {
    # nop
}
--------------------------------------------------------
#! /usr/local2/bin/perl
# this one is called ftst

foreach $file (@ARGV) {
    # nop
}
--------------------------------------------------------

Now, check out these results.

{major:31 /users/mofp1.0.02}ll | wc -l
    3755
{major:32 /users/mofp1.0.02}ll *mtx | wc -l
    1065
{major:33 /users/mofp1.0.02}~gshibona/sscripts/ftst *
Arguments too long
{major:34 /users/mofp1.0.02}~gshibona/sscripts/wtst *
Arguments too long
{major:35 /users/mofp1.0.02}~gshibona/sscripts/ftst *mtx
{major:36 /users/mofp1.0.02}~gshibona/sscripts/wtst *mtx
glob: Too many arguments
{major:37 /users/mofp1.0.02}


@ARGV seems to have a limit that I'm exceeding in both test scripts when the
input "parameter" is a splat(*).  What's interesting is the results from
the test scripts when the input "parameter" is *mtx; that limit seems to be
different!  Look at line 35.  Also, look at the error messages; the implication
is that I'm encountering different problems.    

By what means does one process an unlimited (via "*") number of files on the
command line and avoid seeing that "glob: ..." message?


-- 
Gerard Shibona		  770.494.9302	         gshibona@hercii.mar.lmco.com
=============================================================================

       If one forgets the past, he will not be prepared for the future.

=============================================================================




------------------------------

Date: Mon, 3 Nov 1997 18:29:27 -0600
From: tadmc@flash.net (Tad McClellan)
Subject: Re: @ARGV Limits or Attempts to avoid "glob: Too many arguments" message
Message-Id: <79ql36.8am.ln@localhost>

gshibona@unconfigured.xvnews.domain wrote:
: I created two "do nothing" scripts; a while and a for, each operating on
: script command line input i.e. @ARGV.

: --------------------------------------------------------
: #! /usr/local2/bin/perl
: # this one is called wtst

: while ( $file = <@ARGV> ) {
                  ^^^^^^^
                  ^^^^^^^
This is very likely not doing what you think it is doing.

What did you want to do here?

It is in no way synonymous with the foreach() version below.

Matter of fact, I can't even figure out what perl really does
do here. A big filename glob(), I guess?



: Now, check out these results.

OK.

: {major:31 /users/mofp1.0.02}ll | wc -l
                              ^^
                              ^^
I don't think that is a standard Unix command.

What does it do?


: {major:33 /users/mofp1.0.02}~gshibona/sscripts/ftst *

Try this:

   ls *

Does that get you an "Arguments too long" message too?

Probably does. And there is no perl involved anywhere there.

That is because the error message is not coming from perl.

You have run into a limitation of your shell/kernel.

Use xargs or switch to a shell that doesn't do that (like tcsh).



You might also have a look at this Frequently Asked Question:

   "Why do I sometimes get an "Argument list too long" when I use <*>?"


: By what means does one process an unlimited (via "*") number of files on the
: command line and avoid seeing that "glob: ..." message?


xargs
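
For example (a sketch using the script path from the transcript above;
xargs batches the names so the shell never has to expand them all at once):

   ls | xargs ~gshibona/sscripts/ftst
   ls | grep 'mtx$' | xargs ~gshibona/sscripts/ftst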


--
    Tad McClellan                          SGML Consulting
    tadmc@flash.net                        Perl programming
    Fort Worth, Texas


------------------------------

Date: Mon, 3 Nov 1997 16:47:09 -0800
From: Tom Phoenix <rootbeer@teleport.com>
To: gshibona@unconfigured.xvnews.domain
Subject: Re: @ARGV Limits or Attempts to avoid "glob: Too many arguments" message
Message-Id: <Pine.GSO.3.96.971103164443.19730N-100000@usertest.teleport.com>

On 3 Nov 1997 gshibona@unconfigured.xvnews.domain wrote:

> while ( $file = <@ARGV> ) {

I don't know what you were trying to do, but that's not going to do it.
:-)

> @ARGV seems to have a limit that I'm exceeding in both test scripts when
> the input "parameter" is a splat(*). 

That's not Perl's limit. It's the system (or the shell) which has a
limit. 

> By what means does one process an unlimited (via "*") number of files on the
> the command line and avoid seeing that "glob: ..." message?

Don't use a glob. Use readdir and friends, and you won't have those limits
any more. Hope this helps!
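
A minimal sketch of that approach (assuming you want the names ending in
'mtx' from the current directory):

    opendir(DIR, ".")  || die "can't opendir '.': $!";
    foreach $file (grep { /mtx$/ } readdir(DIR)) {
        # process $file here
    }
    closedir(DIR);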

-- 
Tom Phoenix           http://www.teleport.com/~rootbeer/
rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
              Ask me about Perl trainings!



------------------------------

Date: Mon, 3 Nov 1997 18:08:58 -0600
From: tadmc@flash.net (Tad McClellan)
Subject: Re: Best way to comma seperate a number?
Message-Id: <q2pl36.86m.ln@localhost>

William R. Ward (hermit@cats.ucsc.edu) wrote:

: This sounds like a perfect candidate for a standard Perl module.  We
: all have to format numbers sometimes... Number::Format perhaps?


Thanks. That would be useful.

Please post an announcement here when you get it finished.


--
    Tad McClellan                          SGML Consulting
    tadmc@flash.net                        Perl programming
    Fort Worth, Texas


------------------------------

Date: 3 Nov 1997 21:03:28 GMT
From: mike@stok.co.uk (Mike Stok)
Subject: Re: Creating a new file each day??
Message-Id: <63le70$mvc@news-central.tiac.net>

In article <345e375e.256934392@news.supernews.com>,
Ronald L. Parker <ron@farmworks.com> wrote:

>Here's what the online help for my C compiler says about that...
>
>Type   |
>Char   | Effect of [.prec] (.n) on Conversion 
>----------------------------------------------------------------------
>diouxX | Specifies that at least n digits are printed. If input
>       | argument has less than n digits, output value is left-padded
>       | with zeros. If input argument has more than n digits, the
>       | output value is not truncated.
>
>I don't know if Perl inherits this behavior, but at least one runtime
>library supports it.

It seems to, 

[mike@stok mike]$ perl -e 'printf "%4.4d\n", 3'
0003
[mike@stok mike]$ perl -v

This is perl, version 5.004_04 built for i586-linux

(and in the old 5.003 at work....)

Mike

-- 
mike@stok.co.uk                    |           The "`Stok' disclaimers" apply.
http://www.stok.co.uk/~mike/       |   PGP fingerprint FE 56 4D 7D 42 1A 4A 9C
http://www.tiac.net/users/stok/    |                   65 F3 3F 1D 27 22 B7 41
stok@colltech.com                  |            Collective Technologies (work)


------------------------------

Date: Mon, 3 Nov 1997 18:37:24 -0600
From: tadmc@flash.net (Tad McClellan)
Subject: Re: Did an open for append create the file?
Message-Id: <4oql36.ebm.ln@localhost>

fred@no.spam.leeds.ac.uk wrote:
: Is there an easy way to find out if the call to open() with a
: filename starting ">>... actually created the target file.


It will only create the file if it does not already exist.

Are you wondering how to tell if it already existed?

If so, then you might want to do a word search for 'file test'
in the documentation that came with the perl distribution.


If instead you just want some confidence that everything went OK, then:

Maybe something like this?


open(OUT, ">>outfile")   or die "could not open 'outfile'  $!";
print OUT "yabba dabba " or die "could not write to 'outfile'";
print OUT "doo!\n"       or die "could not write to 'outfile'";
close(OUT)               or die "error closing 'outfile'  $!";


If nothing comes out on STDERR, then it probably went OK...
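
And if it really is the "did it already exist" question, a minimal sketch
of the file-test idea (reusing the 'outfile' name from above):

$existed = -e 'outfile';    # remember whether the file was already there
open(OUT, ">>outfile") or die "could not open 'outfile'  $!";
print STDERR "open() created 'outfile'\n" unless $existed;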


--
    Tad McClellan                          SGML Consulting
    tadmc@flash.net                        Perl programming
    Fort Worth, Texas


------------------------------

Date: Mon, 3 Nov 1997 15:01:01 -0800
From: Tom Phoenix <rootbeer@teleport.com>
To: bob@cafemedia.com
Subject: Re: Dynamically creating filenames
Message-Id: <Pine.GSO.3.96.971103145928.19730F-100000@usertest.teleport.com>

On Mon, 3 Nov 1997, Bob Maillet wrote:

> # this is where the trouble is.. It seems the filenames created from the
> # query are output as one big chunk and not looped through one at a
> # time to run the mirror command
>        if ($Test EQ "ftptest"){

Not using the -w invocation option, are you?

>             $res = "$MUT.$TestID.txt";

Now, you know that that's not using the variable $Test, don't you?

Does that have some bearing on your problem? Good luck!

-- 
Tom Phoenix           http://www.teleport.com/~rootbeer/
rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
              Ask me about Perl trainings!



------------------------------

Date: Mon, 03 Nov 1997 18:54:28 -0600
From: Chris Freehill <freehill@austin.ibm.com>
Subject: embedded perl--memory leak?
Message-Id: <345E7244.41C6@austin.ibm.com>

I'm using perl 5.003.

I'm using some of the function calls described in the perlguts and 
the perlembed man pages to execute bits of perl code from a C
program.

Using the examples in the man pages as a model, the sequence of
functions I'm calling is as follows:

#include <EXTERN.h>
#include <perl.h>

static PerlInterpreter *my_perl;
extern char **environ;

int main()
{
   char *embedding[] = {"", "-e", "sub _eval_ { eval $_[0] }"};
   char command[1024];
   int val;

   while (1) {
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, 3, embedding, environ);

    strcpy(command, <some perl code>);

    val = perl_call_argv("_eval_", 0, command);

    perl_destruct(my_perl);
    perl_free(my_perl);
  }
}
-- 

The perl code executes as I would expect (that's not the problem), 
but some memory is not freed at the end of each iteration.

Questions:

Is either perl_destruct() or perl_free() supposed to free all of the
memory allocated within the perl "session" (I couldn't find a thorough
description of these functions)?  If so, is this a known
bug, and is it fixed in a later release?

How should I go about freeing the memory used by the perl interpreter?

Please respond either by posting or emailing me at
   freehill@austin.ibm.com

Thanks for any information about this.

Chris


------------------------------

Date: 3 Nov 1997 21:35:02 GMT
From: "AMarcus" <AMarcus@ix.netcom.com>
Subject: Help on upgrading?
Message-Id: <01bce8a0$462e92c0$eda620cc@ab1>

I just installed FreeBSD 2.2.2, which came with Perl 4.0.  Can anyone give me
instructions on upgrading to a current version, or suggest a good
reference?  I'm new to unix, and perl, so the most basic info would be
appreciated.
-- 
AMarcus



------------------------------

Date: 03 Nov 1997 18:55:17 -0600
From: arenson@grok.imgen.bcm.tmc.edu (Andrew D. Arenson)
Subject: Making script wait
Message-Id: <wqwwipcway.fsf@grok.imgen.bcm.tmc.edu>


	I'd like to make my script wait for 5 minutes at a certain
point. What's the best way to do this?
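
One straightforward way (assuming whole seconds are precise enough):

    sleep 300;    # pause the script for 5 minutes (300 seconds)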

-- 
Andrew D. Arenson            | http://gc.bcm.tmc.edu:8088/cgi-bin/andy/andy
Baylor College of Medicine   | arenson@bcm.tmc.edu        (713)  H 520-7392
Genome Sequencing Center, Molecular & Human Genetics Dept.     | W 798-4689
One Baylor Plaza, Room S903, Houston, TX 77030                 | F 798-5386


------------------------------

Date: 3 Nov 1997 22:39:34 GMT
From: on.maps.barker@cix.co.uk (Pete Barker)
Subject: Re: Man pages for Activestate port on Win32
Message-Id: <memo.19971103223822.45B@mt.cix.co.uk>

In article
<Pine.GSO.3.96.971103083811.10568T-100000@usertest.teleport.com>,
rootbeer@teleport.com (Tom Phoenix) wrote:

> On 3 Nov 1997, Pete Barker wrote:
>
> > I don't think my distribution came with man pages,
>
> Then it's broken. :-)  Complain to whoever gave it to you, and get a
> better distribution which includes everything.
>
>     http://www.perl.com/CPAN/ports/win32/Standard/x86/
>
> Hope this helps!
> --
> Tom Phoenix           http://www.teleport.com/~rootbeer/
> rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
> Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
>               Ask me about Perl trainings!
>
>

Your message prompted me to search a bit better :-) I do have
pages, although they are in HTML format. Does the link you
provided contain the pages in man format (which I would prefer)?

Regards,

Pete Barker
P.S. Please remove on.maps. to mail me.



------------------------------

Date: Mon, 3 Nov 1997 19:56:36 GMT
From: ced@bcstec.ca.boeing.com (Charles DeRykus)
Subject: Re: open() and pipes
Message-Id: <EJ362C.A7n@bcstec.ca.boeing.com>

In article <63ku89$b11@gd2inews.swissptt.ch>,
 Roy Culley <tgdcuro1@gd2.swissptt.ch> wrote:
 > 
 >   Why doesn't open() return an error when a pipe open fails? 
 > 
 >   It does, but probably not how you expect it to. On systems that follow
 >   the standard fork/exec paradigm (eg, Unix), it works like this: open
 >   causes a fork. In the parent, open returns with the process ID of the
 >   child. The child execs the command to be piped to/from. The parent
 >   can't know whether the exec was successful or not - all it can return
 >   is whether the fork succeeded or not. To find out if the command
 >   succeeded, you have to catch SIGCHLD and wait to get the exit status.
 > 
 > Does anyone have a package that handles opening pipes in a 'clean'
 > way? I have to read several hundred MBytes from a program to produce
 > a report. Opening a pipe seems the only obvious way to do it but I
 > would like to know if the open has failed. Is setting up a handler
 > for SIGCHLD the only way? Should the script just sleep for a short
 > time and then check if a SIGCHLD had happened?
 > 

IO::Pipe appears to report open errors, e.g., 

   use IO::Pipe;
   $pipe = new IO::Pipe;
   $pipe->reader(qw(lx -l));   # should be 'ls -l'
   print while <$pipe>;


This produces the error:

   IO::Pipe: Cannot exec: No such file or directory at .... line...



HTH,
--
Charles DeRykus


------------------------------

Date: Mon, 3 Nov 1997 14:42:57 -0800
From: Tom Phoenix <rootbeer@teleport.com>
To: Bob Browning <bob@textor.com>
Subject: Re: Performance question
Message-Id: <Pine.GSO.3.96.971103144008.19730B-100000@usertest.teleport.com>

On Mon, 3 Nov 1997, Bob Browning wrote:

> I have a 2 meg database with about 30,000 records.  I will need to
> access directly by key and also to search the database. 

> If I use the feature of the language that allows you to treat a file as
> an associative array will the performance be OK for this sort of size. 

That's the reason that those database routines were written; the access
should be efficient. If (for whatever reason) the access speed isn't
reasonably close to the access speed from C, that should be considered a
bug. :-) 

(Of course, if you do something like @foo = keys %DB, well, that's another
matter! :-) 

Good luck!
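
A minimal sketch of the treat-a-file-as-a-hash approach (assuming an
on-disk DBM file named 'mydb' and string keys):

    dbmopen(%DB, "mydb", 0644) || die "can't open dbm file 'mydb': $!";
    $DB{"some key"} = "some value";    # store a record by key
    print $DB{"some key"}, "\n";       # fetch it back directly
    dbmclose(%DB);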

-- 
Tom Phoenix           http://www.teleport.com/~rootbeer/
rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
              Ask me about Perl trainings!



------------------------------

Date: 3 Nov 1997 22:07:00 GMT
From: ilya@math.ohio-state.edu (Ilya Zakharevich)
Subject: Re: Perl debug
Message-Id: <63lhu4$jdj$1@agate.berkeley.edu>

In article <yhfoh41dfe3.fsf@ascend.com>,
Kevin Lambright  <kevinl@ascend.com> wrote:
> > Where can I get documentation that explains how to set up
> > and run perl in debug controlled by emacs?
> 
> Having perl-mode.el is good, but it won't get you the debugger.  You 
> also need perldb.el (or perldb+.el, which has cleaned up a few things).
> 
> Put this in your personal emacs lib path, or your site emacs lib path,
> byte compile it, if you wish and add the following lines to your
> .emacs file:

One does not need any additional package to debug Perl under RMS
Emacs.  XEmacs may have broken this support (as most other things), so
my comment may be irrelevant.

So if you use RMS Emacs, please ignore the above advice.  Just use M-x
perldb, or upgrade to CPerl.

Ilya


------------------------------

Date: Mon, 3 Nov 1997 21:34:52 GMT
From: aml@world.std.com (Andrew M. Langmead)
Subject: Re: Perl question: Subroutine
Message-Id: <EJ3AM4.8o5@world.std.com>

tt@ws6391.at writes:

>I am a perl novice and I need help writing a subroutine which
>should meet the following needs:
>Opening a module, passing arguments to it and executing the module.

Usually, you don't really think of a module as being
executed. Usually, you just think of a module as containing
subroutines that can be executed. There can be any arbitrary number of
subroutines (including none) with any arbitrary name or names. With
the OOP features of the language, you can even have a module that
contains no subroutines, but objects blessed into that package have
methods (subroutines that are called through an object) that they
respond to.

If the name of the file is a constant, obviously, you could say
something like this:

use Modulename;
Modulename::subroutinename(@args);

If the problem is that the module isn't in a directory that perl would
normally look in, you can use the "lib" pragma to add the directory to
the places where the interpreter looks.

use lib '/home3/tt/perl5/lib';
use Modulename;

But you can't quite do this for a module whose name you are
determining at runtime. If you have the name of the module and
subroutine you want to call, and the arguments you want to pass, you
could say something like:

require "$file.pm";
import $file;    # if the module has anything to export
{
  no strict 'refs';
  &{"$file::$sub"}(@args);
}

If the file isn't intended as being used as a module, but is just a
file full of perl code that you want executed, you might want to take
a look at the "do" function.

do $filename;

Be careful with the last one, though. You can unwittingly bypass
perl's taint checking when running suid or with the -T flag.
  
use POSIX;
 
print "what do you want to do?";
$do = <STDIN>;  # take an arbitrary chunk of perl code from the user 
 
$file = POSIX::tmpnam(); # get name that won't clash with other users.
 
open FILE,">$file" or die;
print FILE $do;
close FILE;
 
do $file; # let's hope that $file doesn't contain anything dangerous

-- 
Andrew Langmead


------------------------------

Date: Mon, 03 Nov 1997 16:59:06 -0800
From: Jamie Heller-Evans <tigger@sgi.com>
Subject: Re: Perl, Sendmail, CGI
Message-Id: <345E735A.2A41ED98@sgi.com>

David Siebert wrote:
> 
> Does anyone have some good sample code for using sendmail in a perl script?

Depends on what you're trying to do... Are you wanting to invoke
sendmail directly (ick) or just send mail?  

jamie


------------------------------

Date: 3 Nov 1997 15:46:59 -0600
From: kuehne@ccwf.cc.utexas.edu (kuehne)
Subject: PTY in IRIX6.2
Message-Id: <63lgoj$4n8@piglet.cc.utexas.edu>

Has anyone had any success using pty's in irix6.2? I want to do something
like the getpty routine in chat2.pl or Comm.pl, but these are quite
busted on irix. Comm.pl is busted in several places  after it gets the raw
device. I haven't tried pty from v 25 of unix archives because I want to
keep this in perl.

As the perlipc man page says, unix buffering has ruined my day (week). I
want to open a connection to a command, sending things to it and getting
stuff back, but I only get things back sporadically, when the buffer is
flushed.

After looking at HP, aix, and solaris, it seems that pty handling
is a vendor-specific free-for-all.

Any help or suggestion would be much appreciated.


-john kuehne


------------------------------

Date: Mon, 3 Nov 1997 23:25:30 GMT
From: aml@world.std.com (Andrew M. Langmead)
Subject: Re: Q: Precise Timestamps in Perl?
Message-Id: <EJ3FqI.KA7@world.std.com>

lbudney@fore.com writes:

>I want to generate unique filenames using timestamps and PIDs, but
>the time() function in Perl returns a 1-second timestamp.  That is
>too granular in general.  Is there a way in Perl to generate a
>millisecond or microsecond timestamp?

Have you thought about using POSIX::tmpnam to generate temp file names
instead? After all, if one person uses PIDs and timestamps, and
another uses timestamps and PIDs, and another uses a combination of
hashing algorithm and their lucky number, you can get name clashes. If
you use POSIX::tmpnam, you can ensure that your file name is unique
between all the C and perl programs on the same machine that use
tmpnam. (Ignoring the fact that different machines could be sharing
the same network file system but have different implementations of
tmpnam())

Otherwise take a look at the Time::HiRes module on CPAN.
<URL:http://www.perl.com/CPAN/modules/by-module/Time/Time-HiRes-01.12.tar.gz>

>With gcc, I would simply access the member time_t.tm_usec to get a
>microsecond timestamp.  Is there a Perl-ish analogue?  

It sounds like the .tv_usec member of the timeval structure, which
is used with the gettimeofday function. Time::HiRes uses that
function.
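
A short sketch (the filename format here is just an illustration):

    use Time::HiRes qw(gettimeofday);

    ($sec, $usec) = gettimeofday();    # seconds and microseconds
    $name = "tmp.$$.$sec.$usec";       # PID plus a sub-second timestamp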

-- 
Andrew Langmead


------------------------------

Date: 4 Nov 1997 01:02:28 GMT
From: xyzhenrygxyz@wpi.edu
Subject: Q: Safely opening file?
Message-Id: <63ls74$fll$1@bigboote.WPI.EDU>

I have read in the docs somewhere over the last two years the
'correct' way to safely open a file based on the user-supplied
filename.  It checked the filename for open() flags, then appended a
null byte.  Unfortunately, I cannot find it anymore in my searches
through www.perl.com, the 'Programming Perl' books (1st or 2nd ed.),
nor in 'Advanced Perl Programming'.

I thought the subroutine's name was SafeOpen.  Please, pointers to the
docs I read would be greatly appreciated. 
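
For what it's worth, the technique described sounds roughly like this (a
sketch reconstructed from the description above; safe_open is just a
stand-in name, not the routine being asked about):

sub safe_open {
    my ($file) = @_;
    return undef if $file =~ /^\s*[<>+|]/;   # refuse names starting with open() flags
    return undef if $file =~ /\|\s*$/;       # ... or ending with a pipe
    open(SAFE, "$file\0") || return undef;   # appended NUL: trailing whitespace or
                                             # magic characters are no longer at the end
    return \*SAFE;                           # caller reads from SAFE
}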

Henry Gabryjelski

P.S. - My e-mail does not contain the letters 'x', 'y', nor 'z'.


------------------------------

Date: 4 Nov 1997 00:18:27 GMT
From: "Jim Flowers" <jflowers@ezo.net>
Subject: Right way to execute commands from within perl
Message-Id: <01bce8b4$01fdd160$abd396ce@ivy.ezo.net>

I have struggled with this requirement on a number of occasions as a
there's-nobody-else-to-do-it programmer and I'm always a little bit
disappointed at the lack of elegance in my approach.

I just want to run a command on a different host and know that it did what
it was supposed to do.  Currently, that's append a line to a file (eg.
named.boot), discover a process id (eg. named.pid) and send a hup signal to
that process to force it to reread the modified file.  I seem to have a
choice of exec (command, arguments) with no return, system (command,
arguments); return with result or $variable = `command arguments` with
return value in $variable.

I usually use backquotes since I don't have to look up the syntax with
something like:
$return = `rsh kill -hup $pid`; having first looked up $pid but I don't
really know if it worked or not as rsh just returns 0 whether it succeeds
or fails.

Is there an "ordinary practice" method of doing this that I have missed or
at least a way of knowing that the specified command completes successfully
so that I can change $return to $success and test it?

Thanks.
-- 
Jim Flowers <jflowers@ezo.net>


------------------------------

Date: Mon, 3 Nov 1997 18:50:15 -0600
From: tadmc@flash.net (Tad McClellan)
Subject: Re: Right way to execute commands from within perl
Message-Id: <7grl36.qbn.ln@localhost>

Jim Flowers (jflowers@ezo.net) wrote:
: I have struggled with this requirement on a number of occasions as a
: there's-nobody-else-to-do-it programmer and I'm always a little bit
: disappointed at the lack of elegance in my approach.

: I just want to run a command on a different host and know that it did what
: it was supposed to do.  Currently, that's append a line to a file (eg.
: named.boot), discover a process id (eg. named.pid) and send a hup signal to
: that process to force it to reread the modified file.  I seem to have a
: choice of exec (command, arguments) with no return, system (command,
: arguments); return with result or $variable = `command arguments` with
: return value in $variable.
  ^^^^^^^^^^^^^^^^^^^^^^^^^


Backticks gets you the *output* of the command, not the return
value of the command.


: I usually use backquotes since I don't have to look up the syntax with
: something like:
: $return = `rsh kill -hup $pid`; having first looked up $pid but I don't
  ^^^^^^^

We now know that $return does not really hold the return value.

If you _do_ want the return value, then:

   $return = system "rsh kill -hup $pid"; # what do you mean 'look up'
                                          # the syntax anyway? 
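
For example (a sketch; 'somehost' here is a made-up name for the remote
machine, which rsh needs in front of the command):

   $status = system "rsh", "somehost", "kill", "-HUP", $pid;
   warn "rsh exited with status ", $status >> 8, "\n" if $status != 0;

Bear in mind that this is rsh's own exit status, not necessarily that of
the remote kill.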


--
    Tad McClellan                          SGML Consulting
    tadmc@flash.net                        Perl programming
    Fort Worth, Texas


------------------------------

Date: Tue, 4 Nov 1997 01:08:21 GMT
From: weissman@netcom.com (Bob Weissman)
Subject: Seeking Year 2000 checker OR comment stripper for C/C++
Message-Id: <weissmanEJ3KHx.Dtn@netcom.com>
Keywords: perl, year 2000

I'm looking for a perl script to inspect C and C++ code for suspicious
constructs with respect to the Year 2000 Problem.  Such a script should
strip comments and then look for suspicious strings such as "19", "365",
"getdate", "strptime", etc.

If such a beast does not exist, I'd be happy with a script which simply
removes all comments from C and C++ source, including multi-line
comments.  Then I could code the rest myself, but I'm not enough of a
perl wiz to know how to strip multi-line comments.  (Yes, I know the C
preprocessor will strip comments, but it also does other nasty things
like expand #include directives, which I don't want it to do.)
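
A naive sketch of the stripping part, for reference (it will be fooled by
comment markers that appear inside string literals):

#!/usr/bin/perl
# strip C and C++ comments from the files named on the command line
undef $/;                  # slurp each file whole
while (<>) {
    s{/\*.*?\*/}{}gs;      # /* ... */ comments, including multi-line ones
    s{//[^\n]*}{}g;        # // comments, to end of line
    print;
}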

I've looked on CPAN and searched the web, but haven't seen anything like
this.  Any help will be greatly appreciated.

Thanks,
== Bob


------------------------------

Date: Mon, 3 Nov 1997 14:54:42 -0800
From: Tom Phoenix <rootbeer@teleport.com>
To: Bob Jones <Bob.Jones@WKU.EDU>
Subject: Re: suid problem.
Message-Id: <Pine.GSO.3.96.971103144713.19730D-100000@usertest.teleport.com>

On Mon, 3 Nov 1997, Bob Jones wrote:

> Could someone be kind enough to e-mail me (please use e-mail since I
> don't have time to read newsgroups often) a snippet of code that would
> effectively create a new file for output, bypassing the taint problem. 

    $new_file_name = 'foo.bar';
    open FILE, "> $new_file_name"
	or die "Can't create file '$new_file_name': $!";

That's all it takes. :-)  (You may have been misled by the perlsec manpage
from some earlier versions of Perl, which wrongly implied that you could
turn off taint checking by giving up set-id privileges.) 

If you're wanting to use a filename derived from tainted data, you'll want
to use the untainting methods from perlsec.
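
A minimal sketch of that kind of untainting (the allowed-character class
here is only an example; choose what's right for your filenames):

    if ($tainted_name =~ m#^([\w.-]+)$#) {
        $new_file_name = $1;           # the captured substring is not tainted
    } else {
        die "Unexpected characters in filename '$tainted_name'";
    }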

If you're wanting to use the user's privs to open the file, you'll still
need to make sure that the name is untainted. But you may use code like
that listed in perlsec to fork a child process which will do the real open
and write.

You may find my (still experimental) module Taint.pm to be useful. 

    http://www.perl.org/CPAN/authors/id/PHOENIX/

Good luck!

-- 
Tom Phoenix           http://www.teleport.com/~rootbeer/
rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
              Ask me about Perl trainings!



------------------------------

Date: Mon, 3 Nov 1997 16:44:25 -0800
From: Tom Phoenix <rootbeer@teleport.com>
To: fred@no.spam.leeds.ac.uk
Subject: Re: Taking notice of "Use of uninitialisaed value" warnings
Message-Id: <Pine.GSO.3.96.971103162753.19730M-100000@usertest.teleport.com>

On Mon, 3 Nov 1997 fred@no.spam.leeds.ac.uk wrote:

> Subject: Taking notice of "Use of uninitialisaed value" warnings

> 1. When one has deliberately undef'd $/ to slurp a file whole;

Maybe you're using an old version of Perl. I can't get this to make a
warning. If you're using 5.004, could you post some code?

> 2. When printing a complete Config;

Are you referring to the %Config hash from the module Config.pm? This
(like any other hash) may contain undefined values. If you want to print
it, though, this should do the job without warnings.

    use Config qw/%Config config_vars/;
    config_vars keys %Config;

> 3. When printing the results of dumpvar.

This may be a bug in dumpvar, but I can't tell from here. But if you're
printing undef, that's a warnable offense. :-)

> First, am I doing something wrong to get such warnings in any of these
> situations, 

As a general rule, -w shouldn't warn unless you're doing something
suspicious. If it does, there's either a better way to do these things, or
there's a bug in Perl. :-)

> and second, is there a recognised way of removing them when
> the undef'ness is known and potentially useful. 

Instead of printing something which may be undef, you could use code
something like this: 

    print defined($foo) ? $foo : '[$foo is undefined]' ;

Hope this helps!

-- 
Tom Phoenix           http://www.teleport.com/~rootbeer/
rootbeer@teleport.com  PGP   Skribu al mi per Esperanto!
Randal Schwartz Case:  http://www.rahul.net/jeffrey/ovs/
              Ask me about Perl trainings!



------------------------------

Date: Mon, 03 Nov 1997 19:05:08 -0500
From: dowd-nospam@flash.net (Sean Dowd)
Subject: Re: THE POP module!
Message-Id: <dowd-nospam-0311971905090001@dasc1-31.flash.net>

In article <345DB11B.2D82E753@imneverwrong.com>, Gustaf Edgren
<gurra@imneverwrong.com> wrote:

> I am currently developing a web-interface for a pop-mailbox. To do this
> I am using the POP module that you can download at www.perl.com.
> 
> Anyway I haven't got any problems with the module itself, but how do you
> get who it's from?


The module or the message?  :-)

If you mean the former, it's either from Graham Barr
(libnet1.06/Net::POP3) or me (Mail::POP3Client).

For the latter, something like this will work for Mail::POP3Client:

for ( $pop3->Head( $someIndex ) ) {
   /^From:\s(.*)$/ && print $1;
}
-- 
Remove nospam from the email address.


------------------------------

Date: Mon, 03 Nov 1997 15:00:19 -0600
From: "James M. VandeVegt" <vandevegt@lucent.com>
Subject: Trying to compile extensions faster.
Message-Id: <345E3B63.5A9EF75A@lucent.com>

After compiling the base perl distribution, I add about 30 extensions
from the CPAN archive. Is there a special way of untarring these
extensions into the perl source tree such that when I configure and
compile perl, it will configure and compile (and subsequently install)
all the extensions?

Can it handle an ordering, to allow for dependencies?

Any help appreciated.
--
Jim VandeVegt
vandevegt@lucent.com



------------------------------

Date: 8 Mar 97 21:33:47 GMT (Last modified)
From: Perl-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 8 Mar 97)
Message-Id: <null>


Administrivia:

The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc.  For subscription or unsubscription requests, send
the single line:

	subscribe perl-users
or:
	unsubscribe perl-users

to almanac@ruby.oce.orst.edu.  

To submit articles to comp.lang.perl.misc (and this Digest), send your
article to perl-users@ruby.oce.orst.edu.

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.

The Meta-FAQ, an article containing information about the FAQ, is
available by requesting "send perl-users meta-faq". The real FAQ, as it
appeared last in the newsgroup, can be retrieved with the request "send
perl-users FAQ". Due to their sizes, neither the Meta-FAQ nor the FAQ
are included in the digest.

The "mini-FAQ", which is an updated version of the Meta-FAQ, is
available by requesting "send perl-users mini-faq". It appears twice
weekly in the group, but is not distributed in the digest.

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V8 Issue 1267
**************************************
