
Perl-Users Digest, Issue: 6511 Volume: 10

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon May 3 18:05:45 2004

Date: Mon, 3 May 2004 15:05:11 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 3 May 2004     Volume: 10 Number: 6511

Today's topics:
    Re: Books online???? <bxb7668@somewhere.nocom>
    Re: CRC on Unix vs Win32 (Frank Sconzo)
    Re: execute every 15 min <jtc@shell.dimensional.com>
    Re: Finding all open filehandles and closing them befor (Bryan Castillo)
    Re: How to call another program <flavell@ph.gla.ac.uk>
    Re: Howto: Search between 2 files <bmb@ginger.libs.uga.edu>
    Re: is there something more elegant to convert Dos to u <noreply@gunnar.cc>
    Re: is there something more elegant to convert Dos to u (Andrew)
    Re: is there something more elegant to convert Dos to u <xxala_qumsiehxx@xxyahooxx.com>
    Re: is there something more elegant to convert Dos to u <noreply@gunnar.cc>
    Re: is there something more elegant to convert Dos to u <ittyspam@yahoo.com>
    Re: is there something more elegant to convert Dos to u <uri@stemsystems.com>
    Re: OSs with Perl installed <abigail@abigail.nl>
    Re: OSs with Perl installed <flavell@ph.gla.ac.uk>
    Re: single-byte values <tassilo.parseval@rwth-aachen.de>
    Re: Why is Perl losing ground? <dformosa@zeta.org.au>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Mon, 3 May 2004 19:24:27 GMT
From: "bxb7668" <bxb7668@somewhere.nocom>
Subject: Re: Books online????
Message-Id: <Hx5KKo.2LC@news.boeing.com>


"Alan J. Flavell" <flavell@ph.gla.ac.uk> wrote in message
news:Pine.LNX.4.53.0405031752460.9803@ppepc56.ph.gla.ac.uk...
> On Mon, 3 May 2004, bxb7668 wrote:
>
> [messy and IMHO excessive quotage now snipped]
>
> > I would like to interject one comment about RTFM. For the last two
> > weeks I have been learning Perl/Tk by RTFM several manuals, searching
> > the web for help and tutorials (I have a pile of printed web pages
> > over 2" thick) and asking the perl.tk newsgroup. For someone that is
> > good at Perl but new to Tk, the manuals are slightly better than
> > useless ("Programming Perl", "Mastering Perl/Tk" and "Perl in a
> > Nutshell"). Most of the web pages are either too simple ("Hello
> > World") or too complex. The Tk newsgroup has been fairly good. Most
> > of the responses have been helpful by answering my questions or by
> > pointing out where to look in the manual. There have been a few RTFM
> > answers, but I chalk those people up as being too stuck up in their
> > Guru'hood to care about us newbies.
>
> Well, a fine list of generalities.  Frankly I haven't a clue what kind
> of answers you got or what kind of answers you thought you wanted,
> based on that kind of vague description that you are posting.

In this case I wasn't asking anything. My Tk questions had been
answered. I was commenting on sarcastic answers.

>
> > Please everyone. Before you reply with a sarcastic or RTFM answer,
> > consider how you'd like others to respond to you if you were new or
> > stuck or didn't know where to find the answer.
>
> When I'm in that position, I'd like them to show me where to find the
> relevant things out for myself.  If there's an easy answer to my
> problem, I'll want to know not only what that easy answer is, but why
> I missed it, so that I don't go missing other easy answers that are
> probably in the same place.
>
I agree with you completely. I prefer to get the answer to my question
and enough information so that I know how to find the answer myself
the next time.

> On Usenet, we represent ourselves by our postings, nothing more.  If
> it appears from the posting that the questioner hasn't found the
> relevant documentation, we'll try to show him where it is.  If it
> looks as if he -does- know where to find the relevant documentation,
> but prefers to have it read out to him by hundreds or thousands of
> fellow Usenauts instead, then we're liable to react differently. If
> you don't understand why that is, then maybe you need a bit of
> reorientation.  Some of us have been around here long enough to see
> some competent and respected contributors who have abandoned usenet
> because of hordes of whiners who basically wanted spoon-feeding, and
> turned extremely nasty when they didn't get it.  Irrespective of
> whether you are one of them (and I surmise that you are not), that
> unfortunately has a significant effect on the agenda.
>
I'm not sure which I find worse: the "whiners" who want us to do
their work for them, or the "sarcastics" who rant at posters for
questions that they consider "stupid" or "whining". There are too many
OPs who don't say anything more than "Why isn't print working?"
without giving any details. I would rather reply with a polite request
for those details. I've seen some "experts" reply with statements like
"That's a stupid question." and nothing else. That is rude, helpful
to no one, and a waste of bandwidth.

I'll get off my soapbox now.
Brian

> Now, if I may make a suggestion: try to put yourself into the position
> of the many Usenauts who will read your posting.  Try to read it over
> again on the assumption that it was written by a third party, about
> whose capabilities and present expertise you have no knowledge at all,
> and try to work out, on the basis of the posting and nothing more,
> what they're complaining about and what answers you would give them in
> order to satisfy their needs.
>
> Will you (and the other readers here) be able to work out what it was
> that was needed, and why the questioner found himself in difficulties,
> and how he could have been helped out of those difficulties?  My
> diagnosis, for what it's worth, was that they won't, and thus we're
> none of us really any further forward.  Sadly.
>
> ttfn




------------------------------

Date: 3 May 2004 12:42:41 -0700
From: frank.sconzo@dowjones.com (Frank Sconzo)
Subject: Re: CRC on Unix vs Win32
Message-Id: <5d56563e.0405031142.282330f@posting.google.com>

Tim,

Thanks very much for your response; I sincerely appreciate it! I've
been struggling over this for a few days, but you've solved the
puzzle.

How in the world did you figure this out?

Thank you,
Frank


Tim Heaney <theaney@cablespeed.com> wrote in message news:<878yg965ve.fsf@mrbun.watterson>...
> frank.sconzo@dowjones.com (Frank Sconzo) writes:
> >
> > I'm writing a perl module that sends rich-text messages to Microsoft
> > Outlook recipients from Unix. This involves generating CRCs of the
> > plaintext and rtf versions of the mail message.
> >
> > Unfortunately, when I use perl modules to generate the CRC, the values
> > do not match those that the Outlook Client is expecting.
> >
> > For example, I used the crc32 function from Digest::CRC to determine
> > the CRC of the string ABCD. I also sent a message from an Outlook
> > client containing only ABCD as the body text.
> >
> > Digest::CRC::crc32 gives me the following for the CRC of ABCD:
> > db 17 20 a5
> >
> > But the Outlook attachment contains a CRC of ABCD as:
> > b9 ff 53 fa
> >
> > Anyone know why these wouldn't match? 
> 
> I think perhaps Outlook inverted things in a different sense than
> is usual.
> 
>   $ perl -M'Digest::CRC qw(crc_hex)' -le 'print reverse unpack "A2"x4, 
>   crc_hex("ABCD",32,0,0,1,0x04C11DB7,1)'
> 
>   b9ff53fa
> 
> For comparison, the usual CRC32 corresponds to
> 
>   $ perl -M'Digest::CRC qw(crc_hex)' -le 'print crc_hex("ABCD",32,
>   0xffffffff,0xffffffff,1,0x04C11DB7,1)'
> 
>   db1720a5
> 
> Tim
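
For anyone wondering how those parameter lists map onto the CRC itself:
the only difference between the two results is the initial value and the
final XOR. A table-free sketch of both variants, using the reflected form
(0xEDB88320) of the 0x04C11DB7 polynomial; crc32_variant is an invented
name and this is not a drop-in for Digest::CRC:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Bitwise reflected CRC-32 (polynomial 0x04C11DB7, i.e. 0xEDB88320
# in reflected form). $init and $xorout parameterize the two
# variants discussed above.
sub crc32_variant {
    my ($data, $init, $xorout) = @_;
    my $crc = $init;
    for my $byte (unpack 'C*', $data) {
        $crc ^= $byte;
        for (1 .. 8) {
            $crc = ($crc & 1) ? (($crc >> 1) ^ 0xEDB88320) : ($crc >> 1);
        }
    }
    return ($crc ^ $xorout) & 0xFFFFFFFF;
}

# The usual CRC32: initial value and final XOR are both 0xFFFFFFFF.
printf "standard: %08x\n", crc32_variant('ABCD', 0xFFFFFFFF, 0xFFFFFFFF);

# The variant Outlook seems to expect: init 0, no final XOR, and the
# result byte-swapped, as in Tim's one-liner.
my $swapped = join '', reverse unpack 'A2' x 4,
    sprintf '%08x', crc32_variant('ABCD', 0, 0);
print "outlook:  $swapped\n";
```

If the sketch is right, the first line prints db1720a5 and the second
b9ff53fa, matching Tim's two one-liners.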


------------------------------

Date: 3 May 2004 13:29:03 -0600
From: Jim Cochrane <jtc@shell.dimensional.com>
Subject: Re: execute every 15 min
Message-Id: <slrnc9d7bv.snv.jtc@shell.dimensional.com>

In article <324fbb60d3f7935147e9804982bb1cc9@news.teranews.com>, Jeff Boes wrote:
> luc wrote:
>> I have a perl program that downloads the value of some shares on the
>> Brussels stock market. The problem is that every 15 minutes the value
>> changes. How would I go about changing this program so that it
>> automatically starts up at 9.30 (and finishes at 16.00, when the market
>> closes) and every 15 minutes downloads the prices and puts them in
>> different files, so that I would have 32 files? With this data you could
>> easily see the flow of the share.
> 
> As noted elsewhere in this thread, if the program's not running, it 
> can't determine if it should be running. You need cron for that, or else 
> the program has to *be* running, all the time, and just "wake up" 
> periodically to do its thing.

Yep - If a cron-like tool is not available you need to either write your
script to run constantly and do its own timing (sort of like writing your
own cron-daemon that is limited to just one hard-coded task) or write a
driver program or script that runs constantly, does the timing, and calls
the script when needed.

> ...
> A more complex approach involves setting an "alarm clock" signal to wake 
> your process:
> 
> #!/usr/bin/perl -w
> use strict;
> use POSIX;
> 
> $SIG{ALRM} = \&set_alarm_and_snooze;
> 
> set_alarm_and_snooze();
> 
> sub set_alarm_and_snooze {
>    alarm(15 * 60);
>    # Do your download here.
>    POSIX::pause;
> }
> 
> This approach ensures that the starts of two downloads will be separated 
> by 15 minutes, but it also has a couple of problems:
> 
> 1. If a download takes longer than 15 minutes, the next download may 
> interrupt it.

My guess is that a download is so unlikely to take longer than 15
minutes that the script can be written to assume this will rarely occur
and, when it does, do the following: if it's time to download but the
last download is still occurring, skip the current download - i.e., wait
until the next alarm to do the download.  Allowing the in-progress
download to finish will avoid possible data corruption problems.

> 2. If this script runs for a very long time (e.g., many days), you may 
> run out of memory, because it's recursive!

I don't think it would be particularly difficult to write the script such
that it manages resources well - so that its memory use does not constantly
increase.
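
Both objections go away if the recursive alarm scheme is replaced by a
plain loop that forks each download and skips a tick while the previous
child is still running. A rough sketch only; run_scheduler and the
download callback are invented names, and the real script would call it
with a 15-minute interval:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(WNOHANG);

# Loop-based scheduler: forks each download so the timer loop is never
# blocked, and skips a tick if the previous download hasn't finished.
# Returns the number of skipped ticks.
sub run_scheduler {
    my ($interval, $ticks, $download) = @_;
    my $child   = 0;
    my $skipped = 0;
    for (1 .. $ticks) {
        # reap the previous child if it has already exited
        $child = 0 if $child && waitpid($child, WNOHANG) > 0;
        if ($child) {                # still downloading: skip this tick
            $skipped++;
        }
        else {
            defined($child = fork) or die "fork failed: $!";
            if ($child == 0) {       # child: do the download, then exit
                $download->();
                exit 0;
            }
        }
        sleep $interval if $interval;
    }
    waitpid($child, 0) if $child;    # let the last download finish
    return $skipped;
}

# In real use, something like: run_scheduler(15 * 60, 27, \&download_prices);
```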

> These examples are presented to show that a self-contained approach may 
> not be your best bet.

I think this approach is workable if no cron utility is available, as long
as the OP has the resources (skill and time, or money and time if he
decides to hire someone to do it) to implement it.  It's not an easy task,
but finding the right modules might make it not particularly difficult.

> Were this my task, I'd stick with cron -- but a 
> Windows or other OS may not provide such features (although I've had 
> success using the Windows task scheduler in XP, and I'm pretty sure that 
> other contemporary Windows versions have similar things).
> 
> Failing that, you will almost certainly end up at CPAN. Some promising 
> items there include Proc::Daemon, Coro::Timer, Prima::Timer, etc. I've 
> not used these -- I used the Event package instead, which provides a lot 
> more functionality but may be complete overkill for you.

Yes, after some exploration of CPAN, if you (the OP) are unsure which
module to use, a question here (or in the modules group) would probably be
in order.  You also might want to look at the "submodules" under Finance::.
It's possible that some of this functionality is already available.

-- 
Jim Cochrane; jtc@dimensional.com
[When responding by email, include the term non-spam in the subject line to
get through my spam filter.]


------------------------------

Date: 3 May 2004 13:31:22 -0700
From: rook_5150@yahoo.com (Bryan Castillo)
Subject: Re: Finding all open filehandles and closing them before exiting
Message-Id: <1bff1830.0405031231.4e135c8b@posting.google.com>

Rocco Caputo <troc@pobox.com> wrote in message news:<slrnc9879b.lqj.troc@eyrie.homenet>...
> On 30 Apr 2004 23:19:58 -0700, Vilmos Soti wrote:
> > rook_5150@yahoo.com (Bryan Castillo) writes:
> >
> >>>>> I have a signal handler which tries to unmount the disk in
> >>>>> the case of a sigint, but it will fail if copy from File::Copy
> >>>>> has an open filehandle on the mounted disk.
> 
> I use this:
> 
>   use POSIX qw(MAX_OPEN_FDS);
>   POSIX::close($_) for $^F+1 .. MAX_OPEN_FDS;
> 

Are you sure MAX_OPEN_FDS is right?

tuxedo@luke:/usr/include>/usr/local/bin/perl -e "use POSIX
qw(MAX_OPEN_FDS)"
"MAX_OPEN_FDS" is not exported by the POSIX module at
/usr/local/lib/perl5/5.6.1/sun4-solaris/POSIX.pm line 19
Can't continue after import errors at
/usr/local/lib/perl5/5.6.1/sun4-solaris/POSIX.pm line 19
BEGIN failed--compilation aborted at -e line 1.

I found this in limits.h

#define OPEN_MAX        64      /* max # of files a process can have open */

Is this what you meant?  

tuxedo@luke:/usr/include>perl -MPOSIX -e 'print POSIX::OPEN_MAX(),
"\n"'
64

Not really a perl question then, but if you do use this logic, are you
assured that the OS will reuse file descriptor numbers?  So there is a
max of 64 open files on this particular system, but does that mean
that all file descriptor numbers will be < OPEN_MAX?  (I guess I
should really ask on comp.unix.programmer or better yet read the
Single Unix Specification).


> It's a heavy-handed way to close every file descriptor, whether open or
> not.  It won't close stdin, stdout, or stderr, however.  For that, use 0
> instead of $^F+1.
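
On the portability side: OPEN_MAX in limits.h is a compile-time constant,
while the limit that actually applies to the running process is better
obtained at run time from sysconf(). A sketch of the heavy-handed close
loop using it; the 1024 fallback and the 65536 cap are arbitrary guesses:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_OPEN_MAX);

# Query the per-process descriptor limit at run time instead of
# relying on the compile-time OPEN_MAX constant.
my $max_fd = sysconf(_SC_OPEN_MAX);
$max_fd = 1024  unless defined $max_fd && $max_fd > 0;  # arbitrary fallback
$max_fd = 65536 if $max_fd > 65536;   # cap the sweep on huge limits
print "max open fds: $max_fd\n";

# Heavy-handed close of everything above the system-reserved range;
# $^F is normally 2, so stdin/stdout/stderr are left alone.
POSIX::close($_) for $^F + 1 .. $max_fd - 1;
```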


------------------------------

Date: Mon, 3 May 2004 19:55:58 +0100
From: "Alan J. Flavell" <flavell@ph.gla.ac.uk>
Subject: Re: How to call another program
Message-Id: <Pine.LNX.4.53.0405031944310.9935@ppepc56.ph.gla.ac.uk>

On Mon, 3 May 2004, Joe Smith wrote:

> You could have part-1.pl output HTML that runs two CGIs in parallel.
>
>       <frameset rows="100%,*">
> 	<frame src="/cgi-bin/part-2.pl">
> 	<frame src="http://www.domain.com/cgi-bin/copyprog.pl">
>       </frameset>
>       <noframes>
>          <body><img src="/cgi-bin/part-2.pl" width=1 height=1>
>          <img src="http://www.domain.com/cgi-bin/copyprog.pl" width=1 height=1>
>          </body>
>       </noframes>

Was that wise?  While it seems to have got the questioner off our
backs, I'd have to say that if it were to be presented as a solution
in a WWW group (which is where I suspect it would be more on-topic,
since there's almost no specifically Perl-language relevance here), it
would likely have been shot down in flames.

There's just too many imponderables being left to the client side
(who'd be perfectly entitled to disable frames and/or images if they
felt like it) for this to rate as a robust piece of web engineering,
I'm afraid.

Since the original poster still hasn't really told us what the
*underlying* problem is - beyond what we can deduce from the
description of a number of failed attempts at a solution - we don't
know whether such a shambolic edifice is going to represent a
tolerable kludge, or a life-threatening risk, or something in between.

So "caveat implementor", or something like that.


------------------------------

Date: Mon, 3 May 2004 16:26:34 -0400
From: Brad Baxter <bmb@ginger.libs.uga.edu>
Subject: Re: Howto: Search between 2 files
Message-Id: <Pine.A41.4.58.0405031615500.12768@ginger.libs.uga.edu>

On Fri, 30 Apr 2004, josetg wrote:
> Am a absolute newbie.
>
> I have a list of keywords in one file, and the text to search for in 2nd file.
> I need to find lines in the 2nd file which match keywords in the 1st file.
>
> The output should be sorted by line # in 2nd file.

Since you're an absolute newbie, this somewhat facetious answer might not help you.

perl -F'\W' -ane'if($x){(@a=grep$_{$_},@F)&&print"$.: @a: $_"}else{map++$_,@_{@F}};eof&&($x++,$.=0)' file1 file2

This works for some definitions of 'keywords' and 'match'.
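
Unpacked into a plain script, the same idea reads like this; "keyword" is
assumed to mean any \w+ token in the first file, "match" a line containing
at least one of them, and match_keywords is an invented name:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Longhand version of the one-liner. Takes the keyword lines and the
# text lines as array refs; returns "lineno: matched words: line"
# records, in line order.
sub match_keywords {
    my ($kw_lines, $text_lines) = @_;

    # collect keywords from the first file's lines
    my %keyword;
    for (@$kw_lines) {
        $keyword{$_}++ for grep { length } split /\W+/;
    }

    # scan the second file's lines for keyword hits
    my (@out, $n);
    for my $line (@$text_lines) {
        $n++;
        my @hits = grep { $keyword{$_} } grep { length } split /\W+/, $line;
        push @out, "$n: @hits: $line" if @hits;
    }
    return @out;
}

print "$_\n" for match_keywords(
    [ 'apple', 'cherry' ],
    [ 'apple pie', 'banana split', 'cherry apple tart' ],
);
```

With the demo data above it reports lines 1 and 3, each prefixed with the
matching keywords.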

Regards,

Brad


------------------------------

Date: Mon, 03 May 2004 22:58:22 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: is there something more elegant to convert Dos to unix in   subroutine?
Message-Id: <c76c11$8jc1$1@ID-184292.news.uni-berlin.de>

Paul Lalli wrote:
>> Gunnar Hjalmarsson wrote:
>>> Yes. Subroutines that do what they are supposed to do are
>>> always more elegant.
> 
> I *think* the bit Gunnar was complaining about is the line right
> below chomp.

<snip>

> (Gunnar, feel free to point out if there's another bit that I'm
> missing)

My main "complaint" is the prototype, which prevents arguments from
being passed to the sub.

-- 
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl



------------------------------

Date: 3 May 2004 12:43:33 -0700
From: myfam@surfeu.fi (Andrew)
Subject: Re: is there something more elegant to convert Dos to unix in subroutine?
Message-Id: <c5826e91.0405031143.131652ac@posting.google.com>

Hm, my subroutine is actually working just fine; prove me wrong. It
converts DOS files to UNIX just fine.
All examples given involve calling perl from code. I don't like that; I
would like a subroutine or function which can be included in my perl
code. I don't like calling perl from perl.
Thanks,


Gunnar Hjalmarsson <noreply@gunnar.cc> wrote in message news:<c73qfu$hqsum$1@ID-184292.news.uni-berlin.de>...
> Andrew wrote:
> > 
> > [ Subject: is there something more elegant to convert Dos to unix
> > in subroutine? ]
> > 
> > sub toUnixFile() {
> >   my ($file) = @_;
> >   my ($temp_file) = $file . ".tmp";
> >   move($file, $temp_file);
> >   open(in, "<$temp_file");
> >   open(out, ">$file");
> >   while(<in>) {
> >     chomp;
> >     ~ s/\r$//;
> >     print out "$_\n";
> >   }
> >   close in;
> >   close out;
> >   unlink $temp_file;
> >   return 0;
> > }
> 
> Yes. Subroutines that do what they are supposed to do are always more
> elegant.
> 
> Please post working code, or rephrase your question.


------------------------------

Date: Mon, 03 May 2004 20:27:34 GMT
From: "Ala Qumsieh" <xxala_qumsiehxx@xxyahooxx.com>
Subject: Re: is there something more elegant to convert Dos to unix in subroutine?
Message-Id: <WWxlc.44351$Lm6.13735@newssvr29.news.prodigy.com>

"Michele Dondi" <bik.mido@tiscalinet.it> wrote in message
news:2ecc90pjhom1m5qvtg6ouqeiu1sbhrmri9@4ax.com...
> On 2 May 2004 14:35:44 -0700, myfam@surfeu.fi (Andrew) wrote:
>
> >sub toUnixFile() {
> [snip]
>
> FWIW I customarily use
>
>   perl -lpi.bak -e '' <files>

Correct me if I'm wrong, but this doesn't work on *nix systems since the
auto chomp() will only remove \n characters, leaving \r's intact.
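
Indeed; with a Unix build of perl, -l's implicit chomp strips only the
\n, so an explicit substitution is needed to get rid of the \r as well.
A quick demonstration (dos.txt is a stand-in filename):

```shell
# Build a small CRLF test file, then strip the CRs in place, keeping
# a .bak copy. With -p (and no -l), $ matches just before the final
# \n, so s/\r$// removes the \r that precedes it.
printf 'line one\r\nline two\r\n' > dos.txt
perl -pi.bak -e 's/\r$//' dos.txt
```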

--Ala




------------------------------

Date: Mon, 03 May 2004 22:48:00 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: is there something more elegant to convert Dos to unix in subroutine?
Message-Id: <c76bdj$9ptk$1@ID-184292.news.uni-berlin.de>

[ Please put quoted text *before* your own comments. ]

Andrew wrote:
> Gunnar Hjalmarsson wrote:
>> Andrew wrote:
>>> 
>>> [ Subject: is there something more elegant to convert Dos to
>>> unix in subroutine? ]
>>> 
>>> sub toUnixFile() {
>>>   my ($file) = @_;
>>>   my ($temp_file) = $file . ".tmp";
>>>   move($file, $temp_file);
>>>   open(in, "<$temp_file");
>>>   open(out, ">$file");
>>>   while(<in>) {
>>>     chomp;
>>>     ~ s/\r$//;
>>>     print out "$_\n";
>>>   }
>>>   close in;
>>>   close out;
>>>   unlink $temp_file;
>>>   return 0;
>>> }
>> 
>> Yes. Subroutines that do what they are supposed to do are always
>> more elegant.
>> 
>> Please post working code, or rephrase your question.
> 
> Hm, my subroutine is actually working just fine, prove me wrong,

Sure.

     #!/usr/bin/perl

     sub toUnixFile() {
       my ($file) = @_;
     }

     toUnixFile('/path/to/file');

Resulting error message:
"Too many arguments for main::toUnixFile"

Since your subroutine requires an argument, the code won't even compile.

Are you still claiming that your sub is working just fine? ;-)

> All examples given involve calling perl from code, I don't like it,
> I would like a subroutine or function which can be included in my
> perl code.

Right, and you said so in the subject line. Note that you might have
got more accurate responses if you had repeated your question in the
body of the message.

If the files to be processed aren't too big, you can always slurp them
into a scalar:

     sub toUnixFile {
         my $file = shift;
         local(*FH, $/);
         open FH, "+< $file" or die $!;
         $_ = <FH>;
         tr/\r//d;
         seek FH, 0, 0;
         truncate FH, 0;
         print FH;
     }

-- 
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl



------------------------------

Date: Mon, 3 May 2004 16:27:08 -0400
From: Paul Lalli <ittyspam@yahoo.com>
Subject: Re: is there something more elegant to convert Dos to unix in subroutine?
Message-Id: <20040503161946.F1160@dishwasher.cs.rpi.edu>

[please post your reply below the quoted material - doing otherwise is
considered rude]

On Mon, 3 May 2004, Andrew wrote:

> Gunnar Hjalmarsson <noreply@gunnar.cc> wrote in message news:<c73qfu$hqsum$1@ID-184292.news.uni-berlin.de>...
> >
> > Andrew wrote:
> > >
> > > [ Subject: is there something more elegant to convert Dos to unix
> > > in subroutine? ]
> > >
> > > sub toUnixFile() {
> > >   my ($file) = @_;
> > >   my ($temp_file) = $file . ".tmp";
> > >   move($file, $temp_file);
> > >   open(in, "<$temp_file");
> > >   open(out, ">$file");
> > >   while(<in>) {
> > >     chomp;
> > >     ~ s/\r$//;
> > >     print out "$_\n";
> > >   }
> > >   close in;
> > >   close out;
> > >   unlink $temp_file;
> > >   return 0;
> > > }
> >
> > Yes. Subroutines that do what they are supposed to do are always more
> > elegant.
> >
> > Please post working code, or rephrase your question.
>
> Hm, my subroutine is actually working just fine, prove me wrong, it
> converts DOS files to UNIX just fine.

I *think* the bit Gunnar was complaining about is the line right below
chomp.  Presumably, your goal with this line is to perform a search and
replace on $_.  The bizarre part is the random ~ in front of it.  Why is
that there?  While not a syntax error, it is a most unusual logic error.
(And indeed, if you enable warnings, you'll see Perl yell at you about
"useless use of one's complement" or similar.)  The fact that the code
still works is more a side effect of that line than anything else.

(Gunnar, feel free to point out if there's another bit that I'm missing)
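
The effect is easy to see in isolation: the s/// still edits $_ as a
side effect, while the ~ merely complements the returned match count and
throws the result away. A minimal demonstration:

```perl
#!/usr/bin/perl
use strict;
use warnings;   # warns about a useless use of ~ in void context

$_ = "some line\r";
~ s/\r$//;      # the substitution modifies $_ in place; the ~
                # complements its return value, which is discarded
print "stripped: [$_]\n";
```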

> All examples given involve calling perl from code, I don't like it, I
> would like a subroutine or function which can be included in my perl
> code. I don't like calling perl from perl.
> Thanks,

In the future, you should probably specify important details like that at
the beginning.  The reason everyone sent you those tiny short programs was
that you never specified this would be a small part of a larger program.

What, out of curiosity, is the cause of your disliking calling a Perl
one-liner from within a larger program?

Paul Lalli


------------------------------

Date: Mon, 03 May 2004 20:52:00 GMT
From: Uri Guttman <uri@stemsystems.com>
Subject: Re: is there something more elegant to convert Dos to unix in subroutine?
Message-Id: <x7zn8ptes0.fsf@mail.sysarch.com>

>>>>> "A" == Andrew  <myfam@surfeu.fi> writes:

  A> Hm, my subroutine is actually working just fine, prove me wrong, it
  A> converts DOS files to UNIX just fine.

it has several bugs.

  A> All examples given involve calling perl from code, I don't like it, I
  A> would like a subroutine or function which can be included in my perl
  A> code. I don't like calling perl from perl.

huh? what calling perl from code are you talking about? the answers were
all one liners and it is trivial to convert any of them to a sub.

  >> > 
  >> > sub toUnixFile() {
  >> >   my ($file) = @_;
  >> >   my ($temp_file) = $file . ".tmp";

and what if that file already existed?

  >> >   move($file, $temp_file);
  >> >   open(in, "<$temp_file");
  >> >   open(out, ">$file");

and what if either of those open calls fails?

  >> >   while(<in>) {
  >> >     chomp;
  >> >     ~ s/\r$//;

what is that naked ~ doing there?

  >> >     print out "$_\n";
  >> >   }
  >> >   close in;
  >> >   close out;
  >> >   unlink $temp_file;
  >> >   return 0;

why the return 0? you don't return from anywhere else. 

so your sub is not 'actually working just fine'. proving it wrong was
too easy. you just didn't get the answers.

now fix your sub and use the ideas shown and post new code. you have to
do some of the work too.

uri

-- 
Uri Guttman  ------  uri@stemsystems.com  -------- http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs  ----------------------------  http://jobs.perl.org


------------------------------

Date: 03 May 2004 21:01:07 GMT
From: Abigail <abigail@abigail.nl>
Subject: Re: OSs with Perl installed
Message-Id: <slrnc9dcoj.egl.abigail@alexandra.abigail.nl>

Matt Garrish (matthew.garrish@sympatico.ca) wrote on MMMDCCCXCVIII
September MCMXCIII in <URL:news:2Kilc.8851$ZJ5.412564@news20.bellglobal.com>:
::  
::  
::  At some point M$ has to stop expecting people to keep paying ridiculous
::  amounts of money to upgrade their own buggy platform. If they think they can
::  keep riding that tide they're sorely mistaken. Office's diminished sales are
::  proof of that. The growth of Linux is also proof. The ability to succeed
::  rests on a company's ability to adapt to its changing environment (or
::  ability to buy out all competition as M$ is wont to do on occasion), and
::  Linux is the first real test for Microsoft because they can't just buy Linux
::  out. As I said before, the next ten years will show whether they get the
::  real hint and do something about it (i.e., that people are getting fed up
::  with the exorbitant costs and shoddy software) or if it is their time to
::  wither. I don't see their current business model keeping them healthy for
::  long if they don't change, though.


Come back to me when 10% of the non-IT Fortune-500 companies have switched
at least 20% of their office management to Linux.

I've been hearing that Linux will be the death of Microsoft for years.
I don't know a single non-IT company where non-techies mainly work on
Unix/Linux platforms. But I do see and hear about companies switching
to Microsoft.



Abigail
-- 
use   lib sub {($\) = split /\./ => pop; print $"};
eval "use Just" || eval "use another" || eval "use Perl" || eval "use Hacker";


------------------------------

Date: Mon, 3 May 2004 22:58:19 +0100
From: "Alan J. Flavell" <flavell@ph.gla.ac.uk>
Subject: Re: OSs with Perl installed
Message-Id: <Pine.LNX.4.53.0405032252210.10388@ppepc56.ph.gla.ac.uk>

On Mon, 3 May 2004, Abigail wrote:

> I don't know a single non-IT company where non-techies mainly work on
> Unix/Linux platforms.

Are you taking up a new job as devil's advocate?


------------------------------

Date: Mon, 3 May 2004 22:08:46 +0200
From: "Tassilo v. Parseval" <tassilo.parseval@rwth-aachen.de>
Subject: Re: single-byte values
Message-Id: <c768sg$8rrn$1@ID-231055.news.uni-berlin.de>

Also sprach Don Stock:

>> Also sprach Don Stock:
> 
> as in the ape-man discovering tools? :)

Hmmh? I would carefully doubt that.

>> Finally, only use printf() when you actually make use of its
>> interpolation features. Again, Perl is not C.
> 
> by "interpolation features" you mean %d and such?  

Yes, exactly.

> I don't see any harm in using printf (correct me if I'm wrong).  

No harm per se. Just unnecessary work for the Perl interpreter.

> Plus, I once ran into a problem with "print $x" where $x contained a
> '%' or '@' (I don't remember for sure) and was evaluated at that point
> like a hash (or array).  So I got in the habit of always using printf
> ('printf "%s\n",$x' cured it).  Except that I couldn't recreate it
> just now with my new(er) version of perl, so maybe it was a bug that's
> gone away.  Or maybe I simply screwed up back then.  Or who knows...

What you describe can't have happened. Perl does no double-interpolation
(and it didn't do so in older versions either):

    my $var = '@array';
    print "$var is an array\n";
    __END__
    @array is an array

Same behaviour for a '%' and '$' and in fact any character.

> "hey buddy, toss me the upper leg bone of that antelope willya?  I
> need to go get breakfast."

I always devour antelopes whole. There's never anything left when I
am through with one. Sorry.

Tassilo
-- 
$_=q#",}])!JAPH!qq(tsuJ[{@"tnirp}3..0}_$;//::niam/s~=)]3[))_$-3(rellac(=_$({
pam{rekcahbus})(rekcah{lrePbus})(lreP{rehtonabus})!JAPH!qq(rehtona{tsuJbus#;
$_=reverse,s+(?<=sub).+q#q!'"qq.\t$&."'!#+sexisexiixesixeseg;y~\n~~dddd;eval


------------------------------

Date: 04 May 2004 07:52:46 +1000
From: ? the Platypus {aka David Formosa} <dformosa@zeta.org.au>
Subject: Re: Why is Perl losing ground?
Message-Id: <m3ad0p6uvl.fsf@dformosa.zeta.org.au>

fishfry <BLOCKSPAMfishfry@your-mailbox.com> writes:

[...]

> * Increasingly bizarre and arcane syntax features

I'll give you that.
 
> * Refusal of developers to produce a language standard

How does that reduce the popularity of the language?  I would
counterargue that Perl is the most standard language I know of, since
once I've written the code I'm guaranteed it will work the same
everywhere.

> * Emphasis on new features over consistency and stability

I've never had stability problems with production perl.

> * Many original language features becoming deprecated, leading to code 
> that stops running.

Deprecated features don't stop working; deprecation means that you
shouldn't use a feature, not that it has been removed.

-- 
Please excuse my spelling as I suffer from agraphia. See
http://dformosa.zeta.org.au/~dformosa/Spelling.html to find out more.
Free the Memes.


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V10 Issue 6511
***************************************

