[30945] in Perl-Users-Digest
Perl-Users Digest, Issue: 2190 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sat Feb 7 14:09:46 2009
Date: Sat, 7 Feb 2009 11:09:10 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Sat, 7 Feb 2009 Volume: 11 Number: 2190
Today's topics:
[OT] Programmers (UK)...? <chris-usenet@roaima.co.uk>
Re: [OT] Programmers (UK)...? <tadmc@seesig.invalid>
Re: [OT] Programmers (UK)...? <uri@stemsystems.com>
Re: [OT] Programmers (UK)...? <cliff@excite.com>
Re: [OT] Programmers (UK)...? <tim@burlyhost.com>
Re: automating a perl installation on a cluster (or usi <sam@email-scan.com>
Re: Date/Time module <fake@phony.invalid>
Re: Extracting functions from C/C++ using Perl, would l sln@netherlands.com
Re: File handle to "in memory" file <devnull4711@web.de>
Re: File handle to "in memory" file <uri@stemsystems.com>
Re: HTML::TreeBuilder issue <hjp-usenet2@hjp.at>
Re: OT: Re: Perl Peeves <hjp-usenet2@hjp.at>
Perl Memory and die...what is happening here? <martinhport-news@yahoo.com>
Re: Programmers (UK)...? <cartercc@gmail.com>
Re: Reducing log size <tadmc@seesig.invalid>
Re: Reducing log size <ben@morrow.me.uk>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Sat, 07 Feb 2009 12:37:07 +0000
From: Chris Davies <chris-usenet@roaima.co.uk>
Subject: [OT] Programmers (UK)...?
Message-Id: <jsb066x579.ln2@news.roaima.co.uk>
Looking for advice, please. In a couple of months I *might* be in the
market for a couple of permanent Perl programmers based out of Leeds,
UK. The job agencies I've used previously are all telling me that Perl
is a "legacy" language, and that they'd be surprised if they were able
to find anyone (!) with strong recent perl experience. (One person even
told me that "everyone" had migrated to .NET. Very strange.)
Can anyone recommend places (websites) or agencies I should be contacting,
where Perl experience is an advantage rather than a deadweight?
Since this is pretty much OT, I'd suggest it might be appropriate to email
me directly (the Reply-To is valid). As used to be the case "way back",
I can summarise anything useful in a few days.
Many thanks,
Chris
------------------------------
Date: Sat, 7 Feb 2009 09:15:21 -0600
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: [OT] Programmers (UK)...?
Message-Id: <slrngor9c9.5hv.tadmc@tadmc30.sbcglobal.net>
Chris Davies <chris-usenet@roaima.co.uk> wrote:
> Can anyone recommend places (websites) or agencies I should be contacting,
> where Perl experience is an advantage rather than a deadweight?
http://jobs.perl.org
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sat, 07 Feb 2009 11:48:25 -0500
From: Uri Guttman <uri@stemsystems.com>
Subject: Re: [OT] Programmers (UK)...?
Message-Id: <x7eiyatcpy.fsf@mail.sysarch.com>
>>>>> "TJM" == Tad J McClellan <tadmc@seesig.invalid> writes:
TJM> Chris Davies <chris-usenet@roaima.co.uk> wrote:
>> Can anyone recommend places (websites) or agencies I should be contacting,
>> where Perl experience is an advantage rather than a deadweight?
TJM> http://jobs.perl.org
also the london perl mongers manage their own jobs list so you would
get a more focused group. and i know several UK hackers who might
be available. but maybe hiring in 2 months is not useful immediately.
as for placement agencies, i am the only one in the world that is 100%
dedicated to perl recruitment. you can reach me at uri AT
perlhunter.com. perl is not close to dead nor is its job market. you
have been talking to useless agents who do buzzword matching.
uri
--
Uri Guttman ------ uri@stemsystems.com -------- http://www.sysarch.com --
----- Perl Code Review , Architecture, Development, Training, Support ------
--------- Free Perl Training --- http://perlhunter.com/college.html ---------
--------- Gourmet Hot Cocoa Mix ---- http://bestfriendscocoa.com ---------
------------------------------
Date: Sat, 07 Feb 2009 12:43:53 -0500
From: Cliff MacGillivray <cliff@excite.com>
Subject: Re: [OT] Programmers (UK)...?
Message-Id: <9e072$498dc85b$d0365e05$5973@news.eurofeeds.com>
Uri Guttman wrote:
>>>>>> "TJM" == Tad J McClellan <tadmc@seesig.invalid> writes:
>
> TJM> Chris Davies <chris-usenet@roaima.co.uk> wrote:
> >> Can anyone recommend places (websites) or agencies I should be contacting,
> >> where Perl experience is an advantage rather than a deadweight?
>
> TJM> http://jobs.perl.org
>
> also the london perl monger's manage their own jobs list so you would
> get more a more focused group. and i know several UK hackers who might
> be available. but maybe hiring in 2 months is not useful immediately.
>
> as for placement agencies, i am the only one in the world that is 100%
> dedicated to SPAMSPAMSPAMSPAMSPAMSPAMSPAM you
> have been talking to useless agents who do buzzword matching.
abuse complaint sent
------------------------------
Date: Sat, 07 Feb 2009 10:08:45 -0800
From: Tim Greer <tim@burlyhost.com>
Subject: Re: [OT] Programmers (UK)...?
Message-Id: <N6kjl.331$jr7.39@newsfe08.iad>
Chris Davies wrote:
> Looking for advice, please. In a couple of months I *might* be in the
> market for a couple of permanent Perl programmers based out of Leeds,
> UK. The job agencies I've used previously are all telling me that Perl
> is a "legacy" language, and that they'd be surprised if they were able
> to find anyone (!) with strong recent perl experience. (One person
> even told me that "everyone" had migrated to .NET. Very strange.)
This is why you can't use a job agency: too many of their people are
clueless and more interested in the newest buzz/hype words and
requirements. I hope they didn't charge for their non-service.
> Can anyone recommend places (websites) or agencies I should be
> contacting, where Perl experience is an advantage rather than a
> deadweight?
See http://jobs.perl.org.
--
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting. 24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!
------------------------------
Date: Sat, 07 Feb 2009 07:09:32 -0600
From: Sam <sam@email-scan.com>
Subject: Re: automating a perl installation on a cluster (or using non-standard nfs paths)
Message-Id: <cone.1234012171.767069.27881.500@commodore.email-scan.com>
Rahul writes:
> I am configuring a perl script (check_lm_sensors; a nagios plugin) so that I
> can eventually roll it out on the 200+ servers we have in our computer
> cluster. The problem is that there are several dependencies, e.g. the CPAN
> modules Nagios::Plugin, Class::Accessor, Config::Tiny, version.
>
> I already have one machine running where I used "perl -MCPAN -e 'install
> Math::Calc::Units'" etc. for each dependency that the original code
> complained about. What's a better way of doing this so that I can
> automate the process?
Use your Linux distribution's native software update and maintenance tools.
With RPM/yum based distros, for example, provided that you package your Perl
module properly, define your dependencies, and have your standard yum
repositories enabled, "yum install perl-yourmodule.rpm" will automatically
download and install all of your dependencies, if necessary.
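For example, a minimal Makefile.PL along these lines (the distribution name,
version and file layout are placeholders; only the module list comes from the
question above) declares the prerequisites so that spec-file generators and
the distribution's packaging tools can turn them into package requirements:

use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'Local::CheckLmSensors',   # placeholder name
    VERSION   => '0.01',
    EXE_FILES => [ 'check_lm_sensors' ],
    PREREQ_PM => {
        'Nagios::Plugin'  => 0,
        'Class::Accessor' => 0,
        'Config::Tiny'    => 0,
        'version'         => 0,
    },
);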
> What's the best way to handle these twin related problems: (1) Automating
> a multi-dependency perl install to many servers
Use your Linux distribution's standard packaging tools.
> (2) using non-standard
> paths.
The best way to handle non-standard paths is to eliminate them. Use your
standard Perl install, and use the built-in, tested tools that are part of
your distribution.
------------------------------
Date: Sat, 07 Feb 2009 02:18:08 -0700
From: Shawn N Blank <fake@phony.invalid>
Subject: Re: Date/Time module
Message-Id: <8gjqo49aq9ob74npt32ppg362hkbq9q452@4ax.com>
On Sat, 7 Feb 2009 07:51:41 +0000 (UTC), morty@sanctuary.arbutus.md.us
(Mordechai T. Abzug) wrote:
>Then instead of starting up for each connection via inetd, make it a
>standalone daemon. It's not hard to do, and should eliminate most of
>the startup cost.
Good plan. On the to do list.
--
Shawn
------------------------------
Date: Sat, 07 Feb 2009 18:36:13 GMT
From: sln@netherlands.com
Subject: Re: Extracting functions from C/C++ using Perl, would like Code Review Help if possible
Message-Id: <l0jro4thph0oimi84k84icbcse8dohokvu@4ax.com>
On Fri, 06 Feb 2009 23:09:47 GMT, sln@netherlands.com wrote:
>First installment.
>This was inspired by some other post on here.
>I was wondering if I could get a review of my preliminary.
>I need constructive critque's.
>
>Thank you!
>- sln
>
>## ===============================================
>## C_FunctionParser_v1.pl
>## -------------------------------
[snip]
Version 2 - same as v1 except the recursion was taken out of
Find_Function(). This speeds it up and was not really needed.
Don't worry, I won't be posting menial fixes; this should have
been the initial post, and it just corrects the base.
Version 3 will have significant modifications:
exclusions for language intrinsics (for, if, while, case, etc.),
distinctions for typedefs, macros, prototypes, class declarations, methods,
and expanded parametric position data such as pre/post line feeds and nesting
information.
I won't post any more of this unless it turns out to be a bit more useful than
it is right now. If anything, it will be a couple of weeks; if it doesn't pan out,
then none at all. There are probably modules that do it all and this is a waste of time.
But any helpful comments on the code or what it's doing are welcome.
- sln
## ===============================================
## C_FunctionParser_v2.pl @ 2/7/09
## -------------------------------
## C/C++ Style Function Parser
## Idea - To parse out C/C++ style functions
## that have parenthetical closures (some don't).
## (Could be a package some day, dunno, maybe ..)
## - sln
## ===============================================
my $VERSION = 2.0;
$|=1;
use strict;
use warnings;
# Prototypes
sub Find_Function(\$\@);
# File-scoped variables
my ($FxParse,$FName,$Preamble);
# Set default function name
SetFunctionName();
## --------
## Main
## --------
# Source file
my $Source = join '', <DATA>;
# Extended, possibly non-compliant, function name - pattern examples:
# SetFunctionName(qr/_T/);
# SetFunctionName(qr/\(\s*void\s*\)\s*function/);
# SetFunctionName("\\(\\s*void\\s*\\)\\s*function");
# Parse some functions
# func ...
my @Funct = ();
Find_Function($Source, @Funct);
# func2 ...
my @Funct2 = ();
SetFunctionName(qr/_T/);
Find_Function($Source, @Funct2);
# Print @Funct functions found
# Note that segments can be modified and collated.
if (!@Funct) {
print "Function name pattern: '$FName' not found!\n";
} else {
print "\nFound ".@Funct." matches.\nFunction pattern: '$FName' \n";
}
for my $ref (@Funct) {
printf "\n\@: %6d - %s\n", $$ref[3], substr($Source, $$ref[0], $$ref[2] - $$ref[0]);
}
## ----------
## End
## ----------
# ---------------------------------------------------------
# Set the parser's function regex pattern
#
sub SetFunctionName
{
if (!@_) {
$FName = "_*[a-zA-Z][\\w]*"; # Matches all compliant function names (default)
} else {
$FName = shift;
}
$Preamble = "\\s*\\(";
# Compile function parser regular expression
# Regex condensed:
# $FxParse = qr!/{2}.*?\n|/\*.*?\*/|\\.|'["()]'|(")|($FName$Preamble)|(\()|(\))!s;
# | | |1 1|2 2|3 3|4 4
# Note - Non-captured matching items are meant to consume!
# -----------------------------------------------------------
# Regex /xpanded (with commentary):
$FxParse = # Regex Precedence (items MUST be in this order):
qr! # -----------------------------------------------
/{2}.*?\n | # comment - // + anything + end of line
/\*.*?\*/ | # comment - /* + anything + */
\\. | # escaped char - backslash + ANY character
'["()]' | # single quote char - quote then one of ", (, or ), then quote
(") | # capture $1 - double quote as a flag
($FName$Preamble) | # capture $2 - $FName + $Preamble
(\() | # capture $3 - ( as a flag
(\)) # capture $4 - ) as a flag
!xs;
}
# Procedure that finds C/C++ style functions
# (the engine)
# Notes:
# - This is not a syntax checker !!!
# - Nested functions index and closure are cached. The search is single pass.
# - Parenthetical closures are determined via cached counter.
# - This precedence avoids all ambiguous parenthetical open/close conditions:
# 1. Dual comment styles.
# 2. Escapes.
# 3. Single quoted characters.
# 4. Double quotes, flip-flopped to determine closure.
# - Improper closures are reported, with the last one reliably being the likely culprit
# (this would be a syntax error, i.e. the code won't compile, but it is reported as a closure error).
#
sub Find_Function(\$\@)
{
my ($src,$Funct) = @_;
my @Ndx = ();
my @Closure = ();
my ($Lines,$offset,$closure,$dquotes) = (1,0,0,0);
while ($$src =~ /$FxParse/g)
{
if (defined $1) # double quote "
{
$dquotes = !$dquotes;
}
next if ($dquotes);
if (defined $2) # 'function name'
{
# ------------------------------------
# Placeholder for exclusions......
# ------------------------------------
# Cache the current function index and current closure
push @Ndx, scalar(@$Funct);
push @Closure, $closure;
my ($funcpos, $parampos) = ( $-[0], pos($$src) );
# Get newlines since last function
$Lines += substr ($$src, $offset, $funcpos - $offset) =~ tr/\n//;
# print $Lines,"\n";
# Save positions: function( parms )
push @$Funct , [$funcpos, $parampos, 0, $Lines];
# Assign new offset
$offset = $funcpos;
# Closure is now 1 because of preamble '('
$closure = 1;
}
elsif (defined $3) # '('
{
++$closure;
}
elsif (defined $4) # ')'
{
--$closure;
if ($closure <= 0)
{
$closure = 0;
if (@Ndx)
{
# Pop index and closure, store position
$$Funct[pop @Ndx][2] = pos($$src);
$closure = pop @Closure;
}
}
}
}
# To test an error, either take off the closure of a function in its source,
# or force it this way (pseudo error, make sure you have data in @$Funct):
# push @Ndx, 1;
# It's an error if the index stack has elements.
# The last one reported is the likely culprit.
if (@Ndx)
{
## BAD RETURN ...
## All elements in stack have to be fixed up
while (@Ndx) {
my $func_index = shift @Ndx;
my $ref = $$Funct[$func_index];
$$ref[2] = $$ref[1];
print STDERR "** Bad return, index = $func_index\n";
print "** Error! Unclosed function [$func_index], line ".
$$ref[3].": '".substr ($$src, $$ref[0], $$ref[2] - $$ref[0] )."'\n";
}
return 0;
}
return 1;
}
__DATA__
------------------------------
Date: Sat, 07 Feb 2009 16:22:58 +0100
From: Frank Seitz <devnull4711@web.de>
Subject: Re: File handle to "in memory" file
Message-Id: <6v5nacFhfru7U1@mid.individual.net>
Uri Guttman wrote:
>>>>>> "FS" == Frank Seitz <devnull4711@web.de> writes:
>
> FS> Christian Winter wrote:
> >> Frank Seitz schrieb:
> >>> with
> >>>
> >>> open $fh,'>',\$var
> >>>
> >>> I can connect a file handle $fh with a scalar $var.
> >>>
> >>> How can I find out, given $fh, whether $fh is connected
> >>> with a scalar and which scalar it is connected with.
> >> I don't think there's a solution to that which doesn't entail
> >> delving into the depth of the header files and analyzing the
> >> file handle's glob magic.
>
> FS> It's a pity, because I think I need this.
>
> i would wager a large sum of quatloos that you don't need it. please
> post your (imagined :) reasons for needing this.
I wrote a simple class "StderrCatcher". The constructor
closes STDERR and reopens it as a handle to an "in memory" file:
our $Stderr;
sub new {
my $class = shift;
close STDERR;
open STDERR,'>',\$Stderr or die;
return bless \$Stderr,$class;
}
With a call to
sub string {
my $self = shift;
return $$self;
}
I get everything the program has written to STDERR since
object instantiation.
Now the problem: The constructor should close and reopen STDERR
if and only if STDERR is not connected with class variable $Stderr.
It's a class which is used in a persistent runtime environment
(CGI::SpeedyCGI) and knows nothing about the outside world, especially
nothing about the state of the global STDERR.
Frank
--
Dipl.-Inform. Frank Seitz; http://www.fseitz.de/
Anwendungen für Ihr Internet und Intranet
Tel: 04103/180301; Fax: -02; Industriestr. 31, 22880 Wedel
------------------------------
Date: Sat, 07 Feb 2009 11:43:56 -0500
From: Uri Guttman <uri@stemsystems.com>
Subject: Re: File handle to "in memory" file
Message-Id: <x7iqnmtcxf.fsf@mail.sysarch.com>
>>>>> "FS" == Frank Seitz <devnull4711@web.de> writes:
FS> Uri Guttman wrote:
FS> I wrote a simple class "StderrCatcher". The constructor
FS> closes STDERR and reopens it as handle to an "in memory" file:
FS> our $Stderr;
no need for our as long as only this module needs to see $Stderr
FS> sub new {
FS> my $class = shift;
FS> close STDERR;
FS> open STDERR,'>',\$Stderr or die;
FS> return bless \$Stderr,$class;
FS> }
FS> With a call to
FS> sub string {
FS> my $self = shift;
FS> return $$self;
FS> }
FS> I get everything the program has written to STDERR since
FS> object instantiation.
and this is important? and you only have one string var but you asked to
find out WHICH var was opened for a handle. we showed you how to
determine that the handle is attached to a scalar.
FS> Now the problem: The constructor should close and reopen STDERR
FS> if and only if STDERR is not connected with class variable $Stderr.
but how could it be connected to any other variable in your code? you
control it. also your current open does a > which supposedly should wipe
out the variable's current data (assuming it acts like > on a file).
if you really need more control use a tied handle (which can be attached
to any object reference including a hash). store in that hash the
reference as well and make a method that accesses it. but i still don't
see the need to know which var is attached to the handle if you only
have one global handle/var.
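a rough sketch of that tied-handle idea (the class name and details here are
illustrative, not from the original posts) -- the tie object is a hash, so it
can carry the buffer and any other bookkeeping, and tied(*STDERR) tells you
whether STDERR has already been redirected:

package StderrCatcher;

sub TIEHANDLE { my ($class) = @_; return bless { buf => '' }, $class }
sub PRINT     { my $self = shift; $self->{buf} .= join '', @_; return 1 }
sub PRINTF    { my $self = shift; my $fmt = shift; $self->{buf} .= sprintf $fmt, @_; return 1 }
sub string    { return $_[0]{buf} }

package main;

tie *STDERR, 'StderrCatcher' unless tied *STDERR;   # redirect only once
print STDERR "whoops\n";                            # collected in the buffer
my $captured = (tied *STDERR)->string;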
there are other ways to trap stderr too. assuming you use warn (not
print STDERR) you can use the warning handler in %SIG and then easily
append all warn output to some var or a log file.
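a small sketch of that %SIG approach (variable names are mine; in real code
you would probably localize the handler):

my $warnings = '';
$SIG{__WARN__} = sub { $warnings .= $_[0] };   # collect instead of printing

warn "something odd happened\n";               # ends up in $warnings, not on STDERR
print "captured: $warnings";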
FS> It's a class which is used in a persistent runtime environment
FS> (CGI::SpeedyCGI) and knows nothing about the outside world, especially
FS> nothing about the state of the global STDERR.
and now the harder question, why are you keeping a stderr log in ram and
not on disk? when are you going to check it? it will be lost when the
daemon restarts.
uri
--
Uri Guttman ------ uri@stemsystems.com -------- http://www.sysarch.com --
----- Perl Code Review , Architecture, Development, Training, Support ------
--------- Free Perl Training --- http://perlhunter.com/college.html ---------
--------- Gourmet Hot Cocoa Mix ---- http://bestfriendscocoa.com ---------
------------------------------
Date: Sat, 7 Feb 2009 19:35:38 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: HTML::TreeBuilder issue
Message-Id: <slrngorl3r.6fk.hjp-usenet2@hrunkner.hjp.at>
On 2009-02-05 21:19, Tad J McClellan <tadmc@seesig.invalid> wrote:
> Dean Karres <dean.karres@gmail.com> wrote:
>> Hi I posted something similar to this over in comp.lang.perl.modules
[...]
> It is often a Good Idea to try and make the smallest program
> possible that will still produce the problem you need help with.
>
> I'll do that much for you at least:
>
> ---------------------------------
> #!/usr/bin/perl
> use warnings;
> use strict;
> use HTML::TreeBuilder;
>
> my $tree = HTML::TreeBuilder->new_from_file(*DATA);
>
> my $body = eval { $tree->look_down('_tag', 'body'); };
> die __LINE__ . ": " . $@ if $@;
>
> die "missing a BODY tag\n" unless $body;
>
> my @bodyElementList = eval { $body->content_list(); };
> die __LINE__ . ": " . $@ if $@;
| $h->content_list()
| Returns a list of the child nodes of this element -- i.e., what
| nodes (elements or text segments) are inside/under this element.
| (Note that this may be an empty list.)
Note: "elements or text segments".
>
>
> foreach my $i ( 0 .. $#bodyElementList )
> {
> warn "i=$i\n";
> my $H2 = eval { $bodyElementList[$i]->look_down('_tag', 'h2'); };
> die __LINE__ . ": " . $@ if $@;
Here the code assumes that all the members of @bodyElementList are
objects which have a method look_down(). But this is only true of
elements - text segments are simple strings, not objects.
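A minimal guard along those lines (mine, not from the quoted program) is to
skip the plain-string text segments and only call look_down() on element
objects:

for my $node ( $body->content_list ) {
    next unless ref $node;                      # text segments are plain strings
    my $h2 = $node->look_down( '_tag', 'h2' );
    # ... work with $h2 here ...
}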
> __DATA__
[...]
> <a href="http://www.usc.edu"> <a name="foo"> <h2>Nav topic 7</h2>
^
Here is the text segment - the blank between
'<a href="http://www.usc.edu">' and '<a name="foo">'.
Tip of the day: Use the perl debugger and/or Data::Dumper.
hp
------------------------------
Date: Sat, 7 Feb 2009 19:06:39 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: OT: Re: Perl Peeves
Message-Id: <slrngorjdh.6fk.hjp-usenet2@hrunkner.hjp.at>
On 2009-02-05 00:24, Bruce Cook <bruce-usenet@noreplybicycle.synonet.comnoreply> wrote:
> Peter J. Holzer wrote:
>
>> On 2009-02-03 14:59, Bruce Cook
>> <bruce-usenet@noreplybicycle.synonet.comnoreply> wrote:
>>> Uri Guttman wrote:
>>>> forcing a specific value to be the one 'true' false (sic :) is a
>>>> waste of time and anal. it is like normalizing for no reason and it
>>>> bloats the code. even the concept of using the result of a boolean
>>>> test for its numeric or string value bothers me. a boolean test
>>>> value (of any kind in any lang) should only be used in a boolean
>>>> context. anything else is a slippery shortcut that makes the code
>>>> more complex and harder to read.
>>>
>>> That's basically where I'm coming from - I have an immediate cringe when
>>> I see the result of a test being used as an int.
>>
>> I find this odd from someone who claims to have an extensive background
>> in assembler and early C programming. After all, in machine code
>> everything is just bits. And early C had inherited quite a bit (no pun
>> intended) from that mindset, if mostly via its typeless predecessors
>> BCPL and B.
>
> It's basically a background thing. As you say everything is just bits. The
> earlier compilers I work with were all 16 bit, and literally everything was
> a 16-bit int, pointers, *everything*
floats and longs weren't, I hope.
> (even when chars were passed into functions, they were passed 16-bit
> (to satisfy even-byte boundary constraints),
char (or more precisely any integral type smaller than int) is promoted
to (unsigned) int when passed to a function without a prototype. This is
still the case in C.
> manipulated 16-bit, you just ignored the top 8-bits of the word in
> certain operations.
In arithmetic expressions, char (or more precisely any integral type
smaller than int) is promoted to (unsigned) int. This is still the case
in C.
> To add to this, the compilers didn't do a lot of sanity checking, the
> compiler just assumed you knew what you were doing and would
> faithfully "just do it".
That's what lint was for, as you note below. If you have only 64 kB
address space, you want to keep your programs small.
> Early compilers didn't have function prototyping (a function prototype
> was a syntax error),
Prototypes were originally a feature of C++ and were introduced into C
with the C89 standard. I think I've used compilers which supported them
before that (gcc, Turbo C, MSC, ...) but it's too long ago for me to be
certain.
> void was a keyword introduced to the language later, so void * was
> unheard of in most code.
About the same time as prototypes, although I don't think I've ever used
a C compiler which didn't support it, while I've used several which
didn't support prototypes.
> This engendered very fast and loose usage of ints for everything. In a lot
> of early code you'd see pointers, chars and all sorts declared as int and
> some truly horrendous coding:
[...]
> Code could have been done properly using unions, however that was work and
> because everyone knew what was really happening in the background why
> bother?
>
> This all came crashing down when we started porting to other platforms,
> which had different architecture driven rules.
As they say, "port early, port often". Thankfully I was exposed to the
VAX and 68k-based systems shortly after starting to program in C, so I
never got into the "I know what the compiler is doing" mindset and
rather quickly got into the "if it isn't documented/specified you cannot
rely on it" mindset.
> It became quite common for a project to have a project-wide header file
> which defined the projects' base datatypes and one of the common ones that
> turned up was:
>
> typedef int bool;
>
> This didn't mean bool was special, declaring it just signaled to the
> programmers that they were dealing with an int that had certain meaning.
That's a good thing if the "certain meaning" is documented and strictly
adhered to.
> In systems programming you would get things like this simplistic example:
>
> bool is_rx_buffer_full (int buffer) {
> ....
> return (qio_buffer_set[buffer]->qio_status & QIO_RX_FLAGS);
> }
So a "bool" isn't a two-valued type - it can take many values. This is
not what I expect from a boolean type.
> note that this function is declared as returning bool, which implies that
> what it returns should only be used in a conditional expression. If you
> tried to use it as an int, you could, but you wouldn't get what you
> expected.
Actually I would get what I expect if I treat your "bool" as an int, but
not what I expect when I treat your "bool" as what I expect from a
boolean type.
Expectations differ.
So documentation is very important and this is (to get back to Perl) why
I criticised that the "true" return value of several operators in Perl
is not documented.
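A small illustration (mine, not from the thread) of why that matters:

use strict;
use warnings;

# comparison operators currently return 1 for true and "" (0 in numeric
# context) for false, but that exact value is the part that is not promised
my $t = (5 == 5);
my $f = (5 == 6);
printf "t=<%s> f=<%s> sum=%d\n", $t, $f, $t + $f;   # t=<1> f=<> sum=1

# logical operators return one of their operands, not a canonical 0/1
my $x = 0 || 'default';    # 'default', not 1
print "x=$x\n";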
> The whole industry hit the portability issue at about the same time. This
> lead to a lot of the modern features of C, including posix, function
> prototypes, a lot of the standard header files, many of the standard
> compiler warnings and of course the C standards. Others decided that C was
> just stupid for portability and created their own language (DEC used BLISS,
> which was an extremely strongly typed language and served them well across
> many very different architectures)
Actually, BLISS is older than C, so it can't have been developed because
people were disappointed by C. Also according to wikipedia, BLISS was
typeless, not "extremely strongly typed".
>>> I find it odd that
>>> normalization of bool results is built into the compiler,
>>
>> What "normalization of bool results is built into the compiler"?
>
> Consider:
> c= (a || b)
>
> as you say, these are just ints like everything else in C.
> Easiest way to compile that on most architectures would be:
>
> mov a,c
> bis c,b ; bis being the '11 OR operator
Not generally, because
* || is defined to be short-circuiting, so it MUST NOT evaluate
b unless a is false.
* a and b need not be integer types.
And of course the result of the operation is defined as being 0 or 1.
I don't see this as "normalisation", because there is no intermediate
step which does a bit-wise or.
c = (a || b)
is semantically exactly equivalent to
c = (a != 0 ? 1
: (b != 0 ? 1
: 0))
It is not equivalent to
c = ((a | b) != 0 ? 1 : 0)
(Of course in some situations an optimizing compiler may determine that
it is equivalent in this specific situation and produce the same code)
>> I find it very useful that operators built into the language returned a
>> defined result. If anything, C has too many "implementation defined" and
>> "undefined" corners for my taste.
>
> Yes, but I think it's also one of the strengths of C. You define your own
> rules to make it fit to your needs for a particular project and as long as
> you're consistent and design those rules properly it all works.
>
> Modern languages try to address these undefined corners, but it often makes
> them difficult to use for some applications.
I strongly disagree with this. The various implementation defined and
undefined features of C don't make it simpler for the application
programmer - on the contrary, they make it harder, sometimes a lot
harder. What they do simplify (and in some cases even make possible) is
to write an efficient compiler for very different platforms.
hp
------------------------------
Date: Sat, 07 Feb 2009 13:55:32 +0000
From: Martin Harris <martinhport-news@yahoo.com>
Subject: Perl Memory and die...what is happening here?
Message-Id: <yNudnSMlPLpJDxDUnZ2dnUVZ8uKWnZ2d@pipex.net>
This program uses up all the memory deliberately... but I have some questions
about what is actually happening...
The first loop will run out of RAM and exit with status 1. Out of Memory is
printed to stderr.
while (1) {
    # Out of Memory when RAM limit is reached, then crash out.
    push @growthing, @growthing;
}
This second loop takes longer to grab the memory, i.e. the growth line is
not so steep.
In addition it grows past RAM into swap.
while (1) {
    # These seem to carry on growing
    # into swap until eventually even these die... why?
    # These are also much slower, why?
    push @growthing, @growthing || die "ouch: $!\n";

    # Similar to the die example
    # push @growthing, @growthing || exit 1;
}
1. Why is the second loop slower to eat the memory?
2. Why does the second loop eat into swap before exiting, where the first loop
exits at the end of RAM?
3. The die never gets executed...presumably because there is no memory to
perform the operation.
4. How can you write perl that checks something will fit into memory before
it tries the operation?
--
MH
------------------------------
Date: Sat, 7 Feb 2009 10:52:32 -0800 (PST)
From: cartercc <cartercc@gmail.com>
Subject: Re: Programmers (UK)...?
Message-Id: <5a5aae4f-b392-401e-803b-75c4d8c9a95d@v4g2000yqa.googlegroups.com>
On Feb 7, 7:37 am, Chris Davies <chris-use...@roaima.co.uk> wrote:
> Looking for advice, please. In a couple of months I *might* be in the
> market for a couple of permanent Perl programmers based out of Leeds,
> UK. The job agencies I've used previously are all telling me that Perl
> is a "legacy" language, and that they'd be surprised if they were able
> to find anyone (!) with strong recent perl experience. (One person even
> told me that "everyone" had migrated to .NET. Very strange.)
This depends on the universe one lives in. In my city (Southeastern
U.S.) there are two large employers of programmers, one an insurance
company and the other a bank holding company/credit card processor. I
know a number of people who work for these companies. Both have a .NET
group and a Java group, with supporting HR people, and seemingly a
hermetic wall between them. Once, I was told that the Java group at
one of these companies was understaffed and looking to beef up. I
spoke to the head HR guy, someone I knew personally in other contexts,
who SWORE that X company didn't do Java and had NOT ONE SINGLE Java
programmer on the payroll.
Perl is a specialized tool used for specialized jobs. I've had some
recent experience programming in .NET, and while it's true that you
can do bunches of things in .NET that you can't do in Perl, it's also
true that you can do bunches of things in Perl that you can't do
in .NET. But of course you already know that.
My advice:
1. Advertise for programmers, Perl or otherwise.
2. Don't fall into the need-to-know trap. A competent programmer who
doesn't know Perl will do you a lot more good in the long run than an
incompetent programmer who does know Perl.
3. Realize that the Perl world has different parts, and that domain
knowledge is just as important as language knowledge if not more so. A
Perl data munger won't help you with Perl sys admin -- hire a sys
admin even if his knowledge of Perl is rudimentary.
4. You should be looking for a competent programmer with domain
knowledge, rather than someone who knows Perl. Ideally you would find
a competent programmer who possesses both domain knowledge and Perl.
CC
PS - I can't help but repeat this. Perl is only a tool, yes, but it's
undoubtedly the very best tool for some jobs. If anyone tells you that
Perl is dead or is a legacy language, what they are REALLY telling you
is that some jobs don't need doing or don't need doing well, and that
someone is ignorant. You can tell this to him in person and say that I
said so.
CC
------------------------------
Date: Sat, 7 Feb 2009 09:13:19 -0600
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: Reducing log size
Message-Id: <slrngor98f.5hv.tadmc@tadmc30.sbcglobal.net>
Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote:
> my $max_file_size = 3000000; # Maximum logfile size
Write it so that you don't get fingerprints on the screen from counting digits:
my $max_file_size = 3_000_000; # Maximum logfile size
> my $size = (stat("$log"))[7]; # Get current logfile size
my $size = -s $log;
See also:
perldoc -q vars
What's wrong with always quoting "$vars"?
> unlink $log;
You should check the return value to see if you actually got
what you asked for...
--
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"
------------------------------
Date: Sat, 7 Feb 2009 18:54:43 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Reducing log size
Message-Id: <j02166-ute.ln1@osiris.mauzo.dyndns.org>
Quoth Cosmic Cruizer <XXjbhuntxx@white-star.com>:
> I searched, but could not find suggestions on reducing the size of log
> files on Windows. Truncate did not work since it keeps the top portion of
> the log. I did not try to use tail.
>
> Now that I wrote something that does what I want... what is a cleaner way
> of doing this?
>
> ### Manage logfile size ###
> sub getLogFile {
Why is this sub called 'getLogFile' when it actually truncates one?
> use File::Copy "move";
I wouldn't use File::Copy for this. You're renaming in the same
directory, so you know rename will work (and if it doesn't, something
weird is going on and you want to know about it).
> my $max_line_count = 50000; # Number of lines for replaced file
> my $max_file_size = 3000000; # Maximum logfile size
> my $size = (stat("$log"))[7]; # Get current logfile size
Tad's already commented on this.
> my $x; # Incrementer for counter
>
> if($size > $max_file_size) {
> # Get the number of lines in the logfile
> open my $in, '<', $log or die "cannot open $log: $!";
> 1 while <$in>;
> my $lines = $.;
> close $in;
It's always a bad idea to open a file twice: you can never be sure
you're getting the same file both times. Learn how to use seek.
Also, unless your logfiles are *really* huge before truncation, it's
almost certainly better to just slurp the whole thing into memory and
then count the newlines:
use File::Slurp qw/slurp/;
my $log = slurp $log;
my $lines = $log =~ tr/\n//;
Then you can use something like
my $newlog = $log =~ /(.*\n){$max_line_count}$/m;
to grab the bit you want to keep. (That won't actually work as written:
50_000 is too big for a {} quantifier. Either split it into two or use
an explicit loop.)
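One way to do the explicit-loop variant (a sketch only; $content stands for
the slurped log text, so as not to reuse $log for two things) is to walk
backwards from the end of the string until $max_line_count newlines have been
seen:

my $pos = length $content;
for (1 .. $max_line_count) {
    if ($pos < 2) { $pos = 0; last }            # hit the start of the string
    my $nl = rindex $content, "\n", $pos - 2;   # newline ending the previous line
    if ($nl < 0) { $pos = 0; last }             # fewer lines than the limit
    $pos = $nl + 1;
}
my $newlog = substr $content, $pos;             # the last $max_line_count lines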
If your log files are too big to hold in memory, I would always do
line-based manipulation by using Tie::File. Tie both old and new
logfiles, and then use splice or an explicit loop to copy the lines.
Tie::File will handle locating and caching the locations of the
newlines.
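A rough sketch of that (file names, the line limit and error handling are
illustrative only, not from the original post):

use Tie::File;
use Fcntl qw(O_RDONLY O_RDWR O_CREAT);

tie my @old, 'Tie::File', $log,       mode => O_RDONLY         or die "tie $log: $!";
tie my @new, 'Tie::File', "TEMP.txt", mode => O_RDWR | O_CREAT or die "tie TEMP.txt: $!";

my $keep = $max_line_count < @old ? $max_line_count : scalar @old;
@new = @old[ $#old - $keep + 1 .. $#old ];   # copy only the last $keep lines

untie @old;
untie @new;
rename "TEMP.txt", $log or die "can't overwrite $log: $!";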
> # Number of preceding lines to ignore (not copy)
> my $target_size = $lines - $max_line_count;
Again, $target_size is a bad name for a variable holding how many lines
to ignore.
> # Open to read old log and write new temp log
> open my $out, '>>', "TEMP.txt" or die "cannot open TEMP.txt: $!";
Why open for append?
> open my $in, '<', $log or die "cannot open $log: $!";
> while (<$in>){
> $x+=1;
> print $out $_ if($x > $target_size);
You know about $. Why aren't you using it here?
> }
> print $out "\n$time_stamp\tFile size truncated\n\n";
> close $in;
> close $out;
>
> # Replace old logfile and clean up temp
> unlink $log;
> move ("TEMP.txt", "$log");
You should just use
rename "TEMP.txt", $log or die "can't overwrite $log: $!";
here. rename will (on most systems) atomically replace $log with
TEMP.txt.
> unlink "TEMP.txt";
If TEMP.txt is still there, something's gone wrong. You should indicate
an error, not just silently remove it.
Ben
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 2190
***************************************