
Perl-Users Digest, Issue: 3995 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sun Jul 21 09:09:30 2013

Date: Sun, 21 Jul 2013 06:09:05 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Sun, 21 Jul 2013     Volume: 11 Number: 3995

Today's topics:
    Re: names, values, boxes and microchips <rweikusat@mssgmbh.com>
    Re: names, values, boxes and microchips <ben@morrow.me.uk>
    Re: names, values, boxes and microchips <rweikusat@mssgmbh.com>
    Re: names, values, boxes and microchips <ben@morrow.me.uk>
    Re: names, values, boxes and microchips <rw@sapphire.mobileactivedefense.com>
    Re: names, values, boxes and microchips gamo@telecable.es
        perl-5.18.0 <nospam.gravitalsun.antispam@spamno.hotmail.anispam.com.nospam>
    Re: perl-5.18.0 <kkeller-usenet@wombat.san-francisco.ca.us>
    Re: Restart Perl Application upon KDE Restart <josef.moellers@invalid.invalid>
    Re: Restart Perl Application upon KDE Restart <ben@morrow.me.uk>
    Re: Restart Perl Application upon KDE Restart (Seymour J.)
    Re: the fastest way to create a directory <hjp-usenet3@hjp.at>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Fri, 19 Jul 2013 19:28:40 +0100
From: Rainer Weikusat <rweikusat@mssgmbh.com>
Subject: Re: names, values, boxes and microchips
Message-Id: <874nbq5rqf.fsf@sapphire.mobileactivedefense.com>

Rainer Weikusat <rweikusat@mssgmbh.com> writes:

> Ben Morrow <ben@morrow.me.uk> writes:
>> Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
>
> [...]
>
>>> sub filter_against
>
> [...]
>
>> There are times when it's necessary to write in that sort of state-
>> machine style, but it's always clearer to avoid it when you can.
>>
>>     use List::MoreUtils qw/part/;
>>
>>     my %rc = (
>>         R_AFTER     => 0,
>>         R_SAME      => 1,
>>         R_SUPER     => 1,
>>         R_BEFORE    => 2,
>>         R_SUB       => 2,
>>     );
>>
>>     sub filter_against {
>>         my ($in, $filter) = @_;
>>
>>         my @out;
>>         for my $f (@$filter) {
>>             (my $out, my $drop, $in) = part {
>>                 my ($rc) = $_->net_compare($f);
>>                 $rc{$rc} // die "bad return from net_compare";
>>             } @$in;
>>
>>             push @out, @$out;
>>             p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
>>                 for @$drop;
>>         }
>>
>>         return \@out;
>>     }
>>
>> This does more compares than yours, but given small-lists-infrequently-
>> compared I doubt that matters.

[...]

> But I'm not really in the mood to figure out if this
> hypercomplicated attempt at reverse dicksizing

[...]

> is semantically equivalent to the fairly straight-forward list merge
> based algorithm I'm using.

I've nevertheless been curious enough that I spent some time thinking
about this while walking back to my flat after a trip to the
supermarket: Provided I understand this correctly, this is basically
the bubblesort algorithm: It compares each element of the filter list
to all elements on the input list in turn, moving the input elements
which are in front of the current filter item to a temporary output
list and the others onto the input list for the next iteration. It
then does a 2nd pass through the temporary output list in order to
move the elements on there to the real output list and a second pass
through the 'drop' list in order to print the tracing
information. These two fixup steps are necessary because the 'part'
routine is really totally unsuitable for this particular task because
it always creates a bunch of new anonymous arrays to store its output.

Possibly, there's an even worse way to perform what is essentially a
merge operation of two sorted lists but if the idea was to produce the
most inefficient implementation of this, the code above is certainly a
promising contender for the 'baddest of the worst' title.

"It must 're-use' code someone else already wrote" is not a sensible,
overriding concern for creating software

Joke I invented a while ago: imagine software companies constructed
lorries; what would they look like?

Version 1: Like a regular lorry except that the wheels aren't round
but elliptical because the ellipsoids were left-over from a previous
project.

Version 1.5: Like a lorry with elliptical wheels but it has an
additional jet engine on the back to achieve Superior(!) speed when
compared with the lorries built by competitors using round wheels.

Version 2: Comes with an advanced AI auxiliary steering system
supposed to enable smooth movement despite the combined effects of the
jet engine and the elliptical wheels.

And so forth. The only thing which is never going to happen is that
someone goes back to step one and redesigns the autobody to
accommodate round wheels.


------------------------------

Date: Sat, 20 Jul 2013 00:11:13 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: names, values, boxes and microchips
Message-Id: <hltoba-vq7.ln1@anubis.morrow.me.uk>


Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
> Rainer Weikusat <rweikusat@mssgmbh.com> writes:
> > Ben Morrow <ben@morrow.me.uk> writes:
> >>
> >> There are times when it's necessary to write in that sort of state-
> >> machine style, but it's always clearer to avoid it when you can.
> >>
> >>     use List::MoreUtils qw/part/;
> >>
> >>     my %rc = (
> >>         R_AFTER     => 0,
> >>         R_SAME      => 1,
> >>         R_SUPER     => 1,
> >>         R_BEFORE    => 2,
> >>         R_SUB       => 2,
> >>     );
> >>
> >>     sub filter_against {
> >>         my ($in, $filter) = @_;
> >>
> >>         my @out;
> >>         for my $f (@$filter) {
> >>             (my $out, my $drop, $in) = part {
> >>                 my ($rc) = $_->net_compare($f);
> >>                 $rc{$rc} // die "bad return from net_compare";
> >>             } @$in;
> >>
> >>             push @out, @$out;
> >>             p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
> >>                 for @$drop;
> >>         }
> >>
> >>         return \@out;
> >>     }
> >>
> >> This does more compares than yours, but given small-lists-infrequently-
> >> compared I doubt that matters.
> 
> > But I'm not really in the mood to figure out if this
> > hypercomplicated attempt at reverse dicksizing

There's really no need to be rude.

> > is semantically equivalent to the fairly straight-forward list merge
> > based algorithm I'm using.
> 
> I've nevertheless been curious enough that I spent some time thinking
> about this while walking back to my flat after a trip to the
> supermarket: Provided I understand this correctly, this is basically
> the bubblesort algorithm: It compares each element of the filter list
> to all elements on the input list in turn, moving the input elements
> which are in front of the current filter item to a temporary output
> list and the others onto the input list for the next iteration.

It's nowhere near as bad as bubblesort, though I think it probably does
still have O(n^2) complexity. (I admit to being somewhat ignorant of the
theory of limiting complexity; I studied it once upon a time, but I've
never found it useful in practice, given that there are generally
already-existing implementations of most important algorithms, or at
least of all the algorithms I've ever needed.)

The only extra work it does over your algorithm is that it runs all the
way to the end of @$in every time. As I said, if this matters then
eliminating it with firstidx and splice would not be difficult, just a
bit messy. (I don't like dealing with array indices in Perl. It feels
like writing C.)

> It
> then does a 2nd pass through the temporary output list in order to
> move the elements on there to the real output list and a second pass
> through the 'drop' list in order to print the tracing
> information.

Is this important? We're talking about an array of, what, a few dozen
entries at most?

Copying lists (and strings) is what Perl does. If that level of
microoptimisation is necessary, you should probably be writing in C.

> These two fixup steps are necessary because the 'part'
> routine is really totally unsuitable for this particular task because
> it always creates a bunch of new anonymous arrays to store its output.

Yeah, I don't much like the part interface either, though it's hard to
see how to do it better. What it wants, really, is some sort of 'array
slice' entity, the list equivalent of the lvalue returned by substr, but
Perl doesn't have that.
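For reference, the substr lvalue mentioned above is core Perl; an
assignable 'array slice' entity would be its list counterpart:

```perl
use strict;
use warnings;

# substr() used as an lvalue: the returned entity designates a region
# of the string, and assigning to it replaces that region in place.
my $s = "hello world";
substr($s, 0, 5) = "howdy";
# $s is now "howdy world"

# 4-argument substr performs the same replacement without the lvalue:
my $t = "hello world";
substr($t, 6, 5, "there");
# $t is now "hello there"
```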

> Possibly, there's an even worse way to perform what is essentially a
> merge operation of two sorted lists but if the idea was to produce the
> most inefficient implementation of this, the code above is certainly a
> promising contender for the 'baddest of the worst' title.

Who cares how efficient it is? You said yourself this is run
infrequently over short lists; it's far more important that the
algorithm be simple and the code be comprehensible than that it run as
fast as possible.

Your example may have been 'simple' from a theoretical standpoint, but
as a piece of Perl it's a nigh-incomprehensible mess. (Actually it looks
like a piece of C badly translated into Perl, and not only because of
__func__.) The most important conceptual part of the algorithm--that
each entry in the filter list divides the input list into three
sections, those before, those matched, and those after--was completely
obscured by callbacks and globals. (I consider those state variables to
be 'globals', in the sense that they are modified in non-obvious ways.)

I suspect it would not be *that* difficult to implement exactly the
algorithm you used, comprehensibly; but I didn't think it was worth the
time. You can spend hours carefully crafting a terribly clever solution
noone else can read, or you can write a simple, straightforward,
somewhat stupid and probably not terribly efficient implementation, and
see if it's good enough to get the job done. Most of the time it will
be. ('When in doubt, use brute force.')

> "It must 're-use' code someone else already wrote" is not a sensible,
> overriding concern for creating software

I used part because the first step in the algorithm is 'divide this list
into three parts'.

> Joke I invented a while ago: imagine software companies constructed
> lorries; what would they look like?
> 
> Version 1: Like a regular lorry except that the wheels aren't round
> but elliptical because the ellipsoids were left-over from a previous
> project.
> 
> Version 1.5: Like a lorry with elliptical wheels but it has an
> additional jet engine on the back to achieve Superior(!) speed when
> compared with the lorries built by competitors using round wheels.
> 
> Version 2: Comes with an advanced AI auxiliary steering system
> supposed to enable smooth movement despite the combined effects of the
> jet engine and the elliptical wheels.
> 
> And so forth. The only thing which is never going to happen is that
> someone goes back to step one and redesigns the autobody to
> accommodate round wheels.

Welcome to Perl 5. The redesign effort is called 'Perl 6', and is
--> thataway.

Ben



------------------------------

Date: Sat, 20 Jul 2013 15:02:19 +0100
From: Rainer Weikusat <rweikusat@mssgmbh.com>
Subject: Re: names, values, boxes and microchips
Message-Id: <87ehatpbx0.fsf@sapphire.mobileactivedefense.com>

Ben Morrow <ben@morrow.me.uk> writes:
> Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
>> Rainer Weikusat <rweikusat@mssgmbh.com> writes:
>> > Ben Morrow <ben@morrow.me.uk> writes:
>> >>
>> >> There are times when it's necessary to write in that sort of state-
>> >> machine style, but it's always clearer to avoid it when you can.
>> >>
>> >>     use List::MoreUtils qw/part/;
>> >>
>> >>     my %rc = (
>> >>         R_AFTER     => 0,
>> >>         R_SAME      => 1,
>> >>         R_SUPER     => 1,
>> >>         R_BEFORE    => 2,
>> >>         R_SUB       => 2,
>> >>     );
>> >>
>> >>     sub filter_against {
>> >>         my ($in, $filter) = @_;
>> >>
>> >>         my @out;
>> >>         for my $f (@$filter) {
>> >>             (my $out, my $drop, $in) = part {
>> >>                 my ($rc) = $_->net_compare($f);
>> >>                 $rc{$rc} // die "bad return from net_compare";
>> >>             } @$in;
>> >>
>> >>             push @out, @$out;
>> >>             p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
>> >>                 for @$drop;
>> >>         }
>> >>
>> >>         return \@out;
>> >>     }
>> >>
>> >> This does more compares than yours, but given small-lists-infrequently-
>> >> compared I doubt that matters.
>> 
>> > But I'm not really in the mood to figure out if this
>> > hypercomplicated attempt at reverse dicksizing
>
> There's really no need to be rude.

Indeed. In particular, there was no need to ignore the reason why I
posted this example and produce this absolutely atrocious
demonstration that some people are really too clever for their own
good instead. That was not only rude but - as a sort of reverse
categorical imperative - it could be regarded as morally repellent as
well.

[...]

>> Possibly, there's an even worse way to perform what is essentially a
>> merge operation of two sorted lists but if the idea was to produce the
>> most inefficient implementation of this, the code above is certainly a
>> promising contender for the 'baddest of the worst' title.
>
> Who cares how efficient it is? You said yourself this is run
> infrequently over short lists; it's far more important that the
> algorithm be simple and the code be comprehensible than that it run as
> fast as possible.

There's no point in deliberately using algorithms which are badly
suited for a certain task, especially when said 'badly suited'
algorithms rely on outright bizarre 'homegrown' abuses of
pre-existing functions instead of simple and well-known 'standard
operations' like merging two sorted lists. That's thoroughly
'undergraduate' stuff and one can expect people to be sufficiently
familiar with it that they refrain from totally outlandish flights
of fancy of this kind.

In any case, if you feel like disputing the results of some sixty
years (probably) of research into elementary sorting algorithms based
on NIH and "But I don't care for that!" please write a
book. Suggestion for a suitable cover:

http://cheezburger.com/4354656256




------------------------------

Date: Sat, 20 Jul 2013 16:45:54 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: names, values, boxes and microchips
Message-Id: <iunqba-h6o.ln1@anubis.morrow.me.uk>


Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
> Ben Morrow <ben@morrow.me.uk> writes:
> > Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
> >> Rainer Weikusat <rweikusat@mssgmbh.com> writes:
> >> > Ben Morrow <ben@morrow.me.uk> writes:
> >> >>
> >> >> There are times when it's necessary to write in that sort of state-
> >> >> machine style, but it's always clearer to avoid it when you can.
[...]
> >> >> This does more compares than yours, but given small-lists-infrequently-
> >> >> compared I doubt that matters.
> >> 
> >> > But I'm not really in the mood to figure out if this
> >> > hypercomplicated attempt at reverse dicksizing
> >
> > There's really no need to be rude.
> 
> Indeed. In particular, there was no need to ignore the reason why I
> posted this example

That reason is still unclear, at least to me. Possibly I'm being stupid,
but if so it is unintentional. Your stated reason was something to do
with the relative merits of declaring variables as they are needed and
declaring them all at the top of a sub; AFAICS all your code
demonstrates in that regard is that declaring variables before you need
them and then using them in nonobvious ways produces extremely confusing
code. Presumably you intended it to demonstrate something else.

> and produce this absolutely atrocious
> demonstration that some people are really too clever for their own
> good instead.

I would appreciate it if you could explain what it is, exactly, about my
code that you find atrocious. If it is simply that it uses a slightly
less efficient algorithm, I have already explained that I don't think
that matters in this case, and that if it does I'm fairly sure it could
be fixed without making the code unreadable.

> That was not only rude but - as a sort of reverse
> categorical imperative - it could be regarded as morally repellent as
> well.

You posted code. I posted code. You responded with vulgarity and
insults.

> >> Possibly, there's an even worse way to perform what is essentially a
> >> merge operation of two sorted lists but if the idea was to produce the
> >> most inefficient implementation of this, the code above is certainly a
> >> promising contender for the 'baddest of the worst' title.
> >
> > Who cares how efficient it is? You said yourself this is run
> > infrequently over short lists; it's far more important that the
> > algorithm be simple and the code be comprehensible than that it run as
> > fast as possible.
> 
> There's no point in deliberately using algorithms which are badly
> suited for certain task,

I was not criticising your algorithm: it's entirely appropriate. I was
criticising your implementation of that algorithm, which is baroque and
incomprehensible.

> especially when said 'badly suited'
> algorithms rely on outright bizarre 'homegrown' abuses of
> pre-existing functions instead of simple and well-known 'standard
> operations' like merging two sorted lists. That's thoroughly
> 'undergraduate' stuff and one can expect people to be sufficiently
> familiar with it that they refrain from totally outlandish flights
> of fancy of this kind.

Since I have no degree and have received very little formal instruction
in Computer Science, you must forgive me if I don't look at a piece of
code and say immediately 'oh, that's just merge sort'. That may be a
useful skill when reading C or Lisp or other languages where it's common
to reimplement standard algorithms for fun; in Perl, if we want merge
sort we use 'sort'.

> In any case, if you feel like disputing the results of some sixty
> years (probably) of research into elementary sorting algorithms based

I was not criticising the algorithm, I was criticising the
implementation.

> on NIH and "But I don't care for that!" please write a
> book. Suggestion for a suitable cover:
> 
> http://cheezburger.com/4354656256

Now you're just being rude again.

Ben



------------------------------

Date: Sun, 21 Jul 2013 11:15:08 +0100
From: Rainer Weikusat <rw@sapphire.mobileactivedefense.com>
Subject: Re: names, values, boxes and microchips
Message-Id: <87r4es5idv.fsf@sapphire.mobileactivedefense.com>

Rainer Weikusat <rweikusat@mssgmbh.com> writes:
> Rainer Weikusat <rweikusat@mssgmbh.com> writes:
>
>> Ben Morrow <ben@morrow.me.uk> writes:

[...]

>>>     use List::MoreUtils qw/part/;
>>>
>>>     my %rc = (
>>>         R_AFTER     => 0,
>>>         R_SAME      => 1,
>>>         R_SUPER     => 1,
>>>         R_BEFORE    => 2,
>>>         R_SUB       => 2,
>>>     );
>>>
>>>     sub filter_against {
>>>         my ($in, $filter) = @_;
>>>
>>>         my @out;
>>>         for my $f (@$filter) {
>>>             (my $out, my $drop, $in) = part {
>>>                 my ($rc) = $_->net_compare($f);
>>>                 $rc{$rc} // die "bad return from net_compare";
>>>             } @$in;
>>>
>>>             push @out, @$out;
>>>             p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
>>>                 for @$drop;
>>>         }
>>>
>>>         return \@out;
>>>     }
>>>
>>> This does more compares than yours, but given small-lists-infrequently-
>>> compared I doubt that matters.
>
> [...]
>
>> But I'm not really in the mood to figure out if this
>> hypercomplicated attempt at reverse dicksizing
>
> [...]
>
>> is semantically equivalent to the fairly straight-forward list merge
>> based algorithm I'm using.
>
> I've nevertheless been curious enough that I spent some time thinking
> about this while walking back to my flat after a trip to the
> supermarket: Provided I understand this correctly, this is basically
> the bubblesort algorithm:

I've used the catchup function of my newsreader to throw everything in
this group away since my less-than-reasoned two postings of
yesterday. While I've grown somewhat more accustomed to ordinary human
meanness, the amount of totally irrational[*] vitriolic rage some
people are capable of when being confronted with concepts alien to
them (such as using 'lexically scoped subroutines' with meaningful
names instead of duplicated code) is still a little bit too much for
me. But there are two points I'd like to clear up a little.

1. 'Bubblesort': This is not, in fact, the bubblesort algorithm but an
insertion sort variant, somewhat obscured by the need to work around
behaviour built into the 'part' subroutine which isn't really useful
for this case: it works by searching for the place where the item on
the filter list would have to be inserted into the other list. A
somewhat less contorted implementation could look like this
(uncompiled/untested) example:

my ($f, $i, $next, @out);

for $f (@$filter) {
	$next = [];

	for $i (@$in) {
		given ($i->net_compare($f)) {
			when (R_AFTER) {
				push(@out, $i);
			}

			when ([R_BEFORE, R_SUB]) {
				push(@$next, $i);
			}
		}
	}

	$in = $next;
}

This is still rather bizarre, given that both input lists are sorted
and free of items which are a subset of other items on the list,
because there's no reason to continue comparing $f with elements of
@$in after its place in the list has been found, ie, after a
comparison either returned '$f is in front of this' (R_BEFORE) or '$f
is a subset of this' (R_SUB), since the result of all remaining
invocations will be R_BEFORE. That's a direct result of the
unfortunate choice to use a general-purpose 'partition a list' routine
for the inner loop: that's not quite what is really needed for an
insertion sort, but 'clarity of the implementation be damned
('clarity' in the sense that all operations which are part of the
algorithm are actually useful steps wrt accomplishing the desired
end), we can Cleverly(!) save two lines of code here' ...

This is really, seriously bad for writing something which can be
maintained by someone other than the original author because nobody
except him knows which parts of the implemented algorithm are
functional and which are accidental --- this will need to be
rediscovered by everyone who is tasked with making sense of this code
(which includes the original author after a sufficient amount of time
has passed).
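For comparison, the merge of two sorted lists this code is supposed to
perform can be sketched self-contained; plain numbers and an
illustrative compare() stand in here for the network objects and
net_compare of the original, and the constants are likewise stand-ins:

```perl
use strict;
use warnings;

# Illustrative stand-ins for the R_* comparison results.
use constant { R_AFTER => 0, R_MATCH => 1, R_BEFORE => 2 };

# Compare an input item against a filter item; plain numbers stand in
# for the network objects of the original code.
sub compare {
    my ($i, $f) = @_;
    return $i < $f ? R_AFTER : $i == $f ? R_MATCH : R_BEFORE;
}

# Single-pass merge: both lists are sorted, so each element of either
# list is visited at most once, and comparing stops as soon as the
# filter item's place has been found.
sub filter_against {
    my ($in, $filter) = @_;
    my (@out, @drop);
    my ($i, $f) = (0, 0);

    while ($i < @$in && $f < @$filter) {
        my $rc = compare($in->[$i], $filter->[$f]);
        if    ($rc == R_AFTER) { push @out,  $in->[$i++] }
        elsif ($rc == R_MATCH) { push @drop, $in->[$i++] }
        else                   { ++$f }   # R_BEFORE: advance the filter
    }
    push @out, @{$in}[$i .. $#$in];   # rest of input is past the filter

    return (\@out, \@drop);
}

my ($out, $drop) = filter_against([1, 2, 3, 4, 5], [2, 4]);
# @$out is (1, 3, 5), @$drop is (2, 4)
```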

2. __func__: That's a feature of C which Perl unfortunately lacks,
namely, a 'special expression' containing the name of the current
subroutine. This is useful for diagnostic messages because it enables
someone reading through the debug output to identify the code which
produces a particular 'diagnostic statement' more easily than by
grepping for the string. The equivalent Perl expression is

(caller(1))[3]

when invoked as an actual subroutine or

(caller(0))[3]

when the expression is 'inlined' instead. __func__ is defined as a
subroutine,

sub __func__()
{
    return (caller(1))[3];
}

to make the perl compiler happy when being run for syntax/strictness
checking and warnings alone. Prior to installation (into a debian/
directory, actually), all code files are preprocessed via m4 -P using
an 'init file' containing

m4_define(`__func__',   `(caller(0))[3]')

m4_changequote(`', `')

because this code quickly prints tens of thousands of messages for
'large' installations (several thousand users) and they're necessary
for identifying and fixing errors in it.

[*] Eg, referring to use of an O(n) algorithm for merging two sorted
lists as 'pointless microoptimization which would be better
accomplished by using C instead of Perl' --- that's not 'an
optimization' aka 'attempt to cover the well after the
baby drowned' at all but the obvious 'textbook' choice.


------------------------------

Date: Sun, 21 Jul 2013 12:47:55 +0000 (UTC)
From: gamo@telecable.es
Subject: Re: names, values, boxes and microchips
Message-Id: <ksgl9r$tdn$1@speranza.aioe.org>

2. __func__: That's a feature of C Perl unfortunately lacks, namely, a
'special expression' containing the name of the current
subroutine. This is useful for diagnostic messages because it enables
------------------

There is a new function that could be what you want: 

>perldoc -f __SUB__

       __SUB__ A special token that returns a reference to the current
               subroutine, or "undef" outside of a subroutine.

               The behaviour of "__SUB__" within a regex code block (such as
               "/(?{...})/") is subject to change.

               This token is only available under "use v5.16" or the
               "current_sub" feature.  See feature.
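Note that __SUB__ yields a code reference rather than a name, so it is
not a drop-in replacement for a C-style __func__; what it does enable
is anonymous recursion. A minimal sketch (requires perl 5.16 or
later):

```perl
use v5.16;            # enables the current_sub feature (and say)
use strict;
use warnings;

# __SUB__ returns a reference to the currently running subroutine,
# so an anonymous sub can recurse without being named.
my $fact = sub {
    my ($n) = @_;
    return $n <= 1 ? 1 : $n * __SUB__->($n - 1);
};

say $fact->(5);       # 120
```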

Best regards,




------------------------------

Date: Sun, 21 Jul 2013 00:07:12 +0300
From: "George Mpouras" <nospam.gravitalsun.antispam@spamno.hotmail.anispam.com.nospam>
Subject: perl-5.18.0
Message-Id: <kseu6f$1jub$1@news.ntua.gr>

Where is the release/changelog of perl-5.18.0 ( 
http://www.perl.org/get.html ) 



------------------------------

Date: Sat, 20 Jul 2013 15:29:15 -0700
From: Keith Keller <kkeller-usenet@wombat.san-francisco.ca.us>
Subject: Re: perl-5.18.0
Message-Id: <rifrbax8f3.ln2@goaway.wombat.san-francisco.ca.us>

On 2013-07-20, George Mpouras <nospam.gravitalsun.antispam@spamno.hotmail.anispam.com.nospam> wrote:
> Where is the release/changelog of perl-5.18.0 ( 
> http://www.perl.org/get.html ) 

http://search.cpan.org/~rjbs/perl-5.18.0/pod/perldelta.pod

--keith

-- 
kkeller-usenet@wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information



------------------------------

Date: Fri, 19 Jul 2013 23:00:59 +0200
From: Josef Moellers <josef.moellers@invalid.invalid>
Subject: Re: Restart Perl Application upon KDE Restart
Message-Id: <b4tnobF2b62U1@mid.individual.net>

On 07/18/2013 10:46 PM, Henry Law wrote:
> On 18/07/13 21:13, George Mpouras wrote:
>> define you script here
>>
>>     /etc/rc.local
> 
> George, I don't think that's what the OP wants.  If you read the post
> you'll see
> 
>> e.g. when I exit without closing firefox,
>> thunderbird, kate, they are all restarted
> 
> So I think what's required is some way to register a Perl application
> with KDE such that if it is shut down ungracefully (i.e. by the system
> rather than the user) then KDE will know to restart it.

Exactly. I *do* know that it's somehow ksmserver's responsibility and
the applications that are running (and would therefore be eligible for
restart) are recorded in said ~/.kde/share/config/ksmserverrc, but I
doubt that I can just modify that file without telling ksmserver!
> 
> It seems that he also wants some "snapshot" capability which will allow
> KDE to apply some logic to the restart; in the case of the browser this
> means opening the pages that were open at the point of shutdown.  What
> that might mean in the case of the Perl application I couldn't say.

I think it would be OK to just restart the application. Restoring the
application's internal state might be the application's responsibility,
e.g. by storing it in an rc file in the user's home directory.

> That said, I have no idea what the answer is.

Thanks anyway.

Josef



------------------------------

Date: Fri, 19 Jul 2013 22:44:03 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Restart Perl Application upon KDE Restart
Message-Id: <3iooba-s77.ln1@anubis.morrow.me.uk>


Quoth Josef Moellers <josef.moellers@invalid.invalid>:
> On 07/18/2013 10:46 PM, Henry Law wrote:
> > On 18/07/13 21:13, George Mpouras wrote:
> >> define you script here
> >>
> >>     /etc/rc.local
> > 
> > George, I don't think that's what the OP wants.  If you read the post
> > you'll see
> > 
> >> e.g. when I exit without closing firefox,
> >> thunderbird, kate, they are all restarted
> > 
> > So I think what's required is some way to register a Perl application
> > with KDE such that if it is shut down ungracefully (i.e. by the system
> > rather than the user) then KDE will know to restart it.
> 
> Exactly. I *do* know that it's somehow ksmserver's responsibility and
> the applications that are running (and would therefore be eligible for
> restart) are recorded in said ~/.kde/share/config/ksmserverrc, but I
> doubt that I can just modify that file without telling ksmserver!

ksmserver appears to talk the standard X session management protocol,
implemented in -lSM. I can't see a Perl binding to this, so you would
need to either 1. write your own XS binding (probably including a -lICE
binding as well), 2. reimplement the protocol in Perl or 3. use PerlQt
and create a QtApplication just for the sake of registering with the
session manager.

See also:

https://projects.kde.org/projects/kde/kde-workspace/repository/revisions/
    master/entry/ksmserver/README
ftp://ftp.x.org/pub/X11R7.0/doc/PDF/xsmp.pdf
http://search.cpan.org/~cburel/Qt-0.96.0/
http://qt-project.org/doc/qt-4.8/session.html
xsm(1)

The X distribution doesn't appear to provide any documentation for -lSM
or -lICE, or at least not any I can find.

Ben



------------------------------

Date: Sat, 20 Jul 2013 23:35:57 -0400
From: Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid>
Subject: Re: Restart Perl Application upon KDE Restart
Message-Id: <51eb571d$10$fuzhry+tra$mr2ice@news.patriot.net>

In <ks9i9d$1bf8$1@news.ntua.gr>, on 07/18/2013
   at 11:13 PM, "George Mpouras"
<nospam.gravitalsun.antispam@spamno.hotmail.anispam.com.nospam> said:

>Content-Type: text/plain;
>	format=flowed;
>	charset="iso-8859-1";
>	reply-type=original
>Content-Transfer-Encoding: 7bit

!

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org



------------------------------

Date: Fri, 19 Jul 2013 22:11:44 +0200
From: "Peter J. Holzer" <hjp-usenet3@hjp.at>
Subject: Re: the fastest way to create a directory
Message-Id: <slrnkuj7c0.qf.hjp-usenet3@hrunkner.hjp.at>

On 2013-07-19 15:21, Ben Morrow <ben@morrow.me.uk> wrote:
> Quoth "Peter J. Holzer" <hjp-usenet3@hjp.at>:
>> Each successful mkdir call will cause at least 5 disk accesses on a
>> typical Linux file system: 1 for the journal, 2 for the inode and
>> content of the parent directory and 2 for the inode and content of the
>> new directory (Oh, I forgot the bitmaps, add another 2 or 4 ...). These
>> will happen *after* mkdir returns because of the writeback cache, and
>> the kernel will almost certainly succeed in coalescing at least some and
>> maybe many of those writes, but if you create a lot of directories
>> (George wrote about "thousands", in my tests I created about 150000)
>> these writes will eventually dominate.
>
> Since people've been quoting Henry Spencer:
>
>     10. Thou shalt foreswear, renounce, and abjure the vile heresy which
>     claimeth that ``All the world's a VAX'', and have no commerce with
>     the benighted heathens who cling to this barbarous belief, that the
>     days of thy program may be long even though the days of thy current
>     machine be short.
>
> I believe on my machine (FreeBSD/ZFS) mkdir does a single write to the
> intent log, which is on an SSD. The writes to spinning rust come (a good
> deal) later, and are thoroughly batched.

I did write:

>> These will happen *after* mkdir returns because of the writeback
>> cache, and the kernel will almost certainly succeed in coalescing at
>> least some and maybe many of those writes,

So I was obviously aware of that (Linux isn't that different from BSD in
that regard. Unix has had a write back cache since the very beginning,
although BSD FFS was for some time infamous for synchronous inode updates
(which would hit mkdir hard), but that was solved with soft updates for
FFS long before ZFS came along. Linux ext never did that (by default)).

In many cases the real I/O may come only after the program causing it
has finished. In that case the I/O won't influence the program, but it
may slow down some other program, so completely ignoring it seems like
cheating.  So in my benchmarks I created enough directories that the
kernel had to do real I/O while the program was still running.

It also very much raises the question whether it is worthwhile to
optimize. George's benchmark takes less than half a second to create 5000
directories on my system. The difference between the slowest and the
fastest method is about 0.1 seconds. In relative terms, that's a nice
improvement (~ 25%), but it's still only 0.1 seconds in a program which
creates 5000 directories, and then presumably *does something*
with those directories. Is it worthwhile to spend effort saving that 0.1
seconds which are almost certainly negligible compared to the total time
the program runs? 

Measure, optimize, measure again. Concentrate on the parts where your
program spends most of its time. Look for algorithmic improvements,
not microoptimizations (maybe you don't need those 5000 directories at
all?).


> You've missed an important overhead: the system call itself. This is
> never cheap, and depending on architecture can be very expensive.

I didn't miss it, I deliberately didn't mention it.

I started programming in the 1980s, and I have that "OMG, a system call
is expensive, I must avoid it!!!" gut feeling, too. But on my 1.8GHz
Core2 a call to time(2) takes less than 160 ns, which compares
favourably to about 200 ns for a call to an empty perl sub. 
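Measurements of this kind can be reproduced with the core Benchmark
module; this is a rough sketch whose absolute numbers vary by machine,
and the overhead of the timing loop itself is included in both counts:

```perl
use strict;
use warnings;
use Benchmark qw(countit);

# Count how many iterations of each snippet fit into ~0.2 CPU seconds;
# more iterations means a cheaper per-call cost.
my $syscall  = countit(0.2, sub { my $t = time });
my $perl_sub = do {
    my $empty = sub { };
    countit(0.2, sub { $empty->() });
};

printf "time():    %d iterations\n", $syscall->iters;
printf "empty sub: %d iterations\n", $perl_sub->iters;
```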

Again: Measure, optimize, measure again. Gut feelings are dangerous.

	hp

PS: I have written at least two implementations of mkdir_p myself. I
    don't remember if I wasn't aware of File::Path::make_path at the
    time or if I had a reason to write my own. It doesn't really matter
    - the function is so simple that writing it doesn't take much more
    time than reading the docs.
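For illustration, a minimal mkdir_p of the kind described, using only
core Perl and assuming Unix-style paths (File::Path::make_path remains
the stock answer):

```perl
use strict;
use warnings;

# Create a directory and any missing parents, skipping components
# that already exist: a simplified make_path without error reporting
# lists, for Unix-style '/'-separated paths only.
sub mkdir_p {
    my ($path) = @_;
    my $sofar = '';
    for my $part (split m{/}, $path) {
        $sofar .= "$part/";
        next if $part eq '' or -d $sofar;
        mkdir $sofar or -d $sofar or die "mkdir $sofar: $!";
    }
}
```

Calling mkdir_p("a/b/c") creates a, a/b and a/b/c as needed; the
"-d after mkdir" retry tolerates another process creating the same
component concurrently.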

-- 
   _  | Peter J. Holzer    | Fluch der elektronischen Textverarbeitung:
|_|_) | Sysadmin WSR       | Man feilt solange an seinen Text um, bis
| |   | hjp@hjp.at         | die Satzbestandteile des Satzes nicht mehr
__/   | http://www.hjp.at/ | zusammenpaßt. -- Ralph Babel


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 3995
***************************************

