[32747] in Perl-Users-Digest
Perl-Users Digest, Issue: 4011 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Aug 9 14:09:28 2013
Date: Fri, 9 Aug 2013 11:09:05 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Fri, 9 Aug 2013 Volume: 11 Number: 4011
Today's topics:
Re: fast scan <derykus@gmail.com>
Re: fork it <jblack@nospam.com>
Re: fork it <rweikusat@mssgmbh.com>
Re: fork it <gravitalsun@hotmail.foo>
Re: fork it <ben@morrow.me.uk>
Re: fork it <jblack@nospam.com>
Re: fork it <hjp-usenet3@hjp.at>
Re: fork it <rweikusat@mssgmbh.com>
Re: How to minimize server load when program is run <justin.1303@purestblue.com>
Re: translate human-readable time shorthand <ben@morrow.me.uk>
Re: translate human-readable time shorthand (Tim McDaniel)
Re: translate human-readable time shorthand <gravitalsun@hotmail.foo>
Re: translate human-readable time shorthand <peter@makholm.net>
Re: translate human-readable time shorthand <jimsgibson@gmail.com>
Re: translate human-readable time shorthand (Tim McDaniel)
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Thu, 08 Aug 2013 15:02:26 -0700
From: Charles DeRykus <derykus@gmail.com>
Subject: Re: fast scan
Message-Id: <ku14in$b2k$1@speranza.aioe.org>
On 8/6/2013 11:51 PM, George Mpouras wrote:
> On 7/8/2013 08:17, Charles DeRykus wrote:
>> On 8/6/2013 4:26 PM, Rainer Weikusat wrote:
>>> Charles DeRykus <derykus@gmail.com> writes:
>>>> On 8/6/2013 3:32 AM, Rainer Weikusat wrote:
>>>>> Charles DeRykus <derykus@gmail.com> writes:
>>>>>> On 8/5/2013 12:28 PM, Rainer Weikusat wrote:
>>>>> ...
>>>>>
>>>>> The solution is really simply to rate-limit requests being sent
>>>
>>> ..
>>>
>>>> Overall send-rate is throttled with the configurable parallelism
>>>> setting.
>>>
>>> 'Send 1000 requests as fast as you can, then, do nothing for ten
>>> seconds' is not the same as 'continue sending a request every 0.01s
>>> for 10 seconds': Again simplifying things, 'an ethernet is binary': At
>>> any given time, it is either 'in use' or 'not in use'. The bulk send
>>> means it is 'in use' for a relatively long period of time at the
>>> beginning and will be 'in use' for a similarly long period of time as
>>> soon as the replies start arriving. Otherwise, it will be 'in use' for
>>> many short time periods and 'be available' in between (the same is
>>> true for resources on the sending/ receiving host where it means 'be
>>> available to deal with replies').
>>>
>>
>> Yes, thanks for the clarification. POE auto-sizes various default
>> settings including 'Parallelism' based on OS and other factors. I notice
>> there's also a FAQ strategy to introduce pauses within its pseudo-kernel
>> if the requirement is there. Clearly any approach will require tuning.
>>
>> >>
>> ...
>>
>
>
> can you run the POE stuff while monitoring CPU and network, and report
> your findings?
>
>
I think at this point there are too many moving parts to draw definitive
conclusions about speed, CPU, network congestion, etc.
Benchmarks are always hard to get right. FUD can immobilize you if you
let it. More often, just applying common-sense measures to mitigate any
potential impact is the best course.
Network congestion should always be a concern though, so Rainer's
suggestion of the 0.01s delay is well advised, especially if you're
pinging tens of thousands of hosts across the public internet instead
of a few hundred hosts in a private, less-trafficked area. The POE
pinger mentioned could be tweaked with the 0.01s delay (or even written
to take advantage of POE's own kernel delay call, I think). The other
potential impact would be consuming your own host's resources too
greedily with lots of rapid forking, which is why I suggested POE, as
well as for the advantage of POE's large base of robust templates for
more complicated network apps.
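The rate-limiting idea is simple enough to sketch in plain core Perl,
without POE; send_probe and the host list below are made up, and
Time::HiRes provides the sub-second sleep:

```perl
use strict;
use warnings;
use Time::HiRes qw(sleep time);

# Hypothetical stand-in for the real sender (Net::Ping, a POE event, ...)
sub send_probe { my ($host) = @_; return 1 }

my @hosts = map { "10.0.0.$_" } 1 .. 20;    # made-up host list
my $gap   = 0.01;                           # 0.01s between requests

my $t0 = time;
for my $host (@hosts) {
    send_probe($host);
    sleep $gap;          # keep the wire mostly idle between sends
}
my $elapsed = time - $t0;
printf "sent %d probes in %.2fs\n", scalar @hosts, $elapsed;
```

In POE one would presumably schedule each send via the kernel's delay
call instead of blocking in sleep, but the spacing effect is the same.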
--
Charles DeRykus
------------------------------
Date: Thu, 8 Aug 2013 15:47:48 -0500
From: John Black <jblack@nospam.com>
Subject: Re: fork it
Message-Id: <MPG.2c6dbfee6e05b9e7989786@news.eternal-september.org>
In article <87r4e4fi2d.fsf@vps1.hacking.dk>, peter@makholm.net says...
>
> George Mpouras <nospam.gravitalsun.noadsplease@hotmail.noads.com>
> writes:
>
> > The idea is to finish a "special" job as soon as possible by auto
> > split it and explicitly assign its parts to dedicated cores.
>
> Often it will be hard to reliably split the job into a number of
> chunks that precisely fits with the number of cores available.
>
> Most of the time I split the task into natural chunks and then maintain
> a queue of chunks to be processed. Then I fork a new process for each
> chunk with some code to ensure that I only have $N jobs running at the
> same time.
>
> This scheme is implemented by Parallel::ForkManager available on CPAN.
>
> https://metacpan.org/module/Parallel::ForkManager
>
> I have never cared about pinning a task to a specific CPU. Most of my
> tasks are inherently IO-bound and often running on servers doing other
> work at the same time. Both issues make pinning less important, if
> not outright bad.
>
> //Makholm
Can anyone describe the pros/cons of using fork here instead of the
routines provided in "use threads"?
http://perldoc.perl.org/threads.html
John Black
------------------------------
Date: Thu, 08 Aug 2013 22:10:41 +0100
From: Rainer Weikusat <rweikusat@mssgmbh.com>
Subject: Re: fork it
Message-Id: <87y58b28j2.fsf@sapphire.mobileactivedefense.com>
John Black <jblack@nospam.com> writes:
> In article <87r4e4fi2d.fsf@vps1.hacking.dk>, peter@makholm.net says...
>> George Mpouras <nospam.gravitalsun.noadsplease@hotmail.noads.com>
>> writes:
>>
>> > The idea is to finish a "special" job as soon as possible by auto
>> > split it and explicitly assign its parts to dedicated cores.
>>
>> Often it will be hard to reliably split the job into a number of
>> chunks that precisely fits with the number of cores available.
>>
>> Most of the time I split the task into natural chunks and then maintain
>> a queue of chunks to be processed. Then I fork a new process for each
>> chunk with some code to ensure that I only have $N jobs running at the
>> same time.
>>
>> This scheme is implemented by Parallel::ForkManager available on CPAN.
>>
>> https://metacpan.org/module/Parallel::ForkManager
>>
>> I have never cared about pinning a task to a specific CPU. Most of my
>> tasks are inherently IO-bound and often running on servers doing other
>> work at the same time. Both issues make pinning less important, if
>> not outright bad.
>>
>> //Makholm
>
> Can anyone describe the pros/cons of using fork here instead of the routines provided in "use
> threads"?
>
> http://perldoc.perl.org/threads.html
Perl threading support is based on the Windows 'fork emulation': It
creates a new interpreter for every thread which doesn't share memory
with any other interpreter. This means 'creating a thread' is
expensive (comparable to forking prior to COW because it essentially
'copies the complete process') and 'Perl threads' use a lot of
memory. Also, inter-thread communication is (reportedly) based on tied
variables, another 'slow' mechanism.
------------------------------
Date: Fri, 09 Aug 2013 00:46:57 +0300
From: George Mpouras <gravitalsun@hotmail.foo>
Subject: Re: fork it
Message-Id: <ku13kg$ald$1@news.ntua.gr>
On 8/8/2013 21:42, Charles DeRykus wrote:
> kill(TERM=>$tid) or kill(KILL=>$tid)
What I wrote was only a quick example; your recommendation of
kill(TERM=>$tid) is very good for cases where you want to tell the
process gently to go away.
------------------------------
Date: Thu, 8 Aug 2013 22:44:32 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: fork it
Message-Id: <03gdda-pmd2.ln1@anubis.morrow.me.uk>
Quoth Rainer Weikusat <rweikusat@mssgmbh.com>:
> John Black <jblack@nospam.com> writes:
> >
> > Can anyone describe the pros/cons of using fork here instead of the
> routines provided in "use
> > threads"?
> >
> > http://perldoc.perl.org/threads.html
It's usual, at least on Unix, not to build perl with thread support. It
makes everything slower, even if you're not using threads.
> Perl threading support is based on the Windows 'fork emulation': It
> creates a new interpreter for every thread which doesn't share memory
> with any other interpreter. This means 'creating a thread' is
> expensive (comparable to forking prior to COW because it essentially
> 'copies the complete process') and 'Perl threads' use a lot of
> memory. Also, inter-thread communication is (reportedly) based on tied
> variables, another 'slow' mechanism.
Shared arrays and hashes are tied. Shared scalars use magic, which is
like tying but implemented in C and considerably more efficient. The
original design was for arrays and hashes to use magic as well (they
have a magic type reserved, 'N', which is currently unused), but it
turns out the magic interface isn't sufficient to implement them
correctly.
More importantly, however, because of the limitations of the perl SV
interface (basically, because there are no locks), every variable shared
between N threads exists in N+1 instances. Every write to a shared
variable is first written to the variable in 'this' thread, then copied
to the 'master' instance, and then copied from there to the instances in
every other thread. (I think these downward copies happen on demand, as
the variable is read in each thread, but there's little point sharing a
variable with a thread that isn't going to read it.)
Amazingly, there are (apparently) still situations where threading is
more efficient than forking, even on Unix, though I suspect you have to
know exactly what you're doing to accomplish that.
Ben
------------------------------
Date: Fri, 9 Aug 2013 10:20:30 -0500
From: John Black <jblack@nospam.com>
Subject: Re: fork it
Message-Id: <MPG.2c6ec4b9f25dee33989787@news.eternal-september.org>
In article <87y58b28j2.fsf@sapphire.mobileactivedefense.com>, rweikusat@mssgmbh.com says...
> > Can anyone describe the pros/cons of using fork here instead of the routines provided in "use
> > threads"?
> >
> > http://perldoc.perl.org/threads.html
>
> Perl threading support is based on the Windows 'fork emulation':
Are you saying that "use threads" should not or cannot be used on non-Windows platforms? And
am I to infer that fork cannot be used on Windows platforms?
> It
> creates a new interpreter for every thread which doesn't share memory
> with any other interpreter.
Does fork not have this problem? How would it not since Perl is interpreted?
> This means 'createing a thread' is
> expensive (comparable to forking prior to COW because it essentially
What is COW?
> 'copies the complete process') and 'Perl threads' use a lot of
> memory. Also, inter-thread communication is (reportedly) based on tied
> variables, another 'slow' mechanism.
I've programmed a lot in Perl but never multi-threaded. I have been
wanting to make some of my programs multi-threaded but haven't gotten
around to learning that yet. Ideally, I'd like a general solution that
works for both Windows and non-Windows platforms. Is there a good way
to do multi-threading that is platform independent?
John Black
------------------------------
Date: Fri, 9 Aug 2013 17:54:01 +0200
From: "Peter J. Holzer" <hjp-usenet3@hjp.at>
Subject: Re: fork it
Message-Id: <slrnl0a44r.2ju.hjp-usenet3@hrunkner.hjp.at>
On 2013-08-09 15:20, John Black <jblack@nospam.com> wrote:
> In article <87y58b28j2.fsf@sapphire.mobileactivedefense.com>, rweikusat@mssgmbh.com says...
>> > Can anyone describe the pros/cons of using fork here instead of the
>> > routines provided in "use threads"?
>> >
>> > http://perldoc.perl.org/threads.html
>>
>> Perl threading support is based on the Windows 'fork emulation':
>
> Are you saying that "use threads" should not or cannot be used on
> non-Windows platforms? And am I to infer that fork cannot be used on
> Windows platforms?
The Windows OS doesn't have a fork system call. The perl implementation
on Windows emulates it by cloning the interpreter and starting a new
thread (basically what starting a thread with use threads does).
>> It creates a new interpreter for every thread which doesn't share
>> memory with any other interpreter.
>
> Does fork not have this problem? How would it not since Perl is
> interpreted?
Yes, but the OS can do this much more efficiently than the perl
interpreter can.
Firstly, the OS doesn't actually have to copy the process memory at all.
It can just set all pages to read only, let both processes refer to
those read-only pages and carry on. When one process tries to modify
some data, it can intercept that write and create a writable copy of
that page on the fly ("copy on write" = "COW"). That makes startup of
the new process very fast and it often also saves time over the lifetime
of the process since many pages will never be written to.
Secondly, even if the OS has to (eventually) copy all the process
memory, it can do it blindly via bulk copies, while the perl interpreter
needs to copy each variable individually, manipulate pointers, etc.
(because the copy will live in the same process, i.e. address space).
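From Perl's side of the fence, the effect of fork's COW is simply that
parent and child end up with logically separate copies of everything; a
minimal sketch (the strings are made up; the pipe is only there so the
parent can see the child's copy):

```perl
use strict;
use warnings;

my $x = "original";

pipe(my $rd, my $wr) or die "pipe: $!";
my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {                 # child
    close $rd;
    $x = "changed in child";     # first write: COW gives the child its own page
    print {$wr} $x;
    close $wr;
    exit 0;
}

close $wr;                       # parent
my $from_child = <$rd>;
waitpid $pid, 0;
print "child wrote: $from_child\n";
print "parent sees: $x\n";       # still "original"
```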
>> This means 'creating a thread' is expensive (comparable to forking
>> prior to COW because it essentially
>
> What is COW?
See above.
>> 'copies the complete process') and 'Perl threads' use a lot of
>> memory. Also, inter-thread communication is (reportedly) based on tied
>> variables, another 'slow' mechanism.
>
> I've programmed lots in Perl but never multi-threaded yet. I have
> been wanting to make some of my programs multi-threaded but haven't
> gotten around to learning that yet. Ideally, I'd like a general
> solution that works for both Windows and non-Windows platforms. Is
> there a good way to do multi-threading that is platform independent?
Multithreading is platform-independent; it is just as horrible on Unix
as on Windows. Fork is fast on Unix and slow on Windows (almost always
- as Ben stated, there are some rare situations where threads are
faster).
If you are serious about writing an application which can exploit
parallelism on Unix and Windows, you should probably look at
POE for a high-level framework, ZeroMQ for relatively low-level
building blocks or even the IPC primitives for the DIY approach.
hp
--
_ | Peter J. Holzer | Fluch der elektronischen Textverarbeitung:
|_|_) | Sysadmin WSR | Man feilt solange an seinen Text um, bis
| | | hjp@hjp.at | die Satzbestandteile des Satzes nicht mehr
__/ | http://www.hjp.at/ | zusammenpat. -- Ralph Babel
------------------------------
Date: Fri, 09 Aug 2013 17:16:19 +0100
From: Rainer Weikusat <rweikusat@mssgmbh.com>
Subject: Re: fork it
Message-Id: <87txiyu9f0.fsf@sapphire.mobileactivedefense.com>
John Black <jblack@nospam.com> writes:
> In article <87y58b28j2.fsf@sapphire.mobileactivedefense.com>, rweikusat@mssgmbh.com says...
>> > Can anyone describe the pros/cons of using fork here instead of the routines provided in "use
>> > threads"?
>> >
>> > http://perldoc.perl.org/threads.html
>>
>> Perl threading support is based on the Windows 'fork emulation':
>
> Are you saying that "use threads" should not or cannot be used on
> non-Windows platforms?
Why do you think so?
> And am I to infer that fork cannot be used on Windows platforms?
Fork doesn't exist on VMSish platforms because Digital didn't invent
it. Because of this, fork is emulated on Windows, cf
The fork() emulation is implemented at the level of the Perl
interpreter. What this means in general is that running
fork() will actually clone the running interpreter and all its
state, and run the cloned interpreter in a separate thread,
beginning execution in the new thread just after the point
where the fork() was called in the parent.
[perldoc perlfork]
>> It creates a new interpreter for every thread which doesn't share memory
>> with any other interpreter. This means 'creating a thread' is
>> expensive (comparable to forking prior to COW because it essentially
[sensible ordering restored]
> Does fork not have this problem? How would it not since Perl is interpreted?
> What is COW?
Copy-on-write. That's how fork has usually been implemented since
System V, except on sufficiently ancient BSD-based systems (prior to
4.4BSD), because it wasn't invented in Berkeley, and the guys who
'invented stuff in Berkeley', and thus got their code into the
BSD kernel regardless of any technical merits it might have, agreed
with the Digital tribe in one important aspect: Nobody uses fork for
multiprocessing (to this date, this is probably true for BSD because
it faithfully preserves UNIX(*) V7 'fork failure semantics', IOW, large
processes can't be forked[*]). Consequently, an even remotely efficient
fork which supports actual concurrent execution isn't needed (a
splendid example of a self-fulfilling prophecy). Back to COW: this
means that, by default, parent and child share all 'physical memory'
after a fork, with individual page copies being created as the need
arises. This is beneficial to byte-compiled languages because both
copies can not only share the interpreter code but also all 'read-only
after compilation phase' parts of the interpreter state.
[*] My opinion on a 'system which refuses to work because it is too
afraid of possible future problems' is "I don't want it". YMMV.
>> 'copies the complete process') and 'Perl threads' use a lot of
>> memory. Also, inter-thread communication is (reportedly) based on tied
>> variables, another 'slow' mechanism.
>
> I've programmed lots in Perl but never multi-threaded yet. I have
> been wanting to make some of my programs multi-threaded but haven't
> gotten around to learning that yet. Ideally, I'd like a general
> solution that works for both Windows and non-Windows platforms. Is
> there a good way to do multi-threading that is platform independent?
If you don't mind creating code whose runtime behaviour is even more
'UNIX(*) V7 style' than 4.4BSD, the Perl threading support is the way
to go. If 'predominantly targeting UNIX(*)', I think fork is the
better choice, especially as this will auto-degrade to something which
works on Windows. OTOH, it pretty much won't work anywhere else. But
one gets some 'nice' features in return, such as "multiple threads of
execution which can't crash each other".
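For the fork route, the queue-of-chunks scheme Peter described earlier
can be sketched with nothing but core fork/waitpid (process_chunk and
the chunk list below are stand-ins); Parallel::ForkManager is, roughly
speaking, a convenience wrapper around this pattern:

```perl
use strict;
use warnings;

my $N      = 3;                  # at most 3 children at once
my @chunks = 1 .. 10;            # made-up work queue

sub process_chunk {
    my ($c) = @_;
    select undef, undef, undef, 0.02;   # stand-in for the real work
}

my %kids;
for my $chunk (@chunks) {
    if (keys %kids >= $N) {      # pool full: wait for a slot
        my $done = waitpid -1, 0;
        delete $kids{$done};
    }
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) { process_chunk($chunk); exit 0 }
    $kids{$pid} = $chunk;
}
1 while waitpid(-1, 0) > 0;      # reap the stragglers
print "all chunks done\n";
```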
------------------------------
Date: Fri, 9 Aug 2013 11:02:40 +0100
From: Justin C <justin.1303@purestblue.com>
Subject: Re: How to minimize server load when program is run
Message-Id: <0breda-2pb.ln1@zem.masonsmusic.co.uk>
On 2013-06-13, Justin C <justin.1303@purestblue.com> wrote:
> My web-hosts are running perl 5.8.8, other software there is of a
> similar age, and some things are missing (I wanted to 'nice' my
> program, but there is no 'nice').
>
> I have written a backup program to tar and gzip my entire directory
> tree on their site, and also to dump the db and add that to the tar.
> The program I have written runs one of my cores at 100% for two
> minutes, and uses almost 100MB RAM. If there is a way I'd like to
> reduce this load (as I can't 'nice' it).
[snip]
Apologies for the (very) late follow-up to this. I spent some time
pondering the options, and tried Ben's suggestion of
Archive::Tar::Streamed, but it's not installed there (I did fix my bad
date call, though, thank you Ben). In the end I used bash, and the
program runs in about ten seconds.
I realise that, had I written the program well enough, I might have
got close to that short a time with Perl, but I'm happy with the
bash solution.
Thanks to all who replied, all suggestions were useful.
Justin.
--
Justin C, by the sea.
------------------------------
Date: Thu, 8 Aug 2013 21:43:26 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: translate human-readable time shorthand
Message-Id: <egcdda-46d2.ln1@anubis.morrow.me.uk>
Quoth tmcd@panix.com:
> In article <norada-ai8.ln1@anubis.morrow.me.uk>,
> Ben Morrow <ben@morrow.me.uk> wrote:
> >
> >If you s/m/mn/g; s/(?<!\d)(?=\d)/ /g; then these can be parsed by
> >Date::Manip::Delta.
>
> I am not at all familiar with the fancy-pants newfangled stuff in
'Newfangled'?! (?<!) was added in 5.005, released in 1998. (?=) was
already there at that point, but the perldeltas don't go back further
than 5.004 so I don't know when it appeared.
> regexps like in that second example. To save other people trouble,
> - "m" has to be expressed as "mn" (in Date::Manip::Delta,
> "m" appears to be "month" and "mn" is "minute")
> - Date::Manip::Delta requires space (or comma) before digits.
> Find each place where the character before is not a digit and the
> character following is a digit, and put a space there.
> (Those are zero-width assertions.) I see no reason why it could not
> be expressed, albeit probably with less efficiency, as
> s/(\d+)/ $1/g
Nor do I. I was thinking 'I want to add a space between every non-digit
followed by a digit', so that's what I wrote. I was actually thinking in
terms of '\b for digits', which is why I was using zero-width
assertions.
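Applied to a made-up shorthand string, the two variants look like this:

```perl
use strict;
use warnings;

my $s = "1h30m";                 # hypothetical shorthand input
$s =~ s/m/mn/g;                  # Date::Manip's "m" is month; "mn" is minute
$s =~ s/(?<!\d)(?=\d)/ /g;       # space at every non-digit/digit boundary
print "[$s]\n";                  # -> "[ 1h 30mn]"

my $t = "1h30m";                 # Tim's simpler version, same result
$t =~ s/m/mn/g;
$t =~ s/(\d+)/ $1/g;
die "variants disagree" unless $t eq $s;
```

Both leave a (harmless) leading space, since the boundary before the
first digit also matches.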
Ben
------------------------------
Date: Thu, 8 Aug 2013 21:34:35 +0000 (UTC)
From: tmcd@panix.com (Tim McDaniel)
Subject: Re: translate human-readable time shorthand
Message-Id: <ku12tb$fe$1@reader1.panix.com>
In article <egcdda-46d2.ln1@anubis.morrow.me.uk>,
Ben Morrow <ben@morrow.me.uk> wrote:
>
>Quoth tmcd@panix.com:
>> In article <norada-ai8.ln1@anubis.morrow.me.uk>,
>> Ben Morrow <ben@morrow.me.uk> wrote:
>> >
>> >If you s/m/mn/g; s/(?<!\d)(?=\d)/ /g; then these can be parsed by
>> >Date::Manip::Delta.
>>
>> I am not at all familiar with the fancy-pants newfangled stuff in
>
>'Newfangled'?!
FANCY-PANTS newfangled. 'Tweren't in Perl 4.038.
*ptooo* [ping!]
Seriously, I've never had cause to use a zero-width assertion, or
possibly any other "?" syntax other than the one that keeps (...) from
touching $digit ("(?: ... )"), and that I had to look up just now.
--
Tim McDaniel, tmcd@panix.com
------------------------------
Date: Fri, 09 Aug 2013 00:48:22 +0300
From: George Mpouras <gravitalsun@hotmail.foo>
Subject: Re: translate human-readable time shorthand
Message-Id: <ku13n4$ald$2@news.ntua.gr>
>
> Seriously, I've never had cause to use a zero-width assertion, or
> possibly any other "?" syntax other than the one that keeps (...) from
> touching $digit ("(?: ... )"), and that I had to look up just now.
>
All these are very wise and clever, except ... that the OP is missing!
------------------------------
Date: Fri, 09 Aug 2013 10:01:00 +0200
From: Peter Makholm <peter@makholm.net>
Subject: Re: translate human-readable time shorthand
Message-Id: <87mworfg3n.fsf@vps1.hacking.dk>
Jim Gibson <jimsgibson@gmail.com> writes:
> So we should try to agree that one minute of duration is 60 seconds,
> one hour is 60 minutes, one day is 24 hours, and one week is 7 days,
> regardless of when those periods start or stop.
Depending on the context, I think that '1 day' could mean either a
24-hour duration or the next day at the same (local) time. When the
duration is the primary object I would understand 24 hours, and when
the point in time is the primary object I would understand the same
time the next day.
In some cases I use the same interpretation when I change time zone due
to travel (at least for time zone changes within a few hours). So if I
have a recurring task I have to perform every 2 days, I would default to
doing it at the same time of day according to the local time.
For month and year I think I would make the same distinctions, where the
duration would be some sort of normalized duration (30 days/365 days)
and the point in time would depend on the specific month or year.
A recurring monthly meeting would by default be at the same day of the
month. For a duration of several months I would probably just consider
the difference negligible.
//Makholm
------------------------------
Date: Fri, 09 Aug 2013 09:29:11 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: translate human-readable time shorthand
Message-Id: <090820130929112506%jimsgibson@gmail.com>
In article <ku13n4$ald$2@news.ntua.gr>, George Mpouras
<gravitalsun@hotmail.foo> wrote:
> >
> > Seriously, I've never had cause to use a zero-width assertion, or
> > possibly any other "?" syntax other than the one that keeps (...) from
> > touching $digit ("(?: ... )"), and that I had to look up just now.
> >
>
> all these are very wise and clever except ... that the OP is missing !
The OP moved his question over to comp.lang.perl.modules -- still
looking for a module that would do the conversion for him. He also made
it clear that the context of the question is in the time delays used by
'at', the Unix task scheduling utility, so anything longer than a week
is probably irrelevant, and the interpretation of months and years is
moot.
--
Jim Gibson
------------------------------
Date: Fri, 9 Aug 2013 18:04:29 +0000 (UTC)
From: tmcd@panix.com (Tim McDaniel)
Subject: Re: translate human-readable time shorthand
Message-Id: <ku3avd$cs9$1@reader1.panix.com>
In article <090820130929112506%jimsgibson@gmail.com>,
Jim Gibson <jimsgibson@gmail.com> wrote:
>He also made it clear that the context of the question is in the time
>delays used by 'at', the Unix task scheduling utility
I have the impression that cron (as an analogy) has had extensive work
to make corner cases work, like 1-3 AM on daylight-saving change day,
or other clock adjustments. I mention that only as an example about
how time handling can be tricky and so I would be more inclined to use
a module that shows some signs of handling intervals properly.
--
Tim McDaniel, tmcd@panix.com
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 4011
***************************************