Perl-Users Digest, Issue: 263 Volume: 11
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sun Mar 25 09:10:13 2007
Date: Sun, 25 Mar 2007 06:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Perl-Users Digest Sun, 25 Mar 2007 Volume: 11 Number: 263
Today's topics:
Re: Executing awk from perl script <someone@example.com>
Re: Executing awk from perl script <tintin@invalid.invalid>
new CPAN modules on Sun Mar 25 2007 (Randal Schwartz)
Re: parsing a tab delimited or CSV, but keep the delimiter <tadmc@augustmail.com>
Re: Reading from fixed-length text file <tadmc@augustmail.com>
Re: Replacing characters in file <anony-mouse@hole.in.the.wall.com>
Re: Replacing characters in file <rvtol+news@isolution.nl>
Re: Replacing characters in file <klaus03@gmail.com>
Re: Server/Clients system (Jamie)
Re: Server/Clients system <hjp-usenet2@hjp.at>
Re: time structure without shift <rvtol+news@isolution.nl>
Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Sat, 24 Mar 2007 19:12:36 GMT
From: "John W. Krahn" <someone@example.com>
Subject: Re: Executing awk from perl script
Message-Id: <EKeNh.8193$__3.7547@edtnps90>
AyOut wrote:
> I'm new to perl and have the following line that I try to execute:
>
> ...
> my $hm = `date +%H:%M --date "1 minute ago"`;
> my $fname = `date +AppName_stats.log.%Y-%m-%d --date "1 days ago"`;
>
> my $hits = `find /logs/server_name0?/dir/. -name "$fname*" -exec
> gunzip -c {} ;\ | /grep SearchStr | /bin/awk -F","
> '{if(substr($1,12,5)~hmpoint)print $3}' hmpoint=$hm | /bin/awk '{tot+=
> $1}END{print tot}'`;
>
> When I run this I get all kinds of complaints on the $hits line. I've
> tried various modifications, but can't seem to get it to work. Any
> idea of what's wrong here?
In Perl you would write that as:
use POSIX 'strftime';
use File::Find;
use Compress::Zlib;

my $hm    = strftime '%H:%M', localtime $^T - 60;
my $fname = strftime 'AppName_stats.log.%Y-%m-%d', localtime $^T - 86400;

my $hits;
find sub {
    return unless /\A\Q$fname/;
    my $gz = gzopen( $_, 'rb' ) or die "Cannot open $_: $gzerrno\n";
    while ( $gz->gzreadline( my $line ) > 0 ) {
        if ( $line =~ /SearchStr/ && substr( $line, 11, 5 ) eq $hm ) {
            $hits += ( split /,/, $line )[ 2 ];
        }
    }
    die "Error reading from $_: $gzerrno\n" if $gzerrno != Z_STREAM_END;
    $gz->gzclose();
}, glob '/logs/server_name0?/dir';
> Thanks.
You're welcome.
John
--
Perl isn't a toolbox, but a small machine shop where you can special-order
certain sorts of tools at low cost and in short order. -- Larry Wall
------------------------------
Date: Sun, 25 Mar 2007 22:24:50 +1200
From: Tintin <tintin@invalid.invalid>
Subject: Re: Executing awk from perl script
Message-Id: <46064018$0$16279$88260bb3@free.teranews.com>
AyOut wrote:
> I'm new to perl and have the following line that I try to execute:
>
> ....
> my $hm = `date +%H:%M --date "1 minute ago"`;
> my $fname = `date +AppName_stats.log.%Y-%m-%d --date "1 days ago"`;
>
> my $hits = `find /logs/server_name0?/dir/. -name "$fname*" -exec
> gunzip -c {} ;\ | /grep SearchStr | /bin/awk -F","
> '{if(substr($1,12,5)~hmpoint)print $3}' hmpoint=$hm | /bin/awk '{tot+=
> $1}END{print tot}'`;
>
> When I run this I get all kinds of complaints on the $hits line. I've
> tried various modifications, but can't seem to get it to work. Any
> idea of what's wrong here?
Why bother writing in Perl, if you are essentially writing a shell script?
--
Posted via a free Usenet account from http://www.teranews.com
------------------------------
Date: Sun, 25 Mar 2007 04:42:08 GMT
From: merlyn@stonehenge.com (Randal Schwartz)
Subject: new CPAN modules on Sun Mar 25 2007
Message-Id: <JFFzq8.1I7K@zorch.sf-bay.org>
The following modules have recently been added to or updated in the
Comprehensive Perl Archive Network (CPAN). You can install them using the
instructions in the 'perlmodinstall' page included with your Perl
distribution.
Alien-wxWidgets-0.31
http://search.cpan.org/~mbarbon/Alien-wxWidgets-0.31/
building, finding and using wxWidgets binaries
----
Apache-Status-DBI-v1.0.0
http://search.cpan.org/~timb/Apache-Status-DBI-v1.0.0/
Show status of all DBI database and statement handles
----
Array-Each-Override-0.01
http://search.cpan.org/~arc/Array-Each-Override-0.01/
each for iterating over an array's keys and values
----
Aspect-0.12
http://search.cpan.org/~eilara/Aspect-0.12/
AOP for Perl
----
Astro-MoonPhase-0.60
http://search.cpan.org/~brett/Astro-MoonPhase-0.60/
Information about the phase of the Moon
----
Bio-Grep-v0.5.0
http://search.cpan.org/~limaone/Bio-Grep-v0.5.0/
Perl extension for searching in Fasta files
----
Bio-Phylo-0.16_RC2
http://search.cpan.org/~rvosa/Bio-Phylo-0.16_RC2/
Phylogenetic analysis using perl.
----
CAM-PDF-1.10
http://search.cpan.org/~cdolan/CAM-PDF-1.10/
PDF manipulation library
----
Contextual-Call-0.01
http://search.cpan.org/~hio/Contextual-Call-0.01/
call sub with caller's context
----
Coro-3.55
http://search.cpan.org/~mlehmann/Coro-3.55/
coroutine process abstraction
----
ICal-QuickAdd-0.5_1
http://search.cpan.org/~markstos/ICal-QuickAdd-0.5_1/
----
IP-QQWry-v0.0.12
http://search.cpan.org/~sunnavy/IP-QQWry-v0.0.12/
a simple interface for QQWry IP database(file).
----
IPC-Run3-0.037
http://search.cpan.org/~rjbs/IPC-Run3-0.037/
run a subprocess in batch mode (a la system) on Unix, Win32, etc.
----
InSilicoSpectro-Databanks-0.0.18
http://search.cpan.org/~alexmass/InSilicoSpectro-Databanks-0.0.18/
parsing protein/nucleotides sequence databanks (fasta, uniprot...)
----
JSON-XS-0.5
http://search.cpan.org/~mlehmann/JSON-XS-0.5/
JSON serialising/deserialising, done correctly and fast
----
JSON-XS-0.7
http://search.cpan.org/~mlehmann/JSON-XS-0.7/
JSON serialising/deserialising, done correctly and fast
----
LibA2-0.08
http://search.cpan.org/~cjm/LibA2-0.08/
----
Lingua-EN-Conjugate-0.25
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.25/
Conjugation of English verbs
----
Lingua-EN-Conjugate-0.26
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.26/
Conjugation of English verbs
----
Lingua-EN-Conjugate-0.27
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.27/
Conjugation of English verbs
----
Lingua-EN-Conjugate-0.28
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.28/
Conjugation of English verbs
----
Lingua-EN-Conjugate-0.291
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.291/
Conjugation of English verbs
----
Lingua-EN-Conjugate-0.292
http://search.cpan.org/~rwg/Lingua-EN-Conjugate-0.292/
Conjugation of English verbs
----
Parse-QTEDI-0.02_02
http://search.cpan.org/~dongxu/Parse-QTEDI-0.02_02/
Parse QT/KDE preprocessed headers
----
PerlIO-via-Logger-1.01
http://search.cpan.org/~akaplan/PerlIO-via-Logger-1.01/
PerlIO layer for prefixing current time to log output
----
Statistics-RankCorrelation-0.10
http://search.cpan.org/~gene/Statistics-RankCorrelation-0.10/
Compute the rank correlation between two vectors
----
Test-MonitorSites-0.09
http://search.cpan.org/~hesco/Test-MonitorSites-0.09/
Monitor availability and function of a list of websites
----
XUL-Node-0.06
http://search.cpan.org/~eilara/XUL-Node-0.06/
----
bin-sqlpp-0.06
http://search.cpan.org/~karasik/bin-sqlpp-0.06/
cpp-alike SQL preprocessor
----
delicious-backup-0.011
http://search.cpan.org/~rjbs/delicious-backup-0.011/
----
dvdrip-0.98.4
http://search.cpan.org/~jred/dvdrip-0.98.4/
----
oEdtk-0.312
http://search.cpan.org/~daunay/oEdtk-0.312/
If you're an author of one of these modules, please submit a detailed
announcement to comp.lang.perl.announce, and we'll pass it along.
This message was generated by a Perl program described in my Linux
Magazine column, which can be found on-line (along with more than
200 other freely available past column articles) at
http://www.stonehenge.com/merlyn/LinuxMag/col82.html
print "Just another Perl hacker," # the original
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
------------------------------
Date: Sat, 24 Mar 2007 08:30:00 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: parsing a tab delimited or CSV, but keep the delimiter
Message-Id: <slrnf0a9uo.7rp.tadmc@tadmc30.august.net>
["Followup-To:" header set to comp.lang.perl.misc.]
Lew <lew@nospam.lewscanon.com> wrote:
> Jürgen Exner wrote:
>> Well, sorry, but when you are talking Perl, then CPAN is the one and only
>> repository for modules.
> Well, sorry, but you're the one talking Perl.
That can happen in articles posted to the Perl newsgroup...
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: Sat, 24 Mar 2007 15:46:50 -0500
From: Tad McClellan <tadmc@augustmail.com>
Subject: Re: Reading from fixed-length text file
Message-Id: <slrnf0b3hq.95u.tadmc@tadmc30.august.net>
nun <junk@yahoo.com> wrote:
> J. Gleixner wrote:
>> DB wrote:
>>> if ($line_length=124) {
>> use ==
>>
>> '=' does the assignment.
>>
>
> Yes, right after I posted I noticed that and now the script works.
That is why you should always enable warnings!
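The difference, in a few lines (the variable names and values are made up):

```perl
use strict;
use warnings;

my $line_length = 100;
my $matched     = 0;

# The bug from the thread: '=' assigns 124 to $line_length, and the
# value of the assignment (124) is true, so the branch always runs.
# Under "use warnings" Perl reports: Found = in conditional
if ( $line_length = 124 ) {    # WRONG: assignment, not comparison
    $matched = 1;
}
# $line_length has been clobbered to 124 at this point.

# What was intended: '==' compares numerically and assigns nothing.
$line_length = 100;
my $is_124 = $line_length == 124 ? 1 : 0;    # 0: no match
```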
--
Tad McClellan SGML consulting
tadmc@augustmail.com Perl programming
Fort Worth, Texas
------------------------------
Date: Sun, 25 Mar 2007 13:58:20 +1200
From: Anony-mouse <anony-mouse@hole.in.the.wall.com>
Subject: Re: Replacing characters in file
Message-Id: <250320071358202315%anony-mouse@hole.in.the.wall.com>
In article <1174727138.489310.127080@l75g2000hse.googlegroups.com>,
"Klaus" <klaus03@gmail.com> wrote:
> On Mar 24, 7:15 am, Anony-mouse <anony-mo...@hole.in.the.wall.com>
> wrote:
> > Hi,
> >
> > I'm trying to find a way to replace characters in a file, but have run
> > into problems. I'm trying to go through a .DAT file replacing the
> > characters 015F (in hex) with 015C.
> >
> > I tried playing around with versions of SED, but the file also contains
> > EOF control characters which cause that to abort part-way through the
> > file (although it wasn't doing the replacement anyway).
> >
> > It also needs to be via the DOS command line or a similar way that can
> > be performed by a BAT file and be as small as possible since it needs
> > to run from a keyring Flash drive and still leave enough room for the
> > data files.
> >
> > Is Perl going to be able to do this??
>
> C:\>perl -Mbytes -Mopen=IO,:raw -pi.bak -e "s/\x01\x5f/\x01\x5c/g"
> test.dat
>
> see also "perldoc perlopentut" and "perldoc perlrun"
>
> "-Mbytes" is needed to disallow character encodings (such as Utf8,
> etc...)
> "-Mopen=IO,:raw" forces I/O to "binmode" (i.e. disallow transformation
> of CR/LF)
> "-pi.bak" performs inplace editing (that's the "-i" part)
> whereas the "-p" runs automated loop to process the file line by line
>
> I am running ActiveState Perl on Windows XP
>
> C:\>perl -v
>
> This is perl, v5.8.8 built for MSWin32-x86-multi-thread
> (with 50 registered patches, see perl -V for more detail)
>
> Copyright 1987-2006, Larry Wall
>
> Binary build 820 [274739] provided by ActiveState http://www.ActiveState.com
> Built Jan 23 2007 15:57:46
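The same thing as a standalone script, sketched with a made-up filename and
sample bytes, doing the replacement in the direction the OP asked for
(01 5F -> 01 5C):

```perl
use strict;
use warnings;
use bytes;                # work on raw bytes, not characters
use open IO => ':raw';    # binmode all I/O (no CR/LF translation)

# Create a tiny sample file so the sketch is self-contained;
# "test.dat" and its contents are made up.
open my $out, '>', 'test.dat' or die "open: $!";
print {$out} "AB\x01\x5FCD\x01\x5FEF";
close $out;

# In-place edit with a backup, like -pi.bak on the command line.
$^I   = '.bak';
@ARGV = ('test.dat');
while (<>) {
    s/\x01\x5F/\x01\x5C/g;    # replace bytes 01 5F with 01 5C
    print;                    # goes to the rewritten file
}
select STDOUT;    # back to normal output after in-place editing

# Read back the result to show the replacement happened.
open my $in, '<', 'test.dat' or die "open: $!";
my $result = do { local $/; <$in> };
close $in;
print $result eq "AB\x01\x5CCD\x01\x5CEF" ? "replaced\n" : "unchanged\n";
```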
Many thanks. I'll give that a try. :o)
_
_/ \___
Anony-mouse says o_/O _/ \
"Eek-eek-eek!" \__/_|_/_|\____/
------------------------------
Date: Sun, 25 Mar 2007 11:24:34 +0200
From: "Dr.Ruud" <rvtol+news@isolution.nl>
Subject: Re: Replacing characters in file
Message-Id: <eu5m75.r0.1@news.isolution.nl>
Anony-mouse schreef:
> I'm trying to find a way to replace [..]
> characters 015F (in hex) with 015C.
Is this about double-byte characters?
--
Affijn, Ruud
"Gewoon is een tijger."
------------------------------
Date: 25 Mar 2007 05:16:54 -0700
From: "Klaus" <klaus03@gmail.com>
Subject: Re: Replacing characters in file
Message-Id: <1174825014.219191.180330@b75g2000hsg.googlegroups.com>
On Mar 25, 11:24 am, "Dr.Ruud" <rvtol+n...@isolution.nl> wrote:
> Anony-mouse schreef:
>
> > I'm trying to find a way to replace [..]
> > characters 015F (in hex) with 015C.
>
> Is this about double-byte characters?
...or maybe even about converting BCD (binary coded (packed) decimal)
from unsigned value "15" to signed value "+15" ?
--
Klaus
------------------------------
Date: Sat, 24 Mar 2007 23:45:21 GMT
From: nospam@geniegate.com (Jamie)
Subject: Re: Server/Clients system
Message-Id: <Lc1174755118105150x89a77ac@pong.podro.com>
In <slrnf0aiq3.db.hjp-usenet2@yoyo.hjp.at>,
"Peter J. Holzer" <hjp-usenet2@hjp.at> mentions:
>I'm probably missing something because I know little about podcasts, but
>AFAIK they consist basically of
>
>* individual media files
>* RSS (or Atom, or whatever) files which refer to the media files and
> add a bit of information about them.
>
>The RSS file normally is at a fixed URL, and clients poll the URL to see
>if the file has changed, and either notify the user or automatically
>download the new media files so that the user can listen to them resp.
>view them.
That is more or less correct. In theory, one could have multiple media
files per item, but I've never seen it done.
>So as a first step you can set up a set of newgroups with categories.
>Then instead of (or in addition to) posting your RSS file on a web site,
>you post it in the apropriate newsgroup(s). Clients subscribed to at
>least one of the newsgroups will retrieve the RSS and can continue to
>act just the same as if they had retrieved it vie HTTP. (The user might
>want to specify additional filters, for example automatically download
>only files by author X or with subject Y).
Hmm.. I was thinking you'd post the individual items in some manner,
so they scroll off. Sort of like an item is a post, the channel is
??? (and this is where it seems like it'd break)
Seems like this would involve re-posting the same channel data for
each item? (maybe that's a good thing?)
... OR.. (and this feels a little crazy):
LIST CHANNEL http://foo.example.com/bar/none/feed.rss
Which would, I presume, fetch the channel info for that particular
feed. The URL would serve as the unique ID and as I'm thinking
about it now, the result would be a channel with no items.
I don't like it.
>The harder part is to distribute the media files themselves over NNTP:
>You can just post them to the same newsgroups, but that means that every
>file is flooded to every server subscribed to a newsgroup. Well, let's
>just say there's a reason why most newsserver operators don't carry
>binary groups :-).
Like this?
rss.binaries.technology.*
rss.podcast.feed.technology.*
rss.podcast.feed.discuss.*
>That's where the "more finegrained" subscription model I was talking
>about comes in. Traditionally, feeds in NNTP are configured
>"out-of-band": If I want a feed for some newsgroup, I send mail to my
>neighbour news admins, and tell them I want to get a feed for that
>newsgroup. Of course that's quite a bit of work, so it's almost never
Ah, subscription as in, peer subscription. (I was thinking client)
I've set up simple news servers but have no experience with larger
"real" news servers that participate in networks. (I do like
the technology behind them though, always thought it was cool)
>So you could have one newsgroup per channel, and each server only
>requests a feed for that channel when there are users actually reading
>items from it (Incidentally, that gets rid of the problem that neither
>news: nor nntp: URIs allow specifying both a message id and a newsgroup)
Then anyone would have to be able to create new channels. (I don't know how
usenet does this in practice, but, as I understand it, one can't just create a
new channel and expect it to be carried. Certainly not without actually having
their own peer on the network.)
I can't just post with a header of "Newsgroups: bogus.newsgroup" and
expect the new newsgroup to be created for me. (least I /HOPE/ I can't!)
>So, if I wanted to make podcasts, I'd start by creating a new newsgroup
>for my channel on my news server (there may need to be some global
^^^^^^^^^^^^^^
That's the trouble: podcasters probably won't go to the trouble.
Hmm.. this is a business model? offer to create channels for podcasters,
with the promise of how much they'll save on bandwidth by doing it this
way instead of web based. (be a hard sell though, SOMEONE has to pay
for the bandwidth)
I'd rather multiple channels appear on the same newsgroup though; with
a "one newsgroup per channel" model, there would be a LOT of dead
newsgroups. If you go to podcastalley.com and do searches, you'll see
podcasts on practically every subject in the known universe, but most
of them are dead.
A newsgroup as a category really fits in well with RSS.
In the RSS:
<category domain="news.example.com">podcast/channels/foo</category>
Alas, the "domain" attribute is hardly known; I wish someone had
told Apple about it.
>naming scheme to avoid collisions, for example by using the reversed
>domain name as prefix: "podcasts.channels.at.hjp") and post my first
>opus there:
Multiple podcasts can (and do) appear on the same domain. Some places (at
least with RSS) will have several feeds related to particular subjects,
BBC and I believe CNN have many streams.
> From: <hjp-usenet2@hjp.at>
> Newsgroups: podcasts.channels.at.hjp
> Subject: Writing a newsserver in perl
> Content-Type: video/mpeg
> Message-Id: <first-video@hjp.at>
And this would be the binary data, (and /all/ the binary data) in one single
post? Some of those media files are HUGE.
>Then I'll post an RSS file to the appropriate categories newsgroups:
[RSS example of a channel with one item snipped]
> <!-- or something like that - please ignore any errors in the RSS -->
I like it. :-) But somehow I think the <channel> data should be stored
elsewhere.
The natural way articles expire in NNTP would be very useful if items
could be individual posts. (The same could be said for channels
themselves "expiring" when they have had no items for a period of
time.)
>The RSS will be flooded out to all servers which carry either
>podcasts.categories.programming.perl or podcasts.categories.comm.usenet.
>The video will currently stay on my own NNTP server, because no other
>carries podcasts.channels.at.hjp yet.
>
>A client will get the RSS file, and if the user requests my
>presentation, it will first try to enter the group
>podcasts.channels.at.hjp and request the article <first-video@hjp.at> at
>the local server(s), and if that fails, from news.hjp.at.
>
>The server will notice a request for a non-existing newsgroup
>podcasts.channels.at.hjp and request a feed from its peers. If none has
>it, it may parse the RSS files to find that this newsgroup is available
>from news.hjp.at and request it there (it is a matter of policy if a
>server should autonomously establish new peer relationships).
Hmm.. I like that idea; I don't really like binaries as posts.
I wrote a news reader once (I'm using it now, actually); parsing and downloading
binary postings is really a black art, never perfect and full of gotchas. (Same
with binary posting tools: a lot of 'yenc' tools won't handle the leading '.'
problem very well, resulting in corrupt downloads.)
Downloading binaries split across multiple postings with (NN/NN) patterns (and
now [AA/BB] (NN/NN) file.part012) is an extremely hit-or-miss procedure.
You might get the first 12 parts, but as you were fetching them, part 13
was deleted and the whole thing is a miss.
>Regular postings in plain text can be intermixed between RSS and media
>postings. (Hey, you can even have a discussion thread consisting of
>audio or video postings :-).
Yea, you could! :-)
>> NNTP isn't 8-bit clean and doesn't really provide a way to xfer enclosures.
>
>NNTP has been in practice 8-bit clean (although not binary-clean) since
>the early 1990's. Even before, binaries were distributed just fine using
>uuencoding. Today base64 and yenc exist as alternatives. Distributing
>binaries over NNTP is really not a problem - people have been doing it
>since the beginning of Usenet.
I really like the technology and the ideas of NNTP; I just wish people
would realize the internet is /NOT/ the web. The ONLY advantage I've
ever seen in those goofy web-based message boards is that webmasters
can charge for PPC advertising more easily. If people re-discovered NNTP,
they'd never want to use a web-based gizmo again.
Jamie
--
http://www.geniegate.com Custom web programming
Perl * Java * UNIX User Management Solutions
------------------------------
Date: Sun, 25 Mar 2007 13:43:57 +0200
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Server/Clients system
Message-Id: <slrnf0co3t.ubn.hjp-usenet2@yoyo.hjp.at>
[I think we're getting quite off-topic here. I've set Followup-To:
poster, but feel free to use a more appropriate newsgroup instead]
On 2007-03-24 23:45, Jamie <nospam@geniegate.com> wrote:
> In <slrnf0aiq3.db.hjp-usenet2@yoyo.hjp.at>,
> "Peter J. Holzer" <hjp-usenet2@hjp.at> mentions:
>>So as a first step you can set up a set of newgroups with categories.
>>Then instead of (or in addition to) posting your RSS file on a web site,
>>you post it in the apropriate newsgroup(s). Clients subscribed to at
>>least one of the newsgroups will retrieve the RSS and can continue to
>>act just the same as if they had retrieved it vie HTTP. (The user might
>>want to specify additional filters, for example automatically download
>>only files by author X or with subject Y).
>
> Hmm.. I was thinking you'd post the individual items in some manner,
> so they scroll off. Sort of like an item is a post, the channel is
> ??? (and this is where it seems like it'd break)
>
> Seems like this would involve re-posting the same channel data for
> each item? (maybe thats a good thing?)
Yes, or for a small number of items. Since newsservers normally keep
articles for some time before expiring them, you can just post each item
individually, and don't have to keep the last n items in the RSS file to
implement "scrolling" - the newsserver does that for you.
In this case the channel data would be repeated for each item. But you
could put a few items into the same posting, either just the last 3 or 4
(to guard against message-loss) or some related items.
I think the additional bandwidth is negligible.
>>The harder part is to distribute the media files themselves over NNTP:
>>You can just post them to the same newsgroups, but that means that every
>>file is flooded to every server subscribed to a newsgroup. Well, let's
>>just say there's a reason why most newsserver operators don't carry
>>binary groups :-).
>
> Like this?
>
> rss.binaries.technology.*
> rss.podcast.feed.technology.*
> rss.podcast.feed.discuss.*
Yes, something like that.
>>That's where the "more finegrained" subscription model I was talking
>>about comes in. Traditionally, feeds in NNTP are configured
>>"out-of-band": If I want a feed for some newsgroup, I send mail to my
>>neighbour news admins, and tell them I want to get a feed for that
>>newsgroup. Of course that's quite a bit of work, so it's almost never
>
> Ah, subscription as in, peer subscription. (I was thinking client)
>
> I've set up simple news servers but have no experience with larger
> "real" news servers that participate in networks. (I do like
> the technology behind them though, always thought it was cool)
I run two small newsservers (one at work, the other at a local user
group), with a few peers each. Text only, no binaries (we used to have a
full feed at work, but we had to stop that in the late 1990's - took
way too much bandwidth).
>>So you could have one newsgroup per channel, and each server only
>>requests a feed for that channel when there are users actually reading
>>items from it (Incidentally, that gets rid of the problem that neither
>>news: nor nntp: URIs allow specifying both a message id and a newsgroup)
>
> Then anyone would have to be able to create new channels, (I don't know how
> usenet does this in practice, but, as I understand it, one can't just create a
> new channel and expect it to be carried. Certainly not without actually having
> their own peer on the network.
>
> I can't just post with a header of "Newsgroups: bogus.newsgroup" and
> expect the new newsgroup to be created for me. (least I /HOPE/ I can't!)
You can send out Control: newgroup messages at any time. The question
is, will anybody honor them? That depends on the hierarchy. In most
hierarchies, Control messages must be signed with a specific PGP key to
be honored by most newsservers. In some (e.g., alt.*) anybody can send
out newgroups, but if it hasn't been discussed before, other people will
send rmgroups. And in some (e.g., free.*, oesterreich.*, ...) every
newgroup will be honored.
But I wasn't thinking of using control messages for this, but of letting
each server create a newsgroup when a) a local user tries to access
it and b) it can get a feed for it. If a newsgroup hasn't been
accessed for some time, it can be removed again.
>>So, if I wanted to make podcasts, I'd start by creating a new newsgroup
>>for my channel on my news server (there may need to be some global
> ^^^^^^^^^^^^^^
> Thats the trouble, podcasters probably won't go through the trouble.
I wasn't thinking that everybody would have to run their own newsserver.
They just need access to a newsserver which lets them create their own
groups (or maybe one which automatically creates a new group for each
registered user).
> Hmm.. this is a business model? offer to create channels for podcasters,
> with the promise of how much they'll save on bandwidth by doing it this
> way instead of web based. (be a hard sell though, SOMEONE has to pay
> for the bandwidth)
Yep, but the cost is distributed, and for the poster it is (almost)
constant. If his casts are popular, they will just be flooded out over
the whole net, and most people can get them from their newsservers and
don't have to get them from his. (Bittorrent has a similar property, BTW.)
> I'd rather multiple channels appear on the same newsgroup though, with
> a "one newsgroup pr. channel" model, there would be a LOT of dead
> newsgroups.
Yes, but that doesn't matter. If a newsgroup in podcasts.channels.* (to
stay with my naming scheme) hasn't been accessed for some time it can be
removed locally. It may still exist as a dead group on the server which
created it in the first place but everywhere else it will vanish if it
is dead (it will also vanish if there is traffic but nobody is reading
it, which is IMHO a big advantage over current usenet).
> A newsgroup as a category really fits in well with RSS.
Yep.
>>naming scheme to avoid collisions, for example by using the reversed
>>domain name as prefix: "podcasts.channels.at.hjp") and post my first
>>opus there:
>
> Multiple podcasts can (and do) appear on the same domain. Some places (at
> least with RSS) will have several feeds related to particular subjects,
> BBC and I believe CNN have many streams.
The domain name system is hierarchical, so you can make that as fine
grained as you want (well, not quite: There's a limit of 255
octets in DNS, but DNS isn't used here). Using my own domain was
probably a bad example (because there is only one user there), so I'll
use my employer's: wsr.ac.at. All the channel newsgroups created on this
server would start with "podcasts.channels.at.ac.wsr" to avoid conflicts
with those created by other servers. As a matter of policy, we could then
create one per user, so I would get podcasts.channels.at.ac.wsr.hjp, and
if I wanted to create several channels, I could do that below that,
e.g., podcasts.channels.at.ac.wsr.hjp.computers,
podcasts.channels.at.ac.wsr.hjp.sf, etc.
(If you're familiar with Java, you know where I stole that idea :-).
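The naming scheme is mechanical enough to sketch in a few lines of Perl
(the helper name is made up):

```perl
use strict;
use warnings;

# Build a channel newsgroup name from a server's domain plus
# user/channel parts, using the reversed-domain prefix described above.
sub channel_group {
    my ($domain, @subparts) = @_;
    # "wsr.ac.at" -> "at.ac.wsr", then prefix and trailing parts.
    my $reversed = join '.', reverse split /\./, $domain;
    return join '.', 'podcasts.channels', $reversed, @subparts;
}

print channel_group('wsr.ac.at', 'hjp', 'computers'), "\n";
# podcasts.channels.at.ac.wsr.hjp.computers
```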
>> From: <hjp-usenet2@hjp.at>
>> Newsgroups: podcasts.channels.at.hjp
>> Subject: Writing a newsserver in perl
>> Content-Type: video/mpeg
>> Message-Id: <first-video@hjp.at>
>
> And this would be the binary data, (and /all/ the binary data) in one single
> post? Some of those media files are HUGE.
Yes. NNTP doesn't have a size limit, although most newsservers do (but
since they would have to be modified to implement dynamic feed
configuration anyway, that's the least of the problems). I think sending
a file as one huge posting is better than sending it in lots of little
chunks (as is currently done in binary newsgroups) for the reasons you
mention below.
Of course the URLs in the RSS file can also be http or bittorrent URLs,
in which case NNTP is only used to distribute the metadata. But since I
claimed that NNTP could be used, I had to demonstrate that it can be
used to distribute *all* the data. And with a fine-grained group
structure and dynamic configuration of feeds I think it would also
reduce bandwidth requirements to a manageable level (although probably
not quite as low as bittorrent).
> I really like the technology and the ideas of NNTP, I just wish the people
> would realize the internet is /NOT/ the web. The ONLY advantage I've
> ever seen in those goofy web based message boards is that webmasters
> can charge PPC advertising easier, if people re-discovered NNTP, they'd
> never want to use a web based gizmo again.
While I also vastly prefer Usenet to web forums (I haven't yet seen one I
liked), I don't think that's true for all or even most people.
There are people who prefer to use a web mailer when they could use a
real MUA with IMAP.
hp
--
_ | Peter J. Holzer | Blaming Perl for the inability of programmers
|_|_) | Sysadmin WSR | to write clearly is like blaming English for
| | | hjp@hjp.at | the circumlocutions of bureaucrats.
__/ | http://www.hjp.at/ | -- Charlton Wilbur in clpm
------------------------------
Date: Sun, 25 Mar 2007 11:30:00 +0200
From: "Dr.Ruud" <rvtol+news@isolution.nl>
Subject: Re: time structure without shift
Message-Id: <eu5mgi.1hc.1@news.isolution.nl>
Petr Vileta schreef:
> Michael Carman:
>> Petr Vileta:
>>> I have time in seconds and want to get time structure
>>> of this time but without shift to local time.
>>
>> perldoc -f gmtime
>
> gmtime can't help me.
Just read the advised documentation again. And again.
> gmtime() suppose time value in local time
Huh? From that doc: "not locale dependent".
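A quick check that gmtime() does no local-time shift (epoch 0 is
1970-01-01 00:00:00 UTC everywhere):

```perl
use strict;
use warnings;

# gmtime() interprets its argument as seconds since the epoch and
# returns the broken-down time in UTC -- no local-time shift and no
# locale dependence, unlike localtime().
my @utc = gmtime(0);
my $stamp = sprintf '%04d-%02d-%02d %02d:%02d:%02d',
    $utc[5] + 1900, $utc[4] + 1, $utc[3],    # year, month, day
    $utc[2], $utc[1], $utc[0];               # hour, minute, second
print "$stamp\n";    # 1970-01-01 00:00:00, in any time zone
```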
--
Affijn, Ruud
"Gewoon is een tijger."
------------------------------
Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>
Administrivia:
#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc. For subscription or unsubscription requests, send
#the single line:
#
# subscribe perl-users
#or:
# unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.
NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.
#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V11 Issue 263
**************************************