
Perl-Users Digest, Issue: 2882 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sun Mar 21 06:09:29 2010

Date: Sun, 21 Mar 2010 03:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Sun, 21 Mar 2010     Volume: 11 Number: 2882

Today's topics:
    Re: FAQ 5.17 Is there a leak/bug in glob()? <kst-u@mib.org>
    Re: Perl HTML searching sln@netherlands.com
    Re: reading file round and round <ben@morrow.me.uk>
        Terms of Use    (was Re: Perl HTML searching) sln@netherlands.com
    Re: Terms of Use    (was Re: Perl HTML searching) <tadmc@seesig.invalid>
    Re: Terms of Use    (was Re: Perl HTML searching) sln@netherlands.com
    Re: Terms of Use    (was Re: Perl HTML searching) <hjp-usenet2@hjp.at>
        using Bot::BasicBot with private channels <olingaa@gmail.com>
    Re: using Bot::BasicBot with private channels <tadmc@seesig.invalid>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sat, 20 Mar 2010 09:59:04 -0700
From: Keith Thompson <kst-u@mib.org>
Subject: Re: FAQ 5.17 Is there a leak/bug in glob()?
Message-Id: <lnhbob53qf.fsf@nuthaus.mib.org>

PerlFAQ Server <brian@theperlreview.com> writes:
> 5.17: Is there a leak/bug in glob()?
>
>     Due to the current implementation on some operating systems, when you
>     use the glob() function or its angle-bracket alias in a scalar context,
>     you may cause a memory leak and/or unpredictable behavior. It's best
>     therefore to use glob() only in list context.

How old is this FAQ?  Is the leak still there?
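[For readers following along, the list-context usage the FAQ recommends is a
one-liner; the reported leak concerned only the scalar-context iterator. A
minimal sketch:]

```perl
use strict;
use warnings;

# List context: glob() returns every match at once, so no iterator
# state is left behind (the scalar-context iterator was the source
# of the leak reported on some platforms).
my @configs = glob('*.conf');
print "$_\n" for @configs;
```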

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"


------------------------------

Date: Sat, 20 Mar 2010 14:17:22 -0700
From: sln@netherlands.com
Subject: Re: Perl HTML searching
Message-Id: <okeaq512p0tvdr00t33537vsd8apm16cas@4ax.com>

On Fri, 19 Mar 2010 11:25:12 -0700 (PDT), Steve <steve@staticg.com> wrote:

>On Mar 19, 11:01 am, Jürgen Exner <jurge...@hotmail.com> wrote:
>> Steve <st...@staticg.com> wrote:
>> >I started a little project where I need to search web pages for their
>> >text and return the links of those pages to me.  I am using
>> >LWP::Simple, HTML::LinkExtor, and Data::Dumper.  Basically all I have
>> >done so far is a list of URL's from my search query of a website, but
>> >I want to be able to filter this content based on the pages contents.
>> >How can I do this? How can I get the content of a web page, and not
>> >just the URL?
>>
>> ???
>>
>> I don't understand.
>>
>>         use LWP::Simple;
>>         $content = get("http://www.whateverURL");
>>
>> will get you exactly the content of that web page and assign it to
>> $content and apparently you are doing that already.
>>
>> So what is your problem?
>>
>> jue
>
>Sorry I am a little overwhelmed with the coding so far (I'm not very
>good at perl).  I have what you have posted, but my problem is that I
>would like to filter that content... like lets say I searched a site
>that had 15 news links and 3 of them said "Hello" in the title.  I
>would want to extract only the links that said hello in the title.

This might help you. Requires Perl 5.10 or better.
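[The 5.10 requirement comes from the named captures (?<NAME>...) and the %+
hash used in the code below; a minimal illustration of that feature:]

```perl
use strict;
use warnings;

# Named captures and the %+ hash were introduced in Perl 5.10.
"foo=bar" =~ /(?<key>\w+)=(?<val>\w+)/ or die "no match";
print "$+{key} $+{val}\n";   # foo bar
```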

-sln

Output:
Specific Tag/Attr Titles found --
  Hello:
    "http://helloA.com"
    "helloB.com"
  no_title:
    "/info/twitter.aspx"

All Tag/Attr found --
  a-href:
    "http://helloA.com"
    "/info/twitter.aspx"
    "helloB.com"
  link-href:
    "/includes/css/main.css"

Code:
# -------------------------------------------
# rx_html_href.pl
# -sln, 3/20/2010
# 
# Util to extract some attribute/val's from
# html/xml
# -------------------------------------------

use strict;
use warnings;

my ($Name,$Rxmarkup);
InitName();

my $rxopen  = "(?: $Name )"; # Open tag with 'href' attrib, cannot be empty alternation

#my $rxopen  = "(?: a )";       # Open tag with 'href' attrib, cannot have an empty alternation
my $rxattr  = "(?: href )";     # Attribute we seek, cannot have an empty alternation
my $rxclose = "(?: a )";        # Close tag to match with content, cannot have an empty alternation
my $rxtitle = "(?: Hello | )";  # Content Title, can be empty alternation

my %hTitles;   # hash of titles => attribute values matching tag open, title, and tag close
my %hHrefs;    # hash of tag => attribute values matching tag open expression, not necessarily titles

InitRegex();

##
 # open my $fh, '<', 'C:/temp/XML/tennis1.html' or
 #   die "can't open file for input: $!";
 # my $html = join '', <$fh>;
 # close $fh;

 my $html = join '', <DATA>;

##
 ParseHref(\$html);

##
 print "\nSpecific Tag/Attr Titles found --\n";
 for my $key (keys %hTitles) {
	print "  $key:\n";
	for my $val (@{$hTitles{$key}}) {
		print "    $val\n";
	}
 }

 print "\nAll Tag/Attr found -- \n";
 for my $key (keys %hHrefs) {
	print "  $key:\n";
	for my $val (@{$hHrefs{$key}}) {
		print "    $val\n";
	}
 }

exit (0);


##
sub ParseHref
{
	my ($markup) = @_;
	my (
	  $url,
	  $title,
	  $content,
	  $tfound,
	  $lcbpos,
	  $last_content_pos,
	  $begin_pos
	) = ('','','',0,0,0,0);

	## parse loop
	while ($$markup =~ /$Rxmarkup/g)
	{
		## handle content buffer
		  if (defined $+{C1}) {
			## speed it up
			$content .= $+{C1};
			if (length $+{C2})
			{
				if ($lcbpos == pos($$markup))	{
					$content .= $+{C2};
				} else {
					$lcbpos = pos($$markup);
					pos($$markup) = $lcbpos - 1;
				}
			}
			$last_content_pos = pos($$markup);
			next;
		  }
		## content here ... take it off
		  if (length $content)
		  {
			$begin_pos = $last_content_pos;
			## check '<'
			if ($content =~ /</) {
				## markup  in content
				#print "Markup '<' in content, da stuff is crap!\n";
			}
			if ($content =~ /($rxtitle)/x && length $url) {
				$tfound = 1;
				$title = $1;
				$title =~ s/^\s*//;
				$title =~ s/\s*$//;
				$title = 'no_title' if !length($title);
			}
			$content = '';
		  }
		## markup here ... take it off
		  if (defined $+{OPEN}) {
			push @{$hHrefs{$+{OPEN}.'-'.$+{ATTR}}}, $+{VAL} ;
			$url = $+{VAL};
			$tfound = 0;
			$title  = '';
		  }
		  elsif (defined $+{CLOSE}) {
			if (length $url && $tfound) {
				push @{$hTitles{$title}}, $url;
			}
			$url    = '';
			$tfound = 0;
			$title  = '';
		  }
	} ## end parse loop

	## check for leftover content
	if (length $content)
	{
		## check '<'
		if ($content =~ /</) {
			## markup  in content
			#print "Markup '<' in left over content, da stuff is crap!\n";
		}
	}
}

sub InitName
{
  my @UC_Nstart = (
    "\\x{C0}-\\x{D6}",
    "\\x{D8}-\\x{F6}",
    "\\x{F8}-\\x{2FF}",
    "\\x{370}-\\x{37D}",
    "\\x{37F}-\\x{1FFF}",
    "\\x{200C}-\\x{200D}",
    "\\x{2070}-\\x{218F}",
    "\\x{2C00}-\\x{2FEF}",
    "\\x{3001}-\\x{D7FF}",
    "\\x{F900}-\\x{FDCF}",
    "\\x{FDF0}-\\x{FFFD}",
    "\\x{10000}-\\x{EFFFF}",
  ); 
  my @UC_Nchar = (
    "\\x{B7}",
    "\\x{0300}-\\x{036F}",
    "\\x{203F}-\\x{2040}",
  );
  my $Nstrt = "[A-Za-z_:".join ('',@UC_Nstart)."]";
  my $Nchar = "[\\w:.".join ('',@UC_Nchar).join ('',@UC_Nstart)."-]";
  $Name  = "(?:$Nstrt$Nchar*)";
}

sub InitRegex
{
  $Rxmarkup = qr/
  (?:
     <
     (?:
         # Specific markup
        (?: (?<OPEN> $rxopen ) \s+[^>]*? (?<=\s) (?<ATTR> $rxattr) \s*=\s* (?<VAL> ".+?"|'.+?')[^>]*? \s* \/?)  # OPEN, ATTR, VAL
       |(?: (?<CLOSE> \/$rxclose ) \s* )        # CLOSE

         # Ordinary exclusionary markup
       |(?: \/* $Name \s* \/*)
       |(?: $Name (?:\s+(?:".*?"|'.*?'|[^>]*?)+) \s* \/?)
       |(?: \?.*?\?)
       |(?:
          !
          (?:  # markup types that have '!'
              (?: DOCTYPE.*?)
             |(?: \[CDATA\[.*?\]\])
             |(?: --.*?--)
             |(?: \[[A-Z][A-Z\ ]*\[.*?\]\]) # who knows?
             |(?: ATTLIST.*?)
             |(?: ENTITY.*?)
             |(?: ELEMENT.*?)
               # add more if necessary
          )
       )
     )
     >
  )
     # This alternation handles content
  | (?<C1> [^<]*) (?<C2> <?)               # C1, C2
  /xs;

}


__DATA__
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 # $ \ Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2900.3395" name=GENERATOR>

<STYLE></STYLE>
<test name = " thi<s # $ \ is a " test>
</HEAD>
<BODY bgColor=#ffffff>

  should fix these: # $ \
  but not these:    &#21; &#xAF;
  fix some here:    &&%#$  &as; &&#a0

<a href="http://helloA.com">Hello</a>

  <IMG SRC = "foo.gif" ALT = "A > B">
  <IMG SRC = "foo.gif"
       ALT = "A > # $ \ B">
  <!-- <A comment # $ \ > -->
  <NN & a # $ \>
  <AA &  # $ \>    

  <# Just data #>

  <![INCLUDE CDATA [ >>>>>\\ # $ \ >>>>>>> ]]>

  <!-- This section commented out.
     <B>You can't # $ \ see me!</B>
   -->

<link rel="stylesheet" type="text/css" href="/includes/css/main.css">


at root # $ \ > # $ \ level

<a href="/info/twitter.aspx" target="_top">
<img src="/images/icons/icon_twitter.gif" border="0" align="absmiddle">
</a>


<html><body>
<p>Hello
Kitty</p>
<a
href
=
"helloB.com"
>Hello</a
>
<!--
    There is no Hello here
-->
</body></html>
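[As a much smaller point of comparison, and with the usual caveat that regexes
over arbitrary HTML are fragile (as the test data above is designed to show),
a minimal sketch of filtering anchors whose text contains "Hello" might look
like this; the sample markup is hypothetical:]

```perl
use strict;
use warnings;

my $html = '<a href="http://helloA.com">Hello</a> <a href="/x">Bye</a>';
my @hello;
# Only handles simple, well-formed, double-quoted anchors on one line;
# real-world markup needs a proper parser or something like the code above.
while ($html =~ m{<a\s[^>]*href\s*=\s*"([^"]+)"[^>]*>([^<]*Hello[^<]*)</a>}gi) {
    push @hello, $1;
}
print "$_\n" for @hello;   # http://helloA.com
```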



------------------------------

Date: Sat, 20 Mar 2010 17:12:35 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: reading file round and round
Message-Id: <39cf77-0771.ln1@osiris.mauzo.dyndns.org>


Quoth "Peter J. Holzer" <hjp-usenet2@hjp.at>:
> On 2010-03-19 19:39, Ben Morrow <ben@morrow.me.uk> wrote:
> > Quoth "C.DeRykus" <derykus@gmail.com>:
> >>   seek($handle, 0, 0) or die ...
> >
> > Don't do that (I know perldoc -q tail recommends it, but it ought to be
> > updated). Use the constants from the Fcntl module, they're more
> > portable.
> 
> Are they really?

Honestly? I've no idea :). In principle they might be, though, and
anyway a proper constant is always nicer than random magic numbers.
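[For reference, the Fcntl version being suggested looks like this (a sketch;
SEEK_SET, SEEK_CUR, and SEEK_END are exported by the core Fcntl module):]

```perl
use strict;
use warnings;
use Fcntl qw(:seek);   # exports SEEK_SET, SEEK_CUR, SEEK_END

open my $fh, '<', $0 or die "can't open: $!";
<$fh>;                                            # consume a line
seek($fh, 0, SEEK_SET) or die "seek failed: $!";  # rewind, no magic 0
```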

Ben



------------------------------

Date: Sat, 20 Mar 2010 07:43:21 -0700
From: sln@netherlands.com
Subject: Terms of Use    (was Re: Perl HTML searching)
Message-Id: <olm9q5lu8vj6inv26nku95jpu52ldlv68p@4ax.com>

On Fri, 19 Mar 2010 21:40:14 +0000, Ben Morrow <ben@morrow.me.uk> wrote:

>
>Quoth Steve <steve@staticg.com>:
>> 
>> Based on what you all said, I can make a more clear description.
>> Essentially, I'm trying to search craigslist more efficiently.  I want
>
>Are you sure craigslist's Terms of Use allow this? Most sites of this
>nature don't.

There is no "Terms of Use" web page making a caller
agree to, or sign, a notarized legal document as a condition of usage.
It's a public record, available to be parsed, quoted, or anything else,
by routers, virus scanners, BROWSERs, hosts filters, search engines,
operating systems, etc.

As for altering the content and viewing just what the viewer wants,
it's a one-way street. I filter ads, active controls/content, links,
and anything else I want to.

Don't make me laugh, this lame phrase is just that -- LAME!

-sln



------------------------------

Date: Sat, 20 Mar 2010 19:58:48 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: Terms of Use    (was Re: Perl HTML searching)
Message-Id: <slrnhqarip.hht.tadmc@tadbox.sbcglobal.net>

sln@netherlands.com <sln@netherlands.com> wrote:
> On Fri, 19 Mar 2010 21:40:14 +0000, Ben Morrow <ben@morrow.me.uk> wrote:
>
>>
>>Quoth Steve <steve@staticg.com>:
>>> 
>>> Based on what you all said, I can make a more clear description.
>>> Essentially, I'm trying to search craigslist more efficiently.  I want
>>
>>Are you sure craigslist's Terms of Use allow this? Most sites of this
>>nature don't.
>
> There is no "Terms of Use" web page making a caller
> agree to, sign, a legal notorized document as a condition of usage.


There is no legal need to sign anything.

    http://www.craigslist.org/about/terms.of.use

    By using the Service in any way, you are agreeing to comply with the TOU.


> Its a public record, 


Whether it is public or private does not matter either.

It is copyrighted either way.


> available to be parsed, quoted or anything else,
> by routers, virus scanners, BROWSERs, hosts filters, search engines,
> Operating Systems, etc..


The owner can impose whatever restrictions they want.

    This license does not include:
    ...
    (b) any collection, aggregation, copying, duplication, display 
    or derivative use of the Service nor any use of data mining, 
    robots, spiders, or similar data gathering and extraction tools 
    for any purpose unless expressly permitted by craigslist.


> As for alterring the content and viewing just what the viewer wants,
> its a one way street. I filter adds, active controls/content, links
> and anything else I want to.


Just because you violate the license you've been given does not
make it OK for others to also violate the license.


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.


------------------------------

Date: Sat, 20 Mar 2010 20:25:27 -0700
From: sln@netherlands.com
Subject: Re: Terms of Use    (was Re: Perl HTML searching)
Message-Id: <ra3bq5lao9162np2uo1on6l811rh1nabke@4ax.com>

On Sat, 20 Mar 2010 19:58:48 -0500, Tad McClellan <tadmc@seesig.invalid> wrote:

>sln@netherlands.com <sln@netherlands.com> wrote:
>> On Fri, 19 Mar 2010 21:40:14 +0000, Ben Morrow <ben@morrow.me.uk> wrote:
>>
>>>
>>>Quoth Steve <steve@staticg.com>:
>>>> 
>>>> Based on what you all said, I can make a more clear description.
>>>> Essentially, I'm trying to search craigslist more efficiently.  I want
>>>
>>>Are you sure craigslist's Terms of Use allow this? Most sites of this
>>>nature don't.
>>
>> There is no "Terms of Use" web page making a caller
>> agree to, sign, a legal notorized document as a condition of usage.
>
>
>There is no legal need to sign anything.
>
>    http://www.craigslist.org/about/terms.of.use
>
>    By using the Service in any way, you are agreeing to comply with the TOU.
>
>
>> Its a public record, 
>
>
>Whether it is public or private does not matter either.
>
>It is copyrighted either way.
      ^^^^^^^^^^^^^^
There is nothing copyrighted about an href link. There is
nothing copyrighted about words, html, xml, browsers, nor
anything else that flows through the public airwaves, nor
is air, water or food copyrighted.

If craig has some unique combination of words that may
be considered "artful and unique" and apart from all others, that
may be extracted from their "public" broadcast, they would publish
it as literary content.

Otherwise, the computer rips apart, repackages, and transmits data
as it sees fit, unless you think the HOSTS file violates that
"artful and unique" web page.

>
>
>> available to be parsed, quoted or anything else,
>> by routers, virus scanners, BROWSERs, hosts filters, search engines,
>> Operating Systems, etc..
>
>
>The owner can impose whatever restrictions they want.
 ^^^^^^^^^^^^^^^
No, they cannot. Give an example.

>
>    This license does not include:
>    ...

BEGIN Browser definition
>    (b) any collection, aggregation, copying, duplication, display 
>    or derivative use of the Service nor any use of data mining, 
>    robots, spiders, or similar data gathering and extraction tools 
>    for any purpose unless expressly permitted by craigslist.
END Browser definition

>
>
>> As for alterring the content and viewing just what the viewer wants,
>> its a one way street. I filter adds, active controls/content, links
>> and anything else I want to.
>
>
>Just because you violate the license you've been given does not
>make it OK for others to also violate the license.

Just because you say it doesn't make it so.
It's not a movie, music, or literary art. It's a composition
of ordinary off-the-shelf components that can be broken down
and examined. Happens every day, it's public information, and
public information cannot be licensed for which craig has any
patent.

-sln


------------------------------

Date: Sun, 21 Mar 2010 09:56:17 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Terms of Use    (was Re: Perl HTML searching)
Message-Id: <slrnhqbnpi.110.hjp-usenet2@hrunkner.hjp.at>

This is getting a bit off-topic, but ...

On 2010-03-21 03:25, sln@netherlands.com <sln@netherlands.com> wrote:
> On Sat, 20 Mar 2010 19:58:48 -0500, Tad McClellan <tadmc@seesig.invalid> wrote:
>
>>sln@netherlands.com <sln@netherlands.com> wrote:
>>> On Fri, 19 Mar 2010 21:40:14 +0000, Ben Morrow <ben@morrow.me.uk> wrote:
>>>
>>>>
>>>>Quoth Steve <steve@staticg.com>:
>>>>> 
>>>>> Based on what you all said, I can make a more clear description.
>>>>> Essentially, I'm trying to search craigslist more efficiently.  I want
>>>>
>>>>Are you sure craigslist's Terms of Use allow this? Most sites of this
>>>>nature don't.
>>>
>>> There is no "Terms of Use" web page making a caller
>>> agree to, sign, a legal notorized document as a condition of usage.
>>
>>
>>There is no legal need to sign anything.
>>
>>    http://www.craigslist.org/about/terms.of.use
>>
>>    By using the Service in any way, you are agreeing to comply with
>>    the TOU.

That may or may not be binding.


>>> Its a public record, 
>>
>>
>>Whether it is public or private does not matter either.
>>
>>It is copyrighted either way.
>       ^^^^^^^^^^^^^^
> There is nothing copyrighted about a href link. There is
> nothing copyrighted about words, html, xml, browsers, nor
> anything else that flows through the public airways, nor
> is air, water or food copyrighted.
>
> If craig has some unique combination of words that may
> be considered "artfull and unique" and apart from all others, that
> may be extracted from thier "public" broadcast, they would publish
> it as literrary content.


>>The owner can impose whatever restrictions they want.
>  ^^^^^^^^^^^^^^^
> No, they cannot. Give an example.

"whatever restrictions they want" is too strong. The copyright law has
some limits.


>>    This license does not include:
>>    ...
>
> BEGIN Browser definition
>>    (b) any collection, aggregation, copying, duplication, display 
>>    or derivative use of the Service nor any use of data mining, 
>>    robots, spiders, or similar data gathering and extraction tools 
>>    for any purpose unless expressly permitted by craigslist.
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> END Browser definition

I would assume that viewing stuff in the browser is expressly permitted
by craigslist.


>>> As for alterring the content and viewing just what the viewer wants,
>>> its a one way street. I filter adds, active controls/content, links
>>> and anything else I want to.
>>
>>
>>Just because you violate the license you've been given does not
>>make it OK for others to also violate the license.
>
> Just because you say it doesen't make it so.  Its not a movie, music,
> literrary art. Its a composition of ordinary off the shelf components
> that can be broken down and examined.  Happens every day, its public
> information, and public information cannot be licensed for which craig
> has any patent.

Don't know about the US, but in Europe "a composition of ordinary ...
information" is more strongly protected by copyright law than "a movie,
music, literary art". Because while the latter need to be "artful and
unique" as you say (in Austrian law the term is "Werkshöhe"), no such
restriction exists for databases. So if you compile a list of the
students of your final year in high school, that's copyrighted. Same for
the data on craigslist.

(Similarly for programs: A "hello world" program is copyrighted, a
literary work of the same originality wouldn't be - but that's not the
point here)

	hp



------------------------------

Date: Sat, 20 Mar 2010 18:55:10 -0700 (PDT)
From: Olinga <olingaa@gmail.com>
Subject: using Bot::BasicBot with private channels
Message-Id: <b88e2224-98f9-42f5-83d6-686e3c3791d2@g4g2000yqa.googlegroups.com>

[This message was posted to comp.lang.perl.modules yesterday but I've
received no responses so I'm posting here]

The attached Bot::BasicBot script, modified to protect some string
literals, works fine when connecting to servers using public channels,
including logging the first call to "say" sent to itself. Private
channels require an invitation request sent to an invite bot. In a
normal irc client I would type this to get the invitation:

/msg <botname> invite <username> <password>

To do this with BasicBot I called the "say" function on lines 30-34,
which are commented out in the attached script:

$self->say(
  channel=>"msg",
  who=>"welcomebot",
  body=>"invite username password"
);

The script appears to hang (I don't know for sure because I don't see
a way to report errors) with those lines uncommented, and it never logs
any messages as it does when connected to public channels with lines
30-34 commented out.

Does anyone have suggestions on how to obtain the invite properly?
Perhaps this functionality is not supported.
=================================================================
#!/usr/bin/perl
use warnings;
use strict;

package MyBot;
use base qw( Bot::BasicBot );

open FILE,">/path/to/log";

MyBot->new(
  server => 'uri',
  channels => [ '#channel'],
  port => '6667',
  nick => 'username'
)->run();

sub connected {
  my $self = shift;
  #just to make sure this function was entered
  $self->forkit({
    run => [print FILE "connecting\n"]
  });
  #test by sending message to myself: works
  $self->say(
    channel=>"msg",
    who=>"username",
    body=>"invite username password"
  );
#invite required prior to connecting to #tv channel: does not work
#   $self->say(
#     channel=>"msg",
#     who=>"welcomebot",
#     body=>"invite username password"
#   );

}

sub said {
  my ($self,$message) = @_;
  my $who = $message->{who};
  my $raw_nick = $message->{raw_nick};
  my $channel = $message->{channel};
  my $body = $message->{body};
  my $address = $message->{address};
  #for now just record a transcript of messages
  my $str = "who: $who nick: $raw_nick channel: $channel body: $body address: $address\n";
  $self->forkit({
    run => [print FILE $str]
  });
  return;
}

------------------------------

Date: Sun, 21 Mar 2010 02:11:49 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: using Bot::BasicBot with private channels
Message-Id: <slrnhqbhe7.i1n.tadmc@tadbox.sbcglobal.net>

Olinga <olingaa@gmail.com> wrote:

> open FILE,">/path/to/log";


You should always, yes always, check the return value from open.

Nowadays, you should also use the 3-arg form of open() along
with a lexical filehandle:

   open my $FILE, '>', '/path/to/log' or die "could not open: $!";


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 2882
***************************************

