
Perl-Users Digest, Issue: 2881 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sat Mar 20 09:09:27 2010

Date: Sat, 20 Mar 2010 06:09:08 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Sat, 20 Mar 2010     Volume: 11 Number: 2881

Today's topics:
    Re: PDF::API2 underlining text <hjp-usenet2@hjp.at>
    Re: Perl HTML searching <steve@staticg.com>
    Re: Perl HTML searching <glex_no-spam@qwest-spam-no.invalid>
    Re: Perl HTML searching <ben@morrow.me.uk>
    Re: Perl HTML searching <steve@staticg.com>
    Re: Perl HTML searching <ben@morrow.me.uk>
    Re: Perl HTML searching <steve@staticg.com>
    Re: Perl HTML searching <tadmc@seesig.invalid>
        Proper quoting (was: Perl HTML searching) <hjp-usenet2@hjp.at>
    Re: reading file round and round <jl_post@hotmail.com>
    Re: reading file round and round <jurgenex@hotmail.com>
    Re: reading file round and round <jurgenex@hotmail.com>
    Re: reading file round and round <hjp-usenet2@hjp.at>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sat, 20 Mar 2010 14:00:13 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: PDF::API2 underlining text
Message-Id: <slrnhq9hmt.5v6.hjp-usenet2@hrunkner.hjp.at>

On 2010-03-19 17:08, Ben Morrow <ben@morrow.me.uk> wrote:
> Quoth ccc31807 <cartercc@gmail.com>:
>> I've looked but must be missing something -- how do you underline
>> text?
>
> Put a line underneath it?
>
>> Is it part of the font object, as Bold or Italic or Roman?
>> 
>> Or is it some kind of transformation of text, like the translate
>> method?
>
> 'Underlined' is not a property of the character, like 'bold' or
> 'italic'. It's just a straight line stuck underneath the line of text.
> (It's also *really* tacky and should be avoided in anything purporting
> to be properly typeset.) I'm sure it's straightforward to find out where
> the text begins and ends; then just move those two points down a point
> or so and draw a line between them.

Position and thickness of the line should probably depend on the font
size and maybe also on font family and weight. 

I remember that GEM fonts (GEM was a windowing system for PCs and Atari
ST in the 1980's) included the offset of the upper and lower edge of the
underline. Postscript and Truetype fonts don't, AFAIK.
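A rough rule of thumb, scaling both values with the font size (the 10%/5% factors here are just an assumption for illustration, not taken from any font spec):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Heuristic underline geometry: offset below the baseline and stroke
# thickness both scale linearly with the font size.
sub underline_geometry {
    my ($font_size) = @_;
    my $offset    = -0.10 * $font_size;   # y-offset relative to the baseline
    my $thickness =  0.05 * $font_size;   # stroke width of the underline
    return ($offset, $thickness);
}

my ($off, $thick) = underline_geometry(12);
printf "offset=%.2f thickness=%.2f\n", $off, $thick;   # offset=-1.20 thickness=0.60
```

A PDF library would then draw a line from the text's start to its end at that offset, with that stroke width.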

	hp



------------------------------

Date: Fri, 19 Mar 2010 14:10:15 -0700 (PDT)
From: Steve <steve@staticg.com>
Subject: Re: Perl HTML searching
Message-Id: <d862e31d-e29b-4acc-bc3b-33e126e71808@b9g2000pri.googlegroups.com>

On Mar 19, 11:42 am, "J. Gleixner" <glex_no-s...@qwest-spam-
no.invalid> wrote:
> Steve wrote:
> > On Mar 19, 11:01 am, Jürgen Exner <jurge...@hotmail.com> wrote:
> >> Steve <st...@staticg.com> wrote:
> >>> I started a little project where I need to search web pages for their
> >>> text and return the links of those pages to me.  I am using
> >>> LWP::Simple, HTML::LinkExtor, and Data::Dumper.  Basically all I have
> >>> done so far is a list of URL's from my search query of a website, but
> >>> I want to be able to filter this content based on the pages contents.
> >>> How can I do this? How can I get the content of a web page, and not
> >>> just the URL?
> >> ???
>
> >> I don't understand.
>
> >>         use LWP::Simple;
> >>         $content = get("http://www.whateverURL");
>
> >> will get you exactly the content of that web page and assign it to
> >> $content and apparently you are doing that already.
>
> >> So what is your problem?
>
> >> jue
>
> > Sorry I am a little overwhelmed with the coding so far (I'm not very
> > good at perl).  I have what you have posted, but my problem is that I
> > would like to filter that content... like lets say I searched a site
> > that had 15 news links and 3 of them said "Hello" in the title.  I
> > would want to extract only the links that said hello in the title.
>
> '"Hello" in the title'??.. The title element of the HTML????
> Or the 'a' element contains 'Hello'?? e.g. <a href="...">Hello Kitty</a>
>
> How are you using HTML::LinkExtor??
>
> That seems like the right choice.
>
> Why are you using Data::Dumper?
>
> That's helpful when debugging, or logging, so how are you using it?
>
> Post your very short example, because there's something you're
> missing and no one can tell what that is based on your description.

Based on what you all said, I can give a clearer description.
Essentially, I'm trying to search craigslist more efficiently.  I want
the link the a tag points to, as well as the description.  Here is
code I wrote already that gets me only the links:
-----------------------------

#!/usr/bin/perl -w
use strict;
use LWP::Simple;
use HTML::LinkExtor;
use Data::Dumper;

###### VARIABLES ######
my $craigs = "http://seattle.craigslist.org";
my $source = "$craigs/search/sss?query=what+Im+Looking
+for&catAbbreviation=sss";
my $browser = 'google-chrome';

###### SEARCH #######

my $page = get("$source");
my $parser = HTML::LinkExtor->new();

$parser->parse($page);
my @links = $parser->links;
open LINKS, ">/home/me/Desktop/links.txt";
print LINKS Dumper \@links;

open READLINKS, "</home/me/Desktop/links.txt";
open OUT, ">/home/me/Desktop/final.txt";
while (<READLINKS>){
	if ( /html/ ){
		my $url = $_;
		for ($url){
			s/\'//g;
			s/^\s+//;
		}

		print OUT "$craigs$url";
	}
}
open BROWSE, "</home/me/Desktop/final.txt";

system ($browser);
foreach(<BROWSE>){
	system ($browser, $_);
}
-----------------------------

I've since created a different script that's a little more cleaned up


------------------------------

Date: Fri, 19 Mar 2010 16:10:14 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: Perl HTML searching
Message-Id: <4ba3e837$0$48215$815e3792@news.qwest.net>

J. Gleixner wrote:
> Steve wrote:
>> On Mar 19, 11:01 am, Jürgen Exner <jurge...@hotmail.com> wrote:
>>> Steve <st...@staticg.com> wrote:
>>>> I started a little project where I need to search web pages for their
>>>> text and return the links of those pages to me.  I am using
>>>> LWP::Simple, HTML::LinkExtor, and Data::Dumper.  Basically all I have
>>>> done so far is a list of URL's from my search query of a website, but
>>>> I want to be able to filter this content based on the pages contents.
>>>> How can I do this? How can I get the content of a web page, and not
>>>> just the URL?
>>> ???
>>>
>>> I don't understand.
>>>
>>>         use LWP::Simple;
>>>         $content = get("http://www.whateverURL");
>>>
>>> will get you exactly the content of that web page and assign it to
>>> $content and apparently you are doing that already.
>>>
>>> So what is your problem?
>>>
>>> jue
>>
>> Sorry I am a little overwhelmed with the coding so far (I'm not very
>> good at perl).  I have what you have posted, but my problem is that I
>> would like to filter that content... like lets say I searched a site
>> that had 15 news links and 3 of them said "Hello" in the title.  I
>> would want to extract only the links that said hello in the title.
> 
> 
> '"Hello" in the title'??.. The title element of the HTML????
> Or the 'a' element contains 'Hello'?? e.g. <a href="...">Hello Kitty</a>
> 
> How are you using HTML::LinkExtor??
> 
> That seems like the right choice.
After looking at it further, HTML::LinkExtor only gives the
attributes, not the text that makes up the hyperlink.  Seems
like that would be a useful enhancement.

This might help you:

http://cpansearch.perl.org/src/GAAS/HTML-Parser-3.64/eg/hanchors
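In the same spirit, a self-contained sketch (the sample HTML, filenames, and variable names here are made up for illustration) that pairs each href with its anchor text and then filters on the text, using HTML::Parser directly:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

# Stand-in for a fetched page:
my $html = '<a href="a.html">Hello Kitty</a> <a href="b.html">Bye</a>';

my (@links, $cur_href, $cur_text);
my $p = HTML::Parser->new(
    api_version => 3,
    # On <a ...>, remember its href and start collecting text.
    start_h => [ sub {
        my ($tag, $attr) = @_;
        ($cur_href, $cur_text) = ($attr->{href}, '') if $tag eq 'a';
    }, 'tagname,attr' ],
    # Accumulate any text seen while inside an <a> element.
    text_h  => [ sub { $cur_text .= $_[0] if defined $cur_href }, 'dtext' ],
    # On </a>, store the (href, text) pair.
    end_h   => [ sub {
        if ($_[0] eq 'a' && defined $cur_href) {
            push @links, [ $cur_href, $cur_text ];
            undef $cur_href;
        }
    }, 'tagname' ],
);
$p->parse($html);
$p->eof;

# Keep only the links whose visible text mentions "Hello":
my @hello = grep { $_->[1] =~ /Hello/ } @links;
print "$_->[0]\n" for @hello;   # prints a.html
```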


------------------------------

Date: Fri, 19 Mar 2010 21:40:14 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Perl HTML searching
Message-Id: <ui7d77-qsp.ln1@osiris.mauzo.dyndns.org>


Quoth Steve <steve@staticg.com>:
> 
> Based on what you all said, I can make a more clear description.
> Essentially, I'm trying to search craigslist more efficiently.  I want

Are you sure craigslist's Terms of Use allow this? Most sites of this
nature don't.

> the link the a tag points to, as well as the description.  here is
> code I used already that I made that gets me only the links:
> -----------------------------
> 
> #!/usr/bin/perl -w
> use strict;
> use LWP::Simple;
> use HTML::LinkExtor;
> use Data::Dumper;
> 
> ###### VARIABLES ######
> my $craigs = "http://seattle.craigslist.org";
> my $source = "$craigs/search/sss?query=what+Im+Looking
> +for&catAbbreviation=sss";
> my $browser = 'google-chrome';
> 
> ###### SEARCH #######
> 
> my $page = get("$source");
> my $parser = HTML::LinkExtor->new();
> 
> $parser->parse($page);
> my @links = $parser->links;
> open LINKS, ">/home/me/Desktop/links.txt";

Use 3-arg open.
Use lexical filehandles.
*Always* check the return value of open.

    open my $LINKS, ">", "/home/me/Desktop/links.txt"
        or die "can't write to 'links.txt': $!";

You may wish to consider using the 'autodie' module from CPAN, which
will do the 'or die' checks for you.
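For example, a minimal sketch (a relative filename is used here so it stands alone; the URL is made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use autodie;    # open, print, close now die on failure automatically

open my $links, '>', 'links.txt';   # no 'or die' needed
print {$links} "http://example.com/page1.html\n";
close $links;
```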

> print LINKS Dumper \@links;
> 
> open READLINKS, "</home/me/Desktop/links.txt";
> open OUT, ">/home/me/Desktop/final.txt";

As above.

> while (<READLINKS>){

Why are you writing the links out to a file only to read them in again?
Just use the array you already have:

    for (@links) {

> 	if ( /html/ ){
> 		my $url = $_;
> 		for ($url){
> 			s/\'//g;
> 			s/^\s+//;
> 		}
> 
> 		print OUT "$craigs$url";
> 	}
> }
> open BROWSE, "</home/me/Desktop/final.txt";

As above.
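Putting those points together, a minimal sketch with no temporary files (the HTML is inlined here so the example stands alone; with a live page you would parse the result of get($source) instead, and the sample hrefs are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::LinkExtor;

my $craigs = "http://seattle.craigslist.org";

# Inlined stand-in for get($source):
my $page = '<a href="/see/123.html">old couch</a> <a href="/about">about</a>';

my $parser = HTML::LinkExtor->new();
$parser->parse($page);
$parser->eof;

my @urls;
for my $link ($parser->links) {
    # links() returns array refs of the form [tag, attrname => value, ...]
    my ($tag, %attr) = @$link;
    next unless $tag eq 'a' && defined $attr{href};
    push @urls, "$craigs$attr{href}" if $attr{href} =~ /html/;
}
print "$_\n" for @urls;   # prints http://seattle.craigslist.org/see/123.html
```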

Ben



------------------------------

Date: Fri, 19 Mar 2010 15:10:15 -0700 (PDT)
From: Steve <steve@staticg.com>
Subject: Re: Perl HTML searching
Message-Id: <5e480084-0af5-49a6-aa63-9c62c3594a0a@v34g2000prm.googlegroups.com>

On Mar 19, 2:40 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> Quoth Steve <st...@staticg.com>:
>
> > Based on what you all said, I can make a more clear description.
> > Essentially, I'm trying to search craigslist more efficiently.  I want
>
> Are you sure craigslist's Terms of Use allow this? Most sites of this
> nature don't.
>
> > the link the a tag points to, as well as the description.  here is
> > code I used already that I made that gets me only the links:
> > -----------------------------
>
> > #!/usr/bin/perl -w
> > use strict;
> > use LWP::Simple;
> > use HTML::LinkExtor;
> > use Data::Dumper;
>
> > ###### VARIABLES ######
> > my $craigs = "http://seattle.craigslist.org";
> > my $source = "$craigs/search/sss?query=what+Im+Looking
> > +for&catAbbreviation=sss";
> > my $browser = 'google-chrome';
>
> > ###### SEARCH #######
>
> > my $page = get("$source");
> > my $parser = HTML::LinkExtor->new();
>
> > $parser->parse($page);
> > my @links = $parser->links;
> > open LINKS, ">/home/me/Desktop/links.txt";
>
> Use 3-arg open.
> Use lexical filehandles.
> *Always* check the return value of open.
>
>     open my $LINKS, ">", "/home/me/Desktop/links.txt"
>         or die "can't write to 'links.txt': $!";
>
> You may wish to consider using the 'autodie' module from CPAN, which
> will do the 'or die' checks for you.
>
> > print LINKS Dumper \@links;
>
> > open READLINKS, "</home/me/Desktop/links.txt";
> > open OUT, ">/home/me/Desktop/final.txt";
>
> As above.
>
> > while (<READLINKS>){
>
> Why are you writing the links out to a file only to read them in again?
> Just use the array you already have:
>
>     for (@links) {
>
> > 	if ( /html/ ){
> > 		my $url = $_;
> > 		for ($url){
> > 			s/\'//g;
> > 			s/^\s+//;
> > 		}
>
> > 		print OUT "$craigs$url";
> > 	}
> > }
> > open BROWSE, "</home/me/Desktop/final.txt";
>
> As above.
>
> Ben

I have no idea, but it's personal use.  I don't see what's so bad
about it; if I was using my web browser I'd be doing the same thing.
Craigslist is just an example.

That's beside the point though; I'm just doing it for fun/practice/
learning.  Let's say we are using a different site then, perhaps one
I'm going to make; it makes no difference to me.

So any way I can do this or...?


------------------------------

Date: Fri, 19 Mar 2010 22:30:11 +0000
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Perl HTML searching
Message-Id: <jgad77-n8q.ln1@osiris.mauzo.dyndns.org>


Quoth Steve <steve@staticg.com>:
> 
> I have no idea, but it's personal use.  I don't see what so bad about
> it, if I was using my web browser I'd be doing the same thing.

That's not the point. If their TOS say 'no robots' then that means 'no
robots', not 'no robots unless it's for personal use and you can't see
why you shouldn't'. Apart from anything else, a lot of these sites make
money from ads, which you will completely bypass.

> Craigslist is just an example.
> 
> That's aside the point though, I'm just doing it for fun/practice/
> learning.  Let's say we are using a different site then, perhaps one
> I'm going to make, it makes no difference to me.
> 
> So any way I can do this or...?

I've already suggested using XML::LibXML. Others have pointed you to an
example of using HTML::Parser. Pick one and try it.

Ben



------------------------------

Date: Fri, 19 Mar 2010 15:39:49 -0700 (PDT)
From: Steve <steve@staticg.com>
Subject: Re: Perl HTML searching
Message-Id: <9122ee50-8dbc-4825-8d2c-ec303632236a@h35g2000pri.googlegroups.com>

On Mar 19, 3:30 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> Quoth Steve <st...@staticg.com>:
>
> > I have no idea, but it's personal use.  I don't see what so bad about
> > it, if I was using my web browser I'd be doing the same thing.
>
> That's not the point. If their TOS say 'no robots' then that means 'no
> robots', not 'no robots unless it's for personal use and you can't see
> why you shouldn't'. Apart from anything else, a lot of these sites make
> money from ads, which you will completely bypass.
>
> > Craigslist is just an example.
>
> > That's aside the point though, I'm just doing it for fun/practice/
> > learning.  Let's say we are using a different site then, perhaps one
> > I'm going to make, it makes no difference to me.
>
> > So any way I can do this or...?
>
> I've already suggested using XML::LibXML. Others have pointed you to an
> example of using HTML::Parser. Pick one and try it.
>
> Ben

I realize this; I'm not using craigslist.  It was the first thing I
could think of for an example.  This is for internal/personal use
only, and I don't like how you're labeling me as breaking any TOS for
an _EXAMPLE_.  Notice how my home folder is changed to "me"?  I'm
putting as little personal information here as possible, hence the
craigslist example.


------------------------------

Date: Fri, 19 Mar 2010 21:38:19 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: Perl HTML searching
Message-Id: <slrnhq8d1d.cfg.tadmc@tadbox.sbcglobal.net>

Kyle T. Jones <KBfoMe@realdomain.net> wrote:
> Steve wrote:

>> like lets say I searched a site
>> that had 15 news links and 3 of them said "Hello" in the title.  I
>> would want to extract only the links that said hello in the title.
>
> Read up on perl regular expressions.


While reading up on regular expressions is certainly a good idea,
it is a horrid idea for the purposes of parsing HTML.

Have you read the FAQ answers that mention HTML?

    perldoc -q HTML


> for instance, taking the above, you might first split it into a 
> "one-line per" array -
>
> @stuff=split(/\n/, $content);
>
> then parse each line for hello -
>
> foreach(@stuff){
> 	if($_=~/Hello/){
> 		do whatever;}
> }


The code below prints "do whatever" 3 times, but there is only one link
containing "Hello"...


---------------------------
#!/usr/bin/perl
use warnings;
use strict;

# some perfectly valid HTML:
my $content = '
<html><body>
<p>Hello
Kitty</p>
<a
href
=
"hello.com"
>Hello</a
>
<!--
    There is no Hello here
-->
</body></html>
';

my @stuff = split /\n/, $content;
foreach (@stuff) {
    if(/Hello/) {
        print "do whatever\n";
    }
}
---------------------------


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.liamg\100cm.j.dat/"
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.


------------------------------

Date: Sat, 20 Mar 2010 12:35:53 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Proper quoting (was: Perl HTML searching)
Message-Id: <slrnhq9cop.c5s.hjp-usenet2@hrunkner.hjp.at>

On 2010-03-19 22:39, Steve <steve@staticg.com> wrote:
> On Mar 19, 3:30 pm, Ben Morrow <b...@morrow.me.uk> wrote:
>> Quoth Steve <st...@staticg.com>:
>> > I have no idea, but it's personal use.  I don't see what so bad about
>> > it, if I was using my web browser I'd be doing the same thing.
>>
>> That's not the point. If their TOS say 'no robots' then that means 'no
>> robots', not 'no robots unless it's for personal use and you can't see
>> why you shouldn't'. Apart from anything else, a lot of these sites make
>> money from ads, which you will completely bypass.

=======

>> > Craigslist is just an example.
>>
>> > That's aside the point though, I'm just doing it for fun/practice/
>> > learning.  Let's say we are using a different site then, perhaps one
>> > I'm going to make, it makes no difference to me.
>>
>> > So any way I can do this or...?
>>
>> I've already suggested using XML::LibXML. Others have pointed you to an
>> example of using HTML::Parser. Pick one and try it.
>>
>> Ben
>
> I realize this,

Please quote only the relevant parts of the posting you are responding
to and write your answer directly beneath the part you are referring to. 

Nobody knows what "this" is that you realize. From your quoting it looks
like you realize that you should use XML::LibXML or HTML::Parser. But
from the content of your reply it seems more likely you realize that you
should abide by the terms of use of any site you use. If so, you should
have inserted your response at the point I've marked with "======="
above. And if you don't intend to respond to the part about the tools
you should use, don't quote it (and change the subject, since the topic
is now no longer "Perl HTML searching" but "TOS of web pages").

	hp



------------------------------

Date: Fri, 19 Mar 2010 14:27:01 -0700 (PDT)
From: "jl_post@hotmail.com" <jl_post@hotmail.com>
Subject: Re: reading file round and round
Message-Id: <f28f9c8d-4f90-4165-9e52-4a00686175e5@a16g2000pre.googlegroups.com>

On Mar 19, 12:43 pm, cerr <ron.egg...@gmail.com> wrote:
> Hi There,
>
> I read out the content from a file like:
> foreach $line (<$handle>) {
>           print $line;
>           sleep(1);
>         }
> which works well so far. But what I would like is, if the loop gets to
> eof, it should start over on top again. How can I reset the reading
> pointer back to the beginning of the file?


   If an infinite loop is what you want, you can always use the
Tie::File module (which you may already have in your installation of
Perl) and just increment the index variable, making sure to "mod" it
by the number of lines.

   Here's an example:


#!/usr/bin/perl

use strict;
use warnings;

my $fileName = 'file.txt';

use Tie::File;
tie my @lines, 'Tie::File', $fileName
   or die "Could not open file '$fileName': $!\n";

die "No lines found in '$fileName'.\n"  unless @lines;

my $lineNum = 0;

while (1)
{
   print $lines[$lineNum], "\n";
   sleep(1);
}
continue
{
   # Increment $lineNum, but make sure it doesn't exceed $#lines:
   $lineNum++;
   $lineNum %= @lines;
}

__END__


   This solution may seem overkill for your example, but if you have a
more complex task where you'd rather traverse arrays instead of
manipulating file handles, Tie::File is quite nice.

   Cheers,

   -- Jean-Luc


------------------------------

Date: Fri, 19 Mar 2010 15:09:00 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: reading file round and round
Message-Id: <fft7q5hhsk71qsheb1eubsiqd0b20djnd0@4ax.com>

cerr <ron.eggler@gmail.com> wrote:
>Hi There,
>
>I read out the content from a file like:
>foreach $line (<$handle>) {
>	  print $line;
>	  sleep(1);
>        }
>which works well so far. But what I would like is, if the loop gets to
>eof, it should start over on top again. How can I reset the reading
>pointer back to the beginning of the file?

perldoc -f seek
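That is, rewind the handle once you hit EOF. A bounded sketch (the demo filename is made up; a real round-and-round loop would use while(1) instead of the counted passes):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Create a small demo file to read from.
open my $out, '>', 'round.txt' or die "write: $!";
print {$out} "one\ntwo\n";
close $out;

open my $handle, '<', 'round.txt' or die "read: $!";
my @seen;
for my $pass (1 .. 2) {                  # bounded here for demonstration
    while (my $line = <$handle>) {
        chomp $line;
        push @seen, $line;
    }
    # Rewind to byte 0; whence 0 is SEEK_SET (absolute position).
    seek($handle, 0, 0) or die "seek: $!";
}
print join(',', @seen), "\n";            # one,two,one,two
```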

jue


------------------------------

Date: Fri, 19 Mar 2010 15:11:42 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: reading file round and round
Message-Id: <ght7q5thmu9egrlv74ksopov0st7q25o9s@4ax.com>

"C.DeRykus" <derykus@gmail.com> wrote:
>On Mar 19, 11:46 am, cerr <ron.egg...@gmail.com> wrote:
>> On Mar 19, 11:43 am, cerr <ron.egg...@gmail.com> wrote:
>>
>> > I read out the content from a file like:
>> > foreach $line (<$handle>) {
>> >           print $line;
>> >           sleep(1);
>> >         }
>> > whixh works well so far. But what I would like is, if the loop gets to
>> > eof, it should start over on top again. How can i reset the reading
>> > pointer back to the beginning of the file?
>>
>> Do I actually need to close() and reopen my file or is there another
>> way to achieve this?
>>
>>
>
>LOOP:
>{
>  foreach $line (<$handle>)
>  {
>     ...
>  }
>  seek($handle, 0, 0) or die ...
>  redo LOOP;
>}

Why not

	while (1) {
		foreach $line (<$handle>) {
			...
		}
		seek(...) or die ...;
	}

instead of that ugly label?

jue


------------------------------

Date: Sat, 20 Mar 2010 12:37:52 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: reading file round and round
Message-Id: <slrnhq9csg.c5s.hjp-usenet2@hrunkner.hjp.at>

On 2010-03-19 19:39, Ben Morrow <ben@morrow.me.uk> wrote:
> Quoth "C.DeRykus" <derykus@gmail.com>:
>>   seek($handle, 0, 0) or die ...
>
> Don't do that (I know perldoc -q tail recommends it, but it ought to be
> updated). Use the constants from the Fcntl module, they're more
> portable.

Are they really?

	hp


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 2881
***************************************

