[17259] in Perl-Users-Digest
Perl-Users Digest, Issue: 4681 Volume: 9
daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Oct 20 21:10:30 2000
Date: Fri, 20 Oct 2000 18:10:14 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Message-Id: <972090614-v9-i4681@ruby.oce.orst.edu>
Content-Type: text
Perl-Users Digest Fri, 20 Oct 2000 Volume: 9 Number: 4681
Today's topics:
Re: table of contents newbie question <auday@mindspring.com>
Re: Time-out error (Clinton A. Pierce)
Re: very new at perl, need a few examples explained (Clay Irving)
Re: Writing multiple files to a particular directory <krahnj@acm.org>
Re: Writing multiple files to a particular directory <jeff@vpservices.com>
Re: Writing multiple files to a particular directory <krahnj@acm.org>
Digest Administrivia (Last modified: 16 Sep 99) (Perl-Users-Digest Admin)
----------------------------------------------------------------------
Date: Fri, 20 Oct 2000 18:40:02 -0400
From: "Nuey" <auday@mindspring.com>
Subject: Re: table of contents newbie question
Message-Id: <8sqhk4$5gp$1@slb0.atl.mindspring.net>
Thanks, Bart, for your reply. Your solution actually makes sense. My data
is static and if Perl can automatically create the .html files then that
would basically solve the problem.
Gwyn, what an amusing email. As a consolation prize you get an all expense
paid trip to alt.comedy.standup. :)
Nuey
Bart Lateur wrote in message ...
>Nuey wrote:
>
>>I have a .doc file that is like a large flat file database. It is a well
>>structured tab-delimited text file that exports from an Access database. Its
>>size is 800 pages in 8.5x11 format. It has about 15000 records (or
>>paragraphs). Each record has three fields. The entire file is about 3 MB.
>
>That doesn't sound like a .doc file. Oh well, you're free to choose
>whatever file extension you like.
>
>>On my web site I will have a table of contents for this document. The table
>>of contents has about 1000 entries. Each table of contents entry corresponds
>>to a group of several adjacent records in the flat file that comprise a
>>small "chapter".
>
>Is your tab delimited text file exported in a sorted order? Chapters
>grouped together?
>
>That's one of the nice things about tab-delimited text files vs. real
>databases: you only have to set the order once, when generating this
>file. There's no reason to sort it again and again every time your
>program reads the data, especially if the order is always the same.
>
>>Since downloading large html pages is slow, and in order to save the time of
>>my web-visitors, I imagine that each entry in the table of contents can be a
>>hyperlink that connects via cgi-perl to the flat file and quickly outputs a
>>"chapter" to the browser.
>
>Been there, done that. Are you generating the HTML file on the fly from
>this "database file"? Because there actually is no need. It's not like
>the content is very dynamic, is it?
>
>You can use a Perl script to generate both the table of contents and an
>HTML file for each of the chapters. Nobody in the world will ever know
>that your data consists of hundreds of files instead of just one
>database file. But it sure helps improve the speed. You don't even
>need CGI: static pages are enough!
>
>>I have been searching the web for a script that
>>performs the above table of contents function. Maybe this project is too
>>simple even for a script. Does anyone know of a simple solution?
>
>It's not really "simple". I don't think a generic script can be made, or
>that it's worth bothering, because what it should do depends strongly on
>the structure of your records, and on the HTML you want output.
>
>This is a relatively simple thing to write for a moderately experienced
>Perl scripter. But you don't seem to be a programmer in any way, let
>alone a Perl programmer, so it looks to me like it will be too difficult
>for you to do. I won't even bother with giving too many details.
>
>What you'd need to do is read this file into memory -- possibly in
>chunks if it's too big for your computer, but make sure it's presorted
>then; for example using the Text::CSV or Text::CSV_XS module. Sort and
>group it according to chapter, and output an HTML file for each chapter,
>and keep a list of the chapters. Finally, generate another HTML file for
>the TOC. Upload the output files to your ISP server.
>
>--
> Bart.
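Bart's recipe (read the file, group records by chapter, write one page per
chapter, then generate a TOC) can be sketched in a few lines of Perl. This
is only an illustration: the field layout (chapter, title, body) and all
file names below are assumptions, since the posts never show the actual
record structure. The sketch writes its own tiny sample database so it is
self-contained:

```perl
#!/usr/bin/perl
use strict;

# Tiny stand-in for the exported database. The real layout is an
# assumption: chapter <TAB> title <TAB> body, one record per line.
open(my $db, '>', 'database.txt') or die "Can't write database.txt: $!";
print $db "Intro\tWelcome\tFirst record.\n",
          "Intro\tScope\tSecond record.\n",
          "Usage\tBasics\tThird record.\n";
close $db;

# Group records by chapter.
my %chapters;
open(my $in, '<', 'database.txt') or die "Can't read database.txt: $!";
while (<$in>) {
    chomp;
    my ($chapter, $title, $body) = split /\t/, $_, 3;
    push @{ $chapters{$chapter} }, "<h2>$title</h2>\n<p>$body</p>\n";
}
close $in;

# Write one static HTML file per chapter, and a TOC that links to them.
open(my $toc, '>', 'toc.html') or die "Can't write toc.html: $!";
for my $chapter (sort keys %chapters) {
    my $file = "chapter_$chapter.html";
    open(my $out, '>', $file) or die "Can't write $file: $!";
    print $out @{ $chapters{$chapter} };
    close $out;
    print $toc qq{<a href="$file">$chapter</a><br>\n};
}
close $toc;
```

Uploading the resulting toc.html and chapter_*.html files to the server is
then all that's needed; no CGI runs at request time.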
I was shocked! How could Nuey <auday@mindspring.com>
say such a terrible thing:
>Hello,
>
>I have skimmed a dozen CGI books and
Oh Good!
> hope this question is not too annoying.
No, I just *live* to answer posts like yours.
>I have a .doc file that is like a large flat file database. It is a well
>structured tab-delimited text file that exports from an Access database. Its
Hell, sounds like the program will just write itself, doesn't it?
>size is 800 pages in 8.5x11 format. It has about 15000 records (or
>paragraphs). Each record has three fields. The entire file is about 3 MB.
>
>On my web site I will have a table of contents for this document. The table
>of contents has about 1000 entries. Each table of contents entry corresponds
>to a group of several adjacent records in the flat file that comprise a
>small "chapter".
>
>Since downloading large html pages is slow, and in order to save the time of
>my web-visitors, I imagine that each entry in the table of contents can be a
Now if Microsoft will just come up with that ESP Visual Basic extension,
you wouldn't need Perl at all, would you?
>hyperlink that connects via cgi-perl to the flat file and quickly outputs a
>"chapter" to the browser.
I suppose so, but I think you should go with the "everything in one
page" solution. After all, if it was hard to write, it should be hard to
read too. This will help weed out the users that aren't really
interested in what you have to offer from those who are the true
fanatics.
>My ISP supports Perl scripts. (I have installed a freeware hit counter
>script and it works).
No it doesn't. Whoops, wrong thread!
>I have been searching the web for a script that performs the above
>table of contents function. Maybe this project is too simple even for
>a script. Does anyone know of a simple solution?
Hire a programmer.
--
Gwyn "bored as usual" Judd (print `echo 'tjla@guvfybir.qlaqaf.bet' | rot13`)
Old minds are like old horses; you must exercise them if you wish to
keep them in working order.
-John Quincy Adams
------------------------------
Date: Fri, 20 Oct 2000 23:12:19 GMT
From: clintp@geeksalad.org (Clinton A. Pierce)
Subject: Re: Time-out error
Message-Id: <nj4I5.51765$hD4.12251875@news1.rdc1.mi.home.com>
[Posted and mailed]
In article <8spfk8$r4$1@nnrp1.deja.com>,
fayerman@my-deja.com writes:
> First line of this code causes time-out, what can be wrong?
>
For starters, the fact that you didn't post nearly enough useful
information. $t is apparently an object. Created by what module?
Or something homespun? Without knowing that, how the heck are we
supposed to figure out what class "cmd" belongs to and what it's supposed
to be doing.
Try again.
--
Clinton A. Pierce Teach Yourself Perl in 24 Hours!
clintp@geeksalad.org for details see http://www.geeksalad.org
"If you rush a Miracle Man,
you get rotten Miracles." --Miracle Max, The Princess Bride
------------------------------
Date: 20 Oct 2000 22:32:16 GMT
From: clay@panix.com (Clay Irving)
Subject: Re: very new at perl, need a few examples explained
Message-Id: <slrn8v1hvg.cef.clay@panix3.panix.com>
On Fri, 20 Oct 2000 21:52:11 GMT, tgerstmar@my-deja.com
<tgerstmar@my-deja.com> wrote:
>Hey, well Like i said, i am very new at perl, i
>have only started learning over the past couple
>weeks. Anyways enough chit chat, i was wondering
>if anyone could help me out with a few things...
Use the documentation that comes with every Perl distribution.
>When ever i see examples i always see at the top
>of each program the 'Use Strict' module with
>no '''s thier, and i was wondering if someone
>could tell me what it did..
Try:
perldoc strict
Result:
NAME
strict - Perl pragma to restrict unsafe constructs
SYNOPSIS
use strict;
use strict "vars";
use strict "refs";
use strict "subs";
use strict;
no strict "vars";
DESCRIPTION
If no import list is supplied, all possible restrictions
are assumed. (This is the safest mode to operate in, but
is sometimes too strict for casual programming.)
Currently, there are three possible things to be strict
about: "subs", "vars", and "refs".
[...]
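A minimal sketch of what the pragma buys you (the variable name is made
up for the example):

```perl
#!/usr/bin/perl
use strict;

my $count = 10;    # "my" declares a lexical; strict requires declarations
$count += 5;
print "$count\n";  # prints 15

# Without the "my" above, compilation would abort with an error like:
#   Global symbol "$count" requires explicit package name
```

Catching undeclared variables at compile time is the main reason nearly
every example program starts with this line.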
>I was also wondering
>what $_ does, becuz i seem to see quite a few
>Input values put into the $_ variable. If anyone
>could get back to me that would be excellent.
Try:
perldoc perlvar
Result:
[...]
$_ The default input and pattern-searching space.
The following pairs are equivalent:
while (<>) {...} # equivalent only in while!
while (defined($_ = <>)) {...}
/^Subject:/
$_ =~ /^Subject:/
tr/a-z/A-Z/
$_ =~ tr/a-z/A-Z/
chomp
chomp($_)
[...]
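The equivalences above can be seen in one short loop (the sample data is
made up):

```perl
#!/usr/bin/perl
use strict;

my @lines = ('subject: perl', 'Subject: digest');
for (@lines) {
    # Inside the loop, $_ is aliased to the current element, so the
    # bare pattern match and tr/// below operate on it implicitly.
    next unless /^Subject:/;   # same as: $_ =~ /^Subject:/
    tr/a-z/A-Z/;               # same as: $_ =~ tr/a-z/A-Z/
    print "$_\n";              # prints "SUBJECT: DIGEST"
}
```

The first element fails the match (the pattern is case-sensitive), so only
the second line is uppercased and printed.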
--
Clay Irving <clay@panix.com>
For NASA, space is still a high priority.
- Dan Quayle
------------------------------
Date: Fri, 20 Oct 2000 15:11:36 -0700
From: "John W. Krahn" <krahnj@acm.org>
Subject: Re: Writing multiple files to a particular directory
Message-Id: <39F0C318.86CA8EAE@acm.org>
Mark wrote:
>
> Problem:
> Attempting to process a number of files, then to write the output to a
> specific directory.
>
> open (FIXEDLOG, '>c:\baalogs\fixeddirectory\$fixed') or die "n\nCan not
> open $fixed for write: $!\n\n";
Use either:
open (FIXEDLOG, ">c:/baalogs/fixeddirectory/$fixed") or die "... $!\n\n";
or
open (FIXEDLOG, '>c:\baalogs\fixeddirectory\' . $fixed) or die "... $!\n\n";
or
open (FIXEDLOG, ">c:\\baalogs\\fixeddirectory\\$fixed") or die "... $!\n\n";
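A further option, offered here as a sketch rather than part of the reply
above: let the core File::Spec module join the path, which avoids the
backslash-escaping question entirely. The file name is hypothetical, and
the separator File::Spec uses depends on the OS it runs on:

```perl
#!/usr/bin/perl
use strict;
use File::Spec;

my $fixed = 'output.log';    # hypothetical file name
my $path  = File::Spec->catfile('c:', 'baalogs', 'fixeddirectory', $fixed);
# $path is joined with the separator appropriate for the current OS,
# so the string can go straight into open():
#   open(FIXEDLOG, ">$path") or die "Can not open $fixed for write: $!\n";
print "$path\n";
```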
HTH
John
------------------------------
Date: Fri, 20 Oct 2000 15:26:02 -0700
From: Jeff Zucker <jeff@vpservices.com>
Subject: Re: Writing multiple files to a particular directory
Message-Id: <39F0C67A.EA5607C4@vpservices.com>
"John W. Krahn" wrote:
>
> or
> open (FIXEDLOG, '>c:\baalogs\fixeddirectory\' . $fixed) ...
Did you try that one? :-)
--
Jeff
------------------------------
Date: Fri, 20 Oct 2000 16:59:46 -0700
From: "John W. Krahn" <krahnj@acm.org>
Subject: Re: Writing multiple files to a particular directory
Message-Id: <39F0DC72.1C053AFD@acm.org>
Jeff Zucker wrote:
>
> "John W. Krahn" wrote:
> >
> > or
> > open (FIXEDLOG, '>c:\baalogs\fixeddirectory\' . $fixed) ...
>
> Did you try that one? :-)
I'm running on Linux so I don't have a c: drive or \ directory
separators. :-)
John
------------------------------
Date: 16 Sep 99 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin)
Subject: Digest Administrivia (Last modified: 16 Sep 99)
Message-Id: <null>
Administrivia:
The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc. For subscription or unsubscription requests, send
the single line:
subscribe perl-users
or:
unsubscribe perl-users
to almanac@ruby.oce.orst.edu.
| NOTE: The mail to news gateway, and thus the ability to submit articles
| through this service to the newsgroup, has been removed. I do not have
| time to individually vet each article to make sure that someone isn't
| abusing the service, and I no longer have any desire to waste my time
| dealing with the campus admins when some fool complains to them about an
| article that has come through the gateway instead of complaining
| to the source.
To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.
To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.
For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine by
sending Perl questions to the -request address; I don't have time to
answer them even if I did know the answer.
------------------------------
End of Perl-Users Digest V9 Issue 4681
**************************************