[11643] in Commercialization & Privatization of the Internet


The timeliness of Internet facts

daemon@ATHENA.MIT.EDU (Daniel P Dern)
Tue Apr 12 15:34:14 1994

Date: Tue, 12 Apr 1994 10:22:46 -0400
From: ddern@world.std.com (Daniel P Dern)
To: com-priv@psi.com


Internet facts are volatile, and as Peter Deutsch points out,
old documents don't die, they get put into archives.  I faced
this problem in writing my own Internet book; for example, I 
wrote up a bunch of stuff about BITNET based on documents I
grabbed, only to be told that a lot of it was out of date when
I had some CRENners (CRENites?) review it.  Sigh.  I'll be
real happy when we get global filing and URLs and whatever so that 
we can get master documents, pointers, and caching/salting.

My solutions, FWIW, were:
1. Datestamp all facts, e.g., "As of MONTH DATE..."
2. Give the best metapointers, e.g., "For a list of publicly
   accessible FOO clients, see Scott Yanoff's Special Internet 
   List." "For a list of Internet shell account providers, get
   Peter Kaminski's PDIAL list at ..."
3. Keep reminding people that things change... and to always
   start by trying locally, e.g., "type archie, type xarchie, type 
   gopher..."

Doing fact-checking helped too, of course; Peter gave me the info
so I could include:

  (The server at McGill has been out of service since 1992, 
  Deutsch notes.  "_quiche.cs.mcgill.ca_, the borrowed 
  machine on which the original archie ran for so long, closed down 
  after hardware failure and was never replaced.") 

And I don't believe for a moment that any incorrect statements
in Johna's article were a reflection of McGraw-Hill's ambitions
in the market.  That's too complicated a theory.

I believe that Gopher and WWW and the like are helping, in that they
encourage pointers to a single document, but we still have a lot of
old info roosting in file trees out there.

DPD
