
Perl-Users Digest, Issue: 622 Volume: 10

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue Apr 3 21:11:02 2001

Date: Tue, 3 Apr 2001 18:10:19 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)
Message-Id: <986346618-v10-i622@ruby.oce.orst.edu>
Content-Type: text

Perl-Users Digest           Tue, 3 Apr 2001     Volume: 10 Number: 622

Today's topics:
        File locking problem David.nospam.Smith@ds-nospam-electronics.co.uk
        Force Integer Division?? <d96-rsa@nada.kth.se>
    Re: Force Integer Division?? (John Joseph Trammell)
        how I could find how many times my program has run <ksundar@landsend.com>
    Re: how I could find how many times my program has run (David Efflandt)
        how to create a new user by script? <mgd@converging.net>
    Re: Multidimensional Arrays? <goldbb2@earthlink.net>
    Re: Perl for Unix Sparc (solaris 7.0) (Damian James)
    Re: Perl::mySQL <goldbb2@earthlink.net>
    Re: Please Flame my Benchmark: open vs. cat (Dave Bailey)
    Re: Please Flame my Benchmark: open vs. cat <mischief@velma.motion.net>
    Re: regex-qr// for search and replace <goldbb2@earthlink.net>
        SVG Perl Libraries <jxm96c@hotmail.com>
        Digest Administrivia (Last modified: 16 Sep 99) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Wed, 04 Apr 2001 02:04:49 +0100
From: David.nospam.Smith@ds-nospam-electronics.co.uk
Subject: File locking problem
Message-Id: <3ACA7331.B0830011@ds-nospam-electronics.co.uk>

This is a multi-part message in MIME format.
--------------BAA9B6ED63D5B620372E2AAA
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi, everybody.  I've got a particularly thorny file locking problem.

I have a Perl script (enclosed) that opens a file in read+append mode,
then uses fcntl() to obtain an exclusive lock on that file.  Once the
lock is obtained, the script reads the file, searching for occurrences
of a particular string (in fact the hostname).  Every time it finds one
of these occurrences, it prints "hello" to stderr.  Once it reaches the
end of the file, it appends an extra line containing, among other
things, the hostname.  It then releases the lock and closes the file.

This algorithm is enclosed in a loop that executes 10 times, so each
process asks for 10 locks.
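[Ed.: for reference, the lock-read-append-unlock cycle described above
can be sketched with the portable struct flock interface.  This is an
illustrative sketch only, not the enclosed script; the function name,
path, and line contents are made up for the example.]

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one line to `path` under an exclusive fcntl() write lock,
   releasing the lock before close.  Returns 0 on success, -1 on error. */
int append_with_lock(const char *path, const char *line)
  {
  struct flock fl;
  int fd = open(path, O_RDWR | O_CREAT | O_APPEND, 0666);
  if (fd < 0)
    return -1;

  memset(&fl, 0, sizeof fl);
  fl.l_type   = F_WRLCK;            /* exclusive (write) lock         */
  fl.l_whence = SEEK_SET;
  fl.l_start  = 0;                  /* l_start = l_len = 0 locks the  */
  fl.l_len    = 0;                  /* whole file                     */

  if (fcntl(fd, F_SETLKW, &fl) < 0) /* block until the lock is granted */
    {
    close(fd);
    return -1;
    }

  /* Raw write(): the bytes reach the kernel before we drop the lock. */
  if (write(fd, line, strlen(line)) != (ssize_t) strlen(line))
    {
    close(fd);
    return -1;
    }

  fl.l_type = F_UNLCK;              /* release the lock, then close   */
  fcntl(fd, F_SETLK, &fl);
  close(fd);
  return 0;
  }
```

Note the sketch uses the unbuffered write() system call rather than
stdio, so the data is guaranteed to be in the file before the lock is
released; output that sits in a userspace buffer until close() would
escape the lock's protection.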
 
I then submit a number of these processes (50, to be precise) to a
network of machines, so that all of the scripts start at approximately
the same time.  All the scripts point at the same shared lock file,
which is on a networked disk.  Therefore, 500 locks are requested in
total, and there is a reasonable amount of collision between the
scripts.
 
I observe two problems:
 
  1) The performance of the locking mechanism is very poor - the locked
     file grows in large spurts, with gaps of about a minute during
     which there is no activity and all the processes are just sitting
     around waiting for a lock.
     This problem seems to be caused by either the OS or the networked
     disk (a NetApp): I've re-written the script in C (enclosed) and
     get the same performance problems.  I've taken this up with our
     system vendors, but if anyone could shed any light on what might
     be happening, I'd be grateful.
 
  2) This is the more worrying problem.  The shared file itself seems
     to be getting corrupted - there are bits of lines missing.  I've
     enclosed this file so you can see what I'm talking about.

If I run multiple copies of the same process on one machine (i.e. so
it's not trying to lock across the network), I don't see either of
these problems.
 
So, can anyone tell me whether I'm doing something stupid in the
script, or give me a clue as to what's going wrong?
 
Setup is:
  Compute hardware: 50 x Sun Ultra 10
  Disk hardware:    Network Appliance filer
  OS:               Sun Solaris 2.5; the problem may show up on 2.8 as
                    well, but we don't have enough 2.8 machines to get
                    collisions between the locks.
  Perl:             5.004
 
Many thanks in advance to anyone who can help...
--------------BAA9B6ED63D5B620372E2AAA
Content-Type: application/x-perl;
 name="lock_file.pl"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="lock_file.pl"


#{{{  Magic Startup

eval "exec $PERL $0 $*"           # Magic to invoke perl on this script
  if $untrue;                     # using the environment variable PERL.

#}}}  

require "sys/unistd.ph";
use Fcntl;


# A nasty hack to get round a Perl installation problem; we would
# prefer to 'use POSIX;' instead, but on this installation that gives
# unwanted/unacceptable output.
sub SEEK_SET
  {
  return 0;
  }


local ($flock, $status, $found, $line);

#First open and lock the shared file
# Create lock file if not exists
$share_file = "/design/tmp/dsmith/share.txt";

if (! -e $share_file)
  {
  if (system ("touch $share_file"))
    {
    die("can't create share file $share_file: $!");
    }
  }

for($i=1; $i<=10; $i++)
  {
  # Open share file (in read + append)

  if (!open (LOCKFD, "+>> $share_file"))
    {
    die("unable to open $share_file :$!");
    }

  # Build a struct flock by hand.  The "sslll" layout (short l_type,
  # short l_whence, long l_start, long l_len, long l_pid) is
  # system-dependent and must match this platform's <sys/fcntl.h>.
  $flock = pack ("sslll",
		 &F_WRLCK,
		 &SEEK_SET,
		 0,
		 0,
		 $$);

  # Try a non-blocking lock first so we can report contention, then
  # block until the lock is granted.
  ($status = fcntl (LOCKFD, &F_SETLK, $flock)) || ($status = -1);

  if ($status == -1)
    {
    print STDERR "Another session is accessing the share file, waiting for it to finish\n";
    $status = fcntl (LOCKFD, &F_SETLKW, $flock);
    }

  if ($status ne "0 but true")
    {
    die ("something bad with share file: $share_file");
    }
  # Read to EOF, reporting each occurrence of our hostname.

  while (defined($input_line = <LOCKFD>))
    {
    if ($input_line =~ /$ENV{"HOSTNAME"}/)
      {
      print STDERR "Hello\n";
      }
    }

  printf LOCKFD ("hostname %s jobid %s attempt %d\n", $ENV{"HOSTNAME"}, $ENV{"LSB_JOBID"}, $i);

  $flock = pack ("sslll",
		 &F_UNLCK,
		 &SEEK_SET,
		 0,
		 0,
		 $$);

  ($status = fcntl (LOCKFD, &F_SETLK, $flock)) || ($status = -1);

  if ($status == -1)
    {
    die ("something bad with unlocking share file: $share_file\n");
    }

  close (LOCKFD);
  }


--------------BAA9B6ED63D5B620372E2AAA
Content-Type: text/plain; charset=us-ascii;
 name="lock_file.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="lock_file.c"


#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>

const char* share_file = "/design/tmp/dsmith/share.txt";
char linebuf[1024];
int  buf_end_ptr;
int  curr_buf_ptr;

int get_line(char *outstring, int FD);

int main(int argc, char* argv[], char* envp[])
  {
  char *prog_name = argv[0];
  int i, status;
  int LOCKFD;
  flock_t f_struct;   /* Solaris typedef for struct flock */
  char input_line[80];
  char temp_line[80];
  char *user = getenv("USER");
  char *hostname = getenv("HOSTNAME");
  char *jobid    = getenv("LSB_JOBID");


/* The Perl version creates the share file here if it does not exist;
   in C we rely on O_CREAT in the open() call below. */

  for(i=1; i<=10; i++)
    {
    /* Open share file (in read + append) */

    if ((LOCKFD = open(share_file, O_RDWR | O_CREAT, 0666)) < 0)
      {
      fprintf(stderr, "%s - can't open shared file %s\n", prog_name, share_file);
      exit(1);
      }

    f_struct.l_type   = F_WRLCK;
    f_struct.l_whence = SEEK_SET;
    f_struct.l_start  = 0;
    f_struct.l_len    = 0;

    status = fcntl (LOCKFD, F_SETLK, &f_struct);

    if (status)
      {
      fprintf(stderr, "Another session is accessing the share file, waiting for it to finish\n");
      status = fcntl (LOCKFD, F_SETLKW, &f_struct);
      }


    if (status)
      {
      fprintf(stderr, "something bad with shared file\n");
      exit(1);
      }

    buf_end_ptr = 0;
    curr_buf_ptr = 0;

    while(get_line(input_line, LOCKFD))
      {
      if (strstr(input_line, hostname))
	{

	fprintf (stdout, "Hello\n");
	}
      }

    sprintf (temp_line, "hostname %s jobid %s attempt %d\n", hostname, jobid, i);
    write (LOCKFD, temp_line, strlen(temp_line));

    f_struct.l_type   = F_UNLCK;
    f_struct.l_whence = SEEK_SET;
    f_struct.l_start  = 0;
    f_struct.l_len    = 0;

    status = fcntl (LOCKFD, F_SETLK, &f_struct);

    if (status)
      {
      fprintf(stderr, "something bad with unlocking share file\n");
      exit(1);
      }

    close(LOCKFD);
    }

  exit(0);
  }

int get_line(char *outstring, int FD)   /* NB: no bounds check on outstring */
  {
  char *lineptr = outstring;
  int index = 0;
  int line_read = 0;
  int num_read;
  int eol_not_found = 1;

  if (buf_end_ptr == 0)
    {
    buf_end_ptr = read(FD, linebuf, 1024);
    curr_buf_ptr = 0;
    if (buf_end_ptr < 0)
      {
      fprintf(stderr, "something bad with reading file\n");
      exit(1);
      }
    }
  while (eol_not_found)
    {
    if (curr_buf_ptr >= buf_end_ptr)
      {
      buf_end_ptr = read(FD, linebuf, 1024);
      curr_buf_ptr = 0;
      if (buf_end_ptr < 0)
	{
	fprintf(stderr, "something bad with reading file\n");
	exit(1);
	}
      }
    if (buf_end_ptr  == 0)
      {
      lineptr[0] = '\0';
      return line_read;
      }
    line_read = 1;
    if ((*lineptr = linebuf[curr_buf_ptr++]) == '\n')
      {
      lineptr[1] = '\0';
      return line_read;
      }
    lineptr++;
    }
  }

--------------BAA9B6ED63D5B620372E2AAA
Content-Type: text/plain; charset=us-ascii;
 name="share.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="share.txt"

hostname stonecrop jobid 25997 attempt 1
hostname stonecrop jobid 25997 attempt 2
hostname stonecrop jobid 25997 attempt 3
hostname stonecrop jobid 25997 attempt 4
hostname stonecrop jobid 25997 attempt 5
hostname stonecrop jobid 25997 attempt 6
hostname stonecrop jobid 25997 attempt 7
hostname stonecrop jobid 25997 attempt 8
hostname stonecrop jobid 25997 attempt 9
hostname stonecrop jobid 25997 attempt 10
hostname sumo jobid 25999 attempt 1
hostname sumo jobid 25999 attempt 2
hostname sumo jobid 25999 attempt 3
hostname sumo jobid 25999 attempt 4
hostname sumo jobid 25999 attempt 5
hostname sumo jobid 25999 attempt 6
hostname sumo jobid 25999 attempt 7
hostname sumo jobid 25999 attempt 8
hostname sumo jobid 25999 attempt 9
hostname sumo jobid 25999 attempt 10
hostname airedale jobid 26000 attempt 1
hostname airedale jobid 26000 attempt 2
hostname airedale jobid 26000 attempt 3
hostname airedale jobid 26000 attempt 4
hostname airedale jobid 26000 attempt 5
hostname airedale jobid 26000 attempt 6
hostname airedale jobid 26000 attempt 7
hostname airedale jobid 26000 attempt 8
hostname airedale jobid 26000 attempt 9
hostname airedale jobid 26000 attempt 10
hostname stonecrop jobid 26001 attempt 1
hostname stonecrop jobid 26001 attempt 2
hostname stonecrop jobid 26001 attempt 3
hostname stonecrop jobid 26001 attempt 4
hostname stonecrop jobid 26001 attempt 5
hostname stonecrop jobid 26001 attempt 6
hostname stonecrop jobid 26001 attempt 7
hostname stonecrop jobid 26001 attempt 8
hostname stonecrop jobid 26001 attempt 9
hostname stonecrop jobid 26001 attempt 10
hostname airedale jobid 26003 attempt 1
hostname airedale jobid 26003 attempt 2
hostname airedale jobid 26003 attempt 3
hostname airedale jobid 26003 attempt 4
hostname airedale jobid 26003 attempt 5
hostname airedale jobid 26003 attempt 6
hostname airedale jobid 26003 attempt 7
hostname airedale jobid 26003 attempt 8
hostname airedale jobid 26003 attempt 9
hostname airedale jobid 26003 attempt 10
hostname stonecrop jobid 26009 attempt 1
hostname stonecrop jobid 26009 attempt 2
hostname stonecrop jobid 26009 attempt 3
hostname stonecrop jobid 26009 attempt 4
hostname stonecrop jobid 26009 attempt 5
hostname stonecrop jobid 26009 attempt 6
hostname stonecrop jobid 26009 attempt 7
hostname stonecrop jobid 26009 attempt 8
hostname stonecrop jobid 26009 attempt 9
hostname stonecrop jobid 26009 attempt 10
hostname airedale jobid 26010 attempt 1
hostname airedale jobid 26010 attempt 2
hostname airedale jobid 26010 attempt 3
hostname airedale jobid 26010 attempt 4
hostname airedale jobid 26010 attempt 5
hostname airedale jobid 26010 attempt 6
hostname airedale jobid 26010 attempt 7
hostname airedale jobid 26010 attempt 8
hostname airedale jobid 26010 attempt 9
hostname airedale jobid 26010 attempt 10
hostname airedale jobid 26018 attempt 1
hostname airedale jobid 26018 attempt 2
hostname airedale jobid 26018 attempt 3
hostname airedale jobid 26018 attempt 4
hostname airedale jobid 26018 attempt 5
hostname airedale jobid 26018 attempt 6
hostname airedale jobid 26018 attempt 7
hostname airedale jobid 26018 attempt 8
hostname airedale jobid 26018 attempt 9
hostname airedale jobid 26018 attempt 10
hostname pomeranian jobid 26005 attempt 1
hostname pomeranian jobid 26005 attempt 2
hostname pomeranian jobid 26005 attempt 3
hostname pomeranian jobid 26005 attempt 4
hostname pomeranian jobid 26005 attempt 5
hostname pomeranian jobid 26005 attempt 6
hostname pomeranian jobid 26005 attempt 7
hostname pomeranian jobid 26005 attempt 8
hostname pomeranian jobid 26005 attempt 9
hostname stonecrop jobid 26017 attempt 1
hostname pomeranian jobid 26005 attempt 10
hostname stonecrop jobid 26017 attempt 3
hostname stonecrop jobid 26017 attempt 4
hostname stonecrop jobid 26017 attempt 5
hostname stonecrop jobid 26017 attempt 6
hostname stonecrop jobid 26017 attempt 7
hostname stonecrop jobid 26017 attempt 8
hostname stonecrop jobid 26017 attempt 9
hostname stonecrop jobid 26017 attempt 10
hostname pointer jobid 26007 attempt 1
hostname pointer jobid 26007 attempt 2
hostname pointer jobid 26007 attempt 3
hostname pointer jobid 26007 attempt 4
hostname pointer jobid 26007 attempt 5
 1
hostname technetium jobid 26004 attempt 2
hostname pointer jobid 26007 attempt 6
 3
hostname technetium jobid 26004 attempt 4
hostname pointer jobid 26007 attempt 7
 5
hostname technetium jobid 26004 attempt 6
hostname technetium jobid 26004 attempt 7
hostname technetium jobid 26004 attempt 8
hostname pointer jobid 26007 attempt 9
 9
hostname technetium jobid 26004 attempt 10
hostname pointer jobid 26007 attempt 10
hostname ytterbium jobid 26029 attempt 1
hostname ytterbium jobid 26029 attempt 2
hostname ytterbium jobid 26029 attempt 3
hostname ytterbium jobid 26029 attempt 4
hostname ytterbium jobid 26029 attempt 5
hostname ytterbium jobid 26029 attempt 6
hostname ytterbium jobid 26029 attempt 7
hostname ytterbium jobid 26029 attempt 8
hostname ytterbium jobid 26029 attempt 9
hostname ytterbium jobid 26029 attempt 10
hostname cornishrex jobid 26033 attempt 1
hostname cornishrex jobid 26033 attempt 2
hostname cornishrex jobid 26033 attempt 3
hostname cornishrex jobid 26033 attempt 4
hostname cornishrex jobid 26033 attempt 5
hostname cornishrex jobid 26033 attempt 6
hostname cornishrex jobid 26033 attempt 7
hostname cornishrex jobid 26033 attempt 8
hostname cornishrex jobid 26033 attempt 9
hostname collie jobid 26011 attempt 2
hostname cornishrex jobid 26033 attempt 10
hostname collie jobid 26011 attempt 3
hostname collie jobid 26011 attempt 4
hostname collie jobid 26011 attempt 5
hostname collie jobid 26011 attempt 6
hostname collie jobid 26011 attempt 7
hostname collie jobid 26011 attempt 8
hostname collie jobid 26011 attempt 9
hostname collie jobid 26011 attempt 10
hostname harry jobid 26035 attempt 1
hostname harry jobid 26035 attempt 2
hostname harry jobid 26035 attempt 3
hostname harry jobid 26035 attempt 4
hostname harry jobid 26035 attempt 5
hostname harry jobid 26035 attempt 6
hostname harry jobid 26035 attempt 7
hostname harry jobid 26035 attempt 8
hostname harry jobid 26035 attempt 9
hostname harry jobid 26035 attempt 10
mpt 1
hostname protactinium jobid 26016 attempt 2
hostname platinum jobid 26015 attempt 2
hostname platinum jobid 26015 attempt 3
t 3
hostname platinum jobid 26015 attempt 4
hostname platinum jobid 26015 attempt 5
t 4
hostname platinum jobid 26015 attempt 6
hostname platinum jobid 26015 attempt 7
t 5
hostname platinum jobid 26015 attempt 8
hostname platinum jobid 26015 attempt 9
t 6
hostname platinum jobid 26015 attempt 10
hostname protactinium jobid 26016 attempt 7
hostname protactinium jobid 26016 attempt 8
hostname protactinium jobid 26016 attempt 9
hostname protactinium jobid 26016 attempt 10
hostname osmian jobid 26014 attempt 1
hostname osmian jobid 26014 attempt 2
hostname osmian jobid 26014 attempt 3
hostname osmian jobid 26014 attempt 4
hostname osmian jobid 26014 attempt 5
hostname osmian jobid 26014 attempt 6
hostname osmian jobid 26014 attempt 7
hostname osmian jobid 26014 attempt 8
hostname osmian jobid 26014 attempt 9
hostname osmian jobid 26014 attempt 10
hostname boxer jobid 26034 attempt 1
hostname boxer jobid 26034 attempt 2
hostname boxer jobid 26034 attempt 3
hostname boxer jobid 26034 attempt 4
hostname boxer jobid 26034 attempt 5
hostname boxer jobid 26034 attempt 6
hostname boxer jobid 26034 attempt 7
hostname boxer jobid 26034 attempt 8
hostname boxer jobid 26034 attempt 9
hostname boxer jobid 26034 attempt 10
hostname lanthanum jobid 26013 attempt 1
hostname lanthanum jobid 26013 attempt 2
hostname lanthanum jobid 26013 attempt 3
hostname lanthanum jobid 26013 attempt 4
hostname lanthanum jobid 26013 attempt 5
hostname lanthanum jobid 26013 attempt 6
hostname lanthanum jobid 26013 attempt 7
hostname lanthanum jobid 26013 attempt 8
hostname lanthanum jobid 26013 attempt 9
hostname lanthanum jobid 26013 attempt 10
hostname terrier jobid 26006 attempt 1
hostname terrier jobid 26006 attempt 2
hostname terrier jobid 26006 attempt 3
hostname terrier jobid 26006 attempt 4
hostname terrier jobid 26006 attempt 5
hostname terrier jobid 26006 attempt 6
hostname terrier jobid 26006 attempt 7
hostname terrier jobid 26006 attempt 8
hostname terrier jobid 26006 attempt 9
hostname terrier jobid 26006 attempt 10
hostname stbernard jobid 26022 attempt 1
hostname stbernard jobid 26022 attempt 2
hostname stbernard jobid 26022 attempt 3
hostname stbernard jobid 26022 attempt 4
hostname xeon jobid 26008 attempt 1
hostname stbernard jobid 26022 attempt 5
hostname xeon jobid 26008 attempt 2
hostname stbernard jobid 26022 attempt 6
hostname xeon jobid 26008 attempt 3
hostname stbernard jobid 26022 attempt 7
hostname xeon jobid 26008 attempt 4
hostname stbernard jobid 26022 attempt 8
hostname xeon jobid 26008 attempt 5
hostname stbernard jobid 26022 attempt 9
hostname xeon jobid 26008 attempt 6
hostname stbernard jobid 26022 attempt 10
hostname xeon jobid 26008 attempt 7
hostname xeon jobid 26008 attempt 8
hostname xeon jobid 26008 attempt 9
hostname airedale jobid 26040 attempt 1
hostname xeon jobid 26008 attempt 10
hostname airedale jobid 26040 attempt 2
hostname airedale jobid 26040 attempt 3
hostname airedale jobid 26040 attempt 4
hostname airedale jobid 26040 attempt 5
hostname airedale jobid 26040 attempt 6
hostname airedale jobid 26040 attempt 7
hostname airedale jobid 26040 attempt 8
hostname silver jobid 26019 attempt 1
9
hostname airedale jobid 26040 attempt 10
hostname silver jobid 26019 attempt 2
hostname mercury jobid 26021 attempt 2
hostname silver jobid 26019 attempt 3
hostname mercury jobid 26021 attempt 3
hostname silver jobid 26019 attempt 4
hostname mercury jobid 26021 attempt 4
hostname silver jobid 26019 attempt 5
hostname mercury jobid 26021 attempt 5
hostname silver jobid 26019 attempt 6
hostname mercury jobid 26021 attempt 6
hostname silver jobid 26019 attempt 7
hostname mercury jobid 26021 attempt 7
hostname silver jobid 26019 attempt 8
hostname mercury jobid 26021 attempt 8
hostname mercury jobid 26021 attempt 9
ostname stonecrop jobid 26045 attempt 1

hostname neodymium jobid 26026 attempt 1

hostname gold jobid 26024 attempt 1
hostname molybdenum jobid 26027 attempt 1
hostname elkhound jobid 26031 attempt 1
hostname silver jobid 26019 attempt 10
hostname dysprosium jobid 26023 attempt 2
hostname neodymium jobid 26026 attempt 2

hostname molybdenum jobid 26027 attempt 2
hostname elkhound jobid 26031 attempt 2
hostname stonecrop jobid 26045 attempt 3
hostname pomeranian jobid 26044 attempt 3
hostname molybdenum jobid 26027 attempt 3hostname elkhound jobid 26031 attempt 3
hostname gold jobid 26024 attempt 3
hostname stonecrop jobid 26045 attempt 4
hostname dysprosium jobid 26023 attempt 4
hostname neodymium jobid 26026 attempt 4

hostname elkhound jobid 26031 attempt 4
4
hostname gold jobid 26024 attempt 4
hostname stonecrop jobid 26045 attempt 5
hostname pomeranian jobid 26044 attempt 5
hostname molybdenum jobid 26027 attempt 5
hostname elkhound jobid 26031 attempt 5
hostname gold jobid 26024 attempt 5
hostname stonecrop jobid 26045 attempt 6
hostname pomeranian jobid 26044 attempt 6
hostname neodymium jobid 26026 attempt 6
hostname elkhound jobid 26031 attempt 6
6
hostname gold jobid 26024 attempt 6
hostname stonecrop jobid 26045 attempt 7

hostname neodymium jobid 26026 attempt 7

hostname molybdenum jobid 26027 attempt 7
hostname gold jobid 26024 attempt 7
t 7
hostname stonecrop jobid 26045 attempt 8

hostname neodymium jobid 26026 attempt 8

hostname molybdenum jobid 26027 attempt 8
hostname elkhound jobid 26031 attempt 8
hostname gold jobid 26024 attempt 8
hostname wiemaraner jobid 26043 attempt 3
1
hostname stonecrop jobid 26045 attempt 9
hostname neodymium jobid 26026 attempt 9

hostname molybdenum jobid 26027 attempt 9
hostname pomeranian jobid 26044 attempt 9
hostname elkhound jobid 26031 attempthoshostname praseodymium jobid 26039 attempt 2
hostname stonecrop jobid 26045 attempt 10
hostname neodymium jobid 26026 attempt 10

hostname pomeranian jobid 26044 attempt 10
hostname elkhound jobid 26031 attempt 10
hostname gold jobid 26024 attempt 10
empt 3
hostname halon jobid 26020 attempt 3
pt 5
hostname praseodymium jobid 26039 attempt 4
hostname halon jobid 26020 attempt 4
pt 6
hostname praseodymium jobid 26039 attempt 5
hostname wiemaraner jobid 26043 attempt 7
hostname praseodymium jobid 26039 attempt 6
hostname wiemaraner jobid 26043 attempt 8
hostname halon jobid 26020 attempt 6
hostname wiemaraner jobid 26043 attempt 9
7
hostname halon jobid 26020 attempt 7
hostname wiemaraner jobid 26043 attempt 10

hostname praseodymium jobid 26039 attempt 9
hostname praseodymium jobid 26039 attempt 10
hostname halon jobid 26020 attempt 10
hostname americium jobid 26036 attempt 1
hostname americium jobid 26036 attempt 2
hostname americium jobid 26036 attempt 3
hostname americium jobid 26036 attempt 4
hostname americium jobid 26036 attempt 5
hostname americium jobid 26036 attempt 6
hostname americium jobid 26036 attempt 7
hostname americium jobid 26036 attempt 8
hostname americium jobid 26036 attempt 9
hostname americium jobid 26036 attempt 10
hostname niobium jobid 26041 attempt 1
hostname niobium jobid 26041 attempt 2
hostname niobium jobid 26041 attempt 3
hostname niobium jobid 26041 attempt 4
hostname niobium jobid 26041 attempt 5
hostname niobium jobid 26041 attempt 6
hostname niobium jobid 26041 attempt 7
hostname niobium jobid 26041 attempt 8
hostname niobium jobid 26041 attempt 9
hostname niobium jobid 26041 attempt 10
hostname mercury jobid 26021 attempt 10
hostname indium jobid 26028 attempt 1
hostname indium jobid 26028 attempt 2
hostname indium jobid 26028 attempt 3
hostname indium jobid 26028 attempt 4
hostname indium jobid 26028 attempt 5
hostname indium jobid 26028 attempt 6
hostname indium jobid 26028 attempt 7
hostname indium jobid 26028 attempt 8
hostname indium jobid 26028 attempt 9
hostname indium jobid 26028 attempt 10
hostname barium jobid 26046 attempt 1
hostname barium jobid 26046 attempt 2
hostname barium jobid 26046 attempt 3
hostname barium jobid 26046 attempt 4
hostname barium jobid 26046 attempt 5
hostname barium jobid 26046 attempt 6
hostname barium jobid 26046 attempt 7
hostname barium jobid 26046 attempt 8
hostname barium jobid 26046 attempt 9
hostname barium jobid 26046 attempt 10
hostname californium jobid 26037 attempt 1
hostname californium jobid 26037 attempt 2
hostname californium jobid 26037 attempt 3
hostname californium jobid 26037 attempt 4
hostname californium jobid 26037 attempt 5
hostname californium jobid 26037 attempt 6
hostname californium jobid 26037 attempt 7
hostname californium jobid 26037 attempt 8
hostname californium jobid 26037 attempt 9
hostname californium jobid 26037 attempt 10
hostname berkelium jobid 26038 attempt 1
hostname berkelium jobid 26038 attempt 2
hostname thulium jobid 26032 attempt 1
hostname berkelium jobid 26038 attempt 3
hostname thulium jobid 26032 attempt 2
hostname berkelium jobid 26038 attempt 4
hostname thulium jobid 26032 attempt 3
hostname berkelium jobid 26038 attempt 5
hostname thulium jobid 26032 attempt 4
hostname tin jobid 26030 attempt 1
mpt 6
hostname radium jobid 26042 attempt 1
hostname thulium jobid 26032 attempt 5
hostname berkelium jobid 26038 attempt 7
hostname tin jobid 26030 attempt 2
hostname radium jobid 26042 attempt 2
hostname thulium jobid 26032 attempt 6
hostname berkelium jobid 26038 attempt 8
hostname tin jobid 26030 attempt 3
hostname radium jobid 26042 attempt 3
hostname thulium jobid 26032 attempt 7
hostname berkelium jobid 26038 attempt 9
hostname tin jobid 26030 attempt 4
hostname radium jobid 26042 attempt 4
hostname thulium jobid 26032 attempt 8
hostname berkelium jobid 26038 attempt 10
hostname tin jobid 26030 attempt 5
hostname radium jobid 26042 attempt 5
hostname thulium jobid 26032 attempt 9
hostname tin jobid 26030 attempt 6
hostname radium jobid 26042 attempt 6
hostname thulium jobid 26032 attempt 10
hostname radium jobid 26042 attempt 7
hostname tin jobid 26030 attempt 8
hostname radium jobid 26042 attempt 8
hostname tin jobid 26030 attempt 9
hostname radium jobid 26042 attempt 9
hostname tin jobid 26030 attempt 10
hostname radium jobid 26042 attempt 10
hostname stonecrop jobid 26047 attempt 1
hostname stonecrop jobid 26047 attempt 2
hostname stonecrop jobid 26047 attempt 3
hostname stonecrop jobid 26047 attempt 4
hostname stonecrop jobid 26047 attempt 5
hostname stonecrop jobid 26047 attempt 6
hostname stonecrop jobid 26047 attempt 7
hostname stonecrop jobid 26047 attempt 8
hostname stonecrop jobid 26047 attempt 9
hostname stonecrop jobid 26047 attempt 10
hostname pomeranian jobid 26049 attempt 1
hostname pomeranian jobid 26049 attempt 2
hostname pomeranian jobid 26049 attempt 3
hostname pomeranian jobid 26049 attempt 4
hostname pomeranian jobid 26049 attempt 5
hostname terrier jobid 26050 attempt 1
hostname pomeranian jobid 26049 attempt 6
hostname terrier jobid 26050 attempt 2
hostname pomeranian jobid 26049 attempt 7
hostname terrier jobid 26050 attempt 3
hostname pomeranian jobid 26049 attempt 8
hostname terrier jobid 26050 attempt 4
hostname osmian jobid 26052 attempt 1
hostname pomeranian jobid 26049 attempt 9
hostname terrier jobid 26050 attempt 5
hostname osmian jobid 26052 attempt 2
hostname pomeranian jobid 26049 attempt 10
hostname osmian jobid 26052 attempt 4

hostname terrier jobid 26050 attempt 8
hostname osmian jobid 26052 attempt 6
hostname terrier jobid 26050 attempt 9
hostname osmian jobid 26052 attempt 7
hostname terrier jobid 26050 attempt 10
hostname osmian jobid 26052 attempt 8
hostname osmian jobid 26052 attempt 9
hostname osmian jobid 26052 attempt 10
hostname protactinium jobid 26054 attempt 1
hostname protactinium jobid 26054 attempt 2
hostname protactinium jobid 26054 attempt 3
hostname protactinium jobid 26054 attempt 4
hostname protactinium jobid 26054 attempt 5
hostname mercury jobid 26055 attempt 1
hostname protactinium jobid 26054 attempt 6
hostname mercury jobid 26055 attempt 2
hostname protactinium jobid 26054 attempt 7
hostname mercury jobid 26055 attempt 3
hostname protactinium jobid 26054 attempt 8
hostname mercury jobid 26055 attempt 4
hostname protactinium jobid 26054 attempt 9
hostname mercury jobid 26055 attempt 5
hostname protactinium jobid 26054 attempt 10
hostname mercury jobid 26055 attempt 6
hostname mercury jobid 26055 attempt 7
hostname mercury jobid 26055 attempt 8
hostname mercury jobid 26055 attempt 9
hostname mercury jobid 26055 attempt 10
hostname stbernard jobid 26059 attempt 1
hostname stbernard jobid 26059 attempt 2
hostname stbernard jobid 26059 attempt 3
hostname stbernard jobid 26059 attempt 4
hostname stbernard jobid 26059 attempt 5
hostname dysprosium jobid 26060 attempt 1
hostname stbernard jobid 26059 attempt 6
hostname gold jobid 26061 attempt 1
hostname dysprosium jobid 26060 attempt 2
hostname stbernard jobid 26059 attempt 7
hostname gold jobid 26061 attempt 2
hostname dysprosium jobid 26060 attempt 3
hostname stbernard jobid 26059 attempt 8
hostname gold jobid 26061 attempt 3
hostname dysprosium jobid 26060 attempt 4
hostname stbernard jobid 26059 attempt 9
hostname gold jobid 26061 attempt 4
hostname dysprosium jobid 26060 attempt 5
hostname stbernard jobid 26059 attempt 10
hostname gold jobid 26061 attempt 5
hostname dysprosium jobid 26060 attempt 6
hostname gold jobid 26061 attempt 6
hostname radium jobid 26062 attempt 1
hostname dysprosium jobid 26060 attempt 7
hostname gold jobid 26061 attempt 7
hostname radium jobid 26062 attempt 2
hostname gold jobid 26061 attempt 8
mpt 8
hostname radium jobid 26062 attempt 3
hostname gold jobid 26061 attempt 9
mpt 9
hostname radium jobid 26062 attempt 4
hostname neodymium jobid 26063 attempt 1
hostname osmian jobid 26058 attempt 1
hostname dysprosium jobid 26060 attempt 10
hostname gold jobid 26061 attempt 10
hostname tin jobid 26068 attempt 1
hostname radium jobid 26062 attempt 5
hostname indium jobid 26064 attempt 2
 2
hostname osmian jobid 26058 attempt 2
hostname tin jobid 26068 attempt 2
hostname radium jobid 26062 attempt 6
hostname neodymium jobid 26063 attempt 3
hostname indium jobid 26064 attempt 3
hostname osmian jobid 26058 attempt 3
hostname tin jobid 26068 attempt 3
pt 1
hostname radium jobid 26062 attempt 7
hostname neodymium jobid 26063 attempt 4
hostname indium jobid 26064 attempt 4
hostname osmian jobid 26058 attempt 4
hostname elkhound jobid 26069 attempt 2
hostname tin jobid 26068 attempt 4
hostname radium jobid 26062 attempt 8
hostname neodymium jobid 26063 attempt 5
hostname indium jobid 26064 attempt 5
hostname osmian jobid 26058 attempt 5
hostname elkhound jobid 26069 attempt 3
hostname tin jobid 26068 attempt 5
hostname radium jobid 26062 attempt 9
hostname neodymium jobid 26063 attempt 6
hostname osmian jobid 26058 attempt 6
hostname elkhound jobid 26069 attempt 4
hostname tin jobid 26068 attempt 6
hostname radium jobid 26062 attempt 10
hostname indium jobid 26064 attempt 7
 7
hostname osmian jobid 26058 attempt 7
hostname elkhound jobid 26069 attempt 5
hostname tin jobid 26068 attempt 7
hostname indium jobid 26064 attempt 8
 8
hostname osmian jobid 26058 attempt 8
hostname elkhound jobid 26069 attempt 6
hostname tin jobid 26068 attempt 8
hostname neodymium jobid 26063 attempthostname elkhound jobid 26069 attempt 7
hostname tin jobid 26068 attempt 9
hostname indium jobid 26064 attempt 10
hostname osmian jobid 26058 attempt 10
hostname neodymium jobid 26063 attempt 10
hostname elkhound jobid 26069 attempt 8
hostname tin jobid 26068 attempt 10
hostname elkhound jobid 26069 attempt 9
hostname elkhound jobid 26069 attempt 10
hostname thulium jobid 26075 attempt 1
hostname thulium jobid 26075 attempt 2
hostname thulium jobid 26075 attempt 3
hostname thulium jobid 26075 attempt 4
hostname thulium jobid 26075 attempt 5
hostname thulium jobid 26075 attempt 6
hostname thulium jobid 26075 attempt 7
hostname thulium jobid 26075 attempt 8
hostname thulium jobid 26075 attempt 9
hostname thulium jobid 26075 attempt 10
hostname spaniel jobid 26065 attempt 1
hostname spaniel jobid 26065 attempt 2
hostname spaniel jobid 26065 attempt 3
hostname spaniel jobid 26065 attempt 4
hostname spaniel jobid 26065 attempt 5
hostname collie jobid 26051 attempt 1
hostname spaniel jobid 26065 attempt 6
hostname collie jobid 26051 attempt 2
hostname spaniel jobid 26065 attempt 7
hostname stbernard jobid 26070 attempt 1
hostname terrier jobid 26057 attempt 1
 1
hostname spaniel jobid 26065 attempt 8
hostname stbernard jobid 26070 attempt 2
hostname terrier jobid 26057 attempt 2
 2
hostname stonecrop jobid 26053 attempt 1
hostname spaniel jobid 26065 attempt 9
hostname stbernard jobid 26070 attempt 3
hostname pomeranian jobid 26056 attempt 3
hostname terrier jobid 26057 attempt 3
hostname collie jobid 26051 attempt 5
hostname stonecrop jobid 26053 attempt 2
hostname spaniel jobid 26065 attempt 10
hostname protactinium jobid 26066 attempt 1
hostname cornishrex jobid 26071 attempt 1
hostname stbernard jobid 26070 attempt 4
hostname pomeranian jobid 26056 attempt 4
hostname terrier jobid 26057 attempt 4
hostname collie jobid 26051 attempt 6
hostname stonecrop jobid 26053 attempt 3
hostname mercury jobid 26067 attempt 2
hostname protactinium jobid 26066 attempt 2
hostname cornishrex jobid 26071 attempt 2
hostname stbernard jobid 26070 attempt 5
hostname pomeranian jobid 26056 attempt 5
hostname terrier jobid 26057 attempt 5
hostname collie jobid 26051 attempt 7
hostname stonecrop jobid 26053 attempt 4
hostname mercury jobid 26067 attempt 3
hostname harry jobid 26073 attempt 1
hostname cornishrex jobid 26071 attempt 3
3
hostname stbernard jobid 26070 attempt 6
hostname terrier jobid 26057 attempt 6
 6
hostname collie jobid 26051 attempt 8
hostname stonecrop jobid 26053 attempt 5
hostname mercury jobid 26067 attempt 4
hostname harry jobid 26073 attempt 2
hostname cornishrex jobid 26071 attempt 4
4
hostname stbernard jobid 26070 attempt 7
hostname terrier jobid 26057 attempt 7
 7
hostname collie jobid 26051 attempt 9
hostname stonecrop jobid 26053 attempt 6
hostname harry jobid 26073 attempt 3
5
hostname cornishrex jobid 26071 attempt 5
5
hostname stbernard jobid 26070 attempt 8
hostname terrier jobid 26057 attempt 8
 8
hostname collie jobid 26051 attempt 10
hostname stonecrop jobid 26053 attempt 7
hostname harry jobid 26073 attempt 4
6
hostname cornishrex jobid 26071 attempt 6
6
hostname stbernard jobid 26070 attempt 9
hostname terrier jobid 26057 attempt 9
 9
hostname stonecrop jobid 26053 attempt 8
hostname harry jobid 26073 attempt 5
7
hostname cornishrex jobid 26071 attempt 7
7
hostname stbernard jobid 26070 attempt 10
hostname pomeranian jobid 26056 attempt 10
hostname terrier jobid 26057 attempt 10
hostname stonecrop jobid 26053 attempt 9
hostname mercury jobid 26067 attempt 8
hostname harry jobid 26073 attempt 6
hostname cornishrex jobid 26071 attempt 8
8
hostname stonecrop jobid 26053 attempt 10
hostname mercury jobid 26067 attempt 9
hostname harry jobid 26073 attempt 7
hostname cornishrex jobid 26071 attempt 9
9
hostname mercury jobid 26067 attempt 10
hostname harry jobid 26073 attempt 8
hostname cornishrex jobid 26071 attempt 10
0
hostname harry jobid 26073 attempt 9
hostname nitrogen jobid 26074 attempt 1
hostname briard jobid 26072 attempt 1
hostname harry jobid 26073 attempt 10
hostname nitrogen jobid 26074 attempt 2
hostname briard jobid 26072 attempt 2
hostname nitrogen jobid 26074 attempt 3
hostname briard jobid 26072 attempt 3
hostname nitrogen jobid 26074 attempt 4
hostname briard jobid 26072 attempt 4
hostname nitrogen jobid 26074 attempt 5
hostname briard jobid 26072 attempt 5
hostname nitrogen jobid 26074 attempt 6
hostname briard jobid 26072 attempt 6
hostname nitrogen jobid 26074 attempt 7
hostname briard jobid 26072 attempt 7
hostname nitrogen jobid 26074 attempt 8
hostname briard jobid 26072 attempt 8
hostname nitrogen jobid 26074 attempt 9
hostname briard jobid 26072 attempt 9
hostname nitrogen jobid 26074 attempt 10
hostname briard jobid 26072 attempt 10

--------------BAA9B6ED63D5B620372E2AAA--



------------------------------

Date: Mon, 2 Apr 2001 11:24:40 +0200
From: Rickard <d96-rsa@nada.kth.se>
Subject: Force Integer Division??
Message-Id: <Pine.SOL.4.30.0104021122030.5789-100000@my.nada.kth.se>

Hi

I read that a division of two integers should produce an integer
answer, but that isn't true on my computer.

12/10 = 1.2

How can I force Perl to perform an integer division?  I want the
answer to be simply "1".

Please reply to my email:

d96-rsa@d.kth.se

Thanx!!!

/rick



------------------------------

Date: Tue, 03 Apr 2001 23:26:39 GMT
From: trammell@bayazid.hypersloth.invalid (John Joseph Trammell)
Subject: Re: Force Integer Division??
Message-Id: <slrn9ckkvv.cgt.trammell@bayazid.hypersloth.net>

On Mon, 2 Apr 2001 11:24:40 +0200, Rickard <d96-rsa@nada.kth.se> wrote:
> How can I force Perl to perform an integer division, I want the
> answer to be simply "1".

[ ~ ] perl -e 'print 12/10, "\n"'
1.2
[ ~ ] perl -Minteger -e 'print 12/10, "\n"'
1
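Two more points worth knowing: `use integer` is lexically scoped, so you can
confine it to a single block, and `int()` simply truncates the result of an
ordinary division. A small sketch of both:

```perl
use strict;

# int() truncates toward zero after an ordinary floating-point division
my $q = int(12 / 10);
print "$q\n";            # 1

# 'use integer' is lexical, so it can be confined to one block
my $r = do { use integer; 12 / 10 };
print "$r\n";            # 1
```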



------------------------------

Date: Tue, 03 Apr 2001 18:20:12 -0500
From: Kumar Sundaram <ksundar@landsend.com>
Subject: how I could find how many times my program has run
Message-Id: <3ACA5AAC.E2570F5F@landsend.com>

Does anyone know how I could find out how many times my program has run?

Thanks,
KS



------------------------------

Date: Wed, 4 Apr 2001 00:46:15 +0000 (UTC)
From: efflandt@xnet.com (David Efflandt)
Subject: Re: how I could find how many times my program has run
Message-Id: <slrn9ckrn1.mi.efflandt@efflandt.xnet.com>

On Tue, 03 Apr 2001, Kumar Sundaram <ksundar@landsend.com> wrote:
>Does anyone know how I could find how many times my program has run

Have it write to a log file: read, update, and rewrite a counter there, or
log whatever else you want.  But you should open the file for read/write,
flock it, update the count, truncate the file if necessary, and save it.

If you open it for read, close it, and then reopen it for write, another
instance of the script could corrupt your log in the window between the
two opens.
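A minimal sketch of such a counter (the filename is just an example):

```perl
#!/usr/bin/perl
use strict;
use Fcntl qw(:DEFAULT :flock);

# Open read/write, creating the file if it doesn't exist yet.
sysopen(my $fh, 'counter.txt', O_RDWR | O_CREAT) or die "open: $!";
flock($fh, LOCK_EX) or die "flock: $!";    # exclusive lock

my $count = <$fh> || 0;                    # current count, 0 if empty
$count++;

seek($fh, 0, 0)  or die "seek: $!";        # rewind before rewriting
truncate($fh, 0) or die "truncate: $!";    # discard the old contents
print $fh "$count\n";
close($fh) or die "close: $!";             # releases the lock

print "This program has now run $count times\n";
```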

-- 
David Efflandt  efflandt@xnet.com  http://www.de-srv.com/
http://www.autox.chicago.il.us/  http://www.berniesfloral.net/
http://cgi-help.virtualave.net/  http://hammer.prohosting.com/~cgi-wiz/


------------------------------

Date: Tue, 03 Apr 2001 22:11:46 GMT
From: "MGD" <mgd@converging.net>
Subject: how to create a new user by script?
Message-Id: <CUry6.81417$Lm2.10053913@news0.telusplanet.net>

I am trying to automate setting up web pages and new users on a web and mail
server. I have scripted the web-page setup for Apache, but I am stumped as
to how to specify an encrypted password for a new user. I can edit the
password file with vipw but, of course, the password field has to be in
encrypted form. Any hints or URLs...?

I am using FreeBSD 3.4. I usually use the adduser utility to create a
user manually -- a rather tedious process.
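For what it's worth, Perl's built-in crypt() calls the system's crypt(3), so
a script can generate the encrypted field itself before splicing a line into
the password file. A sketch (the password is only a placeholder):

```perl
use strict;

# Build a two-character salt from the traditional crypt alphabet.
my @chars = ('a'..'z', 'A'..'Z', '0'..'9', '.', '/');
my $salt  = join '', map { $chars[rand @chars] } 1 .. 2;

# With a two-character salt this yields a classic 13-character
# DES hash on most systems.
my $hash = crypt('s3cret', $salt);    # 's3cret' is a placeholder
print "$hash\n";
```

Note that FreeBSD may be configured for MD5 passwords, in which case the salt
takes the form '$1$somesalt$' instead; check crypt(3) on your system.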




------------------------------

Date: Tue, 03 Apr 2001 23:59:44 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Subject: Re: Multidimensional Arrays?
Message-Id: <3ACA64D0.1D62B2AB@earthlink.net>

Joe Schaefer wrote:
> 
> Benjamin Goldberg <goldbb2@earthlink.net> writes:
> 
> > Garry T. Williams wrote:
> > >
> > > On Mon, 02 Apr 2001 22:14:52 GMT, Benjamin Goldberg
> > > <goldbb2@earthlink.net> wrote:
> > >
> > > >
> > > > For example:
> > > > my @x;
> > > > $x[$i][$j] = 4 foreach my $i (0..6) foreach my $j (0..3);
> > > >
> > > > Magically makes x into a 7x4 array.
> > >
> > > Funny, it doesn't compile:
> > >
> > > perl -we 'my @x;$x[$i][$j] = 4 foreach my $i (0..6) foreach my $j (0..3);'
> > > syntax error at -e line 1, near "$i ("
> > > Execution of -e aborted due to compilation errors.
> >
> > Umm, oops?
> >
> > Well, the error is with my loop code, not with my array code, anyway.
> >
> 
> Umm, it's a syntax error, so perl can't appreciate the difference.
> But it does tell you where it's getting stuck, and apparently
> you haven't yet understood what perl is complaining about.
> 
> > > > Another way of doing something like this is:
> > > > @$x[$i] = ($a, $b, $c, $d) foreach my $i (0..6);
> > >
> > > Do I detect a pattern?
> >
> > Guess so :)
> >
> > And my next suggestion would have been
> > $x[$i] = [$a, $b, $c, $d] for my $i (0..6);
> >
> > Or something like that :)
> 
>   % perl -wce  '$x[$i] = [$a, $b, $c, $d] for my $i (0..6);'
> 
> Can you guess what happens now?

Probably a syntax error :)

> See
> 
>   % man perlsyn
> 
> in particular the section on "Simple statements".

Sadly, I do not have perl installed on my computer, and am working from
memory from when I was in college.  I would install it, but I have very
little room left on my hdd, and am too cheap to get a bigger hdd.

-- 
Sometimes the journey *is* its own reward--but not when you're trying to get to the bathroom in time.


------------------------------

Date: 3 Apr 2001 23:40:49 GMT
From: damian@qimr.edu.au (Damian James)
Subject: Re: Perl for Unix Sparc (solaris 7.0)
Message-Id: <slrn9cknqa.ii2.damian@puma.qimr.edu.au>

Milliwave chose Tue, 3 Apr 2001 22:32:52 +0100 to say this:
>
>What's the most uptodate version of Perl which is available for the Solaris
>operating system?
>On sun's freeware web site, the most uptodate version for the Solaris 7.0
>operating system
>is perl5.005. Where can I find the binaries for perl?
>

Have you looked at www.cpan.org or www.perl.com? In particular, you should
visit the list of mirrors at http://www.cpan.org/SITES.html and familiarise
yourself with the one nearest you. 

Anyway, you are running unix -- what need have you for binaries? What you
want to do is download the source and compile it. It's not brain surgery
-- you should find sufficient instructions in the file INSTALL that comes with
the source distribution. In brief, you type something like:

	sh Configure -de && make && make test && make install

from the unpacked distribution dir (FSVO 'something like' :-). You should
be able to find the latest stable source distribution hiding as:

	ftp://$YOUR_LOCAL_CPAN_MIRROR/src/stable.tar.gz

HTH,

Cheers,
Damian
-- 
@:=grep!($;+=m!$/|#!),split//,<DATA>;@;=0..$#:;while(@;){for($;=@;;--$;;){;(
$:=rand$;+$|)==$;&&next;@;[$;,$:]=@;[$:,$;]}push@|,shift@;if$;[0]==@|;select
$,,$,,$,,1/80;print qq x\bxx((@;+@|)*$|++),@:[@|,@;],!@;&&$/} __END__
Just another Perl Hacker # rev 3 -- a JAPH in progress, I guess...


------------------------------

Date: Wed, 04 Apr 2001 00:16:56 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Subject: Re: Perl::mySQL
Message-Id: <3ACA68D8.2FFE7C74@earthlink.net>

Richard Dobson wrote:
> 
> Hi,
> I have been set a task that I would like a bit of guidance with if
> possible.
> The idea is that someone wants to track files i.e when they were
> created, editted, proofed etc.

Sounds like you want a Version Control System, like CVS.

> I was thinking of having a form on their intranet that they filled in
> whenever they made alterations to the file.

Versioning systems can do all this for you and more.

> They could then search for files that fitted within certain criteria.

Huh?

> This would rely on them going and filling in the form.

With CVS, whenever you finish editing a file, you run a kind of checkin
command, which does anything you might want such a form to do.

> Would it be possible to make this nicer by having the form
> automatically open whenever a file was closed?

Some file systems and operating systems have ways to actively react to
the opening and closing of files, so that when a file is closed,
something else happens automatically.  So you may be able to find a way
for program x to automatically be run whenever file y is closed. 
However, most systems don't support this.  The closest you can get is to
either *demand* that your users always check in after editing, or
*demand* that they use some particular editing program which will
check in before exiting.

> I am planning it with Perl DBI and mySQL. I would like the files to be
> within the database and to open them from a browser. Is there a
> function within perl for storing documents as opposed to text records
> within mySQL.

Ugh.  Use a real version control system, like CVS.

> Any guidance would be a great help. Is perl the best tool for this?

No, CVS is.

If you want to use Perl as a component, there are places where CVS will
run user specified scripts.  There's no harm in replacing the sh scripts
with perl scripts, if you know what you're doing.

-- 
Sometimes the journey *is* its own reward--but not when you're trying to
get to the bathroom in time.


------------------------------

Date: 03 Apr 2001 23:21:45 GMT
From: dave@sydney.daveb.net (Dave Bailey)
Subject: Re: Please Flame my Benchmark: open vs. cat
Message-Id: <slrn9ckcf2.6oa.dave@sydney.daveb.net>

On Tue, 3 Apr 2001 20:34:07 +0000 (UTC), Abigail <abigail@foad.org> wrote:
>Uri Guttman (uri@sysarch.com) wrote on MMDCCLXXII September MCMXCIII in
><URL:news:x78zlhvs70.fsf@home.sysarch.com>:
>;; >>>>> "A" == Abigail  <abigail@foad.org> writes:
>;; 
>;;   A> I find the Benchmark not very useful. One doesn't repeatedly read
>;;   A> the same file over and over again in a typical program.
>;; 
>;;   A> Furthermore, you are banchmarking the repeat read of a single
>;;   A> file. A good Benchmark would bench a whole range of file sizes
[...]
>;; i disagree. the repeated reading means the file will almost assuredly be
>;; in cache the entire time so the disk access part is removed. then you
>;; are comparing the overhead of the fork/exec of `cat` to the speed of
>;; open. the rest is irrelevent. the bencmark isolates the real difference
>;; between the two methods and leave the disk and other stuff outside.
>
>Which means, it's pretty pointless.
>
>If I need a string 10,000 times in a program, I would not read it
>10,000 times from a file and benchmark which is the fastest.
>
>I'd redesign my program to something less absurd.

Your posts indicate that you don't understand what a benchmark is, nor
do you understand why the poster originally wrote this particular one.

--
Dave Bailey
davidb54@yahoo.com


------------------------------

Date: Tue, 03 Apr 2001 23:46:13 -0000
From: Chris Stith <mischief@velma.motion.net>
Subject: Re: Please Flame my Benchmark: open vs. cat
Message-Id: <tcko65602p854f@corp.supernews.com>

Dave Bailey <dave@sydney.daveb.net> wrote:
> On Tue, 3 Apr 2001 20:34:07 +0000 (UTC), Abigail <abigail@foad.org> wrote:
>>Uri Guttman (uri@sysarch.com) wrote on MMDCCLXXII September MCMXCIII in
>><URL:news:x78zlhvs70.fsf@home.sysarch.com>:
>>;; >>>>> "A" == Abigail  <abigail@foad.org> writes:
>>;; 
>>;;   A> I find the Benchmark not very useful. One doesn't repeatedly read
>>;;   A> the same file over and over again in a typical program.
>>;; 
>>;;   A> Furthermore, you are banchmarking the repeat read of a single
>>;;   A> file. A good Benchmark would bench a whole range of file sizes
> [...]

[snip]

>>If I need a string 10,000 times in a program, I would not read it
>>10,000 times from a file and benchmark which is the fastest.
>>
>>I'd redesign my program to something less absurd.

> Your posts indicate that you don't understand what a benchmark is, nor
> do you understand why the poster originally wrote this particular one.

That is a pretty drastic conclusion to draw from so little context,
especially since Abigail is one of the more respected regulars in
the group.

The fact that Abigail doesn't think it's normal to read the same
file for contents several thousand times in a program doesn't mean
she doesn't recognize the value of a benchmark. She just thinks the
benchmark is silly, considering the task isn't one that will be done
in any real program (outside a dain bramaged design or some example
specifically concocted to do so).

I think if you want to know the difference between cat and open in
this controlled environment, the benchmark is fine. If you want to
measure real-world performance, you need to do as Abigail suggests
by feeding it several hundred or a few thousand different files.
In the latter case, you need to remember that you're benchmarking
your disk subsystem more than your program. If you want the benchmark
to be more relevant, it needs to be rewritten to do something a real
program is likely to do.

In the special case that you've got a fifo file from which you're
reading randomized data, the existing benchmark makes a lot of sense.

Chris

-- 
Christopher E. Stith
For the pleasure of others, please adhere to the following
rules when visiting your park:
    No swimming.  No fishing.  No flying kites.  No frisbees.
    No audio equipment. Stay off grass.  No pets. No running.



------------------------------

Date: Wed, 04 Apr 2001 00:45:18 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Subject: Re: regex-qr// for search and replace
Message-Id: <3ACA6F68.37F3A011@earthlink.net>

W K wrote:
> 
> I want to ge able to save a regular expression. easy, you say.
> $re=qr/interseting word/;
> if ($text =~/$re){ etc. etc.}
> 
> The question is, can I store a search and replace in such a way?
> If I am going to look through hundreds of strings doing search and
> replace on them it would normally require a compilation of the regular
> extression each time(?).

You can use qr// for this.

This code is untested, but shows what I mean.
my @subs = (
	[ qr/foo/, 'bar' ],
	[ qr/bar/, 'baz' ],
);

my @data = <>;
foreach my $sub (@subs) {
	s/$sub->[0]/$sub->[1]/ foreach(@data);
}
print @data;
__END__

Of course, if you need 'bar' and 'baz' to be things containing
variables to be interpolated during the substitution (e.g., $1, $2, etc.),
then I can't help you... I haven't a clue how you would do it.  Or at
least, not efficiently.
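[ A follow-up sketch, not from the original post: one way to handle
replacements that use capture variables is to store a code reference
alongside each pattern and apply it with the /e modifier, so the
replacement is computed at match time, when $1 and friends are set. ]

```perl
use strict;

my @subs = (
    [ qr/(\w+)=(\w+)/, sub { "$2=$1" } ],   # swap key and value
    [ qr/foo/,         sub { "bar" }   ],
);

my @data = ("a=b\n", "foo baz\n");
for my $s (@subs) {
    # /e evaluates the right-hand side as code at match time
    s/$s->[0]/$s->[1]->()/e for @data;
}
print @data;    # prints "b=a" then "bar baz"
```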

> It would seem like a nice optimisation to have, but I didn't see
> anything like this mentioned.
> 
> (The /o wouldn't be useful in my case)

-- 
Sometimes the journey *is* its own reward--but not when you're trying to
get to the bathroom in time.


------------------------------

Date: Tue, 3 Apr 2001 23:51:30 +0100
From: "Dr Joolz" <jxm96c@hotmail.com>
Subject: SVG Perl Libraries
Message-Id: <hvsy6.2903$rP3.459086@news2-win.server.ntlworld.com>

We at the University of Nottingham have recently made available a set of
Perl modules for generating SVG documents in code, both online as CGI and
locally. It is free for general use; a license is available for commercial
use. The website is at

http://broadway.cs.nott.ac.uk/projects/SVG/svgpl/

Yours,
        Jules

***24 hours in a day...24 beers in a case...coincidence?***




------------------------------

Date: 16 Sep 99 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 16 Sep 99)
Message-Id: <null>


Administrivia:

The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc.  For subscription or unsubscription requests, send
the single line:

	subscribe perl-users
or:
	unsubscribe perl-users

to almanac@ruby.oce.orst.edu.  

| NOTE: The mail to news gateway, and thus the ability to submit articles
| through this service to the newsgroup, has been removed. I do not have
| time to individually vet each article to make sure that someone isn't
| abusing the service, and I no longer have any desire to waste my time
| dealing with the campus admins when some fool complains to them about an
| article that has come through the gateway instead of complaining
| to the source.

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V10 Issue 622
**************************************

