
Perl-Users Digest, Issue: 3427 Volume: 8

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Wed Aug 12 12:07:22 1998

Date: Wed, 12 Aug 98 09:01:39 -0700
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Wed, 12 Aug 1998     Volume: 8 Number: 3427

Today's topics:
    Re: Beginners problem? <adam@fastfare.co.uk>
    Re: cgi <alan@find-it.furryferret.uk.com>
    Re: checking if files exist (Abigail)
    Re: dates in excess of 2037 (A Problem???) <khera@kciLink.com>
    Re: Execute a program then return to perl? huntersean@hotmail.com
    Re: File updating question <rootbeer@teleport.com>
    Re: File updating question <rootbeer@teleport.com>
    Re: How to test for failed command open useing FileHand <rootbeer@teleport.com>
    Re: I can't run any perl prog. on my server (Dan Nguyen)
        lexically scoped aliases <matthies@fsinfo.cs.uni-sb.de>
        long story - fork & multiprocessing problem <usenet-replies@rocketmail.com>
        LWP & 'HTTP/1.0' <jon@resonate.com>
    Re: matching problem with (xx)? (Patrick Timmins)
    Re: Need to Lock files (NFS) <tchrist@mox.perl.com>
    Re: Newbie Question About 'for' (root)
    Re: ODBC, Perl, Unix and Macs <thaynes@openlinksw.co.uk>
    Re: Perl 5.005.1 core dumps under Irix 6.2 (Greg Bacon)
    Re: Perl Style <jdporter@min.net>
    Re: Q: open2 : whats wrong? (Andre Merzky)
    Re: re first language <jdporter@min.net>
    Re: re first language (Rich Lafferty)
    Re: search text in specific columns (Patrick Timmins)
    Re: What's the most efficient regex to force NOT matchi huntersean@hotmail.com
        Special: Digest Administrivia (Last modified: 12 Mar 98) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Wed, 12 Aug 1998 15:33:26 +0000
From: Adam Ipnarski <adam@fastfare.co.uk>
To: Heikki Luukkala <hluukkala@bigfoot.com>
Subject: Re: Beginners problem?
Message-Id: <35D1B5C6.827A1890@fastfare.co.uk>

Heikki Luukkala wrote:
> 
> How can I make words beginning with http:// to work as links?
> 
> $text = "Some text http://www.domain.com/page.html some more text."
> 
> How can I change it to:
> "Some text <A
> HREF=http://www.domain.com/page.html>http://www.domain.com/page.html</A>
> some more text."

Use the following regexp (wrapped in this code):

$address = "Go to http://www.domain.com/page.html to see stuff";

$address =~ s!(\bhttp://[^ ]+)!<a href="\1">\1</a>!g; ## ! to prevent LTS

print "HTMLised:$address:\n\n";

This will print:
HTMLised:Go to <a
href="http://www.domain.com/page.html">http://www.domain.com/page.html</a>
to see stuff:

If you don't want the quotes, just remove them.

The regexp works like this: the ! is there to stop having to escape the
//.
The ()s are there to put the matched string into \1
Then, I match <word boundary>http://<followed by any non space
character>
This match is then inserted into the <a href="...">...</a> statement.

If you want to match ftp://, gopher://, mail:// etc., replace the
'http://' in the regexp with '\w+://' - that'll match any run of word
characters followed by ://.
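
For example, the generalised substitution might look something like this
(a quick sketch; note that [^ ]+ will also swallow any trailing
punctuation):

# $1 in the replacement avoids the warning that \1 gives under -w
$address =~ s!(\b\w+://[^ ]+)!<a href="$1">$1</a>!g;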

--adam

Adam Ipnarski
Senior Programmer, Faresearch
adam@fastfare.co.uk | www.travelselect.com


------------------------------

Date: Wed, 12 Aug 1998 16:53:22 +0100
From: Alan Silver <alan@find-it.furryferret.uk.com>
Subject: Re: cgi
Message-Id: <dBK+fJAypb01EwfZ@find-it.uk.com>

In article <MPG.1039f8a3e13b15a99897c4@nntp.hpl.hp.com>, Larry Rosler
<lr@hpl.hp.com> writes
>Gag me.  The quotes are to bind around spaces, which I don't see in that 
>example.

Not strictly true. Some browsers do not handle unquoted options well. I
said it wasn't a major point, but it is a point nevertheless.

>In my case, transmission time of the HTML is a serious issue, and all 
>those extras (such as '</td>' for example) waste bandwidth.  As does this 
>discussion.

All those "extras" also ensure that your page is parsed correctly. I
have seen many pages that were displayed in many different manners,
simply because some people write complete HTML (with all those extras)
and some people leave them out. Some browsers are more forgiving, some
do not handle bad HTML very well. Either way, bad HTML takes longer to
parse and display than good HTML, so even if download time is shorter,
parsing time can make up for it.

Anyway, you're right that this is off-topic. Maybe I should have sent my
comments privately. Let's agree to differ and get back to perl.

Alan

-- 
Alan Silver
Please remove the furryferret when replying by e-mail


------------------------------

Date: 12 Aug 1998 15:53:09 GMT
From: abigail@fnx.com (Abigail)
Subject: Re: checking if files exist
Message-Id: <6qsdp5$oda$2@client3.news.psi.net>

Russ Allbery (rra@stanford.edu) wrote on MDCCCVII September MCMXCIII in
<URL: news:m3iujywlol.fsf@windlord.Stanford.EDU>:
++ Tom Christiansen <tchrist@mox.perl.com> writes:
++ 
++ > Assuming that you bothered to install a reasonable operating system and
++ > traditional filesystem, the following is efficient and effective:
++ 
++ >     $is_empty = (stat($dir))[3] == 2;
++ 
++ > If one of my premises is false, you're on your own.
++ 
++ Right.  Note that in particular, this fails under AFS:
++ 
++ windlord:/afs/ir/site/leland> perl -e 'print ((stat "dist")[3], "\n")'
++ 2
++ windlord:/afs/ir/site/leland> ls dist
++ maillocal   newsyslog


It fails on Solaris/NFS for auto mounted directories as well.

$ cd /
$ perl -we 'print +(stat "nfs2")[3], "\n"'
1
$ cd nfs2
$ cd /
$ perl -we 'print +(stat "nfs2")[3], "\n"' 
16
$


So, I guess either Solaris isn't a reasonable operating system,
or NFS isn't a traditional filesystem.
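
A test for emptiness that doesn't care how the filesystem counts its
links is simply to read the directory; a minimal sketch:

sub dir_is_empty {
    my $dir = shift;
    opendir DIR, $dir or return;
    my @entries = grep { $_ ne '.' and $_ ne '..' } readdir DIR;
    closedir DIR;
    return !@entries;
}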



Abigail
-- 
perl -we '$_ = q ?4a75737420616e6f74686572205065726c204861636b65720as?;??;
          for (??;(??)x??;??)
              {??;s;(..)s?;qq ?print chr 0x$1 and \161 ss?;excess;??}'


------------------------------

Date: 12 Aug 1998 11:23:45 -0400
From: Vivek Khera <khera@kciLink.com>
Subject: Re: dates in excess of 2037 (A Problem???)
Message-Id: <x7iujyusu6.fsf@kci.kciLink.com>

>>>>> "MR" == Mark Rafn <dagon@halcyon.com> writes:

MR> Most programmers use a system-dependent definition of time_t,
MR> rather than defining it for their program.  It takes simply a
MR> recompile on a system with a different time_t to change.  The
MR> majority of software will automatically upgrade to 64-bit time_t
MR> when ported to an OS that uses it.

I'll assume that "most programmers" in your example above do not write 
binary data to files, either.  Changing time_t to 64 bits suddenly
makes structures get sized differently, and this makes binary files
incompatible, in particular things like DB files.  You can't just
change time_t -- you need to invent a new name for it.  And while
you're at it, might as well change the Epoch to something more useful.
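
A rough analogy in Perl (the 'q' pack format needs a perl built with
64-bit integers): the same record written with a 32-bit versus a 64-bit
time field is simply a different size, so files written under the old
layout can no longer be read back with the new one.

    ($id, $stamp) = (42, time);
    $old_rec = pack "N l", $id, $stamp;   # 4 + 4 bytes
    $new_rec = pack "N q", $id, $stamp;   # 4 + 8 bytes
    printf "old: %d bytes   new: %d bytes\n", length $old_rec, length $new_rec;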

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-301-258-8292
PGP/MIME spoken here              http://www.kciLink.com/home/khera/


------------------------------

Date: Wed, 12 Aug 1998 15:19:14 GMT
From: huntersean@hotmail.com
Subject: Re: Execute a program then return to perl?
Message-Id: <6qsbph$c8u$1@nnrp1.dejanews.com>

In article <35D1507F.D62F739F@emw.ericsson.se>,
  Clas <qmwclka@emw.ericsson.se> wrote:
> Hello!
>
> I don't know how to execute a program then return back to perl.
>
> I use the exec method, but this doesn't seem to be right .
> Because after the exec command I come right to the shell... :-(
>
> Any smart idea how I should do to solve this problem??
>
> Thank you for any suggestion!
>
> //Clas
>
>
You need to use either the system function or backticks (``).  exec replaces
your running program with the one you exec'd, so control never comes back.
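
A minimal illustration of the difference, using "ls -l" as a stand-in
for whatever program you want to run:

    $status = system("ls", "-l");   # runs ls, output goes to your terminal,
                                    # control returns here when it finishes
    $output = `ls -l`;              # runs it again, capturing stdout instead
    print "system returned $status; backticks captured ",
          length($output), " bytes\n";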

Sean H

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp   Create Your Own Free Member Forum


------------------------------

Date: Wed, 12 Aug 1998 15:47:47 GMT
From: Tom Phoenix <rootbeer@teleport.com>
Subject: Re: File updating question
Message-Id: <Pine.GSO.4.02.9808120840580.10161-100000@user2.teleport.com>

On Tue, 11 Aug 1998, Ha wrote:

> if you use flock 2, remember to flock 8 (unlock
> it) before closing FILE. 

No, you should simply close the file, and let perl take care of unlocking
it. (That's what the original poster was doing, correctly.)

>     open(DATA, ">$datapath")
>         || print ("Son, it's not my fault.");
>     flock(DATA, 2);

That's not a good way to cooperate with other processes. If another
process were writing that file, you just clobbered it!

Cheers!

-- 
Tom Phoenix       Perl Training and Hacking       Esperanto
Randal Schwartz Case:     http://www.rahul.net/jeffrey/ovs/



------------------------------

Date: Wed, 12 Aug 1998 15:51:01 GMT
From: Tom Phoenix <rootbeer@teleport.com>
Subject: Re: File updating question
Message-Id: <Pine.GSO.4.02.9808120848240.10161-100000@user2.teleport.com>

On Tue, 11 Aug 1998, Ketan Patel wrote:

> What is a more efficient way of doing this?  Doing it the way I am now
> (open input,read data,close input ---> modify data ---> open
> output,write data,close output)?

Well, once you close the file, you lose the lock. If concurrency is an
issue (and it must be or you wouldn't be locking), you'll need to keep the
lock.

Of course, you may need (or want) to use a temp file for scratch purposes,
then re-write the original file when you're ready. 
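
A sketch of read-modify-write under one lock (2 is LOCK_EX; 'use Fcntl
qw(:flock)' gives you the symbolic names, and $datafile here stands in
for whatever file you're updating):

    open(FILE, "+< $datafile")  or die "Can't open $datafile: $!";
    flock(FILE, 2)              or die "Can't lock $datafile: $!";
    my @lines = <FILE>;         # read while holding the lock
    # ... modify @lines here ...
    seek(FILE, 0, 0)            or die "Can't rewind: $!";
    truncate(FILE, 0)           or die "Can't truncate: $!";
    print FILE @lines;
    close(FILE);                # closing releases the lock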

Hope this helps!

-- 
Tom Phoenix       Perl Training and Hacking       Esperanto
Randal Schwartz Case:     http://www.rahul.net/jeffrey/ovs/



------------------------------

Date: Wed, 12 Aug 1998 15:36:31 GMT
From: Tom Phoenix <rootbeer@teleport.com>
Subject: Re: How to test for failed command open useing FileHandle
Message-Id: <Pine.GSO.4.02.9808120833000.10161-100000@user2.teleport.com>

On 12 Aug 1998, Ephrayim "EJ" Naiman wrote:

> $fh = new FileHandle("non-existent-command |");
> if(!defined($fh))

When I want a command pipe to let me know whether something has gone
wrong, I often use a second pipe to carry that information from the child
process to the parent, something like this:

    use Fcntl;
    pipe(R,W) or die "Can't make pipe: $!";
    # We'll set the write handle to automatically close if
    # the exec succeeds.
    fcntl W, F_SETFD, FD_CLOEXEC or die "fcntl failed: $!";
    my $pid = open(PIPE, "-|");
    die "Couldn't open pipe: $!" unless defined $pid;
    unless ( $pid ) {
        # Child process here
        close R;        # We won't be reading from this
        $^W = 0;        # No warnings now(!)
        unless ( exec qw{ some process -args -more } ) {
            # If we get here, we're the child process
            # and the exec didn't work. :-(
            print W "Exec failed: $!";
            exit;
        }
    }
    # Parent process here
    close W;    # Got to do this to avoid blocking
    my $message = join '', <R>;
    close R;    # Waits until either exec or exit
    # No news is good news!
    die "Pipe problem: $message" if $message;
    # At this point, the pipe is ready to go.

Does that do anything good for you? Hope this helps!

-- 
Tom Phoenix       Perl Training and Hacking       Esperanto
Randal Schwartz Case:     http://www.rahul.net/jeffrey/ovs/



------------------------------

Date: 12 Aug 1998 15:07:02 GMT
From: nguyend7@msu.edu (Dan Nguyen)
Subject: Re: I can't run any perl prog. on my server
Message-Id: <6qsb2m$m2i$3@msunews.cl.msu.edu>

Veronica Machado <vmachado@nt.com> wrote:
: My server doesn't accept any perl program .
: Does anyone know why ??

That question is best directed to your system administrator.

-- 
           Dan Nguyen            | There is only one happiness in
        nguyend7@msu.edu         |   life, to love and be loved.
http://www.cse.msu.edu/~nguyend7 |                   -George Sand


------------------------------

Date: 12 Aug 1998 14:32:22 GMT
From: Niklas Matthies <matthies@fsinfo.cs.uni-sb.de>
Subject: lexically scoped aliases
Message-Id: <6qs91m$8hi$1@hades.rz.uni-sb.de>

Hi,

Is it somehow possible to create lexically scoped aliases, i.e. what
one might expect that 

  my *foo = \$bar;

would do if it were legal?

E.g. I want to manipulate parameters passed to a subroutine directly
(without copying them or using references to them), but want to use
more descriptive names than @_ or $_[5]. Using 'local' would pollute
the symbol table, which I don't want to do.
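
The obvious fallback is to take references to the @_ elements (which are
themselves aliases to the caller's variables), so nothing gets copied
and the symbol table stays clean, but then every use needs an extra '$':

  sub scale {
      my $factor = \$_[0];    # reference to the actual first argument
      my $value  = \$_[1];    # reference to the actual second argument
      $$value *= $$factor;    # modifies the caller's variable directly
  }

Is there anything closer to a real alias than that?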

-- Niklas


------------------------------

Date: Wed, 12 Aug 1998 11:12:39 -0400
From: Debbie Whitten <usenet-replies@rocketmail.com>
Subject: long story - fork & multiprocessing problem
Message-Id: <35D1B0E7.BCA59540@rocketmail.com>

Hello,

I am fairly new to using fork() in Perl, although I'm fairly familiar
with Unix...It's kind of a long problem, so here goes:

I am writing a script that will process all files in a directory. The
files are going to be ftp'd there by another machine, and may be coming
in in chunks of 100 or so (not sure yet).

All I need to do for each file is to:
1) check if the destination dir exists (based on the filename). If not,
create it

2) mv the file to the destination (checking for duplicates and renaming
as necessary)

3) gzip the file

The problem is that we're going to be processing about 2.2 million small
files, and I calculated that my script is going to take anywhere from 15
- 22 days to run. I've been told this is too slow.

I tested the script with just sequential processing of 150 files and
using the time command and got this result:

     34.0u 56.0s 1:52 79% 0+0k 0+0io 0pf+0w

I wrote a 2nd script using fork() and ran into the zombie process
problem. I added the $SIG{CHLD} = sub { wait; } statement and it still
didn't solve the problem. I had to add a handler sub that would reset
the $SIG{CHLD} every time (WHY!?)

sub handler {
        local ($sig) = @_;
        wait;
        $forks_returned++;
        $SIG{CHLD} = 'handler';
}

and then it worked fine with no zombies. The timing for that one was:

     33.0u 56.0s 1:42 86% 0+0k 0+0io 0pf+0w, 

But the system load went up to 18! And besides, this isn't much faster
than the sequential script.
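
The REAPER example in the perlipc manpage bears on the WHY: old-style
signal() semantics reset a handler to the default as soon as the signal
is delivered, so the handler has to reinstall itself, and perl's %SIG
follows whatever the C library does on a given platform. That example
also loops on waitpid with WNOHANG, since one pending SIGCHLD can stand
for several exited children; adapted to this script it would look
roughly like this:

    use POSIX ":sys_wait_h";

    sub reaper {
        while (waitpid(-1, WNOHANG) > 0) {
            $forks_returned++;
        }
        $SIG{CHLD} = \&reaper;   # reinstall, for SysV-style systems
    }
    $SIG{CHLD} = \&reaper;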

I also tried writing this in C, which was slower than the sequential
Perl script. Then I tried Perl but using a system ("cmd &"); call
instead of fork. I calculated this one would take 22 days.

What should I do? The Perl script with the fork() function is the
fastest, but not by much. Is there any way to speed this up? The problem
is that after the files are transferred, a 2nd script needs to be called
to format the data, and judging from what I have so far, we're looking
at a month to process!

I'm concerned about creating too many child processes and slowing the
system down. I also thought that the parent Perl script would be much
quicker - waiting for the child processes to finish seems to be a waste
of time.

Here's the script:

#!/usr/bin/perl

$secs = 120;

$SIG{CHLD} = 'handler';
$SIG{KILL} = 'exit_gracefully';
$SIG{QUIT} = 'exit_gracefully';

##
## Use a variable to control where the files will be read from.
##

$from = "./test";

##
## Use a variable to control where the files will be written.
##

$dest_root = "./dest/";

##
## Will be writing to a log file so turn on auto-flushing.
##

$| = 1;

##
## Keep track of all files processed. This list will be used later.
##

open LOG, ">> mv_files.log";
$timestamp = `date`;
## print LOG "# $timestamp"; In parallel this doesn't work the way I 
## expected. EVERY process writes to the log file. I don't want that.

## while (1) { ## will need this for the real run.

    ## Order files by time, that way the newest file can be skipped.

    @files = `ls -t $from`;
    $xxx = $#files;
    print "Num files: $xxx\n";

    if ($xxx == -1) {
        print "$$: No files ready. Sleep $secs\n";
        sleep $secs;
    } elsif ($xxx == 0) {
        print "$$: Only 1 file -- in progress. Sleep $secs\n";
        sleep $secs;
    }
    else {
        print "$$: Starting. \n";
        ##sleep 5;

    ##
    ## get rid of the newest file since it is most likely still being
    ## transferred. Note - this program will need to be run once more
    ## to ensure that the last file gets processed, or maybe set up so
    ## that ^C does this...Or the last file can be processed manually.
    ##
    $lastfile = shift @files;

    ##
    ## Remove the newline.
    ##

    chomp ($lastfile);
##      print "$lastfile is still being transferred.\n";

    $forked = 0;
    $forks_returned = 0;
    foreach $file (@files) {

        chomp ($file);

        ## what I was trying to do here was to ensure not too many
        ## child processes were created at once. It didn't improve 
        ## run time although it helped the system load average.
        
        if ($forked - $forks_returned > 10) {
            print "forks returned: $forks_returned\n";
            print "forked: $forked\n";
            sleep 60;
        }
        $pid = fork();
        ## print "pid: $pid, file: $file\n";

        if (defined ($pid)) {
           $forked++;
        } else {
           warn "$$: fork failed: $!\n";
           next;
        }
        if ($pid == 0) { ## supposedly, this is all the child process
                         ## will be doing...

            ##
            ## Break up the filename into the directory names.
            ##

            ($dir1, $dir2, $dir3) = ($file =~ m/^(..)(..)(..)/);

            ##
            ## Append the destination directory root name 
            ## (./dest/ in this case).
            ##

            $dir1_name = $dest_root . $dir1;
            $dir2_name = $dir1_name . "\/" . $dir2;
            $dir3_name = $dir2_name . "\/" . $dir3;

            ##
            ## Reduce the following to 1 system call instead of
            ## potentially 3 by using mkdir -p to create all
            ## parent directories.
            ##

            if (! -e $dir3_name) {
               system "mkdir -p $dir3_name";
            }

            ##
            ## $filename is of the format:
            ##      /dest/ak/11/22/ak11223.listing
            ##

            $filename = $dir3_name . "\/" . $file;

            ##
            ## $dest_file is of the format:
            ##      /dest/ak/11/22/ak11223.listing.gz

            $dest_file = $filename . ".gz";

            ##
            ## Check for existence of $dest_file, keep trying new
            ## filenames by incrementing a counter. (Assumption is
            ## that if file.gz doesn't exist, neither does file,
            ## or if it does it's ok to be overwritten).
            ##

            $counter = 0;
            while (-e $dest_file) {
               $counter++;
               ## print "$dest_file exists!\n";
               $dest_file = $filename . ".gz";
               $dest_file =~ s/(\w+)(\.\w+)(\.gz)$/$1$counter$2$3/ ;
            }
            $dest_file =~ s/\.gz// ;

            ##
            ## First, mv the file to its final destination.
            ##

            $err = system "mv $from\/$file $dest_file 2>/dev/null";

            ##print "Error from mv = $err\n";
            ## print "Err: $err\n";
            ## print "cp $from\/$file $dest_file\n";

            ##
            ## For the mv command:
            ## 0 = success
            ## > 0 = failure
            ##

            ##if ($err <= 0) { 
            ## Sometimes get -1 returned - maybe
            ## from system() instead of mv??
            ##
            ## Then finally, gzip the file.
            ##

                   system "gzip -qf1 $dest_file 2>/dev/null";
                   ## print "gzip $dest_file\n";

                   print LOG "$dest_file.gz\n";
            ##}

            exit(0);

        } ## end of child process.
    } ## foreach file
} ## else
## } ## while (1)
close LOG;
##close MSG;
exit(0);

sub handler {
    local ($sig) = @_;
    wait;
    $forks_returned++;
    $SIG{CHLD} = 'handler';
}

sub exit_gracefully {
    local ($sig) = @_;
    close LOG;
    $SIG{CHLD} = 'handler';
    ##close MSG;
    exit(0);
}


------------------------------

Date: 12 Aug 1998 15:32:58 GMT
From: "Jon C. Hodgson" <jon@resonate.com>
Subject: LWP & 'HTTP/1.0'
Message-Id: <35D1B5B0.6B88@resonate.com>

Help!

I'm using LWP to retrieve HTTP docs, but I've found that some servers
require the HTTP version for the 'GET' header:

GET /path/file.html HTTP/1.0
                    ^^^^^^^^

LWP's Request method does not add this (or I can't figure it out).
I could do raw sockets, but I need LWP for this proj.

How can I get the HTTP/1.0 to appear in the GET?

Please help!


------------------------------

Date: Wed, 12 Aug 1998 15:31:22 GMT
From: ptimmins@netserv.unmc.edu (Patrick Timmins)
Subject: Re: matching problem with (xx)?
Message-Id: <6qscgb$f31$1@nnrp1.dejanews.com>

In article <6qrvjo$42p$1@nnrp1.dejanews.com>,
  oc1658@my-dejanews.com wrote:
> Suppose i need to match a substring which *could* be preceded by 'is'.
> I'm trying:
> m/(.+)(is)?(.*)
>
> For example in
> 'The cat is over the table'
> i need 'The cat' in $1
> and in
> 'The cats are over the table'
> i need 'The cats are over the table' in $1
> Now i always obtain 'The cats are over the table' in $1.
>
> I can't understand why (is)? doesn't match according to the 'greedy' perl
> regexp default. I suspect it's something related to the first item (.+) but i
> didn't find any solution out there.
>
> Please, remember i don't need a generic solution to this example (there are
> many trivial), but an opinion on the regexp pattern.
>
> Can someone help me?
>
> Thank you, Omar Cal
> omar.cal@altavista.net
[snip]

So you mean you want to capture everything in front of the first 'is'
(if it exists) into $1, or if there is no 'is', capture the whole
thing into $1. Is this right?

How about:

/(.*?)\bis\b/ or /(.*)/;
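
(The reason your original pattern never captures just 'The cat' is that
the greedy (.+) grabs the whole string first, and since (is)? is allowed
to match nothing at all, the engine never has to back off to make it
match a literal 'is'.) A quick check of the alternative against your two
examples:

for ('The cat is over the table', 'The cats are over the table') {
    /(.*?)\bis\b/ or /(.*)/;
    print "got: '$1'\n";
}

# prints:
#   got: 'The cat '                        (note the trailing space)
#   got: 'The cats are over the table'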

Hope this helps.

Patrick Timmins
U. Nebraska Medical Center

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp   Create Your Own Free Member Forum


------------------------------

Date: 12 Aug 1998 15:24:06 GMT
From: Tom Christiansen <tchrist@mox.perl.com>
Subject: Re: Need to Lock files (NFS)
Message-Id: <6qsc2m$nof$1@csnews.cs.colorado.edu>

 [courtesy cc of this posting sent to cited author via email]

In comp.lang.perl.misc, sgr@logsoft.com writes:
:Cripes! Won't this thread ever die?
:
:NFS *does* support the normal Unix file creation semantics 

No, it just pretends.  Neither file creation nor file deletion are atomic
under NFS.  That means two processes can both successfully exclusively
create or unlink the same filename.  There are work-arounds in the code,
but it isn't guaranteed.  Remember that with UDP you can lose a message
or get the same message twice.  If you create a file, get no ack because
the packet was lost, and then resend the same create, you need to get
a success back, not a failure.  Sucks, eh?
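
The classic work-around people reach for looks something like this (the
paths are made up, and it only narrows the window, it doesn't close it):

    $lockfile = "/some/nfs/dir/LOCK";
    $tmpfile  = "$lockfile.$$." . time;
    open(TMP, "> $tmpfile")  or die "can't create $tmpfile: $!";
    close(TMP);
    link($tmpfile, $lockfile);                  # ignore the return value;
    $got_it = ( (stat($tmpfile))[3] == 2 );     # trust the link count instead
    unlink($tmpfile);
    die "didn't get the lock\n" unless $got_it;
    # ... critical section; unlink $lockfile when done ...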

--tom
-- 
    I won't mention any names, because I don't want to get sun4's into
    trouble...  :-)     --Larry Wall in <11333@jpl-devvax.JPL.NASA.GOV>


------------------------------

Date: 12 Aug 1998 15:12:03 GMT
From: root@am.westblaak.spirit.nl (root)
Subject: Re: Newbie Question About 'for'
Message-Id: <6qsbc3$ef$1@newnews.nl.uu.net>

In article <obu33is7pt.fsf@alder.dev.tivoli.com>, "Jim Woodgate" <jdw@dev.tivoli.com> writes:
   
    root@am.westblaak.spirit.nl (root) writes:
    > $_ is 2 from the last init statement in the first run, and later on 1
    > from the print statements in the run before...
    > Why the loop rans for 3 times was already explained in this thread I think..
    
    I don't believe that the 2 is caused by the last init statement, as
    the following will print 111:
    
    perl -e 'for ( $b=$a=1, $a < 7, $a++){print}'

Yep, you're right...
As others in this thread already explained, the whole thing inside () is
taken as a list, like in foreach, and therefore the loop is iterated 3 times:

someone else wrote:

  - Try using semicolons instead of commas.  What you have specified is a 
  - "foreach" loop ("for" and "foreach" are synonyms) consisting of the three 
  - items $a = 1, $a < 7, ++$a.  These are evaluated before the looping 
  - begins, so the value of $a is 2.  This is then printed three times, once 
  - for each element of the list.  No surprise, but not what you wanted, 
  - either.
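
In other words, with commas the parens just hold an ordinary list; a
small demonstration:

  # semicolons: a C-style loop, the body runs six times and prints 123456
  for ($i = 1; $i < 7; $i++) { print $i }
  print "\n";

  # commas: the parens hold a three-element LIST (here the values 1, 1, 1),
  # so this is really a foreach and the body runs exactly three times
  for ($i = 1, $i < 7, $i++) { print "<$_>" }   # prints <1><1><1>
  print "\n";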
  
  
Andre.


------------------------------

Date: Wed, 12 Aug 1998 15:30:16 GMT
From: Tim Haynes <thaynes@openlinksw.co.uk>
Subject: Re: ODBC, Perl, Unix and Macs
Message-Id: <6qsce8$f0v$1@nnrp1.dejanews.com>

In article <35cb3838.0@news.new-era.net>,
  scott@softbase.com wrote:
> William Burrow (aa126@NOSPAM.fan.nb.ca) wrote:
> > I'm trying to get a grasp on how Perl and ODBC can work for me.  First off,
> > where in Perl can I get ODBC support for other than Win32?
> Windows ODBC drivers are cheap and plentiful. ODBC on other platforms
> costs $$$ -- only one company specializes in making non-Windows
> drivers, Intersolv, and they charge a lot for them because they aren't
> much used on other platforms.

Please! Why go with any company that thinks it should charge you for different
client OSs?
The one & only company that specialises in OS-independent client & server
components for ODBC (and JDBC) is OpenLink :)

> Using a small PC-based Windows or Mac database from a UNIX machine is
> weird and unnatural -- going from server to client isn't done much --
> if any -- in the industry, and there's very little support for it.

I agree, there is a certain bias towards client->server, but there's no reason
why a connection can't be made the other way round, to a "personal" database
like Access, or whatever.

> > Second, can I use this Perl ODBC support as a means to access any
> > database that supports ODBC?
> If there's a driver for it -- I don't think the ODBC support *IN PERL
> ITSELF* is an ODBC driver. It is just capable of connecting to one. If
> I'm wrong, I'll be shocked.

You're more right than wrong here.
The Perl CPAN archive has several database interfaces, some for native access
like Postgresql, Oracle, etc, and one or two for ODBC, in particular DBI/DBD,
or Win32::ODBC. These are add-ins to the regular Perl distribution, and still
require an ODBC driver.
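
With DBI plus an ODBC driver module (DBD::ODBC) installed, the Perl end
of a connection looks roughly like this (the DSN, credentials and table
name below are placeholders):

use DBI;

my $dbh = DBI->connect("dbi:ODBC:MyDSN", $user, $pass)
    or die "connect failed: $DBI::errstr";
my $sth = $dbh->prepare("SELECT id, name FROM some_table");
$sth->execute;
while (my @row = $sth->fetchrow_array) {
    print "@row\n";
}
$dbh->disconnect;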

Regards,

~Tim

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp   Create Your Own Free Member Forum


------------------------------

Date: 12 Aug 1998 14:51:29 GMT
From: gbacon@cs.uah.edu (Greg Bacon)
Subject: Re: Perl 5.005.1 core dumps under Irix 6.2
Message-Id: <6qsa5h$29h$2@info.uah.edu>

In article <wr4svi1ff6.fsf@ruin.informatik.uni-bremen.de>,
	Oliver Laumann <net@informatik.uni-bremen.de> writes:
: Has anybody managed to successfully compile Perl 5.005.1 under Irix
: 6.2 using cc -n32 (on an IP-22 machine)?

I never built 5.005_01, but I have built 5.005_02:

[9:45] mork% uname -a
IRIX mork 6.2 06101030 IP22
[9:45] mork% perl5.00502 -V
Summary of my perl5 (5.0 patchlevel 5 subversion 2) configuration:
  Platform:
    osname=irix, osvers=6.2, archname=IP22-irix
    uname='irix mork 6.2 06101030 ip22 '
    hint=recommended, useposix=true, d_sigaction=define
    usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
    cc='cc -n32', optimize='-O3', gccversion=
    cppflags='-D_BSD_TYPES -D_BSD_TIME -OPT:Olimit=0 -I/usr/local/include -DLANGUAGE_C'
    ccflags ='-D_BSD_TYPES -D_BSD_TIME -woff 1009,1110,1184 -OPT:Olimit=0 -I/usr/local/include -DLANGUAGE_C'
    stdchar='unsigned char', d_stdstdio=define, usevfork=false
    intsize=4, longsize=4, ptrsize=4, doublesize=8
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    alignbytes=8, usemymalloc=n, prototype=define
  Linker and Libraries:
    ld='ld', ldflags =' -L/usr/local/lib32 -L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib32 /lib32 /lib /usr/lib
    libs=-lgdbm -ldb -lm -lc
    libc=/usr/lib32/libc.so, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
    cccdlflags=' ', lddlflags='-n32 -shared -L/usr/local/lib32 -L/usr/local/lib'

I really should get around to writing that perlbug -ok database. :-(

Hope this helps,
Greg
-- 
Boon: Now, she should be decent looking, but we're willing to trade looks for
      a certain kind of morally casual attitude.


------------------------------

Date: Wed, 12 Aug 1998 11:11:50 -0400
From: John Porter <jdporter@min.net>
Subject: Re: Perl Style
Message-Id: <35D1B0B6.74AE@min.net>

Tom Christiansen wrote:
> 
> Scott.L.Erickson@HealthPartners.com (Scott Erickson) writes:
> :I, for one, believe
> :that using 'or' is much more readable than ||,
> 
> Do you also believe that `plus' is more readable than `+' comma
> or that `BEGIN' is more readable than `{' question mark

The answers to those questions do not have to be the same.
How can anyone argue that 
	ASSIGN THE SUM OF X AND Y TO Z
is preferable, for any reason, to
	z = x + y
?

But if you're arguing about readability only, then Yes,
	BEGIN
	  compactor->compact( garbage )
	END
is more readable than the equivalent with curlies.

More to the point, 'or' is not a synonym for '||', 
or this would truly be a religious debate.
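
A two-line illustration of the difference, which is all about precedence
('||' binds tighter than assignment, 'or' binds looser than almost
anything):

	$x = 0 || "default";   # '||' wins over '=' : $x gets "default"
	$y = 0 or "default";   # parses as ($y = 0) or "default" : $y stays 0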

-- 
John Porter


------------------------------

Date: 12 Aug 1998 15:28:58 GMT
From: am@am.westblaak.spirit.nl (Andre Merzky)
Subject: Re: Q: open2 : whats wrong?
Message-Id: <6qscbq$i10$1@newnews.nl.uu.net>


> my $pid = open2 ( \*READ, \*WRITE, "/bin/cat -u -n" );
                                               ^^^^^^
> why do i get line numbers?

Gosh, what a stupid mail...
I am really sorry.... *blush*

Andre.


------------------------------

Date: Wed, 12 Aug 1998 11:05:26 -0400
From: John Porter <jdporter@min.net>
Subject: Re: re first language
Message-Id: <35D1AF36.7A01@min.net>

Real Programmers type directly into the stdin of cc.

Real Programmers can write Fortran programs in any language.

(I know these are old; sorry...)

-- 
John Porter


------------------------------

Date: 12 Aug 1998 15:34:14 GMT
From: rich@vax2.concordia.ca (Rich Lafferty)
Subject: Re: re first language
Message-Id: <6qsclm$es4$1@newsflash.concordia.ca>

John Porter <jdporter@min.net> wrote:
>Real Programmers type directly into the stdin of cc.
>
>Real Programmers can write Fortran programs in any language.
>
>(I know these are old; sorry...)

Something about banging rocks together to get 1's.

There, this thread can stop. :)

  -Rich

-- 
Rich Lafferty ---------------------------------------------------------
IITS/Computing Services     |      
Concordia University        |    Nothing sucks like a Vax! (tm)
rich@vax2.concordia.ca -----------------------------------------[McQ]--


------------------------------

Date: Wed, 12 Aug 1998 14:50:22 GMT
From: ptimmins@netserv.unmc.edu (Patrick Timmins)
Subject: Re: search text in specific columns
Message-Id: <6qsa3f$767$1@nnrp1.dejanews.com>

I'm re-posting this, because dejanews obfuscated some of the code in
my previous post in this thread. If it fails again, then so be it. I've
also added some clarification before the code, explaining what the
different sections are doing.

P. Timmins 8/12/98

In article <6qqele$e2u$1@nnrp1.dejanews.com>,
  stevenba@carr.org wrote:
> Hi.  I'm very new to Perl, so please excuse me if this is not the correct
> forum or the answer is obvious.
>
> Can anyone tell me if (and if so, how & where) to search a text file in
> specific columns? or search beginning in column X? Are there any existing
> tools to do this? Are there any such 'window'-like facilities? (like
> samples I saw in 'widgets') after I installed Perl?
>
> Thanks in advance.
>
> Steve Barbash
>
[snip]

Very 'awk'ward, in my experience. Use awk, if you can. Otherwise,
lots of substr(), and 'for' loops to create hashes of hashes. Probably
wouldn't be nearly so difficult if you don't have "holes" in your data,
like I usually do. eg (from an actual example that I use "in-house"):

1. use a regex to id the line containing column headers
2. use a 'for' loop with substr to create a hash that links each column
   header with its offset
3. use a regex to id the lines containing rows of data
4. use a 'for' loop with substr to create a hash that links each piece
   of data in the row with its offset
5. create a hash of hashes linking, in this case, each row identifier
   with a hash of 'column-header=row_data' pairs.

while (<>) {

# load the column headers
    if (/   ^CUP\ ACC\ \#           # make sure we're on the correct line


            .+?TECH             # anything after ACC # matched non-greedily
                                # up to TECH. That way we can have a test code
                                # of 'TECH' and not break the script.


            \s+                 # white space


        (                       # begin capture of $1 - used anonymously below
                                # to capture all tests listed on the worksheet
            .+                  # one or more of any characters into $1
        )                       # end capture of $1 (the test_list string)


            $                   # capture everything in $1 up to the EOL


            /x) {               # end of extended regex; end of conditional;
                                # beginning of "true" block

# Create a "search index for each column header.
# Each "column" for each test is 6 characters wide

        for ($i = 0; $i < 25; $i++) {
            $test_index = 49+$i*6;
            $test = substr $_, $test_index, 6;
            chomp $test;
            $test =~ s/ //g;
            if ($test ne '') {
                $find_test_index{$test_index} = $test;
            }
        }
        $i = 0;

    }

#load rows of data into columns
    if (/(                      # begin 'if' conditional and capture $1 - the
                                # entire line

            ^\s+HOSP\ ID:\      # make sure we're on the correct line; note
                                # the literal space at the end of the match

        (                       # begin capture #2 - the hospital ID
            \S+                 # the hospital ID itself
        )                       # end capture of hospital ID


            \s+                 # white space


        (                       # begin capture of $3 - the tech id number
            \d+                 # the tech id number itself
        )?                      # end capture of tech id number and make it
                                # optional (in case result is 'DEL')



            \s+                 # white space


        (                       # begin capture of $4 - the rest of the line
            \ .+                # the rest of the line itself
        )                       # end capture of the rest of the line


            $                   # make the rest of the line ($4) match to the
                                # end of the line

        )                       # end capture of the whole line ($1)


            /x) {               # end of extended regex; end of conditional,
                                # and beginning of "true" block


       $specimen{$acc_no}{referrer} = $2;
       $specimen{$acc_no}{run_by} = $3;


        for ($i = 0; $i < 25; $i++) {
            $result_index = 49+$i*6;
            $result = substr $_, $result_index, 6;
            chomp $result;
            $result =~ s/ //g;
            if ($result ne '') {                # we don't want any of the holes
                $specimen{$acc_no}{result}{$find_test_index{$result_index}} = $result;
            }
        }
        $i = 0;

    }

}


foreach $id (keys %specimen) {
    while (($key,$value) = each %{ $specimen{$id}{result} }  ) {
        print "$key = $value\n";
    }

    print "\n\n";
}



I'm actually kicking around attempting some sort of a module to make it
easier to do this type of thing (COLUMNS.pm?): e.g. call a function
(or use split in combination with a regex and substr()) to identify a
specific columnar data format (offset, column width, column headers,
etc.), then another function to load data into that format. So what was
accomplished in all that mess above could be done with something like:

use COLUMNS;
$matrix1 = new COLUMNS;
while (<>) {
    if (/(s)o(m)e (r)e(g)ex/) {
        $matrix1->headers($1, $2, $3, $4);
    }
    if (/another regex, line count,(row id), etc/) {
        $new_row = $matrix1->load_row($1);
        &addrow($new_row);
    }
}
print $matrix1->data(by_row);    # print out data by row in column=data pairs
print $matrix1->data(by_column); # print any data for the first column, then
                                 # the second, etc.


Good Luck!

Patrick Timmins
U. Nebraska Medical Center

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp   Create Your Own Free Member Forum


------------------------------

Date: Wed, 12 Aug 1998 15:14:46 GMT
From: huntersean@hotmail.com
Subject: Re: What's the most efficient regex to force NOT matching any char? (repost)
Message-Id: <6qsbh6$btj$1@nnrp1.dejanews.com>

In article <35D0B3C2.CF354404@cas.org>,
  Marco Moreno <mmoreno@cas.org> wrote:
> (This is a repost as my previous posting had no bites ;)
>
> I would like to iterate thru a string and skip over as many chars as
> possible that don't need anything done to them, but stop and perform
> some conversion on each char that does not match a regex.  The regex
> may vary depending on the data format and would be defined by a
> variable at runtime.
>
> My question is:  If I want to force the conversion of every char, what
> is the most efficient regex I should use to force an unsuccessful
> match of the pass-thru regex?
>
> I've thought of "[^\x00-\xFF]" and "(?!.)", but is there something
> better?
[...snippage...]

I'm not surprised you got no bites before.  This posting makes no sense.

Surely you just want a regexp that _matches_ the stuff you want to
convert.  Just write a regexp to match the stuff you want and make the
conversion on that.  You don't have to write a regexp to _skip_ stuff you
don't want.

As far as efficiency goes, you're hosed anyway, because you're using a regexp
that's evaluated at runtime - very expensive.

Sean Hunter

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp   Create Your Own Free Member Forum


------------------------------

Date: 12 Jul 98 21:33:47 GMT (Last modified)
From: Perl-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Special: Digest Administrivia (Last modified: 12 Mar 98)
Message-Id: <null>


Administrivia:

Special notice: in a few days, the new group comp.lang.perl.moderated
should be formed. I would rather not support two different groups, and I
know of no other plans to create a digested moderated group. This leaves
me with two options: 1) keep on with this group 2) change to the
moderated one.

If you have opinions on this, send them to
perl-users-request@ruby.oce.orst.edu. 


The Perl-Users Digest is a retransmission of the USENET newsgroup
comp.lang.perl.misc.  For subscription or unsubscription requests, send
the single line:

	subscribe perl-users
or:
	unsubscribe perl-users

to almanac@ruby.oce.orst.edu.  

To submit articles to comp.lang.perl.misc (and this Digest), send your
article to perl-users@ruby.oce.orst.edu.

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

To request back copies (available for a week or so), send your request
to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
where x is the volume number and y is the issue number.

The Meta-FAQ, an article containing information about the FAQ, is
available by requesting "send perl-users meta-faq". The real FAQ, as it
appeared last in the newsgroup, can be retrieved with the request "send
perl-users FAQ". Due to their sizes, neither the Meta-FAQ nor the FAQ
are included in the digest.

The "mini-FAQ", which is an updated version of the Meta-FAQ, is
available by requesting "send perl-users mini-faq". It appears twice
weekly in the group, but is not distributed in the digest.

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address; I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V8 Issue 3427
**************************************
