
Perl-Users Digest, Issue: 618 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Fri Jul 6 14:11:19 2007

Date: Fri, 6 Jul 2007 11:09:56 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Fri, 6 Jul 2007     Volume: 11 Number: 618

Today's topics:
        [Job Opening]  Experienced Perl Developer <azricareers@gmail.com>
    Re: Adding to a perl script automatically <MMWJones@googlemail.com>
        Array in splice <vedpsingh@gmail.com>
    Re: Array in splice <joe@inwap.com>
    Re: Asynchronous forking(?) processes on Windows <tbrazil@perforce.com>
    Re: Asynchronous forking(?) processes on Windows <tbrazil@perforce.com>
    Re: Asynchronous forking(?) processes on Windows xhoster@gmail.com
    Re: Asynchronous forking(?) processes on Windows <tbrazil@perforce.com>
    Re: Asynchronous forking(?) processes on Windows xhoster@gmail.com
        Check if file is being modified by another process  kyle.halberstam@gmail.com
    Re: Check if file is being modified by another process <veatchla@yahoo.com>
    Re: Check if file is being modified by another process anno4000@radom.zrz.tu-berlin.de
        Creating and returning a code-reference from a XS routi  wikenfalk@enhorning.se
    Re: Creating and returning a code-reference from a XS r <spamtrap@dot-app.org>
    Re: Creating and returning a code-reference from a XS r <nospam-abuse@ilyaz.org>
        DBIx::Simple variable interpolation problem <justin.0706@purestblue.com>
    Re: DBIx::Simple variable interpolation problem <mritty@gmail.com>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Thu, 05 Jul 2007 13:23:29 -0000
From:  Azri Careers <azricareers@gmail.com>
Subject: [Job Opening]  Experienced Perl Developer
Message-Id: <1183641809.151886.148970@z28g2000prd.googlegroups.com>


Azri Solutions Pvt Limited (http://www.azri.biz) is an ISO 9001:2000
certified company. We provide a challenging work environment, an open
work culture & competitive remuneration: the right ingredients to
facilitate superlative performance. Azri is an extremely flexible &
sustainable networked enterprise with presence in Germany, the U.S.A.
& India. Our team has intense hands-on experience in designing,
deploying and managing high-volume Service-Oriented Architectures
(SOA) including RDBMS-backed Web services. In most cases, we leverage
tried and tested open-source software.

Join our highly motivated and dedicated team to embark upon a
challenging and rewarding career where you get to make decisions,
create great products and in the process, have some fun! We believe in
the concept of continuous learning, taking on responsibilities and
providing growth opportunities for every team member. Our environment
encourages innovation. Ideas are welcome and every individual is
empowered to think, share and take ownership of their ideas and
creations.

MAIL YOUR RESUME TO anup@azri.biz

Job Description:

1.	Should have developed applications using Open Source systems.
2.	Scripting: TCL, PHP, Perl, Python, and Ruby.
3.	Languages: C, C++ development experience is an added advantage.
4.	OS: Linux / UNIX, Windows. (Must have Linux / UNIX experience.)
5.	Frameworks: Should have used Open Source languages like PHP / Perl /
Ruby / TCL and frameworks like Drupal / Mojave / Rails / OpenACS.

Candidate Profile:

1.	Should enjoy programming.
2.	Experience: Two to four years' programming experience.
3.	Knowledge of Linux, shell scripting, Web application development,
quality software development.
4.	Excellent conceptual, analytical and programming skills.
5.	Should have application design experience.
6.	Familiarity with Open Source Web application frameworks.
7.	Participation in open source communities is an added advantage.

Experience: 2 - 4 years

SOFTSKILLS 	:	· Good communication and interpersonal skills.
				· Team player.

Job Location: Hyderabad

MAIL YOUR RESUME TO anup@azri.biz



------------------------------

Date: Thu, 05 Jul 2007 08:00:56 -0700
From:  "MMWJones@googlemail.com" <MMWJones@googlemail.com>
Subject: Re: Adding to a perl script automatically
Message-Id: <1183647656.797989.213370@w5g2000hsg.googlegroups.com>

I thought I would continue in this post rather than start a new one, as
it follows on from this problem.

I have managed to use an intermediate text file to store a list which
can be added to with one CGI script and accessed with another script.
That seemed easy compared to my current problem. I'm basically trying
to do the same thing, but with setting up a radio set, so the script
will also add a radio group to another CGI script. I thought I would
use the same principle as before and list the radio sets in a text
file, so to speak.

I can create the radio sets in HTML, but that does not complete the
required action. Therefore I would like to know if I can put the
actual CGI code in the text file and then read this into the other script.

Example:

 $query->radio_group(-name=>'srs_Email_List', -values=>['All
notices'], -default=>"@rows"),
      '</td><td>',
    $query->radio_group(-name=>'srs_Email_List', -values=>['Major
notices'], -default=>"@rows"),
      '</td><td>',
    $query->radio_group(-name=>'srs_Email_List', -values=>['No
email'], -default=>"@rows"),
      '</td><tr>',


When I read this into the CGI script it comes back as a string. Any
ideas how to get around this?

In case you are wondering why HTML did not work: the @rows in the
above code comes from a fetchrow_array, so plain HTML would not check
the right radio button depending on the SQL.

Thanks,
Matt



------------------------------

Date: Wed, 04 Jul 2007 12:35:16 -0000
From:  Ved <vedpsingh@gmail.com>
Subject: Array in splice
Message-Id: <1183552516.526940.240730@z28g2000prd.googlegroups.com>

Hi all,
I have got some numbers in @values.

I have to use the line below in my code:

splice @array, 25, 0, 'setParam("STA[1].CSD_HT_1","$values[6]");';

Now the above line will print $values[6] as literal text.
How can I get the @values array interpolated inside the splice?

Regards
Ved



------------------------------

Date: Wed, 04 Jul 2007 05:49:26 -0700
From: Joe Smith <joe@inwap.com>
Subject: Re: Array in splice
Message-Id: <XO2dnU4B_vLLChbbnZ2dnUVZ_uPinZ2d@comcast.com>

Ved wrote:

> splice @array, 25, 0, 'setParam("STA[1].CSD_HT_1","$values[6]");';
> 
> Now the above line will print $values[6] as literal text.
> How can I get the @values array interpolated inside the splice?

Use qq{} instead of ''.

   qq{setParam("STA[1].CSD_HT_1","$values[6]");}
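
A runnable illustration of the difference (the @values and @array contents here are made up):

```perl
use strict;
use warnings;

my @values = (0, 1, 2, 3, 4, 5, 6.25);
my @array  = ('line A', 'line B', 'line C');

# '' keeps $values[6] literal; qq{} interpolates it, while still
# allowing embedded double quotes without escaping.
splice @array, 1, 0, qq{setParam("STA[1].CSD_HT_1","$values[6]");};

print "$_\n" for @array;
```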

	-Joe


------------------------------

Date: Thu, 05 Jul 2007 10:40:33 -0700
From:  Tim <tbrazil@perforce.com>
Subject: Re: Asynchronous forking(?) processes on Windows
Message-Id: <1183657233.615923.188700@x35g2000prf.googlegroups.com>

Hi again

I appreciate all the help. I hate to resurrect this issue but my
manager re-clarified the requirements of this project and I'm back.
Just to give you an overview of what I'm trying to accomplish... I'm
creating a harness for performance testing. A server/master script
will send command requests via TCP to clients/slave daemons. These
slaves will spawn/fork off "n" number of these commands, which need to
run and finish at their own rate. The output of these commands needs
to get stored in some fashion.

Although Xho's "foreach" solution cured my problem, the reaping of the
pids is performed in a strict order (reading them until they're
finished). I understand the need for this in order to prevent a
deadlock, but my ultimate goal is to keep these processes as
asynchronous as possible (using wait?). Charles' comment on the
"asynchronous wait" seems like what I need but this solution needs to
run on Windows.

Here's my question about an alternative... let's say I keep all the
processing of these children in the children and don't pipe anything
back to the parent. Instead I store the results of the children in a
global/shared hash. Can a single hash be operated on by several
children? I apologize if this is a newbie question. I'm finding myself
pushing the envelope of my Perl knowledge.

P.S. - Hope you all had a good 4th



------------------------------

Date: Thu, 05 Jul 2007 10:59:13 -0700
From:  Tim <tbrazil@perforce.com>
Subject: Re: Asynchronous forking(?) processes on Windows
Message-Id: <1183658353.485270.227870@i38g2000prf.googlegroups.com>

As I'm thinking about it.. The parent is the only thing tying the
processes together. Aggregating the data needs to go through the
parent. The children wouldn't have any knowledge of a shared
structure. Time for another cup of coffee.



------------------------------

Date: 05 Jul 2007 20:52:54 GMT
From: xhoster@gmail.com
Subject: Re: Asynchronous forking(?) processes on Windows
Message-Id: <20070705165257.134$bB@newsreader.com>

Tim <tbrazil@perforce.com> wrote:
> Hi again
>
> I appreciate all the help. I hate to resurrect this issue but my
> manager re-clarified the requirements of this project and I'm back.
> Just to give you an overview of what I'm trying accomplish... I'm
> creating a harness for performance testing.

If you only care about *testing* the performance, I would probably use a
completely different method, depending on what aspect of performance you
are testing.  If you are only interested in how long it takes the child
to finish, then forget the pipes.  Just have the child exit when it is
done, throwing away any output it would have generated in a non-test
environment.

> A server/master script
> will send command requests via TCP to clients/slave daemons. These
> slaves will spawn/fork off "n" number of these commands which need run
> and finish at their own rate.

Whose own rate?  One master talks to several daemons, and each daemon
forks several processes.  Is it the daemon's own rate, or the process's
own rate, that is required?  We've seen code that reflects the process
communicating back to the daemon, but how/what does the daemon communicate
back to the master?

> The output of these commands needs to
> get stored in some fashion.

Unless the point of the test is to test how long it takes to store the
results, why do the results need to be stored?

> Although Xho's "foreach" solution cured my problem, the reaping of the
> pids is performed in a strict order (reading them until they're
> finished). I understand the need for this in order to prevent a
> deadlock but my ultimate goal to to keep these processes as
> asynchronous as possible (using wait?). Charles' comment on the
> "asynchronous wait" seems like what I need but this solution needs to
> run on Windows.

Asynchronous waiting won't solve the deadlock issue.  It only lets your
parent program do other things while it is waiting (which will be forever
in a deadlock).  But if the parent has nothing to do while it is waiting,
there is no point.

You could use IO::Select (I've never used it on Windows) to determine which
child is ready to be read at any given time.  Once you have read a child's
pipe up to the eof, then you can wait for that child (possibly
asynchronously).

> Here's my question about an alternative... let's say I keep all the
> processing of these children in the children and don't pipe anything
> back to the parent. Instead I store the results of the children in a
> global/shared hash. Can a single hash be operated on by several
> children.

There are modules that allow that to happen (like forks::shared), but I
generally try to stay away from them, especially when I am concerned about
performance.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service                        $9.95/Month 30GB


------------------------------

Date: Thu, 05 Jul 2007 15:40:20 -0700
From:  Tim <tbrazil@perforce.com>
Subject: Re: Asynchronous forking(?) processes on Windows
Message-Id: <1183675220.839290.284740@o11g2000prd.googlegroups.com>

Hi Xho

I thought you'd be tired of me by now ;) You have good questions. I'm
about to try and answer them within the comments below. In general,
the "harness" I'm in the midst of writing is a way to drive tests on
remote clients. As you indicated, the clients/slaves will spawn off
child processes which will do the actual work. We want the
ability to initiate different commands, shell scripts, C programs or
whatever on these spawned processes, which will then load our Perforce
server. Kinda like Mstone, if you know what that is. If it was just
the test execution "times" on the children, my life would be easier.
I'm trying to design the harness to handle the worst-case scenario for
tests that have yet to be written, which would be handing back the
begin time, the child process output data, and the end time. The
discussion continues below...


On Jul 5, 1:52 pm, xhos...@gmail.com wrote:
> Tim <tbra...@perforce.com> wrote:
> > Hi again
>
> > I appreciate all the help. I hate to resurrect this issue but my
> > manager re-clarified the requirements of this project and I'm back.
> > Just to give you an overview of what I'm trying accomplish... I'm
> > creating a harness for performance testing.
>
> If you only care about *testing* the performance, I would probably use a
> completely different method, depending on what aspect of performance you
> are testing.  If you are only interested in how long it takes the child
> to finish, then forget the pipes.  Just have the child exit when it is
> done, throwing away any output it would have generated in a non-test
> environment.
>

I wish. Sure, the "times" are my primary interest;
however, the potential for requiring the output of the children is
inevitable. It looks like I'll need the pipes. It's either that or
creating temp files to store the STDOUT of the child processes and
then picking it up later. I already prototyped the temp-file idea on
Windows using Win32::Process::Create and dup'ed STDOUT
filehandles. That would be my last resort.

> > A server/master script
> > will send command requests via TCP to clients/slave daemons. These
> > slaves will spawn/fork off "n" number of these commands which need run
> > and finish at their own rate.
>
> Whose own rate?  One master talks to several daemons, and each daemon
> forks several processes.  Is it the daemon's own rate, or the process's
> own rate, that is required?  We've seen code that reflects the process
> communicating back to the daemon, but how/what does the daemon communicate
> back to the master?

The individual rate of each spawned process.  The master sends a TCP
request to the client daemon and leaves the connection open for a
response. The daemon spawns off the children, which run the tests.
Upon completion, the daemon sends a TCP response back to the master
with the status of the children's tests (times? data?).

>
> > The output of these commands needs to
> > get stored in some fashion.
>
> Unless the point of the test is to test how long it takes to store the
> results, why do the results need to be stored?

I think this is explained above.

>
> > Although Xho's "foreach" solution cured my problem, the reaping of the
> > pids is performed in a strict order (reading them until they're
> > finished). I understand the need for this in order to prevent a
> > deadlock but my ultimate goal to to keep these processes as
> > asynchronous as possible (using wait?). Charles' comment on the
> > "asynchronous wait" seems like what I need but this solution needs to
> > run on Windows.
>
> Asynchronous waiting won't solve the deadlock issue.  It only lets your
> parent program do other things while it is waiting (which will be forever
> in a deadlock).  But if the parent has nothing to do while it is waiting,
> there is no point.

Yeah, I just found this out. The asynchronous wait *is* available for
Windows (they say) but it just hung on me.


>
> You could use IO::Select (I've never used it on Windows) to determine which
> child is ready to be read at any given time.  Once you have read a child's
> pipe upto the eof, then you can wait for that child (possibly
> asynchronously).

When I first started this project I "borrowed" a chunk of code from:

http://www.wellho.net/solutions/perl-controlling-multiple-asynchronous-processes-in-perl.html

which utilizes select and signals. It worked great on UNIX but failed
on Windows. It looks like the USR1 signals weren't happening on
Windows. :(


>
> > Here's my question about an alternative... let's say I keep all the
> > processing of these children in the children and don't pipe anything
> > back to the parent. Instead I store the results of the children in a
> > global/shared hash. Can a single hash be operated on by several
> > children.
>
> There are modules that allow that to happen (like forks::shared), but I
> generally try to stay away from them, especially when I am concerned about
> performance.
>

I need to stick with resident Perl modules. We don't want the
requirement of installing additional packages unless it's pure Perl
and instantly portable. I'm not asking for miracles at this point. As
a matter of fact, just having the opportunity to tell someone like
yourself where things are at has helped me think this through. If you
have any other ideas, I'm ready to listen. Thanks, Xho

Tim





------------------------------

Date: 05 Jul 2007 23:55:20 GMT
From: xhoster@gmail.com
Subject: Re: Asynchronous forking(?) processes on Windows
Message-Id: <20070705195523.822$U0@newsreader.com>

Tim <tbrazil@perforce.com> wrote:

> On Jul 5, 1:52 pm, xhos...@gmail.com wrote:
> > Tim <tbra...@perforce.com> wrote:
> > > Hi again
> >
> > > I appreciate all the help. I hate to resurrect this issue but my
> > > manager re-clarified the requirements of this project and I'm back.
> > > Just to give you an overview of what I'm trying accomplish... I'm
> > > creating a harness for performance testing.
> >
> > If you only care about *testing* the performance, I would probably use
> > a completely different method, depending on what aspect of performance
> > you are testing.  If you are only interested in how long it takes the
> > child to finish, then forget the pipes.  Just have the child exit when
> > it is done, throwing away any output it would have generated in a
> > non-test environment.
> >
>
> If I could only wish. Sure, the "times" are my primary interest
> however the potential for requiring the output of the children is
> inevitable. It looks like I'll need the pipes. It's either that or
> creating temp files to store the STDOUT of the child processes and
> then picking it up later.

That would have been my second choice--dump the output into temp files,
then once the stress test is done, go through them at your leisure.  If you
were going to return the values into a hash, then you must already have
a way to generate the unique keys to be used in that hash, so using
them as file-names instead should be easy enough.  Of course, if the Perl
tool that is currently being developed for stress testing purposes will
eventually be used for other purposes, and the other purposes don't have a
"sort through the files at your leisure" phase, then I can see why you
wouldn't use this as your first choice--you might as well have one code
base for both cases.
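
That approach can be sketched as follows (a minimal sketch with a hypothetical naming scheme: one temp file per child, keyed by the child's pid; shown with a real fork(2), as on UNIX):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

for my $job (1 .. 3) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if (!$pid) {
        # Child: dump its output into a file named after its own pid.
        open my $out, '>', "$dir/$$.out" or die "open: $!";
        print $out "child $$ ran job $job\n";
        close $out or die "close: $!";
        exit 0;
    }
}

1 while wait != -1;    # reap all children first

# Parent: sort through the files at its leisure, after the test is done.
for my $file (glob "$dir/*.out") {
    open my $in, '<', $file or die "open: $!";
    print <$in>;
}
```

The pid doubles as the unique hash key mentioned above; on Windows, the same idea would need Win32::Process plus explicitly generated unique file names instead.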

> >
> > You could use IO::Select (I've never used it on Windows) to determine
> > which child is ready to be read at any given time.  Once you have read
> > a child's pipe upto the eof, then you can wait for that child (possibly
> > asynchronously).
>
> When I first started this project I "borrowed" a chunk of code from:
>
> http://www.wellho.net/solutions/perl-controlling-multiple-asynchronous-processes-in-perl.html
>
> which utilizes select and signals.

Maybe I just don't understand it, but that code looks awful.  They just
loop over handles doing "select" one handle at a time.  The point of select
is that you stuff into it all the handles you are interested in, and then
make one call and it will let you know when *any* of these handles is ready
for reading.  If they did it that way, there would be no reason to use
USR1 signals, as the USR1 signal is only used to notify the parent "at
least one child just printed, so there is something there to read."  When
used correctly, select inherently tells you when there is something to read
(or when the other end of a handle has been closed).  But I wouldn't bother
with raw select anyway; I always use the IO::Select wrapper.  There are some
examples in perldoc IO::Select, and I'm sure you can find examples of
its use elsewhere as well.

You would have to change your hash structure to do this--currently you have
pids mapping to file handles, but IO::Select can_read gives you back
handles, not pids.

There are two fundamental ways to handle the select.  One is to
say that the child will not print anything until just before it is done,
at which point it will very rapidly print everything it has, then exit.
In this case, once the child's file handle becomes readable, you read
everything on it, then close the handle.  The other way is to say that the
child might print slowly, so each time a handle becomes readable, you only
read (sysread, actually) as much as you can without blocking, then come
back later to read more.  I'll illustrate only the first, simpler, way:

use strict;
use warnings;
use IO::Select;
my $cmd = qq{perl -le "print q{x}x12"};


my $nchildren = 10;
my %rfhs;

my $s=IO::Select->new();
for( my $ichild = $nchildren; $ichild > 0; $ichild-- )
{
        pipe my($RFH, $WFH) or die $!;
        my $pid=fork;
        if( $pid ) {
            print "Forked pid $pid.\n";
            $rfhs{ $RFH } = $pid;
            $s->add($RFH);
            close $WFH or die $!;
        } elsif( defined $pid ) {
            close $RFH or die $!;
            print $WFH `$cmd 2>&1`;
            close $WFH or die $!;
            exit;
        } else {
            print "fork failed!\n";
            exit 1;
        }
}

while ($s->handles) {
  foreach my $rfh ( $s->can_read() ) {
     my $pid = $rfhs{$rfh};
     print "\n$pid completed. Its output is:\n", <$rfh>;
     $s->remove($rfh);
     close $rfh;
     waitpid($pid, 0) == $pid or die "waitpid: $!";
  };
}

Again, I've done this in a way to make as few changes as feasible
to your original code.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service                        $9.95/Month 30GB


------------------------------

Date: Wed, 04 Jul 2007 15:21:25 -0000
From:  kyle.halberstam@gmail.com
Subject: Check if file is being modified by another process
Message-Id: <1183562485.109038.14360@q75g2000hsh.googlegroups.com>

Hi,

I have an application that creates and writes to an output file I need
to process. I need to process the file once it is completely written.
I do not initially know how big the file will be in the end.
Further, the application does NOT put a write lock on the file while
it is writing. Because of buffering, the program writes to the
file in random chunks, not continuously. And what is worse, the file
format itself can vary, so there is nothing in the actual file that
signals the end of it. Everything is on a Linux server.

What's the most efficient way of checking this? One way is perhaps an
infinite loop checking the mtime until it has been stable for a certain
amount of time? I am not sure.
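
The mtime-polling idea can be sketched like this (a heuristic only: it cannot distinguish "finished" from "writer paused longer than the settle window"):

```perl
use strict;
use warnings;

# Heuristic: declare the file complete once its size and mtime have not
# changed for $settle consecutive seconds. $poll is the check interval.
sub wait_until_stable {
    my ($file, $settle, $poll) = @_;
    $poll ||= 1;
    my ($last, $stable) = ('', 0);
    while (1) {
        my ($size, $mtime) = (stat $file)[7, 9];
        die "stat '$file': $!" unless defined $size;
        my $sig = "$size:$mtime";
        if ($sig eq $last) {
            $stable += $poll;
            return 1 if $stable >= $settle;
        } else {
            ($last, $stable) = ($sig, 0);
        }
        sleep $poll;
    }
}
```

Note the race: a writer that stalls for longer than $settle looks finished, so this is only a best-effort check when you cannot control the writing application.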

Any help will be greatly appreciated. Thanks


Kyle



------------------------------

Date: Wed, 04 Jul 2007 11:01:03 -0500
From: l v <veatchla@yahoo.com>
Subject: Re: Check if file is being modified by another process
Message-Id: <138nh0mepet2126@news.supernews.com>

kyle.halberstam@gmail.com wrote:
> Hi,
> 
> I have an application that creates and writes to an output file I need
> to process. I need to process the file when it is completely written
> to. I do not initially know how big the file will be in the end.
> Further, the application does NOT put a write lock on the file while
> it is writing it. because of the buffering, the program wirtes to the
> file in random chunks not continuously. And what is worse, the file
> format itself could vary so there is nothing in the actual file that
> signals the end of it. Everything is on a linux server.
> 
> What's the most efficient way of checking this? - one way is perhaps
> inifinite loop checking mmtime until it is stable for a certain amount
> of time?? I am not sure.
> 
> Any help will be greatly appreciated. Thanks
> 
> 
> Kyle
> 

Does the Unix command lsof list the output file as being in use?

-- 

Len


------------------------------

Date: 4 Jul 2007 16:01:23 GMT
From: anno4000@radom.zrz.tu-berlin.de
Subject: Re: Check if file is being modified by another process
Message-Id: <5f1uijF38k0m2U1@mid.dfncis.de>

 <kyle.halberstam@gmail.com> wrote in comp.lang.perl.misc:
> Hi,
> 
> I have an application that creates and writes to an output file I need
> to process. I need to process the file when it is completely written
> to. I do not initially know how big the file will be in the end.
> Further, the application does NOT put a write lock on the file while
> it is writing it.

Start the application from a Perl script that does hold a lock.
Roughly:

    use Fcntl qw( :flock);

    my $out = shift;
    $^F = 10_000;
    open my $o, '>', $out or die "Can't create '$out': $!";
    flock $o, LOCK_EX;
    exec '/the/application', '-o', $out;

(Assuming -o sets the output file of the application.)

Setting $^F makes sure the filehandle isn't closed across exec(),
see perlvar.  If you use system() instead of exec() you don't need
it.
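
On the reading side, the processing script can then simply block on the same lock; it unblocks when the application (and with it the inherited filehandle) exits. A rough sketch, with a hypothetical helper name:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Hypothetical reader: blocks until the writer's LOCK_EX is released,
# i.e. until the launched application has exited.
sub wait_for_writer {
    my ($out) = @_;
    open my $fh, '<', $out or die "Can't open '$out': $!";
    flock $fh, LOCK_EX or die "flock: $!";
    return $fh;    # the file is complete; read and process it
}
```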

Anno


------------------------------

Date: Thu, 05 Jul 2007 06:32:09 -0700
From:  wikenfalk@enhorning.se
Subject: Creating and returning a code-reference from a XS routine
Message-Id: <1183642329.520908.34590@q75g2000hsh.googlegroups.com>

Hi

I want to create a SV * which references CODE.

In C I would do

void *fptr(void) {
  int afunc(void);
  return (void *) &afunc;
}

Now I would like to

SV *
fptr

  CODE:
     RETVAL = newRV_noinc( (SV *) &XS_fptr);

  OUTPUT:
     RETVAL

(ignore that the name of the function is probably not correct .. that
part I have sorted out.)
The problem is how to generate the correct RETVAL.
I cannot find this information in perlguts .. maybe because I don't
understand ..

/xb



------------------------------

Date: Thu, 05 Jul 2007 10:34:35 -0400
From: Sherm Pendley <spamtrap@dot-app.org>
Subject: Re: Creating and returning a code-reference from a XS routine
Message-Id: <m2644yc41w.fsf@dot-app.org>

wikenfalk@enhorning.se writes:

> Hi
>
> I want to create a SV * which references CODE.
>
> In c I would do
>
> void *fptr {
>   int afunc(void);
>   return (void *) &afunc;
> }
>
> Now I would like to
>
> SV *
> fptr
>
>   CODE:
>      RETVAL = newRV_noinc( (SV *) &XS_fptr);
>
>   OUTPUT:
>      RETVAL
>

Try fetching the CV* of the function with get_cv(), then return a ref that
points to that:

SV *
fptr

    CODE:
        RETVAL = newRV_noinc( (SV*) get_cv("XS_fptr", 0));

    OUTPUT:
        RETVAL

> (ignore that the name of the function is probably not correct ..

OK.

sherm--

-- 
Web Hosting by West Virginians, for West Virginians: http://wv-www.net
Cocoa programming in Perl: http://camelbones.sourceforge.net


------------------------------

Date: Thu, 5 Jul 2007 20:16:57 +0000 (UTC)
From:  Ilya Zakharevich <nospam-abuse@ilyaz.org>
Subject: Re: Creating and returning a code-reference from a XS routine
Message-Id: <f6jjjp$1i70$1@agate.berkeley.edu>

[A complimentary Cc of this posting was sent to

<wikenfalk@enhorning.se>], who wrote in article <1183642329.520908.34590@q75g2000hsh.googlegroups.com>:
> Now I would like to
> 
> SV *
> fptr
> 
>   CODE:
>      RETVAL = newRV_noinc( (SV *) &XS_fptr);
> 
>   OUTPUT:
>      RETVAL

What is XS_fptr?  A C function which is an XS?

Then look into the .c file generated from your .xs file.  In the BOOT
section, there is some autogenerated code to register XS functions.
Just copy and paste it yourself...

Hope this helps,
Ilya


------------------------------

Date: Fri, 06 Jul 2007 10:29:43 -0000
From: Justin C <justin.0706@purestblue.com>
Subject: DBIx::Simple variable interpolation problem
Message-Id: <3b72.468e1997.1942a@zem>


Yes, it's me again with more of the same... Maybe I should just say
"Thank you Paul" now?

I'm querying a database, trying to use a broad query so data from almost
any field can be used; all fields are searched to find that data, and the
search should then return the record numbers that match.

I have the following query statement:

my $query = $dataSource->query('SELECT key FROM prospect WHERE ? ~* ?', $field, $sc) ;

$field is pulled from an array which is a list of the fields to be
searched. $sc is the search criteria. This isn't matching/returning
anything. If I, however, replace the first ? with the actual field name
I get results. I've tried this code with a print statement just before
$field is called, and it contains what I expect. 

I realise you've not got the data to check this, but a
full working code snippet is below ('working' meaning that should I
change the first ? to a field name then I get results). Anyway, here's
the code I have:

#!/usr/bin/perl

use warnings ;
use strict ;
use DBIx::Simple ;

my ($dataSource, @results) ;
sub db_connect {
    my ( $user, $password) = ("justin", "grobble") ;
    $dataSource = DBIx::Simple->connect(
        'dbi:Pg:database=prospects', $user, $password,
        { RaiseError => 1 , AutoCommit => 1 }
    ) or die DBI::Simple->error ;
}

while (@ARGV) {
    my $sc = pop @ARGV ;	#	Search criteria
    my @dbFields = qw/contact co_name ad1 ad2 ad3 town county p_code country tel1 tel2/ ;
    foreach my $field (@dbFields) {
	db_connect();
	my $query = $dataSource->query('SELECT key FROM prospect WHERE ? ~* ?', $field, $sc) ;
	while (my @row = $query->list){
	    push @results, $row[0];
	}
    }
}

foreach (@results) {
    print $_, "\n";
}

Any suggestions why a field name as a variable makes this not work? Or
is there something wrong with my $field that I'm not seeing?

Thank you for any help you can give with this.

	Justin

-- 
Justin Catterall                               www.masonsmusic.co.uk
Director                                       T: +44 (0)1424 427562
Masons Music Ltd                               F: +44 (0)1424 434362
                           For full company details see our web site


------------------------------

Date: Fri, 06 Jul 2007 06:20:33 -0700
From:  Paul Lalli <mritty@gmail.com>
Subject: Re: DBIx::Simple variable interpolation problem
Message-Id: <1183728033.340188.296130@n2g2000hse.googlegroups.com>

On Jul 6, 6:29 am, Justin C <justin.0...@purestblue.com> wrote:
> Yes, it's me again with more of the same... Maybe I should just say
> "Thank you Paul" now?

I'm tickled that you have that much faith in my powers of debugging,
but please don't discount the wealth of other people in this newsgroup
who can help you.

> I'm querying a database, trying to use a broad query so data from almost
> any field can be used and all fields are searched to find that data, the
> search should then return the record numbers that match.
>
> I have the following query statement:
>
> my $query = $dataSource->query('SELECT key FROM prospect WHERE ? ~* ?', $field, $sc) ;

http://search.cpan.org/~timb/DBI-1.58/DBI.pm#Placeholders_and_Bind_Values

(I realize you're using DBIx::Simple, but that's just a wrapper around
DBI itself).

You can't use place holders for column names.  Place holders are used
for values only.
> I realise you've not got the data to check this, but a
> full working code snippet is below

EXCELLENT!

> ('working' meaning that should I
> change the first ? to a field name then I get results). Anyway, here's
> the code I have:
>
> #!/usr/bin/perl
>
> use warnings ;
> use strict ;
> use DBIx::Simple ;
>
> my ($dataSource, @results) ;
> sub db_connect {
>     my ( $user, $password) = ("justin", "grobble") ;
>     $dataSource = DBIx::Simple->connect(
>         'dbi:Pg:database=prospects', $user, $password,
>         { RaiseError => 1 , AutoCommit => 1 }
>     ) or die DBI::Simple->error ;
>
> }
>
> while (@ARGV) {
>     my $sc = pop @ARGV ;        #       Search criteria
>     my @dbFields = qw/contact co_name ad1 ad2 ad3 town county p_code country tel1 tel2/ ;
>     foreach my $field (@dbFields) {
>         db_connect();

URG.  Why are you connecting to the database multiple times?  Very
very wasteful.  Connect once, then run your queries multiple times.

>         my $query = $dataSource->query('SELECT key FROM prospect WHERE ? ~* ?', $field, $sc) ;

Like I said, you can't use placeholders for column names.  Just put
the columnname directly into your SQL:
my $query = $dataSource->query("SELECT key FROM prospect WHERE $field = ?", $sc);


> Any suggestions why a field name as a variable makes this not work?

Read the URL I pasted above.  The db interface needs to know the
actual syntax of the SQL statement before it can be prepared.  The
column name is needed to validate the SQL.
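
A hedged sketch of the fix: connect once, whitelist the column names so interpolating them into the SQL text is safe, and keep the placeholder only for the value. Only the SQL construction is shown here (the DBIx::Simple query execution is omitted, and the `~*` operator is kept from the original post):

```perl
use strict;
use warnings;

my @dbFields = qw/contact co_name ad1 ad2 ad3 town county p_code country tel1 tel2/;
my %allowed  = map { $_ => 1 } @dbFields;   # whitelist guards the interpolation

my $field = 'town';                         # e.g. chosen in the foreach loop
die "unknown column '$field'" unless $allowed{$field};

# The column name must appear in the SQL text; only the search value
# stays behind a placeholder.
my $sql = "SELECT key FROM prospect WHERE $field ~* ?";
print "$sql\n";
```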

Paul Lalli



------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 618
**************************************

