
Perl-Users Digest, Issue: 1891 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue Sep 30 16:09:50 2008

Date: Tue, 30 Sep 2008 13:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Tue, 30 Sep 2008     Volume: 11 Number: 1891

Today's topics:
    Re: Advice on module for plotting graphs <ben@morrow.me.uk>
    Re: Advice on module for plotting graphs (Vicky Conlan)
    Re: Advice on module for plotting graphs <jimsgibson@gmail.com>
    Re: extracting strings from a text file <yandry77@gmail.com>
    Re: extracting strings from a text file <tim@burlyhost.com>
    Re: extracting strings from a text file <cartercc@gmail.com>
    Re: extracting strings from a text file <tim@burlyhost.com>
    Re: extracting strings from a text file <tim@burlyhost.com>
    Re: file locks and a counter <jimsgibson@gmail.com>
    Re: file locks and a counter <richard@example.invalid>
    Re: file locks and a counter <tim@burlyhost.com>
        find help <jameslockie@mail.com>
    Re: find help (Vicky Conlan)
    Re: find help <jurgenex@hotmail.com>
    Re: find help <spamtrap@dot-app.org>
    Re: Handling Huge Data xhoster@gmail.com
    Re: how to replace $1, $2, $3 ... <lehmannmapson@cnm.de>
    Re: Sybase::CTLib ct_connect problem <tzz@lifelogs.com>
    Re: Sybase::CTLib ct_connect problem somyasharma@gmail.com
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Tue, 30 Sep 2008 16:48:26 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Advice on module for plotting graphs
Message-Id: <abu9r5-f67.ln1@osiris.mauzo.dyndns.org>


Quoth comps@riffraff.plig.net (Vicky Conlan):
> I'm currently writing a small system to automate a process of
> "cut+paste into excel, output as a graph" someone is currently
> having to do.  Importing and munging the data I'm happy* with,
> creating a graph I have absolutely no experience of.  
> 
> I've had a look around CPAN, but there appear to be a million
> and one modules that may do what I'm looking for, does anyone
> have any experience (good or bad) or advice on which direction
> to go?

If you're on Win32 and have Excel, you can use Win32::OLE to
remote-control Excel into producing the graph for you. If you record a
macro in Excel that does what you want and then examine the VB code it
generates, it's usually trivial to convert that into Perl.
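
For what it's worth, here is a minimal, untested sketch of that approach.
The workbook path, range and chart type are only placeholders, and the
xl* constants are pulled from the Excel type library via Win32::OLE::Const:

use strict;
use warnings;
use Win32::OLE;
use Win32::OLE::Const 'Microsoft Excel';   # imports xlLine, xlColumns, ...

my $excel = Win32::OLE->new('Excel.Application', sub { $_[0]->Quit })
    or die "Cannot start Excel: " . Win32::OLE->LastError;
$excel->{Visible} = 0;

my $book  = $excel->Workbooks->Open('C:\\data\\report.xls');   # placeholder
my $sheet = $book->Worksheets(1);

# Add a chart sheet and point it at the data range
my $chart = $book->Charts->Add;
$chart->SetSourceData($sheet->Range('A1:B20'), xlColumns);
$chart->{ChartType} = xlLine;

$book->SaveAs('C:\\data\\report_charted.xls');
$book->Close(0);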

Ben

-- 
        I must not fear. Fear is the mind-killer. I will face my fear and
        I will let it pass through me. When the fear is gone there will be 
        nothing. Only I will remain.
ben@morrow.me.uk                                          Frank Herbert, 'Dune'


------------------------------

Date: Tue, 30 Sep 2008 16:11:19 +0000 (UTC)
From: comps@riffraff.plig.net (Vicky Conlan)
Subject: Re: Advice on module for plotting graphs
Message-Id: <gbtj37$1per$1@magenta.plig.net>

According to <ben@morrow.me.uk>:
>If you're on Win32 and have Excel, you can use Win32::OLE to
>remote-control Excel into producing the graph for you. If you record a
>macro in Excel that does what you want and then examine the VB code it
>generates, it's usually trivial to convert that into Perl.

I'm not on Win32; if I were going to go the Excel route, it would involve
outputting the data to a Solaris machine and then importing it via (some
other method).  That would be the backup method if I couldn't find a
better solution.  I'd rather have a Perl module that outputs pretty
graphs as GIFs (or something similar).
-- 


------------------------------

Date: Tue, 30 Sep 2008 10:40:24 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: Advice on module for plotting graphs
Message-Id: <300920081040244415%jimsgibson@gmail.com>

In article <gbtdlj$15mj$1@magenta.plig.net>, Vicky Conlan
<comps@riffraff.plig.net> wrote:

> I'm currently writing a small system to automate a process of
> "cut+paste into excel, output as a graph" someone is currently
> having to do.  Importing and munging the data I'm happy* with,
> creating a graph I have absolutely no experience of.  
> 
> I've had a look around CPAN, but there appear to be a million
> and one modules that may do what I'm looking for, does anyone
> have any experience (good or bad) or advice on which direction
> to go?
> 
> (One option is always to output excel-importable data and then
> carry on using excel to create the graphs, but that's not really
> very nice)
> 
> Given I don't think gnuplot is available, I'm currently looking
> at SVG::Graph, but if there was something available that came
> bundled with the standard distribution, that would probably win
> the convenience vote.

What platform are you on? Why do you think gnuplot is not available?
Gnuplot would be my recommended approach. It is available as source and
supports many platforms. I have used it on Windows, Linux, and Mac OS X.
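
If it turns out you do have it (or can build it), here is a rough,
untested sketch of driving gnuplot from Perl through a pipe; the file
names are placeholders, and gnuplot is assumed to be on your PATH:

use strict;
use warnings;

open my $gp, '|-', 'gnuplot' or die "Cannot run gnuplot: $!";
print $gp <<'END';
set terminal png
set output '/tmp/plot.png'
set xlabel "time"
set ylabel "value"
plot '/tmp/data.dat' using 1:2 with lines title 'data'
END
close $gp or die "gnuplot exited with status $?";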

-- 
Jim Gibson


------------------------------

Date: Tue, 30 Sep 2008 08:38:35 -0700 (PDT)
From: Andry <yandry77@gmail.com>
Subject: Re: extracting strings from a text file
Message-Id: <2d89b41f-38e6-435d-880b-2840906d9d74@l62g2000hse.googlegroups.com>

On 30 Sep, 15:43, Josef Moellers <josef.moell...@fujitsu-siemens.com>
wrote:
> Andry wrote:
> > Hi,
> > I have a text file captured from an SSH session.
> > Each line of the text looks like this (opened with VI editor):
> > ***********************************************************************************
> > -rw-r--r-- 1 root root 2389787 Sep 30 10:45 ^[[00mfilename.pl^[[00m
> > ***********************************************************************************
> > As you can see a lot of spurious/control/special characters are shown
> > (in VI editor).
> > I need to extract just the filenames at the end of each line (getting
> > rid of spurious characters).
> > The result should be like this:
> > ***********************************************************************************
> > filename.pl
> > ***********************************************************************************
> > Of course, I don't know in advance the value of the string to extract
> > (nor its length to pass to a "substring" function).
> > Can you suggest any method to extract the single file name at the end
> > of each line?
>
> The control characters are ANSI console control sequences.
> AFAIK they consist of an ESC character followed by an optional left
> square bracket followed by numbers separated by semicolons followed by
> a letter, so you might try to weed out "\033.*?[[:alpha:]]".
>
> Another possibility would be to use the command "/bin/ls" rather than
> "ls", since the latter is an alias for "ls --color=auto".
>
> HTH,
>
> Josef
> --
> These are my personal views and not those of Fujitsu Siemens Computers!
> Josef Möllers (Pinguinpfleger bei FSC)
>          If failure had no penalty success would not be a prize (T. Pratchett)
> Company Details: http://www.fujitsu-siemens.com/imprint.html

Thanks Josef!
The /bin/ls option works great!

Now, I can't get the filename out of the string.
I tried with:
$extract =~ s/^.*?(\w+)\s*$/$1/;
and I got:
*******************
pl
*******************
Then, I tried with:
$extract =~ s/^.*?(\w+)\.(\w+)\s*$/$1/;
and I got:
*******************
filename
*******************
While what I want is:
*******************
filename.pl
*******************

Could you help with that, please?

Thanks,
Andrea


------------------------------

Date: Tue, 30 Sep 2008 09:10:15 -0700
From: Tim Greer <tim@burlyhost.com>
Subject: Re: extracting strings from a text file
Message-Id: <HbsEk.2050$YN3.876@newsfe08.iad>

Andry wrote:

> $extract =~ s/^.*?(\w+)\.(\w+)\s*$/$1/;

filename ends up in $1 and pl in $2, but your replacement only uses $1,
so the dot and the extension are dropped; the \. itself isn't captured
at all.

So:

$extract =~ s/^.*?(\w+)\.(\w+)\s*$/$1.$2/;

or:

$extract =~ s/^.*?(\w+\.\w+)\s*$/$1/;

You may also want to anchor on some kind of word boundary so that you
reliably get all of "filename" in filename.pl, depending on how the file
is (or could be) formatted.
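
Untested, but the second form applied to a plain /bin/ls -l line should
give what you want:

my $extract = '-rw-r--r-- 1 root root 2389787 Sep 30 10:45 filename.pl';
$extract =~ s/^.*?(\w+\.\w+)\s*$/$1/;
print "$extract\n";    # filename.pl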
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: Tue, 30 Sep 2008 09:49:53 -0700 (PDT)
From: cartercc <cartercc@gmail.com>
Subject: Re: extracting strings from a text file
Message-Id: <e44f13e3-1152-4aed-b5a8-2ba7261fb483@a1g2000hsb.googlegroups.com>

On Sep 30, 11:38 am, Andry <yandr...@gmail.com> wrote:
> > > -rw-r--r-- 1 root root 2389787 Sep 30 10:45 ^[[00mfilename.pl^[[00m

> Then, I tried with:
> $extract =~ s/^.*?(\w+)\.(\w+)\s*$/$1/;
> and I got:
> *******************
> filename
> *******************
> While what I want is:
> *******************
> filename.pl
> *******************
>
> Could you help with that, please?

UNTESTED

while(<DATA>)
{
  @line = split;
  $filename = $line[9]; #if it IS 9
  $filename =~ s/^.*?(\w+)\.(\w+)\s*$/$1.$2/;
  print $filename, "\n";
}

CC


------------------------------

Date: Tue, 30 Sep 2008 09:52:43 -0700
From: Tim Greer <tim@burlyhost.com>
Subject: Re: extracting strings from a text file
Message-Id: <vPsEk.14000$hX5.1774@newsfe06.iad>

cartercc wrote:

> On Sep 30, 11:38 am, Andry <yandr...@gmail.com> wrote:
>> > > -rw-r--r-- 1 root root 2389787 Sep 30 10:45
>> > > ^[[00mfilename.pl^[[00m
> 
>> Then, I tried with:
>> $extract =~ s/^.*?(\w+)\.(\w+)\s*$/$1/;
>> and I got:
>> *******************
>> filename
>> *******************
>> While what I want is:
>> *******************
>> filename.pl
>> *******************
>>
>> Could you help with that, please?
> 
> UNTESTED
> 
> while(<DATA>)
> {
>   @line = split;
>   $filename = $line[9]; #if it IS 9
>   $filename =~ s/^.*?(\w+)\.(\w+)\s*$/$1.$2/;
>   print $filename, "\n";
> }
> 
> CC

Remember, array indices start from 0, not 1.  If you use split on that
line, the filename is in $line[8].
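
For example (untested):

my @f = split ' ', '-rw-r--r-- 1 root root 2389787 Sep 30 10:45 filename.pl';
print "$f[8]\n";    # filename.pl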
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: Tue, 30 Sep 2008 09:57:32 -0700
From: Tim Greer <tim@burlyhost.com>
Subject: Re: extracting strings from a text file
Message-Id: <1UsEk.14019$hX5.6902@newsfe06.iad>

Andry wrote:

> Hi,
> I have a text file captured from an SSH session.
> Each line of the text looks like this (opened with VI editor):
> ***********************************************************************************
> -rw-r--r-- 1 root root 2389787 Sep 30 10:45 ^[[00mfilename.pl^[[00m
> ***********************************************************************************
> As you can see a lot of spurious/control/special characters are shown
> (in VI editor).
> I need to extract just the filenames at the end of each line (getting
> rid of spurious characters).
> The result should be like this:
> ***********************************************************************************
> filename.pl
> ***********************************************************************************
> Of course, I don't know in advance the value of the string to extract
> (nor its length to pass to a "substring" function).
> Can you suggest any method to extract the single file name at the end
> of each line?
> 
> Thank you,
> Andrea

my $line = '-rw-r--r-- 1 root root 2389787 Sep 30 10:45 ^[[00mfilename.pl^[[00m';

$line = (split /\s+/, $line)[8];
$line =~ s/\^\[\[00m//g;

One way to do it.
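
One caveat: if the ^[ in the captured file is a real ESC byte (which is
how vi usually displays it) rather than the two literal characters, the
substitution would need the escape character itself, e.g.

$line =~ s/\e\[\d+m//g;    # \e matches an actual ESC character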
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: Tue, 30 Sep 2008 10:20:18 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: file locks and a counter
Message-Id: <300920081020182077%jimsgibson@gmail.com>

In article <3iv2e4552nre9br0ls0s7i8qc0v01818ie@4ax.com>, Jürgen Exner
<jurgenex@hotmail.com> wrote:

> Richard Nixon <richard@example.invalid> wrote:
> >How do you keep track of line numbers with a longer perl script?  What I
> >did before was put a line somewhere like
> >some where;
> >and then the compiler would tell me which line had an error.  Then I would
> >move it closer to line 230.  Is there a better way?
> 
> What about just jumping to line 230?

> - In vi I believe it's :230 but my vi is _very_ rusty.

230G

-- 
Jim Gibson


------------------------------

Date: Tue, 30 Sep 2008 13:22:56 -0600
From: Richard Nixon <richard@example.invalid>
Subject: Re: file locks and a counter
Message-Id: <tm4ex9eogfgq$.1mlw30isflrpt$.dlg@40tude.net>

On Mon, 29 Sep 2008 18:21:59 -0700, Jürgen Exner wrote:

> Richard Nixon <richard@example.invalid> wrote:
>>How do you keep track of line numbers with a longer perl script?  What I
>>did before was put a line somewhere like
>>some where;
>>and then the compiler would tell me which line had an error.  Then I would
>>move it closer to line 230.  Is there a better way?
> 
> What about just jumping to line 230?
> - In EMACS M-x goto-line 230. Besides, the current line number is always
> indicated in the status line.
> - In vi I believe it's :230 but my vi is _very_ rusty.
> - Heck, even Notepad has a Goto Line functionality.
> 
> What editor are you using that it doesn't know about line numbers?
> 
> jue

Gosh, juergen, I was unaware that notepad had that functionality.

While I am a "windows guy," I've never really developed any continuing
education with it because I haven't ever been in a place on usenet where
The Topic can be bothered for a practical user concern.

Line 230 is the sysopen line here:

sub check_lock {
   $time = $_[0];

   for ($i = 1; $i <= $time; $i++) {
      if (-e "$data_dir$lock_file") {
         sleep 1;
      }
      else {
         sysopen(FH, $path, O_WRONLY|O_EXCL|O_CREAT) or die $!;
         print FH "0";
         close(FH);
         last;
      }
   }
}

There's no great surprise here, as I've got no file for it to open, so I
think I'm at the level where I can't pursue this further.

Does this address the file lock issue adequately?
-- 
Richard Milhous Nixon

A man came into the the office one day and said he was a sailor. We cured
him of that.
~~ Mark Twain


------------------------------

Date: Tue, 30 Sep 2008 13:05:14 -0700
From: Tim Greer <tim@burlyhost.com>
Subject: Re: file locks and a counter
Message-Id: <%DvEk.7665$YN5.7031@newsfe03.iad>

Richard Nixon wrote:

> 
> 
> Hello newsgroup,
> 
> Fortran is my syntax of choice for my avocation in numerical analysis,
> but
> perl is the much better tool for the net.  For one of the better
> fortran sites, they have this counter:
> 
> http://www.fortranlib.com/flcounter.perl
> 
> If the open statement in this:
> 
> sub check_lock {
>    $time = $_[0];
> 
>    for ($i = 1;$i <= $time; $i++) {
>       if (-e "$data_dir$lock_file") {
>          sleep 1;
>       }
>       else {
>          open(LOCK,">$data_dir$lock_file");
>          print LOCK "0";
>          close(LOCK);
>          last;
>       }
>    }
> }
> 
> is changed to this:
> 
> sysopen(FH, $path, O_WRONLY|O_EXCL|O_CREAT)         or die $!;
> 
> do they have a better program?
> 
> Thanks and cheers,
> 

Matt Wright's scripts were horrible in their day and have no place
being used now.  Even Matt himself recommends people use better-coded
scripts ( http://www.scriptarchive.com/nms.html ).  A lot of his scripts
have race conditions, and the check-then-create locking above is exactly
that: a race condition.

It doesn't make sense to check for the existence of a physical
$filename.lock file to see whether something is locked instead of using
flock (real file locking).  Please don't use any of his scripts; view
them as historical artifacts, not production-ready code.  They are
neither stable nor secure.  Instead, use the NMS (No Matt Script)
version.  You can get the counter here:

http://nms-cgi.sourceforge.net/scripts.shtml

Look for "Text Counter".  You also have a lot of other scripts that are
superior to Matt's that are there, which you should use instead of his
(if you use any of the other ones).
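
For contrast, here is a rough, untested sketch of a flock()-based
counter (the file path is only a placeholder):

use strict;
use warnings;
use Fcntl qw(:flock O_RDWR O_CREAT);

my $counter_file = '/path/to/counter.dat';    # placeholder

sysopen my $fh, $counter_file, O_RDWR | O_CREAT
    or die "Cannot open $counter_file: $!";
flock $fh, LOCK_EX or die "Cannot lock $counter_file: $!";

my $count = <$fh>;
defined $count or $count = 0;
chomp $count;
$count++;

seek $fh, 0, 0  or die "Cannot rewind: $!";
truncate $fh, 0 or die "Cannot truncate: $!";
print {$fh} "$count\n";
close $fh or die "Cannot close: $!";    # closing releases the lock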
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: Tue, 30 Sep 2008 08:29:07 -0700 (PDT)
From: jammer <jameslockie@mail.com>
Subject: find help
Message-Id: <8988a9c1-30f9-4a6f-af39-2023ffdcb145@v28g2000hsv.googlegroups.com>

I  am trying to find all directories of a certain name.
I use `find . -name certainName -type d`;
I do not need to go below the current directory.
I tried adding -depth 1 to find but that doesn't do what I want.

Is there a 100% perl way of doing what I need?

I read the documentation for File::Find and it doesn't seem to only
find directories.


------------------------------

Date: Tue, 30 Sep 2008 15:43:43 +0000 (UTC)
From: comps@riffraff.plig.net (Vicky Conlan)
Subject: Re: find help
Message-Id: <gbthff$1n2o$1@magenta.plig.net>

According to <jameslockie@mail.com>:
>I  am trying to find all directories of a certain name.
>I use `find . -name certainName -type d`;
>I do not need to go below the current directory.
>I tried adding -depth 1 to find but that doesn't do what I want.
>
>Is there a 100% perl way of doing what I need?
>
>I read the documentation for File::Find and it doesn't seem to only
>find directories.

Does this do what you want?

use Data::Dumper;
opendir(DIR, ".") or die "opendir: $!";
my @dirs = grep { /certainName/ && -d $_ } readdir(DIR);
closedir(DIR);
print Dumper(\@dirs);
-- 


------------------------------

Date: Tue, 30 Sep 2008 09:07:58 -0700
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: find help
Message-Id: <3cj4e4hcjt0up9qnqj33pj7q6mf3kr6lkv@4ax.com>

jammer <jameslockie@mail.com> wrote:
>I  am trying to find all directories of a certain name.
>I use `find . -name certainName -type d`;
>I do not need to go below the current directory.
>I tried adding -depth 1 to find but that doesn't do what I want.
>
>Is there a 100% perl way of doing what I need?
>
>I read the documentation for File::Find and it doesn't seem to only
>find directories.

It will find whatever you tell it to find in the wanted() function.

However, you seem to be interested only in the names of the directories
in the current directory, so File::Find is really overkill.
Just opendir(), readdir() in list context, and then grep(-d) on the
listing.
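
Something like this untested sketch ("certainName" is from your post):

opendir my $dh, '.' or die "opendir: $!";
my @dirs = grep { -d && /certainName/ } readdir $dh;
closedir $dh;
print "$_\n" for @dirs;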

jue 



------------------------------

Date: Tue, 30 Sep 2008 12:10:10 -0400
From: Sherm Pendley <spamtrap@dot-app.org>
Subject: Re: find help
Message-Id: <m1y719d38d.fsf@dot-app.org>

jammer <jameslockie@mail.com> writes:

> I read the documentation for File::Find and it doesn't seem to only
> find directories.

Use the -d file test operator to filter out non-directories in your
wanted() function. See:

    perldoc -f -x
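
A minimal, untested sketch (the name comes from your post; note that
File::Find recurses into subdirectories, unlike a plain opendir):

use strict;
use warnings;
use File::Find;

my @dirs;
find(sub { push @dirs, $File::Find::name if -d && $_ eq 'certainName' }, '.');
print "$_\n" for @dirs;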

sherm--

-- 
My blog: http://shermspace.blogspot.com
Cocoa programming in Perl: http://camelbones.sourceforge.net


------------------------------

Date: 30 Sep 2008 17:50:15 GMT
From: xhoster@gmail.com
Subject: Re: Handling Huge Data
Message-Id: <20080930135017.023$n5@newsreader.com>

Vishal G <v3gupta@gmail.com> wrote:
> Hi Guys,
>
> I am trying to edit a bioinformatics package written in Perl which
> was written to handle DNA sequences about 500,000 bases long (a
> string containing 500,000 chars)..
>
> I have to enhance it to handle DNA 100 million bases long...
>
> Each base in DNA has this information, base (A, C, G or T), qual
> (0-99), position (1-length)
>
> there is one main DNA sequence and on average 500,000 parts (max 2000
> chars long, with the same set of information)...

How is this data stored?  Is it all in memory at once?

>
> The program first creates an alignment like
> <code>
>
> *
> Main - .....ACCCTTTGTCTAGTCGTATCGTCGATCGTCGCTAGCTCTGCT....
> Part -
> GTCGTATCGTCGAACGTCGCTAGCTC
> Part -                 CTTTGTCTAGTCGTATCGTCGATCGTCGCT
> Part
> -
> TCGAACGTCGCTAGC
> </code>

It looks like your alignment was line-wrapped into oblivion.  Anyway,
how was the alignment on such a large dataset done?  Couldn't your
quality summarization be best implemented by pushing it into the aligner
code?


> Now, let's say I have to go through each position and find how many
> variations are present at a certain position (with their original
> position and quality).
>
> Look at the * position; there is a T-A variation.
>
> Right now they are using hashes to capture this
>
> %A, %C, %G, %T
>
> Loop For Main DNA {
>     $T{$pos} = $qual;
>     # this tells me that there is a T base at this position with some qual

Since $pos is an integer and seems to be dense (every or almost every
position from 0 up to the length-1 will be occupied), then you should
consider using an array rather than a hash.  That might save some memory.
On the other hand, it might take more memory if most positions are
unanimous, meaning that 3 of the 4 base-hashes would not have a value for
any given position.

Also, where is $qual coming from?  Obviously it isn't a constant over the
life of the loop, like you have it shown.  Doesn't it have to draw from
something in RAM to obtain its value?

>
> }
>
> Update the qual by adding the qual of parts
>
> Loop For Parts {
>            $A{$pos} += $qual # for A parts
>
>            $T{$pos} += $qual # for T parts
> }

Is there another loop over $pos?  If so, is it inside the Loop for parts
or outside of it?  Again, where does $qual come from?

>
> But because the dataset is huge, it consumes a lot of memory...
>
> so basically I am trying to figure out a way to store this information
> without using much memory

You could "pack" the numbers into strings and manipulate them with
"substr". I think there are even some Tie modules that do this for you, but
the speed decrease might be substantial.
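
A rough, untested sketch of that idea (the 16-bit slots and the
four-strings layout are my assumptions, not something from your code):

my $len  = 100_000_000;                                  # positions
my %qual = map { $_ => "\0" x ($len * 2) } qw(A C G T);  # ~2 bytes/pos/base

sub add_qual {
    my ($base, $pos, $q) = @_;
    my $old = unpack 'S', substr($qual{$base}, $pos * 2, 2);
    substr($qual{$base}, $pos * 2, 2) = pack 'S', $old + $q;
}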

What I would probably do is use Inline::C and have the data be accumulated
in a C float or double array, rather than a perl structure.

Or maybe you can address one $pos at a time, and output the results of that
$pos to disk before moving on to the next one, rather than accumulating
into memory.

>
> If you dont understand the above problem, dont worry....
>
> just tell me how to handle huge data which needs to be accessed
> frequently using the least possible memory..

"Don't worry about what disease I actually have, doc, just give me the
cure."  I'm afraid that isn't likely to work well.  The details of the
solution are likely to depend on the details of the problem.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Tue, 30 Sep 2008 19:27:27 +0200
From: Marten Lehmann <lehmannmapson@cnm.de>
Subject: Re: how to replace $1, $2, $3 ...
Message-Id: <6kf5s2F6mlanU1@mid.individual.net>

Hello,

> As Paul pointed out, "Just do your regexp in list context rather than 
> scalar context.", so
> 
> use warnings;
> use strict;
> 
> my $a = "12 34 56 78 90";
> if (my @v = ($a =~ /(\d\d)/g)) {
>     print join("\n", @v), "\n";
> }

this looks fine. I was just expecting that there was a predefined
variable I didn't know about, like $_ in "foreach (@array) {" or $@ in
"eval { ... };".

Regards
Marten


------------------------------

Date: Tue, 30 Sep 2008 11:27:40 -0500
From: Ted Zlatanov <tzz@lifelogs.com>
Subject: Re: Sybase::CTLib ct_connect problem
Message-Id: <86skrh4n0j.fsf@lifelogs.com>

On Tue, 30 Sep 2008 04:56:05 -0700 (PDT) somyasharma@gmail.com wrote: 

s> Hi,
s> I am trying to use SyBase::CTLib with perl 5.6 and Sybase ASE 12.5. I
s> was trying a simple script to start off.

s> but, my script exits with this error :

s> Open Client Message:
s> Message number: LAYER = (1) ORIGIN = (1) SEVERITY = (1) NUMBER = (191)
s> Message String: ct_connect(): user api layer: external error: The
s> connection failed because of invalid or missing external configuration
s> data.

s> Can any one help in this regard? i have checked the existence of
s> external dependency file (ocs.cfg) in the directory $SYBASE/
s> $SYBASE_OCS/config.

I've only used DBD::Sybase, but this error seems to be at the library
level.  I've had problems with missing locale files; make sure
*everything* from the stock install of the client is in place and then
move things out of the way carefully if you need to save space.  Here's
what we have in $SYBASE that works for us:

charset.gz  charsets  config  interfaces  loca.gz  locales  OCS-15_0 ocs.gz  profile  sqsh-2.1.4

By the way, sqsh is a very good way to cope with Sybase :)

Ted



------------------------------

Date: Tue, 30 Sep 2008 09:47:49 -0700 (PDT)
From: somyasharma@gmail.com
Subject: Re: Sybase::CTLib ct_connect problem
Message-Id: <f4f9656f-016e-43a9-ac1d-910f339903a9@j22g2000hsf.googlegroups.com>

Hi Ted,
Thanks for the reply.  The thing is that if I use Sybase::DBlib
instead of CTlib, everything works fine.  On top of that, there are
lots of existing C++ components which use Sybase's CTLib.  I get this
error when I try to use Sybperl in a Perl script.

The issue is somewhat baffling :(

I am trying this out in a very restricted environment, so I am not sure
whether I will be allowed to experiment with the $SYBASE directory.
Thanks for the input, though :)


On Sep 30, 9:27 pm, Ted Zlatanov <t...@lifelogs.com> wrote:
> On Tue, 30 Sep 2008 04:56:05 -0700 (PDT) somyasha...@gmail.com wrote:
>
> s> Hi,
> s> I am trying to use SyBase::CTLib with perl 5.6 and Sybase ASE 12.5. I
> s> was trying a simple script to start off.
>
> s> but, my script exits with this error :
>
> s> Open Client Message:
> s> Message number: LAYER = (1) ORIGIN = (1) SEVERITY = (1) NUMBER = (191)
> s> Message String: ct_connect(): user api layer: external error: The
> s> connection failed because of invalid or missing external configuration
> s> data.
>
> s> Can any one help in this regard? i have checked the existence of
> s> external dependency file (ocs.cfg) in the directory $SYBASE/
> s> $SYBASE_OCS/config.
>
> I've only used DBD::Sybase, but this error seems to be at the library
> level.  I've had problems with missing locale files; make sure
> *everything* from the stock install of the client is in place and then
> move things out of the way carefully if you need to save space.  Here's
> what we have in $SYBASE that works for us:
>
> charset.gz  charsets  config  interfaces  loca.gz  locales  OCS-15_0 ocs.gz  profile  sqsh-2.1.4
>
> By the way, sqsh is a very good way to cope with Sybase :)
>
> Ted



------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 1891
***************************************

