

Perl-Users Digest, Issue: 684 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Thu Jul 26 09:09:54 2007

Date: Thu, 26 Jul 2007 06:09:09 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Thu, 26 Jul 2007     Volume: 11 Number: 684

Today's topics:
    Re: @arts <bik.mido@tiscalinet.it>
    Re: @arts <bik.mido@tiscalinet.it>
    Re: @arts <bik.mido@tiscalinet.it>
    Re: @arts <bik.mido@tiscalinet.it>
    Re: filehandle, read lines <roy.schultheiss@googlemail.com>
        fork command. <rajendra.prasad@in.bosch.com>
    Re: fork command. anno4000@radom.zrz.tu-berlin.de
    Re: fork command. <bik.mido@tiscalinet.it>
    Re: Help reading a file <tadmc@seesig.invalid>
    Re: match string by re using some pattern anno4000@radom.zrz.tu-berlin.de
    Re: match string by re using some pattern <admiralcap@gmail.com>
        only once in storage <a@mail.com>
    Re: only once in storage <peter@makholm.net>
    Re: only once in storage <noreply@gunnar.cc>
    Re: only once in storage <josef.moellers@fujitsu-siemens.com>
    Re: only once in storage <peter@makholm.net>
    Re: only once in storage <josef.moellers@fujitsu-siemens.com>
        Problem with excel workbook  marzec.wojciech@gmail.com
        Using CallManager AXL interface with perl and SOAP::Lite  julien.laffitte@gmail.com
    Re: XML Validation <john1949@yahoo.com>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Thu, 26 Jul 2007 13:04:47 +0200
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: @arts
Message-Id: <ervga3tgont4ur81cf423dbkonug6ln20i@4ax.com>

On Wed, 25 Jul 2007 23:29:48 GMT, Tad McClellan <tadmc@seesig.invalid>
wrote:

>I'm afraid not.
>
>It has changed its identity dozens of times over the past few years,
>so we can expect that it will do so yet again.

I'm deleting his posts manually. But perhaps I should just killfile
"vronans". There is a minimal risk of killfiling someone who doesn't
deserve it, but the risk is well worth the gain.


Michele
-- 
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
 .'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,


------------------------------

Date: Thu, 26 Jul 2007 13:50:50 +0200
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: @arts
Message-Id: <mvvga3d01jl59ebk7nckloimt0sf3qdq3r@4ax.com>

On Wed, 25 Jul 2007 17:36:14 -0700, "Wade Ward" <zaxfuuq@invalid.net>
wrote:

>I've been reading about the functions that I used in the original post.  In 
>that syntax is
>for (reverse $first..$last) { }

Just one minor nitpick: C<for> is not really a function (AIUI it will
be in Perl 6, as will most control structures) but a keyword, with
special syntax and special semantics: in other words, you could not
implement it as a sub.

>The only way I can see what happens here is if I replace it with my 
>pre-existing loop notions from C:

Fair enough, but...

>for (i = $last; i >= $first; -- i) {}

(Incidentally, this *may* be valid Perl code, if i is a suitable sub,
but more likely you meant $i rather than i.)

>At least one problem with this substitution is that I can't figure out how 
>the code between the curly brackets knows it's being looped on without 
>reference to the dummy i .  How does the intervening code in the former for 

 ...don't forget that while Perl's syntax resembles C's in some
respects, it is quite a different language, and a higher-level one: for
one thing it tries to DWIM ("do what I mean"), and it is strongly
inspired by natural languages. Natural languages have pronouns, and in
English one of the commonest is "it". Perl's "it" is spelled $_ and is
called the "topicalizer", because it is the implicit topic in many
contexts. Here, specifically, when you have

  for (LIST) { do_something with $_; }

each element in turn is aliased to $_ as the list is looped over. If
the block is large and you would prefer a more expressive variable
name, you can do

  for my $item (LIST) { do_something with $item; }

Of course C<my> is not strictly necessary, but it declares $item as
lexically scoped to the block, i.e. as close as possible to its usage,
which is good practice.
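A minimal, self-contained example of both forms (the numbers are made
up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @nums = (1, 2, 3);

# $_ is an alias to each element in turn...
for (@nums) {
    $_ *= 10;    # ...so modifying $_ modifies @nums itself
}
print "@nums\n";    # prints "10 20 30"

# The same loop with an explicit, lexically scoped loop variable:
for my $item (@nums) {
    print "item: $item\n";
}
```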

Having said all this: I know it is getting quite repetitive and
annoying (both for you and for most of us), but you should really get
familiar with a bare minimum of Perl first. The particular question you
asked here is rarely asked because it is very much baby Perl... I'm
sure just about any good tutorial or introductory book will make you
familiar with the concept in its first few chapters, so I
wholeheartedly encourage you to read one thoroughly. Otherwise you will
continue to ask trivial questions that get on the nerves of several
people. Just as with C<for>, there is similar behaviour in connection
with C<while> and several other common constructs and functions. Will
you ask again each time? And if *all* newbies tried to learn this way
and came here asking the very same questions, can you imagine what a
place this would be?! There are far more interesting discussions here.
Hold your breath, learn more, and then you will enjoy them, and start
new ones.


Michele
-- 
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
 .'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,


------------------------------

Date: Thu, 26 Jul 2007 13:55:48 +0200
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: @arts
Message-Id: <2n2ha3h1baj7hr55u7vh75f0k7oseeln6f@4ax.com>

On Wed, 25 Jul 2007 21:08:07 -0400, Sherm Pendley
<spamtrap@dot-app.org> wrote:

>Keep in mind that $this_element (or $_ if you didn't explicitly declare an
>alias variable) is an alias for the corresponding list element, not a copy
>of it. That means that assigning a value to the alias will in fact change
>the value of the corresponding list element.

(For completeness...) if it can be changed: otherwise a "Modification
of a read-only value attempted" error will be issued.
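A small sketch of when that error shows up (constants in the loop list
are read-only; array elements are not):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The elements of the list (1, 2, 3) are constants, so the alias is
# read-only and modifying it is a fatal (but trappable) error:
my $err = '';
eval {
    for my $n (1, 2, 3) {
        $n++;    # dies: the alias points at the constant 1
    }
};
$err = $@;
print $err;    # Modification of a read-only value attempted at ...

# Elements of an array variable, however, are writable via the alias:
my @a = (1, 2, 3);
$_++ for @a;
print "@a\n";    # prints "2 3 4"
```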

>Also, note that "for" and "foreach" are synonyms and either one can be used
>with either style of loop. However, some folks (myself included) prefer to
>use "for" for C-style loops with a counter variable, and "foreach" for Perl-
>style loops with an alias variable. I also prefer to explicitly declare the

It seems that, fortunately, $Larry thinks differently, as C<foreach>
won't be in Perl 6. (Of course it would be easy enough to reintroduce
it...) The C-style C<for> loop, on the other hand, will be renamed
C<loop>, which is also good.


Michele
-- 
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
 .'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,


------------------------------

Date: Thu, 26 Jul 2007 13:57:05 +0200
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: @arts
Message-Id: <vu2ha3dvi6at3gfafce8spe1npase1d0j3@4ax.com>

On Wed, 25 Jul 2007 19:40:34 -0700, "Wade Ward" <zaxfuuq@invalid.net>
wrote:

>Smithers shows up with flames on his back while the old fart holds running 
>water and doesn't choose to intervene.

Please don't spoil it for people like me who are likely to have a year
or more to wait before seeing the show.


Michele
-- 
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
 .'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,


------------------------------

Date: Thu, 26 Jul 2007 12:50:58 -0000
From:  roy <roy.schultheiss@googlemail.com>
Subject: Re: filehandle, read lines
Message-Id: <1185454258.095558.179480@d55g2000hsg.googlegroups.com>

I receive an XML file of up to 1 GB, full of orders, every day. I have
to split the orders and load them into a database for further
processing. I share this job among multiple processes. This runs
properly now.

Here is a short excerpt from the code:

------------------------------ 8< ------------------------------

use IO::File;
use Proc::Simple;
use Fcntl qw(:seek);
use constant MAX_PROCESSES  => 10;

$filesize = -s "... file";
$step = int($filesize/MAX_PROCESSES+1);

for (my $i=0; $i<MAX_PROCESSES; $i++) {
    $procs[$i] = Proc::Simple->new();
    $procs[$i]->start(\&insert_orders, $filename, $i*$step, ($i+1)*$step);
}

 ...

sub insert_orders {
    my ($filename, $from, $to) = @_;

    my $xml = new IO::File;
    open ($xml, "< $filename");

    if ($xml = set_handle ($xml, $from)) {
        while (defined ($_dat = <$xml>)) {
            $_temp = "\U$_dat\E";                   # convert to capital letters
            $_temp =~ s/\s+//g;                     # remove whitespace

            if ($_temp eq '<ORDER>') {
                $_mode  = 'order';
                $_order = '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
            }

            $_order .= $_dat if $_mode eq 'order';

            if ($_temp eq '</ORDER>') {
                # load $_order into the database ...

                $_order = '';
                $_mode = '';

                last if ($to <= tell ($xml));
            }
        }
    }

    ...

    close ($xml);
    return 1;
}


sub set_handle {
    my ($handle, $pos) = @_;

    seek($handle, $pos, SEEK_CUR);

    if (defined (<$handle>))    # start at the next full line
        { return $handle; }
    else
        { return; }
}

------------------------------ 8< ------------------------------

It must be guaranteed that each process starts at a different order;
otherwise an order would be inserted more than once.
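The line-boundary alignment that set_handle() performs can be sketched
in isolation like this (the numbered-lines test file and the open_at()
helper are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(SEEK_SET);
use File::Temp qw(tempfile);

# A small test file of fixed-format lines ("line 1\n" is 7 bytes).
my ($out, $file) = tempfile();
print $out "line $_\n" for 1 .. 9;
close $out;

# Like set_handle() above: seek to an arbitrary byte offset, then
# discard the (possibly partial) line so reading resumes on a line
# boundary.
sub open_at {
    my ($filename, $pos) = @_;
    open my $h, '<', $filename or die "open $filename: $!";
    if ($pos > 0) {
        seek $h, $pos, SEEK_SET;
        my $discard = <$h>;    # skip ahead to the next full line
    }
    return $h;
}

my $h = open_at($file, 10);    # offset 10 falls inside "line 2\n"
my $first = <$h>;
print $first;                  # prints "line 3\n", the next full line
close $h;
```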

So, thank you all for your answers. Let me know if you want more
information about the code.

Regards,

roy



------------------------------

Date: Thu, 26 Jul 2007 15:43:24 +0530
From: "rajendra" <rajendra.prasad@in.bosch.com>
Subject: fork command.
Message-Id: <f89s46$i07$1@news4.fe.internet.bosch.com>

Hello All,
The Perl documentation says the fork command generates two copies of the
program, one parent and one child.
If this is correct, can this fork command be used for multitasking?





------------------------------

Date: 26 Jul 2007 11:29:14 GMT
From: anno4000@radom.zrz.tu-berlin.de
Subject: Re: fork command.
Message-Id: <5gresaF3hse59U1@mid.dfncis.de>

rajendra <rajendra.prasad@in.bosch.com> wrote in comp.lang.perl.misc:
> Hello All,
> The perl documentation says the fork command generates two copies of the
> program ,one parent and one child.
> If this is correct,can this  fork command be used for multitasking?.

Yes, that is its very purpose.

Anno


------------------------------

Date: Thu, 26 Jul 2007 14:09:43 +0200
From: Michele Dondi <bik.mido@tiscalinet.it>
Subject: Re: fork command.
Message-Id: <g43ha3ps79vfllk1etgk3eoolsckdp2l29@4ax.com>

On Thu, 26 Jul 2007 15:43:24 +0530, "rajendra"
<rajendra.prasad@in.bosch.com> wrote:

>The perl documentation says the fork command generates two copies of the
>program ,one parent and one child.
>If this is correct,can this  fork command be used for multitasking?.

Well, yes: but what do you mean exactly? AIUI multitasking is better
seen as a property of "some" OSen, and fork() is one way to use it to
one's advantage...
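A minimal sketch of the fork()/waitpid() pattern (the exit status 7 is
an arbitrary stand-in for real work):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# fork() returns the child's pid in the parent and 0 in the child; both
# copies then run concurrently until the parent reaps the child.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

my $status;
if ($pid == 0) {
    # child: stand-in for real work, then exit with a status code
    exit 7;
}
else {
    waitpid $pid, 0;       # parent: wait for the child to finish
    $status = $? >> 8;     # high byte of $? holds the exit status
    print "child exited with status $status\n";
}
```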


Michele
-- 
{$_=pack'B8'x25,unpack'A8'x32,$a^=sub{pop^pop}->(map substr
(($a||=join'',map--$|x$_,(unpack'w',unpack'u','G^<R<Y]*YB='
 .'KYU;*EVH[.FHF2W+#"\Z*5TI/ER<Z`S(G.DZZ9OX0Z')=~/./g)x2,$_,
256),7,249);s/[^\w,]/ /g;$ \=/^J/?$/:"\r";print,redo}#JAPH,


------------------------------

Date: Thu, 26 Jul 2007 06:00:23 -0500
From: Tad McClellan <tadmc@seesig.invalid>
Subject: Re: Help reading a file
Message-Id: <slrnfagvm7.6on.tadmc@tadmc30.sbcglobal.net>

micky <micky@hotmail.com> wrote:


> Subject: Help reading a file


You don't need help reading a file.

You need help processing the contents of a file.

[snip data]

> And so on. I'd like to read it into an array such that node_xxxx goes in 
> char[1], the string after --> goes in @char[2], the number in the 1st 
> column goes into @char[3] and so on.
>
> Could someone help me with this?


-----------------------------
#!/usr/bin/perl
use warnings;
use strict;

while ( <DATA> ) {
    next unless /-->/;
    my @fields = split / (?:--|==)> |\s+/;
    foreach my $i ( 1 .. $#fields) {
        print "$i: $fields[$i]\n";
    }
    print "\n";
}

__DATA__
        node_2382 --> Peptostreptococc 10             1  0.333  T ==> C
                                       21             1  0.500  G ==> A
                                       23             1  0.500  G ==> A
                                       25             1  0.750  G ==> T
                                       31             1  1.000  C ==> T
                                       45             1  0.500  G ==> A
                                       47             1  0.250  T ==> C
        node_2384 --> coccostreptococc 10             1  0.333  T ==> C
                                       21             1  0.500  G ==> A
                                       23             1  0.500  G ==> A
                                       25             1  0.750  G ==> T
                                       31             1  1.000  C ==> T
                                       45             1  0.500  G ==> A
                                       47             1  0.250  T ==> C
-----------------------------


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"


------------------------------

Date: 26 Jul 2007 08:00:21 GMT
From: anno4000@radom.zrz.tu-berlin.de
Subject: Re: match string by re using some pattern
Message-Id: <5gr2klF3h38ssU1@mid.dfncis.de>

frytaz@gmail.com <frytaz@gmail.com> wrote in comp.lang.perl.misc:
> On Jul 25, 11:26 pm, anno4...@radom.zrz.tu-berlin.de wrote:

[...]

> > I am still at a loss guessing what you are really up to.
> >
> > Anno
> 
> OK, I'll try to explain it
> 
> for instance we parse http web page
> 
> $line = "section BOOKS - title SOME_BOOK_TITLE - price 20";

[...]

> now we try to parse other page where
> 
> $line = "title CD_TITLE - price 50 - section MUSIC";

The technique has been shown to you repeatedly.  Here it is
again, adapted to your newest variant of the problem:

    my @lines = (
        "section BOOKS - title SOME_BOOK_TITLE - price 20",
        "title CD_TITLE - price 50 - section MUSIC",
    );

    my $mark = qr/section|title|price|$/;

    for ( @lines ) {
        print "$_\n";
        my %h = /($mark)(.+?)(?=$mark)/g;
        my ( $section, $title, $price) = @h{ qw( section title price)};
        print "section: $section, title: $title, price: $price\n";
    }

Anno


------------------------------

Date: Thu, 26 Jul 2007 03:00:42 -0700
From: Matt Madrid <admiralcap@gmail.com>
Subject: Re: match string by re using some pattern
Message-Id: <V8adneT048BM7TXbnZ2dnUVZ_ternZ2d@comcast.com>

frytaz@gmail.com wrote:
> 
> OK, I'll try to explain it
> 
> for instance we parse http web page
> 
> $line = "section BOOKS - title SOME_BOOK_TITLE - price 20";
[snip]
> 
> now we try to parse other page where
> 
> $line = "title CD_TITLE - price 50 - section MUSIC";
[snip]

> in this example, need to put different order of section,title,price


I'm a little lost trying to figure out what you want to do also, but I'm
going to guess that you want to extract the title, price, and section from
each line, no matter what order they are in.

Continuing on with what Anno showed you earlier, how about something like this:

---------------------------------------------------------------
use strict;
use warnings;
#use Data::Dumper;

my @items;
while ( <DATA> ) {
    my %hash = /(section|title|price)\s+?(.+?)\s*?[-\n]/g;
    push(@items,\%hash) if keys(%hash);
}
#print Dumper(\@items);

#Now you have a list of items, each of which is a hash ref
#that you can access to grab the info:

foreach (@items) {
    my ($title, $price, $section) = @{$_}{'title','price','section'};
    print
    "Title: '$title'\n".
    "Price: '$price'\n".
    "Section: '$section'\n\n";
}

__DATA__
section BOOKS - title Green Eggs and Ham - price 6.95
price 6.95 - section BOOKS - title The Cat In The Hat


---------------------------------------------------------------


Matt M.


------------------------------

Date: Thu, 26 Jul 2007 09:43:12 GMT
From: "a" <a@mail.com>
Subject: only once in storage
Message-Id: <Q0_pi.6244$_d2.476@pd7urf3no>

Hi,

I am writing a script to download all the links of a whole site. The
site's link structure is not a simple tree; there may be replicated
links pointing to the same location.

So I need to walk through the site to extract and push the URLs of each
page into a data structure. I don't want the replicated links: every
link should appear only once in my storage.
Is there an effective way to achieve this?

Thanks




------------------------------

Date: Thu, 26 Jul 2007 09:50:33 +0000
From: Peter Makholm <peter@makholm.net>
Subject: Re: only once in storage
Message-Id: <87d4yfmr2u.fsf@hacking.dk>

"a" <a@mail.com> writes:

> So, I need to walk through the site to extract and push the URLs of each
> page into a data structure.
> I dont want the replicated links. Every link should only appear once in my
> storage.

You can use a hash to check whether a URL has been seen before:

  my %seen;
  my $url;

  while ($url = getnext()) {
        process_url($url) unless $seen{$url}++;
  }

//Makholm


------------------------------

Date: Thu, 26 Jul 2007 11:49:49 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: only once in storage
Message-Id: <5gr989F3hsro7U1@mid.individual.net>

a wrote:
> I am writing a script to download all the links of the whole site. The link
> of the web site is not a simple tree. There may be some replicated links
> pointing to the same location.
> 
> So, I need to walk through the site to extract and push the URLs of each
> page into a data structure.
> I dont want the replicated links. Every link should only appear once in my
> storage.
> So, is there any effective way to achieve this?

Use a hash.

-- 
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl


------------------------------

Date: Thu, 26 Jul 2007 11:55:23 +0200
From: Josef Moellers <josef.moellers@fujitsu-siemens.com>
Subject: Re: only once in storage
Message-Id: <f89r2h$9mi$1@nntp.fujitsu-siemens.com>

Peter Makholm wrote:
> "a" <a@mail.com> writes:
>
>
>>So, I need to walk through the site to extract and push the URLs of each
>>page into a data structure.
>>I dont want the replicated links. Every link should only appear once in my
>>storage.
>
>
> You can use a hash to check if an url has been seen before:
>
>   my %seen;
>   my $url;
>
>   while ($url = getnext()) {
>         process_url($url) unless $seen{$url}++;
>   }

I'd split that:

1. collect all urls:
    $urls{getnext()} = 1;
2. process all unique urls:
    process_url($_) foreach (keys %urls);


-- 
These are my personal views and not those of Fujitsu Siemens Computers!
Josef Möllers (Pinguinpfleger bei FSC)
	If failure had no penalty success would not be a prize (T. Pratchett)
Company Details: http://www.fujitsu-siemens.com/imprint.html



------------------------------

Date: Thu, 26 Jul 2007 10:19:40 +0000
From: Peter Makholm <peter@makholm.net>
Subject: Re: only once in storage
Message-Id: <878x93mpqb.fsf@hacking.dk>

Josef Moellers <josef.moellers@fujitsu-siemens.com> writes:

>>   while ($url = getnext()) {
>>         process_url($url) unless $seen{$url}++;
>>   }
>
> I'd split that:
>
> 1. collect all urls:
>    $urls{getnext()} = 1;
> 2. process all unique urls:
>    process_url($_) foreach (keys %urls);

That would not work if getnext() returns an element from a work queue
and process_url() inserts URLs into the work queue based on the content
fetched from each URL.

I would probably do the check while inserting into the work queue. So
in the example above, getnext() would be part of extracting URLs from
the content, and process_url() would do the insertion into the work
queue.
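A sketch of checking at insertion time, with a made-up in-memory link
graph standing in for fetched pages (the enqueue() helper and the
%links data are illustrative, not from the original code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A toy link graph standing in for "links extracted from a fetched
# page"; note the cycles and duplicate links.
my %links = (
    '/a' => ['/b', '/c'],
    '/b' => ['/a', '/c'],
    '/c' => ['/b'],
);

my (%seen, @queue, @order);

# The duplicate check happens at insertion time, so the work queue
# never holds the same URL twice even when pages link back.
sub enqueue {
    my ($url) = @_;
    push @queue, $url unless $seen{$url}++;
}

enqueue('/a');
while (my $url = shift @queue) {
    push @order, $url;                        # "process" the page
    enqueue($_) for @{ $links{$url} || [] };  # queue its links
}

print "processed: @order\n";    # prints "processed: /a /b /c"
```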

/Makholm


------------------------------

Date: Thu, 26 Jul 2007 13:00:52 +0200
From: Josef Moellers <josef.moellers@fujitsu-siemens.com>
Subject: Re: only once in storage
Message-Id: <f89ut6$n28$1@nntp.fujitsu-siemens.com>

Peter Makholm wrote:
> Josef Moellers <josef.moellers@fujitsu-siemens.com> writes:
>
>
>>>  while ($url = getnext()) {
>>>        process_url($url) unless $seen{$url}++;
>>>  }
>>
>>I'd split that:
>>
>>1. collect all urls:
>>   $urls{getnext()} = 1;
>>2. process all unique urls:
>>   process_url($_) foreach (keys %urls);
>
>
> That would not work if getnext() return a element from a work queue
> and process_url() inserted urls in the work queue based on the content
> fetched from the url.
>
> I would probally do the check while inserting into the work queue. So
> in the above example getnext() is part of the extract urls from
> content and process_url is inserting in work queue.

Yes, sorry, I ignored the "the web site is not a simple tree" part.

-- 
These are my personal views and not those of Fujitsu Siemens Computers!
Josef Möllers (Pinguinpfleger bei FSC)
	If failure had no penalty success would not be a prize (T. Pratchett)
Company Details: http://www.fujitsu-siemens.com/imprint.html



------------------------------

Date: Thu, 26 Jul 2007 05:11:56 -0700
From:  marzec.wojciech@gmail.com
Subject: Problem with excel workbook
Message-Id: <1185451916.526387.238700@g4g2000hsf.googlegroups.com>

Hi all :)

I have a small script like the one below:

use Win32::OLE qw(in with);
use Win32::OLE::Const 'Microsoft Excel';
use Cwd;

$Win32::OLE::Warn = 3;
my $path = getcwd."/przyklad.xls";
$path =~ tr /\//\\/;
my $excel=Win32::OLE->new('Excel.Application','Quit');
my $book = $excel->Workbooks->Add;
my $sheet = $book->Worksheets(1);
$book->Sheets->Add;
$book->SaveAs($path);
print $sheet->Name;
#$sheet->Name = "something";
undef $book;
$excel->Quit;
undef $excel;

 ...and I want to rename one of the Excel sheets. Unfortunately
$sheet->Name is read-only, and when I uncomment that line I receive
the message:
Can't modify non-lvalue subroutine call at excel_basic.pl

I will be grateful for any help.

Thank You



------------------------------

Date: Thu, 26 Jul 2007 13:08:56 -0000
From:  julien.laffitte@gmail.com
Subject: Using CallManager AXL interface with perl and SOAP::Lite module
Message-Id: <1185455336.944655.128890@k79g2000hse.googlegroups.com>

Hi all,

I am trying to write a Perl script to use the Cisco CallManager AXL
interface and get/process IP phone information.

I know I have to use the SOAP::Lite package, but I really do not
understand how it works, especially with the CallManager interface.

Actually, I do not know how to connect to the server: what must I set
in the URI field? In the proxy one? This part of the code is really
not clear to me.

From what I have read on the web and seen on the server, I would have
written:

my $CCM = SOAP::Lite
      ->uri(xxx.xxx.xxx.xxx:443)
      ->proxy("xxx.xxx.xxx.xxx/API/AXL/V1/axlsoap.xsd")

But something tells me it's not really right ;)

An older thread by smeenz showed me how to get data after that (even
if it is not the information I have to get from the server):

my $res =
  $soap->getPhone(
       SOAP::Data->name(phoneName => 'SEP000000000000')
  )


If there is specific documentation on this, I was unable to find it :(

Can anyone help me?

Julien



------------------------------

Date: Thu, 26 Jul 2007 09:15:31 +0100
From: "John" <john1949@yahoo.com>
Subject: Re: XML Validation
Message-Id: <1KidnWfnP4O-xTXbnZ2dnUVZ8qSnnZ2d@eclipse.net.uk>


"Shiraz" <shirazk@gmail.com> wrote in message 
news:1185400596.357712.287670@q75g2000hsh.googlegroups.com...
>I am trying to use the XML simple to parse out some xml data. If I use
> the code below with invalid xml, i just get a warning 'not well-formed
> (invalid token) at line 1, column 16, byte 16 at /usr/local/lib/perl5/
> site_perl/5.8.7/i686-linux/XML/Parser.pm line 187'
> A test like 'unless (my $data = $xml->XMLin($msg)  ) ' doesnt work
> either.
> Anyone know how to test for valid XML using just XML::Simple or would
> i have to get a XML checking library
>
> Thanks,
>
> code:
> #!/usr/bin/perl
> use strict;
> use XML::Simple;
> $|=1;
> my $xml = new XML::Simple;
> my $msg =  '<xml><select app>orig_gw</select></xml>'; #this is bad xml
> my $data = $xml->XMLin($msg)
>
> result:
> not well-formed (invalid token) at line 1, column 16, byte 16 at /usr/
> local/lib/perl5/site_perl/5.8.7/i686-linux/XML/Parser.pm line 187
>

Try the following and see if your error persists:-

use strict;
use warnings;
use XML::Simple; $XML::Simple::PREFERRED_PARSER = 'XML::Parser';
use Data::Dumper; # format is print Dumper($request)

my $xmlsimple = new XML::Simple (ForceArray => 1, suppressempty => 1); # create object
my $xml = "<xml><select app>orig_gw</select></xml>";
my $data = $xmlsimple->XMLin($xml); # read the XML string
print Dumper($data);
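To actually *test* for valid XML, one common approach is to trap the
fatal error in an eval block and inspect $@. A self-contained sketch of
the pattern follows; parse_or_die() is a hypothetical stand-in that
dies the way XML::Parser does, and with the real module you would write
my $data = eval { $xmlsimple->XMLin($xml) }; instead:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# parse_or_die() stands in for $xmlsimple->XMLin($xml): XML::Simple's
# underlying parser dies on malformed input just like this.
sub parse_or_die {
    my ($msg) = @_;
    die "not well-formed (invalid token)\n" if $msg =~ /<select app>/;
    return { ok => 1 };
}

my $msg  = '<xml><select app>orig_gw</select></xml>';    # bad XML
my $data = eval { parse_or_die($msg) };
my $parse_error = $@;

if ($parse_error) {
    print "invalid XML: $parse_error";    # the die message lands in $@
}
else {
    print "parsed OK\n";
}
```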

Regards
John





------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 684
**************************************

