
Perl-Users Digest, Issue: 1682 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Mon Jun 30 16:09:44 2008

Date: Mon, 30 Jun 2008 13:09:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Mon, 30 Jun 2008     Volume: 11 Number: 1682

Today's topics:
    Re: editing smb.conf (INI files) <tch@nospam.syneticon.net>
        File size too big for perl processing <danieldharkness@gmail.com>
    Re: File size too big for perl processing <spamtrap@dot-app.org>
    Re: File size too big for perl processing <jimsgibson@gmail.com>
    Re: File size too big for perl processing xhoster@gmail.com
    Re: I hate CGI.pm <brian.d.foy@gmail.com>
        Lingua::Slavic::Numbers (was: number to word in any lan <tzz@lifelogs.com>
    Re: looping issue <ben@morrow.me.uk>
    Re: looping issue xhoster@gmail.com
        NDBM support alex.turchin@gmail.com
    Re: NDBM support <smallpond@juno.com>
    Re: NDBM support <mark.clementsREMOVETHIS@wanadoo.fr>
    Re: Need help with a question. <jimsgibson@gmail.com>
    Re: Need help with a question. <glex_no-spam@qwest-spam-no.invalid>
        perl + python tutorial available for download <xahlee@gmail.com>
    Re: perl + python tutorial available for download <smallpond@juno.com>
    Re: perl + python tutorial available for download <jurgenex@hotmail.com>
    Re: perl + python tutorial available for download <spamtrap@dot-app.org>
    Re: Template Toolkit and USE <tzz@lifelogs.com>
    Re: Thread or threads. <zentara@highstream.net>
    Re: Thread or threads. <glex_no-spam@qwest-spam-no.invalid>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Mon, 30 Jun 2008 17:43:58 +0200
From: Tomasz Chmielewski <tch@nospam.syneticon.net>
Subject: Re: editing smb.conf (INI files)
Message-Id: <g4auvu$a67$1@online.de>

Joe Smith schrieb:
> Tomasz Chmielewski wrote:
>> What would you use to parse smb.conf or INI files in a somewhat
>> automated/scripted manner?
> 
> Config::Tiny - Read/Write .ini style files with as little code as possible
> 
> # Changing data
> $Config->{newsection} = { this => 'that' }; # Add a section
> $Config->{section}->{Foo} = 'Not Bar!';     # Change a value
> delete $Config->{_};                        # Delete a value or section
> # Save a config
> $Config->write( 'file.conf' );
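For completeness, the read half that pairs with the snippet above might look like this (a sketch; 'toy.conf' is a made-up filename, and Config::Tiny is assumed installed as in the parent post):

```perl
use strict; use warnings;
use Config::Tiny;   # CPAN module suggested above

# Write a toy INI file, then read it back with the documented constructor.
my $Config = Config::Tiny->new;
$Config->{global}{workgroup} = 'HOME';
$Config->write('toy.conf') or die Config::Tiny->errstr;

my $Again = Config::Tiny->read('toy.conf')
    or die "Can't read toy.conf: " . Config::Tiny->errstr;
print "$Again->{global}{workgroup}\n";   # HOME
```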

Thanks a lot!


-- 
Tomasz Chmielewski
http://wpkg.org


------------------------------

Date: Mon, 30 Jun 2008 11:18:38 -0700 (PDT)
From: Cheez <danieldharkness@gmail.com>
Subject: File size too big for perl processing
Message-Id: <df436918-985a-408e-a89e-f61b1cf779fc@z66g2000hsc.googlegroups.com>

Hi, I posted this to perl.beginners as well and will make sure
comments go to both groups.

I have a big file of 16-letter words that I am using as "bait" to
capture larger words in a raw data file.  I loop through all of the
rawdata with a single word for 1) matches and 2) to associate the raw
data with the word.  I then go to the next line in the word list and
repeat.

hashsequence16.txt is the 16-letter word file (203MB)
rawdata.txt is the raw data file (93MB)

I have a counter in the code to tell me how long it's taking to
process... 9500 hours or so to complete...  I definitely have time to
pursue other alternatives.

Scripting with perl is a hobby and not a vocation so I apologize in
advance for ugly code.  Any suggestions/comments would be greatly
appreciated.

Thanks,
Dan

========================

print "**fisher**";

$flatfile = "newrawdata.txt";
# 95MB in size

$datafile = "hashsequence16.txt";
# 203MB in size

my $filesize = -s "hashsequence16.txt";
# for use in processing time calculation

open(FILE, "$flatfile") || die "Can't open '$flatfile': $!\n";
open(FILE2, "$datafile") || die "Can't open '$flatfile': $!\n";
open (SEQFILE, ">fishersearch.txt") || die "Can't open '$seqparsed': $!
\n";

@preparse = <FILE>;
@hashdata = <FILE2>;

close(FILE);
close(FILE2);


for my $list1 (@hashdata) {
# iterating through hash16 data

    $finish++;

    if ($finish ==10 ) {
# line counter

	$marker = $marker + $finish;

	$finish =0;

	$left = $filesize - $marker;

	printf "$left\/$filesize\n";
# this prints every 17 seconds
			}

    ($line, $freq) = split(/\t/, $list1);

    for my $rawdata (@preparse) {
# iterating through rawdata

	$rawdata=~ s/\n//;

	if ($rawdata =~ m/$line/) {
# matching hash16 word with rawdata line

	    my $first_pos = index  $rawdata,$line;

	    print SEQFILE "$first_pos\t$rawdata\n";
# printing info to a new file

				}

			}

    print SEQFILE "PROCESS\t$line\n";
# printing hash16 word and "process"

}


------------------------------

Date: Mon, 30 Jun 2008 14:47:09 -0400
From: Sherman Pendley <spamtrap@dot-app.org>
Subject: Re: File size too big for perl processing
Message-Id: <m13amu7ob6.fsf@dot-app.org>

Cheez <danieldharkness@gmail.com> writes:

> print "**fisher**";
>
> $flatfile = "newrawdata.txt";
> # 95MB in size
>
> $datafile = "hashsequence16.txt";
> # 203MB in size
>
> my $filesize = -s "hashsequence16.txt";
> # for use in processing time calculation
>
> open(FILE, "$flatfile") || die "Can't open '$flatfile': $!\n";
> open(FILE2, "$datafile") || die "Can't open '$flatfile': $!\n";
> open (SEQFILE, ">fishersearch.txt") || die "Can't open '$seqparsed': $!
> \n";
>
> @preparse = <FILE>;
> @hashdata = <FILE2>;
>
> close(FILE);
> close(FILE2);
>
>
> for my $list1 (@hashdata) {

If you're looping through $datafile one line at a time, there's no
need to read the whole thing into RAM at once. Just leave the file
open, and use a while() loop to read one line at a time instead:

    while (my $list1 = <FILE2>) {

> # iterating through hash16 data
>
>     $finish++;
>
>     if ($finish ==10 ) {
> # line counter
>
> 	$marker = $marker + $finish;
>
> 	$finish =0;
>
> 	$left = $filesize - $marker;
>
> 	printf "$left\/$filesize\n";
> # this prints every 17 seconds
> 			}
>
>     ($line, $freq) = split(/\t/, $list1);
>
>     for my $rawdata (@preparse) {
> # iterating through rawdata
>
> 	$rawdata=~ s/\n//;

Chomp() is a faster way to remove newlines:

        chomp($rawdata);

> 	if ($rawdata =~ m/$line/) {
> # matching hash16 word with rawdata line
>
> 	    my $first_pos = index  $rawdata,$line;

Index() will scan the string a second time. There's no need to do
that, since the position of the matched expression is already stored
in @-:

            my $first_pos = $-[0];
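A minimal, self-contained illustration of the @- suggestion (the strings here are made up):

```perl
use strict; use warnings;

my $rawdata = "xxxxABCDEFGHIJKLMNOPyyyy";   # hypothetical raw-data line
my $line    = "ABCDEFGHIJKLMNOP";           # hypothetical 16-letter bait word

if ($rawdata =~ /\Q$line/) {
    my $first_pos = $-[0];   # offset where the whole match started
    print "$first_pos\n";    # prints 4
}
```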

sherm--

-- 
My blog: http://shermspace.blogspot.com
Cocoa programming in Perl: http://camelbones.sourceforge.net


------------------------------

Date: Mon, 30 Jun 2008 12:06:41 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: File size too big for perl processing
Message-Id: <300620081206415085%jimsgibson@gmail.com>

In article
<df436918-985a-408e-a89e-f61b1cf779fc@z66g2000hsc.googlegroups.com>,
Cheez <danieldharkness@gmail.com> wrote:

> Hi, I posted this to perl.beginners as well and will make sure
> comments go to both groups.
> 
> I have a big file of 16-letter words that I am using as "bait" to
> capture larger words in a raw data file.  I loop through all of the
> rawdata with a single word for 1) matches and 2) to associate the raw
> data with the word.  I then go to the next line in the word list and
> repeat.
> 
> hashsequence16.txt is the 16-letter word file (203MB)

Hmm. How many 16-letter words are in this file? I see from your code
that the file contains the word and a frequency count. Estimating at
about 25 bytes per word, that represents 8 million words.

> rawdata.txt is the raw data file (93MB)
> 
> I have a counter in the code to tell me how long it's taking to
> process... 9500 hours or so to complete...  I definitely have time to
> pursue other alternatives.
> 
> Scripting with perl is a hobby and not a vocation so I apologize in
> advance for ugly code.  Any suggestions/comments would be greatly
> appreciated.
> 
> Thanks,
> Dan
> 
> ========================
> 

You should have

use strict;
use warnings;

in your program. This is very important if you wish to get help from
this newsgroup.

> print "**fisher**";
> 
> $flatfile = "newrawdata.txt";
> # 95MB in size
> 
> $datafile = "hashsequence16.txt";
> # 203MB in size
> 
> my $filesize = -s "hashsequence16.txt";
> # for use in processing time calculation
> 
> open(FILE, "$flatfile") || die "Can't open '$flatfile': $!\n";
> open(FILE2, "$datafile") || die "Can't open '$flatfile': $!\n";
> open (SEQFILE, ">fishersearch.txt") || die "Can't open '$seqparsed': $!
> \n";

You should be using lexically-scoped file handle variables, the
3-argument version of open, and 'or' instead of '||'.
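Those three suggestions together, in a form that runs anywhere ($0, the script's own name, stands in for the data file so the sketch has something to open):

```perl
use strict; use warnings;

# Lexical filehandle, three-argument open, and 'or' instead of '||'.
open my $raw_fh, '<', $0
    or die "Can't open '$0': $!";
my $first_line = <$raw_fh>;
close $raw_fh;
print defined $first_line ? "opened\n" : "empty\n";
```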

> 
> @preparse = <FILE>;
> @hashdata = <FILE2>;

Well at least you have enough memory to read the files into memory.
That helps. If you apply the chomp operator to these arrays, you can
save yourself some repetitive processing later:

  chomp(@preparse);
  chomp(@hashdata);

> 
> close(FILE);
> close(FILE2);
> 
> 
> for my $list1 (@hashdata) {
> # iterating through hash16 data


> 
>     $finish++;
> 
>     if ($finish ==10 ) {
> # line counter
> 
>   $marker = $marker + $finish;
> 
>   $finish =0;
> 
>   $left = $filesize - $marker;
> 
>   printf "$left\/$filesize\n";
> # this prints every 17 seconds
>       }

When you are asking for help, it is best to leave out irrelevant
details such as periodic printing statements. It doesn't help anybody
help you.

> 
>     ($line, $freq) = split(/\t/, $list1);
> 
>     for my $rawdata (@preparse) {
> # iterating through rawdata
> 
>   $rawdata=~ s/\n//;

No need for this if you chomp the arrays after reading.

> 
>   if ($rawdata =~ m/$line/) {
> # matching hash16 word with rawdata line
> 
>       my $first_pos = index  $rawdata,$line;

You first use a regex to find if $line appears in $rawdata, then use
index to find out where it appears. Just test the return value from
index to see if the substring appears. It will be -1 if it does not.
This will give you a significant speed-up.
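A sketch of what that change looks like (made-up strings; index returns the offset, or -1 when the substring is absent):

```perl
use strict; use warnings;

my $rawdata = "zzzzHELLO";   # hypothetical raw-data line
my $line    = "HELLO";       # hypothetical bait word

# One scan instead of two: index both finds and locates the substring.
my $first_pos = index $rawdata, $line;
if ($first_pos != -1) {
    print "$first_pos\n";    # prints 4
}
```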

> 
>       print SEQFILE "$first_pos\t$rawdata\n";
> # printing info to a new file
> 
>         }
> 
>       }
> 
>     print SEQFILE "PROCESS\t$line\n";
> # printing hash16 word and "process"
> 
> }

You only make one pass through FILE2, so you can save some memory by
processing the contents of this file one line at a time, instead of
reading it into the @hashdata array. It looks like you could also swap
the order of the for loops and only make one pass through FILE,
instead, but that may take more memory.

It is difficult to see why this program will take 9500 hours to run.
Make the above changes and try again. Without your data files or a look
at some sample data, it is difficult for anyone to really help you.

-- 
Jim Gibson


------------------------------

Date: 30 Jun 2008 19:56:54 GMT
From: xhoster@gmail.com
Subject: Re: File size too big for perl processing
Message-Id: <20080630155700.097$BG@newsreader.com>

Cheez <danieldharkness@gmail.com> wrote:
> Hi, I posted this to perl.beginners as well and will make sure
> comments go to both groups.
>
> I have a big file of 16-letter words that I am using as "bait" to
> capture larger words in a raw data file.  I loop through all of the
> rawdata with a single word for 1) matches and 2) to associate the raw
> data with the word.  I then go to the next line in the word list and
> repeat.
>
> hashsequence16.txt is the 16-letter word file (203MB)

How many lines?  (it seems the 16-letter part is only the first column
of the file, so it is not simply 203MB / 17 bytes)

> rawdata.txt is the raw data file (93MB)

How many lines is it?

>
> I have a counter in the code to tell me how long it's taking to
> process... 9500 hours or so to complete...  I definitely have time to
> pursue other alternatives.
>
> Scripting with perl is a hobby and not a vocation so I apologize in
> advance for ugly code.

As a hobbyist, you have the leisure to make it less ugly, while
someone working against the clock might not!

>
> open(FILE, "$flatfile") || die "Can't open '$flatfile': $!\n";
> open(FILE2, "$datafile") || die "Can't open '$flatfile': $!\n";

Wrong variable in the die, $datafile not $flatfile


> @preparse = <FILE>;
> @hashdata = <FILE2>;

Do you have a lot of memory, or is your system swapping?  If swapping,
that right there will slow it down dramatically.  In this case, if you
change the outer foreach to a while (<FILE>), that might make things
better.

>
> close(FILE);
> close(FILE2);
>
> for my $list1 (@hashdata) {
>     $finish++;
>     if ($finish ==10 ) {
>         $marker = $marker + $finish;
>         $finish =0;
>         $left = $filesize - $marker;

$filesize is in bytes, while $marker is in lines.  This isn't gonna give
meaningful information.


>         printf "$left\/$filesize\n";
>         # this prints every 17 seconds
>     }
>
>     ($line, $freq) = split(/\t/, $list1);
>
>     for my $rawdata (@preparse) {
>         $rawdata=~ s/\n//;

This substitution only needs to be done once, not for every @hashdata.
Put "chomp @preparse" outside of the loop.

>
>         if ($rawdata =~ m/$line/) {

In my test case, I had to add \Q before $line, otherwise the odd
special character in it caused regex syntax errors.
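What \Q does, in a self-contained form (the word here is made up to contain regex metacharacters):

```perl
use strict; use warnings;

my $line    = "ABCD(EFGH)IJKL[0]";     # hypothetical word with regex metacharacters
my $rawdata = "zzABCD(EFGH)IJKL[0]zz";

# Without \Q the parens and brackets are parsed as regex syntax;
# \Q makes everything after it literal.
if ($rawdata =~ /\Q$line/) {
    print "matched\n";
}
```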

>             my $first_pos = index  $rawdata,$line;

On success, you are doing the search twice.  If success is rare, then
of course this is not important speed-wise.  Get rid of one or the
other; I'd prefer to get rid of the regex and do only the index.


Anyway, I'd write it to load hashdata into a hash (surprise!), and then
probe a 16-byte sliding window of the raw data against that hash.

my %hashdata;
while (<FILE2>) {
  chomp;
  my ($t)=split /\t/;
  $hashdata{$t}=();
};
close(FILE2);
my ($finish,$marker,$left);
while (my $rawdata=<FILE>) {
    chomp $rawdata;
    foreach (0..(length $rawdata) - 16) {
        if (exists $hashdata{substr $rawdata,$_,16}) {
            print SEQFILE "$_\t$rawdata\n";
        }
    }
}

The whole thing takes about a minute on files of about the size you
specified.
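A self-contained toy version of the same sliding-window idea (inline data instead of the files; the 16-letter words are made up):

```perl
use strict; use warnings;

# Bait words (16 letters each) go into a hash for O(1) lookups.
my %hashdata = map { $_ => undef } qw(
    ABCDEFGHIJKLMNOP
    QRSTUVWXYZABCDEF
);

my @rawdata = ("xxABCDEFGHIJKLMNOPxx", "nothinghereatall");

for my $rawdata (@rawdata) {
    # Slide a 16-character window across the line and probe the hash.
    foreach my $i (0 .. length($rawdata) - 16) {
        if (exists $hashdata{ substr $rawdata, $i, 16 }) {
            print "$i\t$rawdata\n";
        }
    }
}
```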

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 30 Jun 2008 10:41:46 -0500
From: brian d  foy <brian.d.foy@gmail.com>
Subject: Re: I hate CGI.pm
Message-Id: <300620081041467024%brian.d.foy@gmail.com>

In article <20080629182126.868$8v@newsreader.com>, <xhoster@gmail.com>
wrote:


> When making self-referencing URLs, it would be nice if there was a way to
> say "Give me the self-referencing URL that would exist if this one
> parameter had this value instead of what it actually does."  In the absence
> of that, I have to store the value, change it, get the self_url, then
> change it back.

When I want to do that, I create a second CGI.pm object. I can change
it how I like without affecting the original data.


------------------------------

Date: Mon, 30 Jun 2008 08:34:42 -0500
From: Ted Zlatanov <tzz@lifelogs.com>
Subject: Lingua::Slavic::Numbers (was: number to word in any language)
Message-Id: <86tzfbyrkd.fsf_-_@lifelogs.com>

On Sun, 29 Jun 2008 22:38:11 +0200 "Peter J. Holzer" <hjp-usenet2@hjp.at> wrote: 

PJH> The best you can do is check that the locale matches
PJH> /^([a-z]{2})($|_[A-Z]{2}\b)/ and check if
PJH> "Lingua::\U$1\E::Numbers" exists. You won't find Lingua::Slavic::Numbers
PJH> that way, and not all locales start with a language code, but there
PJH> doesn't seem any more dependable way to extract the language code from
PJH> the locale than to rely on the naming convention.
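The probing Peter describes might look like this (hypothetical locale strings; whether the resulting module actually loads would still need an eval/require test):

```perl
use strict; use warnings;

for my $locale ('en_US.UTF-8', 'bg_BG', 'C') {
    # Language code = two lowercase letters, optionally followed by _CC.
    if ($locale =~ /^([a-z]{2})($|_[A-Z]{2}\b)/) {
        my $module = "Lingua::\U$1\E::Numbers";
        print "$locale -> $module\n";
    } else {
        print "$locale -> no language code\n";
    }
}
```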

Since I put Lingua::Slavic::Numbers up, I intend to also provide
wrappers (Lingua::LL::Numbers) for it in the same distribution.  I just
haven't gotten around to it.

If anyone is interested in testing the existing Bulgarian translations,
please do.  In addition, if you can provide other test cases for other
Slavic languages (Russian is probably the big one, but others would be
good too: Macedonian, Serbian, Slovak, etc.), I would appreciate it.  My
knowledge is not good enough to write the test cases but I can get going
once I have those.

Ted


------------------------------

Date: Mon, 30 Jun 2008 16:47:57 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: looping issue
Message-Id: <dqbnj5-td1.ln1@osiris.mauzo.dyndns.org>


Quoth dakin999 <akhilshri@gmail.com>:
> Hi, I have following code:
> 

    use warnings;
    use strict;

    open my $FILE, '<', 'file_name'
        or die "can't open 'file_name': $!";

>    foreach my $row (@$array_ref) {
>            my ( $usr, $usr_det, $pwd_val) = @$row;
>       #print "\tuser id :$usr\n";
>       #print "\tcard no :$usr_det\n";
>       #print "\tpasswd :$pwd_val\n";

        my $line = <$FILE> or last;
        chomp $line;
        my ($nusr_id) = split;

        #...
    }

Ben

-- 
I have two words that are going to make all your troubles go away.
"Miniature". "Golf".
                                                         [ben@morrow.me.uk]


------------------------------

Date: 30 Jun 2008 16:28:51 GMT
From: xhoster@gmail.com
Subject: Re: looping issue
Message-Id: <20080630122855.727$MK@newsreader.com>

dakin999 <akhilshri@gmail.com> wrote:
> Hi, I have following code:
>
>    foreach my $row (@$array_ref) {
>            my ( $usr, $usr_det, $pwd_val) = @$row;
>       #print "\tuser id :$usr\n";
>       #print "\tcard no :$usr_det\n";
>       #print "\tpasswd :$pwd_val\n";
>       open (FILE, "<file_name") || die "Could not open the file: $!";
>       while (<FILE>) {
>          chomp;
>          (my $nusr_id) = split();  #read into a variable
>           --
>           --
>           --
>          }
>       }
>
>  The problem is in the looping of input <FILE> that I need to read for
> each $row. Basically there can be more than 1 $row values and I need
> to pick a new line from <FILE> for each $row entry.
>
> Any suggestions for doing this??

It looks like you are trying to join the array and the file.  But you have
hidden what the join condition is behind the "--",  which greatly limits
the specificity of the advice we can give.  Most likely, something should
be put into a hash.  Whether that is the contents of @$array_ref or
the contents of file_name, I can't tell based on what you have given us.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Mon, 30 Jun 2008 11:02:42 -0700 (PDT)
From: alex.turchin@gmail.com
Subject: NDBM support
Message-Id: <ac5a234c-2340-47ec-85e8-380b3d0244f5@c58g2000hsc.googlegroups.com>

I have an old and rather large (thousands of lines of code) piece of
perl software that utilizes (in numerous places) NDBM databases,
including dbmopen() etc. calls. This code has now been moved by the
website administrator to a different UNIX platform that does not
support NDBM.

Unfortunately I do not have the resources to completely rewrite the
code to utilize a different database. Even more importantly, all of
the data the software uses (unique to the software and took months of
work to enter) is in the NDBM format.

I am therefore looking for a way to obtain perl support for the NDBM
format and corresponding function calls (dbmopen, dbmclose) on a
system that does not natively support them. Any thoughts would be
greatly appreciated!

Alex


------------------------------

Date: Mon, 30 Jun 2008 14:59:49 -0400
From: smallpond <smallpond@juno.com>
Subject: Re: NDBM support
Message-Id: <a1e32$48692d29$15639@news.teranews.com>

alex.turchin@gmail.com wrote:
> I have an old and rather large (thousands of lines of code) piece of
> perl software that utilizes (in numerous places) NDBM databases,
> including dbmopen() etc. calls. This code has now been moved by the
> website administrator to a different UNIX platform that does not
> support NDBM.
> 
> Unfortunately I do not have the resources to completely rewrite the
> code to utilize a different database. Even more importantly, all of
> the data the software uses (unique to the software and took months of
> work to enter) is in the NDBM format.
> 
> I am therefore looking for a way to obtain perl support for the NDBM
> format and corresponding function calls (dbmopen, dbmclose) on a
> system that does not natively support them. Any thoughts would be
> greatly appreciated!
> 
> Alex


This fairly informative article:
http://unixpapa.com/incnote/dbm.html

recommends dumping the data to plain text files in some easy-to-parse
format, and regenerating the database using your current dbm implementation.

The APIs are mostly the same, so once the data is in the right format,
the code should not be too big a problem.
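The restore half of that suggestion can be sketched with the core AnyDBM_File module, which ties to whichever DBM library the new platform does have (the sample records below are made up; this shows the rebuild, not dbmopen()-level NDBM compatibility):

```perl
#!/usr/bin/perl
use strict; use warnings;
use AnyDBM_File;   # core module: falls back through NDBM_File, DB_File, GDBM_File, SDBM_File
use Fcntl;

# Rebuild a DBM file from a tab/whitespace-separated dump made on the old platform.
tie my %db, 'AnyDBM_File', 'restored_db', O_RDWR|O_CREAT, 0644
    or die "Can't create DBM file: $!";

while (my $rec = <DATA>) {
    chomp $rec;
    my ($key, $val) = split /\s+/, $rec, 2;   # dump format assumed: key, separator, value
    $db{$key} = $val;
}
print scalar(keys %db), " records restored\n";
untie %db;

__DATA__
alpha	first value
beta	second value
```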

Do you still have access to the old platform?
** Posted from http://www.teranews.com **


------------------------------

Date: Mon, 30 Jun 2008 19:08:32 +0000
From: Mark Clements <mark.clementsREMOVETHIS@wanadoo.fr>
Subject: Re: NDBM support
Message-Id: <48692f30$0$871$ba4acef3@news.orange.fr>

alex.turchin@gmail.com wrote:
> I have an old and rather large (thousands of lines of code) piece of
> perl software that utilizes (in numerous places) NDBM databases,
> including dbmopen() etc. calls. This code has now been moved by the
> website administrator to a different UNIX platform that does not
> support NDBM.
>
> Unfortunately I do not have the resources to completely rewrite the
> code to utilize a different database. Even more importantly, all of
> the data the software uses (unique to the software and took months of
> work to enter) is in the NDBM format.
>
> I am therefore looking for a way to obtain perl support for the NDBM
> format and corresponding function calls (dbmopen, dbmclose) on a
> system that does not natively support them. Any thoughts would be
> greatly appreciated!

Hmmm.

http://perldoc.perl.org/functions/dbmopen.html

(quoted)
**
You can control which DBM library you use by loading that library before 
you call dbmopen():

     use DB_File;
     dbmopen(%NS_Hist, "$ENV{HOME}/.netscape/history.db")
	or die "Can't open netscape history file: $!";

**

Can you not install NDBM_File and add a

use NDBM_File;

to the script(s)?

http://search.cpan.org/~rgarcia/perl-5.10.0/ext/NDBM_File/NDBM_File.pm

Mark


------------------------------

Date: Mon, 30 Jun 2008 08:27:11 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: Need help with a question.
Message-Id: <300620080827112473%jimsgibson@gmail.com>

In article <48666e7f$0$14348$e4fe514c@news.xs4all.nl>, Erwin van Koppen
<invalid@invalid.invalid> wrote:

> "Trev" wrote:
> >
> > Thanks, I've made the changes and so far so good.
> 
> foreach $cpqlog (@cpqlog_data) {
>     {
>     chomp($cpqlog);
> 
>     if ($cpqlog =~ /MAC/) {
>         $cpqlog =~ s/  <FIELD NAME="Subject" VALUE="//i;
>         $cpqlog =~ s/  <FIELD NAME="MAC" VALUE="//i;
>         $cpqlog =~ s/"\/>//i;
>     }
>     }
> }
> 
> You do understand that the above code does not actually change anything in 
> the file, right? The substitutions in $cpqlog are just discarded at the end 
> of the loop.

Not quite. In Perl, the loop variable $cpqlog is an "alias" to the
array elements, so the changes to $cpqlog are applied to the members of
the array. Of course, whether or not the file is modified depends upon
what is done after the loop. (Just trying to clear up any
misunderstanding about what "discarded at the end of the loop" means.)
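Jim's point about aliasing, in a self-contained form:

```perl
use strict; use warnings;

my @lines = ("first\n", "second\n");
for my $line (@lines) {
    chomp $line;   # $line aliases the array element, so the element itself changes
}
print length($lines[0]), "\n";   # 5: the newline really is gone from @lines
```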

-- 
Jim Gibson


------------------------------

Date: Mon, 30 Jun 2008 11:13:34 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: Need help with a question.
Message-Id: <4869062e$0$89395$815e3792@news.qwest.net>

Trev wrote:
>> I'm only after 1 line (MAC) per server, which is value [1] the other
>> MAC's I do not need. Is it possible to grep MAC from the Array (Entire
>> File) and pass the results into a new array which I could then
>> print[1] to a file. This will then extract 1 line for every server
>> that I have an xml file for.
> 
> Maybe this will help...
> 
> Read list.txt for each server listed do
>      read xml file into array
>      find results matching MAC
>      print the 1st MAC address only ($server,$mac[1])
> go back to list.txt and do the same for next server listed in file.
> 
> That's what I'm trying to do with this perl script.

Since you're parsing an XML file, possibly using XML::Simple
will make it much easier.

http://search.cpan.org/~grantm/XML-Simple-2.18/lib/XML/Simple.pm


------------------------------

Date: Mon, 30 Jun 2008 09:45:18 -0700 (PDT)
From: Xah <xahlee@gmail.com>
Subject: perl + python tutorial available for download
Message-Id: <86c4090b-ed0b-4249-b3ee-517c94328741@z32g2000prh.googlegroups.com>

my perl and python tutorial

http://xahlee.org/perl-python/index.html

is now available for download for offline reading.
Download link at the bottom.

   Xah
∑ http://xahlee.org/

☄


------------------------------

Date: Mon, 30 Jun 2008 13:16:13 -0400
From: smallpond <smallpond@juno.com>
Subject: Re: perl + python tutorial available for download
Message-Id: <31f90$486914df$11268@news.teranews.com>

Xah wrote:
> my perl and python tutorial
> 
> http://xahlee.org/perl-python/index.html
> 
> is now available for download for offline reading.
> Download link at the bottom.
> 
>    Xah
> ? http://xahlee.org/
> 
> ?

"Pyhton" ??

typical
** Posted from http://www.teranews.com **


------------------------------

Date: Mon, 30 Jun 2008 17:44:13 GMT
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: perl + python tutorial available for download
Message-Id: <q56i64hlkgn2n9q1gc4mjsvglhhds8apbt@4ax.com>

Xah <xahlee@gmail.com> wrote:
>my perl and python tutorial
>
>http://xahlee.org/perl-python/index.html
>
>is now available for download for offline reading.

Why anyone would have the idea to mix two different languages in one
tutorial is beyond me.

And calling that web page a tutorial is a large stretch of imagination.
It is a random collection of primitive code samples, organized roughly by
area. No learning goals, no explanation of concepts, ...


Apparently you changed your ID to escape all the filters in which you
have been a permanent guest. This 'tutorial' confirms that this is the
best place for you. So back you go again.

jue 


------------------------------

Date: Mon, 30 Jun 2008 13:49:32 -0400
From: Sherman Pendley <spamtrap@dot-app.org>
Subject: Re: perl + python tutorial available for download
Message-Id: <m1y74mizir.fsf@dot-app.org>

smallpond <smallpond@juno.com> writes:

> "Pyhton" ??
>
> typical

Both typical, and illustrative of Xah's skill level. :-)

sherm--

-- 
My blog: http://shermspace.blogspot.com
Cocoa programming in Perl: http://camelbones.sourceforge.net


------------------------------

Date: Mon, 30 Jun 2008 08:42:20 -0500
From: Ted Zlatanov <tzz@lifelogs.com>
Subject: Re: Template Toolkit and USE
Message-Id: <86myl3yr7n.fsf@lifelogs.com>

On Fri, 27 Jun 2008 06:39:57 -0700 (PDT) Ronny Mandal <ronnyma@math.uio.no> wrote: 

RM> I've installed the module URI from CPAN. This works perfectly on perl-
RM> scripts, however when running tpage CLI it issues an error message,
RM> e.g.

RM> ./test.tt:

RM> [% USE URI %]

RM> [ronny@pops]/home/ronny/development/tt(387): tpage test.tt
RM> plugin error - URI: plugin not found
RM> [ronny@pops]/home/ronny/development/tt(388):

RM> Any suggestions? When dumping the @INC, URI.pm is contained in at
RM> least one of the paths.

TT plugins are not regular Perl modules (although it's not hard to write
a TT plugin wrapper around a Perl module).  See below for the full list
of what's included with TT.  You may want the URL plugin, I don't know
your specific needs.

http://template-toolkit.org/docs/manual/Plugins.html

Ted


------------------------------

Date: Mon, 30 Jun 2008 10:11:26 -0400
From: zentara <zentara@highstream.net>
Subject: Re: Thread or threads.
Message-Id: <kbqh645eitac8n9sru45lcujrud6ocdo31@4ax.com>

On Mon, 30 Jun 2008 05:24:19 -0700 (PDT), nadav <nadavh@gmail.com>
wrote:

>thinking of writing a perl script with concurrent abilities.
>which Module is better 'Thread' or 'threads' ?
>
>thx.

Thread (capital T) is obsolete.

-- 
I'm not really a human, but I play one on earth.
http://zentara.net/CandyGram_for_Mongo.html 


------------------------------

Date: Mon, 30 Jun 2008 11:18:02 -0500
From: "J. Gleixner" <glex_no-spam@qwest-spam-no.invalid>
Subject: Re: Thread or threads.
Message-Id: <4869073a$0$89395$815e3792@news.qwest.net>

nadav wrote:
> thinking of writing a perl script with concurrent abilities.
> which Module is better 'Thread' or 'threads' ?

It depends. If you read the first few paragraphs of the
documentation for Thread, it should be obvious.
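With the lowercase module, a minimal sketch (assumes perl was built with ithreads, as most modern builds are):

```perl
use strict; use warnings;
use threads;

# Spawn a thread, do some work, collect the result with join().
my $thr    = threads->create(sub { return 6 * 7 });
my $answer = $thr->join();
print "$answer\n";   # 42
```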


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 1682
***************************************

