

Perl-Users Digest, Issue: 1679 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sun Jun 29 03:09:46 2008

Date: Sun, 29 Jun 2008 00:09:10 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Sun, 29 Jun 2008     Volume: 11 Number: 1679

Today's topics:
    Re: [OT] How to make list of all htm file... (Doug Miller)
    Re: [OT] How to make list of all htm file... <szrRE@szromanMO.comVE>
        C, C++ and Perl Workshops with NYLXS <ruben@www2.mrbrklyn.com>
    Re: how do prlglobs expand (was Re: 'nobody' using sudo xhoster@gmail.com
    Re: Need help with a question. <trevor.dodds@gmail.com>
    Re: Need help with a question. <jurgenex@hotmail.com>
    Re: Need help with a question. <anonymous@cow.ard>
    Re: Need help with a question. <jurgenex@hotmail.com>
    Re: Need help with a question. <trevor.dodds@gmail.com>
    Re: Need help with a question. <fawaka@gmail.com>
    Re: Need help with a question. <ben@morrow.me.uk>
    Re: Need help with a question. <trevor.dodds@gmail.com>
        new CPAN modules on Sun Jun 29 2008 (Randal Schwartz)
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sun, 29 Jun 2008 02:11:23 GMT
From: spambait@milmac.com (Doug Miller)
Subject: Re: [OT] How to make list of all htm file...
Message-Id: <3dC9k.13300$mh5.7673@nlpi067.nbdc.sbc.com>

In article <g3mi0d01tpc@news4.newsguy.com>, "szr" <szrRE@szromanMO.comVE> wrote:
>Dr.Ruud wrote:
>> szr schreef:
>>
>>>    $ find . | grep -P 'html?$'
>>
>> That is quite wasteful, even if the current directory doesn't contain
>> millions of subdirectories and files.
>
>Aside form forgetting *. which should of been at the beginning of my 
>patterns, is it really more wasteful? 

Yes, absolutely.

>Does find not have to also check 
>each file it comes across too? 

Certainly. But you're piping *all* of them to grep, thus making both find 
*and* grep process all of them.

>Or is it just the over of piping the 
>final output from find over to grep? 

That, too.

>Other then that I don't see why it 
>would be more wasteful? 

Because it:
a) creates, opens, and closes a pipe that is not necessary
b) spawns an additional process (grep) that is not necessary
c) ships *every* filename across that unnecessary pipe to that unnecessary 
process to be filtered
 .. when you could instead simply filter the filenames at the source, as 
they're generated by find.

>On my both my Dual core Linux system as well as 
>an old P2 400 also running Linux, I see no difference in speed, even on 
>a large sprawling directory.

That's because 
a) you're on a single-user machine, and
b) you're not examining a large enough directory to notice the difference.
Try that in a multi-user environment with typical production directory trees, 
and the difference will become visible.

> find does it's thing, grep prunes it's results.

Pointless. find can both find *and* prune.
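
For instance (a sketch; GNU find's -regex matches against the whole
path, and the -name form is the portable spelling):

```shell
# Filter at the source: no pipe, no extra grep process.
find . -type f -regex '.*\.html?$'

# POSIX-portable equivalent using -name globs:
find . -type f \( -name '*.html' -o -name '*.htm' \)
```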


------------------------------

Date: Sat, 28 Jun 2008 21:46:35 -0700
From: "szr" <szrRE@szromanMO.comVE>
Subject: Re: [OT] How to make list of all htm file...
Message-Id: <g4743001o7c@news4.newsguy.com>

Doug Miller wrote:
> In article <g3mi0d01tpc@news4.newsguy.com>, "szr"
> <szrRE@szromanMO.comVE> wrote:
>> Dr.Ruud wrote:
>>> szr schreef:
>>>
>>>>    $ find . | grep -P 'html?$'
>>>
>>> That is quite wasteful, even if the current directory doesn't
>>> contain millions of subdirectories and files.
>>
>> Aside form forgetting *. which should of been at the beginning of my
>> patterns, is it really more wasteful?
>
> Yes, absolutely.
>
>> Does find not have to also check
>> each file it comes across too?
>
> Certainly. But you're piping *all* of them to grep, thus making both
> find *and* grep process all of them.

Yep.

>> Or is it just the over of piping the
>> final output from find over to grep?

s/over/overhead/

> That, too.
>
>> Other then that I don't see why it
>> would be more wasteful?
>
> Because it:
> a) creates, opens, and closes a pipe that is not necessary
> b) spawns an additional process (grep) that is not necessary
> c) ships *every* filename across that unnecessary pipe to that
> unnecessary process to be filtered
> .. when you could instead simply filter the filenames at the source,
> as
> they're generated by find.
>
>> On my both my Dual core Linux system as well as
>> an old P2 400 also running Linux, I see no difference in speed, even
>> on a large sprawling directory.
>
> That's because
> a) you're on a single-user machine, and
> b) you're not examining a large enough directory to notice the
> difference.
> Try that in a multi-user environment with typical production
> directory trees, and the difference will become visible.

I logged into one of the large servers that I manage and ran the same 
test, and found there to be a difference, especially when running it 
using the system root (/) as the starting point. It is indeed better to 
go the efficient route.

>> find does it's thing, grep prunes it's results.
>
> Pointless. find can both find *and* prune.

True. Wonderful, -regex, is.

-- 
szr 




------------------------------

Date: Sat, 28 Jun 2008 22:56:21 -0400
From: Ruben <ruben@www2.mrbrklyn.com>
Subject: C, C++ and Perl Workshops with NYLXS
Message-Id: <pan.2008.06.29.02.56.19.598047@www2.mrbrklyn.com>


While we have some summer off time I'm going to lead 3 workshops for
beginners through the NYLXS Mailing List at hangout@nylxs.com

We will be doing Introduction to C Programming using the book

C Programming:  A Modern Approach

by K.N.King

I'll be developing C programming notes online as we go.  This book is
fairly intensive.

Also we will do Perl Programming

using the Perl Programming Notes of NYLXS on
http://www.nylxs.com/docs/perlcourse/

I'll be reediting these notes as we go, but they are fairly complete as
they are.

Finally, we will be learning C++ with the text C++ Primer by
Stanley Lippman and Josée Lajoie - I have the 3rd edition.

I took C++ at NYU but frankly don't have nearly enough background, so I'll
be making notes and learning along with everyone else.

Anyone who wants to join is welcome to.  NYLXS Accounts are available on
the server for a NYLXS membership fee of $45 (and then you need to do
volunteer hours to become a voting member).  We will also use the NYLXS
irc channel to meet online weekly, as announced.

The irc channel is on freenode #nylxs

The mailing list itself is published on the NYLXS site.  Hope you join us!


Ruben

-- 
http://www.mrbrklyn.com - Interesting Stuff
http://www.nylxs.com - Leadership Development in Free Software

So many immigrant groups have swept through our town that Brooklyn, like Atlantis, reaches mythological proportions in the mind of the world  - RI Safir 1998

http://fairuse.nylxs.com  DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002

"Yeah - I write Free Software...so SUE ME"

"The tremendous problem we face is that we are becoming sharecroppers to our own cultural heritage -- we need the ability to participate in our own society."

"> I'm an engineer. I choose the best tool for the job, politics be damned.<
You must be a stupid engineer then, because politcs and technology have been attached at the hip since the 1st dynasty in Ancient Egypt.  I guess you missed that one."

© Copyright for the Digital Millennium



------------------------------

Date: 29 Jun 2008 01:37:34 GMT
From: xhoster@gmail.com
Subject: Re: how do prlglobs expand (was Re: 'nobody' using sudo -- scary!)
Message-Id: <20080628213743.427$dy@newsreader.com>

Ben Morrow <ben@morrow.me.uk> wrote:
> [please quote properly]
>
> Quoth news1234@free.fr:
> >
> > What would happen if I use follwing statement in perl"
> >
> > foreach my $file (</home/*/.forward>){
> >     do_something($file);
> > }
> >
> > would perl
> > - iterate through the files
> > - or would perl first create a list of  all the files and then
> >       iterate through them.
>
> 'foreach' always creates a list and then iterates over it.

Not always.  For example, in the case of foreach (1..1e6).

>
> > - or would it hit a linit and not provide all hits.
> > - or does it depend on the system perl is running on
>
> You will eventually hit the memory limit on your system, and the limit
> on the size of the pointer used to index the perl stack; you won't hit
> any limits before that.
>
> You can avoid pre-creating the list by using 'while' instead:
>
>     while (my $file = </home/*/.forward>) {

On my system, and I suspect on all systems, this still pre-creates the
result set in its entirety.

For example, if I put a "last" in the while loop, the code still performed
49418 "lstat" calls before it did a single loop iteration and broke out.

Perhaps the result set is stored in a special packed structure that is more
compact than it would be in a foreach loop.  But a test shows that this
effect seems small.  It took 12 meg to do

while (<blah/*>) {last}

and 14.3 meg to do

foreach (<blah/*>) {last}

Where blah has 49418 files in it.
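
If one-at-a-time behaviour is what you're actually after, readdir on a
directory handle is lazy where glob is not (a sketch; it reads a single
directory rather than recursing the way /home/*/.forward does):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# glob() builds its whole result set up front, even in a while loop;
# readdir() on an open handle yields one name at a time, so breaking
# out early really does skip the remaining entries.
opendir my $dh, '.' or die "can't opendir '.': $!";
while (defined(my $name = readdir $dh)) {
    next if $name eq '.' or $name eq '..';
    print "$name\n";
    last;               # stops after the first real entry
}
closedir $dh;
```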

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Sat, 28 Jun 2008 13:18:08 -0700 (PDT)
From: Trev <trevor.dodds@gmail.com>
Subject: Re: Need help with a question.
Message-Id: <b6b8282c-84ea-4939-9933-ecddd0aefbac@f36g2000hsa.googlegroups.com>

On Jun 28, 1:40 pm, Ben Morrow <b...@morrow.me.uk> wrote:
Thanks Ben for the detailed reply, I've been able to get the output
correct, but I'm stuck on how to read $cpqlogline into an array
without writing it to a file.

my files look like this:

the perl file:

use warnings;
use strict;

my $server_file = "list.txt";
my $mactmp = "mactmp.txt";
my $maclog = "logfile.txt";

sub LoadFile
{
open(my $SRV, '<', $server_file)
			or die("Could not open file: $!");

while (my $server = <$SRV>) {
chomp($server);
open (my $DAT, '<', 'output.txt')
       || die("Could not open 'output.txt': $!");

open (OTF, ">blah.txt");
print OTF $DAT;

while (my $cpqlogline = <$DAT>) {
		{
		chomp($cpqlogline);
		if ($cpqlogline =~ /MAC/)
		{
			$cpqlogline =~ s/  <FIELD NAME="Subject" VALUE="//i;
			$cpqlogline =~ s/  <FIELD NAME="MAC" VALUE="//i;
			$cpqlogline =~ s/"\/>//i;

			open (TMP, ">>$mactmp");
			print TMP "$server". "," . "$cpqlogline\n";
			close TMP;
		}
	}
}
}
close OTF;
}

sub CreateLOG
{
	open (TMP, "<$mactmp");
	open (LOG, ">>$maclog");
	my @lines=<TMP>;
	print LOG "$lines[1]";
	close TMP;
	close LOG;
	print @lines;


	unlink "blah.txt";
	#unlink $mactmp;
}

LoadFile;
CreateLOG;

The list.txt file:

server1
server2
server3

The output.txt file:

  <FIELD NAME="Subject" VALUE="Embedded NIC MAC Assignment"/>
  <FIELD NAME="Port" VALUE="1"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-01"/>
  <FIELD NAME="Port" VALUE="2"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-02"/>
  <FIELD NAME="Port" VALUE="iLO"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-03"/>

When the script runs it only creates the logfile with one line:
server1,00-00-00-00-00-01

When it should show:
server1,00-00-00-00-00-01
server2,00-00-00-00-00-01
server3,00-00-00-00-00-01

I see the problem is that the CreateLog only reads value[1] since the
above sub writes everything into mactmp.txt then only CreateLog is
run.

Any ideas?


------------------------------

Date: Sat, 28 Jun 2008 20:36:08 GMT
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Need help with a question.
Message-Id: <8t7d64pvn87diham03igagshoiljup88ak@4ax.com>

Trev <trevor.dodds@gmail.com> wrote:
>On Jun 28, 1:40 pm, Ben Morrow <b...@morrow.me.uk> wrote:
>Thanks Ben for the detailed reply, I've been able to get the output
>correct, but I'm stuck on how to read $cpqlogline into an array
>without writing it to a file.

I suppose with "read into an array" you meant storing the value in an
array. Please see e.g. "perldoc -f push".

However, there is no need for that. Why not write the output file line
by line simultaneously as you process the input file line by line?
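
For the record, the push idiom would look something like this (a minimal
sketch, using inline DATA in place of the real input file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @maclines;
while (my $cpqlogline = <DATA>) {
    chomp $cpqlogline;
    # Store matching lines in an array instead of a temp file.
    push @maclines, $cpqlogline if $cpqlogline =~ /MAC/;
}
print scalar(@maclines), " matching lines\n";   # prints: 1 matching lines

__DATA__
  <FIELD NAME="Port" VALUE="1"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-01"/>
```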

jue


------------------------------

Date: Sat, 28 Jun 2008 23:00:33 +0200
From: Anonymous coward <anonymous@cow.ard>
Subject: Re: Need help with a question.
Message-Id: <52f62$4866a671$89e0e08f$6520@news1.tudelft.nl>

On Sat, 28 Jun 2008 13:18:08 -0700, Trev wrote:

> On Jun 28, 1:40 pm, Ben Morrow <b...@morrow.me.uk> wrote: Thanks Ben for
> the detailed reply, I've been able to get the output correct, but I'm
> stuck on how to read $cpqlogline into an array without writing it to a
> file.
> 

push @array, $cpqlogline;

> open (OTF, ">blah.txt");
> print OTF $DAT;

That makes no sense at all. Why would you want to print a filehandle?

> 	$cpqlogline =~ s/  <FIELD NAME="Subject" VALUE="//i;
> 	$cpqlogline =~ s/  <FIELD NAME="MAC" VALUE="//i;
> 	$cpqlogline =~ s/"\/>//i;

Can't you do that in a more robust manner? XML::Simple is your friend. Or 
a single m// regexp if you are certain the order of the attributes is 
always the same.
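
A single-match version might look like this (a sketch, assuming NAME
always precedes VALUE as in the sample output.txt):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One capturing match replaces the three substitutions.
while (my $line = <DATA>) {
    if (my ($mac) = $line =~ /<FIELD NAME="MAC" VALUE="([^"]+)"/) {
        print "$mac\n";
    }
}

__DATA__
  <FIELD NAME="Subject" VALUE="Embedded NIC MAC Assignment"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-01"/>
```

which prints just 00-00-00-00-00-01.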

> 	open (TMP, ">>$mactmp");
> 	print TMP "$server". "," . "$cpqlogline\n"; close TMP;

Why do you open and close a file inside a loop? Why don't you just say 
print "$server,$cpglogline\n" ?

> }
> close OTF;

You might want to close that in the same block as where you opened it.

> 	print LOG "$lines[1]";

What purpose do the quotes serve?

> 	unlink "blah.txt";

Why unlink that here?


> When the script runs it only creats the logfile with one line:
> server1,00-00-00-00-00-01
> 
> When it should show:
> server1,00-00-00-00-00-01
> server2,00-00-00-00-00-01
> server3,00-00-00-00-00-01
> 
> I see the problem is that the CreateLog only reads value[1] since the
> above sub writes everything into mactmp.txt then only CreateLog is run.
> 
> Any ideas?

This program makes absolutely no sense to me. What are you trying to do 
exactly? Can you give a bit more description?

Cheers,

Leon


------------------------------

Date: Sat, 28 Jun 2008 21:33:49 GMT
From: Jürgen Exner <jurgenex@hotmail.com>
Subject: Re: Need help with a question.
Message-Id: <u5bd64hurj80d9j3s6bibuhetgi3q6ivki@4ax.com>

Anonymous coward <anonymous@cow.ard> wrote:
>On Sat, 28 Jun 2008 13:18:08 -0700, Trev wrote:
>> open (OTF, ">blah.txt");
>> print OTF $DAT;
>
>That makes no sense at all. Why would you want to print a filehandle

Maybe he wants to print _to_ a filehandle?

>> 	open (TMP, ">>$mactmp");
>> 	print TMP "$server". "," . "$cpqlogline\n"; close TMP;
>
>Why do you open and close a file inside a loop? 

Indeed, that is an expensive operation that is better done once outside
of the loop.

>Why don't you just say 
>print "$server,$cpglogline\n" ?

Maybe because (unless you set the default filehandle using select()) the
text will end up on the screen instead of in the file?

>This program makes absolutely no sense to me. What are you trying to do 
>exactly? Can you give a bit more description?

It is certainly somewhat cryptic.

jue


------------------------------

Date: Sat, 28 Jun 2008 14:44:35 -0700 (PDT)
From: Trev <trevor.dodds@gmail.com>
Subject: Re: Need help with a question.
Message-Id: <4761ce47-c175-4828-9207-954a1c560925@8g2000hse.googlegroups.com>

On Jun 28, 5:33 pm, Jürgen Exner <jurge...@hotmail.com> wrote:
> Anonymous coward <anonym...@cow.ard> wrote:
> >On Sat, 28 Jun 2008 13:18:08 -0700, Trev wrote:
> >> open (OTF, ">blah.txt");
> >> print OTF $DAT;
>
> >That makes no sense at all. Why would you want to print a filehandle
>
> Maybe he wants to print _to_ a filehandle?
>
> >>        open (TMP, ">>$mactmp");
> >>        print TMP "$server". "," . "$cpqlogline\n"; close TMP;
>
> >Why do you open and close a file inside a loop?
>
> Indeed, that is an expensive operation that is better done once outside
> of the loop.
>
> >Why don't you just say
> >print "$server,$cpglogline\n" ?
>
> Maybe because (unless you set the default filehandle using select()) the
> text will end up on the screen instead of in the file?
>
> >This program makes absolutely no sense to me. What are you trying to do
> >exactly? Can you give a bit more description?
>
> It is certainly somewhat cryptic.
>
> jue

I'm connecting to many servers extracting XML data to a file, I then
only want the MAC address of Port 1 for each server. The part where I
open an existing file (output.txt) and dump content into blah.txt is
for testing, once the script it working I will remove that portion as
the output.txt file will be different for each server.

Hope that helps. I'll look into the push.

Thanks


------------------------------

Date: Sun, 29 Jun 2008 00:10:53 +0200
From: Leon Timmermans <fawaka@gmail.com>
Subject: Re: Need help with a question.
Message-Id: <4c695$4866b6ed$89e0e08f$6520@news1.tudelft.nl>

On Sat, 28 Jun 2008 21:33:49 +0000, Jürgen Exner wrote:

>>Why don't you just say
>>print "$server,$cpglogline\n" ?
> 
> Maybe because (unless you set the default filehandle using select()) the
> text will end up on the screen instead of in the file?

Oops, that was a typo on my part. I meant to say
print TMP "$server,$cpqlogline\n".

Leon


------------------------------

Date: Sat, 28 Jun 2008 23:22:08 +0100
From: Ben Morrow <ben@morrow.me.uk>
Subject: Re: Need help with a question.
Message-Id: <g5qij5-jhu2.ln1@osiris.mauzo.dyndns.org>


Quoth Trev <trevor.dodds@gmail.com>:
> On Jun 28, 1:40 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> Thanks Ben for the detailed reply, I've been able to get the output
> correct, but I'm stuck on how to read $cpqlogline into an array
> without writing it to a file.
> 
> my files look like this:
> 
> the perl file:
> 
> use warnings;
> use strict;
> 
> my $server_file = "list.txt";
> my $mactmp = "mactmp.txt";
> my $maclog = "logfile.txt";
> 
> sub LoadFile
> {
> open(my $SRV, '<', $server_file)
> 			or die("Could not open file: $!");
> 
> while (my $server = <$SRV>) {
> chomp($server);
> open (my $DAT, '<', 'output.txt')
>        || die("Could not open 'output.txt': $!");

If you reopen output.txt every time you will start at the beginning
again.

> open (OTF, ">blah.txt");
> print OTF $DAT;

What are you trying to achieve here? You don't ever seem to make use of
the contents of blah.txt.

> while (my $cpqlogline = <$DAT>) {
> 		{
> 		chomp($cpqlogline);
> 		if ($cpqlogline =~ /MAC/)
> 		{
> 			$cpqlogline =~ s/  <FIELD NAME="Subject" VALUE="//i;
> 			$cpqlogline =~ s/  <FIELD NAME="MAC" VALUE="//i;
> 			$cpqlogline =~ s/"\/>//i;
> 
> 			open (TMP, ">>$mactmp");
> 			print TMP "$server". "," . "$cpqlogline\n";
> 			close TMP;

You don't need to keep everything in temporary files like this. Perl has
variables for keeping data in.

Here you write all the data into mactmp.txt; in fact, you write a line
for every server with every mac address. I don't think this is what you
meant.

> 		}
> 	}
> }
> }
> close OTF;
> }
> 
> sub CreateLOG
> {
> 	open (TMP, "<$mactmp");
> 	open (LOG, ">>$maclog");
> 	my @lines=<TMP>;
> 	print LOG "$lines[1]";

Here you reopen mactmp.txt, read all the data into @lines, print the
second line (only) to maclog.txt, and throw the rest away. This is why
you only get one record in the log at the end.

I'm not quite sure how to help you at this point. You seem to be missing
several rather fundamental points about how Perl works. If I were
writing this program, I might write it something like this (completely
untested):

    #!/usr/bin/perl

    my $srvs = 'list.txt';
    my $macs = 'output.txt';
    my $log  = 'logfile.txt';

    # Only open each file once. We don't need any temporary files.
    open my $SRVS, '<', $srvs or die "can't read '$srvs': $!";
    open my $MACS, '<', $macs or die "can't read '$macs': $!";
    open my $LOG,  '>>', $log or die "can't append to '$log': $!";

    # For each line in output.txt...
    while (<$MACS>) {
        
        # if it matches the pattern, put the VALUE field in $mac...
        my ($mac) = /<FIELD NAME="MAC" VALUE="([^"]+)">/
            # otherwise move on to the next line.
            or next;

        # Get the next server from the list.
        my $srv = <$SRVS> or die "Not enough servers.\n";
        chomp $srv;

        # Write a line to the logfile.
        print $LOG "$srv,$mac\n";
    }

    # Close the logfile explicitly so we can check all the data got
    # written safely.
    close $LOG or die "writing to '$log' failed: $!";

    # Check if there are any servers left.
    <$SRVS> and die "Not enough addresses.\n";

    __END__

Can you understand what that's doing?

Ben

-- 
               We do not stop playing because we grow old; 
                  we grow old because we stop playing.
                            ben@morrow.me.uk


------------------------------

Date: Sat, 28 Jun 2008 19:07:21 -0700 (PDT)
From: Trev <trevor.dodds@gmail.com>
Subject: Re: Need help with a question.
Message-Id: <6bc2e2fc-3c1c-4d1d-a962-88dfd22881bf@d45g2000hsc.googlegroups.com>

On Jun 28, 6:22 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> Quoth Trev <trevor.do...@gmail.com>:
>
> > On Jun 28, 1:40 pm, Ben Morrow <b...@morrow.me.uk> wrote:
> > Thanks Ben for the detailed reply, I've been able to get the output
> > correct, but I'm stuck on how to read $cpqlogline into an array
> > without writing it to a file.
>
> > my files look like this:
>
> > the perl file:
>
> > use warnings;
> > use strict;
>
> > my $server_file = "list.txt";
> > my $mactmp = "mactmp.txt";
> > my $maclog = "logfile.txt";
>
> > sub LoadFile
> > {
> > open(my $SRV, '<', $server_file)
> >                 or die("Could not open file: $!");
>
> > while (my $server = <$SRV>) {
> > chomp($server);
> > open (my $DAT, '<', 'output.txt')
> >        || die("Could not open 'output.txt': $!");
>
> If you reopen output.txt every time you will start at the beginning
> again.
>
> > open (OTF, ">blah.txt");
> > print OTF $DAT;
>
> What are you trying to achieve here? You don't ever seem to make use of
> the contents of blah.txt.
>
> > while (my $cpqlogline = <$DAT>) {
> >         {
> >         chomp($cpqlogline);
> >         if ($cpqlogline =~ /MAC/)
> >         {
> >             $cpqlogline =~ s/  <FIELD NAME="Subject" VALUE="//i;
> >             $cpqlogline =~ s/  <FIELD NAME="MAC" VALUE="//i;
> >             $cpqlogline =~ s/"\/>//i;
>
> >             open (TMP, ">>$mactmp");
> >             print TMP "$server". "," . "$cpqlogline\n";
> >             close TMP;
>
> You don't need to keep everything in temporary files like this. Perl has
> variables for keeping data in.
>
> Here you write all the data into mactmp.txt; in fact, you write a line
> for every server with every mac address. I don't think this is what you
> meant.
>
> >         }
> >     }
> > }
> > }
> > close OTF;
> > }
>
> > sub CreateLOG
> > {
> >     open (TMP, "<$mactmp");
> >     open (LOG, ">>$maclog");
> >     my @lines=<TMP>;
> >     print LOG "$lines[1]";
>
> Here you reopen mactmp.txt, read all the data into @lines, print the
> second line (only) to maclog.txt, and throw the rest away. This is why
> you only get one record in the log at the end.
>
> I'm not quite sure how to help you at this point. You seem to be missing
> several rather fundamental points about how Perl works. If I were
> writing this program, I might write it something like this (completely
> untested):
>
>     #!/usr/bin/perl
>
>     my $srvs = 'list.txt';
>     my $macs = 'output.txt';
>     my $log  = 'logfile.txt';
>
>     # Only open each file once. We don't need any temporary files.
>     open my $SRVS, '<', $srvs or die "can't read '$srvs': $!";
>     open my $MACS, '<', $macs or die "can't read '$macs': $!";
>     open my $LOG,  '>>', $log or die "can't append to '$log': $!";
>
>     # For each line in output.txt...
>     while (<$MACS>) {
>
>         # if it matches the pattern, put the VALUE field in $mac...
>         my ($mac) = /<FIELD NAME="MAC" VALUE="([^"]+)">/
>             # otherwise move on to the next line.
>             or next;
>
>         # Get the next server from the list.
>         my $srv = <$SRVS> or die "Not enough servers.\n";
>         chomp $srv;
>
>         # Write a line to the logfile.
>         print $LOG "$srv,$mac\n";
>     }
>
>     # Close the logfile explicitly so we can check all the data got
>     # written safely.
>     close $LOG or die "writing to '$log' failed: $!";
>
>     # Check if there are any servers left.
>     <$SRVS> and die "Not enough addresses.\n";
>
>     __END__
>
> Can you understand what that's doing?
>
> Ben
>
> --
>                We do not stop playing because we grow old;
>                   we grow old because we stop playing.
>                             b...@morrow.me.uk

Hi

I'm only after 1 line (the MAC) per server, which is value [1]; the
other MACs I do not need. Is it possible to grep MAC from the array
(the entire file) and pass the results into a new array, which I could
then print [1] of to a file? This would then extract 1 line for every
server that I have an xml file for.
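
The grep-on-an-array idiom being asked about might look like this (a
sketch; note that once the non-MAC lines are filtered out, the Port 1
MAC ends up at index 0, not 1):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = <DATA>;                        # the whole file, in memory
my @macs  = grep { /NAME="MAC"/ } @lines;  # keep only the MAC fields
print $macs[0];                            # the Port 1 MAC line

__DATA__
  <FIELD NAME="Subject" VALUE="Embedded NIC MAC Assignment"/>
  <FIELD NAME="Port" VALUE="1"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-01"/>
  <FIELD NAME="Port" VALUE="2"/>
  <FIELD NAME="MAC" VALUE="00-00-00-00-00-02"/>
```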


------------------------------

Date: Sun, 29 Jun 2008 04:42:20 GMT
From: merlyn@stonehenge.com (Randal Schwartz)
Subject: new CPAN modules on Sun Jun 29 2008
Message-Id: <K37JqK.16wF@zorch.sf-bay.org>

The following modules have recently been added to or updated in the
Comprehensive Perl Archive Network (CPAN).  You can install them using the
instructions in the 'perlmodinstall' page included with your Perl
distribution.

Apache-Bootstrap-0.04_01
http://search.cpan.org/~phred/Apache-Bootstrap-0.04_01/
Bootstraps dual life mod_perl1 and mod_perl2 Apache modules 
----
Auth-Yubikey_Decrypter-0.02
http://search.cpan.org/~massyn/Auth-Yubikey_Decrypter-0.02/
The great new Auth::Yubikey_Decrypter! 
----
Auth-Yubikey_Decrypter-0.03
http://search.cpan.org/~massyn/Auth-Yubikey_Decrypter-0.03/
The great new Auth::Yubikey_Decrypter! 
----
B-Debug-1.10
http://search.cpan.org/~rurban/B-Debug-1.10/
Walk Perl syntax tree, printing debug info about ops 
----
B-Generate-1.12_09
http://search.cpan.org/~rurban/B-Generate-1.12_09/
Create your own op trees. 
----
CGI-CMS-0.34
http://search.cpan.org/~lze/CGI-CMS-0.34/
Content Managment System that runs under mod_perl and and as cgi script. 
----
Catalyst-Authentication-Credential-Authen-Simple-0.01
http://search.cpan.org/~jlmartin/Catalyst-Authentication-Credential-Authen-Simple-0.01/
Verify credentials with the Authen::Simple framework 
----
Catalyst-Controller-DBIC-API-1.000000
http://search.cpan.org/~lsaunders/Catalyst-Controller-DBIC-API-1.000000/
----
Catalyst-Plugin-SmartURI-0.023
http://search.cpan.org/~rkitover/Catalyst-Plugin-SmartURI-0.023/
Configurable URIs for Catalyst 
----
Catalyst-Plugin-SmartURI-0.024
http://search.cpan.org/~rkitover/Catalyst-Plugin-SmartURI-0.024/
Configurable URIs for Catalyst 
----
Catalyst-Plugin-SmartURI-0.025
http://search.cpan.org/~rkitover/Catalyst-Plugin-SmartURI-0.025/
Configurable URIs for Catalyst 
----
Chart-Gnuplot-0.02
http://search.cpan.org/~kwmak/Chart-Gnuplot-0.02/
Plot graph using Gnuplot on the fly 
----
Check-ISA-0.01
http://search.cpan.org/~nuffin/Check-ISA-0.01/
DWIM, correct checking of an object's class 
----
Check-ISA-0.02
http://search.cpan.org/~nuffin/Check-ISA-0.02/
DWIM, correct checking of an object's class 
----
Chemistry-Elements-1.05_01
http://search.cpan.org/~bdfoy/Chemistry-Elements-1.05_01/
Perl extension for working with Chemical Elements 
----
Crypt-Util-0.09
http://search.cpan.org/~nuffin/Crypt-Util-0.09/
A lightweight Crypt/Digest convenience API 
----
HTML-Menu-TreeView-1.03
http://search.cpan.org/~lze/HTML-Menu-TreeView-1.03/
Create a HTML TreeView from scratch 
----
HTTP-Request-FromLog-0.00001
http://search.cpan.org/~miki/HTTP-Request-FromLog-0.00001/
convert to HTTP::Request object from Apache's access_log record. 
----
Kx-0.031
http://search.cpan.org/~markpf/Kx-0.031/
Perl extension for Kdb+ http://kx.com 
----
Mobile-P2kMoto-0.03
http://search.cpan.org/~mbarbon/Mobile-P2kMoto-0.03/
interface with Motorola P2K phones 
----
Module-Install-POE-Test-Loops-0.01
http://search.cpan.org/~martijn/Module-Install-POE-Test-Loops-0.01/
Install tests for POE::Loops 
----
Module-Mask-Deps-0.06
http://search.cpan.org/~mattlaw/Module-Mask-Deps-0.06/
Mask modules not listed as dependencies 
----
Module-Util-1.04
http://search.cpan.org/~mattlaw/Module-Util-1.04/
Module name tools and transformations 
----
Muldis-D-0.37.0
http://search.cpan.org/~duncand/Muldis-D-0.37.0/
Formal spec of Muldis D relational DBMS lang 
----
Net-Whois-Raw-1.54
http://search.cpan.org/~despair/Net-Whois-Raw-1.54/
Get Whois information for domains 
----
POE-Component-Server-Chargen-1.12
http://search.cpan.org/~bingos/POE-Component-Server-Chargen-1.12/
A POE component that implements an RFC 864 Chargen server. 
----
POE-Component-Server-Echo-1.62
http://search.cpan.org/~bingos/POE-Component-Server-Echo-1.62/
A POE component that implements an RFC 862 Echo server. 
----
POE-Component-Server-RADIUS-0.08
http://search.cpan.org/~bingos/POE-Component-Server-RADIUS-0.08/
a POE based RADIUS server component 
----
POE-Component-WWW-Shorten-1.16
http://search.cpan.org/~bingos/POE-Component-WWW-Shorten-1.16/
A non-blocking wrapper around WWW::Shorten. 
----
POE-Filter-LZF-1.68
http://search.cpan.org/~bingos/POE-Filter-LZF-1.68/
A POE filter wrapped around Compress::LZF 
----
POE-Filter-LZO-1.68
http://search.cpan.org/~bingos/POE-Filter-LZO-1.68/
A POE filter wrapped around Compress::LZO 
----
POE-Filter-LZW-1.68
http://search.cpan.org/~bingos/POE-Filter-LZW-1.68/
A POE filter wrapped around Compress::LZW 
----
POE-Test-Loops-1.000
http://search.cpan.org/~rcaputo/POE-Test-Loops-1.000/
Reusable tests for POE::Loop authors 
----
Portable-0.06
http://search.cpan.org/~adamk/Portable-0.06/
Perl on a Stick (EXPERIMENTAL) 
----
Rinchi-XMLSchema-0.02
http://search.cpan.org/~bmames/Rinchi-XMLSchema-0.02/
Module for creating XML Schema objects from XSD files. 
----
Statistics-FisherPitman-0.02
http://search.cpan.org/~rgarton/Statistics-FisherPitman-0.02/
Randomization-based alternative to one-way independent groups ANOVA; unequal variances okay 
----
Sys-Info-0.52_91
http://search.cpan.org/~burak/Sys-Info-0.52_91/
Fetch information from the host system 
----
TAP3-Tap3edit-0.30
http://search.cpan.org/~mrjones/TAP3-Tap3edit-0.30/
TAP3 and RAP files Encode/Decode library http://www.tap3edit.com 
----
Text-CSV_XS-0.52
http://search.cpan.org/~hmbrand/Text-CSV_XS-0.52/
comma-separated values manipulation routines 
----
Text-FakeXML-0.02
http://search.cpan.org/~jv/Text-FakeXML-0.02/
Creating text with <things>. 
----
URI-SmartURI-0.024
http://search.cpan.org/~rkitover/URI-SmartURI-0.024/
Subclassable and hostless URIs 
----
WWW-Shorten-2.00
http://search.cpan.org/~davecross/WWW-Shorten-2.00/
Interface to URL shortening sites. 
----
XML-RSS-Feed-2.3
http://search.cpan.org/~jbisbee/XML-RSS-Feed-2.3/
Persistant XML RSS Encapsulation 
----
XML-RSS-PicLens-0.02
http://search.cpan.org/~andya/XML-RSS-PicLens-0.02/
Create a PicLens compatible RSS feed 
----
eBay-API-Docs-0.02
http://search.cpan.org/~tkeefer/eBay-API-Docs-0.02/
----
eBay-API-Docs-0.03
http://search.cpan.org/~tkeefer/eBay-API-Docs-0.03/
----
optimizer-0.06_01
http://search.cpan.org/~rurban/optimizer-0.06_01/
Write your own Perl optimizer, in Perl 


If you're an author of one of these modules, please submit a detailed
announcement to comp.lang.perl.announce, and we'll pass it along.

This message was generated by a Perl program described in my Linux
Magazine column, which can be found on-line (along with more than
200 other freely available past column articles) at
  http://www.stonehenge.com/merlyn/LinuxMag/col82.html

print "Just another Perl hacker," # the original

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.vox.com/ for Smalltalk and Seaside discussion


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 1679
***************************************

