
Perl-Users Digest, Issue: 2054 Volume: 11

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Sat Dec 13 21:09:53 2008

Date: Sat, 13 Dec 2008 18:09:14 -0800 (PST)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Sat, 13 Dec 2008     Volume: 11 Number: 2054

Today's topics:
    Re: MIME::Lite -- Add attribute to part's Content-Type <waveright@gmail.com>
        multidimensional array <tara.bratten@gmail.com>
    Re: multidimensional array <klaus03@gmail.com>
    Re: multidimensional array <tadmc@seesig.invalid>
    Re: Processing Multiple Large Files <hjp-usenet2@hjp.at>
    Re: Processing Multiple Large Files <hjp-usenet2@hjp.at>
    Re: Processing Multiple Large Files xhoster@gmail.com
    Re: Processing Multiple Large Files sln@netherlands.com
    Re: Processing Multiple Large Files sln@netherlands.com
    Re: querying Active Directory via LDAP in perl <nospam@somewhere.com>
    Re: Trouble with Net::Ping <hjp-usenet2@hjp.at>
        WWW::Search('Ebay::ByEndDate') <georg.heiss@gmx.de>
    Re: WWW::Search('Ebay::ByEndDate') <tim@burlyhost.com>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Sat, 13 Dec 2008 04:00:53 -0800 (PST)
From: Todd Wade <waveright@gmail.com>
Subject: Re: MIME::Lite -- Add attribute to part's Content-Type
Message-Id: <95669871-ea2f-417e-a588-6489823ed6d9@n33g2000pri.googlegroups.com>

On Dec 12, 7:10 pm, Jerry Krinock <je...@ieee.org> wrote:
> When Apple's Mail.app composes a message with a .zip attachment, it
> adds an "x-mac-auto-archive" attribute to Content-Type, which I
> believe tells the receiving Mail.app to automatically unzip the
> attachment upon receipt.  I would like to be able to send messages
> behaving like that when I use MIME::Lite.  Taking a wild guess, I
> wrote this code:
>
>     $msg->attach(
>         Type     => 'application/zip',
>         Path     => $myPath,
>         'x-mac-auto-archive' => 'yes',
>         Filename => "MyScript.app.zip"
>     ) ;
>
> What I get from that (which fails to make Mail.app automatically
> unzip) is:
>
>    Content-Type: application/zip; name="MyScript.app.zip"
>    X-Mac-Auto-Archive: yes
>
> A similar message sent by the real Mail.app gives me a Content-Type
> like this:
>
>    Content-Type: application/zip;
>         x-mac-auto-archive=yes;
>         name="MyScript.app.zip"
>
> Apparently, the problem is that instead of becoming an attribute of
> Content-Type, my x-mac-auto-archive becomes a Header field.  (I hope
> my terminology is correct.)
>

Just looking at what you've got here, I'd try:

  Type => 'application/zip; x-mac-auto-archive=yes'
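
For example, spelled out as a full call (an untested sketch; $msg and
$myPath are the ones from your post):

    $msg->attach(
        Type     => 'application/zip; x-mac-auto-archive=yes',
        Path     => $myPath,
        Filename => 'MyScript.app.zip',
    );

That keeps x-mac-auto-archive inside the Content-Type value instead of
letting it become a separate header field.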

Todd W.


------------------------------

Date: Sat, 13 Dec 2008 01:42:28 -0800 (PST)
From: taralee324 <tara.bratten@gmail.com>
Subject: multidimensional array
Message-Id: <f1e6714c-6f7e-4e20-90cc-9dfe548648d1@o40g2000yqb.googlegroups.com>

I know I've done this before... but for the life of me, I cannot
recall how!

I have an array of arrays (ie. $AoA[$array][$row][$col]). And I want
to copy one array to a separate array variable. I've tried:
@new_array = [$AoA[$array]];
@new_array = {$AoA[$array]};
and about a million other ways, but I can't find any online
documentation (which is where I learn this stuff) on how to do it.

Thanks for any help!


------------------------------

Date: Sat, 13 Dec 2008 02:35:46 -0800 (PST)
From: Klaus <klaus03@gmail.com>
Subject: Re: multidimensional array
Message-Id: <2fabc2d9-ef65-4aa1-8ac0-e095201e0b75@v13g2000vbb.googlegroups.com>

On Dec 13, 10:42 am, taralee324 <tara.brat...@gmail.com> wrote:

> I have an array of arrays (ie. $AoA[$array][$row][$col]). And I want
> to copy one array to a separate array variable. I've tried:
> @new_array = [$AoA[$array]];
> @new_array = {$AoA[$array]};

@new_array = @{$AoA[$array]};

> but I can't find any online documentation

perldoc perldsc
Access and Printing of an ARRAY OF ARRAYS
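
A tiny self-contained example of that dereference (made-up data, just
to show the copy):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @AoA = ( [ [ 1, 2 ], [ 3, 4 ] ],
                [ [ 5, 6 ], [ 7, 8 ] ] );

    my $array     = 1;
    my @new_array = @{ $AoA[$array] };   # copy the row refs of $AoA[1]

    print $new_array[0][1], "\n";        # prints 6

Note that this is a shallow copy: the inner array refs are still shared
with @AoA.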

--
Klaus


------------------------------

Date: Sat, 13 Dec 2008 07:00:11 -0600
From: Tad J McClellan <tadmc@seesig.invalid>
Subject: Re: multidimensional array
Message-Id: <slrngk7cer.8go.tadmc@tadmc30.sbcglobal.net>

taralee324 <tara.bratten@gmail.com> wrote:

> I have an array of arrays (ie. $AoA[$array][$row][$col]). And I want
> to copy one array to a separate array variable.

> I can't find any online documentation


Apply "Use Rule 1" from 

    perldoc perlreftut

I like to do it in 3 steps.

    my @array = @other;   # pretend it is a normal array

    my @array = @{   };   # replace the array name with a block

    my @array = @{ $AoA[$array] };  # fill in the block with a reference


-- 
Tad McClellan
email: perl -le "print scalar reverse qq/moc.noitatibaher\100cmdat/"


------------------------------

Date: Sat, 13 Dec 2008 23:03:30 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Processing Multiple Large Files
Message-Id: <slrngk8c9j.bnq.hjp-usenet2@hrunkner.hjp.at>

On 2008-12-12 13:09, A. Sinan Unur <1usa@llenroc.ude.invalid> wrote:
> "A. Sinan Unur" <1usa@llenroc.ude.invalid> wrote in
> news:Xns9B71C0CDC9E12asu1cornelledu@127.0.0.1: 
>> "friend.05@gmail.com" <hirenshah.05@gmail.com> wrote in news:5f1e2237-
>> b3f6-409c-aa95-b7ba57955265@q26g2000prq.googlegroups.com:
>> 
>>> I am analyzing some network log files. There are around 200-300 files
>>> and each file has more than 2 million entries in it.
[...]
>>> Is there any efficient way to do it?
>>> 
>>> Maybe Multiprocessing, Multitasking?
>> 
>> Here is one way to do it using Parallel::Forkmanager.
>> 
> ...
>
>> Not very impressive. Between no forking vs max 20 instances, time 
>> required to process was reduced by 20% with most of the gains coming 
>> from running 2. That probably has more to do with the implementation
>> of fork on Windows than anything else.
>> 
>> In fact, I should probably have used threads on Windows. Anyway, I'll 
>> boot into Linux and see if the returns there are greater.
>
> Hmmm ... I tried it on ArchLinux using perl from the repository on the 
> exact same hardware as the Windows tests:
>
> [sinan@archardy large]$ time perl process.pl 0
>
> real    0m29.983s
> user    0m29.848s
> sys     0m0.073s
>
> [sinan@archardy large]$ time perl process.pl 2
>
> real    0m15.281s
> user    0m29.865s
> sys     0m0.077s
>
> with no changes going to 4, 8, 16 or 20 max instances. Exact same 
> program and data on the same hardware, yet the no fork version was 40% 
> faster.

Where do you get this 40% figure from? As far as I can see the forking
version is almost exactly 100% faster (0m15.281s instead of 0m29.983s)
than the non-forking version. 

This is to be expected. Your small test files fit completely into the
file cache even on rather small systems, and if you ran process.pl
directly after create.pl, they almost certainly were cached. So the task
is completely CPU-bound, and if you have at least two cores (as most
current computers do), two processes should be twice as fast as one.

Here is what I get for 

    for i in `seq 0 25`
    do
	echo -n "$i "
	time ./process $i
    done

on a dual-core system:

0 ./process $i  20.85s user 0.10s system 99% cpu 21.024 total
1 ./process $i  22.03s user 0.06s system 99% cpu 22.146 total
2 ./process $i  21.86s user 0.04s system 197% cpu 11.093 total
3 ./process $i  22.63s user 0.09s system 197% cpu 11.505 total
[...]
23 ./process $i  21.67s user 0.15s system 199% cpu 10.956 total
24 ./process $i  22.91s user 0.10s system 199% cpu 11.553 total
25 ./process $i  22.05s user 0.08s system 199% cpu 11.124 total


Two processes are twice as fast as one, but adding more processes
doesn't help (but doesn't hurt either).

And here's the output for an 8-core system:

0 ./process $i  10.22s user 0.05s system 99% cpu 10.275 total
1 ./process $i  10.13s user 0.07s system 100% cpu 10.196 total
2 ./process $i  10.19s user 0.06s system 199% cpu 5.138 total
3 ./process $i  10.19s user 0.06s system 284% cpu 3.606 total
4 ./process $i  10.19s user 0.06s system 395% cpu 2.589 total
5 ./process $i  10.18s user 0.06s system 472% cpu 2.167 total
6 ./process $i  10.20s user 0.05s system 495% cpu 2.069 total
7 ./process $i  10.20s user 0.07s system 650% cpu 1.580 total
8 ./process $i  10.18s user 0.06s system 652% cpu 1.571 total
9 ./process $i  10.19s user 0.05s system 659% cpu 1.553 total
10 ./process $i  10.20s user 0.06s system 667% cpu 1.538 total
11 ./process $i  10.19s user 0.06s system 666% cpu 1.538 total
12 ./process $i  10.19s user 0.06s system 706% cpu 1.451 total
13 ./process $i  10.19s user 0.05s system 662% cpu 1.545 total
14 ./process $i  10.19s user 0.06s system 689% cpu 1.486 total
15 ./process $i  10.19s user 0.05s system 708% cpu 1.446 total
16 ./process $i  10.20s user 0.06s system 755% cpu 1.357 total
17 ./process $i  10.22s user 0.06s system 756% cpu 1.360 total
18 ./process $i  10.20s user 0.05s system 741% cpu 1.383 total
19 ./process $i  10.21s user 0.06s system 729% cpu 1.407 total
20 ./process $i  10.23s user 0.05s system 726% cpu 1.415 total
21 ./process $i  10.20s user 0.06s system 749% cpu 1.368 total
22 ./process $i  10.21s user 0.05s system 726% cpu 1.411 total
23 ./process $i  10.23s user 0.06s system 739% cpu 1.392 total
24 ./process $i  10.21s user 0.04s system 712% cpu 1.440 total
25 ./process $i  10.20s user 0.05s system 739% cpu 1.386 total

Speed rises almost linearly up to 7 processes (which manage to use 6.5
cores). Then it still gets a bit faster up to 16 or 17 processes (using
about 7.5 cores), and after that it levels off. Not quite what I
expected, but close enough.

For the OP's problem, this test is most likely not representative: He
has a lot more files and each is larger. So they may not fit into the
cache, and even if they do, they probably aren't in the cache when his
script runs (depends on how long ago they were last read/written and how
busy the system is).
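
For reference, the forking split being timed here looks roughly like
this (a sketch, not Sinan's actual process.pl; the file glob and the
per-line work are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $max   = shift || 0;            # 0 tells Parallel::ForkManager not to fork
    my @files = glob('data/*.txt');    # placeholder for the test files

    my $pm = Parallel::ForkManager->new($max);
    for my $file (@files) {
        $pm->start and next;           # parent: spawn a child, go to next file
        open my $fh, '<', $file or die "$file: $!";
        while (<$fh>) {
            # per-line work goes here
        }
        close $fh;
        $pm->finish;                   # child is done
    }
    $pm->wait_all_children;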

	hp


------------------------------

Date: Sat, 13 Dec 2008 23:26:25 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Processing Multiple Large Files
Message-Id: <slrngk8dki.bnq.hjp-usenet2@hrunkner.hjp.at>

On 2008-12-12 15:02, cartercc <cartercc@gmail.com> wrote:
> On Dec 11, 3:27 pm, "friend...@gmail.com" <hirenshah...@gmail.com>
> wrote:
>> I am analyzing some network log files. There are around 200-300 files and
>> each file has more than 2 million entries in it.
>> Currently my script is reading each file line by line, so it will take a
>> lot of time to process all the files.
[...]
>>
>> Maybe Multiprocessing, Multitasking?
>
> If you are using an Intel-like processor, it multi processes, anyway.

No. At least not on a level you notice. From a perl programmer's view
(or a Java or C programmer's), each core is a separate CPU. A
single-threaded program will not become faster just because you have two
or more cores. You have to program those threads (or processes)
explicitly to get any speedup. See my results for Sinan's test program
on a dual-core and an eight-core machine.

> There are only two ways to increase speed: increase the clocks of the
> processor or increase the number of processors. With respect to the
> latter, take a look at Erlang. I'd bet a lot of money that you could
> write an Erlang script that would increase the speed by several orders
> of magnitude.

I doubt that very much. Erlang is inherently multithreaded so you don't
have to do anything special to use those 2 or 8 cores you have, but it
doesn't magically make a processor "several orders of magnitude" faster,
unless you have hundreds or thousands of processors. 

I think Erlang is usually compiled to native code (like C), so it may
well be a few orders of magnitude faster than perl because of that. But
that depends very much on the problem and extracting stuff from text
files is something at which perl is relatively fast.

> (On my machine, Erlang generates about 60,000 threads in
> several milliseconds, and I have an old, slow machine.)

Which says nothing about how long it takes to parse a line in a log
file.

	hp


------------------------------

Date: 14 Dec 2008 00:30:05 GMT
From: xhoster@gmail.com
Subject: Re: Processing Multiple Large Files
Message-Id: <20081213192853.428$mr@newsreader.com>

"Peter J. Holzer" <hjp-usenet2@hjp.at> wrote:
> On 2008-12-12 13:09, A. Sinan Unur <1usa@llenroc.ude.invalid> wrote:
> >
> >> Not very impressive. Between no forking vs max 20 instances, time
> >> required to process was reduced by 20% with most of the gains coming
> >> from running 2. That probably has more to do with the implementation
> >> of fork on Windows than anything else.
> >>
> >> In fact, I should probably have used threads on Windows. Anyway, I'll
> >> boot into Linux and see if the returns there are greater.
> >
> > Hmmm ... I tried it on ArchLinux using perl from the repository on the
> > exact same hardware as the Windows tests:
> >
> > [sinan@archardy large]$ time perl process.pl 0
> >
> > real    0m29.983s
> > user    0m29.848s
> > sys     0m0.073s
> >
> > [sinan@archardy large]$ time perl process.pl 2
> >
> > real    0m15.281s
> > user    0m29.865s
> > sys     0m0.077s
> >
> > with no changes going to 4, 8, 16 or 20 max instances. Exact same
> > program and data on the same hardware, yet the no fork version was 40%
> > faster.
>
> Where do you get this 40% figure from? As far as I can see the forking
> version is almost exactly 100% faster (0m15.281s instead of 0m29.983s)
> than the non-forking version.


I assumed he was comparing Linux to Windows, not within linux.

Xho

-- 
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.


------------------------------

Date: Sun, 14 Dec 2008 01:03:42 GMT
From: sln@netherlands.com
Subject: Re: Processing Multiple Large Files
Message-Id: <59j8k41h56n4ou7kpjno4forik9ttdl76e@4ax.com>

On Thu, 11 Dec 2008 12:27:15 -0800 (PST), "friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:

>Hi,
>
>I am analyzing some network log files. There are around 200-300 files and
>each file has more than 2 million entries in it.
>
>Currently my script is reading each file line by line, so it will take a
>lot of time to process all the files.
>
>Is there any efficient way to do it?
>
>Maybe Multiprocessing, Multitasking?
>
>
>Thanks.

I'm estimating 100 characters per line, 2 million lines per file, and
300 files, which comes to about 60 gigabytes of data to be read
(100 bytes x 2,000,000 lines x 300 files = 60 GB).

If the files are to be read across a real 1 Gigabit network, just
reading the data will take about 10 minutes (roughly 600 seconds).
Gigabit Ethernet can theoretically transmit 100 MB/second, if its cache
is big enough, but that includes packetizing data and protocol ack/nak's.
So, in reality, it's about 50 MB/second.

Some drives can't read/write that fast; it's the upper limit of some
small RAID systems. So the drives may not actually be able to keep up
with network read requests if that's all you're doing.
The CPU will be mostly idle.

In reality, though, you're not reading huge blocks of data like in a
disk-to-disk transfer; you're processing line by line.

In this case, the I/O sits idle between the CPU's line-by-line
processing steps, AND the CPU sits idle waiting for each incremental I/O
request.

So where is the time being lost? Well, it's being lost in both places,
on the CPU and on the I/O.

As rough orders of magnitude: the CPU can process data at about
25 GB/second, RAM at about 2-4 GB/second, and the hard drive (being
slower than Gigabit Ethernet) at about 25 MB/second.

For the CPU:
It would be better to keep the CPU working all the time instead of waiting
for I/O completion on single requests. The way to do this is to have many
requests submitted at one time. I would say 25 threads, on 25 different
files at a time. You're running the same function, just on a different
thread; you just have to know which file is next (see the sketch below).
And multiple threads beat multiple processes hands down, simply because
process switching takes much more overhead than thread switching. Another
reason is that you have 25 buffers waiting for the I/O data instead of
just 1.
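
A rough sketch of that thread-per-file idea using Perl's ithreads and
Thread::Queue (the file list, worker count and per-line work are
placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $queue = Thread::Queue->new();
    $queue->enqueue(glob('logs/*.log'));   # placeholder file list

    my $workers = 25;                      # tune to your core count and I/O
    my @threads = map {
        threads->create(sub {
            while (defined(my $file = $queue->dequeue_nb())) {
                open my $fh, '<', $file or next;
                while (my $line = <$fh>) {
                    # per-line work goes here
                }
                close $fh;
            }
        });
    } 1 .. $workers;

    $_->join for @threads;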

For the I/O:
When the I/O always has a request pending (cached), it doesn't have to
wait for the CPU. Most memory transfers will go via the DMA controller,
not the CPU.

If you don't do multiple threads at all, there are still ways to speed it
up. Even if you just buffer, say, 10 lines at a time before you process
them, that would be better than no threading at all.

Good luck.

sln





------------------------------

Date: Sun, 14 Dec 2008 01:20:08 GMT
From: sln@netherlands.com
Subject: Re: Processing Multiple Large Files
Message-Id: <cnn8k4tep2q50tjh4ckt080t52m7oiagtf@4ax.com>

On Sun, 14 Dec 2008 01:03:42 GMT, sln@netherlands.com wrote:

>On Thu, 11 Dec 2008 12:27:15 -0800 (PST), "friend.05@gmail.com" <hirenshah.05@gmail.com> wrote:
>
>
[...]
>If you don't do multiple threads at all, there are still ways to speed it
>up. Even if you just buffer, say, 10 lines at a time before you process
>them, that would be better than no threading at all.
>
Of course it's a block read, not line by line.

sln



------------------------------

Date: Sat, 13 Dec 2008 12:48:17 -0500
From: "Thrill5" <nospam@somewhere.com>
Subject: Re: querying Active Directory via LDAP in perl
Message-Id: <gi0sh4$71r$1@nntp.motzarella.org>


<joseph85750@yahoo.com> wrote in message 
news:e34ecf39-ee6c-42de-8056-d462661916f6@i24g2000prf.googlegroups.com...
On Dec 11, 7:11 pm, "Thrill5" <nos...@somewhere.com> wrote:
> <joseph85...@yahoo.com> wrote in message
>
> news:03a34234-c330-464e-9a0c-6bd65c28b96b@k36g2000pri.googlegroups.com...
>
>
>
> > I've been poking at this on and off over the past few months, never
> > having much success. I was never sure what sort of crazy query string
> > the AD server wanted. But then it occurred to me that my Linux
> > Evolution email client does this without any problems-- only using the
> > IP address of the Active Directory LDAP server. I can query/search,
> > and it immediately returns all matches.
>
> > How can it do this without the big ugly
> > "cn=users,dc=foo,dc=blah,o=acme......" string ?
>
> > Since this is obviously possible and simple (except for me), how could
> > I do this same simple query in perl-- only armed with the IP address
> > of my AD/LDAP server?
>
> > Curiously,
> > JS
>
> Google "LDAP query syntax", and you will find a whole bunch of information
> about querying AD via LDAP.

>Yes, google returns many articles mentioning query strings, such as:
>
>search DN: ou=groups,ou=@company,dc=corp,dc=trx,dc=com
>
>But back to my original question-- Evolution doesn't seem to need any
>of this.  In Evolution, you simply give it the IP address of your AD/
>LDAP server and it all magically works.  Evolution is running on a
>linux box, which has no knowledge of the query string variables.
>
>I even tried running a tcpdump on the connection to figure out what it
>was doing but couldn't figure it out.

You obviously haven't read them; if you had, you would know how to do
this. You only need to specify the CN to search for and the base DN
(where to start the search), along with setting the appropriate search
scope (i.e. subtree).
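
With Net::LDAP that boils down to something like this (the host,
credentials, base DN and filter below are made-up placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::LDAP;

    my $ldap = Net::LDAP->new('10.0.0.1')           # your AD server's IP
        or die "Can't connect: $@";

    # AD usually refuses anonymous searches; bind with a domain account.
    my $mesg = $ldap->bind('someuser@corp.example.com', password => 'secret');
    $mesg->code and die $mesg->error;

    $mesg = $ldap->search(
        base   => 'dc=corp,dc=example,dc=com',      # where to start
        scope  => 'sub',                            # search the whole subtree
        filter => '(cn=Smith*)',                    # the CN you're looking for
    );
    $mesg->code and die $mesg->error;

    print $_->dn, "\n" for $mesg->entries;
    $ldap->unbind;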




------------------------------

Date: Sat, 13 Dec 2008 21:39:58 +0100
From: "Peter J. Holzer" <hjp-usenet2@hjp.at>
Subject: Re: Trouble with Net::Ping
Message-Id: <slrngk87d0.bnq.hjp-usenet2@hrunkner.hjp.at>

On 2008-12-11 21:34, Ted Byers <r.ted.byers@gmail.com> wrote:
> On Dec 10, 5:46 pm, "Peter J. Holzer" <hjp-usen...@hjp.at> wrote:
>> On 2008-12-09 06:14, Ted Byers <r.ted.by...@gmail.com> wrote:
>> > and http lives on top of TCP.  Since I would expect a web
>> > client like a browser based on LWP would be using http exchanges,
>>
>> Maybe I'm mixing up threads, but didn't you have problems with FTP?
>> That's a different protocol then HTTP. Both use TCP, but you can't debug
>> either of them properly by trying to connect to TCP port 7 (which is
>> what Net::Ping does).
>>
>>
> Actually, I am having trouble with both.  One script uses Net::FTP to
> transfer archives from one of our servers to another across the
> country, and the other uses LWP to retrieve a data feed from one of
> our suppliers.  Both work flawlessly when the amount of data is small
> (up to a few hundred k) and both time out when the amount of data is
> large (a few MB). Might it be that the client merely thinks the
> connection has timed out but that the requested data is still being
> transferred?  I know that the script that pushes our large archives to
> our other server does complete the transfer of the larger files (we
> can open them and see they're intact), but then dies believing the
> connection is broken and so never makes an attempt to send the next
> file in the list.

Keep in mind that FTP uses several connections: one control connection
(for the entire session) and one data connection per file transfer.
During a file transfer there is no traffic on the control connection, so
a firewall between the server and the client might drop that connection
in the meantime. This doesn't seem very probable, since your files are so
small (only a few MB) and should be transferred in a few minutes at most,
but it does match the symptoms you are having, and your workaround:

> We managed to work around this by opening the
> connection to the ftp server just before trying to transfer a file and
> closing it immediately after sending the file (something my colleague
> - our system administrator - set up).

is what I would suggest in such a situation, unless you can fix the
firewall, or switch to a less baroque protocol (like SFTP, HTTP/WebDAV,
or rsync).
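
In Net::FTP terms, that per-file workaround might look roughly like this
($host, $user, $pass and @archives are hypothetical placeholders, and
error handling is kept minimal):

    use strict;
    use warnings;
    use Net::FTP;

    # one short-lived session per file
    for my $file (@archives) {
        my $ftp = Net::FTP->new($host, Timeout => 120)
            or die "Can't connect to $host: $@";
        $ftp->login($user, $pass) or die 'Login failed: ', $ftp->message;
        $ftp->binary;
        $ftp->put($file)          or warn "Failed to send $file: ", $ftp->message;
        $ftp->quit;
    }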


> WRT LWP, when the transfer fails, we work around it by recursively
> partitioning the period for which we're requesting data into smaller
> and smaller sub-intervals until we get all the data.  This is only
> necessary when there are several MB worth of data to be retrieved.

First of all, try to find out what the problem is:

Does the transfer work if you use some other tool (wget, curl, a web
browser, ...)? If not, it isn't a problem with LWP, and hence off-topic
for this newsgroup (it may become on-topic again, when the problem is
known, cannot be fixed and you need to implement a work-around). Do
large HTTP downloads from other sites work? 

And remember: Always try to test exactly what you want to test. If you
want to test whether host A can reach host B, it makes no sense to run a
test on host C. If you are interested in HTTP, it makes no sense to use
ping, etc. Except as control experiments: If you have two experiments
where one works and one doesn't (e.g., you can transfer files from B to
C, but not from B to A), that difference is likely related to your
problem. But you need to be aware of the differences.


> Again, is there a better way?

We can't know "a better way", since we don't know what the problem is.
It might be a problem in a perl module. But it might also be a problem
in one of the two firewalls or with your internet connection. Since your
problems with large (still small in my book) downloads occur with two
independent perl modules and maybe even other tools (you didn't say
whether the FTP download your system administrator set up uses perl or
something else) I'd guess that it's not the Perl modules.


>> > But there is a problem, here, with what you've written.  You say the
>> > ping command uses ICMP, but you say www.microsoft.com is not ICMP
>> > pingable.  I used the ping command provided by MS with WXP.  Why would
>> > that use ICMP if www.microsoft.com is not ICMP pingable.
>>
>> MS ping uses ICMP because that's its job. "ping" is the command which
>> checks if a host replies to ICMP echo requests. For some reason
>> Microsoft doesn't want its web servers to reply to ICMP echo requests.
>> Maybe they think you should use a browser to look at a webserver, not
>> ping ;-).
>>
> OK.  But that makes them unfriendly when one needs to check
> connectivity.

They probably don't have any intention of being friendly for that
purpose.

(Personally I think blocking ICMP is stupid. I mean, if anybody is going
to attack www.microsoft.com, why should they try to ping them first?
They can attack the web server directly; they already know that it is there.)


>> > In any event, LWP gives only a mention that a given transfer timed out
>> > (and it happens only when trying to transfer a multimegabyte file),
>> > but not why.  I DID use the LWP::DebugFile package for this, but the
>> > data doesn't seem very detailed.
>>
>> > I was, in fact, advised in this forum to check connectivity between
>> > the machines in the transaction using ping and traceroute.
>>
>> Yes. The commands "ping" and "traceroute". They already exist and are
>> almost certainly installed on your linux server. No need to write a
>> replacement in Perl.
>>
> OK.  But the server running my script is Windows.

Sorry, I was never sure which host did what and until this posting I
didn't even realize that you were talking about two different problems
(or two different symptoms of the same problem). You just aren't very
precise in your descriptions (and I don't want to ask more about your
network setup, since this is a Perl newsgroup, not a network newsgroup -
crosspost and fup to a more appropriate newsgroup if you want to
continue this discussion).

> There, traceroute is tracert (unless Windows Server uses a different
> name than the other versions of Windows).  And for the purpose of
> automating checking of connectivity one would need a little extra code
> to invoke these OS commands and parse the output to check for success
> or failure (something I'd assumed was what Net::Ping and
> Net::Traceroute were made for, but it appears that was wrong).

I don't think anybody suggested that you should automate these tests.
Just run them once to see whether you have basic connectivity (well, you
already know that) and what the route from one host to the other looks
like.  They aren't all that helpful for your problem.

If you need continuous monitoring of connectivity, use a monitoring tool
like Nagios.

	hp



------------------------------

Date: Sat, 13 Dec 2008 09:58:21 -0800 (PST)
From: "georg.heiss@gmx.de" <georg.heiss@gmx.de>
Subject: WWW::Search('Ebay::ByEndDate')
Message-Id: <94cbfde0-6182-4190-9aa6-eac894e8c563@s1g2000prg.googlegroups.com>

Hi, why doesn't this search work? Any ideas?

#!/usr/bin/perl -w
use strict;
use WWW::Search;

  my $EBAY_HOST = 'http://search.ebay.de';
#  my $search = new WWW::Search('Ebay::ByEndDate');
  my $search = new WWW::Search('Ebay');
  my $q = WWW::Search::escape_query("iphone");
  my $hits;

  $search->native_query($q, { ebay_host => $EBAY_HOST } );

   while (my $r = $search->next_result()) {
     $hits++;
     print "Result: , $r->url()\n";
    #    $r->title(),
    #    $r->description(),
    #    $r->change_date();

   }


------------------------------

Date: Sat, 13 Dec 2008 10:20:52 -0800
From: Tim Greer <tim@burlyhost.com>
Subject: Re: WWW::Search('Ebay::ByEndDate')
Message-Id: <82T0l.3641$iB4.2278@newsfe02.iad>

georg.heiss@gmx.de wrote:

> Hi, why doesn't work this search? any ideas?

The code itself doesn't tell people much.  What errors are you getting,
if any?  What is it doing or not doing, and what are you expecting it
to do?  What is the nature of the problem?
-- 
Tim Greer, CEO/Founder/CTO, BurlyHost.com, Inc.
Shared Hosting, Reseller Hosting, Dedicated & Semi-Dedicated servers
and Custom Hosting.  24/7 support, 30 day guarantee, secure servers.
Industry's most experienced staff! -- Web Hosting With Muscle!


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 2054
***************************************

