
Perl-Users Digest, Issue: 6514 Volume: 10

daemon@ATHENA.MIT.EDU (Perl-Users Digest)
Tue May 4 11:05:56 2004

Date: Tue, 4 May 2004 08:05:13 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Tue, 4 May 2004     Volume: 10 Number: 6514

Today's topics:
    Re: Authen::NTLM and MS04-011 (Andrew Speer)
    Re: Books online???? <nobull@mail.com>
    Re: Books online???? <xxala_qumsiehxx@xxyahooxx.com>
        download webpage <anonymous@disneyland.com>
    Re: download webpage <1usa@llenroc.ude>
    Re: download webpage <anonymous@disneyland.com>
    Re: download webpage <spamtrap@dot-app.org>
    Re: download webpage <spamtrap@dot-app.org>
        Dprof in a multiprocess script (flazan)
        Emacs modules for Perl programming (Jari Aalto+mail.perl)
    Re: Finding file size over network (Anno Siegel)
    Re: Finding file size over network <usenet@morrow.me.uk>
    Re: How do I get MIME skeleton? <glex_nospam@qwest.invalid>
    Re: How do I get MIME skeleton? <admin@asarian-host.net>
    Re: is there something more elegant to convert Dos to u (Anno Siegel)
    Re: is there something more elegant to convert Dos to u <noreply@gunnar.cc>
    Re: PDL code (Anno Siegel)
    Re: Q on "use" <socyl@987jk.com>
        Security descriptors (Andy)
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: 4 May 2004 04:55:07 -0700
From: andrew.speer@isolutions.com.au (Andrew Speer)
Subject: Re: Authen::NTLM and MS04-011
Message-Id: <967a9d2.0405040355.a94beb5@posting.google.com>

Kevin,

I recently came across this same problem. The challenge format looks
to have changed, and as a result Authen::NTLM seems to send a
"broken" NT domain string to the server.

The fix (for me) was to alter the code (v1.02 in my case). In the
"ntlm" subroutine change the line:

$domain = substr($c_info->{buffer}, 0, $c_info->{domain}{len}); 

to 

$domain = substr($challenge, $c_info->{domain}{offset},
$c_info->{domain}{len});

which fixed the problem for me. I hope it is also backwards compatible
with servers not yet patched for MS04-011, but I have been unable to test
that.
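The difference between the two calls can be seen with a toy message;
the layout below is invented for illustration only, not the real NTLM
format:

```perl
use strict;
use warnings;

# Toy illustration (invented layout, NOT the real NTLM message format):
# a "security buffer" field carries an offset and a length into the
# whole message, so the slice must start at that offset, not at 0.
my $challenge = "HEADERDOMAINpayload";
my %domain    = ( offset => 6, len => 6 );

# Slicing from position 0 would return "HEADER"; the offset-based
# slice returns the intended field.
print substr($challenge, $domain{offset}, $domain{len}), "\n";   # DOMAIN
```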

I have sent a private email to Mark with similar information, so
hopefully the module will be updated sometime.

Thank <deity> for Ethereal - without it this would have been nigh
impossible to debug.

Andrew


------------------------------

Date: 04 May 2004 12:56:20 +0100
From: Brian McCauley <nobull@mail.com>
Subject: Re: Books online????
Message-Id: <u9wu3ss8wr.fsf@wcl-l.bham.ac.uk>

Henry Williams <***************> writes:

> [...] I have found several Internet sites that have all the O'Reilly
> books on perl, online.

> "This CD-ROM is intended for use by one individual. As such, you may
> make copies for your own personal use. However, you may not provide
> copies to others, or make this reference library available to others
> over a LAN or other network."
> 
> However it is readily available on the Internet everywhere you care to
> hunt. I have resisted mass downloading these sites but do wonder, have
> any of you all noticed the same?

Yes, I've stumbled into a few when Google searching. 

> I buy all my books but wonder what's up with all this?

Some are deliberate Warez and some I suspect are people intending to
put the CD up for use on their intranet and not bothering to block
external access.  Yeah, I know that the license bans intranet
publication but morally doing so is not as bad as putting it up on the
Internet.

> I am not posting URL's but that is a scarce barrier, as they are
> everywhere.

They change often anyhow.

> What's up with all that? Heck you can get the real book from eBay for
> pennies on the dollar. And I'd rather fall asleep with a good book, in
> lieu of more favorable circumstances.

I like to be able to read all the O'Reilly books on line - no chasing
to find out who's borrowed the print/CD version, no lead-time when you
want a new book.  I must admit that I did occasionally take advantage
of the pirate sites - for the convenience, not to cheat the
authors/publishers out of their livelihoods.  Then I discovered
safari.oreilly.com, and that made an honest man of me for
$9.99+Tax[1]/mo.

[1] In the EU we have to pay our local sales tax on this service.

--      
     \\   ( )
  .  _\\__[oo
 .__/  \\ /\@
 .  l___\\
  # ll  l\\
 ###LL  LL\\


------------------------------

Date: Tue, 04 May 2004 14:50:38 GMT
From: Ala Qumsieh <xxala_qumsiehxx@xxyahooxx.com>
Subject: Re: Books online????
Message-Id: <25Olc.60006$MJ6.28887@newssvr25.news.prodigy.com>

bxb7668 wrote:

> In this case I wasn't asking anything. My Tk questions had been
> answered. I was commenting on sarcastic answers.

Would you please point out any sarcastic answers you got on clp.tk? I 
lurk there too, and have personally answered a few of your questions. I 
don't recall seeing any sarcastic answers.

> I agree with you completely. I prefer to get the answer to my question
> and enough information so that I know how to find the answer myself
> the next time.

Good. To that end, RTFM is the correct response to your question *IF* it 
does indeed answer your question. This is true for most questions in any 
newsgroup.

> I'm not sure which I find worse. The "whiners" that want us to do
> their work for them, or the "sarcatics" who rant at posters for
> questions that they consider "stupid" or "whining". There are too many
> OP that don't say anything more than "Why isn't print working?"
> without giving any details. I would rather reply with a polite request
> for those details. I've seen some "experts" reply with statements like
> "That's a stupid question." and nothing else. That is rude, not
> helpful for anyone and a waste of bandwidth.

My turn to agree with you completely now. But, the behavior of "experts" 
is sometimes understandable. Imagine giving up your time for free to 
answer other people's questions, yet you are bombarded by FAQs posted by 
lazy people who don't want to waste their *own* time looking it up. You 
can be nice a few times, but then it gets to you. My reaction to that is 
to help the newbie the first couple of times, but then to ignore his/her 
posts if it keeps recurring.

--Ala



------------------------------

Date: Tue, 04 May 2004 12:51:24 GMT
From: "luc" <anonymous@disneyland.com>
Subject: download webpage
Message-Id: <glMlc.95322$Ql.5965871@phobos.telenet-ops.be>

With the following command u can download a webpage and put it in ur c:

print `get http://charts.iex.nl/charts/D0-I230482-P-S50-Lnl.png >
c:/beurs/test/agfagevaert.png`;
(note: the quotes here are backquotes).
This is a webpage that shows the evolution during the day of a share on the
amsterdam stock market.
At 17.00 this program is manually run.
The problem is that I would like to download this page every day, but if I
use the above code it will overwrite the file. How does this code need to be
changed in order to have 5 files (one for each of the 5 weekdays) at the end
of the week?





------------------------------

Date: 4 May 2004 12:55:09 GMT
From: "A. Sinan Unur" <1usa@llenroc.ude>
Subject: Re: download webpage
Message-Id: <Xns94DF5ABB7E8DAasu1cornelledu@132.236.56.8>

"luc" <anonymous@disneyland.com> wrote in
news:glMlc.95322$Ql.5965871@phobos.telenet-ops.be: 

> With the following command u can download a webpage and put it in ur
> c: 

What do you mean with 'u' and 'ur'?
 
> print `get http://charts.iex.nl/charts/D0-I230482-P-S50-Lnl.png >
> c:/beurs/test/agfagevaert.png`;
> (note: the single quotes here are a single backquotes).
> This is a webpage that shows the evolution during the day of a share
> on the amsterdam stock market.
> At 17.00 this program is manually run.
> The problem is that I would like to download this page everyday. But
> if I use the above code it will overwrite the file. How does this code
> need to be changed in order to have 5 files(of the 5 weekdays) at the
> end of the week. 

Why don't you prefix filenames with the date in YYYYMMDD format?
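A minimal sketch of that suggestion, building the prefix with POSIX
strftime (the directory and base filename are taken from the original
post; the prefixing scheme is only a suggestion):

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# Build a YYYYMMDD prefix from today's date, so each day's download
# gets its own filename instead of overwriting yesterday's.
my $prefix = strftime('%Y%m%d', localtime);
my $file   = "c:/beurs/test/$prefix-agfagevaert.png";
print "$file\n";
```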

Sinan.

-- 
A. Sinan Unur
1usa@llenroc.ude (reverse each component for email address)


------------------------------

Date: Tue, 04 May 2004 13:01:18 GMT
From: "luc" <anonymous@disneyland.com>
Subject: Re: download webpage
Message-Id: <yuMlc.95331$K%2.5677129@phobos.telenet-ops.be>


"A. Sinan Unur" <1usa@llenroc.ude> schreef in bericht
news:Xns94DF5ABB7E8DAasu1cornelledu@132.236.56.8...
> "luc" <anonymous@disneyland.com> wrote in
> news:glMlc.95322$Ql.5965871@phobos.telenet-ops.be:
>
> > With the following command u can download a webpage and put it in ur
> > c:
>
> What do you mean with 'u' and 'ur'?
>
> > print `get http://charts.iex.nl/charts/D0-I230482-P-S50-Lnl.png >
> > c:/beurs/test/agfagevaert.png`;
> > (note: the single quotes here are a single backquotes).
> > This is a webpage that shows the evolution during the day of a share
> > on the amsterdam stock market.
> > At 17.00 this program is manually run.
> > The problem is that I would like to download this page everyday. But
> > if I use the above code it will overwrite the file. How does this code
> > need to be changed in order to have 5 files(of the 5 weekdays) at the
> > end of the week.
>
> Why don't you prefix filenames with the date in YYYYMMDD format?
>
> Sinan.

You can't, because when you use backquotes the command is given to
DOS.


>
> --
> A. Sinan Unur
> 1usa@llenroc.ude (reverse each component for email address)




------------------------------

Date: Tue, 04 May 2004 09:14:51 -0400
From: Sherm Pendley <spamtrap@dot-app.org>
Subject: Re: download webpage
Message-Id: <7vadnVkcnanWCgrdRVn-jg@adelphia.com>

A. Sinan Unur wrote:

> What do you mean with 'u' and 'ur'?

It's shorthand for "I'm too lazy to spell out 'you' and 'your'." ;-)

sherm--

-- 
Cocoa programming in Perl: http://camelbones.sourceforge.net
Hire me! My resume: http://www.dot-app.org


------------------------------

Date: Tue, 04 May 2004 09:20:37 -0400
From: Sherm Pendley <spamtrap@dot-app.org>
Subject: Re: download webpage
Message-Id: <YrWdnVQxCYI6BQrdRVn-gQ@adelphia.com>

luc wrote:

> "A. Sinan Unur" <1usa@llenroc.ude> schreef in bericht
> news:Xns94DF5ABB7E8DAasu1cornelledu@132.236.56.8...
>>
>> Why don't you prefix filenames with the date in YYYYMMDD format?
>>
>> Sinan.
> 
> you can't because when you use the single backquotes command is given to
> dos.

You can - variable interpolation works just fine with backticks.

my $date = 'yyyymmdd';
my $url = 'http://server.com/path.png';
print `get $url > c:/path/to/$date-filename.png`;
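A hedged sketch filling in the date part with POSIX strftime; using the
abbreviated weekday name gives exactly five rotating weekday files, as
the original question asked (the filename pattern is my invention):

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# %a is the abbreviated weekday name (Mon, Tue, ...), so each weekday
# writes its own file, and next Monday's run overwrites this Monday's.
my $day  = strftime('%a', localtime);
my $file = "c:/beurs/test/agfagevaert-$day.png";
print "$file\n";

# The download itself would then interpolate $file into the backticks:
# print `get http://charts.iex.nl/charts/D0-I230482-P-S50-Lnl.png > $file`;
```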

sherm--

-- 
Cocoa programming in Perl: http://camelbones.sourceforge.net
Hire me! My resume: http://www.dot-app.org


------------------------------

Date: 4 May 2004 03:28:09 -0700
From: fanton@ksolutions.it (flazan)
Subject: Dprof in a multiprocess script
Message-Id: <2ccc1a8a.0405040228.2260c232@posting.google.com>

Hi all,

Does anyone know of an easy profiler like Devel::DProf that also works
in a multiprocess script, producing a separate report for each process?


flazan


------------------------------

Date: 04 May 2004 13:01:35 GMT
From: <jari.aalto@poboxes.com> (Jari Aalto+mail.perl)
Subject: Emacs modules for Perl programming
Message-Id: <perl-faq/emacs-lisp-modules_1083675484@rtfm.mit.edu>

Archive-name: perl-faq/emacs-lisp-modules
Posting-Frequency: 2 times a month
URL: http://tiny-tools.sourceforge.net/
Maintainer: Jari Aalto <jari.aalto@poboxes.com>

Announcement: "What Emacs lisp modules can help with programming Perl"

    Preface

        Emacs is your friend if you have to do anything concerning software
        development: it offers plug-in modules, written in the Emacs Lisp
        (elisp) language, that make all your programming wishes come
        true. Introduce yourself to Emacs and your programming will
        appear in a new light.

    Where to find Emacs/XEmacs

        o   Unix:
            http://www.gnu.org/software/emacs/emacs.html
            http://www.xemacs.org/

        o   Unix Windows port (for Unix die-hards):
            install http://www.cygwin.com/  which includes native Emacs 21.x.
            XEmacs port is bundled in XEmacs setup.exe available from
            XEmacs site.

        o   Pure Native Windows port
            http://www.gnu.org/software/emacs/windows/ntemacs.html
            ftp://ftp.xemacs.org/pub/xemacs/windows/setup.exe

        o   More Emacs resources at
            http://tiny-tools.sourceforge.net/  => Emacs resource page

Emacs Perl Modules

    Cperl -- Perl programming mode

        ftp://ftp.math.ohio-state.edu/pub/users/ilya/perl
        http://www.perl.com/CPAN-local/misc/emacs/cperl-mode/
        <ilya@math.ohio-state.edu>    Ilya Zakharevich

        CPerl is a major mode for editing Perl files. Forget the default
        `perl-mode' that comes with Emacs; this one is much better. It
        comes standard in the newest Emacs.

    TinyPerl -- Perl related utilities

        http://tiny-tools.sourceforge.net/

        If you ever wonder how to deal with Perl POD pages or how to find
        documentation in all the perl manpages, this package is for you.
        A couple of keystrokes and all the documentation is in your hands.

        o   Instant function help: See documentation of `shift', `pop'...
        o   Show Perl manual pages in *pod* buffer
        o   Grep through all Perl manpages (.pod)
        o   Follow POD references e.g. [perlre] to next pod with RETURN
        o   Coloured pod pages with `font-lock'
        o   Separate `tinyperl-pod-view-mode' for jumping topics and pages
            forward and backward in *pod* buffer.

        o   Update `$VERSION' variable with YYYY.MMDD on save.
        o   Load source code into Emacs, like Devel::DProf.pm
        o   Prepare script (version numbering) and Upload it to PAUSE
        o   Generate autoload STUBS (Devel::SelfStubber) for your
            Perl module (.pm)

    TinyIgrep -- Perl Code browsing and easy grepping

        [TinyIgrep is included in Tiny Tools Kit]

        To grep through all installed Perl modules, define a database for
        TinyIgrep. The example file emacs-rc-tinyigrep.el shows how to
        set up databases for Perl 5, Perl 4, or whatever you have
        installed.

        TinyIgrep calls igrep.el to do the search. You can adjust
        recursive grep options, set search case sensitivity, add user grep
        options, etc.

        You can find the latest `igrep.el' module at
        <http://groups.google.com/groups?group=gnu.emacs.sources>. The
        maintainer is Kevin Rodgers <kevinr@ihs.com>.

    TinyCompile -- To Browse grep results in Emacs *compile* buffer

        TinyCompile is a minor mode for the *compile* buffer, from which
        you can collapse unwanted lines or shorten file paths:

            /asd/asd/asd/asd/ads/as/da/sd/as/as/asd/file1:NNN: MATCHED TEXT
            /asd/asd/asd/asd/ads/as/da/sd/as/as/asd/file2:NNN: MATCHED TEXT

            -->

            cd /asd/asd/asd/asd/ads/as/da/sd/as/as/asd/
            file1:NNN: MATCHED TEXT
            file2:NNN: MATCHED TEXT

End



------------------------------

Date: 4 May 2004 11:51:13 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: Finding file size over network
Message-Id: <c7803h$fi$1@mamenchi.zrz.TU-Berlin.DE>

Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote in comp.lang.perl.misc:
> XXjbhuntxx@white-star.com (Cosmic Cruizer) wrote in
> <Xns94DEB1B73100Dccruizermydejacom@64.164.98.50>: 
> 
> >I'm having trouble getting file stats from files on remote servers. All
> >the filenames get printed from within the if statement, but I am not
> >getting anything for the file size (or any other stat I try). Doing a
> >print on $target_file returns the full path and filename.
> >
> >Any suggestions?
> >
> >
> >
> >foreach (@server_list) {
> >  print "$_ \n";
> >
> >  system("net use q: $_ /USER:$user $password");
> >
> >  opendir DIR, $file_path or die "Cannot open: $!";

You should also mention the path name in the error message.

> >    my @files = grep { /GLC[0-9a-zA-Z]*\.tmp/ } readdir DIR;
> >
> >    for my $file ( @files ) {
> >      my $target_file = $file_path . $file;

This will only work if $file_path is set up with a trailing slash.
Otherwise you need

    my $target_file = "$file_path/" . $file;

or similar.

> >      if (-e $target_file) {

Why do you check for existence of the file again?  You have just
pulled it out of a directory.  It would be better to add a check
after the stat() call and print the system error if it fails.

> >        $size = (stat($target_file))[7];  # Use 8th element of stat

Why isn't $size a lexical?  Aren't you running under strict?

> >        print "  $file \t $size\n";
> >      }
> >
> >    }
> >
> >  closedir DIR;
> >
> >  system("net use q: /delete");
> >}
> 
> Finally figured it out. I replaced:
>    $size = (stat($target_file))[7];
> with
>    $size = stat($target_file)->size;

You need one of File::Stat or File::stat, and it must be used so that
it overrides the built-in stat() for this to work.  It isn't
in your code, so you should have mentioned that.  It takes people a
lot of time to figure out by themselves what's going on.

Otherwise, that doesn't make sense.  Both statements should set $size
to the same value, and do for me.  ->size is just a wrapper around
the builtin.
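As Anno notes, the by-name access only works once File::stat is loaded,
because it replaces the built-in stat() with one that returns an
object. A self-contained sketch (stat-ing the running script itself,
just to have a file that certainly exists):

```perl
use strict;
use warnings;
use File::stat;    # overrides the built-in stat() with an OO version

# stat() now returns a File::stat object with accessor methods
# instead of the flat 13-element list.
my $st = stat($0) or die "stat $0: $!";
printf "%s is %d bytes\n", $0, $st->size;
```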

Anno


------------------------------

Date: Tue, 4 May 2004 11:58:01 +0000 (UTC)
From: Ben Morrow <usenet@morrow.me.uk>
Subject: Re: Finding file size over network
Message-Id: <c780g9$9ip$1@wisteria.csv.warwick.ac.uk>


Quoth anno4000@lublin.zrz.tu-berlin.de (Anno Siegel):
> Cosmic Cruizer <XXjbhuntxx@white-star.com> wrote in comp.lang.perl.misc:
> > >
> > >      my $target_file = $file_path . $file;
> 
> This will only work if $file_path is set up with a trailing slash.
> Otherwise you need
> 
>     my $target_file = "$file_path/" . $file;
> 
> or similar.

Or, better, use File::Spec.
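For instance (the directory and filename here are placeholders
matching the OP's pattern):

```perl
use strict;
use warnings;
use File::Spec;

my $file_path = '/some/dir';    # trailing slash or not no longer matters
my $file      = 'GLC01.tmp';    # hypothetical name matching the OP's glob

# catfile() joins path components with the right separator for the OS,
# so the trailing-slash question disappears.
my $target_file = File::Spec->catfile($file_path, $file);
print "$target_file\n";         # /some/dir/GLC01.tmp on Unix
```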

Ben

-- 
  Joy and Woe are woven fine,
  A Clothing for the Soul divine       William Blake
  Under every grief and pine          'Auguries of Innocence'
  Runs a joy with silken twine.                                ben@morrow.me.uk


------------------------------

Date: Tue, 04 May 2004 07:26:28 -0500
From: "J. Gleixner" <glex_nospam@qwest.invalid>
Subject: Re: How do I get MIME skeleton?
Message-Id: <UZLlc.3$vW5.7943@news.uswest.net>

Mark wrote:
> Good morning,
> 
> I have been using MIME::Parser (From MIME-tools-6.200_02) to get a list of
> all the "part" names of a MIME encoded message. I used something like:
> 
> $entity -> dump_skeleton ();
> 
> However, that decodes the entire message, and writes its parts to disk. And
> all I want, is a list of names of all the parts (recursed). Like
> dump_skeleton does, but then without the expensive decoding/disk IO.
> 
> Does anyone know whether there is an existing function for this?

Never used the module, but taking a quick read through the documentation..

### Keep parsed message bodies in core (default outputs to disk):
     $parser->output_to_core(1);

"...is true, then all body data goes to in-core data structures. This is 
a little risky (what if someone emails you an MPEG or a tar file, hmmm?) 
but people seem to want this bit of noose-shaped rope, so I'm providing it."

You could use Data::Dumper to look at the structure and parse out
whatever you need.

Does that sound like what you're after?


------------------------------

Date: Tue, 4 May 2004 15:43:17 +0200
From: "Mark" <admin@asarian-host.net>
Subject: Re: How do I get MIME skeleton?
Message-Id: <tYidnXAn-LBjAArd4p2dnA@giganews.com>

J. Gleixner wrote:

>> I have been using MIME::Parser (From MIME-tools-6.200_02) to get a
>> list of all the "part" names of a MIME encoded message. I used
>> something like:
>>
>> $entity -> dump_skeleton ();
>>
>> However, that decodes the entire message, and writes its parts to
>> disk. And all I want, is a list of names of all the parts
>> (recursed). Like dump_skeleton does, but then without the expensive
>> decoding/disk IO.
>>
>> Does anyone know whether there is an existing function for this?
>
> Never used the module, but taking a quick read through the
> documentation..
>
> ### Keep parsed message bodies in core (default outputs to disk):
>      $parser->output_to_core(1);
>
> "...is true, then all body data goes to in-core data structures ..."
>
> Sounds like what you're after?

Not really; output_to_core(1) still *decodes* the various parts, and
writes them to memory. What I am looking for is a function that
simply identifies the different parts and lists them, without the
overhead of actually decoding them.

Thanks,

- Mark




------------------------------

Date: 4 May 2004 10:22:36 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: is there something more elegant to convert Dos to unix in   subroutine?
Message-Id: <c77qtc$o54$3@mamenchi.zrz.TU-Berlin.DE>

Gunnar Hjalmarsson  <noreply@gunnar.cc> wrote in comp.lang.perl.misc:
> Paul Lalli wrote:
> >> Gunnar Hjalmarsson wrote:
> >>> Yes. Subroutines that do what they are supposed to do are
> >>> always more elegant.
> > 
> > I *think* the bit Gunnar was complaining about is the line right
> > below chomp.
> 
> <snip>
> 
> > (Gunnar, feel free to point out if there's another bit that I'm
> > missing)
> 
> My main "complaint" is the prototype that disallows that arguments are
> passed to the sub.

I wonder if that bug was masked by a gratuitous ampersand in the call

    "&toUnixFile( $some_file)"

would work (as far as that goes), even with the empty prototype.
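A quick demonstration of both behaviors, with a hypothetical sub body
standing in for the one under discussion:

```perl
use strict;
use warnings;

# An empty () prototype declares "takes no arguments".
sub toUnixFile () { print "called with ", scalar @_, " argument(s)\n" }

# toUnixFile('some_file');   # compile-time error: "Too many arguments",
#                            # because the () prototype forbids them.
&toUnixFile('some_file');    # the '&' bypasses prototype checking,
                             # so the argument is silently accepted
```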

Anno


------------------------------

Date: Tue, 04 May 2004 13:37:35 +0200
From: Gunnar Hjalmarsson <noreply@gunnar.cc>
Subject: Re: is there something more elegant to convert Dos to unix in   subroutine?
Message-Id: <c77vho$od6p$1@ID-184292.news.uni-berlin.de>

Anno Siegel wrote:
> Gunnar Hjalmarsson wrote:
>> My main "complaint" is the prototype that disallows that
>> arguments are passed to the sub.
> 
> I wonder if that bug was masked by a gratuitous ampersand in the
> call
> 
>     "&toUnixFile( $some_file)"
> 
> would work (as far as that goes), even with the empty prototype.

Hmm.. Didn't think of that possibility.

-- 
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl



------------------------------

Date: 4 May 2004 10:41:53 GMT
From: anno4000@lublin.zrz.tu-berlin.de (Anno Siegel)
Subject: Re: PDL code
Message-Id: <c77s1h$o54$4@mamenchi.zrz.TU-Berlin.DE>

Mark Ohlund  <ohlund@woodwrecker.com> wrote in comp.lang.perl.misc:

[snip]

> When I call the VectorSpace code, I get an error at line 180:
> 
> Can't modify non-lvalue subroutine call in concatenation (.) or string 
> at /usr/local/lib/perl5/5.6.1/Search/VectorSpace.pm line 180, near "$value;"
> 
> Line 180 is:
> 
> index( $vector, $offset ) .= $value;
> 
> I *think* the problem is that rather than using the PDL index function 
> which should set the value of the PDL vector object at $offset to 
> $value, Perl thinks I'm trying to access the intrinsic Perl index 
> function. I've tried prefacing the index call with PDL:: to no avail.

No, the error message would be different ("Can't modify index in
concatenation (.) or string at...").  It's calling a user-defined
sub alright, and the sub *would* have to be an lvalue sub for the
modification to work.  It looks like you have found a bug.
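A minimal illustration of the lvalue requirement, with toy subs rather
than PDL's actual index():

```perl
use strict;
use warnings;

my $x = 'abc';
sub plain         { $x }   # ordinary sub: its return value is read-only
sub slot :lvalue  { $x }   # lvalue sub: its call can be assigned to

# plain() .= 'Z';   # dies: Can't modify non-lvalue subroutine call
slot() .= 'Z';      # fine: appends to $x through the lvalue sub
print "$x\n";       # abcZ
```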

Anno


------------------------------

Date: Tue, 4 May 2004 13:52:24 +0000 (UTC)
From: kj <socyl@987jk.com>
Subject: Re: Q on "use"
Message-Id: <c7876o$j6v$1@reader2.panix.com>

In <409759e2$0$16036$3b214f66@usenet.univie.ac.at> Heinrich.Mislik@univie.ac.at (Heinrich Mislik) writes:

>In article <c76r3k$6p9$1@reader2.panix.com>, socyl@987jk.com says...

>Assuming that your package names have no :: and that your sub
>get_available_services doesn't take any parameters, the following
>should work (untested):

>  use strict;
>  
>  my $services;
>  my @packages = qw(Foo Bar Baz);
>  
>  for my $package (@packages) {
>    my @services;
>    require "$package.pm";
>    @services = $package->get_available_services;

OK, I like that.

But just to be clear, regarding your caveat about '::' in the
package names, is there anything wrong (apart from aesthetics,
maybe) with:

  for my $package (@packages) {
    my @services;
    (my $package_filename = "$package.pm") =~ s,::,/,g;
    require $package_filename;
    @services = $package->get_available_services;

    # etc. ...
?

Thanks!

kj

P.S. If there's a better way than the regexp above to go from a
package name to the corresponding file name (assuming the usual
naming convention), please let me know.
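For what it's worth, that regexp is essentially all there is to it; a
hypothetical helper wrapping it up (the name require_package is my
invention):

```perl
use strict;
use warnings;

# Turn a package name such as Foo::Bar into the relative file name
# Foo/Bar.pm and require it, following the usual naming convention.
sub require_package {
    my ($package) = @_;
    (my $filename = "$package.pm") =~ s{::}{/}g;
    require $filename;
    return $package;
}

require_package('File::Spec');               # loads File/Spec.pm
print File::Spec->catfile('a', 'b'), "\n";   # a/b on Unix
```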
-- 
NOTE: In my address everything before the period is backwards.


------------------------------

Date: 4 May 2004 05:06:46 -0700
From: andreasmaier@nurfuerspam.de (Andy)
Subject: Security descriptors
Message-Id: <ee11c0d9.0405040406.7268f4c0@posting.google.com>

Hello,

Does anybody know if it is possible to change the permissions of a
shared directory in Windows using Perl?

There are some modules like Win32::FileSecurity, but they only change
permissions for files/directories within the local file system; I
found no module to change the permissions of the share itself.

I know the permissions for the share are saved as security
descriptors in REGISTRY:{HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\Shares\security}

But this key is saved as a very long bitmask, and I'm still unable to
convert this bitmask into a readable string, and thus I'm also unable
to change these permissions.

Is there a module that can convert this bitmask or that can convert
any settings into this bitmask?

Some help would be great here, 
thanks, andy


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

#The Perl-Users Digest is a retransmission of the USENET newsgroup
#comp.lang.perl.misc.  For subscription or unsubscription requests, send
#the single line:
#
#	subscribe perl-users
#or:
#	unsubscribe perl-users
#
#to almanac@ruby.oce.orst.edu.  

NOTE: due to the current flood of worm email banging on ruby, the smtp
server on ruby has been shut off until further notice. 

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

#To request back copies (available for a week or so), send your request
#to almanac@ruby.oce.orst.edu with the command "send perl-users x.y",
#where x is the volume number and y is the issue number.

#For other requests pertaining to the digest, send mail to
#perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
#sending perl questions to the -request address, I don't have time to
#answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V10 Issue 6514
***************************************

