
Date: Wed, 16 Apr 2014 21:09:04 -0700 (PDT)
From: Perl-Users Digest <Perl-Users-Request@ruby.OCE.ORST.EDU>
To: Perl-Users@ruby.OCE.ORST.EDU (Perl-Users Digest)

Perl-Users Digest           Wed, 16 Apr 2014     Volume: 11 Number: 4195

Today's topics:
        Best Digital Marketing Services by Professional Web Des aveinfosys@gmail.com
    Re: clickable tree <xhoster@gmail.com>
    Re: Crawl nested data structure, apply code block to ea <Vorzakir@invalid.invalid>
    Re: Crawl nested data structure, apply code block to ea <rweikusat@mobileactivedefense.com>
    Re: Crawl nested data structure, apply code block to ea <Vorzakir@invalid.invalid>
    Re: Crawl nested data structure, apply code block to ea <thepoet_nospam@arcor.de>
    Re: Crawl nested data structure, apply code block to ea <hjp-usenet3@hjp.at>
    Re: Crawl nested data structure, apply code block to ea <rweikusat@mobileactivedefense.com>
    Re: Crawl nested data structure, apply code block to ea <Vorzakir@invalid.invalid>
    Re: Crawl nested data structure, apply code block to ea <rweikusat@mobileactivedefense.com>
        Geo tagging EXIF data? <tuxedo@mailinator.com>
    Re: Geo tagging EXIF data? <john@castleamber.com>
    Re: Geo tagging EXIF data? <jimsgibson@gmail.com>
    Re: Geo tagging EXIF data? <john@castleamber.com>
        Pathname from URL <cheney@halliburton.com>
    Re: Pathname from URL <peter@makholm.net>
        Digest Administrivia (Last modified: 6 Apr 01) (Perl-Users-Digest Admin)

----------------------------------------------------------------------

Date: Wed, 16 Apr 2014 03:05:06 -0700 (PDT)
From: aveinfosys@gmail.com
Subject: Best Digital Marketing Services by Professional Web Designing Company
Message-Id: <4510840e-4739-4dd2-bae5-ee53352da15c@googlegroups.com>

Ave Infosys is a leading professional web designing company in Hyderabad,
India, for the e-business industry. Ave Infosys provides the best website
development and design services in Hyderabad. Our company offers web design
and development services, web hosting services, responsive and mobile
design services, e-commerce website services, and digital marketing
services. We have extensive web design and web skills, merged with the
quality and expertise needed to help you establish your internet presence
or take it to the next level. We are the best web design company in
Hyderabad.

For More Details :

Please contact: (+91) 40 40275321

Email : info@aveinfosys.com

Web : http://aveinfosys.com/digital-marketing


------------------------------

Date: Tue, 15 Apr 2014 19:43:16 -0700
From: Xho Jingleheimerschmidt <xhoster@gmail.com>
Subject: Re: clickable tree
Message-Id: <likqvc$mmd$1@dont-email.me>

On 04/11/14 13:32, Martijn Lievaart wrote:
> On Fri, 11 Apr 2014 14:59:19 +0300, George Mpouras wrote:
> 
>> Any idea of how to create an HTML tree with clickable nodes?
>> There is this excellent library, http://d3js.org, but it is
>> difficult to use it from Perl.
> 
> As this is not really a Perl question I can only say it's off topic and 
> you should google on jquery-ui.
> 

If the data is represented in a Perl data structure, how to get that
into the jquery-ui would seem to be a Perl question.  And how to get the
user's actions out again and back into Perl would be as well.
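
Getting the structure into the browser usually amounts to serializing
it as JSON; a minimal sketch of that handoff (the tree shape here is
made up):

use JSON::PP ();   # in core since Perl 5.14

# a minimal sketch; the tree shape is illustrative
my $tree = {
    name     => 'root',
    children => [ { name => 'leaf one' }, { name => 'leaf two' } ],
};
print JSON::PP->new->canonical->encode($tree), "\n";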

Xho



------------------------------

Date: Mon, 14 Apr 2014 22:04:58 +0000 (UTC)
From: Randy Westlund <Vorzakir@invalid.invalid>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <lihm2a$f6g$1@dont-email.me>

On 2014-04-13, John Bokma wrote:
> Make your own TT filter?
>
> http://template-toolkit.org/docs/modules/Template/Filters.html#section_FILTERS
>
> IIRC you can chain filters, so you can first run your data to your
> custom filter, then through the latex filter.
>

This looks promising.  The remaining obstacle is that when I'm
building the hash to feed Template::Latex, I'm intentionally
inserting some ampersands for formatting.  So I need to escape some
of them, but not others.  Perhaps for the ones I'm intentionally
putting there, I'll write them as '&&' and have the filter transform
it like this:
        '&'  -> '\&'
        '&&' -> '&'

Of course, then any user data containing '&&' will break it :/
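
One way to express that as a custom TT filter, assuming the '&&'
convention above (the filter name 'ampfix' is made up):

use Template;

# a minimal sketch, assuming '&&' marks an intentional separator;
# the filter name 'ampfix' is hypothetical
my $tt = Template->new({
    FILTERS => {
        ampfix => sub {
            my $text = shift;
            $text =~ s/&&/\x{1}/g;   # stash intentional separators
            $text =~ s/&/\\&/g;      # escape everything else for LaTeX
            $text =~ s/\x{1}/&/g;    # restore separators
            return $text;
        },
    },
});

A template would then use it as [% value | ampfix %], with the same
caveat about literal '&&' in user data.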


------------------------------

Date: Mon, 14 Apr 2014 23:13:26 +0100
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <87d2gj4l15.fsf@sable.mobileactivedefense.com>

Randy Westlund <Vorzakir@invalid.invalid> writes:
> On 2014-04-13, John Bokma wrote:
>> Make your own TT filter?
>>
>> http://template-toolkit.org/docs/modules/Template/Filters.html#section_FILTERS
>>
>> IIRC you can chain filters, so you can first run your data to your
>> custom filter, then through the latex filter.
>>
>
> This looks promising.  The remaining obstacle is that when I'm
> building the hash to feed Template::Latex, I'm intentionally
> inserting some ampersands for formatting.

Then why on earth don't you escape ampersands in the input data before
putting it in the hash and insert real 'table format &s' afterwards?
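
A minimal sketch of that ordering ($doc stands in for the decoded BSON
document; the helper and field names are made up):

# escape user data on the way out of the document, then add the
# real table separators afterwards
sub escape_tex { my $s = shift; $s =~ s/&/\\&/g; return $s }

my $name = escape_tex( $doc->{customer}{name} );           # now safe
my $row  = join ' & ', $name, $doc->{qty}, $doc->{total};  # real separators last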


------------------------------

Date: Tue, 15 Apr 2014 04:43:58 +0000 (UTC)
From: Randy Westlund <Vorzakir@invalid.invalid>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <liidee$eni$1@dont-email.me>

On 2014-04-14, Rainer Weikusat wrote:
> Randy Westlund <Vorzakir@invalid.invalid> writes:
>> On 2014-04-13, John Bokma wrote:
>>> Make your own TT filter?
>>>
>>> http://template-toolkit.org/docs/modules/Template/Filters.html#section_FILTERS
>>>
>>> IIRC you can chain filters, so you can first run your data to your
>>> custom filter, then through the latex filter.
>>>
>>
>> This looks promising.  The remaining obstacle is that when I'm
>> building the hash to feed Template::Latex, I'm intentionally
>> inserting some ampersands for formatting.
>
> Then why on earth don't you escape ampersands in the input data before
> putting it in the hash and insert real 'table format &s' afterwards?

That's why I was trying to figure out how I could crawl the data
structure, to do it before I inserted stuff.  My code is laid out
like this:

- get complicated document from MongoDB
- spend two pages of Perl pulling things out of the nested data
  structure, transforming the complicated data structure into a
  complicated mess of LaTeX formatting mixed with variables in a
  hash
- feed to template

The whole thing generates something like an invoice, but with a lot
of conditional formatting depending on what things are in the DB
record.


------------------------------

Date: Tue, 15 Apr 2014 10:18:00 +0200
From: Christian Winter <thepoet_nospam@arcor.de>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <534ceb38$0$6701$9b4e6d93@newsspool2.arcor-online.net>

Am 13.04.2014 19:39, schrieb Randy Westlund:
> I have a simple problem, but am not sure how to solve it.  I'm
> getting data from a MongoDB database with the MongoDB module.  This
> returns a BSON (JSON-like) document as a nested data structure with
> arbitrary element types.  I'm taking that and building a LaTeX
> document with the Template::Latex module.  This is currently
> working, most of the time.
>
> My problem is that the strings I'm pulling from the database
> sometimes have an '&' in them, which screws up my tabularx sections
> in LaTeX.  So I need some way to crawl this data structure and
> escape them.  I want to do something like call map on all the scalar
> values found.
>
> I looked at Data::Nested, but didn't see anything useful for me.  Is
> there a module that has a function like this, or a concise way to
> write this myself?

I'm not familiar with MongoDB and its BSON structures, but from
a quick glance it looks like a nested structure of hashes and
lists whose leaves are plain scalars.

So my first bet would be to traverse the structure recursively:
check what type of reference has been passed, call the sub again
for all list and hash values found, or do a replacement if it's
a scalar ref.

sub doescape
{
   my $inp = shift;
   if( ref $inp eq "ARRAY" )
   {
     # recurse into each element, wrapping plain scalars in a ref
     # so the leaf case below can modify them in place
     foreach( @$inp )
     {
       doescape(ref $_ ? $_ : \$_);
     }
   } elsif( ref $inp eq "HASH" )
   {
     # likewise for each hash value
     foreach( keys %$inp )
     {
       doescape(ref $inp->{$_} ? $inp->{$_} : \$inp->{$_});
     }
   } else {
     # leaf: a scalar ref -- escape every ampersand, not just the first
     $$inp =~ s/&/\\&/g if( defined $$inp );
   }
}
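
Called once with the top-level reference, it rewrites every scalar
leaf in place, e.g.

doescape($doc);   # $doc being the decoded BSON document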

-Chris


------------------------------

Date: Tue, 15 Apr 2014 17:05:19 +0200
From: "Peter J. Holzer" <hjp-usenet3@hjp.at>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <slrnlkqilf.ebo.hjp-usenet3@hrunkner.hjp.at>

On 2014-04-15 04:43, Randy Westlund <Vorzakir@invalid.invalid> wrote:
> My code is laid out like this:
>
> - get complicated document from MongoDB
> - spend two pages of perl pulling things out of the nested data
>   structure, transforming the complicated data structure into a
>   complicated mess of LaTex formatting mixed with variables in a
>   hash
> - feed to template

This may be part of the problem. I find that it is generally a good
idea to delay output conversion (in this case applying LaTeX formatting,
but the same applies to HTML or plain character encoding) as long as
possible, and ideally to leave it to your templating engine, output
filter, or whatever. Otherwise it is too easy to lose track of what
still needs to be converted and what doesn't (leading to either
double-converted strings or unconverted input in the output).
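
In this thread's terms, that would mean stashing raw DB values and
letting a template-side filter do the escaping; a minimal sketch (the
'ampfix' filter from earlier in the thread, 'invoice.tt', and the field
names are all illustrative, and $doc is the decoded BSON document):

use Template;

# stash raw values, convert only at output time
my $tt = Template->new();
my %stash = (
    customer => $doc->{customer},   # raw, unescaped strings
    items    => $doc->{items},
);
my $output;
$tt->process('invoice.tt', \%stash, \$output)
    or die $tt->error;
# invoice.tt escapes at the last moment, e.g. [% customer.name | ampfix %]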

        hp

-- 
   _  | Peter J. Holzer    | The curse of electronic word processing:
|_|_) |                    | you keep filing away at your text until
| |   | hjp@hjp.at         | the parts of the sentence no longer
__/   | http://www.hjp.at/ | fits together. -- Ralph Babel


------------------------------

Date: Tue, 15 Apr 2014 20:40:12 +0100
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <87tx9ue603.fsf@sable.mobileactivedefense.com>

Randy Westlund <Vorzakir@invalid.invalid> writes:
> On 2014-04-14, Rainer Weikusat wrote:
>> Randy Westlund <Vorzakir@invalid.invalid> writes:
>>> On 2014-04-13, John Bokma wrote:
>>>> Make your own TT filter?
>>>>
>>>> http://template-toolkit.org/docs/modules/Template/Filters.html#section_FILTERS
>>>>
>>>> IIRC you can chain filters, so you can first run your data to your
>>>> custom filter, then through the latex filter.
>>>>
>>>
>>> This looks promising.  The remaining obstacle is that when I'm
>>> building the hash to feed Template::Latex, I'm intentionally
>>> inserting some ampersands for formatting.
>>
>> Then why on earth don't you escape ampersands in the input data before
>> putting it in the hash and insert real 'table format &s' afterwards?
>
> That's why I was trying to figure out how I could crawl the data
> structure, to do it before I inserted stuff. My code is laid out
> like this:
>
> - get complicated document from MongoDB
> - spend two pages of Perl pulling things out of the nested data
>   structure, transforming the complicated data structure into a
>   complicated mess of LaTeX formatting mixed with variables in a
>   hash

Has it occurred to you that this is already "code crawling the data
structure", albeit specialized to your problem? All you need to add is
an intermediate processing step between

'get data out of the BSON document'

and

'transform data before putting it into the hash'

How are you accessing the serialized data?


------------------------------

Date: Wed, 16 Apr 2014 02:28:21 +0000 (UTC)
From: Randy Westlund <Vorzakir@invalid.invalid>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <likps5$h0g$1@dont-email.me>

On 2014-04-15, Rainer Weikusat wrote:
> Randy Westlund <Vorzakir@invalid.invalid> writes:
>> On 2014-04-14, Rainer Weikusat wrote:
>>> Then why on earth don't you escape ampersands in the input data before
>>> putting it in the hash and insert real 'table format &s' afterwards?
>>
>> That's why I was trying to figure out how I could crawl the data
>> structure, to do it before I inserted stuff. My code is laid out
>> like this:
>>
>> - get complicated document from MongoDB
>> - spend two pages of Perl pulling things out of the nested data
>>   structure, transforming the complicated data structure into a
>>   complicated mess of LaTeX formatting mixed with variables in a
>>   hash
>
> Has it occurred to you that this is already "code crawling the data
> structure", albeit specialized to your problem? All you need to add is
> an intermediate processing step between
>
> 'get data out of the BSON document'
>
> and
>
> 'transform data before putting it into the hash'
>
> How are you accessing the serialized data?

I'm using Data::Diver to pull fields out one at a time.  I solved
the problem by wrapping those calls with my own sub that does some
simple substitution.  It's the obvious solution, but it isn't very
pretty.  This being Perl, I was hoping I could find some nice
declarative way to do it, like how map works.  I guess in this case
there isn't one.
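
A minimal sketch of the wrapper described above, assuming Data::Diver's
exported Dive; $doc and the field names are made up:

use Data::Diver qw(Dive);

# dive to the field, then escape it on the way out
sub dive_tex {
    my ($root, @keys) = @_;
    my $val = Dive($root, @keys);
    $val =~ s/&/\\&/g if defined $val && !ref $val;
    return $val;
}

my $name = dive_tex($doc, 'customer', 'name');   # escaped on arrival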



------------------------------

Date: Wed, 16 Apr 2014 13:58:16 +0100
From: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Subject: Re: Crawl nested data structure, apply code block to each
Message-Id: <87tx9ttorb.fsf@sable.mobileactivedefense.com>

Randy Westlund <Vorzakir@invalid.invalid> writes:
> On 2014-04-15, Rainer Weikusat wrote:
>> Randy Westlund <Vorzakir@invalid.invalid> writes:

[...]

> I'm using Data::Diver to pull fields out one at a time.  I solved
> the problem by wrapping those calls with my own sub that does some
> simple substitution.  It's the obvious solution, but it isn't very
> pretty.  This being perl, I was hoping I could find some nice
> declarative way to it, like how map works.

'map' works by looping over the input list and collecting the results of
evaluating the 'map expression' onto an output list. You could use that,
too, by turning this into a multi-pass algorithm: first build a list of
keys and values, then use map to transform that into a list of keys and
escaped values, then run whatever your other formatting code happens to
do on this list and, finally, put the results into a hash. I don't quite
get why someone would consider that 'a pretty solution', especially
compared with a single-pass algorithm which performs the escaping step
before the other processing ever sees the data, so that it doesn't
escape the wrong ampersands.
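
The first two passes of that shape might look like this, where %raw is
a made-up stand-in for the flattened BSON data (List::Util's pairmap
needs a reasonably recent List::Util):

use List::Util qw(pairmap);

# pass 1 and 2: flatten to key/value pairs, escape the values
my %raw = ( name => 'Fish & Chips Ltd', qty => 3 );
my %escaped = pairmap {
    my $v = $b;
    $v =~ s/&/\\&/g unless ref $v;
    ($a, $v);                # key unchanged, value escaped
} %raw;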

If Data::Diver were an OO module, you could subclass it, override Dive,
and then your main 'processing logic' would be independent of the 'data
extraction logic', in the sense that escaping might or might not be
performed depending on which kind of 'diver object' is used to extract
the values. But since it isn't, that's not an option.


------------------------------

Date: Mon, 14 Apr 2014 15:41:04 +0200
From: Tuxedo <tuxedo@mailinator.com>
Subject: Geo tagging EXIF data?
Message-Id: <ligohh$c5k$1@news.albasani.net>

Hello,

I'm in search of a fairly lightweight GPS device to carry around and use in 
combination with a Sony NEX camera.

Are there some command-line Perl utilities that append geo-coordinates 
to the Exif data of a set of images?

I've not used any GPS device before, but I imagine the device would 
collect GPS data once a minute or every few seconds. The data should 
thereafter easily be exported into some standard format and merged with 
a collection of Jpegs in a directory where a 'merge' command can run.

I imagine it could be done by synchronizing the time settings of the 
camera and the GPS device; after saving a collection of Jpegs and GPS 
data in a directory, running the procedure against the two data sets 
would write the closest-by-time geo-coordinate into an Exif field of 
each Jpeg.

Can anyone recommend some perl applications or modules for this?

Many thanks,
Tuxedo

PS: I would like to avoid any Windows-only applications or other 
proprietary GUI solutions etc. Also, any recommendations on actual GPS 
hardware would be much appreciated.



------------------------------

Date: Mon, 14 Apr 2014 10:36:28 -0500
From: John Bokma <john@castleamber.com>
Subject: Re: Geo tagging EXIF data?
Message-Id: <874n1vudmr.fsf@castleamber.com>

Tuxedo <tuxedo@mailinator.com> writes:

> I've not used any GPS device before, but I imagine the device would 
> collect GPS data once a minute or every few seconds. The data should 
> thereafter easily be exported into some standard format and merged with 
> a collection of Jpegs in a directory where a 'merge' command can run.

Exporting it to a standard format can be done with GPSBabel:
http://www.gpsbabel.org/

> I imagine it could be done by synchronizing the time settings of the 
> camera and the GPS device; after saving a collection of Jpegs and GPS 
> data in a directory, running the procedure against the two data sets 
> would write the closest-by-time geo-coordinate into an Exif field of 
> each Jpeg.

It seems that GPSBabel can do that as well:
http://www.gpsbabel.org/htmldoc-1.5.0/fmt_exif.html

[..]
> PS: I would like to avoid any Windows-only applications or other 
> proprietary GUI solutions etc. Also, any recommendations on actual GPS 
> hardware would be much appreciated.

GPSBabel is open source and runs from the command line. No idea if there
is a Perl solution.

-- 
John Bokma                                                               j3b

Blog: http://johnbokma.com/        Perl Consultancy: http://castleamber.com/
Perl for books:    http://johnbokma.com/perl/help-in-exchange-for-books.html


------------------------------

Date: Mon, 14 Apr 2014 11:05:24 -0700
From: Jim Gibson <jimsgibson@gmail.com>
Subject: Re: Geo tagging EXIF data?
Message-Id: <140420141105240582%jimsgibson@gmail.com>

In article <ligohh$c5k$1@news.albasani.net>, Tuxedo
<tuxedo@mailinator.com> wrote:

> Hello,
> 
> I'm in search of a fairly lightweight GPS device to carry around and use in 
> combination with a Sony NEX camera.
> 
> Are there some command-line Perl utilities that append geo-coordinates 
> to the Exif data of a set of images?

You can check out this Perl-based one:

<http://www.carto.net/projects/photoTools/gpsPhoto/>

I have not tried it.
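
Another Perl-based route: exiftool itself is written in Perl
(Image::ExifTool on CPAN), and its -geotag option does the
track-to-photo time matching in one step. A minimal sketch driving it
from Perl, where the track file and photo directory are assumptions:

# 'track.gpx' and 'photos/' are assumptions
system('exiftool', '-geotag', 'track.gpx', 'photos/') == 0
    or die "exiftool failed: $?";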

> I've not used any GPS device before, but I imagine the device would 
> collect GPS data once a minute or every few seconds. The data should 
> thereafter easily be exported into some standard format and merged with 
> a collection of Jpegs in a directory where a 'merge' command can run.

Devices differ on when and where they save "track" points (as opposed
to "waypoints" and "routes"). Some can be configured to record points
at time or distance intervals. Formats also vary widely, but most
devices support the standard GPX format. As John mentioned, GPSBabel
can translate many different formats into GPX, and also extract the
track logs from the device.
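
For example, pulling the track log off a device and converting it to
GPX might look like this from Perl (the input format and serial port
are assumptions that vary per device):

# a sketch; 'm241' and the port are device-specific assumptions
system('gpsbabel', '-t',
       '-i', 'm241', '-f', '/dev/ttyUSB0',
       '-o', 'gpx',  '-F', 'track.gpx') == 0
    or die "gpsbabel failed: $?";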

> I imagine it could be done by synchronizing the time settings of the 
> camera and the GPS device; after saving a collection of Jpegs and GPS 
> data in a directory, running the procedure against the two data sets 
> would write the closest-by-time geo-coordinate into an Exif field of 
> each Jpeg.
> 
> Can anyone recommend some perl applications or modules for this?
> 
> Many thanks,
> Tuxedo
> 
> PS: I would like to avoid any Windows-only applications or other 
> proprietary GUI solutions etc. Also, any recommendations on actual GPS 
> hardware would be much appreciated.

I just bought the Holux M-241 device and tried it out over the weekend:

<http://wiki.openstreetmap.org/wiki/Holux_M-241>

Finding an application to extract the track logs proved to be
difficult. The only thing I have gotten to work so far is BT747, a Java
program: <http://www.bt747.com>

mtkbabel is a Perl program that is supposed to be able to extract track
logs from some GPS devices, including the Holux M-241, but it did not
work on the first try, I think because the device name was incorrect. I
haven't had a chance to try it again:

<http://sourceforge.net/projects/mtkbabel/>

-- 
Jim Gibson


------------------------------

Date: Mon, 14 Apr 2014 13:48:42 -0500
From: John Bokma <john@castleamber.com>
Subject: Re: Geo tagging EXIF data?
Message-Id: <877g6r4uid.fsf@castleamber.com>

Jim Gibson <jimsgibson@gmail.com> writes:

> I just bought the Holux M-241 device and tried it out over the weekend:
>
> <http://wiki.openstreetmap.org/wiki/Holux_M-241>

I have an Amod AGL3080. Has been on many hikes for years and still
works. No display, though.

The Holux looks nice, I like the display, thanks for mentioning it.

-- 
John Bokma                                                               j3b

Blog: http://johnbokma.com/        Perl Consultancy: http://castleamber.com/
Perl for books:    http://johnbokma.com/perl/help-in-exchange-for-books.html


------------------------------

Date: Wed, 16 Apr 2014 09:28:48 +0000 (UTC)
From: Andre Majorel <cheney@halliburton.com>
Subject: Pathname from URL
Message-Id: <slrnlksjag.33t.cheney@atc5.vermine.org>

Looking for Perl code to convert a URL like

  http://foobar.org/%7Ea/b/c?d=e/f#g

to a Unix pathname like

  foobar.org/~a/b/c?d=e%2Ff

The same sort of thing that wget -x does.

Even better if it offers the option of honouring the
Content-disposition HTTP header.

Thanks in advance !

-- 
André Majorel http://www.teaser.fr/~amajorel/
Endless variations make it all seem new
Can you recognise the patterns that you find ?


------------------------------

Date: Wed, 16 Apr 2014 12:03:45 +0200
From: Peter Makholm <peter@makholm.net>
Subject: Re: Pathname from URL
Message-Id: <87bnw1h9q6.fsf@vps1.hacking.dk>

Andre Majorel <cheney@halliburton.com> writes:

> Looking for Perl code to convert a URL like
>
>   http://foobar.org/%7Ea/b/c?d=e/f#g
>
> to a Unix pathname like
>
>   foobar.org/~a/b/c?d=e%2Ff

The general module for parsing URLs would be URI.pm. This will help
you with parsing the URL into its individual parts. Putting them back
together into the actual filename is probably something you'll need to
do yourself.
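
A rough sketch of the wget -x style mapping from the original post,
ignoring corner cases such as %2F inside the path component:

use URI;
use URI::Escape qw(uri_unescape uri_escape);

sub url_to_path {
    my $u     = URI->new(shift);
    my $path  = uri_unescape($u->path);   # %7E -> ~
    my $query = $u->query;                # the #fragment is dropped
    $path .= '?' . uri_escape($query, '/') if defined $query;  # / -> %2F
    return $u->host . $path;
}

print url_to_path('http://foobar.org/%7Ea/b/c?d=e/f#g'), "\n";
# prints: foobar.org/~a/b/c?d=e%2Ff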

//Makholm


------------------------------

Date: 6 Apr 2001 21:33:47 GMT (Last modified)
From: Perl-Users-Request@ruby.oce.orst.edu (Perl-Users-Digest Admin) 
Subject: Digest Administrivia (Last modified: 6 Apr 01)
Message-Id: <null>


Administrivia:

To submit articles to comp.lang.perl.announce, send your article to
clpa@perl.com.

Back issues are available via anonymous ftp from
ftp://cil-www.oce.orst.edu/pub/perl/old-digests. 

For other requests pertaining to the digest, send mail to
perl-users-request@ruby.oce.orst.edu. Do not waste your time or mine
sending perl questions to the -request address, I don't have time to
answer them even if I did know the answer.


------------------------------
End of Perl-Users Digest V11 Issue 4195
***************************************

