I want to use the following script:
use FileHandle;
use WWW::Curl::Easy;
use WWW::Curl::Form;
my ($file, $curl, $curlf, $return, $minified);
$file = new FileHandle();
$curl = new WWW::Curl::Easy();
$curl->setopt(CURLOPT_URL, "http://closure-compiler.appspot.com/compile");
$curlf = new WWW::Curl::Form();
$curlf->formadd('output_format', 'text');
$curlf->formadd('output_info', 'compiled_code');
$curlf->formadd('compilation_level', 'ADVANCED_OPTIMIZATIONS');
$curlf->formaddfile($name, 'js_code', 'multipart/form-data');
$curl->setopt(CURLOPT_HTTPPOST, $curlf);
$file->open(\$minified, ">");
$curl->setopt(CURLOPT_WRITEDATA, $file);
$return = $curl->perform();
The following error is thrown:
Can't locate object method "formadd" via package "WWW::Curl::Form" at ./minifyjs.pl ....
Why? The WWW::Curl module is installed properly; I used the libwww-curl-perl package under Debian/Ubuntu.
Can anyone help me please?
Whoops.
Looks like this commit broke formadd. The XS sub doesn't match the PREFIX = curl_form_ declaration (as it's named curl_formadd), so perl doesn't know how to map the Perl version of the method back to XS.
4.12 was the first release that tried to support WWW::Curl::Form; it looks like it didn't work after all. Not sure how I missed this one. I should probably note here that WWW::Curl::Form support wasn't exactly a high-priority TODO item on my list, due to the existence of various high-quality form handling modules on CPAN. I've only accepted the patch for the sake of feature completeness. You're encouraged to use those modules for managing form content. The standard WWW::Curl use case statement applies.
I released 4.13 to fix this issue. Good catch!
Check out WWW::Mechanize. It has a lot of nice form methods.
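If you go the form-handling-module route, a rough sketch with plain LWP::UserAgent (which WWW::Mechanize builds on) could look like the following; the field names are taken from the script above, and $name is assumed to hold the path to the JavaScript file:
use LWP::UserAgent;
# Read the JavaScript file named in $name (as in the script above).
open my $js_fh, '<', $name or die "Cannot open $name: $!";
my $js_code = do { local $/; <$js_fh> };
close $js_fh;
my $ua = LWP::UserAgent->new;
# POST the fields the Closure Compiler service expects.
my $response = $ua->post(
    'http://closure-compiler.appspot.com/compile',
    Content => [
        output_format     => 'text',
        output_info       => 'compiled_code',
        compilation_level => 'ADVANCED_OPTIMIZATIONS',
        js_code           => $js_code,
    ],
);
die $response->status_line unless $response->is_success;
my $minified = $response->decoded_content;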
With all the hating on Storable, I decided to check out Sereal for my serialization needs. Plus, I was having some 32-bit/64-bit cross-platform issues with Storable, so I figured this would be a good time.
After having some issues, I boiled the problem down to the following code. (I'm persisting an HTTP::Request object, hence the example code.)
This is my encode test; I'm storing to a file:
use Sereal::Encoder;
use HTTP::Request;
use HTTP::Headers;
use URI;
my $encoder = Sereal::Encoder->new();
open(my $fh, ">", 'myfile.data') or die $!;
binmode($fh);
my $uri = URI->new('http://www.example.com');
my $headers = HTTP::Headers->new(
Content_Type => 'application/json',
);
my $http_request = HTTP::Request->new(POST => $uri, $headers, 'bleh');
print $fh $encoder->encode( $http_request );
close($fh);
And on the same machine (same perl etc., on 5.18), I run the following:
use Sereal::Decoder;
use File::Slurp qw(read_file);
use URI;
my $data = read_file('myfile.data') or die $!;
my $dec = Sereal::Decoder->new();
my $decoded = $dec->decode($data);
print $decoded->{_uri}->scheme,"\n";
And the output of running the encoding program, and then the decoding program is:
Can't locate object method "scheme" via package "URI::http" at testd.pl line 8.
Anyhow, it was really nagging me what the problem was. I ended up reverting to Storable and using nfreeze to solve my arch issues, but I was wondering why my attempt to transition to Sereal crashed and burned.
Thanks!
Sereal, unlike Storable, won't automatically load a module when it encounters a serialized object. That automatic loading is a security issue with Storable, so Sereal is working as intended.[1]
At the point where scheme is called in the second test program, URI::http hasn't been loaded yet, so the method call results in an error. It seems that URI will load its subclasses when its constructor is used on a string that "looks like" one of them, e.g.
URI->new('http://www.stackoverflow.com');
loads the URI::http module. So one solution would be to add a dummy invocation of that constructor to ensure URI::http is loaded, or manually use URI::http instead. Either option causes the print $decoded->{_uri}->scheme line of the second script to work as expected, but I think the second is the lesser of two evils (importing an undocumented submodule from URI versus an arbitrary method call done specifically for its not-immediately-obvious side effect).
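For illustration, here is the decoding script again with the explicit load added; it is the second script from above plus one line:
use Sereal::Decoder;
use File::Slurp qw(read_file);
use URI;
use URI::http;   # explicitly load the subclass the stored object was blessed into
my $data    = read_file('myfile.data') or die $!;
my $decoded = Sereal::Decoder->new->decode($data);
print $decoded->{_uri}->scheme, "\n";   # works now that URI::http is loaded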
What I would like to do:
I am using Rex to remotely call tests at servers. I execute the tests remotely with a call to the local prove. I want to gather all the information about the test runs at the different servers in one place. To achieve this I run the tests with prove -a (and maybe also with --merge for capturing STDERR) to create an archive (.tgz). I then download this archive with Rex back to the controlling server. I think this is quite a good plan so far...
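For reference, the call on the remote side boils down to something like this (the archive name and test directory are made up; it is just prove -a --merge driven from Perl):
use App::Prove;
# Run the suite, merge STDERR into the TAP stream, and write a TAP archive
# that Rex can download to the controlling server afterwards.
my $prove = App::Prove->new;
$prove->process_args('--archive', 'testrun.tgz', '--merge', 't/');
$prove->run or die "Test run failed\n";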
My problem now is that I find a lot of hints on creating such a TAP archive, but none on how I can actually read this archive. Sure, I could open and process it somehow with Archive::Tar or parse it manually with TAP::Parser as suggested by Schwern. But knowing that there are formatters like TAP::Formatter::HTML or TAP::Formatter::JUnit (e.g. for Jenkins), I think there must be a way to use those tools directly on a TAP archive? When I look up the docs I only find hints on how to use this stuff with prove to format tests while running them. But I need to use these formatters on the archive, since I have already run prove remotely...
So much for the context. My question in short is: How can I use the Perl TAP tools to format TAP coming from a TAP archive produced by prove?
I am thankful for any hints, and also for pointers to problems with my approach in general.
Renée provided a working solution here: http://www.perl-community.de/bat/poard/thread/18420 (German)
use strict;
use warnings;
use TAP::Harness::Archive;
use TAP::Harness;
use TAP::Formatter::HTML;

my $formatter = TAP::Formatter::HTML->new;
my $harness   = TAP::Harness->new({ formatter => $formatter });

$formatter->really_quiet(1);
$formatter->prepare;

my $session;
my $aggregator = TAP::Harness::Archive->aggregator_from_archive({
    archive          => '/must/be/the/complete/path/to/test.tar.gz',
    parser_callbacks => {
        ALL => sub {
            $session->result( $_[0] );
        },
    },
    made_parser_callback => sub {
        $session = $formatter->open_test( $_[1], $_[0] );
    },
});

$aggregator->start;
$aggregator->stop;
$formatter->summary($aggregator);
Thanks a lot! I hope this will help some others too. It seems like this knowledge is not very widespread yet.
I have made a module that wraps this solution in a nice interface: https://metacpan.org/module/Convert::TAP::Archive
So from now on you can just type this:
use Convert::TAP::Archive qw(convert_from_taparchive);
my $html = convert_from_taparchive(
    '/must/be/the/complete/path/to/test.tar.gz',
    'TAP::Formatter::HTML',
);
The problem with the output is mentioned in the docs. Please provide patches or comments if you know how to fix this (minor) issue. E.g. here: https://github.com/borisdaeppen/Convert-TAP-Archive
Renée pointed me to how Tapper does it: https://metacpan.org/source/TAPPER/Tapper-TAP-Harness-4.1.1/lib/Tapper/TAP/Harness.pm#L273
Quite some effort to read an archive though...
I'm currently working on internationalizing a large Perl/Mason web application (Perl 5.8.0, Mason 1.48, mod_perl & Apache). In choosing a localization module, I decided to go with Locale::TextDomain over Locale::Maketext, mostly because the latter's plural form support isn't as nice as I'd like.
The hang-up I'm having with Locale::TextDomain is that it resolves which catalog to use for translations based on the process' locale. When I realized this, I got worried about how this would affect my application if I wanted users to be able to use different locales -- would it be possible that a change in locale to suit one user's settings would affect another user's session? For example, could there be a situation in which an English user received a page in German because a German user's session changed the process' locale? I'm not very knowledgeable about how Apache's thread/process model works, though it seems that if multiple users can be served by the same thread, this could happen.
This email thread would indicate that this is possible; here the OP describes the situation I'm thinking about.
If this is true, is there a way I can prevent this scenario while still using Locale::TextDomain? I suppose I could always hack at the module to load the catalogs in a locale-independent way (probably using DBD::PO), but hopefully I'm just missing something that will solve my problem...
You entirely avoid the setlocale problems by using web_set_locale instead.
(That message on the mailing list predates the addition of that function by about 4 years.)
Edit: You are correct that global behaviour persists in Apache children, leading to buggy behaviour.
I wrote up a test case:
app.psgi
use 5.010;
use strictures;
use Foo::Bar qw(run);
my $app = sub {
my ($env) = @_;
run($env);
};
Foo/Bar.pm
package Foo::Bar;
use 5.010;
use strictures;
use Encode qw(encode);
use File::Basename qw(basename);
use Locale::TextDomain __PACKAGE__, '/tmp/Foo-Bar/share/locale';
use Locale::Util qw(web_set_locale);
use Plack::Request qw();
use Sub::Exporter -setup => { exports => [ 'run' ] };
our $DEFAULT_LANGUAGE = 'en'; # untranslated source strings
sub run {
my ($env) = @_;
my $req = Plack::Request->new($env);
web_set_locale($env->{HTTP_ACCEPT_LANGUAGE}, undef, undef, [
map { basename $_ } grep { -d } glob '/tmp/Foo-Bar/share/locale/*'
]); # XXX here
return $req
->new_response(
200,
['Content-Type' => 'text/plain; charset=UTF-8'],
[encode('UTF-8', __ 'Hello, world!')],
)->finalize;
}
The app runs as a PerlResponseHandler. When the user requests a language that cannot be set, the call fails silently, and the language that was last set successfully remains in effect.
The trick to fix this is to always set a language that exists, with a fallback mechanism. At the spot marked XXX, append or web_set_locale($DEFAULT_LANGUAGE) to the call, so that even though it is a global setting, the behaviour cannot persist, because we guarantee that the locale is set/changed once per request.
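So the call at the XXX marker ends up looking roughly like this:
web_set_locale($env->{HTTP_ACCEPT_LANGUAGE}, undef, undef, [
    map { basename $_ } grep { -d } glob '/tmp/Foo-Bar/share/locale/*'
]) or web_set_locale($DEFAULT_LANGUAGE);   # always reset, so the previous request's locale cannot leak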
Edit 2: Further testing reveals that it's not thread-safe, sorry. Use only the prefork MPM, which isolates requests as processes; the worker and event MPMs are affected because they are thread-based.
When you run perl -e "Bla->new", you get this well-known error:
Can't locate object method "new" via package "Bla"
(perhaps you forgot to load "Bla"?)
This happened in a Perl server process the other day due to an oversight of mine. There are multiple scripts, and most of them have the proper use statements in place. But there was one script doing Bla->new in sub blub at line 123 while missing a use Bla at the top, and when it was hit by a click before any of the other scripts using Bla had been loaded by the server process, bang!
Testing the script in isolation would be the obvious way to safeguard against this particular mistake, but alas the code is dependent upon a humungous environment. Do you know of another way to safeguard against this oversight?
Here's one example of how PPI (despite its merits) is limited in its view of Perl:
use strict;
use HTTP::Request::Common;
my $req = GET 'http://www.example.com';
$req->headers->push_header( Bla => time );
my $au=Auweia->new;
__END__
PPI::Token::Symbol '$req'
PPI::Token::Operator '->'
PPI::Token::Word 'headers'
PPI::Token::Operator '->'
PPI::Token::Word 'push_header'
PPI::Token::Symbol '$au'
PPI::Token::Operator '='
PPI::Token::Word 'Auweia'
PPI::Token::Operator '->'
PPI::Token::Word 'new'
Setting the header and assigning Auweia->new parse the same. So I'm not sure how you can build upon such a shaky foundation. I think the problem is that Auweia could also be a subroutine; perl.exe cannot tell until runtime.
Further Update
Okay, from @Schwern's instructive comments below I learnt that PPI is just a tokenizer, and you can build upon it if you accept its limitations.
Testing is the only answer worth the effort. If the code contains mistakes like forgetting to load a class, it probably contains other mistakes. Whatever the obstacles, make it testable. Otherwise you're patching a sieve.
That said, you have two options. You can use Class::Autouse which will try to load a module if it isn't already loaded. It's handy, but because it affects the entire process it can have unintended effects.
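A minimal sketch of that first approach, using the Bla class from the question:
use Class::Autouse;
# Defer loading: Bla.pm is pulled in automatically on the first method call.
Class::Autouse->autouse('Bla');
# Or, more aggressively, let any unknown class be loaded on demand:
# use Class::Autouse ':superloader';
my $obj = Bla->new;   # Bla is loaded here instead of blowing up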
Or you can use PPI to scan your code and find all the class method calls. PPI::Dumper is very handy to understand how PPI sees Perl.
use strict;
use warnings;
use PPI;
use PPI::Dumper;

my $file = shift;
my $doc  = PPI::Document->new($file);

# How PPI sees a class method call.
#   PPI::Token::Word       'Class'
#   PPI::Token::Operator   '->'
#   PPI::Token::Word       'method'
$doc->find( sub {
    my ($node, $class) = @_;

    # First we want a word.
    return 0 unless $class->isa("PPI::Token::Word");

    # If the word is itself a method call (preceded by ->), it's not a class name.
    return 0 if $class->method_call;

    my $class_name = $class->literal;

    # Next to it is a -> operator.
    my $op = $class->snext_sibling or return 0;
    return 0 unless $op->isa("PPI::Token::Operator") and $op->content eq '->';

    # And then another word which PPI identifies as a method call.
    my $method = $op->snext_sibling or return 0;
    return 0 unless $method->isa("PPI::Token::Word") and $method->method_call;

    my $method_name = $method->literal;

    printf "%s->%s seen at %s line %d.\n",
        $class_name, $method_name, $file, $class->line_number;
});
You don't say what server environment you're running under, but from what you say it sounds like you could do with preloading all your modules before executing any individual pages. Not only would this prevent the problem you're describing (where every script has to remember to load all the modules it uses), but it would also save you memory.
In pre-forking servers (as commonly used with mod_perl and Apache) you really want to load as much of your code as possible before the server forks for the first time, so that the code is stored once in copy-on-write shared memory rather than multiple times in each child process when it is loaded on demand.
For information on pre-loading in Apache, see the relevant section of Practical mod_perl.
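The usual shape is a startup file pulled in once from the Apache configuration (with something like PerlRequire /path/to/startup.pl); a rough sketch, where everything except Bla is a placeholder for whatever your scripts actually use:
# startup.pl: loaded once in the parent process, before Apache forks.
use strict;
use warnings;
# Preload everything the scripts need so the compiled code sits in
# copy-on-write memory shared by all children.
use Bla;
use HTTP::Request::Common;
1;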
My apologies if this is a duplicate; I may not know the proper terms to search for.
I am tasked with analyzing a Perl module file (.pm) that is a fragment of a larger application. Is there a tool, app, or script that will simply go through the code and pull out all the variable names, module names, and function calls? Even better would be something that would identify whether it was declared within this file or is something external.
Does such a tool exist? I only get the one file, so this isn't something I can execute; just some basic static analysis, I guess.
Check out the new but well-recommended Class::Sniff.
From the docs:
use Class::Sniff;

my $sniff = Class::Sniff->new({class => 'Some::class'});

my $num_methods = $sniff->methods;
my $num_classes = $sniff->classes;
my @methods     = $sniff->methods;
my @classes     = $sniff->classes;

{
    my $graph    = $sniff->graph;   # Graph::Easy
    my $graphviz = $graph->as_graphviz();

    open my $DOT, '|dot -Tpng -o graph.png' or die("Cannot open pipe to dot: $!");
    print $DOT $graphviz;
}

print $sniff->to_string;
my @unreachable = $sniff->unreachable;
foreach my $method (@unreachable) {
    print "$method\n";
}
This will get you most of the way there. Some variables, depending on scope, may not be available.
If I understand correctly, you are looking for a tool to go through Perl source code. I am going to suggest PPI.
Here is an example cobbled up from the docs:
#!/usr/bin/perl
use strict;
use warnings;
use PPI::Document;
use HTML::Template;
my $Module = PPI::Document->new( $INC{'HTML/Template.pm'} );
my $sub_nodes = $Module->find(
sub { $_[1]->isa('PPI::Statement::Sub') and $_[1]->name }
);
my @sub_names = map { $_->name } @$sub_nodes;
use Data::Dumper;
print Dumper \@sub_names;
Note that this will output:
...
'new',
'new',
'new',
'output',
'new',
'new',
'new',
'new',
'new',
...
because multiple classes are defined in HTML/Template.pm. Clearly, a less naive approach would work with the PDOM tree in a hierarchical way.
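For example, a rough sketch of such a hierarchical pass, tracking the current package statement and printing each sub under the package it belongs to (same HTML/Template.pm example as above):
use strict;
use warnings;
use PPI::Document;
use HTML::Template;
my $Module  = PPI::Document->new( $INC{'HTML/Template.pm'} );
my $package = 'main';
# Visit package and sub statements in document order.
my $nodes = $Module->find( sub {
    $_[1]->isa('PPI::Statement::Package') or $_[1]->isa('PPI::Statement::Sub')
}) || [];
for my $node (@$nodes) {
    if ( $node->isa('PPI::Statement::Package') ) {
        # Remember which package the following subs belong to.
        $package = $node->namespace;
    }
    elsif ( my $name = $node->name ) {
        print "${package}::$name\n";
    }
}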
Another CPAN tool available is Class::Inspector:
use Class::Inspector;
# Is a class installed and/or loaded
Class::Inspector->installed( 'Foo::Class' );
Class::Inspector->loaded( 'Foo::Class' );
# Filename related information
Class::Inspector->filename( 'Foo::Class' );
Class::Inspector->resolved_filename( 'Foo::Class' );
# Get subroutine related information
Class::Inspector->functions( 'Foo::Class' );
Class::Inspector->function_refs( 'Foo::Class' );
Class::Inspector->function_exists( 'Foo::Class', 'bar' );
Class::Inspector->methods( 'Foo::Class', 'full', 'public' );
# Find all loaded subclasses or something
Class::Inspector->subclasses( 'Foo::Class' );
This will give you similar results to Class::Sniff; you may still have to do some processing on your own.
There are better answers to this question, but they aren't getting posted, so I'll claim the fastest gun in the West and go ahead and post a 'quick-fix'.
Such a tool exists, in fact, and is built into Perl. You can access the symbol table for any namespace by using a special hash variable. To access the main namespace (the default one):
for (keys %main::) {   # alternatively %::
    print "$_\n";
}
If your package file is named My/Package.pm, and is thus in the namespace My::Package, you would change %main:: to %My::Package:: to achieve the same effect. See the perldoc perlmod entry on symbol tables; it explains them and lists a few alternatives that may be better, or at least get you started on finding the right module for the job (that's the Perl motto: There's More Than One Module To Do It).
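For instance, something along these lines; the package name My::Package and the file name are made up, and note that this does require compiling the file:
# Load the fragment, then walk its symbol table.
require './Package.pm';
no strict 'refs';
for my $name ( sort keys %My::Package:: ) {
    print "sub:    $name\n" if defined &{"My::Package::$name"};
    print "scalar: $name\n" if defined ${"My::Package::$name"};
}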
If you want to do it without executing any of the code you are analyzing, it's fairly easy to do with PPI. Check out my Module::Use::Extract; it's a short bit of code that shows you how to extract any sort of element you want from PPI's PDOM.
If you want to do it with code that you have already compiled, the other suggestions in the answers are better.
I found a pretty good answer to what I was looking for in this column by Randal Schwartz. He demonstrated using the B::Xref module to extract exactly the information I was looking for. Just replacing the evaluated one-liner he used with the module's filename worked like a champ, and apparently B::Xref comes with ActiveState Perl, so I didn't need any additional modules.
perl -MO=Xref module.pm