I'm trying to figure out if it's possible to pass an argument when calling a Mojolicious app. This indicates that it's not easily done, at least not five years ago. Looking at the documentation, it looks like there is an option for it (or am I reading this wrong?).
Mojolicious::Commands->start_app('MyApp');
Mojolicious::Commands->start_app(MyApp => @ARGV);
If it's indeed possible, how do I access it from the startup function? I've tried some of the most obvious one, like...
sub startup {
my ($self, $arg) = @_;
....
This didn't work.
Take a look at the source. When you do start_app, it eventually runs $app->start, which passes @ARGV to $self->commands->run. That's another instance of Mojolicious::Commands, which parses the args and figures out what to do with them.
My best guess is you need to implement a Mojolicious::Command, and then you can pass along your args. That could be as simple as setting properties in the app object (which might exist already, not sure).
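For illustration, a minimal command class might look like this sketch (the command name my_command and what run does with the arguments are made up, not taken from the question):

package MyApp::Command::my_command;
use Mojo::Base 'Mojolicious::Command';

has description => 'Example command that receives arguments';
has usage       => "Usage: APPLICATION my_command [ARGS]\n";

sub run {
    my ($self, @args) = @_;

    # @args holds whatever followed the command name on the command line;
    # $self->app is the application object, so properties can be set on it here
    $self->app->log->info("my_command called with: @args");
}

1;

With that file in MyApp's command namespace, running the app as ./myapp.pl my_command foo bar should end up calling run with ('foo', 'bar').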
Related
I know specific instances of this question have been answered before:
How can I dynamically include Perl modules without using eval?
How do I use a Perl package known only in runtime?
There are also good answers at Perl Monks:
Writing a Perl module that dynamically loads other modules.
Creating subroutines on the fly
But I would like a robust way to add functionality to a Perl application that will be:
Efficient: if the code is not needed it should not be compiled.
Easy to debug: if something goes wrong in the dynamic code, error reporting should point at the right place in that code.
Easy to extend: adding new code should be as easy as adding a new file or directory+file.
Easy to invoke: the main application should be able to use an "add on" without much trouble. An efficient mechanism to check if the "add on" has already been loaded and if not load it, would be a plus.
To illustrate the point, here are some examples that would benefit from a good solution:
A set of scripts that move data from different applications. For instance, moving data from OpenCart to Prestashop, where each entity in the data model has a specific "add on" that deals with the input or output; then an intermediate data model takes care of the transformation of the data. This could be used to move data in any direction or even between different versions of the same ecommerce.
A web application that needs to render different types of HTML in different places. Each "module" knows how to handle a certain information and accepts parameters to do it. A module outputs HTML, another a list of documents, another a document, another a banner, and so on.
Here are some examples that I have used and that work.
Load a function at run time and output the possible compile errors:
eval `cat $file_with_function`;
if( $@ ) {
print STDERR $@, "\n";
die "Errors at file $file_with_function\n";
}
Or, more robustly, using File::Slurp:
eval read_file("$file_with_function", binmode => ':utf8');
Check that a certain function has been defined:
if( !defined &myfunction ) {
die "myfunction is not defined\n";
}
The function may be called from there on. This is fine with one function, but not for many.
If the function is put in a module:
require $file_with_function; # needs the ".pm" extension, i.e. addon/func.pm
$name_of_module->import(); # need to know the module name, i.e. Addon::Func
$name_of_module->myfunction(...);
The require may be protected inside an eval block, checking $@ afterwards as before.
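For example, a minimal sketch of the protected form (reusing the hypothetical addon/func.pm from above):

eval {
    require $file_with_function;    # e.g. 'addon/func.pm'
};
if( $@ ) {
    die "Errors at file $file_with_function: $@\n";
}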
With Module::Load:
load $name_of_module;
Followed by the import and used in the same way. Security should not be a concern, as it may be assumed that the dynamic code comes from a trusted place. Are there better ways? Which way would be considered good practice?
In case it helps, I will be using the solution (among other places, but not exclusively) within the Dancer framework.
EDIT: Given the comments, I'll add some more info. All the cases that I have in mind have this in common:
There is more than one dynamic piece of code. Probably many to start with.
Each bit of code has the same interface.
Given the comments and the lack of responses, I have done some research to answer my own question. Comments or other answers are welcome!
Dynamic code
By dynamic code I mean code that is evaluated at run-time. In general, I consider it better to compile an application so that you have all the error checking the Perl compiler can offer before starting to execute. Together with use strict and use warnings, you can catch many common mistakes that way. So why use dynamic code at all? These are the reasons I consider:
An application performs many different actions that are chosen depending on the context of execution. For instance, an application extracts certain properties from a file. The way to extract them depends on the file type and we want to deal with many file types, but we do not want to change the application for each new file type we add. We also want the application to start quickly.
An application needs to be expanded on the fly in a way that does not require the application to restart.
We have a large application that contains a number of features. When we deploy the application, we do not want to provide all the possible features all the time, maybe because we license them separately, maybe because not all of them are able to run on all platforms. By throwing in only the files with the features we want, we get a distribution that does not require changing any code or config files.
How do we do it?
Given the possibilities that Perl offers, solutions to adding dynamic code come in two flavors: using eval and using require. Then there are modules that may help do things in an easier or more maintainable way.
The quick and dirty way
The eval way uses the form eval EXPR to compile a piece of Perl code at run-time. The expression could be a string, but I suggest putting the code in a file and grouping other similar files in a convenient place. Then, if possible, using File::Slurp:
eval read_file("$file_with_code", binmode => ':utf8');
if( $@ ) {
die "$file_with_code: error $@\n";
}
if( !defined &myfunction ) {
die "myfunction is not defined at $file_with_code\n";
}
Specifying the character set to read_file makes sure that the file will be interpreted correctly. It is also good to check that the compilation was correct and that the function we expect was defined. So in $file_with_code, we will have:
sub myfunction {
    my (@args) = @_;    # whatever arguments the caller passes
    # Do whatever; maybe return something
}
Then you may invoke the function normally. The function will be a different one depending on which file was loaded. Simple and dynamic.
The modular way (recommended)
The way I would do it with maintainability in mind would be using require. Unlike use, which is evaluated at compile-time, require may be used to load a module at run-time. Out of the various ways to invoke require, I would go for:
my $mymodule = 'MyCompany::MyModule'; # The module name ends up in $mymodule
(my $file = $mymodule) =~ s{::}{/}g;  # require EXPR expects a file path, so map :: to /
require "$file.pm";
Also unlike use, require will load the module but will not execute import. So we may use any functions inside the module, and those function names will not pollute the calling namespace. To access a function we will need to use:
$mymodule->myfunction($a, $b);
See below as to how the arguments get passed. This way of invoking a function will add an argument before $a and $b that is usually named $self. You may ignore it if you don't know anything about object orientation.
As require will try to load a module that may not exist or may not compile, it is better to catch the error using:
eval "require $mymodule";
Then $@ may be used to check for an error in the loading and compiling process. We may also check that the function has been defined with:
if( !$mymodule->can('myfunction') ) {
die "myfunction is not defined at module $mymodule\n";
}
In this case we will need to create a directory for the modules and a file with the .pm extension for each one:
MyCompany
    MyModule.pm
Inside MyModule.pm we will have:
package MyCompany::MyModule;
sub myfunction {
my ($self, $a, $b) = @_;
# Do whatever; maybe return something
# $self will be 'MyCompany::MyModule'
}
1;
The package bit is essential and will make sure that whatever definitions we put inside end up in the MyCompany::MyModule namespace. The 1; at the end tells require that the module initialization was successful.
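Putting the pieces together, the calling side could look like this minimal sketch (how $type gets chosen at run-time is hypothetical):

use strict;
use warnings;

my $type     = 'MyModule';          # decided at run-time in the real application
my $mymodule = "MyCompany::$type";

eval "require $mymodule";           # load and compile the module at run-time
die "Cannot load $mymodule: $@" if $@;

die "myfunction is not defined at module $mymodule\n"
    unless $mymodule->can('myfunction');

my $result = $mymodule->myfunction('a', 'b');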
In case we wanted to implement the module using other libraries that could pollute the caller's namespace, we could use the namespace::clean module. It makes sure the caller does not get any additions to its namespace from the module we are defining. It is used in this way:
package MyCompany::MyModule;
# Definitions by these modules will not be available to the code doing the require
use Library1 qw(def1 def2);
use Library2 qw(def3 def4);
...
# Private functions go here and will not be visible from the code doing the require
sub private_function1 {
...
}
...
use namespace::clean;
# myfunction will be available
sub myfunction {
# Do whatever; maybe return something
}
...
1;
What happens if we include a module more than once?
The short answer is: nothing. Perl keeps track of which modules have been loaded, and from where, using the %INC variable. Neither use nor require will load a library twice. use will still call import each time, adding any exported names to the caller's namespace; require will not do that at all. In case you want to check whether a module has already been loaded, you could use %INC or, better yet, Module::Loaded, which is part of the core in modern Perl versions:
use Module::Loaded;
if( !is_loaded( $mymodule ) ) {
eval "require $mymodule";
...
}
How do I make sure Perl finds my module files?
For use and require, Perl uses the @INC variable to define the list of directories in which to look for libraries. Adding a new directory to it may be achieved (among other ways) by adding it to the PERL5LIB environment variable or by using:
use lib '/the/path/to/my/libs';
Helper libraries
I have found some libraries that may be used to make the code that uses the dynamic mechanism more maintainable. They are:
The if module: loads a module or not depending on a condition: use if CONDITION, MODULE => ARGUMENTS;. It may also be used to unload a module.
Module::Load::Conditional: will not die on you while trying to load a module, and may also be used to check the module version or its dependencies. It can also load a list of modules all at once, even checking their versions before doing so.
Taken from the Module::Load::Conditional documentation:
use Module::Load::Conditional qw(can_load);
my $use_list = {
CPANPLUS => 0.05,
LWP => 5.60,
'Test::More' => undef,
};
print can_load( modules => $use_list )
? 'all modules loaded successfully'
: 'failed to load required modules';
Can someone give me some direction on the best way to share a $dbh variable between "objects" in different .pm files?
For instance, my main module, say Foo.pm, has a new constructor, etc., and I could give it a dbh, or create a dbh and then share it by passing it as a parameter to the new constructor for Bar.pm and re-assigning it inside Bar->new. But that seems like a lot of work managing this variable.
Is there a simple, yet elegant way to do this? I've researched Exporter and a few other examples, but none seem straightforward.
Thanks!
I suppose that what you actually want is to take control over $dbh creation out of the code that works with it. The most trivial way is, well:
my $dbh;

sub get_dbh {
    # Reconnect when there is no handle yet or it has gone stale
    # ($dsn, $user and $password are placeholders here)
    if ( !$dbh || !$dbh->ping ) {
        $dbh = DBI->connect( $dsn, $user, $password );
    }
    return $dbh || die "could not get a database handle";
}
And then in your code access it like
get_dbh()->do("your sql");
You could put that get_dbh() function in a separate module and call it from anywhere in your project. As usual with Perl, the module will be included only once, and its file-scoped $dbh variable will exist in only one copy within the Perl process.
There are many possible ways to achieve that; writing a function like the one described above (and maybe passing a reference to that function instead of passing the $dbh) is one. There are plenty of others, depending on your design and personal taste: a singleton class, a variable tied to the function described above, or even a class that imitates DBI... That's up to you, but it should be one piece of code; spreading this logic all over your project is a bad idea.
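For instance, such a module could be as small as this sketch (the module name and connection details are placeholders):

package MyApp::DB;
use strict;
use warnings;
use DBI;
use Exporter 'import';
our @EXPORT_OK = ('get_dbh');

my $dbh;    # one copy per Perl process

sub get_dbh {
    # Reconnect when there is no handle yet or it has gone stale
    if ( !$dbh || !$dbh->ping ) {
        $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'password',
            { RaiseError => 1, AutoCommit => 1 } )
            or die $DBI::errstr;
    }
    return $dbh;
}

1;

Any code in the project can then say use MyApp::DB 'get_dbh'; and call get_dbh()->do("your sql");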
If you're using Moose to build your objects, you could encapsulate your database handle in a role and require it into the classes that need database access.
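A rough sketch of the role approach (the role name and connection details are made up):

package MyApp::Role::Database;
use Moose::Role;
use DBI;

has dbh => (
    is      => 'ro',
    lazy    => 1,
    builder => '_build_dbh',
);

sub _build_dbh {
    # Placeholder connection arguments
    return DBI->connect( 'dbi:mysql:mydb', 'user', 'password',
        { RaiseError => 1 } );
}

1;

A consuming class then does with 'MyApp::Role::Database'; and uses $self->dbh wherever it needs the handle.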
I'm working on a fairly complex application written in Perl. I'm fairly experienced with the language, but I'm just stumped on this.
I'm using a module, Foo, which uses sysread and syswrite for various operations on a file-handle (a bi-directional socket, in this case) that I pass to its constructor.
I want to do the following: from another module I am writing (let's call it Bar), I want to change the way that sysread/syswrite behave only when called from within methods that belong to Foo.
sysread et al. need to work as normal everywhere else. It can safely be assumed that Foo's use of sysread will not change.
The reason I want to do this is that I need to track the number of bytes being read from/written to the aforementioned file handle. At this point, this seems like the only way I can get this information: basically saving the return value from sysread/syswrite.
I have no problems using anything from the CPAN, as long as it's of good quality.
Update: I found a better solution to my specific problem, and have posted the code here:
https://github.com/Hercynium/Tie-Handle-CountChars
It seems to be working very well in my application, but I won't be posting it to the CPAN until I've more thoroughly tested it, plus written some actual unit tests :)
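For the record, the idea behind it is a tied handle whose READ and WRITE methods delegate to the real handle while keeping byte counters. A sketch in that spirit (not the actual code from the repository):

package My::CountingHandle;
use strict;
use warnings;

sub TIEHANDLE {
    my ($class, $fh) = @_;
    return bless { fh => $fh, read => 0, written => 0 }, $class;
}

# sysread on the tied handle ends up here
sub READ {
    my $self = shift;
    # $_[0] is the caller's buffer (aliased), $_[1] the length, $_[2] the offset
    my $n = sysread( $self->{fh}, $_[0], $_[1], defined $_[2] ? $_[2] : 0 );
    $self->{read} += $n if defined $n;
    return $n;
}

# syswrite on the tied handle ends up here
sub WRITE {
    my ($self, $buf, $len, $offset) = @_;
    $len = length $buf unless defined $len;
    my $n = syswrite( $self->{fh}, $buf, $len, defined $offset ? $offset : 0 );
    $self->{written} += $n if defined $n;
    return $n;
}

1;

Tying a fresh glob around the real socket and passing that glob to Foo's constructor lets the wrapper see every sysread/syswrite without touching Foo.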
You could do this by creating your own Foo::sysread function, which wraps the core function by logging the return value. The wrapping can be done automatically (preventing you from having to mess about with the symbol table yourself) with Class::Method::Modifiers:
package Foo;
use strict;
use warnings;
# ... other code...
use Class::Method::Modifiers;
around sysread => sub {
my $orig = shift;
my $return = CORE::sysread(@_);
# do something with $return
return $return;
};
I am mulling over a web app in Perl that will allow users to create bug monitors. Essentially, each "bug watch" will be a bug ID that is passed to a subroutine along with a "sleep" time; once the sleep time is over, it must recur without blocking the parent process or the peer processes.
I tried Schedule::Cron. It supports a cron-like format, but the arguments to the subs must be simple scalars, hence I ruled it out.
POE/Coro seem to be other options, but I don't know much about them. :(
Any insights ? TIA
-Matt.
What's wrong with Schedule::Cron? You can make any subroutine reference that you like, so you can make closures that refer to the extra or specific data you need. You don't have to rely on the argument list. Was there something else about the module that didn't work for you?
I tried Schedule::Cron. It supports a cron-like format, but the arguments to the subs must be simple scalars, hence I ruled it out.
The Schedule::Cron documentation says that arguments is a "Reference to array containing arguments to be used when calling the subroutine". Pass a reference to a named array of your arguments, if you wish. Since the cron entry holds a reference to @data, you can add or remove @data elements in your code as needed.
$cron->add_entry(
'* * * * *',
subroutine => \&mysub,
arguments => \@data,
);
You can also use a closure, as Brian suggested:
my $var = 42;
my @arr = get_stuff();
$cron->add_entry(
'* * * * *',
sub { mysub($var, @arr) },
);
See the perlref man page for more information on closures.
If you do decide to look into Coro, then it might be worth having a look at Continuity, as this is a web library/framework built around Coro.
Also take a look at Squatting web microframework which "squats" on top of Continuity by default. The Squatting distro comes with some examples of using Coro::Event.
@brian d foy: The reasons why I think Schedule::Cron isn't good for me:
1: $cron->add_entry doesn't seem to provide an option to pass @arrays/$vars to the subs.
$cron->add_entry("$temp",{'subroutine' => \&test1,'arguments' => \#array}); is not allowed.
2: I am not sure if there is a way to add a new cron entry after $cron->run(detach=>1) has been fired, without restarting the script.
I've run into what appears to be a variable scope issue I haven't encountered before. I'm using Perl's CGI module and a call to DBI's do() method. Here's the code structure, simplified a bit:
use DBI;
use CGI qw(:cgi-lib);
&ReadParse;
my $dbh = DBI->connect(...............);
my $test = $in{test};
$dbh->do(qq{INSERT INTO events VALUES (?,?,?)},undef,$in{test},"$in{test}",$test);
The #1 placeholder variable evaluates as if it is uninitialized. The other two placeholder variables work.
The question: Why is the %in hash not available within the context of do(), unless I wrap it in double quotes (#2 placeholder) or reassign the value to a new variable (#3 placeholder)?
I think it's something to do with how the CGI module's ReadParse() function assigns scope to the %in hash, but I don't know Perl scoping well enough to understand why %in is available at the top level but not from within my do() statement.
If someone does understand the scoping issue, is there a better way to handle it? Wrapping all the %in references in double quotes seems a little messy. Creating new variables for each query parameter isn't realistic.
Just to be clear, my question is about the variable scoping issue. I realize that ReadParse() isn't the recommended method to grab query params with CGI.
I'm using Perl 5.8.8, CGI 3.20, and DBI 1.52. Thank you in advance to anyone reading this.
#Pi & #Bob, thanks for the suggestions. Pre-declaring the scope for %in has no effect (and I always use strict). The result is the same as before: in the db, col1 is null while cols 2 & 3 are set to the expected value.
For reference, here's the ReadParse function (see below). It's a standard function that's part of CGI.pm. The way I understand it, I'm not meant to initialize the %in hash (other than satisfying strict) for purposes of setting scope, since the function appears to me to handle that:
sub ReadParse {
local(*in);
if (#_) {
*in = $_[0];
} else {
my $pkg = caller();
*in=*{"${pkg}::in"};
}
tie(%in,CGI);
return scalar(keys %in);
}
I guess my question is what is the best way to get the %in hash within the context of do()? Thanks again! I hope this is the right way to provide additional info to my original question.
@Dan: I hear ya regarding the &ReadParse syntax. I'd normally use CGI::ReadParse(), but in this case I thought it was best to stick exactly to how the CGI.pm documentation has it.
It doesn't actually look like you're using it as described in the docs:
https://metacpan.org/pod/CGI#COMPATIBILITY-WITH-CGI-LIB.PL
If you must use it, then CGI::ReadParse(); seems a more sensible and less crufty syntax. I can't see it making much difference in this situation, but then it is a tied variable, so who the hell knows what it's doing ;)
Is there a particular reason you can't use the more common $cgi->param('foo') syntax? It's a little bit cleaner, and filths up your namespace in a considerably more predictable manner.
use strict;. Always.
Try declaring
our %in;
and seeing if that helps. Failing that, strict may produce a more useful error.
I don't know what's wrong, but I can tell you some things that aren't:
It's not a scoping issue. If it were then none of the instances of $in{test} would work.
It's not the archaic & calling syntax. (It's not "right" but it's harmless in this case.)
ReadParse is a nasty bit of code. It munges the symbol table to create the global variable %in in the calling package. What's worse is that it's a tied variable, so accessing it could (theoretically) do anything. Looking at the source code for CGI.pm, the FETCH method just invokes the params() method to get the data. I have no idea why the fetch in the $dbh->do() isn't working.
Firstly, that is not in the context/scope of do(). It is still in the scope of main, i.e. global. You don't leave a scope until you enter {} in some way relating to subroutines or different "classes" in Perl. Within () parens you are not leaving scope.
The sample you gave us is of an uninitialized hash, and as Pi has suggested, using strict will certainly keep those from occurring.
Can you give us a more representative example of your code? Where are you setting %in, and how?
Something's very broken there. Perl's scoping is relatively simple, and you're unlikely to stumble upon anything odd like that unless you're doing something daft. As has been suggested, switch on the strict pragma (and warnings, too. In fact you should be using both anyway).
It's pretty hard to tell what's going on without being able to see how %in is defined (is it something to do with that nasty-looking ReadParse call? why are you calling it with the leading &, btw? that syntax has been considered dead and gone for a long time). I suggest posting a bit more code, so we can see what's going on..
What version of DBI are you using? From looking at the DBI changelog it appears that versions prior to 1.00 didn't support the attribute argument. I suspect that the "uninitialized" $in{test} is actually the undef that you're passing to $dbh->do().
From the example you gave, this is not a scoping issue, or none of the parameters would work.
Looks like DBI (or a DBD, not sure where bind parameters are used) isn't honoring tie magic.
The workaround would be to stringize or copy what you pass to it, like your second and third parameters do.
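For example, with the original statement (copying forces the tied FETCH before DBI sees the value):

my $test = $in{test};    # plain scalar copy of the tied value
$dbh->do(qq{INSERT INTO events VALUES (?,?,?)}, undef, $test, $test, $test);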
A simple test using SQLite and DBI 1.53 shows it working ok:
$ perl -MDBI -we'sub TIEHASH { bless {} } sub FETCH { "42" } tie %x, "main" or die; my $dbh = DBI->connect("dbi:SQLite:dbname=dbfile","",""); $dbh->do("create table foo (bar char(80))"); $dbh->do("insert into foo values (?)", undef, $x{foo}); print "got: " . $dbh->selectrow_array("select bar from foo") . "\n"; $dbh->do("drop table foo")'
got: 42
Care to share what database you are using?
Per the DBI documentation: Binding a tied variable doesn't work, currently.
DBI is pretty complicated under the hood, and unfortunately goes through some gyrations to be efficient that are causing your problem. I agree with everyone else who says to get rid of the ugly old cgi-lib style code. It's unpleasant enough to do CGI without a nice framework (go Catalyst), let alone something that's been obsolete for a decade.
Okay, try this:
use CGI;
my %in;
CGI::ReadParse(\%in);
That might help as it's actually using a variable that you've declared, and therefore can control the scope of (plus it'll let you use strict without other nastiness that could be muddying the waters)
As this is starting to look like a tie() problem, try the following experiment. Save this as a foo.pl and run it as perl foo.pl "x=1"
use CGI;
CGI::ReadParse();
p($in{x}, "$in{x}");
sub p { my @a = @_; print "@a\n" }
It should print 1 1. If it doesn't, we've found the culprit.
I just tried your test code from http://www.carcomplaints.com/test/test.pl.txt, and it works right away on my computer, no problems. I get three values as expected. I didn't run it as CGI, but using:
...
use CGI qw/-debug/;
...
I type a variable on the console (test=test) and your script inserts it without a problem.
If, however, you leave this out, it will insert an empty string and two NULLs. This is because you interpolate a value into a string: that makes a string with the value of $in{test}, which is undef at the moment. undef stringifies to an empty string, which is what is inserted into the database.
Try this
%in = ReadParse();
but I doubt that. Are you trying to get query parameters or something?