Can someone give me some direction on the best way to share a $dbh variable between "objects" in different .pm files?
For instance, my main module, say Foo.pm, has a new constructor, and I could give it a dbh (or create one there) and then share it by passing it as a parameter to Bar.pm's new constructor, re-assigning it inside Bar->new. But that feels like a lot of work just to manage this variable.
Is there a simple, yet elegant way to do this? I've researched Exporter and a few other examples, but none seem to be straightforward.
Thanks!
I suppose that what you actually want is to take the control over $dbh creation out of the code that works with it. Most trivial way is, well,
my $dbh;
sub get_dbh {
    # "bad" = never connected, or the connection has gone away
    if ( !$dbh || !$dbh->ping ) {
        # reconnect or whatever - the DSN and credentials here are placeholders
        $dbh = DBI->connect( 'dbi:Pg:dbname=mydb', 'user', 'password', { RaiseError => 1 } );
    }
    return $dbh || die "could not obtain a database handle";
}
And then in your code access it like
get_dbh()->do("your sql");
You could put that get_dbh() function in a separate module and call it from anywhere in your project - as usual with Perl, the module will be loaded only once and its file-scoped $dbh will exist in only one copy within the process.
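As a rough sketch of that separate module (the package name, DSN and credentials below are placeholders, not from the original post):
package MyApp::DB;
use strict;
use warnings;
use DBI;
use Exporter 'import';
our @EXPORT_OK = ('get_dbh');

my $dbh;    # one copy per process, shared by every caller of get_dbh()

sub get_dbh {
    unless ( $dbh && $dbh->ping ) {
        # Placeholder connection details - substitute your own
        $dbh = DBI->connect( 'dbi:Pg:dbname=myapp', 'user', 'secret',
                             { RaiseError => 1, AutoCommit => 1 } );
    }
    return $dbh;
}

1;
Any other module can then say use MyApp::DB 'get_dbh'; and write get_dbh()->do("your sql"); as above.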
There are many possible ways to achieve that; writing a function like the one described above (and perhaps passing a reference to that function around instead of passing the $dbh) is one. There are plenty of others, depending on your design and personal taste - a singleton class, a variable tied to the function described above, or even a class that imitates DBI... That's up to you, but it should be one piece of code; spreading this logic all over your project is a bad idea.
If you're using Moose to build your objects, you could encapsulate your database handle in a role and compose it into the classes that need database access.
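A minimal sketch of that role approach (the role name and connection details are made up for illustration):
package MyApp::Role::DBH;
use Moose::Role;
use DBI;

has dbh => (
    is      => 'ro',
    isa     => 'DBI::db',
    lazy    => 1,
    builder => '_build_dbh',
);

sub _build_dbh {
    # Placeholder connection details - substitute your own
    return DBI->connect( 'dbi:SQLite:dbname=app.db', '', '', { RaiseError => 1 } );
}

1;
A consuming class would then say with 'MyApp::Role::DBH'; and call $self->dbh wherever it needs the handle; alternatively, pass a shared handle to the constructor so several objects reuse one connection.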
Related
I know specific instances of this question have been answered before:
How can I dynamically include Perl modules without using eval?
How do I use a Perl package known only in runtime?
There are also good answers at Perl Monks:
Writing a Perl module that dynamically loads other modules.
Creating subroutines on the fly
But I would like a robust way to add functionality to a Perl application that is:
Efficient: if the code is not needed, it should not be compiled.
Easy to debug: if something goes wrong in the dynamic code, error reporting should point at the right place in that code.
Easy to extend: adding new code should be as easy as adding a new file or directory+file.
Easy to invoke: the main application should be able to use an "add on" without much trouble. An efficient mechanism to check whether the "add on" has already been loaded, and to load it if not, would be a plus.
To illustrate the point, here are some examples that would benefit from a good solution:
A set of scripts that move data between different applications. For instance, moving data from OpenCart to Prestashop, where each entity in the data model has a specific "add on" that deals with the input or output; an intermediate data model then takes care of transforming the data. This could be used to move data in either direction, or even between different versions of the same e-commerce platform.
A web application that needs to render different types of HTML in different places. Each "module" knows how to handle certain information and accepts parameters to do so. One module outputs HTML, another a list of documents, another a document, another a banner, and so on.
Here are some examples that I have used and that work.
Load a function at run time and output the possible compile errors:
eval `cat $file_with_function`;
if( $@ ) {
    print STDERR $@, "\n";
    die "Errors at file $file_with_function\n";
}
Or, more robustly, using File::Slurp:
eval read_file("$file_with_function", binmode => ':utf8');
Check that a certain function has been defined:
if( !defined &myfunction ) {
    die "myfunction is not defined\n";
}
The function may be called from there on. This is fine with one function, but not for many.
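If the file is expected to define several functions, the same check can be looped over a list of names (the names below are only illustrative, and this assumes the evaluated file did not declare its own package):
{
    no strict 'refs';
    for my $fn (qw(read_input transform write_output)) {
        die "$fn is not defined\n" unless defined &{"main::$fn"};
    }
}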
If the function is put in a module:
require $file_with_function; # needs the ".pm" extension, i.e. addon/func.pm
$name_of_module->import(); # need to know the module name, i.e. Addon::Func
$name_of_module->myfunction(...);
Here the require may be protected inside an eval, checking $@ afterwards as before.
With Module::Load:
load $name_of_module;
Followed by the import and used in the same way. Security should not be a concern as it may be assumed that the dynamic code comes from a trusted place. Are there better ways? Which way would be considered good practice?
In case it helps, I will be using the solution (among other places, but not exclusively) within the Dancer framework.
EDIT: Given the comments, I add some more info. All cases that I have in mind have in common:
There is more than one dynamic piece of code. Probably many to start with.
Each bit of code has the same interface.
Given the comments and the lack of responses, I have done some research to answer my own question. Comments or other answers are welcome!
Dynamic code
By dynamic code I mean code that is evaluated at run time. In general, I consider it better to compile an application fully, so that you get all the error checking the Perl compiler can offer before starting to execute. Together with use strict and use warnings, that catches many common mistakes. So why use dynamic code at all? These are the reasons I consider:
An application performs many different actions that are chosen depending on the context of execution. For instance, an application extracts certain properties from a file. The way to extract them depends on the file type and we want to deal with many file types, but we do not want to change the application for each new file type we add. We also want the application to start quickly.
An application needs to be expanded on the fly in a way that does not require the application to restart.
We have a large application that contains a number of features. When we deploy the application, we do not want to provide all the possible features all the time, maybe because we licence them separately, maybe because not all of them are able to run under all platforms. By throwing in only the files with the features we want, we have a distribution that does not require changing any code or config files.
How do we do it?
Given the possibilities that Perl offers, solutions to adding dynamic code come in two flavors: using eval and using require. Then there are modules that may help do things in an easier or more maintainable way.
The quick and dirty way
The eval way uses the form eval EXPR to compile a piece of Perl code at run time. The expression could be a string, but I suggest putting the code in a file and grouping other similar files in a convenient place. Then, preferably using File::Slurp:
eval read_file("$file_with_code", binmode => ':utf8');
if( $@ ) {
    die "$file_with_code: error $@\n";
}
if( !defined &myfunction ) {
    die "myfunction is not defined at $file_with_code\n";
}
Specifying the character set to read_file makes sure that the file will be interpreted correctly. It is also good to check that the compilation was correct and that the function we expect was defined. So in $file_with_code, we will have:
sub myfunction {
    # Do whatever with the arguments in @_; maybe return something
}
Then you may invoke the function normally. The function will be a different one depending on which file was loaded. Simple and dynamic.
The modular way (recommended)
The way I would do it with maintainability in mind would be using require. Unlike use, which is evaluated at compile time, require may be used to load a module at run time. Because require with a variable argument expects a file path rather than a package name, the usual trick when the name is held in a variable is a string eval:
my $mymodule = 'MyCompany::MyModule'; # The module name ends up in $mymodule
eval "require $mymodule";
Also unlike use, require will load the module but will not call import. So we may use any functions inside the module, and those function names will not pollute the calling namespace. To access a function we will need to use:
$mymodule->myfunction($a, $b);
See below as to how the arguments get passed. This way of invoking a function will add an argument before $a and $b that is usually named $self. You may ignore it if you don't know anything about object orientation.
As require will try to load a module that may not exist or may not compile, the string eval above also serves to catch the problem: after it runs, $@ may be checked for an error in the loading and compiling process. We may also check that the function has been defined with:
if( !$mymodule->can('myfunction') ) {
    die "myfunction is not defined at module $mymodule\n";
}
In this case we will need to create a directory for the modules and a file with the .pm extension for each one:
MyCompany
    MyModule.pm
Inside MyModule.pm we will have:
package MyCompany::MyModule;

sub myfunction {
    my ($self, $a, $b) = @_;
    # Do whatever; maybe return something
    # $self will be 'MyCompany::MyModule'
}

1;
The package bit is essential and will make sure that whatever definitions we put inside end up in the MyCompany::MyModule namespace. The 1; at the end tells require that the module initialization was successful.
In case we want to implement the module using other libraries whose imports could pollute the module's namespace, we can use the namespace::clean module. It removes those names from the module's namespace at the end of compilation, so code using our module will not see them. It is used in this way:
package MyCompany::MyModule;

# Definitions by these modules will not be available to the code doing the require
use Library1 qw(def1 def2);
use Library2 qw(def3 def4);
...

# Private functions go here and will not be visible from the code doing the require
sub private_function1 {
    ...
}
...

use namespace::clean;

# myfunction will be available
sub myfunction {
    # Do whatever; maybe return something
}
...

1;
What happens if we include a module more than once?
The short answer is: nothing. Perl keeps track of which modules have been loaded, and from where, using the %INC variable. Neither use nor require will load a library twice. use will still call import and add any exported names to the caller's namespace; require will not. In case you want to check whether a module has already been loaded, you could inspect %INC or, better yet, use Module::Loaded, which is part of the core in modern Perl versions:
use Module::Loaded;

if( !is_loaded( $mymodule ) ) {
    eval "require $mymodule";
    ...
}
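For reference, the raw %INC check mentioned above looks roughly like this (note that the keys of %INC are file paths, not package names):
(my $modfile = "$mymodule.pm") =~ s{::}{/}g;
if ( !$INC{$modfile} ) {
    eval "require $mymodule";
    die "Could not load $mymodule: $@" if $@;
}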
How do I make sure Perl finds my module files?
For use and require Perl uses the @INC variable to define the list of directories that will be used to look for libraries. Adding a new directory to it may be achieved (among other ways) by adding it to the PERL5LIB environment variable or by using:
use lib '/the/path/to/my/libs';
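A common variation is to locate the library directory relative to the running script (the path below is only an example):
use FindBin qw($Bin);
use lib "$Bin/../lib";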
Helper libraries
I have found some libraries that may be used to make the code that uses the dynamic mechanism more maintainable. They are:
The if module: will load a module or not depending on a condition: use if CONDITION, MODULE => ARGUMENTS;. It may also be used to unload a module (a short example follows the Module::Load::Conditional snippet below).
Module::Load::Conditional: will not die on you while trying to load a module and may also be used to check the module version or its dependencies. It is also able to load a list of modules all at once even checking their versions before doing so.
Taken from the Module::Load::Conditional documentation:
use Module::Load::Conditional qw(can_load);

my $use_list = {
    CPANPLUS     => 0.05,
    LWP          => 5.60,
    'Test::More' => undef,
};

print can_load( modules => $use_list )
    ? 'all modules loaded successfully'
    : 'failed to load required modules';
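And a one-line example of the if module mentioned above (the condition and environment variable are just an illustration):
# Load Data::Dumper only when debugging is switched on
use if $ENV{MYAPP_DEBUG}, 'Data::Dumper';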
I have a bunch of Perl 6 tests that start off with some basic tests where I put the class name to test in a variable, then use that variable throughout the test:
my $package = 'Some::Class';
use-ok $package;
my $class = ::($package);
can-ok $class, 'new';
I hadn't paid attention to this for a bit, but it no longer works because classes are lexically loaded now:
No such symbol 'Some::Class'
It's not a hard fix: load the module with a plain use instead of use-ok, in the scope where I want ::($package):
use Some::Class;
...
The other solutions (discounting an ugly EVAL perhaps) have the issue I'm trying to avoid.
But I don't particularly like that, since the name shows up twice in the file. I particularly liked my formerly-working idiom, which I carried over from Perl 5: if I wanted to change a class name, it only showed up once in the file. I could also easily generate boilerplate tests (although it's not that much harder with the fix).
Is there a way I can get back to the ideal I wanted? (Although I figure lexical loading in the next version will get in the way again).
To sum up the problem: You want to load modules using a symbol instead of hardcoding it.
Using a constant should do this for you:
constant some-module = 'Some::Module';
use ::(some-module);
You can also load the module at runtime using require, which would allow runtime computed values:
my $some-module = 'Some::Module';
require ::($some-module);
::($some-module).foo
It would make sense to do this after trying a use-ok.
For extra credit, you may find the techniques in this article useful.
http://rakudo.org/2017/03/18/lexical-require-upgrade-info/
I often work on a huge, not-very-well-documented, object-oriented Perl repo at my place of employment. While maintaining the code, I frequently need to trace things that are inherited from other classes so that I can understand what they're doing. For example, I need to figure out what $self->mystery is and what it's doing:
package Foo::Bar;
use Moose;
use Method::Signatures;
use Foo::Bar::Element;
use Foo::Bar::Function;
use base qw (Baz::Foo::Bar);
method do_stuff ($some_arg) {
    # mystery is not defined in Foo::Bar
    my $mystery = $self->mystery;
    $mystery->another_mystery($some_arg);
}
I usually find myself spending way too much time tracing through parent classes. So my question is, is there an easy way for me to figure out where $self->mystery comes from? Or in other words, I need to find where mystery is declared.
And by "easy way", I don't mean using ack or grep to string search through files. I'm hoping there's some sort of debugging module I can install and use which could help give me some insight.
Thank you.
Thanks to Standard Perl . . . the comes_from Method!
You don't need to download any special tool or module for this, let alone some giant IDE because your undocumented class structure has gotten too complicated for mere humans ever to understand without a hulking IDE.
Why not? Simple: Standard Perl contains everything you need to get the answer you’re looking for. The easy way to find out where something comes from is to use the very useful comes_from method:
$origin = $self->comes_from("mystery");
$secret_origin = $self->comes_from("another_mystery");
$birthplace = Some::Class->comes_from("method_name");
That will return the original name of the subroutine which that method would resolve to. As you see, comes_from works as both an object method and a class method, just like can and isa.
Note that when I say the name of the subroutine it resolves to, I mean where that subroutine was originally created, back before any importing or inheritance. For example, this code:
use v5.10.1;
use Path::Router;
my($what, $method) = qw(Path::Router dump);
say "$what->$method is really ", $what->comes_from($method);
prints out:
Path::Router->dump is really Moose::Object::dump
Similar calls would also reveal things like:
Net::SMTP->mail is really Net::SMTP::mail
Net::SMTP->status is really Net::Cmd::status
Net::SMTP->error is really IO::Handle::error
It works just fine on plain ole subroutines, too:
SQL::Translator::Parser::Storable->normalize_name
is really SQL::Translator::Utils::normalize_name
The lovely comes_from method isn't quite built in, though it requires nothing outside of standard Perl. To make it accessible to you and all your classes and objects and more, just add this bit of code somewhere - anywhere you please, really :)
sub UNIVERSAL::comes_from($$) {
    require B;
    my($invocant, $invoke) = @_;
    my $coderef = $invocant->can($invoke) || return;
    my $cv = B::svref_2object($coderef);
    return unless $cv->isa("B::CV");
    my $gv = $cv->GV;
    return if $gv->isa("B::SPECIAL");
    my $subname  = $gv->NAME;
    my $packname = $gv->STASH->NAME;
    return $packname . "::" . $subname;
}
By declaring that as a UNIVERSAL sub, now everybody who’s anybody gets to play with it, just like they do with can and isa. Enjoy!
Are you sure you don't want an IDE? It seems to be what you are asking about. Padre, Eclipse EPIC, Emacs, vim, and many other editors offer some variation on the features you mention - probably simpler than you seem to want. If you have a big project to navigate, ctags can help - it's usually easy to integrate into an editor, and you can hack on its configuration file (with regexes, BTW) to get it to recognize bits of a complicated set of source files.
There is a related Perl FAQ entry about IDEs and an SO question: What's a good development environment for Perl? There are also a host of CPAN modules you will want to use when developing that let you look into your code programmatically:
Devel::Kit
Devel::Peek
SUPER
Devel::Trepan
MooseX::amine / mex
...
You can see an example of a script that looks for methods in classes in the SO node: Get all methods and/or properties in a given Perl class or module.
You might be able to get tools like these to help you hop around in your source in a way you find useful, either from a shell or from inside the debugger. Devel::Trepan has a good short summary of debugging tools as part of its documentation. Generally, though, you can be very productive combining Data::Dumper and the B:: modules (B::Xref, B::Deparse, etc.) with the debugger and ack.
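If you just want a quick programmatic answer from core modules, something along these lines (a rough sketch; the names are illustrative) walks the method resolution order and reports the first package whose symbol table defines the sub:
use strict;
use warnings;
use mro;    # core since Perl 5.10; the class must already be loaded

sub find_method_origin {
    my ($class, $method) = @_;
    for my $pkg ( @{ mro::get_linear_isa($class) } ) {
        no strict 'refs';
        return $pkg if defined &{"${pkg}::${method}"};
    }
    return;
}

# e.g. print find_method_origin('Foo::Bar', 'mystery');
Unlike comes_from above, this reports the package where the sub is installed, so an imported sub shows up under the importing package rather than where it was originally written.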
I already asked a similar question, but it had more to do with using barewords for functions. Basically, I am refactoring some of my code as I have begun learning about "package" and OOP in perl. Since none of my modules used "package" I could just "require" them into my code, and I was able to use their variables and functions without any trouble. But now I have created a module that I will be using as a class for object manipulation, and I want to continue being able to use my functions in other modules in the same way. No amount of reading docs and tutorials has quite answered my question, so please, if someone can offer an explanation of how it works, and why, that would go a really long way, more than a "this is how you do it"-type answer.
Maybe this can illustrate the problem:
myfile.cgi:
require 'common.pm';
&func_c('hi');
print $done;
common.pm:
$done = "bye";
sub func_c { print @_; }
will print "hi" then "bye", as expected.
myfile_obj.cgi:
use common_obj;
&func_obj('hi');
&finish;
common_obj.pm:
package common_obj;
require 'common.pm';
sub func_obj { &func_c(@_); }
sub finish {print $done;}
gives "Undefined subroutine func_c ..."
I know (a bit) about namespaces and so on, but I don't know how to achieve the result I want (to get func_obj to be able to call func_c from common.pm) without having to modify common.pm (which might break a bunch of other modules and scripts that depend on it working the way it is). I know about use being called as a "require" in BEGIN along with its import().. but again, that would require modifying common.pm. Is what I want to accomplish even possible?
You'll want to export the symbols from package common_obj (which isn't a class package as it stands).
You'll want to get acquainted with the Exporter module. There's an introduction in Modern Perl too (freely available book, but consider buying it too).
It's fairly simple to do - if you list functions in @EXPORT_OK then they can be made available to someone who uses your package. You can also group functions together into named groups via %EXPORT_TAGS.
Start by just exporting a couple of functions, list those in your use statement and get the basics. It's easy enough then.
If your module was really object-oriented then you'd access the methods through an object reference $my_obj->some_method(123) so exporting isn't necessary. It's even possible for one package to offer both procedural/functional and object-oriented interfaces.
Your idea of wrapping old "unsafe" modules with something neater seems a sensible way to proceed by the way. Get things under control without breaking existing working code.
Edit: explanation.
If you require a piece of code that has no package statement, its definitions end up in the requiring package (common_obj here); but if the code declares its own package and you use it, you need to explicitly export its definitions.
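To illustrate the earlier point about one package offering both a functional and an object-oriented interface, a tiny sketch (everything here is made up for the example):
package MyApp::Text;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = ('shout');

# Functional interface
sub shout {
    my ($text) = @_;
    return uc($text) . '!';
}

# Object-oriented interface over the same code
sub new   { my ($class, %args) = @_; return bless { %args }, $class; }
sub speak { my ($self, $text) = @_;  return shout($text); }

1;
Callers can then either use MyApp::Text 'shout'; and call shout('hi'), or call MyApp::Text->new->speak('hi').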
You can use common_obj::func_obj and common_obj::finish. You just need to add their namespaces and it will work. You don't need the '&' in this case.
When you used the package statement (in common_obj.pm) you changed the namespace for the ensuing functions. When you didn't (in common.pm) you included the functions in the namespace of whatever code required the file (main or common_obj). I don't believe this has anything to do with use versus require.
You should use Exporter. Change common_obj to add:
use base 'Exporter';
our @EXPORT_OK = qw/func_obj finish/;
Then change myfile_obj:
use common_obj qw/func_obj finish/;
I am assuming you are just trying to add a new interface into an old "just works" module. I am sure this is fraught with problems but if it can be done this is one way to do it.
It's very good that you're making the move to use packages, as that will help you a lot in the future. To get there, I suggest that you start refactoring your old code as well. I can understand not wanting to have to touch any of the old cgi files, and agree with that choice for now. But you will need to edit some of your included modules to get where you want to be.
Using your example as a baseline the goal is to leave myfile.cgi and all files like it as they are without changes, but everything else is fair game.
Step 1 - Create a new package to contain the functions and variables in common.pm
common.pm needs to be a package, but you can't make it so without affecting your old code. The workaround is to create a completely new package to contain all the functions and variables in the old file. This is also a good opportunity to create a better naming convention for all of your current and yet-to-be-created packages. I'm going to assume that maybe you don't actually have it named common.pm, but regardless, you should pick a directory and name that fits your project. I'm going to randomly choose the name MyProject::Core for the functions and variables previously held in common.pm
package MyProject::Core;

use strict;
use warnings;
use base 'Exporter';    # provides the import() method that callers rely on

our @EXPORT      = qw();
our @EXPORT_OK   = qw($done func_c);
our %EXPORT_TAGS = (all => [@EXPORT, @EXPORT_OK]);

our $done = "bye";

sub func_c {
    print @_, "\n";
}

1;
__END__
This new package should be placed in MyProject/Core.pm. You will need to list in @EXPORT_OK all the variables and functions that you want exported (the use base 'Exporter' line is what gives the package an import method for callers to use). Also, as a small note, I've added newlines to all of the print statements in your example just to make testing easier.
Secondly, edit your common.pm file to contain just the following:
use MyProject::Core qw(:all);
1;
__END__
Your myfile.cgi should work the same as it always does now. Confirm this before proceeding.
Next you can start creating your new packages that will rely on the functions and variables that used to be in the old common.pm. Your example common_obj.pm could be recoded to the following:
package common_obj;

use strict;
use warnings;
use MyProject::Core qw($done func_c);
use base 'Exporter';

our @EXPORT_OK = qw(func_obj finish);

sub func_obj { func_c(@_); }
sub finish   { print "$done\n"; }

1;
__END__
Finally, myfile_obj.cgi is then recoded like so:
use common_obj qw(func_obj finish);
use strict;
use warnings;
func_obj('hi');
finish();
1;
__END__
Now, I could've used @EXPORT instead of @EXPORT_OK to automatically export all the available functions and variables, but it's much better practice to selectively import only those functions you actually need. This approach also makes your code more self-documenting, so someone looking at any file can see where a particular function came from.
Hopefully, this helps you get on your way to better coding practices. It can take a long time to refactor old code, but it's definitely a worthwhile practice to constantly be updating ones skills and tools.
I'm working on a fairly complex application written in Perl. I'm fairly experienced with the language, but I'm just stumped on this.
I'm using a module, Foo, which uses sysread and syswrite for various operations on a file-handle (a bi-directional socket, in this case) that I pass to its constructor.
I want to do the following: from another module I am writing (let's call it Bar), I want to change the way that sysread/syswrite behave only when called from within methods that belong to Foo.
Sysread et al need to work as normal everywhere else. It can be safely assumed that the use of sysread will not change in Foo.
The reason I want to do this is that I need to track the number of bytes being read from/written to the afore-mentioned file handle. At this point, this seems like the only way I can get this information - basically saving the return value from sysread/write.
I have no problems using anything from the CPAN, as long as it's of good quality.
Update: I found a better solution to my specific problem, and have posted the code here:
https://github.com/Hercynium/Tie-Handle-CountChars
It seems to be working very well in my application, but I won't be posting it to the CPAN until I've more thoroughly tested it, plus written some actual unit tests :)
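In case the general shape of that approach is useful to others, here is a rough sketch of the idea (this is not the code from that repository): a tied handle's READ and WRITE methods are called for sysread and syswrite on the handle, so wrapping the real socket lets you tally bytes without touching Foo at all.
package My::CountingHandle;
use strict;
use warnings;

sub TIEHANDLE {
    my ($class, $real_fh) = @_;
    return bless { fh => $real_fh, read => 0, written => 0 }, $class;
}

sub READ {    # invoked for read() and sysread() on the tied handle
    my $self = shift;
    my $n = sysread( $self->{fh}, $_[0], $_[1], $_[2] // 0 );
    $self->{read} += $n if $n;
    return $n;
}

sub WRITE {   # invoked for syswrite() on the tied handle
    my ($self, $buf, $len, $offset) = @_;
    my $n = syswrite( $self->{fh}, $buf, $len // length($buf), $offset // 0 );
    $self->{written} += $n if $n;
    return $n;
}

sub counts { my $self = shift; return ( $self->{read}, $self->{written} ) }

1;
You would then tie *COUNTED, 'My::CountingHandle', $socket; and hand \*COUNTED to Foo's constructor instead of the raw socket.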
You could do this by creating your own Foo::sysread function, which wraps the core function by logging the return value. The wrapping can be done automatically (preventing you from having to mess about with the symbol table yourself) with Class::Method::Modifiers:
package Foo;
use strict;
use warnings;
# ... other code...
use Class::Method::Modifiers;
around sysread => sub {
    my $orig = shift;
    my $return = CORE::sysread(@_);
    # do something with $return
    return $return;
};