First, I'd like to say this isn't a question of design, but a question of compliance. I'm aware there are issues with the current setup.
In a module, there are packages that are named after the servers, which has many of the same variables/functions that pertain to that server. It looks like this was set up so that you could do:
PRODUCTION_SERVER_NAME::printer() or
TEST_SERVER_NAME::printer()
Perhaps a better design might have been something like:
CENTRAL_PACKAGE_NAME::printer('production') or CENTRAL_PACKAGE_NAME::printer('test')
Anyhow, it seems the server names have changed, so instead of using the actual server names, I'd like to rename the packages to just PRODUCTION or TEST, without changing the other code that is still referring to PRODUCTION_SERVER_NAME.
Something like:
package PRODUCTION, PRODUCTION_SERVER_NAME; # pseudo code
I'm guessing some sort of glob/import might work, but was wondering if there is already something that does something similar. I also realize it's not good practice to saturate namespaces.
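For what it's worth, the glob approach can alias one package's entire symbol table (stash) to another in one assignment; CPAN's Package::Alias wraps the same idea. This is a sketch with made-up package names, not an endorsement of the design:

```perl
use strict;
use warnings;

BEGIN {
    package PRODUCTION_SERVER_NAME;    # stands in for the existing package
    sub printer { "printing on production" }
}

BEGIN {
    # Alias the whole PRODUCTION:: stash to PRODUCTION_SERVER_NAME::,
    # so every sub and variable is reachable under the new name too.
    # Done in a BEGIN block so later calls compile against the alias.
    no strict 'refs';
    *{'PRODUCTION::'} = \%{'PRODUCTION_SERVER_NAME::'};
}

print PRODUCTION::printer(), "\n";               # new name
print PRODUCTION_SERVER_NAME::printer(), "\n";   # old name still works
```

Since the stash itself is aliased, anything added to either package later shows up under both names.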
I am not providing any comments on the design or anything that might involve changing client code. Functions in MyTest.pm can be accessed using either MyTest:: or MyExam::. However, you can't say use MyExam because the physical file is not there. You could do clever %INC tricks, but my programs always crash and burn when I try to be clever.
MyTest.pm
package MyTest;
sub hello { 'Hello' }
sub twoplustwo { 4 }
for my $sub (qw( hello twoplustwo )) {
    no strict 'refs';
    *{"MyExam::$sub"} = *{"MyTest::$sub"};
}
1;
test.pl
#!/usr/bin/env perl
use strict; use warnings;
use feature 'say';
use MyTest;
say MyExam::hello();
say MyExam::twoplustwo();
Output:
Hello
4
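For completeness, the "clever %INC trick" alluded to above can be fairly tame. In this self-contained sketch (the package is inlined in a BEGIN block purely for demonstration), registering a key in %INC lets use MyExam succeed even though no MyExam.pm exists on disk:

```perl
use strict;
use warnings;

BEGIN {
    package MyTest;
    sub hello { 'Hello' }

    # Alias the sub and pretend MyExam.pm was already loaded, so a
    # later `use MyExam;` finds it in %INC and doesn't look for a file.
    *MyExam::hello = \&MyTest::hello;
    $INC{'MyExam.pm'} = __FILE__;
}

use MyExam;    # no MyExam.pm on disk, but require is satisfied by %INC

print MyExam::hello(), "\n";    # prints: Hello
```

In the real setup, the `$INC{'MyExam.pm'} = __FILE__;` line would simply go inside MyTest.pm next to the aliasing loop.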
Have you considered using aliased? It sounds like it could work for you.
Try Exporter::Auto:
package Foo;
use Exporter::Auto;
sub foo {
    print('foo');
}
package Bar;
use Foo;
package main;
Foo::foo();
Bar::foo();
I am considering moving the individual methods of a module into separate modules in order to obtain more manageable files. I wrote a little test for this:
a.pl:
#!/usr/bin/perl
use 5.028;
use warnings;
use utf8;
use open ':std', ':encoding(UTF-8)';
use Readonly;
use English qw(-no_match_vars);
use Benchmark qw(:all);
use A;
our $VERSION = 1;
Readonly::Scalar my $COUNT => 10_000_000;
warn $A::VERSION;
warn $A::Login2::VERSION;
my $a = A->new;
warn $a;
$a->login(1);
$a->login2(1);
cmpthese($COUNT, {
    login  => sub { $a->login },
    login2 => sub { $a->login2 },
});
A.pm:
package A;
use 5.028;
use warnings;
use utf8;
use open ':std', ':encoding(UTF-8)';
use Readonly;
use English qw(-no_match_vars);
use A::Login2 'login2';
our $VERSION = 1;
sub new {
    my ($class, $p) = @_;
    my $this = {};
    bless $this, $class;
    return $this;
}

sub login {
    my ($this, $dump) = @_;
    if ($dump) {
        warn "$this: login";
        $this->test;
    }
    return;
}

sub test {
    my ($this) = @_;
    warn "$this: test";
    return;
}
1;
A/Login2.pm:
package A::Login2;
use 5.028;
use warnings;
use utf8;
use open ':std', ':encoding(UTF-8)';
use Readonly;
use English qw(-no_match_vars);
use base 'Exporter';
our @EXPORT_OK = qw(login2);
our $VERSION = 1.1;
sub login2 {
    my ($this, $dump) = @_;
    if ($dump) {
        warn "$this: login2";
        $this->test;
    }
    return;
}
1;
The output from ./a.pl is:
1 at ./a.pl line 18.
1.1 at ./a.pl line 19.
A=HASH(0x5581d48f3470) at ./a.pl line 21.
A=HASH(0x5581d48f3470): login at A.pm line 25.
A=HASH(0x5581d48f3470): test at A.pm line 34.
A=HASH(0x5581d48f3470): login2 at A/Login2.pm line 18.
A=HASH(0x5581d48f3470): test at A.pm line 34.
            Rate   login  login2
login  5847953/s      --     -6%
login2 6250000/s      7%      --
I would have thought that login would be faster than login2.
Why is login2 faster than login?
Is it a good idea to put each method in its own module?
Is there a better way?
I'm looking forward to comments.
This is an open-ended question on design, but I'd offer some specific comments.
Firstly, it's a commendable aim to split up unwieldy files for ease of management and readability, and this is in general a good idea. We use libraries in any sizable code we write -- so right there, the overall code is split across different units.
But such division is based on functionality, where behaviors (functions) are naturally grouped into packages. Splitting for size may result in an awkward codebase; it may matter what goes together, updating can get tricky (buggy?), etc. This can actually hinder overall manageability.
If a module feels too large, it may well be that there is too much functionality bundled together and that the codebase should be in several different modules. There is no simple rule for assessing this; designing libraries isn't easy. It may be helpful to ask whether it makes sense to split off a group of functions so that they have their own namespace. †
However, the posted example has another issue: it's a class, but the object-oriented mechanism is mixed with basic package import. This is convoluted (how/why does an object get passed to a function defined in a file which isn't a class?), and I wouldn't recommend doing that.
Can a well designed class be too large to nicely fit in a file? Probably, I guess, even though I haven't seen such a case. Usually, when a codebase ends up broken across multiple compilation units in Perl, it is because of functionality -- it is more fitting to have multiple classes.
But if somehow mere size ends up being the problem, a reasonable approach may be to have multiple files each being the same package, clearly documented.
A.pm
package A;
use warnings;
use strict;
use feature 'say';
# =======================================================
# NOTE: Class definition is given in multiple parts/files
# =======================================================
use A_part1;
use A_part2;
sub new { ... }
# perhaps more methods in this file
1;
A_part1.pm
package A;
# warnings, strict, pragmas, etc
sub func1 { my ($self, @args) = @_; ... }
...
1;
and similarly for A_part2.pm etc. Then this is used as usual
use A;
my $obj = A->new( ... );
$obj->func1(...);
Note that this breaks the rule (a convention) about the relation between the file name and the package name (A_part1.pm vs package A;); for one, PerlCritic will complain.‡ However, it is done deliberately here and I wouldn't be concerned about that.
But I'd consider this to be actually needed only very rarely. I'd rather expect that if a library seems too large it is probably taking on too much and should be redesigned into multiple classes.
† But if there are indeed simply too many functions which do belong in the same library, once the file is broken up, consider using require to bring those files together.
‡ Perl::Critic::Policy::Modules::RequireFilenameMatchesPackage
tl;dr: Separate groups of methods into manageable chunks. Don't put every method into its own file, but don't put a million methods in one file. How you split that up depends on the task, how the methods relate to each other, and your tolerance for editing it all. Performance is not the key issue: maintainability is. Worry about performance when it becomes an issue.
You see a difference of about 6% in your test, but there are a few things to consider:
The uncertainty in Benchmark is estimated to be in about that range. See, for instance, Steffen Mueller's writings about benchmarking and his Dumbbench module.
Your benchmark shows something happening just under 6 million times a second versus something happening just over 6 million times a second. That's really, really fast in both cases. If you added a control, like a plain sub { 1 }, I think you'd find that the mere act of testing is the most significant factor in your results. Consider that you also want to test all of the other cases, including calling the method from its source package without the import, calling everything as plain subroutines (without method dispatch), and various other ways. You need to tease out what's actually important by isolating different factors.
I have various talks about benchmarking where the code isn't doing what we think it is doing, as in Wasting time thinking about wasted time. Since you run so many iterations per second, I'm guessing that you aren't actually measuring anything real. Once Perl knows how to resolve a method, for example, it doesn't need to do that work again. Everything you are wondering about this problem has already happened before your benchmark starts.
You have a lot of extra code (Readonly, English, etc). Unless those things are part of what you are trying to measure, get rid of them. I'm not even sure they are doing what you think they are doing. You also don't need to inherit from Exporter (thus screwing around with @ISA). You just want its import routine, which you can import.
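To illustrate the control idea with the core Benchmark module (the class and sub names here are hypothetical stand-ins, not the original code): if an empty control sub clocks in close to the "real" candidate, the benchmark is mostly measuring the test loop, not the code under test.

```perl
use strict;
use warnings;
use Benchmark qw(timethese cmpthese);

# Stand-in class (hypothetical, mirroring the login example): one
# trivial method, so only call/dispatch overhead is being measured.
package A;
sub new   { bless {}, shift }
sub login { return }

package main;
my $obj = A->new;

# 'none' collects timings without printing per-sub lines; cmpthese
# then prints the usual Rate comparison table.
my $results = timethese(1_000_000, {
    control => sub { 1 },             # cost of the harness itself
    method  => sub { $obj->login },   # method dispatch on top of that
}, 'none');

cmpthese($results);
```

If `control` and `method` come out within a few percent of each other, the difference you are chasing is below the noise floor of the harness.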
I already asked a similar question, but it had more to do with using barewords for functions. Basically, I am refactoring some of my code as I have begun learning about "package" and OOP in Perl. Since none of my modules used "package", I could just "require" them into my code, and I was able to use their variables and functions without any trouble. But now I have created a module that I will be using as a class for object manipulation, and I want to continue being able to use my functions in other modules in the same way. No amount of reading docs and tutorials has quite answered my question, so please, if someone can offer an explanation of how it works, and why, that would go a really long way -- much more than a "this is how you do it"-type answer.
Maybe this can illustrate the problem:
myfile.cgi:
require 'common.pm';
&func_c('hi');
print $done;
common.pm:
$done = "bye";
sub func_c { print @_; }
will print "hi" then "bye", as expected.
myfile_obj.cgi:
use common_obj;
&func_obj('hi');
&finish;
common_obj.pm:
package common_obj;
require 'common.pm';
sub func_obj { &func_c(@_); }
sub finish {print $done;}
gives "Undefined subroutine func_c ..."
I know (a bit) about namespaces and so on, but I don't know how to achieve the result I want (to get func_obj to be able to call func_c from common.pm) without having to modify common.pm (which might break a bunch of other modules and scripts that depend on it working the way it is). I know about use being called as a "require" in BEGIN along with its import().. but again, that would require modifying common.pm. Is what I want to accomplish even possible?
You'll want to export the symbols from package common_obj (which isn't a class package as it stands).
You'll want to get acquainted with the Exporter module. There's an introduction in Modern Perl too (freely available book, but consider buying it too).
It's fairly simple to do - if you list functions in @EXPORT_OK then they can be made available to someone who uses your package. You can also group functions together into named groups via %EXPORT_TAGS.
Start by just exporting a couple of functions, list those in your use statement and get the basics. It's easy enough then.
If your module was really object-oriented then you'd access the methods through an object reference $my_obj->some_method(123) so exporting isn't necessary. It's even possible for one package to offer both procedural/functional and object-oriented interfaces.
Your idea of wrapping old "unsafe" modules with something neater seems a sensible way to proceed by the way. Get things under control without breaking existing working code.
Edit : explanation.
If you require a file of unqualified code, its definitions end up in package main; but if you wrap code inside a package definition and then use it, you need to explicitly export the definitions you want visible elsewhere.
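A minimal sketch of that Exporter setup (the package is inlined and registered in %INC here just so the example is self-contained; normally it would live in its own common_obj.pm file, and the sub bodies are hypothetical):

```perl
use strict;
use warnings;

BEGIN {
    package common_obj;
    use Exporter 'import';
    our @EXPORT_OK = qw(func_obj finish);    # exportable on request

    sub func_obj { "func_obj(@_)" }
    sub finish   { "finished" }

    $INC{'common_obj.pm'} = __FILE__;   # inlined, so `use` needs no file
}

use common_obj qw(func_obj);    # import only what was asked for

print func_obj('hi'), "\n";          # imported into main
print common_obj::finish(), "\n";    # not imported, but fully qualified works
```

Only the names listed in the use statement land in the caller's namespace; everything else stays reachable through its fully qualified name.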
You can use common_obj::func_obj and common_obj::finish. You just need to add their namespaces and it will work. You don't need the '&' in this case.
When you used the package statement (in common_obj.pm), you changed the namespace for the ensuing functions. When you didn't (in common.pm), the functions went into the default namespace, main. I don't believe this has anything to do with use/require.
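The "where do the definitions land" part can be checked directly. This sketch (hypothetical file contents, written to a temp file) shows that a required file with no package statement of its own compiles into main, not into the package that did the require:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a stand-in for common.pm: no package statement of its own.
my ($fh, $file) = tempfile(SUFFIX => '.pm', UNLINK => 1);
print {$fh} "sub func_c { 'from common' }\n1;\n";
close $fh;

package common_obj;
require $file;    # loaded while common_obj is the current package

package main;
print defined &main::func_c       ? "func_c landed in main\n"       : "";
print defined &common_obj::func_c ? "func_c landed in common_obj\n" : "";
```

This is exactly why the unqualified &func_c call inside package common_obj fails: the sub is main::func_c.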
You should use Exporter. Change common_obj to add:
use base 'Exporter';
our @EXPORT_OK = qw/func_obj finish/;
Then change myfile_obj:
use common_obj qw/func_obj finish/;
I am assuming you are just trying to add a new interface into an old "just works" module. I am sure this is fraught with problems but if it can be done this is one way to do it.
It's very good that you're making the move to use packages, as that will help you a lot in the future. To get there, I suggest that you start refactoring your old code as well. I can understand not wanting to have to touch any of the old cgi files, and agree with that choice for now. But you will need to edit some of your included modules to get where you want to be.
Using your example as a baseline the goal is to leave myfile.cgi and all files like it as they are without changes, but everything else is fair game.
Step 1 - Create a new package to contain the functions and variables in common.pm
common.pm needs to be a package, but you can't make it so without affecting your old code. The workaround for this is to create a completely new package to contain all the functions and variables in the old file. This is also a good opportunity to create a better naming convention for all of your current and to-be-created packages. I'm going to assume that maybe you don't have it named as common.pm, but regardless, you should pick a directory and name that fits your project. I'm going to randomly choose the name MyProject::Core for the functions and variables previously held in common.pm
package MyProject::Core;

use strict;
use warnings;

use Exporter 'import';
our @EXPORT      = qw();
our @EXPORT_OK   = qw($done func_c);
our %EXPORT_TAGS = (all => [@EXPORT, @EXPORT_OK]);

our $done = "bye";

sub func_c {
    print @_, "\n";
}

1;

__END__
This new package should be placed in MyProject/Core.pm. You will need to include all variables and functions that you want exported in the @EXPORT_OK list. Also, as a small note, I've added newlines to all of the print statements in your example just to make testing easier.
Secondly, edit your common.pm file to contain just the following:
use MyProject::Core qw(:all);
1;
__END__
Your myfile.cgi should work the same as it always does now. Confirm this before proceeding.
Next you can start creating your new packages that will rely on the functions and variables that used to be in the old common.pm. Your example common_obj.pm could be recoded to the following:
package common_obj;

use strict;
use warnings;

use MyProject::Core qw($done func_c);
use base 'Exporter';
our @EXPORT_OK = qw(func_obj finish);

sub func_obj { func_c(@_); }
sub finish   { print "$done\n"; }

1;

__END__
Finally, myfile_obj.cgi is then recoded like so:
use common_obj qw(func_obj finish);
use strict;
use warnings;
func_obj('hi');
finish();
1;
__END__
Now, I could've used @EXPORT instead of @EXPORT_OK to automatically export all the available functions and variables, but it's much better practice to selectively import only those functions you actually need. This method also makes your code more self-documenting, so someone looking at any file can see where a particular function came from.
Hopefully, this helps you get on your way to better coding practices. It can take a long time to refactor old code, but it's definitely worthwhile to be constantly updating one's skills and tools.
I read the explanation even from perldoc and StackOverflow. But there is a little confusion.
use normally loads the module at compile time whereas require does at run time
use calls the import function automatically, whereas with require you need to call import separately, like:
BEGIN {
    require ModuleName;
    ModuleName->import;
}
require is used if we want to load bigger modules occasionally.
use throws an exception at an earlier stage, whereas require does so only when it encounters the issue
With use we can selectively load only some procedures, like:
use Module qw(foo bar); # it will import foo and bar only
Is that possible with require also?
Besides that, are there other differences between use and require?
There is a lot of discussion on Google, but I understood only the points mentioned above.
Please help me with the other points.
This is sort of like the differences between my, our, and local. The differences are important, but you should be using my 99% of the time.
Perl is a fairly old and crufty language. It has evolved over the years from a combination awk/shell/kitchen-sink language into a more strongly typed and more powerful language.
Back in Perl 3.x days before the concept of modules and packages solidified, there was no concept of modules having their own namespace for functions and variables. Everything was available everywhere. There was nothing to import. The use keyword didn't exist. You always used require.
By the time Perl 5 came out, modules had their own storage for variable and subroutine names. Thus, I could use $total in my program, and my Foo::Bar module could also use $total because my $total was really $main::total and their $total was really $Foo::Bar::total.
Exporting was a way to make variables and subroutines from a module available to your main program. That way, you can say copy( $file, $tofile); instead of File::Copy::copy( $file, $tofile );.
The use keyword simply automated stuff for you. Plus, use ran at compile time before your program was executed. This allows modules to use prototyping, so you can say foo( @array ) instead of foo( \@array ), or munge $file; instead of munge( $file );
As it says in the use perldoc's page:
It [use] is exactly equivalent to:
BEGIN { require Module; Module->import( LIST ); }
Basically, you should be using use over require 99% of the time.
I can only think of one occasion where you need to use require over use, but that's only to emulate use. There are times when a module is optional. If Foo::Bar is available, I may use it, but if it's not, I won't. It would be nice if I could check whether Foo::Bar is available.
Let's try this:
eval { use Foo::Bar; };
my $foo_bar_is_available = 1 unless ($@);
If Foo::Bar isn't available, I get this:
Can't locate Foo/Bar.pm in @INC (@INC contains: ....)
That's because use happens BEFORE I can run eval on it. However, I know how to emulate use with require:
BEGIN {
    eval { require Foo::Bar; Foo::Bar->import( qw(foo bar barfu) ); };
    our $foo_bar_module_available = 1 unless ($@);
}
This does work. I can now check for this in my code:
our $foo_bar_module_available;
if ( $foo_bar_module_available ) {
    barfu( $var, $var2 );   # I can use it
}
else {
    ...   # Do something else
}
I think the code you wrote in your second point is itself a good explanation of the difference between the two.
In practice, use performs a require of the module and then automatically imports it; with require, the module is merely loaded, and you have the freedom to import it when you need it.
Given the above, it follows that the question in point 5 makes no sense as asked: since require doesn't import anything, there is no need to specify which parts to load; you can selectively load the parts you need when you do the import operation.
Furthermore, bear in mind that while use acts at compile time (Perl's compilation phase), require acts at runtime; for this reason, with require you can load the package only if and/or when it is really needed.
Difference between use and require:
If we use "use" no need to give file extension. Ex: use
server_update_file.
If we use "require" need to give file extension. Ex: require
"server_update_file.pm";
"use" method is used only for modules.
"require" method is used for both libraries and modules.
Refer to this PerlMonks thread for more information: http://www.perlmonks.org/?node_id=412860
Suppose I have two files: a module file that looks like this:
package myPackage;
use Bio::Seq;
and another file that looks like this:
use lib "path/to/lib";
use myPackage;
use Bio::Seq;
How can I prevent Bio::Seq from being included twice? Thanks
It won't be included twice. use semantics can be described like this:
require the module
call the module's import
As the documentation says, it's equivalent to:
BEGIN { require Module; Module->import( LIST ); }
The require mechanism, on the other hand, ensures a module's code is compiled and executed only once, the first time something requires it. This mechanism is based on the special variable %INC. You can find further details in the documentation for use, require, and in the perlmod page.
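A quick way to see %INC in action, using a core module as the example:

```perl
use strict;
use warnings;

# require records every loaded file in %INC, keyed by its relative
# path; a second require finds the entry and does nothing further.
require File::Basename;
print "loaded from: $INC{'File/Basename.pm'}\n";

require File::Basename;   # no-op: already in %INC, not compiled again
```

The same bookkeeping is why two use statements for Bio::Seq in different files only compile it once.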
use Foo
is mostly equivalent to
# perldoc -f use
BEGIN {
    require "Foo.pm";
    Foo->import();
}
And require "Foo.pm" is mostly equivalent to
# perldoc -f require
sub require {
    my ($filename) = @_;
    if (exists $INC{$filename}) {
        return 1 if $INC{$filename};
        die "Compilation failed in require";
    }
    # .... find $filename in @INC
    # really load
    return do $realfilename;
}
So
No, the code won't be "Loaded" more than once, only "imported" more than once.
If you have code such as
package Bio::Seq;
...
sub import {
    # fancy stuff
}
and you want to make sure the library is loaded without calling import on it, pass an empty list:
#perldoc -f use
use Bio::Seq ();
Modules aren't "included" in Perl like they are in C. They are "loaded", by which I mean "executed".
A module will only be loaded/executed once, no matter how many use statements specify it.
The only thing that happens for every use of a module is the call to the module's import method. That is typically used to export symbols to the using namespace.
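That split (body runs once, import runs every time) is easy to observe. In this sketch the module is inlined and registered in %INC by hand (hypothetical package name) so that use works without a separate file:

```perl
use strict;
use warnings;

BEGIN {
    package Counting;
    our ($loads, $imports) = (0, 0);
    $loads++;                         # the body runs once, when compiled
    sub import { $imports++ }         # but import runs on every `use`
    $INC{'Counting.pm'} = __FILE__;   # satisfy require without a file
}

use Counting;   # require: %INC hit, body not rerun; import: called
use Counting;   # same again

print "loads=$Counting::loads imports=$Counting::imports\n";
# prints: loads=1 imports=2
```

So a module used from many files pays its compilation cost once, plus one (usually cheap) import call per use.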
I guess you want to optimize the loading (use) of a module.
For that, dynamic loading may be helpful.
For dynamically loading a Perl module, you can use Class::Autouse.
For more details you can visit this link.
I guess the OP may be looking for a way to avoid a long list of use-statement boilerplate at the beginning of a Perl script. In that case, I'd like to point everyone to Import::Into. It works like the import keyword in Java and Python. Also, this blog post provides a wonderful demo of Import::Into.
I'm currently refactoring a test suite built up by a colleague and would like to use Test::Class[::Most] while doing so. As I started I figured out I could really use a couple of Moose roles to decouple code a little bit. However, it seems it's not quite possible -- I'm getting error messages like this one:
Prototype mismatch: sub My::Test::Class::Base::blessed: none vs ($) at
/usr/lib/perl5/vendor_perl/5.8.8/Sub/Exporter.pm line 896
So the question is: can I use Moose together with Test::Class and if so, how?
PS: The code goes like this:
package My::Test::Class::Base;
use Moose;
use Test::Class::Most;
with 'My::Cool::Role';
has attr => ( ... );
Test::Deep (loaded via Test::Most via Test::Class::Most) is exporting its own blessed, along with a lot of other stuff it probably shouldn't be. It's not documented. Moose also exports the more common Scalar::Util::blessed. Since Scalar::Util::blessed is fairly common, Test::Deep should not be exporting its own, different blessed.
Unfortunately, there's no good way to stop it. I'd suggest in My::Test::Class::Base doing the following hack:
package My::Test::Class::Base;
# Test::Class::Most exports Test::Most exports Test::Deep which exports
# an undocumented blessed() which clashes with Moose's blessed().
BEGIN {
    require Test::Deep;
    @Test::Deep::EXPORT = grep { $_ ne 'blessed' } @Test::Deep::EXPORT;
}
use Moose;
use Test::Class::Most;
and reporting the problem to Test::Deep and Test::Most.
You can squelch particular exports via (for example):
use Test::Deep '!blessed';
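The '!name' form is plain Exporter syntax, not Test::Deep-specific: when an import list starts with a negation, the default @EXPORT list is the starting point and the named symbols are dropped from it. A self-contained sketch (package inlined and registered in %INC; names hypothetical):

```perl
use strict;
use warnings;

BEGIN {
    package Demo;
    use Exporter 'import';
    our @EXPORT = qw(alpha blessed);   # blessed clashes with Scalar::Util's
    sub alpha   { 'a' }
    sub blessed { 'not the real one' }
    $INC{'Demo.pm'} = __FILE__;
}

use Demo '!blessed';    # export the defaults, minus blessed

print defined &main::alpha   ? "alpha imported\n"    : "";
print defined &main::blessed ? "" : "blessed squelched\n";
```

This leaves every other default export intact, so it is less invasive than editing @Test::Deep::EXPORT in a BEGIN block.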
I've just released an updated version of Test::Most. If you install 0.30, this issue goes away.
Folks finding this page might also be interested to know about the various Test::Class-Moose mashup modules:
Test::Able
Test::Sweet
Test::Class::Moose (not yet on CPAN)
With any of these, some amount of refactoring would be required; the syntax varies. However, with some amount of find-and-replace you may be able to make a fairly quick transition.