I'm trying to profile a big application that contains a module creatively named DB. After running it under -d:NYTProf and calling nytprofhtml, both without any additional switches or related environment variables, I get the usual nytprof directory with HTML output. However, it seems that due to some internal logic any output related to my module DB is severely mangled. Just to make sure: DB is pure Perl.
In the top and all-subroutines lists, while other function links point to the relevant -pm-NN-line.html file, links to subs from DB point to the entry script instead.
The "line" link in the "Source Code Files" section does point to DB-pm-NN-line.html, and that file does exist, but unlike all the other files it has no "Statements" table inside, and the "Line" table contains no lines of code at all, only a summary of calls.
Actually, here's a small example:
# main.pl
use lib '.';
use DB;
for (1..10) {
    print DB::do_stuff();
}
# DB.pm
package DB;
sub do_stuff {
    my $a = 1;
    my $b = 2;
    my $c = $a + $b;
    return $c;
}
1;
Try running perl -d:NYTProf main.pl, then nytprofhtml and then inspect nytprof/DB-pm-8-line.html.
I don't know if it happens because NYTProf itself has an internal module named DB or because it handles modules whose names start with DB in some magical way - I've noticed that the output for functions from DBI looks somewhat different too.
Is there a way to change/disable this behavior short of renaming my DB module?
That's hardly an option
You don't really have an option. The special DB package and the associated Devel:: namespace are coded into the perl compiler/interpreter. Unless you want to do without any debugging facilities altogether and live in fear of mysterious, unquantifiable side effects, you must refactor your code to rename your proprietary DB library.
Anyway, the name is very generic and that's exactly why it is expected to be encountered.
On the contrary: Devel::NYTProf is bound to use the existing core package DB. It's precisely because it is such a generic identifier that an engineer should reject it as a choice for production code, which could be required to work with pre-existing third-party code at any point.
a module creatively named DB
This belies your own support of the choice. DB has been a core module since v5.6 in 2000, and anything already on CPAN should have been off the cards from the start. I feel certain that the clash of namespaces must have been discovered before now. Please don't be someone else who just sweeps the issue under the carpet.
Remember that, as it stands, any code you are now putting in the DB package is sharing space with everything that perl has already put there. It is astonishing that you are not experiencing strange and inexplicable symptoms already.
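To make the collision concrete, here is a small sketch, not a fix: DB::DB, $DB::single and %DB::sub are the hook names documented in perldebguts that perl's own debugging support uses, and do_stuff is the sub from the question's example.

# DB.pm -- a sketch of the namespace collision, not a fix.
# Under "perl -d" (the machinery Devel:: profilers build on), perl
# itself calls DB::DB before every statement and records sub details
# in %DB::sub, so anything the application defines in package DB
# shares that space.
package DB;

sub DB {                 # the debugger hook perl expects to find here
    # a real debugger would inspect caller() at this point
}

sub do_stuff {           # application code now sits next to the hook
    return 1 + 2;
}

1;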
I don't see that it's too big a task to fix this as long as you have a proper test suite for your 10MB of Perl code. If you don't, then hopefully you will at least not make the same mistakes again.
Related
I know specific instances of this question have been answered before:
How can I dynamically include Perl modules without using eval?
How do I use a Perl package known only in runtime?
There are also good answers at Perl Monks:
Writing a Perl module that dynamically loads other modules.
Creating subroutines on the fly
But I would like a robust way to add functionality to a Perl application that will be:
Efficient: if the code is not needed it should not be compiled.
Easy to debug: error reporting, if something goes wrong in the dynamic code, should point to the right place in that code.
Easy to extend: adding new code should be as easy as adding a new file or directory+file.
Easy to invoke: the main application should be able to use an "add on" without much trouble. An efficient mechanism to check if the "add on" has already been loaded and if not load it, would be a plus.
To illustrate the point, here are some examples that would benefit from a good solution:
A set of scripts that move data between different applications. For instance, moving data from OpenCart to Prestashop, where each entity in the data model has a specific "add on" that deals with the input or output; an intermediate data model then takes care of the transformation of the data. This could be used to move data in either direction, or even between different versions of the same ecommerce platform.
A web application that needs to render different types of HTML in different places. Each "module" knows how to handle a certain kind of information and accepts parameters to do it. One module outputs HTML, another a list of documents, another a document, another a banner, and so on.
Here are some examples that I have used and that work.
Load a function at run time and output the possible compile errors:
eval `cat $file_with_function`;
if( $@ ) {
    print STDERR $@, "\n";
    die "Errors at file $file_with_function\n";
}
Or more robust using File::Slurp:
eval read_file("$file_with_function", binmode => ':utf8');
Check that a certain function has been defined:
if( !defined &myfunction ) {
die "myfunction is not defined\n";
}
The function may be called from there on. This is fine with one function, but not for many.
If the function is put in a module:
require $file_with_function; # needs the ".pm" extension, i.e. addon/func.pm
$name_of_module->import(); # need to know the module name, i.e. Addon::Func
$name_of_module->myfunction(...);
The require may be protected inside an eval, with $@ then checked as before.
With Module::Load:
load $name_of_module;
Followed by the import and used in the same way. Security should not be a concern as it may be assumed that the dynamic code comes from a trusted place. Are there better ways? Which way would be considered good practice?
In case it helps, I will be using the solution (among other places, but not exclusively) within the Dancer framework.
EDIT: Given the comments, I add some more info. All cases that I have in mind have in common:
There is more than one dynamic piece of code. Probably many to start with.
Each bit of code has the same interface.
Given the comments and the lack of responses, I have done some research to answer my own question. Comments or other answers are welcome!
Dynamic code
By dynamic code I mean code that is evaluated at run-time. In general, I consider it better to compile an application so that you have all the error checking the Perl compiler can offer before starting to execute. Added to use strict and use warnings, you can catch many common mistakes that way. So why use dynamic code at all? These are the reasons I consider:
An application performs many different actions that are chosen depending on the context of execution. For instance, an application extracts certain properties from a file. The way to extract them depends on the file type and we want to deal with many file types, but we do not want to change the application for each new file type we add. We also want the application to start quickly.
An application needs to be expanded on the fly in a way that does not require the application to restart.
We have a large application that contains a number of features. When we deploy the application, we do not want to provide all the possible features all the time, maybe because we licence them separately, maybe because not all of them are able to run under all platforms. By throwing in only the files with the features we want, we have a distribution that does not require changing any code or config files.
How do we do it?
Given the possibilities that Perl offers, solutions to adding dynamic code come in two flavors: using eval and using require. Then there are modules that may help do things in an easier or more maintainable way.
The quick and dirty way
The eval way uses the form eval EXPR to compile a piece of Perl code at run-time. The expression could be a string but I suggest putting the code in a file and grouping other similar files in a convenient place. Then, if possible using File::Slurp:
eval read_file("$file_with_code", binmode => ':utf8');
if( $@ ) {
    die "$file_with_code: error $@\n";
}
if( !defined &myfunction ) {
die "myfunction is not defined at $file_with_code\n";
}
Specifying the character set to read_file makes sure that the file will be interpreted correctly. It is also good to check that the compilation was correct and that the function we expect was defined. So in $file_with_code, we will have:
sub myfunction {
    # Do whatever; maybe return something
}
Then you may invoke the function normally. The function will be a different one depending on which file was loaded. Simple and dynamic.
The modular way (recommended)
The way I would do it, with maintainability in mind, would be using require. Unlike use, which is evaluated at compile time, require may be used to load a module at run time. Out of the various ways to invoke require, I would go for:
my $mymodule = 'MyCompany::MyModule'; # The module name ends up in $mymodule
require $mymodule;                    # but see the note about eval "require ..." below
Also unlike use, require will load the module but will not execute import. So we may use any functions inside the module, and those function names will not pollute the calling namespace. To access the function we will need to use:
$mymodule->myfunction($a, $b);
See below as to how the arguments get passed. This way of invoking a function will add an argument before $a and $b that is usually named $self. You may ignore it if you don't know anything about object orientation.
Because require with a run-time string argument expects a file path (such as MyCompany/MyModule.pm) rather than a package name, and because the module may not exist or may not compile, it is better to use:
eval "require $mymodule";
Then $@ may be used to check for an error in the loading and compiling process. We may also check that the function has been defined with:
if( !$mymodule->can('myfunction') ) {
    die "myfunction is not defined at module $mymodule\n";
}
In this case we will need to create a directory for the modules and a file with the .pm extension for each one:
MyCompany
MyModule.pm
Inside MyModule.pm we will have:
package MyCompany::MyModule;

sub myfunction {
    my ($self, $a, $b) = @_;
    # Do whatever; maybe return something
    # $self will be 'MyCompany::MyModule'
}

1;
The package bit is essential and will make sure that whatever definitions we put inside end up in the MyCompany::MyModule namespace. The 1; at the end will tell require that the module initialization was correct.
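Putting the pieces above together, a minimal caller-side sketch (using the illustrative MyCompany::MyModule and myfunction names from above) might look like this:

#!/usr/bin/perl
use strict;
use warnings;

my $mymodule = 'MyCompany::MyModule';

# require with a run-time string expects a file path, so interpolate
# the bareword inside a string eval and check $@ afterwards
eval "require $mymodule";
die "Could not load $mymodule: $@" if $@;

die "myfunction is not defined in $mymodule\n"
    unless $mymodule->can('myfunction');

my $result = $mymodule->myfunction(3, 4);   # called as a class method
print "$result\n";                          # whatever myfunction returns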
In case we wanted to implement the module using other libraries that could pollute the caller's namespace, we could use the namespace::clean module. This module will make sure the caller does not get any additions to its namespace coming from the module we are defining. It is used in this way:
package MyCompany::MyModule;
# Definitions by these modules will not be available to the code doing the require
use Library1 qw(def1 def2);
use Library2 qw(def3 def4);
...
# Private functions go here and will not be visible from the code doing the require
sub private_function1 {
    ...
}
...

use namespace::clean;

# myfunction will be available
sub myfunction {
    # Do whatever; maybe return something
}
...
1;
What happens if we include a module more than once?
The short answer is: nothing. Perl keeps track of which modules have been loaded, and from where, using the %INC variable. Neither use nor require will load a library twice. use will still add any exported names to the caller's namespace; require will not do that. In case you want to check whether a module has been loaded already, you could use %INC or, better yet, Module::Loaded, which is part of the core in modern Perl versions:
use Module::Loaded;
if( !is_loaded( $mymodule ) ) {
    eval "require $mymodule";
    ...
}
How do I make sure Perl finds my module files?
For use and require, Perl uses the @INC variable to define the list of directories that will be used to look for libraries. Adding a new directory to it may be achieved (among other ways) by adding it to the PERL5LIB environment variable or by using:
use lib '/the/path/to/my/libs';
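Since this answer is about run-time loading, it is also worth noting that @INC can be adjusted directly; the paths below are only examples:

# roughly equivalent to use lib, done by hand
BEGIN { unshift @INC, '/the/path/to/my/libs' }

# or purely at run time, before the corresponding require
unshift @INC, './addons';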
Helper libraries
I have found some libraries that may be used to make the code that uses the dynamic mechanism more maintainable. They are:
The if module: will load a module (or not) depending on a condition: use if CONDITION, MODULE => ARGUMENTS;. It may also be used to unload a module. A short example appears after the Module::Load::Conditional snippet below.
Module::Load::Conditional: will not die on you while trying to load a module and may also be used to check the module version or its dependencies. It is also able to load a list of modules all at once even checking their versions before doing so.
Taken from the Module::Load::Conditional documentation:
use Module::Load::Conditional qw(can_load);
my $use_list = {
    CPANPLUS     => 0.05,
    LWP          => 5.60,
    'Test::More' => undef,
};

print can_load( modules => $use_list )
    ? 'all modules loaded successfully'
    : 'failed to load required modules';
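And here is the promised minimal sketch of the if pragma; Sys::Syslog and the $^O test are only an illustration:

# load Sys::Syslog, and import openlog/syslog, only on non-Windows systems
use if $^O ne 'MSWin32', 'Sys::Syslog' => qw(openlog syslog);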
I am confused about Perl modules. I get that a module can be used to dump a whole bunch of subs into, tidying up the main code.
But, what is the relationship between modules?
Can modules "use" other modules?
Must I use export, or can I get away with dropping that stuff?
How do I solve circular use? (Security.pm uses Html.pm and Html.pm uses Security.pm). I know the obvious answer, but in some cases I need to use Security.pm routines in Html.pm and vice versa - not sure how to get around the problem.
If I remove all "use" clauses from all of my modules ... Then I have to use full sub qualifiers. For example, Pm::Html::get_user_friends($dbh, $uid) will use Security to determine if a friend is a banned user or not (banned is part of Security).
I just don't get this module thing. All the "tutorials" only speak of one module, never multiple, nor do they use real world examples.
The only time I've come across multiple modules is when using OO code. But nothing that tells me definitively one way or another how multiple modules interact.
Modules in Perl come in several flavors and have several different things that make them a module.
Definition
Something qualifies as a module if the following things are true:
the filename ends in .pm,
there is a package declaration in the file,
the package name matches the filename and path; Data/Dumper.pm contains package Data::Dumper,
it ends with a 1 or another true value.
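For instance, a file Acme/Demo.pm meeting all four points could look like this (Acme::Demo is a made-up name used only for illustration):

# Acme/Demo.pm
package Acme::Demo;      # the package name matches the path Acme/Demo.pm

use strict;
use warnings;

sub greet { return "hello" }

1;                       # the file ends with a true value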
Conventions
Then there are some accepted conventions:
modules should usually only contain one package,
module names should be camel case and should not contain underscores _ (example: Data::Dumper, WWW::Mechanize::Firefox)
module names that are entirely lowercase are not ordinary modules; by convention those names are reserved for pragmas.
Usually a module either contains a collection of functions (subs) or it is object oriented. Let's look at the collections first.
Modules as function collections
A typical module that bundles a bunch of functionality that is related uses a way to export those functions into your code's namespace. A typical example is List::Util. There are several ways to export things. The most common one is Exporter.
When you take a function from a module to put it into your code, that's called importing it. That is useful if you want to use the function a lot of times, as it keeps the name short. When you import it, you can call it directly by its name.
use List::Util 'max';
print max(1, 2, 3);
When you don't import it, you need to use the fully qualified name.
use List::Util (); # there's an empty list to say you don't want to import anything
print List::Util::max(1, 2, 3); # now it's explicit
This works because Perl installs a reference to the function behind List::Util::max into your namespace under the name max. If you don't do that, you need to use the full name. It's a bit like a shortcut on your desktop in Windows.
Your module does not have to provide exporting/importing. You can just use it as a collection of stuff and call them by their full names.
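For completeness, a minimal sketch of a module that uses Exporter to offer an optional import; the package name My::Util and the sub sum_all are made up for the example:

package My::Util;

use strict;
use warnings;
use Exporter 'import';          # borrow Exporter's import()

our @EXPORT_OK = qw(sum_all);   # exported only on request

sub sum_all {
    my $total = 0;
    $total += $_ for @_;
    return $total;
}

1;

# Caller:
#   use My::Util 'sum_all';     # imports sum_all
#   use My::Util ();            # no imports; call My::Util::sum_all(...)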
Modules as a collection of packages
While every .pm file is called a module, people often also refer to a whole collection of things that form a distribution as a module. Something like DBI comes to mind, which contains a lot of .pm files, which are all modules, but people still talk about the DBI module only.
Object oriented modules
Not every module needs to contain stand-alone functions. A module (now we're more talking about the one directly above) can also contain a class. In that case it usually does not export any functions. In fact, we do not call the subs functions any more, but rather methods. The package name becomes the name of the class, you create instances of the class called objects, and you call methods on those objects, which ends up being the functions in your package.
Loading modules
There are two main ways of loading a module in Perl. You can do it at compile time and at run time. The perl1 compiler (yes, there is a compiler even though it's an interpreted language) loads files, compiles them, then switches to run time to run the compiled code. When it encounters a new file to load, it switches back to compile time, compiles the new code, and so on.
Compile time
To load a module at compile time, you use it.
use Data::Dumper;
use List::Util qw( min max );
use JSON ();
This is equivalent to the following.
BEGIN {
    require Data::Dumper;
    Data::Dumper->import;

    require List::Util;
    List::Util->import('min', 'max');

    require JSON;
    # no import here
}
The BEGIN block gets called during compile time. The example in the linked doc helps understand the concept of those switches back and forth.
The use statements usually go at the top of your program. Pragmas come first (use strict and use warnings should always be the very first things after the shebang), then the use statements. They should be used so your program loads everything it needs during startup; that way, it will be faster at run time. For things that run for a long time, or where startup time doesn't matter, like a web application that runs on Plack, this is what you want.
Run time
When you want to load something during run time, you use require. It does not import anything for you. It also switches to compile time for the new file momentarily, but then goes back to run time where it left off. That makes it possible to load modules conditionally, which can be useful especially in a CGI context, where the cost of loading everything on every invocation of the program, even though it might not be needed, outweighs the additional time it takes to parse a new file during the run on the occasions it is actually needed.
require Data::Dumper;
if ($foo) {
    require List::Util;
    return List::Util::max( 1, 2, 3, $foo );
}
It is also possible to pass a string or a variable to require, so you can not only conditionally load things, but also dynamically.
my $format = 'CSV'; # or JSON or XML or whatever
require "My::Parser::$format";
This is pretty advanced, but there are use-cases for it.
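A slightly fuller, hedged sketch of the same idea, with the package-name to file-path conversion and error handling spelled out (My::Parser::CSV is still the made-up example class):

my $format = 'CSV';                       # or JSON or XML or whatever
my $class  = "My::Parser::$format";

(my $path = $class) =~ s{::}{/}g;         # My::Parser::CSV -> My/Parser/CSV
eval { require "$path.pm" };              # require with a string wants a file path
die "Cannot load $class: $@" if $@;

my $parser = $class->new;                 # assuming the class provides new()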
In addition, it's also possible to require normal Perl files with a .pl ending at run time. This is often done in legacy code (which I would call spaghetti). Don't do it in new code. Also don't do it in old code. It's bad practice.
Where to load what
In general you should always use or require every module that you depend on in any given module. Never rely on the fact that some other downstream part of your code loads things for you. Modules are meant to encapsulate functionality, so they should be able to stand on their own at least a little bit. If you want to reuse one of your modules later and you forgot to include a dependency, it will cause you grief.
It also makes it easier to read your code, as clearly stated dependencies and imports at the top help the maintenance guy (or future you) to understand what your code is about, what it does and how it does it.
Not loading the same thing twice
Perl takes care of that for you. When it parses the code at compile time, it keeps track of what it has loaded. Those things go into the super-global variable %INC, which is a hash of names that have been loaded, and where they came from.
$ perl -e 'use Data::Dumper; print Dumper \%INC'
$VAR1 = {
'Carp.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/site_perl/5.20.1/Carp.pm',
'warnings.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/warnings.pm',
'strict.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/strict.pm',
'constant.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/site_perl/5.20.1/constant.pm',
'XSLoader.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/site_perl/5.20.1/x86_64-linux/XSLoader.pm',
'overloading.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/overloading.pm',
'bytes.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/bytes.pm',
'warnings/register.pm' => '/home/julien/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/warnings/register.pm',
'Exporter.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/site_perl/5.20.1/Exporter.pm',
'Data/Dumper.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/x86_64-linux/Data/Dumper.pm',
'overload.pm' => '/home/foo/perl5/perlbrew/perls/perl-5.20.1/lib/5.20.1/overload.pm'
};
Every call to use and require adds a new entry to that hash, unless it's already there. In that case, Perl does not load it again. It still imports names for you if you used the module, though. This also prevents infinite loops when modules have circular dependencies.
Another important thing to keep in mind with regards to legacy code is that if you require normal .pl files, you need to get the path right. Because the key in %INC will not be the module name, but instead the string you passed, doing the following will result in the same file being loaded twice.
perl -MData::Dumper -e 'require "scratch.pl"; require "./scratch.pl"; print Dumper \%INC'
$VAR1 = {
'./scratch.pl' => './scratch.pl',
'scratch.pl' => 'scratch.pl',
# ...
};
Where modules are loaded from
Just like %INC, there is also a super-global variable @INC, which contains the paths that Perl looks for modules in. You can add stuff to it by using the lib pragma, or via the environment variable PERL5LIB, among other things.
use lib 'lib';
use My::Module; # this is in lib/My/Module.pm
Namespaces
The packages you use in your modules define namespaces in Perl. By default when you create a Perl script without a package, you are in the package main.
#!/usr/bin/env perl
use strict;
use warnings;
sub foo { ... }
our $bar;
The sub foo will be available as foo inside the main .pl file, but also as main::foo from anywhere else. The shorthand is ::foo. The same goes for the package variable $bar. It's really $main::bar, or just $::bar. Use this sparingly. You don't want stuff from your script to leak over into your modules. That's a very bad practice that will come back to bite you later.
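A tiny sketch of those aliases, assuming the script above defined sub foo and our $bar (Some::Module is a made-up name):

package Some::Module;

sub peek {
    main::foo();          # the script's foo()
    ::foo();              # shorthand for main::foo()
    print $main::bar;     # the script's package variable
    print $::bar;         # the same variable, shorter name
}

1;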
In your modules, things are in the namespace of the package they are declared in. That way, you can access them from the outside (unless they are lexically scoped with my, which you should do for most things). That is mostly ok, but you should not be messing with internals of other code. Use the defined interface instead unless you want to break stuff.
When you import something into your namespace, all it is is a shortcut, as described above. This can be useful, but you also do not want to pollute your namespaces. If you import a lot of things from one module into another module, those things will become available in that module too.
package Foo;
use List::Util 'max';
sub foo { return max(1, 2, 3) }
package main; # this is how you switch back
use Foo;
print Foo::max(3, 4, 5); # this will work
Because you often do not want this to happen, you should choose carefully what you want to import into your namespace. On the other hand, you might not care, which can be fine too.
Making things private
Perl does not understand the concept of private or public. When you know how the namespaces work, you can pretty much get to everything that is not lexical. There are even ways to get to lexicals too, but they involve some arcane black magic and I'll not go into them.
However, there is a convention on how to mark things as private. Whenever a function or variable starts with an underscore, it should be considered private. Modern tools like Data::Printer take that into account when displaying data.
package Foo;
# this is considered part of the public interface
sub foo {
    _bar();
}

# this is considered private
sub _bar {
    ...
}
It's good practice to start doing things like that, and to keep away from the internals of modules on CPAN. Things that are named like that are not considered stable, they are not part of the API and they can change at any time.
Conclusion
This was a very broad overview of some of the concepts involved here. Most of it will quickly become second nature to you once you've used it a few times. I remember that it took me about a year during my training as a developer to wrap my head around that, especially exporting.
When you start a new module, the perldoc page perlnewmod is very helpful. You should read that and make sure you understand what it says.
1: notice the small p in perl? I'm talking about the program here, not the name of the language, which is Perl.
(Your question would be a lot easier to read if you used capital letters.)
can modules "use" other modules?
Yes. You can load a module within another module. If you had looked at almost any CPAN module code, you would have seen examples of this.
must i use export, or can i get away with dropping that stuff?
You can stop using Exporter.pm if you want. But if you want to export symbol names from your modules, then you either use Exporter.pm or you implement your own export mechanism. Most people choose to go with Exporter.pm as it's easier. Or you could look at alternatives like Exporter::Lite and Exporter::Simple.
how do i solve circular use (security.pm uses html.pm and html.pm uses security.pm)
By repartitioning your libraries to get rid of these circular dependencies. It might mean that you're putting too much into one module. Perhaps make smaller, more specialised, modules. Without seeing more explicit examples, it's hard to be much help here.
if i remove all "use" clauses from all of my PM's...then i have to use full sub qualifiers. for example, pm::html::get_user_friends($dbh, $uid) will use security to determine if a friend is a banned user or not (banned is part of security)
You're misunderstanding things here.
Calling use does two things. Firstly, it loads the module and secondly, it runs the module's import() subroutine. And it's the import() subroutine that does all of the Exporter.pm magic. And it's the Exporter.pm magic that allows you to call subroutines from other modules using short names rather than fully-qualified names.
So, yes, if you remove use statements from a module, then you will probably lose the ability to use short names for subroutines from other modules. But you're also relying on some other code in your program to actually load the module. So if you remove all use statements that load a particular module, then you won't be able to call the subroutines from that module. Which seems counter-productive.
It's generally a very good idea for all code (whether it's your main calling program or a module) to explicitly load (with use) any modules that it needs. Perl keeps track of modules that have already been loaded, so there is no problem with inefficiency due to modules being loaded multiple times. If you want to load a module and turn off any exporting of symbol names, then you can do that using syntax like:
use Some::Module (); # turn off exports
The rest of your question just seems like a rant. I can't find any more questions to answer.
As much as I can (mostly for clarity/documentation), I've been trying to say
use Some::Module;
use Another::Module qw( some namespaces );
in my Perl modules that use other modules.
I've been cleaning up some old code and see some places where I reference modules in my code without ever having used them:
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
my $demo = Whats::The:Difference::Here($data); # EXAMPLE 2
So my questions are:
Is there a performance impact (I'm thinking compile time) by not stating use x and simply referencing it in the code?
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
What's the difference between calling functions in example 1's style versus example 2's style?
I would say that this falls firmly into the category of preemptive optimisation, and if you're not sure, then leave it in. You would have to be including some vast unused libraries for removing them to help at all.
It is typical of Perl to hide a complex issue behind a simple mechanism that will generally do what you mean without too much thought.
The simple mechanisms are these:
use My::Module 'function' is the same as writing
BEGIN {
    require My::Module;
    My::Module->import( 'function' );
}
The first time perl successfully executes a require statement, it adds an element to the global %INC hash, which has the "pathified" module name (in this case, My/Module.pm) as a key and the absolute location where it found the source as a value.
If another require for the same module is encountered (that is, it already exists in the %INC hash), then require does nothing.
So your question
What happens if I reference a package but don't use/require it?
We're going to have a problem with use, utilise, include and reference here, so I'm code-quoting only use and require when I mean the Perl language words.
Keeping things simple, these are the three possibilities
As above, if require is seen more than once for the same module source, then it is ignored after the first time. The only overhead is checking to see whether there is a corresponding element in %INC
Clearly, if you use source files that aren't needed then you are doing unnecessary compilation. But Perl is damn fast, and you will be able to shave only fractions of a second from the build time unless you have a program that uses enormous libraries and looks like use Catalyst; print "Hello, world!\n";
We know what happens if you make method calls to a class library that has never been compiled. We get
Can't locate object method "new" via package "My::Class" (perhaps you forgot to load "My::Class"?)
If you're using a function library, then what matters is the part of use that says
My::Module->import( 'function' );
because the first part is require and we already know that require never does anything twice. Calling import is usually a simple function call, and you would be saving nothing significant by avoiding it
What is perhaps less obvious is that big modules often include multiple subsidiary modules. For instance, if I write just
use LWP::UserAgent;
then it knows what it is likely to need, and these modules will also be compiled
Carp
Config
Exporter
Exporter::Heavy
Fcntl
HTTP::Date
HTTP::Headers
HTTP::Message
HTTP::Request
HTTP::Response
HTTP::Status
LWP
LWP::MemberMixin
LWP::Protocol
LWP::UserAgent
Storable
Time::Local
URI
URI::Escape
and that's ignoring the pragmas!
Did you ever feel like you were kicking your heels, waiting for an LWP program to compile?
I would say that, in the interests of keeping your Perl code clear and tidy, it may be an idea to remove unnecessary modules from the compilation phase. But don't agonise over it, and benchmark your build times before doing any pre-handover tidy. No one will thank you for reducing the build time by 20ms and then causing them hours of work because you removed a non-obvious requirement.
You actually have a bunch of questions.
Is there a performance impact (thinking compile time) by not stating use x and simply referencing it in the code?
No, there is no performance impact, because you can't do that. Every namespace you are using in a working program gets defined somewhere. Either you used or required it before the point where it's called, or one of your dependencies did, or another way1 was used to make Perl aware of it.
Perl keeps track of those things in symbol tables. They hold all the knowledge about namespaces and variable names. So if your Some::Module is not in the referenced symbol table, Perl will complain.
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
There is no question here. But yes, you should not do that.
It's hard to say if this is a performance impact. If you have a large Catalyst application that just runs and runs for months it doesn't really matter. Startup cost is usually not relevant in that case. But if this is a cronjob that runs every minute and processes a huge pile of data, then an additional module might well be a performance impact.
That's actually also a reason why all use and require statements should be at the top. So it's easy to find them if you need to add or remove some.
What's the difference between calling functions in example 1's style versus example 2's style?
Those are for different purposes mostly.
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
This syntax is very similar to the following:
my $e = Yet::Another::Module::AFunction('Yet::Another::Module', $data);
It's used for class methods in OOP. The most well-known one would be new, as in Foo->new. It passes the thing in front of the -> (whether it's a blessed reference or a plain package name) as the first argument to the function named AFunction in that package. But it does more. Because it's a method call, it also takes inheritance into account.
package Yet::Another::Module;
use parent 'A::First::Module';
1;
package A::First::Module;
sub AFunction { ... }
In this case, your example would also call AFunction because it's inherited from A::First::Module. In addition to the symbol tables referenced above, Perl uses @ISA to keep track of who inherits from whom. See perlobj for more details.
my $demo = Whats::The:Difference::Here($data); # EXAMPLE 2
This has a syntax error. There is a : missing after The.
my $demo = Whats::The::Difference::Here($data); # EXAMPLE 2
This is a function call. It calls the function Here in the package Whats::The::Difference and passes $data and nothing else.
Note that as Borodin points out in a comment, your function names are very atypical and confusing. Usually functions in Perl are written with all lowercase and with underscores _ instead of camel case. So AFunction should be a_function, and Here should be here.
1) for example, you can have multiple package definitions in one file, which you should not normally do, or you could assign stuff into a namespace directly with syntax like *Some::Namespace::frobnicate = sub {...}. There are other ways, but that's a bit out of scope for this answer.
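For completeness, a tiny sketch of the glob assignment mentioned in the footnote (Some::Namespace::frobnicate is the footnote's made-up name):

use strict;
use warnings;

# install a sub into another namespace at run time
*Some::Namespace::frobnicate = sub { return "installed at run time" };

print Some::Namespace::frobnicate(), "\n";   # prints: installed at run time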
I often work on a huge, not-very-well-documented, object-oriented Perl repo at my place of employment. While maintaining the code, I frequently need to trace things that are inherited from other classes so that I can understand what they're doing. For example, I need to figure out what $self->mystery is and what it's doing:
package Foo::Bar;
use Moose;
use Method::Signatures;
use Foo::Bar::Element;
use Foo::Bar::Function;
use base qw (Baz::Foo::Bar);
method do_stuff ($some_arg) {
    # mystery is not defined in Foo::Bar
    my $mystery = $self->mystery;
    $mystery->another_mystery($some_arg);
}
I usually find myself spending way too much time tracing through parent classes. So my question is, is there an easy way for me to figure out where $self->mystery comes from? Or in other words, I need to find where mystery is declared.
And by "easy way", I don't mean using ack or grep to string search through files. I'm hoping there's some sort of debugging module I can install and use which could help give me some insight.
Thank you.
Thanks to Standard Perl . . . the comes_from Method!
You don’t need to download any special tool or module this, let alone some giant IDE because your undocumented class structure has gotten too complicated for mere humans ever to understand without a hulking IDE.
Why not? Simple: Standard Perl contains everything you need to get the answer you’re looking for. The easy way to find out where something comes from is to use the very useful comes_from method:
$origin = $self->comes_from("mystery");
$secret_origin = $self->comes_from("another_mystery");
$birthplace = Some::Class->comes_from("method_name");
That will return the original name of the subroutine which that method would resolve to. As you see, comes_from works as both an object method and a class method, just like can and isa.
Note that when I say the name of the subroutine it resolves to, I mean where that subroutine was originally created, back before any importing or inheritance. For example, this code:
use v5.10.1;
use Path::Router;
my($what, $method) = qw(Path::Router dump);
say "$what->$method is really ", $what->comes_from($method);
prints out:
Path::Router->dump is really Moose::Object::dump
Similar calls would also reveal things like:
Net::SMTP->mail is really Net::SMTP::mail
Net::SMTP->status is really Net::Cmd::status
Net::SMTP->error is really IO::Handle::error
It works just fine on plain ole subroutines, too:
SQL::Translator::Parser::Storable->normalize_name
is really SQL::Translator::Utils::normalize_name
The lovely comes_from method isn't quite built in, though it requires nothing outside of Standard Perl. To make it accessible to you and all your classes and objects and more, just add this bit of code somewhere, anywhere you please really :)
sub UNIVERSAL::comes_from($$) {
    require B;
    my($invocant, $invoke) = @_;
    my $coderef = $invocant->can($invoke) || return;
    my $cv = B::svref_2object($coderef);
    return unless $cv->isa("B::CV");
    my $gv = $cv->GV;
    return if $gv->isa("B::SPECIAL");
    my $subname  = $gv->NAME;
    my $packname = $gv->STASH->NAME;
    return $packname . "::" . $subname;
}
By declaring that as a UNIVERSAL sub, now everybody who’s anybody gets to play with it, just like they do with can and isa. Enjoy!
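Applied to the original question, usage might look roughly like this (Foo::Bar and mystery are the names from the question; the printed text is only illustrative):

use Foo::Bar;

my $obj   = Foo::Bar->new;
my $where = $obj->comes_from('mystery');

print defined $where
    ? "mystery resolves to $where\n"
    : "mystery cannot be resolved via can()\n";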
Are you sure you don't want an IDE? It seems to be what you are asking about. Padre, Eclipse EPIC, Emacs, vim and many other editors offer some variation on the features you mention - probably simpler than you seem to want. If you have a big project to navigate, ctags can help - it's usually easy to integrate into an editor, and you are allowed to hack on your configuration file (with regexes BTW) to get it to recognize bits of a complicated set of source files.
There is a related Perl FAQ entry about IDEs and an SO question: What's a good development environment for Perl?. There are also a host of CPAN modules you will want to use when developing that let you look into your code programmatically:
Devel::Kit
Devel::Peek
SUPER
Devel::Trepan
MooseX::amine / mex
...
You can see an example of a script that looks for methods in classes in the SO node: Get all methods and/or properties in a given Perl class or module.
You might be able to get tools like these to help you hop around in your source in a way you find useful, from a shell or from inside the debugger. Trepan has a good short summary of debugging tools as part of its documentation. Generally, though, you can be very productive combining Data::Dumper and the B:: modules (B::Xref, B::Deparse, etc.) with the debugger and ack.
I've been assigned to pick up a web application written in some old legacy Perl code, get it working on our server, and later extend it. The code was written 10 years ago by a solitary self-taught developer...
The code has weird stuff going on - they are not afraid to do 'lib-param.pl' on line one, and later in the file do '/lib-pl/lib-param.pl' - which is of course a different file.
Including a.pl with methods b() and c() and later including d.pl with methods c() and e() seems to be quite popular too... Packages appear to be unknown, so you'll just find &c() somewhere in the code later.
Interesting questions:
Is there a tool that can draw relations between perl-files? Show a list of files used by each other file?
The same for MySQL databases and tables? Can it show which schemas/tables are used by which files?
Is there an IDE that knows which c() is called - the one in a.pl or the one in d.pl?
How would you start to try to understand the code?
I'm inclined to go through each file and refactor it, but am not allowed to do that - only the strict minimum to get the code working. (But since the code never uses strict, I don't know if I'm gonna...)
Not using strict is a mistake -- don't continue it. Move the stuff in d.pl to D.pm (or perhaps a better name altogether), and if the code is procedural, use Sub::Exporter to get those subs back into the calling package. strict is lexical; you can turn it on for just one package, such as your new package D. To find out which code is being called, use Devel::SimpleTrace.
perl -MDevel::SimpleTrace ./foo.pl
Now any warnings will be accompanied by a full stack trace -- sprinkle warnings around the code and run it.
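Going back to the D.pm suggestion above, a hedged sketch of what the converted module might look like (c() and e() are the sub names from the question; Sub::Exporter is a CPAN module):

package D;

use strict;
use warnings;
use Sub::Exporter -setup => { exports => [qw(c e)] };

sub c { ... }   # body moved over from d.pl
sub e { ... }

1;

# callers then replace  require 'd.pl';  with:
#   use D qw(c e);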
I think the MySQL question should be removed from this. Schema/table mappings have nothing to do with Perl; it seems an out-of-place distraction on this question.
I would write a utility to scan a complete list of all subs and which file they live in; then I would write a utility to give me a list of all function calls and which file they come from.
By the way - it is not terribly hard to write a fairly mindless static analysis tool to generate a call graph.
For many cases, in well-written code, that will be enough to help me out...