Is it okay to use modules from within subroutines? - perl

Recently I started playing with OO Perl and I've been creating quite a bunch of new objects for a new project that I'm working on. I'm unfamiliar with any best practices regarding OO Perl and we're in a bit of a rush to get it done :P
I'm putting a lot of this kind of code into each of my functions:
sub funcx {
    use ObjectX;   # I don't declare this at the top of the .pm file,
                   # but inside the function itself
    my $obj = new ObjectX;
}
I was wondering whether this will have any negative impact versus putting the use ObjectX line at the top of the Perl module, outside of any function scope.
I was doing this so that I feel it's cleaner in case I need to shift the function around.
And the other thing I have noticed is that when I run a test.pl script directly on the unix server to test my objects, it is slow as heck. But when the same code is run through CGI connected to an Apache server, the web page doesn't load nearly as slowly.

Where to put use?
use occurs at compile time, so it doesn't matter where you put it. At least from a purely pragmatic, 'will it work', point of view. Because it happens at compile time use will always be executed, even if you put it in a conditional. Never do this: if( $foo eq 'foo' ) { use SomeModule }
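If you genuinely need to load a module only under some condition, defer the load to run time with require and call import yourself. A minimal sketch (SomeModule is a placeholder, not a real module):

use strict;
use warnings;

my $foo = 'foo';    # pretend this came from user input

if ( $foo eq 'foo' ) {
    require SomeModule;   # happens at run time, only if the condition is true
    SomeModule->import;   # only needed if SomeModule exports anything
}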
In my experience, it is best to put all your use statements at the top of the file. It makes it easy to see what is being loaded and what your dependencies are.
Update:
As brian d foy points out, things compiled before the use statement will not be affected by it. So, the location can matter. For a typical module, location does not matter, however, if it does things that affect compilation (for example it imports functions that have prototypes), the location could matter.
Also, Chas Owens points out that it can affect compilation. Modules that are designed to alter compilation are called pragmas. Pragmas are, by convention, given names in all lower-case. These effects apply only within the scope where the module is used. Chas uses the integer pragma as an example in his answer. You can also disable a pragma or module over a limited scope with the keyword no.
use strict;
use warnings;
my $foo;
print $foo; # Generates a warning
{
    no warnings 'uninitialized';  # turn off warnings for working with uninitialized values
    print $foo;                   # No warning here
}
print $foo; # Generates a warning
Indirect object syntax
In your example code you have my $obj = new ObjectX;. This is called indirect object syntax, and it is best avoided as it can lead to obscure bugs. It is better to use this form:
my $obj = ObjectX->new;
Why is your test script slow on the server?
There is no way to tell with the info you have provided.
But the easy way to find out is to profile your code and see where the time is being consumed. Devel::NYTProf is a popular profiling tool you may want to check out.
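A typical invocation, assuming Devel::NYTProf is installed, is to run the script under the profiler and then generate an HTML report:

perl -d:NYTProf test.pl
nytprofhtml    # reads nytprof.out and writes a report into ./nytprof/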
Best practices
Check out Perl Best Practices, and the quick reference card. This page has a nice run down of Damian Conway's OOP advice from PBP.
Also, you may wish to consider using Moose. If the long script startup time is acceptable in your usage, then Moose is a huge win.
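As a taste of what Moose buys you, here is a minimal sketch of a Moose-based class (the attribute names are invented for illustration):

package ObjectX;
use Moose;

# Declarative attributes with type constraints and defaults:
has name  => ( is => 'ro', isa => 'Str', required => 1 );
has count => ( is => 'rw', isa => 'Int', default  => 0 );

sub increment {
    my $self = shift;
    $self->count( $self->count + 1 );
}

__PACKAGE__->meta->make_immutable;

1;

# Elsewhere:
# my $obj = ObjectX->new( name => 'widget' );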

question 1
It depends on what the module does. If it has lexical effects, then it will only affect the scope it is used in:
my $x;
{
    use integer;
    $x = 5/2;    # $x is now 2
}
my $y = 5/2;     # $y is now 2.5
If it is a normal module then it makes no difference where you use it, but it is common to use all of those modules at the top of the program.
question 2
Things that can affect the speed of a program between machines
speed of the processor
version of modules installed (some modules have XS versions that are much faster)
version of Perl
number of entries in PERL5LIB
speed of the drive

daotoad and Chas. Owens already answered the part of your question pertaining to the position of use statements. Let me remark on something else here:
I was doing this so that I feel it's cleaner in case I need to shift the function around.
Personally, I find it much cleaner to have all the used modules in one place at the top of the file. You won't have to search for use statements to see what other modules are being used and a quick glance will tell you what is being used and even what is not being used.
Regarding your performance problem: with Apache and mod_perl the Perl interpreter will have to parse and compile your used modules only once. The next time the script is run, execution should be much faster. On the command line, however, a second run doesn't get this benefit.

Related

Perl: Is it possible to dynamically fix compile time error?

If I have, for example, the following Perl script:
use strict;
use warnings;
print $x;
When I run this script, compilation will fail with error:
Global symbol "$x" requires explicit package name (did you forget to declare "my $x"?) at ...
Is it possible to write some perl module which will be called when this error occur and automatically fix this error and continue compilation? (Even links to any info is OK)
# This code is incorrect.
# Here I just ask about such ability
# This code is very weak approximation how it might look
package AutoFix;
sub fix {
    $main::x = 'You are defined now';
}
1;
So next code will not fail and print You are defined now:
use strict;
use warnings;
use AutoFix;
print $x;
How much work would you like to do to create the code that could figure out what the fix should be? And will that amount of work be comparable to, or less than, the work required to examine the code by hand?
Now, I'm writing all of this having spent quite a bit of time trying to come up with a system to analyze CPAN installer output to figure out what went wrong (a major impetus for CPANPLUS, now relegated to history). It's easy to tell that something is not right, but beyond that is a lot of suffering.
In your example, you have an error about an undeclared variable. How does AutoFix know if that should be a package or a lexical variable? You can guess one or the other, but you actually have two big problems:
What is the intent of the code?
Does the code reflect the actual intent?
Determining the intent of the code is often very difficult for even an experienced human programmer to figure out (just read StackOverflow question comments). Code that compiles is often not correct code, in the sense that it doesn't achieve the desired outcome. Furthermore, does the programmer even understand the problem? Does the code the programmer wrote (incorrectly here) reflect the actual work the code should do? It's difficult for humans in code review to figure this out. Tools like Coverity can guess at problems they know about, but they aren't going to be able to correct the code.
But let's say that the programmer understands the problem. Have they correctly expressed that? The longer you've been programming, the more you lean toward "no", in general, in my experience.
This is completely different than the database constraint you mentioned. That's a narrowly targeted fix for an expected and allowed situation. Consider a different parallel: if the record has a New York area code but a Chicago address, should I fix the city? When I was a younger dumbass, I did a similar thing to a database. It was stupid because I thought I knew something I didn't, and everyone who understood the situation recognized it immediately. Even then, those sorts of constraints are how we model what we know about the world, not what the world actually is.
Now, to make AutoFix, you need to make something that can look at code, understand it, and figure out what it should do. You can make guesses, but you have no basis for playing the probabilities there.
Technical matters can't solve this. AutoFix can undo the work of pragmas such that some classes of errors don't show up, but so what? The program with an error just continues? How does that help anyone?
Not only that, compilers tend to complain when they realize they can't parse something. What they complain about is often not the real problem. The first thing I teach people about debugging is that they need to look at the statement immediately preceding the line number in the error message. Any error message you catch can have a virtually infinite number of causes.
Consider this code, which fails in the same way as your example (same error message) but for a completely different but common reason:
use strict;
use warnings;
my $x = 5,
print $x++;
How do you figure out what the fix should be? It's not about declaring $x.
So, you now have two cases, and you build them into your fixer. Then you encounter another case, so you build that in too. And you keep doing this until eventually you have a large dictionary of fixes. Maybe you get a bit crazy and do some machine learning (and wouldn't a corpus of bad code and resolutions be cool?).
But the program still can't continue. It has to start over, because it has to at least back up to the point where it should have done something but didn't. You can't merely restart the program, because you don't know if it's idempotent. Re-running the program might redo work it shouldn't, such as inserting duplicate records into databases.
Having said all that, this sort of thing is related to static analysis and the refactoring browser. Adam Kennedy's Parse Perl Isolated (PPI) project was a first step toward understanding Perl code without compiling it, and then a move toward the Smalltalk ideal of understanding which parts of code represent the same thing. If you knew that two things named foo were the same thing, you could rearrange code dealing with foo. For example, if you renamed a method from bar to set_bar, you could immediately know which bars you should rename and which belonged to some other class.
Adam wrote Acme::BadExample and challenged anyone to get it to run. He wrote "any given piece of Perl source exists in bizarre pseudo-quantum-like state, in that it demonstrates both duality and indeterminism."
Jos Boumans stepped up and used some mind-bending Perl, which he then showed in Barely Legal XXX Perl, which I think he first presented in 2006. He was amazingly creative in his solutions, and in a way that I wouldn't want in production code.
Perl doesn't even know, by design, what type of thing will be in a variable or even that the method you might call on it will exist. In fact, it defers so much to the runtime, trusting that things will be in place by the time you need them, that we often say "only Perl can parse perl". You literally need to be able to run Perl code to properly compile it since BEGIN blocks can affect the parse. For example, a BEGIN can define a subroutine with a certain arity. How do you parse foo 5, 6? You have to know what has already been defined.
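A small illustration of that point, using the foo 5, 6 example from above: the parser has to know, before it reaches the last lines, whether foo exists and what its prototype is, and a BEGIN block is one way earlier code can change that.

use strict;
use warnings;

BEGIN {
    sub foo ($) { return $_[0] }   # the ($) prototype makes foo a named unary operator
}

my @r = (foo 5, 6);   # parses as ( foo(5), 6 ), so @r is (5, 6);
                      # without the prototype it would parse as foo(5, 6)
print scalar(@r), "\n";   # prints 2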
Perl has other "action at a distance" features that make this even tougher. autodie redefines CORE features to add extra behavior, but you might not be able to see that in the code. You can set default regex flags (and I've seen plenty of big screw ups by people applying /isxm to entire files without checking).
As noted above, autofixing compile-time errors is not possible (or at least very hard to do).
Instead of fixing the compile-time error, try to solve your problem in a different way.
For example, in your script you use the variable $x. If you know in advance that you will use it and that you want it to hold some value, e.g. You are defined now, then you could use Exporter:
use strict;
use warnings;
use AutoFix qw/ $x /;
print $x;
And the AutoFix module would look like this:
package AutoFix;
require Exporter;

our @ISA       = qw(Exporter);
our @EXPORT_OK = qw( $x $y $z );  # symbols to export on request

... # code which will create instances of $x $y $z on request

1;
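For illustration only, the elided part might be as simple as assigning the package variables when the module is loaded (the values here are invented):

package AutoFix;
use strict;
use warnings;

require Exporter;
our @ISA       = qw(Exporter);
our @EXPORT_OK = qw( $x $y $z );

# Define the exportable variables at load time, so that
# "use AutoFix qw/ $x /;" imports an already-initialized $x.
our $x = 'You are defined now';
our $y = 'So is y';
our $z = 'And z';

1;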
Good luck ;-)

Exporting symbols without Exporter

I am learning how to use packages and objects in Perl.
It has been suggested that I use Exporter in order to use the functions and variables of a module that have the package directive.
I was curious to know whether there is a way to export symbols without using Exporter? In other words, can Exporter be emulated in some way?
The reason I ask is due to my assumption that Exporter carries extra overhead for functionality that small, simple scripts that must run as fast as possible don't need, and could be avoided by including that functionality with a few simple lines of code.
Maybe a simple illustration of what I mean might help.
Say I have a module which only does this
my $my_string = "my_print";

sub my_print {
    print "$my_string: ", @_;
}
which would allow for a lot of small scripts to use my_print instead of print, with just a simple require with the filename of my module (and very little overhead).
Then, I wanted to use this in another module that has a package declaration, and this no longer works, so now I must use a package declaration and therefore Exporter in my simple module just to get this to work in the new module.
Having been using Perl for quite a while, I am used to the fact that almost everything is quite simple, straightforward, and low overhead, so I just feel that there could be such a solution for this. If not, then I would accept an answer that explains exactly why Exporter is the only way.
What Exporter does is quite simple. Here's how you can do it without using Exporter:
In Foo.pm:
package Foo;
use strict;
use warnings;
sub answer { 42 }
sub import {
    no strict 'refs';
    my $caller = caller;
    *{$caller . "::answer"} = \&answer;
}
1;
In script.pl:
use Foo;
print answer(), "\n";
However, as others have said, you really needn't worry about the overhead of using Exporter. It's a fairly small and efficient module, and has been bundled with every version of Perl since 5.0. What's more, chances are you're already loading it somewhere anyway -- many of the core Perl modules (such as Carp, Scalar::Util, List::Util, etc.) use it.
Update: in an earlier version of the code above, I forgot the no strict 'refs';. This is necessary for *{$some_string} to work.
Exporter is definitely low overhead. There is also a list of alternative modules in the See Also section.
If you're curious, you can do a search on CPAN for the module and view the source yourself. You'll notice that the majority of the "code" is just documentation. However, honestly, if you're this new to Perl, you shouldn't be worrying about streamlining your code to be lightweight as much as you should be aiming to take advantage of as many resources as possible to make coding quicker and easier.
Just my $.02
It sounds very much like you're trying to find an excuse to avoid Exporter? Have you really had programs that start up too slowly? Exporter is loaded and executed only in the compilation phase.
If you are genuinely worried about compilation speed then you should write
BEGIN { require MyPackage }
and then later
MyPackage::myprint($myparam)
which involves no overhead from Exporter at all, or even from the equivalent in-line code.
Yes, the code that actually exports symbols from one package to another is just a single line in the code of Exporter.pm and you could duplicate it. But wouldn't you much rather just add
use MyPackage;
at the start of your program and know that, from any context, the symbols from MyPackage would be correctly exported?
If you have "small simple scripts that must be run as fast as possible" then you should investigate leaving them running as daemons rather than recompiling the Perl each time the program is run.

In Perl scripts, should we use shell commands or call Perl functions that imitate shell operations?

I want to know about the best practices here. Suppose I want to get the content of some line of a file. I can use a one-line shell command to get my answer, or write a subroutine, as shown in the code below.
A text file named some_text:
She laughed. Then both continued eating in silence, like strangers,
but after dinner they walked side by side; and there sprang up
between them the light jesting conversation of people who are free
and satisfied, to whom it does not matter where they go or what
they talk about.
Code to get content of line 5 of the file
#!perl
use warnings;
use strict;
my $file = "some_text";
my $lnum = 5;
my $shellcmd = "awk 'NR==$lnum' $file";
print qx($shellcmd);
print getSrcLine($file, $lnum);
sub getSrcLine {
    my ($file, $lnum) = @_;
    open FILE, $file or die "$!";
    my @ray = <FILE>;
    return $ray[$lnum-1];
}
I ask this because I see a lot of Perl scripts where at some point, a shell command was called, while at some later point, the same task was done by a call to a (library or handwritten) function, for example, rm -rf versus File::Path::rmtree. I just want to make it consistent.
What is the recommended thing to do?
If there's a Perl function for the operation, Perl thinks you should use its version. However, you give an example of a Perl module providing a pure Perl way to do it. That's much different. There's no single answer (as in most things), so you have to decide for yourself what to do:
Does the pure Perl approach do it correctly? For example, File::Copy has some limitations because it makes some awkward decisions for the user, so many people think it's broken. See, for instance, File::Copy versus cp/mv.
Does the pure Perl approach do it in an acceptable time? Sometimes the external program is orders of magnitude faster. Sometimes it's a lot slower.
External commands usually are portable within a family of systems (e.g. all linux-like systems) but probably not across families (e.g. Windows and linux). Your tolerance for that might affect your answer. Even if you think you are running the same command, the different flavors of unix-like systems might have different switches for the operations.
Passing complicated arguments—spaces, quotes, and special characters—to external commands can make you cry. You have to do a lot of fiddly work to make sure you're handling arguments correctly. Perl subroutines don't care though.
You have to pay much more attention to what you are doing when you are using the external command. If you just call rm, Perl is going to search through your PATH and use the first thing called rm. That doesn't mean it's the program you think it is. I write about this quite a bit in the "Secure Programming Techniques" in Mastering Perl.
If the pure Perl approach requires a module, especially if that module has many complicated dependencies, you might be in for dependency or distribution hell down the road.
Personally, I start with the pure Perl approach until it doesn't work for the situation.
For your particular examples, I'd use Perl. Shelling out to awk, which is a proto-Perl, is just odd. You should be able to do everything awk does right in Perl. If you have an awk program, you can convert it to Perl with the a2p program:
NR==5
a2p turns that into (modulo some setup bits at the start):
while (<>) {
    print $_ if $. == 5;
}
Notice that it still scans the entire file even after it has printed the fifth line. However, you can use the translated program as a starting point:
while (<>) {
    if ( $. == 5 ) {
        print;
        last;
    }
}
I don't think you should shell out to some other program to avoid that Perl code.
To remove a directory tree, I like File::Path. It has some dependencies, but they are all in the Perl Standard Library. There's very little pain, if any, associated with that module. I'd use it until I ran into a problem where it didn't work.
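For instance, a hedged sketch of removing a tree with File::Path instead of shelling out to rm -rf (the path is just an example):

use strict;
use warnings;
use File::Path qw(remove_tree);

# Remove a directory tree in pure Perl; collect errors instead of dying.
remove_tree( '/tmp/example-build', { error => \my $errors } );

if ( @$errors ) {
    warn "Had ", scalar @$errors, " problem(s) removing the tree\n";
}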
If you want your app to be portable to non-unix systems, then definitely code everything in Perl.
If not, it's really up to you... creating a new process is slower, but if speed doesn't matter for the task, then it doesn't matter. Personally, I would pick the solution I can implement more quickly.
It seems to me that code that works should be the first priority. Yours fails if the file name has a space in it, for example.
Using the shell makes it harder to code correctly since your program needs to properly generate another program to be run by sh. (This problem goes away if you use the multi-arg version of system to avoid the shell.)
Furthermore, using external tools can make it hard to handle errors. You didn't even attempt to do so!
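For example, here is a sketch of the multi-argument form with basic error checking (the filename is deliberately awkward to make the point):

use strict;
use warnings;

my @cmd = ( 'awk', 'NR==5', 'file with spaces.txt' );

# The list form of system() bypasses the shell, so spaces and quotes in
# the arguments need no escaping; the return value still must be checked.
system(@cmd) == 0
    or die "awk failed: ", ( $? == -1 ? $! : 'exit status ' . ( $? >> 8 ) ), "\n";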
On the flip side, there are multiple reasons for using external tools. For example, Perl doesn't provide as good a file copy utility as cp; using the sort tool allows you to sort arbitrarily large files with limited RAM; etc.

Is Perl unit-testing only for modules, not programs?

The docs I find around the ’net and the book I have, Perl Testing, either say or suggest that unit-testing for Perl is usually done when creating modules.
Is this true? Is there no way to unit-test actual programs using Test::More and cousins?
Of course you can test scripts using Test::More. It's just harder, because most scripts would need to be run as a separate process from which you capture the output, and then test it against expected output.
This is why modulinos (see chapter 17 in: brian d foy, Mastering Perl, second edition, O'Reilly, 2014) were developed. A modulino is a script that can also be used as a module. This makes it easier to test, as you can load the modulino into your test script and then test its functions like you would a regular module.
The key feature of a modulino is this:
#! /usr/bin/perl
package App::MyName; # put it in a package
run() unless caller; # Run program unless loaded as a module
sub run {
    ... # your program here
}
The function doesn't have to be called run; you could use main if you're a C programmer. You'd also normally have additional subroutines that run calls as needed.
Then your test scripts can use require "path/to/script" to load your modulino and exercise its functions. Since many scripts involve writing output, and it's often easier to print as you go instead of doing print sub_that_returns_big_string(), you may find Test::Output useful.
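A small test sketch along those lines (the script path, package name, and expected output are assumptions, not from the question):

use strict;
use warnings;
use Test::More tests => 2;
use Test::Output;

# Loading the modulino does not run it, because caller() is true inside require:
require './bin/myapp.pl';

ok( defined &App::MyName::run, 'run() is defined' );

stdout_like( sub { App::MyName::run() },
             qr/expected output/,
             'run() prints what we expect' );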
It's not the easiest way to test your code, but you can test the script directly. You can run the script with specific parameters using system (or, better, IPC::Run3), capture the output, and compare it with the expected result.
But this will be a top-level test. It will be hard to tell which part of your code caused the problem.
Unit tests are used to test individual modules. This makes it easier to see where a problem came from. Also, testing functions individually is much easier, because you only need to think about what can happen in a smaller piece of code.
The way you test depends on your project's size. You can, of course, put everything into a single file, but putting your code into a module (or even splitting it into several modules) will give you benefits in the future: the code can be more easily reused and tested.

What's the best way to have two modules which use functions from one another in Perl?

Unfortunately, I'm a total noob when it comes to creating packages, exporting, etc. in Perl. I tried reading some of the modules and often found myself dozing off from the long chapters. It would be helpful if I could find what I need to understand in just one simple web page without the need to scroll down. :P
Basically I have two modules, A & B, and A will use some functions from B and B will use some functions from A. I get tons of warnings about functions being redefined when I try to compile via perl -c.
Is there a way to do this properly? Or is my design flawed? If so, what would be a better way? The reason I did this is to avoid copying and pasting the other module's functions into this module and renaming them.
It's not really good practice to have circular dependencies. I'd advise factoring something or other out into a third module so that A depends on B, A depends on C, and B depends on C.
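A hedged sketch of that layout (Alpha, Beta and Common are placeholder names):

# Common.pm - the shared code both modules need
package Common;
use strict;
use warnings;
sub shared_helper { return 42 }
1;

# Beta.pm - depends only on Common
package Beta;
use strict;
use warnings;
use Common;
sub do_beta { return Common::shared_helper() * 2 }
1;

# Alpha.pm - depends on Beta and Common; Beta no longer depends on Alpha
package Alpha;
use strict;
use warnings;
use Beta;
use Common;
sub do_alpha { return Common::shared_helper() + Beta::do_beta() }
1;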
So... the suggestion to factor out common code into another module is a good one. But you shouldn't name modules *.pl, and you shouldn't load them by require-ing a certain pathname (as in require "../lib/foo.pl";). (For one thing, saying '..' makes your script depend on being executed from the same working directory every time. So your script may work when you run it as perl foo.pl, but it won't work when you run it as perl YourApp/foo.pl. That is generally not good.)
Let's say your app is called YourApp. You should build your application as a set of modules that live in a lib/ directory. For example, here is a "Foo" module; its filename is lib/YourApp/Foo.pm.
package YourApp::Foo;
use strict;

sub do_something {
    # code goes here
}

1;
Now, let's say you have a module called "Bar" that depends on "Foo". You just make lib/YourApp/Bar.pm and say:
package YourApp::Bar;
use strict;
use YourApp::Foo;

sub do_something_else {
    return YourApp::Foo::do_something() + 1;
}

1;
(As an advanced exercise, you can use Sub::Exporter or Exporter to make use YourApp::Foo install subroutines in the consuming package's namespace, so that you don't have to write YourApp::Foo:: before everything.)
Anyway, you build your whole app like this. Logical pieces of functionality should be grouped together in modules (or even better, classes).
To make all this run, you write a small script that looks like this (I put these in bin/, so let's call it bin/yourapp.pl):
#!/usr/bin/env perl
use strict;
use warnings;
use feature ':5.10';

use FindBin qw($Bin);
use lib "$Bin/../lib";

use YourApp;
YourApp::run(@ARGV);
The key here is that none of your code is outside of modules, except a tiny bit of boilerplate to start your app running. This is easy to maintain, and more importantly, it makes it easy to write automated tests. Instead of running something from the command line, you can just call a function with some values.
Anyway, this is probably off-topic now. But I think it's important to know.
The simple answer is to not test-compile modules with perl -c... use perl -e'use Module' or perl -e0 -MModule instead.
perl -c is designed for doing a test compile of a script, not a module. When you run it on one of your modules, the file is compiled as if it were the main program; the circular use then loads the same file again as a module, so all of its subroutines are compiled a second time, which is where the "redefined" warnings come from.
When recursively using modules, the key point is to make sure anything externally referenced is set up early. Usually this means at least making sure @ISA is set in a compile-time construct (in BEGIN{} or via "use parent" or the deprecated "use base"), and that @EXPORT and friends are set in BEGIN{}.
The basic problem is that if module Foo uses module Bar (which uses Foo), compilation of Foo stops right at that point until Bar is fully compiled and its mainline code has executed. Making sure that whatever parts of Foo are needed by Bar's compilation and mainline code already exist at that point is the answer.
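A hedged sketch of that pattern (module and sub names are invented): each module sets up its export machinery inside BEGIN, so that when its partner's use statement triggers import() mid-compile, everything the import needs already exists.

# Foo.pm
package Foo;
use strict;
use warnings;

BEGIN {
    require Exporter;
    our @ISA       = qw(Exporter);
    our @EXPORT_OK = qw(foo_helper);   # set before Bar's "use Foo" can run
}

use Bar qw(bar_helper);                # Bar, in turn, uses Foo

sub foo_helper   { 'foo' }
sub foo_uses_bar { bar_helper() . '!' }

1;

# Bar.pm
package Bar;
use strict;
use warnings;

BEGIN {
    require Exporter;
    our @ISA       = qw(Exporter);
    our @EXPORT_OK = qw(bar_helper);
}

# Foo is only partly compiled at this point, but its @ISA and @EXPORT_OK
# already exist, so the import can find what it needs even though Foo's
# subs haven't been compiled yet.
use Foo qw(foo_helper);

sub bar_helper { foo_helper() . 'bar' }

1;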
(In many cases, you can sensibly separate out the functionality into more modules and break the recursion. This is best of all.)