This question requires an understanding of the compile phase vs. the BEGIN block. From Programming Perl, 3rd Edition, page 467:
It's also important to understand the distinction between compile phase and compile time, and between run phase and run time. A typical Perl program gets one compile phase, and then one run phase. A "phase" is a large-scale concept. But compile time and run time are small-scale concepts. A given compile phase does mostly compile-time stuff, but it also does some run-time stuff via BEGIN blocks. A given run phase does mostly run-time stuff, but it can do compile-time stuff through operators like eval STRING.
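That large-scale/small-scale distinction can be made concrete with a tiny script. A minimal sketch (the @order array is invented for illustration): the BEGIN block's side effect happens during the compile phase, so it lands before the run-time statement written above it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

our @order;
push @order, 'run time';               # executes second, during the run phase
BEGIN { push @order, 'compile time' }  # executes first, while compiling

print "$_\n" for @order;               # prints: compile time, then run time
```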
Let's take a very simple example:
sub complex_sub {
    die 'code run';
}
sleep 5;
print 'good';
use constant FOO => complex_sub();
If the above is run as-is, then complex_sub, from the user's perspective, is run in the compile phase. However, with slight modifications I can have:
# Bar.pm
package Bar {
    use constant FOO => main::complex_sub();
}

# test.pl
package main {
    sub complex_sub {
        die 'code run';
    }
    sleep 5;
    print 'good';
    require Bar;
}
In the above code, complex_sub is run in the run phase. Is there any way to differentiate these two cases from the perspective of complex_sub, so as to allow the first usage but prohibit the second?
Use the ${^GLOBAL_PHASE} variable. It contains "START" in the first case, but "RUN" in the second one.
# RUN
perl -wE'say ${^GLOBAL_PHASE}'
# START
perl -wE'BEGIN {say ${^GLOBAL_PHASE}}'
# RUN
perl -wE'eval q{BEGIN {say ${^GLOBAL_PHASE}}}'
See perlvar for details.
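Applied to the question: complex_sub can inspect ${^GLOBAL_PHASE} itself and refuse to run once the run phase has begun. A minimal sketch (the guard message and the return value of 42 are invented for illustration; needs perl 5.14+ for ${^GLOBAL_PHASE}):

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub complex_sub {
    # Allow calls only while the main program is being compiled ("START").
    die "complex_sub() only works at compile time, not in phase ${^GLOBAL_PHASE}\n"
        unless ${^GLOBAL_PHASE} eq 'START';
    return 42;
}

use constant FOO => complex_sub();    # fine: this runs during compilation

print FOO, "\n";                      # 42
my $blocked = !eval { complex_sub(); 1 };   # by now the phase is "RUN"
print "blocked at run time\n" if $blocked;
```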
You could do:
#!/usr/bin/env perl
use Const::Fast;
sub complex_sub {
    die 'code run';
}
sleep 5;
print 'good';
const my $FOO => complex_sub();
Which outputs:
<5 second pause>
code run at /tmp/so1.pl line 6.
good
This works because, although the lexical variable is declared at compile time, it isn't set until run time.
Related
I have a pretty big Perl script executed quite frequently (from cron).
Most executions require only pretty short and simple tests.
How can I split the single-file script into two parts, with "part two" compiled based on a decision made in "part one"?
Considered solutions:
using a BEGIN{ …; exit if …; } block for the trivial tests.
a two-file solution, with file_1 using require to compile and execute file_2.
I would prefer a single-file solution to ease maintenance, if the cost is reasonable.
First, you should measure how long the compilation really takes, to see if this "optimization" is even necessary. If it does happen to be, then since you said you'd prefer a one-file solution, one possible solution is using the __DATA__ section for code like so:
use warnings;
use strict;
# measure compilation and execution time
use Time::HiRes qw/ gettimeofday tv_interval /;
my $start;
BEGIN { $start = [gettimeofday] }
INIT { printf "%.06f\n", tv_interval($start) }
END { printf "%.06f\n", tv_interval($start) }
my $condition = 1; # dummy for testing
# conditionally compile and run the code in the DATA section
if ($condition) {
    eval do { local $/; <DATA> . '; 1' } or die $@;
}
__DATA__
# ... lots of code here ...
I see two ways of achieving what you want. The simple one would be to divide the script into two parts. The first part will do the simple tests. Then, if you need to do more complicated tests, you may "add" the second part. The way to do this is using eval, like this:
<first-script.pl>
...
eval `cat second-script.pl`;
if ($@) {
    print STDERR $@, "\n";
    die "Errors in the second script.\n";
}
Or using File::Slurp in a more robust way:
eval read_file("second-script.pl", binmode => ':utf8');
Or following @amon's suggestion:
do "second-script.pl";
Only beware that do is different from eval in this way:
It also differs in that code evaluated with do FILE cannot see lexicals in the enclosing scope; eval STRING does. It's the same, however, in that it does reparse the file every time you call it, so you probably don't want to do this inside a loop.
The eval will execute in the context of the first script, so any variables or initializations will be available to that code.
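A minimal sketch of that context sharing (the variable names are invented for illustration): code run with eval STRING can read the first script's lexicals, which is exactly what do FILE would not give you.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $config = 'loaded by part one';    # a lexical in the "first script"

# pretend this string was slurped from second-script.pl
my $second_part = 'our $seen = "second part saw: $config"; 1';

eval $second_part or die $@;          # the eval'd code sees $config
print "$main::seen\n";                # second part saw: loaded by part one
```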
Related to this, there is this question: Best way to add dynamic code to a perl application, which I asked some time ago (and answered myself with the help of the comments provided and some research). I took some time to document everything I could think of for anyone (and myself) to refer to.
The second way I see would be to turn your testing script into a daemon and have the crontab bit call this daemon as necessary. The daemon remains alive, so any data structures that you may need will remain in memory. On the down side, this will consume resources continuously, as the daemon process will always be running.
I have a simple script:
our $height = 40;
our $width = 40;
BEGIN {
    GetOptions( 'help' => \$help,
                'x=i'  => \$width,
                'y=i'  => \$height ) or die "No args.";
    if ($help) {
        print "Some help";
        exit 0;
    }
    print $width."\n";  # it is 10 when called with script.pl -x 10 -y 10
    print $height."\n"; # it is 10 when called with script.pl -x 10 -y 10

    # some other code which checks installed modules
    eval 'use Term::Size::Any qw( chars pixels )';
    if ( $@ ) {
        if ( $@ =~ /Can't locate (\S+)/ ) {
            warn "No modules";
            exit 2;
        }
    }
}
print $width."\n"; #but here is still 40 not 10
print $height."\n";#but here is still 40 not 10
I call this script with two parameters (x and y), for example: script.pl -x 10 -y 10. But the given values are not kept in the variables $width and $height. I want to change these variables by giving arguments. How can I save the given values into $width and $height? Is it possible?
EDITED - I added some code to this example
A BEGIN block is executed before the normal code. Because you declare $height and $width with an assignment, the assignment to 40 runs after the options have already been processed in the BEGIN block.
Solution: process the options outside the BEGIN clause.
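A sketch of that fix, assuming the script uses Getopt::Long (the question doesn't show its imports): with no BEGIN block, the defaults and the option parsing happen in plain top-to-bottom run-time order, so the parsed values are not overwritten.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my $height = 40;    # defaults assigned first...
my $width  = 40;
my $help;

# ...then possibly overwritten by -x / -y, still at run time
GetOptions('help' => \$help, 'x=i' => \$width, 'y=i' => \$height)
    or die "No args.";

print "$width\n";   # 10 when invoked as: script.pl -x 10 -y 10
print "$height\n";
```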
The problem is that declarations-with-assignment like our $height = 40 are executed in two phases. The declaration is performed at compile time, while the assignment is done at run time. That means something like
my $x = 0;
BEGIN {
    $x = 1;
}
say $x;
will display 0: $x is declared at compile time and set to 1, also at compile time, by the BEGIN block. But it is then set to zero at run time.
All you need to do is change the declaration/definition to just a declaration. That way there is no run-time assignment to overwrite the value set by the BEGIN block.
use strict;
use warnings 'all';
use feature 'say';
my $xx;
BEGIN {
    $xx = 1;
}
say $xx;
output
1
Note that there is no need for our; my is almost always preferable. And please don't use BEGIN blocks to execute significant chunks of code: they should be reserved for preparatory actions, comparable to loading the required modules before run time starts. No one expects a program to output help text when it won't even compile, which is what you are trying to do.
All BEGIN blocks are executed in the compile phase, as soon as possible (right after they're parsed) -- before the run phase even starts. See this in perlmod and see this "Effective Perler" article. Also, the declaration part of my $x = 1; happens in the compile phase as well, but the assignment is done at runtime.
Thus $height and $width are declared, then your code to process the options runs in its BEGIN block, and once the interpreter gets to the run phase the variables are assigned 40, overwriting whatever had been assigned in that BEGIN block.
Thus a way around that is to only declare these variables, without assignment, before the BEGIN block, and assign that 40 after the BEGIN block if the variables are still undefined (I presume, as default values).
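That approach can be sketched like this (@ARGV is faked inside the BEGIN block so the example is self-contained; the //= defined-or assignment needs perl 5.10+): the BEGIN-time option values survive because the run-time statements only fill in missing defaults.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my ($height, $width, $help);    # declaration only, no assignment

BEGIN {
    local @ARGV = ('-x', '10'); # simulate: script.pl -x 10
    GetOptions('help' => \$help, 'x=i' => \$width, 'y=i' => \$height);
}

$width  //= 40;     # defaults applied at run time, only where still undef
$height //= 40;

print "$width\n";   # 10 -- the BEGIN-time value survives
print "$height\n";  # 40 -- the default kicks in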
However, it is better not to process options, or do any such extensive work, in a BEGIN block.
Here are a couple of other ways to do what you need.
Loading the module during compilation is fine for your purpose, as long as you know at runtime whether it worked. So load it as you do, under eval in a BEGIN block, so that you can (conditionally) set a flag for later use. This flag needs to be declared (without assignment) before that BEGIN block.
my $ok_Term_Size_Any;
BEGIN {
    eval 'use Term::Size::Any qw(chars pixels)';
    $ok_Term_Size_Any = 1 unless $@;
};
# use the module or else, based on $ok_Term_Size_Any
The declaration happens at compile time and, being in BEGIN, so does the assignment, guarded by unless $@. If that condition fails (the module couldn't be loaded), the assignment doesn't happen and the variable stays undefined. Thus it can be used as a flag in further processing.
Also: while the rest of the code isn't shown, I can't imagine a need for our; use my instead.
NOTE Please consult this question for subtleties regarding the following approach
Alternatively, load all "tricky" modules at runtime. Then there are no issues with parsing options, which can now be done normally, at runtime as well, before or after those modules are loaded, as suitable.
The use Module qw(LIST); statement is exactly
BEGIN {
    require Module;
    Module->import(LIST);
};
See use. So to check for a module at runtime, before you'd use it, run and eval that code
use warnings 'all';
use strict;
eval {
    require Module;
    Module->import( qw(fun1 fun2 ...) );
};
if ($@) {
    # Load an alternative module or set a flag or exit ...
};
# use the module or inform the user based on the flag
# use the module or inform the user based on the flag
Instead of using eval (with the requisite error checking), one can use Try::Tiny, but note that there are issues with that, too. See this post, also for a discussion of the choice. The hard reasons for using a module instead of eval-and-$@ were resolved in 5.14.
Until now I assumed that the use keyword loads a module at compile time and require loads it at runtime. If that's true, then loading a module (using use) within an if block should fail, as the block gets executed at runtime!
But I tried testing this using the below code. And the output tells me I am wrong.
#!/usr/bin/perl
my $file = '/home/chidori/dummy.txt';

if ( $file ) {
    use File::Basename;
    my $base_filename = basename($file);
    print "File basename is $base_filename\n";
}
else {
    print "Nothing to display\n";
}
Output
chidori@ubuntu:~$ ./moduletest.pl
File basename is dummy.txt
This behavior is implicitly documented in the POD for use.
Because "use" takes effect at compile time, it doesn't respect
the ordinary flow control of the code being compiled. In
particular, putting a "use" inside the false branch of a
conditional doesn't prevent it from being processed.
The use statement is executed during Perl's compile phase, while the parse tree of your code is being built. The compiler does not place the use statement itself into the parse tree, which means it won't be executed again during the run time of the program.
If you are really curious how the code is parsed, then you can inspect the parse tree with B::Deparse.
use Foo;
is basically the same thing as
BEGIN {
    require Foo;
    import Foo;
}
It gets executed as soon as it's compiled, not when the program is eventually run. As such, it's not subject to conditionals and loops.
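If you do want a load that obeys flow control, defer it to run time with require plus an explicit import call. A sketch using the question's example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = '/home/chidori/dummy.txt';

if ($file) {
    require File::Basename;      # loaded here, at run time
    File::Basename->import;      # makes basename() available
    print "File basename is ", basename($file), "\n";
}
```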
It doesn't make much sense to place use File::Basename; inside a block, but it does make sense for other uses of BEGIN and use. It makes sense for lexical pragmas, for example.
use warnings;          # Changes compiler settings.
$x;                    # Warns.
{
    no warnings;       # Changes compiler settings.
    $x;                # Doesn't warn.
    {
        use warnings;  # Changes compiler settings.
        $x;            # Warns.
    }
    $x;                # Doesn't warn.
}
$x;                    # Warns.
1;
I am getting started with Test::More, already have a few .t test scripts. Now I'd like to define a function that will only be used for the tests, but across different .t files. Where's the best place to put such a function? Define another .t without any tests and require it where needed? (As a sidenote I use the module structure created by Module::Starter)
The best approach is to put your test functions, like any other set of functions, into a module. You can then use Test::Builder to have your test diagnostics/fail messages act as if the failure originated from the .t file, rather than your module.
Here is a simple example.
package Test::YourModule;

use Test::Builder;
use Sub::Exporter -setup => { exports => ['exitcode_ok'] }; # or 'use Exporter' etc.

my $Test = Test::Builder->new;

# Runs the command and makes sure its exit code is $expected_code. Contrived!
sub exitcode_ok {
    my ($command, $expected_code, $name) = @_;

    system($command);
    my $exit    = $? >> 8;
    my $message = $!;

    my $ok = $Test->is_num( $exit, $expected_code, $name );
    if ( !$ok ) {
        $Test->diag("$command exited incorrectly with the error '$message'");
    }
    return $ok;
}
In your script:
use Test::More tests => 1;
use Test::YourModule qw(exitcode_ok);
exitcode_ok('date', 0, 'date exits without errors');
Write a module as rjh has demonstrated. Put it in t/lib/Test/YourThing.pm, then it can be loaded as:
use lib 't/lib';
use Test::YourThing;
Or you can put it straight in t/Test/YourThing.pm, call it package t::Test::YourThing and load it as:
use t::Test::YourThing;
The upside is not having to write the use lib line in every test file, and clearly identifying it as a local test module. The downside is cluttering up t/; it won't work if "." is not in @INC (for example, if you run your tests in taint mode, though that can be worked around with use lib "."); and if you decide to move the .pm file out of your project, you have to rewrite all the uses. Your choice.
I need to add unit testing to some old scripts, the scripts are all basically in the following form:
#!/usr/bin/perl
# Main code
foo();
bar();
# subs
sub foo {
}
sub bar {
}
If I try to 'require' this code in a unit test, the main section of the code will run, whereas I want to be able to just test "foo" in isolation.
Is there any way to do this without moving foo and bar into a separate .pm file?
Assuming you have no security concerns, wrap it in a sub { ... } and eval it:
use File::Slurp "read_file";
eval "package Script; sub {" . read_file("script") . "}";
is(Script::foo(), "foo");
(taking care that the eval isn't in scope of any lexicals that would be closed over by the script).
Another common trick for unit testing scripts is to wrap the body of their code into a 'caller' block:
#!/usr/bin/perl
use strict;
use warnings;
unless (caller) {
    # startup code
}
sub foo { ... }
When run from the command line, cron, a bash script, etc., it runs normally. However, if you load it from another Perl program, the "unless (caller) {...}" code does not run. Then in your test program, declare a namespace (since the script is probably running code in package main::) and 'do' the script.
#!/usr/bin/perl
package Tests::Script;  # avoid the Test:: namespace to avoid conflicts
                        # with testing modules
use strict;
use warnings;
do 'some_script' or die "Cannot (do 'some_script'): $!";
# write your tests
'do' is more efficient than eval and fairly clean for this.
Another trick for testing scripts is to use Expect. This is cleaner, but is also harder to use and it won't let you override anything within the script if you need to mock anything up.
Ahh, the old "how do I unit test a program" question. The simplest trick is to put this in your program before it starts doing things:
return 1 unless $0 eq __FILE__;
__FILE__ is the current source file. $0 is the name of the program being run. If they are the same, your code is being executed as a program. If they're different, it's being loaded as a library.
That's enough to let you start unit testing the subroutines inside your program.
require "some/program";
...and test...
The next step is to move all the code that lies outside any subroutine into a main() subroutine; then you can do this:
main() if $0 eq __FILE__;
and now you can test main() just like any other subroutine.
Once that's done you can start contemplating moving the program's subroutines out into their own real libraries.
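Putting it all together, here is a self-contained sketch of the pattern (the throwaway program, its foo(), and the temp-file plumbing are all invented for illustration): the guard makes main() run only when the file is executed directly, so a require pulls in the subs without running the program.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a tiny "program" that uses the $0 eq __FILE__ guard to a temp file.
my ($fh, $path) = tempfile(SUFFIX => '.pl', UNLINK => 1);
print $fh <<'END_OF_PROGRAM';
sub foo  { return 'foo was tested' }
sub main { die "main code ran during the test!\n" }
main() if $0 eq __FILE__;   # true when run directly, false under require
1;                          # require() needs a true return value
END_OF_PROGRAM
close $fh;

require $path;              # loads foo() and main() but calls neither
print foo(), "\n";          # now foo() can be tested in isolation
```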