How to handle code coverage of both Perl scripts and modules?

I have a unique requirement of handling code coverage of my Perl scripts.
I have written a few Perl scripts which in turn use a few Perl modules. My requirement is to run these Perl scripts with the different options they support and assess the code coverage of both the scripts and the modules.
So I am using Devel::Cover, Module::Build and Test::More from CPAN. It all works fine if I call functions from the Perl modules directly inside the test script, but it does not work if I invoke the scripts themselves (in that case no coverage is generated for either the scripts or the modules).
Here is my example test script using Test::More:
use strict;
use warnings;
use Test::More;

BEGIN { plan tests => 1 }

ok(sub {
    my @args = ("ex4200fw", "-query-fw", "-i", "192.168.168.1");
    #print "# Executing @args \n";
    `@args`;
    my $rc = $? >> 8;
    #print "# Return code: $rc \n";
    $rc == 1
}->(), "Query Juniper EX4200 FW, incorrect IP address.\n");
Here ex4200fw (which is in the PATH) is a Perl script written by me, which in turn calls the dependent module updates.pm.
Are there any tools suited to this requirement?
That is, running Perl scripts and getting code coverage of both the scripts and their dependent modules?
Can we accomplish this using the CPAN modules above?
A sample script would be very helpful.

Gathering coverage statistics
To gather coverage statistics you need to use Devel::Cover. (If you can't directly change the source code of the scripts involved, you can simply specify -MDevel::Cover as a parameter to perl.)
So you should change your test script to add this parameter when calling the other Perl script, like the following:
my @args = ("perl", "-MDevel::Cover", "ex4200fw", "-query-fw", "-i", "192.168.168.1");
Alternatively, you can set the environment variable PERL5OPT=-MDevel::Cover before the top-level test script is executed. In that case you won't need to change any script source. Here is a small shell sample:
## run tests and gather coverage statistics
export PERL5OPT=-MDevel::Cover
perl test1.pl
perl test2.pl
...
Calculating the resulting coverage
There is the cover utility, which reports which lines were executed. Run it after all tests have finished. Standard modules are excluded from the report by default.
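Putting it together, a complete run might look like the following (a minimal sketch extending the shell sample above; the test file name t/ex4200fw.t is a placeholder, and cover_db is Devel::Cover's default database directory):
## start from a clean coverage database
cover -delete
## run the tests with coverage enabled for every perl process they spawn
export PERL5OPT=-MDevel::Cover
perl t/ex4200fw.t
## produce the report (the HTML report ends up in cover_db/coverage.html)
cover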

Related

How to test missing optional perl modules via the command line perl call

I have a test suite with hundreds of tests for a perl module I've yet to release which is a command line interface. Since it is a command line interface, the tests (until possibly now) are all written to drop code into a template and then call the template script using a system call.
I recently added an optional dependency on a 3rd party module that's not a part of core perl. I know my module works whether that module is installed or not because I have a computer with it installed and one without and the module works without error in each case. However, I'd like to be able to write a test to confirm that my module will work when the 3rd party module is absent - and I'd like that test to work even if the 3rd party module is installed, but behave as if it wasn't.
Ideally, I could use the structure I've put in place for testing, which makes a system call to a template script. I know I could write a separate test script that manipulates @INC in the BEGIN block, imports the particular methods that use the module, and calls them like a unit test. But I would like to know if there's a way I can use the test structure I've already got all my other tests using, which is to make a system call.
So is there a way to exclude a module from being imported via a perl command line option? I've tried -M-Module, but the code use Module still imports the module.
Incidentally, my module uses the 3rd party module inside an eval, which is how I made it optional.
I wrote Test::Without::Module for this exact case. It works by modifying @INC to prevent loading of modules that you name. For testing, you could either run the test from the command line:
perl -MTest::Without::Module=Some::Module -w -Iblib/lib t/SomeModule.t
Or allow/disallow loading the module from within your test suite:
use Test::Without::Module qw( My::Module );

# Now, loading of My::Module fails:
eval { require My::Module; };
warn $@ if $@;

# Now it works again
eval q{ no Test::Without::Module qw( My::Module ) };
eval { require My::Module; };
print "Found My::Module" unless $@;

How can I use the Environment Modules system in Perl?

How can one use the Environment Modules system* in Perl?
Running
system("load module <module>");
does not work, presumably because it forks to another environment.
* Not to be confused with Perl modules. According to the Wikipedia entry:
The Environment Modules system is a tool to help users manage their Unix or Linux shell environment, by allowing groups of related environment-variable settings to be made or removed dynamically.
It looks like the Perl module Env::Modulecmd will do what you want. From the documentation:
Env::Modulecmd provides an automated interface to modulecmd from Perl. The most straightforward use of Env::Modulecmd is for loading and unloading modules at compile time, although many other uses are provided.
Example usage:
use Env::Modulecmd { load => 'foo/1.0' };
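If the module name is only known at run time, the same distribution also documents run-time loading and unloading. A small sketch (assuming the Env::Modulecmd::load/unload functions described in its documentation; foo/1.0 is a placeholder module name):
use Env::Modulecmd ();                 # no compile-time loads

# Load an environment module at run time; %ENV is updated in this process.
Env::Modulecmd::load('foo/1.0');
# ... do work that relies on the environment set up by foo/1.0 ...
Env::Modulecmd::unload('foo/1.0');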
Alternatively, to do it less in Perl-namespace style and more like the Environment Modules shell interface, you can source the Environment Modules Perl initialization code, just as the other shells do:
do( '/usr/share/Modules/init/perl');
module('load use.own');
print module('list');
For a one-line example:
perl -e "do ('/usr/share/Modules/init/perl');print module('list');"
(This problem, "source a Perl environment module", uses such generic words that it is almost un-searchable.)
system("load module foo ; foo bar");
or, if that doesn't work, then
system("load module foo\nfoo bar");
I'm guessing it makes changes to the environment variables. To change Perl's environment variables, it would have to be executed within the Perl process. That's not going to work since it was surely only designed to be integrated into the shell. (It might not be too hard to port it, though.)
If you are ok with restarting the script after loading the module, you can use the following workaround:
use String::ShellQuote qw( shell_quote );

BEGIN {
    if (!@ARGV || $ARGV[0] ne '!!foo_loaded!!') {
        my $perl_cmd = shell_quote($^X, '--', $0, '!!foo_loaded!!', @ARGV);
        exec("module load foo ; $perl_cmd")
            or die $!;
    }

    shift(@ARGV);
}

Perl: What is the fastest way to run a perl script from within a perl script?

I am writing a Perl script that uses other Perl scripts (not mine). Some of them receive inputs with flags and some don't. Another thing that I need is to redirect the outputs of these scripts to different files. For example:
Without flags: script1.pl arg1 arg2 arg3 > output1.log
With flags: script2.pl -a 1 -b 2 -c 3 > output2.log
Bottom line - I was using system() to do this, but then I found out that the script takes too long.
I tried doing this with do() but it didn't work (like here).
So what is the fastest way to achieve that?
You need to make your test script define a subroutine that executes everything you want to run, then make your main script read the Perl code of the test script and invoke that subroutine - so the test script will look something like this:
#!/usr/bin/env perl
#
# defines how to run a test

use strict;
use warnings;

sub test
{
    my ($arg1, $arg2, $arg3) = @_;

    # run the test
    (...)
}
The main script:
#!/usr/bin/env perl
#
# runs all of the tests
use strict;
use warnings;
require 'testdef.pl'; # the other script
foreach (...)
{
    (...)
    test($arg1, $arg2, $arg3);
}
This is still a very basic way of doing it.
The proper way, as ikegami says, is to turn the test script into a module.
That will be worthwhile if you will be creating more test script files than just these two or if you want to install the scripts in various locations.
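For completeness, here is a minimal sketch of what the module version might look like (the names TestDef.pm and run_test are hypothetical):
# TestDef.pm -- hypothetical module version of the shared test code
package TestDef;

use strict;
use warnings;
use Exporter 'import';

our @EXPORT_OK = qw(run_test);

sub run_test {
    my ($arg1, $arg2, $arg3) = @_;
    # ... run the test, redirecting output as needed ...
    return 1;
}

1;
The main script would then say use TestDef qw(run_test); instead of require 'testdef.pl'; and call run_test() directly.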
Use a multi-argument system call: http://perldoc.perl.org/functions/system.html
This does not invoke the shell, so you can spare a few CPU cycles:
system(qw(script1.pl arg1 arg2 arg3));
Note that shell redirection such as > output1.log is not interpreted in the list form; keep the single-string form if you need it, or redirect STDOUT yourself before spawning.
"As an optimization, may not call the command shell specified in $ENV{PERL5SHELL}. system(1, @args) spawns an external process and immediately returns its process designator, without waiting for it to terminate."
If you are not interested in the return status you could use exec instead, or you could use fork/threads for parallel execution.
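As a sketch of that last idea, here is one way to run a script without a shell while still redirecting its output (the script and file names are the ones from the question; error handling is kept minimal):
use strict;
use warnings;

my @cmd = ('script1.pl', 'arg1', 'arg2', 'arg3');

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: redirect STDOUT to the log file, then replace ourselves with the script
    open STDOUT, '>', 'output1.log' or die "cannot redirect STDOUT: $!";
    exec { $cmd[0] } @cmd or die "exec failed: $!";
}

# parent: wait for the child (skip the waitpid to let several run in parallel)
waitpid($pid, 0);
my $rc = $? >> 8;
print "script1.pl exited with status $rc\n";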

How can I compile a Perl script inside a running Perl session?

I have a Perl script that takes user input and creates another script that will be run at a later date. I'm currently going through and writing tests for these scripts and one of the tests that I would like to perform is checking if the generated script compiles successfully (e.g. perl -c <script>.) Is there a way that I can have Perl perform a compile on the generated script without having to spawn another Perl process? I've tried searching for answers, but searches just turn up information about compiling Perl scripts into executable programs.
Compiling a script has a lot of side-effects. It results in subs being defined. It results in modules being executed. etc. If you simply want to test whether something compiles, you want a separate interpreter. It's the only way to be sure that one testing one script doesn't cause later tests to give false positives or false negatives.
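A tiny illustration of why: even a compile-only pass executes BEGIN blocks and use statements. Given a file like this (demo.pl is a hypothetical name):
# demo.pl -- "compiling" this file already has a visible side effect
BEGIN { print "this runs even under perl -c\n" }
sub answer { 42 }
1;
running perl -c demo.pl prints the message before reporting "demo.pl syntax OK".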
To execute dynamically generated code, use eval function:
my $script = join "\n", <main::DATA>;
eval($script); # prints 3
__DATA__
my $a = 1;
my $b = 2;
print $a+$b, "\n";
However if you want to just compile or check syntax, then you will not be able to do it within same Perl session.
Function syntax_ok from library Test::Strict run a syntax check by running perl -c with an external perl interpreter, so I assume there is no internal way.
The only workaround that may work for you would be:
my $script = join "\n", <main::DATA>;
eval('return;' . $script);
warn $@ if $@; # syntax error at (eval 1) line 3, near "1
               # my "
__DATA__
my $a = 1
my $b = 2;
print $a+$b, "\n";
In this case you will be able to check for compilation errors using $@, but because the first line of the code is return;, the code itself will not execute.
Note: Thanks to user mob for a helpful chat and code correction.
Won't something like this work for you?
open(FILE, "perl -c generated_script.pl 2>&1 |");
@output = <FILE>;
if (join('', @output) =~ /syntax OK/)
{
    printf("No Problem\n");
}
close(FILE);
See the Test::Compile module, particularly its pl_file_ok() function.
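A minimal sketch of what that could look like, assuming Test::Compile's procedural interface as described in its documentation (generated_script.pl is the file from the example above):
use strict;
use warnings;
use Test::More;
use Test::Compile;   # exports pl_file_ok(), which runs a perl -c style check

pl_file_ok('generated_script.pl', 'generated script compiles');

done_testing();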

How can I test that a Perl program compiles from my test suite?

I'm building a regression system (not unit testing) for some Perl scripts.
A core component of the system is
`perl script.pl @params 1>stdoutfile 2>stderrfile`;
However, in the course of actually working on the scripts, they sometimes don't compile (shock!), but the perl command itself still runs. I don't know how to tell from stderr whether Perl failed to compile the script (and therefore wrote to stderr) or my script barfed on its input (and therefore wrote to stderr).
How do I detect whether the program actually executed, without exhaustively enumerating Perl error messages and grepping the stderr file?
It might be easiest to do this in two steps:
system("$^X -c script.pl");
if ($? == 0) {
    # it compiled, now let's see if it runs
    system("$^X script.pl @params 1>stdoutfile 2>stderrfile");
    # check $?
}
else {
    warn "script.pl didn't compile";
}
Note the use of $^X instead of perl. This is more flexible and robust. It ensures that you're running from the same installation instead of whatever interpreter shows up first in your path. The system call will inherit your environment (including PERL5LIB), so spawning a different version of perl could result in hard-to-diagnose compatibility errors.
When I want to check that a program compiles, I check that it compiles :)
Here's what I put into t/compile.t to run with the rest of my test suite. It stops all testing with the "bail out" if the script does not compile:
use Test::More tests => 1;

my $file = '...';

print "Bail out! Script file is missing!" unless -e $file;

my $output = `$^X -c $file 2>&1`;

print "Bail out! Script file does not compile!"
    unless like( $output, qr/syntax OK$/, 'script compiles' );
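Test::More also provides a BAIL_OUT() function that emits the same TAP directive, so an equivalent sketch (assuming a reasonably recent Test::More; the '...' placeholder for the file path is kept from above) would be:
use strict;
use warnings;
use Test::More tests => 1;

my $file = '...';   # path to the script under test

BAIL_OUT("Script file is missing!") unless -e $file;

my $output = `$^X -c $file 2>&1`;

like( $output, qr/syntax OK$/, 'script compiles' )
    or BAIL_OUT("Script file does not compile!");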
Scripts are notoriously hard to test. You have to run them and then scrape their output. You can't unit test their guts... or can you?
#!/usr/bin/perl -w
# Only run if we're the file being executed by Perl
main() if $0 eq __FILE__;
sub main {
...your code here...
}
1;
Now you can load the script like any other library.
#!/usr/bin/perl -w
use Test::More;
require_ok("./script.pl");
You can even run and test main(). Test::Output is handy for capturing the output. You can say local @ARGV to control arguments or you can change main() to take @ARGV as an argument (recommended).
Then you can start splitting main() up into smaller routines which you can easily unit test.
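For instance, a sketch of such a test (the arguments and the expected output pattern are hypothetical; Test::Output supplies stdout_like):
#!/usr/bin/perl -w
use strict;
use Test::More;
use Test::Output;

require_ok("./script.pl");

{
    # control the arguments main() sees, then check what it prints
    local @ARGV = ('-query-fw', '-i', '192.168.168.1');
    stdout_like( sub { main() }, qr/firmware/i, 'main() reports firmware info' );
}

done_testing();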
Take a look at the $? variable.
From perldoc perlvar:
The status returned by the last pipe close, backtick ("``") command, successful call to wait() or waitpid(), or from the system() operator. This is just the 16-bit status word returned by the traditional Unix wait() system call (or else is made up to look like it). Thus, the exit value of the subprocess is really ("$? >> 8"), and "$? & 127" gives which signal, if any, the process died from, and "$? & 128" reports whether there was a core dump.
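Putting that together, a small sketch of decoding $? after running the script via backticks (script.pl and its arguments stand in for the program under test):
use strict;
use warnings;

my @params = ('arg1', 'arg2');    # placeholder arguments
`perl script.pl @params 1>stdoutfile 2>stderrfile`;

my $exit_code = $? >> 8;     # the child's exit value
my $signal    = $? & 127;    # signal that killed it, if any
my $core      = $? & 128;    # true if a core dump was produced

print "exit=$exit_code signal=$signal core=$core\n";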
It sounds like you need IPC::Open3.