When Module::Starter initializes a project, it creates a test called manifest.t.
#!perl -T
use strict;
use warnings;
use Test::More;
unless ( $ENV{RELEASE_TESTING} ) {
    plan( skip_all => "Author tests not required for installation" );
}
eval "use Test::CheckManifest 0.9";
plan skip_all => "Test::CheckManifest 0.9 required" if $@;
ok_manifest();
When you run tests with Build test, here's part of the output:
t\00-load.t ....... ok
t\boilerplate.t ... ok
t\manifest.t ...... skipped: Author tests not required for installation
I understand the outcome in a narrow sense ($ENV{RELEASE_TESTING} is not set, so the tests are skipped), but I don't fully grasp the big picture. What's the intended development process? I assume it's a good idea to run tests to confirm that my module's manifest is accurate. Should I be setting that environment variable? If so, at what point during the development process?
Many module distributions have tests that check not whether the code works, but whether the distribution is in a suitable state for releasing. Things like the MANIFEST being up to date, whether all functions have been documented in POD, etc.
In order to save time, these tests may be written to skip themselves unless the RELEASE_TESTING environment variable is set. This is an informal standard. That way, these tests don't get run when people install the module, nor do they run when the author is just checking to see if a code change broke anything.
You should run RELEASE_TESTING=1 make test (or the Build equivalent) before releasing your dist. If you use Dist::Zilla (which I highly recommend), you can run release tests with dzil test --release. That flag is also set automatically by the TestRelease plugin, which you should definitely use if you use dzil.
Other environment variables commonly used to control testing are AUTOMATED_TESTING and AUTHOR_TESTING. AUTOMATED_TESTING is set by CPAN testers running automated smoke tests.
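As a rough illustration, a test file might branch on these variables along the following lines (a minimal sketch, not taken from any particular distribution; the file name and checks are placeholders):
# t/author-checks.t (illustrative)
use strict;
use warnings;
use Test::More;

diag('Running under a CPAN Testers smoker') if $ENV{AUTOMATED_TESTING};

plan skip_all => 'Author test: set AUTHOR_TESTING or RELEASE_TESTING to run'
    unless $ENV{AUTHOR_TESTING} || $ENV{RELEASE_TESTING};

pass('author-only checks would go here');
done_testing();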
Related
I'm using Test::More to test my application. I have a single script, run_tests.pl, that runs all the tests. Now I want to split this into run_tests_component_A.pl and B, and run both test suites from run_tests.pl. What is the proper way of doing this, does Test::More have any helpful methods?
I'm not using any build system.
Instead of creating a run_tests.pl to run the test suite, the standard practice is to use prove.
Say you have
t/foo.t
t/bar.t
Then,
prove is short for prove t.
prove t runs the entire test suite (both t/foo.t and t/bar.t).
prove t/foo.t runs that specific test file.
perl t/foo.t runs that specific test file, and you get the raw output. Easier for debugging.
perl -d t/foo.t even allows you to run the test in the debugger.
Each file is a self-standing program. If you need to share code between test programs, you can create t/lib/Test/Utils.pm (or whatever) and use the following in your test files:
use FindBin qw( $RealBin );
use lib "$RealBin/lib";
use Test::Utils;
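For example, a shared helper module might look roughly like this (the package name and helper function are only illustrative):
# t/lib/Test/Utils.pm
package Test::Utils;
use strict;
use warnings;
use Exporter 'import';

our @EXPORT_OK = qw(make_fixture);

# Hypothetical helper shared by several .t files
sub make_fixture {
    return { name => 'example', count => 3 };
}

1;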
prove executes the files in alphabetical order, so it's common to name the files
00_baseline.t
01_basic_tests.t
02_more_basic_tests.t
03_advanced_tests.t
The 00 test tests if the modules can be loaded and that's it. It usually outputs the versions of loaded modules to help with dependency problems. Then you have your more basic tests. The stuff that's like "if this doesn't work, you have major problems". There's no point in testing the more complex features if the basics don't work.
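A 00 test along those lines might look something like this sketch (the module names are placeholders for whatever your distribution provides):
# t/00_baseline.t
use strict;
use warnings;
use Test::More;

my @modules = qw( My::App My::App::Config );

for my $module (@modules) {
    use_ok($module) or BAIL_OUT("Cannot load $module");
}

diag("Testing with Perl $], $^X");

done_testing();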
I am writing tests that require external software (Amazon's local DynamoDB server) to be installed and running. Is there some way to tell CPAN Testers what to do?
Or should I just download the server and start it myself in the test setup? That would require Java 6.x or newer to be installed. So I think I am back to the first question.
In case people don't know, CPAN Testers is a group of people who test all of CPAN using automated scripts called smokers.
Further background:
Right now, CPAN Testers shows that 227 machines pass all tests for Amazon::DynamoDB, but that is misleading, since only one of the over seven thousand tests is currently being run: use_ok( 'Amazon::DynamoDB' );. The rest are hidden behind unless statements:
unless ( $ENV{'AMAZON_DYNAMODB_EXPENSIVE_TESTS'} ) {
    plan skip_all => 'Testing this module for real costs money.';
}
And a significant number of the tests do not pass. I have fixed that, but testing now requires either setting three environment variables in the tester's environment and spending money (the current way):
AMAZON_DYNAMODB_EXPENSIVE_TESTS=1
EC2_ACCESS_KEY=<user's AWS access key>
EC2_SECRET_KEY=<user's AWS secret key>
or the installation of the local version of Amazon DynamoDB. If this module is released as is, it will appear broken on all machines it runs on that don't have the prerequisite environment set up (i.e. it will erroneously appear broken rather than erroneously appear to be working).
CPAN Testers run the same tests that your module will run upon installation. Should your tests install other software on the machine? Probably not. Instead, the tests should fail loudly when their prerequisites are not met.
You should also draw a distinction between author tests and installation tests. There is no expectation that the installation tests verify all the functionality. Expensive tests (in this case, tests that literally cost money) shouldn't be part of that. You can run them yourself before you release. However, it might be better to put them in xt/ and guard them with the EXTENDED_TESTING variable instead of a non-standard environment variable. See also the Lancaster Consensus for a discussion of various environment variables during testing of Perl projects.
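A guard along those lines might look like this sketch (assuming the test lives in xt/ and skips, rather than fails, when the variables aren't set):
# xt/dynamodb.t (illustrative)
use strict;
use warnings;
use Test::More;

plan skip_all => 'Set EXTENDED_TESTING=1 to run tests that talk to AWS and cost money'
    unless $ENV{EXTENDED_TESTING};

plan skip_all => 'Set EC2_ACCESS_KEY and EC2_SECRET_KEY to run these tests'
    unless $ENV{EC2_ACCESS_KEY} && $ENV{EC2_SECRET_KEY};

# ... the real DynamoDB tests would go here ...
pass('placeholder for the expensive tests');
done_testing();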
You can also consider using a different provider for your more thorough tests than the donated CPAN Testers capacity, e.g. by setting up Travis CI for your project. Since they give you a container to play around in, you can install extra software. You can also securely provide credentials to your tests. In contrast, the main advantage of CPAN Testers is the diverse range of operating systems, i.e. the lack of control over the testing environment.
Call die from Makefile.PL or Build.PL if the prerequisites for building your module cannot be satisfied. On CPAN Testers, aborting from Makefile.PL will give you an NA test result instead of a FAIL test result, which does not reflect poorly on your module or your build process.
# Makefile.PL
...
if ($ENV{AUTOMATED_TESTING}) {
    if (   !$ENV{AMAZON_DYNAMODB_EXPENSIVE_TESTS}
        || !$ENV{EC2_ACCESS_KEY}
        || !$ENV{EC2_SECRET_KEY} )
    {
        die "To test this module, you must set the environment\n",
            "variables EC2_ACCESS_KEY, EC2_SECRET_KEY, and\n",
            "AMAZON_DYNAMODB_EXPENSIVE_TESTS. Be advised that\n",
            "running these tests will result in charges against\n",
            "your AWS account.";
    }
}
...
Is there some way to tell CPAN Testers what to do?
This is more of a social problem than a technical one.
You can ask the regulars on cpan-testers-discuss to manually set up the requirements; there's precedent for doing so. Not everyone will oblige, of course.
Another possibility is to reach out to your module's users and ask them to become ad-hoc test reporters via Task::CPAN::Reporter/cpanm-reporter or similar.
Look at what other CPAN modules that have external dependencies do, and do something like that.
For example, look at the DBI drivers for various databases. While DBD::File and DBD::SQLite come with their prerequisites bundled, the same is not true for others like DBD::Oracle and DBD::DB2. Or look at Wx which, IIRC, uses an Alien package to install the wxWidgets library.
In your case, I would suggest something more along the lines of the DBD drivers than embedding through Alien, but you have to make that choice.
Is it possible to configure CPAN to run author tests as well if they are appropriate for my OS/Arch? I am more concerned about installing a package that is out of sync with its own test suite without realizing it than taking more time to install new packages.
"Author tests" are usually run or skipped based on an env var, so it's just a question of setting that env var. For example, I use DEVEL_TESTS, so the following would run all of WWW-Kickstarter's tests, including the one that makes sure all references to the distro's version are consistent:
DEVEL_TESTS=1 cpan WWW::Kickstarter
According to the Lancaster Consensus AUTHOR_TESTING is the env var that distribution authors should be using for this kind of testing. In practice, there are other var names out in the wild, but people should probably standardize on this one.
AUTHOR_TESTING=1 cpan Module::NAME
Say I have a Perl module:
use Foo::Bar;
sub add {
    return Foo::Bar::func() + Foo::Buzz::func();
}
There is an error in this module, because it forgets to use Foo::Buzz. This error will not always be caught by my unit tests, for example if the test for Foo::Buzz runs earlier and imports Foo::Buzz before add() is run. If I use this module in production code, it will fail with the error that Foo::Buzz is not imported.
How can I check whether all modules that I use in the code are also imported?
Edit: I want to check the code before deploying it in production, so that no such errors occur there. The example will fail in production, and I want to catch the error before that, for example when I run my unit tests. I want a tool or some code that I can run before deployment that catches this error, like flake8 for Python.
The short answer is you can't. Since Perl is a dynamic language, you can't check before runtime whether all modules are loaded, just as you can't check for many other kinds of bugs in your code.
You can still use some static code analysis, trying to find This::Pattern in files where use This::Pattern; is not present, but it doesn't guarantee anything.
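As a very rough sketch of that idea (a heuristic only; it will produce false positives and misses, and the file name and regexes are purely illustrative):
# find-missing-use.pl -- crude check for Package::Name::func() calls with no matching use/require
use strict;
use warnings;

my $file = shift or die "usage: $0 FILE\n";
open my $fh, '<', $file or die "Cannot open $file: $!";
my $code = do { local $/; <$fh> };

# Packages pulled in with "use" or "require"
my %loaded = map { $_ => 1 } $code =~ /^\s*(?:use|require)\s+([\w:]+)/mg;

# Fully qualified calls such as Foo::Buzz::func(...)
my %called;
$called{$1} = 1 while $code =~ /\b(\w+(?:::\w+)*)::\w+\s*\(/g;

for my $pkg (sort keys %called) {
    print "suspicious: $pkg is called but never loaded in $file\n"
        unless $loaded{$pkg};
}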
If Perl were strictly a dynamic language, you could easily check at runtime whether or not a module is installed. The problem is that Perl isn't 100% dynamic. It does some compilation, and use statements are handled as part of that compilation.
Bulrush is on the right track. Unfortunately, you can't use a use statement to do this. use is executed at compile time, so you'll get an error before your eval runs.
However, there's a clue in the use perldoc page:
use Module LIST
use Module
use VERSION
Imports some semantics into the current package from the named module, generally by aliasing certain subroutine or variable names into your package. It is exactly equivalent to
BEGIN { require Module; Module->import( LIST ); }
There you go! You can use require inside a BEGIN block, which is executed even before the rest of the file is parsed. You can put your eval there to see whether this works, too. You need to use a package variable as a flag to record whether or not it worked, because of scoping issues: a regular lexically scoped variable will disappear when you leave the BEGIN block.
BEGIN {
    our $FOO_BAR_available = 0;   # Must be a package variable
    eval {
        require Foo::Bar;
        Foo::Bar->import( qw(...) );   # Not needed if you don't import any subroutines
    };
    if ( not $@ ) {
        $FOO_BAR_available = 1;
    }
}
Then in your program, you'd have:
our $FOO_BAR_available;
if ( not $FOO_BAR_available ) {
    # Here be dragons...
}
else {
    # Back to your normal code...
}
The our $FOO_BAR_available is a bit confusing. You're not declaring this variable again; you're merely stating that you want to use this variable without prefixing it with the full package name. The variable was set in the BEGIN block, and the our statement here won't affect its value.
You can skip the use of a package variable entirely if the module was written correctly. Modules are supposed to set a package variable called $VERSION. You can use this variable as your flag:
BEGIN {
    eval {
        require Foo::Bar;
        Foo::Bar->import( qw(...) );   # Not needed if you don't import any subroutines
    };
}
Note that I not only don't have to declare a package variable, I don't even have to check whether the eval succeeded.
Then in your program...
if ( not $Foo::Bar::VERSION ) {
    # Here be dragons...
}
else {
    # Back to your normal code...
}
If the module set the $VERSION variable, you know it loaded. Otherwise, you know the module was not loaded.
Addendum
I want to check the code before deploying it in production, to avoid that any errors occur. The example will fail in production, and I want to catch the error before that, for example when I run my unit tests.
Here are my recommendations. It isn't as simple as running a script, but it's much better:
First, define your production environment: What version of Perl does it have? What modules are used? This will help developers know what to expect. I know Foo::Bar is good, but I shouldn't use Far::Bu because production doesn't have that. It's the first step. I'm surprised at the number of places that have no idea what's on their production environment.
Use Vagrant. This defines a virtual machine that matches your production environment. Developers can download it to their system, and have on their desktop a copy of the production environment.
Use Jenkins. Jenkins is a continuous build engine. Yes, you don't compile Perl, but you can still benefit from Jenkins:
Jenkins can run your unit tests for you. Automatic testing with each and every change in the code. You catch your errors early on.
Your Jenkins system can match your production machines - Same Perl version, same Perl modules. If it doesn't run on your Jenkins build machine because something's not installed, there's a good chance it won't run on Production.
You install via Jenkins. Jenkins can package your release, and you can use that to install known releases. No pulling code from a developer's system and finding out that there's something on the system that's not in your version control system. I don't know how many times I've seen a developer spool something up from their machine for production (thoroughly tested! Trust me!), and then we discover that we don't have that code in our version control system because the developer forgot to check something in.
You don't normally run flake8 in a production environment. By then, it's a wee bit late.
Perl has a lot of nice tools that perform a similar function:
Perlbrew: This allows your developers to install a separate Perl interpreter, with its own CPAN module library, on their development systems. They can use this to match the Perl version and the modules of the production environment. This way, they're playing by the same set of rules.
Perl::Critic: This checks your code against the coding standards set forth by Damian Conway in his Perl Best Practices.
B::Lint: This is like the old lint program in C and can catch coding issues.
However, this is stuff to do before you're all set to run in production. Use Vagrant to help developers set up their own private production environment for testing. Use Jenkins to make sure you test in a very production-like environment and catch errors as soon as they happen rather than after UAT testing.
Are there any good (and preferably free) code coverage tools out there for Perl?
As usual, CPAN is your friend: Have a look at Devel::Cover
Yes, Devel::Cover is the way to go.
If you develop a module, and use Module::Build to manage the installation, you even have a testcover target:
perl Build.PL
./Build testcover
That runs the whole test suite, and makes a combined coverage report in nice HTML, where you can browse through your modules and watch their coverage.
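If you're not using Module::Build, a common approach is to load Devel::Cover through the test harness and then build the report with the cover script, roughly like this (exact flags may vary):
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
cover -report html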
As noted, Devel::Cover is your friend, but you'll want to google for it, too. Its documentation is a bit sparse, and if you change your environment radically, you'll need to reinstall it, because it builds Devel::Cover::Inc with a bunch of information pulled from your environment at the time you install it. This has caused plenty of problems for us at work: we have a shared CPAN environment, and if one developer installs Devel::Cover and a different developer tries to run it, strange (and incorrect) results are common.
If you use this module, also check out Devel::CoverX::Covered. This module will capture much of the information which Devel::Cover throws away. It's very handy.
Moritz discusses how modules built with Module::Build can use Devel::Cover easily.
For modules using ExtUtils::MakeMaker, an extension module exists to invoke the same functionality. Adding the following code before the call to WriteMakefile():
eval "use ExtUtils::MakeMaker::Coverage";
if( !$# ) {
print "Adding testcover target\n";
}
... will allow one to run the command 'make testcover' and have Devel::Cover perform its magic.