I'm sort of new to Perl and I'm wondering if there is a preferred unit testing framework.
Google is showing me some nice results, but since I'm new to this, I don't know if there is a clear preference within the community.
Perl has a MASSIVE set of great testing tools that come with it! The Perl core has several tens of thousands of automated checks, and for the most part they all use these standard Perl frameworks. They're all tied together using TAP - the Test Anything Protocol.
The standard way of creating TAP tests in Perl is using the Test::More family of packages, including Test::Simple for getting started. Here's a quick example:
use 5.012;
use warnings;
use Test::More tests => 3;
my $foo = 5;
my $bar = 6;
ok $foo == 5, 'Foo was assigned 5.';
ok $bar == 6, 'Bar was assigned 6.';
ok $foo + $bar == 11, 'Addition works correctly.';
And the output would be:
ok 1 - Foo was assigned 5.
ok 2 - Bar was assigned 6.
ok 3 - Addition works correctly.
Essentially, to get started, all you need to do is pass a boolean value and a string explaining what should occur!
Once you get past that step, Test::More has a large number of other functions to make testing easier (string compares, regex matches, deep structure compares), and there's the Test::Harness back end that will let you run large groups of individual test scripts together.
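For instance, a quick sketch of a few of those richer Test::More functions (the data here is made up purely for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 3;

my %got      = (name => 'alice', scores => [90, 85]);
my %expected = (name => 'alice', scores => [90, 85]);

is($got{name}, 'alice', 'name matches exactly');          # string/number equality
like($got{name}, qr/^ali/, 'name matches a pattern');     # regex comparison
is_deeply(\%got, \%expected, 'structures are identical'); # deep structure compare
```

is() gives much better diagnostics than a bare ok() on failure, because it prints both the value it got and the value it expected.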
On top of that, as Schwern pointed out, almost all of the modern Test:: modules work together. That means you can use Test::Class (as pointed out by Markus) with all of the great modules listed in rjh's answer. In fact, because Test::Builder - the tool that Test::More and others are built on (and currently maintained by Schwern...thanks Schwern!) - is shared by all of them, you can, if needed, build your OWN test subroutines from the ground up that will work with all the other test frameworks. That alone makes Perl's TAP system one of the nicest out there in my opinion: everything works together, everyone uses the same tools, and you can extend the framework to suit your needs with very little additional work.
Perl's most popular test 'framework' is a test results format known as TAP (Test Anything Protocol) which is a set of strings that look like:
ok 1 - Imported correctly
ok 2 - foo() takes two arguments
not ok 3 - foo() throws an error if passed no arguments
Any script that can generate these strings counts as a Perl test. You can use Test::More to generate TAP for various conditions - checking if a variable is equal to a value, checking if a module imported correctly, or if two structures (arrays/hashes) are identical. But in true Perl spirit, there's more than one way to do it, and there are other approaches (e.g. Test::Class, which looks a bit like JUnit!)
A simple example of a test script (they usually end in .t, e.g. foo.t)
use strict;
use warnings;
use Test::More tests => 3; # Tell Test::More you intend to do 3 tests
my $foo = 3;
ok(defined $foo, 'foo is defined');
is($foo, 3, 'foo is 3');
$foo++;
is($foo, 4, 'incremented foo');
You can use Test::Harness (commonly invoked as prove from the shell) to run a series of tests in sequence, and get a summary of which ones passed or failed.
Test::More can also do some more complex stuff, like mark tests as TODO (don't expect them to pass, but run them just in case) or SKIP (these tests are broken/optional, don't run them). You can declare the number of tests you expect to run, so if your test script dies half-way, this can be detected.
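A minimal sketch of the TODO and SKIP mechanisms in Test::More (the environment variable and reason strings are invented for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# SKIP: don't run these tests at all when the condition isn't met
SKIP: {
    skip 'network tests disabled', 1 unless $ENV{NET_TESTS};
    ok(1, 'would exercise the network here');
}

# TODO: run the test anyway, but report a failure as "expected to fail"
TODO: {
    local $TODO = 'feature not implemented yet';
    ok(0, 'new feature works');   # counted as TODO, not as a real failure
}
```

A skipped test still counts toward the declared plan, and a failing TODO test does not cause the suite to fail.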
Once you begin to do more complex testing, you might find some other CPAN modules useful - here are a few examples, but there are many (many) more:
Test::Exception - test that your code throws an error/doesn't throw any errors
Test::Warn - test that your code does/doesn't generate warnings
Test::Deep - deeply compare objects. They don't have to be identical - you can ignore array ordering, use regexes, ignore classes of objects etc.
Test::Pod - make sure your script has POD (documentation), and that it is valid
Test::Pod::Coverage - make sure that your POD documents all the methods/functions in your modules
Test::DBUnit - test database interactions
Test::MockObject - make pretend objects to control the environment of your tests
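Test::Exception (a CPAN module, not core) essentially wraps the classic eval-and-inspect-$@ pattern. A core-only sketch of what its throws_ok and lives_ok verify, using a made-up divide() subroutine:

```perl
use strict;
use warnings;
use Test::More tests => 2;

sub divide {
    my ($n, $d) = @_;
    die "division by zero\n" if $d == 0;
    return $n / $d;
}

# What Test::Exception's throws_ok checks, done by hand:
eval { divide(1, 0) };
like($@, qr/division by zero/, 'dies on a zero divisor');

# What lives_ok checks: the code runs without dying
is(eval { divide(6, 3) }, 2, 'ordinary division lives and returns 2');
```

Test::Exception packages this pattern up so you don't have to remember to check $@ correctly in every test.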
Definitely start with this page: http://perldoc.perl.org/Test/Simple.html and follow the reference to Test::Tutorial.
If you practice TDD, you will notice that your set of unit tests changes A LOT. Test::Class follows the xUnit pattern (http://en.wikipedia.org/wiki/XUnit).
For me, the main benefit of xUnit is the encapsulation of each test in its own method. The framework names each assertion after the test method it belongs to, and adds the ability to run setup and teardown methods before and after each test.
I have tried the "perl-ish" way for unit testing also (just using Test::More), but I find it kind of old-fashioned and cumbersome.
Some anti-recommendations may be in order:
Anti-recommendation:
Do NOT use the Test::Unit family of test packages for Perl, such as Test::Unit::Assert and Test::Unit::TestCase.
Reason: Test::Unit appears to be abandoned.
Test::Unit, Test::Unit::TestCase, and Test::Unit::Assert work pretty well (as of when I used them, in 2015-2016). Test::Unit is supposedly not integrated with Perl's Test Anything Protocol (TAP), although I found that easy to fix.
But Test::Unit is frustrating because so many of the other Perl test packages, mostly built using Test::Builder, like Test::More, Test::Most, Test::Exception, Test::Differences, Test::Deep, Test::Warn, etc., do NOT interact well with the object oriented testing approach of Test::Unit.
You can mix Test::Unit tests and Test::Builder tests once you have adapted Test::Unit to work with Test::More and TAP; but the good features of those other packages are not available for OO extension, which is much of the reason to use an xUnit-style framework in the first place.
Supposedly CPAN's Test::Class allows you to "Easily create test classes in an xUnit/JUnit style" -- but I am not sure I can recommend it. It certainly doesn't look like xUnit to me: not OO, with idiosyncratic names like is(VAL1,VAL2,TESTNAME) instead of xUnit-style names like $test_object->assert_equals(VAL1,VAL2,TEST_ERR_MSG). Test::Class does have the pleasant feature of auto-detecting all tests annotated :Test, comparable to xUnit and Test::Unit::TestCase's approach of using introspection to run all functions named test_*.
However, the underlying package Test::Builder is object oriented, and hence much more xUnit style. Don't be scared away by the name - it's not a factory, it's mostly a suite object with test assert methods. Although most packages inherit from it, you can call it directly if you wish, e.g. $test_object->is_eq(VAL1,VAL2,TESTNAME), and you can often use Test::Builder calls to work around the limitations of procedural packages like Test::More that are built on top of it - like fixing the call-stack level at which an error is reported.
Test::Builder is usually used singleton style, but you can create multiple objects. I am unsure as to whether these behave as one would expect from an xUnit family test.
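A minimal sketch of calling Test::Builder directly, using its real method names (note it provides is_eq and is_num rather than a single is):

```perl
use strict;
use warnings;
use Test::Builder;

# new() returns the shared singleton by default
my $tb = Test::Builder->new;
$tb->plan(tests => 2);

$tb->ok(1 == 1, 'direct ok() call');
$tb->is_eq('foo', 'foo', 'direct is_eq() call');
```

Because Test::More and friends route everything through this same singleton, asserts made this way interleave correctly with theirs.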
So far, there is no easy way to work around limitations such as the fact that Perl TAP tests use one TEST_NAME per assert, without hierarchy, and without distinguishing TEST_NAMEs from TEST_ERROR_MESSAGEs. (Controlling the error-reporting level helps with that lack.)
It may be possible to create an adapter that makes Test::Builder and TAP-style tests more object oriented, so that you can rebase on something other than TAP (something that records more useful info than TAP does - supposedly like Ant's XML protocol). I think adapting the names and/or the missing concepts will involve either going into Test::Builder or using introspection.
Related
When I use Test::Class and Test::More to do system testing, it seems that the test cases execute in parallel. My tests, however, have dependencies between them, such that I would like to have the tests execute in series. How can I do this?
From the documentation of the module Test::Unit::TestCase in the NOTES section at the bottom:
If you need to specify the test order, you can do one of the following:
1. Set @TESTS (the simplest, and recommended, way):
our @TESTS = qw(my_test my_test_2);
2. Override the list_tests() method to return an ordered list of method names.
3. Provide a suite() method which returns a Test::Unit::TestSuite.
My personal 2 cents:
Using Test::Class instead of Test::Unit::TestCase is probably a better alternative. The module documentation has a good introduction, and a useful section on "Confused JUnit Users", which you should read even if you keep using Test::Unit::TestCase.
Test::Class executes its tests in alphabetic order. It's annoying, but you can name your test subroutines in a way that they will be executed in the proper order. Are you sure they are running in parallel? Are you possibly using prove on more than one file with a --jobs flag?
Perl is one of those languages that supports a form of function overloading by return type. A simple example of this is wantarray().
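A minimal illustration of wantarray in action (the subroutine name is made up):

```perl
use strict;
use warnings;

# wantarray reports the calling context: true in list context,
# false (but defined) in scalar context, undef in void context.
sub context_demo {
    return wantarray ? 'list' : defined wantarray ? 'scalar' : ();
}

my @in_list   = context_demo();   # ('list')
my $in_scalar = context_demo();   # 'scalar'
```

Modules like Contextual::Return build on this same context-detection idea, extending it to many more return types.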
A few nice modules are available on CPAN which extend this wantarray() mechanism and provide overloading for many other return types. These modules are Contextual::Return and Want. Unfortunately, I cannot use them, as both fail Perl::Critic with Perl version 5.8.9 (I cannot upgrade this Perl version).
So I am thinking of writing my own module like Contextual::Return and Want, but very minimal. I tried to understand the Contextual::Return and Want source code, but I am not an expert.
I need function overloading for the return types BOOL, OBJREF, LIST, and SCALAR only.
Please help me with some guidelines on how I can start.
Modules that play with Perl's syntax in the way that Contextual::Return and Want do are pretty much bound to fall foul of Perl::Critic. In this case the main transgressions are occasionally disabling strict and using subroutine prototypes, both of which are minor.
I personally believe that a rule insisting all code must pass an arbitrary set of tests, with no exceptions, is foolish; but I also think that any code that behaves very differently depending on the context in which it is called is likely to be badly designed and difficult to understand and maintain. It is rare to see even wantarray used, as Perl generally does the right thing without you having to ask.
I think you may have come across a module that looks interesting to use, and want to incorporate it into your code somehow. Can you change my mind by showing an example of a subroutine that would require the comprehensive context checking that you describe?
I noticed that in Perl the custom is to stick all tests into the t directory. How do you separate the unit test from the functional ones? Or, to make the question simpler and more obvious, how do you separate the tests that run quickly from the ones that do not? When all the tests run together the testing takes too long to be routinely used in development, which is a pity.
I figured I could set some environment variable like QUICK_TEST and skip the long tests according to its value. Do you separate unit and functional tests? How? (This is not meant to be a poll – I just thought maybe there’s some idiomatic solution.)
Update: So far I have come to this:
package Test::Slow;
use strict;
use Test::More;
BEGIN {
plan(skip_all => 'Slow test.') if $ENV{QUICK_TEST};
}
1;
And in a nearby .t file:
# This is a slow test not meant
# to run frequently.
use Test::Slow;
use Test::More;
It seems to work nicely.
P.S. Now available as Test::Slow on CPAN.
Run prove --state=all,save once to get some info added to .prove.
Run prove --state=slow -j9 if you have a multi-core machine and your tests can be run at the same time.
This will cause your longest running tests to be started at the beginning, so that they will be more likely to finish before all of your other tests are done. This could reduce the overall time to completion, without preventing any tests from being run.
You can certainly divide tests into subdirectories under t, with whatever categorization scheme you want. If you use an environment variable, I'd recommend making the default (if the variable is not set) be to run all tests. I've seen situations where t/ contains just the tests that would be routinely run in development and other tests are put under a different directory (e.g. t-selenium/).
I think it comes down to consistency being more important than which choice you make; just about anything will work if you are consistent.
Usually author-only tests are put into the xt directory. They are run manually. So, if your long tests are author-only, use xt. In general, t is the common location for CPAN modules. For private use, you can put them anywhere you like.
In Test::Manifest, I have a way to assign each test file a level. The test file only runs if the testing level is above that threshold. The lower level would be the stuff I want to run all of the time, the next level the slightly slower stuff, and so on.
However, I hardly ever use it. If I'm concentrating on one part of a system, I just run the test for that part:
% perl -Mblib t/some_test.t
Some people like to use prove to do the same thing.
I only end up running the full test suite when I need the integration testing to see if my changes broke anything else.
Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.
Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well. It formats differences as side-by-side comparisons. As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case you might want to use the Text::Diff module within your test suite.
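Assuming Test::Differences is installed from CPAN, a minimal sketch of its side-by-side comparison (the record data is invented):

```perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::Differences;

my @got      = map { +{ id => $_, name => "user$_" } } 1 .. 3;
my @expected = map { +{ id => $_, name => "user$_" } } 1 .. 3;

# On failure, eq_or_diff prints a side-by-side table of the two structures
eq_or_diff \@got, \@expected, 'records match the reference data';
```

When the structures differ, the diff output makes it much easier to spot exactly which element changed than a plain is_deeply failure would.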
As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's "abandonware" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).
Disclaimer: while I didn't write it, I currently maintain Test::Differences, so I might be biased.
I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit and it seems to provide much of the functionality that is already built in via the standard Test::Simple and Test::More libraries.
I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.
Just make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the "Why you should [not] use Test::Class" sections in the docs.
The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.
You might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.
Finally, using the module-starter tool (from Module::Starter) will give you a really nice "CPAN standard" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.
For testing the output of a program, there is Test::Command. It allows you to easily verify the stdout and stderr (and the exit value) of programs. E.g.:
use Test::Command tests => 3;
my $echo_test = Test::Command->new( cmd => 'echo out' );
$echo_test->exit_is_num(0, 'exit normally');
$echo_test->stdout_is_eq("out\n", 'echoes out');
$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' );
The module also has a functional interface, if that's more to your liking.
Are there conventions for function names when using the Perl Test::More or Test::Simple modules?
I'm specifically asking about the names of functions that are used to set up a test environment before the test and to tear down the environment after successful completion of the test(s).
cheers,
Rob
If you are looking for more XUnit-style testing, check out Test::Class. It provides the Test(setup) and Test(teardown) attributes for methods that, well, set up and tear down your environment. It also gives you a much nicer way of dealing with plans (you can provide one for each test method individually, so the counting is much less fiddly) and lets you inherit tests via test class hierarchies.
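A minimal sketch of those attributes (the class name and fixture are invented, and Test::Class must be installed from CPAN):

```perl
package My::List::Test;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;

# Runs before every test method
sub setup_fixture : Test(setup) {
    my $self = shift;
    $self->{list} = [1, 2, 3];
}

# Runs after every test method
sub teardown_fixture : Test(teardown) {
    my $self = shift;
    delete $self->{list};
}

# The plan is declared per method: Test(1) means one assertion here
sub push_extends_list : Test(1) {
    my $self = shift;
    push @{ $self->{list} }, 4;
    is scalar @{ $self->{list} }, 4, 'fixture list grew to 4 elements';
}

My::List::Test->runtests;
```

Each test method gets a fresh fixture, so tests can't leak state into one another.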
I don't think there are any such conventions out there.
The only way you can do it is perhaps to use BEGIN/END blocks, if the resources are to be used over the whole file.
The general approach I take is to put related tests in one code block and then initialize the variables/resource etc there. You can perhaps keep an easy count of how many tests you have for each function.
Something like ...
BEGIN {
# If you want to set some global db setting/file setting/INC changes etc
}
# Tests functionality 1...
{
# have fun ....
}
# Tests functionality 2...
{
# have more fun ....
}
END {
# Clean up the BEGIN changes
}
On another note, you may want to read this post on testing in Perl: http://perlandmac.blogspot.com/2007/08/using-perl-testsimple-and-testmore.html
I do not think there is an official set of conventions, so I would recommend looking at the examples at http://perldoc.perl.org/Test/More.html and seeing how they write their tests.
We use Test::More extensively for our unit tests as a lot (most) of our data processing scripts are written in Perl. We don't have a specific convention for the function names but rather do something like Jagmal suggests, namely breaking the tests up into smaller chunks and initializing locally.
In our case each subtest is encapsulated in a separate function within the test script. On top of this we have a framework that allows us to run all the subtests (the full unit test) or call individual subtests or sets of subtests to allow for running of just the ones we're working on at the moment.
Thanks Espo.
I've had a look at the relevant perldocs but there's no real convention regarding the setup and teardown aspects.
Not like XUnit series of tests.
Thanks for the answer Jagmal, but I'm not sure about using the BEGIN and END blocks for the setup and teardown, as the block names don't make clear what you are doing. There's also the obvious problem of only having one setup run and one teardown run per test script, i.e. per .t file.
I've had a quick look at Test::Most and it looks really interesting, especially the explain function. Thanks Matt.
Hmmm. Just thinking further about using the BEGIN and END blocks: if I decrease the granularity of the tests so that only one setup and one teardown are needed, then this would be a good solution.
cheers,
Rob
The first convention I'd suggest is ditching Test::More for Test::Most.
Perl testing scripts aren't special or magic in any way. As such, they can contain the exact same things that any other Perl script can.
You can name routines anything you want, and call them before, after, and intertwingled with, your tests.
You can have any amount of initialization code before any tests, any amount of cleanup code after tests, and any amount of any other code mixed in with tests.
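For instance, plain subroutines can serve as setup and teardown with no framework support at all (the file-based fixture here is a made-up example):

```perl
use strict;
use warnings;
use Test::More tests => 1;

my $fixture_file = 'scratch.txt';

# Ordinary subs acting as setup/teardown - no special naming required
sub setup {
    open my $fh, '>', $fixture_file or die "cannot create fixture: $!";
    print {$fh} "data\n";
    close $fh;
}

sub teardown {
    unlink $fixture_file;
}

setup();
ok(-e $fixture_file, 'fixture file exists during the test');
teardown();
```

The names setup and teardown carry no magic here; they run only because you call them explicitly around the tests.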
This all assumes that you're talking about CPAN-style t/*.t test scripts. I think you are, but I can manage to read your question as one about extending test harnesses, if I squint just right.
If you are open to getting into acceptance testing as well, like Ruby's Cucumber, take a look at this small example: http://github.com/kesor/p5-cucumber. It uses Test::More and a Cucumber style of acceptance testing.