Perl: integration tests

I'm asking about best practice.
Usually I put all unit tests into t/, but what about integration tests?
Where is the best place for integration tests?
And how should they be named?

There isn't a globally accepted naming or location convention for integration tests in Perl that I am aware of.
From my observations, most people put everything under t/.
Some folk split tests up into sub-directories under t/ (so you might have integration tests in a separate t/integration/ directory).
Some folk use a file prefix (so you might have integration tests named 'I-whatever.t').
Some folk don't care and just throw everything into the same directory.
I've occasionally seen folk separate out a group of tests into a directory at the same level as t/, but that's usually been for tests that are not normally run (e.g. author-specific tests for CPAN dists) rather than for integration vs. unit tests.
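For instance, a layout using a sub-directory might look like this (the file names are purely illustrative):

t/
    00_load.t
    basic.t
    integration/
        database.t
        web_api.t

With prove's -r (recurse) switch you can then run everything with prove -r t, or just the integration tests with prove -r t/integration.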

Split Test::More suite into multiple files

I'm using Test::More to test my application. I have a single script, run_tests.pl, that runs all the tests. Now I want to split this into run_tests_component_A.pl and run_tests_component_B.pl, and run both test suites from run_tests.pl. What is the proper way of doing this? Does Test::More have any helpful methods?
I'm not using any build system.
Instead of creating a run_tests.pl to run the test suite, the standard practice is to use prove.
Say you have
t/foo.t
t/bar.t
Then,
prove is short for prove t.
prove t runs the entire test suite (both t/foo.t and t/bar.t).
prove t/foo.t runs that specific test file.
perl t/foo.t runs that specific test file, and you get the raw output. Easier for debugging.
perl -d t/foo.t even allows you to run the test in the debugger.
Each file is a self-standing program. If you need to share code between test programs, you can create t/lib/Test/Utils.pm (or whatever) and use the following in your test files:
use FindBin qw( $RealBin );
use lib "$RealBin/lib";
use Test::Utils;
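A minimal sketch of what such a shared helper might contain (the module name Test::Utils and its make_fixture function are just placeholders):

# t/lib/Test/Utils.pm
package Test::Utils;

use strict;
use warnings;
use Exporter qw( import );

# Export by default so that a plain "use Test::Utils;" works as shown above.
our @EXPORT = qw( make_fixture );

# Build a small data structure that several test files need.
sub make_fixture {
    return { name => 'example', values => [ 1, 2, 3 ] };
}

1;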
prove executes the files in alphabetical order, so it's common to name the files
00_baseline.t
01_basic_tests.t
02_more_basic_tests.t
03_advanced_tests.t
The 00 test checks that the modules can be loaded, and that's it. It usually outputs the versions of the loaded modules to help with dependency problems. Then you have your more basic tests: the stuff that's like "if this doesn't work, you have major problems". There's no point in testing the more complex features if the basics don't work.
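A minimal sketch of such a 00 test (the module names are placeholders):

# t/00_baseline.t
use strict;
use warnings;
use Test::More;

# Hypothetical module list; use the modules your distribution actually ships.
my @modules = qw( My::App My::App::Config );

for my $module (@modules) {
    use_ok($module) or BAIL_OUT("Cannot load $module");
    diag("$module version: " . ($module->VERSION // 'unknown'));
}

done_testing();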

Testing on multiple platforms with pytest-cpp

I'm testing a project that has Python and C++ parts.
Both parts have unit tests, using pytest on the Python side and Catch2 on the C++ side, plus there are integration tests that are supposed to check the cooperation between the parts.
Recently I discovered pytest-cpp and it works great to create a "test all" run, with both sets of unit tests.
Integration tests are also written in pytest.
I have several testing binaries on the C++ side and pytest fixtures for compiling, running and interfacing with them from python.
Since the C++ code needs to work on multiple platforms, I'm planning to parametrize the integration-test fixtures by a platform name and emulation method, allowing me to cross-compile the binaries (for example for ARM64) and run them through qemu-aarch64.
My question is: Is there some way to hijack the test discovery done by pytest-cpp and force it to look at specific paths (with emulation wrappers) provided by my fixture, so that I can also run the C++ unit tests this way?
I've looked at the test discovery bit of pytest-cpp and I guess I could reimplement this function in my own code, but I don't see any way to parametrize pytest_collect_file by a fixture ...

Why do we need OpenDDS run_test.pl?

I am running the OpenDDS MPC-based example stockQuoter. I deleted run_test.pl, and the project still builds and runs properly. Why do we need this Perl script?
You don't really need it and you're free to start the programs directly. All examples and tests in OpenDDS have a file called run_test.pl for the purposes of testing. Among other functions, they define what programs get called with what arguments for a certain test scenario and are responsible for killing the programs if they get stuck.
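To illustrate, here is a generic sketch of what such a driver script does. It is not the actual OpenDDS run_test.pl (those use the PerlACE helper modules shipped with ACE), and the program names, arguments and timeout are placeholders:

#!/usr/bin/env perl
# Start the test programs, then kill any that are still running after a timeout.
use strict;
use warnings;
use POSIX qw( WNOHANG );

my %pids;
for my $cmd (['./publisher', '-DCPSConfigFile', 'conf.ini'],
             ['./subscriber', '-DCPSConfigFile', 'conf.ini']) {
    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) { exec @$cmd or die "exec @$cmd failed: $!" }
    $pids{$pid} = "@$cmd";
}

my $deadline = time + 60;               # give the test scenario 60 seconds
while (%pids && time < $deadline) {
    my $pid = waitpid(-1, WNOHANG);     # reap any child that has finished
    if ($pid > 0) { delete $pids{$pid} } else { sleep 1 }
}

for my $pid (keys %pids) {              # anything left is considered stuck
    warn "killing stuck process: $pids{$pid}\n";
    kill 'KILL', $pid;
}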

How can I tell CPAN Testers how to setup the environment for my tests?

I am writing tests that require external software (Amazon's local DynamoDB server) to be installed and running. Is there some way to tell CPAN Testers what to do?
Or should I just download the server and start it myself in the test setup? That would require Java 6.x or newer to be installed. So I think I am back to the first question.
In case people don't know, CPAN Testers is a group of people who test all of CPAN using automated scripts called smokers.
Further background:
Right now, CPAN Testers shows 227 machines pass all tests for Amazon::DynamoDB, but that is misleading since only one of the over seven thousand tests is currently being run: use_ok( 'Amazon::DynamoDB' );. The rest are hidden behind unless statements:
unless ( $ENV{'AMAZON_DYNAMODB_EXPENSIVE_TESTS'} ) {
    plan skip_all => 'Testing this module for real costs money.';
}
And a significant number of the tests do not pass. I have fixed that, but testing now requires either setting three environment variables in the tester's environment and spending money (the current way):
AMAZON_DYNAMODB_EXPENSIVE_TESTS=1
EC2_ACCESS_KEY=<user's AWS access key>
EC2_SECRET_KEY=<user's AWS secret key>
or installing the local version of Amazon DynamoDB. If this module is released as is, it will appear broken on every machine that doesn't have the prerequisite environment set up (i.e. it will erroneously appear broken rather than erroneously appear to be working).
CPAN Testers run the same tests that your module will run upon installation. Should your tests install other software on the machine? Probably not. Instead, the tests should fail loudly when their prerequisites are not met.
You should also draw a distinction between author tests and installation tests. There is no expectation that the installation tests verify all the functionality. Expensive tests (in this case, tests that literally cost money) shouldn't be part of that. You can run them yourself before you release. However, it might be better to put them in xt/ and guard them with the EXTENDED_TESTING variable instead of a non-standard environment variable. See also the Lancaster Consensus for a discussion of various environment variables during testing of Perl projects.
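A minimal sketch of such a guard at the top of an xt/ test file (the file name and message are illustrative):

# xt/dynamodb-live.t
use strict;
use warnings;
use Test::More;

plan skip_all => 'Set EXTENDED_TESTING=1 to run these tests (they cost money)'
    unless $ENV{EXTENDED_TESTING};

# ... the expensive tests against the real DynamoDB service go here ...

done_testing();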
You can also consider using a different provider for your more thorough tests than the donated CPAN Testers capacity, e.g. by setting up Travis CI for your project. Since they give you a container to play around in, you can install extra software. You can also securely provide credentials to your tests. In contrast, the main advantage of CPAN Testers is the diverse range of operating systems, i.e. the lack of control over the testing environment.
Call die from Makefile.PL or Build.PL if the prerequisites for building and testing your module cannot be satisfied. On CPAN Testers, aborting from the Makefile will give you an NA result instead of a FAIL result, and does not reflect poorly on your module or your build process.
# Makefile.PL
...
if ($ENV{AUTOMATED_TESTING}) {
    if (   !$ENV{AMAZON_DYNAMODB_EXPENSIVE_TESTS}
        || !$ENV{EC2_ACCESS_KEY}
        || !$ENV{EC2_SECRET_KEY}) {
        die "To test this module, you must set the environment\n",
            "variables EC2_ACCESS_KEY, EC2_SECRET_KEY, and\n",
            "AMAZON_DYNAMODB_EXPENSIVE_TESTS. Be advised that\n",
            "running these tests will result in charges against\n",
            "your AWS account.\n";
    }
}
...
Is there some way to tell CPAN Testers what to do?
This is more of a social problem than a technical one.
You can ask the regulars on cpan-testers-discuss to manually set up the requirements; there's precedent for doing so. Not everyone will oblige, of course.
Another possibility is to reach out to your module's users and ask them to become ad-hoc test reporters via Task::CPAN::Reporter/cpanm-reporter or similar.
Look at what other CPAN modules that have external dependencies do, and do something like that.
For example, look at the DBI drivers for various databases. While DBD::File and DBD::SQLite bundle everything they need, the same is not true for drivers like DBD::Oracle and DBD::DB2, which need the vendor's client libraries installed. Or look at Wx which, IIRC, uses an Alien package (Alien::wxWidgets) to install the wxWidgets (wxGTK) libraries.
In your case, I would suggest something more along the lines of the DBD drivers rather than embedding via Alien, but you have to make that choice.

Is there a way to package my unit tests with PAR or PerlApp?

I have an app that I pack into "binary" form using PerlApp for distribution. Since my clients want a simple install for their Win32 systems, this works very nicely.
Now a client has decided that they need to run all unit tests, like in a standard install. However, they still will not install a normal Perl.
So, I find myself in need of a way to package my unit tests for operation on my client's systems.
My first thought was that I could pack up prove in one file and pack each of my tests separately. Then ship a zip file with the appropriate structure.
A bit of research showed that Test::Harness::Straps invokes perl from the command line.
Is there an existing tool that helps with this process?
Perhaps I could use PAR::Packer's parl tool to handle invocation of my test scripts.
I'm interested in thoughts on how to apply either PAR or PerlApp, as well as any thoughts on how to approach overriding Test::Harness and friends.
Thanks.
Update: I don't have my heart set on PAR or PerlApp. Those are just the tools I am familiar with. If you have an idea or a solution that requires a different packager (such as Cava Packager), I would love to hear about it.
Update 2: tsee pointed out a great new feature in PAR that gets me close. Are there any TAP experts out there that can supply some ideas or pointers on where to look in the new Test::Harness distribution?
I'm probably not breaking big news if I tell you that PAR (and probably also PerlApp) aren't meant to package the whole test suite and the plethora of CPAN-module build byproducts. They're intended to package stand-alone applications or binary JAR-like module libraries.
This being said, you can add arbitrary files to a PAR archive (both to .par libraries and stand-alone .exe's) using pp's -a switch. In the case of the stand-alone executable, the contents will be extracted to $ENV{PAR_TEMP}."/inc" at run-time.
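For example, something along these lines (the executable and test-file names are just placeholders):

pp -o myapp.exe -a "t/foo.t;t/foo.t" -a "t/bar.t;t/bar.t" bin/myapp.pl

The part after the semicolon in each -a argument is, if I recall the pp documentation correctly, the name the file gets inside the archive.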
That leaves you with the problem of reusing the PAR-packaged executable to run the test harness (and letting that run your executable as a "perl"). Now, I have no ready and done solution for that, but I've recently worked on making PAR-packaged executables re-useable as more-or-less general purpose perl interpreters. Two gotchas before I explain how you can use that:
Your application isn't going to magically be called "perl" and add itself to your $PATH.
The "reuse" of the application as a general purpose perl requires special options and does not currently support the normal perl options (those in perlrun). It can simply run an external perl script of your choice.
Unfortunately, the latter problem is what may kill this approach for you. Support for perl command line options is something I've been thinking about, but won't implement any time soon.
Here's the recipe how you get PAR with "reusable exe" support:
Install the newest version of PAR from CPAN.
Install the newest, developer version of PAR::Packer from CPAN (0.992_02 or 03).
Add the "--reusable" option to your pp command line.
Run your executable with the following options to run an external script "foo.pl":
./myapp --par-options --reuse foo.pl FOO-PL-OPTIONS-HERE
Unfortunately, how you're going to teach Test::Harness that "./myapp --par-options --reuse" is a perl interpreter is beyond me.
Cava Packager allows you to package test scripts with your packaged executables. This is primarily to allow you to run tests against the packaged code before distribution. However the option is there to also distribute the tests and test capability to your end users.
Note: As indicated by my name, I am affiliated with Cava Packager.