Testing on multiple platforms with pytest-cpp

I'm testing a project that has Python and C++ parts.
Both parts have unit tests, using pytest on the Python side and Catch2 on the C++ side, plus there are integration tests that are supposed to check the cooperation between the parts.
Recently I discovered pytest-cpp, and it works great for creating a "test all" run that covers both sets of unit tests.
Integration tests are also written in pytest.
I have several testing binaries on the C++ side and pytest fixtures for compiling, running, and interfacing with them from Python.
Since the C++ code needs to work on multiple platforms, I'm planning to parametrize the integration-test fixtures by platform name and emulation method, allowing me to cross-compile the binaries, for example for ARM64, and run them through qemu-aarch64.
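Roughly, what I have in mind for those fixtures is a sketch like the following (the platform table and the qemu invocation are illustrative, not from a finished setup):

import subprocess
import pytest

PLATFORMS = {
    "native": [],               # run the binary directly
    "arm64": ["qemu-aarch64"],  # run through user-mode emulation
}

@pytest.fixture(params=sorted(PLATFORMS))
def run_binary(request):
    """Run a (cross-)compiled test binary under the current platform's wrapper."""
    wrapper = PLATFORMS[request.param]
    def run(binary, *args):
        return subprocess.run([*wrapper, str(binary), *args],
                              capture_output=True, text=True)
    return run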
My question is: is there some way to hijack the test discovery done by pytest-cpp and force it to look at specific paths (with emulation wrappers) provided by my fixture, so that I can run the C++ unit tests this way as well?
I've looked at the test-discovery part of pytest-cpp, and I guess I could reimplement that function in my own code, but I don't see any way to parametrize pytest_collect_file by a fixture.
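As far as I can tell, that's because collection hooks run before any fixture is set up, so they can only be steered through config or command-line options. The closest sketch I can come up with (the option name and the binary suffix are made up) steers discovery per invocation, not per fixture parameter:

# conftest.py
def pytest_addoption(parser):
    parser.addoption("--emu-wrapper", default=None,
                     help="emulator prefix for C++ test binaries, e.g. qemu-aarch64")

def pytest_collect_file(file_path, parent):  # pytest >= 7 hook signature
    wrapper = parent.config.getoption("--emu-wrapper")
    if wrapper and file_path.suffix == ".test":  # made-up suffix for test binaries
        # a custom pytest.File subclass would go here, running the binary
        # through `wrapper` much like pytest-cpp runs it directly
        pass
    return None  # fall through so other plugins (pytest-cpp included) still collect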

Related

before, beforeEach, after, and afterEach of mocha equivalent in pytest

I'm new to Python and am using pytest along with requests to start API testing.
I want to run some scripts before each test module, and other snippets before each test case in a module, to set up test-case data.
I've checked pytest fixtures and scopes, but I don't think they're what I'm looking for, since I can't control the data passed to fixtures. What other solutions are possible?
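One common mapping, sketched below with illustrative names: module-scoped autouse fixtures behave like mocha's before/after, function-scoped ones like beforeEach/afterEach, and indirect parametrization lets each test control the data its fixture receives.

import pytest

@pytest.fixture(scope="module", autouse=True)
def module_setup():              # ~ before/after: runs once per test module
    ...                          # e.g. create a shared requests.Session
    yield
    ...                          # teardown after the last test in the module

@pytest.fixture(autouse=True)
def case_setup():                # ~ beforeEach/afterEach: wraps every test
    yield

@pytest.fixture
def user(request):
    return {"name": request.param}   # request.param carries the test's own data

@pytest.mark.parametrize("user", ["alice", "bob"], indirect=True)
def test_create_user(user):
    assert user["name"] in ("alice", "bob")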

Why do we need OpenDDS run_test.pl?

I am running the OpenDDS MPC-based example stockQuoter. I deleted run_test.pl, and the project still builds and runs properly. Why do we need this Perl script?
You don't really need it, and you're free to start the programs directly. All examples and tests in OpenDDS come with a file called run_test.pl for testing purposes. Among other things, these scripts define which programs get called with which arguments for a given test scenario, and they are responsible for killing the programs if they get stuck.
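For illustration, the core of what such a script does (start the participants with their arguments, kill them if they hang) can be sketched in a few lines of Python; the binaries and options here are examples only:

import subprocess

procs = [subprocess.Popen(cmd) for cmd in (
    ["./publisher", "-DCPSConfigFile", "rtps.ini"],
    ["./subscriber", "-DCPSConfigFile", "rtps.ini"],
)]
for p in procs:
    try:
        p.wait(timeout=60)            # fail the scenario instead of hanging forever
    except subprocess.TimeoutExpired:
        p.kill()                      # what run_test.pl does to stuck programs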

How to run tests smoothly in Leiningen project within Counterclockwise/Eclipse?

I'm a newbie with Clojure and Counterclockwise, and I succeeded in adding a Leiningen 2 project with "Poor man's integration" (External tools, linked from the question Using Clojure and Leiningen with IDEs).
My alternatives for running tests so far:
From the command line: lein test
Running "lein test" with "Poor man's integration" (External tool)
These work pretty well, but I'm wondering whether there's a smoother alternative, for example something that shows the test runs the way JUnit does.
Or, to put it more generally: how do I get a fluent TDD flow with Counterclockwise?
Another alternative I found (with the clojure.test API) was loading the test file in the REPL (Alt+Cmd+S) and calling run-tests:
(run-tests)
With some experimenting, I can re-run the tests after my modifications by loading the modified file into the REPL and calling run-tests again. (This works, but probably isn't the final solution.)
Midje with autotest in REPL seems to be worth checking out.
One way to do this is to use cljunit as an interface between the JUnit runner in Eclipse and your Clojure tests.

Perl: integration tests

I'm asking for best practice.
Usually I put all unit tests into t/, but what about integration tests?
Where is the best place for integration tests, and how should they be named?
There isn't a globally accepted naming/location convention for integration tests in Perl that I am aware of.
From my observations most people put everything under t/.
Some folk split tests up into sub-directories under t/ (so you might have integration tests in a separate t/integration/ directory).
Some folk use a file-prefix (so you might have integration tests named 'I-whatever.t').
Some folk don't care - and just throw everything into the same directory.
I've occasionally seen folk separate out a group of tests into a directory at the same level as t/, but that has usually been for tests that are not normally run (e.g. author-specific tests for CPAN dists) rather than for the integration/unit split.
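Whichever layout you pick, prove can target it directly, so keeping the suites separate stays cheap to run, e.g.:
prove -lr t/
prove -lr t/integration/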

Is there a way to package my unit tests with PAR or PerlApp?

I have an app that I pack into "binary" form using PerlApp for distribution. Since my clients want a simple install for their Win32 systems, this works very nicely.
Now a client has decided that they need to run all the unit tests, as they would in a standard install. However, they still will not install a normal Perl.
So, I find myself in need of a way to package my unit tests for operation on my client's systems.
My first thought was that I could pack up prove in one file and pack each of my tests separately. Then ship a zip file with the appropriate structure.
A bit of research showed that Test::Harness::Straps invokes perl from the command line.
Is there an existing tool that helps with this process?
Perhaps I could use PAR::Packer's parl tool to handle invocation of my test scripts.
I'm interested in thoughts on how to apply either PAR or PerlApp, as well as any thoughts on how to approach overriding Test::Harness and friends.
Thanks.
Update: I don't have my heart set on PAR or PerlApp. Those are just the tools I am familiar with. If you have an idea or a solution that requires a different packager (such as Cava Packager), I would love to hear about it.
Update 2: tsee pointed out a great new feature in PAR that gets me close. Are there any TAP experts out there that can supply some ideas or pointers on where to look in the new Test::Harness distribution?
I'm probably not breaking big news if I tell you that PAR (and probably also PerlApp) isn't meant to package whole test suites and the plethora of CPAN-module build byproducts. These tools are intended to package stand-alone applications or binary, JAR-like module libraries.
This being said, you can add arbitrary files to a PAR archive (both to .par libraries and stand-alone .exe's) using pp's -a switch. In case of the stand-alone executable, the contents will be extracted to $ENV{PAR_TEMP}."/inc" at run-time.
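For example, a pp invocation along these lines (file names are illustrative) bundles a couple of test scripts into the packaged executable:
pp -o myapp.exe -a t/basic.t -a t/more.t myapp.pl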
That leaves you with the problem of reusing the PAR-packaged executable to run the test harness (and letting that harness run your executable as a "perl"). Now, I have no ready-made solution for that, but I've recently worked on making PAR-packaged executables reusable as more-or-less general-purpose perl interpreters. Two gotchas before I explain how you can use that:
Your application isn't going to magically be called "perl" and add itself to your $PATH.
The "reuse" of the application as a general purpose perl requires special options and does not currently support the normal perl options (those in perlrun). It can simply run an external perl script of your choice.
Unfortunately, the latter problem is what may kill this approach for you. Support for perl command line options is something I've been thinking about, but won't implement any time soon.
Here's the recipe for getting PAR with "reusable exe" support:
Install the newest version of PAR from CPAN.
Install the newest developer version of PAR::Packer from CPAN (0.992_02 or _03).
Add the "--reusable" option to your pp command line.
Run your executable with the following options to run an external script "foo.pl":
./myapp --par-options --reuse foo.pl FOO-PL-OPTIONS-HERE
Unfortunately, how you're going to teach Test::Harness that "./myapp --par-options --reuse" is a perl interpreter is beyond me.
Cava Packager allows you to package test scripts with your packaged executables. This is primarily to let you run tests against the packaged code before distribution. However, the option is there to also distribute the tests and test capability to your end users.
Note: As indicated by my name, I am affiliated with Cava Packager.