Can I force each NUnit test method to run in a separate process?
I need to do this because calling some of the methods under test may have side effects, so I need to make sure that each unit test runs in complete isolation from the other unit tests.
You can use the console runner's /process option, which accepts the values Single, Separate, or Multiple. Here is the reference for the NUnit 2.5 console command line: http://www.nunit.org/index.php?p=consoleCommandLine&r=2.5. Look at the "Controlling the Use of Processes" section.
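For example, a minimal invocation might look like this (the assembly name is illustrative):

nunit-console.exe MyTests.dll /process=Separate

Single is the default and runs the tests inside the console runner's own process; Separate runs them in one separate process; Multiple runs each test assembly in its own process.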
I have three classes, each containing multiple tests, and each class has the TestFixture attribute. I need the classes to run their tests in order (e.g. TestFixture1 runs all of its tests, then TestFixture2 runs all of its tests, and finally TestFixture3 runs all of its tests). How can I accomplish this?
Use the OrderAttribute on each fixture, specifying the order in which you want the fixtures to run. Use the same attribute on each test method, specifying the order the test should run within the fixture.
See OrderAttribute in the docs for more info.
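A minimal sketch, assuming a recent NUnit 3.x release where OrderAttribute may be applied to fixtures as well as tests (class and method names are illustrative):

using NUnit.Framework;

[TestFixture, Order(1)]
public class TestFixture1
{
    [Test, Order(1)]
    public void FirstTest() { /* runs first within this fixture */ }

    [Test, Order(2)]
    public void SecondTest() { /* runs second within this fixture */ }
}

[TestFixture, Order(2)]
public class TestFixture2
{
    // ordered the same way; this fixture starts after TestFixture1
}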
I'm working with multiple features and scenarios, and I'm looking for a way to find out, at test run start, which scopes are included in the test run, if that's possible.
There's a largish subset (category) of our tests that requires a setup taking 5-10 seconds. Currently we're using a BeforeFeature hook to optimize this setup as much as we can, but we have several features (though not all) under the same scope. We'd like to run this setup only when that category of tests is included in the test run.
In pseudocode it would essentially be:
[BeforeTestRun]
If test run includes scenarios/features with tag "AdvancedSetup"
AdvancedSetup();
In SpecFlow this information is not available.
But perhaps your test runner has this information available.
FYI: Tags are translated to TestCategories.
NUnit allows use of a higher-level setup that applies to a namespace. You access this by creating a SetUpFixture. If SpecFlow gives you a way to map features into specific namespaces, you could use this.
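A minimal sketch of such a namespace-level setup, assuming NUnit 3 (namespace and names are illustrative):

using NUnit.Framework;

namespace MyTests.AdvancedSetup
{
    // Runs once before any test in this namespace, and once after all of them.
    [SetUpFixture]
    public class AdvancedSetupFixture
    {
        [OneTimeSetUp]
        public void RunBeforeAnyTestsInNamespace()
        {
            // the expensive 5-10 second setup goes here
        }

        [OneTimeTearDown]
        public void RunAfterAllTestsInNamespace()
        {
            // optional cleanup
        }
    }
}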
I have a class which extends org.scalatest.junit.JUnitSuite. This class has a couple of tests. I do not want these tests to run in parallel.
I know how simple it is with Specs2 (extend the class with Specification and add a single line sequential inside the class) as shown here: How to run specifications sequentially.
I do not want to alter the Build file by setting:
parallelExecution in Test := false
nor do I want to use tags to run specific test files sequentially.
All I want is a way to make sure that all tests inside my class run sequentially. Is this possible with ScalaTest? Any sample test/template is appreciated.
A quick google search pointed me to this: http://doc.scalatest.org/2.0/index.html#org.scalatest.Sequential
Just for the couple of tests I have, I think it is total overkill to create StepSuites. I am not completely sure that's the way to go in my case!
The doc for org.scalatest.ParallelTestExecution says
ScalaTest's normal approach for running suites of tests in parallel is to run different suites in parallel, but the tests of any one suite sequentially.
So it looks like you don't have to do anything to get what you want, if your tests are in a single suite.
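For illustration, a plain suite like this (names are illustrative) runs its tests one at a time, because it does not mix in ParallelTestExecution:

import org.junit.Test
import org.scalatest.junit.JUnitSuite

class MySequentialTests extends JUnitSuite {

  @Test def firstTest(): Unit = {
    // tests in this suite are not run concurrently with each other
  }

  @Test def secondTest(): Unit = {
    // starts only after the previously scheduled test has finished
  }
}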
I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests which are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support some tests as not-parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
use TAP::Formatter::Console;
use TAP::Harness;
use TAP::Parser::Aggregator;

# @ser_tests holds the tests that must run sequentially,
# @par_tests the tests that are safe to run in parallel.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;

$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that adds an option like --exclude-from-parallel FILE, which would let you specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I now have a patch for prove and have submitted it for review. You can view and comment on the Pull Request. One known issue: no summary is produced after the run output, and it's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support since 2008 for marking some tests to be run in sequence rather than in parallel. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
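A full invocation might look like this (the job count and test directory are illustrative):

prove -j9 --rules='seq=t/p*.t' --rules='par=**' -r t/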
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution which advertises this feature, but I could only get trivial cases to work: Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I had to patch the code to fix a basic bug that has remained unfixed for several years.
Additional code is needed to generate the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test run: both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easy way to run some tests in parallel.
Have a look at the sample at https://metacpan.org/pod/Test::Parallel.
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
    # tests that are not parallel-ready (will run in isolation)
    - seq:
        - t/test1.t
        - t/test2.t
    # tests that can run in parallel
    - par:
        # wildcard for everything else
        - **
In my case, I was using this with code that directly called App::Prove rather than with command-line prove, but I think it would work with prove too.
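For reference, driving App::Prove directly looks roughly like this (the arguments are illustrative):

use App::Prove;

my $app = App::Prove->new;

# process_args() accepts the same arguments as the prove command line
$app->process_args( '-j', '4', '-r', 't' );
exit( $app->run ? 0 : 1 );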
I have some NUnit tests which use a TestCaseSource function. Unfortunately, the TestCaseSource function that I need takes a long time to initialize, because it scans a folder tree recursively to find all of the test images that will be passed into the test function. (Alternatively it could load a file list from XML every time it runs, but automatic discovery of new image files is still a requirement.)
Is it possible to specify an NUnit attribute together with TestCaseSource such that NUnit does not enumerate the test cases (does not call the TestCaseSource function) until either the user clicks on the node, or until the test suite is being run?
The need to get all test images stored in a folder is a project requirement because other people who do not have access to the test project will need to add new test images to the folder, without having to modify the test project's source code. They would then be able to view the test result.
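For reference, the source function looks roughly like this (the folder path, file pattern, and names are illustrative); walking the whole tree is what makes test-case discovery slow:

using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

public class ImageTests
{
    // Enumerates every image under the shared folder as a test case.
    public static IEnumerable<TestCaseData> ImageCases()
    {
        foreach (var path in Directory.EnumerateFiles(
            @"\\server\share\TestImages", "*.png", SearchOption.AllDirectories))
        {
            yield return new TestCaseData(path).SetName(Path.GetFileName(path));
        }
    }

    [TestCaseSource("ImageCases")]
    public void ProcessImage(string imagePath)
    {
        // actual image verification goes here
    }
}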
Some dogmatic unit-testers may counter that I am using NUnit to do something it's not supposed to do. I have to admit that I have to meet a requirement, and NUnit is such a great tool with a great GUI that satisfies most of my requirements, such that I do not care about whether it is proper unit testing or not.
Additional info (from the NUnit documentation):
Note on Object Construction
NUnit locates the test cases at the time the tests are loaded, creates instances of each class with non-static sources and builds a list of tests to be executed. Each source object is only created once at this time and is destroyed after all tests are loaded.
If the data source is in the test fixture itself, the object is created using the appropriate constructor for the fixture parameters provided on the TestFixtureAttribute or the default constructor if no parameters were specified. Since this object is destroyed before the tests are run, no communication is possible between these two phases - or between different runs - except through the parameters themselves.
It seems the purpose of loading the test cases up front is to avoid communication (or side effects) between TestCaseSource and the execution of the tests. Is this true? Is this the only reason to require test cases to be loaded up front?
Note:
A modification of NUnit was needed, as documented in http://blog.sponholtz.com/2012/02/late-binded-parameterized-tests-in.html
There are plans to introduce this option in later versions of NUnit.
I don't know of a way to delay-load test names in the GUI. My recommendation would be to move those tests to a separate assembly. That way, you can quickly run all of your other tests, and load the slower exhaustive tests only when needed.