How to Order NUnit Tests

More than once this question has been asked on SO, but the only answers given read "you should not need to order your unit tests, it is bad because..." or "you can avoid that if...".
I already know it is bad, why it is bad, and techniques to avoid it. But that is not what I want to know. I'd like to know whether it is possible to order the execution of NUnit tests, other than alphabetically. To be blunt: I actually want state to propagate from one test to the next. Trust me that I have a clever reason for this, one that defies the usual philosophy.
MSTest has the "ordered test" capability, which is very useful in certain cases. I'd like to have that ability in NUnit. Can it be done?

Update for NUnit 3.2.0: it now supports OrderAttribute.
The OrderAttribute may be placed on a test method to specify the order in which tests are run.
Example:
using NUnit.Framework;

public class MyFixture
{
    [Test, Order(1)]
    public void TestA() { ... }

    [Test, Order(2)]
    public void TestB() { ... }

    [Test]
    public void TestC() { ... }  // tests without an Order start after the ordered ones
}
https://github.com/nunit/docs/wiki/Order-Attribute

The work-around (hack) is to alphabetize your test case names (a minimal sketch follows below). See this thread:
https://bugs.launchpad.net/nunit-3.0/+bug/740539
Relying on alphabetical order is a workaround that you can use, but it is not documented and supported beyond the visual order of the display. In theory it could change at any time. In practice it won't change until NUnit 3.0, so you're pretty safe using it as a workaround.
This quote is from Charlie Poole, the main dev on NUnit.
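For illustration, here is a minimal sketch of that workaround, with hypothetical test names; the numeric prefixes force the alphabetical (and therefore execution) order:
using NUnit.Framework;

[TestFixture]
public class OrderedByNameTests
{
    // NUnit 2.x sorts these alphabetically, so the prefixes dictate the order.
    [Test] public void Step01_CreateUser() { /* runs first */ }
    [Test] public void Step02_JoinUserToRoom() { /* runs second */ }
    [Test] public void Step03_DeleteUser() { /* runs last */ }
}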
It also seems they have a scheme cooking to support ordered tests in NUnit 3, though how they will do so is still under discussion.

Just an update for NUnit 2.5.1: according to the documentation, there are cases where not even alphabetical order is guaranteed.
NUnit TestCaseAttribute
Order of Execution
In NUnit 2.5, individual test cases are sorted alphabetically and
executed in that order. With NUnit 2.5.1, the individual cases are not
sorted, but are executed in the order in which NUnit discovers them.
This order does not follow the lexical order of the attributes and
will often vary between different compilers or different versions of
the CLR.
As a result, when TestCaseAttribute appears multiple times on a method
or when other data-providing attributes are used in combination with
TestCaseAttribute, the order of the test cases is undefined.
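To illustrate with a hypothetical parameterized test: once TestCaseAttribute appears multiple times on a method, nothing about the attribute order can be relied upon.
[TestCase(1)]
[TestCase(2)]
[TestCase(3)]
public void ParameterizedTest(int value)
{
    // Under NUnit 2.5.1 the three cases may execute in any order.
    Assert.That(value, Is.GreaterThan(0));
}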

Try using the TestName named parameter to give each TestCase a name of your choosing, so that the cases are ordered by TestName.
[TestCase(..., TestName = "1stTest")]
[TestCase(..., TestName = "2ndTest")]
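A fuller sketch, with hypothetical arguments, relying on the cases being sorted by the supplied names:
using NUnit.Framework;

[TestFixture]
public class NamedCaseTests
{
    [TestCase("alice", TestName = "1stTest")]
    [TestCase("bob", TestName = "2ndTest")]
    public void LoginTest(string user)
    {
        // "1stTest" sorts before "2ndTest", so it is listed (and run) first.
        Assert.That(user, Is.Not.Empty);
    }
}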

For MSTest (not NUnit) you can use the following attributes:
[TestMethod]
[Priority(2)]

Related

Does NUnit 3.9 support Test Suites?

I am trying to find a way to create custom suites of NUnit tests to target our wide variety of environments. The closest thing I found was this: http://nunit.org/docs/2.5.6/suite.html which is exactly what I am looking for. Trying to implement this, though, the [Suite] annotation doesn't even seem to exist. Was it taken away? Is there a better solution now?
The SuiteAttribute was eliminated in NUnit 3. It never got a lot of use as most people simply organize their tests by namespace, which provides the same grouping of tests that the SuiteAttribute used to do.
FUN FACT: "Automatic namespace suites" were once a new cool thing!
If you want the ability to group tests in different ways, across namespace boundaries, you can use categories to do it, as sketched below. It's not as easy, of course.
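For illustration, a minimal hypothetical example of grouping across namespaces with a category, together with the NUnit 3 console command that selects it:
using NUnit.Framework;

namespace AnyNamespace
{
    public class ConnectionTests
    {
        // This test runs whenever the SmokeTests category is selected,
        // regardless of which namespace the fixture lives in.
        [Test, Category("SmokeTests")]
        public void CanConnect()
        {
            Assert.Pass();
        }
    }
}
Run just that category with:
nunit3-console.exe tests.dll --where "cat == SmokeTests"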
An alternative, if you are using the command-line console runner, is to list the fixtures you want to run in a file and use the --testlist option.
Building off of Charlie's post above: the way I was able to set this up was using the --testlist option.
First create a testlist.txt file and store it somewhere in your solution. Structure the file so that if you have a class like
namespace NamespaceA
{
    class TestGroup
    {
        [Test]
        public void TestOne()
        {
        }

        [Test]
        public void TestTwo()
        {
        }
    }
}
the file's contents would look like this:
NamespaceA.TestGroup.TestOne
or, to run both tests:
NamespaceA.TestGroup
Then use your standard console-runner command:
"nunit-console.exe" "path/to/.dll" --testlist="path/to/testlist.txt"

How may I set a category in the NUnit 3 GUI?

I have the following code:
[TestFixture]
class A : B
{
    [Test(Description = "SW"), Category("Minimum")]
    public void TS1()
    {
    }
}

[TestFixture]
class C : B
{
    [Test(Description = "ex"), Category("Minimum")]
    public void TS2()
    {
    }
}
When I run the NUnit 3.0.1 GUI and filter the tests by category, I get only one test in the Minimum category instead of two. (The other test appears under the None category, even though it is marked Minimum.)
How can I get the filter to work correctly?
Sounds like a bug, which you can report at https://github.com/nunit/nunit-gui/issues.
That said, since the GUI is not yet released there could be an issue in how you are building it. You should also keep in mind that the source for the GUI may depend on a version of NUnit that has not yet been released.
Even when we do release a binary, it will be called 0.1 and will be mainly for the purpose of getting feedback on usability of the GUI.
The NUnit team is very conservative about recommending use of our pre-alpha source code in production use - that is, we don't recommend it. :-)
All that said, if there is a problem, we'd like to hear about it. :-)

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests parallel-friendly, I expect there will always be some tests which are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as not-parallel-ready when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready.
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run; in other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
(The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed things up.)
I found a solution. It's in the documentation for aggregate_tests() in TAP::Harness, which includes a code sample showing how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
# @ser_tests and @par_tests hold the lists of serial and parallel test files
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {
        formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;

$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove adding an option like --exclude-from-parallel FILE, which would let you specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I now have a patch for prove and have submitted it for review. You can view and comment on the pull request. One caveat: no summary is produced after the run output, and it's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support for marking some tests to be run in sequence instead of in parallel since 2008. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic recipe with current prove (the job count of 9 is arbitrary):
# All tests are allowed to run in parallel, except those starting with "p"
prove -j9 --rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution which advertises this feature, Test::Steering, but I could only get trivial cases to work. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
To get it to work at all, I had to patch the code to fix a basic bug that has remained unpatched for multiple years.
Additional code is needed to generate the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test run; both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel.
Have a look at the sample at https://metacpan.org/pod/Test::Parallel
Another option: use a rules file with TAP::Harness.
You can define custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, and was able to do it with a testrules.yml file that looked like this:
---
seq:
  # tests that are not parallel-ready (will run in isolation)
  - seq:
      - t/test1.t
      - t/test2.t
  # tests that can run in parallel
  - par:
      # wildcard for everything else
      - '**'
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?

Selenium JUnit tests - how do I run the tests within a test class in sequential order?

I am using JUnit with Eclipse to write functional tests.
When running an individual test class, the tests run in the order in which I have defined them within the class.
Eg.
testCreateUser
testJoinUserToRoom
testVerify
testDeleteUser
However, when I run the class as part of a suite (so, in a package), the order is random.
It will, for example, run testVerify, then testDeleteUser, then testJoinUserToRoom, then testCreateUser.
The tests within the suite are not dependent on each other. However, the individual test methods within a class are dependent on being run in the correct order.
Is there any way I can achieve this?
Thanks.
You can't guarantee the order of execution of test methods in JUnit.
The order of execution of test classes within a suite is guaranteed (if you're using Suite), but the order of execution is not guaranteed if the test classes are found by reflection (for instance, if you're running a package in Eclipse, or a set of tests from Maven or Ant). This may be definable by Ant or Maven, but it isn't defined by JUnit.
In general, JUnit executes test methods in the order in which they are defined in the source file, but not all JVMs guarantee this (particularly with Java 7). If some of the methods are inherited from an abstract base test class, then this may not hold either. (This sounds like your case, but I can't tell from your description.)
For more information on this, see my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?.
So what can you do to fix your problem? There are two solutions.
In your original example, you've actually only got one test (verify), but you've got four methods: two setup (createUser, joinUserToRoom) and one teardown (deleteUser). So your first option is to better define your test cases using a TestRule, in particular ExternalResource. ExternalResource allows you to define before/after behaviour for a test, similar to @Before/@After. The advantage of ExternalResource, though, is that you can factor this behaviour out of your tests.
So you would create/delete the user in your ExternalResource:
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class UsesExternalResource {
    @Rule
    public ExternalResource resource = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            // create user
        }

        @Override
        protected void after() {
            // destroy user
        }
    };

    @Test
    public void testJoinUserToRoom() {
        // join user to room
        // verify all ok
    }
}
For me, this is simpler and easier to understand, and you get independent tests, which is a good thing. This is what I would do, but you will need to refactor your tests a bit. You can also stack these Rules using RuleChain.
Your second option, if you really want to introduce dependencies between your test methods, is to look at TestNG, in which you can define dependencies from one test to another.
If they have a 'correct' order, then they are not multiple tests, but a single test that you have incorrectly annotated as being multiple independent tests.
Best practice would be to rewrite them in approved JUnit style (setup - act - verify), supported by @Before or @BeforeClass methods that do any required common setup.
A quick workaround would be to have a single @Test-annotated method that calls the other test methods in sequence. That becomes something like the preferred alternative if you are using JUnit not to do strict unit testing, but something more like scenario-driven systems testing. It's not necessarily the best tool for such use, but it does work perfectly well in some cases.
What you would have so far is a single test:
@Test public void testUserNominalLifeCycle(...
which could then, if you are feeling virtuous, be supplemented by extra new tests like
@Test public void testUserWhoNeverJoinsARoom(...

Is it possible to run NUnit against a specific (long) list of tests?

I have a list of several thousand NUnit tests that I want to run, generated automatically by another tool. (This is a subset of all the tests, and it changes frequently.)
I'd like to be able to run these via nunit-console.exe. Unfortunately the /run option only takes a direct list of tests, which in my case would not fit on a single command line. I'd like it to pick up the list from a file.
I appreciate that I could use categories, but the list I want to run changes frequently, so I'd prefer not to have to keep changing source code.
Does anyone know a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to nunit-console with a full command line, but that's not very elegant.)
(If it's not possible, maybe I should add it as an NUnit feature request.)
I had a reply from Charlie Poole (of the NUnit development team): this is not currently possible, but it has been added as a feature request for NUnit 2.6.
I see what you're saying, but, like you say, you can run a single fixture from the command line:
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or placing them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link above, you need to specify the scenario/test case name. It's simple, but there's a bit of a trick to it: directly giving the bare test case name will not work, and you will end up with 0 test cases executed. You need to supply the exact, fully qualified name.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Open featureFileName.feature.cs and look for the namespace; keep it aside (part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will find the class name. Note that as well and keep it aside (part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the generated attributes that NUnit uses for execution:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below those attributes you can see the function/method name, which will most likely be TC_1_ABCD with certain parameters:
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, depending on the number of scenarios in your feature file. Note the method (test case) you want to execute and keep it aside (part 3).
Now join all the parts with dots. You will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build test case names from multiple feature files and stack them up in a text file, one test case name per line. For the command, browse the NUnit link above for execution from the command prompt.