I'm using NUnit 3.0 and TestFixtureSource to run test cases inside a fixture multiple times with different parameters/configurations (I do want to do this at TestFixture level). Simple example:
[TestFixtureSource(typeof(ConfigurationProvider))]
public class Fixture
{
    public Fixture(Configuration configuration)
    {
        _configuration = configuration;
    }

    private Configuration _configuration;

    [Test]
    public void Test()
    {
        //do something with _configuration
        Assert.Fail();
    }
}
Let's say Test() fails for one of the configurations and succeeds for another. In the run report file and in Visual Studio's Test Explorer, the name for both the failed and the successful run is displayed as just Test(), which doesn't tell me anything about which setup caused issues.
Is there a way to affect the test case names in this situation (i.e. prefix each name per fixture run/configuration)? As a workaround I'm currently printing to the results output before each test case fires, but I would rather avoid doing that.
Since NUnit 3.0 is in beta and this feature is fairly new, I wasn't able to find anything in the docs. I found TestCaseData but I don't think it's tailored to be used with fixtures just yet (it's designed for test cases).
I can't find a way to change the test name, but it should not be necessary, because NUnit 3 constructs the test name from a description of the test fixture.
The example class Fixture from the question can be used unchanged if Configuration and ConfigurationProvider have implementations like this:
using System.Collections;

public class Configuration
{
    public string Description { get; }

    public Configuration(string description)
    {
        Description = description;
    }

    public override string ToString()
    {
        return Description;
    }
}

public class ConfigurationProvider : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return new Configuration("Foo");
        yield return new Configuration("Bar");
        yield return new Configuration("Baz");
    }
}
The 'trick' is to make sure the constructor parameter of the fixture is a string or has a ToString method that gives a sensible description of the fixture.
If you are using the NUnit 3 Test Adapter in Visual Studio, the test fixtures will be displayed as Fixture(Foo), Fixture(Bar) and Fixture(Baz), so you can easily distinguish between their tests. The XML output from nunit3-console.exe also uses descriptive names, e.g. fullname="MyTests.Fixture(Bar).Test":
<test-case id="0-1003" name="Test" fullname="MyTests.Fixture(Bar).Test" methodname="Test" classname="MyTests.Fixture" runstate="Runnable" result="Failed" ... >
<failure>
<message><![CDATA[]]></message>
<stack-trace><![CDATA[at MyTests.Fixture.Test() in ... ]]></stack-trace>
</failure>
...
</test-case>
One way to achieve this is to put find-and-replace tokens in the source code and dynamically build the test libraries before execution using command-line MSBuild. The high-level steps are:
Define test case names as sometest_TOKEN in source, then use a command-line tool such as fnr.exe to replace _TOKEN with whatever you like, for example sometest_build2145.
Compile the DLL using MSBuild, for example msbuild /t:Rebuild mytestproj.sln. Then execute all test cases in mytestproj.dll.
Related
I am very new to C# and NUnit. Please bear with me if this is basic and has already been asked here.
We have a global setup, defined by a [SetUpFixture] class, which is expected to run only once. The shared variables are assigned in its [SetUp] method. We wish to use the same variables in all our test fixtures, hence we inherit the test base class in all of them.
But while executing Testcase1, I observe that globalSetup() is called more than once. Can anyone point out the issue? Sample code is below.
using NUnit.Framework;

namespace CTB
{
    [SetUpFixture]
    public class Testbase
    {
        protected byte val1;
        protected byte val2;

        [SetUp]
        public void globalSetup()
        {
            val1 = 5;
            val2 = 10;
        }

        [TearDown]
        public void globalTeardown()
        {
            //
        }
    }
}

namespace CTB.Testcase
{
    public class TestCase : Testbase
    {
        [SetUp]
        public void Setup()
        {
        }

        [TearDown]
        public void Teardown()
        {
        }

        [Test]
        public void Testcase1()
        {
            byte val3 = (byte)(val1 + val2); // Expect 15
        }
    }
}
I'm assuming that the answer to my comment is "No" and that you are using a current version of NUnit 3. Please correct me if I'm wrong. :-)
You have made the class TestBase serve two functions:
It's the base class for your TestFixture and therefore it's a TestFixture itself.
It's marked as a SetUpFixture so it also serves that function - a completely different function, by the way.
To be clear, you should never do this. It's a sort of "trick" that almost seems designed to confuse NUnit - not your intention of course. Your test fixtures should have no inheritance relationship with any SetUpFixture. Use different classes for the test fixture base and the setup fixture.
With that out of the way, here is the longer story of what is happening...
Before your tests even execute, the SetUpFixture is first "run" - in quotes because it actually does nothing. That's because it doesn't contain any methods marked with [OneTimeSetUp] or [OneTimeTearDown].
NOTE: As an alternate explanation, if you are using a pretty old version of NUnit, the [SetUp] and [TearDown] methods are actually called at this point. NUnit V2 used those attributes with different meanings when encountered in a SetUpFixture versus a TestFixture.
Next your tests execute. Before and after each test, the inherited [SetUp] and [TearDown] methods are run. Of course, these are actually the same methods as in step 1. NUnit has been tricked into doing this!
Here is some general guidance for the future...
If you want multiple fixtures to use the same data, a base class is useful. Any public or protected fields or properties will be shared by the inheriting fixtures.
If you want to do some common setup or teardown for a group of unrelated test fixtures, use a SetUpFixture. Note that the only way to pass data from a SetUpFixture to the test fixtures is through static fields or properties. Generally, you use a SetUpFixture to set up the environment in which the test is run, not to provide data.
Never use the same class for both purposes.
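For illustration, here is a minimal sketch of that separation; the class names GlobalSetup and TestBase are hypothetical. The SetUpFixture prepares the environment once, while the plain base class only shares data with inheriting fixtures:
using NUnit.Framework;

namespace CTB
{
    [SetUpFixture]
    public class GlobalSetup              // hypothetical name; runs once for the namespace
    {
        [OneTimeSetUp]
        public void RunBeforeAnyTests()
        {
            // prepare the environment here (files, services, ...), not test data
        }

        [OneTimeTearDown]
        public void RunAfterAllTests()
        {
            // clean the environment up again
        }
    }

    public class TestBase                 // plain base class, NOT a SetUpFixture
    {
        protected byte val1;
        protected byte val2;

        [SetUp]
        public void BaseSetUp()           // runs before each test of every inheriting fixture
        {
            val1 = 5;
            val2 = 10;
        }
    }

    [TestFixture]
    public class TestCase : TestBase
    {
        [Test]
        public void Testcase1()
        {
            Assert.AreEqual(15, val1 + val2);
        }
    }
}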
Having a test class like this
import java.util.Arrays;
import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicContainer;
import org.junit.jupiter.api.DynamicNode;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestFactory;

public class VerySimpleFactory {

    @TestFactory
    public Stream<? extends DynamicNode> someTests() {
        DynamicContainer container1 = DynamicContainer.dynamicContainer("A",
                Arrays.asList(t("A1"), t("A2"), t("A3"), t("A4"), t("A5")));
        DynamicContainer container2 = DynamicContainer.dynamicContainer("B",
                Arrays.asList(t("B1"), t("B2"), t("B3"), t("B4"), t("B5")));
        DynamicContainer container3 = DynamicContainer.dynamicContainer("C",
                Arrays.asList(t("C1"), t("C2"), t("C3"), t("C4"), t("C5")));
        DynamicContainer container4 = DynamicContainer.dynamicContainer("D",
                Arrays.asList(t("D1"), t("D2"), t("D3"), t("D4"), t("D5")));
        return Arrays.asList(container1, container2, container3, container4).stream();
    }

    @Test
    public void t1() throws Exception {
        Thread.sleep(1000);
    }

    @Test
    public void t2() throws Exception {
        Thread.sleep(1000);
    }

    public DynamicTest t(String name) {
        return DynamicTest.dynamicTest(name, () -> Thread.sleep(1000));
    }
}
the tests having a @Test annotation are discovered instantly by the JUnit view, but the tests from the TestFactory are discovered at runtime, each one only after the previous test has completely executed. This leads to a changing and "jumping" JUnit view. Also, I cannot select a particular test I'm interested in and run it as a single test until all previous tests have executed.
It would be much nicer if all dynamic tests were shown in the JUnit view at the beginning of test execution as well.
If this doesn't happen, is it a problem of JUnit 5, Eclipse, or my code?
Dynamic tests are dynamic, not static.
It is not possible to know beforehand which and how many tests will be generated by a @TestFactory-annotated method ... in fact, it may produce tests in an endless loop.
Copied from https://junit.org/junit5/docs/current/user-guide/#writing-tests-dynamic-tests-examples
generateRandomNumberOfTests() implements an Iterator that generates
random numbers, a display name generator, and a test executor and then
provides all three to DynamicTest.stream(). Although the
non-deterministic behavior of generateRandomNumberOfTests() is of
course in conflict with test repeatability and should thus be used
with care, it serves to demonstrate the expressiveness and power of
dynamic tests.
I would like to be able to run tests on my fake repository (that uses a list) and my real repository (that uses a database) to make sure that both my mocked-up version and my actual production repository work as expected. I thought the easiest way would be to use TestCase:
private readonly StandardKernel _kernel = new StandardKernel();
private readonly IPersonRepository fakePersonRepository;
private readonly IPersonRepository realPersonRepository;

[Inject]
public PersonRepositoryTests()
{
    realPersonRepository = _kernel.Get<IPersonRepository>();
    _kernel = new StandardKernel(new TestModule());
    fakePersonRepository = _kernel.Get<IPersonRepository>();
}

[TestCase(fakePersonRepository)]
[TestCase(realPersonRepository)]
public void CheckRepositoryIsEmptyOnStart(IPersonRepository personRepository)
{
    if (personRepository == null)
    {
        throw new NullReferenceException("Person Repostory never Injected : is Null");
    }

    var records = personRepository.GetAllPeople();
    Assert.AreEqual(0, records.Count());
}
but it asks for a constant expression.
Attributes are a compile-time decoration for a code element, so anything that you put in a TestCase attribute has to be a constant that the compiler can resolve.
You can try something like this (untested):
[TestCase(typeof(FakePersonRepository))]
[TestCase(typeof(PersonRepository))]
public void CheckRepositoryIsEmptyOnStart(Type personRepoType)
{
    // do some reflection-based Activator.CreateInstance() stuff here
    // to instantiate the incoming type, e.g.:
    var personRepository = (IPersonRepository)Activator.CreateInstance(personRepoType);
}
However, this gets a bit ugly because I imagine that your two different implementations might have different constructor arguments. Plus, you really don't want all that dynamic type instantiation code cluttering the test.
A possible solution might be something like this:
[TestCase("FakePersonRepository")]
[TestCase("TestPersonRepository")]
public void CheckRepositoryIsEmptyOnStart(string repoType)
{
// Write a helper class that accepts a string and returns a properly
// instantiated repo instance.
var repo = PersonRepoTestFactory.Create(repoType);
// your test here
}
Bottom line is, the test case attribute has to take a constant expression. But you can achieve the desired result by shoving the instantiation code into a factory.
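For completeness, here is a rough, untested sketch of what such a factory helper might look like, reusing the Ninject kernel wiring from the question; the class name PersonRepoTestFactory and the string keys simply mirror the example above and are not an existing API:
using System;
using Ninject;

public static class PersonRepoTestFactory
{
    public static IPersonRepository Create(string repoType)
    {
        switch (repoType)
        {
            case "FakePersonRepository":
                // fake repo comes from the test bindings
                return new StandardKernel(new TestModule()).Get<IPersonRepository>();
            case "TestPersonRepository":
                // real repo comes from the production bindings
                return new StandardKernel().Get<IPersonRepository>();
            default:
                throw new ArgumentException("Unknown repository type: " + repoType);
        }
    }
}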
You might look at the TestCaseSource attribute, though that may fail with the same error. Otherwise, you may have to settle for two separate tests, which both call a third method to handle all of the common test logic.
I'm using the NUnit 2.5.3 TestCaseSource attribute and creating a factory to generate my tests. Something like this:
[Test, TestCaseSource(typeof(TestCaseFactories), "VariableString")]
public void Does_Pass_Standard_Description_Tests(string text)
{
    Item obj = new Item();
    obj.Description = text;
}
My source is this:
public static IEnumerable<TestCaseData> VariableString
{
    get
    {
        yield return new TestCaseData(string.Empty).Throws(typeof(PreconditionException))
            .SetName("Does_Reject_Empty_Text");
        yield return new TestCaseData(null).Throws(typeof(PreconditionException))
            .SetName("Does_Reject_Null_Text");
        yield return new TestCaseData(" ").Throws(typeof(PreconditionException))
            .SetName("Does_Reject_Whitespace_Text");
    }
}
What I need to be able to do is add a maximum-length check to the variable string, but this maximum length is defined in the contracts in the class under test. In our case it's a simple public struct:
public struct ItemLengths
{
    public const int Description = 255;
}
I can't find any way of passing a value to the test case generator. I've tried static shared values and these are not picked up. I don't want to save stuff to a file, as then I'd need to regenerate this file every time the code changed.
I want to add the following line to my testcase:
yield return new TestCaseData(new string('A', MAX_LENGTH_HERE + 1))
    .Throws(typeof(PreconditionException));
Something fairly simple in concept, but something I'm finding impossible to do. Any suggestions?
Change the parameter of your test to a class instead of a string, like so:
public class StringTest
{
    public string testString;
    public int maxLength;
}
Then construct this class and pass it as an argument to the TestCaseData constructor. That way you can pass the string and any other arguments you like.
Another option is to make the test take two arguments, a string and an int, and then use TestCaseData("mystring", 255). Did you realize it can take multiple arguments?
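As an untested sketch of that second option, reusing the names from the question and assuming the ItemLengths constant is visible to the test assembly, the source and test could look like this:
public static IEnumerable<TestCaseData> VariableString
{
    get
    {
        // the limit comes straight from the contract constant and travels
        // with the test data as a second argument
        yield return new TestCaseData(new string('A', ItemLengths.Description + 1), ItemLengths.Description)
            .Throws(typeof(PreconditionException))
            .SetName("Does_Reject_Too_Long_Text");
    }
}

[Test, TestCaseSource(typeof(TestCaseFactories), "VariableString")]
public void Does_Pass_Standard_Description_Tests(string text, int maxLength)
{
    Item obj = new Item();
    obj.Description = text;
}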
I faced a problem similar to yours and ended up writing a small NUnit addin and a custom attribute that extends the NUnit TestCaseSourceAttribute. In my particular case I wasn't interested in passing parameters to the factory method, but you could easily use the same technique to achieve what you want.
It wasn't all that hard and only required me to write something like three small classes. You can read more about my solution at: blackbox testing with nunit using a custom testcasesource.
PS. In order to use this technique you need at least NUnit 2.5. Good luck.
We've got some integration tests in our solution. To run these tests, simulation software must be installed on the developer PC. This software is, however, not installed on every developer PC. If the simulation software is not installed, these tests should be skipped; otherwise they fail with a NullReferenceException.
I'm now looking for a way to do a "conditional ignore" for tests/test fixtures.
Something like
if(simulationFilesExist)
do testfixture
else
skip testfixture
NUnit gives some useful things like Ignore and Explicit, but they're not quite what I need.
Use some code in your test or fixture set up method that detects if the simulation software is installed or not and calls Assert.Ignore() if it isn't.
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
or
[TestFixtureSetUp]
public void FixtureSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting fixture.");
    }
}
In NUnit 3.0 and higher you have to use the OneTimeSetUp attribute instead of TestFixtureSetUp.
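For example, the NUnit 3 version of the fixture-level check would look like this (same TestHelper as above):
[OneTimeSetUp]
public void FixtureSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting fixture.");
    }
}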
NUnit also gives you the option to supply a Category attribute.
Depending on how you are launching your tests, it may be appropriate to flag all the tests that require the simulator with a known category (e.g., [Category("RequiresSimulationSoftware")]). Then from the NUnit GUI you can choose to exclude certain categories. You can do the same thing from the NUnit command line runner (specify /exclude:RequiresSimulationSoftware if applicable).
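As a small sketch of that approach (the fixture name is made up; the category string matches the one above):
[TestFixture]
[Category("RequiresSimulationSoftware")]
public class SimulatorDependentTests
{
    [Test]
    public void Works_Against_Simulator()
    {
        // test code that needs the simulation software installed
    }
}
Everything in that category can then be left out of a run with the /exclude:RequiresSimulationSoftware option mentioned above, or by unticking the category in the GUI.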
I didn't want to duplicate the Assert.Ignore condition in every test case, so I ended up using a custom attribute class, which I derived from the NUnitAttribute class:
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
    public void ApplyToTest(Test test)
    {
        if (test.RunState == RunState.NotRunnable)
        {
            return;
        }

        if (!Helper.RunsOnSimulator)
        {
            test.RunState = RunState.Ignored;
            test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
        }
    }
}
So now I can just mark required test cases with the new attribute:
[SimulatorOnly]
public void Test()
For reference, you could investigate the source code of the IgnoreAttribute.
Use:
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
You can put this type of condition in a [TestFixtureSetUp] method, but if the fixture has parameterized tests and you try to ignore them from there, the run ends up in an infinite loop and your tests hang. So it is better to use the [SetUp] attribute for the condition.
There are a lot of ways to alter the result status of a test. Here are a few, along with ways to read the various statuses back out:
// Ways to alter the result status of the current test:
TestExecutionContext.CurrentContext.CurrentTest.MakeInvalid("I want this test to be SKIPPED");

ResultState resultStateObject = new ResultState(TestStatus.Skipped);
TestExecutionContext.CurrentContext.CurrentResult.SetResult(resultStateObject, "this test is being skipped derp derp");

TestExecutionContext.CurrentContext.CurrentTest.RunState = RunState.Ignored;
Logger.log("After doing things");

// Ways to read the various statuses back out:
string resultstate = TestExecutionContext.CurrentContext.CurrentResult.ResultState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State: " + resultstate);

string resultstatestatus = TestExecutionContext.CurrentContext.CurrentResult.ResultState.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State Status: " + resultstatestatus);

string runstate = TestExecutionContext.CurrentContext.CurrentTest.RunState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Run State: " + runstate); //test="#runstate = 'Skipped' or #runstate = 'Ignored' or #runstate='Inconclusive'

string status = TestContext.CurrentContext.Result.Outcome.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result Status: " + status);

string message = TestExecutionContext.CurrentContext.CurrentResult.Message.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Message: " + message);