How to make NUnit stop executing tests in a Class on a failure - nunit

Is there any way to make NUnit abort running any more tests in a class when it encounters the first error? I'm running integration tests, using the [Order] attribute. These tests can get quite lengthy, and in certain cases there's no need to continue running tests in the class if one of them fails. I want NUnit to continue on to other classes, but I want it to abort calling any more tests in that specific class.
Is there any way to do this?
I'm using NUnit 3.2
Thanks!
Buzz

There isn't currently any way to stop execution of tests within a class on the first error. There is only the command line option --stoponerror, which stops running all tests on the first error.
The Order attribute is new in 3.2. Unlike similar attributes in other frameworks, it is only intended to order your tests, not to set up dependencies between them. There is an open enhancement request on GitHub for a test dependency attribute, but it hasn't been worked on because people cannot come to a consensus on the design. I think a good first step would be a variation of the Order attribute with dependencies, but some people want a full dependency graph. I would recommend that you head over to GitHub and comment on the issue with your requirements.

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements coverage measuring. All my searches on coverage for pytest end up in line coverage for Python code, and that is not what I want.
Actually I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do is to add an empty test case in pytest and mark it as skipped (or otherwise as not tested).
After running the tests, a report could be generated showing the total coverage, with links. This would require the information to be dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers which could help do this? Or of any projects based on a similar idea?
We are using a marker which we created ourselves:
@pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script.
It uses some pytest hooks to collect the markers, checks Jira via its Python API, and creates metrics out of it (we are using Testspace).
We have not found a way to add this marker to the JUnit report.
Linking to Jira can be done in different ways:
- using Sphinx documentation and linking the Jira ID automatically from the test case description to Jira
- using the Python script which analyzes the requirements coverage to create a link to Jira.
Hope that helps
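The workflow described in this answer (a custom reqcov marker gathered via pytest hooks, then turned into a coverage report) might be sketched roughly like this. The reqcov marker name comes from the answer itself; the JIRA_BASE_URL constant and the coverage_report() helper are hypothetical illustrations, not part of the original script:

```python
# conftest.py -- rough sketch, not the original script.
# JIRA_BASE_URL and coverage_report() are hypothetical additions.
JIRA_BASE_URL = "https://example.atlassian.net/browse/"  # hypothetical

requirement_map = {}  # Jira issue id -> list of covering test node ids

def pytest_configure(config):
    # register the custom marker so pytest does not warn about it
    config.addinivalue_line(
        "markers", "reqcov(issue): link a test to a Jira requirement"
    )

def pytest_collection_modifyitems(session, config, items):
    # gather every @pytest.mark.reqcov("JIRA-123") marker at collection time
    for item in items:
        for mark in item.iter_markers(name="reqcov"):
            requirement_map.setdefault(mark.args[0], []).append(item.nodeid)

def coverage_report(requirements, covered):
    """Build a plain-text requirements-coverage report with Jira links."""
    lines = []
    hit = 0
    for req in requirements:
        tests = covered.get(req, [])
        if tests:
            hit += 1
        lines.append(f"{req}: {len(tests)} test(s) -> {JIRA_BASE_URL}{req}")
    pct = 100.0 * hit / len(requirements) if requirements else 0.0
    return pct, lines
```

A test would then be marked with @pytest.mark.reqcov("JIRA-123"), and after the run the collected requirement_map can be fed into coverage_report() along with the full requirements list to find uncovered requirements.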

Is there a nunit setup attribute equivalent that could be added for some tests in xunit?

In NUnit, when I had to do setup for some tests, I would use the [SetUp] attribute.
In xUnit there are fixtures, but they run either before each test or once before all tests; there's no option to run setup for some but not all tests.
This question is purely for my personal educational purposes.
The above can easily be solved by simply making a SetUp() method and calling it inside the tests where it is needed, but I was wondering if there is a 'proper' way to do this.

Show only specific Tests or TestFixtures in NUNIT via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers", and they don't all need to see every test. For example, I have engineers using low-level tests, and a QA department that is using higher-level tests. If I could have a configuration (XML?) file that I distributed with the DLL, that would be ideal. Can someone point me to documentation and an example? I did search the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be to have a configuration file that can apply Ignore to various tests or test fixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave
Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly; if that is the case, you may wish to break up the test assembly. Here is the documentation on the NUnit ProjectEditor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10

xUnit framework: Equivalent of [TestFixtureSetUp] of NUnit in XUnit?

What is xUnit's equivalent of NUnit's [TestFixtureSetUp]?
We have explored and found that IUseFixture<T> is the equivalent of [TestFixtureSetUp], but it's not working as expected.
As we have explored (in the case of NUnit), [TestFixtureSetUp] marks a method whose code is executed only once, before any test in the fixture has been run. In xUnit, the equivalent of [TestFixtureSetUp] appeared to be IUseFixture<T>, but during testing we found that the SetFixture method of IUseFixture is called before every test (not only once for all methods).
Please let us know how we can achieve the above in xUnit. Also correct us if we are misunderstanding something. Thanks.
There is no direct equivalent of [TestFixtureSetUp] in XUnit, but you can achieve similar functionality. This page lays out the translation between NUnit and XUnit (as well as a couple other C#/.NET test frameworks). However, XUnit largely got rid of setups/teardowns (this article explains why that decision was made). Instead, you need the test suite to implement an interface called IUseFixture<T> which can initialize some data for the fixture.
You might also want to read this overview of XUnit, written from the perspective of somebody coming from an NUnit/MbUnit background.

Noshadow option for nunit-console

I have the following question:
What are the advantages and disadvantages of running nunit-console with the /noshadow option?
Your comments will be very helpful.
Thanks
The main issue I've found with /noshadow is that it stops your project from building, as NUnit is now forced to use and lock your DLL. If you leave shadow copying enabled (i.e. omit this option), NUnit creates a copy of your DLL and works against that instead.
If you are trying to practice TDD and are constantly building the project in the Red, Green, Refactor cycle, then you can't easily use /noshadow. You will get an error message like:
The process cannot access the file 'bin\Debug\calculator.dll' because it is being used by another process.
There are probably ways around this, but that's the main problem I've found.
As for when you would use this: I think the main reason is to speed up performance, but as most true unit tests run really quickly, I'm not sure when you would really need this. I'm sure other people will come up with some good examples.
If you happen to rely on anything that uses a file location in your tests, say for some curious assembly loading process, or just something as simple as Assembly.GetExecutingAssembly().Location, then you're likely to hit problems because NUnit has copied your file to some other location than the build location.
I'd say that typically these problems can be avoided though -- especially if you avoid touching the filesystem in your unit tests.
A quick warning: the Gradle plugin for NUnit has changed how to specify shadow-copy options. It took me a while to find this, so I'm posting it here in case it can help someone else.
noShadow is replaced by shadowCopy, which defaults to false; that is, the name has changed and the sense/direction of it is the opposite. This was apparently done to match more closely what NUnit 3 does. You can read the details in the plugin changelog at https://github.com/Ullink/gradle-nunit-plugin/blob/master/CHANGELOG.md