Show only specific Tests or TestFixtures in NUnit via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers" and they don't all need to see every test. For example, I have engineers using low-level tests, and I have a QA department that is using higher-level tests. If I could have a configuration (XML?) file that I distributed with the dll, that would be ideal. Can someone point me to the documentation and an example? I searched the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be to have a configuration file that can apply [Ignore] to various tests or test fixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave

Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly; in that case, you may wish to break up the test assembly. Here is the documentation on the NUnit ProjectEditor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10
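A minimal sketch of what such a project file could look like (the assembly names and config names here are hypothetical; you would hand each customer the file whose active config matches what they should see):

    <NUnitProject>
      <!-- Each config lists only the assemblies that customer should run -->
      <Settings activeconfig="QA" />
      <Config name="Engineering">
        <assembly path="LowLevelTests.dll" />
        <assembly path="HighLevelTests.dll" />
      </Config>
      <Config name="QA">
        <assembly path="HighLevelTests.dll" />
      </Config>
    </NUnitProject>

Save this as, say, CustomerTests.nunit and load that in the runner instead of the raw dll.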

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements-coverage measurement. All my searches for coverage with pytest end up at line coverage for Python, and that is not what I want.
Actually, I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do is to add an empty test case in pytest and mark it as skipped (or as not yet tested).
After running the tests, a report could be generated giving the total coverage, with links. This would require the information being dumped into the junit.xml file.
This way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers which could help do this, or of projects that have had a similar idea?
We are using a marker which we created ourselves:
@pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script, using some pytest hooks: collecting the marker, checking Jira via the Python API, and creating metrics out of it (we are using Testspace). A sketch of the hooks is below.
We have not found a way to add this marker to the junit report.
Linking to Jira can be done in different ways:
- use Sphinx docs and link the Jira ID automatically from the test case description to Jira
- use the Python script, which analyzes the requirements coverage and creates a link to Jira.
Hope that helps.
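A minimal sketch of how the self-registered reqcov marker and the hook-based collection described above might look (the summary printing at the end is a simplified stand-in for the real Jira/Testspace script):

    # conftest.py
    import pytest

    def pytest_configure(config):
        # Register the custom marker so pytest does not warn about it.
        config.addinivalue_line(
            "markers", "reqcov(req_id): link a test to a requirement, e.g. a Jira issue"
        )

    def pytest_collection_modifyitems(config, items):
        # Map each requirement ID found in a reqcov marker to its tests.
        coverage = {}
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker:
                coverage.setdefault(marker.args[0], []).append(item.nodeid)
        config._reqcov = coverage  # stashed for the summary hook below

    def pytest_terminal_summary(terminalreporter, exitstatus, config):
        # Print a simple coverage summary; a real script would query Jira here.
        terminalreporter.write_sep("-", "requirements coverage")
        for req, tests in sorted(getattr(config, "_reqcov", {}).items()):
            terminalreporter.write_line("%s: %d test(s)" % (req, len(tests)))

And a test linked to a requirement:

    # test_features.py
    import pytest

    @pytest.mark.reqcov("JIRA-123")
    def test_feature_xyz():
        assert True  # placeholder for the real check against the embedded target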

How to make NUnit stop executing tests in a Class on a failure

Is there any way to make NUnit abort running any more tests in a class when it encounters the first error? I'm running integration tests, using the [Order] attribute. These tests can get quite lengthy, and in certain cases there's no need to continue running tests in the class if one of the tests fails. I want NUnit to continue on to other classes, but I want it to abort running any more tests in that specific class.
Is there any way to do this?
I'm using NUnit 3.2
Thanks!
Buzz
There isn't currently any way to stop execution of tests within a class on the first error. There is only the command-line option --stoponerror, which stops running all tests on the first error.
The Order attribute is new in 3.2. Unlike in other frameworks, it was only intended to order your tests, not to set up dependencies between them. There is an open enhancement on GitHub for a Test Dependency Attribute, but it hasn't been worked on because people cannot come to a consensus on the design. I think a good first step would be a variation of the Order attribute with dependencies, but some people want a full dependency graph. I would recommend that you head over to GitHub and comment on the issue with your requirements.
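A common workaround, though it is not built into NUnit, is to record the failure yourself in a TearDown and skip the remaining tests in the fixture from SetUp. A minimal sketch (fixture and test names are hypothetical):

    using NUnit.Framework;
    using NUnit.Framework.Interfaces;

    [TestFixture]
    public class OrderedIntegrationTests
    {
        // NUnit creates one fixture instance for all tests in the class,
        // so this flag survives between the ordered tests.
        private bool _earlierTestFailed;

        [SetUp]
        public void SkipIfEarlierTestFailed()
        {
            if (_earlierTestFailed)
                Assert.Ignore("Skipping: an earlier test in this fixture failed.");
        }

        [TearDown]
        public void RecordFailure()
        {
            if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
                _earlierTestFailed = true;
        }

        [Test, Order(1)]
        public void Step1_Login() { /* ... */ }

        [Test, Order(2)]
        public void Step2_CreateRecord() { /* ... */ }
    }

The ignored tests show up as skipped rather than failed, and NUnit moves on to the next fixture as usual.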

How to conditionally exclude a scenario in cucumber

I am trying to exclude scenarios programmatically in Cucumber. Test cases are OS-dependent in my case: say the underlying OS is Windows, I would like to skip certain scenarios. After some research on Google, I found out that there is a place where you can hook up this logic in Ruby, i.e. AfterConfiguration. However, I am not able to find where I can hook this up to Cucumber through Scala.
I am also aware that it is not good practice to exclude scenarios but I have no choice.
First, add tags for the OS-dependent scenarios (this can be done at the feature-file level by putting the tag at the top of the file). Note that tags start with @, not # (a # starts a comment in Gherkin):
@windows8
Scenario: Seeing extra feature XYZ in Windows 8
Then use Cucumber options that only run the tags for that OS, or that exclude the tags for the other OS. If you are using mvn, it might look like this:
mvn clean install -Dcucumber.options="--tags @windows8"
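To invert this and skip the Windows-specific scenarios instead, you can negate the tag; the exact syntax depends on your Cucumber version (~ is the older cucumber-jvm negation, "not" is the newer tag-expression form):

    mvn clean install -Dcucumber.options="--tags ~@windows8"
    mvn clean install -Dcucumber.options="--tags 'not @windows8'"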

Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce

I am automating deployment and CI to our Salesforce orgs using Ant. In my build XML, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage on a per-file basis rather than allowing 75% based on the total code base. Some of our old files do not have that level of coverage, and I am trying not to have to go back and create a ton of new tests for old software.
It seems like Salesforce is doing the code coverage based on the quickdeploy model, rather than the aggregate.
Does anyone know a way I can tell Salesforce not to use the quickdeploy model (if that is what it is doing)? I have checked the Migration Tool docs, but don't see anything.
Thanks...
Have you tried setting the attribute runAllTests="true" inside the sf:deploy task, rather than listing each test out?
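For reference, a minimal sketch of what that might look like in the build XML (the target name and property names are placeholders):

    <target name="deployAndTest">
      <!-- runAllTests="true" replaces an explicit list of <runTest> elements,
           asking the server to run every test in the org -->
      <sf:deploy username="${sf.username}"
                 password="${sf.password}"
                 serverurl="${sf.serverurl}"
                 deployRoot="${sf.deployRoot}"
                 runAllTests="true"/>
    </target>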

Writing Logs/results or generating reports using Selenium C# API

I am testing a web application using Selenium RC. Everything works fine; I have written many test cases and execute them using NUnit.
Now the hurdle I am facing is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.
Because you're using NUnit, you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line or NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt, you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them against the XML output.
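For example, a minimal NAnt target producing the XML results (the assembly path and output directory are placeholders):

    <target name="test">
      <nunit2>
        <!-- write the test results as XML files into the results directory -->
        <formatter type="Xml" usefile="true" extension=".xml" outputdir="results" />
        <test assemblyname="build\MyTests.dll" />
      </nunit2>
    </target>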
Selenium itself doesn't produce reports, because it is only a library used from many different languages.
For anyone else happening randomly onto this question, the nunit2report task is now available in NAntContrib.
NAntContrib nunit2report task
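A sketch of how that task might be wired up against the XML results (the file names are placeholders, and the exact attributes may vary by NAntContrib version):

    <nunit2report out="results\report.html">
      <fileset>
        <include name="results\*-results.xml" />
      </fileset>
    </nunit2report>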