Adjust test name in NUnit

We're running our test suite twice for different configurations. Our CI server aggregates the runs into a single report, which doesn't work properly because every test name appears twice in the NUnit report file.
Is there a way to adjust all test names, e.g., by adding a prefix, depending on a configuration value? In other words: is it possible to dynamically prepend a prefix to all the test names (e.g., in the SetUpFixture or something like that)?

Ah, found it. The console runner has an option for this:
--test-name-format="MyPrefix_{m}{a}"
See https://github.com/nunit/docs/wiki/Console-Command-Line and https://github.com/nunit/docs/wiki/Template-Based-Test-Naming for more information.
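For example, invoking the runner once per configuration might look like this (the assembly name and prefixes here are illustrative):

nunit3-console.exe MyTests.dll --test-name-format="ConfigA_{m}{a}" --result=results-configA.xml
nunit3-console.exe MyTests.dll --test-name-format="ConfigB_{m}{a}" --result=results-configB.xml

{m} expands to the method name and {a} to the argument list, so each configuration's tests end up with unique names in the aggregated report.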

How to control the WebTau report

When I first used WebTau it would produce a report file in the project directory where the WebTau tests were being run. Tests that use WebTau are no longer generating a report.
Is there a way to (manually) control when the report is produced?
Can I specify an output directory for this report?
There is also the question of how I managed to change things so that the report is no longer being generated automatically. I have moved some common code patterns into a TestingSupport project, because I found a bunch of tests were almost cut-and-paste at this early stage. There's no report file appearing in another directory though.
Another potential explanation is that I commented out the @WebTau annotation on my test class. I did that when I moved REST calling patterns to a library class. Everything works fine, but of course there's no report. I'm guessing that is going to be a clue.
The @WebTau annotation is essential for JUnit 5 to generate the report. You need to annotate any class that you want to participate in the report.
Alternatively, you can register the global WebTau extension org.testingisdocumenting.webtau.junit5.WebTauJunitExtension following this JUnit 5 guide: https://www.baeldung.com/junit-5-extensions#1-automatic-extension-registration
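A sketch of that automatic registration, assuming standard JUnit 5 extension mechanics (file paths shown are the usual Maven/Gradle test-resource locations):

# src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension
org.testingisdocumenting.webtau.junit5.WebTauJunitExtension

# src/test/resources/junit-platform.properties
junit.jupiter.extensions.autodetection.enabled=true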
Use the reportPath config value to change the report location.
It can be specified using the config file, a system property override, or an environment variable.
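A minimal sketch of an annotated test class, assuming the @WebTau annotation lives alongside the extension in webtau-junit5 (the class and method names are illustrative):

import org.junit.jupiter.api.Test;
import org.testingisdocumenting.webtau.junit5.WebTau;

@WebTau  // without this annotation (or the globally registered extension) the class is left out of the report
class UserApiTest {
    @Test
    void fetchesUser() {
        // test body using the WebTau DSL (http., browser., ...)
    }
}

Assuming the system property override route, the report location could then be changed at launch time with something like -DreportPath=build/reports/webtau-report.html.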

Is it possible to exclude the test parameters from being included in the NUnit 3 test result XML?

We use NUnit 3 to run deployment smoke tests with parameters read from VCS. It works like this:
PowerShell reads the parameters from VCS and composes the NUnit 3 console command line.
The NUnit 3 console runs the tests.
Some parameters are passwords.
The problem is that the resulting test result XML lists all the test parameters, including the passwords.
Is it possible to instruct NUnit 3 somehow to avoid including the test parameters in the test result XML?
There is no feature like that, because the assumption is that you would not pass in anything that needs to be kept secure.
Why is that? Imagine there were an argument like --secret:password=XXX which worked like a parameter but was not displayed in the results. In that case, your password would still be in the clear in your script, for anyone to read. It would also be available to any test, which could do whatever it wanted with it, like write it somewhere.
A better approach is to use some sort of encryption, so that you are only passing in a key, which is not usable except by an account or program that knows how to decrypt it. There are various approaches to doing this, depending on how you are running tests. I believe you will find that VCS has a way of encrypting passwords that you may be able to use.
In any case, without such a "secret" option, the only way to avoid publishing the password is to create your own output format by writing an engine result writer extension. Your extension code receives the entire NUnit 3 output document, and you can modify it to remove the passwords before saving the file.
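Short of writing a full result writer extension, a pragmatic variant of the same idea is to post-process the XML after the run. A rough PowerShell sketch (the XPath and node names are assumptions that vary by NUnit version, so inspect your own TestResult.xml first):

# Load the result file produced by nunit3-console
[xml]$doc = Get-Content -Path .\TestResult.xml
# Remove <setting> entries whose name suggests test parameters
$doc.SelectNodes("//settings/setting[contains(@name, 'TestParameters')]") |
    ForEach-Object { [void]$_.ParentNode.RemoveChild($_) }
# Save using an absolute path; XmlDocument.Save resolves relative paths against the process directory
$doc.Save((Resolve-Path .\TestResult.xml).Path)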

Protractor - how to reuse the same spec file for different tests

In my Protractor conf.js file, I'd like to re-use the same spec files multiple times; however, it seems that this is not possible.
Some background:
We are reading test cases from a JSON file, launching reports, then testing grid results and various DOM elements.
All reports have the same format. The primary differences lie in the report titles, data columns, actual data results, etc.
So in my conf.js file, ideally I'd like to re-use the same spec files multiple times - but my understanding is that I cannot do this.
For example, my spec array:
specs: [
'spec/report1-spec.js',
'spec/report-grid-details-spec.js',
'spec/report2-spec.js',
'spec/report-grid-details-spec.js',
'spec/report3-spec.js',
'spec/report-grid-details-spec.js',
]
I've read this post (http://ramt.in/how-to-run-identical-jasmine-specs-multiple-times-with-protractor/) where you can move your spec files into a node module, but 1) I don't want to move all my spec files there, and 2) it doesn't work anyway when I move even one spec file into a module-export file.
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
Just wondering if anyone out there has found a solution to this need to re-use spec files multiple times in one conf.js configuration.
Thank you,
Bob
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
This would probably be the easiest way to approach the problem. Though I like the idea of putting specs into modules - it is a plus for reusability overall.
The thing is, Jasmine does not allow executing the same spec file twice in a single test run, and from what I understand there is no easy way to change that behavior.
One possible workaround is to completely restart Protractor and hence recreate the Jasmine testing environment, so that the next report-grid-details-spec.js runs in a fresh Jasmine environment. This is what the protractor-flake project does to retry failing tests: it restarts Protractor through the command line, passing the failing specs as a comma-separated list to the specs argument.
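For completeness, the module approach from the linked post boils down to exporting the shared expectations as a function and calling it once per report. A minimal sketch (file names and selectors are illustrative):

// report-grid-details.js - shared checks wrapped in a function instead of a top-level describe
module.exports = function reportGridDetails(reportTitle) {
  describe('grid details: ' + reportTitle, function () {
    it('renders the results grid', function () {
      expect(element(by.css('.report-grid')).isPresent()).toBe(true);
    });
  });
};

// report1-spec.js - each report spec pulls the shared checks in
var reportGridDetails = require('./report-grid-details');

describe('report 1', function () {
  it('opens the report', function () {
    browser.get('/reports/1');
  });
});
reportGridDetails('report 1');

Each call registers a fresh describe block under a distinct name, so the same checks can run several times within a single Protractor run.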

Running a suite of pytest tests on multiple objects

As a small part of a much larger set of tests, I have a suite of test functions I want to run on each of a list of objects. Basically, I have a set of plugins, and a set of "plugin tests".
Naively, I can just make a list of test functions that take a plugin argument, and a list of plugins, and have a test where I call all of the former on all of the latter. But ideally, each test/plugin combo would appear as an individual test in the results.
Is there already a nicer/standardized way of doing something like this in pytest?
Check out pytest's documentation on parametrization (https://pytest.org/latest/parametrize.html).
It's a mechanism for running the same test a number of times with different parameters -- it sounds like just what you want. It generates tests that run individually, and they have nice output and reporting.
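A minimal sketch of that approach (PluginA, PluginB, and load() are illustrative stand-ins for your plugin objects):

import pytest

class PluginA:
    def load(self):
        return "loaded-a"

class PluginB:
    def load(self):
        return "loaded-b"

# Each (test, plugin) combination becomes its own test item, reported
# e.g. as test_plugin_load[PluginA] and test_plugin_load[PluginB].
@pytest.mark.parametrize("plugin", [PluginA(), PluginB()], ids=["PluginA", "PluginB"])
def test_plugin_load(plugin):
    assert plugin.load() is not None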

Is it possible to run NUnit against a specific (long) list of tests

I have a list of several thousand NUnit tests that I want to run, generated automatically by another tool. (This is a subset of all the tests, and it changes frequently.)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes a direct list of tests, which in my case would not fit on a single command line. I'd like it to pick up the list from a file.
I appreciate that I could use categories, but the list I want to run changes frequently, so I'd prefer not to have to keep changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-Console with a full command line, but that's not very elegant.)
(If it's not possible, maybe I should add it as an NUnit feature request.)
I had a reply from Charlie Poole (of the NUnit development team) that this is not currently possible, but it has been added as a feature request for NUnit 2.6.
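For readers on NUnit 3: as far as I know, the console runner later gained exactly this capability via the --testlist option, which reads fully qualified test names from a file, one per line:

nunit3-console tests.dll --testlist=tests-to-run.txt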
I see what you're saying, but as you say, you can run a single fixture from the command line.
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or placing them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link, we need to supply the scenario/test case name. It is simple, but there is a bit of a trick to it. Directly giving the test case name will not serve the purpose; you will end up with 0 test cases executed. We need to write the exact path to it.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Click on featureFileName.feature.cs and look for the namespace; keep it aside (Part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will find the class name. Note that as well and keep it aside (Part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the code that NUnit actually understands for execution:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below that code you can see the method name, which will most likely be TC_1_ABCD(certain parameters):
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, one per scenario in your feature file. Note the method (test case) you want to execute and keep it aside (Part 3).
Now collate all the parts with dots. You will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build test case names from multiple feature files and stack them in a text file, one test case name per line. For the command itself, browse the NUnit link above for execution from the command prompt.
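For example, assuming the NUnit 3 console runner and an assembly named MMBank.Test.dll (both illustrative), a single collated name can be run with --test, and a text file of names can be passed with --testlist:

nunit3-console MMBank.Test.dll --test=MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
nunit3-console MMBank.Test.dll --testlist=testcases.txt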