How to use "--where" parameter in NUnit 3 console? - nunit

NUnit 3 has "--where" parameter in console that allows us to select different tests to run. It can include different namespaces or test categories.
I want (but don't know how) to include some namespaces to run tests. I have specific examples and I ask you for help.
Let's assume we have the next namespaces with tests:
Project.MainSuite (includes 1 tests)
Project.MainSuite.Category1 (has 2 tests)
Project.MainSuite.Category1.TestSuite1 (has 3 tests)
How to run the next tests using --where parameter:
Tests only from Project.MainSuite.Category1 (2 tests should be run)
Tests from Project.MainSuite.Category1 and Project.MainSuite.Category1.TestSuite1 together (5 tests should be run)
All tests from Project.MainSuite including sub-namespaces (6 tests should be run)
Thanks in advance for your help.

I recently ran into a similar issue and wanted to get a solid answer for this.
The short answer to your question is that you cannot do what you are asking without being more explicit.
When you run your tests with a where clause of --where "test == Project.MainSuite" (the highest namespace in your project), it will run all of the tests in that namespace and all sub-namespaces.
If you run your tests with a where clause of --where "test == Project.MainSuite.Category1.TestSuite1" (the lowest sub-namespace in Project.MainSuite), it will run only the tests inside that namespace.
You can do a few things to get what you are trying to accomplish.
1. Tests only from Project.MainSuite.Category1
--where "class == Project.MainSuite.Category1.ClassWithTests"
Just be explicit about the classes inside this namespace.
Or, if you are worried about adding more tests to this namespace later and don't want to keep updating the script, you can add Category attributes to the suites/tests in this namespace and run them based on that category.
--where "cat == TestsInCategory1Namespace"
2. Tests from Project.MainSuite.Category1 and Project.MainSuite.Category1.TestSuite1 together
For this scenario you can combine the category clause with a test clause, joined with or so that both groups are selected.
--where "cat == TestsInCategory1Namespace or test == Project.MainSuite.Category1.TestSuite1"
3. All tests from Project.MainSuite including sub-namespaces
--where "test == Project.MainSuite"
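For reference, full console invocations for the three scenarios might look something like this (a sketch, assuming the tests are built into an assembly called Project.Tests.dll and you are using the NUnit 3 console runner; adjust the names to your setup):
nunit3-console.exe Project.Tests.dll --where "cat == TestsInCategory1Namespace"
nunit3-console.exe Project.Tests.dll --where "cat == TestsInCategory1Namespace or test == Project.MainSuite.Category1.TestSuite1"
nunit3-console.exe Project.Tests.dll --where "test == Project.MainSuite"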

Does this help? See the NUnit Test Selection Language docs.
This should work for categories: --where "cat == SmokeTests" --noresult
And this for a namespace combined with a category: --where "test == 'My.Namespace' and cat == Urgent"

Related

How to use pytest reuse-db correctly

I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes, and I am not surprised.
But when I run pytest --reuse-db a second time, I expect the database not to be destroyed, and therefore I expect the test to fail because Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db does not mean that data created by your tests is kept around; it only tells pytest-django to reuse the test database (the schema) between runs instead of dropping and re-creating it.
Each test marked with django_db runs inside a transaction that is rolled back when the test finishes, so the Student row is gone by the next run and the count is 1 both times.
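As a quick check of this behaviour (a sketch; both flags come from pytest-django): run the test twice with --reuse-db and the schema is kept but the count stays at 1, and if you ever need the test database rebuilt from scratch (for example after a model change), there is a --create-db flag for that:
pytest --reuse-db
pytest --create-db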

How to run a sub-set of TestCases using --where for nunit

For my project I want to run the exact same test cases twice: once locally and once, in parallel, on a different VM in the cloud (Azure in my case).
I duplicated the TestCase and tagged one Category("Local") and the other Category("Cloud").
Running nunit3 from the console with --where="cat == Cloud" will thus run all TestCases of every test method that has one or more TestCases tagged with Category("Cloud").
Is there a different way of running only selected TestCases via a command-line switch?
Simplified example:
[TestCase(TestName = "Canary, Run in cloud."), Category("Cloud")]
[TestCase(TestName = "Canary, Run locally."), Category("Local")]
public void Canary()
{
    Assert.True(true);
}
Found a work-around.
Pass --params:Cloud=true as a command-line argument and, in the code, check it with:
private bool ShallRunInCloud => TestContext.Parameters["Cloud"]?.ToLowerInvariant() == "true";
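A full run might then look something like this (a sketch; the assembly name is assumed, and --params is the documented way to pass runner parameters to nunit3-console):
nunit3-console.exe MyTests.dll --params:Cloud=true
Inside the cloud-only test cases you can then check ShallRunInCloud (for example with NUnit's Assume.That) so they are skipped as inconclusive when the parameter is not set.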

How to use the xUnit trait flag with SpecFlow tests?

I am attempting to run my SpecFlow/xUnit tests on the command line, as described here:
http://gasparnagy.com/2016/02/running-specflow-scenarios-in-parallel-with-xunit-v2/
If I type this:
.\packages\xunit.runner.console.2.3.1\tools\net452\xunit.console.exe --help
One of the flags described is this:
-trait "name=value" : only run tests with matching name/value traits
: if specified more than once, acts as an OR operation
I have a SpecFlow scenario with the tag @justthisone which I would like to run on its own. The Visual Studio test explorer lists this as having the trait Category [justthisone]. I have tried this:
.\packages\xunit.runner.console.2.3.1\tools\net452\xunit.console.exe .\MyProj.Tests\bin\Debug\MyProj.Tests.dll -trait "name=justthisone"
But I get this output:
=== TEST EXECUTION SUMMARY ===
Order.UserInterface.Tests.dll Total: 0
How should I write the -trait flag/option to tell xUnit which tests I want to run?
Turns out I just had to specify the correct trait name, Category:
.\packages\xunit.runner.console.2.3.1\tools\net452\xunit.console.exe .\MyProj.Tests\bin\Debug\MyProj.Tests.dll -trait "Category=justthisone"
As shown here:
https://github.com/techtalk/SpecFlow/issues/938
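Note also that, per the help text above, repeating -trait acts as an OR, so you could select several tagged scenarios in one run (a sketch; the second tag name is just a placeholder):
.\packages\xunit.runner.console.2.3.1\tools\net452\xunit.console.exe .\MyProj.Tests\bin\Debug\MyProj.Tests.dll -trait "Category=justthisone" -trait "Category=someothertag"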

How to rename a test in pytest based on a fixture param

I need to run the same tests on different devices. I used a fixture to supply the devices' IP addresses, and the tests run once for each IP the fixture provides. At the same time, I need the test name to include the IP address so I can analyze results quickly. In the pytest results the test name is the same for all params; only in the log or output can you see which parameter was used. Is there any way to change the test name by appending the fixture param to it?
class TestClass:
    def test1(self):
        pass
    def test2(self):
        pass
We need to run the whole test class for every device, with all test methods in sequence for each device. We cannot run each test in its own parameter cycle; we need to run the whole test class in a parameter cycle. We achieved this with a fixture implementation, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name
I could change the pytest test name by creating a hook in a conftest.py file.
However, I had to use pytest's private variables, so my solution could stop working when you upgrade pytest.
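If you would rather stay on public APIs, pytest also exposes the pytest_make_parametrize_id hook, which lets a conftest.py control the id that gets appended to the test name for parametrized values (including fixture params). A minimal sketch, assuming a hypothetical fixture named device_ip that supplies the IP strings:
# conftest.py
def pytest_make_parametrize_id(config, val, argname):
    # Build a readable id for the device fixture; returning None falls back
    # to pytest's default id generation for everything else.
    if argname == "device_ip":
        return "ip-{}".format(val)
    return None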
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's output from an example test run. Notice how the fixture values are included in the failure output right after the name of the test. This makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
======= FAILURES ========
_______ test_eval[6*9-42] ________
test_input = '6*9', expected = 42
    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')
test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
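To map this onto the device/IP use case, a class-scoped parametrized fixture gives every test an id containing the IP, and because pytest groups tests by higher-scoped fixture params, the whole class runs device by device. A minimal sketch (the IP list and the fixture name device_ip are placeholders for whatever your real fixture provides):
import pytest

DEVICE_IPS = ["10.0.0.1", "10.0.0.2"]  # hypothetical device list

# Class scope groups all tests in the class per device; ids= controls the
# suffix that appears in the reported test name.
@pytest.fixture(scope="class", params=DEVICE_IPS, ids=lambda ip: "ip-" + ip)
def device_ip(request):
    return request.param

class TestClass:
    def test1(self, device_ip):
        # reported as TestClass::test1[ip-10.0.0.1], then [ip-10.0.0.2]
        assert device_ip
    def test2(self, device_ip):
        assert device_ip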

ScalaTest in sbt: is there a way to run a single test without tags?

I know that a single test can be run by entering, in sbt,
testOnly *class -- -n Tag
Is there a way of telling sbt/scalatest to run a single test without tags? For example:
testOnly *class -- -X 2
This would mean "run the second test in the class, whatever it is". We have a bunch of tests and no one bothered to tag them, so is there a way to run a single test without it having a tag?
This is now supported (since ScalaTest 2.1.3) within interactive mode:
testOnly *MySuite -- -z foo
to run only the tests whose name includes the substring "foo".
For exact match rather than substring, use -t instead of -z.
If you run it from the command line, it should be passed as a single argument to sbt:
sbt 'testOnly *MySuite -- -z foo'
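For example, to run one specific test by its exact name from the shell using -t (a sketch; the suite and test name here are placeholders):
sbt 'testOnly *MySuite -- -t "this exact test name"'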
I wanted to add a concrete example to accompany the other answers.
You need to specify the name of the class that you want to test. For example, in a Play project that contains a LoginServiceSpec class, you can run just the login tests with the following command from the SBT console:
test:testOnly *LoginServiceSpec
If you are running the command from outside the SBT console, you would do the following:
sbt "test:testOnly *LoginServiceSpec"
I don't see a way to run a single untagged test within a test class but I am providing my workflow since it seems to be useful for anyone who runs into this question.
From within a sbt session:
test:testOnly *YourTestClass
(The asterisk is a wildcard; you could also specify the full path, e.g. com.example.specs.YourTestClass.)
All tests within that test class will be executed. Presumably you're most concerned with failing tests, so correct any failing implementations and then run:
test:testQuick
... which will only execute tests that failed. (Repeating the most recently executed test:testOnly command will be the same as test:testQuick in this case, but if you break up your test methods into appropriate test classes you can use a wildcard to make test:testQuick a more efficient way to re-run failing tests.)
Note that what ScalaTest calls a test here is a test class, not a specific test method, so all untagged methods in the class are executed.
If you have too many test methods in a test class break them up into separate classes or tag them appropriately. (This could be a signal that the class under test is in violation of single responsibility principle and could use a refactoring.)
Just to simplify Tyler's example:
The test: prefix is not needed.
So according to his example:
In the sbt-console:
testOnly *LoginServiceSpec
And in the terminal:
sbt "testOnly *LoginServiceSpec"
Here's the Scalatest page on using the runner and the extended discussion on the -t and -z options.
This post shows what commands work for a test file that uses FunSpec.
Here's the test file:
package com.github.mrpowers.scalatest.example

import org.scalatest.FunSpec

class CardiBSpec extends FunSpec {

  describe("realName") {
    it("returns her birth name") {
      assert(CardiB.realName() === "Belcalis Almanzar")
    }
  }

  describe("iLike") {
    it("works with a single argument") {
      assert(CardiB.iLike("dollars") === "I like dollars")
    }
    it("works with multiple arguments") {
      assert(CardiB.iLike("dollars", "diamonds") === "I like dollars, diamonds")
    }
    it("throws an error if an integer argument is supplied") {
      assertThrows[java.lang.IllegalArgumentException] {
        CardiB.iLike()
      }
    }
    it("does not compile with integer arguments") {
      assertDoesNotCompile("""CardiB.iLike(1, 2, 3)""")
    }
  }
}
This command runs the four tests in the iLike describe block (from the SBT command line):
testOnly *CardiBSpec -- -z iLike
You can also use quotation marks, so this will also work:
testOnly *CardiBSpec -- -z "iLike"
This will run a single test:
testOnly *CardiBSpec -- -z "works with multiple arguments"
This will run the two tests that start with "works with":
testOnly *CardiBSpec -- -z "works with"
I can't get the -t option to run any tests in the CardiBSpec file. This command doesn't run any tests:
testOnly *CardiBSpec -- -t "works with multiple arguments"
Looks like the -t option works when tests aren't nested in describe blocks. Let's take a look at another test file:
class CalculatorSpec extends FunSpec {
  it("adds two numbers") {
    assert(Calculator.addNumbers(3, 4) === 7)
  }
}
-t can be used to run the single test:
testOnly *CalculatorSpec -- -t "adds two numbers"
-z can also be used to run the single test:
testOnly *CalculatorSpec -- -z "adds two numbers"
See this repo if you'd like to run these examples. You can find more info on running tests here.