What is the NUnit test name template for the test fixture arguments?

So {a} refers to the test case arguments, but in the full name of the test case we can see the test fixture arguments. For example:
C:\DFDeploymentSmokeTests\LocalTestProfiles> $xml = [xml](cat ..\TestResults\CSTests.xml)
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase = $xml.SelectSingleNode('//test-case')
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase.name
SiteCheck
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase.fullname
Web.ForEachWebServer(nan4dfc1app01_10.192.78.221_smoketest.dayforce.com).SiteCheck
C:\DFDeploymentSmokeTests\LocalTestProfiles>
The nan4dfc1app01_10.192.78.221_smoketest.dayforce.com is the ToString() result of the test fixture argument, and NUnit includes it in the full name of the test case.
However, there does not seem to be a way to provide it in the --test-name-format command line parameter.
Or am I wrong and there is a way?
Clarification
I do not want to change the full name of a test, just its name. My problem is with the test names under a fixture that uses TestFixtureSource. Suppose the fixture name is F, the tests under it are T1 and T2, and the fixture is invoked twice with arguments A1 and A2. The default test name pattern is {m}{a}, but {a} does not include the fixture parameters, so the test report shows these test names (not full names):
T1
T2
T1
T2
This is how they show up in Azure DevOps Tests (the Publish Tests plugin uses the test names when publishing the results).
I want to change the name to be equal to the full name, because the full names are:
F(A1).T1
F(A1).T2
F(A2).T1
F(A2).T2
I realize that if the name were F(A1).T1, then the full name would be F(A1).F(A1).T1, but since the UI does not show the full names, I can live with that.

The full name of a test case is always the name (default or set by you) appended to the full name of the containing class. There is no way to change this.
UPDATE: Based on your clarification, you want the test case name to include the parameters passed to the particular fixture instance. This is also impossible using the current "static" design.
[Using "static" and "dynamic" in a special NUnit-y way here. In a sense, all of this is dynamic, since it happens when you execute the runner. But we use it to mean "predetermined when the test is loaded (created, discovered)" as opposed to "determined at each test execution".]
At the time your tests are discovered (and named) no fixtures have been instantiated yet. The code that runs your TestCaseSource method is generating test names to be used for each instance of the test fixture. We could have done it differently, but... well, we didn't because nobody thought of this use case.
Sorry!
PS: There is a long-standing NUnit issue calling for the creation of (what we call) "dynamic" test cases, which could easily include the feature you are asking for.

Related

Python pytest: skip rest/part of code in the function with @skip decorator

Does pytest allow skipping not a whole test (function), but only part of the code inside the function?
What I want (usage example):
import pytest

def test_fill(my_dict: dict):
    assert all(v is None for v in my_dict.values())
    my_dict.fill()
    # Temporary check for "foo" values
    assert all(v is not None for v in my_dict.values())
    # Should skip the code below
    pytest.mark.skip(reason='Need values setup')
    # The real checks with exact values are here (skipped for now)
    assert my_dict['key_1'] == 1  # Part of future test
    assert my_dict['key_2'] == 10  # Part of future test
    assert my_dict['key_3'] == 100  # Part of future test
How pytest.mark.skip is supposed to work:
It may raise an exception and quietly catch it,
and I would see it in the final test results output, just like regular skipping.
Of course I can easily comment the code out, place it in an if branch, or skip the whole test with the @pytest.mark.skip decorator,
but that will not be reflected in the test output, and it's easy to forget about this weak test.
Skipping a test from inside the test makes sense if the information needed to decide whether to skip is only available inside the test. This can easily be done using pytest.skip:
def test_something():
    if not some_condition():
        pytest.skip("Condition not fulfilled")
    # do the test
This will skip the test the same way a pytest.mark.skipif decorator would, i.e. mark the test as skipped in the output and display the given skip reason.
In most cases (e.g. when the skip condition can be defined outside of the test) the decorator version can be used. From the documentation:
It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies.
For the sake of completeness: in unittest this is also possible by using TestCase.skipTest.
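For illustration, a minimal sketch of both variants mentioned above; the sys.platform check and some_condition() are placeholder conditions introduced here, not anything from the original question:

import sys
import unittest

import pytest

def some_condition():
    return False  # placeholder check; replace with a real condition

# Decorator variant: the skip condition is evaluated at collection time
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
def test_unix_only():
    assert True

# unittest variant: skip from inside the test via TestCase.skipTest
class SomeTests(unittest.TestCase):
    def test_something(self):
        if not some_condition():
            self.skipTest("Condition not fulfilled")
        # do the test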

How to parametrize fixture from another fixture?

Is there a possibility to parametrize a fixture from another fixture?
Let's say I have a fixture that takes relay_number as a parameter:
import pytest

@pytest.fixture
def unipi_relay(request):
    try:
        relay_number = request.param["relay_number"]
    except KeyError:
        raise ValueError(
            "This function requires as a parameter dictionary with values for keys:"
            "\nrelay_number - passed as integer\n"
        )
    relay = RelayFactory.get_unipi_relay(relay_number)
    relay.reset()
    yield relay
    relay.reset()
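(For reference, a fixture reading request.param like this is typically driven by pytest's indirect parametrization; a hypothetical usage, not part of the original question, would look roughly like this:)

import pytest

@pytest.mark.parametrize(
    "unipi_relay",
    [{"relay_number": 1}],  # forwarded to the fixture as request.param
    indirect=True,
)
def test_single_relay(unipi_relay):
    ...  # exercise the relay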
Now I would like to have another fixture that yields unipi_relay with the parameter already passed.
The reason I want to implement such a solution is that I would like to reuse the unipi_relay fixture a few times in a single test.
I'm not sure I understand correctly what you want to achieve, because you haven't shown the parameters your fixture is taking. Maybe the “factory as fixture” pattern is what you're looking for, because you'll then be able to reuse the unipi_relay fixture. Please also have a look at the question Reusing pytest fixture in the same test.
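For illustration, a minimal sketch of the “factory as fixture” pattern, reusing RelayFactory and the reset logic from the question; the factory fixture returns a function, so one test can create several relays, and every created relay is reset again at teardown:

import pytest

@pytest.fixture
def unipi_relay_factory():
    created = []

    def _make(relay_number):
        # RelayFactory is assumed from the question
        relay = RelayFactory.get_unipi_relay(relay_number)
        relay.reset()
        created.append(relay)
        return relay

    yield _make
    # teardown: reset every relay created during the test
    for relay in created:
        relay.reset()

def test_two_relays(unipi_relay_factory):
    relay_1 = unipi_relay_factory(1)
    relay_2 = unipi_relay_factory(2)
    # exercise both relays in the same test
    ...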

Pytest-bdd - Fixture 'self' not found

I am using pytest-bdd.
Here is my feature file:
#recon_test.feature
Feature: This is used to run recon
  Scenario: Run Recon
Test File
# recon_test.py
from pytest_bdd import scenario

class Recon_Tests():
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass
When I run this using the command pytest, I get the error **fixture 'self' not found**.
Maybe, due to the scenario annotation, it treats this function as a fixture and expects **'self'** to be another fixture.
I want to use '@scenario' on test functions inside test classes. Is there any way?
Also, I have found a workaround for this: I have created a fixture
@pytest.fixture
def self():
    pass

to avoid this, and the error is gone.
But it gives another error saying that 'Recon_Tests' does not have an attribute 'config',
as pytest-bdd tries to read the fixture's config object for pre-test hooks.
Please suggest a solution.
This is because pytest has no way of knowing whether self is a class-instance parameter or a fixture.
This is fixed when you inherit your class from unittest.TestCase.
Meaning, instead of class Recon_Tests() you specify
class ReconTests(unittest.TestCase).
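A minimal sketch of that suggestion, simply the question's code with the unittest.TestCase base class added (whether @scenario then behaves as intended is taken from this answer, not verified here):

# recon_test.py
import unittest
from pytest_bdd import scenario

class ReconTests(unittest.TestCase):
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass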

How to use forAll in ScalaTest to generate only one object from a generator?

I'm working with ScalaTest and ScalaCheck, also working with FeatureSpec.
I have a generator object that generates objects for me and looks something like this:
object InvoiceGen {
  def myObj = for {
    country <- Gen.oneOf(Seq("France", "Germany", "United Kingdom", "Austria"))
    type <- Gen.oneOf(Seq("Communication", "Restaurants", "Parking"))
    amount <- Gen.choose(100, 4999)
    number <- Gen.choose(1, 10000)
    valid <- Arbitrary.arbitrary[Boolean]
  } yield SomeObject(country, type, "1/1/2014", amount, number.toString, 35, "something", documentTypeValid, valid, "")
}
Now, I have the testing class which works with FeatureSpec and everything that I need to run the tests.
In this class I have scenarios, and in each scenario I want to generate a different object.
The thing is, from what I understand, to generate an object it is better to use the forAll function, but forAll is not guaranteed to give you an object, so you can add minSuccessful(1) to make sure you get at least one object...
I did it like this and it works:
scenario("some scenario") {
  forAll(MyGen.myObj, minSuccessful(1)) { someObject =>
    Given("A connection to the system")
    loginActions shouldBe 'Connected
    When("something")
    //blabla
    Then("something should happened")
    //blabla
  }
}
but I'm not sure exactly what it means.
What I want is to generate an invoice in each scenario and do some actions on it...
I'm not sure why I should care whether the generation worked or not... I just want a generated object to work with.
TL;DR: To get one object, and only one, use myObj.sample.get. Unless your generator is doing something fancy that's perfectly safe and won't blow up.
I presume that your intention is to run some kind of integration/acceptance test with some randomly generated domain object—in other words (ab-)use scalacheck as a simple data generator—and you hope that minSuccessful(1) would ensure that the test only runs once.
Be aware that this is not the case! scalacheck will run your test multiple times if it fails, to try and shrink the input data to a minimal counterexample.
If you'd like to ensure that your test runs only once you must use sample.
However, if running the test multiple times is fine, prefer minSuccessful(1) to "succeed fast" but still profit from minimized counterexamples in case the test fails.
Gen.sample returns an option because generators can fail:
ScalaCheck generators can fail, for instance if you're adding a filter (listingGen.suchThat(...)), and that failure is modeled with the Option type.
But:
[…] if you're sure that your generator never will fail, you can simply call Option.get like you do in your example above. Or you can use Option.getOrElse to replace None with a default value.
Generally if your generator is simple, i.e. does not use generators that could fail and does not use any filters on its own, it's perfectly safe to just call .get on the option returned by .sample. I've been doing that in the past and never had problems with it. If your generators frequently return None from .sample they'd likely make scalacheck fail to successfully generate values as well.
If all you want is a single object, use myObj.sample.get.
minSuccessful has a very different meaning: It's the minimal number of successful tests that scalacheck runs—which by no means implies
that scalacheck takes only a single value out of the generator, or
that the test runs only once.
With minSuccessful(1) scalacheck wants one successful test. It'll take samples out of the generator until the test runs at least once—i.e. if you filter the generated values with whenever in your test body scalacheck will take samples as long as whenever discards them.
If the test passes scalacheck is happy and won't run the test a second time.
However, if the test fails, scalacheck will try to produce a minimal example that fails the test. It'll shrink the input data and rerun the test as long as it fails, and then provide you with the minimized counterexample rather than the actual input that triggered the initial failure.
That's an important property of property testing as it helps you to discover bugs: The original data is frequently too large to lend itself for debugging. Minimizing it helps you discover the piece of input data that actually triggers the failure, i.e. corner cases like empty strings that you didn't think of.
I think the way you want to use Scalacheck (generate only one object and execute the test for it) defeats the purpose of property-based testing. Let me explain a bit in detail:
In classical unit-testing, you would generate your system under test, be it an object or a system of dependent objects, with some fixed data. This could e.g. be strings like "foo" and "bar" or, if you needed a name, you would use something like "John Doe". For integers and other data, you can also randomly choose some values.
The main advantage is that these are "plain" values—you can directly see them in the code and correlate them with the output of a failed test. The big disadvantage is that the tests will only ever run with the values you specified, which in turn means that your code is also only tested with these values.
In contrast, property-based testing allows you to just describe how the data should look like (e.g. "a positive integer", "a string of maximum 20 characters"). The testing framework will then—with the help of generators—generate a number of matching objects and execute the test for all of them. This way, you can be more sure that your code will actually be correct for different inputs, which after all is the purpose of testing: to check if your code does what it should for the possible inputs.
I never really worked with Scalacheck, but a colleague explained it to me that it also tries to cover edge-cases, e.g. putting in a 0 and MAX_INT for a positive integer, or an empty string for the aforementioned string with max. 20 characters.
So, to sum it up: Running a property-based test only once for one generic object is the wrong thing to do. Instead, once you have the generator infrastructure in place, embrace the advantage you then have and let your code be checked a lot more times!

Is there any way to access the current test's parameters (apart from the parameters themselves)?

If I write a parameterized NUnit test, using something like [TestCaseSource] or [ValueSource], NUnit will pass the parameters directly to my test method. But is there any other way to access those parameters, e.g. from SetUp, or from a helper method (without having to explicitly pass the parameter value to that helper method)?
For example, suppose I have three different scenarios (maybe it's "rising rates", "falling rates", and "constant rates"). I'm writing tests for a particular calculation, and some tests will have the same behavior in all three scenarios; others in two of the three (and I'll write a second test for the other scenario); others will have a separate test for each scenario. Parameterized tests seem like a good way to model this; I can write a strategy object for each scenario, and parameterize the tests based on which scenarios each test should apply to.
I can do something like this:
public IEnumerable<RateStrategy> AllScenarios {
    get {
        yield return new RisingRatesStrategy();
        yield return new FallingRatesStrategy();
        yield return new ConstantRatesStrategy();
    }
}

[TestCaseSource("AllScenarios")]
public void SomethingThatIsTheSameInAllScenarios(RateStrategy scenario) {
    InitializeScenario(scenario);
    ... arrange ...
    ... act ...
    ... assert ...
}
The downside to this is that I need to remember to call InitializeScenario in every test. This is easy to mess up, and it also makes the tests harder to read -- in addition to the attribute that says exactly which scenarios this test applies to, I also need an extra line of code cluttering up my test, saying that oh yeah, there are scenarios.
Is there some other way I could access the test parameters? Is there a static property, similar to those on TestContext, that would let me access the test's parameters from, say, my SetUp method, so I could make my tests more declarative (convention-based) and less repetitive?
(TestContext looked promising, but it only tells me the test's name and whether it passed or failed. The test's parameters are sort of there, but only as part of a display string, not as actual objects; I can't grab the strategy object and start calling methods on it.)