Is there a possibility to parametrize a fixture from another fixture?
Let's say I have a fixture that takes relay_number as a parameter:
import pytest

@pytest.fixture
def unipi_relay(request):
    try:
        relay_number = request.param["relay_number"]
    except KeyError:
        raise ValueError(
            "This function requires as a parameter a dictionary with values for keys:"
            "\nrelay_number - passed as integer\n"
        )
    relay = RelayFactory.get_unipi_relay(relay_number)
    relay.reset()
    yield relay
    relay.reset()
Now I would like to have another fixture that yields unipi_relay with the parameter already passed.
The reason I want such a solution is that I would like to reuse the unipi_relay fixture a few times in a single test.
I'm not sure if I'm understanding correctly what you want to achieve, because you haven't shown the parameters your fixture is taking. Maybe the “factory as fixture” pattern is what you're looking for, because you'll then be able to reuse the unipi_relay fixture. Please also have a look at the question Reusing pytest fixture in the same test.
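For illustration, a minimal sketch of that factory-as-fixture pattern, assuming the RelayFactory API from the question; the fixture yields a builder function and resets every relay it created during teardown, so one test can request several relays:

import pytest

@pytest.fixture
def make_unipi_relay():
    created = []

    def _make(relay_number):
        # Uses the RelayFactory from the question (assumed available here).
        relay = RelayFactory.get_unipi_relay(relay_number)
        relay.reset()
        created.append(relay)
        return relay

    yield _make

    # Teardown: reset every relay this test built.
    for relay in created:
        relay.reset()

def test_two_relays(make_unipi_relay):
    relay_1 = make_unipi_relay(1)
    relay_2 = make_unipi_relay(2)
    ...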
I'm trying to run the same test for a series of arguments using @pytest.mark.parametrize. The test data must be computed dynamically, which I attempted as follows:
import pytest

data = [("1", "2")]

@pytest.fixture(scope="class")
def make_data():
    global data
    data.append(("3", "4"))

@pytest.mark.usefixtures("make_data")
class Tester:
    @pytest.mark.parametrize("arg0, arg1", data)
    def test_data(self, arg0, arg1):
        print(arg0, arg1)
        print(data)
        assert 0
I'm creating the data in the class-scoped fixture and then using it as the parameter set for test_data. I expect test_data to run twice, with arguments 1, 2 and 3, 4, respectively. However, what I get is a single test with arguments 1, 2 and the following stdout:
1 2
[('1', '2'), ('3', '4')]
The value of data is obviously [('1', '2'), ('3', '4')], which means that the class-scoped fixture initialized it as I wanted. But somehow it appears that the parametrization already happened before this.
Is there a cleaner way to achieve what I want? I could simply run a loop within the test_data method, but I feel like this defeats the purpose of parametrization.
Is there a way to return data in the make_data fixture and use the fixture in @pytest.mark.parametrize? When using @pytest.mark.parametrize("arg0, arg1", make_data) I get TypeError: 'function' object is not iterable. make_data must be a fixture, because in the real test case it relies on other fixtures.
I am new to pytest and would be grateful for any hints. Thank you.
EDIT
To provide an explanation of why I'm doing what I'm doing: the way I understand it, @pytest.mark.parametrize("arg0, arg1", data) allows parametrization with a hard-coded data set. What if my test data is not hard-coded? What if I need to pre-process it, like I tried in the make_data fixture? Specifically, what if I need to read it from a file or URL? Let's say I have 1000 data samples for which to run the test case; how can I be expected to hard-code them?
Can I somehow use a function to generate the data argument in @pytest.mark.parametrize("arg0, arg1", data)? Something like:
def obtain_data():
    data = []
    # read 1000 samples
    # pre-process
    return data

@pytest.mark.parametrize("arg0, arg1", obtain_data())
This produces an error.
As it turns out, the pytest-cases plugin provides the option to define the cases for the parametrization as functions, which helped a great deal. Hope this helps everyone who's looking for something similar.
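For reference, a rough sketch of what that can look like, assuming pytest-cases' parametrize_with_cases decorator and its case_* naming convention (check the plugin's documentation for the exact API); the loader bodies below are placeholders:

from pytest_cases import parametrize_with_cases

def case_from_file():
    # Placeholder: read and pre-process one sample from a file or URL.
    return "1", "2"

def case_computed():
    return "3", "4"

@parametrize_with_cases("arg0, arg1", cases=".")  # "." collects the case_* functions in this module
def test_data(arg0, arg1):
    assert arg0 and arg1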
A straightforward way is to use pytest global variables:
You can assign them in conftest.py:
import pytest

def pytest_configure(config):
    pytest.data = foo()  # foo() is a placeholder for whatever computes the parameter list
And in the test case
@pytest.mark.parametrize("data", pytest.data)
def test_000(data):
    ...
But I agree with the previous comment: it's not the best practice.
Having a test class and test cases like below:
class TestSomething:
    ...

    @pytest.fixture(autouse=True)
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    def test_abc_1(self):
        ...

    def test_abc_2(self):
        ...

    def test_def_1(self):
        ...

    def test_def_2(self):
        ...
The problem is that before_and_after_testcases() runs for every test case in the class. Is it possible to apply the fixture only to the test cases with the abc pattern in their name? The fixture is not supposed to run on test_def_xxx, but I don't know how to exclude those test cases.
An autouse=True fixture is automatically applied to all of the tests; to remove that auto-application, remove autouse=True.
But now that fixture isn't applied to any!
To manually apply that fixture to the tests that need it, you can either:
add that fixture's name as a parameter (if you need the value that the fixture has), or
decorate the tests which need that fixture with @pytest.mark.usefixtures('fixture_name_here'), as sketched below.
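A small sketch of the second option, reusing the names from the question (setup() and cleanup() are the question's placeholders); only the abc tests carry the marker once autouse=True is dropped:

import pytest

class TestSomething:
    @pytest.fixture  # no longer autouse
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    @pytest.mark.usefixtures("before_and_after_testcases")
    def test_abc_1(self):
        ...

    def test_def_1(self):
        # runs without the setup/cleanup fixture
        ...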
Another approach is to split your one test class into multiple test classes, grouping the tests which need the particular auto-used fixtures, for example:
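A sketch of that split, keeping autouse=True but limiting it to the class whose tests need it (again with the question's setup()/cleanup() placeholders):

import pytest

class TestAbc:
    @pytest.fixture(autouse=True)
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    def test_abc_1(self):
        ...

class TestDef:
    # No autouse fixture here, so test_def_* run without setup/cleanup.
    def test_def_1(self):
        ...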
Disclaimer: I'm a pytest developer, though I don't think that's entirely relevant to this answer; SO just requires disclosure of affiliation.
I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. Due to the parametrization, it causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...

@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
Any ideas?
Update: Let me add that I also did search SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter
If I write a parameterized NUnit test, using something like [TestCaseSource] or [ValueSource], NUnit will pass the parameters directly to my test method. But is there any other way to access those parameters, e.g. from SetUp, or from a helper method (without having to explicitly pass the parameter value to that helper method)?
For example, suppose I have three different scenarios (maybe it's "rising rates", "falling rates", and "constant rates"). I'm writing tests for a particular calculation, and some tests will have the same behavior in all three scenarios; others in two of the three (and I'll write a second test for the other scenario); others will have a separate test for each scenario. Parameterized tests seem like a good way to model this; I can write a strategy object for each scenario, and parameterize the tests based on which scenarios each test should apply to.
I can do something like this:
public IEnumerable<RateStrategy> AllScenarios {
    get {
        yield return new RisingRatesStrategy();
        yield return new FallingRatesStrategy();
        yield return new ConstantRatesStrategy();
    }
}

[TestCaseSource("AllScenarios")]
public void SomethingThatIsTheSameInAllScenarios(RateStrategy scenario) {
    InitializeScenario(scenario);
    ... arrange ...
    ... act ...
    ... assert ...
}
The downside to this is that I need to remember to call InitializeScenario in every test. This is easy to mess up, and it also makes the tests harder to read -- in addition to the attribute that says exactly which scenarios this test applies to, I also need an extra line of code cluttering up my test, saying that oh yeah, there are scenarios.
Is there some other way I could access the test parameters? Is there a static property, similar to those on TestContext, that would let me access the test's parameters from, say, my SetUp method, so I could make my tests more declarative (convention-based) and less repetitive?
(TestContext looked promising, but it only tells me the test's name and whether it passed or failed. The test's parameters are sort of there, but only as part of a display string, not as actual objects; I can't grab the strategy object and start calling methods on it.)
I'm trying to test-drive some Scala code using Specs2 and Mockito. I'm relatively new to all three, and having difficulty with the mocked methods returning null.
In the following (transcribed with some name changes)
"My Component's process(File)" should {
"pass file to Parser" in new modules {
val file = mock[File]
myComponent.process(file)
there was one(mockParser).parse(file)
}
"pass parse result to Translator" in new modules {
val file = mock[File]
val myType1 = mock[MyType1]
mockParser.parse(file) returns (Some(myType1))
myComponent.process(file)
there was one(mockTranslator).translate(myType1)
}
}
The "pass file to Parser" works until I add the translator call in the SUT, and then dies because the mockParser.parse method has returned a null, which the translator code can't take.
Similarly, the "pass parse result to Translator" passes until I try to use the translation result in the SUT.
The real code for both of these methods can never return null, but I don't know how to tell Mockito to make the expectations return usable results.
I can of course work around this by putting null checks in the SUT, but I'd rather not, as I'm making sure to never return nulls and instead using Option, None and Some.
Pointers to a good Scala/Specs2/Mockito tutorial would be wonderful, as would a simple example of how to change a line like
there was one(mockParser).parse(file)
to make it return something that allows continued execution in the SUT when it doesn't deal with nulls.
Flailing about trying to figure this out, I have tried changing that line to
there was one(mockParser).parse(file) returns myResult
with a value for myResult that is of the type I want returned. That gave me a compile error as it expects to find a MatchResult there rather than my return type.
If it matters, I'm using Scala 2.9.0.
If you haven't seen it, you can look at the mock expectation page of the specs2 documentation.
In your code, the stub should be mockParser.parse(file) returns myResult
Edited after Don's edit:
There was a misunderstanding. The way you do it in your second example is the right one, and you should do exactly the same thing in the first test:
val file = mock[File]
val myType1 = mock[MyType1]
mockParser.parse(file) returns (Some(myType1))
myComponent.process(file)
there was one(mockParser).parse(file)
The idea of unit testing with mocks is always the same: explain how your mocks work (stubbing), execute, verify.
That should answer the question, now a personal advice:
Most of the time, except if you want to verify some algorithmic behavior (stop on first success, process a list in reverse order), you should not test expectations in your unit tests.
In your example, the process method should "translate things", thus your unit tests should focus on that: mock your parsers and translators, stub them, and only check the result of the whole process. It's less fine-grained, but the goal of a unit test is not to check every step of a method. If you want to change the implementation, you should not have to modify a bunch of unit tests that verify each line of the method.
I have managed to solve this, though there may be a better solution, so I'm going to post my own answer, but not accept it immediately.
What I needed to do was supply a sensible default return value for the mock, in the form of an org.mockito.stubbing.Answer<T> with T being the return type.
I was able to do this with the following mock setup:
val defaultParseResult = new Answer[Option[MyType1]] {
  def answer(p1: InvocationOnMock): Option[MyType1] = None
}

val mockParser = org.mockito.Mockito.mock(
  implicitly[ClassManifest[Parser]].erasure,
  defaultParseResult).asInstanceOf[Parser]
after a bit of browsing of the source for the org.specs2.mock.Mockito trait and things it calls.
And now, instead of returning null, the parse returns None when not stubbed (including when it's expected as in the first test), which allows the test to pass with this value being used in the code under test.
I will likely make a test support method hiding the mess in the mockParser assignment, and letting me do the same for various return types, as I'm going to need the same capability with several return types just in this set of tests.
I couldn't locate support for a shorter way of doing this in org.specs2.mock.Mockito, but perhaps this will inspire Eric to add such. Nice to have the author in the conversation...
Edit
On further perusal of the source, it occurred to me that I should be able to just call the method
def mock[T, A](implicit m: ClassManifest[T], a: org.mockito.stubbing.Answer[A]): T = org.mockito.Mockito.mock(implicitly[ClassManifest[T]].erasure, a).asInstanceOf[T]
defined in org.specs2.mock.MockitoMocker, which was in fact the inspiration for my solution above. But I can't figure out the call. mock is rather overloaded, and all my attempts seem to end up invoking a different version and not liking my parameters.
So it looks like Eric has already included support for this, but I don't understand how to get to it.
Update
I have defined a trait containing the following:
def mock[T, A](implicit m: ClassManifest[T], default: A): T = {
  org.mockito.Mockito.mock(
    implicitly[ClassManifest[T]].erasure,
    new Answer[A] {
      def answer(p1: InvocationOnMock): A = default
    }).asInstanceOf[T]
}
and now by using that trait I can set up my mock as
implicit val defaultParseResult = None
val mockParser = mock[Parser,Option[MyType1]]
I don't after all need more usages of this in this particular test, as supplying a usable value for this makes all my tests work without null checks in the code under test. But it might be needed in other tests.
I'd still be interested in how to handle this issue without adding this trait.
Without the full code it's difficult to say, but can you please check that the method you're trying to mock is not a final method? Because in that case Mockito won't be able to mock it and will return null.
Another piece of advice, when something doesn't work, is to rewrite the code with Mockito in a standard JUnit test. Then, if it fails, your question might be best answered by someone on the Mockito mailing list.