I'm trying to run the same test for a series of arguments using @pytest.mark.parametrize. The test data must be computed dynamically, which I attempted as follows:
import pytest

data = [("1", "2")]

@pytest.fixture(scope="class")
def make_data():
    global data
    data.append(("3", "4"))

@pytest.mark.usefixtures("make_data")
class Tester:
    @pytest.mark.parametrize("arg0, arg1", data)
    def test_data(self, arg0, arg1):
        print(arg0, arg1)
        print(data)
        assert 0
I'm creating the data in the class-scoped fixture and then using it as the parameter set for test_data. I expect test_data to run twice, with arguments 1, 2 and 3, 4 respectively. However, what I get is a single test with arguments 1, 2 and the following stdout:
1 2
[('1', '2'), ('3', '4')]
The value of data is obviously [('1', '2'), ('3', '4')], which means the class-scoped fixture initialized it as I wanted. But somehow it appears that parametrization had already happened before this.
Is there a cleaner way to achieve what I want? I could simply run a loop within the test_data method, but I feel like this defeats the purpose of parametrization.
Is there a way to return data in the make_data fixture and use the fixture in @pytest.mark.parametrize? When using @pytest.mark.parametrize("arg0, arg1", make_data) I get TypeError: 'function' object is not iterable. make_data must be a fixture, because in the real test case it relies on other fixtures.
I am new to pytest and would be grateful for any hints. Thank you.
EDIT
To provide an explanation of why I'm doing what I'm doing: the way I understand it, @pytest.mark.parametrize("arg0, arg1", data) allows parametrization with a hard-coded data set. What if my test data is not hard-coded? What if I need to pre-process it, as I tried in the make_data fixture? Specifically, what if I need to read it from a file or URL? Say I have 1000 data samples for which to run the test case; how can I be expected to hard-code them?
Can I somehow use a function to generate the data argument in @pytest.mark.parametrize("arg0, arg1", data)? Something like:
def obtain_data():
    data = []
    # read 1000 samples
    # pre-process
    return data

@pytest.mark.parametrize("arg0, arg1", obtain_data())
This produces an error.
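(For reference: one standard way to do this is pytest's pytest_generate_tests hook, which runs at collection time and can call ordinary code to build the parameter list. A minimal sketch, assuming the hypothetical obtain_data helper above is importable from conftest.py:)

# conftest.py -- a minimal sketch; obtain_data is the hypothetical loader above
def pytest_generate_tests(metafunc):
    # only parametrize tests that actually request arg0/arg1
    if "arg0" in metafunc.fixturenames and "arg1" in metafunc.fixturenames:
        metafunc.parametrize("arg0, arg1", obtain_data())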
As it turns out, the pytest-cases package provides the option to define the cases for parametrization with functions, which helped a great deal. I hope this helps everyone who's looking for something similar.
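For illustration, a rough sketch of that pytest-cases style; the decorator name and case discovery rules should be checked against the pytest-cases documentation for the installed version:

from pytest_cases import parametrize_with_cases

# cases are plain functions, so they can read files, pre-process, etc.
def case_simple():
    return "1", "2"

def case_computed():
    # read and pre-process a sample here
    return "3", "4"

@parametrize_with_cases("arg0, arg1", cases=".")  # "." = cases in this module
def test_data(arg0, arg1):
    print(arg0, arg1)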
A straightforward way is to use pytest global variables:
You can assign them in the conftest.py:
import pytest

def pytest_configure(config):
    pytest.data = foo()
And in the test case:

@pytest.mark.parametrize("data", pytest.data)
def test_000(data):
    ...
But I agree with the previous comment: it's not the best practice.
I would like to test datasets with a variable number of values. Each value should be tested, and I would like to have a standardized output that I can read back in afterward. The framework I am using is MATLAB.
Example:
The use case would be a dataset which includes, e.g., 14 values that need to be tested. The comparison is already completely handled by my implementation. So I have 14 values which I would like to compare against some tolerance or similar, and get an output like
1..14
ok value1
ok value2
not ok value3
...
ok value14
Current solution:
I am trying to use the unit-testing framework and the corresponding TAPPlugin, which would produce exactly such an output (TAP), one line per unit test. My main problem is that the unit-testing framework does not take any input parameters. I have already read about parametrization, but I do not see how it helps me. I could put the values into the parameter as a list, but how do I pass them there? As far as I know, the unit-test class does not allow additional parameters during initialization, so I cannot include this in the program the way I want.
I would like to avoid having to format the TAP output on my own, because it already exists, but only for unit-test objects. Unfortunately, I cannot see how to implement this wisely.
How can I implement the output of a Test Anything Protocol where I have a variable number of comparisons (values) in MATLAB?
If you are using class-based unit tests, you can access the test's properties from outside the test.
So let's say you have following unit test:
classdef MyTestCase < matlab.unittest.TestCase
    properties
        property1 = false;
    end
    methods(Test)
        function test1(testCase)
            verifyTrue(testCase, testCase.property1)
        end
    end
end
You can access and change the properties from outside:

test = MyTestCase;
test.property1 = true;
test.run;
This should now succeed, since you changed property1 from false to true. If you want a more flexible setup, you could keep a list of values and a list of requirements, and then cycle through both in one of the test functions:
properties
    variables = [];
    requirements = [];
end
methods(Test)
    function test1(testCase)
        for i = 1:length(testCase.variables)
            verifyEqual(testCase, testCase.variables(i), testCase.requirements(i))
        end
    end
end
Now you would set variables and requirements:
test = MyTestCase;
test.variables = [1,2,3,4,5,6];
test.requirements = [1,3,4,5,5,6];
test.run;
Please note that, in theory, you should not have multiple assert statements in one test.
Is there a possibility to parametrize a fixture from another fixture?
Let's say I have a fixture which takes relay_number as a parameter:
import pytest

@pytest.fixture
def unipi_relay(request):
    try:
        relay_number = request.param["relay_number"]
    except KeyError:
        raise ValueError(
            "This function requires as a parameter a dictionary with values for keys:"
            "\nrelay_number - passed as integer\n"
        )
    relay = RelayFactory.get_unipi_relay(relay_number)
    relay.reset()
    yield relay
    relay.reset()
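(For context, a fixture that reads request.param like this is normally driven by indirect parametrization. A minimal sketch of a single use:)

# sketch: indirect=True routes each dict into request.param of the fixture
@pytest.mark.parametrize(
    "unipi_relay",
    [{"relay_number": 1}, {"relay_number": 2}],
    indirect=True,
)
def test_relay(unipi_relay):
    ...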
Now I would like to have another fixture which yields unipi_relay with the parameter already passed.
The reason I want such a solution is that I would like to reuse the unipi_relay fixture a few times in a single test.
I'm not sure if I understand correctly what you want to achieve, because you haven't shown the parameters your fixture takes. Maybe the “factory as fixture” pattern is what you're looking for, because it lets you reuse the unipi_relay fixture. Please also have a look at the question Reusing pytest fixture in the same test.
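A minimal sketch of that factory-as-fixture pattern applied here, reusing the RelayFactory call from the question (the fixture name make_unipi_relay is made up):

import pytest

@pytest.fixture
def make_unipi_relay():
    # the fixture returns a factory function, so a single test can
    # create several relays; teardown resets everything it created
    created = []

    def _make(relay_number):
        relay = RelayFactory.get_unipi_relay(relay_number)
        relay.reset()
        created.append(relay)
        return relay

    yield _make

    for relay in created:
        relay.reset()

A test can then call make_unipi_relay(1) and make_unipi_relay(2) in the same body.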
I'm working with ScalaTest and ScalaCheck, also using FeatureSpec.
I have a generator object that generates test objects for me and looks something like this:
import org.scalacheck.{Arbitrary, Gen}

object InvoiceGen {
  def myObj = for {
    country <- Gen.oneOf(Seq("France", "Germany", "United Kingdom", "Austria"))
    `type` <- Gen.oneOf(Seq("Communication", "Restaurants", "Parking"))  // `type` is a reserved word, hence the backticks
    amount <- Gen.choose(100, 4999)
    number <- Gen.choose(1, 10000)
    valid <- Arbitrary.arbitrary[Boolean]
  } yield SomeObject(country, `type`, "1/1/2014", amount, number.toString, 35, "something", documentTypeValid, valid, "")
}
Now, I have the testing class which works with FeatureSpec and everything I need to run the tests.
In this class I have scenarios, and in each scenario I want to generate a different object.
The thing is, from what I understand, to generate an object it is better to use the forAll function, but forAll alone is not guaranteed to give you an object, so you can add minSuccessful(1) to make sure you get at least one object...
I did it like this and it works:
scenario("some scenario") {
forAll(MyGen.myObj, minSuccessful(1)) { someObject =>
Given("A connection to the system")
loginActions shouldBe 'Connected
When("something")
//blabla
Then("something should happened")
//blabla
}
}
but I'm not sure exactly what that means.
What I want is to generate an invoice in each scenario and do some actions on it...
I'm not sure why I should care whether the generation worked or didn't work... I just want a generated object to work with.
TL;DR: To get one object, and only one, use myObj.sample.get. Unless your generator is doing something fancy, that's perfectly safe and won't blow up.
I presume that your intention is to run some kind of integration/acceptance test with some randomly generated domain object, in other words to (ab)use ScalaCheck as a simple data generator, and you hope that minSuccessful(1) would ensure that the test only runs once.
Be aware that this is not the case! ScalaCheck will run your test multiple times if it fails, to try to shrink the input data to a minimal counterexample.
If you'd like to ensure that your test runs only once, you must use sample.
However, if running the test multiple times is fine, prefer minSuccessful(1) to "succeed fast" but still profit from minimized counterexamples in case the test fails.
Gen.sample returns an option because generators can fail:
ScalaCheck generators can fail, for instance if you're adding a filter (listingGen.suchThat(...)), and that failure is modeled with the Option type.
But:
[…] if you're sure that your generator never will fail, you can simply call Option.get like you do in your example above. Or you can use Option.getOrElse to replace None with a default value.
Generally if your generator is simple, i.e. does not use generators that could fail and does not use any filters on its own, it's perfectly safe to just call .get on the option returned by .sample. I've been doing that in the past and never had problems with it. If your generators frequently return None from .sample they'd likely make scalacheck fail to successfully generate values as well.
If all you want is a single object, use myObj.sample.get.
minSuccessful has a very different meaning: it's the minimum number of successful tests that ScalaCheck runs, which by no means implies
that ScalaCheck takes only a single value out of the generator, or
that the test runs only once.
With minSuccessful(1), ScalaCheck wants one successful test. It'll take samples out of the generator until the test body runs at least once; i.e., if you filter the generated values with whenever in your test body, ScalaCheck will keep taking samples as long as whenever discards them.
If the test passes, ScalaCheck is happy and won't run the test a second time.
However, if the test fails, ScalaCheck will try to produce a minimal example that fails the test. It'll shrink the input data and re-run the test as long as it fails, and then provide you with the minimized counterexample rather than the actual input that triggered the initial failure.
That's an important property of property testing, as it helps you discover bugs: the original data is frequently too large to lend itself to debugging. Minimizing it helps you find the piece of input data that actually triggers the failure, i.e. corner cases like empty strings that you didn't think of.
I think the way you want to use Scalacheck (generate only one object and execute the test for it) defeats the purpose of property-based testing. Let me explain a bit in detail:
In classical unit-testing, you would generate your system under test, be it an object or a system of dependent objects, with some fixed data. This could e.g. be strings like "foo" and "bar" or, if you needed a name, you would use something like "John Doe". For integers and other data, you can also randomly choose some values.
The main advantage is that these are "plain" values—you can directly see them in the code and correlate them with the output of a failed test. The big disadvantage is that the tests will only ever run with the values you specified, which in turn means that your code is also only tested with these values.
In contrast, property-based testing allows you to just describe what the data should look like (e.g. "a positive integer", "a string of at most 20 characters"). The testing framework will then, with the help of generators, generate a number of matching objects and execute the test for all of them. This way, you can be more confident that your code will actually be correct for different inputs, which after all is the purpose of testing: to check that your code does what it should for the possible inputs.
I never really worked with ScalaCheck, but a colleague explained to me that it also tries to cover edge cases, e.g. putting in 0 and MAX_INT for a positive integer, or an empty string for the aforementioned string of at most 20 characters.
So, to sum it up: Running a property-based test only once for one generic object is the wrong thing to do. Instead, once you have the generator infrastructure in place, embrace the advantage you then have and let your code be checked a lot more times!
I'm trying to test-drive some Scala code using Specs2 and Mockito. I'm relatively new to all three, and having difficulty with the mocked methods returning null.
In the following (transcribed with some name changes)
"My Component's process(File)" should {
"pass file to Parser" in new modules {
val file = mock[File]
myComponent.process(file)
there was one(mockParser).parse(file)
}
"pass parse result to Translator" in new modules {
val file = mock[File]
val myType1 = mock[MyType1]
mockParser.parse(file) returns (Some(myType1))
myComponent.process(file)
there was one(mockTranslator).translate(myType1)
}
}
The "pass file to Parser" works until I add the translator call in the SUT, and then dies because the mockParser.parse method has returned a null, which the translator code can't take.
Similarly, the "pass parse result to Translator" passes until I try to use the translation result in the SUT.
The real code for both of these methods can never return null, but I don't know how to tell Mockito to make the expectations return usable results.
I can of course work around this by putting null checks in the SUT, but I'd rather not, as I'm making sure to never return nulls and instead using Option, None and Some.
Pointers to a good Scala/Specs2/Mockito tutorial would be wonderful, as would a simple example of how to change a line like
there was one(mockParser).parse(file)
to make it return something that allows continued execution in the SUT when it doesn't deal with nulls.
Flailing about trying to figure this out, I have tried changing that line to
there was one(mockParser).parse(file) returns myResult
with a value for myResult that is of the type I want returned. That gave me a compile error as it expects to find a MatchResult there rather than my return type.
If it matters, I'm using Scala 2.9.0.
If you haven't seen it, you can look at the mock expectation page of the specs2 documentation.
In your code, the stub should be mockParser.parse(file) returns myResult
Edited after Don's edit:
There was a misunderstanding. The way you do it in your second example is the right one, and you should do exactly the same in the first test:
val file = mock[File]
val myType1 = mock[MyType1]
mockParser.parse(file) returns (Some(myType1))
myComponent.process(file)
there was one(mockParser).parse(file)
The idea of unit testing with mocks is always the same: specify how your mocks behave (stubbing), execute, verify.
That should answer the question, now a personal advice:
Most of the time, except if you want to verify some algorithmic behavior (stop on first success, process a list in reverse order), you should not verify expectations in your unit tests.
In your example, the process method should "translate things", thus your unit tests should focus on that: mock your parsers and translators, stub them, and only check the result of the whole process. It's less fine-grained, but the goal of a unit test is not to check every step of a method. If you want to change the implementation, you should not have to modify a bunch of unit tests that verify each line of the method.
I have managed to solve this, though there may be a better solution, so I'm going to post my own answer, but not accept it immediately.
What I needed to do was supply a sensible default return value for the mock, in the form of an org.mockito.stubbing.Answer<T> with T being the return type.
I was able to do this with the following mock setup:
val defaultParseResult = new Answer[Option[MyType1]] {
def answer(p1: InvocationOnMock): Option[MyType1] = None
}
val mockParser = org.mockito.Mockito.mock(implicitly[ClassManifest[Parser]].erasure,
defaultParseResult).asInstanceOf[Parser]
after a bit of browsing of the source for the org.specs2.mock.Mockito trait and things it calls.
And now, instead of returning null, the parse returns None when not stubbed (including when it's expected as in the first test), which allows the test to pass with this value being used in the code under test.
I will likely make a test support method hiding the mess in the mockParser assignment, and letting me do the same for various return types, as I'm going to need the same capability with several return types just in this set of tests.
I couldn't locate support for a shorter way of doing this in org.specs2.mock.Mockito, but perhaps this will inspire Eric to add such. Nice to have the author in the conversation...
Edit
On further perusal of source, it occurred to me that I should be able to just call the method
def mock[T, A](implicit m: ClassManifest[T], a: org.mockito.stubbing.Answer[A]): T = org.mockito.Mockito.mock(implicitly[ClassManifest[T]].erasure, a).asInstanceOf[T]
defined in org.specs2.mock.MockitoMocker, which was in fact the inspiration for my solution above. But I can't figure out the call. mock is rather overloaded, and all my attempts seem to end up invoking a different version and not liking my parameters.
So it looks like Eric has already included support for this, but I don't understand how to get to it.
Update
I have defined a trait containing the following:
def mock[T, A](implicit m: ClassManifest[T], default: A): T = {
  org.mockito.Mockito.mock(
    implicitly[ClassManifest[T]].erasure,
    new Answer[A] {
      def answer(p1: InvocationOnMock): A = default
    }).asInstanceOf[T]
}
and now, by using that trait, I can set up my mock as
implicit val defaultParseResult = None
val mockParser = mock[Parser,Option[MyType1]]
I don't after all need more usages of this in this particular test, as supplying a usable value for this makes all my tests work without null checks in the code under test. But it might be needed in other tests.
I'd still be interested in how to handle this issue without adding this trait.
Without the full code it's difficult to say, but can you please check that the method you're trying to mock is not a final method? In that case Mockito won't be able to mock it and will return null.
Another piece of advice, when something doesn't work, is to rewrite the code with Mockito in a standard JUnit test. Then, if it fails, your question might be best answered by someone on the Mockito mailing list.
Hello Pythoneers: the following code is only a mock-up of what I'm trying to do, but it should illustrate my question.
I would like to know whether this is a dirty trick I picked up from Java programming, or a valid and Pythonic way of doing things: basically I'm creating a load of instances, but I need to track 'static' data across all the instances as they are created.
class Myclass:
    counter = 0
    last_value = None

    def __init__(self, name):
        self.name = name
        Myclass.counter += 1
        Myclass.last_value = name
And some output from using this simple class, showing that everything is working as I expected:
>>> x=Myclass("hello")
>>> print x.name
hello
>>> print Myclass.last_value
hello
>>> y=Myclass("goodbye")
>>> print y.name
goodbye
>>> print x.name
hello
>>> print Myclass.last_value
goodbye
So is this a generally acceptable way of doing this kind of thing, or an anti-pattern?
[For instance, I'm not too happy that I can apparently set the counter both from within the class (good) and from outside of it (bad); I'm also not keen on having to use the full namespace 'Myclass' from within the class code itself, which just looks bulky; and lastly, I'm initially setting the values to None, which is probably me aping statically typed languages.]
I'm using Python 2.6.2 and the program is single-threaded.
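(A hedged sketch that addresses two of the bracketed concerns, repeating the class name inside the class and scattering writes to the shared state: funnel all updates through a classmethod. The _record helper is a made-up name, and the syntax is Python 2 to match the question:)

class Myclass(object):
    counter = 0
    last_value = None

    def __init__(self, name):
        self.name = name
        self._record(name)

    @classmethod
    def _record(cls, name):
        # cls is the class itself, so the class name is not repeated
        # and all writes to the shared state live in one place
        cls.counter += 1
        cls.last_value = name

Note that with subclasses, cls would be the subclass rather than Myclass, which may or may not be what you want.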
Class variables are perfectly Pythonic in my opinion.
Just watch out for one thing. An instance variable can hide a class variable:
x.counter = 5          # creates an instance variable in the object x
print x.counter        # instance variable, prints 5
print y.counter        # class variable, prints 2
print Myclass.counter  # class variable, prints 2
Do. Not. Have. Stateful. Class. Variables.
It's a nightmare to debug, since the class object now has special features.
Stateful classes conflate two (2) unrelated responsibilities: state of object creation and the created objects. Do not conflate responsibilities because it "seems" like they belong together. In this example, the counting of created objects is the responsibility of a Factory. The objects which are created have completely unrelated responsibilities (which can't easily be deduced from the question).
Also, please use Upper Case Class Names.
class MyClass( object ):
    def __init__(self, name):
        self.name = name

def myClassFactory( iterable ):
    for i, name in enumerate( iterable ):
        yield MyClass( name )
The sequence counter is now part of the factory, where the state and counts should be maintained. In a separate factory.
[For folks playing Code Golf, this is shorter. But that's not the point. The point is that the class is no longer stateful.]
It's not clear from the question how Myclass instances get created. Lacking any clue, there isn't much more that can be said about how to use the factory. An iterable is the usual culprit. Perhaps something that iterates through a list, a file, or some other iterable data structure.
Also -- for folks just off the boat from Java -- the factory object is just a function. Nothing more is needed.
Since the example in the question is perfectly unclear, it's hard to know why (1) two unique objects are created with (2) a counter. The two unique objects are already two unique objects, and a counter isn't needed.
For example, the static variables in the Myclass are never referenced anywhere. That makes it very, very hard to understand the example.
x, y = myClassFactory( [ "hello", "goodbye" ] )
If the count or last value were actually used for something, then perhaps a meaningful example could be created.
You can solve this problem by splitting the code into two separate classes.
The first class will be for the object you are trying to create:
class MyClass(object):
    def __init__(self, name):
        self.Name = name
And the second class will create the objects and keep track of them:
class MyClassFactory(object):
    Counter = 0
    LastValue = None

    @classmethod
    def Build(cls, name):
        inst = MyClass(name)
        cls.Counter += 1
        cls.LastValue = inst.Name
        return inst
This way, you can create new instances of the class as needed, and the information about the created instances will still be correct.
>>> x = MyClassFactory.Build("Hello")
>>> MyClassFactory.Counter
1
>>> MyClassFactory.LastValue
'Hello'
>>> y = MyClassFactory.Build("Goodbye")
>>> MyClassFactory.Counter
2
>>> MyClassFactory.LastValue
'Goodbye'
>>> x.Name
'Hello'
>>> y.Name
'Goodbye'
Finally, this approach avoids the problem of instance variables hiding class variables, because MyClass instances have no knowledge of the factory that created them.
>>> x.Counter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'Counter'
You don't have to use a class variable here; this is a perfectly valid case for using globals:
_counter = 0
_last_value = None

class Myclass(object):
    def __init__(self, name):
        global _counter, _last_value
        self.name = name
        _counter += 1
        _last_value = name
I have a feeling some people will knee-jerk against globals out of habit, so a quick review of what is and isn't wrong with globals may be in order.
Globals traditionally are variables which are visible and changeable, unscoped, from anywhere in the program. This is a problem with globals in languages like C. It's completely irrelevant to Python: these "globals" are scoped to the module. The class name "Myclass" is equally global; both names are scoped identically, in the module that contains them. Most variables -- in Python as in C++ -- are logically part of object instances or locally scoped, but this is clearly shared state across all users of the class.
I don't have any strong inclination against using class variables for this (and using a factory is completely unnecessary), but globals are how I'd generally do it.
Is this pythonic? Well, it's definitely more pythonic than having global variables for a counter and the value of the most recent instance.
It's said in Python that there's only one right way to do anything. I can't think of a better way to implement this, so keep going. Despite the fact that many will criticize you for "non-pythonic" solutions to problems (like the needless object-orientation that Java coders like or the "do-it-yourself" attitude that many from C and C++ bring), in most cases your Java habits will not send you to Python hell.
And beyond that, who cares if it's "pythonic"? It works, and it's not a performance issue, is it?