pytest: monkeypatch while using hypothesis

Within a unit test, I'm using monkeypatch in order to change entries in a dict.
from hypothesis import given, strategies

test_dict = {"first": "text1", "second": "text2"}

@given(val=strategies.text())
def test_monkeypath(monkeypatch, val):
    monkeypatch.setitem(test_dict, "second", val)
    assert isinstance(test_dict["second"], str)
The test passes, but running it with pytest produces the following warning:
=================================== warnings summary ===================================
.PyCharm2019.2/config/scratches/hypothesis_monkeypatch.py::test_monkeypath
  c:\users\d292498\appdata\local\conda\conda\envs\pybt\lib\site-packages\hypothesis\extra\pytestplugin.py:172: HypothesisDeprecationWarning: .PyCharm2019.2/config/scratches/hypothesis_monkeypatch.py::test_monkeypath uses the 'monkeypatch' fixture, which is reset between function calls but not between test cases generated by `@given(...)`. You can change it to a module- or session-scoped fixture if it is safe to reuse; if not we recommend using a context manager inside your test function. See https://docs.pytest.org/en/latest/fixture.html#sharing-test-data for details on fixture scope.
    note_deprecation(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================ 1 passed, 1 warning in 0.30s ============================
Does this mean that the value in the dict will only be changed once, no matter how many test cases Hypothesis generates?
I am not sure how to use a context manager in this case. Can somebody please point me in the right direction?

Your problem is that the dict is patched once and then stays patched across all the test cases Hypothesis generates, and Hypothesis is warning you about that. If anything before the monkeypatch.setitem line relied on the unpatched value, this would be very bad!
You can work around this by using monkeypatch directly, instead of via a fixture:
from hypothesis import given, strategies
from _pytest.monkeypatch import MonkeyPatch  # pytest >= 6.2 also exposes this as pytest.MonkeyPatch

test_dict = {"first": "text1", "second": "text2"}

@given(val=strategies.text())
def test_monkeypath(val):
    assert test_dict["second"] == "text2"  # this would fail in your version
    with MonkeyPatch().context() as mp:
        mp.setitem(test_dict, "second", val)
        assert test_dict["second"] == val
    assert test_dict["second"] == "text2"  # the patch is undone when the block exits
et voilà, no warning.

Use the monkeypatch context manager
@given(val=strategies.text())
def test_monkeypath(monkeypatch, val):
    with monkeypatch.context() as m:
        m.setitem(test_dict, "second", val)
        assert isinstance(test_dict["second"], str)
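With the context manager, each generated example patches and restores the dict on its own. A quick sketch to verify that, reusing test_dict and the imports from the question (the extra asserts are illustrative, not part of the original answer; Hypothesis may still warn about the function-scoped fixture itself):

@given(val=strategies.text())
def test_monkeypatch_context(monkeypatch, val):
    assert test_dict["second"] == "text2"  # unpatched at the start of every example
    with monkeypatch.context() as m:
        m.setitem(test_dict, "second", val)
        assert test_dict["second"] == val
    assert test_dict["second"] == "text2"  # restored as soon as the block exits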

Related

Using fixtures at collect time in pytest

I use testinfra with the Ansible transport. It provides a host fixture which has ansible, so I can call host.ansible.get_variables().
Now I need to parametrize a test based on a value from this inventory.
Inventory:
foo:
  hosts:
    foo1:
      somedata:
        - data1
        - data2
I want to write a test which tests each of the 'data' entries from somedata for each host in the inventory. The 'each host' part is handled by testinfra, but I'm struggling with the parametrization of the test:
@pytest.fixture
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata)
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
I've tried both ways:
@pytest.fixture(params=somedata) -> TypeError: 'function' object is not iterable
@pytest.fixture(params=somedata()) -> Fixture "somedata" called directly. Fixtures are not meant to be called directly...
How can I do this? I understand that I can't change the number of tests at test time, but I'm pretty sure I have the same inventory at collection time, so, theoretically, it should be doable...
After reading a lot of source code I have come to the conclusion that it's impossible to call fixtures at collection time. There are no fixtures at collection time, and any parametrization has to happen before any tests are run. Moreover, it's impossible to change the number of tests at test time (so no fixture could change that).
Answering my own question on using the Ansible inventory to parametrize a test function: it's possible, but it requires manually reading the inventory, hosts, etc. There is a special hook for that: pytest_generate_tests (it's a function, not a fixture).
My current code to get any test parametrized by the host_interface fixture is:
import testinfra.utils.ansible_runner

def cartesian(hosts, ar):
    for host in hosts:
        for interface in ar.get_variables(host).get("interfaces", []):
            yield (host, interface)

def pytest_generate_tests(metafunc):
    if 'host_interface' in metafunc.fixturenames:
        inventory_file = metafunc.config.getoption('ansible_inventory')
        ansible_config = testinfra.utils.ansible_runner.get_ansible_config()
        inventory = testinfra.utils.ansible_runner.get_ansible_inventory(ansible_config, inventory_file)
        ar = testinfra.utils.ansible_runner.AnsibleRunner(inventory_file)
        hosts = ar.get_hosts(metafunc.config.option.hosts)
        metafunc.parametrize("host_interface", cartesian(hosts, ar))
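A test consuming the parametrized fixture then just unpacks the (host, interface) tuple. A hypothetical example (the test body is an assumption, not part of the original answer):

def test_host_interface(host_interface):
    host, interface = host_interface  # one test case per (host, interface) pair
    assert interface  # replace with real assertions about the interface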
You should use a helper function instead of a fixture to parametrize another fixture. Fixtures cannot be used as decorator parameters in pytest.
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata(host))  # host is assumed to be a plain object available at import time
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
This assumes that host is not a fixture.
If host is a fixture, there is a hacky way to get around the problem: write the parameters to a tmp file or an environment variable and read them back with a helper function.
import os

@pytest.fixture(autouse=True)
def somedata(host):
    os.environ["host_param"] = host.ansible.get_variables()["somedata"]

def get_params():
    return os.environ["host_param"]  # do some clean up to return a list instead of a string

@pytest.fixture(params=get_params())
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
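If you go this route, json is one way to make the environment variable round-trip a real list. A sketch, assuming somedata is a list of strings (the original comment leaves this clean-up step open):

import json
import os

@pytest.fixture(autouse=True)
def somedata(host):
    os.environ["host_param"] = json.dumps(host.ansible.get_variables()["somedata"])

def get_params():
    return json.loads(os.environ.get("host_param", "[]"))  # a real list again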

Give Pytest fixtures different scopes for different tests

In my test suite, I have certain data-generation fixtures which are used with many parameterized tests. Some of these tests would want these fixtures to run only once per session, while others need them to run for every function. For example, I may have a fixture similar to:
@pytest.fixture
def get_random_person():
    return random.choice(list_of_people)
and 2 parameterized tests, one which wants to use the same person for each test condition and one which wants a new person each time. Is there any way for this fixture to have scope="session" for one test and scope="function" for another?
James' answer is okay, but it doesn't help if you yield from your fixture code. This is a better way to do it:
# Built In
from contextlib import contextmanager
# 3rd Party
import pytest

@pytest.fixture(scope='session')
def fixture_session_fruit():
    """Showing how fixtures can still be passed to the different scopes.

    If it is `session` scoped then it can be used by all the different scopes;
    otherwise, it must be the same scope or higher than the one it is used on.
    If this was `module` scoped then this fixture could NOT be used on `fixture_session_scope`.
    """
    return "apple"

@contextmanager
def _context_for_fixture(val_to_yield_after_setup):
    # Rather long and complicated fixture implementation here
    print('SETUP: Running before the test')
    yield val_to_yield_after_setup  # Let the test code run
    print('TEARDOWN: Running after the test')

@pytest.fixture(scope='function')
def fixture_function_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='class')
def fixture_class_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='module')
def fixture_module_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='session')
def fixture_session_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        # NOTE if `_context_for_fixture` just did `yield` without any value,
        # there should still be a `yield` here to keep the fixture
        # inside the context till it is done. Just remove the ` result` part.
        yield result
This way you can still handle contextual fixtures.
GitHub issue for reference: https://github.com/pytest-dev/pytest/issues/3425
One way to do this is to separate out the implementation and then have two differently-scoped fixtures return it. So something like:
import random
import pytest

def _random_person():
    return random.choice(list_of_people)

@pytest.fixture(scope='function')
def get_random_person_function_scope():
    return _random_person()

@pytest.fixture(scope='session')
def get_random_person_session_scope():
    return _random_person()
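A hypothetical pair of parametrized tests showing the difference (list_of_people comes from the question and is assumed to be defined):

@pytest.mark.parametrize("attempt", range(3))
def test_new_person_each_case(get_random_person_function_scope, attempt):
    # a fresh person may be drawn for every parametrized case
    assert get_random_person_function_scope in list_of_people

@pytest.mark.parametrize("attempt", range(3))
def test_same_person_every_case(get_random_person_session_scope, attempt):
    # the same person is reused for every case in the session
    assert get_random_person_session_scope in list_of_people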
I've been doing this:
import pytest

def _some_fixture(a_dependency_fixture):
    def __some_fixture(x):
        return x
    yield __some_fixture

some_temp_fixture = pytest.fixture(_some_fixture, scope="function")
some_module_fixture = pytest.fixture(_some_fixture, scope="module")
some_session_fixture = pytest.fixture(_some_fixture, scope="session")
Less verbose than using a context manager.
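Each name then behaves like an ordinary fixture of its scope. A hypothetical test using the module-scoped variant (a_dependency_fixture is assumed to exist as a fixture):

def test_factory_fixture(some_module_fixture):
    # the inner function simply echoes its argument in this sketch
    assert some_module_fixture(42) == 42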
Actually there is a workaround for this using the request object.
You could do something like:
@pytest.fixture(scope='class')
def get_random_person(request):
    request.scope = getattr(request.cls, 'scope', request.scope)
    return random.choice(list_of_people)
Then back at the test class:
@pytest.mark.usefixtures('get_random_person')
class TestSomething:
    scope = 'function'

    def a_random_test(self):
        ...

    def another_test(self):
        ...
However, this only works properly for choosing between 'function' and 'class' scope, and particularly if the fixture starts out class-scoped (and then changes to 'function' or is left as is).
If I try the other way around (from 'function' to 'class'), funny stuff happens and I still can't figure out why.

Scala: Test out Inner function

Quite new to Scala, hoping to get some help. If I have a method like this, how do I test the prepareCappuccino method? Is there something like Mockito's spy for these inner methods? Thanks
http://danielwestheide.com/blog/2013/01/09/the-neophytes-guide-to-scala-part-8-welcome-to-the-future.html
def prepareCappuccino(): Future[Cappuccino] = {
  def grind(beans: String): Future[GroundCoffee] = ???
  def heatWater(water: Water): Future[Water] = ???
  def frothMilk(milk: String): Future[FrothedMilk] = ???
  def brew(coffee: GroundCoffee, water: Water): Future[Espresso] = ???
  // the implementations of the methods above live inside prepareCappuccino
  for {
    ground <- grind("arabica beans")
    water <- heatWater(Water(20))
    foam <- frothMilk("milk")
    espresso <- brew(ground, water)
  } yield combine(espresso, foam)
}
As you have hardcoded the actual values, you could simply test the expected output.
import scala.concurrent.ExecutionContext.Implicits.global
val expected = Cappuccino(...)
val f = prepareCappuccino()
f.map(c => assert(c == expected))
Have a look at ScalaTest, Specs2 and ScalaCheck, which are the established testing frameworks for Scala. ScalaCheck gives you property-based testing, and choosing between the first two is just a matter of taste. Just pick the one you like more.
Handling of futures can be done via blocking (discouraged) or async testing. Here is a link to the ScalaTest docs regarding it: http://www.scalatest.org/user_guide/async_testing
Please note that in "real code" you should avoid structuring your code in a way that complicates testing.

global fixture that injects values from current module

I have test modules of this style:
# test_mammals.py
PETS = ['cats', 'dogs']

def test_mammals_1(pet):
    assert 0, pet

def test_mammals_2(pet):
    assert 0, pet
And here is another one:
# test_birds.py
PETS = ['budgie', 'parrot']

def test_birds_1(pet):
    assert 0, pet

def test_birds_2(pet):
    assert 0, pet
And I would like to define the fixture "pet" only once:
# conftest.py
import pytest

@pytest.fixture(scope='module', autouse=True)
def getpets(request):
    return getattr(request.module, 'PETS', [])

@pytest.fixture(scope='module', params=getpets, autouse=True)
def pet(request):
    return request.param
Unfortunately this doesn't work, because "pet" expects a list for "params". But if I put "getpets" into a list, the fixture will just return the "getpets" function object, not the values from "PETS" in the corresponding module.
This is a bit hard to answer because your code doesn't make a lot of sense as it stands - if your 'PETS' really are just a list of strings, you should just use pytest.mark.parametrize and you don't need anything special in conftest, or any fixture, in fact.
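A minimal sketch of that parametrize approach, using the PETS list from the question:

import pytest

PETS = ['cats', 'dogs']

@pytest.mark.parametrize('pet', PETS)
def test_mammals(pet):
    assert isinstance(pet, str)  # each pet becomes its own test case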
However, if you have something more complicated happening, probably the easiest thing to do is to have a generic fixture in conftest.py and, in each test module, define a lightweight fixture holding that module's specific data, which makes use of your generic pet fixture in whatever way it needs to.
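A sketch of that per-module pattern; the pet_data indirection and the names are assumptions, not part of the original answer:

# conftest.py
import pytest

@pytest.fixture
def pet(pet_data):
    # generic fixture; each test module supplies its own pet_data
    return pet_data

# test_mammals.py
@pytest.fixture(params=['cats', 'dogs'])
def pet_data(request):
    return request.param

def test_mammals(pet):
    assert pet in ('cats', 'dogs')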

In py.test, how can I narrow the scope of an xfail mark?

I would like to narrow the scope of the pytest xfail mark. As I currently use it, it marks the entire test function, and any failure in the function is cool.
I would like to narrow that down to a smaller scope, perhaps with a context manager similar to "with pytest.raises(module.Error)". For example:
@pytest.mark.xfail
def test_12345():
    first_step()
    second_step()
    third_step()
This test will xfail if an assertion fails in any of the three functions I call. I would like instead for the test to xfail only if an assertion fails in second_step(), and not elsewhere. Something like this:
def test_12345():
    first_step()
    with pytest.something.xfail:
        second_step()
    third_step()
Is this possible with py.test?
Thanks.
You can define a context manager yourself that does it, like this:
import pytest

class XFailContext:
    def __enter__(self):
        pass
    def __exit__(self, type, val, traceback):
        if type is not None:
            pytest.xfail(str(val))  # any exception inside the block becomes an xfail

xfail = XFailContext()

def step1():
    pass

def step2():
    0 / 0  # raises ZeroDivisionError

def step3():
    pass

def test_hello():
    step1()
    with xfail:
        step2()
    step3()
Of course you can also modify the context manager to look for specific exceptions.
The only caveat is that you cannot cause an "xpass" outcome, i.e. a special result that the (part of the) test unexpectedly passed.
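For example, a sketch of a variant that only converts one chosen exception type into an xfail and lets anything else fail the test normally (the expected parameter is an assumption, not part of the original answer):

import pytest

class XFailOn:
    def __init__(self, expected=AssertionError):
        self.expected = expected  # only this exception type becomes an xfail

    def __enter__(self):
        pass

    def __exit__(self, type, val, traceback):
        if type is not None and issubclass(type, self.expected):
            pytest.xfail(str(val))
        # returning None lets any other exception propagate normally

def test_partial_xfail():
    with XFailOn(ZeroDivisionError):
        0 / 0  # xfails; any other exception here would fail the test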