Pytest dynamic fixture scope - how is it set, and will the scope apply to all tests?

I am trying to use pytest's dynamic fixture scoping.
The docs state that the scope is determined during fixture definition; does that mean that once the scope is dynamically set for a pytest run, it will apply to all tests using the fixture?
Is it possible to affect the scope during the test run (e.g. using markers)?
If not, how can I change the config (without using a command-line argument) to change the scope?
I tried to add a marker and use its value as the scope in pytest_generate_tests, however the fixture definition runs prior to pytest_generate_tests:
def pytest_generate_tests(metafunc):
    fixture_scope = metafunc.definition.get_closest_marker('scope').args
    if fixture_scope is not None:
        metafunc.config.addinivalue_line('markers', f'scope:{fixture_scope[0]}')

def determine_scope(fixture_name, config):
    scope = [i for i in config.getini('markers') if 'scope' in i][0]
    if scope:
        return scope.split(':')[-1]
    return "function"

@pytest.fixture(scope=determine_scope)
def some_fixture():
    ...

I would suggest using an environment variable as the scope, and changing the environment variable when you want a different scope. Note that the scope= argument is evaluated when the fixture is defined (when conftest.py is imported), so in practice a change takes effect on the next pytest run rather than mid-run.
import os
import pytest

@pytest.fixture(scope=os.environ.get("fixture_scope", "function"))
def fixture_name():
    pass

def test_change_fixture():
    os.environ["fixture_scope"] = "class"
I have not tested this code, but I have found environment variables very useful for passing values between different stages of a pytest run. You could also use a dynamic config file or something similar.
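If you want the choice to live in pytest configuration rather than the environment, a minimal sketch using pytest's documented dynamic-scope callable plus a custom ini option could look like this (the option name fixture_scope and the fixture name some_fixture are just placeholders):

# conftest.py
import pytest

def pytest_addoption(parser):
    # register a custom ini option; the name "fixture_scope" is arbitrary
    parser.addini("fixture_scope", help="scope to use for some_fixture", default="function")

def determine_scope(fixture_name, config):
    # called once when the fixture is defined, at collection time
    return config.getini("fixture_scope")

@pytest.fixture(scope=determine_scope)
def some_fixture():
    ...

Then setting fixture_scope = session in pytest.ini (or the equivalent ini section of setup.cfg/pyproject.toml) changes the scope without a command-line argument. Because determine_scope runs when the fixture is defined, the chosen scope applies to every test that uses the fixture for that run, which also answers the first part of the question.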
An ugly way to achieve this is to have four copies of the same fixture (one per scope) and pick the relevant one as needed.
def fixture_code():
    pass

@pytest.fixture(scope="module")
def module_fixture():
    fixture_code()

@pytest.fixture(scope="session")
def session_fixture():
    fixture_code()

# similar fixtures for the other scopes

@pytest.fixture
def fixture_provider(request, module_fixture, session_fixture, .....):
    if request.param == "module":
        return module_fixture
    elif request.param == "session":
        return session_fixture
    .
    .
    .

@pytest.mark.parametrize("fixture_provider", [os.environ.get("fixture_scope")], indirect=True)
def test_abc(fixture_provider):
    # use fixture_provider
    ...
You can set the environment variable before the run to pick the relevant fixture.
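A related pattern (my sketch, not part of the answer above) skips the provider's parametrization and picks the per-scope fixture by name at runtime with request.getfixturevalue:

import os
import pytest

@pytest.fixture
def scoped_fixture(request):
    # assumes module_fixture, session_fixture, etc. from the snippet above exist
    scope = os.environ.get("fixture_scope", "function")
    return request.getfixturevalue(f"{scope}_fixture")

Because getfixturevalue resolves the fixture lazily at test time, only the copy that is actually named gets instantiated.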

Related

Pytest setup class once before testing

I'm using pytest for testing, with the mixer library for generating model data. Now I'm trying to set up my tests once before they run. I grouped them into test classes and gave my fixtures 'class' scope, but this doesn't work for me.
@pytest.mark.django_db
class TestCreateTagModel:
    @classmethod
    @pytest.fixture(autouse=True, scope='class')
    def _set_up(cls, create_model_instance, tag_model, create_fake_instance):
        cls.model = tag_model
        cls.tag = create_model_instance(cls.model)
        cls.fake_instance = create_fake_instance(cls.model)
        print('setup')

    def test_create_tag(self, tag_model, create_model_instance, check_instance_exist):
        tag = create_model_instance(tag_model)
        assert check_instance_exist(tag_model, tag.id)
conftest.py
@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_model_instance():
    instance = None

    def wrapper(model, **fields):
        nonlocal instance
        if not fields:
            instance = mixer.blend(model)
        else:
            instance = mixer.blend(model, **fields)
        return instance

    yield wrapper
    if instance:
        instance.delete()

@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_fake_instance(create_related_fields):
    """
    Create a fake instance of a model (fake means the instance doesn't exist in the DB).
    Args:
        related (bool, optional): Flag which indicates whether to create related objects. Defaults to False.
    """
    instance = None

    def wrapper(model, related=False, **fields):
        nonlocal instance
        with mixer.ctx(commit=False):
            instance = mixer.blend(model, **fields)
            if related:
                create_related_fields(instance, **fields)
            return instance

    yield wrapper
    if instance:
        instance.delete()

@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_related_fields():
    django_rel_types = ['ForeignKey']

    def wrapper(instance, **fields):
        for f in instance._meta.get_fields():
            if type(f).__name__ in django_rel_types:
                rel_instance = mixer.blend(f.related_model)
                setattr(instance, f.name, rel_instance)

    return wrapper
But I'm getting an exception in mixer's gen_value method: "Database access not allowed, use the django_db mark" (which I'm already using). Do you have any ideas how this can be implemented?
You can set things up once before a run by returning the results of the setup, rather than modifying the test class directly. From my own attempts, it seems any changes made to the class within class-scoped fixtures are lost when the individual tests run. So here's how you should be able to do this. Replace your _set_up fixture with these:
@pytest.fixture(scope='class')
def model_instance(self, tag_model, create_model_instance):
    return create_model_instance(tag_model)

@pytest.fixture(scope='class')
def fake_instance(self, tag_model, create_fake_instance):
    return create_fake_instance(tag_model)
And then these can be accessed through:
def test_something(self, model_instance, fake_instance):
    # Check that model_instance and fake_instance are as expected
    ...
I'm not familiar with Django myself though, so there might be something else with it going on. This should at least help you solve one half of the problem, if not the other.
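If the remaining problem is the pytest-django access guard specifically: the db fixture is function-scoped, so class-scoped fixtures cannot request it. A workaround I have seen used (a sketch only, not tested against the asker's models) is pytest-django's session-scoped django_db_blocker fixture:

@pytest.fixture(scope='class')
def model_instance(self, django_db_blocker, tag_model, create_model_instance):
    # explicitly allow DB access inside a class-scoped fixture
    with django_db_blocker.unblock():
        return create_model_instance(tag_model)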

Using fixtures at collect time in pytest

I use testinfra with the Ansible transport. It provides a host fixture which has ansible, so I can do host.ansible.get_variables().
Now I need to parametrize a test based on a value from this inventory.
Inventory:
foo:
  hosts:
    foo1:
      somedata:
        - data1
        - data2
I want to write a test which checks each item of somedata for each host in the inventory. The 'each host' part is handled by testinfra, but I'm struggling with the parametrization of the test:
@pytest.fixture
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata)
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
I've tried both ways:
@pytest.fixture(params=somedata) -> TypeError: 'function' object is not iterable
@pytest.fixture(params=somedata()) -> Fixture "somedata" called directly. Fixtures are not meant to be called directly...
How can I do this? I understand that I can't change the number of tests at test time, but I'm pretty sure I have the same inventory at collection time, so, theoretically, it should be doable...
After reading a lot of source code I have come to the conclusion that it's impossible to call fixtures at collection time. There are no fixtures at collection time, and any parametrization has to happen before any tests run. Moreover, it's impossible to change the number of tests at test time (so no fixture could change that).
Answering my own question on using an Ansible inventory to parametrize a test function: it's possible, but it requires manually reading the inventory, hosts, etc. There is a special hook for that: pytest_generate_tests (it's a function, not a fixture).
My current code to get any test parametrized by host_interface fixture is:
import testinfra.utils.ansible_runner

def cartesian(hosts, ar):
    for host in hosts:
        for interface in ar.get_variables(host).get("interfaces", []):
            yield (host, interface)

def pytest_generate_tests(metafunc):
    if 'host_interface' in metafunc.fixturenames:
        inventory_file = metafunc.config.getoption('ansible_inventory')
        ansible_config = testinfra.utils.ansible_runner.get_ansible_config()
        inventory = testinfra.utils.ansible_runner.get_ansible_inventory(ansible_config, inventory_file)
        ar = testinfra.utils.ansible_runner.AnsibleRunner(inventory_file)
        hosts = ar.get_hosts(metafunc.config.option.hosts)
        metafunc.parametrize("host_interface", cartesian(hosts, ar))
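For completeness, a hypothetical test consuming this parametrization would unpack the (host, interface) tuple that the cartesian helper above yields:

def test_interface_present(host_interface):
    host, interface = host_interface
    # one test case per (host, interface) pair from the inventory
    assert interface is not None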
You should use a helper function instead of a fixture to parametrize another fixture; fixtures cannot be used as decorator parameters in pytest.
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata(host))
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
This assumes that the host is not a fixture.
If host is a fixture, there is a hacky way to get around the problem: write the parameters to a temp file or an environment variable and read them back with a helper function.
import os

@pytest.fixture(autouse=True)
def somedata(host):
    os.environ["host_param"] = host.ansible.get_variables()["somedata"]

def get_params():
    return os.environ["host_param"]  # do some clean up to return a list instead of a string

@pytest.fixture(params=get_params())
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
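Since environment variables can only hold strings, the "clean up" the comment above refers to could be done with JSON, for instance (a sketch, assuming somedata is a list of strings):

import json
import os

def set_params(values):
    # serialise the list so it survives the round trip through the environment
    os.environ["host_param"] = json.dumps(values)

def get_params():
    return json.loads(os.environ.get("host_param", "[]"))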

Give Pytest fixtures different scopes for different tests

In my test suite, I have certain data-generation fixtures which are used with many parameterized tests. Some of these tests want the fixtures to run only once per session, while others need them to run for every function. For example, I may have a fixture similar to:
@pytest.fixture
def get_random_person():
    return random.choice(list_of_people)
and 2 parameterized tests, one which wants to use the same person for each test condition and one which wants a new person each time. Is there any way for this fixture to have scope="session" for one test and scope="function" for another?
James' answer is okay, but it doesn't help if you yield from your fixture code. This is a better way to do it:
# Built-in
from contextlib import contextmanager
# 3rd party
import pytest

@pytest.fixture(scope='session')
def fixture_session_fruit():
    """Showing how fixtures can still be passed to the different scopes.

    If it is `session` scoped then it can be used by all the different scopes;
    otherwise, it must be the same scope or higher than the one it is used on.
    If this was `module` scoped then this fixture could NOT be used on `fixture_session_scope`.
    """
    return "apple"

@contextmanager
def _context_for_fixture(val_to_yield_after_setup):
    # Rather long and complicated fixture implementation here
    print('SETUP: Running before the test')
    yield val_to_yield_after_setup  # Let the test code run
    print('TEARDOWN: Running after the test')

@pytest.fixture(scope='function')
def fixture_function_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='class')
def fixture_class_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='module')
def fixture_module_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='session')
def fixture_session_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        # NOTE: if `_context_for_fixture` just did `yield` without any value,
        # there should still be a `yield` here to keep the fixture
        # inside the context until it is done. Just remove the ` result` part.
        yield result
This way you can still handle contextual fixtures.
Github issue for reference: https://github.com/pytest-dev/pytest/issues/3425
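A test then simply requests whichever variant it needs, for example (test names are illustrative):

def test_with_function_scope(fixture_function_scope):
    # setup/teardown wrap this single test
    assert fixture_function_scope == "apple"

def test_with_session_scope(fixture_session_scope):
    # setup ran once for the whole session; teardown runs at session end
    assert fixture_session_scope == "apple"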
One way to do this is to separate out the implementation and then have two differently-scoped fixtures return it. So something like:
def _random_person():
    return random.choice(list_of_people)

@pytest.fixture(scope='function')
def get_random_person_function_scope():
    return _random_person()

@pytest.fixture(scope='session')
def get_random_person_session_scope():
    return _random_person()
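The two parametrized tests from the question would then just request different fixture names, roughly (a sketch with made-up parameters):

@pytest.mark.parametrize("condition", ["a", "b", "c"])
def test_same_person_for_all_conditions(get_random_person_session_scope, condition):
    # the same randomly chosen person is reused for every condition
    person = get_random_person_session_scope
    assert person in list_of_people

@pytest.mark.parametrize("condition", ["a", "b", "c"])
def test_new_person_each_time(get_random_person_function_scope, condition):
    # a fresh random choice for every parametrized case
    person = get_random_person_function_scope
    assert person in list_of_people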
I've been doing this:
def _some_fixture(a_dependency_fixture):
    def __some_fixture(x):
        return x

    yield __some_fixture

some_temp_fixture = pytest.fixture(_some_fixture, scope="function")
some_module_fixture = pytest.fixture(_some_fixture, scope="module")
some_session_fixture = pytest.fixture(_some_fixture, scope="session")
Less verbose than using a context manager.
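Usage is the same as for any other fixture; e.g. (assuming a_dependency_fixture is defined somewhere in the suite):

def test_uses_module_variant(some_module_fixture):
    # the test receives the inner __some_fixture function that was yielded
    assert some_module_fixture(42) == 42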
Actually there is a workaround for this using the request object.
You could do something like:
@pytest.fixture(scope='class')
def get_random_person(request):
    request.scope = getattr(request.cls, 'scope', request.scope)
    return random.choice(list_of_people)
Then back at the test class:
@pytest.mark.usefixtures('get_random_person')
class TestSomething:
    scope = 'function'

    def a_random_test(self):
        ...

    def another_test(self):
        ...
However, this only works properly for choosing between 'function' and 'class' scope, and particularly if the fixture starts as class-scoped (and then changes to 'function' or is left as is).
If I try it the other way around (from 'function' to 'class'), funny stuff happens and I still can't figure out why.

Pytest yield fixture usage

I have a use case where I may use a fixture multiple times inside a test in a "context manager" way. See the example code below:
in conftest.py
import logging
import uuid
from datetime import datetime

import pytest

log = logging.getLogger(__name__)

class SomeYield(object):
    def __enter__(self):
        log.info("SomeYield.__enter__")

    def __exit__(self, exc_type, exc_val, exc_tb):
        log.info("SomeYield.__exit__")

def generate_name():
    name = "{current_time}-{uuid}".format(
        current_time=datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
        uuid=str(uuid.uuid4())[:4]
    )
    return name

@pytest.yield_fixture
def some_yield():
    name = generate_name()
    log.info("Start: {}".format(name))
    yield SomeYield()
    log.info("End: {}".format(name))
in test_some_yield.py
def test_some_yield(some_yield):
    with some_yield:
        pass

    with some_yield:
        pass
Console output:
INFO:conftest:Start: 2017-12-06-01-50-32-5213
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:End: 2017-12-06-01-50-32-5213
Questions:
1. If I have some setup code in SomeYield.__enter__ and cleanup code in SomeYield.__exit__, is this the right way to do it, using a fixture for multiple calls in my test?
2. Why didn't I see three occurrences of __enter__ and __exit__? Is this expected?

Is there any alternative to creating one fixture per variable in a py.test class?

I would like to use some common data in all my py.test class methods, and only in that class, e.g.
n_files = 1000
n_classes = 10
n_file_per_class = int(n_files / n_classes)
I found out that I can use fixtures, e.g.:
class TestDatasplit:
    @pytest.fixture()
    def n_files(self):
        return 1000

    @pytest.fixture()
    def n_classes(self):
        return 10

    @pytest.fixture()
    def n_files_per_class(self, n_files, n_classes):
        return int(n_files / n_classes)

    def test_datasplit_1(self, n_files):
        assert n_files == 1000

    def test_datasplit(self, n_files_per_class):
        assert n_files_per_class == 100
but this way I need to create a fixture for every variable, which seems quite verbose (I have many more than 3 variables)...
What is the best way to create a bunch of shared variables in a py.test class?
Your tests don't seem to be mutating these values, so you can use module-level or class-level constants. Pytest fixtures are there to provide each test with a separate copy of a value, so that tests don't begin to depend on each other (or inadvertently make each other fail) when one or more tests mutate the values.
I agree with what @das-g said, but if you wanted to use fixtures, then you could have a fixture which returns an object based on a custom class or, e.g., a namedtuple.
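For example, a minimal sketch of the namedtuple variant (names chosen to match the question):

from collections import namedtuple

import pytest

DataParams = namedtuple("DataParams", ["n_files", "n_classes", "n_files_per_class"])

class TestDatasplit:
    @pytest.fixture()
    def params(self):
        # one fixture carrying all the shared values for this class
        return DataParams(n_files=1000, n_classes=10, n_files_per_class=100)

    def test_datasplit(self, params):
        assert params.n_files_per_class == params.n_files // params.n_classes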