How can I make a pytest plugin that adds a fixture configurable?

I would like to create a Python package that contains a fixture for pytest. The fixture should mock the behavior of an identification web service. The service takes some client parameters, e.g. a username and a password, plus other non-credential settings. I want the plugin users to set those globally once so that they can test all the behavior they want.
I've seen that I can parametrize fixtures and use pytest.mark.parametrize to pass the values.
How can I add a global setting for all tests for my fixture?

If I understand your requirement correctly, there are two parts: sharing a global fixture value between tests, and shipping the fixture as a plugin.
First, you can develop your plugin as usual inside your code, along with your tests. A basic version of a shared value can be achieved using a global variable.
Consider this sample code in conftest.py:
import pytest


class MyClientClass:
    def __init__(self, auth):
        self.auth = auth


default_client = None


@pytest.fixture(scope="session")
def web_client(request):
    global default_client
    param = getattr(request, "param", None)
    if param:
        default_client = MyClientClass(param)
    return default_client
Then your tests can look like this (only the first test performs the initialization; the others reuse the authenticated client):
import pytest


@pytest.mark.parametrize("web_client", [{"user": "user", "pass": "pass"}],
                         indirect=True)
def test_with_init_creds(web_client):
    print(web_client.auth)


def test_some(web_client):
    print(web_client.auth)


def test_another(web_client):
    print(web_client.auth)
Now, once you are happy with the local fixture, you can put its code to an installable library. Check out these two links from the official documentation: https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#writing-your-own-plugin and https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#making-your-plugin-installable-by-others. The important thing is to have the entry point pytest11 so the plugin is discoverable.
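For example, with setuptools the registration could look roughly like this (a minimal sketch; the project name pytest-web-client and module pytest_web_client are placeholders chosen for illustration):

# setup.py -- illustrative only; adjust the names to your package
from setuptools import setup

setup(
    name="pytest-web-client",
    version="0.1.0",
    py_modules=["pytest_web_client"],
    # the pytest11 entry point is what makes pytest discover the plugin
    entry_points={"pytest11": ["web_client = pytest_web_client"]},
)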
I created a very small plugin using poetry, which you can also reference: https://github.com/pksol/pytest-fastapi-deps

Related

How do I use a different fixture implementation based on a command line argument?

I am testing an app that has user profiles.
Normally, I tear down the profile after each test,
but it is very slow, so I wanted to have the option to run the tests faster by
keeping the profile and only tearing down changes after each test.
This is what I have now, and it works fine:
@pytest.fixture(scope="session")
def session_scope_app():
    with empty_app_started() as app:
        yield app


@pytest.fixture(scope="session")
def session_scope_app_with_profile_loaded(session_scope_app):
    with profile_loaded(session_scope_app):
        yield session_scope_app


if TEAR_DOWN_PROFILE_AFTER_EACH_TEST:
    @pytest.fixture
    def setup(session_scope_app):
        with profile_loaded(session_scope_app):
            yield session_scope_app
else:
    @pytest.fixture
    def setup(session_scope_app_with_profile_loaded):
        with profile_state_preserved(session_scope_app_with_profile_loaded):
            yield session_scope_app_with_profile_loaded
This produces a fixture setup that, as far as other tests are concerned,
behaves the same way regardless of whether the profile is torn down after each test.
Now, I want to turn TEAR_DOWN_PROFILE_AFTER_EACH_TEST into a command line option. How can I do this? Command line options are not yet available at the test collection stage, and I can't just put the if into the fixture function body, because the two variants of setup depend on different fixtures.
There are two ways of doing that, but first, let's add the command line option itself.
def pytest_addoption(parser):
    parser.addoption("--tear-down-profile-after-each-test",
                     action="store_true",
                     default=True)
    parser.addoption("--no-tear-down-profile-after-each-test", "-T",
                     action="store_false",
                     dest="tear_down_profile_after_each_test")
Now, we can either invoke fixtures dynamically, or create a tiny plugin that shuffles our fixtures.
Invoke the fixture dynamically
This is very simple. Instead of depending on a fixture via function arguments,
we can call request.getfixturevalue(name) from inside the fixture.
@pytest.fixture
def setup(request, session_scope_app):  # note: request must be a parameter here
    if request.config.option.tear_down_profile_after_each_test:
        with profile_loaded(session_scope_app):
            yield session_scope_app
    else:
        session = request.getfixturevalue(
            session_scope_app_with_profile_loaded.__name__
        )
        with profile_state_preserved(session):
            yield session
(It's ok to depend on session_scope_app since session_scope_app_with_profile_loaded depends on it anyway.)
Pros: PyCharm is happy. Cons: you won't be seeing session_scope_app_with_profile_loaded in --setup-plan.
Make a simple plugin
Plugins have the benefit of having access to the configuration.
def pytest_configure(config):
    class Plugin:
        if config.option.tear_down_profile_after_each_test:
            @pytest.fixture
            def setup(self, session_scope_app):
                with profile_loaded(session_scope_app):
                    yield session_scope_app
        else:
            @pytest.fixture
            def setup(self, session_scope_app_with_profile_loaded):
                with profile_state_preserved(session_scope_app_with_profile_loaded):
                    yield session_scope_app_with_profile_loaded

    config.pluginmanager.register(Plugin())
Pros: You get an excellent --setup-plan. Cons: PyCharm won't recognize that setup is a fixture.

How can I parametrize my fixture and get test data to parametrize my tests

I'm a beginner with pytest. I just learned about fixtures and tried to do this:
My tests call functions I wrote, and get test data from a code practicing website.
Each test is for a particular page and has several sets of test data.
So, I want to use @pytest.mark.parametrize to parametrize my single test function.
Also, as the tests all perform similar operations, I want to make the page object instantiation and the steps to get the test data from the page into a fixture.
# content of conftest.py
import pytest

from page_objects import problem_page


@pytest.fixture
def get_testdata_from_problem_page():
    def _get_testdata_from_problem_page(problem_name):
        page = problem_page.ProblemPage(problem_name)
        return page.get_sample_data()
    return _get_testdata_from_problem_page
# content of test_problem_a.py
import pytest

from page_objects import problem_page
from problem_func import problem_a


@pytest.mark.parametrize('input,expected', test_data)
def test_problem_a(get_testdata_from_problem_page):
    input, expected = get_testdata_from_problem_page("problem_a")
    assert problem_a.problem_a(input) == expected
Then I realized that, written this way, I can't parametrize the test using pytest.mark, because the test data must be available outside the test function....
Are there solutions for this? Thanks very much~~
If I understand you correctly, you want to write one parametrized test per page. In this case you just have to write a plain function instead of a fixture and use it for parametrization:
import pytest

from page_objects import problem_page
from problem_func import problem_a


def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    # returns a list of (input, expected) tuples
    return page.get_sample_data()


@pytest.mark.parametrize('input,expected',
                         get_testdata_from_problem_page("problem_a"))
def test_problem_a(input, expected):
    assert problem_a.problem_a(input) == expected
As you wrote, a fixture can only be used as a parameter to a test function or to another fixture, not in a decorator.
If you want to use the function to get test data elsewhere, just move it to a common module and import it, as sketched below. This can be some custom utility module, or you could put it into conftest.py, though you would still have to import it.
Note also that the fixture you wrote does not do anything by itself: it only defines a local function and returns it without calling it.
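For instance, a minimal sketch of pulling the helper into a shared module (the module name testdata_utils is a placeholder for illustration):

# testdata_utils.py -- hypothetical shared module
from page_objects import problem_page


def get_testdata_from_problem_page(problem_name):
    # returns a list of (input, expected) tuples for the given problem
    page = problem_page.ProblemPage(problem_name)
    return page.get_sample_data()

The test module then simply does from testdata_utils import get_testdata_from_problem_page and uses it in the parametrize decorator as above.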

pytest fixture for certain test cases

I have a test class and test cases like below:
import pytest


class TestSomething:
    ...

    @pytest.fixture(autouse=True)
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    def test_abc_1(self):
        ...

    def test_abc_2(self):
        ...

    def test_def_1(self):
        ...

    def test_def_2(self):
        ...
The problem is that before_and_after_testcases() runs for each test case in the class. Is it possible to apply the fixture only to the test cases with the abc pattern in the function name? The fixture is not supposed to run for test_def_xxx, but I don't know how to exclude those test cases.
An autouse=True fixture is automatically applied to all of the tests; to remove that auto-application, you remove autouse=True,
but now the fixture isn't applied to any test!
To manually apply the fixture to the tests that need it, you can either:
add the fixture's name as a parameter (if you need the value the fixture provides), or
decorate the tests which need the fixture with @pytest.mark.usefixtures('fixture_name_here'), as shown in the sketch below.
Another approach is to split your one test class into multiple test classes, grouping together the tests which need the particular auto-used fixtures.
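A minimal sketch of the usefixtures option, reusing the names from the question (setup() and cleanup() are assumed to exist as in the original code):

import pytest


class TestSomething:
    @pytest.fixture  # no autouse: tests now opt in explicitly
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    @pytest.mark.usefixtures('before_and_after_testcases')
    def test_abc_1(self):
        ...

    # test_def_1 does not request the fixture, so it runs without it
    def test_def_1(self):
        ...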
disclaimer: I'm a pytest developer, though I don't think that's entirely relevant to this answer; SO just requires disclosure of affiliation

pytest fixture with parametrization from another fixture

I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file, and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. The parametrization causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file


@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)


@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()


def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that the first test failure ends the entire test function, so the remaining objects are not checked.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...


@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects


def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all the pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
Any ideas?
Update: Let me add that I also searched SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter

Breakdown of fixture setup time in py.test

I have some py.test tests that have multiple dependent and parameterized fixtures, and I want to measure the time taken by each fixture. However, --durations only shows the setup time for the actual tests; it doesn't give me a breakdown of how long each individual fixture took.
Here is a concrete example of how to do this:
import logging
import time

import pytest

logger = logging.getLogger(__name__)


@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
    start = time.time()
    yield
    end = time.time()
    logger.info(
        'pytest_fixture_setup'
        f', request={request}'
        f', time={end - start}'
    )
With output similar to:
2018-10-29 20:43:18,783 - INFO pytest_fixture_setup, request=<SubRequest 'some_data_source' for <Function 'test_ruleset_customer_to_campaign'>>, time=3.4723987579345703
The magic is hookwrapper:
pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers and passes the same arguments as to the regular hooks.
One fairly important gotcha that I ran into is that conftest.py has to be in your project's root folder for the pytest_fixture_setup hook to be picked up.
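Also note that whether the INFO lines actually show up in the terminal depends on your logging configuration; one way to surface them (an assumption on my part, not part of the original answer) is pytest's live logging:

# pytest.ini -- enable live logging so the logger.info() lines are printed
[pytest]
log_cli = true
log_cli_level = INFO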
There isn't anything built in for that, but you can easily implement it yourself using the pytest_fixture_setup hook in a conftest.py file.