I would like to run specific code after all the tests have been executed with pytest.
For example: I open a database connection before any tests are executed, and I would like to close the connection after all the tests have finished.
How can I achieve this with pytest? Is there a fixture or something that can do this?
You can use an autouse fixture with session scope:
@pytest.fixture(scope='session', autouse=True)
def db_conn():
    # Will be executed before the first test
    conn = db.connect()
    yield conn
    # Will be executed after the last test
    conn.disconnect()
You can then also use db_conn as an argument to a test function:
def test_foo(db_conn):
    results = db_conn.execute(...)
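If the connection needs to be shared by tests in several files, this fixture typically lives in a conftest.py at the root of the test tree, so every test module picks it up automatically. A minimal sketch, with db standing in for whatever client library you actually use:

# conftest.py
import pytest
import db  # placeholder for your real database client


@pytest.fixture(scope='session', autouse=True)
def db_conn():
    conn = db.connect()    # opened once, before the first test of the session
    yield conn
    conn.disconnect()      # closed once, after the last test of the session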
I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest

from main.models import Student


@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with pytest --reuse-db, the test passes, which does not surprise me.
But when I run pytest --reuse-db a second time, I expect the database not to have been destroyed, so I expect the test to fail because Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only tells pytest-django to keep the test database around between runs so it does not have to be re-created each time; it says nothing about preserving data.
Each test marked with @pytest.mark.django_db runs inside a transaction that is rolled back when the test finishes, so the Student row created in the first run never persists and the count is 1 on every run.
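As a quick illustration (same Student model as above), even two tests within a single run each start from an empty table, because every @pytest.mark.django_db test is rolled back:

import pytest

from main.models import Student


@pytest.mark.django_db
def test_first():
    Student.objects.create(name=1)
    assert Student.objects.count() == 1


@pytest.mark.django_db
def test_second():
    # The row created in test_first was rolled back with its transaction.
    assert Student.objects.count() == 0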
I have 2 tests and I want to run only one of them:
pipenv run pytest -s tmp_test.py::test_my_var
But pytest executes the functions referenced in both @pytest.mark.parametrize decorators (in both tests).
How can I force pytest to execute only the get_my_var() function when I run only test_my_var?
If I run the whole file:
pipenv run pytest -s tmp_test.py
I want Pytest to execute the code in the following manner:
get_my_var()
test_my_var()
get_my_var_1()
test_my_var_1()
Actually, the functions referenced in @pytest.mark.parametrize do some data preparation, and both tests use the same entities, so each parametrization function changes the state of the same test data.
That's why I strongly need each parametrization function to run sequentially, just before its corresponding test.
import pytest


def get_my_var():
    with open('my var', 'w') as f:
        f.write('my var')
    return 'my var'


def get_my_var_1():
    with open('my var_1', 'w') as f:
        f.write('my var_1')
    return 'my var_1'


@pytest.mark.parametrize('my_var', get_my_var())
def test_my_var(my_var):
    pass


@pytest.mark.parametrize('my_var_1', get_my_var_1())
def test_my_var_1(my_var_1):
    pass
Or how can I achieve the same goal with any other options?
For example, with fixtures. I could use fixtures for the data preparation, but I need to use the same fixture in different tests because the preparation is the same, so I cannot use scope='session'.
At the same time, scope='function' makes the fixture run for every instance of a parametrized test.
Is there a way to run a fixture (or any other function) only once for a parametrized test, before all of its parametrized instances run?
It looks like only something like this can resolve the issue:
import pytest

current_test = None


@pytest.fixture()
def one_time_per_test_init(request):
    test_name = request.node.originalname
    global current_test
    if current_test != test_name:
        current_test = test_name
        init, kwargs = request.param
        init(**kwargs)
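Continuing that snippet, one way to wire it up is to pass the preparation callable to the fixture via indirect parametrization (the data values below are made up; get_my_var is the question's own function):

@pytest.mark.parametrize('one_time_per_test_init', [(get_my_var, {})], indirect=True)
@pytest.mark.parametrize('data', [1, 2, 3])
def test_my_var(one_time_per_test_init, data):
    # request.node.originalname is 'test_my_var' for all three instances,
    # so the fixture calls get_my_var() only for the first one.
    pass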
This is the real code from MLflow: https://github.com/mlflow/mlflow/blob/8a7659ee961c2a0d3a2f14c67140493a76d1e51d/tests/conftest.py#L42
@pytest.fixture
def test_mode_on():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
There are also multiple places where test_mode_on is used like this:
@pytest.mark.usefixtures(test_mode_on.__name__)
def test_safe_patch_propagates_exceptions_raised_outside_of_original_function_in_test_mode(
When I try to run any tests I get the following:
tests/test_version.py::test_is_release_version ERROR [100%]
==================================== ERRORS ====================================
Fixture "test_mode_on" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I want to understand what the original code was doing with yield from test_mode_on() and how to fix it.
Update:
I've tried to change the code to request the fixture, but got an error that test_mode_on has function scope while enable_test_mode_by_default_for_autologging_integrations has session scope.
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations(test_mode_on):
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
The intention obviously was to reuse a function-scoped fixture in a session-scoped fixture. Apparently, that was possible in old pytest versions.
In any recent pytest version it is not (as you have noticed). If you cannot fix the MLflow tests, your only option is to use an old pytest version that still supports it; MLflow has pinned pytest to 3.2.1 (probably for that same reason).
Be aware that any pytest plugin you have installed will likely not work with that pytest version either, so you have to downgrade or remove the plugins, too.
This recent issue is probably related to the outdated pytest version, so there is a chance that this will be addressed in MLflow.
UPDATE:
Just realized that it would help to show how to fix this for a current pytest version. In current pytest you are not allowed to derive (or yield) from a fixture with a narrower scope, as this would often not work as expected. You can, however, move the fixture code into a generator function, and yield from that. So a working version could be something like:
def test_mode_on_gen():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]


@pytest.fixture
def test_mode_on():
    yield from test_mode_on_gen()


@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    yield from test_mode_on_gen()
I am very new to pytest.
There is a test_conf directory which holds several test config files:
test_conf/
    test_conf1
    test_conf2
Here is my test function. The conf_files() function collects all the config files from that directory and returns them as a list.
@pytest.mark.parametrize('test_conf', conf_files())
def test_performance_scenario_1(fixture1, test_conf):
    do_some_thing(test_conf)
fixture1 does the setup and teardown for this test.
The idea is to run the test function once against each config file in test_conf.
My question is how to pass each element of test_conf to fixture1, since the setup step of fixture1 needs the config file for some initialization.
Any help is appreciated.
I think what you're looking for is parametrized fixtures. Instead of passing the conf files to your test, you pass them to the fixture, which does the setup/teardown with the particular conf file.
Definition of your fixture1:
#pytest.fixture(scope="module", params=conf_files())
def fixture1(request):
# The pytest fixture `request` gives you access to the params defined in the annotation.
conf_file = request.param
# setup / teardown logic
...
Then in your test, it suffices to only pass the fixture:
def test_performance_scenario_1(fixture1):
    # do_some_things
    ...
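conf_files() is the question's own helper; one possible implementation, assuming the directory layout shown in the question and that pytest is run from the project root:

from pathlib import Path


def conf_files():
    # One parameter per config file found in the test_conf directory.
    return sorted(str(p) for p in Path('test_conf').iterdir() if p.is_file())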
I need to run the same tests on different devices. I used a fixture to supply the IP addresses of the devices, and all tests run for the IPs provided by the fixture. At the same time, I need to append the IP address to the test name so I can analyze the results quickly. In the pytest results the test name is the same for all params; only in the log or in a print statement can I see which parameter was used. Is there any way to change the test name by appending the fixture param to it?
class TestClass:
    def test1(self):
        pass

    def test2(self):
        pass
We need to run the whole test class for every device, all test methods in sequence for each device. We cannot run each test in its own parameter cycle; we need to run the whole test class in a parameter cycle. We achieved this with a fixture implementation, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name
I could change the pytest name by creating a hook in a conftest.py file.
However, I had to use pytest private variables, so my solution could stop working when you upgrade pytest.
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's output from an example test run. Notice how the fixture values are included in the failure output right after the name of the test. This makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
======= FAILURES ========
_______ test_eval[6*9-42] ________
test_input = '6*9', expected = 42
#pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
def test_eval(test_input, expected):
> assert eval(test_input) == expected
E AssertionError: assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
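Applied to the device scenario above, a class-scoped, parametrized fixture gives every device its own test id and runs the whole class once per device (the IP addresses here are made up):

import pytest

DEVICE_IPS = ['10.0.0.1', '10.0.0.2']  # placeholder addresses


@pytest.fixture(scope='class', params=DEVICE_IPS, ids=lambda ip: f'ip-{ip}')
def device_ip(request):
    return request.param


class TestClass:
    def test1(self, device_ip):
        assert device_ip

    def test2(self, device_ip):
        assert device_ip

This collects test ids like TestClass::test1[ip-10.0.0.1], and because the fixture is class-scoped, pytest groups the runs so that both test methods execute for one IP before moving on to the next.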