The problem is that my fixture function has an external dependency, and that dependency is causing an error (unreachable network, insufficient resources, etc.).
I'd like to skip the fixture and thereby skip any test that depends on it.
Doing something like this won't work:
import pytest
@pytest.mark.skip(reason="Something.")
@pytest.fixture(scope="module")
def parametrized_username():
    raise Exception("foobar")
    return 'overridden-username'
This will result in:
_______________________________ ERROR at setup of test_username _______________________________
    @pytest.mark.skip(reason="Something.")
    @pytest.fixture(scope="module")
    def parametrized_username():
>       raise Exception("foobar")
E       Exception: foobar
a2.py:6: Exception
What's the right way to skip a pytest fixture?
Yes, you can do this easily:
import pytest
@pytest.fixture
def myfixture():
    pytest.skip('Because I want so')

def test_me(myfixture):
    pass
$ pytest -v -s -ra r.py
r.py::test_me SKIPPED
=========== short test summary info ===========
SKIP [1] .../r.py:6: Because I want so
=========== 1 skipped in 0.01 seconds ===========
Internally, the pytest.skip() function raises a Skipped exception, which inherits from OutcomeException. These exceptions are handled specially to produce the desired test outcome without failing the test (pytest.fail() works similarly).
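The same mechanism also answers the original question: instead of letting the external dependency raise inside the fixture, catch the failure and turn it into pytest.skip(), which then skips every test that requests the fixture. This is only a sketch; connect_to_service() is a hypothetical stand-in for whatever external call is actually failing.
import pytest

@pytest.fixture(scope="module")
def parametrized_username():
    try:
        # Hypothetical external dependency (unreachable network, missing resources, ...)
        connect_to_service()
    except Exception as exc:
        pytest.skip(f"external dependency unavailable: {exc}")
    return 'overridden-username'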
TL;DR: Please look at the base-case code below. Pytest just ignores this class. How should I run tests on such a class?
I just started switching from simple Python tests (with plain assert) to testing with pytest and came across this problem. Most of my tests are classes that extend real classes with test methods. One of my classes inherits from collections.UserDict. Pytest just ignores this class. How should I run tests on such a class?
# Inheriting from object is OK; inheriting from dict is not. But I need dict :(
class TestFoo(dict):
    def test_foo(self):
        assert 1
output:
/home/david/PycharmProjects/proj/venv/bin/python /snap/pycharm-professional/302/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 44145 --file /snap/pycharm-professional/302/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/david/PycharmProjects/proj/tests/unit_tests_2.py
Testing started at 11:07 ...
Launching pytest with arguments /home/david/PycharmProjects/proj/tests/unit_tests_2.py --no-header --no-summary -q in /home/david/PycharmProjects/proj/tests
============================= test session starts ==============================
collecting ... collected 0 items
============================= 2 warnings in 0.03s ==============================
Process finished with exit code 5
Empty suite
UPD: Thanks to @Teejay Bruno; running tests from PyCharm was hiding a warning from me:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
The warning tells you the problem:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
If I understand what you're trying to do, why not just pass the object as a fixture?
import pytest
@pytest.fixture
def my_dict():
    return dict()

class TestFoo:
    def test_foo(self, my_dict):
        assert len(my_dict) == 0
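If it is the class under test (not the test class) that inherits from dict or collections.UserDict, another option is to keep the test class plain and create the object inside the tests instead of inheriting from it. A minimal sketch, with MyDict as a made-up stand-in for the real UserDict subclass:
import collections

class MyDict(collections.UserDict):
    # Hypothetical stand-in for the real class that inherits from UserDict.
    def upper_keys(self):
        return {k.upper(): v for k, v in self.data.items()}

class TestMyDict:
    # No __init__ here, so pytest can collect the class as usual.
    def test_upper_keys(self):
        d = MyDict({"a": 1})
        assert d.upper_keys() == {"A": 1}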
This is the real code from MLflow: https://github.com/mlflow/mlflow/blob/8a7659ee961c2a0d3a2f14c67140493a76d1e51d/tests/conftest.py#L42
@pytest.fixture
def test_mode_on():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
There are also multiple places where test_mode_on is used like this:
@pytest.mark.usefixtures(test_mode_on.__name__)
def test_safe_patch_propagates_exceptions_raised_outside_of_original_function_in_test_mode(
When I try to run any tests I get the following:
tests/test_version.py::test_is_release_version ERROR [100%]
==================================== ERRORS ====================================
Fixture "test_mode_on" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I want to understand what the original code was doing with yield from test_mode_on() and how to fix it.
Update:
I've tried to change the code to request the fixture, but got an error that test_mode_on has function scope while enable_test_mode_by_default_for_autologging_integrations has session scope.
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations(test_mode_on):
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
The intention obviously was to re-use a function-scoped fixture in a session-scoped fixture. Apparently, this was an option that was working in old pytest versions.
In any recent pytest version, this is not possible (as you have noticed). If you cannot fix the MLflow tests, your only option is to use an old pytest version that still supports that; MLflow has pinned pytest to 3.2.1 (probably for that same reason).
Be aware that any pytest plugin you have installed will likely not work with that pytest version either, so you have to downgrade or remove the plugins, too.
This recent issue is probably related to the outdated pytest version, so there is a chance that this will be addressed in MLflow.
UPDATE:
Just realized that it would help to show how to fix this for a current pytest version. In current pytest you are not allowed to derive (or yield) from a fixture with a narrower scope, as this would often not work as expected. You can, however, move the fixture code into a generator function, and yield from that. So a working version could be something like:
def test_mode_on_gen():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]

@pytest.fixture
def test_mode_on():
    yield from test_mode_on_gen()

@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    yield from test_mode_on_gen()
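For completeness, tests can then keep requesting the function-scoped fixture exactly as the MLflow code does (compare the usefixtures usage shown earlier). A sketch, assuming is_testing can be imported as shown; the exact import path in MLflow may differ:
import pytest
from mlflow.utils.autologging_utils import is_testing  # assumed import path

@pytest.mark.usefixtures("test_mode_on")
def test_runs_in_autologging_test_mode():
    # The fixture has set the test-mode env var for this test.
    assert is_testing()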
Suppose I have the below test cases written in a file, test_something.py:
#pytest.fixture(scope="module")
def get_some_binary_file():
# Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
os.mkdir("/a/b/bin") ### This line throws the error in pytest-parallel
some_binary = os.path.join("/a/b/bin", "binary_file")
download_bin("some_bin_url", some_binary)
return some_binary
test_input = [
{"some": "value"},
{"foo": "bar"}
]
#pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
# Testing logic here
# Some other completely different tests below
def test_2():
# Some other testing logic here
When I run the above using the pytest command below, the tests work without any issues.
pytest -s --disable-warnings test_something.py
However, I want to run these test cases in parallel. I know that test_1 and test_2 should run in parallel. So I looked into pytest-parallel and ran the following:
pytest --workers auto -s --disable-warnings test_something.py
But as shown in the code above, when it goes to create the /a/b/bin folder, it throws an error saying that the directory already exists. So the module scope is not being honoured in pytest-parallel: it is trying to execute get_some_binary_file for every parametrized input to test_1. Is there a way for me to do this?
I have also looked into pytest-xdist with the --dist loadscope option, and ran the below command for it:
pytest -n auto --dist loadscope -s --disable-warnings test_something.py
But this gave me an output like below, where both test_1 and test_2 are being executed on the same worker.
tests/test_something.py::test_1[Test_1]
[gw1] PASSED tests/test_something.py::test_1[Test_1] ## Expected
tests/test_something.py::test_1[Test_2]
[gw1] PASSED tests/test_something.py::test_1[Test_2] ## Expected
tests/test_something.py::test_2
[gw1] PASSED tests/test_something.py::test_2 ## Not expected to run in gw1
As can be seen from the above output, test_2 is running in gw1. Why? Shouldn't it run on a different worker?
Group the tests with xdist_group so that each group runs in its own worker process. Run it like this to assign the groups to workers: pytest xdistloadscope.py -n 2 --dist=loadgroup
@pytest.mark.xdist_group("group1")
@pytest.fixture(scope="module")
def get_some_binary_file():
    # Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
    os.mkdir("/a/b/bin")  ### This line throws the error in pytest-parallel
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)
    return some_binary

test_input = [
    {"some": "value"},
    {"foo": "bar"}
]

@pytest.mark.xdist_group("group1")
@pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
    # Testing logic here
    ...

# Some other completely different tests below
@pytest.mark.xdist_group("group2")
def test_2():
    # Some other testing logic here
    ...
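Independent of how the tests are distributed, the directory creation itself can also be made tolerant of concurrent workers: os.makedirs(..., exist_ok=True) does not raise if the directory already exists. A sketch of the question's fixture with only that change (download_bin and the paths are placeholders taken from the question):
import os
import pytest

@pytest.fixture(scope="module")
def get_some_binary_file():
    # exist_ok=True means a worker does not fail just because another worker
    # (or an earlier run) already created the directory.
    os.makedirs("/a/b/bin", exist_ok=True)
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)  # placeholder from the question
    return some_binary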
I need to run the same tests on different devices. I used a fixture to provide the IP addresses of the devices, and all tests run for the IPs provided by the fixture. At the same time, I need to append the IP address to the test name so that results are quick to analyze. In the pytest results the test name is the same for all params; only in the log or in a print statement can we see which parameter was used. Is there any way to change the test name by appending the param to it, based on the fixture params?
class TestClass:
    def test1(self):
        pass

    def test2(self):
        pass
We need to run the whole test class for every device, with all test methods in sequence for each device. We cannot run each test in its own parameter cycle; we need to run the whole test class in a parameter cycle. We achieved this with a fixture implementation, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name
I could change the pytest name by creating a hook in a conftest.py file.
However, I had to use pytest private variables, so my solution could stop working when you upgrade pytest.
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's output from an example test run. Notice how the fixture values are included in the failure output right after the name of the test. This makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
======= FAILURES ========
_______ test_eval[6*9-42] ________
test_input = '6*9', expected = 42
    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')
test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
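Applied to the device scenario above, a class-scoped parametrized fixture puts the IP address into every test ID and, thanks to pytest's grouping of higher-scoped parametrized fixtures, runs the whole test class for one device before moving on to the next. A sketch with made-up IP addresses:
import pytest

@pytest.fixture(scope="class", params=["10.0.0.1", "10.0.0.2"])
def device_ip(request):
    return request.param

class TestClass:
    def test1(self, device_ip):
        # Reported as TestClass::test1[10.0.0.1], TestClass::test1[10.0.0.2], ...
        assert device_ip

    def test2(self, device_ip):
        assert device_ip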
I always thought that imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail", even if the test passes.
Here's some code:
import pytest
def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be XPASSed.
Is this a bug in py.test or am I getting something wrong?
When using the pytest.xfail() helper function, you are effectively raising an exception in the test function. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
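If the decision has to be made at runtime but the test should still run fully (and thus be able to XPASS), a commonly suggested pattern is to add the marker dynamically via request.node.add_marker() instead of calling pytest.xfail(). This is a sketch with some_condition() as a placeholder; verify it against your pytest version:
import pytest

def some_condition():
    # Placeholder for whatever runtime check decides the expected failure.
    return True

@pytest.fixture
def maybe_xfail(request):
    if some_condition():
        # Adds the xfail *marker* during setup, so the test body still runs
        # and can be reported as xfail or XPASS.
        request.node.add_marker(pytest.mark.xfail(reason="reason"))

def test_maybe_fails(maybe_xfail):
    assert 1  # expected to be reported as XPASS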