pytest-parallel not honouring module-scope fixtures

Suppose I have the below test cases written in a file, test_something.py:
@pytest.fixture(scope="module")
def get_some_binary_file():
    # Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
    os.mkdir("/a/b/bin")  ### This line throws the error in pytest-parallel
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)
    return some_binary

test_input = [
    {"some": "value"},
    {"foo": "bar"}
]

@pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
    # Testing logic here

# Some other completely different tests below
def test_2():
    # Some other testing logic here
When I run the above using the below pytest command, the tests work without any issues.
pytest -s --disable-warnings test_something.py
However, I want to run these test cases in parallel. I know that test_1 and test_2 should run in parallel, so I looked into pytest-parallel and ran the below:
pytest --workers auto -s --disable-warnings test_something.py
But as shown above in the code, when it goes to create the /a/b/bin folder, it throws an error saying that the directory already exists. So the module scope is not being honoured by pytest-parallel: it is executing get_some_binary_file for every parametrized input to test_1. Is there a way for me to do this?
I have also looked into pytest-xdist with the --dist loadscope option, and ran the below command for it:
pytest -n auto --dist loadscope -s --disable-warnings test_something.py
But this gave me output like the below, where both test_1 and test_2 are executed on the same worker.
tests/test_something.py::test_1[Test_1]
[gw1] PASSED tests/test_something.py::test_1[Test_1] ## Expected
tests/test_something.py::test_1[Test_2]
[gw1] PASSED tests/test_something.py::test_1[Test_2] ## Expected
tests/test_something.py::test_2
[gw1] PASSED tests/test_something.py::test_2 ## Not expected to run in gw1
As can be seen from the above output, test_2 is running on gw1. Why? Shouldn't it run on a different worker?

Group the definitions with the xdist_group mark so that each group runs in its own process. Run like this to assign the groups to workers: pytest xdistloadscope.py -n 2 --dist=loadgroup
@pytest.mark.xdist_group("group1")
@pytest.fixture(scope="module")
def get_some_binary_file():
    # Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
    os.mkdir("/a/b/bin")
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)
    return some_binary

test_input = [
    {"some": "value"},
    {"foo": "bar"}
]

@pytest.mark.xdist_group("group1")
@pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
    # Testing logic here

# Some other completely different tests below
@pytest.mark.xdist_group("group2")
def test_2():
    # Some other testing logic here
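With --dist=loadgroup, every test marked "group1" is sent to the same worker, so the module-scoped get_some_binary_file fixture runs only once on that worker, while "group2" can be scheduled on a different worker and run in parallel.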

Related

How to force Pytest to execute the only function in parametrize?

I have 2 tests. I want to run only one of them:
pipenv run pytest -s tmp_test.py::test_my_var
But pytest executes the functions in @pytest.mark.parametrize for both tests.
How can I force pytest to execute only the get_my_var() function if I run only test_my_var?
If I run the whole file:
pipenv run pytest -s tmp_test.py
I want Pytest to execute the code in the following manner:
get_my_var()
test_my_var()
get_my_var_1()
test_my_var_1()
Actually, my functions in @pytest.mark.parametrize do some data preparation, and both tests use the same entities, so each function in @pytest.mark.parametrize changes the state of the same test data.
That's why I need the parametrization functions to run sequentially, each one just before its corresponding test.
def get_my_var():
    with open('my var', 'w') as f:
        f.write('my var')
    return 'my var'

def get_my_var_1():
    with open('my var_1', 'w') as f:
        f.write('my var_1')
    return 'my var_1'

@pytest.mark.parametrize('my_var', get_my_var())
def test_my_var(my_var):
    pass

@pytest.mark.parametrize('my_var_1', get_my_var_1())
def test_my_var_1(my_var_1):
    pass
Or how can I achieve the same goal with any other options?
For example, with fixtures. I could use fixtures for the data preparation, but I need to use the same fixture in different tests because the preparation is the same, so I cannot use scope='session'.
At the same time, scope='function' results in the fixture running for every instance of the parametrized test.
Is there a way to run a fixture (or any other function) only once per parametrized test, before all of its parametrized instances run?
It looks like only something like the following can resolve the issue.
import pytest

current_test = None

@pytest.fixture()
def one_time_per_test_init(request):
    test_name = request.node.originalname
    global current_test
    if current_test != test_name:
        current_test = test_name
        init, kwargs = request.param
        init(**kwargs)
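For completeness, a fixture like this would receive its (init, kwargs) pair through indirect parametrization. A minimal sketch of the wiring, assuming the get_my_var function from above (the parameter values 'a' and 'b' are made up):

import pytest

# The indirect parametrize passes the (init, kwargs) tuple into the
# fixture as request.param; the fixture then calls init(**kwargs) once
# per test function thanks to the global current_test guard.
@pytest.mark.parametrize('one_time_per_test_init', [(get_my_var, {})],
                         indirect=True, ids=['init'])
@pytest.mark.parametrize('my_var', ['a', 'b'])
def test_my_var(one_time_per_test_init, my_var):
    pass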

Is there a way to skip a pytest fixture?

The problem is that my fixture function has an external dependency, and that dependency causes an "Error" (like an unreachable network / insufficient resources etc.).
I'd like to skip the fixture and thereby skip any test that depends on this fixture.
Doing something like this won't work:
import pytest

@pytest.mark.skip(reason="Something.")
@pytest.fixture(scope="module")
def parametrized_username():
    raise Exception("foobar")
    return 'overridden-username'
this will result in
_______________________________ ERROR at setup of test_username _______________________________
    @pytest.mark.skip(reason="Something.")
    @pytest.fixture(scope="module")
    def parametrized_username():
>       raise Exception("foobar")
E       Exception: foobar
a2.py:6: Exception
What's the right way to skip a pytest fixture?
Yes, you can do this easily:
import pytest

@pytest.fixture
def myfixture():
    pytest.skip('Because I want so')

def test_me(myfixture):
    pass

$ pytest -v -s -ra r.py
r.py::test_me SKIPPED
=========== short test summary info ===========
SKIP [1] .../r.py:6: Because I want so
=========== 1 skipped in 0.01 seconds ===========
Internally, the pytest.skip() function raises a Skipped exception, which inherits from OutcomeException. These exceptions are handled specially to produce the desired test outcome without counting as an ordinary failure (pytest.fail() works through the same mechanism).
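Applied to the original fixture, the skip can be triggered exactly when the external dependency misbehaves. A minimal sketch, where connect_to_external_service is a hypothetical stand-in for the real dependency:

import pytest

@pytest.fixture(scope="module")
def parametrized_username():
    try:
        # Hypothetical call standing in for the real external dependency
        resource = connect_to_external_service()
    except OSError as exc:
        # Skips this fixture and, with it, every test that depends on it
        pytest.skip('external dependency unavailable: %s' % exc)
    return resource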

how to rename a test name in pytest based on fixture param

I need to run the same tests on different devices. A fixture supplies the IP addresses of the devices, and all tests run for the IPs provided by the fixture. At the same time, the test name needs to be appended with the IP address in order to quickly analyze the results. The pytest results show the same test name for all params; only in the log or in a statement can we see which parameter was used. Is there any way to change the test name by appending the fixture param to it?
class TestClass:
    def test1(self):
        pass

    def test2(self):
        pass
We need to run the whole test class for every device, with all test methods in sequence for each device. We cannot run each test in its own parameter cycle; we need to run the whole test class in a parameter cycle. We achieved this with a fixture implementation, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name.
I could change the pytest test name by creating a hook in a conftest.py file.
However, I had to use pytest private variables, so my solution could stop working when you upgrade pytest.
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's the output from an example test run. Notice how the parametrized values are included in the failure output right after the name of the test, which makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..F

======= FAILURES ========
_______ test_eval[6*9-42] ________

test_input = '6*9', expected = 42

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
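For the device/IP use case specifically, a parametrized fixture puts each IP into the generated test IDs, so no renaming is needed. A minimal sketch with made-up addresses:

import pytest

# Each param becomes part of the test ID, e.g. TestClass::test1[10.0.0.1]
@pytest.fixture(params=["10.0.0.1", "10.0.0.2"])
def device_ip(request):
    return request.param

class TestClass:
    def test1(self, device_ip):
        # Runs once per device; the IP shows up in the test name
        assert device_ip

    def test2(self, device_ip):
        assert device_ip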

Test with imperative xfail in py.test always reports as xfail even if the test passes

I always thought that the imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail", even if the test passes.
Here's some code:
import pytest

def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be "XPASS'ed".
Is this a bug in py.test or am I getting something wrong?
When using the pytest.xfail() helper function you are effectively raising an exception in the test function, so the rest of the test never runs. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
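A small sketch makes that concrete: nothing after the pytest.xfail() call executes, so the outcome can never be anything but xfail:

import pytest

def test_imperative_xfail():
    pytest.xfail("bails out here")
    assert False  # never reached; the test is reported as xfail regardless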

How to get test name and test result during run time in pytest

I want to get the test name and test result during runtime.
I have setup and tearDown methods in my script. In setup, I need to get the test name, and in tearDown I need to get the test result and test execution time.
Is there a way I can do this?
You can, using a hook.
I have these files in my test directory:
./rest/
├── conftest.py
├── __init__.py
└── test_rest_author.py
In test_rest_author.py I have three functions, setup, teardown and test_tc15, but I only want to show the result and name for test_tc15.
Create a conftest.py file if you don't have one yet and add this:
import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == 'call':
            print('\n%s --- %s' % (item.name, report.outcome))
    return True
The hook pytest_runtest_protocol implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks. It is called when any test finishes (including the setup and teardown phases of your test).
If you run your script you can see the result and name of the test:
$ py.test ./rest/test_rest_author.py
====== test session starts ======
/test_rest_author.py::TestREST::test_tc15 PASSED
test_tc15 --- passed
======== 1 passed in 1.47 seconds =======
See also the docs on pytest hooks and conftest.py.
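If you need the outcome inside your own setup/teardown code rather than in a hook, the pytest docs also describe a hookwrapper pattern that stashes each phase's report on the test item (the rep_* attribute names are just a convention, not a pytest API). It goes in conftest.py as well:

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # Store the report for each phase (setup/call/teardown) on the item,
    # so a fixture or teardown can check e.g. item.rep_call.outcome
    setattr(item, "rep_" + rep.when, rep)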
unittest.TestCase.id() returns the complete details, including the class name and the method name.
From this we can extract the test method name.
Getting the result during the run can be achieved by checking whether executing the test raised any exceptions.
If the test fails there will be an exception: if sys.exc_info() returns None the test passed, otherwise it failed.
Using pytest_runtest_protocol as suggested, together with the fixture marker, solved my problem. In my case it was enough to use reports = runtestprotocol(item, nextitem=nextitem) within my pytest-html fixture. To summarize: the item element contains the information you need.
Many thanks.