I need to run the same tests on different devices. I use a fixture to provide the devices' IP addresses, and all tests run once per IP supplied by the fixture. At the same time, I need the IP address appended to the test name so I can analyze results quickly. In the pytest results the test name is the same for all parameters; only in the log or in a print statement can I see which parameter was used. Is there any way to change the test name by appending the fixture parameter to it?
class TestClass:
    def test1(self):
        pass

    def test2(self):
        pass
We need to run the whole test class for every device: all test methods in sequence for each device. We cannot run each test in its own parameter cycle; the whole test class has to run in one parameter cycle per device. We achieved this with a fixture, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name
I was able to change the pytest test name by creating a hook in a conftest.py file.
However, I had to use pytest private variables, so my solution could stop working when you upgrade pytest.
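For reference, this is a minimal sketch of the kind of hook I mean, assuming the IP is supplied by a parametrized fixture (the name device_ip below is hypothetical) so that it is reachable through the item's callspec; _nodeid is the private attribute that may break on upgrade:
# conftest.py -- sketch only; "device_ip" is a hypothetical fixture name
def pytest_collection_modifyitems(config, items):
    for item in items:
        callspec = getattr(item, "callspec", None)
        if callspec is None:
            continue
        ip = callspec.params.get("device_ip")
        if ip:
            # Append the IP to the reported name; _nodeid is private and
            # may change between pytest versions.
            item.name = f"{item.name}[{ip}]"
            item._nodeid = f"{item._nodeid}[{ip}]"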
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's output from an example test run. Notice how the fixture values are included in the failure output right after the name of the test. This makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
======= FAILURES ========
_______ test_eval[6*9-42] ________
test_input = '6*9', expected = 42
    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')
test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
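Applied to your device case, a class-scoped parametrized fixture gives you the same kind of IDs without renaming anything. A minimal sketch, assuming the addresses live in a hypothetical DEVICE_IPS list:
import pytest

DEVICE_IPS = ["10.0.0.1", "10.0.0.2"]  # hypothetical device addresses

@pytest.fixture(scope="class", params=DEVICE_IPS, ids=lambda ip: f"ip={ip}")
def device_ip(request):
    return request.param

class TestClass:
    def test1(self, device_ip):
        assert device_ip  # reported as e.g. TestClass::test1[ip=10.0.0.1]

    def test2(self, device_ip):
        assert device_ip
Because the fixture is class-scoped, pytest groups the tests so that all methods of the class run against one IP before moving on to the next.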
TL;DR: The question is as simple as can be: please look at the code below. Pytest just ignores this class. How should I run tests on such a class?
I just started switching from simple Python tests (plain assert statements) to testing with pytest and ran into this problem. Most of my tests are classes that extend real classes with test methods. One of my classes inherits from collections.UserDict, and pytest just ignores it. How should I run tests on such a class?
# Inheriting from object is fine; inheriting from dict is not. I need dict :(
class TestFoo(dict):
    def test_foo(self):
        assert 1
output:
/home/david/PycharmProjects/proj/venv/bin/python /snap/pycharm-professional/302/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 44145 --file /snap/pycharm-professional/302/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/david/PycharmProjects/proj/tests/unit_tests_2.py
Testing started at 11:07 ...
Launching pytest with arguments /home/david/PycharmProjects/proj/tests/unit_tests_2.py --no-header --no-summary -q in /home/david/PycharmProjects/proj/tests
============================= test session starts ==============================
collecting ... collected 0 items
============================= 2 warnings in 0.03s ==============================
Process finished with exit code 5
Empty suite
UPD: Thanks to @Teejay Bruno; running the tests from PyCharm was hiding a warning from me:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
The warning tells you the problem:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
If I understand what you're trying to do, why not just pass the object as a fixture?
import pytest

@pytest.fixture
def my_dict():
    return dict()

class TestFoo:
    def test_foo(self, my_dict):
        assert len(my_dict) == 0
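If the class under test genuinely has to inherit from dict (or collections.UserDict), another option is to keep that subclass outside the Test* class and exercise it from a plain test class. A minimal sketch with a hypothetical MyDict:
from collections import UserDict

class MyDict(UserDict):
    """The real class under test; not named Test*, so pytest leaves it alone."""
    def put_twice(self, key, value):
        self[key] = value
        self[f"{key}_copy"] = value

class TestMyDict:
    def test_put_twice(self):
        d = MyDict()
        d.put_twice("a", 1)
        assert d["a"] == 1 and d["a_copy"] == 1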
I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes, and I am not surprised.
But when I run pytest --reuse-db for the second time, I expect that the db is not destroyed and that the test fails, because I expect Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only tells pytest-django to keep the test database and reuse it on the next run instead of dropping and re-creating it; it does not preserve the data your tests create.
Every test marked with pytest.mark.django_db runs inside a transaction that is rolled back when the test finishes, so the Student created in the first run is never committed and the count is 1 on every run.
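For completeness, the two related pytest-django options are typically used like this:
# first run: the test database is created and, because of --reuse-db, kept afterwards
pytest --reuse-db
# later runs: the existing test database is reused instead of being re-created
pytest --reuse-db
# force re-creation, e.g. after a schema change
pytest --reuse-db --create-db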
Suppose I have the below test cases written in a file, test_something.py:
@pytest.fixture(scope="module")
def get_some_binary_file():
    # Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
    os.mkdir("/a/b/bin")  ### This line throws the error in pytest-parallel
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)
    return some_binary

test_input = [
    {"some": "value"},
    {"foo": "bar"}
]

@pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
    # Testing logic here

# Some other completely different tests below
def test_2():
    # Some other testing logic here
When I run the above using below pytest command then they work without any issues.
pytest -s --disable-warnings test_something.py
However, I want to run these test cases in parallel. I know that test_1 and test_2 should run in parallel, so I looked into pytest-parallel and did the below:
pytest --workers auto -s --disable-warnings test_something.py.
But as shown in the code above, when it goes to create the /a/b/bin folder, it throws an error saying that the directory already exists. So the module scope is not being honoured by pytest-parallel; it is executing get_some_binary_file for every parametrized input to test_1. Is there a way for me to do this?
I have also looked into pytest-xdist with the --dist loadscope option, and ran the below command for it:
pytest -n auto --dist loadscope -s --disable-warnings test_something.py
But this gave me an output like below, where both test_1 and test_2 are being executed on the same worker.
tests/test_something.py::test_1[Test_1]
[gw1] PASSED tests/test_something.py::test_1[Test_1] ## Expected
tests/test_something.py::test_1[Test_2]
[gw1] PASSED tests/test_something.py::test_1[Test_2] ## Expected
tests/test_something.py::test_2
[gw1] PASSED tests/test_something.py::test_2 ## Not expected to run in gw1
As can be seen from above output, the test_2 is running in gw1. Why? Shouldn't it run in a different worker?
Group the tests with xdist_group so that each group runs in its own process, then assign the groups to workers by running: pytest xdistloadscope.py -n 2 --dist=loadgroup
@pytest.mark.xdist_group("group1")
@pytest.fixture(scope="module")
def get_some_binary_file():
    # Some logic here that creates a path "/a/b/bin" and then downloads a binary into this path
    os.mkdir("/a/b/bin")  ### This line throws the error in pytest-parallel
    some_binary = os.path.join("/a/b/bin", "binary_file")
    download_bin("some_bin_url", some_binary)
    return some_binary

test_input = [
    {"some": "value"},
    {"foo": "bar"}
]

@pytest.mark.xdist_group("group1")
@pytest.mark.parametrize("test_input", test_input, ids=["Test_1", "Test_2"])
def test_1(get_some_binary_file, test_input):
    # Testing logic here

# Some other completely different tests below
@pytest.mark.xdist_group("group2")
def test_2():
    # Some other testing logic here
I always thought that the imperative and declarative usage of xfail/skip in py.test worked the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail", even if the test passes.
Here's some code:
import pytest

def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be "XPASS'ed".
Is this a bug in py.test or am I getting something wrong?
When you use the pytest.xfail() helper function you are effectively raising an exception in the test function. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
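A minimal sketch of that difference (nothing after pytest.xfail() executes, because the helper raises immediately):
import pytest

def test_imperative():
    pytest.xfail("stops the test here")  # raises immediately
    assert False  # never reached, so the test is always reported as xfailed

@pytest.mark.xfail(reason="may pass")
def test_declarative():
    assert True  # the body runs to completion, so this one is reported as XPASS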
I want to get the test name and test result during runtime.
I have setup and tearDown methods in my script. In setup, I need to get the test name, and in tearDown I need to get the test result and test execution time.
Is there a way I can do this?
You can, using a hook.
I have these files in my test directory:
./rest/
├── conftest.py
├── __init__.py
└── test_rest_author.py
In test_rest_author.py I have three functions, startup, teardown and test_tc15, but I only want to show the result and name for test_tc15.
Create a conftest.py file if you don't have one yet and add this:
import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == 'call':
            print('\n%s --- %s' % (item.name, report.outcome))
    return True
The hook pytest_runtest_protocol implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks. It is called when any test finishes (like startup or teardown or your test).
If you run your script you can see the result and name of the test:
$ py.test ./rest/test_rest_author.py
====== test session starts ======
/test_rest_author.py::TestREST::test_tc15 PASSED
test_tc15 --- passed
======== 1 passed in 1.47 seconds =======
See also the docs on pytest hooks and conftest.py.
unittest.TestCase.id() returns the full test ID, including the class name and the method name.
From this we can extract the test method name.
The result can be obtained during runtime by checking whether executing the test raised an exception.
If the test fails there will be an exception: if sys.exc_info() returns (None, None, None) the test passed; otherwise it failed.
Using pytest_runtest_protocol as suggested, together with the fixture marker, solved my problem. In my case it was enough to use reports = runtestprotocol(item, nextitem=nextitem) within my pytest-html fixture; the item element contains the information you need.
Many thanks.