Pytest: collecting 0 items even after following the conventions

I created a test module by following all the conventions, but when I run the test, I get the following message:
collecting 0 items
Here's my directory hierarchy:
integration_tests/                      (directory)
    tests/                              (directory)
        test_integration_use_cases.py   (Python file)
And this is the content of the file:
import pytest
from some_tests.integration_tests.backbone.SomeIntegrationTestBase import SomeIntegrationTestBase

class TestSomeIntegration(SomeIntegrationTestBase):

    @pytest.mark.p1
    def test_some_integration_use_cases(self):
        print("**** Executing integration tests ****")
        result = self.execute_test(4)
        assert (True == result)
When I run the following command:
pytest test_integration_use_cases.py
I see the following result without any errors:
collecting 0 items
FYI: I am running this on a development machine (like a Vagrant VM).

I had the same problem even after following all the recommended conventions. My application structure was as follows:
Application
-- API
     app.py
-- docs
-- venv
-- tests
   -- unit_test
        test_factory
        ...
        ...
However, I resolved the issue by moving the tests directory under the API package, so that my application structure looked like this:
Application
-- API
     app.py
   -- tests
      -- unit_test
           test_factory
           ...
-- docs
-- venv
...
Although pytest is supposed to auto-discover tests, it only seemed to do so once the tests were placed under the application package root. Check out the Flask docs on testing with pytest.
I also found this resource helpful.
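For reference, pytest's default discovery relies on naming conventions: test files named test_*.py (or *_test.py), test classes whose names start with Test and that define no __init__, and test functions/methods whose names start with test_. A minimal sketch of a layout that is collected out of the box (names are illustrative, not from the question):

project_root/
    pytest.ini                    # optional; pins pytest's rootdir
    tests/
        test_example.py

# tests/test_example.py
class TestExample:                # class name must start with "Test" and define no __init__
    def test_passes(self):        # method name must start with "test_"
        assert True

Running pytest --collect-only from the project root shows exactly which items pytest picks up, which is handy for debugging "collecting 0 items".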

Related

How to use pytest --reuse-db correctly

I have been racking my brain trying to figure out how --reuse-db works. I have a super simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes, and I am not surprised.
But when I run pytest --reuse-db for the second time, I expect that the db is not destroyed and the test fails, because I expect Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only tells pytest-django to keep the test database between runs instead of dropping and recreating it; it does not preserve data created inside tests.
Each test marked with django_db runs inside a transaction that is rolled back when the test finishes, so the Student row from test_1 is gone before the next test (or the next run) starts, and the count is 1 every time.
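To illustrate, here is a minimal sketch reusing the Student model from the question: each test marked with django_db starts with an empty table because the previous test's transaction was rolled back.

import pytest
from main.models import Student

@pytest.mark.django_db
def test_first():
    Student.objects.create(name=1)
    assert Student.objects.count() == 1

@pytest.mark.django_db
def test_second():
    # The row created in test_first was rolled back, so the table is empty again.
    assert Student.objects.count() == 0

The same thing happens on the next run, which is why the second pytest --reuse-db invocation still sees a count of 1 rather than 2.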

How to aggregate test results to publish to testrail after executing pytest tests with xdist?

I'm running into a problem. I'm currently using pytest to run test cases, using xdist to run tests in parallel and reduce execution time, and publishing the results to TestRail. The issue is that when using xdist, the pytest-testrail plugin creates a test run for each xdist worker and then publishes the test cases as Untested.
I tried the pytest_terminal_summary hook to prevent the pytest_sessionfinish plugin hook from being called multiple times.
I expect only one test run to be created, but multiple test runs are still created.
I ran into the same problem, but found a duct-tape workaround.
I found that all results are collected properly into a single test run if you run the tests with the --tr-run-id option.
If you are using Jenkins jobs to automate the process, you can do the following:
1) create a test run using the TestRail API
2) get the ID of this test run
3) run the tests with --tr-run-id=$TEST_RUN_ID
I used these docs:
http://docs.gurock.com/testrail-api2/bindings-python
http://docs.gurock.com/testrail-api2/reference-runs
# testrail_run.py -- creates a TestRail run and prints its ID so the shell can capture it
from testrail import *
import sys

client = APIClient('URL')
client.user = 'login'
client.password = 'password'

# add_run/1 creates a run in project 1; the run name is taken from the first CLI argument
result = client.send_post('add_run/1', {"name": sys.argv[1], "assignedto_id": 1}).get("id")
print(result)
Then, in the Jenkins shell:
RUN_ID=`python3 testrail_run.py $BUILD_TAG`
and then:
python3 -m pytest -n 3 --testrail --tr-run-id=$RUN_ID --tr-config=testrail.cfg ...

How can I debug my python unit tests within Tox with PUDB?

I'm trying to debug a Python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would just pip install pudb, then add import pudb and pudb.set_trace() to the unit test code. But that results in a ModuleNotFoundError:
>       import pudb
E       ModuleNotFoundError: No module named 'pudb'

tests/mytest.py:130: ModuleNotFoundError
ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within tox, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>

    def fileno(self):
>       raise UnsupportedOperation("redirected stdin is pseudofile, "
                                   "has no fileno()")
E       io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()

.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
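With those dependencies installed in the tox environment, you can drop into the debugger from a test. A rough sketch (the test body and names are illustrative, not from the question):

# somewhere in a test module
def test_something():
    import pudb; pudb.set_trace()     # with pytest-pudb installed this opens pudb instead of failing on the captured stdin
    assert compute_result() == 42     # compute_result is a hypothetical function under test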
Using the original PDB (not PUDB) can work too; at least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need one, with:
import pdb; pdb.set_trace()
Then, when execution reaches that breakpoint, you can use the regular PDB commands:
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function

Test with imperative xfail in py.test always reports as xfail even if the test passes

I always thought that imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative skip, the result of the test will always be "xfail", even if the test passes.
Here's some code:
import pytest

def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be XPASS'ed.
Is this a bug in py.test or am I getting something wrong?
When using the pytest.xfail() helper function you are effectively raising an exception in the test function, so the rest of the test never runs. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
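A short sketch of the difference (pure illustration):

import pytest

def test_imperative():
    pytest.xfail("reason")
    assert False              # never reached: xfail() already raised, so this test can only be reported as xfailed

@pytest.mark.xfail(reason="reason")
def test_declarative():
    assert True               # the body runs to completion, so a passing test is reported as XPASS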

How to get test name and test result during run time in pytest

I want to get the test name and test result during runtime.
I have setup and tearDown methods in my script. In setup, I need to get the test name, and in tearDown I need to get the test result and test execution time.
Is there a way I can do this?
You can, using a hook.
I have these files in my test directory:
./rest/
├── conftest.py
├── __init__.py
└── test_rest_author.py
In test_rest_author.py I have three functions, startup, teardown and test_tc15, but I only want to show the result and name for test_tc15.
Create a conftest.py file if you don't have one yet and add this:
import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        # only report the outcome of the test call itself, not setup/teardown
        if report.when == 'call':
            print('\n%s --- %s' % (item.name, report.outcome))
    return True
The hook pytest_runtest_protocol implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks. It is called when any test finishes (like startup or teardown or your test).
If you run your script you can see the result and name of the test:
$ py.test ./rest/test_rest_author.py
====== test session starts ======
/test_rest_author.py::TestREST::test_tc15 PASSED
test_tc15 --- passed
======== 1 passed in 1.47 seconds =======
See also the docs on pytest hooks and conftest.py.
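The question also asks for the execution time; the report objects expose a duration attribute (seconds spent in that phase), so a variant of the same hook can print it as well (a sketch):

import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == 'call':
            # report.duration is how long the test call itself took, in seconds
            print('\n%s --- %s (%.3fs)' % (item.name, report.outcome, report.duration))
    return True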
unittest.TestCase.id() returns the full test identifier, including the class name and the method name, from which you can extract the test method name.
The result can be determined during tearDown by checking whether an exception occurred while executing the test: if sys.exc_info() reports no exception, the test passed; otherwise it failed.
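A rough sketch of that unittest-based approach (note: relying on sys.exc_info() inside tearDown is fragile and may not see failures on newer Python/unittest versions):

import sys
import time
import unittest

class ExampleTest(unittest.TestCase):
    def setUp(self):
        self._start = time.time()
        self.test_name = self.id().split('.')[-1]   # method name extracted from the full test id

    def tearDown(self):
        elapsed = time.time() - self._start
        # (None, None, None) from sys.exc_info() means no exception was recorded, i.e. the test passed
        passed = sys.exc_info() == (None, None, None)
        print('%s --- %s (%.2fs)' % (self.test_name, 'passed' if passed else 'failed', elapsed))

    def test_example(self):
        self.assertTrue(True)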
Using pytest_runtest_protocol as suggested solved my problem. In my case it was enough to call reports = runtestprotocol(item, nextitem=nextitem) within my pytest-html hook; the item element contains the information you need.
Many thanks.