Why is pytest hook defined in the same file not running?

In the py.test documentation (v3.0.3) it states that I can make a test hook like this:
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    print("Running hook!")
    yield
    print("Doing interesting stuff inside hook!")

def test_call_fails():
    assert False
But running it (pytest -s mytest.py), I don't get any printed output.
I believe this is because the hook is never run -- why is this?

I figured out that I need to put the hook in a separate file conftest.py (in the same folder) in order for it to be run:
conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    print("Running hook!")
    yield
    print("Doing interesting stuff inside hook!")
mytest.py
def test_call_fails():
    assert False
And now it works!
This is also shown in the documentation with the comment # contents of conftest.py, but unfortunately that wasn't clear to me at all in my numerous read-throughs :/
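For what it's worth, once the wrapper lives in conftest.py you can also inspect the report object produced by the wrapped hook. A minimal sketch, assuming you want to react to failures (the failure check and message are illustrative, not part of the original question):

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    print("Running hook!")
    outcome = yield                    # the other hook implementations run here
    report = outcome.get_result()      # the TestReport that was just built
    if report.when == "call" and report.failed:
        print("Test %s failed!" % item.name)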

Pytest skipping a test class that inherits from a builtin

TL;DR:
The question is simple: please look at the base case in the code below. Pytest just ignores this class. How should I run tests on such a class?
I just started switching from simple Python tests (with plain assert) to testing with pytest and came across this problem. Most of my tests are classes that extend real classes with test methods. One of my classes inherits from collections.UserDict. Pytest just ignores this class. How should I run tests on such a class?
# Inheriting from object is OK; inheriting from dict is not. I need dict :(
class TestFoo(dict):
    def test_foo(self):
        assert 1
output:
/home/david/PycharmProjects/proj/venv/bin/python /snap/pycharm-professional/302/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 44145 --file /snap/pycharm-professional/302/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/david/PycharmProjects/proj/tests/unit_tests_2.py
Testing started at 11:07 ...
Launching pytest with arguments /home/david/PycharmProjects/proj/tests/unit_tests_2.py --no-header --no-summary -q in /home/david/PycharmProjects/proj/tests
============================= test session starts ==============================
collecting ... collected 0 items
============================= 2 warnings in 0.03s ==============================
Process finished with exit code 5
Empty suite
UPD: Thanks to @Teejay Bruno; running the tests from PyCharm was hiding a warning from me:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
The warning tells you the problem:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
If I understand what you're trying to do, why not just pass the object as a fixture?
import pytest

@pytest.fixture
def my_dict():
    return dict()

class TestFoo:
    def test_foo(self, my_dict):
        assert len(my_dict) == 0
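If you still want to test a real UserDict subclass (as in the original question), the class under test can live outside the test class and be created in a fixture, so pytest never tries to collect anything with an __init__. A rough sketch, with MyDict standing in for the actual application class:

import pytest
from collections import UserDict

class MyDict(UserDict):          # hypothetical stand-in for the real class
    def put_twice(self, key, value):
        self[key] = value
        self[f"{key}_copy"] = value

@pytest.fixture
def my_dict():
    return MyDict()

class TestMyDict:                # plain class, so pytest can collect it
    def test_put_twice(self, my_dict):
        my_dict.put_twice("a", 1)
        assert my_dict["a"] == 1
        assert my_dict["a_copy"] == 1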

What does it mean when pytest fixture yields from another fixture instance?

This is the real code from MLflow: https://github.com/mlflow/mlflow/blob/8a7659ee961c2a0d3a2f14c67140493a76d1e51d/tests/conftest.py#L42
@pytest.fixture
def test_mode_on():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]

@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
There are also multiple places where test_mode_on is used like this:
@pytest.mark.usefixtures(test_mode_on.__name__)
def test_safe_patch_propagates_exceptions_raised_outside_of_original_function_in_test_mode(
When I try to run any tests I get the following:
tests/test_version.py::test_is_release_version ERROR [100%]
==================================== ERRORS ====================================
Fixture "test_mode_on" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I want to understand what the original code was doing with yield from test_mode_on() and how to fix it.
Update:
I've tried to change the code to request the fixture, but got an error that test_mode_on has function scope while enable_test_mode_by_default_for_autologging_integrations has session scope.
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations(test_mode_on):
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
The intention obviously was to re-use a function-scoped fixture in a session-scoped fixture. Apparently, this was an option that was working in old pytest versions.
In any recent pytest version, this is not possible (as you have noticed). If you cannot fix the MLflow tests, your only option is to use an old pytest version that still supports that - MLflow has pinned pytest to 3.2.1 (probably for that same reason).
Be aware that any pytest plugin you have installed will likely not work with that pytest version either, so you have to downgrade or remove the plugins, too.
This recent issue is probably related to the outdated pytest version, so there is a chance that this will be addressed in MLflow.
UPDATE:
Just realized that it would help to show how to fix this for a current pytest version. In current pytest you are not allowed to derive (or yield) from a fixture with a narrower scope, as this would often not work as expected. You can, however, move the fixture code into a generator function, and yield from that. So a working version could be something like:
def test_mode_on_gen():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]

@pytest.fixture
def test_mode_on():
    yield from test_mode_on_gen()

@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    yield from test_mode_on_gen()
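With that refactoring, existing call sites that request the function-scoped fixture by name keep working unchanged. For example (the test name and body here are only illustrative):

@pytest.mark.usefixtures("test_mode_on")
def test_runs_in_test_mode():
    assert is_testing()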

Overriding collection paths via PyTest plugin

With PyTest, you can limit the scope of test collection by passing directories/files/nodeids as command line arguments, e.g., pytest tests, pytest tests/my_tests.py and pytest tests/my_tests.py::test_1. Is it possible to override this behavior from within a plugin, i.e., to set them to something else programmatically?
So far I've attempted setting file_or_dir to my own list within config.option and config.known_args_namespace from the pytest_configure hook, but this appears to have no effect on anything.
You are probably looking for config.args:
# conftest.py
def pytest_configure(config):
    config.args = ['foo', 'bar/baz.py::test_spam']
Running pytest now will be essentially the same as running pytest foo bar/baz.py::test_spam. However, putting stuff in pytest.ini would be IMO a better solution:
# pytest.ini
[pytest]
addopts = foo bar/baz.py::test_spam
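If the goal is only to change which directories pytest collects by default when no paths are given on the command line, the dedicated testpaths ini option may be a better fit than addopts (the directory names below are placeholders):

# pytest.ini
[pytest]
testpaths = tests integration_tests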

pytest implementing a logfile per test method

I would like to create a separate log file for each test method, and I would like to do this in the conftest.py file and pass the logfile instance to the test method. That way, whenever I log something in a test method, it goes to a separate log file and is very easy to analyse.
I tried the following.
Inside the conftest.py file I added this:
logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    testpath = item.parent.name.strip('.py')
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    log = logger.make_logger(test_method_name, path)  # make_logger takes care of creating the logfile and returns the Python logging object.
The problem here is that pytest_runtest_setup does not have the ability to return anything to the test method. At least, I am not aware of a way.
So I thought of creating a fixture method inside the conftest.py file with scope="function" and calling this fixture from the test methods. But the fixture method does not know about the pytest Item object. In the case of pytest_runtest_setup, it receives the item parameter, and using that we are able to find out the test method name and test method path.
Please help!
I found this solution by researching further, building on webh's answer. I tried to use pytest-logger, but its file structure is very rigid and it was not really useful for me. The code below works without any plugin. It is based on set_log_path, which is an experimental feature.
Pytest 6.1.1 and Python 3.8.4
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Notice that Path can be substituted with os.path.join. Moreover, different tests can be set up in different folders, and a record of all tests run historically can be kept by adding a timestamp to the filename. For example, one could use the following filename:
# conftest.py
# Required modules
import pytest
import datetime
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
My first stackoverflow answer!
I found the answer I was looking for.
I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    test_path = request.node.parent.name.strip(".py")
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    handlers = logger_obj.handlers
    for handler in handlers:
        handler.close()
        logger_obj.removeHandler(handler)
In newer pytest versions this can be achieved with set_log_path.
@pytest.fixture(autouse=True)
def manage_logs(request):
    """Set the log file name to match the test name."""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))

How can I debug a funcargs function?

How can I drop into pdb inside a funcargs function? And how can I see output from print statements in funcargs functions?
My original question included the following, but it turns out I was simply instrumenting the wrong funcarg. Sigh.
I tried:
print("hi from inside funcargs")
invoking with and without -s.
I tried:
import pytest
pytest.set_trace()
And:
import pdb
pdb.set_trace()
And:
raise "hi from inside funcargs"
None produced any output or caused a test failure.
The first thing that comes to mind is py.test -s,
but by default funcargs give you tracebacks and output/errors - what plugins are you using? Something is clearly hiding it.
For example, for the program
def pytest_funcarg__foo(request):
    print('hi')
    raise IOError
def test_fun(foo):
    pass
a py.test call gives me both - a traceback in the funcarg function and the printed text.
To debug a funcarg:
def pytest_funcarg__myfuncarg(request):
    import pytest
    pytest.set_trace()
    ...
def test_function(myfuncarg):
    ...
Then:
python -m pytest test_function.py
As Ronny answered, to see output from a funcarg, pytest -s works.
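For readers on a modern pytest, the pytest_funcarg__ prefix has long since been replaced by @pytest.fixture, but the same debugging approach still applies. A rough equivalent (the fixture and test names here are made up):

import pytest

@pytest.fixture
def myfixture():
    print("hi from inside the fixture")   # visible when run with pytest -s
    pytest.set_trace()                     # drops into pdb during fixture setup
    return {"answer": 42}

def test_uses_fixture(myfixture):
    assert myfixture["answer"] == 42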