I'm wrapping pytest in a Python program that does some setup and builds the argument list used to invoke pytest.main.
arg_list = [...]  # build arg_list
pytest.main(args=arg_list)
I also need to pass a configuration object from this wrapper to the tests run by pytest. I was thinking of creating a fixture called conf and referencing it in the test functions:
@pytest.fixture
def conf(request):
    # Obtain configuration object
    ...

def test_mytest(conf):
    # use configuration
    ...
However, I haven't found a way to pass an arbitrary object to fixtures (only options from the pytest arguments list).
Maybe using a hook? Or a plugin injected or initialized from the wrapper?
You can create a module that is shared between your wrapper and your tests or serialize the object first.
Pickle the object and load it before tests
This solution keeps your wrapper and tests mostly independent. You could still execute the tests directly and pass the configuration object from the command line if you want to reproduce the test output for a certain object.
It does not work for all objects, because not all objects can be pickled. See "What can be pickled and unpickled?" for more details. This solution respects the scope of the fixture, because the object is reloaded from disk when the fixture is created.
Add a command line option for the path of the pickled file in conftest.py
import pickle

import pytest

def pytest_addoption(parser):
    parser.addoption("--cfg-obj", help="path to the pickled configuration object")

@pytest.fixture
def conf(request):
    path = request.config.getoption("--cfg-obj")
    with open(path, 'rb') as fp:
        return pickle.load(fp)
Pickle the object in wrapper.py and save it in a temporary file.
import pickle
import tempfile

import pytest

config_obj = {"answer": 42}

with tempfile.NamedTemporaryFile(delete=False) as fp:
    pickle.dump(config_obj, fp)
    fp.close()  # close explicitly so pytest can reopen the file (needed on Windows)

args_list = ["tests.py", "--cfg-obj", fp.name]
pytest.main(args=args_list)
Use the conf fixture in tests.py
def test_something(conf):
    assert conf == {'answer': 42}
Share the object between the wrapper and the tests
This solution does not seem very "clean" to me, because the tests can't be executed without the wrapper anymore (unless you add a fallback for when the object is not set), but it has the advantage that the wrapper and the tests access the same object, and it works for arbitrary objects. It also introduces a possible dependency between your tests if you modify the state of the object, because the scope parameter of the fixture decorator has no effect here (the fixture always returns the same object).
Create a shared.py module which is imported by the tests and the wrapper. It provides a setter and getter for the shared object.
_cfg_obj = None

def set_config_obj(obj):
    global _cfg_obj
    _cfg_obj = obj

def get_config_obj():
    return _cfg_obj
Set the shared object in wrapper.py
import pytest

from shared import set_config_obj

set_config_obj({"answer": 42})

args_list = ["tests.py"]
pytest.main(args=args_list)
Load the shared object in your conf fixture
import pytest

from shared import get_config_obj

@pytest.fixture
def conf():
    return get_config_obj()

def test_something(conf):
    assert conf == {"answer": 42}
Note that the shared.py module does not have to be outside your tests directory. If you turn the tests directory into a package by adding __init__.py files and add the shared object there, then you can import the tests package from your wrapper and set it with tests.set_config_obj(...).
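For illustration, a rough sketch of that layout (the file names here are assumptions, not a fixed convention):

# tests/__init__.py -- re-export the setter so the wrapper can call tests.set_config_obj(...)
from .shared import set_config_obj, get_config_obj

# wrapper.py
import pytest
import tests

tests.set_config_obj({"answer": 42})
pytest.main(args=["tests"])

The conf fixture would then import get_config_obj from tests.shared instead of from a top-level shared module.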
Related
I have 2 tests. I want to run only one of them:
pipenv run pytest -s tmp_test.py::test_my_var
But pytest executes the functions referenced in @pytest.mark.parametrize for both tests.
How can I force pytest to execute only the get_my_var() function when I run only test_my_var?
If I run the whole file:
pipenv run pytest -s tmp_test.py
I want Pytest to execute the code in the following manner:
get_my_var()
test_my_var()
get_my_var_1()
test_my_var_1()
Actually, my functions used in @pytest.mark.parametrize do some data preparation, and both tests use the same entities, so each function in @pytest.mark.parametrize changes the state of the same test data.
That's why I need the parametrization functions to run sequentially, each one just before its corresponding test.
import pytest

def get_my_var():
    with open('my var', 'w') as f:
        f.write('my var')
    return 'my var'

def get_my_var_1():
    with open('my var_1', 'w') as f:
        f.write('my var_1')
    return 'my var_1'

@pytest.mark.parametrize('my_var', get_my_var())
def test_my_var(my_var):
    pass

@pytest.mark.parametrize('my_var_1', get_my_var_1())
def test_my_var_1(my_var_1):
    pass
Or how can I achieve the same goal with any other option?
For example, with fixtures. I could use fixtures for data preparation, but I need to use the same fixture in different tests because the preparation is the same, so I cannot use scope='session'.
At the same time, scope='function' results in the fixture running for every instance of a parameterized test.
Is there a way to run a fixture (or any other function) only once per parameterized test, before all of its parameterized instances run?
It looks like only something like the following can resolve the issue.
import pytest

current_test = None

@pytest.fixture()
def one_time_per_test_init(request):
    test_name = request.node.originalname
    global current_test
    if current_test != test_name:
        current_test = test_name
        init, kwargs = request.param
        init(**kwargs)
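To feed request.param into that fixture, a test would parameterize it indirectly; a rough sketch, reusing the get_my_var preparation function from above (the extra data parameter is only for illustration):

@pytest.mark.parametrize('data', [1, 2, 3])
@pytest.mark.parametrize('one_time_per_test_init', [(get_my_var, {})], indirect=True)
def test_my_var(one_time_per_test_init, data):
    # get_my_var() runs only for the first parameterized instance of this test,
    # because the fixture remembers the test's originalname in current_test
    pass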
This is the real code from MLflow: https://github.com/mlflow/mlflow/blob/8a7659ee961c2a0d3a2f14c67140493a76d1e51d/tests/conftest.py#L42
@pytest.fixture
def test_mode_on():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
There are also multiple places where test_mode_on is used like this:
@pytest.mark.usefixtures(test_mode_on.__name__)
def test_safe_patch_propagates_exceptions_raised_outside_of_original_function_in_test_mode(
When I try to run any tests I get the following:
tests/test_version.py::test_is_release_version ERROR [100%]
==================================== ERRORS ====================================
Fixture "test_mode_on" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I want to understand what the original code was doing with yield from test_mode_on() and how to fix it.
Update:
I've tried to change the code to request the fixture, but got an error that test_mode_on has function scope while enable_test_mode_by_default_for_autologging_integrations has session scope.
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations(test_mode_on):
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
The intention was obviously to re-use a function-scoped fixture in a session-scoped fixture. Apparently, this worked in old pytest versions.
In any recent pytest version, this is not possible (as you have noticed). If you cannot fix the MLflow tests, your only option is to use an old pytest version that still supports that - MLflow has pinned pytest to 3.2.1 (probably for that same reason).
Be aware that any pytest plugin you have installed will likely not work with that pytest version either, so you have to downgrade or remove the plugins, too.
This recent issue is probably related to the outdated pytest version, so there is a chance that this will be addressed in MLflow.
UPDATE:
Just realized that it would help to show how to fix this for a current pytest version. In current pytest you are not allowed to derive (or yield) from a fixture with a narrower scope, as this would often not work as expected. You can, however, move the fixture code into a generator function, and yield from that. So a working version could be something like:
def test_mode_on_gen():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]

@pytest.fixture
def test_mode_on():
    yield from test_mode_on_gen()

@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    yield from test_mode_on_gen()
I am very new to pytest.
There is a test_conf dir which has several test config files.
test_conf/
    test_conf1
    test_conf2
Here is my test function. The conf_files function collects all the test conf files from that dir and returns them as a list.
@pytest.mark.parametrize('test_conf', conf_files())
def test_performance_scenario_1(fixture1, test_conf):
    do_some_thing(test_conf)
The fixture1 does setup and teardown for this test.
My goal is that for each test conf file in test_conf, we run the test function against it.
My question is how to pass each element of test_conf to fixture1, since I need to do some initialization in fixture1's setup step which needs the test conf file.
Any help is appreciated.
I think what you're looking for is parameterized fixtures. So instead of passing the conf files to your test, you pass them to the fixture, which does the setup/teardown with the particular conf file.
Definition of your fixture1:
@pytest.fixture(scope="module", params=conf_files())
def fixture1(request):
    # The pytest fixture `request` gives you access to the params defined in the annotation.
    conf_file = request.param
    # setup / teardown logic
    ...
Then in your test, it suffices to only pass the fixture:
def test_performance_scenario_1(fixture1):
    # do_some_things
    ...
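The setup/teardown part can be filled in with the usual yield pattern; a rough sketch (do_setup and do_teardown are hypothetical placeholders for your own logic, while conf_files and do_some_thing are taken from the question):

import pytest

@pytest.fixture(scope="module", params=conf_files())
def fixture1(request):
    conf_file = request.param
    resource = do_setup(conf_file)   # placeholder: initialize with the current conf file
    yield conf_file                  # the test receives the conf file for this run
    do_teardown(resource)            # placeholder: clean up when the module is done with this conf file

def test_performance_scenario_1(fixture1):
    do_some_thing(fixture1)          # fixture1 is the conf file selected for this run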
I want to know how to read a command-line argument in pytest and use the value not as test input via a fixture, but as a parameter for some other operation.
Here is what I am trying to achieve:
pytest --folder=<label> test_my_logic.py
where label can be a, b or c. Based on the label value I will get the actual 'folder' path which has the expected data, e.g.
label=a, folder=common/test_data/a
label=b, folder=common/test_data/b
I have added the conftest.py as below:
import pytest

def pytest_addoption(parser):
    parser.addoption("--folder", action="store", default="All",
                     help="Please enter the folder which needs to be executed")

@pytest.fixture
def folder(request):
    return request.config.getoption("--folder")
I have a JSON file and a util method which reads that JSON to get the actual folder value for a, b, etc. I am seeking help on how, in my script, I can get the --folder argument and use it for other operations instead of passing it to the test method via a fixture. In my test script, where I read various global variables, I have:
test_my_logic.py
import pytest
import json
import os
import sys

sys.path.append(os.path.realpath("%s/../../../../../common/utils" % os.path.dirname(os.path.abspath(__file__))))
import utils
print folder
TEST_ATTRIB = utils.getTestAttributes()['LOL']
PLACE_REQUEST_URL = utils.getURLs()['IMO']
...
...
In command line:
py.test --folder=a tests/test_my_logic.py
Error returned:
tests/test_my_logic.py:16: in <module>
    print folder
E   NameError: name 'folder' is not defined
Thanks in advance!
folder is a fixture; it should be used as a parameter of a test:
def test_a(folder):
    print folder
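If the goal is to turn the label into the actual data folder before the test sees it, one option (a sketch; the path layout comes from the question, the lookup itself is just an assumption) is to resolve it inside the fixture, so tests receive the resolved path:

# conftest.py (reusing the --folder option added above)
import pytest

@pytest.fixture
def folder(request):
    label = request.config.getoption("--folder")
    # hypothetical mapping from the label to the data folder described in the question
    return "common/test_data/%s" % label

If the value is needed outside of test functions (e.g. for module-level globals), it can also be read in pytest_configure via config.getoption("--folder") and stored somewhere importable, but a fixture is the more idiomatic route.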
I would like to create a separate log file for each test method, and I would like to do this in the conftest.py file and pass the logfile instance to the test method. This way, whenever I log something in a test method it would go to a separate log file and be very easy to analyse.
I tried the following.
Inside conftest.py file i added this:
logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    testpath = item.parent.name.strip('.py')
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    # make_logger takes care of creating the logfile and returns the Python logging object.
    log = logger.make_logger(test_method_name, path)
The problem here is that pytest_runtest_setup cannot return anything to the test method. At least, I am not aware of a way to do that.
So I thought of creating a fixture method inside the conftest.py file with scope="function" and calling this fixture from the test methods. But the fixture method does not know about the pytest Item object. In the case of the pytest_runtest_setup method, it receives the item parameter, and using that we can find out the test method name and test method path.
Please help!
I found this solution by researching further upon webh's answer. I tried to use pytest-logger but their file structure is very rigid and it was not really useful for me. I found this code working without any plugin. It is based on set_log_path, which is an experimental feature.
Pytest 6.1.1 and Python 3.8.4
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Notice that the use of Path can be substituted with os.path.join. Moreover, different tests can be set up in different folders, and you can keep a record of all tests run historically by using a timestamp in the filename. For example, one could use the following filename:
# conftest.py
# Required modules
import datetime
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in the pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
My first stackoverflow answer!
I found the answer I was looking for.
I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    # splitext removes the ".py" suffix (str.strip('.py') would strip characters, not the suffix)
    test_path = os.path.splitext(request.node.parent.name)[0]
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    # copy the handler list so handlers can be removed while iterating
    for handler in list(logger_obj.handlers):
        handler.close()
        logger_obj.removeHandler(handler)
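A test then just requests the fixture and logs through the returned object (assuming logger.make_logger returns a standard logging.Logger, as in the snippet above):

def test_something(log):
    log.info("this line goes to the per-test log file")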
In newer pytest versions this can be achieved with set_log_path:
import os

import pytest

@pytest.fixture(autouse=True)  # autouse belongs in the decorator, not the function signature
def manage_logs(request):
    """Set log file name same as test name"""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))
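With that autouse fixture in place and log_file_level set low enough (e.g. the INFO level from the pytest.ini shown earlier), tests can simply use the standard logging module and each test's records end up under log/<test name>.log; a minimal sketch:

import logging

logger = logging.getLogger(__name__)

def test_example():
    logger.info("recorded in log/test_example.log")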