pytest overall result 'Pass' when all tests are skipped

Currently pytest returns 0 when all tests are skipped. Is it possible to configure the pytest return value to 'fail' when all tests are skipped? Or is it possible to get the total number of passed/failed tests from pytest at the end of the execution?

There is possibly a more idiomatic solution, but the best I could come up with so far is this.
Modify this example from the documentation to save the results somewhere:
# content of conftest.py
import pytest

TEST_RESULTS = []

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    rep = __multicall__.execute()
    if rep.when == "call":
        TEST_RESULTS.append(rep.outcome)
    return rep
If you want to make the session fail on a certain condition, you can write a session-scoped autouse fixture whose teardown does that for you:
# conftest.py continues...
@pytest.yield_fixture(scope="session", autouse=True)
def _skipped_checker(request):
    yield
    if not [tr for tr in TEST_RESULTS if tr != "skipped"]:
        pytest.fail("All tests were skipped")
Unfortunately, the failure (actually an error) raised this way will be attributed to the last test case in the session.
If you want to change the return value then you can write a hook:
# still conftest.py
def pytest_sessionfinish(session):
    if not [tr for tr in TEST_RESULTS if tr != "skipped"]:
        session.exitstatus = 10
Or just run the suite through pytest.main(), then access that variable and do your own post-session checks:
import sys

import pytest

return_code = pytest.main()

import conftest
if not [tr for tr in conftest.TEST_RESULTS if tr != "skipped"]:
    sys.exit(10)
sys.exit(return_code)

Related

How to use a pytest function to test different sites using a different set of test data for each site, such as staging/production

I have a set of pytest functions to test APIs, and the test data is in a JSON file loaded by pytest.mark.parametrize. Because staging, production, and pre-production have different but similar data, I want to save the test data in different folders using the same file name, in order to keep the Python functions clean. The site is a new pytest command-line option. It doesn't work: pytest.mark.parametrize can't find the right folder to collect the test data from.
This is in the conftest.py
@pytest.fixture(autouse=True)
def setup(request, site):
    request.cls.site = site
    yield

def pytest_addoption(parser):
    parser.addoption("--site", action="store", default="staging")

@pytest.fixture(scope="session", autouse=True)
def site(request):
    return request.config.getoption("--site")
This is in the test cases file:
@pytest.mark.usefixtures("setup")
class TestAAA:
    @pytest.fixture(autouse=True)
    def class_setup(self):
        self.endpoint = read_data_from_file("endpoint.json")["AAA"][self.site]
        if self.site == "production":
            self.test_data_folder = "SourcesV2/production/"
        else:  # staging
            self.test_data_folder = "SourcesV2/"
        testdata.set_data_folder(self.test_data_folder)

    @pytest.mark.parametrize("test_data", testdata.read_data_from_json_file(r"get_source_information.json"))
    def test_get_source_information(self, test_data):
        request_url = self.endpoint + f"/AAA/sources/{test_data['sourceID']}"
        response = requests.get(request_url)
        print(response)
I can use pytest.skip to skip the test data which is not for the current site.
if test_data["site"] != self.site:
pytest.skip("this test case is for " + test_data["site"] + ", skiping...")
But that requires putting all the test data for staging/production/pre-production into one file, and there will be a lot of skipped tests in the report, which is not my favorite.
Do you have any idea how to solve this? How can I pass a different file name to parametrize according to the site?
Or, at least, how can I keep the skipped tests from showing up in the report?
Thanks
The parametrize decorator is evaluated at load time, not at run time, so you will not be able to use it directly for this. You need to do the parametrization at runtime instead. This can be done using the pytest_generate_tests hook:
def pytest_generate_tests(metafunc):
    if "test_data" in metafunc.fixturenames:
        site = metafunc.config.getoption("--site")
        if site == "production":
            test_data_folder = "SourcesV2/production"
        else:
            test_data_folder = "SourcesV2"
        # this is just for illustration, your test data may be loaded differently
        with open(os.path.join(test_data_folder, "test_data.json")) as f:
            test_data = json.load(f)
        metafunc.parametrize("test_data", test_data)

class TestAAA:
    def test_get_source_information(self, test_data):
        ...
If loading the test data is expensive, you could also cache it to avoid reading it for each test.
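For example, here is a minimal sketch of such caching in conftest.py, assuming the folder layout and the test_data.json file name from the illustration above (both are placeholders, not part of the original answer):
import json
import os

_test_data_cache = {}  # maps a data folder to its parsed JSON, filled on first use

def _load_test_data(test_data_folder):
    # read the file from disk only the first time this folder is requested
    if test_data_folder not in _test_data_cache:
        with open(os.path.join(test_data_folder, "test_data.json")) as f:
            _test_data_cache[test_data_folder] = json.load(f)
    return _test_data_cache[test_data_folder]

def pytest_generate_tests(metafunc):
    if "test_data" in metafunc.fixturenames:
        site = metafunc.config.getoption("--site")
        folder = "SourcesV2/production" if site == "production" else "SourcesV2"
        metafunc.parametrize("test_data", _load_test_data(folder))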

Function in pytest file works only with hard-coded values

I have the below test_dss.py file which is used for pytest:
import dataikuapi
import pytest

def setup_list():
    client = dataikuapi.DSSClient("{DSS_URL}", "{APY_KEY}")
    client._session.verify = False
    project = client.get_project("{DSS_PROJECT}")
    # Check that there is at least one scenario TEST_XXXXX & that all test scenarios pass
    scenarios = project.list_scenarios()
    scenarios_filter = [obj for obj in scenarios if obj["name"].startswith("TEST")]
    return scenarios_filter

def test_check_scenario_exist():
    assert len(setup_list()) > 0, "You need at least one test scenario (name starts with 'TEST_')"

@pytest.mark.parametrize("scenario", setup_list())
def test_scenario_run(scenario, params):
    client = dataikuapi.DSSClient(params['host'], params['api'])
    client._session.verify = False
    project = client.get_project(params['project'])
    scenario_id = scenario["id"]
    print("Executing scenario ", scenario["name"])
    scenario_result = project.get_scenario(scenario_id).run_and_wait()
    assert scenario_result.get_details()["scenarioRun"]["result"]["outcome"] == "SUCCESS", "test " + scenario["name"] + " failed"
My issue is with the setup_list function, which can only use hard-coded values for {DSS_URL}, {APY_KEY}, {PROJECT}. I'm not able to use PARAMS or another method like in test_scenario_run.
Any idea how I can pass the PARAMS to this function as well?
The parameters in the mark.parametrize marker are read at load time, where the information about the config parameters is not yet available. Therefore you have to parametrize the test at runtime, where you have access to the configuration.
This can be done in pytest_generate_tests (which can live in your test module):
@pytest.hookimpl
def pytest_generate_tests(metafunc):
    if "scenario" in metafunc.fixturenames:
        host = metafunc.config.getoption('--host')
        api = metafunc.config.getoption('--api')
        project = metafunc.config.getoption('--project')
        metafunc.parametrize("scenario", setup_list(host, api, project))
This implies that your setup_list function takes these parameters:
def setup_list(host, api, project):
    client = dataikuapi.DSSClient(host, api)
    client._session.verify = False
    project = client.get_project(project)
    ...
And your test just looks like this (without the parametrize marker, as the parametrization is now done in pytest_generate_tests):
def test_scenario_run(scenario, params):
    scenario_id = scenario["id"]
    ...
The parametrization is now done at run-time, so it behaves the same as if you had placed a parametrize marker in the test.
And the other test that exercises setup_list now also has to use the params fixture to get the needed arguments:
def test_check_scenario_exist(params):
    assert len(setup_list(params["host"], params["api"], params["project"])) > 0, \
        "You need at least ..."

Unable to get the test name when calling pytest from Python or subprocess

I am trying to create a test runner Python file that executes pytest.exe in a particular test case folder and sends the results via email.
Here is my code:
test_runner.py:
try:
    command = "pytest.exe {app} > {log}".format(app=app_folder, log=log_name)
    os.system(command)
except:
    send_mail()
I use the following code to add screenshots to the pytest-html report.
In conftest.py:
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if pytest_html:
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            test_case = str(item._testcase).strip(")")
            function_name = test_case.split(" ")[0]
            file_and_class_name = ((test_case.split(" ")[1]).split("."))[-2:]
            file_name = ".".join(file_and_class_name) + "." + function_name
The issue is, when I run the command "pytest.exe app_folder" in the Windows command prompt, it is able to discover the test cases, execute them and get the results. But when I call the command from a .py file, either using os.system or subprocess, it fails with the following exception:
\conftest.py", line 85, in pytest_runtest_makereport
INTERNALERROR> test_case = str(item._testcase).strip(")")
INTERNALERROR> AttributeError: 'TestCaseFunction' object has no attribute
'_testcase'
Can anyone please help me understand what's happening here? Or is there any other way to get the test case name?
Update:
To overcome this issue, I alternatively used the TestReport object from the pytest_runtest_makereport hook to get the test case details.
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
In the above example, the report variable contains the TestReport object. This can be inspected to get the test case/class/module name.
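For instance, here is a minimal sketch of pulling names out of that report; report.nodeid and report.location are standard TestReport attributes, while the splitting logic and variable names below are only illustrative:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        # nodeid looks like "tests/test_app.py::TestClass::test_name"
        file_part, _, test_part = report.nodeid.partition("::")
        # location is a (file path, line number, domain) tuple, e.g. domain "TestClass.test_name"
        path, line_no, domain = report.location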
You can use the shell=True option with subprocess to get the desired result:
from subprocess import Popen

command = 'pytest.exe app_folder'  # you can paste the whole command which you run in cmd
p1 = Popen(command, shell=True)
This would solve your purpose
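As a variation on the same idea (a sketch, not part of the original answer), subprocess.run waits for the command and exposes the exit code, which could drive the send_mail() call from the question; the command string and log file name below are placeholders:
import subprocess

# run pytest and redirect its output to a log file, as in the question's test runner
result = subprocess.run("pytest.exe app_folder > results.log", shell=True)

# a non-zero exit code means failed tests or collection/internal errors, so send the notification
if result.returncode != 0:
    send_mail()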

Pytest - skip (xfail) mixed with parametrize

Is there a way to use the @incremental plugin as described at Pytest: how to skip the rest of tests in the class if one has failed? mixed with @pytest.mark.parametrize, like below:
@pytest.mark.incremental
class TestClass:
    @pytest.mark.parametrize("input", data)
    def test_preprocess_check(self, input):
        ...  # prerequisite for test

    @pytest.mark.parametrize("input", data)
    def test_process_check(self, input):
        ...  # test only if test_preprocess_check succeeded
The problem I encountered is that, at the first failure of test_preprocess_check with a given input from my data set, all the following test_preprocess_check and test_process_check runs are labeled "xfail".
The behaviour I expect is that, for each new "input" of my parametrized data set, the tests act in an incremental fashion.
e.g. data = [0, 1, 2]
If only test_preprocess_check(0) failed,
I get the following report:
1 failed, 5 xfailed
but I expect the report:
1 failed, 1 xfailed, 4 passed
Thanks
After some experiments I found a way to generalize @incremental to work with the parametrize annotation: simply rewrite the _previousfailed attribute to make it unique for each input. The _genid attribute was exactly what I needed.
I added a @pytest.mark.incrementalparam mark to achieve this.
The code becomes:
def pytest_runtest_setup(item):
    previousfailed_attr = getattr(item, "_genid", None)
    if previousfailed_attr is not None:
        previousfailed = getattr(item.parent, previousfailed_attr, None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

def pytest_runtest_makereport(item, call):
    if "incrementalparam" in item.keywords:
        if call.excinfo is not None:
            previousfailed_attr = item._genid
            setattr(item.parent, previousfailed_attr, item)
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item
It's interesting to mention that this can't be used without parametrize, because the parametrize annotation is what automatically creates the _genid attribute.
Hope this helps others.
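As a usage sketch (my assumption about how the author intended the mark to be applied, not part of the original answer), the class from the question would then be decorated with the new mark:
@pytest.mark.incrementalparam
class TestClass:
    @pytest.mark.parametrize("input", data)
    def test_preprocess_check(self, input):
        ...  # prerequisite step

    @pytest.mark.parametrize("input", data)
    def test_process_check(self, input):
        ...  # xfails only if the test with the same parametrize id failed earlier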

Showing test count in buildbot

I am not particularly happy about the stats that Buildbot provides. I understand that it is for building and not testing - that's why it has a concept of Steps, but no concept of Test. Still there are many cases when you need test statistics from build results. For example when comparing skipped and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display test count in results?
What is the simplest way to do it, so that a person who doesn't know anything about Buildbot can do it in 15 minutes?
Depending on how you want to process and present the test results, Buildbot does provide a Test step, buildbot.steps.shell.Test.
An example of how I use it for my build environment:
import os

from buildbot.steps import shell
from buildbot.status import results  # SUCCESS/WARNINGS/FAILURE constants (Buildbot 0.8-era API)

class CustomStepResult(shell.Test):
    description = 'Analyzing results'
    descriptionDone = 'Results analyzed'

    def __init__(self, log_file=None, *args, **kwargs):
        self._log_file = log_file
        shell.Test.__init__(self, *args, **kwargs)
        self.addFactoryArguments(log_file=log_file)

    def start(self):
        if not os.path.exists(self._log_file):
            self.finished(results.FAILURE)
            self.step_status.setText('TestResult XML file not found!')
        else:
            import xml.etree.ElementTree as etree
            tree = etree.parse(self._log_file)
            root = tree.getroot()
            passing = len(root.findall('./testsuite/testcase/success'))
            skipped = len(root.findall('./testsuite/testcase/skip'))
            fails = len(root.findall('./testsuite/error')) + len(root.findall('./testsuite/testcase/error')) + len(root.findall('./testsuite/testcase/failure'))
            self.setTestResults(total=fails + passing + skipped, failed=fails, passed=passing)
            # the final status for WARNINGS is green but the step itself will be orange
            self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
            self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file=log_file))
Basically I override the default Test shell step and pass a custom XML file which contains my test results. I then look for the pass/fail/skip result nodes and accordingly display the results in the waterfall.
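If the XML report comes from pytest itself (pytest --junitxml=results.xml), note that the element names differ from the custom layout parsed above; here is a hedged sketch of counting outcomes from such a report (the file name is a placeholder):
import xml.etree.ElementTree as etree

root = etree.parse("results.xml").getroot()
# newer pytest wraps everything in <testsuites>, older versions use <testsuite> as the root
testcases = root.findall("./testsuite/testcase") or root.findall("./testcase")
failed = len(root.findall(".//failure")) + len(root.findall(".//error"))
skipped = len(root.findall(".//skipped"))
passed = len(testcases) - failed - skipped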