Unable to get test name when calling pytest from Python or subprocess

I am trying to create a test runner Python file that executes pytest.exe in a particular test case folder and sends the results via email.
Here is my code:
test_runner.py:
try:
    command = "pytest.exe {app} > {log}".format(app=app_folder, log=log_name)
    os.system(command)
except:
    send_mail()
I use the following code in conftest.py to add screenshots to the pytest-html report.
In conftest.py:
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if pytest_html:
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            test_case = str(item._testcase).strip(")")
            function_name = test_case.split(" ")[0]
            file_and_class_name = ((test_case.split(" ")[1]).split("."))[-2:]
            file_name = ".".join(file_and_class_name) + "." + function_name
The issue is: when I run the command pytest.exe app_folder in the Windows command prompt, it is able to discover the test cases, execute them, and get the results. But when I call the command from a .py file using either os.system or subprocess, it fails with the following exception:
\conftest.py", line 85, in pytest_runtest_makereport
INTERNALERROR> test_case = str(item._testcase).strip(")")
INTERNALERROR> AttributeError: 'TestCaseFunction' object has no attribute
'_testcase'
Can anyone please help me understand what's happening here? Or is there any other way to get the test case name?
Update:
To overcome this issue, I instead used the TestReport object from the pytest_runtest_makereport hook to get the test case details.
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
In the above example, the report variable contains the TestReport object, which can be manipulated to get the test case/class/module name.
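For example, continuing the hook above, the names can be recovered from the report's node ID instead of the unittest-specific _testcase attribute; a minimal sketch (the example node ID is hypothetical):

    # report.nodeid looks like "tests/test_app.py::TestLogin::test_valid_user"
    parts = report.nodeid.split("::")
    module_name = parts[0]            # "tests/test_app.py"
    test_name = parts[-1]             # "test_valid_user"
    class_and_test = ".".join(parts[1:])  # "TestLogin.test_valid_user"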

You can use the shell=True option with subprocess to get the desired result:
from subprocess import Popen

command = 'pytest.exe app_folder'  # you can paste the whole command you run in cmd
p1 = Popen(command, shell=True)
This should solve your problem.
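If the runner also needs the exit status to decide whether to send the mail, a minimal sketch with subprocess.run (send_mail, app_folder and log_name are the question's own placeholders):

import subprocess

# Run pytest through the shell so the redirection to the log file works;
# pytest exits non-zero when tests fail or an internal error occurs.
result = subprocess.run(
    "pytest.exe {app} > {log}".format(app=app_folder, log=log_name),
    shell=True,
)
if result.returncode != 0:
    send_mail()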


Yocto - git revision in the image name

By default Yocto adds a build timestamp to the output image file name, but I would like to replace it with the revision of my integration Git repository (which references all my layers and configuration files). To achieve this, I put the following code into my image recipe:
def get_image_version(d):
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        return subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd=parentRepo, stderr=subprocess.DEVNULL).strip().decode('UTF-8')
    except:
        return d.getVar("MACHINE", True) + "-" + d.getVar("DATETIME", True)

IMAGE_VERSION = "${@get_image_version(d)}"
IMAGE_NAME = "${IMAGE_BASENAME}-${IMAGE_VERSION}"
IMAGE_NAME[vardepsexclude] = "IMAGE_VERSION"
This code works properly until I change the Git revision (e.g. by adding a new commit). Then I receive the following error:
ERROR: When reparsing /home/ubuntu/yocto/poky/../mylayer/recipes-custom/images/core-image-minimal.bb.do_image_tar, the basehash value changed from 63e1e69797d2813a4c36297517478a28 to 9788d4bf2950a23d0f758e4508b0a894. The metadata is not deterministic and this needs to be fixed.
I understand this happens because the image recipe has already been parsed with the older Git revision, but why don't constant changes of the build timestamp cause the same error? How can I fix my code to overcome this problem?
The timestamp does not have this effect because it is added to vardepsexclude:
https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-metadata.html#variable-flags
[vardepsexclude]: Specifies a space-separated list of variables that should be excluded from a variable’s dependencies for the purposes of calculating its signature.
You may need to add this in a couple of places, e.g.:
https://git.yoctoproject.org/poky/tree/meta/classes/image-artifact-names.bbclass#n7
IMAGE_VERSION_SUFFIX ?= "-${DATETIME}"
IMAGE_VERSION_SUFFIX[vardepsexclude] += "DATETIME SOURCE_DATE_EPOCH"
IMAGE_NAME ?= "${IMAGE_BASENAME}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
After some research it turned out the problem was in this line:
IMAGE_VERSION = "${@get_image_version(d)}"
because the function get_image_version() was called during parsing. I took inspiration from the source file in aehs29's post and moved the code into an anonymous Python function, which is called after parsing.
I also had to add the vardepsexclude attribute to the IMAGE_NAME variable. I tried adding the vardepvalue flag to the IMAGE_VERSION variable as well, and in this particular case it did the same job as vardepsexclude. The mentioned BitBake class uses both flags, but I think in my case using only one of them is enough.
The final code is below:
IMAGE_VERSION ?= "${MACHINE}-${DATETIME}"
IMAGE_NAME = "${IMAGE_BASENAME}-${IMAGE_VERSION}"
IMAGE_NAME[vardepsexclude] += "IMAGE_VERSION"

python () {
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        version = subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd=parentRepo, stderr=subprocess.DEVNULL).strip().decode('UTF-8')
        d.setVar("IMAGE_VERSION", version)
    except:
        bb.warn("Could not get Git revision, image will have default name.")
}
EDIT:
After some research I realized it is better to define a global variable in the layer.conf file of the layer containing the recipes that reference the variable. The variable is set by a Python function and is immediately expanded to prevent the non-deterministic metadata error:
layer.conf:
require conf/image-version.py.inc
IMAGE_VERSION := "${@get_image_version(d)}"
image-version.py.inc:
def get_image_version(d):
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        return subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd=parentRepo, stderr=subprocess.DEVNULL).strip().decode('UTF-8')
    except:
        bb.warn("Could not determine image version. Default naming schema will be used.")
        return d.getVar("MACHINE", True) + "-" + d.getVar("DATETIME", True)
I think this is a cleaner approach that fits the BitBake build system better.

How do I implement a locustfile where each locust takes a unique value from a CSV file for its task?

from locust import HttpLocust, TaskSet, task

class ExampleTask(TaskSet):
    csvfile = open('failed.csv', 'r')
    data = csvfile.readlines()
    bakdata = list(data)

    @task
    def fun(self):
        try:
            value = self.data.pop().split(',')
            print('------This is the value {}'.format(value[0]))
        except IndexError:
            self.data = list(self.bakdata)

class ExampleUser(HttpLocust):
    host = 'https://www.google.com'
    task_set = ExampleTask
Here is my CSV file:
516,True,success
517,True,success
518,True,success
519,True,success
520,True,success
521,True,success
522,True,success
523,True,success
524,True,success
525,True,success
526,True,success
527,True,success
528,True,success
529,True,success
530,True,success
531,True,success
532,True,success
533,True,success
534,True,success
535,True,success
536,True,success
537,True,success
538,True,success
539,True,success
540,True,success
541,True,success
542,True,success
543,True,success
544,True,success
545,True,success
546,True,success
547,True,success
548,True,success
549,True,success
550,True,success
551,True,success
552,True,success
553,True,success
554,True,success
555,True,success
556,True,success
557,True,success
558,True,success
559,True,success
After the CSV file ends, the locusts do not take unique values; every simulated user takes the same value.
I'm not 100% sure, but I think your problem is this line:
    self.data = list(self.bakdata)
This gives each User instance a different copy of the list. It should work if you change it to:
    ExampleTask.data = list(self.bakdata)
Or you can use locust-plugins' CSVReader; see the example here:
https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/csvreader_ex.py
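A rough sketch based on that example (assuming a recent Locust with HttpUser, locust-plugins installed, and names adapted from the question):

from locust import HttpUser, task
from locust_plugins.csvreader import CSVReader

# One shared reader: each next() call hands out the next row,
# wrapping back to the start of the file when it is exhausted.
reader = CSVReader("failed.csv")

class ExampleUser(HttpUser):
    host = "https://www.google.com"

    @task
    def fun(self):
        row = next(reader)
        print("------This is the value {}".format(row[0]))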

Calling Command Line From Blue Prism

I am trying to use Nitro PDF Reader from the command line.
This was working correctly, but I am now getting the error: Internal : Could not execute code stage: Ambiguous match found.
Passing null values also produces this error.
Code:
timedOut = False
Dim startTime As Date = Date.Now
Dim info As New ProcessStartInfo(appn)
If args <> "" Then info.Arguments = args
If dir <> "" Then info.WorkingDirectory = dir
Using proc As Process = Process.Start(info)
    timedOut = Not proc.WaitForExit( _
        CInt(timeout.TotalMilliseconds))
End Using
The issue turned out to be caused by another action with similar code. There was an action called Run Process Until Ended, which I had duplicated to create a second action called Run Process.
The duplicate was what caused the Ambiguous match found error.

pytest overall result 'Pass' when all tests are skipped

Currently pytest returns 0 when all tests are skipped. Is it possible to configure pytest's return value to 'fail' when all tests are skipped? Or is it possible to get the total number of passed/failed tests from pytest at the end of execution?
There is possibly a more idiomatic solution, but the best I could come up with so far is this. Modify this example from the documentation to save the results somewhere:
# content of conftest.py
import pytest

TEST_RESULTS = []

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    rep = __multicall__.execute()
    if rep.when == "call":
        TEST_RESULTS.append(rep.outcome)
    return rep
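Note that __multicall__ was removed in newer pytest versions; a hookwrapper-based equivalent of the same bookkeeping would be roughly:

# content of conftest.py (newer pytest)
import pytest

TEST_RESULTS = []

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call":
        TEST_RESULTS.append(rep.outcome)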
If you want to make the session fail on a certain condition, you can write a session-scoped autouse fixture whose teardown does that for you:
# conftest.py continues...
@pytest.yield_fixture(scope="session", autouse=True)
def _skipped_checker(request):
    yield
    if not [tr for tr in TEST_RESULTS if tr != "skipped"]:
        pytest.fail("All tests were skipped")
Unfortunately, the fail (an error, actually) from this will be associated with the last test case in the session.
If you want to change the return value then you can write a hook:
# still conftest.py
def pytest_sessionfinish(session):
    if not [tr for tr in TEST_RESULTS if tr != "skipped"]:
        session.exitstatus = 10
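On newer pytest versions the same idea can reuse the built-in terminal reporter's stats instead of a global list; a hedged sketch, not from the original answer:

# conftest.py, newer pytest: change the exit status when nothing passed or failed
def pytest_sessionfinish(session, exitstatus):
    reporter = session.config.pluginmanager.get_plugin("terminalreporter")
    passed = len(reporter.stats.get("passed", []))
    failed = len(reporter.stats.get("failed", []))
    if passed == 0 and failed == 0:
        session.exitstatus = 10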
Or just call pytest through pytest.main(), then access that variable and do your own post-session checks:
import sys

import pytest

return_code = pytest.main()

import conftest
if not [tr for tr in conftest.TEST_RESULTS if tr != "skipped"]:
    sys.exit(10)
sys.exit(return_code)

Showing test count in buildbot

I am not particularly happy with the stats that Buildbot provides. I understand that it is for building and not testing - that's why it has a concept of Steps but no concept of Tests. Still, there are many cases when you need test statistics from build results, for example when comparing skipped and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display a test count in the results?
What is the simplest way, so that a person who doesn't know anything about Buildbot can do it in 15 minutes?
Depending on how you want to process the test results and how they are presented, Buildbot does provide a Test step: buildbot.steps.shell.Test.
An example of how I use it for my build environment:
import os.path
import xml.etree.ElementTree as etree

from buildbot.status import results
from buildbot.steps import shell

class CustomStepResult(shell.Test):
    description = 'Analyzing results'
    descriptionDone = 'Results analyzed'

    def __init__(self, log_file=None, *args, **kwargs):
        self._log_file = log_file
        shell.Test.__init__(self, *args, **kwargs)
        self.addFactoryArguments(log_file=log_file)

    def start(self):
        if not os.path.exists(self._log_file):
            self.finished(results.FAILURE)
            self.step_status.setText('TestResult XML file not found!')
        else:
            tree = etree.parse(self._log_file)
            root = tree.getroot()
            passing = len(root.findall('./testsuite/testcase/success'))
            skipped = len(root.findall('./testsuite/testcase/skip'))
            fails = (len(root.findall('./testsuite/error'))
                     + len(root.findall('./testsuite/testcase/error'))
                     + len(root.findall('./testsuite/testcase/failure')))
            self.setTestResults(total=fails + passing + skipped, failed=fails, passed=passing)
            # the final status for WARNINGS is green but the step itself will be orange
            self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
            self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file=log_file))
Basically I override the default Test shell step and pass in a custom XML file which contains my test results. I then look for the pass/fail/skip result nodes and display the counts in the waterfall accordingly.
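For reference, the XPath queries above assume a result file shaped roughly like this (a hypothetical minimal example, not the standard JUnit attribute-based layout):

<results>
  <testsuite>
    <testcase name="test_login"><success/></testcase>
    <testcase name="test_search"><skip/></testcase>
    <testcase name="test_checkout"><failure/></testcase>
  </testsuite>
</results>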