Is there a way in pytest to generate a test report from a custom file for non-python test cases?

Background
We trigger legacy non-python testcases from pytest. Since these testcases are grouped into testsuites, from pytest's perspective we'll be doing an SSH to a remote machine and triggering a testsuite there. So from pytest's point of view it is a single testcase, but in reality it is a bunch of testcases executing on the remote machine.
Requirement
The testsuite will generate a testreport which we'll SCP back to the pytest machine. I wish to parse the testreport and report the PASS/FAIL for each testcase from pytest.
I have been looking into examples but still can't get my head around how to trigger the testsuite over SSH, parse the testreport (XML/JSON), and generate a pytest report.
Any suggestions?
Update:
I have been able to parse the YAML file and generate a terminal report (via pytest_terminal_summary) for my testcases. But I would also like pytest to report the number of testcases passed/failed.
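For reference, a trimmed sketch of the hook I'm using (results.yaml is a placeholder for the report copied back from the remote machine):

# conftest.py, trimmed sketch of the terminal-summary approach
import yaml

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # print one line per remote testcase parsed from the SCP'd report
    terminalreporter.section("legacy testsuite results")
    with open("results.yaml") as f:
        for entry in yaml.safe_load(f):
            terminalreporter.write_line(f"{entry['name']}: {entry['outcome'].upper()}")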

Can you try pytest test.py -v --junitxml="result.xml" to produce a JUnit XML report?
You can also generate an HTML result using pytest file.py -sv --html=report.html (this needs the pytest-html plugin).
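If you need pytest itself to count each remote testcase as passed or failed, not just print a summary, another option is a custom collector, along the lines of pytest's documented support for non-python tests. A minimal sketch, assuming a results.yaml of name/outcome pairs copied back from the remote machine (hook signature as in pytest 6; newer pytest versions pass file_path instead of path):

# conftest.py, minimal sketch; the file name and key names are assumptions
import pytest
import yaml

def pytest_collect_file(parent, path):
    # turn every results.yaml into a set of pytest items
    if path.basename == "results.yaml":
        return YamlFile.from_parent(parent, fspath=path)

class YamlFile(pytest.File):
    def collect(self):
        for entry in yaml.safe_load(self.fspath.open()):
            yield YamlItem.from_parent(
                self, name=entry["name"], outcome=entry["outcome"])

class YamlItem(pytest.Item):
    def __init__(self, name, parent, outcome):
        super().__init__(name, parent)
        self.outcome = outcome

    def runtest(self):
        # fail the item when the remote testsuite reported a failure,
        # so pytest's pass/fail counts reflect the remote results
        if self.outcome != "pass":
            raise AssertionError(f"{self.name} failed on the remote machine")

With this in place, pytest -v --junitxml=result.xml reports one PASS/FAIL entry per remote testcase.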

Related

In pytest-html, is there a way to combine html reports of 2 different pytest runs?

Basically I have a scenario where I need to run a set of parallel pytest tests and another set of serial pytest tests separately.
Each run generates a separate pytest-html report.
But I am looking for a solution that combines both generated reports.
Eg:
py.test -n auto -m "not serial" --dist=loadfile --html=report1.html
py.test -n auto -m "serial" --dist=loadfile --html=report2.html
Is there a way to combine report1.html and report2.html into a single HTML report?
Pytest HTML Merger
There is a new utility that is able to merge multiple pytest-html reports.
I have used it in my workplace and it worked great for us.
Assume you have multiple HTML reports under the current directory (./).
Installation
pip install pytest-html-merger
Usage
pytest_html_merger -i ./ -o ./merged.html
This will generate a unified pytest-html report.
Tested on Linux, but it should work on Windows as well.
Link to the GitHub page:
https://github.com/akavbathen/pytest_html_merger
Enjoy!

Is there a way to run pytests using xdist by file(s)?

I am trying to run 2 test files using xdist with 2 gateways (-n=2). Each test file contains tests that are user-permission specific. While running them with pytest and pytest-xdist, I noticed some of the tests fail randomly. This happens because some of the tests from file1 get executed by a different gw: if [gw0] is running most of the tests from file0, sometimes [gw0] also executes some tests from file1, which causes the failure.
Is there a way I can force/ask xdist to execute a specific file on a specific gateway, or perhaps a way to assign a file to a gw?
pytest test_*.py -n=2 -s -v
also tried:
pytest test_*.py -n=2 -s -v --dist=loadfile
Assuming your parallel test files are correctly set up for distribution (the workers properly receive the PYTEST_XDIST_WORKER and PYTEST_XDIST_WORKER_COUNT environment variables), you only need to run:
pytest test_*.py --tx '2*popen' --dist=loadfile
With --dist=loadfile, tests are grouped by their containing file and each group is sent to a worker as a whole, so a single file never gets split across gateways.
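Those environment variables are set by pytest-xdist itself, so you can also inspect which gateway a test landed on; a small sketch (the fixture name is made up):

# conftest.py, reads pytest-xdist's worker id from the environment
import os
import pytest

@pytest.fixture(scope="session")
def xdist_worker():
    # "gw0", "gw1", ... under pytest-xdist; "master" in a plain pytest run
    return os.environ.get("PYTEST_XDIST_WORKER", "master")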

How to run a pytest-bdd test?

I don't understand how to properly run a simple test (a feature file plus a Python file)
with the pytest-bdd library.
From the official documentation, I can't work out what command to issue to run a test.
I tried the pytest command, but no tests ran.
Do I need to use another library such as behave to run a feature file?
I figured it out after trying for 2 days:
to run a pytest-bdd test there are certain requirements, at least in my view.
put both the feature file and the Python file in the same directory (this can be changed with configuration files)
the Python file's name needs to start with test_
the Python file needs to contain a method whose name starts with test_
the method starting with test_ needs to be bound to the scenario with the @scenario decorator (see the sketch below)
to run the test, issue the pytest command in the same directory (this is also configurable)
After running it you will only see that the method starting with test_ has passed, but all the steps actually ran. To check, you can assert False in any @when- or @then-annotated method and it will throw errors.
The system contained: pytest-bdd==3.0.2 (copied from pip freeze output)
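For illustration, a minimal sketch of that layout (all names are made up; publish.feature sits next to the test file and contains a scenario named "Publishing an article" with matching steps):

# test_publish.py, minimal pytest-bdd 3.x sketch; names are illustrative
from pytest_bdd import scenario, given, when, then

@scenario("publish.feature", "Publishing an article")
def test_publish():
    pass  # pytest-bdd runs the given/when/then steps for us

@given("I have an unpublished article")
def article():
    # in pytest-bdd 3.x a @given function also acts as a fixture
    return {"published": False}

@when("I publish the article")
def publish_article(article):
    article["published"] = True

@then("the article is published")
def check_published(article):
    assert article["published"]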
Feature files and Python files can be placed in different folders using the bdd_features_base_dir option provided by pytest-bdd; I think it is better to keep the feature files in a separate folder anyway.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
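The relevant part of that pytest.ini is a single option; a minimal sketch, with bdd as the features folder (matching that repo's layout):

[pytest]
bdd_features_base_dir = bdd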
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest BDD files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the Python file you don't have to bind the test_ method to the scenario with the @scenario decorator.
You can just add scenarios("") (the argument is the path to your feature file or directory) to start all the tests that use the steps defined in this specific Python file.
Remember to import scenarios first: from pytest_bdd import scenarios
Example:
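A made-up sketch (the feature file name and location are assumptions):

# test_login.py, collects every scenario in the feature file as a test
from pytest_bdd import scenarios

scenarios("login.feature")
# the @given/@when/@then step definitions live in this same module,
# just as in the sketch shown in the earlier answer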
Command:
pytest -v path_to_test_file.py
Things to note here:
the feature file must follow the filename.feature naming format
always add __init__.py files to the test modules, otherwise the test runner will not find the test files
glue the right step definitions to the test function
put the feature file in the features module
if you are using Python 3, execute the test with python3:
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

Running a test with tox based on a keyword

I am using pytest with tox. I can run some of my tests with a keyword like this:
pytest -k <keyword> path/to/tests
Now it would be really convenient to be able to do this with tox as well, since the environments there are clean and different Python versions can be tested. However, the nearest thing I have found is:
tox -- path/to/tests/test_very_specific_name.py::TestClass::test_func
This is not easy to type, so I'd rather just run tox without arguments and wait 2 minutes for everything to finish.
Is there a way to run single tests based on keywords with tox? I tried:
tox -- -k <keyword>
This results in a huge list of import errors. It doesn't seem to be able to find any of my local includes. Is this supposed to work?
I figured it out thanks to the comment by phd.
Everything on the command line after -- can be used in tox.ini as {posargs}. I was using that wrong. My tox.ini now has a line like this:
commands = py.test {posargs} <test_folder>
Now it works perfectly with:
tox -- -k <keyword>
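For reference, a minimal tox.ini along those lines (the Python version and the tests folder name are assumptions):

[tox]
envlist = py310

[testenv]
deps = pytest
commands = pytest {posargs} tests

With this, tox -- -k <keyword> expands to pytest -k <keyword> tests inside the environment.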

Output selenium test result as html after running perl script

I am currently looking for a way to output the test results nicely after running a Selenium Perl script.
The htmlSuite command of the Selenium server outputs a nice HTML-format result page, but I don't know how to do that from a Perl script.
The problem is that I have set it up so that Selenium runs 24/7 on a virtual machine workstation (Windows 7) that anyone can run tests on. Therefore I can't use htmlSuite to run the tests, because the server would close after a test is finished.
Is there a command argument or a Perl method to make the Selenium server output results in HTML or some other nice format, rather than printing them on the command line?
Or is there a better way to do this?
If your script outputs TAP (which is what Test::More produces), then you can use the Test::Harness family of modules to parse that TAP and generate an HTML report.
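For example, a tiny TAP-emitting test script (a sketch using the standard Test::More module; the file name is made up):

# t/example.t, prints TAP lines such as "ok 1 - addition works"
use strict;
use warnings;
use Test::More tests => 2;

ok(1 + 1 == 2, 'addition works');
is('selenium', 'selenium', 'strings match');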
How nice is nice? Under Hudson/Jenkins this gives graphs and a tabular report of tests run:
prove --timer --formatter=TAP::Formatter::JUnit large_test.t >junit.xml