In pytest-html, is there a way to combine HTML reports of 2 different pytest runs?

Basically, I have a scenario where I need to run a set of parallel pytest tests and another set of serial pytest tests separately.
Each run will generate a separate pytest-html report,
but I am looking for a way to combine both generated reports.
E.g.:
py.test -n auto -m "not serial" --dist=loadfile --html=report1.html
py.test -n auto -m "serial" --dist=loadfile --html=report2.html
Is there a way to combine report1.html and report2.html into a single HTML report?

Pytest HTML Merger
There is a new utility that is able to merge multiple pytest-html reports.
I have used it in my workplace and it worked great for us.
Assume you have multiple HTML reports under the current directory ./.
Installation
pip install pytest-html-merger
Usage
pytest_html_merger -i ./ -o ./merged.html
This will generate a unified pytest-html report.
Tested on Linux, but it should work on Windows as well.
Link to github page
https://github.com/akavbathen/pytest_html_merger
Enjoy!

Related

Deselect a pytest test in the test itself

I already know that we can use marks and then call pytest with -m to execute only certain tests.
My question is: is there a way to mark a test so that it is excluded by default, without having to add any -m option when calling pytest?
EDIT:
I am thinking of something like this: mark the test in thespecialtest.py with a special mark (I don't know if that exists; that is why I'm asking):
@pytest.mark.notselect
Then running the tests as always with plain pytest would exclude that test.
If I want to run that test specifically, I can do so explicitly: pytest thespecialtest.py
I know that the best and easiest way would be just to use -m when calling pytest, but I want to ask if there is an option where this would not be necessary.
The -m option is probably the most comfortable one for this use case.
However, you can also choose which tests to run by name with the -k option, which works much like -m except that you select test cases based on their names rather than their marks.
Another option is to change the test discovery process: for example, you can tell pytest to collect and execute only functions that match a certain name pattern by adding to your pytest.ini:
[pytest]
python_functions = *_check
which tells pytest to collect and execute only functions that match this glob pattern. You can do this with classes and files as well.
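If you want marked tests excluded by default without typing -m every time, a conftest.py hook can do it. Here is a minimal sketch, following the "skip slow tests by default" pattern from the pytest documentation; the marker name notselect and the flag --run-notselect are just example names:

import pytest

def pytest_addoption(parser):
    parser.addoption("--run-notselect", action="store_true", default=False,
                     help="also run tests marked with @pytest.mark.notselect")

def pytest_configure(config):
    config.addinivalue_line("markers", "notselect: skipped unless --run-notselect is given")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-notselect"):
        return  # opt-in flag given: run everything collected
    skip = pytest.mark.skip(reason="needs --run-notselect to run")
    for item in items:
        if "notselect" in item.keywords:
            item.add_marker(skip)

Note that this reports the marked tests as skipped rather than silently deselecting them, and the opt-in flag is still needed when running the file directly.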

Is there a way in pytest to generate test report from custom file for non-python test cases?

Background
Trigger legacy non-python testcases from pytest. Since these testcases are grouped into testsuites, from pytest's perspective we'll be doing an ssh to a remote machine and triggering a testsuite there. So from pytest's point of view it is a single testcase, but it is actually a bunch of testcases executing on the remote machine.
Requirement
The testsuite will generate a test report which we'll SCP back to the pytest machine. I wish to parse the test report and report the PASS/FAIL for each testcase from pytest.
I have been looking at examples but still can't get my head around how to trigger the testcase over SSH, parse the test report (XML/JSON), and generate the pytest report.
Any suggestions ?
Update:
I have been able to parse the yaml file and generate the terminal report (pytest_terminal_summary) for my testcases. But I would also like pytest to report the number of testcases passed/failed.
You can try pytest test.py -v --junitxml=result.xml
You can also generate an HTML report (via the pytest-html plugin) using pytest file.py -sv --html=report.html
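If you want each remote testcase to show up in pytest's own pass/fail counts, another approach is to parametrize one test over the parsed report. A minimal sketch, assuming the report copied back via SCP has already been converted to a JSON list of {"name": ..., "status": ...} records (the file name remote_results.json and its format are hypothetical):

import json
import pytest

# report SCP'd back from the remote machine and converted to JSON
with open("remote_results.json") as f:
    RESULTS = json.load(f)

@pytest.mark.parametrize("case", RESULTS, ids=[c["name"] for c in RESULTS])
def test_remote_case(case):
    # one pytest test per remote testcase, so pass/fail counts line up
    assert case["status"] == "PASS", f"{case['name']} failed on the remote machine"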

How to run a pytest-bdd test?

I am not understanding how to properly run a simple test (feature file and python file)
with the library pytest-bdd.
From the official documentation, I can't work out what command to issue to run a test.
I tried the pytest command, but I saw that NO tests ran.
Do I need to use another library, behave, to run a feature file?
After trying for 2 days, I figured out that for running a pytest-bdd test there are certain requirements, at least in my view:
- put both the feature file and the python file in the same directory (maybe this can be changed with configuration files)
- the python file name needs to start with test_
- the python file needs to contain a method whose name starts with test_
- the method starting with test_ needs to be bound to the scenario with the @scenario decorator
- to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running it you will only see that the method with the name starting with test_ has passed, but all the steps actually ran. To verify this, you can assert False in any @when or @then annotated method, and it will throw errors.
The system contained: pytest-bdd==3.0.2 (copied from pip freeze output)
Feature files and python files can be placed in different folders using the bdd_features_base_dir option provided by pytest-bdd; I think it is better to have feature files in different folders too.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest BDD files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the python file you don't need:
the method starting with test_ bound to the @scenario decorator
You can just add scenarios("...") (pointing at your feature file or directory) to allow the tests to be started, using the steps defined in that specific python file.
Remember to import scenarios: from pytest_bdd import scenarios
Example:
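A minimal sketch of what such a file can look like (all names here are hypothetical, and target_fixture assumes pytest-bdd 4.x or later):

# test_hello.py -- assumes a sibling file hello.feature containing:
#
#   Feature: Greeting
#     Scenario: Say hello
#       Given the name "world"
#       When I build the greeting
#       Then the greeting is "Hello, world"

from pytest_bdd import scenarios, given, when, then, parsers

scenarios("hello.feature")  # generates a test for every scenario in the file

@given(parsers.parse('the name "{name}"'), target_fixture="name")
def given_name(name):
    return name

@when("I build the greeting", target_fixture="greeting")
def build_greeting(name):
    return f"Hello, {name}"

@then(parsers.parse('the greeting is "{expected}"'))
def check_greeting(greeting, expected):
    assert greeting == expected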
Command:
pytest -v path_to_test_file.py
Things to note here:
- the feature file should follow the format filename.feature
- always add __init__.py to your test modules, otherwise the test runner will not find the test files
- glue the right step definitions to the test function
- add the feature in the features module
- if you are using python3, execute the test with python3
So,
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

How to have Dexy evaluate Perl scripts?

The dexy documentation states that any language may be used. The tutorial uses the py filter to run a Python file, but I didn't find any filter to run a Perl file.
I'm trying to execute a very simple Perl file.
I've tried using the bash and sh filters, but with no luck, and didn't find any execute-or-alike filter.
Am I missing something obvious?
OK, here are the different solutions I found for this.
1. A perl filter now exists
Ana is the owner of this project and is very responsive. I asked her about dexy and Perl on IRC, and ta-da! Less than an hour later, there was a commit on the repository with Perl support.
So, if you just get the latest version and install it this way:
git clone https://github.com/dexy/dexy
cd dexy
sudo pip install -e .
You should have a perl filter.
If you want to pass arguments to a script, just use the scriptargs setting.
2. Use a bash script
Another very simple solution is to embed the launch of the Perl script in an sh/bash script and use the sh/shint/bash filters that already exist.
3. Use a bash script without additional files
If you fear that the previous solution will make you add a lot of tiny scripts to your directories, you can use the contents feature of dexy. That way, the required one-liners are defined in dexy.yaml only.
Something like:
- shell-myscript.sh|sh:
    - contents: "perl ./perl/myscript.pl --any-parameter"
- perl/myscript.pl
does the job just fine for me.

Can py.test support multiple -k options?

Can py.test support multiple -k options?
Each testcase belongs to a particular group, such as _eventnotification or _interface, etc.
Is it possible to run test cases that belong to either one group or both at the same time?
I.e., run testcases that have _eventnotification or _interface in their names at the same time.
I tried the following, and only the testcases with _interface were executed:
py.test -k "_eventnotification" -k "_interface"
If that is not supported, is there another way to do this?
The bad news: pytest-2.3.3 does not support it.
The good news: I took your question as an opportunity to finally enhance the -k behaviour, so that you can use "not", "or", "and", etc.; see the extended -k example in the docs. It now works like -m except that it matches on (substrings of) test names, not markers. You can use this in-development pytest version with "pip install -i http://pypi.testrun.org -U pytest".