How to run a pytest-bdd test?

I don't understand how to properly run a simple test (a feature file and a Python file)
with the pytest-bdd library.
From the official documentation, I can't work out what command to issue to run a test.
I tried the pytest command, but it reported that no tests ran.
Do I need to use another library, such as behave, to run a feature file?

After trying for two days, I figured out that
running a pytest-bdd test has certain requirements, at least in my view:
put both the feature file and the Python file in the same directory (maybe this can be changed with configuration files)
the Python file name needs to start with test_
the Python file needs to contain a function whose name starts with test_
the function starting with test_ needs to be bound to the scenario with the @scenario decorator
to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running, you will only see that the function whose name starts with test_ passed, but all the steps actually ran. To verify this, put assert False in any @when or @then step function and it will raise an error.
My system had: pytest-bdd==3.0.2 (copied from pip freeze output)
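To make this concrete, here is a minimal sketch of such a pair of files; the file names, feature text and step names are made up for illustration and are not from the answer above.

publish_article.feature:

Feature: Publishing
  Scenario: Publish an article
    Given I have a draft article
    When I publish it
    Then it is visible

test_publish_article.py:

import pytest
from pytest_bdd import scenario, given, when, then

@pytest.fixture
def article():
    # shared state for the steps below
    return {"published": False}

@scenario("publish_article.feature", "Publish an article")
def test_publish():
    pass

@given("I have a draft article")
def have_draft(article):
    assert article["published"] is False

@when("I publish it")
def publish(article):
    article["published"] = True

@then("it is visible")
def is_visible(article):
    assert article["published"] is True

With both files in the same directory, running pytest there collects test_publish and executes every step.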

Feature files and Python files can be placed in different folders using the bdd_features_base_dir option provided by pytest-bdd; I think it is better to keep feature files in separate folders too.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
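For reference, the relevant part of such a pytest.ini looks roughly like this (the bdd folder name is only an example):

[pytest]
bdd_features_base_dir = bdd/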
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest-bdd files (and, if you want, a separate features folder referenced by bdd_features_base_dir), then run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest

I've found out that in the Python file you don't have to follow the rule above:
"the function starting with test_ needs to be bound to the scenario with the @scenario decorator"
You can just add scenarios("") to allow the tests to be run that use the steps defined in that specific Python file.
Remember to import it: from pytest_bdd import scenarios
Example:
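Below is a minimal sketch of that approach (the file name is made up and the step definitions are elided):

# test_all_scenarios.py
from pytest_bdd import scenarios, given, when, then

# Instead of one @scenario-decorated test_ function per scenario,
# bind every scenario found relative to this test file:
scenarios("")

# ... @given / @when / @then step definitions follow as usual ...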

Command:
pytest -v path_to_test_file.py
Things to note here:
Check the format of the feature file: filename.feature
Always add __init__.py to the modules, otherwise the test runner will not find the test files
Glue the right step definitions to the test function
Add the feature file to the features module (see the layout sketch below)
If you are using Python 3, execute the test with python3, so:
python3 -m pytest -v path_to_test_file.py
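A directory layout consistent with these notes might look like this (all names are illustrative):

tests/
    __init__.py
    features/
        publish_article.feature
    step_defs/
        __init__.py
        test_publish_article.py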
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

Related

Perl executable crashes even though file is not missing

I get the following error:
Can't load '...\AppData\Local\Temp\par-6e72616f\cache-20221205133501\5743946b.xs.dll' for module GD:
load_file:
The specified module could not be found at <embedded>/DynaLoader.pm line 193.
at <embedded>/PAR/Heavy.pm line 140.
(Line breaks added for readability.)
Here is the file t2.pl:
use GD;
Here is the command to convert it to an exe (I use a batch file that timestamps it):
pp -T 20221205133501 -o t2_20221205133501.exe t2.pl
On my laptop, the exe works, but on a barebones Citrix environment it fails.
My environment:
Strawberry Perl v5.32.1 built for MSWin32-x86-multi-thread-64int
GD v2.73
I know the file is simple, but that one line is enough to cause the crash.
The file it complains about exists and is located where it is looking.
I have looked and it looks like I need to add -m GD, or -l xxx, to make it work. I tried adding all the DLL files I could find for GD, but failed.
I am in a corporate environment, so I can't really use anything that depends on external programs not included in Windows 10. pp_simple depends on wxpar, which I do not have. I have used:
objdump -x C:\Strawberry\perl\vendor\lib\auto\GD\GD.xs.dll | find "DLL"
which got me a list of DLLs, and I did try using them with -l.
From Re: Par with strawberry-Perl
There are likely missing DLLs that need to be added to the pp call
using the --link option.
Finding these manually can be a pain, so have a look at pp_autolink or
pp_simple (the former is mine, but adapted from the latter).
https://github.com/shawnlaffan/perl-pp-autolink
https://www.perlmonks.org/?node_id=1148802
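For illustration only, a pp call with extra DLLs linked in would look like the following; the DLL names here are placeholders, and the real ones are whatever the objdump command above (or pp_autolink) reports:

pp -T 20221205133501 -o t2_20221205133501.exe ^
   -l C:\Strawberry\c\bin\first-missing.dll ^
   -l C:\Strawberry\c\bin\second-missing.dll ^
   t2.pl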

How to run a single failing test when building Perl

This issue is usually encountered when running make test and seeing one test fail. The README says that each test can be run individually, but doesn't clearly specify how to do so.
make test uses the script called TEST in the test directory (t). To replicate make test for a single file, use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information, and more, can be found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
Run ./perl harness ../foo/boo.t in the t directory, where foo/boo.t is the name of the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to use the build libraries created by make, then use this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Mblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless tests are written to allow for targeting specific segments within a test script. If you need to target a specific test within a test script you'll usually have to dig into the test code itself.

Is there a way in pytest to generate test report from custom file for non-python test cases?

Background
Trigger legacy non-Python test cases from pytest. Since these test cases are grouped into test suites, from pytest's perspective we'll be doing an SSH to a remote machine and triggering a test suite there. So from pytest's point of view it is a single test case, but actually it is a bunch of them executing on the remote machine.
Requirement
The test suite will generate a test report, which we'll SCP back to the pytest machine. I want to parse the test report and report PASS/FAIL for each test case from pytest.
I have been looking into examples but still can't get my head around how I would trigger the test case over SSH, parse the test report (XML/JSON), and generate a pytest report.
Any suggestions?
Update:
I have been able to parse the YAML file and generate the terminal report (pytest_terminal_summary) for my test cases. But I would also like pytest to report the number of test cases that passed/failed.
Can you try: pytest test.py -v --junitxml="result.xml"
You can also generate an HTML report (this requires the pytest-html plugin) using: pytest file.py -sv --html report.html
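As a rough sketch of the "report each legacy case individually" part (this is not from the answer above, and it assumes the report has already been copied back before collection and is a simple YAML mapping of test name to status), one option is to parametrize a pytest test over the parsed entries:

# test_legacy_suite.py -- names and report format are assumptions
import pytest
import yaml

def load_results(path="legacy_report.yaml"):
    # e.g. {"test_login": "PASS", "test_logout": "FAIL"}
    with open(path) as f:
        return sorted(yaml.safe_load(f).items())

RESULTS = load_results()

@pytest.mark.parametrize("name,status", RESULTS, ids=[name for name, _ in RESULTS])
def test_legacy(name, status):
    assert status == "PASS", f"legacy case {name} reported {status}"

Each legacy case then shows up as its own passed/failed test in pytest's summary and in the junit/html reports mentioned above.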

Running a test with tox based on a keyword

I am using pytest with tox. I can run some of my tests with a keyword like this:
pytest -k <keyword> path/to/tests
Now it would be really convenient to be able to do this also with tox, as the environments there are clean and different python versions can be tested. However the nearest thing I have found is:
tox -- path/to/tests/test_very_specific_name.py:TestClass.test_func
This is not easy to type, so I'd rather just run tox without arguments and wait 2 minutes for everything to finish.
Is there a way to run single tests based on keywords with tox? I tried:
tox -- -k <keyword>
This results in a huge list of import errors. It doesn't seem to be able to find any of my local includes. Is this supposed to work?
I figured it out thanks to the comment by phd.
Everything on the command line after -- can be used in tox.ini as {posargs}. I was using that wrong. My tox.ini now has a line like this:
commands = py.test {posargs} <test_folder>
Now it works perfectly with:
tox -- -k <keyword>
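Putting it together, a minimal tox.ini section along these lines looks like this (the folder name is a placeholder):

[testenv]
deps = pytest
commands = py.test {posargs} path/to/tests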

Recommended way to run commands after installing dependencies in the virtualenv

I would like to use tox to run py.test on a project which needs additional setup beyond installing packages into the virtualenv. After creating the virtualenv and installing the dependencies, some commands need to be run.
Specifically I'm talking about setting up a node and npm environment using nodeenv:
nodeenv --prebuilt -p
I see that tox allows me to provide a custom command used for installing dependencies by setting install_command in tox.ini. But I don't think this is what I want because that replaces the command (I assume pip) used to install dependencies.
I thought about using a py.test fixture with session scope to handle setting up nodeenv, but that seems hacky to me, as I don't want this to happen when py.test is run directly rather than via tox.
What is the least insane way of achieving this?
You can do all the necessary setup after the creation of the virtualenv and the dependency installation in commands. Yes, the documentation says these are "the commands to be called for testing", but if you need to do extra work to prepare for testing, you can just do it right there.
tox works through whatever you throw at it in the order it is given, e.g.:
[testenv:someenv]
deps =
nodeenv
pytest
flexmock
commands =
nodeenv --prebuilt -p
; ... and whatever else you might need to do
py.test path/to/my/tests
If you have a command or script that produces the right result but returns a non-zero exit status, you can ignore that by prefixing it with - (as in - naughty-command).
If you need more steps to happen you can wrap them in a little (Python) script and call that script instead as outlined in https://stackoverflow.com/a/47834447/2626627.
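For example, such a wrapper could be a small Python script kept in the repository (the name prepare.py and the steps it runs are only an illustration) and called from commands as python prepare.py:

#!/usr/bin/env python
"""Extra preparation steps run from tox's commands section."""
import subprocess
import sys

def run(*cmd):
    # Abort the tox run immediately if any preparation step fails.
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run("nodeenv", "--prebuilt", "-p")
    run(sys.executable, "-m", "pytest", "path/to/my/tests")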
There is also an open issue about adding the ability to use more than one install command: https://github.com/tox-dev/tox/issues/715.
I had the same issue, and as it was important for me to be able to create the environment without invoking the tests (via --notest), I wanted the install to happen in the install phase and not the run phase, so I did something slightly different. First, I created a create-env script:
#!/usr/bin/env sh
set -e
pip install "$@"
nodeenv --prebuilt --python-virtualenv --node=8.2.1
Made it executable, then in tox.ini:
[tox]
skipsdist = True
[testenv]
install_command = ./create-env {opts} {packages}
deps = nodeenv
commands = node --version
This complete example runs and outputs the following:
$ tox
python create: .../.tox/python
python installdeps: nodeenv
python installed: nodeenv==1.3.0
python runtests: PYTHONHASHSEED='1150209523'
python runtests: commands[0] | node --version
v8.2.1
_____________________________________________________________________ summary ______________________________________________________________________
python: commands succeeded
congratulations :)
This approach has the downside that it would only work on Unix.
In tox issue 715, I proposed the possibility of native support for multiple install commands.