Running a test with tox based on a keyword - pytest

I am using pytest with tox. I can run some of my tests with a keyword like this:
pytest -k <keyword> path/to/tests
Now it would be really convenient to be able to do this with tox as well, since the environments there are clean and different Python versions can be tested. However, the nearest thing I have found is:
tox -- path/to/tests/test_very_specific_name.py::TestClass::test_func
This is not easy to type, so I'd rather just run tox without arguments and wait two minutes for everything to finish.
Is there a way to run single tests based on keywords with tox? I tried:
tox -- -k <keyword>
This results in a huge list of import errors; it doesn't seem to be able to find any of my local imports. Is this supposed to work?

I figured it out thanks to the comment by phd.
Everything on the command line after -- can be used in tox.ini as {posargs}. I was using that wrong. My tox.ini now has a line like this:
commands = py.test {posargs} <test_folder>
Now it works perfectly with:
tox -- -k <keyword>
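For reference, a minimal tox.ini sketch along those lines (the envlist and the tests/ path are placeholders, not from my real setup):
[tox]
envlist = py38, py39

[testenv]
deps = pytest
commands = pytest {posargs} tests/
With this, tox -- -k <keyword> substitutes -k <keyword> for {posargs}, so each environment effectively runs pytest -k <keyword> tests/.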

Related

How can I run Cypress Cucumber test cases using tags in parallel

Can anyone help me run cypress-tags run -e TAGS='#feature-tag' in parallel on a GitLab pipeline? I tried using
"e2e:test": "cypress-tags run -e -- TAGS='#feature-tag'",
"test:e2e": "run-p --race \"e2e:test -- {#}\" --"
but it didn't work, although it does work when I use cypress run instead of cypress-tags run:
"e2e:test": "cypress run",
"test:e2e": "run-p --race \"e2e:test -- {#}\" --"
Here it works, but I cannot use tags with cypress run. I would be really grateful for any help with a solution.
It looks like there is no native support in cypress-cucumber-preprocessor to achieve this, but there are other options on the market if you want to consider them; they are discussed in this issue.

Is there a way to run pytests using xdist by file(s)?

I am trying to run 2 test files using xdist with 2 gateways (-n=2). Each test file contains tests which are user-permission specific. While running the tests with pytest and pytest-xdist, I noticed some of the tests fail randomly. This happens because some of the tests from file1 get executed by a different gw: while [gw0] runs most of the tests from file0, it sometimes also executes some tests from file1, which causes the failure.
I am trying to find out whether there is a way to force/ask xdist to execute a specific file, or perhaps a way to assign a file to a gw.
pytest test_*.py -n=2 -s -v
also tried:
pytest test_*.py -n=2 -s -v --dist=loadfile
Assuming your test files are set up correctly for parallel runs (they properly receive the PYTEST_XDIST_WORKER and PYTEST_XDIST_WORKER_COUNT environment variables), you only need to run:
pytest test_*.py --tx '2*popen' --dist=loadfile
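If loadfile grouping is still too coarse, newer pytest-xdist releases (2.5 and later) also let you pin tests to the same worker explicitly with the xdist_group marker plus --dist loadgroup. A minimal sketch, with made-up test and group names:
import pytest

# Both tests carry the same group name, so --dist loadgroup
# guarantees they run in the same xdist worker.
@pytest.mark.xdist_group(name="file0_permissions")
def test_admin_can_edit():
    assert True

@pytest.mark.xdist_group(name="file0_permissions")
def test_admin_can_delete():
    assert True
Run it with pytest -n 2 --dist loadgroup; every test sharing a group name lands on the same gw.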

How to run a pytest-bdd test?

I don't understand how to properly run a simple test (feature file and Python file)
with the library pytest-bdd.
From the official documentation, I can't work out what command to issue to run a test.
I tried the pytest command, but it reported that no tests ran.
Do I need to use another library, such as behave, to run a feature file?
After trying for 2 days, I figured out that running a pytest-bdd test has certain requirements, at least in my view:
put both the feature file and the Python file in the same directory (maybe this can be changed with configuration files)
the Python file name needs to start with test_
the Python file needs to contain a method whose name starts with test_
the method starting with test_ needs to be bound to the scenario with the @scenario decorator
to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After issuing it you will only see that the method whose name starts with test_ has passed, but all the steps actually ran. To verify this, you can assert False in any @when- or @then-decorated method and it will throw errors.
The system contained pytest-bdd==3.0.2 (copied from pip freeze output).
Feature files and Python files can be placed in different folders using the bdd_features_base_dir ini option provided by pytest-bdd; I think it is better to keep feature files in separate folders anyway.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest BDD files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the Python file you don't have to follow the rule above that says "the method starting with test_ needs to be bound to the scenario with the @scenario decorator".
You can just call scenarios("<path to your feature files>") instead; this starts all the tests whose steps are defined in this specific Python file.
Remember to import scenarios: from pytest_bdd import scenarios.
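Example: a minimal sketch of this approach (file names, paths and step texts are made up, and step-binding details can vary slightly between pytest-bdd versions).
features/addition.feature:
Feature: Addition
  Scenario: Adding two numbers
    Given the numbers 2 and 3
    When I add them
    Then the result is 5
test_addition.py:
import pytest
from pytest_bdd import scenarios, given, when, then

scenarios("features")  # bind every scenario found under features/

@pytest.fixture
def context():
    # shared mutable state passed between the steps
    return {}

@given("the numbers 2 and 3")
def numbers(context):
    context["a"], context["b"] = 2, 3

@when("I add them")
def add(context):
    context["result"] = context["a"] + context["b"]

@then("the result is 5")
def check(context):
    assert context["result"] == 5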
Command:
pytest -v path_to_test_file.py
Things to note here:
Check the format of the feature file: filename.feature
Always add __init__.py files to your test packages, otherwise the test runner will not find the test files (see the example layout below)
Glue the right step definitions to the test function
Add the feature files under the features module
If you are using Python 3, execute the tests with python3
So,
python3 -m pytest -v path_to_test_file.py
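A hedged example of a layout matching those notes (all names are made up):
tests/
    __init__.py
    features/
        calculator.feature
    step_defs/
        __init__.py
        test_calculator.py
Here tests/step_defs/test_calculator.py holds the step definitions and binds the scenarios from tests/features/calculator.feature.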
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

Recommended way to run commands after installing dependencies in the virtualenv

I would like to use tox to run py.test on a project which needs extra setup beyond installing packages into the virtualenv. After creating the virtualenv and installing the dependencies, some commands need to be run.
Specifically I'm talking about setting up a node and npm environment using nodeenv:
nodeenv --prebuilt -p
I see that tox allows me to provide a custom command used for installing dependencies by setting install_command in tox.ini. But I don't think this is what I want, because that replaces the command (pip, I assume) used to install dependencies.
I thought about using a py.test fixture with session scope to handle setting up nodeenv, but that seems hacky to me, as I don't want this to happen when py.test is run directly rather than via tox.
What is the least insane way of achieving this?
You can do all the necessary setup after the creation of the virtualenv and the dependency installation in commands. Yes, the documentation says these are "the commands to be called for testing", but if you need to do extra work to prepare for testing you can just do it right there.
It works through whatever you throw at it, in the order given - e.g.:
[testenv:someenv]
deps =
    nodeenv
    pytest
    flexmock
commands =
    nodeenv --prebuilt -p
    ; ... and whatever else you might need to do
    py.test path/to/my/tests
If you have commands/scripts or whatever else that produce the right result but return a non-zero exit status, you can ignore that by prepending - (as in - naughty-command).
If you need more steps to happen you can wrap them in a little (Python) script and call that script instead as outlined in https://stackoverflow.com/a/47834447/2626627.
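For example, a small sketch of such a wrapper (prepare_env.py is a made-up name; swap in whatever steps your project needs):
#!/usr/bin/env python
"""Extra setup steps tox should run after installing dependencies."""
import subprocess

STEPS = [
    ["nodeenv", "--prebuilt", "-p"],
    # ... any further setup commands ...
]

for step in STEPS:
    print("+", " ".join(step))   # echo each command before running it
    subprocess.check_call(step)  # abort on the first failing step
tox would then call it as the first entry in commands, e.g. commands = python prepare_env.py.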
There is also an issue about adding the ability to use more than one install command: https://github.com/tox-dev/tox/issues/715.
I had the same issue, and as it was important for me to be able to create the environment without invoking the tests (via --notest), I wanted the install to happen in the install phase and not the run phase, so I did something slightly different. First, I created a create-env script:
#!/usr/bin/env sh
set -e
pip install "$@"
nodeenv --prebuilt --python-virtualenv --node=8.2.1
Make it executable, then in tox.ini:
[tox]
skipsdist = True
[testenv]
install_command = ./create-env {opts} {packages}
deps = nodeenv
commands = node --version
This complete example runs and outputs the following:
$ tox
python create: .../.tox/python
python installdeps: nodeenv
python installed: nodeenv==1.3.0
python runtests: PYTHONHASHSEED='1150209523'
python runtests: commands[0] | node --version
v8.2.1
_____________________________________________________________________ summary ______________________________________________________________________
python: commands succeeded
congratulations :)
This approach has the downside that it would only work on Unix.
In tox issue 715 I proposed native support for multiple install commands.

Run pgTAP with Perl prove instead of pg_prove

I have a test suite, as usual for Perl projects, containing a lib and a t directory. The tests in t are structured through subdirectories. So I run them using:
prove -Ilib -r t/
So far nothing special, and afaik quite a standard way of testing in Perl.
Since the assumption is that this is the standard way of testing, I'd like to make sure that the following applies:
"If you run prove -r on t, you have tested everything that there is to test."
This is very important, since otherwise you can never be sure that you really ran all the tests and everything is fine. Somebody calling the above might then, without knowing it, run only part of the available tests, leaving some behind. Quite annoying... tests that are not run are of no help. It should be as easy and predictable as possible for developers to run all the tests! It is a bad thing when you have to look up how to run the rest of the test suite; you might not know about it, or might not do it anyway.
So here comes my problem: I have to integrate some tests using pgTAP, which kindly provides the tool pg_prove. Now I need two commands to do the testing: in addition to running prove -Ilib -r, I also have to run something like pg_prove -S schema=customerX -U dbuser -d dbname t/pgTAP/*.sql. The problem is not that big if you call the tests automatically from cron or whatever, but it really decreases the chance that we lazy developers run all the tests during our busy days.
So I wonder what the best approach would be to implement the tests in such a way that prove will also include them. Do I have to create some .t files which wrap the whole thing (and how)? Are there any tricks I can do with the Harness stuff on CPAN? Would a simple test_all.sh in the root dir, containing both commands, do the job best, even if it breaks the assumption I made above?
So my question in short is: Can I run all tests, including pgTAP with prove? If not, is there a best practice for solving my problem?
Thanks a lot.
Yes. In fact, pg_prove just passes everything off to prove. Assuming your pgTAP tests end in .sql, you can run all your tests like this:
prove -lr --ext .sql --ext .t \
  --source pgTAP \
  --pgtap-option dbname=dbname \
  --pgtap-option username=dbuser \
  --pgtap-option suffix=.sql \
  --pgtap-option set=schema=customerX
If you use Module::Build, you can also have ./Build test run all the tests, as I've done for circle.
See the TAP::Parser::SourceHandler::pgTAP documentation for details.
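If you also want the plain prove invocation from the question to keep picking these options up automatically, prove reads a .proverc file from the current directory; a sketch with the same placeholder connection options:
--ext .sql
--ext .t
--source pgTAP
--pgtap-option dbname=dbname
--pgtap-option username=dbuser
--pgtap-option suffix=.sql
--pgtap-option set=schema=customerX
With that file in the project root, a bare prove -lr t/ runs both the Perl and the pgTAP tests.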