Equivalent of mocha's before, beforeEach, after, and afterEach in pytest

I'm new to Python and am using pytest along with requests to start API testing.
I want to run some scripts before each test module, and other snippets before each test case in a module, to set up test-case data.
I've checked pytest fixtures and scopes, but I don't think they're what I'm looking for, since I can't control the data passed to fixtures. What other possible solutions are there?
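For reference, pytest does ship xunit-style hooks that map closely onto mocha's lifecycle functions: setup_module/teardown_module run once per module, and setup_function/teardown_function run around every test. A minimal sketch (the URL and data names below are made up):

```python
# xunit-style lifecycle hooks in a pytest test module; pytest calls these
# automatically by name. The dict and its contents are illustrative.
test_data = {}

def setup_module(module):
    # mocha's `before`: runs once, before any test in this module
    test_data["base_url"] = "https://api.example.com"

def teardown_module(module):
    # mocha's `after`: runs once, after the last test in this module
    test_data.clear()

def setup_function(function):
    # mocha's `beforeEach`: runs before every test function
    test_data["case"] = {"user": "alice"}

def teardown_function(function):
    # mocha's `afterEach`: runs after every test function
    test_data.pop("case", None)

def test_example():
    assert test_data["case"]["user"] == "alice"
```

Note that these hooks share state through module globals rather than arguments, which is exactly what makes them feel closer to mocha than fixtures do.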

Related

Best practice for invoke-ing a tool that internally calls `exec`?

pipenv uses click, and uses click's invoke method in its unit test suite to test itself.
This does not work well when unit testing the pipenv run command, which internally uses execve. In a single-threaded test run, this means our test runner (pytest) process gets completely replaced by whatever we're exec-ing, and the test suite immediately and silently exits.
(This bug on the pipenv issue tracker talks about this in more detail: https://github.com/pypa/pipenv/issues/4909)
Is there a best practice for handling situations like this with click? Should we avoid click's invoke entirely and use regular old subprocess instead? Or could invoke optionally run the command in a subprocess rather than in the current process? Or maybe there's a best practice for CLI apps that use click to detect whether they're being run via invoke and avoid calling anything in the exec family (this is basically what pipenv does right now: if the CI env var is set, pipenv run spawns a subprocess instead of calling execve).
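To illustrate the subprocess option: an exec* call made in a child process replaces only that child, so the test runner survives and can assert on the output. This is a generic sketch, not pipenv's or click's actual test code; the inline child program stands in for a real CLI that exec()s.

```python
import subprocess
import sys

# Child program that exec()s another interpreter, the way `pipenv run`
# exec()s its target command. Only this child process gets replaced.
CHILD = (
    "import os, sys; "
    "os.execv(sys.executable, [sys.executable, '-c', 'print(\"ran\")'])"
)

result = subprocess.run(
    [sys.executable, "-c", CHILD],
    capture_output=True,
    text=True,
)
# The parent (the test runner) is still alive and can inspect the result.
```

(os.execv replaces the process image in place on POSIX; the semantics differ slightly on Windows, so this isolation argument is strongest on Unix-like CI.)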

Distribute shell scripts using setuptools and pyproject.toml

I'm trying to distribute a shell script along with a Python package. Ideally, the shell script is installed when I run pip install my_package. I read in this SO answer that my expected behavior is exactly what the scripts keyword of setuptools.setup provides. E.g. the script my_script will be installed with the following setup.py:
setup(
    ...
    scripts=['my_script'],
    ...
)
However, I cannot use the above method, for two reasons:
The official docs do not mention this behavior, so I don't know whether I can keep relying on it.
My whole project is built on pyproject.toml, without a setup.py. Although pyproject.toml provides a [project.scripts] table, as explained in the setuptools official docs, those scripts can only be Python functions, not shell scripts.
For completeness, in my case, the shell script reads git status and sets environment variables, which will be read from within my python project. The shell script and my python project are bonded so tightly that I would rather not split them into two projects.
I have also tried using a Python function to execute the shell script, e.g.
[project.scripts]
my_script = 'my_project:my_func'
with, inside my_project:
import subprocess

def my_func():
    subprocess.run(...)
The problem with this solution is that every time I run my_script, my_project is loaded, and the loading process is really slow.
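One partial mitigation for the slow start, assuming the slowness comes from heavy imports at the top of my_project: keep the entry-point module thin and defer those imports into the function body. A sketch (the default command is a stand-in for the real shell script):

```python
import sys

def my_func(cmd=None):
    # Deferred import: nothing heavy loads at module-import time, so the
    # console script starts about as fast as the interpreter itself.
    import subprocess
    # Stand-in command; a real entry point would run the bundled shell script.
    cmd = cmd or [sys.executable, "-c", "print('ok')"]
    return subprocess.run(cmd, capture_output=True, text=True)
```

This only helps if the cost is in my_project's own imports; it does nothing about interpreter startup itself.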
Maybe a link in the comments leads to this information already. Anyway, I think it is worth posting that scripts = [...] in setup.py can be written in pyproject.toml as:
[tool.setuptools]
script-files = ["scripts/myscript1", "scripts/myscript2"]
However, this feature is deprecated. I hope the authors of the packaging tools will recognize the problem with shell scripts and deal with it.
Link: setuptools docs
I'm not exactly sure it will work for your case, but I solved this by creating a "shim" setup.py file (it has the added benefit of letting you install your project in editable mode).
It usually just calls setup(), but it was possible to pass the scripts argument:
"""Shim setup file to allow for editable install."""
from setuptools import setup
if __name__ == "__main__":
setup(scripts=["myscript"])
Everything else was loaded from pyproject.toml.

Testing on multiple platforms with pytest-cpp

I'm testing a project that has Python and C++ parts.
Both parts have unit tests, using pytest on the Python side and Catch2 on the C++ side, plus there are integration tests that are supposed to check the cooperation between the parts.
Recently I discovered pytest-cpp and it works great to create a "test all" run, with both sets of unit tests.
Integration tests are also written in pytest.
I have several testing binaries on the C++ side and pytest fixtures for compiling, running, and interfacing with them from Python.
Since the C++ code needs to work on multiple platforms, I'm planning to parametrize the integration-test fixtures by a platform name and emulation method, allowing me to cross-compile the binaries, for example for ARM64, and run them through qemu-aarch64.
My question is: Is there some way to hijack the test discovery done by pytest-cpp and force it to look at specific paths (with emulation wrappers) provided by my fixture, so that I can also run the C++ unit tests this way?
I've looked at the test discovery bit of pytest-cpp and I guess I could reimplement this function in my own code, but I don't see any way to parametrize pytest_collect_file by a fixture ...
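For what it's worth, the reason a fixture can't reach pytest_collect_file is phase ordering: collection hooks fire before any fixture is set up, so per-platform paths have to arrive through configuration instead. A speculative conftest.py sketch (not pytest-cpp's actual API; all names are invented):

```python
from pathlib import Path

# Hypothetical mapping from platform name to emulation-wrapped test binaries.
WRAPPED_BINARIES = {
    "arm64": Path("build/arm64/unit_tests"),
}

def pytest_addoption(parser):
    # A command-line option is available at collection time, unlike a fixture.
    parser.addoption(
        "--target-platform",
        default=None,
        help="collect C++ unit tests for this platform via its emulation wrapper",
    )

def wrapped_binary(platform):
    """Resolve which binary a collection hook should point pytest-cpp at."""
    return WRAPPED_BINARIES.get(platform)
```

The fixtures could then read the same option, so the integration tests and the hijacked C++ collection agree on which wrapped binaries are in play.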

How do I get pytest to do discovery based on module name, and not path

I'm looking at moving from unittest to pytest. One thing I like to do is run a setup.py install and then run the tests from the installed modules, rather than directly from the source code. This means that I pick up any files I've forgotten to include in MANIFEST.in.
With unittest, I can get the test runner to do test discovery by specifying the root test module. e.g. python -m unittest myproj.tests
Is there a way to do this with pytest?
I'm using the following hack, but I wish there was a built in cleaner way.
pytest $(python -c 'import myproj.tests; print(myproj.tests.__path__[0])')
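Spelled out in Python, the hack just imports the test package and reads its installed location, then points pytest at that directory. The stdlib email package stands in for the hypothetical myproj.tests here so the snippet is runnable:

```python
import importlib
import os

# Import the installed package (not the source tree) and find where it lives;
# `email` is a stand-in for myproj.tests.
pkg = importlib.import_module("email")
tests_dir = pkg.__path__[0]  # the directory you'd hand to pytest

assert os.path.isdir(tests_dir)
```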
The Tests as part of application section of pytest good practices says if your tests are available at myproj.tests, run:
py.test --pyargs myproj.tests
With pytest you can instead specify the path to the root test directory; it will run all the tests it is able to discover there. You can find more detail in the pytest good practices documentation.

How can I combine the PyDev unit test runner with Web2py?

I'm using Eclipse/PyDev on a Web2py application, and I'd like to create a launch configuration that runs a unit test using web2py.
Normally, Web2py wants you to run a unit test with a test runner script, like so:
python web2py.py -S testa -M -R testRunner.py
testRunner.py includes a main method that runs:
unittest.TextTestRunner(verbosity=2).run(suite)
However, in PyDev, the test running is managed outside of your source, in pysrc\runfiles.py.
PyDev's test runner doesn't even take -S, -M, and -R as arguments, and it has no way of passing them on to web2py.py, which it expects to be a suite of tests rather than a runner.
Is there a way to test Web2py using a PyDev unittest configuration, and if so, how?
My suggestion in this case is using the pytest runner (configure it in the pyunit preferences)... I haven't searched, but I bet there's some plugin for running web2py with pytest.