Pytest finalizers - order of execution

I am writing a py.test program; consider the following py.test fixture code:

    @pytest.fixture(scope="class")
    def my_fixture(request):
        def fin1():
            print("fin1")
        request.addfinalizer(fin1)

        def fin2():
            print("fin2")
        request.addfinalizer(fin2)

What is the execution order? I didn't find any mention in the documentation regarding the execution order of finalizers.
Thanks in advance.

I guess the easiest way would be to just try running your code with -s and see in which order the prints happen. (For the record, finalizers registered via request.addfinalizer run in reverse order of registration, i.e. last-in, first-out, so fin2 is called before fin1.)
What I'd recommend is to use yield fixtures instead, so you can explicitly control the teardown order easily:

    @pytest.yield_fixture(scope="class")
    def my_fixture():
        # do setup
        yield
        fin1()
        fin2()
Starting with pytest 3.0 (which will be released soon), this will also work by just using yield with the normal @pytest.fixture decorator, and will be the recommended way of doing teardown.
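For reference, here is a minimal, self-contained sketch that makes the order visible (the test class is assumed; run it with pytest -s):

    import pytest

    @pytest.fixture(scope="class")
    def my_fixture(request):
        def fin1():
            print("fin1")
        request.addfinalizer(fin1)

        def fin2():
            print("fin2")
        request.addfinalizer(fin2)

    class TestFinalizerOrder:
        def test_something(self, my_fixture):
            pass

Once the class finishes, this prints "fin2" and then "fin1": finalizers run in reverse order of registration (last-in, first-out).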

How do I use a different fixture implementation based on a command line argument?

I am testing an app that has user profiles. Normally, I tear down the profile after each test, but that is very slow, so I want the option to run the tests faster by keeping the profile and tearing down only its changes after each test. This is what I have now, and it works fine:
    @pytest.fixture(scope="session")
    def session_scope_app():
        with empty_app_started() as app:
            yield app

    @pytest.fixture(scope="session")
    def session_scope_app_with_profile_loaded(session_scope_app):
        with profile_loaded(session_scope_app):
            yield session_scope_app

    if TEAR_DOWN_PROFILE_AFTER_EACH_TEST:
        @pytest.fixture
        def setup(session_scope_app):
            with profile_loaded(session_scope_app):
                yield session_scope_app
    else:
        @pytest.fixture
        def setup(session_scope_app_with_profile_loaded):
            with profile_state_preserved(session_scope_app_with_profile_loaded):
                yield session_scope_app_with_profile_loaded
This produces a fixture setup that, as far as other tests are concerned,
behaves the same way regardless of whether the profile is torn down after each test.
Now, I want to turn TEAR_DOWN_PROFILE_AFTER_EACH_TEST into a command line option. How can I do this? Command line options are not yet available at the test collection stage, and I can't just put the if into the fixture function body, as the two variants of setup depend on different fixtures.
There are two ways of doing that, but first, let's add the command line option itself.

    def pytest_addoption(parser):
        parser.addoption("--tear-down-profile-after-each-test",
                         action="store_true",
                         default=True)
        parser.addoption("--no-tear-down-profile-after-each-test", "-T",
                         action="store_false",
                         dest="tear_down_profile_after_each_test")
Now, we can either invoke fixtures dynamically, or create a tiny plugin that shuffles our fixtures.
Invoke the fixture dynamically
This is very simple. Instead of depending on a fixture via function arguments,
we can call request.getfixturevalue(name) from inside the fixture.
    @pytest.fixture
    def setup(request, session_scope_app):  # request is needed for config/getfixturevalue
        if request.config.option.tear_down_profile_after_each_test:
            with profile_loaded(session_scope_app):
                yield session_scope_app
        else:
            session = request.getfixturevalue(
                session_scope_app_with_profile_loaded.__name__
            )
            with profile_state_preserved(session):
                yield session
(It's ok to depend on session_scope_app since session_scope_app_with_profile_loaded depends on it anyway.)
Pros: PyCharm is happy. Cons: you won't be seeing session_scope_app_with_profile_loaded in --setup-plan.
Make a simple plugin
Plugins have the benefit of having access to the configuration.
    def pytest_configure(config):
        class Plugin:
            if config.option.tear_down_profile_after_each_test:
                @pytest.fixture
                def setup(self, session_scope_app):
                    with profile_loaded(session_scope_app):
                        yield session_scope_app
            else:
                @pytest.fixture
                def setup(self, session_scope_app_with_profile_loaded):
                    with profile_state_preserved(session_scope_app_with_profile_loaded):
                        yield session_scope_app_with_profile_loaded

        config.pluginmanager.register(Plugin())
Pros: You get excellent --setup-plan. Cons: PyCharm won't recognize that setup is a fixture.
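Either way, with the options defined above, a plain pytest run tears the profile down after each test (the default), while pytest -T (or pytest --no-tear-down-profile-after-each-test) keeps the profile and only rolls back changes.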

How to detect an untracked Future?

In my code, a Future can be created and start executing without its result ever being used, and nothing detects this:

    def f(): Future[String] = {
      functionReturningFuture() // How to detect this?
      Future("")
    }

Ideally, a static analysis tool would help detect this.
The closest you can get is the NonUnitStatements wart from WartRemover, but it cannot be made to flag only Future statements while skipping all the others.
The fact that you have such an issue could be used as an argument against using Future, replacing it with some IO: Cats' IO, Monix's Task, or Scalaz ZIO. With these, you build your pipeline first, and then you run it. If you omitted an IO value from the return and didn't compose it into the result in any other way (flatMap, map2, a for comprehension, etc.), it would not get executed - it would still be there, but it would cause no harm.
If you wanted greater control and to error only on Future, you would probably have to write your own WartRemover wart or Scalafix rule.

Breakdown of fixture setup time in py.test

I have some py.test tests with multiple dependent and parameterized fixtures, and I want to measure the time taken by each fixture. However, --durations only reports the total setup time for each test; it doesn't give me a breakdown of how long each individual fixture took.
Here is a concrete example of how to do this:
    import logging
    import time

    import pytest

    logger = logging.getLogger(__name__)

    @pytest.hookimpl(hookwrapper=True)
    def pytest_fixture_setup(fixturedef, request):
        # Everything before the yield runs before the fixture is set up;
        # everything after it runs once the fixture setup has finished.
        start = time.time()
        yield
        end = time.time()
        logger.info(
            'pytest_fixture_setup'
            f', request={request}'
            f', time={end - start}'
        )
With output similar to:
    2018-10-29 20:43:18,783 - INFO pytest_fixture_setup, request=<SubRequest 'some_data_source' for <Function 'test_ruleset_customer_to_campaign'>>, time=3.4723987579345703
The magic is hookwrapper:
pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers and passes the same arguments as to the regular hooks.
One fairly important gotcha that I ran into is that the conftest.py has to be in your project's root folder in order to pick up the pytest_fixture_setup hook.
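Also note that logger.info output is only visible if logging is surfaced somewhere; depending on your pytest version, enabling live logging, e.g. pytest -o log_cli=true -o log_cli_level=INFO (or an equivalent logging configuration), should make the messages show up.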
There isn't anything built in for that, but you can easily implement it yourself using the new pytest_fixture_setup hook in a conftest.py file.

Basic pytest teardown that runs on test completion

I am minimally using pytest as a generic test runner for large automated integration tests against various API products at work, and I've been trying to find an equally generic example of a teardown function that runs on completion of any test, regardless of success or failure.
My typical use pattern is super linear and usually goes something like this:
    def test_1():
        <logic>
        assert something

    def test_2():
        <logic>
        assert something

    def test_3():
        <logic>
        assert something
Occasionally, when it makes sense to do so, at the top of my script I toss in a setup fixture with autouse=True, which runs at the launch of every script:
    @pytest.fixture(scope="session", autouse=True)
    def setup_something():
        testhelper = TestHelper
        testhelper.create_something(host="somehost", channel="somechannel")

    def test_1():
        <logic>
        assert something

    def test_2():
        <logic>
        assert something

    def test_3():
        <logic>
        assert something
Up until recently, disposable Docker environments have allowed me to get away with skipping the entire teardown process, but I'm in a bit of a pinch where one of those is not available right now. Ideally, without diverting from the same linear pattern I've already been using, how would I implement another pytest fixture that does something like:
    @pytest.fixture
    def teardown():
        testhelper = TestHelper
        testhelper.delete_something(thing=something)
when the run is completed?
Every fixture can have a teardown part:

    @pytest.fixture
    def something(request):
        # setup code
        def finalize():
            pass  # teardown code
        request.addfinalizer(finalize)
        return fixture_result
Or, as I usually use it:

    @pytest.fixture
    def something():
        # setup code
        yield fixture_result
        # teardown code
Note that in pytest pre-3.0, the decorator required for the latter idiom was @pytest.yield_fixture. Since 3.0, however, one can just use the regular @pytest.fixture decorator, and @pytest.yield_fixture is deprecated.
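(One practical difference between the two idioms: finalizers that were already registered with addfinalizer still run if a later setup step raises, whereas the code after yield is skipped entirely when the setup part fails.)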
See more here
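Applied to the question's pattern, a single session-scoped autouse fixture can do both jobs. Here is a sketch reusing the asker's placeholder TestHelper names, assuming create_something returns the object that needs deleting:

    import pytest

    @pytest.fixture(scope="session", autouse=True)
    def setup_something():
        # Runs once, before the first test of the session.
        testhelper = TestHelper
        something = testhelper.create_something(host="somehost", channel="somechannel")
        yield
        # Runs once, after the last test, whether the tests passed or failed.
        testhelper.delete_something(thing=something)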
You can also use these hooks in your conftest.py:

    def pytest_runtest_setup(item):
        # runs before every test item
        pass

    def pytest_runtest_teardown(item):
        # runs after every test item, regardless of the outcome
        pass

See here for the docs.

py.test mixing fixtures and asyncio coroutines

I am building some tests for Python 3 code using py.test. The code accesses a PostgreSQL database using aiopg (an asyncio-based interface to Postgres).
My main expectations:
Every test case should have access to a new asyncio event loop.
A test that runs too long will stop with a timeout exception.
Every test case should have access to a database connection.
I don't want to repeat myself when writing the test cases.
Using py.test fixtures I can get pretty close to what I want, but I still have to repeat myself a bit in every asynchronous test case.
This is what my code looks like:
    @pytest.fixture(scope='function')
    def tloop(request):
        # This fixture is responsible for getting a new event loop
        # for every test, and closing it when the test ends.
        ...

    def run_timeout(cor, loop, timeout=ASYNC_TEST_TIMEOUT):
        """
        Run a given coroutine with a timeout.
        """
        task_with_timeout = asyncio.wait_for(cor, timeout)
        try:
            loop.run_until_complete(task_with_timeout)
        except futures.TimeoutError:
            # Timeout:
            raise ExceptAsyncTestTimeout()

    @pytest.fixture(scope='module')
    def clean_test_db(request):
        # Empty the test database.
        ...

    @pytest.fixture(scope='function')
    def udb(request, clean_test_db, tloop):
        # Obtain a connection to the database using aiopg
        # (that's why we need tloop here).
        ...

    # An example of a test:
    def test_insert_user(tloop, udb):
        @asyncio.coroutine
        def insert_user():
            # Do the user insertion here ...
            yield from udb.insert_new_user(...
            ...
        run_timeout(insert_user(), tloop)
I can live with the solution that I have so far, but it can get cumbersome to define an inner coroutine and add the run_timeout line for every asynchronous test that I write.
I want my tests to look somewhat like this:
    @some_magic_decorator
    def test_insert_user(udb):
        # Do the user insertion here ...
        yield from udb.insert_new_user(...
        ...
I attempted to create such a decorator in some elegant way, but failed. More generally, if my test looks like:

    @some_magic_decorator
    def my_test(arg1, arg2, ..., arg_n):
        ...

then the produced function (after the decorator is applied) should be:

    def my_test_wrapper(tloop, arg1, arg2, ..., arg_n):
        run_timeout(my_test(), tloop)
Note that some of my tests use other fixtures (besides udb for example), and those fixtures must show up as arguments to the produced function, or else py.test will not invoke them.
I tried using both the wrapt and decorator Python modules to create such a magic decorator; however, it seems that both of these modules help me create a function with a signature identical to my_test, which is not a good solution in this case.
This can probably be solved using eval or a similar hack, but I was wondering if there is something elegant that I'm missing here.
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:
    # tests/test_foo.py
    import asyncio

    @asyncio.coroutine
    def test_coro(loop):
        yield from asyncio.sleep(0.1)
        assert 0

    # tests/conftest.py
    import asyncio

    import pytest

    @pytest.yield_fixture
    def loop():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        yield loop
        loop.close()

    def pytest_pycollect_makeitem(collector, name, obj):
        """Collect asyncio coroutines as normal functions, not as generators."""
        if asyncio.iscoroutinefunction(obj):
            return list(collector._genfunctions(name, obj))

    def pytest_pyfunc_call(pyfuncitem):
        """If ``pyfuncitem.obj`` is an asyncio coroutine function, execute it
        via the event loop instead of calling it directly."""
        testfunction = pyfuncitem.obj
        if not asyncio.iscoroutinefunction(testfunction):
            return
        # Copied from _pytest/python.py:pytest_pyfunc_call()
        funcargs = pyfuncitem.funcargs
        testargs = {}
        for arg in pyfuncitem._fixtureinfo.argnames:
            testargs[arg] = funcargs[arg]
        coro = testfunction(**testargs)  # Will not execute the test yet!
        # Run the coro in the event loop
        loop = testargs.get('loop', asyncio.get_event_loop())
        loop.run_until_complete(coro)
        return True  # TODO: What to return here?
So I basically let pytest collect asyncio coroutines like normal functions. I also intercept test execution: if the to-be-tested function is a coroutine, I execute it in the event loop. It works with or without a fixture creating a new event loop instance per test.
Edit: According to Ronny Pfannschmidt, something like this will be added to pytest after the 2.7 release. :-)
Every test case should have access to a new asyncio event loop.
The test suite of asyncio uses unittest.TestCase. It uses the setUp() method to create a new event loop, and addCleanup(loop.close) closes the event loop automatically, even on error.
Sorry, I don't know how to write this with py.test if you don't want to use TestCase. But if I remember correctly, py.test supports unittest.TestCase.
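For reference, a sketch of that TestCase pattern (py.test can collect and run unittest.TestCase subclasses):

    import asyncio
    import unittest

    class MyAsyncTest(unittest.TestCase):
        def setUp(self):
            # A fresh event loop per test; addCleanup closes it even if the test errors.
            self.loop = asyncio.new_event_loop()
            asyncio.set_event_loop(self.loop)
            self.addCleanup(self.loop.close)

        def test_sleep(self):
            self.loop.run_until_complete(asyncio.sleep(0.01))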
A test that runs too long will stop with a timeout exception.
You can use loop.call_later() with a function that raises a BaseException as a watchdog.
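A minimal sketch of that idea, cancelling the test's task after the timeout rather than raising from the callback directly (ExceptAsyncTestTimeout is the question's custom exception; the helper name is made up):

    import asyncio

    def run_with_watchdog(loop, cor, timeout):
        task = loop.create_task(cor)
        # Watchdog: cancel the task if it is still running after `timeout` seconds.
        handle = loop.call_later(timeout, task.cancel)
        try:
            return loop.run_until_complete(task)
        except asyncio.CancelledError:
            raise ExceptAsyncTestTimeout()
        finally:
            handle.cancel()  # Disarm the watchdog if the task finished in time.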