Interleaved repeat of pytest tests

I have some tests which I would like to repeat a number of times. I tried the pytest-repeat plugin:
pip3 install pytest-repeat
import pytest

@pytest.mark.repeat(2)
class TestDemo():
    def test_demo1(self):
        pass

    def test_demo2(self):
        pass
This works:
test_class_repeat.py::TestDemo::test_demo1[1/2] PASSED
test_class_repeat.py::TestDemo::test_demo1[2/2] PASSED
test_class_repeat.py::TestDemo::test_demo2[1/2] PASSED
test_class_repeat.py::TestDemo::test_demo2[2/2] PASSED
Except that I want an interleaved order: run all tests, then run all tests again:
test_class_repeat.py::TestDemo::test_demo1[1/2] PASSED
test_class_repeat.py::TestDemo::test_demo2[1/2] PASSED
test_class_repeat.py::TestDemo::test_demo1[2/2] PASSED
test_class_repeat.py::TestDemo::test_demo2[2/2] PASSED
Is there a simple way to do this?

Not a very clean solution, but you can naively define a test function that runs the tests in the interleaved order, skip the class that serves only as the definition, and apply the repeat marker to that wrapper:
import pytest

@pytest.mark.skip(reason='Definition only')
class TestDemo():
    def test_demo1(self):
        print('In Test 1')
        assert 1 == 1

    def test_demo2(self):
        print('In Test 2')
        assert 2 == 2

@pytest.mark.repeat(2)
def test_all():
    demo = TestDemo()
    demo.test_demo1()
    demo.test_demo2()
Execution (in a Jupyter notebook) gives:
Test.py::TestDemo::test_demo1 SKIPPED
Test.py::TestDemo::test_demo2 SKIPPED
Test.py::test_all[1/2]
In Test 1
In Test 2
PASSED
TestProject.py::test_all[2/2]
In Test 1
In Test 2
PASSED
Side note: if one of the two nested tests fails, test_all fails as a whole. That may or may not be what you want from interleaved tests.

You can use the pytest-flakefinder package by Dropbox. It repeats the tests after the run is complete.
Usage: py.test --flake-finder --flake-runs=<number of runs>.

This can be done using pytest.mark.parametrize if the test functions take a parameter. Below is an example.
import pytest

iter_list = [1, 2, 3]

@pytest.mark.parametrize('param1', iter_list, scope='class')
class TestDemo():
    def test_demo1(self, param1):
        pass

    def test_demo2(self, param1):
        pass

Related

Varying parameters for a session-scoped pytest fixture

I have a test suite with an expensive fixture (it spins up a bunch of containers in a cluster), so I'd like to use a session-scoped fixture for it. However, it's configurable on several axes, and different subsets of tests need to test different subsets of the configuration space.
Here's a minimal demonstration of what I'm trying to do. By default tests need to test the combinations x=1,y=10 and x=2,y=10, but the tests in test_foo.py need to test x=3,y=10 so override the x fixture:
conftest.py:
import pytest

@pytest.fixture(scope="session", params=[1, 2])
def x(request):
    return request.param

@pytest.fixture(scope="session", params=[10])
def y(request):
    return request.param

@pytest.fixture(scope="session")
def expensive(x, y):
    return f"expensive[{x}, {y}]"
test_bar.py:
def test_bar(expensive):
    assert expensive in {"expensive[1, 10]", "expensive[2, 10]"}
test_foo.py:
import pytest

@pytest.fixture(scope="session", params=[3])
def x(request):
    return request.param

def test_foo(expensive):
    assert expensive in {"expensive[3, 10]"}
When I run this, I get the following:
test_bar.py::test_bar[1-10] PASSED [ 33%]
test_foo.py::test_foo[3-10] FAILED [ 66%]
test_bar.py::test_bar[2-10] PASSED [100%]
=================================== FAILURES ===================================
________________________________ test_foo[3-10] ________________________________
expensive = 'expensive[1, 10]'
    def test_foo(expensive):
>       assert expensive in {"expensive[3, 10]"}
E       AssertionError: assert 'expensive[1, 10]' in {'expensive[3, 10]'}
It appears to have reused the 1-10 fixture from test_bar for the 3-10 test in test_foo. Is that expected (some sort of matching by position in the parameter list rather than value), or a bug in pytest? Is there some way I can get it to do what I'm aiming for?
Incidentally, if I make x in test_foo.py non-parametric (just returning a hard-coded 3) it also fails, but in a slightly different way: it runs both test_bar tests first, then reuses the second fixture for the test_foo test.
The problem here is that the expensive fixture is in session scope, and the parameters are only read once.
A workaround would be to move expensive to module scope, so it will be evaluated for each module. That would work, but it would evaluate the fixture for each test module, even if several of them use the same parameters. If you have several test modules that use the same parameters, you could instead use two separate fixtures with different scopes, with the common code moved out, e.g.:
import pytest
...

def do_expensive_stuff(a, b):  # a and b are normal arguments, not fixtures
    # set up containers with parameters a, b...
    return f"expensive[{a}, {b}]"

@pytest.fixture(scope="session")
def expensive(x, y):
    yield do_expensive_stuff(x, y)

@pytest.fixture(scope="module")
def expensive_module(x, y):
    yield do_expensive_stuff(x, y)
You would use expensive_module in test_foo. This assumes, of course, that the expensive setup has to be done separately for each parameter set. The downside of this approach is having to use differently named fixtures.
It would be nice if someone would come up with a cleaner approach...
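For illustration, test_foo.py might then look roughly like this. This is only a sketch of how the suggestion above could be applied; whether it fully sidesteps the session-level parameter caching described in the question should be verified:
import pytest

# Override x for this module only, as in the original test_foo.py.
@pytest.fixture(scope="session", params=[3])
def x(request):
    return request.param

def test_foo(expensive_module):
    # Module-scoped fixture, so the expensive setup is evaluated
    # again for this module's parameters.
    assert expensive_module in {"expensive[3, 10]"}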

How can I parametrize my fixture and get test data to parametrize my tests

I'm a beginner with pytest. I just learned about fixtures and tried to do this:
My tests call functions I wrote, and they get their test data from a code-practice website.
Each test is for a particular page and has several sets of test data.
So, I want to use @pytest.mark.parametrize to parametrize my single test function.
Also, as the operations of the tests are similar, I want to make the page object instantiation and the steps to get test data from the page into a fixture.
# content of conftest.py
import pytest
from page_objects import problem_page

@pytest.fixture
def get_testdata_from_problem_page():
    def _get_testdata_from_problem_page(problem_name):
        page = problem_page.ProblemPage(problem_name)
        return page.get_sample_data()
    return _get_testdata_from_problem_page
# content of test_problem_a.py
import pytest
from page_objects import problem_page
from problem_func import problem_a

@pytest.mark.parametrize('input,expected', test_data)
def test_problem_a(get_testdata_from_problem_page):
    input, expected = get_testdata_from_problem_page("problem_a")
    assert problem_a.problem_a(input) == expected
Then I realized that, written this way, I can't parametrize the test using pytest.mark, because test_data has to be available outside the test function.
Are there solutions for this? Thanks very much~~
If I understand you correctly, you want to write one parameterized test per page. In this case you just have to write a function instead of a fixture and use that for parametrization:
import pytest
from page_objects import problem_page
from problem_func import problem_a

def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    # returns a list of (input, expected) tuples
    return page.get_sample_data()

@pytest.mark.parametrize('input,expected',
                         get_testdata_from_problem_page("problem_a"))
def test_problem_a(input, expected):
    assert problem_a.problem_a(input) == expected
As you wrote, a fixture can only be used as a parameter to a test function or to another fixture, not in a decorator.
If you want to use the function to get test data elsewhere, just move it to a common module and import it. This can be some custom utility module, or you could put it into conftest.py, though you still have to import it. A sketch follows.
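For illustration only (the module name testdata_utils.py is made up here), the shared helper could look like this:
# content of testdata_utils.py (hypothetical shared module)
from page_objects import problem_page

def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    return page.get_sample_data()

# content of test_problem_a.py
from testdata_utils import get_testdata_from_problem_page
# ... then pass its result to @pytest.mark.parametrize exactly as shown above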
Note also that the fixture you wrote does not do any work by itself - it only defines a local function (which the fixture never calls) and returns it.

How to control the test flow

I have a test campaign which contains tests 1, 2, 3, 4, 5 and 6, and I'd like to control the execution flow within one test run. For example, after running tests 1, 2 and 3, I need to perform some test setup before executing 4, 5 and 6. Is there any feature or plugin in pytest that supports this? Also, I don't want to split the tests into multiple test runs.
pytest runs tests in the order in which they appear in a module. So if tests 1, 2 and 3 are defined before 4, 5 and 6 in the module, they will get executed first. But in case they are not, or if they live in different modules, or you just want to enforce some order, you should check out the pytest-ordering plugin.
Note that there are other alternatives if you don't want to use a plugin, e.g. the pytest_collection_modifyitems hook, which lets you re-order the collected tests in place, as in the sketch below.
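A minimal sketch of that hook, assuming the tests are literally named test_1 through test_6 (adjust the list to your real test names):
# content of conftest.py
def pytest_collection_modifyitems(session, config, items):
    desired = ["test_1", "test_2", "test_3", "test_4", "test_5", "test_6"]
    # Sort the collected items by their position in the desired list;
    # anything not listed keeps its place after the listed tests.
    items.sort(key=lambda item: desired.index(item.name)
               if item.name in desired else len(desired))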
For the part where you want to do some test setup before executing tests 4, 5 and 6, you can use pytest fixtures. For your case, you can define a session-scoped fixture that does the setup job and then include this fixture in the argument list of those tests. Something like below:
import pytest

@pytest.fixture(scope="session")
def test_setup():
    print("Setup done")

def test_1():
    assert True

def test_2():
    assert True

def test_3(test_setup):
    assert True

def test_4(test_setup):
    assert True
Here, test_1 and test_2 would get executed first. Then test_setup would be invoked, followed by test_3 and test_4.

pytest fixture with parametrization from another fixture

I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file, and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. The parametrization causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...

@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
Any ideas?
Update: Let me add that I also did search SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter

py.test mixing fixtures and asyncio coroutines

I am building some tests for Python 3 code using py.test. The code accesses a PostgreSQL database using aiopg (an asyncio-based interface to Postgres).
My main expectations:
Every test case should have access to a new asyncio event loop.
A test that runs too long will stop with a timeout exception.
Every test case should have access to a database connection.
I don't want to repeat myself when writing the test cases.
Using py.test fixtures I can get pretty close to what I want, but I still have to repeat myself a bit in every asynchronous test case.
This is what my code looks like:
import asyncio
from concurrent import futures

import pytest

@pytest.fixture(scope='function')
def tloop(request):
    # This fixture is responsible for getting a new event loop
    # for every test, and close it when the test ends.
    ...

def run_timeout(cor, loop, timeout=ASYNC_TEST_TIMEOUT):
    """
    Run a given coroutine with timeout.
    """
    task_with_timeout = asyncio.wait_for(cor, timeout)
    try:
        loop.run_until_complete(task_with_timeout)
    except futures.TimeoutError:
        # Timeout:
        raise ExceptAsyncTestTimeout()

@pytest.fixture(scope='module')
def clean_test_db(request):
    # Empty the test database.
    ...

@pytest.fixture(scope='function')
def udb(request, clean_test_db, tloop):
    # Obtain a connection to the database using aiopg
    # (That's why we need tloop here).
    ...

# An example for a test:
def test_insert_user(tloop, udb):
    @asyncio.coroutine
    def insert_user():
        # Do user insertion here ...
        yield from udb.insert_new_user(...
        ...
    run_timeout(insert_user(), tloop)
I can live with the solution that I have so far, but it can get cumbersome to define an inner coroutine and add the run_timeout line for every asynchronous test that I write.
I want my tests to look somewhat like this:
@some_magic_decorator
def test_insert_user(udb):
    # Do user insertion here ...
    yield from udb.insert_new_user(...
    ...
I attempted to create such a decorator in some elegant way, but failed. More generally, if my test looks like:
@some_magic_decorator
def my_test(arg1, arg2, ..., arg_n):
    ...
Then the produced function (After the decorator is applied) should be:
def my_test_wrapper(tloop, arg1, arg2, ..., arg_n):
    run_timeout(my_test(), tloop)
Note that some of my tests use other fixtures (besides udb for example), and those fixtures must show up as arguments to the produced function, or else py.test will not invoke them.
I tried using both wrapt and decorator python modules to create such a magic decorator, however it seems like both of those modules help me create a function with a signature identical to my_test, which is not a good solution in this case.
This can probably be solved using eval or a similar hack, but I was wondering if there is something elegant that I'm missing here.
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:
# tests/test_foo.py
import asyncio

@asyncio.coroutine
def test_coro(loop):
    yield from asyncio.sleep(0.1)
    assert 0
# tests/conftest.py
import asyncio

import pytest

@pytest.yield_fixture
def loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    yield loop
    loop.close()

def pytest_pycollect_makeitem(collector, name, obj):
    """Collect asyncio coroutines as normal functions, not as generators."""
    if asyncio.iscoroutinefunction(obj):
        return list(collector._genfunctions(name, obj))

def pytest_pyfunc_call(pyfuncitem):
    """If ``pyfuncitem.obj`` is an asyncio coroutine function, execute it via
    the event loop instead of calling it directly."""
    testfunction = pyfuncitem.obj
    if not asyncio.iscoroutinefunction(testfunction):
        return
    # Copied from _pytest/python.py:pytest_pyfunc_call()
    funcargs = pyfuncitem.funcargs
    testargs = {}
    for arg in pyfuncitem._fixtureinfo.argnames:
        testargs[arg] = funcargs[arg]
    coro = testfunction(**testargs)  # Will not execute the test yet!
    # Run the coro in the event loop
    loop = testargs.get('loop', asyncio.get_event_loop())
    loop.run_until_complete(coro)
    return True  # TODO: What to return here?
So I basically let pytest collect asyncio coroutines like normal functions. I also intercept test execution for functions: if the to-be-tested function is a coroutine, I execute it in the event loop. It works with or without a fixture creating a new event loop instance per test.
Edit: According to Ronny Pfannschmidt, something like this will be added to pytest after the 2.7 release. :-)
Every test case should have access to a new asyncio event loop.
The test suite of asyncio uses unittest.TestCase. It uses the setUp() method to create a new event loop, and addCleanup(loop.close) closes the event loop automatically, even on error.
Sorry, I don't know how to write this with py.test if you don't want to use TestCase. But if I remember correctly, py.test supports unittest.TestCase.
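For illustration, a minimal sketch of that TestCase pattern (the class name and the test body are placeholders):
import asyncio
import unittest

class InsertUserTest(unittest.TestCase):
    def setUp(self):
        # Fresh event loop per test; addCleanup closes it even if the test fails.
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        self.addCleanup(self.loop.close)

    def test_sleep(self):
        # Placeholder coroutine; a real test would drive the database here.
        self.loop.run_until_complete(asyncio.sleep(0.01))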
A test that runs too long will stop with a timeout exception.
You can use loop.call_later() with a function that raises a BaseException as a watchdog.
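A sketch of a closely related watchdog, swapped in here for portability: instead of raising from the callback, the callback cancels the task, so run_until_complete() raises CancelledError when the time budget is exceeded (slow_operation is a placeholder for the coroutine under test):
import asyncio

@asyncio.coroutine
def slow_operation():  # placeholder, written in the question's pre-async/await style
    yield from asyncio.sleep(10)

loop = asyncio.new_event_loop()
task = loop.create_task(slow_operation())
watchdog = loop.call_later(1.0, task.cancel)  # 1-second budget
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    print("test timed out")
finally:
    watchdog.cancel()
    loop.close()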