pytest parameterized method setup - pytest

I have a parameterized pytest test method, test_1. Before all the parameterized cases are run for this test method, I'd like to call another method, tmp_db_uri, which creates a temporary database and yields the uri for the database. I only want to call that generator once, so that I can use the same temporary database for all the test cases. I thought that calling it from a fixture (db_uri) would do the trick, since I assumed fixtures are created once per test function, but it turns out the fixture is called for each parametrized case, and a new temporary database is created each time.
What is the correct way to do this? Is there a way to run setup for this method once, before all the cases are run, so that just one tmp_db_uri is used? I don't want the temporary database hanging around for the entire test module - just for the duration of this one test (cleanup is handled by a context manager on tmp_db_uri).
I currently have something that looks similar to this:
@pytest.fixture
def db_uri(tmp_db_uri):
    return tmp_db_uri

@pytest.mark.parametrize(("item1", "item2"), ((1, "a"), (2, "b")))
def test_1(item1, item2, db_uri):
    print("do something")

You can create a module-level fixture, so that it's created only once for the entire test module, or you can keep a global variable and return the db if it has already been created, creating it otherwise.
@pytest.fixture(scope="module")
def db_uri(tmp_db_uri):
    return tmp_db_uri
or
TMP_DB = None

@pytest.fixture
def db_uri(tmp_db_uri):
    global TMP_DB
    if not TMP_DB:
        # do your stuff to create tmp_db
        TMP_DB = tmp_db_uri
    return TMP_DB
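If the database should live only for the duration of this one parametrized test rather than the whole module, another option is to wrap the test in its own class and give the fixture class scope. A minimal sketch (the class name is just illustrative, and it assumes tmp_db_uri is, or can be made, a class-scoped fixture):
import pytest

class TestWithTempDb:
    @pytest.fixture(scope="class")
    def db_uri(self, tmp_db_uri):
        # created once for this class, torn down when the class finishes
        return tmp_db_uri

    @pytest.mark.parametrize(("item1", "item2"), ((1, "a"), (2, "b")))
    def test_1(self, item1, item2, db_uri):
        print("do something")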

Related

pytest ScopeMismatch exception

I am trying to create integration tests using the pytest framework. I am grouping all the tests for a feature under a common class and want to set up some test data before executing the test cases. I am trying to do this using a fixture with class scope; this fixture would hit the db and perform the setup required for this feature. Into this class-level fixture I pass another fixture (a session for accessing the db) with function scope. When I try to do this, I get a ScopeMismatch error. I understand this is not allowed in pytest. Is there a better way of accessing the db without changing the scope of the session fixture, or another way to implement this test scenario?
conftest.py
@pytest.fixture(scope='module')
def connection(engine):
    connection = engine.connect()
    yield connection
    connection.close()

@pytest.fixture(scope='function')
def session(connection):
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.commit()
test_feature.py
@pytest.fixture(scope='class')
def prepare_test_data(session):
    rs = session.execute("select user from user limit 10")
    # update test data files / db tables

@pytest.mark.usefixtures('prepare_test_data')
class TestNodeListResource:
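One way around the mismatch (a sketch, not from the original thread; class_session is a hypothetical fixture name) is to build a separate class-scoped session on top of the module-scoped connection and have the class-level fixture depend on that instead of the function-scoped session:
@pytest.fixture(scope='class')
def class_session(connection):
    # class scope may depend on module scope, so there is no ScopeMismatch here
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.rollback()

@pytest.fixture(scope='class')
def prepare_test_data(class_session):
    class_session.execute("select user from user limit 10")
    # update test data files / db tables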

pytest fixture for certain test cases

Having a test class and test cases like below:
class TestSomething:
    ...
    @pytest.fixture(autouse=True)
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    def test_abc_1(self):
        ...
    def test_abc_2(self):
        ...
    def test_def_1(self):
        ...
    def test_def_2(self):
        ...
Problem is, before_and_after_testcases() runs for every test case in the class. Is it possible to apply the fixture only to test cases with the abc pattern in the function name? The fixture is not supposed to run for test_def_xxx, but I don't know how to exclude those test cases.
The autouse=True fixture is automatically applied to all of the tests; to remove that auto-application, remove autouse=True.
But now that fixture isn't applied to any of them!
To manually apply the fixture to the tests that need it (see the sketch below), you can either:
add that fixture's name as a parameter (if you need the value that the fixture provides), or
decorate the tests which need that fixture with @pytest.mark.usefixtures('fixture_name_here')
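A short sketch of both options, using the names from the question:
import pytest

class TestSomething:
    @pytest.fixture  # note: no autouse=True any more
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    # option 1: request the fixture by name (use this if you need its value)
    def test_abc_1(self, before_and_after_testcases):
        ...

    # option 2: apply the fixture without using its value
    @pytest.mark.usefixtures('before_and_after_testcases')
    def test_abc_2(self):
        ...

    # the fixture does not run for this test at all
    def test_def_1(self):
        ...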
Another approach is to split your one test class into multiple test classes, grouping together the tests which need the particular auto-used fixtures.
disclaimer: I'm a pytest developer, though I don't think that's entirely relevant to this answer; SO just requires disclosure of affiliation

passing variable to pytest_sessionfinish

I am looking for a way to pass some variable from session start to session end in pytest.
More specifically, I am using a session-scoped fixture in which I create a serial-com object, e.g.:
@pytest.fixture(scope="session")
def init_setup(request):
    # Create serial_com object
    ...
After this step I run some tests.
Finally I have pytest_sessionfinish(session, exitstatus);
in here I would like to close the com object I created, e.g.:
def pytest_sessionfinish(session, exitstatus):
    # close comport obj.
    ...
The problem here is that I don't know whether it is possible to store my comport object in one of these two arguments.
If not, is there a better way of doing this, i.e. having a method to clean up the objects you created during the test setup phase (not during the test, but during the setup)?
Another way you could do it is via a yield fixture. This returns your serial object and then allows you to do the teardown afterwards.
Try something like this:
@pytest.fixture(scope="session")
def init_setup(request):
    # Create my serial object here
    yield myserialobject
    myserialobject.destroy()
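Any test that requests the fixture then receives the same session-wide object, and the code after the yield runs once when the whole session is torn down, which removes the need for pytest_sessionfinish here. A usage sketch (the test name and send() call are just illustrative, not from the original question):
def test_send_command(init_setup):
    serial_conn = init_setup  # the object yielded by the fixture above
    serial_conn.send(b"PING")
    ...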

pytest fixture with parametrization from another fixture

I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file, and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. Due to the parametrization, the test function is invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...
@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
Any ideas?
Update: Let me add that I also did search SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter
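One pragmatic middle ground, not from the original question, is to keep the loop but report each object as its own sub-test so that one failing object no longer aborts the rest. A minimal sketch, assuming the pytest-subtests plugin is installed (it provides the subtests fixture):
def test_object_foo(subtests, server_objects):
    for server_object in server_objects:
        # each iteration is reported as a separate sub-test result
        with subtests.test(server_object=str(server_object)):
            assert server_object == 'foo'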

py.test mixing fixtures and asyncio coroutines

I am building some tests for python3 code using py.test. The code accesses a Postgresql Database using aiopg (Asyncio based interface to postgres).
My main expectations:
Every test case should have access to a new asyncio event loop.
A test that runs too long will stop with a timeout exception.
Every test case should have access to a database connection.
I don't want to repeat myself when writing the test cases.
Using py.test fixtures I can get pretty close to what I want, but I still have to repeat myself a bit in every asynchronous test case.
This is what my code looks like:
@pytest.fixture(scope='function')
def tloop(request):
    # This fixture is responsible for getting a new event loop
    # for every test, and closing it when the test ends.
    ...

def run_timeout(cor, loop, timeout=ASYNC_TEST_TIMEOUT):
    """
    Run a given coroutine with timeout.
    """
    task_with_timeout = asyncio.wait_for(cor, timeout)
    try:
        loop.run_until_complete(task_with_timeout)
    except futures.TimeoutError:
        # Timeout:
        raise ExceptAsyncTestTimeout()

@pytest.fixture(scope='module')
def clean_test_db(request):
    # Empty the test database.
    ...

@pytest.fixture(scope='function')
def udb(request, clean_test_db, tloop):
    # Obtain a connection to the database using aiopg
    # (that's why we need tloop here).
    ...

# An example of a test:
def test_insert_user(tloop, udb):
    @asyncio.coroutine
    def insert_user():
        # Do user insertion here ...
        yield from udb.insert_new_user(...
        ...
    run_timeout(insert_user(), tloop)
I can live with the solution that I have so far, but it can get cumbersome to define an inner coroutine and add the run_timeout line for every asynchronous test that I write.
I want my tests to look somewhat like this:
@some_magic_decorator
def test_insert_user(udb):
    # Do user insertion here ...
    yield from udb.insert_new_user(...
    ...
I attempted to create such a decorator in some elegant way, but failed. More generally, if my test looks like:
@some_magic_decorator
def my_test(arg1, arg2, ..., arg_n):
    ...
Then the produced function (after the decorator is applied) should be:
def my_test_wrapper(tloop, arg1, arg2, ..., arg_n):
    run_timeout(my_test(), tloop)
Note that some of my tests use other fixtures (besides udb for example), and those fixtures must show up as arguments to the produced function, or else py.test will not invoke them.
I tried using both wrapt and decorator python modules to create such a magic decorator, however it seems like both of those modules help me create a function with a signature identical to my_test, which is not a good solution in this case.
This can probably be solved using eval or a similar hack, but I was wondering if there is something elegant that I'm missing here.
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:
# tests/test_foo.py
import asyncio

@asyncio.coroutine
def test_coro(loop):
    yield from asyncio.sleep(0.1)
    assert 0
# tests/conftest.py
import asyncio

import pytest

@pytest.yield_fixture
def loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    yield loop
    loop.close()

def pytest_pycollect_makeitem(collector, name, obj):
    """Collect asyncio coroutines as normal functions, not as generators."""
    if asyncio.iscoroutinefunction(obj):
        return list(collector._genfunctions(name, obj))

def pytest_pyfunc_call(pyfuncitem):
    """If ``pyfuncitem.obj`` is an asyncio coroutine function, execute it via
    the event loop instead of calling it directly."""
    testfunction = pyfuncitem.obj
    if not asyncio.iscoroutinefunction(testfunction):
        return
    # Copied from _pytest/python.py:pytest_pyfunc_call()
    funcargs = pyfuncitem.funcargs
    testargs = {}
    for arg in pyfuncitem._fixtureinfo.argnames:
        testargs[arg] = funcargs[arg]
    coro = testfunction(**testargs)  # Will not execute the test yet!
    # Run the coro in the event loop
    loop = testargs.get('loop', asyncio.get_event_loop())
    loop.run_until_complete(coro)
    return True  # TODO: What to return here?
So I basically let pytest collect asyncio coroutines like normal functions. I also intercept test execution: if the to-be-tested function is a coroutine, I execute it in the event loop. It works with or without a fixture creating a new event loop instance per test.
Edit: According to Ronny Pfannschmidt, something like this will be added to pytest after the 2.7 release. :-)
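For comparison, a similar effect can be had nowadays with the pytest-asyncio plugin. A minimal sketch, assuming the plugin is installed:
import asyncio

import pytest

@pytest.mark.asyncio
async def test_sleep():
    # the plugin runs the coroutine on an event loop it manages
    await asyncio.sleep(0.1)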
Every test case should have access to a new asyncio event loop.
The test suite of asyncio uses unittest.TestCase. It uses the setUp() method to create a new event loop, and addCleanup(loop.close) closes the event loop automatically, even on error.
Sorry, I don't know how to write this with py.test if you don't want to use TestCase. But if I remember correctly, py.test supports unittest.TestCase.
A test that runs too long will stop with a timeout exception.
You can use loop.call_later() with a function which raises a BaseException as a watchdog.