pytest ScopeMismatch exception

I am trying to create integration tests using the pytest framework. I am grouping all the tests for a feature under a common class and want to set up some test data before the test cases execute. I am trying to do this with a class-scoped fixture that would hit the database and perform the setup required for the feature. This class-level fixture depends on another fixture (session, for accessing the db) that has function scope. When I try to do this, I get a ScopeMismatch error. I understand this is not allowed in pytest. Is there a better way of accessing the db without changing the scope of the session fixture, or some other way to implement this test scenario?
conftest.py
@pytest.fixture(scope='module')
def connection(engine):
    connection = engine.connect()
    yield connection
    connection.close()

@pytest.fixture(scope='function')
def session(connection):
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.commit()
test_feature.py
@pytest.fixture(scope='class')
def prepare_test_data(session):
    rs = session.execute("select user from user limit 10")
    # update test data files / db tables

@pytest.mark.usefixtures('prepare_test_data')
class TestNodeListResource:
    ...
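One way out, since a class-scoped fixture may depend on module-scoped fixtures: give the data preparation its own class-scoped session instead of reusing the function-scoped one. A minimal sketch, assuming the connection fixture above (transaction handling and cleanup of the prepared data are elided and need the same care as in the function-scoped session fixture):
@pytest.fixture(scope='class')
def class_session(connection):
    # A separate session just for per-class setup; class scope may
    # depend on module scope, so no ScopeMismatch is raised.
    session = Session(bind=connection)
    yield session
    session.close()

@pytest.fixture(scope='class')
def prepare_test_data(class_session):
    rs = class_session.execute("select user from user limit 10")
    # update test data files / db tables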

Related

How do I use a different fixture implementation based on a command line argument?

I am testing an app that has user profiles.
Normally, I tear down the profile after each test, but that is very slow, so I wanted the option to run the tests faster by keeping the profile and only tearing down changes after each test.
This is what I have now, and it works fine:
@pytest.fixture(scope="session")
def session_scope_app():
    with empty_app_started() as app:
        yield app

@pytest.fixture(scope="session")
def session_scope_app_with_profile_loaded(session_scope_app):
    with profile_loaded(session_scope_app):
        yield session_scope_app

if TEAR_DOWN_PROFILE_AFTER_EACH_TEST:
    @pytest.fixture
    def setup(session_scope_app):
        with profile_loaded(session_scope_app):
            yield session_scope_app
else:
    @pytest.fixture
    def setup(session_scope_app_with_profile_loaded):
        with profile_state_preserved(session_scope_app_with_profile_loaded):
            yield session_scope_app_with_profile_loaded
This produces a fixture setup that, as far as other tests are concerned,
behaves the same way regardless of whether the profile is torn down after each test.
Now I want to turn TEAR_DOWN_PROFILE_AFTER_EACH_TEST into a command line option. How can I do this? Command line options are not yet available at the test collection stage, and I can't just put the if into the fixture function body, because the two variants of setup depend on different fixtures.
There are two ways of doing that, but first, let's add the command line option itself.
def pytest_addoption(parser):
    parser.addoption("--tear-down-profile-after-each-test",
                     action="store_true",
                     default=True)
    parser.addoption("--no-tear-down-profile-after-each-test", "-T",
                     action="store_false",
                     dest="tear_down_profile_after_each_test")
Now, we can either invoke fixtures dynamically, or create a tiny plugin that shuffles our fixtures.
Invoke the fixture dynamically
This is very simple. Instead of depending on a fixture via function arguments,
we can call request.getfixturevalue(name) from inside the fixture.
@pytest.fixture
def setup(request, session_scope_app):
    if request.config.option.tear_down_profile_after_each_test:
        with profile_loaded(session_scope_app):
            yield session_scope_app
    else:
        session = request.getfixturevalue(
            session_scope_app_with_profile_loaded.__name__
        )
        with profile_state_preserved(session):
            yield session
(It's ok to depend on session_scope_app since session_scope_app_with_profile_loaded depends on it anyway.)
Pros: PyCharm is happy. Cons: you won't be seeing session_scope_app_with_profile_loaded in --setup-plan.
Make a simple plugin
Plugins have the benefit of having access to the configuration.
def pytest_configure(config):
    class Plugin:
        if config.option.tear_down_profile_after_each_test:
            @pytest.fixture
            def setup(self, session_scope_app):
                with profile_loaded(session_scope_app):
                    yield session_scope_app
        else:
            @pytest.fixture
            def setup(self, session_scope_app_with_profile_loaded):
                with profile_state_preserved(session_scope_app_with_profile_loaded):
                    yield session_scope_app_with_profile_loaded

    config.pluginmanager.register(Plugin())
Pros: You get excellent --setup-plan. Cons: PyCharm won't recognize that setup is a fixture.

pytest fixture with parametrization from another fixture

I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. Due to the parametrization, it causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...
@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
Any ideas?
Update: Let me add that I also did search SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter
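One direction that avoids needing fixture values, sketched under the assumption that Connection can be instantiated at collection time from SD_LIST: build the parameter list inside pytest_generate_tests() itself and parametrize the test with the server objects directly, bypassing the fixtures. The trade-off is that every server is contacted during collection.
# A sketch, reusing SD_LIST and Connection from the question.
def pytest_generate_tests(metafunc):
    if 'server_object' in metafunc.fixturenames:
        server_objects = []
        for server_definition in SD_LIST:
            connection = Connection(server_definition.url)
            server_objects.extend(connection.get_objects())
        metafunc.parametrize('server_object', server_objects)

def test_object_foo(server_object):
    # Each server object now fails or passes independently.
    assert server_object == 'foo'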

pytest parameterized method setup

I have a parametrized pytest test method, test_1. Before all the parametrized cases are run for this test method, I'd like to call another method, tmp_db_uri, which creates a temporary database and yields the URI for that database. I only want to call that generator once, so that I can use the same temporary database for all the test cases. I thought that calling it from a fixture (db_uri) would do the trick, since I thought fixtures are created once per test, but it seems that the fixture is called for each case of this test, and a new temporary database is created each time.
What is the correct way to do this? Is there a way to run a setup for this method before all the cases are run, to use just one tmp_db_uri? I don't want the temporary database hanging around for the entire test module - just for the duration of this one test (cleanup is handled by a context manager on tmp_db_uri).
I currently have something that looks similar to this:
@pytest.fixture
def db_uri(tmp_db_uri):
    return tmp_db_uri

@pytest.mark.parametrize(("item1", "item2"), ((1, "a"), (2, "b")))
def test_1(item1, item2, db_uri):
    print("do something")
You can create a module-level fixture so that it's created only once for the entire test module, or you can keep a global variable and return the db if it has already been created, creating it otherwise.
@pytest.fixture(scope="module")
def db_uri(tmp_db_uri):
    return tmp_db_uri
or
TMP_DB = None

@pytest.fixture
def db_uri(tmp_db_uri):
    global TMP_DB
    if not TMP_DB:
        # do your stuff to create tmp_db
        TMP_DB = tmp_db_uri
    return TMP_DB
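If the database should live only for the duration of this one parametrized test rather than the whole module, another option is to wrap the test in a class and use class scope. A sketch, assuming tmp_db_uri is (or can be redefined as) class-scoped, since a class-scoped fixture cannot depend on a function-scoped one:
@pytest.fixture(scope="class")
def db_uri(tmp_db_uri):
    # Created once for the class, torn down when the class finishes.
    return tmp_db_uri

class TestWithTmpDb:
    @pytest.mark.parametrize(("item1", "item2"), ((1, "a"), (2, "b")))
    def test_1(self, item1, item2, db_uri):
        print("do something")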

Play framework - testing data access layer without started application

Is there a way to write tests for the data access objects (DAOs) in Play Framework 2.x without starting an app?
Tests with a fake app are relatively slow, even if the database is an in-memory H2 as the docs suggest.
After experiencing similar issues with the execution time of tests using FakeApplication, I switched to a different approach. Instead of creating one fake app per test, I start a real instance of the application and run all my tests against it. With a large test suite this is a big win in total execution time.
http://yefremov.net/blog/fast-functional-tests-play/
For unit testing, a good solution is mocking. If you are using Play 2.4 and above, Mockito is already built in, and you do not have to import Mockito separately.
For integration testing, you cannot run tests without a fake application, since your DAOs may require application context information, for example the information defined in application.conf. In this case, you must set up a FakeApplication with a fake application configuration so that the DAOs have that information.
This sample repo, https://github.com/luongbalinh/play-mongo/tree/master/test, contains tests at the service and controller layers, including both unit tests with Mockito and integration tests. Integration tests for DAOs should be very similar to the service tests. Hopefully it gives you a hint of how to use Mockito to write DAO tests.
It turns out the Database object can be constructed directly from the Databases factory, so I ended up with a trait like this one:
trait DbTests extends BeforeAndAfterAll with SuiteMixin { this: Suite =>
  val dbUrl = sys.env.getOrElse("DATABASE_URL",
    "jdbc:postgresql://localhost:5432/test?user=user&password=pass")
  val database = Databases("org.postgresql.Driver", dbUrl, "tests")

  override def afterAll() = {
    database.shutdown()
  }
}
then use it the following way:
class SampleDaoTest extends DbTests {
  val myDao = new MyDao(database) // construct the dao; database is injected so it can be passed

  "read from db" in {
    myDao.read(id = 123) mustEqual MyClass(123)
  }
}

py.test mixing fixtures and asyncio coroutines

I am building some tests for Python 3 code using py.test. The code accesses a PostgreSQL database using aiopg (an asyncio-based interface to Postgres).
My main expectations:
Every test case should have access to a new asyncio event loop.
A test that runs too long will stop with a timeout exception.
Every test case should have access to a database connection.
I don't want to repeat myself when writing the test cases.
Using py.test fixtures I can get pretty close to what I want, but I still have to repeat myself a bit in every asynchronous test case.
This is what my code looks like:
@pytest.fixture(scope='function')
def tloop(request):
    # This fixture is responsible for getting a new event loop
    # for every test, and closing it when the test ends.
    ...

def run_timeout(cor, loop, timeout=ASYNC_TEST_TIMEOUT):
    """
    Run a given coroutine with timeout.
    """
    task_with_timeout = asyncio.wait_for(cor, timeout)
    try:
        loop.run_until_complete(task_with_timeout)
    except futures.TimeoutError:
        # Timeout:
        raise ExceptAsyncTestTimeout()

@pytest.fixture(scope='module')
def clean_test_db(request):
    # Empty the test database.
    ...

@pytest.fixture(scope='function')
def udb(request, clean_test_db, tloop):
    # Obtain a connection to the database using aiopg
    # (That's why we need tloop here).
    ...

# An example for a test:
def test_insert_user(tloop, udb):
    @asyncio.coroutine
    def insert_user():
        # Do user insertion here ...
        yield from udb.insert_new_user(...
        ...
    run_timeout(insert_user(), tloop)
I can live with the solution that I have so far, but it can get cumbersome to define an inner coroutine and add the run_timeout line for every asynchronous test that I write.
I want my tests to look somewhat like this:
@some_magic_decorator
def test_insert_user(udb):
    # Do user insertion here ...
    yield from udb.insert_new_user(...
    ...
I attempted to create such a decorator in some elegant way, but failed. More generally, if my test looks like:
@some_magic_decorator
def my_test(arg1, arg2, ..., arg_n):
    ...
then the produced function (after the decorator is applied) should be:
def my_test_wrapper(tloop, arg1, arg2, ..., arg_n):
    run_timeout(my_test(), tloop)
Note that some of my tests use other fixtures (besides udb for example), and those fixtures must show up as arguments to the produced function, or else py.test will not invoke them.
I tried using both the wrapt and decorator Python modules to create such a magic decorator, but it seems that both of them help me create a function with a signature identical to my_test, which is not a good solution in this case.
This can probably be solved using eval or a similar hack, but I was wondering if there is something elegant that I'm missing here.
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:
# tests/test_foo.py
import asyncio

@asyncio.coroutine
def test_coro(loop):
    yield from asyncio.sleep(0.1)
    assert 0

# tests/conftest.py
import asyncio
import pytest

@pytest.yield_fixture
def loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    yield loop
    loop.close()

def pytest_pycollect_makeitem(collector, name, obj):
    """Collect asyncio coroutines as normal functions, not as generators."""
    if asyncio.iscoroutinefunction(obj):
        return list(collector._genfunctions(name, obj))

def pytest_pyfunc_call(pyfuncitem):
    """If ``pyfuncitem.obj`` is an asyncio coroutine function, execute it
    via the event loop instead of calling it directly."""
    testfunction = pyfuncitem.obj
    if not asyncio.iscoroutinefunction(testfunction):
        return
    # Copied from _pytest/python.py:pytest_pyfunc_call()
    funcargs = pyfuncitem.funcargs
    testargs = {}
    for arg in pyfuncitem._fixtureinfo.argnames:
        testargs[arg] = funcargs[arg]
    coro = testfunction(**testargs)  # Will not execute the test yet!
    # Run the coro in the event loop
    loop = testargs.get('loop', asyncio.get_event_loop())
    loop.run_until_complete(coro)
    return True  # TODO: What to return here?
So I basically let pytest collect asyncio coroutines like normal functions. I also intercept test execution for functions: if the to-be-tested function is a coroutine, I execute it in the event loop. This works with or without a fixture creating a new event loop instance per test.
Edit: According to Ronny Pfannschmidt, something like this will be added to pytest after the 2.7 release. :-)
Every test case should have access to a new asyncio event loop.
The test suite of asyncio uses unittest.TestCase. Its setUp() method creates a new event loop, and addCleanup(loop.close) closes the loop automatically, even on error.
Sorry, I don't know how to write this with py.test if you don't want to use TestCase. But if I remember correctly, py.test supports unittest.TestCase.
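A minimal sketch of that TestCase pattern (some_coro is a hypothetical coroutine standing in for the code under test):
import asyncio
import unittest

class InsertUserTest(unittest.TestCase):
    def setUp(self):
        # A fresh event loop per test; addCleanup closes it even on error.
        self.loop = asyncio.new_event_loop()
        self.addCleanup(self.loop.close)

    def test_insert_user(self):
        # some_coro() is a stand-in for the coroutine under test.
        self.loop.run_until_complete(some_coro())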
A test that runs too long will stop with a timeout exception.
You can use loop.call_later() with a function that raises a BaseException as a watchdog.
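A sketch of one way to wire that up: cancel the task from the call_later callback rather than raising directly, since an exception raised inside a plain callback is only logged by the loop, not propagated to the test:
import asyncio

def run_with_watchdog(loop, coro, timeout):
    task = loop.create_task(coro)
    # Cancel the test coroutine if it is still running when the timer fires;
    # run_until_complete then raises asyncio.CancelledError.
    watchdog = loop.call_later(timeout, task.cancel)
    try:
        loop.run_until_complete(task)
    finally:
        watchdog.cancel()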