I'm struggling to write py.test fixtures for managing my app's database that maximize speed, support pytest-xdist parallelization of tests, and isolate the tests from each other.
I'm using Flask-SQLAlchemy 2.1 against a PostgreSQL 9.4 database.
Here's the general outline of what I'm trying to accomplish:
$ py.test -n 3 spins up three test sessions for running tests.
Within each session, a py.test fixture runs once to set up a transaction and create the database tables, and at the end of the session it rolls back the transaction. Creating the database tables needs to happen within a PostgreSQL transaction that's only visible to that particular test session; otherwise the parallelized test sessions created by pytest-xdist conflict with each other.
A second py.test fixture that runs for every test connects to the existing transaction in order to see the created tables, creates a nested savepoint, runs the test, then rolls back to the nested savepoint.
Ideally, these pytest fixtures support tests that call db.session.rollback(). There's a potential recipe for accomplishing this at the bottom of this SQLAlchemy doc.
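Roughly, that recipe looks like the following (paraphrased from the linked SQLAlchemy doc, written as a plain unittest TestCase rather than as pytest fixtures; the engine URL is a placeholder):

import unittest

from sqlalchemy import create_engine, event
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgresql://localhost/test')  # placeholder URL
Session = sessionmaker()


class SomeTest(unittest.TestCase):
    def setUp(self):
        # outer, non-ORM transaction that the whole test runs inside
        self.connection = engine.connect()
        self.trans = self.connection.begin()
        self.session = Session(bind=self.connection)

        # run the test itself inside a SAVEPOINT...
        self.session.begin_nested()

        # ...and restart that SAVEPOINT whenever the test calls
        # session.rollback() or session.commit()
        @event.listens_for(self.session, "after_transaction_end")
        def restart_savepoint(session, transaction):
            if transaction.nested and not transaction._parent.nested:
                session.expire_all()
                session.begin_nested()

    def tearDown(self):
        self.session.close()
        self.trans.rollback()   # discard everything the test did
        self.connection.close()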
Ideally the pytest fixtures should yield the db object, not just the session, so that folks can write tests without having to remember to use a session that's different from the standard db.session they use throughout the app.
Here's what I have so far:
import pytest

# create_app() is my Flask application factory
# db is just 'db = SQLAlchemy()' + 'db.init_app(app)' within the create_app() function
from app import create_app, db as _db


@pytest.yield_fixture(scope='session', autouse=True)
def app():
    '''Session-wide test application'''
    a = create_app('testing')
    with a.app_context():
        yield a


@pytest.yield_fixture(scope='session')
def db_tables(app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    yield _db

    trans.rollback()
    connection.close()


@pytest.yield_fixture(scope='function')
def db(db_tables):
    '''db session that is joined to the existing transaction'''
    # I am quite sure this is broken, but it's the general idea:
    # bind an individual Session to the existing transaction
    db_tables.session = db_tables.Session(bind=db_tables.connection)

    # start the session in a SAVEPOINT...
    db_tables.session.begin_nested()

    # yield the db object, not just the session, so that tests
    # can be written transparently using the db object
    # without requiring someone to understand the intricacies of these
    # py.test fixtures or having to remember when to use a session that's
    # different than db.session
    yield db_tables

    # rollback to the savepoint before the test ran
    db_tables.session.rollback()
    db_tables.session.remove()  # not sure this is needed
Here are the most useful references I've found while googling:
http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
http://koo.fi/blog/2015/10/22/flask-sqlalchemy-and-postgresql-unit-testing-with-transaction-savepoints/
https://github.com/mitsuhiko/flask-sqlalchemy/pull/249
I'm a couple years late here, but you might be interested in pytest-flask-sqlalchemy, a plugin I wrote to help address this exact problem.
The plugin provides two fixtures, db_session and db_engine, which you can use like regular Session and Engine objects to run updates that will get rolled back at the end of the test. It also exposes a few configuration directives (mocked-engines and mocked-sessions) that will mock out connectables in your app and replace them with these fixtures so that you can run methods and be sure that any state changes will get cleaned up when the test exits.
The plugin should work with a variety of databases, but it's been tested most heavily against Postgres 9.6 and is in production in the test suite for https://dedupe.io. You can find some examples in the documentation that should help you get started, but if you're willing to provide some code I'd be happy to demonstrate how to use the plugin, too.
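To give a rough idea of the shape it takes (an untested sketch; myapp, create_app and the User model are placeholders for your own code):

# conftest.py
import pytest

from myapp import create_app            # your application factory
from myapp.database import db           # the Flask-SQLAlchemy object


@pytest.fixture(scope='session')
def app():
    app = create_app('testing')
    with app.app_context():
        db.create_all()
        yield app


@pytest.fixture(scope='session')
def _db(app):
    # pytest-flask-sqlalchemy looks for a session-scoped fixture named _db
    # that returns the SQLAlchemy database object
    return db


# test_users.py
from myapp.models import User            # placeholder model

def test_create_user(db_session):
    # db_session behaves like a normal session, but everything it does
    # is rolled back when the test finishes
    db_session.add(User(name='alice'))
    db_session.commit()

The mocked-sessions / mocked-engines directives go into your pytest configuration (for example setup.cfg under [tool:pytest]) and take dotted paths to the session or engine objects you want patched, e.g. mocked-sessions = myapp.database.db.session.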
I had a similar issue trying to combine yield fixtures. Unfortunately, according to the docs you are not able to combine more than one yield level.
But you might be able to find a workaround using request.addfinalizer:
@pytest.fixture(scope='session', autouse=True)
def app(request):
    '''Session-wide test application'''
    a = create_app('testing')
    # push an application context for the whole session and pop it at teardown
    # (a plain `return` inside `with a.app_context():` would pop it immediately)
    ctx = a.app_context()
    ctx.push()
    request.addfinalizer(ctx.pop)
    return a


@pytest.fixture(scope='session')
def db_tables(request, app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    def close_db_session():
        trans.rollback()
        connection.close()

    request.addfinalizer(close_db_session)

    return _db
I'm using Gatling to performance test an application, but the application has a Postgres database, which relies on jdbc to connect. Is it possible to run a scenario to test the database performance, and if so, how?
I found the jdbcFeeder, but I do not know how to execute the scenario I've set up, since the exec doesn't accept the url I provide it... jdbc:postgresql://host:port/database?user=<user>&password=<password>. It returns a java.lang.IllegalArgumentException: could not be parsed into a proper Uri, missing host.
Example:
val sqlQueryFeeder: RecordSeqFeederBuilder[Any] =
  jdbcFeeder(connectionUrl, user, password, s"SELECT * FROM $schema.$table")

val scn: ScenarioBuilder = scenario("Test 1")
  .feed(sqlQueryFeeder)
  .exec(http.baseUrl("jdbc:postgresql://host:port/database?user=<user>&password=<password>"))

setUp(scn.inject(atOnceUsers(100)))
Not out of the box.
jdbcFeeder is about driving performance tests with data read from a database (i.e. it's about "feeding" a test with data, not about testing a database).
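For illustration, jdbcFeeder is normally used like this - the database supplies the test data, and the scenario still drives an HTTP endpoint (a sketch; the URLs, table and column names are made up):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._

class UsersSimulation extends Simulation {

  // each row returned by the query becomes a set of Gatling session attributes
  val userFeeder = jdbcFeeder(
    "jdbc:postgresql://localhost:5432/mydb", "user", "password",
    "SELECT id FROM users")

  val scn = scenario("Fetch users")
    .feed(userFeeder)
    .exec(http("get user").get("/users/${id}"))  // "id" comes from the feeder row

  setUp(scn.inject(atOnceUsers(100)))
    .protocols(http.baseUrl("http://localhost:8080"))
}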
The gatling-jdbc project appears to be intended to do what you want, but it didn't work when I tried it out (Gatling 3.0-RC4). I'm pretty new to Gatling and Scala, so the problem might be me, or it might even be the Gatling release candidate - you might have better luck if you try it out.
There's also gatling-sql but I've not tried it out myself and it hasn't been touched in over a year.
I am going to write a new endpoint to unlock the domain object, something like:
../domainObject/{id}/unlock
As I apply TDD, I have started by writing an API test first. When the test fails, I will move on to writing integration and unit tests and implementing the real code.
For the API test, I need locked domain data in the fixture setup in order to test the unlock endpoint that will be created. However, there is no endpoint for locking the domain object in the system (our Quartz jobs lock the data), so I would have to create the data by using the database directly.
I know that using the database directly in an API test is not best practice; if you need test data, you should create it through the API too, e.g.
../domainObject/{id}/lock
Should this scenario be an exception to that rule? Or is there another practice I should follow?
Thanks.
There is no good or bad practice here; it's all about how much you value end-to-end testing of the system, including the database.
Testing the DB part will require a little more infrastructure, because you'll have to either use an in-memory database for faster test runs, or set up a full-fledged permanent test DB in your dev environment. When doing the latter, it might be a good idea to have a separate test suite for end-to-end tests that runs less frequently than your normal test suite, because it will inevitably be slower.
In that scenario, you'll have preexisting test data always present in the DB, and a locked object can be one of those records.
If you don't care about all this, you can stub the data store abstraction (repository, DAO or whatever) to return a canned locked object.
In the following code:
val users = TableQuery[Users]
def getUserById(id:Int) = db(users.filter(_.id === id).result)
From what I understand, getUserById would create a new prepared statement every time it is executed and then discard it. Is there a way to cache the prepared statement so it is created only once and reused across calls?
The documentation for Slick indicates that you need to enable prepared statement caching on the connection pool configuration.
Here is quite a good article on it also.
The summary is that Slick seems to cache the strings that are used to prepare the actual statements, but delegates the caching of the actual prepared statements to the underlying connection pool implementation.
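For example, with HikariCP and the PostgreSQL JDBC driver, the relevant bits of application.conf might look something like this (a sketch; the property names below are pgjdbc's, and other drivers such as MySQL's Connector/J use different ones like cachePrepStmts):

mydb {
  connectionPool = "HikariCP"
  url = "jdbc:postgresql://localhost:5432/mydb"   // placeholder coordinates
  user = "user"
  password = "password"
  // passed through as JDBC connection properties; pgjdbc keeps a
  // per-connection statement cache and switches to server-side prepared
  // statements after `prepareThreshold` executions
  properties = {
    prepareThreshold = 1
    preparedStatementCacheQueries = 512
    preparedStatementCacheSizeMiB = 8
  }
}

You would then load it with Database.forConfig("mydb") as usual.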
I'm using Pyramid and SQLAlchemy for my REST server. For logging purposes I need a way to determine the user name inside a PostgreSQL after-update trigger function. One way is to set a runtime configuration parameter before executing the update and use that value in the trigger function.
In psql terminal it can be done using
set myapp.user_name = 'User Name';
This parameter is subsequently read by the PostgreSQL trigger.
Is it possible to set this parameter in Pyramid / SQLAlchemy application?
I suppose I can use SQLAlchemy Events. But I'm not sure which event is correct for this case.
You can register a Pyramid event handler which would set the parameter on any new request:
from pyramid import events

def on_new_request(event):
    """
    you can access the request as event.request
    and the current registry settings as event.request.registry.settings
    """
    session = DBSession()
    session.execute("SET blah TO 'foo'")

def app(global_config, **settings):
    config = Configurator(...)
    config.add_subscriber(on_new_request, events.NewRequest)
See Deployment Settings and events.NewRequest for more details.
One thing to watch out for is making sure you're always using the same session object - SQLAlchemy maintains a pool of connections and, if you commit your session, all subsequent operations will use a brand-new session which is not pre-configured with your settings. In other words, you should not call session.commit(), session.rollback(), transaction.commit(), etc. in your code; instead, rely on ZopeTransactionExtension committing or rolling back the transaction at the end of the request/response cycle, which is the recommended practice anyway.
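For reference, that "single shared session" setup is the standard Pyramid scaffold (a sketch; assumes pyramid_tm and zope.sqlalchemy are installed):

from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# one thread-local session, reused for the whole request; pyramid_tm plus
# ZopeTransactionExtension commit or abort it at the end of the request,
# so application code never calls commit()/rollback() itself
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))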
I'm not sure what the exact command you're running with set myapp.user = "User Name".
When you say "In psql this can be done using..." I'm assuming you are running some sort of SQL command, although the command you listed is not a valid command, at least not typically.
In any event, arbitrary SQL can be executed in SQLAlchemy using the .execute() method.
session = Session()
session.execute("select 1;")
http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html?highlight=execute#sqlalchemy.orm.session.Session.execute
How can (or should) I configure Grails integration tests to roll back transactions automatically when using MongoDB as the datasource?
(I'm using Grails 2.2.1 + mongodb plugin 1.2.0)
For Spock integration tests I defined a MongoIntegrationSpec that gives some control over cleaning up test data.
dropDbOnCleanup = true // will drop the entire DB after every feature method is executed.
dropDbOnCleanupSpec = true // will drop the entire DB after the spec is complete.
dropCollectionsOnCleanup = ["collectionA", "collectionB", ...] // drops collections after every feature method is executed.
dropCollectionsOnCleanupSpec = ["collectionA", "collectionB", ...] // drops collections after the spec is complete.
dropNewCollectionsOnCleanup = true // after every feature method is executed, all new collections are dropped
dropNewCollectionsOnCleanupSpec = true // after the spec is complete, all new collections are dropped
Here's the source
https://github.com/onetribeyoyo/mtm/tree/dev/src/test/integration/com/onetribeyoyo/util/MongoIntegrationSpec.groovy
And the project has a couple usage examples too.
I don't think that it's even possible, because MongoDB doesn't support transactions. You could use the suggested static transactional = 'mongo', but it only helps if you didn't flush your data (a rare situation, I think).
Instead you could clean up the database in setUp() manually. You can drop the collection for the domain you're going to test, like:
MyDomain.collection.drop()
and (optionally) fill it with all the data required for your test.
You can use static transactional = 'mongo' in the integration test and/or service class.
Refer to the MongoDB plugin documentation for more details.
MongoDB does not support transactions, hence you cannot use them. The options you have are:
1. Go around and drop the collections for the domain classes you used.
MyDomain.collection.drop()       // if you use the MongoDB plugin alone, without Hibernate
MyDomain.mongo.collection.drop() // if you use the MongoDB plugin together with Hibernate
The drawback is that you have to do this for each domain class you used.
2. Drop the whole database (you don't need to create it explicitly, but you can):
String host = grailsApplication.config.grails.mongo.host
Integer port = grailsApplication.config.grails.mongo.port
String databaseName = grailsApplication.config.grails.mongo.databaseName
def mongo = new GMongo(host, port)
mongo.getDB(databaseName).dropDatabase() // this takes 0.3-0.5 seconds on my machine
The second option is easier and faster. To make this work for all your tests, extend IntegrationSpec and add the code to drop the database in the cleanup block (I am assuming you are using the Spock test framework), or do a similar thing for JUnit-style tests.
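A minimal sketch of such a base spec, reusing the GMongo snippet above (untested; the import path for IntegrationSpec depends on your Grails/Spock version):

import grails.plugin.spock.IntegrationSpec   // grails.test.spock.IntegrationSpec on newer Grails
import com.gmongo.GMongo

abstract class MongoCleanupSpec extends IntegrationSpec {

    def grailsApplication

    def cleanup() {
        // drop the whole test database after every feature method
        String host = grailsApplication.config.grails.mongo.host
        Integer port = grailsApplication.config.grails.mongo.port
        String databaseName = grailsApplication.config.grails.mongo.databaseName
        new GMongo(host, port).getDB(databaseName).dropDatabase()
    }
}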
Hope this helps!