Flask unittest and sqlalchemy using all connections - postgresql

I've just run into an issue running unit tests on my Flask app now that I have roughly 100 of them. All of the tests pass individually, but when run all at once they fail with the following error:
OperationalError: (OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
Everything is running in a VirtualBox/Vagrant/Ubuntu 12.04 instance on my local machine. My Postgres max_connections is set to 100, so I'm assuming the connections aren't closing and that after running ~100 tests I use up all the available ones.
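A quick way to confirm that connections really are piling up is to watch pg_stat_activity while the suite runs; a minimal sketch using psycopg2 (assuming a local "postgres" superuser):

import time
import psycopg2

# poll the server once a second and print how many connections are open
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
conn.autocommit = True
with conn.cursor() as cur:
    while True:
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("open connections:", cur.fetchone()[0])
        time.sleep(1)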
The question "Flask unit tests with SQLAlchemy and PostgreSQL exhausts db connections" looks like the exact same problem. Mike/zzzeek (SQLAlchemy dev) even responded to it saying that something may be happening in create_app(), so I've included that below as well.
Does this mean I'm not closing my connections somewhere? All of these errors are triggered by db.create_all() in my test's setUp() method.
# test.py
class TestCase(DataMixin, Base):
    """Base test class"""

    def create_app(self):
        return create_app(TestConfig())

    def setUp(self):
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
# app.py
def create_app(config=None):
    app = Flask(__name__)

    # Config
    app.config.from_object(BaseConfig())
    if config is not None:
        app.config.from_object(config)

    # Extensions
    db.init_app(app)
    mail.init_app(app)
    bcrypt.init_app(app)

    # Blueprints
    app.register_blueprint(core_blueprint, url_prefix='/')
    app.register_blueprint(accounts_blueprint, url_prefix='/account')
    app.register_blueprint(admin_blueprint, url_prefix='/admin')
    app.register_blueprint(cart_blueprint, url_prefix='/cart')

    # Login Manager
    login_manager.setup_app(app, add_context_processor=True)
    login_manager.login_view = "accounts.login"
    login_manager.user_callback = load_user

    # Templates
    app.jinja_env.globals['is_admin'] = is_admin
    app.jinja_env.globals['is_staff'] = is_staff

    @app.context_processor
    def inject_cart():
        cart = count = None
        if current_user.is_authenticated():
            cart = current_user.get_cart()
        return dict(cart=cart)

    # Error Handling
    @app.errorhandler(404)
    def page_not_found(error):
        return render_template('404.html'), 404

    return app

UPDATE: Tested and fixed
Instead of making a new connection and re-creating your database for every test (slow), you can use subtransactions and roll back after each test.
The connections are reused, so this also fixes the problem you're having.
class TestCase(Base):

    @classmethod
    def setUpClass(cls):
        cls.app = create_app(MyConfig())
        cls.client = cls.app.test_client()
        cls._ctx = cls.app.test_request_context()
        cls._ctx.push()
        db.create_all()

    @classmethod
    def tearDownClass(cls):
        db.session.remove()
        db.drop_all()
        db.get_engine(cls.app).dispose()

    def setUp(self):
        self._ctx = self.app.test_request_context()
        self._ctx.push()
        db.session.begin(subtransactions=True)

    def tearDown(self):
        db.session.rollback()
        db.session.close()
        self._ctx.pop()
If you also need a fresh application instance for each test, create it in setUp as well, but leave the one in setUpClass in place.
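For example, the per-test application could be created like this (a sketch that reuses create_app and MyConfig from the full example below):

    def setUp(self):
        # a fresh app and request context for this test, in addition to the
        # class-level ones created in setUpClass
        self.app = create_app(MyConfig())
        self.client = self.app.test_client()
        self._ctx = self.app.test_request_context()
        self._ctx.push()
        db.session.begin(subtransactions=True)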
The full test example below requires flask_sqlalchemy and psycopg2. Create a test database named "test" and set its connection limit to 15.
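For reference, one way to create that database from Python (a sketch assuming a local "postgres" superuser; doing it in psql works just as well):

import psycopg2

conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE test CONNECTION LIMIT 15;")
conn.close()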
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from unittest import TestCase as Base

db = SQLAlchemy()


def create_app(config=None):
    app = Flask(__name__)
    app.config.from_object(config)
    db.init_app(app)
    return app


class MyConfig(object):
    SQLALCHEMY_DATABASE_URI = "postgresql://localhost/test"
    TESTING = True


class TestCase(Base):

    @classmethod
    def setUpClass(cls):
        cls.app = create_app(MyConfig())
        cls.client = cls.app.test_client()
        cls._ctx = cls.app.test_request_context()
        cls._ctx.push()
        db.create_all()

    @classmethod
    def tearDownClass(cls):
        db.session.remove()
        db.drop_all()

    def setUp(self):
        self._ctx = self.app.test_request_context()
        self._ctx.push()
        db.session.begin(subtransactions=True)

    def tearDown(self):
        db.session.rollback()
        db.session.close()
        self._ctx.pop()


class TestModel(TestCase):
    def test_01(self):
        pass
    def test_02(self):
        pass
    def test_03(self):
        pass
    def test_04(self):
        pass
    def test_05(self):
        pass
    def test_06(self):
        pass
    def test_07(self):
        pass
    def test_08(self):
        pass
    def test_09(self):
        pass
    def test_10(self):
        pass
    def test_11(self):
        pass
    def test_12(self):
        pass
    def test_13(self):
        pass
    def test_14(self):
        pass
    def test_15(self):
        pass
    def test_16(self):
        pass


if __name__ == "__main__":
    import unittest
    unittest.main()

I found the answer here -- https://stackoverflow.com/a/17998485/1870623 and a great explanation here -- https://stackoverflow.com/a/16390645/1870623
The solution is to add db.get_engine(self.app).dispose() to tearDown():
class TestCase(Base):
    def setUp(self):
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        db.get_engine(self.app).dispose()  # This

Related

Slick: Updates not available when fetched just after

I was trying out this Slick example, and when I create an entry and then fetch it right after, I don't get the record back. I modified the test case as below.
val response = create(BankProduct("car loan", 1)).flatMap(getById)

whenReady(response) { p =>
  assert(p.get === BankProduct("car loan", 1))
}
The above fails because the created BankProduct cannot be fetched immediately.
It uses an H2 database for this, and below is the configuration.
trait H2DBComponent extends DBComponent {
  val logger = LoggerFactory.getLogger(this.getClass)
  val driver = slick.driver.H2Driver
  import driver.api._

  val randomDB = "jdbc:h2:mem:test" + UUID.randomUUID().toString() + ";"
  val h2Url = randomDB + "MODE=MySql;DATABASE_TO_UPPER=false;INIT=runscript from 'src/test/resources/schema.sql'\\;runscript from 'src/test/resources/schemadata.sql'"

  val db: Database = {
    logger.info("Creating test connection")
    Database.forURL(url = h2Url, driver = "org.h2.Driver")
  }
}
private[repo] trait BankProductTable extends BankTable { this: DBComponent =>
  import driver.api._

  private[BankProductTable] class BankProductTable(tag: Tag) extends Table[BankProduct](tag, "bankproduct") {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val name = column[String]("name")
    val bankId = column[Int]("bank_id")
    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)
    def * = (name, bankId, id.?) <> (BankProduct.tupled, BankProduct.unapply)
  }

  protected val bankProductTableQuery = TableQuery[BankProductTable]
  protected def bankProductTableAutoInc = bankProductTableQuery returning bankProductTableQuery.map(_.id)
}
I don't understand why this is happening or how to avoid it.
I tried adding the property autoCommit as well, but that didn't work either.
I'd appreciate any help clarifying this.
This might be due to the in-memory database content being lost after the create call closes its connection. According to the H2 docs:

By default, closing the last connection to a database closes the database. For an in-memory database, this means the content is lost. To keep the database open, add ;DB_CLOSE_DELAY=-1 to the database URL. To keep the content of an in-memory database as long as the virtual machine is alive, use jdbc:h2:mem:test;DB_CLOSE_DELAY=-1.

However, after adding DB_CLOSE_DELAY=-1 there will be errors, because

runscript from 'src/test/resources/schemadata.sql'

is executed on each connection, so some refactoring is necessary such that the database is populated only once, on initialization.
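One possible refactoring, sketched against the question's H2DBComponent (the exact wiring depends on your setup): keep DB_CLOSE_DELAY=-1 in the shared URL and run the INIT scripts once, on a throwaway connection.

import java.util.UUID

trait H2DBComponent extends DBComponent {
  val driver = slick.driver.H2Driver
  import driver.api._

  // keep the in-memory database alive across connections
  val h2Url = "jdbc:h2:mem:test" + UUID.randomUUID().toString() +
    ";MODE=MySql;DATABASE_TO_UPPER=false;DB_CLOSE_DELAY=-1"

  // run the schema/data scripts exactly once, on a throwaway JDBC connection;
  // the content survives because of DB_CLOSE_DELAY=-1
  java.sql.DriverManager.getConnection(h2Url +
    ";INIT=runscript from 'src/test/resources/schema.sql'\\;runscript from 'src/test/resources/schemadata.sql'").close()

  val db: Database = Database.forURL(url = h2Url, driver = "org.h2.Driver")
}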

mocking snowflake connection

I have a SnowflakeApi class in Python which is just a wrapper on top of the SnowflakeConnection class. My SnowflakeApi is:
import logging
import os

from snowflake.connector import connect


class SnowflakeApi(object):
    """
    Wrapper to handle snowflake connection
    """

    def __init__(self, account, warehouse, database, user, pwd):
        """
        Handles snowflake connection. Connection must be closed once it is no longer needed
        :param account:
        :param warehouse:
        :param database:
        """
        self.__acct = self._account_url(account)
        self.__wh = warehouse
        self.__db = database
        self.__connection = None
        self.__user = user
        self.__pwd = pwd

    def __create_connection(self):
        try:
            # set the proxy here
            conn = connect(
                account=self.__acct
                , user=self.__user
                , password=self.__pwd
                , warehouse=self.__wh
                , database=self.__db
            )
            return conn
        except:
            raise Exception(
                "Unable to connect to snowflake for user: '{0}', warehouse: '{1}', database: '{2}'".format(
                    self.__user, self.__wh, self.__db))

    def get_connection(self):
        """
        Gets a snowflake connection. If the connection has already been initialised it is returned
        otherwise a new connection is created
        :param credentials_func: method to get database credentials.
        :return:
        """
        try:
            if self.__connection is None:
                self.__connection = self.__create_connection()
            return self.__connection
        except:
            raise Exception("Unable to initialise Snowflake connection")

    def close_connection(self):
        """
        Closes snowflake connection.
        :return:
        """
        self.__connection.close()
The namespace for SnowflakeApi is connection.snowflake_connection.SnowflakeApi (i.e. I have snowflake_connection.py in a folder called connections).
I want to write unit tests for this class using pytest and unittest.mock. The problem is that I want to mock connect so that a MagicMock object is returned and no database call is made. So far I have tried:
monkeypatch.setattr(connections.snowflake_connection, "connect", return_value="")
I also changed my original class to just import snowflake, created a mock object, and used monkeypatch.setattr(snowflake_connection, "snowflake", my_mock_snowflake). That didn't work either.
In short, I have tried a couple of other things, but nothing has worked. All I want to do is mock the Snowflake connection so that no actual database call is made.
Here is another way, where we mock the Snowflake connector, cursor and fetchall using Python's mock and patch.
import mock
import unittest
from datetime import datetime, timedelta

import feed_daily_report


class TestFeedDailyReport(unittest.TestCase):

    @mock.patch('snowflake.connector.connect')
    def test_compare_partner(self, mock_snowflake_connector):
        tod = datetime.now()
        delta = timedelta(days=8)
        date_8_days_ago = tod - delta
        query_result = [('partner_1', date_8_days_ago)]
        mock_con = mock_snowflake_connector.return_value
        mock_cur = mock_con.cursor.return_value
        mock_cur.fetchall.return_value = query_result
        result = feed_daily_report.main()
        assert result == True
An example using unittest.mock and patching the connection:
from unittest import TestCase
from unittest.mock import patch

from connection.snowflake_connection import SnowflakeApi


class TestSnowFlakeApi(TestCase):

    @patch('connection.snowflake_connection.connect')
    def test_get_connection(self, mock_connect):
        api = SnowflakeApi('the_account',
                           'the_warehouse',
                           'the_database',
                           'the_user',
                           'the_pwd')
        api.get_connection()
        mock_connect.assert_called_once_with(account='account_url',  # Will be the output of self._account_url()
                                             user='the_user',
                                             password='the_pwd',
                                             warehouse='the_warehouse',
                                             database='the_database')
If you're testing other classes that use your SnowFlakeApi wrapper, then you should use the same approach, but patch the SnowFlakeApi itself in those tests.
from package.module import SomeClassThatUsesSnowFlakeApi


class TestSomeClassThatUsesSnowFlakeApi(TestCase):

    @patch('package.module.SnowFlakeApi')
    def test_some_func(self, mock_api):
        instance = SomeClassThatUsesSnowFlakeApi()
        instance.do_something()
        mock_api.assert_called_once_with(...)
        mock_api.return_value.get_connection.assert_called_once_with()
Also note that if you're using Python 2, you will need to pip install mock and then from mock import patch.
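A small compatibility shim covers both cases:

try:
    from unittest.mock import patch  # Python 3
except ImportError:
    from mock import patch  # Python 2: pip install mock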
Using stubbing and dependency injection
from ... import SnowflakeApi


def some_func(*args, api=None, **kwargs):
    api = api or SnowflakeApi(...)
    conn = api.get_connection()
    # Do some work
    return result
Your test
from unittest.mock import MagicMock


class SnowflakeApiStub(SnowflakeApi):
    def __init__(self):
        # bypass the real constructor; set the (name-mangled) attribute that
        # SnowflakeApi.get_connection() looks up
        self._SnowflakeApi__connection = MagicMock()


def test_some_func():
    stub = SnowflakeApiStub()
    mock_connection = stub._SnowflakeApi__connection
    mock_cursor = mock_connection.cursor.return_value
    expect = ...

    actual = some_func(api=stub)

    assert expect == actual
    assert mock_cursor.execute.called
An example using cursor, execute, and fetchone.
import snowflake.connector


class AlongSamePolly:
    def __init__(self, conn):
        self.conn = conn

    def row_count(self):
        cur = self.conn.cursor()
        query = cur.execute('select count(*) from schema.table;')
        return query.fetchone()[0]  # returns (12345,)


# I like to dependency inject the snowflake connection object in my classes.
# This lets me use Snowflake Python Connector's built in context manager to
# rollback any errors and automatically close connections. Then you don't have
# try/except/finally blocks everywhere in your code.
#
if __name__ == '__main__':
    with snowflake.connector.connect(user='user', password='password') as con:
        same = AlongSamePolly(con)
        print(same.row_count())
        # => 12345
In the unit tests you mock out the expected method calls - cursor(), execute(), fetchone() - and define the return values to follow the chain of defined mocks.
import unittest
from unittest import mock

from along_same_polly import AlongSamePolly


class TestAlongSamePolly(unittest.TestCase):
    def test_row_count(self):
        with mock.patch('snowflake.connector.connect') as mock_snowflake_conn:
            mock_query = mock.Mock()
            mock_query.fetchone.return_value = (123,)
            mock_cur = mock.Mock()
            mock_cur.execute.return_value = mock_query
            mock_snowflake_conn.cursor.return_value = mock_cur
            same = AlongSamePolly(mock_snowflake_conn)
            self.assertEqual(same.row_count(), 123)


if __name__ == '__main__':
    unittest.main()
The following solution worked for me.
import snowflake.connector


def test_connect(env_var_setup, monkeypatch):
    monkeypatch.setattr(snowflake.connector.connection.SnowflakeConnection,
                        "connect", mocked_sf_connect)
    # calling snowflake connector method
    file_job_map(env_var_setup).connect()


# mocked connection
def mocked_sf_connect(self, **kwargs):
    print("Connection Successfully Established")
    return True

How to apply play-evolutions when running tests in play-framework?

I have problems with evolutions when running tests in the Play Framework, using:
playframework v2.6.6 for scala
play-slick v3.0.2
play-slick-evolutions v3.0.2
The test looks like this:
class TestFooController extends PlaySpec with GuiceOneServerPerSuite {

  "foo endpoint should store some data" in {
    val wsClient = app.injector.instanceOf[WSClient]
    val url = s"http://localhost:$port/foo"
    val requestData = Json.obj("foo" -> "bar")
    val response = await(wsClient.url(url).post(requestData))
    response.status mustBe OK
  }
}
The database configuration looks like this:
slick.dbs.default.driver="slick.driver.H2Driver$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:play"
Assume there is an evolution script which creates the table foos; this script works fine in dev mode.
When running the test the following error is thrown:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[JdbcSQLException: Table "foos" not found;
The table foos could not be found, so I assume that the database evolutions have not been applied.
Then I changed the database configuration to PostgreSQL, which is used in dev mode:
slick.dbs.default.driver = "slick.driver.PostgresDriver$"
slick.dbs.default.db.driver = "org.postgresql.Driver"
slick.dbs.default.db.url = "jdbc:postgresql://localhost:5432/foo-test"
slick.dbs.default.db.user = "user"
slick.dbs.default.db.password = "password"
With this configuration the tests work fine and data is stored in the database, so the evolutions ran just fine.
Now the problem is that the database is not cleaned up after the tests. I'd like to run each test suite against a clean database.
To sum up: with the H2 configuration evolutions are not applied; with PostgreSQL evolutions are applied but not cleaned up.
This happens even though it is explicitly defined in application.test.conf:
play.evolutions.autoApply=true
play.evolutions.autoApplyDowns=true
I also tried
play.evolutions.db.default.autoApply=true
play.evolutions.db.default.autoApplyDowns=true
Neither had any effect.
Then I tried to do this manually via:
def withManagedDatabase[T](block: Database => T): Unit = {
  val dbapi = app.injector.instanceOf[DBApi]
  val database = dbapi.database("default")
  Evolutions.applyEvolutions(database)
  block(database)
  Evolutions.cleanupEvolutions(database)
}
and then changing the test to:
"foo endpoint should store some data" in withManagedDatabase { _ =>
...
}
For the H2 database configuration this has no effect; the same error that the table foos cannot be found is thrown. For the PostgreSQL configuration an evolutions exception is thrown:
play.api.db.evolutions.InconsistentDatabase: Database 'default' is in an inconsistent state![An evolution has not been applied properly. Please check the problem and resolve it manually before marking it as resolved.]
I want evolution ups running before and evolution downs running after each test suite. How can this be achieved?
You can use the following to apply evolutions before each suite and clean up afterwards:
trait DatabaseSupport extends BeforeAndAfterAll {
  this: Suite with ServerProvider =>

  private lazy val db = app.injector.instanceOf[DBApi]

  override protected def beforeAll(): Unit = {
    super.beforeAll()
    initializeEvolutions(db.database("default"))
  }

  override protected def afterAll(): Unit = {
    cleanupEvolutions(db.database("default"))
    super.afterAll()
  }

  private def initializeEvolutions(database: Database): Unit = {
    Evolutions.cleanupEvolutions(database)
    Evolutions.applyEvolutions(database)
  }

  private def cleanupEvolutions(database: Database): Unit = {
    Evolutions.cleanupEvolutions(database)
  }
}
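Usage is then just a matter of mixing the trait into the suite (a sketch; the class name is illustrative, and GuiceOneServerPerSuite satisfies the Suite with ServerProvider self-type):

class TestFooControllerWithEvolutions extends PlaySpec with GuiceOneServerPerSuite with DatabaseSupport {

  "foo endpoint should store some data" in {
    // same body as in the original TestFooController; it now runs against a
    // database that had evolutions applied in beforeAll
    succeed
  }
}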
This is working for me:
class DAOSpec extends PlaySpec with GuiceOneAppPerSuite {
  val dbUrl = sys.env.getOrElse("DATABASE_URL", "postgres://foo:password@localhost:5432/foo")

  val testConfig = Map("db.default.url" -> dbUrl)

  implicit override def fakeApplication() = new GuiceApplicationBuilder().configure(testConfig).build()

  lazy val database = app.injector.instanceOf[Database]
  lazy val dao = app.injector.instanceOf[DAO]

  "create" must {
    "work" in Evolutions.withEvolutions(database) {
      val foo = await(dao.create("foo"))
      foo.id must not be null
    }
  }
}

How can Slick use different Database driver based on Application environment (e.g. test, prod, etc.)

Slick 3.0.0
play 2.6.2
I am experimenting with Slick and ran into an interesting issue. I hope the solution is trivial and I am just overthinking this.
I have implemented the following simple code.
case class Content(content: String)

class ContentTable(tag: Tag) extends Table[Content](tag, "content") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def content = column[String]("content")

  override def * : ProvenShape[Content] = (content).mapTo[Content]
}

object ContentDb {
  val db = Database.forConfig("databaseConfiguration")

  lazy val contents = TableQuery[ContentTable]

  def all: Seq[Content] = Await.result(db.run(contents.result), 2 seconds)
}
For this code to work, it requires the following import (among others, of course):
import slick.jdbc.H2Profile.api._
or alternatively
import slick.jdbc.PostgresProfile.api._
Now, I think (and please correct me if I am wrong) that the database driver should be a configuration detail. That is, I'd run an in-memory H2 database while developing, run against a test PostgreSQL instance when testing, and then run against another instance in production. I mean, the whole idea of abstracting the driver is to have this flexibility... I think.
Now, I have done some research and found that I could do something like this:
trait DbComponent {
  val driver: JdbcProfile

  import driver.api._

  val db: Database
}

trait H2DbComponent extends DbComponent {
  val driver: JdbcProfile = slick.jdbc.H2Profile

  import driver.api._

  val db = Database.forConfig("databaseConfiguration")
}

trait Contents {
  def all: Seq[Content]
}

object Contents {
  def apply: Contents = new ContentsDb with H2DbComponent
}

trait ContentsDb extends Contents {
  this: DbComponent =>

  import driver.api._

  class ContentTable(tag: Tag) extends Table[Content](tag, "content") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def content = column[String]("content")

    override def * : ProvenShape[Content] = content.mapTo[Content]
  }

  lazy val contents = TableQuery[ContentTable]

  def all: Seq[Content] = Await.result(db.run(contents.result), 2 seconds)
}
Then I can use dependency injection to inject the right instance for each entity that I have. Not ideal, but possible. So I started digging into how to do conditional dependency injection based on which environment the Play application is running in.
I was expecting something similar to the following:
@Component(env = ("prod", "test"))
class ProductionContentsDb extends ContentsDb with PostgresDbComponent

@Component(env = "dev")
class DevContentsDb extends ContentsDb with H2DbComponent
But, no luck...
EDIT
Just after I finished writing this and started reading it again, I became curious whether we could just have something similar to:
class DbComponent @Inject() (driver: JdbcProfile) {
  import driver.api._

  val db = Database.forConfig("databaseConfiguration")
}
You can create separate configuration files for each environment, e.g.
application.conf       -- local
application.prod.conf  -- prod, with contents like:

include "application.conf"
# update slick configurations here

Then, when running the application in different stages, feed in the right file with -Dconfig.resource=application.prod.conf

Best Practice of Using Connection Pool in Slick 3.0.0 Together with Play Framework

I followed the documentation of Slick 3.0.0-RC1, using Typesafe Config as database connection configuration. Here is my conf:
database = {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/postgre"
  user = "postgre"
}
I created a file Locale.scala as:
package models

import slick.driver.PostgresDriver.api._
import scala.concurrent.Future

case class Locale(id: String, name: String)

class Locales(tag: Tag) extends Table[Locale](tag, "LOCALES") {
  def id = column[String]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def * = (id, name) <> (Locale.tupled, Locale.unapply)
}

object Locales {
  private val locales = TableQuery[Locales]

  val db = Database.forConfig("database")

  def count: Future[Int] =
    try db.run(locales.length.result)
    finally db.close
}
Then I got confused about when and where the proper place is to create the Database object using

val db = Database.forConfig("database")

If I create db like this, there will be as many Database objects as I have models. So what is the best practice to make this work?
You can create an object DBLocator and initialise its Database with a lazy val, so that it is created only on demand.
You can then invoke the accessor defined on DBLocator wherever you need a database handle, instead of giving each model its own.
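A minimal sketch of that idea, reusing the "database" config block and the Locales table above (the DBLocator name is the one suggested; everything else follows the question's code):

import slick.driver.PostgresDriver.api._
import scala.concurrent.Future

object DBLocator {
  // created once, on first access, and shared by every model
  lazy val db: Database = Database.forConfig("database")
}

object Locales {
  private val locales = TableQuery[Locales]

  // no per-model Database any more; go through the locator instead
  def count: Future[Int] = DBLocator.db.run(locales.length.result)
}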