Fast test execution in a Play Framework fake application - Scala

Running tests as described here
"Spec" should {
"example" in new WithApplication {
...
}
}
is unacceptably slow for me. This is because new WithApplication starts and stops the framework for every example. Don't get me wrong, the framework itself loads very fast, but if a database is configured (surprise!), the situation becomes terrible.
Here are some measurements:
"The database layer" should {
"test1" in {
1 must be equalTo(1)
}
...
"test20" in {
1 must be equalTo(1)
}
}
Execution time: 2 seconds. The same test with WithApplication on every example takes 9 seconds.
I was able to achieve much better results thanks to this answer:
import play.api.Play
import play.api.test.FakeApplication
import org.specs2.mutable.Specification
import scalikejdbc._
class MySpec extends Specification {

  var fake: FakeApplication = _

  step { fake = FakeApplication(...) }
  step { Play.start(fake) }

  "The database layer" should {
    "some db test" in {
      DB localTx { implicit session =>
        ...
      }
    }
    "another db test" in {
      DB localTx { implicit session =>
        ...
      }
    }
    step { Play.stop() }
  }
}
Pros: performance boost.
Cons: I need to copy-paste the setup and tear-down code, because I don't know how to reuse it (by "reuse" I mean something like "class MySpec extends Specification with NoWasteOfTime").
new WithApplication() calls Helpers.running, which looks like this:
synchronized {
  try {
    Play.start(fakeApp)
    block
  } finally {
    Play.stop()
    play.api.libs.ws.WS.resetClient()
  }
}
so I can't completely emulate the Helpers.running behaviour (resetClient is not visible to my code) without resorting to reflection.
Please suggest how to eliminate these cons, or a different approach that accomplishes the same goal.

I don't know if it is the best possible solution, but this thread:
Execute code before and after specification
describes a solution for reusable setup code. I implemented it with small modifications: for me the beforeAll step did not run, so I also added the sequential modifier.
import play.api.Play
import play.api.test.FakeApplication
import org.specs2.mutable._
import org.specs2.specification._

class PlayAppSpec extends Specification with BeforeAllAfterAll {
  sequential

  lazy val app: FakeApplication = {
    FakeApplication()
  }

  def beforeAll() {
    Play.start(app)
  }

  def afterAll() {
    Play.stop()
  }
}
import org.specs2.specification.Step

trait BeforeAllAfterAll extends Specification {
  // see the specs2 User Guide, linked below
  override def map(fragments: => Fragments) = {
    beforeAll()
    fragments ^ Step(afterAll)
  }

  def beforeAll()
  def afterAll()
}
I think map would read better as Step(...) ^ fragments ^ Step(...), but that did not run the beforeAll for me. The user guide, under "Global setup/teardown", says to use a lazy val.
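For reference, the "Global setup/teardown" idiom the user guide describes boils down to putting the setup behind a lazy val in an object, so that it runs at most once per JVM. A rough, untested sketch (the object name is mine, and in this form the application is deliberately never stopped):
import play.api.Play
import play.api.test.FakeApplication

object GlobalPlayApp {
  // evaluated at most once per JVM: the first specification that touches it
  // starts the application; there is no corresponding stop step
  lazy val app: FakeApplication = {
    val fake = FakeApplication()
    Play.start(fake)
    fake
  }
}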
Overall it was a pain to set this up. My problem was
Exception in thread "Thread-145" java.net.SocketException: Connection reset
or
Configuration error[Cannot connect to database [default]] (Configuration.scala:559)
and, when reusing the same FakeApplication:
SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
I think this is much more logical than always creating a new application for every "in" block, or putting all the tests into one block.
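For completeness, this is roughly how the reuse asked for in the question ends up looking: each spec class just extends PlayAppSpec and inherits the single Play.start/Play.stop. A sketch, assuming the scalikejdbc setup from the question (the spec name and the trivial expectation are placeholders):
import scalikejdbc._

class DatabaseLayerSpec extends PlayAppSpec {
  "The database layer" should {
    "run against the already started application" in {
      DB localTx { implicit session =>
        // run queries against the datasource of the FakeApplication started in beforeAll
        1 must be equalTo(1)
      }
    }
  }
}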

Related

When I run my test suites they fail with PSQLException: FATAL: sorry, too many clients already

I'm writing tests for my Play application and I want to run them with a real server so that I can fake all the answers from the external services.
In order to do that I extend PlaySpec and GuiceOneServerPerSuite and I override the method fakeApplication to create my routes and give them to the Guice Application
class MySpec extends PlaySpec with GuiceOneServerPerSuite {

  override def fakeApplication(): Application =
    GuiceApplicationBuilder().appRoutes(app => {
      case ("POST", "/url/") => app.injector.instanceOf(classOf[DefaultActionBuilder]) { Ok }
    }).globalApp(true).build()

  "Something" should {
    "work well" in {
      val wsClient = app.injector.instanceOf[WSClient]
      val service = new MyService(wsClient)
      service.method() mustBe ""
      app.injector.instanceOf[DBApi].databases().foreach(_.getConnection().close())
    }
  }
}
I have multiple test suites like this one. If I run them individually they work fine, but if I run them all together they fill up the connection pool and then everything fails with: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
My consideration: I think it happens because a new Play Guice application is created for each test suite. I also tried to close the connections of all the databases manually, but that didn't solve the problem.
We had the same problem, so we separate these two use cases (running all suites or just one).
This makes running all tests much faster, as the Play environment is only started once.
The suite looks like this:
class AcceptanceSpecSuite
  extends PlaySpec
  with GuiceOneAppPerSuite
  with BeforeAndAfter {

  // all specs
  override def nestedSuites: immutable.IndexedSeq[AcceptanceSpec] = Vector(
    // api
    new DatabaseTaskSpec,
    new HistoryPurgeTaskSpec,
    ...
  )

  override def fakeApplication(): Application =
    // your initialization
}
Each spec now looks like this:
@DoNotDiscover // important: it is only run if called explicitly
class DatabaseTaskSpec extends AcceptanceSpec {
...
In the parent class we can now switch between GuiceOneServerPerSuite and ConfiguredApp, depending on what you need:
trait AcceptanceSpec
  extends PlaySpec
  // with GuiceOneServerPerSuite // if you want to run only one spec on its own
  with ConfiguredApp // if you want to run them all as part of the suite
  with Logging
  with ScalaFutures
  with BeforeAndAfter {
...
I know it's a bit of a hack, so I am also interested in a more elegant solution ;).
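For reference, the pattern documented by scalatestplus-play is very close to this: a master Suites class mixes in GuiceOneAppPerSuite, and the nested, @DoNotDiscover specs mix in ConfiguredApp so they pick up the shared Application. A sketch (class names and the trivial assertion are mine; the GuiceOneAppPerSuite import path may differ between scalatestplus-play versions):
import org.scalatest.{DoNotDiscover, Suites}
import org.scalatestplus.play.{ConfiguredApp, PlaySpec}
import org.scalatestplus.play.guice.GuiceOneAppPerSuite

// Master suite: starts one Application and hands it down to the nested suites.
class AcceptanceSuite extends Suites(
  new DatabaseTaskSpec
  // add further @DoNotDiscover specs here
) with GuiceOneAppPerSuite

// Nested suite: only runs as part of the master suite and reuses its app.
@DoNotDiscover
class DatabaseTaskSpec extends PlaySpec with ConfiguredApp {
  "the task" must {
    "see the shared application" in {
      app.configuration must not be null
    }
  }
}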
You can make your DB instance a singleton. If you do that, it won't create multiple instances and therefore won't fill up the connection pool.
Something like this:
@Singleton
object TestDBProperties extends DBProperties {
  override val db: Database = Database.forURL(
    url = "jdbc:h2:mem:testdb;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=FALSE;",
    driver = "org.h2.Driver")
}
Hope this helps.

ScalaTest FlatSpec: Timeout for entire class

With the ScalaTest FlatSpec and the TimeLimits trait I can set a timeout for a line of code like so:
import org.scalatest.time.SpanSugar._
import org.scalatest.concurrent.TimeLimits
import org.scalatest.FlatSpec
class MyTestClass extends FlatSpec with TimeLimits {
  "My thread" must "complete on time" in {
    failAfter(100 millis) { infiniteLoop() }
    // I have also tried cancelAfter
  }
}
This should fail due to a timeout. However, when I run this test in IntelliJ it just runs forever.
Furthermore, I do not want to repeat the timeout for every method; I would like to configure it once for the entire class. PatienceConfig claims to do that, but it does not seem to do anything: the test still runs forever.
import org.scalatest.FlatSpec
import org.scalatest.time.{Millis, Span}
import org.scalatest.concurrent.{Eventually, IntegrationPatience}

class MyTestClass extends FlatSpec with Eventually with IntegrationPatience {
  implicit val defaultPatience = PatienceConfig(timeout = Span(100, Millis))

  "My thread" must "complete on time" in {
    infiniteLoop()
  }
}
I looked for a solution myself and came across this answer; it worked for me.
I am using FlatSpec and added the TimeLimitedTests trait:
with TimeLimitedTests
Then, inside the class, I set the limit for all of its tests by overriding the val defined by the trait:
val timeLimit: Span = Span(2000, Millis)
Finally, it did not work until I also overrode the interruptor, as suggested by Rumoku in the referenced answer:
override val defaultTestInterruptor: Interruptor = new Interruptor {
  override def apply(testThread: Thread): Unit = {
    println("Kindly die")
    testThread.stop() // deprecated. unsafe. do not use
  }
}
I hope this helps
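Putting the pieces together, a minimal, untested sketch of the whole class (the infinite loop stands in for the real test body; the interruptor override is the part taken from the referenced answer):
import org.scalatest.FlatSpec
import org.scalatest.concurrent.{Interruptor, TimeLimitedTests}
import org.scalatest.time.{Millis, Span}

class MyTestClass extends FlatSpec with TimeLimitedTests {
  // applies to every test in this class
  val timeLimit: Span = Span(2000, Millis)

  // forcefully stop the test thread, otherwise a busy loop ignores the interrupt
  override val defaultTestInterruptor: Interruptor = new Interruptor {
    override def apply(testThread: Thread): Unit = {
      println("Kindly die")
      testThread.stop() // deprecated and unsafe, but it does kill the loop
    }
  }

  "My thread" must "complete on time" in {
    while (true) {} // would run forever without the time limit
  }
}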

Play tests with database: "Too many connections"

To have a database with evolutions available in ScalaTest, I use this extension of the default PlaySpec, inspired by this SO question:
trait ResetDbSpec extends PlaySpec with BeforeAndAfterAll {

  lazy val appBuilder = new GuiceApplicationBuilder()
  lazy val injector = appBuilder.injector()
  lazy val databaseApi = injector.instanceOf[DBApi]

  override def beforeAll() = {
    Evolutions.applyEvolutions(databaseApi.database("default"))
  }

  override def afterAll() = {
    Evolutions.cleanupEvolutions(databaseApi.database("default"))
    databaseApi.database("default").shutdown()
  }
}
It applies database evolutions when the suite starts, and reverts them when the suite ends. A test then looks like
class ProjectsSpec extends ResetDbSpec with OneAppPerSuite { ...
After adding more tests like this, I hit a point where some tests that succeed when run alone fail with this error:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
As can be seen in the code above, I tried adding the line
databaseApi.database("default").shutdown()
in afterAll() to mitigate that, but it had no effect. I also tried not running the tests in parallel, with no effect either. Where am I opening db connections without closing them, and where should I call shutdown()?
N.B. I use Play 2.5.10 and Slick 3.1.
I have a lot of tests (about 500) and I don't get this error. The only difference I see with your code is that I add
databaseApi.database("default").getConnection().close()
and
Play.stop(fakeApplication)
for the integration tests.
Let me know if it changes anything.
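Applied to the ResetDbSpec from the question, that suggestion would look roughly like this. An untested sketch: building the Application with appBuilder.build() (so there is something to pass to Play.stop) is my addition, while the two extra lines in afterAll are the ones from this answer:
import org.scalatest.BeforeAndAfterAll
import org.scalatestplus.play.PlaySpec
import play.api.Play
import play.api.db.DBApi
import play.api.db.evolutions.Evolutions
import play.api.inject.guice.GuiceApplicationBuilder

trait ResetDbSpec extends PlaySpec with BeforeAndAfterAll {

  lazy val appBuilder = new GuiceApplicationBuilder()
  lazy val app = appBuilder.build()
  lazy val databaseApi = app.injector.instanceOf[DBApi]

  override def beforeAll() = {
    Evolutions.applyEvolutions(databaseApi.database("default"))
  }

  override def afterAll() = {
    Evolutions.cleanupEvolutions(databaseApi.database("default"))
    // the two additions suggested above
    databaseApi.database("default").getConnection().close()
    Play.stop(app)
  }
}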
Although it does not explain what is happening with the connection leakage, I finally managed to hack around this:
Add jdbc to your libraryDependencies, even if the Play-Slick FAQ tells you not to do it:
# build.sbt
libraryDependencies += jdbc
Restart sbt to take changes into account. In IntelliJ, you will want to refresh the project, too.
Disable the jdbc module that is conflicting with play-slick (credits: this SO answer):
# application.conf
play.modules.disabled += "play.api.db.DBModule"
In the same place you should already have configured something like:
slick {
  dbs {
    default {
      driver = "slick.driver.MySQLDriver$"
      db.driver = "com.mysql.jdbc.Driver"
      db.url = "jdbc:mysql://localhost/test"
      db.user = "sa"
      db.password = ""
    }
  }
}
Now you can use play.api.db.Databases from jdbc and its method withDatabase to run the evolutions.
import org.scalatest.BeforeAndAfterAll
import org.scalatestplus.play.PlaySpec
import play.api.db.{Database, Databases}
import play.api.db.evolutions.Evolutions

trait ResetDbSpec extends PlaySpec with BeforeAndAfterAll {

  /**
   * Here we use Databases.withDatabase to run evolutions without leaking connections.
   * It slows down the tests considerably, though.
   */
  private def withTestDatabase[T](block: Database => T) = {
    Databases.withDatabase(
      driver = "com.mysql.jdbc.Driver",
      url = "jdbc:mysql://localhost/test",
      name = "default",
      config = Map(
        "username" -> "sa",
        "password" -> ""
      )
    )(block)
  }

  override def beforeAll() = {
    withTestDatabase { database =>
      Evolutions.applyEvolutions(database)
    }
  }

  override def afterAll() = {
    withTestDatabase { database =>
      Evolutions.cleanupEvolutions(database)
    }
  }
}
Finally, call tests requiring a db reset like this:
class MySpec extends ResetDbSpec {...}
Of course it sucks to repeat this config both in "application.test.conf" and in withDatabase(), plus it mixes two different APIs, not to mention the performance cost. It also logs this before and after each suite, which is annoying:
[info] application - Creating Pool for datasource 'default'
[info] application - Shutting down connection pool.
If somebody has a better suggestion, please improve on this answer! I have been struggling for months.

How to set an in memory test database for my Scala Play2 CRUD application?

I'm continuing my exploration of the Play framework and its related components. I started from the template for a CRUD application with a connection to a PostgreSQL database. It splits the application into models, repositories, controllers and views. This works well.
Now, I'm trying to create some tests for this application with Specs2. More precisely, I'm trying to test the repository. It is defined as follow:
package dal

import javax.inject.{ Inject, Singleton }
import play.api.db.slick.DatabaseConfigProvider
import slick.driver.JdbcProfile
import models.Cat
import scala.concurrent.{ Future, ExecutionContext }

@Singleton
class CatRepository @Inject() (dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext) {
  ...
}
I would like to set an in memory database which would be created (schema, evolutions) before all tests, destroyed after all tests, populated (data, probably with direct SQL) and flushed around each test. I would like to pass it on to my repository instance which I would then use to perform my test. Like:
val repo = new CatRepository(inMem_DB)
So how do I go about:
1) creating this db and applying the evolutions?
maybe:
trait TestDB extends BeforeAfterAll {

  var database: Option[Database] = None

  def before = {
    database = Some(Databases.inMemory(name = "test_db"))
    database match {
      case Some(con) => Evolutions.applyEvolutions(con)
      case _ => None
    }
    println("DB READY")
  }

  def after = {
    database match {
      case Some(con) => con.shutdown()
      case _ => None
    }
  }
}
Using a var, and having to match/case every time I need to use the db, isn't convenient. I guess there is a much better way to do this...
2) Populate and flush around each test?
Shall I create a trait extending Around, just as with BeforeAfterAll?
3) Create one of these play.api.db.slick.DatabaseConfigProvider from the database?
Any link showing how to do this?
I found a few examples covering this with a running FakeApplication, but I assume there is a way to somehow pass the db to such a repository object outside of a running application?
Thank you for helping.
Cheers!
You can use a lazy val and AfterAll for the database setup/teardown, then BeforeAfterEach for each example:
trait TestDB extends AfterAll with BeforeAfterEach {

  lazy val database: Database =
    Databases.inMemory(name = "test_db")

  def afterAll =
    database.shutdown

  def before =
    database.populate

  def after =
    database.clean
}
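That covers questions 1 and 2. For question 3, you can hand the repository a DatabaseConfigProvider built directly from a Slick config block, without a running application. An untested sketch, assuming a config block named "test" that points at your in-memory database, and Slick 3.1 package names (they moved to slick.basic in Slick 3.2+):
import play.api.db.slick.DatabaseConfigProvider
import slick.backend.DatabaseConfig   // slick.basic.DatabaseConfig on Slick 3.2+
import slick.driver.JdbcProfile
import slick.profile.BasicProfile     // slick.basic.BasicProfile on Slick 3.2+

// Reads the Slick configuration block named "test" (assumed to describe the in-memory db).
object TestDatabaseConfigProvider extends DatabaseConfigProvider {
  private lazy val dbConfig: DatabaseConfig[JdbcProfile] =
    DatabaseConfig.forConfig[JdbcProfile]("test")

  def get[P <: BasicProfile]: DatabaseConfig[P] =
    dbConfig.asInstanceOf[DatabaseConfig[P]]
}

// val repo = new CatRepository(TestDatabaseConfigProvider)(scala.concurrent.ExecutionContext.global)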

Specs2 setting up environment before and after the whole suite

I'm writing some specs2 integration tests for my spray.io project that uses DynamoDB. I'm using sbt-dynamodb to load a local DynamoDB into the environment. I use the following pattern to load my tables before the tests are run.
trait DynamoDBSpec extends SpecificationLike {

  val config = ConfigFactory.load()
  val client = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())

  lazy val db = {
    client.setEndpoint(config.getString("campaigns.db.endpoint"))
    new DynamoDB(client)
  }

  override def map(fs: => Fragments): Fragments =
    Step(beforeAll) ^ fs ^ Step(afterAll)

  protected def beforeAll() = {
    //load my tables
  }

  protected def afterAll() = {
    //delete my tables
  }
}
Then any test class can just extend DynamoDBSpec and the tables will be created. It all works fine until I extend DynamoDBSpec from more than one test class, at which point it throws a ResourceInUseException: 'Cannot create preexisting table'. The reason is that the specs execute in parallel, so they try to create the tables at the same time.
I tried to overcome this by running the tests in sequential mode, but beforeAll and afterAll are still executed in parallel.
Ideally I think it would be best to create the tables once before the entire suite runs, rather than on each Spec class invocation, and then tear them down after the entire suite completes. Does anyone know how to accomplish that?
There are 2 ways to achieve this.
With an object
You can use an object to synchronize the creation of your database
object Database {
  lazy val config = ConfigFactory.load()
  lazy val client =
    new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())

  // this will only be done once in
  // the same jvm
  lazy val db = {
    client.setEndpoint(config.getString("campaigns.db.endpoint"))
    val database = new DynamoDB(client)
    // drop previous tables if any
    // and create new tables
    database.create...
    database
  }
}

// BeforeAll is a new trait in specs2 3.x
trait DynamoDBSpec extends SpecificationLike with BeforeAll {
  //load my tables
  def beforeAll = Database.db
}
As you can see, in this model we don't remove the tables when a specification finishes (because we don't know whether all the other specifications have run yet); we just remove them when we re-run the specifications. This can actually be a good thing, because it helps you investigate failures if there are any.
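Usage stays the same as before: every spec simply mixes in the trait, and whichever spec touches Database.db first triggers the one-time table creation. A sketch (the spec and table names are placeholders, and it assumes DynamoDBSpec is based on the mutable SpecificationLike):
// Any number of these can run in parallel: they all share Database.db,
// and the lazy val guarantees the tables are created exactly once per JVM.
class CampaignsDbSpec extends Specification with DynamoDBSpec {
  "the campaigns table" should {
    "be reachable through the shared client" in {
      Database.db.getTable("campaigns") must not(beNull)
    }
  }
}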
The other way to synchronize specifications at a global level, and to properly clean up at the end, is to use specification links.
With links
With specs2 3.3 you can create dependencies between specifications with links. This means that you can define a "Suite" specification which is going to:
start the database
collect all the relevant specifications
delete the database
For example
import org.specs2._
import specification._
import core.Fragments
import runner.SpecificationsFinder
// run this specification with `all` to execute
// all linked specifications
class Database extends Specification { def is =
"All database specifications".title ^ br ^
link(new Create).hide ^
Fragments.foreach(specs)(s => link(s) ^ br) ^
link(new Delete).hide
def specs = specifications(pattern = ".*Db.*")
}
// start the database with this specification
class Create extends Specification { def is = xonly ^
step("create database".pp)
}
// stop the database with this specification
class Delete extends Specification { def is = xonly ^
step("delete database".pp)
}
// an example of a specification using the database
// it will be invoked by the "Database" spec because
// its name matches ".*Db.*"
class Db1Spec extends Specification { def is = s2"""
test $db
"""
def db = { println("use the database - 1"); ok }
}
class Db2Spec extends Specification { def is = s2"""
test $db
"""
def db = { println("use the database - 2"); ok }
}
When you run:
sbt> test-only *Database* -- all
You should see a trace like
create database
use the database - 1
use the database - 2
delete database