Play 2.1 Unit Test With Slick and Postgres

I want to run unit tests for a Play 2 Scala app using the same database setup as used in production: Slick with Postgres. The following fails with "java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown." on the second test:
package controllers

import org.specs2.mutable._
import play.api.db.DB
import play.api.Play.current
import play.api.test._
import play.api.test.Helpers._
import scala.slick.driver.PostgresDriver.simple._

class BogusTest extends Specification {

  def postgresDatabase(name: String = "default",
                       options: Map[String, String] = Map.empty): Map[String, String] =
    Map(
      "db.test.driver" -> "org.postgresql.Driver",
      "db.test.user" -> "postgres",
      "db.test.password" -> "blah",
      "db.test.url" -> "jdbc:postgresql://localhost/blah"
    )

  def fakeApp[T](block: => T): T =
    running(FakeApplication(additionalConfiguration =
      postgresDatabase("test") ++ Map("evolutionplugin" -> "disabled"))) {
      def database = Database.forDataSource(DB.getDataSource("test"))
      database.withSession { implicit s: Session => block }
    }

  "Fire 1" should {
    "do something" in fakeApp {
      success
    }
  }

  "Fire 2" should {
    "do something else" in fakeApp {
      success
    }
  }
}
I run the test like this:
$ play -Dconfig.file=`pwd`/conf/dev.conf "test-only controllers.BogusTest"
Two other mysteries:
1) All tests run, even though I ask for just BogusTest to run
2) application.conf is always used, not dev.conf, and the driver information comes from application.conf rather than the configuration supplied in the code.

This is a tentative answer, as I have only tested on Play 2.2.0 and cannot reproduce your bug using a MySQL database.
I suspect there is a very tricky bug in your code. First of all, if you look at the DBPlugin implementation provided by Play, BoneCPPlugin:
/**
 * Closes all data sources.
 */
override def onStop() {
  dbApi.datasources.foreach {
    case (ds, _) => try {
      dbApi.shutdownPool(ds)
    } catch { case NonFatal(_) => }
  }
  val drivers = DriverManager.getDrivers()
  while (drivers.hasMoreElements) {
    val driver = drivers.nextElement
    DriverManager.deregisterDriver(driver)
  }
}
You can see that the onStop() method closes the connection pool. So it's clear: you are providing the second test example with an application that has already been stopped (and therefore its plugins are stopped and the db connection pool is closed).
ScalaTest and specs2 run tests in parallel, but you can rely on the running test helper because it is synchronized:
def running[T](fakeApp: FakeApplication)(block: => T): T = {
  synchronized {
    try {
      Play.start(fakeApp)
      block
    } finally {
      Play.stop()
      play.api.libs.ws.WS.resetClient()
    }
  }
}
However, when you do
DB.getDataSource("test")
From the source code of Play:
def getDataSource(name: String = "default")(implicit app: Application): DataSource = app.plugin[DBPlugin].map(_.api.getDataSource(name)).getOrElse(error)
So there is an implicit parameter here, and it does not get resolved to your FakeApplication (which is not an implicit in scope!) but to Play.current. In the second test case that is not what you were expecting: Play.current still points to the previous instance of FakeApplication. This probably depends on how implicits are captured in closures.
If, however, you refactor the fakeApp method, you can ensure the application you just created is used to resolve the implicit (you can always pass an explicit value for an implicit parameter):
def fakeApp[T](block: => T): T = {
  val fakeApplication = FakeApplication(additionalConfiguration =
    postgresDatabase("test") ++ Map("evolutionplugin" -> "disabled"))
  running(fakeApplication) {
    def database = Database.forDataSource(DB.getDataSource("test")(fakeApplication))
    database.withSession { implicit s: Session => block }
  }
}
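Alternatively, you can make the new application itself the implicit in scope. Note this sketch assumes you also remove the import play.api.Play.current at the top of the file; otherwise the compiler will report two ambiguous implicit Applications:

def fakeApp[T](block: => T): T = {
  // Requires removing `import play.api.Play.current`, or the Application
  // implicit parameter of DB.getDataSource becomes ambiguous.
  implicit val fakeApplication: FakeApplication = FakeApplication(
    additionalConfiguration =
      postgresDatabase("test") ++ Map("evolutionplugin" -> "disabled"))
  running(fakeApplication) {
    // The implicit in scope is now the application created just above.
    val database = Database.forDataSource(DB.getDataSource("test"))
    database.withSession { implicit s: Session => block }
  }
}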

Related

How to extend the TestEnvironment of a ZIO Test

I want to test the following function:
def curl(host: String, attempt: Int = 200): ZIO[Loggings with Clock, Throwable, Unit]
If the function used only standard ZIO environments, like Console with Clock, the test would work out of the box:
testM("curl on valid URL") {
(for {
r <- composer.curl("https://google.com")
} yield
assert(r, isUnit))
}
The Test environment would be provided by zio-test.
So the question is, how to extend the TestEnvironment with my Loggings module?
Note that this answer is for RC17 and will change significantly in RC18. You're right that, as in other cases of composing environments, we need to implement a function to build our total environment from the modules we have. Spec has several combinators built in, such as provideManaged, to do this, so you don't need to do it within your test itself. All of these have "normal" variants that provide a separate copy of the environment to each test in a suite, and "shared" variants that create one copy of the environment for the entire suite, for when it is a resource that is expensive to create, like a Kafka service.
You can see an example below of using provideSomeManaged to provide an environment that extends the test environment to a test.
In RC18 there will be a variety of other provide variants equivalent to those on ZIO as well as a new concept of layers to make it much easier to build composed environments for ZIO applications.
import zio._
import zio.clock._
import zio.test._
import zio.test.environment._
import ExampleSpecUtil._

object ExampleSpec
    extends DefaultRunnableSpec(
      suite("ExampleSpec")(
        testM("My Test") {
          for {
            time <- clock.nanoTime
            _ <- Logging.logLine(
                   s"The TestClock says the current time is $time"
                 )
          } yield assertCompletes
        }
      ).provideSomeManaged(testClockWithLogging)
    )

object ExampleSpecUtil {

  trait Logging {
    def logging: Logging.Service
  }

  object Logging {
    trait Service {
      def logLine(line: String): UIO[Unit]
    }
    object Live extends Logging {
      val logging: Logging.Service =
        new Logging.Service {
          def logLine(line: String): UIO[Unit] =
            UIO(println(line))
        }
    }
    def logLine(line: String): URIO[Logging, Unit] =
      URIO.accessM(_.logging.logLine(line))
  }

  val testClockWithLogging: ZManaged[TestEnvironment, Nothing, TestClock with Logging] =
    ZIO
      .access[TestEnvironment] { testEnvironment =>
        new TestClock with Logging {
          val clock = testEnvironment.clock
          val logging = Logging.Live.logging
          val scheduler = testEnvironment.scheduler
        }
      }
      .toManaged_
}
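Since the logging service here is cheap to build, provideSomeManaged is a good fit. For an expensive resource, the "shared" variant mentioned above creates the environment once for the entire suite. A sketch of what that could look like, assuming the RC17 shared combinator is named provideSomeManagedShared and reusing testClockWithLogging from above:

object SharedExampleSpec
    extends DefaultRunnableSpec(
      suite("SharedExampleSpec")(
        testM("first test") {
          for {
            _ <- Logging.logLine("environment built once for the suite")
          } yield assertCompletes
        },
        testM("second test") {
          for {
            _ <- Logging.logLine("and shared with this test too")
          } yield assertCompletes
        }
        // One copy of the environment for the entire suite:
      ).provideSomeManagedShared(testClockWithLogging)
    )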
This is what I came up with:
testM("curl on valid URL") {
(for {
r <- composer.curl("https://google.com")
} yield
assert(r, isUnit))
.provideSome[TestEnvironment](env => new Loggings.ConsoleLogger
with TestClock {
override val clock: TestClock.Service[Any] = env.clock
override val scheduler: TestClock.Service[Any] = env.scheduler
override val console: TestLogger.Service[Any] = MyLogger()
})
}
It uses the TestEnvironment with provideSome to set up my environment.
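For reference, the layers mentioned in the answer did land in RC18. Under the later ZLayer API (ZIO 1.0) the same composition can be sketched like this; the Has-based Logging module below is an illustration, not the RC17 code above:

import zio._
import zio.test._

object LoggingLayer {
  type Logging = Has[Logging.Service]

  object Logging {
    trait Service {
      def logLine(line: String): UIO[Unit]
    }
    // A live implementation packaged as a layer.
    val live: ULayer[Logging] = ZLayer.succeed(new Service {
      def logLine(line: String): UIO[Unit] = UIO(println(line))
    })
    def logLine(line: String): URIO[Logging, Unit] =
      ZIO.accessM(_.get.logLine(line))
  }
}

object LayeredSpec extends DefaultRunnableSpec {
  import LoggingLayer._
  def spec = suite("LayeredSpec")(
    testM("logs with a layer-provided service") {
      for {
        _ <- Logging.logLine("hello from a layer")
      } yield assertCompletes
    }
    // provideCustomLayer keeps the TestEnvironment and adds Logging on top.
  ).provideCustomLayer(Logging.live)
}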

Integration tests for Http4s Client/Resource

I'm implementing a Vault client in Scala using the Http4s client, and I'm now starting to write integration tests. So far I have this:
abstract class Utils extends AsyncWordSpec with Matchers {
  implicit override def executionContext = ExecutionContext.global
  implicit val timer: Timer[IO] = IO.timer(executionContext)
  implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)

  val vaultUri = Uri.unsafeFromString(Properties.envOrNone("VAULT_ADDR")
    .getOrElse(throw new IllegalArgumentException("VAULT_ADDR is not set")))
  val vaultToken = Properties.envOrNone("VAULT_TOKEN")
    .getOrElse(throw new IllegalArgumentException("VAULT_TOKEN is not set"))

  val clientResource = BlazeClientBuilder[IO](global)
    .withCheckEndpointAuthentication(false)
    .resource

  def usingClient[T](f: VaultClient[IO] => IO[Assertion]): Future[Assertion] = {
    clientResource.use { implicit client =>
      f(new VaultClient[IO](vaultUri, vaultToken))
    }.unsafeToFuture()
  }
}
Then my tests look like this (just showing one test):
class SysSpec extends Utils {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      usingClient { client =>
        for {
          result <- client.sys.auth.create("test", AuthMethod(
            "approle", "some nice description",
            config = TuneOptions(defaultLeaseTtl = 60.minutes)
          ))
        } yield result should be (())
      }
    }
  }
}
This approach works, but it doesn't feel right. For each test I'm opening the connection (clientResource.use) and recreating the VaultClient. Is there a way for me to reuse the same connection and client for all the tests in SysSpec?
Please note these are integration tests and not unit tests.
This is the best I could come up with.
abstract class Utils extends AsyncWordSpec with Matchers with BeforeAndAfterAll {
  implicit override def executionContext = ExecutionContext.global
  implicit val timer: Timer[IO] = IO.timer(executionContext)
  implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)

  val (httpClient, finalizer) = BlazeClientBuilder[IO](global)
    .withCheckEndpointAuthentication(false)
    .resource.allocated.unsafeRunSync()

  override protected def afterAll(): Unit = finalizer.unsafeRunSync()

  private implicit val c = httpClient
  val client = new VaultClient[IO](uri"http://[::1]:8200", "the root token fetched from somewhere")
}
Then the tests just use the client directly:
class SysSpec extends Utils {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      client.sys.auth.create("test", AuthMethod(
        "approle", "some nice description",
        config = TuneOptions(defaultLeaseTtl = 60.minutes))
      ).map(_ shouldBe ()).unsafeToFuture()
    }
  }
}
My two main problems with this approach are the two unsafeRunSync calls in the code. The first one creates the client and the second one cleans up the resource. However, it is a much better approach than repeatedly creating and destroying the client.
I would also like to avoid unsafeToFuture, but that would require ScalaTest to support cats-effect directly.
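As it turns out, such a bridge exists as a third-party library: cats-effect-testing (originally published under com.codecommit). A sketch under the assumption of its cats-effect 2 ScalaTest module, and that its implicits coexist with the Utils setup above:

import cats.effect.testing.scalatest.AsyncIOSpec

// Sketch: AsyncIOSpec converts IO[Assertion] into the Future[Assertion]
// that AsyncWordSpec expects, so the test body needs no unsafeToFuture.
class SysSpecIO extends Utils with AsyncIOSpec {
  "The auth endpoint" should {
    "successfully mount an authentication method" in {
      client.sys.auth.create("test", AuthMethod(
        "approle", "some nice description",
        config = TuneOptions(defaultLeaseTtl = 60.minutes))
      ).map(_ shouldBe (()))
    }
  }
}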

Async before and after for creating and dropping scala slick tables in scalatest

I'm trying to figure out a way to have async before and after statements where the next test case isn't run until the action inside the current one completes. In my case, that action is creating and dropping a table inside a database:
val table = TableQuery[BlockHeaderTable]
val dbConfig: DatabaseConfig[PostgresDriver] = DatabaseConfig.forConfig("databaseUrl")
val database: Database = dbConfig.db

before {
  // Awaits need to be used to make sure this is fully executed before the next test case starts
  // TODO: Figure out a way to make this asynchronous
  Await.result(database.run(table.schema.create), 10.seconds)
}

"BlockHeaderDAO" must "store a blockheader in the database, then read it from the database" in {
  //...
}

it must "delete a block header in the database" in {
  //...
}

after {
  // Awaits need to be used to make sure this is fully executed before the next test case starts
  // TODO: Figure out a way to make this asynchronous
  Await.result(database.run(table.schema.drop), 10.seconds)
}
Is there a simple way I can remove these Await calls inside of my before and after functions?
Unfortunately, @Jeffrey Chung's solution hung for me (since futureValue actually awaits internally). This is what I ended up doing:
import org.scalatest.{AsyncFreeSpec, FutureOutcome}
import scala.concurrent.Future

class TestTest extends AsyncFreeSpec /* Could be any AsyncSpec. */ {
  // Do whatever setup you need here.
  def setup(): Future[_] = ???

  // Clean up whatever you need here.
  def tearDown(): Future[_] = ???

  override def withFixture(test: NoArgAsyncTest) = new FutureOutcome(for {
    _ <- setup()
    result <- super.withFixture(test).toFuture
    _ <- tearDown()
  } yield result)
}
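For the question's use case, setup and tearDown are then just the schema actions returned as Futures, with no Await anywhere (a sketch reusing the question's table and database):

// Sketch: the question's create/drop actions as Futures, which
// withFixture sequences before and after each test.
def setup(): Future[Unit] = database.run(table.schema.create)
def tearDown(): Future[Unit] = database.run(table.schema.drop)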
The following is the testing approach that Dennis Vriend takes in his slick-3.2.0-test project.
First, define a dropCreateSchema method. This method attempts to create a table; if that attempt fails (because, for example, the table already exists), it drops, then creates, the table:
def dropCreateSchema: Future[Unit] = {
  val schema = BlockHeaderTable.schema
  db.run(schema.create)
    .recoverWith {
      case t: Throwable =>
        db.run(DBIO.seq(schema.drop, schema.create))
    }
}
Second, define a createEntries method that populates the table with some sample data for use in each test case:
def createEntries: Future[Unit] = {
  val setup = DBIO.seq(
    // insert some rows
    BlockHeaderTable ++= Seq(
      BlockHeaderTableRow(/* ... */),
      // ...
    )
  ).transactionally
  db.run(setup)
}
Third, define an initialize method that calls the above two methods sequentially:
def initialize: Future[Unit] = for {
  _ <- dropCreateSchema
  _ <- createEntries
} yield ()
In the test class, mix in the ScalaFutures trait. For example:
class TestSpec extends FlatSpec
    with Matchers
    with ScalaFutures
    with BeforeAndAfterAll
    with BeforeAndAfterEach {
  // ...
}
Also in the test class, define an implicit conversion from a Future to a Try, and override the beforeEach method to call initialize:
implicit val timeout: Timeout = 10.seconds

implicit class PimpedFuture[T](self: Future[T]) {
  def toTry: Try[T] = Try(self.futureValue)
}
override protected def beforeEach(): Unit = {
  blockHeaderRepo.initialize // in this example, initialize is defined in a repo class
    .toTry recover {
      case t: Throwable =>
        log.error("Could not initialize the database", t)
    } should be a 'success
}

override protected def afterAll(): Unit = {
  db.close()
}
With the above pieces in place, there is no need for Await.
You can simplify @Jeffrey Chung's answer.
A simplified dropCreateSchema method:
def dropCreateSchema: Future[Unit] = {
  val schema = users.schema
  db.run(DBIO.seq(schema.dropIfExists, schema.create))
}
I also simplified the beforeEach method that calls initialize: I removed the implicit conversion from Future to Try and use an onComplete callback instead:
override protected def beforeEach(): Unit = {
  initialize.onComplete(f =>
    f recover {
      case t: Throwable =>
        log.error("Could not initialize the database", t)
    } should be a 'success)
}

override protected def afterAll(): Unit = {
  db.close()
}
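One caveat with this simplification: onComplete only registers a callback, so beforeEach can return before the schema actually exists. If the hook must block until initialization finishes, the wait has to stay in the hook itself; a minimal sketch, using the same Await and 10-second timeout as the question:

override protected def beforeEach(): Unit =
  Await.result(
    initialize recover {
      case t: Throwable =>
        log.error("Could not initialize the database", t)
        throw t // re-throw so tests don't run against a broken schema
    },
    10.seconds
  )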

How to query OrientDB asynchronously from Play controller?

I am writing a Play (2.2) controller in Scala, which should return the result of a query against OrientDB. Now, I have succeeded in writing a synchronous version of said controller, but I'd like to re-write it to work asynchronously.
My question is; given the below code (just put together for demonstration purposes), how do I re-write my controller to interact asynchronously with OrientDB (connecting and querying)?
import play.api.mvc.{Action, Controller}
import play.api.libs.json._
import com.orientechnologies.orient.`object`.db.OObjectDatabasePool
import java.util
import com.orientechnologies.orient.core.sql.query.OSQLSynchQuery
import scala.collection.JavaConverters._

object Packages extends Controller {
  def packages() = Action { implicit request =>
    val db = OObjectDatabasePool.global().acquire("http://localhost:2480", "reader", "reader")
    try {
      db.getEntityManager().registerEntityClass(classOf[models.Package])
      val packages = db.query[util.List[models.Package]](
        new OSQLSynchQuery[models.Package]("select from Package")).asScala.toSeq
      Ok(Json.obj(
        "packages" -> Json.toJson(packages)
      ))
    } finally {
      db.close()
    }
  }
}
EDIT:
Specifically, I wish to use OrientDB's asynchronous API. I know that asynchronous queries are supported by the API, though I'm not sure if you can connect asynchronously as well.
Attempted Solution
Based on Jean's answer, I've tried the following asynchronous implementation, but it fails with a compilation error: "value execute is not a member of Nothing. possible cause: maybe a semicolon is missing before 'value execute'?":
def getPackages(): Future[Seq[models.Package]] = {
  val db = openDb
  try {
    val p = promise[Seq[models.Package]]
    val f = p.future
    db.command(
      new OSQLAsynchQuery[ODocument]("select from Package",
        new OCommandResultListener() {
          var acc = List[ODocument]()

          @Override
          def result(iRecord: Any): Boolean = {
            val doc = iRecord.asInstanceOf[ODocument]
            acc = doc :: acc
            true
          }

          @Override
          def end() {
            // This is just a dummy
            p.success(Seq[models.Package]())
          }
          // Fails
        })).execute()
    f
  } finally {
    db.close()
  }
}
One way could be to start a promise, return the future representing the result of that promise, locally accumulate the results as they come, and complete the promise (thus resolving the future) when OrientDB notifies you that the command has completed.
def executeAsync(osql: String, params: Map[String, String] = Map()): Future[List[ODocument]] = {
  import scala.concurrent._
  val p = promise[List[ODocument]]
  val f = p.future
  val req: OCommandRequest = database.command(
    new OSQLAsynchQuery[ODocument]("select * from animal where name = 'Gipsy'",
      new OCommandResultListener() {
        var acc = List[ODocument]()

        @Override
        def result(iRecord: Any): Boolean = {
          val doc = iRecord.asInstanceOf[ODocument]
          acc = doc :: acc
          true
        }

        @Override
        def end() {
          p.success(acc)
        }
      }))
  req.execute()
  f
}
Be careful though: to enable graph navigation and lazy field loading, OrientDB objects used to keep an internal reference to the database instance they were loaded from (or to depend on a thread-local connected database instance) for lazily loading elements from the database. Manipulating these objects asynchronously may result in loading errors. I haven't checked the changes since 1.6, but that seemed to be deeply embedded in the design.
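One way to sidestep that hazard is to copy the fields you need into plain Scala values while still inside the result callback, so nothing that escapes with the future holds a reference to the connection. A sketch that would replace the listener body above and complete the promise with acc in end(); the field names are hypothetical:

case class AnimalRow(name: String, kind: String) // hypothetical projection

var acc = List[AnimalRow]()

@Override
def result(iRecord: Any): Boolean = {
  val doc = iRecord.asInstanceOf[ODocument]
  // Copy plain values out now, while the connection is still open.
  acc = AnimalRow(doc.field[String]("name"), doc.field[String]("kind")) :: acc
  true
}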
It's as simple as wrapping the blocking call in a Future.
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

object Packages extends Controller {
  def packages = Action.async { implicit request =>
    val db = OObjectDatabasePool.global().acquire("http://localhost:2480", "reader", "reader")
    db.getEntityManager().registerEntityClass(classOf[models.Package])
    val futureResult: Future[Result] = Future(
      db.query[util.List[models.Package]](new OSQLSynchQuery[models.Package]("select from Package")).asScala.toSeq
    ).map(
      queryResult => Ok(Json.obj("packages" -> Json.toJson(queryResult)))
    ).recover {
      // Handle each of the exception cases legitimately
      case e: UnsupportedOperationException => UnsupportedMediaType(e.getMessage)
      case e: MappingException => BadRequest(e.getMessage)
      case e: MyServiceException => ServiceUnavailable(e.toString)
      case e: Throwable => InternalServerError(e.toString + "\n" + e.getStackTraceString)
    }
    futureResult.onComplete { case _ =>
      db.close()
    }
    futureResult
  }
}
Note that I did not compile the code. There is a lot of room to improve it.
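One concrete improvement: run the blocking query on a dedicated thread pool instead of Play's default dispatcher, so slow queries cannot starve the request-handling threads. A sketch, assuming a pool named contexts.orientdb is configured in application.conf (the name is hypothetical):

import play.api.Play.current
import play.api.libs.concurrent.Akka
import scala.concurrent.{ExecutionContext, Future}

object Contexts {
  // Looks up a dispatcher configured under "contexts.orientdb" in application.conf.
  implicit val orientDb: ExecutionContext =
    Akka.system.dispatchers.lookup("contexts.orientdb")
}

// In the controller, run the blocking query on that pool instead:
val futureResult = Future {
  db.query[util.List[models.Package]](
    new OSQLSynchQuery[models.Package]("select from Package")).asScala.toSeq
}(Contexts.orientDb)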

How could I know if a database table exists in ScalaQuery

I'm trying ScalaQuery, and it is really amazing. I can define a database table using a Scala class and query it easily.
But I would like to know, in the following code, how I can check whether a table exists, so I won't call 'Table.ddl.create' twice and get an exception when I run this program twice.
object Users extends Table[(Int, String, String)]("Users") {
  def id = column[Int]("id")
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id ~ first ~ last
}

object Main {
  val database = Database.forURL("jdbc:sqlite:sample.db", driver = "org.sqlite.JDBC")

  def main(args: Array[String]) {
    database withSession {
      // How could I know table Users is already in the DB?
      if ( ??? ) {
        Users.ddl.create
      }
    }
  }
}
ScalaQuery version 0.9.4 includes a number of helpful SQL metadata wrapper classes in the org.scalaquery.meta package, such as MTable:
http://scalaquery.org/doc/api/scalaquery-0.9.4/#org.scalaquery.meta.MTable
In the test code for ScalaQuery, we can see examples of these classes being used. In particular, see org.scalaquery.test.MetaTest.
I wrote this little function to give me a map of all the known tables, keyed by table name.
import org.scalaquery.meta.MTable

def makeTableMap(dbsess: Session): Map[String, MTable] = {
  val tableList = MTable.getTables.list()(dbsess)
  tableList.map { t => (t.name.name, t) }.toMap
}
So now, before I create an SQL table, I can check "if (!tableMap.contains(tableName))".
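Putting it together with the question's Main, the check looks like this (a sketch reusing makeTableMap from above):

database withSession { session: Session =>
  // Create the table only if it is not already present.
  if (!makeTableMap(session).contains("Users"))
    Users.ddl.create(session)
}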
This thread is a bit old, but maybe someone will find this useful. All my DAOs include this:
def create = db withSession {
  if (!MTable.getTables.list.exists(_.name.name == MyTable.tableName))
    MyTable.ddl.create
}
Here's a full solution that checks on application start, using a PostgreSQL DB with the Play Framework:
import globals.DBGlobal
import models.UsersTable
import org.scalaquery.meta.MTable
import org.scalaquery.session.Session
import play.api.GlobalSettings
import play.api.Application

object Global extends GlobalSettings {

  override def onStart(app: Application) {
    DBGlobal.db.withSession { session: Session =>
      import org.scalaquery.session.Database.threadLocalSession
      import org.scalaquery.ql.extended.PostgresDriver.Implicit._
      if (!makeTableMap(session).contains("tableName")) {
        UsersTable.ddl.create(session)
      }
    }
  }

  def makeTableMap(dbsess: Session): Map[String, MTable] = {
    val tableList = MTable.getTables.list()(dbsess)
    tableList.map { t => (t.name.name, t) }.toMap
  }
}
You can also do this with java.sql.DatabaseMetaData (an interface). Depending on your database, more or fewer of its methods may be implemented.
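For instance, a minimal sketch over a plain JDBC connection (no ScalaQuery involved), using the question's SQLite URL:

import java.sql.{Connection, DriverManager}

def tableExists(conn: Connection, name: String): Boolean = {
  // getTables(catalog, schemaPattern, tableNamePattern, types)
  val rs = conn.getMetaData.getTables(null, null, name, Array("TABLE"))
  try rs.next() finally rs.close()
}

val conn = DriverManager.getConnection("jdbc:sqlite:sample.db")
try {
  if (!tableExists(conn, "Users")) {
    // run the DDL here
  }
} finally conn.close()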
See also the related discussion here. I personally prefer hezamu's suggestion, and extend it as follows to keep it DRY:
def createIfNotExists(tables: TableQuery[_ <: Table[_]]*)(implicit session: Session) {
  tables foreach { table =>
    if (MTable.getTables(table.baseTableRow.tableName).list.isEmpty) table.ddl.create
  }
}
Then you can just create your tables with the implicit session:
db withSession { implicit session =>
  createIfNotExists(table1, table2, ..., tablen)
}
You can define the following method in your DAO impl (taken from Slick MTable.getTables always fails with Unexpected exception [JdbcSQLException: Invalid value 7 for parameter columnIndex [90008-60]]); it gives you true or false depending on whether a given table is defined in your db:
def checkTable(): Boolean = {
  val future = db.run(MTable.getTables)
  val tables = Await.result(future, Duration.Inf)
  tables.nonEmpty
}
Or you can check whether some "GIVENTABLENAME" exists by printing the result:
def printTable() = {
  val q = db.run(MTable.getTables)
  println(Await.result(q, Duration.Inf).toList(0)) // prints the first MTable element
  println(Await.result(q, Duration.Inf).toList(1)) // prints the second MTable element
  println(Await.result(q, Duration.Inf).toList.toString
    .contains("MTable(MQName(public.GIVENTABLENAME_pkey),INDEX,null,None,None,None)"))
}
Don't forget to add
import slick.jdbc.meta._
Then call the methods from anywhere with the usual @Inject(). This uses Play 2.4 and play-slick 1.0.0.
Cheers,