I've been developing a website with the Play framework (Scala) using H2 as a backend. Testing is nicely integrated, especially with the ability to run tests against an in-memory H2 db.
Now I'd like to move my datastore over to Postgres for various convenience reasons. This leaves me with a problem: how do I continue to test while retaining the simplicity of a fresh db for each test run? I see on the net that some people manage to run live against Postgres and test against H2. However, the two are not entirely compatible at the SQL level (even with H2 in PostgreSQL compatibility mode); for instance, SERIAL, BIGSERIAL and BYTEA are not supported by H2.
Can I do this by using a constrained compatible intersection of both dialects, or is there another technique I'm missing?
Thanks for any help.
Alex
I know this is an older post, but it looks like there still isn't an obvious solution a few years later. As a short-term fix, in Play 2.4.x-2.5.x (so far only tested there), you can alter the way evolutions get applied during tests by creating a custom evolutions reader:
package support
import play.api.db.evolutions.{ClassLoaderEvolutionsReader, Evolutions, ResourceEvolutionsReader}
import java.io.{ByteArrayInputStream, InputStream}
import java.nio.charset.StandardCharsets
import scala.io.Source
import scala.util.Try
class EvolutionTransformingReader(
    classLoader: ClassLoader = classOf[ClassLoaderEvolutionsReader].getClassLoader,
    prefix: String = "")
    extends ResourceEvolutionsReader {

  def loadResource(db: String, revision: Int): Option[InputStream] =
    for {
      stream <- Option(classLoader.getResourceAsStream(prefix + Evolutions.resourceName(db, revision)))
      lines  <- Try(Source.fromInputStream(stream).getLines).toOption
      updated = lines map convertPostgresLinesToH2
    } yield convertLinesToInputStream(updated)

  // Postgres: ALTER TABLE t RENAME COLUMN a TO b;
  // H2:       ALTER TABLE t ALTER COLUMN a RENAME TO b;
  private val ColumnRename = """(?i)\s*ALTER TABLE (\w+) RENAME COLUMN (\w+) TO (\w+);""".r

  private def convertPostgresLinesToH2(line: String): String =
    line match {
      case ColumnRename(tableName, oldColumn, newColumn) =>
        s"""ALTER TABLE $tableName ALTER COLUMN $oldColumn RENAME TO $newColumn;"""
      case _ => line
    }

  private def convertLinesToInputStream(lines: Iterator[String]): InputStream =
    new ByteArrayInputStream(lines.mkString("\n").getBytes(StandardCharsets.UTF_8))
}
then pass it into the place where you apply evolutions during your tests:
Evolutions.applyEvolutions(registry.database, new EvolutionTransformingReader())
Note that the reader is still in a pretty dumb state (it assumes the SQL statements are one-liners, which is not guaranteed), but this should be enough to get anyone started.
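For completeness, here is a sketch (my addition, assuming Play 2.4+'s Databases helper; the wiring is illustrative) of how the reader might be used to run Postgres evolutions against a throwaway in-memory H2 database:

import play.api.db.{Database, Databases}
import play.api.db.evolutions.Evolutions

// Spin up a throwaway in-memory H2 database for the test run.
val database: Database = Databases.inMemory(name = "default")

// Apply the Postgres evolutions, translated on the fly for H2.
Evolutions.applyEvolutions(database, new support.EvolutionTransformingReader())

// ... run tests against `database` ...

Evolutions.cleanupEvolutions(database)
database.shutdown()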
As evolution files use SQL directly, unless you limited yourself to a common cross-db-compatible subset of SQL you may have issues.
There is no real solution to that, but you can still use a fresh db for testing. Just set the following:
%test.jpa.ddl=create-drop
%test.db.driver=org.postgresql.Driver
%test.db=<jdbc url>
# etc.
This should create a new postgres connection for test, create the db from scratch, run the evolutions, do the tests and remove all data once done.
Let's say I have a method that updates some date in the DB:
def updateLastConsultationDate(userId: String): Unit = ???
How can I throttle/debounce that method easily so that it won't run more than once an hour per user?
I'd like the simplest possible solution, not based on any event-bus, actor lib or persistence layer. I'd like an in-memory solution (and I am aware of the risks).
I've seen solutions for throttling in Scala, based on Akka Throttler, but this really looks to me overkill to start using actors just for throttling method calls. Isn't there a very simple way to do that?
Edit: since it seems this wasn't clear enough, here's a visual representation of what I want, implemented in JS. As you can see, throttling may not only be about filtering subsequent calls, but also about postponing them (also called trailing events in js/lodash/underscore). The solution I'm looking for can't be based on purely synchronous code.
This sounds like a great job for a ReactiveX-based solution. In Scala, Monix is my favorite one. Here's an Ammonite REPL session illustrating it:
import $ivy.`io.monix::monix:2.1.0` // I'm using Ammonite's magic imports, it's equivalent to adding "io.monix" %% "monix" % "2.1.0" into your libraryImports in SBT
import scala.concurrent.duration.DurationInt
import monix.reactive.subjects.ConcurrentSubject
import monix.reactive.Consumer
import monix.execution.Scheduler.Implicits.global
import monix.eval.Task
class DbUpdater {
  // Thread-safe entry point for pushing user ids into the stream.
  val publish = ConcurrentSubject.publish[String]
  // Keep only the first event in each one-hour window.
  val throttled = publish.throttleFirst(1.hour)
  val cancelHandle = throttled.consumeWith(
    Consumer.foreach(userId =>
      println(s"update your database with $userId here")))
    .runAsync

  def updateLastConsultationDate(userId: String): Unit = {
    publish.onNext(userId)
  }

  def stop(): Unit = cancelHandle.cancel()
}
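A quick usage sketch (my addition, not part of the original answer): repeated calls within the same hour are simply dropped.

val updater = new DbUpdater
updater.updateLastConsultationDate("user-1") // performs the update
updater.updateLastConsultationDate("user-1") // throttled away for the next hour
// ... later, when shutting down:
updater.stop()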
Yes, and with Scala.js this code will work in the browser, too, if it's important for you.
Since you ask for the simplest possible solution, you can store a mutable Map[String, Long] (call it lastUpdateByUser), which you would consult before allowing an update
if (lastUpdateByUser.getOrElse(userName, 0L) + 60*60*1000 < System.currentTimeMillis) updateLastConsultationDate(...)
and update when a user actually performs an update
lastUpdateByUser(userName) = System.currentTimeMillis
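Pulling that together, a minimal self-contained sketch (my wording; ThrottledUpdater is an illustrative name), using a ConcurrentHashMap so it stays safe under concurrent requests:

import java.util.concurrent.ConcurrentHashMap

object ThrottledUpdater {
  private val lastUpdateByUser = new ConcurrentHashMap[String, Long]()
  private val OneHourMillis = 60 * 60 * 1000L

  def updateLastConsultationDate(userId: String): Unit = {
    val now = System.currentTimeMillis
    // Check-then-put is not strictly atomic; for hard guarantees use compute().
    if (lastUpdateByUser.getOrDefault(userId, 0L) + OneHourMillis < now) {
      lastUpdateByUser.put(userId, now)
      // ... perform the actual DB update here ...
    }
  }
}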
One way to throttle would be to maintain a count in a Redis instance. Doing so would ensure that the DB wouldn't be updated no matter how many Scala processes you were running, because the state is stored outside of the process.
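As an illustration only (my addition, assuming the Jedis client): a SET with the NX and EX flags makes the once-per-hour check atomic across processes.

import redis.clients.jedis.Jedis
import redis.clients.jedis.params.SetParams

def tryUpdate(jedis: Jedis, userId: String): Boolean = {
  val key = s"throttle:lastConsultation:$userId"
  // SET ... NX EX 3600: succeeds only if the key is absent; expires after an hour.
  val result = jedis.set(key, "1", SetParams.setParams().nx().ex(3600))
  result == "OK" // only the winning caller performs the DB update
}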
Is there a way to write tests for the data access objects (DAOs) in play framework 2.x without starting an app?
Tests with a fake app are relatively slow even if the database is an in-memory H2, as the docs suggest.
After experiencing similar issues with the execution time of tests using FakeApplication, I switched to a different approach. Instead of creating one fake app per test, I start a real instance of the application and run all my tests against it. With a large test suite there is a big win in total execution time.
http://yefremov.net/blog/fast-functional-tests-play/
For unit testing, a good solution is mocking. If you are using Play 2.4 or above, Mockito is already built in, and you do not have to add it as a separate dependency.
For integration testing, you cannot run tests without a fake application, since your DAOs will sometimes require application context information, for example the information defined in application.conf. In this case, you must set up a FakeApplication with fake application configuration so that the DAOs have that information.
This sample repo, https://github.com/luongbalinh/play-mongo/tree/master/test, contains tests at the service and controller layers, including both unit tests with Mockito and integration tests. Integration tests for DAOs should be very similar to the service tests. Hopefully it gives you a hint of how to use Mockito to write DAO tests.
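For flavour, here is a hedged sketch of what such a unit test can look like (User, UserDao and UserService are illustrative names, not from the linked repo):

import org.mockito.Mockito._
import org.scalatestplus.play.PlaySpec

// Illustrative collaborators, defined here only to make the sketch self-contained:
case class User(id: Int, name: String)
trait UserDao { def find(id: Int): Option[User] }
class UserService(dao: UserDao) {
  def name(id: Int): Option[String] = dao.find(id).map(_.name)
}

class UserServiceSpec extends PlaySpec {
  "UserService" should {
    "look users up through the DAO without starting an app or a database" in {
      val dao = mock(classOf[UserDao])
      when(dao.find(123)).thenReturn(Some(User(123, "alex")))
      val service = new UserService(dao)
      service.name(123) mustBe Some("alex")
    }
  }
}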
Turns out the Database object can be constructed directly from the Databases factory, so I ended up with a trait like this one:
import org.scalatest.{BeforeAndAfterAll, Suite, SuiteMixin}
import play.api.db.Databases

trait DbTests extends BeforeAndAfterAll with SuiteMixin { this: Suite =>

  val dbUrl = sys.env.getOrElse("DATABASE_URL",
    "jdbc:postgresql://localhost:5432/test?user=user&password=pass")

  val database = Databases("org.postgresql.Driver", dbUrl, "tests")

  override def afterAll() = {
    database.shutdown()
  }
}
then use it the following way:
class SampleDaoTest extends DbTests {
  val myDao = new MyDao(database) // construct the DAO; database is injected so it can be passed

  "read from db" in {
    myDao.read(id = 123) mustEqual MyClass(123)
  }
}
I'm using Slick 2.0 for my interaction with the database.
As recommended, I've added an external connection pool using BoneCP.
// Load the JDBC driver and configure the BoneCP pool.
val driver = Class.forName(Database.driver)
val ds = new BoneCPDataSource()
ds.setJdbcUrl(Database.jdbcUri)
ds.setUsername(Database.user)
ds.setPassword(Database.password)
Using it, I've created my connection:
val db = scala.slick.jdbc.JdbcBackend.Database.forDataSource(ds)

db withSession { implicit session =>
  (...)
}
Now if I do on my TableQuery object something like this:
provisioning.foreach {(...)}
it says that there is no foreach method.
So I've imported:
import scala.slick.driver.PostgresDriver.simple._
and now everything works well.
What I don't like is that my code is tied to a certain database implementation.
Can I somehow make it read the "db dialect" from the config file?
So now I know how this should be done properly.
I attended a presentation on Slick by Stefan Zeiger, the library's original author, and he pointed out that since PostgresDriver, like the other drivers, is a trait, we can mix it in instead of importing it, and thus dynamically choose the driver that fits our database.
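A minimal sketch of that idea for Slick 2.0 (my illustration; DriverComponent, UserTableComponent and DAL are made-up names):

import scala.slick.driver.JdbcDriver

trait DriverComponent {
  // The concrete Slick driver is provided by whoever mixes this in.
  val driver: JdbcDriver
}

trait UserTableComponent { this: DriverComponent =>
  import driver.simple._

  class Users(tag: Tag) extends Table[(Int, String)](tag, "users") {
    def id = column[Int]("id", O.PrimaryKey)
    def name = column[String]("name")
    def * = (id, name)
  }
  val users = TableQuery[Users]
}

// The driver is chosen once at runtime, e.g. from a config value:
class DAL(val driver: JdbcDriver) extends DriverComponent with UserTableComponent

val dal = new DAL(scala.slick.driver.PostgresDriver) // or H2Driver, etc.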
Firstly, I've never worked with a database in the Scala Play Framework. I did some research and found that the only way(?) to work with one is using plain SQL. Is that so? I wonder, isn't there a way to do it the same way I can in RoR, using models? At least, I found plenty of examples showing, even encouraging, working with plain SQL.
Secondly, I can't compile the code from the official documentation:
import play.api.db._
import play.api.Play.current
val result: Boolean = SQL("Select 1").execute() // SQL is not found
Also, where is SQL located?
Importing anorm._ should fix the issue.
SQL is located in the package object anorm
Btw, SQL does not work without an SQL connection, so wrap it like this:
DB.withConnection { implicit c =>
  SQL("select 1").execute()
}
Have you added the sql dependency to your project as described in the docs?
http://www.playframework.com/documentation/2.2.x/ScalaDatabase
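That is (a sketch based on those docs, for Play 2.2): declare the jdbc and anorm dependencies in your build:

libraryDependencies ++= Seq(
  jdbc,
  anorm
)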
I'm currently trying to use Play! Framework 2.2 and play-slick (master branch).
In the play-slick code I would like to override the driver definition in order to add the Oracle driver (I'm using slick-extensions). In Config.scala of play-slick I just saw /** Extend this to add driver or change driver mapping */ ...
I'm coming from far, far away (currently reading Programming in Scala), so there's a lot to learn. My questions are:
Can someone explain to me how to extend this Config object? This object is used in other classes... Is the cake pattern useful here?
Talking about the cake pattern, I read the computer-database example provided by play-slick. This sample uses the cake pattern and imports play.api.db.slick.Config.driver.simple._. If I'm using the Oracle driver I cannot use this import, am I wrong? How can I use the cake pattern to define an implicit session?
Thanks a lot.
Waiting for your advice, and I'm still studying the play-slick code at home :)
To extend the Config trait I do not think the cake pattern is required. You should be able to create your Config object like this:
import scala.slick.driver.ExtendedDriver
// OracleDriver comes from the closed-source slick-extensions package
// (package name per its docs; adjust to your version).
import com.typesafe.slick.driver.oracle.OracleDriver

object MyExtendedConfig extends play.api.db.slick.Config {
  override def driverByName: String => Option[ExtendedDriver] = { name: String =>
    super.driverByName(name) orElse Map("oracledriverstring" -> OracleDriver).get(name)
  }

  lazy val app = play.api.Play.current
  // super qualifier picks the inherited driver lookup rather than this val
  lazy val driver: ExtendedDriver = super.driver()(app)
}
To be able to use it you only need to do: import MyExtendedConfig.driver._ instead of import play.api.db.slick.Config.driver._. BTW, I see that the type of driverByName could have been a Map instead of a function, which would make it easier to extend. Changing that wouldn't break anything, and it would make this kind of extension simpler.
I think Jonas Bonér's old blog is a great place to read about what the cake pattern is (http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di/). My naive understanding of it is that you have a cake pattern when you have layers that use self types:
trait FooComponent { driver: ExtendedDriver =>
  import driver.simple._

  class Foo extends Table[Int]("") {
    //...
  }
}
There are 2 use cases for the cake pattern in slick/play-slick: 1) if you have tables that reference other tables (as in the computer database sample); 2) to have control over exactly which database is used at which time, or if you use many different database types. By using the Config you do not really need the cake pattern as long as you only have 2 different DBs (one for prod and one for test), which is the point of the Config.
Hope this answers your questions, and good luck reading Programming in Scala (loved that book :)