Assign dynamically injected database name in Play Slick - scala

I have the following Play Slick DAO class. Note that the database configuration is the constant control0001. The DAO has a function readUser that reads a user by its user id:
import javax.inject.Inject
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider, NamedDatabase}
import slick.driver.JdbcProfile
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

class UsersDAO @Inject()(@NamedDatabase("control0001")
                         protected val dbConfigProvider: DatabaseConfigProvider)
  extends HasDatabaseConfigProvider[JdbcProfile] {

  import driver.api._

  def readUser(userid: String) = {
    val users = TableQuery[UserDB]
    val action = users.filter(_.userid === userid).result
    val future = db.run(action.asTry)
    future.map {
      case Success(s) => if (s.nonEmpty) Some(s.head) else None
      case Failure(e) => throw new Exception("Failure in readUser: " + e.getMessage)
    }
  }
}
Instead of the constant in @NamedDatabase("control0001"), I need the database to be variable. The application has multiple databases (control0001, control0002 and so on) configured in application.conf. Depending on a variable's value, I need to determine which database the DAO should use. All the databases are similar and have the same tables (only the data in each database differs).
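For reference, such a setup would typically be declared in application.conf along these lines (a sketch of the usual play-slick layout; URLs and credentials are illustrative):

slick.dbs.control0001.driver = "slick.driver.MySQLDriver$"
slick.dbs.control0001.db.driver = "com.mysql.jdbc.Driver"
slick.dbs.control0001.db.url = "jdbc:mysql://localhost/control0001"
slick.dbs.control0002.driver = "slick.driver.MySQLDriver$"
slick.dbs.control0002.db.driver = "com.mysql.jdbc.Driver"
slick.dbs.control0002.db.url = "jdbc:mysql://localhost/control0002"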
The following Play class calls the DAO function, but first it needs to determine the database name to be injected:
import javax.inject.Inject
import play.api.mvc.{Action, Controller}
import scala.concurrent.ExecutionContext.Implicits.global

class TestSlick @Inject()(dao: UsersDAO) extends Controller {

  // Action.async, since readUser returns a Future
  def test(someCode: Int, userId: String) = Action.async { request =>
    val databaseName = if (someCode == 1) "control0001" else "control0002"
    // Run the method in UsersDAO against the database selected by databaseName
    dao.readUser(userId).map {
      case Some(user) => Ok(user.firstName)
      case _          => Ok("user not found")
    }
  }
}
How can this be achieved in Play Slick?

You can try initializing the Slick Database object yourself, overriding the default configuration (note the driver must match the URL):

val db = Database.forURL("jdbc:mysql://localhost/" + databaseName, driver = "com.mysql.jdbc.Driver")

More information is in the Slick docs: http://slick.lightbend.com/doc/3.0.0/database.html
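For example, a minimal sketch (assuming the MySQL JDBC driver is on the classpath; each Database.forURL call creates its own connection pool, so it should be closed when done):

import slick.jdbc.JdbcBackend.Database

// Hypothetical helper: open a Database for the given name, run a block, close it.
def withDatabase[T](databaseName: String)(f: Database => T): T = {
  val db = Database.forURL(
    url = s"jdbc:mysql://localhost/$databaseName",
    driver = "com.mysql.jdbc.Driver")
  try f(db) finally db.close()
}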

Instead of trying to use Play's runtime dependency injection utilities in this case, use the SlickApi directly in your DAO and pass the datasource name to the dbConfig(DbName(name)) method. To obtain the SlickApi, mix in the SlickComponents trait:
class UsersDAO extends SlickComponents {

  def readUser(userid: String, dbName: String) = {
    val users = TableQuery[UserDB]
    val action = users.filter(_.userid === userid).result
    val dbConfig = slickApi.dbConfig[JdbcProfile](DbName(dbName))
    val future = dbConfig.db.run(action.asTry)
    ...
  }
}
Then in your controller:
def test(someCode: Int, userId: String) = Action { request =>
  val databaseName = if (someCode == 1) "control0001" else "control0002"
  val future = dao.readUser(userId, databaseName)
  ...
}
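Putting it together, a fuller sketch of the DAO with its imports (an assumption-laden sketch for play-slick 2.x, where SlickApi is injectable and DatabaseConfig exposes the profile as driver):

import javax.inject.Inject
import play.api.db.slick.{DbName, SlickApi}
import slick.driver.JdbcProfile
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

class UsersDAO @Inject()(slickApi: SlickApi) {

  def readUser(userid: String, dbName: String) = {
    // Look up the named datasource from application.conf at call time
    val dbConfig = slickApi.dbConfig[JdbcProfile](DbName(dbName))
    import dbConfig.driver.api._
    val users = TableQuery[UserDB]
    val action = users.filter(_.userid === userid).result
    dbConfig.db.run(action.asTry).map {
      case Success(s) => s.headOption
      case Failure(e) => throw new Exception("Failure in readUser: " + e.getMessage)
    }
  }
}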

Related

How to Dependency Inject database in Play 2.5

I'm migrating from Play 2.3 to 2.5. Originally I had this DAOFactory object:
object DAOFactory {
  def categoryDAO: CategoryDAO = AnormCategoryDAO
  def itemDAO: ItemDAO = AnormItemDAO
  def bidDAO: BidDAO = AnormBidDAO
  def userDAO: UserDAO = AnormUserDAO
  def feedStatsDAO: FeedStatsDAO = AnormFeedStatsDAO
}
Let's take AnormCategoryDAO as an example; I have to change the object into a class:
object AnormCategoryDAO extends CategoryDAO {
  val category = {
    int("id") ~ str("display_name") ~ str("url_name") map {
      case id ~ displayName ~ urlName => Category(id, displayName, urlName)
    }
  }

  def create(displayName: String, urlName: String) = DB.withConnection { implicit c =>
    SQL("INSERT INTO category(display_name, url_name) VALUES({displayName}, {urlName})").on(
      'displayName -> displayName, 'urlName -> urlName).executeUpdate()
  }

  def findById(id: Int): Option[Category] = DB.withConnection { implicit c =>
    SQL("SELECT * FROM category WHERE id = {id}").on('id -> id).as(category singleOpt)
  }

  def findByName(urlName: String): Option[Category] = DB.withConnection { implicit c =>
    SQL("SELECT * FROM category WHERE url_name = {urlName}").on('urlName -> urlName).as(category singleOpt)
  }

  def all(): List[Category] = DB.withConnection { implicit c =>
    SQL("SELECT * FROM category ORDER BY display_name").as(category *)
  }
}
So I changed the object to a class annotated with @Singleton, as below, and changed DB.withConnection to db.withConnection:
@Singleton
class AnormCategoryDAO @Inject()(db: Database) extends CategoryDAO {
  val category = {
    int("id") ~ str("display_name") ~ str("url_name") map {
      case id ~ displayName ~ urlName => Category(id, displayName, urlName)
    }
  }
  ...
Now, "AnormCategoryDAO" is a Class. So I need to figure out a way to instantiate it with a default database.
But I don't know how to instantiate it.
object DAOFactory {
  //def categoryDAO: CategoryDAO = AnormCategoryDAO
  def userDAO: UserDAO = AnormUserDAO
  def itemDAO: ItemDAO = AnormItemDAO
}
The question is, how do I inject the database and instantiate it?
I prefer not to use Guice or a similar framework for DI. With compile-time DI I can achieve this with something like:
import play.api.db.slick.{DbName, SlickComponents}

trait TablesComponents extends BaseComponent with SlickComponents {
  lazy val dbConf = api.dbConfig[JdbcProfile](DbName("default"))
  lazy val myTable = new MyTable(dbConf.db)
  lazy val otherTable = new OtherTable(dbConf.db)
}
You either have the dependency that is to be injected ready, in which case you can call new AnormCategoryDAO(myDb) directly, or you inject AnormCategoryDAO wherever it is required (which may mean that dependency injection propagates all the way up to the controllers, which are instantiated by Play).
For Example:
class CategoryService @Inject()(categoryDao: CategoryDAO) {
  def findAll() = categoryDao.findAll()
}
Note that in this example I used the abstract type CategoryDAO to refer to the DAO. For this, you'll have to tell the dependency injection framework (typically Guice) which concrete class it should inject (a binding). Alternatively, you could depend on AnormCategoryDAO directly.
How you can define custom bindings is documented here: https://www.playframework.com/documentation/2.5.x/ScalaDependencyInjection
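For instance, a minimal binding might look like this (a sketch; the module still has to be registered, or live in the root package, as described in the docs above):

import com.google.inject.AbstractModule

class Module extends AbstractModule {
  override def configure(): Unit = {
    // Bind the abstract DAO type to its Anorm implementation.
    bind(classOf[CategoryDAO]).to(classOf[AnormCategoryDAO])
  }
}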
Note that there is an alternative approach to dependency injection, called compile-time DI: https://www.playframework.com/documentation/2.5.x/ScalaCompileTimeDependencyInjection

Scala What is this "_1" in type?

I would like to understand this error:
found : row.type (with underlying type _#TableElementType)
required: _1#TableElementType
It looks like I was very close, but what is this "_1" in _1#TableElementType? Can I convert one into the other?
Edit: useful bits of codes for context (Play + Slick):
abstract class GenericDAO[T <: AbstractTable[_]](...) {
  def table: TableQuery[T]
  def insert(model: T#TableElementType) = db run (table += model)
}
trait TableObject[T <: AbstractTable[_]] {
  def rowFromJson(jsObject: JsObject): T#TableElementType
  def dao(driver: JdbcProfile, db: Database): GenericDAO[T]
}
// Controller Action with an instance implementing `tableObject` above:
val tableObject = tableObjectFactory("test")
val row = tableObject.rowFromJson(request.body.asJson.get)
val dao = tableObject.dao(driver, db) // tableObject has a DAO extending GenericDAO
dao.insert(row)
Example of tableObject:
object TestTable extends TableObject[Test] {
  def dao(driver: JdbcProfile, db: Database) = new TestDAO(driver, db)
  def rowFromJson(j: JsObject): TestRow = { TestRow(...) }

  class TestDAO(...) extends GenericDAO[Test](driver, db) { ... }
}
I use a factory to get the right one from the url:
object TableObjectFactory {
  def tableObjectFactory(name: String) = {
    name match {
      case "test"     => TestTable
      case "projects" => ProjectsTable
      case "people"   => PeopleTable
      ...
    }
  }
}
Although it doesn't explain much, it works if I make the DAO parse the request body and insert it, instead of producing the row object separately and passing it to one of the DAO's methods.
I got all kinds of similar errors with names such as _$1#TableElementType and _1$u#TableElementType; as far as I can tell, these are names the compiler generates for existential (wildcard) types: every use of a type like TableObject[_] introduces a fresh anonymous type, so the compiler cannot prove that two of them are the same.
So the solution was to do
val j: JsValue = request.body.asJson.get
val tableObject: TableObject[_] = tableObjectFactory(table)
val dao = tableObject.dao(driver, db)
val res: Future[Int] = dao.insert(j)
where this new insert method now is abstract in GenericDAO, and in the concrete implementations takes a JsValue and parses it, then inserts:
class TestDAO(override val driver: JdbcProfile, override val db: Database)
    extends GenericDAO[Test](driver, db) {

  import this.driver.api._

  val table = TableQuery[Test]

  //def insert(model: TestRow) = db run (table += model) // NO!
  def insert(j: JsValue): Future[Int] = {
    val row = TestRow(
      (j \ "id").as[Int],
      (j \ "name").as[String],
      (j \ "value").as[Float],
      (j \ "other").as[String]
    )
    db run (table += row)
  }
}
At the same time, it makes Play forms completely useless, which is a good thing anyway.
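An alternative fix, not used above but worth sketching: keep the original insert(model: T#TableElementType) and capture the wildcard in a type parameter once, so the row and the DAO share the same T:

// Hypothetical helper; because T is fixed once for both calls, the
// compiler can prove the row type and the DAO's element type agree.
def insertFor[T <: AbstractTable[_]](to: TableObject[T], j: JsObject,
                                     driver: JdbcProfile, db: Database): Future[Int] = {
  val row: T#TableElementType = to.rowFromJson(j)
  to.dao(driver, db).insert(row)
}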

Wrapper catching exceptions occurring on db.run calls

In our project we always inject dbConfigProvider: DatabaseConfigProvider into our bean objects and then run database operations with db.run(someQuery), which returns a Future. How can I write a logging wrapper for db.run that logs all SQL exceptions?
Example:
class SomeBeanImpl @Inject()(dbConfigProvider: DatabaseConfigProvider) {
  private val logger = Logger(getClass)

  def someDBQuery() = {
    db.run(someWrongSqlQuery) // exception raised in future, I need to print it with logger
  }
}
Note:
If I add .onFailure to every db.run call it will mess up my code very badly; that's why I want a single wrapper for all db.run calls.
If I wrap db.run in some function with a different signature, I must change it in many places, which is not the best option. How can I do it implicitly?
You don't need to explicitly create a new wrapper class: you can use the Pimp My Library pattern to add an implicit method that wraps the invocation of db.run and attaches an onFailure callback to the Future:
import org.apache.log4j.Logger // matches the Logger.getLogger usage below
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object MyExtensions {
  class DbExtensions(db: Db) {
    def runAndLog(query: String): Future[String] = {
      val result = db.run(query)
      result.onFailure {
        case e => Logger.getLogger("x").error(s"Exception: $e")
      }
      result
    }
  }

  implicit def dbExtension(db: Db): DbExtensions = new DbExtensions(db)
}

class Db {
  def run(query: String): Future[String] = Future.successful("Hello")
}

object Main extends App { // the original had "extends Application", which no longer exists
  import MyExtensions._
  val db = new Db
  db.runAndLog("hello")
}
For Scala 2.10 and above, this can be shortened significantly using Implicit Classes:
implicit class DbExtensions(val db: Db) {
  def runAndLog(query: String): Future[String] = {
    val result = db.run(query)
    result.onFailure {
      case e => Logger.getLogger("x").error(s"Exception: $e")
    }
    result
  }
}

class Db {
  def run(query: String): Future[String] = Future.successful("Hello")
}

object Main extends App {
  val db = new Db
  db.runAndLog("hello")
}
You can further make DbExtensions extend AnyVal for a small performance optimization (value classes must be defined inside a statically accessible object):

implicit class DbExtensions(val db: Db) extends AnyVal
Make a new class:

case class DBWrapper(db: DatabaseComponent) {
  def run(query: String) = {
    val result = db.run(query)
    result.onFailure { case e => logger.error(e.getMessage, e) }
    result // return the original Future, not the Unit that onFailure yields
  }
}
And replace your db wherever you initialize it with DBWrapper(db).
You can make the conversions back and forth implicit too, although I would not recommend it in this case.
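If you did want those conversions, they would be one-liners (a sketch, echoing the caution above):

implicit def wrap(db: DatabaseComponent): DBWrapper = DBWrapper(db)
implicit def unwrap(wrapper: DBWrapper): DatabaseComponent = wrapper.db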

Verify schema using Slick 3

I am looking to use the Slick 3 framework for a Scala application to manage database interactions. I have been able to automatically generate the necessary table objects using Slick, but I also would like an integration test that verifies that the schemas in the database match the schemas in the objects. This is because sometimes tables get altered without my team being alerted, and so we would prefer to catch the change in an integration test instead of a production application.
One way to do this is to simply run a select query against every table in a test runner. However, I feel there should be a more direct way. Furthermore, it is not clear to me how to systematically run through all the tables defined in a file, other than manually appending each table object to some sequence the test runner iterates over. I notice that there is a schema field, but it only generates create and drop statements.
Any help would be greatly appreciated. Thank you!
EDIT:
Here is my solution, but I was hoping for a better one:
class TablesIT extends FunSuite with BeforeAndAfter with ScalaFutures {
  var db: Database = _

  before { db = Database.forURL( /* personal details */ ) }

  // Object borrowed from http://stackoverflow.com/questions/20262036/slick-query-multiple-tables-databases-with-getting-column-names
  object ResultMap extends GetResult[Map[String, Any]] {
    def apply(pr: PositionedResult) = {
      val rs = pr.rs // <- jdbc result set
      val md = rs.getMetaData
      val res = (1 to pr.numColumns).map { i => md.getColumnName(i) -> rs.getObject(i) }.toMap
      pr.nextRow // <- use Slick's advance method to avoid endless loop
      res
    }
  }

  def testTableHasCols[A <: Table[_]](table: slick.lifted.TableQuery[A]): Unit = {
    whenReady(db.run(table.take(1).result.headOption.asTry)) {
      case Success(t) => t match {
        case Some(r) => logTrace(r.toString)
        case None    => logTrace("Empty table")
      }
      case Failure(ex) => fail("Query exception: " + ex.toString)
    }
  }

  def plainSqlSelect[A](query: String)(implicit gr: GetResult[A]): Future[Seq[A]] = {
    val stmt = sql"""#$query""".as[A]
    db.run(stmt)
  }

  def compareNumOfCols[A <: Table[_]](table: slick.lifted.TableQuery[A]) = {
    val tableName = table.baseTableRow.tableName
    val selectStar = whenReady(db.run(sql"""select * from #$tableName limit 1""".as(ResultMap).headOption)) {
      case Some(m) => m.size
      case None    => 0
    }
    val model = whenReady(db.run(sql"""#${table.take(1).result.statements.head}""".as(ResultMap).headOption)) {
      case Some(m) => m.size
      case None    => 0
    }
    assert(selectStar === model, "The number of columns do not match")
  }

  test("Test table1") {
    testTableHasCols(Table1)
    compareNumOfCols(Table1)
  }

  // And so on for each table
}
I ended up devising a better solution, below. It is more or less the same, and unfortunately I still have to create a test for each table manually, but the method is cleaner, I think. Note, however, that this only works for PostgreSQL because it relies on the information schema; other database systems expose similar metadata in other ways.
class TablesIT extends FunSuite with BeforeAndAfter with ScalaFutures {
  var db: Database = _

  before { db = Database.forURL( /* personal details */ ) }

  def testTableHasCols[A <: Table[_]](table: slick.lifted.TableQuery[A]): Unit = {
    whenReady(db.run(table.take(1).result.headOption.asTry)) {
      case Success(t) => t match {
        case Some(r) => logTrace(r.toString)
        case None    => logTrace("Empty table")
      }
      case Failure(ex) => fail("Query exception: " + ex.toString)
    }
  }

  def compareNumOfCols[A <: Table[_]](table: slick.lifted.TableQuery[A]) = {
    val tableName = table.baseTableRow.tableName
    val selectStar = whenReady(db.run(sql"""select column_name from information_schema.columns where table_name='#$tableName'""".as[String])) {
      case m: Seq[String] => m.size
      case _              => 0
    }
    val model = table.baseTableRow.create_*.map(_.name).toSeq.size
    assert(selectStar === model, "The number of columns do not match")
  }

  test("Test table1") {
    testTableHasCols(Table1)
    compareNumOfCols(Table1)
  }

  // And so on for each table
}
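To avoid hand-writing one test per table, the tables could be listed once and the tests generated in a loop (a sketch; Table2 and Table3 are placeholder names, and FunSuite allows registering tests from the constructor body):

// Still listed manually, but in exactly one place.
val allTables: Seq[slick.lifted.TableQuery[_ <: Table[_]]] =
  Seq(Table1, Table2, Table3)

allTables.foreach { t =>
  test(s"Schema check for ${t.baseTableRow.tableName}") {
    testTableHasCols(t)
    compareNumOfCols(t)
  }
}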

Scala Best Practices: Trait Inheritance vs Enumeration

I'm currently experimenting with Scala and looking for best practices. I found myself with two opposite approaches to solving a single problem. I'd like to know which is better and why, which is more conventional, and whether there are other, better approaches. The second one looks prettier to me.
1. Enumeration-based solution
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

object DBType extends Enumeration {
  val MySql, PostgreSql, H2 = Value

  def fromUrl(url: String) = {
    url match {
      case u if u.startsWith("jdbc:mysql:")      => Some(MySql)
      case u if u.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case u if u.startsWith("jdbc:h2:")         => Some(H2)
      case _ => None
    }
  }
}
case class DBType(typ: DBType.Value) { // the enumeration's value type is DBType.Value
  lazy val driver: Driver = {
    val name = typ match {
      case DBType.MySql      => "com.mysql.jdbc.Driver"
      case DBType.PostgreSql => "org.postgresql.Driver"
      case DBType.H2         => "org.h2.Driver"
    }
    Class.forName(name).newInstance().asInstanceOf[Driver]
  }

  lazy val adapter: DatabaseAdapter = {
    typ match {
      case DBType.MySql      => new MySQLAdapter
      case DBType.PostgreSql => new PostgreSqlAdapter
      case DBType.H2         => new H2Adapter
    }
  }
}
2. Singleton-based solution
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

trait DBType {
  def driver: Driver
  def adapter: DatabaseAdapter
}

object DBType {
  object MySql extends DBType {
    lazy val driver = Class.forName("com.mysql.jdbc.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new MySQLAdapter
  }

  object PostgreSql extends DBType {
    lazy val driver = Class.forName("org.postgresql.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new PostgreSqlAdapter
  }

  object H2 extends DBType {
    lazy val driver = Class.forName("org.h2.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new H2Adapter
  }

  def fromUrl(url: String) = {
    url match {
      case u if u.startsWith("jdbc:mysql:")      => Some(MySql)
      case u if u.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case u if u.startsWith("jdbc:h2:")         => Some(H2)
      case _ => None
    }
  }
}
If you declare a sealed trait DBType, you can pattern match on it with exhaustiveness checking (i.e., Scala will tell you if you forget one case).
Anyway, I dislike Scala's Enumeration, and I'm hardly alone in that. I never use it; if there's something for which an enumeration really is the cleanest solution, it is better to write it in Java, using Java's enums.
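To illustrate the exhaustiveness point, a sketch (the names here are made up for the example):

sealed trait DBKind
case object MySqlKind extends DBKind
case object PostgreSqlKind extends DBKind
case object H2Kind extends DBKind

def driverName(kind: DBKind): String = kind match {
  case MySqlKind      => "com.mysql.jdbc.Driver"
  case PostgreSqlKind => "org.postgresql.Driver"
  case H2Kind         => "org.h2.Driver"
  // deleting any case above makes the compiler warn:
  // "match may not be exhaustive"
}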
For this particular case you don't really need classes for each database type; it's just data. Unless the real case is dramatically more complex, I would use a map and string parsing based solution to minimize the amount of code duplication:
case class DBRecord(url: String, driver: String, adapter: () => DatabaseAdapter)

class DBType(record: DBRecord) {
  lazy val driver = Class.forName(record.driver).newInstance().asInstanceOf[Driver]
  lazy val adapter = record.adapter()
}

object DBType {
  val knownDB = List(
    DBRecord("mysql",      "com.mysql.jdbc.Driver", () => new MySQLAdapter),
    DBRecord("postgresql", "org.postgresql.Driver", () => new PostgreSqlAdapter),
    DBRecord("h2",         "org.h2.Driver",         () => new H2Adapter)
  )

  val urlLookup = knownDB.map(rec => rec.url -> rec).toMap

  def fromURL(url: String) = {
    val parts = url.split(':')
    if (parts.length < 3 || parts(0) != "jdbc") None
    else urlLookup.get(parts(1)).map(rec => new DBType(rec))
  }
}
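Usage would then be along these lines (sketch):

DBType.fromURL("jdbc:postgresql://localhost/mydb") match {
  case Some(dbType) => println(dbType.adapter) // driver and adapter are built lazily
  case None         => sys.error("Unrecognized JDBC URL")
}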
I'd go for the singleton variant, since it allows clearer subclassing. You might also need db-specific overrides, since some queries/subqueries/operators can differ between databases. But I'd try something like this:
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

abstract class DBType(jdbcDriver: String) {
  lazy val driver = Class.forName(jdbcDriver).newInstance().asInstanceOf[Driver]
  def adapter: DatabaseAdapter
}

object DBType {
  object MySql extends DBType("com.mysql.jdbc.Driver") {
    lazy val adapter = new MySQLAdapter
  }

  object PostgreSql extends DBType("org.postgresql.Driver") {
    lazy val adapter = new PostgreSqlAdapter
  }

  object H2 extends DBType("org.h2.Driver") {
    lazy val adapter = new H2Adapter
  }

  def fromUrl(url: String) = {
    url match {
      case _ if url.startsWith("jdbc:mysql:")      => Some(MySql) // these are objects, so no (url) application
      case _ if url.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case _ if url.startsWith("jdbc:h2:")         => Some(H2)
      case _ => None
    }
  }
}