Why do I need to import the driver to use lifted queries? (Scala)

I'm using Slick 2.0 for my interaction with the database.
As recommended, I've added an external connection pool using BoneCP:
val driver = Class.forName(Database.driver)
val ds = new BoneCPDataSource()
ds.setJdbcUrl(Database.jdbcUri)
ds.setUsername(Database.user)
ds.setPassword(Database.password)
Using it, I've created my connection:
val db = scala.slick.jdbc.JdbcBackend.Database.forDataSource(ds)
db withSession { implicit session =>
  (...)
}
Now if I do something like this on my TableQuery object:
provisioning.foreach { (...) }
the compiler says that there is no foreach method.
So I've imported:
import scala.slick.driver.PostgresDriver.simple._
and now everything works well.
What I don't like is that my code is tied to a certain database implementation.
Can I somehow make it read the "db dialect" from the config file?

So now I know how this should be done properly.
I attended a presentation on Slick by Stefan Zeiger, the original author of the library. He pointed out that since PostgresDriver, like the other drivers, is a trait, we can mix it in instead of importing it, and thus dynamically choose the driver that fits our database.
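A minimal sketch of that approach, assuming Slick 2.0's scala.slick.driver.JdbcProfile; the DAL class and the configuredDialect value are illustrative, not part of the original question:
import scala.slick.driver.{H2Driver, JdbcProfile, PostgresDriver}

// Tables are defined against an abstract profile instead of a concrete driver.
class DAL(val profile: JdbcProfile) {
  import profile.simple._

  class Provisioning(tag: Tag) extends Table[(Int, String)](tag, "provisioning") {
    def id = column[Int]("id", O.PrimaryKey)
    def name = column[String]("name")
    def * = (id, name)
  }
  val provisioning = TableQuery[Provisioning]
}

// Pick the concrete driver at runtime, e.g. from a "db dialect" config value.
val dal = configuredDialect match {
  case "postgres" => new DAL(PostgresDriver)
  case "h2"       => new DAL(H2Driver)
}
import dal.profile.simple._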

Related

Spray IO with Slick Use Conf File

I'm using Spray with Slick. Spray is actually very easy to use, and the same goes for Slick. However, the infamous Slick way to connect to a database looks like this:
Database.forURL("jdbc:mysql://localhost:3306/SprayBlog?characterEncoding=UTF-8",
  user = "xxxx", password = "xxx", driver = "com.mysql.jdbc.Driver") withSession { implicit session =>
  (Article.articles.ddl ++ User.users.ddl).create
}
I hate typing this much whenever I make a database connection. I have used the Play-Slick framework before, and Play has application.conf, in which I can store my database connection URL, username and password. I may be wrong, but shouldn't people store their database info in a protected file? My impression is that conf is blocked from outside access.
So is there a way for me to make database calls more easily? And if I do want to put the info in the conf, how can I access it?
With Slick 2.1.0 you can use Database.forConfig().
In application.conf:
db {
  url = "jdbc:mysql://localhost/DatabaseName"
  driver = "com.mysql.jdbc.Driver"
  user = "root"
  password = ""
}
In your database access code:
import scala.slick.driver.MySQLDriver.simple._
import models.Item
import tables.ItemTable
class ItemService {
val items = TableQuery[ItemTable]
def all: List[Item] = Database.forConfig("db") withSession { implicit session: Session =>
items.list
}
}
Store your db in a val, or write a def that abstracts over whatever you feel is repetitive. withSession is good for connection scoping. I don't know Spray's conf files, but they may use Typesafe Config, a library for reading such files; Spray probably also exposes its config through an API.
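For example, a minimal sketch of such an abstraction, reusing the db block and imports above (withDb is an illustrative name, not part of the original answer):
// Create the Database once and reuse it; forConfig reads the "db" block.
val db = Database.forConfig("db")

// Loan pattern: each call borrows a session that is closed afterwards.
def withDb[T](f: Session => T): T = db.withSession(f)

// Usage inside ItemService:
def all: List[Item] = withDb { implicit session =>
  items.list
}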

I can't make a simple example of Scala Play db work

Firstly, I've never worked with a database in the Scala Play Framework. I did some research and found that the only way(?) to work with one is to use plain SQL. Is that so? I wonder, isn't there a way to do it the same way I can in RoR, using models? At least, I found plenty of examples showing, even encouraging, working with plain SQL.
Secondly, I can't compile the code from the official documentation:
import play.api.db._
import play.api.Play.current

val result: Boolean = SQL("Select 1").execute() // SQL is not found
Also, where is SQL located?
Importing anorm._ should fix the issue; SQL is located in the package object anorm.
By the way, SQL does not work without a SQL connection, so wrap it like this:
DB.withConnection { implicit c =>
  SQL("select 1").execute()
}
Have you added the sql dependency to your project as described in the docs?
http://www.playframework.com/documentation/2.2.x/ScalaDatabase
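For completeness, a sketch of the dependency declarations for Play 2.2, assuming the standard jdbc and anorm artifacts exposed by the Play sbt plugin:
// In build.sbt: jdbc provides the connection pool, anorm provides SQL(...)
libraryDependencies ++= Seq(
  jdbc,
  anorm
)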

Configure play-slick and samples

I'm currently trying to use Play! Framework 2.2 and play-slick (master branch).
In the play-slick code I would like to override the driver definition in order to add the Oracle driver (I'm using Slick Extensions). In Config.scala of play-slick I just saw /** Extend this to add driver or change driver mapping */ ...
I'm coming from far, far away (currently reading Programming in Scala), so there's a lot to learn. My questions are:
Can someone explain to me how to extend this Config object? The object is used in other classes... Is the cake pattern useful here?
Talking about the cake pattern: I read the computer-database example provided by play-slick. This sample uses the cake pattern and imports play.api.db.slick.Config.driver.simple._. If I'm using the Oracle driver I cannot use this import, am I wrong? And how can I use the cake pattern to define an implicit session?
Thanks a lot. Waiting for your advice, and still studying the play-slick code at home :)
To extend the Config trait I do not think the cake pattern is required. You should be able to create your Config object like this:
import scala.slick.driver.ExtendedDriver

object MyExtendedConfig extends play.api.db.slick.Config {
  override def driverByName: String => Option[ExtendedDriver] = { name: String =>
    super.driverByName(name) orElse Map("oracledriverstring" -> OracleDriver).get(name)
  }
  lazy val app = play.api.Play.current
  // Bind the driver once via the inherited lookup (the original snippet's
  // `driver()(app)` would be self-referential and fail to compile).
  lazy val driver: ExtendedDriver = super.driver()(app)
}
To be able to use it you only need to import MyExtendedConfig.driver._ instead of play.api.db.slick.Config.driver._. BTW, I see that driverByName could have been a Map instead of a Function, which would make it easier to extend. That change shouldn't break anything, though.
I think Jonas Bonér's old blog is a great place to read about what the cake pattern is (http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di/). My naive understanding is that you have a cake pattern when you have layers that use self types:
trait FooComponent { driver: ExtendedDriver =>
  import driver.simple._

  class Foo extends Table[Int]("") {
    //...
  }
}
There are two use cases for the cake pattern in slick/play-slick: 1) you have tables that reference other tables (as in the computer-database sample); 2) you want control over exactly which database is used at which time, or you work with many different database types. As long as you only have two different DBs (one for prod and one for test), you do not really need the cake pattern; the Config exists precisely for that case.
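To make such a component concrete, you mix in a driver that satisfies the self type. A minimal sketch, assuming a Slick 1.x-era hierarchy where PostgresDriver is a trait extending ExtendedDriver (the DAL name is illustrative):
// Mixing in a concrete driver trait satisfies FooComponent's self type.
object DAL extends FooComponent with scala.slick.driver.PostgresDriver

// Client code then imports everything from the assembled object:
// import DAL._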
Hope this answers your questions, and good luck reading Programming in Scala (loved that book :)

Close Connection for Mongodb using Casbah API

I can't find any useful information about how to close a connection for MongoDB using the Casbah API. I have defined multiple methods, and in each method I need to establish a connection with MongoDB and close it once the work is done. I am using Scala.
One of the methods looks like this:
import com.mongodb.casbah.Imports._
import com.mongodb.casbah.MongoConnection

def index = {
  val mongoConn = MongoConnection(configuration("hostname"))
  val log = mongoConn("ab")("log")
  val cursor = log.find()
  val data = for (x <- cursor) yield x.getAs[BasicDBObject]("message").get
  html.index(data.toList)
  // mongoConn.close() <-- here I want to close the connection, but .close() is not working
}
It is unclear from your question why exactly close is not working. Does it throw an exception, does it fail to compile, or does it have no effect?
But since MongoConnection is a thin wrapper over com.mongodb.Mongo, you can work with the underlying Mongo directly, just like in the plain old Java driver:
val mongoConn = MongoConnection(configuration("hostname"))
mongoConn.underlying.close()
Actually, that's exactly how close is implemented in Casbah.
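Since every method opens its own connection, a loan-pattern helper keeps the close in one place. A minimal sketch (withMongo is an illustrative name; configuration is the lookup from the question):
import com.mongodb.casbah.Imports._
import com.mongodb.casbah.MongoConnection

// Opens a connection, lends the "ab" database to f, and always closes it.
def withMongo[T](f: MongoDB => T): T = {
  val conn = MongoConnection(configuration("hostname"))
  try f(conn("ab"))
  finally conn.underlying.close()
}

// Usage:
def index = withMongo { db =>
  val data = for (x <- db("log").find()) yield x.getAs[BasicDBObject]("message").get
  html.index(data.toList)
}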
Try using .close instead. If a function doesn't take arguments in Scala, you sometimes call it without parentheses.
EDIT: I had wrong information, edited to include correct information + link.

Playframework evolutions files compatible with both postgres and h2

I've been developing a web site with the Play framework (scala) using H2 as a backend. Testing is nicely integrated, especially with the ability to run tests against an in-memory H2 db.
Now I'd like to move my datastore over to Postgres for various convenience reasons. This leaves me with a problem: how do I continue to test and retain the simplicity of a fresh db for each test run? I see on the net that some people manage to run live against Postgres and test against H2. However, the two are not entirely compatible at the SQL level (even with H2 in PostgreSQL compatibility mode); for instance, SERIAL, BIGSERIAL and BYTEA are not supported by H2.
Can I do this by restricting myself to a compatible intersection of both dialects, or is there another technique I'm missing?
Thanks for any help.
Alex
I know this is an older post, but it looks like there still isn't an obvious solution a few years later. As a short-term fix, in Play 2.4.x-2.5.x (so far only tested there), you can alter the way evolutions get applied during tests by creating a custom evolutions reader:
package support

import java.io.{ByteArrayInputStream, InputStream}
import java.nio.charset.StandardCharsets

import play.api.db.evolutions.{ClassLoaderEvolutionsReader, Evolutions, ResourceEvolutionsReader}

import scala.io.Source
import scala.util.Try

class EvolutionTransformingReader(
    classLoader: ClassLoader = classOf[ClassLoaderEvolutionsReader].getClassLoader,
    prefix: String = "")
  extends ResourceEvolutionsReader {

  def loadResource(db: String, revision: Int): Option[InputStream] =
    for {
      stream <- Option(classLoader.getResourceAsStream(prefix + Evolutions.resourceName(db, revision)))
      lines  <- Try(Source.fromInputStream(stream).getLines).toOption
      updated = lines map convertPostgresLinesToH2
    } yield convertLinesToInputStream(updated)

  // Matches "ALTER TABLE t RENAME COLUMN a TO b;" (Postgres syntax).
  private val ColumnRename = """(?i)\s*ALTER TABLE (\w+) RENAME COLUMN (\w+) TO (\w+);""".r

  // Rewrites the Postgres rename into its H2 equivalent; other lines pass through.
  private def convertPostgresLinesToH2(line: String): String =
    line match {
      case ColumnRename(tableName, oldColumn, newColumn) =>
        s"""ALTER TABLE $tableName ALTER COLUMN $oldColumn RENAME TO $newColumn;"""
      case _ => line
    }

  private def convertLinesToInputStream(lines: Iterator[String]): InputStream =
    new ByteArrayInputStream(lines.mkString("\n").getBytes(StandardCharsets.UTF_8))
}
Then pass it into the place where you apply evolutions during your tests:
Evolutions.applyEvolutions(registry.database, new EvolutionTransformingReader())
Note that the reader is still pretty naive (it assumes each SQL statement fits on one line, which is not guaranteed), but it should be enough to get anyone started.
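For context, a sketch of driving the reader from a test, assuming Play 2.4+'s database test helpers (Databases.inMemory and the Evolutions companion object):
import play.api.db.Databases
import play.api.db.evolutions.Evolutions

// Spin up an in-memory database, apply the transformed evolutions, test, clean up.
val database = Databases.inMemory()
try {
  Evolutions.applyEvolutions(database, new support.EvolutionTransformingReader())
  // ... run tests against `database` here ...
  Evolutions.cleanupEvolutions(database)
} finally {
  database.shutdown()
}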
As evolution files use SQL directly, you may have issues unless you limit yourself to a common, cross-db-compatible subset of SQL.
There is no real solution to that, but you can still use a fresh db for testing. Just set the following:
%test.jpa.ddl=create-drop
%test.db.driver=org.postgresql.Driver
%test.db=<jdbc url>
# etc.
This should create a new Postgres connection for the test run, create the db from scratch, run the evolutions, run the tests, and remove all data once done.