Always execute code at beginning and end of method - scala

I'm connecting to MongoDB using the following code:
def insert() = {
  val mc = new com.mongodb.MongoClient("localhost", 27017);
  val db = mc.getDatabase("MyDb");
  //My insert code
  mc.close();
} //> insert: ()Unit
I have various methods which open and close the connection.
Can the lines:
val mc = new com.mongodb.MongoClient("localhost", 27017);
val db = mc.getDatabase("MyDb");
mc.close();
be extracted so that they are implicitly called at the beginning and end of each method?
Do Scala implicits cater for this scenario, or is reflection required?

A common pattern would be to use a "loan" method to which you pass a function that accepts a DB and does something with it. That method can take care of creating the client, etc., and execute the code within.
def withDB[A](block: DB => A): A = {
  val mc = new com.mongodb.MongoClient("localhost", 27017);
  val db = mc.getDatabase("MyDb");
  try block(db) finally mc.close()
}
And use it:
def insert() = withDB { db =>
  // do something with `db`
}
However, a look at the documentation says:
A MongoDB client with internal connection pooling. For most applications, you should have one MongoClient instance for the entire JVM.
Which makes the above approach look like a bad idea, assuming that is the version you are using. I can definitely see some concurrency issues trying to do this and having too many connections open.
But you can follow the same pattern, stuffing the connection creation into a singleton object. You would need to manage closing the client when your application shuts down, however (see the sketch after the snippet below).
object Mongo {
  lazy val mc = new com.mongodb.MongoClient("localhost", 27017);
  lazy val db = mc.getDatabase("MyDb");
  def withDB[A](block: DB => A): A = block(db)
  def close(): Unit = mc.close()
}
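For the shutdown side, one minimal sketch (not from the original answer) is to register a JVM shutdown hook that closes the pooled client; in a real application you would more likely tie this into your framework's lifecycle instead:
object MongoShutdown {
  // Close the shared client when the JVM exits.
  def install(): Unit = {
    sys.addShutdownHook {
      Mongo.close()
    }
  }
}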

You can define a method that executes some 'work' function, e.g.:
def withMongoDb[T](work: DB => T): T = {
  val mc = new com.mongodb.MongoClient("localhost", 27017)
  // I don't actually know what type `db` is so I'm calling it `DB`
  val db: DB = mc.getDatabase("MyDb")
  try { work(db) }
  finally { mc.close() }
}
Then you can use it like:
withMongoDb { db =>
  db.insert(...)
  db.query(...)
}
This is a similar approach to the one used in Slick, pre-3.0, i.e. withSession and withTransaction.
Now, if you implemented some convenience methods, e.g.:
def insertStuff(values: Seq[Int])(implicit db: DB) = {
  db.insert(values)
}
Then you could mark the db as implicit in your withMongoDb call, effectively making sure you never accidentally call insertStuff outside of that block.
withMongoDb { implicit db =>
  insertStuff(Seq(1,2,3,4))
}
insertStuff(Seq(1,2,3,4)) // compile error

Instead of implicits you could do something like this:
def mongoConn(ip: String, port: Int, dbName: String): (Database => Unit) => Unit = {
  f => {
    val mc = new com.mongodb.MongoClient(ip, port)
    val db = mc.getDatabase(dbName)
    try f(db) finally mc.close() // close the client even if f throws
  }
}
val conn = mongoConn("localhost", 27017, "MyDb")
conn(db => {
  //insert code
})

Why a Thread.sleep or closing the connection is required after waiting for a remove call to complete?

I'm again asking you to share your wisdom with me, the Scala padawan!
I'm playing with ReactiveMongo in Scala, and while writing a test using ScalaTest I faced the following issue.
First the code:
"delete" when {
"passing an existent id" should {
"succeed" in {
val testRecord = TestRecord(someString)
Await.result(persistenceService.persist(testRecord), Duration.Inf)
Await.result(persistenceService.delete(testRecord.id), Duration.Inf)
Thread.sleep(1000) // Why do I need that to make the test succeeds?
val thrownException = intercept[RecordNotFoundException] {
Await.result(persistenceService.read(testRecord.id), Duration.Inf)
}
thrownException.getMessage should include(testRecord._id.toString)
}
}
}
And here are the read and delete methods, along with the code initializing the connection to the db (part of the constructor):
class MongoPersistenceService[R](url: String, port: String, databaseName: String, collectionName: String) {

  val driver = MongoDriver()
  val parsedUri: Try[MongoConnection.ParsedURI] = MongoConnection.parseURI("%s:%s".format(url, port))
  val connection: Try[MongoConnection] = parsedUri.map(driver.connection)
  val mongoConnection = Future.fromTry(connection)

  def db: Future[DefaultDB] = mongoConnection.flatMap(_.database(databaseName))
  def collection: Future[BSONCollection] = db.map(_.collection(collectionName))

  def read(id: BSONObjectID): Future[R] = {
    val query = BSONDocument("_id" -> id)
    val readResult: Future[R] = for {
      coll <- collection
      record <- coll.find(query).requireOne[R]
    } yield record
    readResult.recover {
      case NoSuchResultException => throw RecordNotFoundException(id)
    }
  }

  def delete(id: BSONObjectID): Future[Unit] = {
    val query = BSONDocument("_id" -> id)
    // first read then call remove. Read will throw if not present
    read(id).flatMap { (_) => collection.map(coll => coll.remove(query)) }
  }
}
So to make my test pass, I had to add a Thread.sleep right after waiting for the delete to complete. Knowing this is evil, usually punished by many lashes of the whip, I want to learn and find the proper fix here.
While trying other things, I found that, instead of waiting, entirely closing the connection to the db also did the trick...
What am I misunderstanding here? Should a connection to the db be opened and closed for each call to it, rather than performing many actions (adding, removing, updating records) over one connection?
Note that everything works fine when I remove the read call in my delete function. Also, by closing the connection, I mean calling close on the MongoDriver from my test and also stopping and starting again the embedded Mongo which I'm using in the background.
Thanks for helping, guys.
Warning: this is a blind guess; I've no experience with MongoDB on Scala.
You may have forgotten to flatMap
Take a look at this bit:
collection.map(coll => coll.remove(query))
Since collection is Future[BSONCollection] per your code, and remove returns Future[WriteResult] per the docs, the actual type of this expression is Future[Future[WriteResult]].
Now, you have annotated your function as returning Future[Unit]. Scala often produces a Unit return value by throwing away possibly meaningful values, which is what happens in your case:
read(id).flatMap { (_) =>
  collection.map(coll => {
    coll.remove(query) // we didn't wait for removal
    ()                 // before returning unit
  })
}
So your code should probably be
read(id).flatMap(_ => collection.flatMap(_.remove(query).map(_ => ())))
Or a for-comprehension:
for {
  _    <- read(id)
  coll <- collection
  _    <- coll.remove(query)
} yield ()
You can make Scala warn you about discarded values by adding a compiler flag (assuming SBT):
scalacOptions += "-Ywarn-value-discard"
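To see what the flag catches, here is a minimal, self-contained illustration (made-up method, not from the question): the Future produced in the body is silently discarded to satisfy the declared Unit return type, and -Ywarn-value-discard reports it.
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def deleteQuietly(): Unit = {
  Future { 42 } // non-Unit value discarded here; the compiler warns with -Ywarn-value-discard
}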

Implicits over function closures in Scala

I've been trying to understand implicits in Scala and to use them at work. One particular place I'm stuck is trying to pass implicits in the following manner:
object DBUtils {
  case class DB(val jdbcConnection: Connection) {
    def execute[A](op: => Unit): Any = {
      implicit val con = jdbcConnection
      op
    }
  }

  object DB {
    def SQL(query: String)(implicit jdbcConnection: Connection): PreparedStatement = {
      jdbcConnection.prepareStatement(query)
    }
  }

  val someDB1 = DB(jdbcConnection)
  val someDB2 = DB(jdbcConnection2)
  val someSQL = SQL("SOME SQL HERE")

  someDB1.execute{someSQL}
  someDB2.execute{someSQL}
}
Currently I get an exception saying that the SQL() function cannot find the implicit jdbcConnection. What gives, and what do I do to make it work in the format I need?
PS: I'm on a slightly older version of Scala (2.10.4) and cannot upgrade.
Edit: Changed the problem statement to be clearer. I cannot use a single implicit connection in scope, since I can have multiple DBs with different Connections.
At the point where SQL is invoked there is no implicit value of type Connection in scope.
In your code snippet the declaration of jdbcConnection is missing, but if you change it from
val jdbcConnection = //...
to
implicit val jdbcConnection = // ...
then you will have an implicit instance of Connection in scope and the compiler should be happy.
Try this:
implicit val con = jdbcConnection // define implicit here
val someDB = DB(jdbcConnection)
val someSQL = SQL("SOME SQL HERE") // need implicit here
someDB.execute{someSQL}
The implicit must be defined in the scope where you need it. (In reality, it's more complicated, because there are rules for looking elsewhere, as you can find in the documentation. But the simplest thing is to make sure the implicit is available in the scope where you need it.)
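As a tiny, self-contained illustration of that rule (with a made-up Conn type standing in for java.sql.Connection, not from the question):
case class Conn(name: String) // stand-in for a real Connection

def sql(query: String)(implicit c: Conn): String = s"[${c.name}] $query"

def hasImplicitInScope(): String = {
  implicit val conn: Conn = Conn("db1")
  sql("SELECT 1") // compiles: `conn` is the implicit Conn in scope at this call site
}

def noImplicitInScope(): String = {
  // No implicit Conn is in scope here, so a bare sql("SELECT 1") would not compile;
  // either bring one into scope or pass the argument explicitly:
  sql("SELECT 1")(Conn("db2"))
}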
Make the following changes:
1) Have the execute method take a function from Connection to Unit.
2) Instead of someDB1.execute{someSQL}, use someDB1.execute{implicit con => someSQL}.
object DBUtils {
  case class DB(val jdbcConnection: Connection) {
    def execute[A](op: Connection => Unit): Any = {
      val con = jdbcConnection
      op(con)
    }
  }
Here is the complete code.
object DB {
  def SQL(query: String)(implicit jdbcConnection: Connection): PreparedStatement = {
    jdbcConnection.prepareStatement(query)
  }
}

val someDB1 = DB(jdbcConnection)
val someDB2 = DB(jdbcConnection2)
// Build the statement only once a connection is supplied, instead of `val someSQL = SQL(...)`,
// which would require an implicit Connection right here:
def someSQL(implicit jdbcConnection: Connection) = SQL("SOME SQL HERE")

someDB1.execute{implicit con => someSQL}
someDB2.execute{implicit con => someSQL}

Database transactions in Play framework scala applications (anorm)

I am developing an application using the Play framework and Scala. I am using Anorm for the data-access layer, and I've got a problem I could not solve.
Brief: I want methods in data-access objects (DAOs) to work inside transactions as well as when called alone.
Details:
I have a data-access layer consisting of classes with methods that only execute particular SQL against the database. Typically they look like:
def list() = DB.withConnection { implicit cn =>
  ...
}
Now I want some methods to be executed in a transaction scope, like traditional select-then-update service methods, but still be able to run them alone. So what I have in mind is something like this:
class Service {
  def fooTransacted() = {
    inTransaction {
      val old = dao.select(id = 2)
      val newObj = old.copy(isActive = true)
      dao.update(newObj)
    }
  }

  def fooSingle() = {
    dao.select(id = 2)
  }
}
I tried several approaches, but could not come up with a solution.
What about
object Dao {
  def foo(id: Long)(implicit connection: Connection) = {
    SQL("select * from foo where id={id}").on('id -> id).as(...)
  }
}

class Service {
  def withConnection = {
    DB.withConnection { implicit connection =>
      Dao.foo(1)
      Dao.foo(2)
    }
  }

  def withTransaction = {
    DB.withTransaction { implicit connection =>
      Dao.foo(1)
      Dao.foo(2)
    }
  }
}
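And if a DAO method also needs to be callable on its own, outside any service method, you can wrap just that call (a small sketch along the same lines, not part of the original answer):
// Standalone call: supply the implicit connection ad hoc, with no surrounding transaction.
def fooAlone = DB.withConnection { implicit connection =>
  Dao.foo(1)
}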
The solution I've seen used elsewhere (principally in Squeryl) is roughly the following:
import java.sql.Connection

object Helper {
  private val conn: ThreadLocal[Connection] = new ThreadLocal

  def inTransaction[X](f: Connection => X) = {
    conn.get() match {
      case null =>
        DB.withConnection { newConn =>
          conn.set(newConn)
          try f(newConn)
          finally conn.set(null)
        }
      case c => f(c)
    }
  }
}
This way, the inTransaction method is re-entrant, so there's no harm in calling it redundantly inside dao.select.
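For instance, a DAO written against this helper might look like the following (a sketch with a made-up table and columns; it uses Anorm's SQL API in the same style as the question):
import anorm._
import anorm.SqlParser._

object Dao {
  // Called alone, each method opens (and closes) its own connection;
  // called inside Helper.inTransaction, it reuses the caller's connection.
  def select(id: Long): Option[String] = Helper.inTransaction { implicit conn =>
    SQL("select name from foo where id = {id}").on('id -> id).as(scalar[String].singleOpt)
  }

  def update(id: Long, name: String): Int = Helper.inTransaction { implicit conn =>
    SQL("update foo set name = {name} where id = {id}").on('name -> name, 'id -> id).executeUpdate()
  }
}

// Used alone:
Dao.select(2)

// Used inside a service-level block: both calls share the same connection.
Helper.inTransaction { _ =>
  val old = Dao.select(2)
  Dao.update(2, "updated")
}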
If you prefer, you can expose conn via a public method, and change the signature of f to => X - you lose some compile-time safety, but the API is a little cleaner.
One pitfall with this approach is that connections are tied to threads, which may cause problems if you're using futures or actors, and a process can resume on a different thread (this is a tricky area anyway, but one you should be aware of).
You might want to look into Squeryl too - it may already do what you need.

Can a scala self-type be satisfied via delegation?

Suppose I have (and this is rather contrived)
trait DbConnection {
  val dbName: String
  val dbHost: String
}

class Query {
  self: DbConnection =>

  def doQuery(sql: String) {
    // connect using dbName, dbHost
    // perform query
  }
}

class HasADbConnection(override val dbName: String,
                       override val dbHost: String) extends DbConnection {
  self =>

  def doSomething() {
    doSomethingElseFirst()
  }

  def doSomethingElseFirst() = {
    val query = new Query() with DbConnection {
      override val dbName = self.dbName
      override val dbHost = self.dbHost
    }
    query.doQuery("")
  }
}
Is there a way to avoid the redundant "override val dbName = self.dbName, override val dbHost = self.dbHost" in the new Query() creation, and instead indicate that the new Query object should inherit from / delegate to the HasADbConnection instance for these fields?
I realize it may be more appropriate for Query to take a DbConnection as a constructor argument. I'm interested though in other ways of satisfying the Query self-type. Perhaps there is no way of propagating the HasADbconnection fields onto the new Query instance, which is a completely valid answer.
Not sure exactly what you are trying to do, but this seems like a possible match to your intent:
// Assumes definitions roughly like: trait B { val name: String } and
// class A { self: B => ... }, with C extending B (not shown in this excerpt).
trait C extends B {
  def myA = new A() with C { // Note that this could be "with B" rather than "with C" and it would still work.
    val name: String = C.this.name // explicit type not actually required - String type can be inferred.
  }
}
then, for example:
scala> val myC = new C() { val name = "myC-name" }
myC: C = $anon$1#7f830771
scala> val anA = myC.myA
anA: A with C = C$$anon$1#249c38d5
scala> val aName = anA.name
aName: String = myC-name
Hopefully that can at least help guide your eventual solution. Might be able to give further help if you clarify what you want to do (or why you want to do it) further.
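Mapped onto the question's own types, the same idea could look roughly like this (a sketch, not from the answer): a small trait inside HasADbConnection forwards the fields once, so the per-field overrides disappear at the creation site.
class HasADbConnection(override val dbName: String,
                       override val dbHost: String) extends DbConnection { self =>

  // Forward the connection fields from the enclosing instance exactly once.
  trait DelegatedConnection extends DbConnection {
    override val dbName = self.dbName
    override val dbHost = self.dbHost
  }

  def doSomethingElseFirst() = {
    val query = new Query() with DelegatedConnection // no redundant overrides here
    query.doQuery("")
  }
}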
EDIT - after update to question:
I suggest you may be thinking about this the wrong way. I would rather not tie the Query class down to knowing how to forge a connection. Instead, either pass a ready-made connection as a parameter to the call that uses it, or (as shown below) set up the Query class to supply a function from a connection to a result, then call that function using a connection established elsewhere.
Here is how I would think about solving this (note that this example doesn't create an actual DB connection per se; I just take your DbConnection type at face value - really what you have defined is more of a 'DbConnConfig' type):
trait DbConnection {
  val dbName: String
  val dbHost: String
}

class Query(sql: String) {
  def doQuery: DbConnection => String = { conn: DbConnection =>
    // No need here to know how to: connect using dbName, dbHost
    // perform query, using provided connection:
    s"${conn.dbName}-${sql}-${conn.dbHost}" // <- Dummy implementation only here, so you can, for example, try it out in the REPL.
  }
}

class HasADbConnection(override val dbName: String,
                       override val dbHost: String) extends DbConnection {

  // You can create your actual connection here...
  val conn = makeConnection

  def doSomething(query: Query) = {
    // ... or here, according to your program's needs.
    val conn = makeConnection
    query.doQuery(conn)
  }

  def makeConnection = this // Not a real implementation, just a quick cheat for this example.
}
In reality, doQuery (which could be named better) should have a type of DbConnection => ResultSet or similar. Example usage:
scala> val hasConn = new HasADbConnection("myDb", "myHost")
hasConn: HasADbConnection = HasADbConnection#6c1e5d2f
scala> hasConn.doSomething(new Query("#"))
res2: String = myDb-#-myHost

How could I know whether a database table exists in ScalaQuery

I'm trying ScalaQuery, and it is really amazing. I could define the database table using a Scala class, and query it easily.
But I would like to know how, in the following code, I could check whether a table exists, so I won't call 'Table.ddl.create' twice and get an exception when I run this program twice.
object Users extends Table[(Int, String, String)]("Users") {
  def id = column[Int]("id")
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id ~ first ~ last
}

object Main {
  val database = Database.forURL("jdbc:sqlite:sample.db", driver = "org.sqlite.JDBC")

  def main(args: Array[String]) {
    database withSession {
      // How could I know the table Users is already in the DB?
      if ( ??? ) {
        Users.ddl.create
      }
    }
  }
}
ScalaQuery version 0.9.4 includes a number of helpful SQL metadata wrapper classes in the org.scalaquery.meta package, such as MTable:
http://scalaquery.org/doc/api/scalaquery-0.9.4/#org.scalaquery.meta.MTable
In the test code for ScalaQuery, we can see examples of these classes being used. In particular, see org.scalaquery.test.MetaTest.
I wrote this little function to give me a map of all the known tables, keyed by table name.
import org.scalaquery.meta.MTable

def makeTableMap(dbsess: Session): Map[String, MTable] = {
  val tableList = MTable.getTables.list()(dbsess)
  val tableMap = tableList.map { t => (t.name.name, t) }.toMap
  tableMap
}
So now, before I create an SQL table, I can check "if (!tableMap.contains(tableName))".
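Putting that together with the Users table from the question, the guarded create might look roughly like this (a sketch against ScalaQuery 0.9.4; it passes the session explicitly, in the same style as the Play example further below):
database withSession { session: Session =>
  // Create the table only if the metadata lookup doesn't already know it.
  if (!makeTableMap(session).contains("Users"))
    Users.ddl.create(session)
}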
This thread is a bit old, but maybe someone will find this useful. All my DAOs include this:
def create = db withSession {
  if (!MTable.getTables.list.exists(_.name.name == MyTable.tableName))
    MyTable.ddl.create
}
Here's a full solution for the Play Framework that checks on application start, using a PostgreSQL DB:
import globals.DBGlobal
import models.UsersTable
import org.scalaquery.meta.MTable
import org.scalaquery.session.Session
import play.api.GlobalSettings
import play.api.Application

object Global extends GlobalSettings {

  override def onStart(app: Application) {
    DBGlobal.db.withSession { session: Session =>
      import org.scalaquery.session.Database.threadLocalSession
      import org.scalaquery.ql.extended.PostgresDriver.Implicit._
      if (!makeTableMap(session).contains("tableName")) {
        UsersTable.ddl.create(session)
      }
    }
  }

  def makeTableMap(dbsess: Session): Map[String, MTable] = {
    val tableList = MTable.getTables.list()(dbsess)
    val tableMap = tableList.map { t => (t.name.name, t) }.toMap
    tableMap
  }
}
You can also do this with java.sql.DatabaseMetaData (an interface). Depending on your database, more or fewer of its functions might be implemented.
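For example, a plain JDBC check along those lines might look like this (a sketch; the JDBC URL is made up, and note that some databases store table names upper- or lower-cased in their metadata):
import java.sql.DriverManager

def tableExists(jdbcUrl: String, tableName: String): Boolean = {
  val conn = DriverManager.getConnection(jdbcUrl)
  try {
    // DatabaseMetaData.getTables(catalog, schemaPattern, tableNamePattern, types)
    val rs = conn.getMetaData.getTables(null, null, tableName, Array("TABLE"))
    rs.next() // at least one matching table was found
  } finally conn.close()
}

// e.g. tableExists("jdbc:sqlite:sample.db", "Users")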
See also the related discussion here. I personally prefer hezamu's suggestion and extend it as follows to keep it DRY:
def createIfNotExists(tables: TableQuery[_ <: Table[_]]*)(implicit session: Session) {
  tables foreach { table =>
    if (MTable.getTables(table.baseTableRow.tableName).list.isEmpty) table.ddl.create
  }
}
Then you can just create your tables with the implicit session:
db withSession { implicit session =>
  createIfNotExists(table1, table2, ..., tablen)
}
You can define the following method in your DAO implementation (taken from "Slick MTable.getTables always fails with Unexpected exception [JdbcSQLException: Invalid value 7 for parameter columnIndex [90008-60]]"), which gives you true or false depending on whether there is a defined table in your db:
def checkTable(): Boolean = {
  val action = MTable.getTables
  val future = db.run(action)
  val retVal = future map { result =>
    result map { x => x }
  }
  val x = Await.result(retVal, Duration.Inf)
  x.nonEmpty
}
Or you can check whether some "GIVENTABLENAME" exists using println:
def printTable() = {
  val q = db.run(MTable.getTables)
  println(Await.result(q, Duration.Inf).toList(0)) // prints the first MTable element
  println(Await.result(q, Duration.Inf).toList(1)) // prints the second MTable element
  println(Await.result(q, Duration.Inf).toList.toString.contains("MTable(MQName(public.GIVENTABLENAME_pkey),INDEX,null,None,None,None)"))
}
Don't forget to add:
import slick.jdbc.meta._
Then call the methods from anywhere with the usual @Inject(). This is using Play 2.4 and play-slick 1.0.0.
Cheers,