How to use Anorm outside of Play? (Scala)

How do you use Anorm outside of Play in Scala? The Anorm documentation for Play simply uses something like:
DB.withConnection { implicit c =>
  val result: Boolean = SQL("Select 1").execute()
}
The DB object is only for Play. How do you use Anorm alone without using Play?

There is no need for the DB object (it's part of Play JDBC, not Anorm). Anorm works as long as you provide it a connection as an implicit:
implicit val con: java.sql.Connection = ??? // whatever you want to resolve connection
SQL"SELECT * FROM Table".as(...)
You can resolve the JDBC connection in many ways: plain DriverManager.getConnection, JNDI, and so on.
As for the dependency, it's easy to add in SBT; see How to declare dependency on Play's Anorm for a standalone application?
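For example, a minimal standalone program could look like this (the H2 in-memory URL and the test table are assumptions for illustration):

import java.sql.{Connection, DriverManager}
import anorm._

object StandaloneAnorm extends App {
  // Resolve a plain JDBC connection yourself; any JDBC URL works.
  implicit val con: Connection =
    DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")

  try {
    SQL("CREATE TABLE test (id INT)").execute()
    SQL("INSERT INTO test (id) VALUES (1)").executeUpdate()
    val count = SQL("SELECT COUNT(*) AS c FROM test").as(SqlParser.long("c").single)
    println(s"count = $count")
  } finally con.close()
}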

You could also emulate the DB object as follows (I haven't tried this, though):
import java.sql.Connection

object DB {
  def withConnection[A](block: Connection => A): A = {
    // ConnectionPool comes from a pooling library (e.g. ScalikeJDBC's ConnectionPool)
    val connection: Connection = ConnectionPool.borrow()
    try {
      block(connection)
    } finally {
      connection.close()
    }
  }
}
Taken from https://github.com/TimothyKlim/anorm-without-play/blob/master/src/main/scala/Main.scala
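It can then be used just like the Play helper:

DB.withConnection { implicit c =>
  val result: Boolean = SQL("Select 1").execute()
}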

Documenting the code that works for me below:
Entry to include in dependencies in build.sbt:
// https://mvnrepository.com/artifact/org.playframework.anorm/anorm
libraryDependencies += "org.playframework.anorm" %% "anorm" % "2.6.7"
Write helper classes:
import java.sql.DriverManager
import javax.inject.Singleton
import scala.util.Try

@Singleton
class DBUtils {
  val schema = AppConfig.defaultSchema

  def withDefaultConnection(sqlQuery: SqlQuery): Try[Boolean] = {
    // could replace with DBCP, not got a chance yet
    val conn = DriverManager.getConnection(AppConfig.dbUrl, AppConfig.dbUser, AppConfig.dbPassword)
    val result = Try(sqlQuery.execute()(conn))
    conn.close()
    result
  }
}

object DBUtils extends DBUtils
Next, any query can use the withDefaultConnection method to execute:
def saveReviews(listOfReviews: List[Review]): Try[Boolean] = {
  val query = SQL(
    s"""insert into aws.reviews
       |  ( reviewerId,
       |    asin,
       |    overall,
       |    summary,
       |    unixReviewTime,
       |    reviewTime
       |  )
       |values ${listOfReviews.mkString(",")}""".stripMargin)
  //println(query.toString())
  DBUtils.withDefaultConnection(query)
}
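Since the values are spliced in with mkString, this relies on Review.toString producing valid SQL value tuples and is open to injection. A parameterized alternative is Anorm's BatchSql; below is a hedged sketch with the column list shortened and the Review field names assumed:

import anorm.{BatchSql, NamedParameter}

// Sketch only: assumes reviews is non-empty and that Review has
// reviewerId/asin fields of JDBC-supported types.
def saveReviewsBatch(reviews: List[Review])(implicit conn: java.sql.Connection): Array[Int] = {
  val params: List[Seq[NamedParameter]] = reviews.map { r =>
    Seq[NamedParameter]("reviewerId" -> r.reviewerId, "asin" -> r.asin)
  }
  BatchSql(
    "insert into aws.reviews (reviewerId, asin) values ({reviewerId}, {asin})",
    params.head,
    params.tail: _*
  ).execute()
}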


How to pass an array to a slick SQL plain query?
I tried as follows but it fails:
// "com.typesafe.slick" %% "slick" % "3.3.2", // latest version
val ids = Array(1, 2, 3)
db.run(sql"""select name from person where id in ($ids)""".as[String])
Error: could not find implicit value for parameter e: slick.jdbc.SetParameter[Array[Int]]
However this ticket seems to say that it should work:
https://github.com/tminglei/slick-pg/issues/131
Note: I am not interested in the following approach:
db.run(sql"""select name from person where id in #${ids.mkString("(", ",", ")")}""".as[Int])
The issue you linked points to a commit which adds this:
def mkArraySetParameter[T: ClassTag](/* ... */): SetParameter[Seq[T]]
def mkArrayOptionSetParameter[T: ClassTag](/* ... */): SetParameter[Option[Seq[T]]]
Note that they are not implicit.
You'll need to do something like
implicit val setIntSeq: SetParameter[Seq[Int]] = mkArraySetParameter[Int](...)
(note that it yields a SetParameter[Seq[Int]], so bind ids.toSeq rather than the raw Array) and make sure that is in scope when you construct your sql"..." string.
I met the same problem and searched around.
I resolved it with an implicit val like this:
implicit val strListParameter: slick.jdbc.SetParameter[List[String]] =
  slick.jdbc.SetParameter[List[String]] { (param, pointedParameters) =>
    pointedParameters.setString(f"{${param.mkString(", ")}}")
  }
Put it into your slick-pg profile and import it along with the other vals wherever it's needed.
Or, more strictly, like this:
implicit val strListParameter: slick.jdbc.SetParameter[List[String]] =
  slick.jdbc.SetParameter[List[String]] { (param, pointedParameters) =>
    pointedParameters.setObject(param.toArray, java.sql.Types.ARRAY)
  }

implicit val strSeqParameter: slick.jdbc.SetParameter[Seq[String]] =
  slick.jdbc.SetParameter[Seq[String]] { (param, pointedParameters) =>
    pointedParameters.setObject(param.toArray, java.sql.Types.ARRAY)
  }
and use the val like:
val entries: Seq[String]
val query = {
  sql"""select ... from xxx
        where entry = ANY($entries)
        order by ...
     """.as[(Column, Types, In, Here)]
}

How to stream Anorm large query results to client in chunked response with Play 2.5

I have a pretty large result set (60k+ records columns) that I am pulling from a database and parsing with Anorm (though I can use Play's default data access module that returns a ResultSet if needed). I need to transform and stream these results directly to the client (without holding them in a big list in memory), where they will then be downloaded directly to a file on the client's machine.
I have been referring to what is demonstrated in the Chunked Responses section in the ScalaStream 2.5.x Play documentation. I am having trouble implementing the "getDataStream" portion of what it shows there.
I've also been referencing what is demoed in the Streaming Results and Iteratee sections in the ScalaAnorm 2.5.x Play documentation. I have tried piping the results as an enumerator like what is returned here:
val resultsEnumerator = Iteratees.from(SQL"SELECT * FROM Test", SqlParser.str("colName"))
into
val dataContent = Source.fromPublisher(Streams.enumeratorToPublisher(resultsEnumerator))
Ok.chunked(dataContent).withHeaders(("ContentType","application/x-download"),("Content-disposition","attachment; filename=myDataFile.csv"))
But the resulting file/content is empty.
And I cannot find any sample code or references on how to convert a function in the data service that returns something like this:
@annotation.tailrec
def go(c: Option[Cursor], l: List[String]): List[String] = c match {
  case Some(cursor) =>
    if (l.size == 10000000) l // custom limit, partial processing
    else go(cursor.next, l :+ cursor.row[String]("VBU_NUM"))
  case _ => l
}

val sqlString = s"select colName FROM ${tableName} WHERE ${whereClauseStr}"
val results: Either[List[Throwable], List[String]] =
  SQL(sqlString).withResult(go(_, List.empty[String]))
results
into something I can pass to Ok.chunked().
So basically my question is, how should I feed each record fetch from the database into a stream that I can do a transformation on and send to the client as a chunked response that can be downloaded to a file?
I would prefer not to use Slick for this. But I can go with a solution that does not use Anorm and just uses the Play dbApi objects that return the raw java.sql.ResultSet, and work with that.
After referencing the Anorm Akka Support documentation and much trial and error, I was able to achieve my desired solution. I had to add these dependencies
"com.typesafe.play" % "anorm_2.11" % "2.5.2",
"com.typesafe.play" % "anorm-akka_2.11" % "2.5.2",
"com.typesafe.akka" %% "akka-stream" % "2.4.4"
to my build.sbt file for Play 2.5, and I implemented something like this:
//...play imports
import anorm.SqlParser._
import anorm._
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.util.{Failure, Success}
...

private implicit val akkaActorSystem = ActorSystem("MyAkkaActorSystem")
private implicit val materializer = ActorMaterializer()

def streamedAnormResultResponse() = Action {
  implicit val connection = db.getConnection()

  val parser: RowParser[...] = ...
  val sqlQuery: SqlQuery = SQL("SELECT * FROM table")

  // AkkaStream.source comes from the anorm-akka module; close the
  // connection once the stream finishes, successfully or not.
  val source = AkkaStream.source(sqlQuery, parser, ColumnAliaser.empty).alsoTo(Sink.onComplete {
    case Success(v) =>
      connection.close()
    case Failure(e) =>
      println("Info from the exception: " + e.getMessage)
      connection.close()
  })

  Ok.chunked(source)
}
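To transform each record before it is sent, you can map over the source first; a sketch assuming the parser yields one Map[String, Any] per row and you want one CSV line each:

val csvSource = source.map(row => row.values.mkString(",") + "\n")

Ok.chunked(csvSource).withHeaders(
  "Content-Type" -> "application/x-download",
  "Content-Disposition" -> "attachment; filename=myDataFile.csv")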

How to apply manually evolutions in tests with Slick and Play! 2.4

I would like to manually run my evolution script at the beginning of each test file. I'm working with Play! 2.4 and Slick 3.
According to the documentation, the way to go seems to be:
Evolutions.applyEvolutions(database)
but I don't manage to get an instance of my database. In the documentation, play.api.db.Databases is imported in order to get a database instance, but if I try to import it, I get this error: object Databases is not a member of package play.api.db
How can I get an instance of my database in order to run the evolution script?
Edit: as asked in the comments, here is the entire source code giving the error:
import models._
import org.scalatest.concurrent.ScalaFutures._
import org.scalatest.time.{Seconds, Span}
import org.scalatestplus.play._
import play.api.db.evolutions.Evolutions
import play.api.db.Databases
import play.api.db.slick.DatabaseConfigProvider
import play.api.inject.guice.GuiceApplicationBuilder

class TestAddressModel extends PlaySpec with OneAppPerSuite {

  lazy val appBuilder = new GuiceApplicationBuilder()
  lazy val injector = appBuilder.injector()
  lazy val dbConfProvider = injector.instanceOf[DatabaseConfigProvider]

  def beforeAll() = {
    //val database: Database = ???
    //Evolutions.applyEvolutions(database)
  }

  "test" must {
    "test" in { }
  }
}
I finally found this solution. I inject with Guice:
lazy val appBuilder = new GuiceApplicationBuilder()
lazy val injector = appBuilder.injector()
lazy val databaseApi = injector.instanceOf[DBApi] //here is the important line
(You have to import play.api.db.DBApi.)
And in my tests, I simply do the following (actually I use another database for my tests):
override def beforeAll() = {
  Evolutions.applyEvolutions(databaseApi.database("default"))
}

override def afterAll() = {
  Evolutions.cleanupEvolutions(databaseApi.database("default"))
}
Considering that you are using Play 2.4, where evolutions were moved into a separate module, you have to add evolutions to your project dependencies.
libraryDependencies += evolutions
Source: Evolutions
Relevant commit: Split play-jdbc into three different modules
To have access to play.api.db.Databases, you must add jdbc to your dependencies:
libraryDependencies += jdbc
Hope it helps some people passing here.
EDIT: the code would then look like this:
import play.api.db.Databases
val database = Databases(
  driver = "com.mysql.jdbc.Driver",
  url = "jdbc:mysql://localhost/test",
  name = "mydatabase",
  config = Map(
    "user" -> "test",
    "password" -> "secret"
  )
)
You now have an instance of the DB, and can execute queries on it :
val statement = database.getConnection().createStatement()
val resultSet = statement.executeQuery("some_sql_query")
You can see more in the docs.
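For tests, the Databases companion also provides withDatabase, which creates the database, runs your block, and shuts it down afterwards; a sketch reusing the same MySQL settings (check the exact signature for your Play version):

import play.api.db.Databases
import play.api.db.evolutions.Evolutions

Databases.withDatabase(
  driver = "com.mysql.jdbc.Driver",
  url = "jdbc:mysql://localhost/test"
) { database =>
  Evolutions.applyEvolutions(database)
  // ... run code against the evolved schema ...
}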
I find the easiest way to run tests with evolutions applied is to use FakeApplication, and input the connection info for the DB manually.
def withDB[T](code: => T): T =
  // Create application to run database evolutions
  running(FakeApplication(additionalConfiguration = Map(
    "db.default.driver" -> "<my-driver-class>",
    "db.default.url" -> "<my-db-url>",
    "db.default.user" -> "<my-db>",
    "db.default.password" -> "<my-password>",
    "evolutionplugin" -> "enabled"
  ))) {
    // Start a db session
    withSession(code)
  }
Use it like this:
"test" in withDB { }
This allows you, for example, to use an in-memory database for speeding up your unit tests.
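For instance, the same helper pointed at an in-memory H2 database (the settings are illustrative):

def withInMemoryDB[T](code: => T): T =
  running(FakeApplication(additionalConfiguration = Map(
    "db.default.driver" -> "org.h2.Driver",
    "db.default.url" -> "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1",
    "evolutionplugin" -> "enabled"
  ))) {
    withSession(code)
  }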
You can access the DB instance as play.api.db.DB if you need it. You'll also need to import play.api.Play.current.
Use FakeApplication to read your DB configuration and provide a DB instance.
def withDB[T](code: => T): T =
  // Create application to run database evolutions
  running(FakeApplication(additionalConfiguration = Map(
    "evolutionplugin" -> "disabled"))) {
    import play.api.Play.current
    val database = play.api.db.DB
    Evolutions.applyEvolutions(database)
    try withSession(code)
    finally Evolutions.cleanupEvolutions(database)
  }
Use it like this:
"test" in withDB { }

Best Practice for Using a Connection Pool in Slick 3.0.0 Together with Play Framework

I followed the documentation of Slick 3.0.0-RC1, using Typesafe Config as database connection configuration. Here is my conf:
database = {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/postgre"
  user = "postgre"
}
I created a file Locale.scala as follows:
package models

import slick.driver.PostgresDriver.api._
import scala.concurrent.Future

case class Locale(id: String, name: String)

class Locales(tag: Tag) extends Table[Locale](tag, "LOCALES") {
  def id = column[String]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def * = (id, name) <> (Locale.tupled, Locale.unapply)
}

object Locales {
  private val locales = TableQuery[Locales]

  val db = Database.forConfig("database")

  def count: Future[Int] =
    try db.run(locales.length.result)
    finally db.close
}
Then I got confused about when and where the proper time is to create the Database object using
val db = Database.forConfig("database")
If I create db like this, there will be as many Database objects as I have models. So what is the best practice to get this working?
You can create an object DBLocator and initialize it with a lazy val so that it's loaded only on demand.
You can then invoke a method defined on DBLocator whenever you need a Session.
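The answer gives no code; a minimal sketch of what such a shared locator could look like (the object and member names are assumptions):

import slick.driver.PostgresDriver.api._

object DBLocator {
  // One shared Database (and its connection pool), created on first use.
  lazy val db: Database = Database.forConfig("database")
}

// Models then reuse the shared pool instead of each creating their own:
object Locales {
  private val locales = TableQuery[Locales]
  def count: scala.concurrent.Future[Int] = DBLocator.db.run(locales.length.result)
}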

Add data type from PostgreSQL extension in Slick

I'm using the PostGIS extension for PostgreSQL and I'm trying to retrieve a PGgeometry object from a table.
This version works fine:
import java.sql.DriverManager
import java.sql.Connection
import org.postgis.PGgeometry

object PostgersqlTest extends App {
  val driver = "org.postgresql.Driver"
  val url = "jdbc:postgresql://localhost:5432/gis"
  var connection: Connection = null

  try {
    Class.forName(driver)
    connection = DriverManager.getConnection(url)
    val statement = connection.createStatement()
    val resultSet = statement.executeQuery("SELECT geom FROM table;")
    while (resultSet.next()) {
      val geom = resultSet.getObject("geom").asInstanceOf[PGgeometry]
      println(geom)
    }
  } catch {
    case e: Exception => e.printStackTrace()
  }
  connection.close()
}
I need to be able to do the same thing using a Slick custom query. But this version doesn't work:
Q.queryNA[PGgeometry]("SELECT geom FROM table;")
and gives me this compilation error:
Error:(50, 40) could not find implicit value for parameter rconv: scala.slick.jdbc.GetResult[org.postgis.PGgeometry]
val query = Q.queryNA[PGgeometry](
^
Is there a simple way to add the PGgeometry data type in Slick without having to convert the returned object to a String and parse it?
To use it successfully, you need to define a GetResult, and maybe a SetParameter if you want to insert/update it to the db.
Here's some code extracted from the Slick tests (P.S. I assume you're using Slick 2.1.0):
case class User(id: Int, name: String)

implicit val getUserResult = GetResult(r => new User(r.<<, r.<<))

val userForID = Q[Int, User] + "select id, name from USERS where id = ?"
But, if your Java/Scala type is jts.Geometry instead of PGgeometry, you can try slick-pg, which has built-in support for jts.Geometry and PostGIS, for both Slick Lifted and Plain SQL.
To overcome the same issue, I used slick-pg (0.8.2) and JTS's Geometry classes, as tminglei mentioned in the previous answer. There are two steps to use slick-pg to handle PostGIS's geometry types: (i) extend Slick's PostgresDriver with PgPostGISSupport and (ii) define an implicit converter for your plain query, as shown below.
As shown in this page, you should first extend the PostgresDriver with PgPostGISSupport:
object MyPostgresDriver extends PostgresDriver with PgPostGISSupport {
  override lazy val Implicit = new Implicits with PostGISImplicits
  override val simple = new Implicits with SimpleQL with PostGISImplicits with PostGISAssistants

  val plainImplicits = new Implicits with PostGISPlainImplicits
}
Using the implicit conversions defined in plainImplicits in the extended driver, you can write your query as:
import com.vividsolutions.jts.geom.LineString // Or any other JTS geometry types.
import MyPostgresDriver.plainImplicits._
import scala.slick.jdbc.GetResult

case class Row(id: Int, geom: LineString)

implicit val geomConverter = GetResult[Row](r => {
  Row(r.nextInt, r.nextGeometry[LineString])
})

val query = Q.queryNA[Row](
  """SELECT id, geom FROM table;"""
)
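To actually run the query (Slick 2.1 plain-query style; the connection details are placeholders):

import MyPostgresDriver.simple._

Database.forURL("jdbc:postgresql://localhost:5432/gis", driver = "org.postgresql.Driver")
  .withSession { implicit session =>
    val rows: List[Row] = query.list
    rows.foreach(println)
  }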