I have imported a Play/Scala project into IntelliJ, and the following method gets a compilation error for a reason I do not understand. Any ideas what is wrong here?
I am using Java 8 and Scala 2.11.6.
def fetchUser(id: Long): Option[UserRecord] =
  Cache.getAs[UserRecord](id.toString).map { user =>
    Some(user)
  } getOrElse {
    DB.withConnection { connection =>
      val sql = DSL.using(connection, SQLDialect.POSTGRES_9_4)
      val user = Option(sql.selectFrom[UserRecord](USER).where(USER.ID.equal(id)).fetchOne())
      user.foreach { u =>
        Cache.set(u.getId.toString, u)
      }
      user
    }
  }
The compilation error is on the call to the withConnection method. The error is: Cannot resolve overloaded method 'withConnection'.
When I try to jump to the implementation of the withConnection method, the IDE suggests two possible methods in the play.api.db.DB (2.4.3) class:
/**
* Execute a block of code, providing a JDBC connection. The connection is
* automatically released.
*
* @param name The datasource name.
* @param autocommit when `true`, sets this connection to auto-commit
* @param block Code block to execute.
*/
def withConnection[A](name: String = "default", autocommit: Boolean = true)(block: Connection => A)(implicit app: Application): A =
db.database(name).withConnection(autocommit)(block)
/**
* Execute a block of code, providing a JDBC connection. The connection and all created statements are
* automatically released.
*
* @param block Code block to execute.
*/
def withConnection[A](block: Connection => A)(implicit app: Application): A =
db.database("default").withConnection(block)
The compiler should find
def withConnection[A](block: Connection => A)(implicit app: Application): A =
db.database("default").withConnection(block)
because that matches the call
DB.withConnection { ... }
which is the same as
DB.withConnection( block = { ... })
as long as it can find the implicit Application. I don't know where this implicit Application comes from, but since this is a sample project for a book, I assume it exists somewhere and has worked in the past.
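In Play 2.4 the implicit Application is usually brought into scope with `import play.api.Play.current` at the top of the file; when that import is missing, IntelliJ reports the failure as an unresolvable overload rather than a missing implicit. Below is a self-contained sketch of the same overload-plus-implicit shape (the `App` and `DB` names here are illustrative stand-ins, not Play's real classes):

```scala
object OverloadDemo {
  case class App(name: String)

  object DB {
    // overload with default arguments, like Play's withConnection(name, autocommit)
    def withConnection[A](name: String = "default", autocommit: Boolean = true)(
        block: Int => A)(implicit app: App): A = block(1)

    // overload taking only the block, like Play's withConnection(block)
    def withConnection[A](block: Int => A)(implicit app: App): A = block(0)
  }

  def run(): Int = {
    // with the implicit in scope, the single-block overload resolves fine;
    // remove this implicit and the call below no longer compiles
    implicit val app: App = App("demo")
    DB.withConnection { (c: Int) => c + 41 }
  }
}
```

Because a function literal cannot match the first overload's `(String, Boolean)` parameter list, resolution picks the block-only overload, exactly as the question expects.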
You've posted three questions pertaining to build errors in this project. Please don't post a new question every time you run into a build error; that's not the point of SO. These questions are best aimed at the maintainer of the project itself, i.e. at the author, on the issues page of the GitHub repo.
It doesn't look like this project is maintained; it's an old project, so you're very likely to run into cases where things are broken because of outdated versions. Consider asking ONE question about building an old Play project instead.
Is there any benefit to using partially applied functions vs injecting dependencies into a class? Both approaches as I understand them shown here:
class DB(conn: String) {
  def get(sql: String): List[Any] = ???
}

object DB {
  def get(conn: String)(sql: String): List[Any] = ???
}

object MyApp {
  val conn = "jdbc:..."
  val sql = "select * from employees"

  val db1 = new DB(conn)
  db1.get(sql)

  val db2 = DB.get(conn) _
  db2(sql)
}
Using partially-applied functions is somewhat simpler, but the conn is passed to the function each time, and could have a different conn each time it is called. The advantage of using a class is that it can perform one-off operations when it is created, such as validation or caching, and retain the results in the class for re-use.
For example the conn in this code is a String but this is presumably used to connect to a database of some sort. With the partially-applied function it must make this connection each time. With a class the connection can be made when the class is created and just re-used for each query. The class version can also prevent the class being created unless the conn is valid.
The class is usually used when the dependency is longer-lived or used by multiple functions. Partial application is more common when the dependency is shorter-lived, like during a single loop or callback. For example:
list.map(f(context))
def f(context: Context)(element: Int): Result = ???
It wouldn't really make sense to create a class just to hold f. On the other hand, if you have 5 functions that all take context, you should probably just put those into a class. In your example, get is unlikely to be the only thing that requires the conn, so a class makes more sense.
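Here is a runnable sketch of that trade-off, with a fake connection string standing in for real database setup (the `Db`/`DbFn` names and the `open(...)` strings are illustrative only):

```scala
object InjectionDemo {
  // class-based: validation and "connecting" happen once, at construction
  final class Db(conn: String) {
    require(conn.startsWith("jdbc:"), "invalid connection string")
    private val connection = s"open($conn)" // one-off setup, reused by every query
    def get(sql: String): String = s"$connection -> $sql"
  }

  // partially applied: the dependency is re-supplied (and re-checked) on each call
  object DbFn {
    def get(conn: String)(sql: String): String = {
      require(conn.startsWith("jdbc:"), "invalid connection string")
      s"open($conn) -> $sql"
    }
  }

  def run(): (String, String) = {
    val conn = "jdbc:postgresql://localhost/sandbox"
    val db1 = new Db(conn)
    val db2 = DbFn.get(conn) _ // partial application fixes conn for later calls
    (db1.get("select 1"), db2("select 1"))
  }
}
```

Both produce the same result for a single query; the difference is when the `require` check and "connection" cost are paid.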
I've been using doobie (cats) to connect to a PostgreSQL database from a Scalatra application. Recently I noticed that the app was creating a new connection pool for every transaction. I eventually worked around it (see below), but this approach is quite different from the one taken in the 'managing connections' section of the book of doobie, so I was hoping someone could confirm whether it is sensible, or whether there is a better way of setting up the connection pool.
Here's what I had initially - this works but creates a new connection pool on every connection:
import com.zaxxer.hikari.HikariDataSource
import doobie.hikari.hikaritransactor.HikariTransactor
import doobie.imports._
val pgTransactor = HikariTransactor[IOLite](
  "org.postgresql.Driver",
  s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}",
  postgresDBUser,
  postgresDBPassword
)

// every query goes via this function
def doTransaction[A](update: ConnectionIO[A]): Option[A] = {
  val io = for {
    xa  <- pgTransactor
    res <- update.transact(xa) ensuring xa.shutdown
  } yield res
  io.unsafePerformIO
}
My initial assumption was that the problem was having ensuring xa.shutdown on every request, but removing it results in connections quickly being used up until there are none left.
This was an attempt to fix the problem - enabled me to remove ensuring xa.shutdown, but still resulted in the connection pool being repeatedly opened and closed:
val pgTransactor: HikariTransactor[IOLite] = HikariTransactor[IOLite](
  "org.postgresql.Driver",
  s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}",
  postgresDBUser,
  postgresDBPassword
).unsafePerformIO

def doTransaction[A](update: ConnectionIO[A]): Option[A] = {
  val io = update.transact(pgTransactor)
  io.unsafePerformIO
}
Finally, I got the desired behaviour by creating a HikariDataSource object and then passing it into the HikariTransactor constructor:
val dataSource = new HikariDataSource()
dataSource.setJdbcUrl(s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}")
dataSource.setUsername(postgresDBUser)
dataSource.setPassword(postgresDBPassword)

val pgTransactor: HikariTransactor[IOLite] = HikariTransactor[IOLite](dataSource)

def doTransaction[A](update: ConnectionIO[A], operationDescription: String): Option[A] = {
  val io = update.transact(pgTransactor)
  io.unsafePerformIO
}
You can do something like this:
val xa = HikariTransactor[IOLite](dataSource).unsafePerformIO
and pass it to your repositories.
.transact applies the transaction boundaries, like Slick's .transactionally.
E.g.:
def interactWithDb = {
  val q: ConnectionIO[Int] = sql"""..."""
  q.transact(xa).unsafePerformIO
}
Yes, the response from Radu gets at the problem. The HikariTransactor (the underlying HikariDataSource really) has internal state so constructing it is a side-effect; and you want to do it once when your program starts and pass it around as needed. So your solution works, just note the side-effect.
Also, as noted, I don't monitor SO … try the Gitter channel or open an issue if you have questions. :-)
Related to another question I posted (scala futures - keeping track of request context when threadId is irrelevant):
When debugging a future, the call stack isn't very informative, since the calling context is usually in another thread and at another time.
This is especially problematic when different paths can lead to the same future code (for instance, a DAO called from many places in the code, etc.).
Do you know of an elegant solution for this?
I was thinking of passing a token/request ID (for flows started by a web-server request), but that would require passing it around, and it also wouldn't include any of the state you can see in a stack trace.
Perhaps passing a stack around? :)
Suppose you make a class
case class Context(requestId: Int /* , other things you need to pass around */)
There are two basic ways to send it around implicitly:
1) Add an implicit Context parameter to any function that requires it:
def processInAnotherThread(/* explicit arguments */)(
    implicit executionContext: scala.concurrent.ExecutionContext,
    context: Context): Future[Result] = ???

def processRequest = {
  /* ... */
  implicit val context: Context = Context(getRequestId /* , ... */)
  processInAnotherThread(/* explicit parameters */)
}
The drawback is that every function that needs to access Context must have this parameter and it litters the function signatures quite a bit.
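A minimal runnable version of approach 1, where the result type is just a String for illustration:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ImplicitContextDemo {
  case class Context(requestId: Int)

  // the Context travels alongside the ExecutionContext in every signature
  def processInAnotherThread(x: Int)(
      implicit ec: ExecutionContext, ctx: Context): Future[String] =
    Future(s"request ${ctx.requestId}: $x")

  def processRequest(): String = {
    implicit val ec: ExecutionContext = ExecutionContext.global
    implicit val context: Context = Context(7)
    // the implicit Context is picked up automatically at the call site
    Await.result(processInAnotherThread(1), 5.seconds)
  }
}
```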
2) Put it into a DynamicVariable:
// Context companion object
object Context {
  val context: DynamicVariable[Context] =
    new DynamicVariable[Context](Context(0 /* , ... */))
}

def processInAnotherThread(/* explicit arguments */)(
    implicit executionContext: scala.concurrent.ExecutionContext
): Future[Result] = {
  // get requestId from the context
  Context.context.value.requestId
  /* ... */
}

def processRequest = {
  /* ... */
  Context.context.withValue(Context(getRequestId /* , ... */)) {
    processInAnotherThread(/* explicit parameters */)
  }
}
The drawbacks are that:
- It's not immediately clear deep inside the processing that some context is available or what its contents are, and referential transparency is broken. I believe it's better to strictly limit the number of DynamicVariables in use (preferably no more than 1, or at most 2) and document them.
- Context must either have default values or nulls for all its contents, or it must itself be null by default (new DynamicVariable[Context](null)). Forgetting to initialize the Context or its contents before processing may lead to nasty errors.
Still, a DynamicVariable is much better than some global variable, and it doesn't influence in any way the signatures of the functions that don't use it directly.
In both cases you may update the contents of an existing Context with a copy method of a case class. For example:
def deepInProcessing(/* ... */): Future[Result] =
  Context.context.withValue(
    Context.context.value.copy(someParameter = newParameterValue)
  ) {
    processFurther(/* ... */)
  }
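A self-contained sketch of the DynamicVariable approach, synchronous for clarity (note that DynamicVariable is backed by an InheritableThreadLocal, so bindings are not guaranteed to propagate across the pooled threads an ExecutionContext reuses):

```scala
import scala.util.DynamicVariable

object DynVarDemo {
  case class Context(requestId: Int)

  object Context {
    // default value used when no binding is active
    val current = new DynamicVariable[Context](Context(0))
  }

  // deep in the processing: reads whatever the caller bound, no parameter needed
  def deepInProcessing(): Int = Context.current.value.requestId

  def processRequest(requestId: Int): Int =
    Context.current.withValue(Context(requestId)) {
      deepInProcessing() // sees the bound context
    }
}
```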
I am trying to leverage Typesafe's Slick library to interface with a MySQL server. All the getting-started/tutorial examples use withSession{}, where the framework automatically creates a session, executes the queries within the {}'s, and then terminates the session at the end of the block.
My program is rather chatty, and I would like to maintain a persistent connection throughout the execution of the script. So far I have pieced together this code to explicitly create and close sessions.
val db = Database.forURL("jdbc:mysql://localhost/sandbox", user = "root", password="***", driver = "com.mysql.jdbc.Driver")
val s = db.createSession()
...
s.close()
Where I can execute queries in between. However, when I try to execute a command, such as
(Q.u + "insert into TEST (name) values('"+name+"')").execute
It crashes because it cannot find the implicit session. I don't completely understand the syntax of the execute definition in the documentation, but it seems like there might be an optional parameter for passing an explicit session. I've tried using .execute(s), but that spits out a warning that (s) doesn't do anything in a pure expression.
How do I explicitly specify a pre-existing session to run a query on?
Appended: Trial code for JAB's solution
class ActorMinion(name: String) extends Actor {
  Database.forURL("jdbc:mysql://localhost/sandbox", user = "root", password = "****", driver = "com.mysql.jdbc.Driver") withSession {
    def receive = {
      case Execute => {
        (Q.u + "insert into TEST (name) values('" + name + "')").execute
        sender ! DoneExecuting(name, output, err.toString)
      }
    }
  }
}
Which returns compile error
[error] /home/ubuntu/helloworld/src/main/scala/hw.scala:41: missing parameter type for expanded function
[error] The argument types of an anonymous function must be fully known. (SLS 8.5)
[error] Expected type was: ?
[error] {
[error] ^
[error] one error found
I was able to derive what I needed from this answer:
//imports at top of file
//import Database.threadLocalSession <--this should be commented/removed
import scala.slick.session.Session // <-- this should be added
......
//These two lines in actor constructor
val db = Database.forURL("jdbc:mysql://localhost/sandbox", user = "root", password="****", driver = "com.mysql.jdbc.Driver")
implicit var session: Session = db.createSession()
......
session.close() //This line in actor destructor
Just enclose the relevant part of your script in withSession{}. Note that if you are keeping the session open for a while or performing lots of database-manipulation queries, you should also look into taking advantage of transactions.
And you should really be using prepared statements for inserts if the data has a potentially external source.
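To illustrate the point about prepared statements: with string concatenation, crafted input changes the SQL text itself, whereas a `?` placeholder keeps the value as data bound by the driver. A hypothetical sketch (the SQL is only built here, never executed):

```scala
object SqlDemo {
  // naive concatenation: user input is spliced straight into the statement
  def unsafeInsert(name: String): String =
    "insert into TEST (name) values('" + name + "')"

  // with a prepared statement the SQL text stays fixed; the driver binds the value
  val prepared = "insert into TEST (name) values(?)"

  def run(): Boolean = {
    val crafted = "x'); drop table TEST; --"
    // the crafted quote escapes the string literal in the naive version,
    // smuggling a second statement into the SQL
    unsafeInsert(crafted).contains("drop table")
  }
}
```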
ScalaTest has very good documentation, but it is brief and does not give an example of an acceptance test.
How can I write acceptance tests for a web application using ScalaTest?
Using Selenium 2 gets you some mileage. I'm using Selenium 2 WebDriver in combination with a variation of the Selenium DSL found here.
Originally, I changed the DSL to make it a little easier to run from the REPL (see below). However, one of the bigger challenges of building tests like these is that they quickly get invalidated, and then become a nightmare to maintain.
Later on, I started creating a wrapper class for every page in the application, with convenience operations mapping the events to be sent to that page to the underlying WebDriver invocations. That way, whenever the underlying page changes, I just need to change my page wrapper rather than the entire script. With that, my test scripts are now expressed in terms of invocations on the individual page wrappers, where every invocation returns a page wrapper reflecting the new state of the UI. It seems to work out quite well.
I tend to build my tests with the FirefoxDriver, and then, before rolling the test out to our QA environment, check whether the HtmlUnit driver gives comparable results. If it does, I run the test using the HtmlUnit driver.
This was my original modification to the Selenium DSL:
/**
* Copied from [[http://comments.gmane.org/gmane.comp.web.lift/44563]], adjusting it to no longer be a trait that you need to mix in,
* but an object that you can import, to ease scripting.
*
* With this object's method imported, you can do things like:
*
* {{"#whatever"}}: Select the element with ID "whatever"
* {{".whatever"}}: Select the element with class "whatever"
* {{"%//td/em"}}: Select the "em" element inside a "td" tag
* {{":em"}}: Select the "em" element
* {{"=whatever"}}: Select the element with the given link text
*/
object SeleniumDsl {
  private def finder(c: Char): String => By = s => c match {
    case '#' => By id s
    case '.' => By className s
    case '$' => By cssSelector s
    case '%' => By xpath s
    case ':' => By name s
    case '=' => By linkText s
    case '~' => By partialLinkText s
    case _   => By tagName (c + s)
  }

  implicit def str2by(s: String): By = finder(s.charAt(0))(s.substring(1))
  implicit def by2El[T](t: T)(implicit conversion: (T) => By, driver: WebDriver): WebElement = driver / conversion(t)
  implicit def el2Sel[T <% WebElement](el: T): Select = new Select(el)

  class Searchable(sc: SearchContext) {
    def /[T <% By](b: T): WebElement = sc.findElement(b)
    def /?[T <% By](b: T): Box[WebElement] = tryo(sc.findElement(b))
    def /+[T <% By](b: T): Seq[WebElement] = sc.findElements(b)
  }

  implicit def scDsl[T <% SearchContext](sc: T): Searchable = new Searchable(sc)
}
ScalaTest now offers a Selenium DSL:
http://www.scalatest.org/user_guide/using_selenium