Slick threadLocalSession vs implicit session - scala

I encountered this problem while posting this question: Slick Write a Simple Table Creation Function
I am very new to Slick and concurrency, knowing only the basics. I have worked with JDBC before, but there you have to open a session manually and then close it; nothing goes beyond that, and there is very little automation (at least I never had to automate the process).
However, I get confused by the Slick session. In the tutorial, the "Getting Started" example encourages people to use threadLocalSession:
// Use the implicit threadLocalSession
import Database.threadLocalSession
http://slick.typesafe.com/doc/1.0.0/gettingstarted.html
The original recommendation is:
The only extra import we use is the threadLocalSession. This
simplifies the session handling by attaching a session to the current
thread so you do not have to pass it around on your own (or at least
assign it to an implicit variable).
Well, I researched a bit online: some people suggest not using threadLocalSession and relying only on an implicit session, while others suggest using threadLocalSession.
One reason given in support of the implicit session is that it "makes sure at compile time that you have a session". Well, I only have two questions:
When people say "thread", are they referring to concurrency? Is Slick/JDBC data access handled through concurrency?
Which way is better? Implicit or threadLocalSession? Or when should I use which?
If it is not too much to ask, I read about the syntax {implicit session: Session => ...} somewhere in my Scala book, and I forgot where it was. What is this expression?

A threadLocalSession is called this way because it is stored in a "thread local variable", local to the current execution thread.
As of Slick 2, we recommend not using threadLocalSession (which is now called dynamicSession) unless you see a particular need for it and are aware of the disadvantages. threadLocalSession is implicit too, by the way. The problem is that a threadLocalSession is only valid at runtime if a withSession (in Slick 2.0, withDynSession) call happened somewhere down the call stack. If it didn't, the code still compiles but fails at runtime.
{implicit session: Session => ...} is a function from Session (the explicitly annotated type) to ..., where the session is available as an implicit value within ... . In db.withSession{ implicit session: Session => ... }, db creates a session and passes it into the closure handed to withSession. In the closure body ..., the session is implicit and can be used implicitly by .list calls, etc.
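To make the difference concrete, here is a minimal sketch against the Slick 2.x "simple" API (the table queries are left as comments, since the exact schema is beside the point). The first block compiles only because the closure provides an implicit Session; the second compiles regardless and blows up at runtime if no dynamic session is in scope:

import scala.slick.driver.H2Driver.simple._

val db = Database.forURL("jdbc:h2:mem:demo", driver = "org.h2.Driver")

// Implicit session: .list and friends need an implicit Session, and the
// closure parameter supplies it, so a missing session is a compile error.
db.withSession { implicit session =>
  // e.g. coffees.list (coffees being some TableQuery defined elsewhere)
}

// Dynamic (thread-local) session: this import satisfies the compiler,
// but the code throws at runtime unless withDynSession is on the stack.
import Database.dynamicSession
db.withDynSession {
  // e.g. coffees.list picks up the thread-local dynamicSession here
}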

Related

scala concepts involved in db withSession { implicit session =>

I am working in Scala. I have gone through almost all the concepts, like higher-order functions, curried functions, macros, etc. But while working with Slick I didn't understand this code snippet: db withSession { implicit session =>
What I understood is that JdbcBackend.DatabaseDef is calling the withSession method, but beyond that I don't know what is happening in the implementation. Please let me know, or tell me whether I need to learn further concepts related to this implementation. Thanks.
You already seem to know the concepts. withSession is a function defined on db that takes a single function as its argument, i.e. a higher-order function: https://docs.scala-lang.org/tutorials/tour/higher-order-functions.html
Scala allows you to omit the dot when calling that function, i.e. infix notation: https://docs.scala-lang.org/style/method-invocation.html#infix-notation
The curly brackets just create a standard code block, but as you're using => you end up with a block that defines a function that is then passed to withSession as single argument using the infix notation.
The withSession method separates business logic from resource-management logic. db.withSession provides a database connection to the user; the user can then work with it, and after the body of the withSession block finishes (normally or exceptionally) the connection is returned to the connection pool.
This approach is similar to Java's try-with-resources idiom.
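As a sketch of the idea (not Slick's actual implementation), the loan pattern behind withSession looks roughly like this:

// Generic loan pattern: acquire a resource, lend it to the caller's
// function, and release it whether the body returns or throws.
def withResource[R <: AutoCloseable, A](acquire: => R)(body: R => A): A = {
  val resource = acquire
  try body(resource)
  finally resource.close()
}

// Usage: the connection is closed even if the body throws.
// withResource(dataSource.getConnection) { conn => /* use conn */ }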

Akka-stream. Combining materialized-values. Need to map futures. Which ExecutionContext/Dispatcher to use?

Given:
AtoB: GraphStageWithMaterializedValue[FlowShape[A, B], Future[AtoBRuntimeApi]];
BtoC: GraphStageWithMaterializedValue[FlowShape[B, C], Future[BtoCRuntimeApi]].
Wanted: AtoC: GraphStageWithMaterializedValue[FlowShape[A, C], Future[AtoCRuntimeApi]].
In my particular case it is really convenient to implement AtoCRuntimeApi in terms of both AtoBRuntimeApi and BtoCRuntimeApi.
So I would like to define AtoCRuntimeApi as case class AtoCRuntimeApi(a2b: AtoBRuntimeApi, b2c: BtoCRuntimeApi)
and to define the new compound stage as stageAtoB.viaMat(stageBtoC)(combineIntoAtoC), where
combineIntoAtoC: (Future[AtoBRuntimeApi], Future[BtoCRuntimeApi]) => Future[AtoCRuntimeApi].
Obviously the implementation of combineIntoAtoC requires some instance of ExecutionContext in order to map the futures.
The question: what execution context should I use in the described case?
Options I would rather avoid:
bake in an instance that is available at composition time: the resulting "blueprint" would not be safe to materialise if that execution context becomes unavailable;
ExecutionContext.global: well, it's global. It seems wrong to use it here (although once per materialisation is probably not that big a deal).
The execution context I actually want is the one available as a property of the materializer (mat.executionContext), but there is no way to access it within that combine function.
I usually use the actor system's dispatcher in such cases, which is indeed what the materializer refers to by default. As a general solution, though, you can pass the execution context into whatever function constructs the stream graph via an implicit parameter:
def makeGraph(...)(implicit ec: ExecutionContext): RunnableGraph = {
  def combineIntoAtoC(...) = ... // uses ec implicitly
}
This allows you to push the decision about which context to use up the call stack. At the appropriate level there most certainly will be some kind of access to the actor system's dispatcher.
The reason why I prefer to use the actor system's dispatcher instead of the global one is because it reduces the surface of dependencies - all execution contexts in this case come from one source, and you know how to configure them if the need arises.
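A sketch of what that looks like with the types from the question (the runtime-API types are reduced to stubs here):

import scala.concurrent.{ExecutionContext, Future}

trait AtoBRuntimeApi
trait BtoCRuntimeApi
case class AtoCRuntimeApi(a2b: AtoBRuntimeApi, b2c: BtoCRuntimeApi)

// The ExecutionContext is pushed up the call stack as an implicit
// parameter instead of being baked in at composition time.
def combineIntoAtoC(a2bF: Future[AtoBRuntimeApi], b2cF: Future[BtoCRuntimeApi])
                   (implicit ec: ExecutionContext): Future[AtoCRuntimeApi] =
  a2bF.zip(b2cF).map { case (a2b, b2c) => AtoCRuntimeApi(a2b, b2c) }

// At the call site, inside e.g. makeGraph(...)(implicit ec: ExecutionContext):
// stageAtoB.viaMat(stageBtoC)(combineIntoAtoC(_, _))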

Play 2.5.x (Scala) -- How does one put a value obtained via wsClient into a (lazy) val

The use case is actually fairly typical. A lot of web services use authorization tokens that you retrieve at the start of a session and you need to send those back on subsequent requests.
I know I can do it like this:
lazy val myData = {
  val request = ws.url("/some/url")
    .withAuth(user, password, WSAuthScheme.BASIC)
    .withHeaders("Accept" -> "application/json")
  Await.result(request.get().map { x => x.json }, 120.seconds)
}
That just feels wrong, as all the docs say never use Await.
Is there a Future/Promise Scala style way of handling this?
I've found .onComplete, which allows me to run code upon the completion of a Future, but without using a (mutable) var I see no way of getting a value from that scope into a lazy val in a different scope. Even with a var there is a possible timing issue -- hence the evils of mutable variables :)
Any other way of doing this?
Unfortunately, there is no way to make this non-blocking - lazy vals are designed to be synchronous and to block any thread accessing them until they are completed with a value (internally a lazy val is represented as a simple synchronized block).
A Future/Promise Scala way would be to use a Future[T] or a Promise[T] instead of a val x: T. That approach, however, implies a fair amount of overhead, with execution contexts and map calls upon each use of the value, and the more optimal resource utilization may not be worth the decreased readability in all cases, so it may be OK to leave the Await there if you use the value extensively in many parts of your application.
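If blocking is unacceptable, one non-blocking variant is to cache the Future itself in the lazy val and have callers compose on it. A sketch under the question's assumptions (ws, user and password in scope, Play 2.5 WS API), with the caveat that a failed Future stays cached:

import scala.concurrent.{ExecutionContext, Future}
import play.api.libs.json.JsValue
import play.api.libs.ws.{WSAuthScheme, WSClient}

class TokenHolder(ws: WSClient, user: String, password: String)
                 (implicit ec: ExecutionContext) {

  // The request fires at most once; callers map/flatMap on the Future
  // instead of blocking for the value. A failure is cached as well.
  lazy val myData: Future[JsValue] =
    ws.url("/some/url")
      .withAuth(user, password, WSAuthScheme.BASIC)
      .withHeaders("Accept" -> "application/json")
      .get()
      .map(_.json)
}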

Why are SessionVars in Lift implemented using singletons?

One typical way of managing state in Lift is to create a singleton object extending SessionVar, like in this example taken from the documentation:
object MySnippetCompanion {
  object mySessionVar extends SessionVar[String]("hello")
}
The case for using SessionVars is clear and I've been using them in practice as needed. I also roughly understand how they work inside.
Still, I can't help but wonder why the mechanism for "session variables", which are clearly associated with the current session (usually just one out of many sessions in the system), was designed to be used via a singleton. This goes so against my intuition that at first glance I was tempted to believe that Lift was somehow able to override Scala's language features and make object mean something different than in regular Scala.
Even though I now understand how it works, I can't grasp the rationale for such a design, which, at least for me, breaks the rule of least astonishment. Can someone point out any advantages or perhaps explain why such a design decision could have been made?
Session variables in Lift use Scala's DynamicVariable. Basically they allow you to statically reference a variable in a code-block and then later on call the code and substitute a value:
import scala.util.DynamicVariable

val x = new DynamicVariable(1)

def printIt(): Unit =
  println(x.value)

printIt()                  //> 1
x.withValue(2)(printIt())  //> 2
So each time a request is handled, the scope of these dynamic variables is changed to the current session, completely hiding the state change of the current session from you as a programmer.
The other option would be to pass around a "sessionID" object which you would have to use when you want to access session specific data. Not really handy.
The reason you have to use the object keyword is that object is unique in that it defines both a value and a class. This allows Lift to call getClass to get a name that uniquely identifies this SessionVar vs. any other one, which Lift needs in order to serialize and deserialize every piece of session state in the right place(s). Furthermore, if the SessionVar is in a class that has two instances (for instance a snippet rendered in two tabs), they will both refer to the same piece of session state. (The flip side of the coin is that the same SessionVar instance can be referenced by two different sessions and mean the right thing to each.)
Actually, at times this is insufficient: for instance, if you define a SessionVar in a trait and have two different classes that inherit the trait, but you need them to have two different values. The solution in that case is to override the def for the "name salt", which is combined with getClass to identify the SessionVar.
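A hedged sketch of that workaround (the __nameSalt override is, to the best of my recollection, the hook Lift's AnyVar hierarchy exposes for this; the trait and class names are made up):

import net.liftweb.http.SessionVar

trait Counting {
  // Without the salt, every class inheriting this trait would share one
  // session value, because the object's getClass is the same for all.
  object count extends SessionVar[Int](0) {
    override protected def __nameSalt: String = Counting.this.getClass.getName
  }
}

class PageA extends Counting
class PageB extends Counting // PageA and PageB now get separate values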

Scala Case Class Map Expansion

In Groovy one can do:
class Foo {
  Integer a, b
}

Map map = [a: 1, b: 2]
def foo = new Foo(map) // map expanded, object created
I understand that Scala is not, in any sense of the word, Groovy, but I am wondering if map expansion in this context is supported.
Simplistically, I tried and failed with:
case class Foo(a: Int, b: Int)
val map = Map("a" -> 1, "b" -> 2)
Foo(map: _*) // no dice, always applied to first property
A related thread that shows possible solutions to the problem.
Now, from what I've been able to dig up, as of Scala 2.9.1 at least, reflection in regard to case classes is basically a no-op. The net effect then appears to be that one is forced into some form of manual object creation, which, given the power of Scala, is somewhat ironic.
I should mention that the use case involves the servlet request parameters map. Specifically, using Lift, Play, Spray, Scalatra, etc., I would like to take the sanitized params map (filtered via routing layer) and bind it to a target case class instance without needing to manually create the object, nor specify its types. This would require "reliable" reflection and implicits like "str2Date" to handle type conversion errors.
Perhaps in 2.10, with the new reflection library, implementing the above will be cake. I am only two months into Scala, so I am just scratching the surface; I do not see any straightforward way to pull this off right now (for seasoned Scala developers it may be doable).
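For the record, here is a minimal sketch of that idea using the scala-reflect runtime API introduced in 2.10 (method names as of 2.11+). It does no type conversion: every constructor parameter must be present in the map with the right runtime type, and the class must be top-level:

import scala.reflect.runtime.{universe => ru}

// Instantiate a case class from a Map keyed by constructor-parameter name.
// Fails at runtime if a key is missing or a value has the wrong type.
def fromMap[T: ru.TypeTag](m: Map[String, Any]): T = {
  val mirror = ru.runtimeMirror(getClass.getClassLoader)
  val tpe    = ru.typeOf[T]
  val ctor   = tpe.decl(ru.termNames.CONSTRUCTOR).asMethod
  val args   = ctor.paramLists.head.map(p => m(p.name.toString))
  mirror.reflectClass(tpe.typeSymbol.asClass)
    .reflectConstructor(ctor)(args: _*)
    .asInstanceOf[T]
}

case class Foo(a: Int, b: Int)
fromMap[Foo](Map("a" -> 1, "b" -> 2)) //> Foo(1,2)
fromMap[Foo](Map("a" -> 1))           // throws at runtime: "b" is missing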
Well, the good news is that Scala's Product interface, implemented by all case classes, actually doesn't make this very hard to do. I'm the author of a Scala serialization library called Salat that supplies some utilities for using pickled Scala signatures to get typed field information.
https://github.com/novus/salat - check out some of the utilities in the salat-util package.
Actually, I think this is something that Salat should do - what a good idea.
Re: D.C. Sobral's point about the impossibility of verifying params at compile time - point taken, but in practice this should work at runtime just like deserializing anything else with no guarantees about structure, like JSON or a Mongo DBObject. Also, Salat has utilities to leverage default args where supplied.
This is not possible, because it is impossible to verify at compile time that all parameters were passed in that map.
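Which is why, reflection or not, any such binding ends up looking like this runtime-checked sketch (hand-rolled here to make the failure mode explicit):

import scala.util.Try

case class Foo(a: Int, b: Int)

// Bind a request-parameter map to Foo by hand: a missing key or an
// unparseable value surfaces as None at runtime, never at compile time.
def bindFoo(params: Map[String, String]): Option[Foo] =
  for {
    a <- params.get("a").flatMap(s => Try(s.toInt).toOption)
    b <- params.get("b").flatMap(s => Try(s.toInt).toOption)
  } yield Foo(a, b)

bindFoo(Map("a" -> "1", "b" -> "2")) //> Some(Foo(1,2))
bindFoo(Map("a" -> "1"))             //> None: "b" is missing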