Transactional method in Scala Play with Slick (similar to Spring @Transactional, maybe?)

I know Scala, as a functional language, is supposed to work differently from a typical OO language such as Java, but I'm sure there has to be a way to wrap a group of database changes in a single transaction, ensuring atomicity along with the other ACID properties.
As explained in the Slick docs (http://slick.lightbend.com/doc/3.1.0/dbio.html), DBIOAction lets you group database operations in a transaction like this:
val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)
However, in my use case (and in most real-world examples I can think of), I have a code structure with a Controller, which exposes the code for my REST endpoint; that controller calls multiple services, and each service delegates database operations to DAOs.
A rough example of my usual code structure:
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData) = {
    for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
    } yield savedUser.copy(address = savedAddress)
  }
}

class SlickUserDao {
  def save(user: User) = {
    db.run((UserSchema.users returning UserSchema.users).insertOrUpdate(user))
  }
}
This is a simple example, but most cases have more complex business logic in the service layer.
I don't want:
My DAOs to have business logic and decide which database operations to run.
My DAOs to return DBIO actions and expose the persistence classes. That completely defeats the purpose of using DAOs in the first place and makes further refactoring much harder.
But I definitely want a transaction around my entire Controller, to ensure that if any code fails, all the changes done in the execution of that method will be rolled back.
How can I implement full controller transactionality with Slick in a Scala Play application? I can't seem to find any documentation on how to do that.
Also, how can I disable auto-commit in Slick? I'm sure there is a way and I'm just missing something.
EDIT:
So, reading a bit more about it, I now feel I understand better how Slick uses database connections and sessions. This helped a lot: http://tastefulcode.com/2015/03/19/modern-database-access-scala-slick/.
What I'm doing is a case of composing futures and, based on that article, there's no way to use the same connection and session for multiple operations of that kind.
Problem is: I really can't use any other kind of composition. I have considerable business logic that needs to be executed in between queries.
I guess I can change my code to allow me to use action composition, but as I mentioned before, that forces me to code my business logic with aspects like transactionality in mind. That shouldn't happen. It pollutes the business code and it makes writing tests a lot harder.
Is there any workaround for this issue? Any project out there that sorts this out that I missed? Or, more drastically, is there another persistence framework that supports this? From what I've read, Anorm supports this nicely, but I may be misunderstanding it and don't want to change frameworks only to find out it doesn't (like what happened with Slick).

There is no such thing as transactional annotations or the like in Slick. Your second "don't want" is actually the way to go. It's totally reasonable to return DBIO[User] from your DAOs, and it does not defeat their purpose at all. It's the way Slick works.
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData): Future[User] = {
    val action = (for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
      whatever <- DBIO.successful(nonDbStuff)
    } yield (savedUser, savedAddress)).transactionally

    db.run(action).map(result => result._1.copy(address = result._2))
  }
}

class SlickUserDao {
  def save(user: User): DBIO[User] = {
    (UserSchema.users returning UserSchema.users).insertOrUpdate(user)
  }
}
The signature of save in your service class is still the same.
No db related stuff in controllers.
You have full control of transactions.
I cannot find a case where the code above is harder to maintain / refactor compared to your original example.
There is also a quite exhaustive discussion that might be interesting for you. See Slick 3.0 withTransaction blocks are required to interact with libraries.
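On the auto-commit sub-question: as far as I know, Slick exposes no separate auto-commit switch; .transactionally is the mechanism, turning auto-commit off for the duration of the composed action and committing (or rolling back on failure) at the end. A small sketch of the rollback behaviour, reusing userDao from above with a deliberately failing step:

import scala.concurrent.ExecutionContext.Implicits.global

val action = (for {
  saved <- userDao.save(user)                        // insert...
  _     <- DBIO.failed(new RuntimeException("boom")) // ...then any failure here...
} yield saved).transactionally

db.run(action) // ...rolls the insert back instead of committing it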

Related

Vertx Web: How to split and organize routes across multiple files?

So far I'm really loving Vertx. The documentation is great, and the cross-language support is amazing.
However, the examples and documentation for my specific problem all seem to be out of date. I guess the API has changed a bit since 3.4.x (I'm currently using 3.9.1 with Scala 2.13.1).
I'd like to be able to split my routes among multiple files for the purpose of keeping things organized. For example, I'd like to have a UserRoutes file, and make a separate file for TodoRoutes, and both of those files can be used in my ServerVerticle.
The only way I've found to do this is basically:
UserRoutes:
object UserRoutes {
  def createUser(context: RoutingContext): Unit = {
    // do work
  }

  def login(context: RoutingContext): Unit = {
    // do work
  }
}
ServerVerticle:
class ServerVerticle(vertx: Vertx) extends AbstractVerticle {
  override def start(): Unit = {
    val router: Router = Router.router(vertx)
    router.post("/user/new").handler(UserRoutes.createUser)
    router.post("/user/login").handler(UserRoutes.login)
    ....
  }
}
What I would really like to do instead:
object UserRoutes {
  // somehow get a reference to `router` or create a new one
  router.post("/user/new", (context: RoutingContext) => {
    // do work
  })

  router.post("/user/login", (context: RoutingContext) => {
    // do work
  })
}
The reason I'd prefer this is because then it is easier to see exactly what is being done in UserRoutes, such as what path is being used, what parameters are required, etc.
I tried taking a similar approach to this example application, but apparently that's not really possible with the Vertx API as it exists in 3.9?
What is the best way to do this? Am I missing something? How do large REST APIs break up their routes?
I suppose one such way to do this would be something like the following:
UserRoutes
class UserRoutes(vertx: Vertx) {
  val router: Router = {
    val router = Router.router(vertx)
    router.get("/users/login").handler(context => {
      // do work
    })
    router
  }
}
And then in the ServerVerticle:
...
server
  .requestHandler(new UserRoutes(vertx).router)
  .requestHandler(new TodoRoutes(vertx).router)
  .listen(...)
This breaks things up nicely, but I still need to figure out a decent way to avoid having to repeat the CORS options and other common setup in each ___Routes file. There are plenty of ways to do this of course, but the question remains: is this the right approach?
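On the repeated-CORS concern, one hedged sketch (the factory name is made up; CorsHandler and BodyHandler are the stock vertx-web handlers) is to apply the shared handlers in a single factory that every ___Routes file starts from:

import io.vertx.core.Vertx
import io.vertx.ext.web.Router
import io.vertx.ext.web.handler.{BodyHandler, CorsHandler}

object RouterFactory {
  // Shared setup lives in one place; each ___Routes file builds on this
  // instead of a bare Router.router(vertx).
  def withDefaults(vertx: Vertx): Router = {
    val router = Router.router(vertx)
    router.route().handler(CorsHandler.create("*")) // CORS configured once
    router.route().handler(BodyHandler.create())
    router
  }
}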
Further, I could mix in some parts of the approach outlined in the initial question. The UserRoutes class could still have the createUser() method, and I could simply use it in the apply() call or somewhere else.
Sub-routers are another approach, but there are still some issues with this.
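For what it's worth, a minimal sketch of the sub-router idea (assuming the Vert.x 3.x mountSubRouter API and the TodoRoutes class mentioned earlier; note also that successive server.requestHandler calls replace each other, so a single top-level router has to do the dispatching):

import io.vertx.core.AbstractVerticle
import io.vertx.ext.web.Router

class ServerVerticle extends AbstractVerticle {
  override def start(): Unit = {
    val mainRouter = Router.router(vertx)
    // Each group of routes is mounted under its own path prefix.
    mainRouter.mountSubRouter("/user", new UserRoutes(vertx).router)
    mainRouter.mountSubRouter("/todo", new TodoRoutes(vertx).router)
    vertx.createHttpServer().requestHandler(mainRouter).listen(8080)
  }
}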
It would also be nice if each ___Routes file could create its own Verticle. If each set of routes ran on its own thread, this could speed things up quite nicely.
Truth be told... as always in Scala there are a plethora of ways to tackle this. But which is the best, when we want:
Logical organization
Utilize the strengths of Vertx
Sub-verticles maybe?
Avoid shared state
I need guidance! There's so many options I don't know where to start.
EDIT:
Maybe it's best to avoid assigning each group of routes to a Verticle, and let Vert.x do its thing?

Slick 3: How to implement repository pattern with transactions?

In my Play Framework (2.5) app, I need to write unit tests for services.
I need to isolate the data access logic to be able to test the service layer in isolation;
for this I want to create repository interfaces and MOCK them in my unit tests:
class UserService {
  def signUpNewUser(username: String, memberName: String): Future[Unit] = {
    val userId = 1 // Set to 1 for demo
    val user = User(userId, username)
    val member = Member(memberName, userId)
    // ---- I NEED TO EXECUTE THIS BLOCK WITHIN A TRANSACTION ----
    for {
      userResult <- userRepository.save(user)
      _ <- memberRepository.save(member)
    } yield ()
    // ---- END OF TRANSACTION ----
  }
}
In the above example, the userRepository.save(user) and memberRepository.save(member) operations should be performed within a transaction.
I don't want to use Slick directly in my service layer because it will complicate my tests.
Also, I don't want to use an embedded database for my unit tests, otherwise it would NOT be a unit test; I need complete isolation.
I do not want my repository interfaces to depend on Slick at all, but need something like this:
trait UserRepository {
  def findById(id: Long): Future[Option[User]]
  def save(user: User): Future[Unit]
}

How can I achieve this with Slick?
OK - let's decompose your question into three parts.
How to execute a block in a transaction
Basically read this answer: How to use transaction in slick.
As soon as you convert a DBIO to a Future you are done: there is no way to compose multiple operations into a single transaction after that point. End of story.
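To illustrate the boundary, here is a small sketch (assuming repositories that hypothetically return DBIO rather than Future): composing DBIO values keeps everything in one transaction, whereas once each save has gone through its own db.run, the work is already committed on separate connections.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Still only a description of work; nothing has touched the database yet.
val action: DBIO[Unit] = (for {
  _ <- userRepository.save(user)     // DBIO[Unit] in this sketch
  _ <- memberRepository.save(member) // DBIO[Unit] in this sketch
} yield ()).transactionally

// Only here does it run, as a single transaction.
val done: Future[Unit] = db.run(action)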
How to avoid using Slick in tests
This is basically a design question: if you want to have a business layer on top of Repository / DAO / whatever, then let that service layer deal with transactions. You won't need to interact with Slick outside of this layer.
Avoiding a dependency on Slick in your repository interfaces
In the most straightforward approach, you need to depend on Slick's DBIO to compose operations within a transaction (and composing repository methods within a transaction is something that you cannot avoid in any serious application).
If you want to avoid depending on DBIO, you could perhaps create your own monadic type, say TransactionBoundary[T] or TransactionContext[T].
Then you would have something like a TransactionManager that would execute this TransactionContext[T].
IMHO it's not worth the effort; I'd simply use DBIO, which has a brilliant name already (like Haskell's IO monad, DBIO tells you that you have a description of IO operations performed on your storage). But let's assume that you still want to avoid it.
You could perhaps do something like this:
package transaction {

  object Transactions {
    implicit class TransactionBoundary[T](private[transaction] val dbio: DBIO[T]) {
      // ...
    }
  }

  class TransactionManager {
    def execute[T](boundary: TransactionBoundary[T]): Future[T] = db.run(boundary.dbio)
  }
}
Your trait would look like this:
trait UserRepository {
  def findById(id: Long): TransactionBoundary[Option[User]]
  def save(user: User): TransactionBoundary[Unit]
}
and somewhere in your code you would do this:

transactionManager.execute(
  for {
    userResult <- userRepository.save(user)
    _ <- memberRepository.save(member)
  } yield ()
)
By using the implicit conversion, the results of your repository methods would be automatically converted to your TransactionBoundary.
But again, IMHO all of the above doesn't bring any actual advantage over using DBIO (except perhaps a matter of aesthetics). If you want to avoid using Slick-related classes outside of a certain layer, just make a type alias like this:
type TransactionBoundary[T] = DBIO[T]
and use it everywhere.
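For instance, a sketch of where such an alias could live (the package name is illustrative):

package object repository {
  import slick.dbio.DBIO
  type TransactionBoundary[T] = DBIO[T] // Slick is mentioned here and nowhere else
}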

How can I perform session based logging in Play Framework

We are currently using the Play Framework with its standard logging mechanism. We have implemented an implicit context to pass the username and session id to all service methods. We want to implement logging so that it is session based, which requires implementing our own logger. This works for our own logs, but how do we do the same for the framework's basic exception handling and the logs it produces? Maybe there is a better way to capture this than with implicits, or how can we override the exception-handling logging? Essentially, we want as many log messages as possible to be associated with the session.
It depends on whether you are doing reactive-style development or standard synchronous development:
If standard synchronous development (i.e. no futures, one thread per request), then I'd recommend you just use MDC, which adds values onto a ThreadLocal for logging. You can then customise the output in logback / log4j. When you get the username / session (possibly in a Filter or in your controller), you can set the values there and then, and you do not need to pass them around with implicits.
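A minimal sketch of that approach (the helper name is made up; the values then show up in a logback pattern as %X{username} and %X{sessionId}):

import org.slf4j.MDC

def withLoggingContext[A](username: String, sessionId: String)(block: => A): A = {
  MDC.put("username", username)
  MDC.put("sessionId", sessionId)
  try block
  finally MDC.clear() // don't leak values to the next request handled by this thread
}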
If you are doing reactive development you have a couple options:
You can still use MDC, except you'd have to use a custom ExecutionContext that effectively copies the MDC values to the executing thread, since each request could in theory be handled by multiple threads (as described here: http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/).
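A rough sketch of such a propagating wrapper, as I understand the linked article's idea (not code taken from it):

import org.slf4j.MDC
import scala.concurrent.ExecutionContext

class MdcExecutionContext(underlying: ExecutionContext) extends ExecutionContext {
  override def execute(runnable: Runnable): Unit = {
    val captured = MDC.getCopyOfContextMap // snapshot at task-submission time
    underlying.execute { () =>
      if (captured != null) MDC.setContextMap(captured) else MDC.clear()
      try runnable.run()
      finally MDC.clear() // leave the pool thread clean
    }
  }
  override def reportFailure(cause: Throwable): Unit = underlying.reportFailure(cause)
}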
The alternative is the solution I tend to use (and it is close to what you have now): you could make a class which represents MyAppRequest and set the username, session info, and anything else on that. You can continue to pass it around as an implicit. However, instead of using Action.async, you make your own MyAction class which can be used like below:
myAction.async { implicit myRequest => //some code }
Inside myAction, you'd have to catch all exceptions and deal with future failures, doing the error handling manually instead of relying on the ErrorHandler. I often inject myAction into my controllers and put common filter functionality in it.
The downside is that it is a manual method. Also, I've made MyAppRequest hold a Map of loggable values which can be set anywhere, which means it had to be a mutable map. And sometimes you need to make more than one variant of myAction.async. The pro is that it is quite explicit and under your control, without too much ExecutionContext/ThreadLocal magic.
Here is some very rough sample code as a starter, for the manual solution:
def logErrorAndRethrow(myRequest: MyRequest, x: Throwable): Nothing = {
  // log your error here in the format you like
  throw x // you can do this or handle errors how you like
}

class MyRequest {
  val attr: mutable.Map[String, String] = new mutable.HashMap[String, String]()
}

// make this a util to inject, or move it into a common parent controller
def myAsync(block: MyRequest => Future[Result]): Action[AnyContent] = {
  val myRequest = new MyRequest()
  try {
    Action.async(
      block(myRequest).recover { case cause => logErrorAndRethrow(myRequest, cause) }
    )
  } catch {
    case x: Throwable =>
      logErrorAndRethrow(myRequest, x)
  }
}

// the method your routes file refers to
def getStuff = myAsync { request: MyRequest =>
  // execute your code here, passing around request as an implicit
  Future.successful(Results.Ok)
}

How do I abstract the domain layer from the persistence layer in Scala

UPDATE:
I've edited the title and added this text to better explain what I'm trying to achieve: I'm creating a new application from the ground up, and I don't want the business layer to know about the persistence layer, in the same way one would not want the business layer to know about a REST API layer. Below is an example of the persistence layer that I would like to use. I'm looking for good advice on integrating with it, i.e. I need help with the design/architecture to cleanly split the responsibilities between business logic and persistence logic, perhaps with a concept along the lines of marshalling and unmarshalling persistence objects to domain objects.
From a Slick (a.k.a. ScalaQuery) test example, this is how you create a many-to-many database relationship. This will create 3 tables: a, b and a_to_b, where a_to_b links rows of tables a and b.
object A extends Table[(Int, String)]("a") {
  def id = column[Int]("id", O.PrimaryKey)
  def s = column[String]("s")
  def * = id ~ s
  def bs = AToB.filter(_.aId === id).flatMap(_.bFK)
}

object B extends Table[(Int, String)]("b") {
  def id = column[Int]("id", O.PrimaryKey)
  def s = column[String]("s")
  def * = id ~ s
  def as = AToB.filter(_.bId === id).flatMap(_.aFK)
}

object AToB extends Table[(Int, Int)]("a_to_b") {
  def aId = column[Int]("a")
  def bId = column[Int]("b")
  def * = aId ~ bId
  def aFK = foreignKey("a_fk", aId, A)(a => a.id)
  def bFK = foreignKey("b_fk", bId, B)(b => b.id)
}

(A.ddl ++ B.ddl ++ AToB.ddl).create
A.insertAll(1 -> "a", 2 -> "b", 3 -> "c")
B.insertAll(1 -> "x", 2 -> "y", 3 -> "z")
AToB.insertAll(1 -> 1, 1 -> 2, 2 -> 2, 2 -> 3)
val q1 = for {
  a <- A if a.id >= 2
  b <- a.bs
} yield (a.s, b.s)

q1.foreach(x => println(" " + x))
assertEquals(Set(("b", "y"), ("b", "z")), q1.list.toSet)
As my next step, I would like to take this up one level (I still want to use Slick, but wrap it nicely) to work with objects. So in pseudo code it would be great to do something like:
objectOfTypeA.save()
objectOfTypeB.save()
linkAtoB.save(ojectOfTypeA, objectOfTypeB)
Or something like that. I have my ideas on how I might approach this in Java, but I'm starting to realize that some of my object-oriented ideas from pure OO languages are failing me. Can anyone please give me some pointers as to how to approach this problem in Scala?
For example: Do I create simple objects that just wrap or extend the table objects, and then include these (composition) into another class that manages them?
Any ideas, guidance, example (please), that will help me better approach this problem as a designer and coder will be greatly appreciated.
The best idea would be to implement something like the data mapper pattern, which, in contrast to active record, does not violate the single responsibility principle (SRP).
Since I am not a Scala developer, I will not show any code.
The idea is the following:
create a domain object instance
set conditions on the element (for example setId(42), if you are looking for an element by ID)
create a data mapper instance
execute the fetch() method on the mapper, passing in the domain object as a parameter
The mapper would look up the current parameters of the provided domain object and, based on those parameters, retrieve information from storage (which might be an SQL database, a JSON file or maybe a remote REST API). If information is retrieved, it assigns the values to the domain object.
Also, I must note that data mappers are written against a specific domain object's interface, but the information which they pass from domain object to storage and back can be mapped to multiple SQL tables or multiple REST resources.
This way you can easily replace the mapper when you switch to a different storage medium, or even unit-test the logic in domain objects without touching the real storage. Also, if you decide to add caching at some point, that would be just another mapper, which tries to fetch information from the cache and, if that fails, lets the mapper for persistent storage kick in.
Domain object (or, in some cases, a collection of domain objects) would be completely unaware of whether it is stored or retrieved. That would be the responsibility of the data mappers.
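Since the answer deliberately contains no code, here is one hedged Scala sketch of the shape being described (the User type and the in-memory backend are made up for illustration):

case class User(id: Option[Long], name: String) // domain object: knows nothing about storage

trait UserMapper {
  def fetch(id: Long): Option[User] // read from storage, build a domain object
  def persist(user: User): User     // write a domain object to storage
}

// One interchangeable backend; a SlickUserMapper or RestUserMapper could
// implement the same trait against a real storage medium.
class InMemoryUserMapper extends UserMapper {
  private val storage = scala.collection.mutable.Map.empty[Long, User]
  private var nextId = 0L

  def fetch(id: Long): Option[User] = storage.get(id)

  def persist(user: User): User = {
    val id = user.id.getOrElse { nextId += 1; nextId }
    val saved = user.copy(id = Some(id))
    storage(id) = saved
    saved
  }
}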
If this is all in an MVC context then, to fully implement this, you would need another group of structures in the model layer. I call them "services" (please share, if you come up with a better name). They are responsible for containing the interaction between data mappers and domain objects. This way you prevent the business logic from leaking into the presentation layer (controllers, to be exact), and these services create a natural interface for interaction between the business (also known as model) layer and the presentation layer.
P.S. Once again, sorry that I cannot provide any code examples, because I am a PHP developer and have no idea how to write code in Scala.
P.P.S. If you are using the data mapper pattern, the best option is to write the mappers manually and not use any 3rd party ORM which claims to implement it. That gives you more control over the codebase and avoids pointless technical debt [1] [2].
A good solution for simple persistence requirements is the ActiveRecord pattern: http://en.wikipedia.org/wiki/Active_record_pattern . It is implemented in Ruby and in Play! framework 1.2, and you can easily implement it in Scala in a stand-alone application.
The only requirement is to have a singleton DB or a singleton service to get a reference to the DB you require. I personally would go for an implementation based on the following:
A generic trait ActiveRecord
A generic typeclass ActiveRecordHandler
Exploiting the power of implicits, you could obtain an amazing syntax:
trait ActiveRecordHandler[T] {
  def save(t: T): T
  def delete[A <: Serializable](primaryKey: A): Option[T]
  def find(query: String): Traversable[T]
}

object ActiveRecordHandler {
  // Note that an implicit val inside an object with the same name as the trait
  // is one of the ways to have the implicit in scope.
  implicit val myClassHandler = new ActiveRecordHandler[MyClass] {
    def save(myClass: MyClass) = myClass
    def delete[A <: Serializable](primaryKey: A) = None
    def find(query: String) = List(MyClass("hello"), MyClass("goodbye"))
  }
}

trait ActiveRecord[RecordType] {
  self: RecordType =>
  def save(implicit activeRecordHandler: ActiveRecordHandler[RecordType]): RecordType =
    activeRecordHandler.save(this)
  def delete[A <: Serializable](primaryKey: A)(implicit activeRecordHandler: ActiveRecordHandler[RecordType]): Option[RecordType] =
    activeRecordHandler.delete(primaryKey)
}

case class MyClass(name: String) extends ActiveRecord[MyClass]

object MyClass {
  def main(args: Array[String]) = {
    MyClass("10").save
  }
}
With such a solution, you only need your class to extend ActiveRecord[T] and to have an implicit ActiveRecordHandler[T] in scope to handle it.
There is actually also an implementation, https://github.com/aselab/scala-activerecord , which is based on a similar idea, but instead of giving ActiveRecord an abstract type, it declares a generic companion object.
A general but very important comment on the ActiveRecord pattern is that it helps meet simple persistence requirements but cannot deal with more complex ones: for example, when you want to persist multiple objects in the same transaction.
If your application requires more complex persistence logic, the best approach is to introduce a persistence service which exposes only a limited set of functions to the client classes, for example
def persist(objectsofTypeA:Traversable[A],objectsOfTypeB:Traversable[B])
Please also note that, according to your application's complexity, you might want to expose this logic in different fashions:
as a singleton object, in case your application is simple and you do not want your persistence logic to be pluggable
through a singleton object which acts as a sort of "application context", so that at startup your application can decide which persistence logic to use
with some sort of lookup service pattern, if your application is distributed
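A hedged sketch of such a persistence service as a singleton object, reusing the A and B tables from the question and the Slick 1.x/2.x session API (db is assumed to be the configured Database instance; withTransaction puts both inserts in one transaction):

object PersistenceService {
  def persist(objectsOfTypeA: Traversable[(Int, String)],
              objectsOfTypeB: Traversable[(Int, String)]): Unit =
    db.withTransaction { implicit session =>
      // Both inserts commit or roll back together.
      A.insertAll(objectsOfTypeA.toSeq: _*)
      B.insertAll(objectsOfTypeB.toSeq: _*)
    }
}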

Serialize Function1 to database

I know it's not directly possible to serialize a function/anonymous class to the database, but what are the alternatives? Do you know of any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id:Long, name:String, detector:Function1[List[UserScore],Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how do I do that?
The only thing that comes to mind is to have the body of the function as a script (e.g. Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type in a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, give them unique IDs, then store the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678
  // ...
)

def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
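To actually invoke it, you would presumably then pull out the singleton instance via the MODULE$ field the Scala compiler generates for objects; a hedged sketch:

def badgeEval(id: Int): BadgeEval =
  badgeEvalClass(id).getField("MODULE$").get(null).asInstanceOf[BadgeEval]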
... and if you're interested in runtime pluggability, try the service provider pattern.
You can try Scala continuations: they can give you the ability to serialize the computation and run it at a later time, or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than behaviour. You cannot serialize functionality, because that lives in a class file; object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.
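For the interpret-or-compile-at-runtime route (the question's Groovy-script idea, done in Scala instead), here is a hedged sketch using the Scala reflection ToolBox; this assumes scala-compiler is on the classpath, and the score type is simplified to Int so the snippet stays self-contained:

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

// Compile detector source code loaded from the database into a real function.
val toolbox = currentMirror.mkToolBox()
val source = "(scores: List[Int]) => scores.sum > 100" // would come from the DB
val detector = toolbox.eval(toolbox.parse(source)).asInstanceOf[List[Int] => Boolean]

println(detector(List(60, 50))) // true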