I'm browsing around the web to find some kind of tutorial, but I'm having trouble finding one.
I guess I could just use the Twitter example provided with SecureSocial.
Example:
def onlyAdmin = SecuredAction(WithAuth("admin")) { implicit request =>
  Ok("You could see this since you are admin")
}

case class WithAuth(role: String) extends Authorization {
  def isAuthorized(user: Identity) = {
    val existingDbUser = User.findUserByProviderUserId(user)
    existingDbUser.hasRole(role)
  }
}
User.findUserByProviderUserId(user) calls the db to find the stored user and its roles.
I would prefer not to hit the database on every request and to make use of the Identity instead.
How would you solve this?
That would be the right approach. You can return an instance of your own model from the UserService.save() method (as long as it implements Identity). That allows you to get your User object back and call user.hasRole(role) directly, without querying the database again. The query still needs to be done at some point, though.
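For example, a minimal sketch of that idea, assuming your own User case class implements SecureSocial's Identity and already carries its roles when UserService.save returns it, so the Authorization check can pattern match on it instead of hitting the database:

case class WithAuth(role: String) extends Authorization {
  def isAuthorized(user: Identity) = user match {
    case u: User => u.hasRole(role) // our own model, roles already loaded - no extra DB call
    case _       => false           // any other Identity implementation: deny (or fall back to a lookup)
  }
}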
Related
We are currently using the Play Framework with its standard logging mechanism. We have implemented an implicit context to pass the username and session id to all service methods, and we want logging to be session based. This requires implementing our own logger, which works for our own logs, but how do we do the same for basic exception handling and the logs it produces? Maybe there is a better way to capture this than with implicits, or perhaps we can override the exception-handling logging. Essentially, we want as many log messages as possible to be associated with the session.
It depends on whether you are doing reactive-style development or standard synchronous development:
If standard synchronous development (i.e. no futures, one thread per request), then I'd recommend you just use MDC, which stores values in a ThreadLocal for logging. You can then customise the output in logback/log4j. When you get the username/session (possibly in a Filter or in your controller), you set the values there and then, and you do not need to pass them around with implicits.
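A minimal sketch of that synchronous approach, assuming the username and session id are kept in Play's session cookie (the MDC key names and the withMdc helper are illustrative, not part of any API):

import org.slf4j.MDC
import play.api.Logger
import play.api.mvc._

def withMdc[A](request: Request[A])(block: => Result): Result = {
  MDC.put("username", request.session.get("username").getOrElse("anonymous"))
  MDC.put("sessionId", request.session.get("sessionId").getOrElse("-"))
  try block finally MDC.clear() // values appear via %X{username} / %X{sessionId} in the logback pattern
}

def index = Action { request =>
  withMdc(request) {
    Logger.info("handled synchronously") // automatically tagged with the MDC values
    Ok("done")
  }
}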
If you are doing reactive development you have a couple options:
You can still use MDC, except you'd have to use a custom ExecutionContext that effectively copies the MDC values to the thread, since each request could in theory be handled by multiple threads (as described here: http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/).
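A rough sketch of such an MDC-copying execution context, in the spirit of the linked article (treat it as illustrative rather than production-ready):

import org.slf4j.MDC
import scala.concurrent.ExecutionContext

class MdcExecutionContext(delegate: ExecutionContext) extends ExecutionContext {
  override def execute(runnable: Runnable): Unit = {
    val callerMdc = MDC.getCopyOfContextMap // capture the caller's MDC (may be null)
    delegate.execute(new Runnable {
      def run(): Unit = {
        if (callerMdc != null) MDC.setContextMap(callerMdc) // restore it on the worker thread
        try runnable.run() finally MDC.clear()
      }
    })
  }
  override def reportFailure(cause: Throwable): Unit = delegate.reportFailure(cause)
}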
The alternative is the solution I tend to use (and it is close to what you have now): make a class that represents MyAppRequest, set the username, session info and anything else on it, and keep passing it around as an implicit. However, instead of using Action.async, you make your own MyAction class, which can be used like below:
myAction.async { implicit myRequest => //some code }
Inside myAction you'd have to catch all exceptions, deal with future failures, and do the error handling manually instead of relying on the ErrorHandler. I often inject myAction into my controllers and put common filter functionality in it.
The downside is that this is a manual approach. I've also made MyAppRequest hold a Map of loggable values which can be set anywhere, which means it had to be a mutable map. And sometimes you need to make more than one myAction.async. The upside is that it is quite explicit and under your control, without too much ExecutionContext/ThreadLocal magic.
Here is some very rough sample code as a starter, for the manual solution:
import play.api.mvc._
import scala.collection.mutable
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def logErrorAndRethrow(myRequest: MyRequest, x: Throwable): Nothing = {
  // log your error here in the format you like, including anything stored on myRequest.attr
  throw x // you can do this or handle errors how you like
}

class MyRequest {
  val attr: mutable.Map[String, String] = new mutable.HashMap[String, String]()
}

// make this a util to inject, or move it into a common parent controller
def myAsync(block: MyRequest => Future[Result]): Action[AnyContent] = {
  val myRequest = new MyRequest()
  try {
    Action.async(
      block(myRequest).recover { case cause => logErrorAndRethrow(myRequest, cause) }
    )
  } catch {
    case x: Throwable =>
      logErrorAndRethrow(myRequest, x)
  }
}

// the method your routes file refers to
def getStuff = myAsync { implicit request: MyRequest =>
  // execute your code here, passing request around as an implicit
  Future.successful(Results.Ok)
}
I know Scala, as a functional language, is supposed to work differently from a common OO language such as Java, but I'm sure there has to be a way to wrap a group of database changes in a single transaction, ensuring atomicity as well as every other ACID property.
As explained in the Slick docs (http://slick.lightbend.com/doc/3.1.0/dbio.html), DBIOAction allows you to group db operations in a transaction like this:
val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _  <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)
However, in my use case (and in most real-world examples I can think of), I have a code structure with a Controller, which exposes the code for my REST endpoint; that controller calls multiple services, and each service delegates database operations to DAOs.
A rough example of my usual code structure:
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData) = {
    for {
      savedUser    <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
    } yield savedUser.copy(address = savedAddress)
  }
}

class SlickUserDao {
  def save(user: User) = {
    db.run((UserSchema.users returning UserSchema.users).insertOrUpdate(user))
  }
}
This is a simple example, but most have more complex business logic in the service layer.
I don't want:
My DAOs to have business logic and decide which database operations to run.
Return DBIO/DBIOAction from my DAOs and expose the persistence classes. That completely defeats the purpose of using DAOs in the first place and makes further refactoring much harder.
But I definitely want a transaction around my entire Controller, to ensure that if any code fails, all the changes done in the execution of that method will be rolled back.
How can I implement full controller transactionality with Slick in a Scala Play application? I can't seem to find any documentation on how to do that.
Also, how can I disable auto-commit in Slick? I'm sure there is a way and I'm just missing something.
EDIT:
So reading a bit more about it, I feel now I understand better how slick uses connections to the database and sessions. This helped a lot: http://tastefulcode.com/2015/03/19/modern-database-access-scala-slick/.
What I'm doing is a case of composing futures and, based on that article, there's no way to use the same connection and session for multiple operations of that kind.
Problem is: I really can't use any other kind of composition. I have considerable business logic that needs to be executed in between queries.
I guess I could change my code to allow action composition, but as I mentioned before, that forces me to write my business logic with aspects like transactionality in mind. That shouldn't happen: it pollutes the business code and makes writing tests a lot harder.
Is there any workaround for this issue? Any git project out there that sorts this out that I missed? Or, more drastically, is there any other persistence framework that supports this? From what I've read, Anorm supports it nicely, but I may be misunderstanding that, and I don't want to change frameworks only to find out it doesn't (as happened with Slick).
There is no such thing as transactional annotations or the like in Slick. Your second "do not want" is actually the way to go: it is totally reasonable to return DBIO[User] from your DAO, and it does not defeat the DAO's purpose at all. It's the way Slick works.
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData): Future[User] = {
    val action = (for {
      savedUser    <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
      whatever     <- DBIO.successful(nonDbStuff)
    } yield (savedUser, savedAddress)).transactionally

    db.run(action).map(result => result._1.copy(address = result._2))
  }
}

class SlickUserDao {
  def save(user: User): DBIO[User] = {
    (UserSchema.users returning UserSchema.users).insertOrUpdate(user)
  }
}
The signature of save in your service class is still the same.
No db related stuff in controllers.
You have full control of transactions.
I cannot find a case where the code above is harder to maintain / refactor compared to your original example.
There is also a quite exhaustive discussion that might be interesting for you. See Slick 3.0 withTransaction blocks are required to interact with libraries.
I'm looking for a good solution to log DB changes in a web application developed using Play/Scala/ReactiveMongo. I need to know who changed what.
I have a separate layer, namely services, in which all data access and business logic happens. All saves/updates/removes are done by a handful of methods, so I can log them safely in just 3 or 4 places, but I need the user's identity there.
I don't have access to the current user in the services! There is no global HttpContext or Request or anything like that which lets me get the user identity (and I think that way of getting the user identity would be incorrect anyway).
I have a solution:
Add an implicit parameter to all methods of services which have side effects (change DB) and pass user identity to them.
def save(model: A)(implicit userIdentity: Option[UserIndentity] = None) = { ... }
As I wrapped the default request, it can mix in the UserIdentity trait so that the implicit request matches the implicit parameter:
class MyRequest[A](...) extends WrappedRequest[A](request) with UserIdentity
Finally, actions can use services like this:
def index() = MyAction { implicit request =>
  //...
  defaultService.save(model)
  //...
}
The bad thing is that I have to add implicit parameters to those service methods. Isn't there another solution to get the current user without polluting method signatures?
What's the problem with simply adding UserIdentity as an argument to your functions? Knowing who the user is seems to be important for your business logic - after all, today you want to log who performed the operation, and tomorrow you will want to make sure this particular user is allowed to do it.
And I would just use a real UserIdentity object, not some hack with WrappedRequest; your services don't need to mess with a WrappedRequest instance.
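A minimal sketch of that suggestion, where UserIdentity, Model and the stubbed persistence call are all made up for illustration:

import play.api.Logger
import scala.concurrent.Future

case class UserIdentity(id: String, name: String)
case class Model(data: String) // stand-in for whatever you persist

class DefaultService {
  // the caller (your action) builds the UserIdentity and hands it over explicitly
  def save(model: Model, who: UserIdentity): Future[Model] = {
    Logger.info(s"${who.name} (${who.id}) saved $model") // the audit log carries the real identity
    Future.successful(model) // the actual ReactiveMongo write would go here
  }
}

In the action you would construct the UserIdentity from whatever authentication data you already have and pass it straight to the service.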
I have a simple Play application in which I need to check the url being called and use a different database accordingly. I know that it's easy to access the current url in the controller, but for this to work I need to access it in the model.
Passing the url from controller to each call of a model method would be too big of an inconvenience. Is there any other way to solve this problem?
Play Framework 2.2.1 / Scala 2.10.3
UPDATE: This is my basic example
Controller (Application.scala):
package controllers

import play.api._
import play.api.mvc._
import models.Data

object Application extends Controller {
  def index = Action {
    // Call to model method - the model should somehow get the URL without it being passed as a param
    val smth: String = Data.getSmth.getOrElse("")
    Ok(smth)
  }
}
Model (Data.scala):
package models

object Data {
  def getSmth: Option[String] = DB.withSession { implicit session =>
    val db = ??? // this is where I need the url to decide which database to use
    sql"""SELECT #$db.smth FROM smthTable""".as[String].firstOption
  }
}
So, this is by design in the Play Scala API: there is no magic context, and if you want data you will have to pass it along to whatever piece of your code needs it.
You will have to take the url as a parameter of some kind, you could do it like this:
case class MyModel(someData: String, requestUrl: String)

object MyModel {
  def apply(someData: String, request: Request[_]): MyModel =
    new MyModel(someData, request.uri)
}
This clearly expresses the dependency, but in your particular app you might call this from every request and want to avoid repeating that parameter. In that case you can use Scala implicits, which make the compiler look for a matching implicit instance of the required type in the current scope (you can read more about this here: http://www.scala-lang.org/old/node/114):
object MyModel {
  def apply(someData: String)(implicit request: Request[_]): MyModel =
    new MyModel(someData, request.uri)
}
which could then be called from a controller action like this:
def myAction = Action { implicit request =>
  val model = MyModel("blablabla")
  ...
}
Of course, it may be a bad idea to tightly couple your model to the Play Request API, so you should probably introduce your own class to represent this 'context'. You could then implicitly convert from Request to YourContext in your controllers and have the model use YourContext instead.
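A hedged sketch of that decoupling, where AppContext, its field and the database-selection rule are assumptions made up for illustration:

import play.api.mvc.RequestHeader

case class AppContext(requestUrl: String)

object AppContext {
  // with `implicit request =>` in scope in a controller, this conversion kicks in automatically
  implicit def fromRequest(implicit request: RequestHeader): AppContext =
    AppContext(request.uri)
}

object Data {
  def getSmth(implicit ctx: AppContext): Option[String] = {
    val db = if (ctx.requestUrl.startsWith("/eu")) "smth_eu" else "smth_us" // assumed routing rule
    // ... run the query against the chosen schema, as in the original model ...
    None
  }
}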
If all this sounds like gibberish to you, you should probably start by actually learning Scala before trying to build a web app in it. There are lots of good books nowadays ('Scala for the Impatient', for example) as well as a multitude of good online resources (The Neophyte's Guide to Scala is a good one).
Good luck!
I know it's not directly possible to serialize a function/anonymous class to the database but what are the alternatives? Do you know any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (ex: Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type into a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin-system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
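For instance, a minimal sketch of what such a DSL and its interpreter could look like (the rule cases and the ScoreContext shape are made up for illustration; the serialized form of a Rule, e.g. JSON, is what would go into the database):

sealed trait Rule
case class MinTotalScore(threshold: Int)       extends Rule
case class ScoresAbove(value: Int, count: Int) extends Rule
case class And(left: Rule, right: Rule)        extends Rule

case class ScoreContext(scores: List[Int])

// the interpreter: evaluates a persisted rule against the user's scores
def eval(rule: Rule, ctx: ScoreContext): Boolean = rule match {
  case MinTotalScore(t)  => ctx.scores.sum >= t
  case ScoresAbove(v, n) => ctx.scores.count(_ > v) >= n
  case And(l, r)         => eval(l, ctx) && eval(r, ctx)
}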
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, giving each one a unique ID and storing the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678
  // ...
)
def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
... and if you're interested in runtime pluggability, try the service provider pattern.
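For the pluggable variant, a brief sketch using the JDK's ServiceLoader; the package name and the convention of jars registering their implementations in META-INF/services/com.example.badge.BadgeEval are assumptions (note that service providers need a public no-arg constructor, so they would be classes rather than objects):

import java.util.ServiceLoader
import scala.collection.JavaConverters._

// picks up every BadgeEval implementation registered by the jars on the classpath
def loadBadgeEvals(): Map[Int, BadgeEval] =
  ServiceLoader.load(classOf[BadgeEval]).asScala.map(b => b.badgeId -> b).toMap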
You can try to use Scala Continuations - they can give you the ability to serialize the computation and run it at a later time or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than methods. You cannot serialize functionality, because that lives in a class file; object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.