Background
I am trying to understand best practices for bringing implicit objects into scope within a Scala application.
I have a Play Framework 2.2.0 (Scala 2.10) web app that mixes in a trait for authorization. The Authenticated object checks that there is a user_id in scope, then attempts to retrieve the user info, the access token, and a data package object called a MagicNotebook from the cache, the database, and a web service call. If the request is valid, the various objects are added to the wrapped request.
object Authenticated extends ActionBuilder[AuthenticatedRequest] {
def invokeBlock[A](request: Request[A],
block: (AuthenticatedRequest[A] => Future[SimpleResult])) = {
request.session.get(userName).map { implicit userId =>
Cache.getAs[DbUser](userKey).map { user =>
Cache.getAs[String](accessTokenKey).map { accessToken =>
Cache.getAs[MagicNotebook](magicNotebookKey(userId)).map { notebook =>
block(AuthenticatedRequest(user, accessToken, notebook, request) )
}.getOrElse(startOver)
}.getOrElse {
requestNewAccessToken(user.token).flatMap { response =>
persistAccessToken(response).map { accessToken =>
Cache.getAs[MagicNotebook](magicNotebookKey(userId)).map { notebook =>
block(AuthenticatedRequest(user, accessToken, notebook, request))
}.getOrElse(startOver)
}.getOrElse(startOver)
}
}
}.getOrElse(startOver) // user not found in Cache
}.getOrElse(startOver) // userName not found in session
}
}
case class AuthenticatedRequest[A](user: DbUser,
accessToken: String,
magic: MagicNotebook,
request: Request[A])
extends WrappedRequest[A](request)
Question
What is the best way to bring these implicit variables into scope?
Through an implicit class?
I tried to use an implicit companion class, with the following code:
object Helper {
implicit class Magical(request: AuthenticatedRequest[AnyContent]) {
def folderMap = request.magic.fMap
def documentMap = request.magic.dMap
}
}
However, I don't really get the benefit of an implicit this way:
def testing = Authenticated { implicit request =>
import services.Helper._
request.magic.home.folders // doesn't compile
request.magic.home.folders(Magical(request).folderMap) // compiles, but not implicitly
Ok("testing 123")
}
Through an import statement?
One possibility I considered was through an import statement within the controller. Here, the request has a MagicNotebook object in scope that I would like to use as an implicit variable.
def testing = Authenticated { implicit request =>
import request.magic._
request.magic.home.folders // the implicit map is a parameter to the `folders` method
Ok("testing 123")
}
Through a companion trait?
Here, I create a companion trait, mixed in alongside the Authenticated trait, that brings the two maps of the MagicNotebook object into scope in the controller.
trait Magic {
implicit def folderMap[A](implicit request: AuthenticatedRequest[A]) =
request.magic.fMap
implicit def docMap[A](implicit request: AuthenticatedRequest[A]) =
request.magic.dMap
}
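For illustration, here is a minimal usage sketch of the companion trait approach (the controller name is hypothetical, and it assumes `folders` takes the folder map as an implicit parameter):
object MyController extends Controller with Magic {
  def testing = Authenticated { implicit request =>
    // folderMap is derived implicitly from the implicit AuthenticatedRequest,
    // so the map no longer needs to be passed explicitly
    request.magic.home.folders
    Ok("testing 123")
  }
}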
My preference is the companion trait solution, but I was wondering if there might be a better way that I overlooked. I ended up rewriting the methods that use the implicit so that they take the MagicNotebook's two maps as implicit parameters instead of the whole object.
But again, I was wondering if there might be a better way.
I am quite partial to package objects for this sort of thing. See What's New in Scala 2.8: Package Objects for a description. Package objects effectively allow you to put implicit classes into a package, which you can't otherwise do.
However, the main snag with this approach is that you can't split the definition of an object across multiple source files, so because the implicit classes need to be defined within the package object, they all need to be in the same source file. If you have many implicit classes you wish to have imported, this can result in a large and unwieldy source file. That in itself, though, is a sign that you have a "package god object" which should be split.
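As a minimal sketch (the names are illustrative, not from the question), an implicit class placed in a package object is in scope for everything in that package, with no import needed:
package object myapp {
  implicit class RichString(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }
}

// anywhere under the myapp package:
// "hello".shout  // "HELLO!"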
One of the ways I know of defining implicits is by using Package Objects.
package object implicitIdiomatic {
  implicit def nullableLongToOption(l: java.lang.Long): Option[Long] = Option(l)
}
package implicitIdiomatic

class ImplicitIdiomaticTest {
  val l: java.lang.Long = 1L
  longOpt(l)

  def longOpt(l: Option[Long]) = l match {
    case Some(l1) => println(l1)
    case None => println("No long")
  }
}
Kind of useless example but hope you get the idea. Now when longOpt gets l, it is converted to Option[Long] using the implicit.
As long as you are working in the same package as defined by the package object you don't need the import statement.
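Outside that package the import is required; a small sketch (the class name is made up):
package otherpackage

import implicitIdiomatic._

class OutsideTest {
  val l: java.lang.Long = 2L
  def longOpt(l: Option[Long]) = l.foreach(println)
  longOpt(l) // the implicit conversion from the package object applies via the import
}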
Related
No instance of play.api.libs.json.Format is available for akka.actor.typed.ActorRef[org.knoldus.eventSourcing.UserState.Confirmation] in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
[error] implicit val userCommand: Format[AddUserCommand] = Json.format
I am getting this error even though I have defined an implicit instance of Json Format for AddUserCommand.
Here is my code:
trait UserCommand extends CommandSerializable
object AddUserCommand{
implicit val format: Format[AddUserCommand] = Json.format[AddUserCommand]
}
final case class AddUserCommand(user:User, reply: ActorRef[Confirmation]) extends UserCommand
Can anyone please help me with this error and how to solve it?
As Gael noted, you need to provide a Format for ActorRef[Confirmation]. The complication around this is that the natural serialization, using the ActorRefResolver, requires that an ExtendedActorSystem be present, which means that the usual approaches to defining a Format in a companion object won't quite work.
Note that because of the way Lagom does dependency injection, this approach doesn't really work in Lagom: commands in Lagom basically can't use Play JSON.
import akka.actor.ExtendedActorSystem
import akka.actor.typed.{ ActorRef, ActorRefResolver }
import akka.actor.typed.scaladsl.adapter.ClassicActorSystemOps
import play.api.libs.json._
class PlayJsonActorRefFormat(system: ExtendedActorSystem) {
def reads[A] = new Reads[ActorRef[A]] {
def reads(jsv: JsValue): JsResult[ActorRef[A]] =
jsv match {
case JsString(s) => JsSuccess(ActorRefResolver(system.toTyped).resolveActorRef(s))
case _ => JsError(Seq(JsPath() -> Seq(JsonValidationError(Seq("ActorRefs are strings"))))) // hopefully parenthesized that right...
}
}
def writes[A] = new Writes[ActorRef[A]] {
def writes(a: ActorRef[A]): JsValue = JsString(ActorRefResolver(system.toTyped).toSerializationFormat(a))
}
def format[A] = Format[ActorRef[A]](reads, writes)
}
You can then define a format for AddUserCommand as
object AddUserCommand {
def format(arf: PlayJsonActorRefFormat): Format[AddUserCommand] = {
implicit def arfmt[A]: Format[ActorRef[A]] = arf.format
Json.format[AddUserCommand]
}
}
Since you're presumably using JSON to serialize the messages sent around a cluster (otherwise, the ActorRef shouldn't be leaking out like this), you would then construct an instance of the format in your Akka Serializer implementation.
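For example, a rough sketch of that wiring (the serializer class name, identifier, and JSON-bytes encoding are assumptions, not something from the question):
import akka.actor.ExtendedActorSystem
import akka.serialization.SerializerWithStringManifest
import play.api.libs.json._

class UserCommandSerializer(system: ExtendedActorSystem) extends SerializerWithStringManifest {
  // Akka passes the ExtendedActorSystem to the serializer constructor,
  // which is exactly what PlayJsonActorRefFormat needs
  private val arf = new PlayJsonActorRefFormat(system)
  private val addUserCommandFormat: Format[AddUserCommand] = AddUserCommand.format(arf)

  override def identifier: Int = 1000042 // assumed unique serializer id
  override def manifest(o: AnyRef): String = o.getClass.getName

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case cmd: AddUserCommand => Json.toBytes(Json.toJson(cmd)(addUserCommandFormat))
    case other => throw new IllegalArgumentException(s"Cannot serialize ${other.getClass}")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    Json.parse(bytes).as[AddUserCommand](addUserCommandFormat)
}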
(NB: I've only done this with Circe, not Play JSON, but the basic approach is common)
The error says that it cannot construct a Format for AddUserCommand because there's no Format for ActorRef[Confirmation].
When using Json.format[X], all the members of the case class X must have a Format defined.
In your case, you probably don't want to define a formatter for this case class (serializing an ActorRef doesn't make much sense) but rather build another case class with data only.
Edit: See Levi's answer on how to provide a formatter for ActorRef if you really do want to send the actor reference out there.
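A rough sketch of that data-only idea (the class name is made up, and it assumes a Format[User] is already available):
import play.api.libs.json._

// Hypothetical payload carrying only the data, not the ActorRef
final case class AddUserData(user: User)

object AddUserData {
  implicit val format: Format[AddUserData] = Json.format[AddUserData]
}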
Imagine I have a service:
class ServiceA(serviceB: ServiceB) {
import Extractor._
def send(typeA: A) = serviceB.send(typeA.extract)
}
object Extractor {
implicit class Extractor(typeA: A) {
def extract = ???
}
}
I want the extract method to be implicitly defined because it doesn't directly relate to the A type/domain and is a solution-specific, ad hoc extension.
Now I would like to write a very simple unit test that confirms that serviceB.send is called.
For that, I mock serviceB and pass a mocked A to send. Then I could just assert that serviceB.send was called with the mocked A.
As seen in the example, the send method also does some transformation on typeA parameter so I would need to mock extract method to return my specified value. However, A doesn't have extract method - it comes from the implicit class.
So the question is: how do I mock out the implicit class in the example above, given that imports are not first-class citizens?
If you want to specify a bunch of customised extract methods, you can do something like this:
sealed trait Extractor[T] {
// or whatever signature you want
def extract(obj: T): String
}
object Extractor {
implicit case object IntExtractor extends Extractor[Int] {
def extract(obj: Int): String = s"I am an int and my value is $obj"
}
implicit case object StringExtractor extends Extractor[String] {
def extract(obj: String): String = s"I am a string and my value is $obj"
}
def apply[A : Extractor](obj: A): String = {
implicitly[Extractor[A]].extract(obj)
}
}
So you have basically a sealed type family that's pre-materialised through case objects, which are arguably only useful in a match. But that would let you decouple everything.
If you don't want to mix this in with Extractor, call the instances something else and follow the same approach; you can then bring it all in with a context bound.
Then you can use this with:
println(Extractor(5))
For testing, simply override the available implicits if you need to. It's a bit of work, but not impossible: you can control the scope and spy on whatever method calls you want.
E.g. instead of import Extractor._, have some other object with test-only logic where you can use mocks or an alternative implementation.
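For instance, a sketch of such a test-only object (note that because Extractor above is sealed, the stub would have to live in the same source file, or the trait would need to be unsealed):
object TestExtractors {
  // Stub instance used only by tests; importing it puts it in lexical scope,
  // which is searched before the companion object's implicits
  implicit val stubIntExtractor: Extractor[Int] = new Extractor[Int] {
    def extract(obj: Int): String = "stubbed value"
  }
}

// in a test:
// import TestExtractors.stubIntExtractor
// assert(Extractor(5) == "stubbed value")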
Combining an ActionBuilder that transforms a request into a custom WrappedRequest with an additional type parameter, and then combining that with an ActionFilter causes the type of the custom WrappedRequest to be dropped.
Why is this and is there a fix?
For example, let's say I need an authentication ActionBuilder and an optional authorisation ActionFilter, where the user type we need can vary depending on use.
See this contrived code:
case class AuthRequest[A, U](val request: Request[A], val user: U) extends WrappedRequest[A](request)
case class TestUser(id: String)
case class AuthenticationAction[A]() extends ActionBuilder[({type λ[B] = AuthRequest[B, TestUser]})#λ] {
override def invokeBlock[A](request: Request[A], block: (AuthRequest[A, TestUser]) => Future[Result]): Future[Result] =
block(AuthRequest(request, TestUser("test-id")))
}
case class AuthorisedAction[A]() extends ActionFilter[({type λ[B] = AuthRequest[B, TestUser]})#λ] {
// does some additional authorisation checks for added security
def authorised(user: TestUser) = true
override def filter[A](request: AuthRequest[A, TestUser]) = Future.successful {
if( authorised(request.user) ) None
else Some(Unauthorized)
}
}
Please note: The type function in the above is required as per this answer because of the additional type parameter on the AuthRequest. Again this is required because this API will be used with more than one type of user.
If I then implement a controller that uses the above:
class ExampleController extends Controller {
def secured = (AuthenticationAction() andThen AuthorisedAction()) { request =>
Ok(request.user.id)
}
}
I get a compiler error (value user is not a member of play.api.mvc.WrappedRequest[play.api.mvc.AnyContent]). In other words, the type of the variable request above is WrappedRequest instead of the expected type of AuthRequest[play.api.mvc.AnyContent, TestUser]!
I understand that in most use cases the AuthenticationAction and AuthorisedAction would be combined into a single action but, because authorisation is optional, I would like to keep these as separate concerns.
Is this possible? Is this a bug in the Play Framework API? Or a Scala bug? Or human error?
Using your example I was able to compose the actions like this:
class ExampleController extends Controller {
def secured = AuthorisedAction().compose(AuthenticationAction()) { request =>
Ok(request.user.id)
}
}
It's interesting to note that in both cases IntelliJ's type inspection sees request as being of type AuthRequest[AnyContent, TestUser] -- it's only scalac that sees it as a WrappedRequest.
I am trying to figure out how to get a custom JsonFormat write method to be invoked when using the routing directive complete. JsonFormats created with the jsonFormat set of helper functions work fine, but a fully hand-written JsonFormat never gets called.
sealed trait Error
sealed trait ErrorWithReason extends Error {
def reason: String
}
case class ValidationError(reason: String) extends ErrorWithReason
case object EntityNotFound extends Error
case class DatabaseError(reason: String) extends ErrorWithReason
case class Record(a: String, b: String, error: Error)
object MyJsonProtocol extends DefaultJsonProtocol {
implicit object ErrorJsonFormat extends JsonFormat[Error] {
def write(err: Error) = err match {
case e: ErrorWithReason => JsString(e.reason)
case x => JsString(x.toString())
}
def read(value: JsValue) = {
value match {
//Really only intended to serialize to JSON for API responses, not implementing read
case _ => throw new DeserializationException("Can't reliably deserialize Error")
}
}
}
implicit val record2Json = jsonFormat3(Record)
}
And then a route like:
import MyJsonProtocol._
trait TestRoute extends HttpService with Json4sSupport {
path("testRoute") {
val response: Record = getErrorRecord()
complete(response)
}
}
If I add logging, I can see that the ErrorJsonFormat.write method never gets called.
The ramifications are as follows, showing what output I'm trying to get and what I actually get. Let's say the Record instance was Record("something", "somethingelse", EntityNotFound):
actual
{
"a": "something",
"b": "somethingelse",
"error": {}
}
intended
{
"a": "something",
"b": "somethingelse",
"error": "EntityNotFound"
}
I was expecting that complete(record) uses the implicit JsonFormat for Record, which in turn relies on the implicit object ErrorJsonFormat that specifies the write method that creates the appropriate JsString field. Instead it seems to recognize the provided ErrorJsonFormat while ignoring its instructions for serializing.
I feel like there should be a solution that does not involve needing to replace implicit val record2Json = jsonFormat3(Record) with an explicit implicit object RecordJsonFormat extends JsonFormat[Record] { ... }
So to summarize what I am asking
Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead?) answered below
Is there a way to match my expectation while still using complete(record)?
Edit
Digging through the spray-json source code, there is an sbt-boilerplate template that seems to define the jsonFormat series of methods: https://github.com/spray/spray-json/blob/master/src/main/boilerplate/spray/json/ProductFormatsInstances.scala.template
and the relevant expansion for jsonFormat3 from that seems to be:
def jsonFormat3[P1 :JF, P2 :JF, P3 :JF, T <: Product :ClassManifest](construct: (P1, P2, P3) => T): RootJsonFormat[T] = {
val Array(p1,p2,p3) = extractFieldNames(classManifest[T])
jsonFormat(construct, p1, p2, p3)
}
def jsonFormat[P1 :JF, P2 :JF, P3 :JF, T <: Product](construct: (P1, P2, P3) => T, fieldName1: String, fieldName2: String, fieldName3: String): RootJsonFormat[T] = new RootJsonFormat[T]{
def write(p: T) = {
val fields = new collection.mutable.ListBuffer[(String, JsValue)]
fields.sizeHint(3 * 4)
fields ++= productElement2Field[P1](fieldName1, p, 0)
fields ++= productElement2Field[P2](fieldName2, p, 1)
fields ++= productElement2Field[P3](fieldName3, p, 2)
JsObject(fields: _*)
}
def read(value: JsValue) = {
val p1V = fromField[P1](value, fieldName1)
val p2V = fromField[P2](value, fieldName2)
val p3V = fromField[P3](value, fieldName3)
construct(p1V, p2V, p3V)
}
}
From this it would seem that jsonFormat3 itself is perfectly fine (if you trace into the productElement2Field it grabs the writer and directly calls write). The problem must then be that the complete(record) doesn't involve JsonFormat at all and somehow alternately marshals the object.
So this seems to answer part 1: Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead?). No JsonFormat is called because complete marshals via some other means.
It seems the remaining question is if it is possible to provide a marshaller for the complete directive that will use the JsonFormat if it exists otherwise default to its normal behavior. I realize that I can generally rely on the default marshaller for basic case class serialization. But when I get a complicated trait/case class setup like in this example I need to use JsonFormat to get the proper response. Ideally, this distinction shouldn't have to be explicit for someone writing routes to need to know the situations where its the default marshaller as opposed to needing to invoke JsonFormat. Or in other words, needing to distinguish if the given type needs to be written as complete(someType) or complete(someType.toJson) feels wrong.
After digging further, it seems the root of the problem has been a confusion of the Json4s and Spray-Json libraries in the code. In trying to track down examples of various elements of JSON handling, I didn't recognize the separation between the two libraries readily and ended up with code that mixed some of each, explaining the unexpected behavior.
In this question, the offending piece is pulling in the Json4sSupport in the router. The proper definition should be using SprayJsonSupport:
import MyJsonProtocol._
trait TestRoute extends HttpService with SprayJsonSupport {
path("testRoute") {
val response: Record = getErrorRecord()
complete(response)
}
}
With this all considered, the answers are more apparent.
1: Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead)?
No JsonFormat is called because complete marshals via some other means. That other means is the marshaling provided implicitly by Json4s with Json4sSupport. You can use record.toJson to force spray-json serialization of the object, but the output will not be clean (it will include nested JS objects and "fields" keys).
Is there a way to match my expectation while still using complete(record)?
Yes, using SprayJsonSupport will use implicit RootJsonReader and/or RootJsonWriter where needed to automatically create a relevant Unmarshaller and/or Marshaller. Documentation reference
So with SprayJsonSupport it will see the RootJsonWriter defined by the jsonFormat3(Record) and complete(record) will serialize as expected.
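As a quick check (assuming the definitions above), forcing spray-json serialization directly shows the same shape that complete(response) now produces via SprayJsonSupport:
import MyJsonProtocol._
import spray.json._

val record = Record("something", "somethingelse", EntityNotFound)
println(record.toJson.compactPrint)
// {"a":"something","b":"somethingelse","error":"EntityNotFound"}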
I'm having trouble finding an elegant way of designing some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
def addHeader(header: String) = ???
}
class HttpRequest(path: String, headers: List[String])
extends HttpMessage(headers)
new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
Thanks,
Rob.
It is annoying, but the solution to implement your required pattern is not trivial.
The first point to notice is that if you want to preserve your subclass type, you need to add an abstract type member. Without this, you are not able to specify an unknown return type in HttpMessage:
abstract class HttpMessage(headers: List[String]) {
type X <: HttpMessage
def addHeader(header: String):X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
extends HttpMessage(headers){
type X = HttpRequest
def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
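With that in place, a quick usage sketch shows the subtype is preserved:
val req: HttpRequest = new HttpRequest("/", List("foo")).addHeader("bar")
// req is an HttpRequest (not just an HttpMessage), and path is still "/"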
A better, more scalable solution is to use an implicit type class for this purpose.
trait HeaderAdder[T<:HttpMessage]{
def addHeader(httpMessage:T, header:String):T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
type X <: HttpMessage
def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X = headerAdder.addHeader(this, header)
}
This latest approach is based on the typeclass concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you will get a compile time error.
This is great, because it saves you from having to implement addHeader = sys.error("This is not supported")
for certain classes in the hierarchy when it becomes "dirty", or from having to refactor it to avoid it becoming "dirty".
The best way to manage implicits is to put them in a trait like the following:
trait HeaderAdders {
implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
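For illustration, one of the elided instances could look like this (it assumes HttpRequest exposes path and headers as vals so the adder can rebuild the request):
implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] =
  new HeaderAdder[HttpRequest] {
    def addHeader(httpMessage: HttpRequest, header: String): HttpRequest =
      new HttpRequest(httpMessage.path, httpMessage.headers :+ header)
  }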
and then you also provide an object, in case the user can't mix in the trait (for example, if you have frameworks that inspect the properties of your object through reflection, you don't want extra properties added to your current instance) (http://www.artima.com/scalazine/articles/selfless_trait_pattern.html)
object HeaderAdders extends HeaderAdders
So for example you can write things such as
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object
// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope, but not mixed inside MyDomainClass, so reflection from Hibernate will still work correctly
By the way, this design problem is the same one the Scala collections face, with the only difference that your HttpMessage plays the role of TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type