I have two controllers that write and read the same AccountModel case class. This class is an adapter for my "domain" object Account: it flattens some collections and turns object references (Map[Role, Auth]) into explicit key references (Set[AuthModel(rolekey: String, level: Int)]).
I would like to reuse this AccountModel and its implicit Writes and Reads, but I don't know how to achieve that 'the Scala way'.
I could put my case classes and all the related implicits inside an object Models as inner classes, but I think that would become unreadable quickly.
What do you usually do? Where do you put your reusable JSON classes? Do you have any advice?
Thanks a lot
There are two main approaches.
Approach 1: Put them in the companion object of your serializable class:
// in file AccountModel.scala
class AccountModel(...) {
  ...
}

object AccountModel {
  implicit val format: Format[AccountModel] = {...}
}
This way, wherever you import AccountModel, the formatter will also be available (via implicit companion-object lookup), so everything works seamlessly.
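For example, a minimal sketch with play-json's format macro (the AccountModel fields here are illustrative):

import play.api.libs.json._

// in file AccountModel.scala
case class AuthModel(rolekey: String, level: Int)

object AuthModel {
  implicit val format: Format[AuthModel] = Json.format[AuthModel]
}

case class AccountModel(id: String, auths: Set[AuthModel])

object AccountModel {
  // Derived once; found automatically via companion-object implicit search
  // wherever a Format[AccountModel] is required.
  implicit val format: Format[AccountModel] = Json.format[AccountModel]
}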
Approach 2: Prepare a trait with JSON formatters:
// in a separate file AccountModelJSONSupport.scala
import my.cool.package.AccountModel
trait AccountModelJsonSupport {
  implicit val format: Format[AccountModel] = {...}
}
With this approach, whenever you need serialization, you mix the trait in, like this:
object FirstController extends Controller with AccountModelJsonSupport {
  // Format[AccountModel] is available now:
  def create = Action(parse.json[AccountModel]) { ... }
}
EDIT: I forgot to add a comparison of the two approaches. I usually stick to approach 1, as it is more straightforward. The JSON-support mixin strategy is required, however, when you need two different formatters for the same class, or when the model class is not your own and you can't modify it. Thanks for pointing this out in the comments.
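For instance, when two wire formats must coexist, each can live in its own support trait (a sketch; the trait names and field names are illustrative, and it assumes the AccountModel shape from the earlier sketch):

import play.api.libs.json._
import play.api.libs.functional.syntax._

// Default format, derived by the macro.
trait AccountModelDefaultJsonSupport {
  implicit val accountModelFormat: Format[AccountModel] = Json.format[AccountModel]
}

// Alternative wire format with different field names, built with combinators.
trait AccountModelSnakeCaseJsonSupport {
  implicit val accountModelFormat: Format[AccountModel] = (
    (__ \ "account_id").format[String] and
    (__ \ "auth_models").format[Set[AuthModel]]
  )(AccountModel.apply, unlift(AccountModel.unapply))
}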
I have a sealed abstract class and a companion object:
// constructor parameters added here so the subclass's extends clause compiles
sealed abstract class Granularity(val key: String, val interval: String) {
  // some values and methods
}

object Granularity {
  final private case class WeekGranularity(name: String, windowSize: Int)
      extends Granularity("week", "'7' day") {
    // overriding methods
  }

  val Week: Granularity = WeekGranularity(name = "week", windowSize = 1)
}
I am using it in another class like this:
case class Meta(granularity: Granularity)
object Meta {
  implicit val granularityWrites: Writes[Granularity] = Writes[Granularity](d => JsString(d.toString))
  implicit val metaWrites: OWrites[Meta] = Json.writes[Meta]
}
Now, when writing a spec like this, I get an error:
class ControllerSpec {
  "MetaController" should {
    "return" in {
      .
      .
      .
      // play 2.13 resp
      resp.body[JsValue].asOpt[Meta] should beSome(expectedMeta)
      // ERROR: No Json deserializer found for type models.Meta. Try to implement an implicit Reads or Format for this type
    }
  }
}
When I add an implicit Reads at the top of the Spec class, I still get an error:
implicit val granularityReads = Json.reads[Granularity] // ERROR: Sealed trait Granularity is not supported: no known subclasses
implicit val metaReads = Json.reads[Meta]
I can compare the raw JSON instead, which works without creating any implicits:
resp.body[JsValue] shouldEqual Json.toJson(metricSignTimeseries)
But I want to understand: how can I implement an implicit Reads for Granularity?
In order to read a Granularity from JSON, the library must be able to create an instance of Granularity to return. But Granularity is an abstract class, and you can't create instances of an abstract class.
Json.reads needs to be parameterised with a concrete class that can be instantiated by the library, or you need to write a custom Reads[Granularity] that creates and returns an appropriate subclass of Granularity.
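A minimal sketch of such a hand-written Reads, assuming the wire format is the granularity's name (note that the Writes above would then need to emit that name rather than toString, so the two sides round-trip):

import play.api.libs.json._

implicit val granularityReads: Reads[Granularity] = Reads {
  case JsString("week") => JsSuccess(Granularity.Week)
  case other            => JsError(s"unknown granularity: $other")
}
implicit val metaReads: Reads[Meta] = Json.reads[Meta]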
I suggest that you don't put functionality in the classes you use to read/write JSON, and don't use complex class hierarchies for them. Just read the data into simple case class instances that directly match the JSON format, then convert those into application classes. This lets the storage format and the internal application model change independently instead of being tightly coupled.
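A sketch of that approach for this example (MetaDto is a hypothetical name):

import play.api.libs.json._

// Plain DTO mirroring the JSON shape exactly.
case class MetaDto(granularity: String) {
  def toMeta: Either[String, Meta] = granularity match {
    case "week" => Right(Meta(Granularity.Week))
    case other  => Left(s"unknown granularity: $other")
  }
}

object MetaDto {
  implicit val reads: Reads[MetaDto] = Json.reads[MetaDto]
}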
I'm trying to do something that I'm not completely sure is either possible or makes sense.
I have an abstraction which, in other words, depends heavily on an object to tell which version of a given component is to be used. It goes like this:
object ComponentManager {
  def component1Version: ComponentVersion = Component1Version1()
  def component2Version: ComponentVersion = Component2Version3()
}
What I want to achieve here is to limit all methods in the ComponentManager object to conform to the type ComponentVersion. I could define a trait to enforce the types, but I don't know in advance how many components I will have. Therefore, I might end up with people adding to the manager object something like:
object ComponentManager {
  def component1Version: ComponentVersion = Component1Version1()
  def component2Version: ComponentVersion = Component2Version3()
  def component3Version = objectWithWrongType() // this is the problematic line
}
For component3Version, we have the offending object. It will compile, but I would rather get a compilation error when such a thing happens, because proper typing gives me some checks "for free".
Again, I don't know how many components the manager will have, so I can't rely on a trait specifying each and every method's type.
I've read about F-bounded types / functions / what-not, but still couldn't figure out whether they are applicable to my problem, or how to make them so.
Any ideas? "Your constraint doesn't make sense" is also a possible answer, I reckon, but I'd like to get some ideas on this regardless.
I'm making the following assumptions:
When someone creates a manager, they know how many components they need.
One must declare versions for all components.
Traits have to declare all method names explicitly, so we can't declare methods for components we don't know about. Instead, let's model components as a type:
trait ComponentManager {
  type Component
  def version(component: Component): Version
}
When someone knows what components they need, they can implement a manager:
sealed trait MyComponent
case object Component1 extends MyComponent
case object Component2 extends MyComponent
case object Component3 extends MyComponent

object MyComponentManager extends ComponentManager {
  type Component = MyComponent

  def version(component: MyComponent): Version = component match {
    case Component1 => Component1Version1()
    case Component2 => Component2Version3()
    case Component3 => Component3Version5()
  }
}
Now:
Returning anything but a Version for any component is a type error.
Forgetting to match a component is a non-exhaustive match warning.
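A usage sketch under these definitions:

// Every lookup goes through the single, well-typed entry point:
val v: Version = MyComponentManager.version(Component2) // Component2Version3()

// Adding `case object Component4 extends MyComponent` without extending
// the match in `version` produces a non-exhaustive match warning.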
I have a trait that defines a function; I don't want to specify how it will work until later. This trait is mixed into several case classes, like so:
trait AnItem

trait DataFormatable {
  def render(): String = "" // dummy implementation
}

case class Person(name: String, age: Int) extends DataFormatable with AnItem
case class Building(numFloors: Int) extends DataFormatable with AnItem
Ok, so now I want includable modules that pimp specific implementations of this render behavior. Trying to use value classes here:
object JSON {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String =
      s"""{"name":"${p.name}","age":${p.age}}""" // render json
  }
  // others
}

object XML {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String =
      s"<person><name>${p.name}</name><age>${p.age}</age></person>" // render xml
  }
  // others
}
The ideal use would look like this (presuming JSON output desired):
import JSON._
val p:AnItem = Person("John",24)
println(p.render())
All cool, but it doesn't work. Is there a way I can make this loadable-implementation thing work? Am I close?
The DataFormatable trait is doing nothing here but holding you back. You should just get rid of it. Since you want to swap out render implementations based on the implicits in scope, Person can't have its own render method. The compiler only looks for an implicit conversion to PersonRender if Person doesn't already have a method named render. Because Person inherits (or is forced to implement) render from DataFormatable, there is never a reason to look for the implicit conversion.
Based on your edit: if you have a List[AnItem], it is also not possible to implicitly convert the elements to gain render. While each of the subclasses may have an implicit conversion that gives it render, the compiler doesn't know that once they are all piled into a list of a more abstract type, particularly an empty trait such as AnItem.
How can you make this work? You have two simple options.
One, if you want to stick with the implicit conversions, you need to remove DataFormatable as the super-type of your case classes, so that they do not have their own render method. Then you can swap XML._ and JSON._ in and out, and the conversions will work, as sketched below. However, you won't be allowed mixed collections.
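A sketch of option one, assuming Person no longer extends DataFormatable:

trait AnItem
case class Person(name: String, age: Int) extends AnItem // no built-in render

import JSON._ // or XML._, to pick the output format

val p = Person("John", 24) // note: static type Person, not AnItem
println(p.render())        // resolved through the implicit PersonRender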
Two, drop the implicits altogether and have your trait look like this:
trait DataFormatable {
  def toXML: String
  def toJSON: String
}
This way, you force every class that mixes in DataFormatable to contain its serialization logic (which is the way it should be, rather than hiding it in implicits). Now, when you have a List[DataFormatable], you can prove that every element can be converted to both JSON and XML, so you can render a mixed list. I think this would be much better overall, as the code is more straightforward. The imports you happen to have shouldn't define the behavior of what follows; imagine the confusion that can arise because XML._ was imported at the top of the file instead of JSON._.
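For example (a sketch; it assumes each case class now implements both methods):

case class Person(name: String, age: Int) extends DataFormatable with AnItem {
  def toJSON: String = s"""{"name":"$name","age":$age}"""
  def toXML: String  = s"<person><name>$name</name><age>$age</age></person>"
}

case class Building(numFloors: Int) extends DataFormatable with AnItem {
  def toJSON: String = s"""{"numFloors":$numFloors}"""
  def toXML: String  = s"<building><numFloors>$numFloors</numFloors></building>"
}

// A mixed collection renders uniformly; imports no longer steer behavior:
val items: List[DataFormatable] = List(Person("John", 24), Building(3))
items.map(_.toJSON)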
I'm having trouble finding an elegant way to design some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
  def addHeader(header: String) = ???
}

class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers)

new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
Thanks,
Rob.
It is annoying, but the solution to the pattern you want is not trivial.
The first point to notice is that if you want to preserve the subclass type, you need to add a type member. Without it, you cannot give addHeader a precise return type in HttpMessage:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String): X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers) {
  type X = HttpRequest
  def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
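A quick usage sketch; the concrete type survives the call:

val req: HttpRequest = new HttpRequest("/", List("foo")).addHeader("bar")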
A better, more scalable solution is to use implicits for this purpose:
trait HeaderAdder[T <: HttpMessage] {
  def addHeader(httpMessage: T, header: String): T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X =
    headerAdder.addHeader(this.asInstanceOf[X], header) // cast needed: `this` is only statically an HttpMessage
}
This latest approach is based on the type class concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you get a compile-time error.
This is great, because it saves you from having to implement addHeader = sys.error("This is not supported") for certain classes when the hierarchy becomes "dirty", or from refactoring to keep it from becoming "dirty".
The best way to manage the implicits is to put them in a trait like the following:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
  implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
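A filled-in instance might look like this (a sketch; it assumes path and headers are exposed as vals on HttpRequest):

implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] =
  new HeaderAdder[HttpRequest] {
    def addHeader(httpMessage: HttpRequest, header: String): HttpRequest =
      new HttpRequest(httpMessage.path, httpMessage.headers :+ header)
  }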
and you also provide an object, in case the user can't mix the trait in (for example, if a framework inspects your object's properties through reflection, you don't want extra members added to the instance); this is the selfless trait pattern (http://www.artima.com/scalazine/articles/selfless_trait_pattern.html):
object HeaderAdders extends HeaderAdders
So, for example, you can write things such as:
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object

// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope but not mixed into MyDomainClass, so reflection from Hibernate will still work correctly
By the way, this design problem is the same one the Scala collections face, the only difference being that your HttpMessage plays the role of TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type.
Suppose I wanted to represent a Book in Scala, and it is generated directly from XML.
I want a wrapping parent class XMLObject to encompass classes that can be directly mapped to and from XML.
Below is an example of a working implementation of this. What I want to know is why constructors cannot be abstract and cannot be marked override, yet you can still define a constructor in a subclass with the same signature as its parent's and have it work the way you would expect.
Is this considered "bad" coding practice, and if so, what would be a better way to get similar functionality?
import java.text.SimpleDateFormat
import java.util.Date
import scala.xml.Node

abstract class XMLObject {
  def toXML: Node
  def this(xml: Node) = this()
}

class Book(
  val author: String = "",
  val title: String = "",
  val genre: String = "",
  val price: Double = 0,
  val publishDate: Date = null,
  val description: String = "",
  val id: Int = 0
) extends XMLObject {

  override def toXML: Node =
    <book id={id.toString}>
      ...
    </book>

  def this(xml: Node) = {
    this(
      author = (xml \ "author").text,
      title = (xml \ "title").text,
      genre = (xml \ "genre").text,
      price = (xml \ "price").text.toDouble,
      publishDate = (new SimpleDateFormat("yyyy-MM-dd")).parse((xml \ "publish_date").text),
      description = (xml \ "description").text
    )
  }
}
Example use:
val book = new Book(someXMLNode)
A constructor can only be called in the form:
new X(...)
That means you know the runtime type of the object you are going to create, so overriding makes no sense here. You can still define constructors in abstract classes, but those are for chaining (the subclass constructor calls the superclass constructor).
What you seem to be looking for is rather a factory pattern:
Remove the constructor from XMLObject
If you want, add a function to XMLObject's companion object that decides, based on the XML you pass in, which subclass to create.
For example:
object XMLObject {
  def apply(xml: Node) = xml match {
    case <book>{_*}</book> => new Book(xml)
    // ...
    case _ => sys.error("malformed element")
  }
}
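Usage is then a plain factory call; the match decides the concrete class:

// someXMLNode: scala.xml.Node
val book = XMLObject(someXMLNode) // yields a Book for a <book> element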
I would use type classes for this.
The fact that you want to be able to map a Book (and other things) to and from XML is orthogonal to what those entities are. You don't want to choose a class hierarchy just based on the fact that you want these objects to have some XML mapping functionality in common. A proper superclass for Book might be PublishedEntity or something similar, but not XMLObject.
And what happens if next week you want to add JSON parsing/rendering? You've already used the superclass for XML; what would you do?
A better solution would be to make a trait for the XML interface and mix it in at the root. That way you could mix in as many such things as you want, and still be free to choose a sensible class hierarchy.
But an even better solution would be type classes, because they allow you to add support for someone else's class, that you can't add methods to.
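A minimal sketch of the type class approach (names are illustrative, not a specific library's API):

import scala.xml.Node

// The type class: evidence that A can be rendered to XML.
trait XmlWritable[A] {
  def toXML(a: A): Node
}

object XmlWritable {
  def toXML[A](a: A)(implicit w: XmlWritable[A]): Node = w.toXML(a)

  // An instance for Book, defined without touching Book itself;
  // an instance for someone else's class looks exactly the same.
  implicit val bookXmlWritable: XmlWritable[Book] = new XmlWritable[Book] {
    def toXML(b: Book): Node = <book id={b.id.toString}>...</book>
  }
}

// Usage: XmlWritable.toXML(book)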
Here is a slide presentation that Erik Osheim prepared on type classes.
Many of the JSON parser/formatter packages around (e.g. Spray's) use type classes. I haven't used XML much in Scala, but I would guess there are type class implementations for XML as well. A quick search turned up this.