Limiting classes that can extend a Scala trait

It appears there are three (or more) ways to limit which classes can mix in a given Scala trait:
1. Using a common ancestor trait
2. Using an abstract declaration
3. Using a self-type in the trait
The common-ancestor method requires additional restrictions and seems suboptimal. Meanwhile, self-typing and abstract declarations seem to be identical. Would someone care to explain the difference and the use cases (especially between 2 and 3)?
My example is:
val exampleMap = Map("one" -> 1, "two" -> 2)
class PropsBox(val properties: Map[String, Any])

// Using a common ancestor
trait HasProperties {
  val properties: Map[String, Any]
}
trait KeysAsSupertype extends HasProperties {
  def keys: Iterable[String] = properties.keys
}
class SubProp(val properties: Map[String, Any]) extends HasProperties
val inCommonAncestor = new SubProp(exampleMap) with KeysAsSupertype
println(inCommonAncestor.keys)
// prints: Set(one, two)

// Using an abstract declaration
trait KeysAsAbstract {
  def properties: Map[String, Any]
  def keys: Iterable[String] = properties.keys
}
val inAbstract = new PropsBox(exampleMap) with KeysAsAbstract
println(inAbstract.keys)
// prints: Set(one, two)

// Using a self-type
trait KeysAsSelfType {
  this: PropsBox =>
  def keys: Iterable[String] = properties.keys
}
val inSelfType = new PropsBox(exampleMap) with KeysAsSelfType
println(inSelfType.keys)
// prints: Set(one, two)

In your example, PropsBox does not impose any interesting constraints on properties - it simply has a member properties: Map[String, Any]. Therefore, there is no way to detect the difference between inheriting from PropsBox and simply requiring a def properties: Map[String, Any].
Consider the following example, where the difference actually shows up. Suppose we have two classes, GoodBox and BadBox:
GoodBox has properties, and all keys are short strings that contain only digits
BadBox just has properties, and guarantees nothing about the structure of the keys
In code:
/** Has `properties: Map[String, Any]`,
 *  and also guarantees that all the strings are
 *  actually decimal representations of numbers
 *  between 0 and 99.
 */
class GoodBox(val properties: Map[String, Any]) {
  require(properties.keys.forall {
    s => s.forall(_.isDigit) && s.size < 3
  })
}

/** Has `properties: Map[String, Any]`, but
 *  guarantees nothing about the keys.
 */
class BadBox(val properties: Map[String, Any])
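As a quick illustration (a tiny check added here, not from the original post), the require in GoodBox rejects malformed keys at construction time:
new GoodBox(Map("7" -> "seven"))  // fine
new GoodBox(Map("foo" -> "bar"))  // throws IllegalArgumentException: requirement failed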
Now suppose that we for some reason want to transform the Map[String, Any] into a sparsely populated Array[Any], and use keys as array indices. Here, again, are two ways to do this: one with self-type declaration, and one with the abstract def properties member declaration:
trait AsArrayMapSelfType {
  self: GoodBox =>
  def asArrayMap: Array[Any] = {
    val n = 100
    val a = Array.ofDim[Any](n)
    for ((k, v) <- properties) {
      a(k.toInt) = v
    }
    a
  }
}
trait AsArrayMapAbstract {
  def properties: Map[String, Any]
  def asArrayMap: Array[Any] = {
    val n = 100
    val a = Array.ofDim[Any](n)
    for ((k, v) <- properties) {
      a(k.toInt) = v
    }
    a
  }
}
Now try it out:
val goodBox_1 =
  new GoodBox(Map("1" -> "one", "42" -> "fourtyTwo")) with AsArrayMapSelfType
val goodBox_2 =
  new GoodBox(Map("1" -> "one", "42" -> "fourtyTwo")) with AsArrayMapAbstract

/* error: illegal inheritance
val badBox_1 =
  new BadBox(Map("Not a number" -> "mbxkxb")) with AsArrayMapSelfType
*/

val badBox_2 =
  new BadBox(Map("Not a number" -> "mbxkxb")) with AsArrayMapAbstract

goodBox_1.asArrayMap
goodBox_2.asArrayMap
// badBox_1.asArrayMap - not allowed, good!
badBox_2.asArrayMap // crashes with NumberFormatException, bad
With a goodBox, both methods work and produce the same results. However, with a badBox, the self-type and abstract-def versions behave differently:
the self-type version does not compile (the error is caught at compile time)
the abstract-def version crashes at runtime with a NumberFormatException
That's the difference.

Related

Scala ClassCastException on Option.orNull

When I try to run the following code:
def config[T](key: String): Option[T] = {
  // in reality this is a map of various instance types as values
  Some("string".asInstanceOf[T])
}
config("path").orNull
I get this error:
java.lang.ClassCastException: java.lang.String cannot be cast to scala.runtime.Null$
The following attempts work fine:
config[String]("path").orNull
config("path").getOrElse("")
Since getOrElse works, it's confusing why null is so special and throws an error. Is there a way for orNull to work without specifying the generic type?
scalaVersion := "2.12.8"
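For context (my reading of the failure, not from the original answer): with no explicit type argument, nothing constrains T except orNull's implicit evidence ev: Null <:< A1, so the compiler infers T = Null, and the returned "string" is checkcast to scala.runtime.Null$ at the call site, which is exactly the exception above. Spelled out explicitly:
config[Null]("path").orNull   // what gets inferred; same ClassCastException
config[String]("path").orNull // an explicit type argument pins T, so it works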
Just to show how you can avoid asInstanceOf when getting values from a typed config:
sealed trait Value extends Product with Serializable
final case class IntValue(value: Int) extends Value
final case class StringValue(value: String) extends Value
final case class BooleanValue(value: Boolean) extends Value

type Config = Map[String, Value]

sealed trait ValueExtractor[T] {
  def extract(config: Config)(fieldName: String): Option[T]
}

object ValueExtractor {
  implicit final val IntExtractor: ValueExtractor[Int] =
    new ValueExtractor[Int] {
      override def extract(config: Config)(fieldName: String): Option[Int] =
        config.get(fieldName).collect {
          case IntValue(value) => value
        }
    }

  implicit final val StringExtractor: ValueExtractor[String] =
    new ValueExtractor[String] {
      override def extract(config: Config)(fieldName: String): Option[String] =
        config.get(fieldName).collect {
          case StringValue(value) => value
        }
    }

  implicit final val BooleanExtractor: ValueExtractor[Boolean] =
    new ValueExtractor[Boolean] {
      override def extract(config: Config)(fieldName: String): Option[Boolean] =
        config.get(fieldName).collect {
          case BooleanValue(value) => value
        }
    }
}

implicit class ConfigOps(val config: Config) extends AnyVal {
  def getAs[T](fieldName: String)(default: => T)
              (implicit extractor: ValueExtractor[T]): T =
    extractor.extract(config)(fieldName).getOrElse(default)
}
Then, you can use it like this.
val config = Map("a" -> IntValue(10), "b" -> StringValue("Hey"), "d" -> BooleanValue(true))
config.getAs[Int](fieldName = "a")(default = 0) // res: Int = 10
config.getAs[Int](fieldName = "b")(default = 0) // res: Int = 0
config.getAs[Boolean](fieldName = "c")(default = false) // res: Boolean = false
Now, the problem becomes how to create the typed config from a raw source, and even better, how to map the config directly to a case class. But those are more complex problems, and it is probably better to use something already built, like pureconfig.
Just as an academic exercise, let's see if we can support Lists and Maps.
Let's start with lists. A naive approach would be to have another case class for values which are lists, and to create a factory of extractors for every kind of list (this process is formally known as implicit derivation).
import scala.reflect.ClassTag

final case class ListValue[T](value: List[T]) extends Value
...
// Note that it has to be a def, since it is not only one implicit,
// but rather a factory of implicits.
// Also note that it needs another implicit parameter to construct the specific implicit.
// In this case, it needs a ClassTag for the inner type of the list to extract.
implicit final def listExtractor[T: ClassTag]: ValueExtractor[List[T]] =
  new ValueExtractor[List[T]] {
    override def extract(config: Config)(fieldName: String): Option[List[T]] =
      config.get(fieldName).collect {
        case ListValue(value) => value.collect {
          // This works as a safe cast, dropping every value that couldn't be cast to T.
          case t: T => t
        }
      }
  }
Now, you can use it like this.
val config = Map("l" ->ListValue(List(1, 2, 3)))
config.getAs[List[Int]](fieldName = "l")(default = List.empty)
// res: List[Int] = List(1, 2, 3)
config.getAs[List[String]](fieldName = "l")(default = List("Hey"))
// res: String = List() - The default is not used, since the field is a List...
// whose no element could be casted to String.
However, this approach is limited to plain types; if you need a List of another generic type, like a List of Lists, then this won't work.
val config = Map("l" -> ListValue(List(List(1, 2), List(3))))
val l = config.getAs[List[List[String]]](fieldName = "l")(default = List.empty)
// l: List[List[String]] = List(List(1, 2), List(3)) ???!!!
l.head
// res: List[String] = List(1, 2)
l.head.head
// java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
The problem here is type erasure, which ClassTags cannot solve. You may try TypeTags, which preserve the complete type, but the solution becomes much more cumbersome.
For Maps the solution is quite similar, especially if you fix the key type to String (assuming what you really want is a nested config). But this post is already long, so I'll leave the details as an exercise for the reader.
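For reference, a minimal sketch of what that exercise might look like (a hypothetical MapValue case with keys fixed to String; it carries the same erasure caveat as the list version):
final case class MapValue[T](value: Map[String, T]) extends Value

implicit final def mapExtractor[T: ClassTag]: ValueExtractor[Map[String, T]] =
  new ValueExtractor[Map[String, T]] {
    override def extract(config: Config)(fieldName: String): Option[Map[String, T]] =
      config.get(fieldName).collect {
        case MapValue(value) => value.collect {
          // Same unchecked-cast trick as with lists:
          // entries whose value can't be cast to T are dropped.
          case (k, v: T) => (k, v)
        }
      }
  }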
Nevertheless, as already said, this can be broken easily and is not completely robust.
There are better approaches, but I am not very skilled in those (yet) myself, and even if I were, the answer would become far too long; it really isn't necessary here.
Luckily for you, even though pureconfig does not support YAML directly, there is a module which does: pureconfig-yaml.
I suggest you take a look at the module, and if you have further problems, ask a new question tagging pureconfig and yaml directly. Also, if it is just a small doubt, you may try asking in the gitter channel.

Pass a function with any case class return type as parameter

This might be a silly question, but I've been struggling for quite some time. It is indeed similar to this question, but I wasn't able to apply it to my code (due to the patterns, or it being a function).
I want to pass a flatMap (or map) transform function as a function argument and then proxy it to a strategy function that actually calls the df.rdd.flatMap method. I'll try to explain!
case class Order(id: String, totalValue: Double, freight: Double)
case class Product(id: String, price: Double)
... or any other case class, whatever one needs to transform a row into ...
The Entity class:
class Entity(path: String) {
  ...
  def flatMap[T](mapFunction: (Row) => ArrayBuffer[T]): Entity = {
    this.getStrategy.flatMap[T](mapFunction)
    return this
  }
  def save(path: String): Unit = {
    ... write logic ...
  }
}
An Entity might have different strategies for its methods. EntityStrategy is as follows:
abstract class EntityStrategy(private val entity: Entity,
                              private val spark: SparkSession) {
  ...
  def flatMap[T](mapFunction: (Row) => ArrayBuffer[T])
  def map[T](mapFunction: (Row) => T)
}
And one sample EntityStrategy implementation:
class SparkEntityStrategy(private val entity: Entity, private val spark: SparkSession)
  extends EntityStrategy(entity, spark) {
  ...
  override def map[T](mapFunction: Row => T): Unit = {
    val rdd = this.getData.rdd.map(f = mapFunction)
    this.dataFrame = this.spark.createDataFrame(rdd)
  }
  override def flatMap[T](mapFunction: (Row) => ArrayBuffer[T]): Unit = {
    var rdd = this.getData.rdd.flatMap(f = mapFunction)
    this.dataFrame = this.spark.createDataFrame(rdd)
  }
}
Finally, I would like to create a flatMap/map function and call it like this:
def transformFlatMap(row: Row): ArrayBuffer[Order] = {
  var orders = new ArrayBuffer[Order]
  var _deliveries = row.getAs[Seq[Row]]("deliveries")
  _deliveries.foreach(_delivery => {
    var order = Order(
      id = row.getAs[String]("id"),
      totalValue = _delivery.getAs("totalAmount").asInstanceOf[Double])
    orders += order
  })
  return orders
}
val entity = new Entity("path")
entity.flatMap[Order](transformFlatMap).save("path")
Of course, this doesn't work. I get an error on SparkEntityStrategy:
Error:(95, 35) No ClassTag available for T
val rdd = this.getData.rdd.map(f = mapFunction)
I have tried adding an (implicit encoder: Encoder[T]) to both the entity and strategy methods, but it was a no-go. I probably did something wrong, as I'm new to Scala.
If I remove the "T"s and pass an actual case class, everything works fine.
It turns out that, in order to satisfy both the compiler and Spark's methods, I needed to add the following bounds:
[T <: scala.Product : ClassTag : TypeTag]
So both methods became:
def map[T <: Product : ClassTag : TypeTag](mapFunction: (Row) => T): Entity
def flatMap[T <: scala.Product : ClassTag : TypeTag](mapFunction: (Row) => TraversableOnce[T]): Entity
About scala.Product:
Base trait for all products, which in the standard library include at
least scala.Product1 through scala.Product22 and therefore also their
subclasses scala.Tuple1 through scala.Tuple22. In addition, all case
classes implement Product with synthetically generated methods.
Since I am using a case class object as my function's return type, I needed the scala.Product so that Spark's createDataFrame could match the correct overload.
Why both ClassTag and TypeTag?
By removing the TypeTag, the compiler throws the following error:
Error:(96, 48) No TypeTag available for T
this.dataFrame = this.spark.createDataFrame(rdd)
And removing the ClassTag:
Error:(95, 35) No ClassTag available for T
val rdd = this.getData.rdd.map(f = mapFunction)
Adding them made both methods satisfied and everything worked as expected.
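To see the two requirements in isolation, here is a minimal standalone sketch (illustrative names, not the Entity/strategy classes above; assumes an existing SparkSession): the ClassTag feeds RDD.map, and the TypeTag together with T <: Product feeds createDataFrame.
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import scala.reflect.ClassTag
import scala.reflect.runtime.universe.TypeTag

object TagDemo {
  // RDD.map requires a ClassTag[T] to build the resulting RDD[T];
  // createDataFrame requires T <: Product with a TypeTag to derive the schema.
  def rowsToDataFrame[T <: Product : ClassTag : TypeTag](spark: SparkSession,
                                                         rows: RDD[Row],
                                                         mapFunction: Row => T): DataFrame = {
    val rdd: RDD[T] = rows.map(mapFunction)
    spark.createDataFrame(rdd)
  }
}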
Found a good article explaining type erasure in Scala.

Scala Design using reflection, implicit and generics

I have different sources and corresponding parameters:
Source1, Source2, Source3
Parameter1, Parameter2, Parameter3
Source is a trait (can be changed):
trait Source[T] {
  def get(p: Parameter)(implicit c: Context): MyData[T]
}
Parameter is also a trait
trait Parameter
I have different output type classes: T1, T2, T3
I need the output as: MyData[OutputType]
Fixed API signature (changes to the signature are not preferred):
val data1: MyData[T1] = MyAPI.get[T1](Parameter1("a", "b")) // this should give MyData from Source1 of type T1
val data2: MyData[T2] = MyAPI.get[T2](Parameter3(123)) // this should give MyData from Source3 of type T2
Some sources support some output types (say T1, T2), but others may not.
What I did:
I tried using Scala reflection's TypeTag to determine the type at runtime, but since the return type will be MyData[T] and is in contravariant position, it won't know the actual return type. (Why does TypeTag not work for return types?)
e.g.
object MyAPI {
  def get[T: TypeTag](p: Parameter)(implicit c: Context): MyData[T] = ???
}
I also tried using the type class pattern (Scala TypeTag Reflection returning type T).
I can make it work with different output types by creating an implicit val for each, but that only works for a single Source1; I can't manage to make it work for all sources.
I was trying to do:
object MyAPI {
  def get[T: SomeConverter](p: Parameter)(implicit c: Context): MyData[T] = {
    p match {
      case _: Parameter1 => Source1[T].read(p.asInstanceOf[Parameter1])
      case _: Parameter2 => Source2[T].read(p.asInstanceOf[Parameter2])
    }
  }
}
Disclaimer: I think I figured out what you want. I'm also learning to design type-safe APIs, so here's one.
The provided variant uses implicits. You have to manually establish the mapping between parameter types and the results they yield, which may or may not involve the sources. The mapping is resolved at compile time rather than runtime, which is also why I removed the common trait Parameter. It does not impose any restrictions on the Sources at all.
It also "looks" the way you wanted it to look, but it's not exactly that.
case class User(id: Int) // Example result type

// Notice I totally removed any and all relation between different parameter types and sources.
// We will rebuild those relations later using implicits.
object Param1
case class Param2(id: Int)
case class Param3(key: String, filter: Option[String])

// These objects have kinda different APIs. We will unify them.
// I'm not using MyData[T] because it's completely irrelevant. Types here are Int, User and String.
object Source1 {
  def getInt = 42
}
object Source2 {
  def addFoo(id: Int): Int = id + 0xF00
  def getUser(id: Int) = User(id)
}
object Source3 {
  def getGoodInt = 0xC0FFEE
}
// Finally, our dark implicit magic starts.
// This type will provide a way to give the requested result for a provided parameter,
// and sealedness will prevent the user from adding more sources - remove if not needed.
sealed trait CanGive[Param, Result] {
  def apply(p: Param): Result
}

// Scala will look for implicit CanGive-s in the companion object.
object CanGive {
  private def wrap[P, R](fn: P => R): P CanGive R =
    new (P CanGive R) {
      override def apply(p: P): R = fn(p)
    }

  // These three show how you can pass your Context here. I'm using DummyImplicits as placeholders.
  implicit def param1ToInt(implicit source: DummyImplicit): CanGive[Param1.type, Int] =
    wrap((p: Param1.type) => Source1.getInt)
  implicit def param2ToInt(implicit source: DummyImplicit): CanGive[Param2, Int] =
    wrap((p: Param2) => Source2.addFoo(p.id))
  implicit def param2ToUser(implicit source: DummyImplicit): CanGive[Param2, User] =
    wrap((p: Param2) => Source2.getUser(p.id))

  implicit val param3ToInt: CanGive[Param3, Int] =
    wrap((p: Param3) => Source3.getGoodInt)
  // This one is completely ad-hoc and doesn't even use Source3, only the parameter.
  implicit val param3ToString: CanGive[Param3, String] =
    wrap((p: Param3) => p.filter.map(p.key + ":" + _).getOrElse(p.key))
}
object MyApi {
  // We need a get method with two generic parameters: the result type and the parameter type.
  // We can "curry" type parameters using an intermediate class and give it the syntax of a function
  // by implementing an apply method.
  def get[T] = new _GetImpl[T]

  class _GetImpl[Result] {
    def apply[Param](p: Param)(implicit ev: Param CanGive Result): Result = ev(p)
  }
}

MyApi.get[Int](Param1) // 42: Int
MyApi.get[Int](Param2(5)) // 3845: Int
MyApi.get[User](Param2(1)) // User(1): User
MyApi.get[Int](Param3("Foo", None)) // 12648430: Int
MyApi.get[String](Param3("Text", Some(" read it"))) // Text: read it: String

// The following block doesn't compile:
// MyApi.get[User](Param1) // Implicit not found
// MyApi.get[String](Param1) // Implicit not found
// MyApi.get[User](Param3("Slevin", None)) // Implicit not found
// MyApi.get[System](Param2(1)) // Same. Unrelated requested types won't work either.
object Main extends App {
  sealed trait Parameter
  case class Parameter1(n: Int) extends Parameter with Source[Int] {
    override def get(p: Parameter): MyData[Int] = MyData(n)
  }
  case class Parameter2(s: String) extends Parameter with Source[String] {
    override def get(p: Parameter): MyData[String] = MyData(s)
  }

  case class MyData[T](t: T)

  trait Source[T] {
    def get(p: Parameter): MyData[T]
  }

  object MyAPI {
    def get[T](p: Parameter with Source[T]): MyData[T] = p match {
      case p1: Parameter1 => p1.get(p)
      case p2: Parameter2 => p2.get(p)
    }
  }

  val data1: MyData[Int] = MyAPI.get(Parameter1(15)) // this should give MyData from Source1 of type T1
  val data2: MyData[String] = MyAPI.get(Parameter2("Hello World")) // this should give MyData from Source3 of type T2

  println(data1)
  println(data2)
}
Does this do what you want?
ScalaFiddle: https://scalafiddle.io/sf/FrjJz75/0

In Scala Reflection, How to get generic type parameter of a concrete subclass?

Assuming that I have a Generic superclass:
class GenericExample[T](
  a: String,
  b: T
) {
  def fn(i: T): T = b
}
and a concrete subclass:
case class Example(
  a: String,
  b: Int
) extends GenericExample[Int](a, b)
I want to get the type parameter of the function "fn" by Scala reflection, so I select and filter through its members:
import ScalaReflection.universe._

val baseType = typeTag[Example]
val methodName = TermName("fn") // the method in question
val member = baseType
  .tpe
  .member(methodName)
  .asTerm
  .alternatives
  .map(_.asMethod)
  .head

val paramss = member.paramss
val actualTypess: List[List[Type]] = paramss.map {
  params =>
    params.map {
      param =>
        param.typeSignature
    }
}
I was expecting Scala to give me the correct result, which is List(List(Int)); instead I only got the generic List(List(T)).
Crunching through the documentation I found that typeSignature is the culprit:
* This method always returns signatures in the most generic way possible, even if the underlying symbol is obtained from an
* instantiation of a generic type.
And it suggests using the alternative:
def typeSignatureIn(site: Type): Type
However, since class Example is no longer generic, there is no way I can get site from typeTag[Example]. Can anyone suggest how to get typeOf[Int] given only typeTag[Example]? Or is there no way to do it, and I have to revert to Java reflection?
Thanks a lot for your help.
UPDATE: After some quick tests I found that even MethodSymbol.returnType doesn't work as intended; the following code:
member.returnType
also yields T, and it can't be corrected by asSeenFrom, as the following doesn't change the result:
member.returnType.asSeenFrom(baseType.tpe, baseType.tpe.typeSymbol.asClass)
There are two approaches I can suggest:
1) Reveal the generic type from the base class:
import scala.reflect.runtime.universe._

class GenericExample[T: TypeTag](a: String, b: T) {
  def fn(i: T) = "" + b + i
}
case class Example(a: String, b: Int) extends GenericExample[Int](a, b) {}

val classType = typeOf[Example].typeSymbol.asClass
val baseClassType = typeOf[GenericExample[_]].typeSymbol.asClass
val baseType = internal.thisType(classType).baseType(baseClassType)

baseType.typeArgs.head // returns reflect.runtime.universe.Type = scala.Int
2) Add an implicit method which returns the type:
import scala.reflect.runtime.universe._

class GenericExample[T](a: String, b: T) {
  def fn(i: T) = "" + b + i
}
case class Example(a: String, b: Int) extends GenericExample[Int](a, b)

implicit class TypeDetector[T: TypeTag](related: GenericExample[T]) {
  def getType(): Type = {
    typeOf[T]
  }
}

new Example("", 1).getType() // returns reflect.runtime.universe.Type = Int
I'm posting my solution. I think there is no alternative, due to Scala's design:
The core difference between methods in Scala reflection and Java reflection is currying: a Scala method comprises many pairs of brackets, and calling a method with one argument list merely constructs an anonymous class that can take more pairs of brackets, or, if there are no brackets left, constructs a NullaryMethod type (a.k.a. call-by-name) that can be resolved to yield the result of the method. So the types of a Scala method are only resolved at this level, once the method signature is already broken into MethodType and NullaryMethodType signatures.
As a result, it becomes clear that the result type can only be obtained using recursion:
private def methodSignatureToParameter_ReturnTypes(tpe: Type): (List[List[Type]], Type) = {
  tpe match {
    case n: NullaryMethodType =>
      Nil -> n.resultType
    case m: MethodType =>
      val paramTypes: List[Type] = m.params.map(_.typeSignatureIn(tpe))
      val downstream = methodSignatureToParameter_ReturnTypes(m.resultType)
      downstream.copy(_1 = paramTypes :: downstream._1)
    case _ =>
      Nil -> tpe
  }
}

def getParameter_ReturnTypes(symbol: MethodSymbol, impl: Type) = {
  val signature = symbol.typeSignatureIn(impl)
  val result = methodSignatureToParameter_ReturnTypes(signature)
  result
}
Here impl is the type that owns the method, and symbol is what you obtained from Type.member(s) via Scala reflection.
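For example, applied to the Example class from the question (an illustrative usage, assuming the runtime universe import from above):
val exampleType = typeOf[Example]
val fnSymbol = exampleType.member(TermName("fn")).asMethod
val (paramTypes, returnType) = getParameter_ReturnTypes(fnSymbol, exampleType)
// paramTypes == List(List(Int)), returnType == Int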

Retaining trait individualities while mixing them in

I want to create an entity system with some special properties, based on Scala traits.
The main idea is this: all components are traits that inherit from the common trait:
trait Component
trait ComponentA extends Component
sometimes, in the case of a more complex hierarchy and inter-dependent components, it can get like this:
trait ComponentN extends ComponentM {
  self: ComponentX with ComponentY =>
  var a = 1
  var b = "hello"
}
and so on. I have come to the conclusion that the data relevant to each component should be contained in the component itself, and not in some storage inside an Entity or elsewhere, because of the speed of access. As a side note, that is also why everything is mutable, so there is no need to think about immutability.
Then Entities are created, mixing in the traits:
class Entity
class EntityANXY extends ComponentA
  with ComponentN
  with ComponentX
  with ComponentY
Here all is fine; however, I have a special requirement that I do not know how to fulfill in code. The requirement is this:
Each trait must provide an encoding method(?) that facilitates collecting the trait-related data in a universal form, for example a JSON or a Map like Map("a" -> "1", "b" -> "hello"), and a decoding method to translate such a map, if received, back into the trait-related variables. Also: 1) all the encoding and decoding methods of all the mixed-in traits are called in a bunch, in an arbitrary order, by the Entity's encode and decode(Map) methods, and 2) they should be callable separately by specifying a trait type, or better, by a string parameter like decode("component-n", Map).
It is not possible to use methods with the same name, as they would be lost to shadowing or overriding. I can think of a solution where all the methods are stored in a Map[String, Map[String, String] => Unit] for decode and a Map[String, () => Map[String, String]] for encode in every entity. This would work: both the by-name call and the bunched call would be available. However, it would mean storing the same information in every entity, which is unacceptable.
It is also possible to store these maps in a companion object, so that nothing is duplicated, and call the object's encode and decode methods with an extra parameter denoting a particular instance of the entity.
The requirement may seem strange, but it is necessary because of the required speed and modularity. All of these solutions are clumsy, and I think there is a better, more idiomatic solution in Scala, or maybe I am missing some important architectural pattern here. So is there any simpler and more idiomatic approach than the one with the companion object?
EDIT: I think that aggregation instead of inheritance could probably resolve these problems, but at the cost of not being able to call methods directly on an entity.
UPDATE: Exploring the pretty promising way proposed by Rex Kerr, I have stumbled upon something that hinders it. Here is the test case:
trait Component {
  def encode: Map[String, String]
  def decode(m: Map[String, String])
}
abstract class Entity extends Component // so as to enforce the two methods

trait ComponentA extends Component {
  var a = 10
  def encode: Map[String, String] = Map("a" -> a.toString)
  def decode(m: Map[String, String]) {
    println("ComponentA: decode " + m)
    m.get("a").collect { case aa => a = aa.toInt }
  }
}
trait ComponentB extends ComponentA {
  var b = 100
  override def encode: Map[String, String] = super.encode + ("b" -> b.toString)
  override def decode(m: Map[String, String]) {
    println("ComponentB: decoding " + m)
    super.decode(m)
    m.get("b").foreach { bb => b = bb.toInt }
  }
}
trait ComponentC extends Component {
  var c = "hey!"
  def encode: Map[String, String] = Map("c" -> c)
  def decode(m: Map[String, String]) {
    println("ComponentC: decode " + m)
    m.get("c").collect { case cc => c = cc }
  }
}
trait ComponentD extends ComponentB with ComponentC {
  var d = 11.6f
  override def encode: Map[String, String] = super.encode + ("d" -> d.toString)
  override def decode(m: Map[String, String]) {
    println("ComponentD: decode " + m)
    super.decode(m)
    m.get("d").collect { case dd => d = dd.toFloat }
  }
}
and finally
class EntityA extends ComponentA with ComponentB with ComponentC with ComponentD
so that
object Main {
  def main(args: Array[String]) {
    val ea = new EntityA
    val map = Map("a" -> "1", "b" -> "3", "c" -> "what?", "d" -> "11.24")
    println("BEFORE: " + ea.encode)
    ea.decode(map)
    println("AFTER: " + ea.encode)
  }
}
which gives:
BEFORE: Map(c -> hey!, d -> 11.6)
ComponentD: decode Map(a -> 1, b -> 3, c -> what?, d -> 11.24)
ComponentC: decode Map(a -> 1, b -> 3, c -> what?, d -> 11.24)
AFTER: Map(c -> what?, d -> 11.24)
The A and B components are not affected, being cut off by the inheritance resolution. So this approach is only applicable in certain hierarchy cases. In this case we see that ComponentD has shadowed everything else. Any comments are welcome.
UPDATE 2: I place the comment that answers this problem here, for better reference: "Scala linearizes all the traits. There should be a supertrait of everything which will terminate the chain. In your case, that means that C and A should still call super, and Component should be the one to terminate the chain with a no-op." – Rex Kerr
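A minimal sketch of that fix, applied to the test case above (my illustration of the comment, not code from the thread): Component terminates the chain with no-ops, and every component, including A and C, calls super:
trait Component {
  def encode: Map[String, String] = Map.empty
  def decode(m: Map[String, String]) {}
}
trait ComponentA extends Component {
  var a = 10
  override def encode: Map[String, String] = super.encode + ("a" -> a.toString)
  override def decode(m: Map[String, String]) {
    super.decode(m)
    m.get("a").foreach { aa => a = aa.toInt }
  }
}
// ComponentB, ComponentC and ComponentD follow the same pattern, each calling
// super.encode/super.decode, so the full linearization A -> B -> C -> D is
// traversed and no component is cut off.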
Travis had an essentially correct answer; not sure why he deleted it. But, anyway, you can do this without too much grief as long as you're willing to make your encoding method take an extra parameter, and that when you decode you're happy to just set mutable variables, not create a new object. (Complex trait-stacking effectively-at-runtime ranges from difficult to impossible.)
The basic observation is that when you chain traits together, it defines a hierarchy of superclass calls. If each of these calls takes care of the data in that trait, you'd be set, as long as you can find a way to get all that data back. So
trait T {
  def encodeMe(s: Seq[String]): Seq[String] = Seq()
  def encode = encodeMe(Seq())
}
trait A extends T {
  override def encodeMe(s: Seq[String]) = super.encodeMe(s) :+ "A"
}
trait B extends T {
  override def encodeMe(s: Seq[String]) = super.encodeMe(s) :+ "B"
}
Does it work?
scala> val a = new A with B
a: java.lang.Object with A with B = $anon$1@41a92be6
scala> a.encode
res8: Seq[String] = List(A, B)
scala> val b = new B with A
b: java.lang.Object with B with A = $anon$1@3774acff
scala> b.encode
res9: Seq[String] = List(B, A)
Indeed! Not only does it work, but you get the order for free.
Now we need a way to set variables based on this encoding. Here, we follow the same pattern--we take some input and just go up the super chain with it. If you have very many traits stacked on, you may want to pre-parse text into a map or filter out those parts applicable to the current trait. If not, just pass on everything to super, and then set yourself after it.
trait T {
  var t = 0
  def decode(m: Map[String,Int]) { m.get("t").foreach{ ti => t = ti } }
}
trait C extends T {
  var c = 1
  override def decode(m: Map[String,Int]) {
    super.decode(m); m.get("c").foreach{ ci => c = ci }
  }
}
trait D extends T {
  var d = 1
  override def decode(m: Map[String,Int]) {
    super.decode(m); m.get("d").foreach{ di => d = di }
  }
}
And this too works just like one would hope:
scala> val c = new C with D
c: java.lang.Object with C with D = $anon$1@549f9afb
scala> val d = new D with C
d: java.lang.Object with D with C = $anon$1@548ea21d
scala> c.decode(Map("c"->4,"d"->2,"t"->5))
scala> "%d %d %d".format(c.t,c.c,c.d)
res1: String = 5 4 2