I'm trying to use Scala Pickling to program some generic unpickling logic.
Say you have two types, A and B, and you pickle them into a byte array.
You send this byte array to another machine, where it is received as a byte array.
Now you need to unpickle it, but you don't know whether the byte array is for type A or type B.
How would you program the unpickling part? Do you make A and B extend a common type, say T, call unpickle[T], and then pattern match on the result for A or B?
Or do you add an instance variable to T, say a Byte, which uses a different number for instances of type A and B, and based on that call unpickle[A] or unpickle[B]?
UPDATE: Looking into the Scala Pickling test suite, the closest thing I found is base.scala, which roughly follows the first option.
The following code works, printing:
It's A
It's B
Code:
import scala.pickling._
import binary._
object MultiTypePickling extends App {
  sealed abstract class Base
  final class A extends Base { override def toString = "A" }
  final class B extends Base { override def toString = "B" }

  val a = new A
  val pa = a.pickle
  val b = new B
  val pb = b.pickle

  // ----
  pa.value.unpickle[Base] match {
    case aa: A =>
      println("It's " + aa)
    case _ =>
      assert(assertion = false)
  }
  pb.value.unpickle[Base] match {
    case bb: B =>
      println("It's " + bb)
    case _ =>
      assert(assertion = false)
  }
}
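For completeness, a rough sketch of the second option from the question (prepending a tag byte and dispatching on it) could look like the following, placed inside the same object; the tag constants and helper names are made up for illustration, not taken from the pickling library:

val TagA: Byte = 0
val TagB: Byte = 1

def toTaggedBytes(x: Base): Array[Byte] = x match {
  case a: A => TagA +: a.pickle.value // prepend the tag to the pickled payload
  case b: B => TagB +: b.pickle.value
}

def fromTaggedBytes(bytes: Array[Byte]): Base = bytes.head match {
  case TagA  => bytes.tail.unpickle[A] // dispatch on the tag, then unpickle the concrete type
  case TagB  => bytes.tail.unpickle[B]
  case other => sys.error("unknown tag: " + other)
}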
Related
case class Address(address: String, pinCode: Int)
case class Person(a: String, b: Int, c: Address)
def getClassDefinition[T: TypeTag] =
  typeOf[T].members.filter(!_.isMethod).map(r => r.name -> r.typeSignature)
val m = getClassDefinition[Address]
(pinCode ,scala.Int)
(address ,String)
val p = getClassDefinition[Person]
(c ,A$A332.this.Address)
(b ,scala.Int)
(a ,String)
I am looking for a nested result, instead of stopping at the Address case class.
If I understand the problem, it is to get a class definition that includes a name -> type signature pair for each of the non-method members of the type. If the type signature is a standard type, that is all that is required; but for some types a nested definition is required. I've modelled this either/or case with an Either[Definition, NestedDefinition].
In this example I've chosen an isNested(...) rule that matches on the package name, but it could be any rule that identifies a class for deeper inspection.
I'm not sure if the output is in the format that you expected, but you can either modify the example code, or map the result into your own data structure.
import scala.reflect.runtime.universe._

object TypeTags {
  // Define your domain name here for the isNested(...) test!
  val myDomainName: String = ???

  // Define some type aliases
  type Definition = ((AnyRef with SymbolApi)#NameType, Type)
  type EitherDefinitionOrNested = Either[Definition, NestedDefinition]

  // A nested definition contains the original definition and an iterable collection
  // of its member definitions (recursively).
  case class NestedDefinition(t: Definition, m: Iterable[EitherDefinitionOrNested])

  // The test to determine if a nested definition is needed.
  def isNested(r: Symbol): Boolean = {
    r.info.typeSymbol.fullName.startsWith(myDomainName)
  }

  // Obtain a class definition from a Symbol.
  def classDefinition(symbol: Symbol): Iterable[EitherDefinitionOrNested] = {
    symbol.typeSignature.members.filter(!_.isMethod).map {
      case r @ nested if isNested(r) =>
        Right(NestedDefinition(nested.name -> nested.typeSignature, classDefinition(nested)))
      case r =>
        Left(r.name -> r.typeSignature)
    }
  }

  // Obtain a class definition from a type.
  def getClassDefinition[T: TypeTag]: Iterable[EitherDefinitionOrNested] =
    classDefinition(typeOf[T].typeSymbol)

  // The test case
  case class Address(address: String, pinCode: Int)
  case class Person(a: String, b: Int, c: Address)

  def main(args: Array[String]): Unit = {
    val m = getClassDefinition[Address]
    val p = getClassDefinition[Person]
    println(s"address: $m")
    println(s"person: $p")
  }
}
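One possible way to consume the result is a small, illustrative pretty-printer that walks the Either structure recursively; it assumes myDomainName above has been given a real package prefix:

import TypeTags._

def render(defs: Iterable[EitherDefinitionOrNested], indent: String = ""): Unit =
  defs.foreach {
    case Left((name, tpe)) =>
      println(s"$indent$name: $tpe")
    case Right(NestedDefinition((name, tpe), members)) =>
      println(s"$indent$name: $tpe")
      render(members, indent + "  ")
  }

render(getClassDefinition[Person]) // walks Person's members, descending into Address if isNested matches it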
Assuming that I have a Generic superclass:
class GenericExample[T](
  a: String,
  b: T
) {
  def fn(i: T): T = b
}
and a concrete subclass:
case class Example(
  a: String,
  b: Int
) extends GenericExample[Int](a, b)
I want to get the type parameter of function "fn" by scala reflection, so I select and filter through its members:
import ScalaReflection.universe._
val baseType = typeTag[Example]
val member = baseType
  .tpe
  .member(methodName: TermName)
  .asTerm
  .alternatives
  .map(_.asMethod)
  .head

val paramss = member.paramss
val actualTypess: List[List[Type]] = paramss.map { params =>
  params.map { param =>
    param.typeSignature
  }
}
I was expecting Scala to give me the correct result, List(List(Int)); instead I only got the generic List(List(T)).
Digging through the documentation, I found that typeSignature is the culprit:
* This method always returns signatures in the most generic way possible, even if the underlying symbol is obtained from an
* instantiation of a generic type.
It suggests using the alternative:
def typeSignatureIn(site: Type): Type
However, since class Example is no longer generic, there is no way to get a site from typeTag[Example]. Can anyone suggest how to get typeOf[Int] given only typeTag[Example]? Or is there no way to do it, so that I have to fall back to Java reflection?
Thanks a lot for your help.
UPDATE: After some quick tests I found that even MethodSymbol.returnType doesn't work as intended; the following code:
member.returnType
also yields T, and it can't be corrected by asSeenFrom, as the following code doesn't change the result:
member.returnType.asSeenFrom(baseType.tpe, baseType.tpe.typeSymbol.asClass)
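(For what it's worth, applying the substitution at the method-symbol level rather than to the already-extracted return type does appear to work; this is essentially what the solution further down this thread builds on.)

member.typeSignatureIn(baseType.tpe) // should give (i: Int)Int, with T already substituted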
There are two approaches which I can suggest:
1) Recover the generic type from the base class:
import scala.reflect.runtime.universe._
class GenericExample[T: TypeTag](a: String, b: T) {
  def fn(i: T) = "" + b + i
}
case class Example(a: String, b: Int) extends GenericExample[Int](a, b) {}
val classType = typeOf[Example].typeSymbol.asClass
val baseClassType = typeOf[GenericExample[_]].typeSymbol.asClass
val baseType = internal.thisType(classType).baseType(baseClassType)
baseType.typeArgs.head // returns reflect.runtime.universe.Type = scala.Int
2) Add an implicit method which returns the type:
import scala.reflect.runtime.universe._
class GenericExample[T](a: String, b: T) {
  def fn(i: T) = "" + b + i
}
case class Example(a: String, b: Int) extends GenericExample[Int](a, b)
implicit class TypeDetector[T: TypeTag](related: GenericExample[T]) {
  def getType(): Type = {
    typeOf[T]
  }
}
new Example("", 1).getType() // returns reflect.runtime.universe.Type = Int
I'm posting my own solution: I think there is no alternative, due to Scala's design.
The core difference between methods in Scala reflection and Java reflection is currying: a Scala method can comprise many parameter lists. Calling the method with its first argument list merely constructs an anonymous class that can take more argument lists, or, if there are no more lists left, a NullaryMethod (a.k.a. call-by-name) that can be resolved to yield the result of the method. So the types of a Scala method are only resolved at this level, once the method has been broken down into MethodType and NullaryMethodType signatures.
As a result it becomes clear that the result type can only be obtained using recursion:
// Walks a (possibly curried) method signature, collecting each parameter list
// and finally the result type.
private def methodSignatureToParameter_ReturnTypes(tpe: Type): (List[List[Type]], Type) = {
  tpe match {
    case n: NullaryMethodType =>
      Nil -> n.resultType
    case m: MethodType =>
      val paramTypes: List[Type] = m.params.map(_.typeSignatureIn(tpe))
      val downstream = methodSignatureToParameter_ReturnTypes(m.resultType)
      downstream.copy(_1 = paramTypes :: downstream._1)
    case _ =>
      Nil -> tpe
  }
}

def getParameter_ReturnTypes(symbol: MethodSymbol, impl: Type) = {
  // typeSignatureIn substitutes the type parameters as seen from `impl`.
  val signature = symbol.typeSignatureIn(impl)
  methodSignatureToParameter_ReturnTypes(signature)
}
where impl is the type that owns the method, and symbol is the MethodSymbol you obtained from Type.member(s) via Scala reflection.
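For example, against the Example/GenericExample pair from the question (assuming the Scala 2.11+ reflection API for TermName), this should give back the resolved types:

val fnSymbol = typeOf[Example].member(TermName("fn")).asMethod
getParameter_ReturnTypes(fnSymbol, typeOf[Example]) // roughly (List(List(Int)), Int)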
I want to write a generic function functionChooser which will choose which function to use from a few options, based on a String argument.
This works:
def a (arg: String) = arg + " with a"
def b (arg: String) = arg + " with b"
def c (arg: String) = arg + " with c"
def functionChooser(func: String, additionalArg: String) = {
  val f = func match {
    case "a" => a _
    case "b" => b _
    case _ => c _
  }
  f(additionalArg)
}
scala> functionChooser("a", "foo")
res18: String = foo with a
I'm having trouble making functionChooser generic, e.g. when functions a, b, and c return different case classes:
case class A(s: String)
case class B(s: String)
case class C(s: String)
def a (arg: String) = A(arg)
def b (arg: String) = B(arg)
def c (arg: String) = C(arg)
//functionChooser def as before
scala> functionChooser("a", "foo")
res19: Product with Serializable = A(foo)
I don't quite understand what I got there; I do know that calling functionChooser("a", "foo").s gives an error ("error: value s is not a member of Product with Serializable").
Lastly, what I really want is for the functions to return Lists of these case classes, e.g.:
def a (arg: String) = List(A(arg))
def b (arg: String) = List(B(arg))
def c (arg: String) = List(C(arg))
So functionChooser should be generic over List[T], where T is one of these classes.
The function functionChooser returns the most specific common supertype of the case classes A, B and C. Since case classes extend Product and Serializable, that common supertype is Product with Serializable.
If you want to access the case class field s, you either have to narrow the result, e.g. via pattern matching, or you provide a common base trait for your classes A, B and C which exposes the field:
trait Base {
  def s: String
}
case class A(s: String) extends Base
case class B(s: String) extends Base
case class C(s: String) extends Base
With this type definition the return type of functionChooser would be Product with Serializable with Base and, thus, the result would allow you to access s.
If your functions a, b and c return a List of the respective case class, then the return type of functionChooser would be List[Product with Serializable with Base].
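For illustration, a minimal sketch of that List case, reusing the list-returning a, b and c from the question together with the Base trait above:

def functionChooser(func: String, additionalArg: String): List[Base] = {
  val f = func match {
    case "a" => a _
    case "b" => b _
    case _ => c _
  }
  f(additionalArg)
}

functionChooser("a", "foo").head.s // "foo", since s is accessible through Base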
Update
If you cannot change the class hierarchy, then you either have to cast the result, or you can extract the necessary information inside functionChooser and wrap it in another type which contains the superset of all the data you need. E.g.
def functionChooser(func: String, additionalArg: String): String = {
  val f = func match {
    case "a" => (a _).andThen(_.s)
    case "b" => (b _).andThen(_.s)
    case _ => (c _).andThen(_.s)
  }
  f(additionalArg)
}
Note: here I only extract the field s, which covers all the information required in this example.
You should return the closest common supertype of all three functions' results. Object (AnyRef) always fits:
def functionChooser(func: String, additionalArg: String) : AnyRef = {
In your case, where all the possible return values are Lists, you may use a more specific type:
def functionChooser(func: String, additionalArg: String) : List[_] = {
Of course, that erases the element type information. A method must declare a single return type; it cannot be polymorphic in it based on a runtime argument. So you need an .asInstanceOf[T] cast further on to get this information back.
That makes sense, because the actual type is unknown until runtime, e.g. the dispatcher string may be entered by a user. If it were known at compile time, you could just call the appropriate method directly without referring to a descriptive string.
If you would like some common behaviour for all possible return types, then you should define a common trait for them and put the common methods in it.
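A rough sketch of that List[_] variant, with the cast done at the call site (the cast target is whatever the caller knows the result to be):

def functionChooser(func: String, additionalArg: String): List[_] = {
  val f = func match {
    case "a" => a _
    case "b" => b _
    case _ => c _
  }
  f(additionalArg)
}

val as = functionChooser("a", "foo").asInstanceOf[List[A]]
as.head.s // "foo"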
I want to create an entity system with some special properties, based on Scala traits.
The main idea is this: all components are traits that inherit from the common trait:
trait Component
trait ComponentA extends Component
Sometimes, in the case of a more complex hierarchy with interdependent components, it can look like this:
trait ComponentN extends ComponentM {
  self: ComponentX with ComponentY =>
  var a = 1
  var b = "hello"
}
and so on. I have come to the conclusion that the data relevant to each component should be contained in the component itself, not in some storage inside an Entity or elsewhere, because of the speed of access. As a side note, that is also why everything is mutable, so there is no need to think about immutability.
Then Entities are created, mixing in the traits:
class Entity

class EntityANXY extends ComponentA
  with ComponentN
  with ComponentX
  with ComponentY
Here all is fine; however, I do have a special requirement that I do not know how to implement. The requirement is this:
Each trait must provide an encoding method that collects the trait-related data in a universal form, for example as JSON or a Map like Map("a" -> "1", "b" -> "hello"), and a decoding method that translates such a map, if received, back into the trait-related variables. Also: 1) the encoding and decoding methods of all the mixed-in traits should be callable in a bunch, in an arbitrary order, from the Entity's own encode and decode(Map) methods; and 2) they should also be callable separately, by specifying a trait type or, better, a string parameter like decode("component-n", Map).
It is not possible to simply use methods with the same name in every trait, as they would be lost due to shadowing or overriding. I can think of a solution where all the methods are stored, in every entity, in a Map[String, Map[String, String] => Unit] for decode and a Map[String, () => Map[String, String]] for encode. This would work: both calling them by name and calling them all in a bunch would be available. However, it stores the same information in every entity, which is unacceptable.
It is also possible to store these maps in a companion object, so that nothing is duplicated, and to call the object's encode and decode methods with an extra parameter denoting the particular entity instance.
The requirement may seem strange, but it is necessary for the required speed and modularity. All of these solutions are clumsy, and I think there is a better, more idiomatic solution in Scala, or maybe I am missing an important architectural pattern here. So, is there a simpler and more idiomatic approach than the one with the companion object?
EDIT: I think that aggregation instead of inheritance could probably resolve these problems, but at the cost of not being able to call methods directly on an entity.
UPDATE: Exploring the pretty promising way proposed by Rex Kerr, I have stumbled upon a problem. Here is the test case:
trait Component {
  def encode: Map[String, String]
  def decode(m: Map[String, String])
}

abstract class Entity extends Component // so as to enforce the two methods

trait ComponentA extends Component {
  var a = 10
  def encode: Map[String, String] = Map("a" -> a.toString)
  def decode(m: Map[String, String]) {
    println("ComponentA: decode " + m)
    m.get("a").collect { case aa => a = aa.toInt }
  }
}

trait ComponentB extends ComponentA {
  var b = 100
  override def encode: Map[String, String] = super.encode + ("b" -> b.toString)
  override def decode(m: Map[String, String]) {
    println("ComponentB: decoding " + m)
    super.decode(m)
    m.get("b").foreach { bb => b = bb.toInt }
  }
}

trait ComponentC extends Component {
  var c = "hey!"
  def encode: Map[String, String] = Map("c" -> c)
  def decode(m: Map[String, String]) {
    println("ComponentC: decode " + m)
    m.get("c").collect { case cc => c = cc }
  }
}

trait ComponentD extends ComponentB with ComponentC {
  var d = 11.6f
  override def encode: Map[String, String] = super.encode + ("d" -> d.toString)
  override def decode(m: Map[String, String]) {
    println("ComponentD: decode " + m)
    super.decode(m)
    m.get("d").collect { case dd => d = dd.toFloat }
  }
}
and finally
class EntityA extends ComponentA with ComponentB with ComponentC with ComponentD
so that
object Main {
  def main(args: Array[String]) {
    val ea = new EntityA
    val map = Map("a" -> "1", "b" -> "3", "c" -> "what?", "d" -> "11.24")
    println("BEFORE: " + ea.encode)
    ea.decode(map)
    println("AFTER: " + ea.encode)
  }
}
which gives:
BEFORE: Map(c -> hey!, d -> 11.6)
ComponentD: decode Map(a -> 1, b -> 3, c -> what?, d -> 11.24)
ComponentC: decode Map(a -> 1, b -> 3, c -> what?, d -> 11.24)
AFTER: Map(c -> what?, d -> 11.24)
The A and B components are not affected; they are cut off by the way the super-call chain resolves. So this approach is only applicable to certain hierarchies: in this case ComponentD has effectively shadowed everything else. Any comments are welcome.
UPDATE 2: I place the comment that answers this problem here, for better reference: "Scala linearizes all the traits. There should be a supertrait of everything which will terminate the chain. In your case, that means that C and A should still call super, and Component should be the one to terminate the chain with a no-op." – Rex Kerr
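Applied to the test case above, a minimal sketch of that fix (only Component and ComponentA are shown; ComponentC and the others need the same kind of super calls):

trait Component {
  def encode: Map[String, String] = Map.empty // terminates the encode chain
  def decode(m: Map[String, String]): Unit = () // no-op terminator for decode
}

trait ComponentA extends Component {
  var a = 10
  override def encode: Map[String, String] = super.encode + ("a" -> a.toString)
  override def decode(m: Map[String, String]): Unit = {
    super.decode(m) // keep walking the linearized chain
    m.get("a").foreach(aa => a = aa.toInt)
  }
}

With every component calling super like this, decode on EntityA visits D -> C -> B -> A -> Component, so no mixin is skipped.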
Travis had an essentially correct answer; not sure why he deleted it. But, anyway, you can do this without too much grief, as long as you're willing to make your encoding method take an extra parameter, and as long as, when you decode, you're happy to just set mutable variables rather than create a new object. (Complex trait-stacking effectively-at-runtime ranges from difficult to impossible.)
The basic observation is that when you chain traits together, it defines a hierarchy of superclass calls. If each of these calls takes care of the data in that trait, you'd be set, as long as you can find a way to get all that data back. So
trait T {
  def encodeMe(s: Seq[String]): Seq[String] = Seq()
  def encode = encodeMe(Seq())
}

trait A extends T {
  override def encodeMe(s: Seq[String]) = super.encodeMe(s) :+ "A"
}

trait B extends T {
  override def encodeMe(s: Seq[String]) = super.encodeMe(s) :+ "B"
}
Does it work?
scala> val a = new A with B
a: java.lang.Object with A with B = $anon$1#41a92be6
scala> a.encode
res8: Seq[String] = List(A, B)
scala> val b = new B with A
b: java.lang.Object with B with A = $anon$1#3774acff
scala> b.encode
res9: Seq[String] = List(B, A)
Indeed! Not only does it work, but you get the order for free.
Now we need a way to set variables based on this encoding. Here we follow the same pattern: we take some input and just go up the super chain with it. If you have very many traits stacked on, you may want to pre-parse the text into a map or filter out the parts applicable to the current trait. If not, just pass everything on to super, and then set your own fields afterwards.
trait T {
  var t = 0
  def decode(m: Map[String, Int]) { m.get("t").foreach { ti => t = ti } }
}

trait C extends T {
  var c = 1
  override def decode(m: Map[String, Int]) {
    super.decode(m); m.get("c").foreach { ci => c = ci }
  }
}

trait D extends T {
  var d = 1
  override def decode(m: Map[String, Int]) {
    super.decode(m); m.get("d").foreach { di => d = di }
  }
}
And this too works just like one would hope:
scala> val c = new C with D
c: java.lang.Object with C with D = $anon$1#549f9afb
scala> val d = new D with C
d: java.lang.Object with D with C = $anon$1#548ea21d
scala> c.decode(Map("c"->4,"d"->2,"t"->5))
scala> "%d %d %d".format(c.t,c.c,c.d)
res1: String = 5 4 2
How do I use case class matching with aliased types? This works when I pull CB etc out of Container.
class DoStuff[TKey](
  val c: Container[TKey]#CB
) {
  type CB = Container[TKey]#CB
  type C1 = Container[TKey]#C1
  type C2 = Container[TKey]#C2

  c match {
    case C1(e1) => e1 // - not found: value e1 - not found: value C1
    case C2(e2) => e2 // - not found: value e2 - not found: value C2
  }
}

trait Container[TKey] {
  abstract trait CB
  case class C1(val e: AnyRef) extends CB
  case class C2(val e: AnyRef) extends CB
}
Thanks!
Right... Inner classes in Scala are a bit fiddly. Let's try a simple example before I show you the rewritten version of the code you have provided.
case class Foo(x: Int) {
  case class Bar(y: String)
}
Now, consider the following code snippet:
val x = new Foo(1)
val y = new Foo(2)
val a = new x.Bar("one")
val b = new y.Bar("two")
The most generic type of a and b is Foo#Bar, which means the inner class Bar with any outer object of type Foo. But we could be more specific in saying that the type of a is x.Bar and the type of b is y.Bar - which means that a is an instance of the inner class Bar with the outer object x, similar for b.
You can actually see that the types are different by calling typeOf(a) and typeOf(b), where typeOf is a utility method defined as follows (it just reports the type of its argument through type inference and a bit of Manifest machinery):
def typeOf[T](x: T)(implicit m: scala.reflect.Manifest[T]) = m.toString
As an instance of an inner class holds a reference to its enclosing object, you cannot instantiate an inner class without somehow specifying its outer object. Therefore, you can call new x.Bar("one") but you cannot call new Foo#Bar("?"), as in the second case you haven't specified what the outer object of the new instance should be.
So, let's return to your code snippet. When you write the pattern C1(e1), you are actually referring to a member of an inner class (its case class extractor). As C1 is an alias for Container[TKey]#C1, you have tried to use an inner class member without specifying its outer object, which fails for the reasons outlined above. The way I would write the code is as follows:
trait Container[TKey] {
  abstract trait CB
  case class C1(val e: AnyRef) extends CB
  case class C2(val e: AnyRef) extends CB
}

class DoStuff[TKey](val c: Container[TKey], val element: Container[TKey]#CB) {
  element match {
    case c.C1(e1) => Some(e1)
    case c.C2(e2) => Some(e2)
    case _ => None
  }
}
Now this compiles and hopefully it does what you want. But take this with great care! Due to type erasure, Scala cannot guarantee that the element is actually of type c.CB rather than of type d.CB for some other container d, since the two erase to the same class.
Consider this example:
def matcher(arg: Foo#Bar) = {
  arg match {
    case x.Bar(n) => println("x")
    case y.Bar(n) => println("y")
  }
}
where x and y are as before. Try running the following:
matcher(a)
matcher(b)
They both print x!
Therefore I would rewrite the code to explicitly have an element in the container:
trait Container[TKey] {
  abstract trait CB
  case class C1(val e: AnyRef) extends CB
  case class C2(val e: AnyRef) extends CB

  val element: CB
}

class DoStuff[TKey](val c: Container[TKey]) {
  c.element match {
    case c.C1(e1) => Some(e1)
    case c.C2(e2) => Some(e2)
    case _ => None
  }
}
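For reference, a hypothetical usage sketch of this final version (the String key type and the "payload" element are made up for illustration):

val stringContainer = new Container[String] {
  val element: CB = C1("payload")
}
new DoStuff[String](stringContainer) // the match inside hits c.C1(e1) with e1 == "payload"

Note that DoStuff evaluates the match in its constructor body and discards the result; in real code you would probably store it in a val or return it from a method.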
Hope it helps :)
-- Flaviu Cipcigan