Extending generic Serializables with implicit conversions - scala

I am trying to add extension methods to Serializable types, and there seems to be a hole in my understanding of the class. Here is a snippet of the basics of what I'm trying to do:
class YesSer extends Serializable
class NoSer

implicit class SerOps[S <: Serializable](s: S) {
  def isSer(msg: String) = {
    println(msg)
    assert(s.isInstanceOf[Serializable])
  }
}

val n = new NoSer
val ln = List(new NoSer, new NoSer)
val y = new YesSer
val ly = List(new YesSer, new YesSer)

// n.isSer("non Serializable")
ln.isSer("list of non Serializable")
y.isSer("Serializable")
ly.isSer("list of Serializable")
It's obvious to me that the line n.isSer won't compile, but it seems that ln.isSer shouldn't compile either, as its "inner" type is NoSer. Is there some kind of coercion to Serializable of the inner type of ln? Am I trying to do something absolutely bonkers?

List extends Serializable. So List[A].isSer(String) is defined; the type of A does not matter.
Serializable is just a marker interface, used to indicate that a class is designed to be serializable. Whether you will actually be able to serialize an object depends on whether the entire transitive object graph rooted at that object is serializable. Your ln will fail serialization at runtime with a NotSerializableException because it contains non-serializable elements. See the javadoc for java.io.Serializable (which scala.Serializable extends) for more details.
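To see this at runtime, here is a minimal sketch (assuming the definitions above are compiled as top-level classes rather than in the REPL, and using only java.io): serializing ly succeeds, while serializing ln throws.

import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

def trySerialize(obj: AnyRef): Unit =
  try {
    new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
    println("serialized OK")
  } catch {
    case e: NotSerializableException => println(s"failed: $e")
  }

trySerialize(ly) // serialized OK: List and YesSer are both Serializable
trySerialize(ln) // failed: java.io.NotSerializableException: NoSer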

Related

Elegant grouping of implicit value classes

I'm writing a set of implicit Scala wrapper classes for an existing Java library (so that I can decorate that library to make it more convenient for Scala developers).
As a trivial example, let's say that the Java library (which I can't modify) has a class such as the following:
public class Value<T> {
    // Etc.
    public void setValue(T newValue) {...}
    public T getValue() {...}
}
Now let's say I want to decorate this class with Scala-style getters and setters. I can do this with the following implicit class:
final implicit class RichValue[T](private val v: Value[T])
  extends AnyVal {
  // Etc.
  def value: T = v.getValue
  def value_=(newValue: T): Unit = v.setValue(newValue)
}
The implicit keyword tells the Scala compiler that it can convert instances of Value to be instances of RichValue implicitly (provided that the latter is in scope). So now I can apply methods defined within RichValue to instances of Value. For example:
def increment(v: Value[Int]): Unit = {
  v.value = v.value + 1
}
(Agreed, this isn't very nice code, and is not exactly functional. I'm just trying to demonstrate a simple use case.)
Unfortunately, Scala does not allow implicit classes to be top-level, so they must be defined within a package object, object, class or trait and not just in a package. (I have no idea why this restriction is necessary, but I assume it's for compatibility with implicit conversion functions.)
However, I'm also extending RichValue from AnyVal to make this a value class. If you're not familiar with them, they allow the Scala compiler to make allocation optimizations. Specifically, the compiler does not always need to create instances of RichValue, and can operate directly on the value class's constructor argument.
In other words, there's very little performance overhead from using a Scala implicit value class as a wrapper, which is nice. :-)
However, a major restriction of value classes is that they cannot be defined within a class or a trait; they can only be members of packages, package objects or objects. (This is so that they do not need to maintain a pointer to the outer class instance.)
An implicit value class must honor both sets of constraints, so it can only be defined within a package object or an object.
And therein lies the problem. The library I'm wrapping contains a deep hierarchy of packages with a huge number of classes and interfaces. Ideally, I want to be able to import my wrapper classes with a single import statement, such as:
import mylib.implicits._
to make using them as simple as possible.
The only way I can currently see of achieving this is to put all of my implicit value class definitions inside a single package object (or object) within a single source file:
package mylib

package object implicits {

  implicit final class RichValue[T](private val v: Value[T])
    extends AnyVal {
    // ...
  }

  // Etc. with hundreds of other such classes.
}
However, that's far from ideal, and I would prefer to mirror the package structure of the target library, yet still bring everything into scope via a single import statement.
Is there a straightforward way of achieving this that doesn't sacrifice any of the benefits of this approach?
(For example, I know that if I forego making these wrappers value classes, then I can define them within a number of different traits - one for each component package - and have my root package object extend all of them, bringing everything into scope through a single import, but I don't want to sacrifice performance for convenience.)
implicit final class RichValue[T](private val v: Value[T]) extends AnyVal
is essentially syntactic sugar for the following two definitions:
import scala.language.implicitConversions // or use a compiler flag

final class RichValue[T](private val v: Value[T]) extends AnyVal

@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
(which, as you can see, is why implicit classes have to be inside traits, objects or classes: they come with a matching def)
There is nothing that requires those two definitions to live together. You can put them into separate objects:
object wrappedLibValues {
  final class RichValue[T](private val v: Value[T]) extends AnyVal {
    // lots of implementation code here
  }
}

object implicits {
  @inline implicit def RichValue[T](v: Value[T]): wrappedLibValues.RichValue[T] =
    new wrappedLibValues.RichValue(v)
}
Or into traits:
object wrappedLibValues {
  final class RichValue[T](private val v: Value[T]) extends AnyVal {
    // implementation here
  }

  trait Conversions {
    @inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
  }
}

object implicits extends wrappedLibValues.Conversions
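With either layout, a single import brings the conversion into scope. A rough usage sketch (Value stands in for the wrapped Java class, and its constructor here is just assumed for the example):

import implicits._

val v: Value[Int] = new Value[Int]() // hypothetical constructor of the Java class
v.value = 42                         // goes through the RichValue extension methods
println(v.value)                     // 42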

Passing a type parameter for instantiation

Why wouldn't the scala compiler dig this:
class Clazz

class Foo[C <: Clazz] {
  val foo = new C
}
class type required but C found
[error]   val a = new C
[error]               ^
Related question - How to get rid of: class type required but T found
This is a classic generics problem that also exists in Java - you cannot create an instance of a generic type parameter. What you can do in Scala to fix this, however, is to require evidence for your type parameter that captures the runtime class, in the form of a ClassTag:
import scala.reflect.ClassTag

class Foo[C <: Clazz](implicit ct: ClassTag[C]) {
  // runtimeClass.newInstance requires a public no-arg constructor on C
  val foo: C = ct.runtimeClass.newInstance.asInstanceOf[C]
}
Note that this only works if the class has a constructor without any arguments. Since the parameter is implicit, you don't need to pass it when calling the Foo constructor:
new Foo[Clazz]()
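A quick sketch of how this behaves (assuming the Clazz and Foo definitions above): instantiation works when the type argument has a public no-arg constructor, and fails at runtime otherwise.

val f = new Foo[Clazz]()
println(f.foo) // an instance of Clazz

// A subtype without a no-arg constructor still compiles,
// but newInstance throws at runtime:
class NeedsArg(x: Int) extends Clazz
// new Foo[NeedsArg]() // java.lang.InstantiationException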
I came up with this scheme, though I couldn't simplify it through a companion object:
class Clazz

class ClazzFactory {
  def apply = new Clazz
}

class Foo(factory: ClazzFactory) {
  val foo: Clazz = factory.apply
}
It's very annoying that ClazzFactory can't be an object rather than a class though. A simplified version:
class Clazz {
  def apply() = new Clazz
}

class Foo(factory: Clazz) {
  val foo: Clazz = factory.apply
}
This requires the caller to use the new keyword in order to provide the factory argument, which is a minor annoyance relative to the initial problem. Still, Scala could have made this scenario more elegant; I had to fall back to passing a parameter of the type I wish to instantiate, plus the new keyword. Maybe there's a better way.
(The motivation was to instantiate that type many times within the real Foo, which is why this counts as a solution at all; otherwise the pattern above would be pointless.)
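One possible simplification (my own sketch, not from the original post) is to pass a factory function instead of a dedicated factory class; this avoids both the ClazzFactory type and the ClassTag machinery:

class Clazz

class Foo(factory: () => Clazz) {
  // Foo can call factory() as many times as it needs new instances
  val foo: Clazz = factory()
}

val f = new Foo(() => new Clazz)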

Compiler doesn't generate a field for implicit val when an implicit val with the same type is present in the superclass

I have a class Foo defined as follows:
class Elem[A]

abstract class BaseDef[T](implicit val selfType: Elem[T])

case class Foo[A, T]()(implicit val eA: Elem[A], val eT: Elem[T]) extends BaseDef[A]
To my surprise, getDeclaredFields does not include eA:
object Test extends App {
  private val fields = classOf[Foo[_, _]].getDeclaredFields
  println(fields.mkString("\n"))
  assert(fields.exists(_.getName == "eA"))
}
produces
private final scalan.Elem scalan.Foo.eT
Exception in thread "main" java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:151)
at scalan.Test$.delayedEndpoint$scalan$Test$1(JNIExtractorOps.scala:15)
at scalan.Test$delayedInit$body.apply(JNIExtractorOps.scala:11)
Is there an explanation for this or is this a known bug (Scala version is 2.11.7)? I can access eA from outside the class.
It seems the compiler decides it can reuse the selfType field for eA, which would be great if it didn't break access to eA via scala-reflect.
Reply from Jason Zaugg on scala-user:
The compiler avoids redundant fields in subclasses if it statically determines that the value is stored in a field in a super class that exposes it via an accessible accessor.
To prevent this, you might be able to change the superclass constructor by removing the val keyword and adding another val to that class that stores the parameter.
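My reading of that suggestion as code (a sketch, not code from the reply itself): the superclass takes the implicit parameter without val and stores it in a separate field, so the subclass should keep its own eA field.

abstract class BaseDef[T](implicit elem: Elem[T]) {
  val selfType: Elem[T] = elem
}

case class Foo[A, T]()(implicit val eA: Elem[A], val eT: Elem[T]) extends BaseDef[A]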

Serializable trait not sticking in Scala with SBT/Eclipse

I have a hierarchy in Scala with some marker traits. At the top of the hierarchy I declare the root trait to be Serializable. Several layers down, when I get to concrete objects, inheritors of the trait seem to forget that they are Serializable. The hierarchy looks like this:
import java.io.File

trait DataModel extends Serializable    // Marker trait

trait StaticModel extends DataModel     // Marker trait

trait RectangleModel[T] extends StaticModel { // Trait with type param
  def rows: Int
  def columns: Int
  def apply(row: Int, column: Int): Option[T]
}

object MakeRectangleModelFromFile {     // Factory object
  def apply(file: File): RectangleModel[String] =
    new RectangleModel[String] {        // Anonymous class that
      def rows = 2                      // implements the trait.
      def columns = 3
      def apply(row: Int, column: Int): Option[String] = Some("One")
    }
}

val x = MakeRectangleModelFromFile(null) // Make object using factory.
println(x.isInstanceOf[Serializable])    // Object should be Serializable!
When I compile and run from the command line (Scala 2.10.3) the last statement prints out "true", as expected. When I do the same from Eclipse using Scala IDE for Eclipse, and a project created by SBT 0.13, I get "false". The concrete data model seems to have forgotten that it is Serializable. If I remind it that it is Serializable, by constructing it as follows:
new RectangleModel[String] with Serializable {
  ...
}
All is well again! I am wondering if there is something fishy in the SBT cache, maybe having to do with the trait being generic. I've produced a similar example with a named subclass of RectangleModel, so I don't think it's a problem with the class being anonymous.
Back when I first wrote the RectangleModel[T] trait, I forgot to make it extend StaticModel, and so there was a time when the compiler had it right. But now it seems like the compiler is remembering that prior time. Even when I Scaladoc this stuff, Scaladoc shows that the RectangleModel is Serializable.
Any clues as to how to flush out this old, bad memory?

Implementing '.clone' in Scala

I'm trying to figure out how to .clone my own objects, in Scala.
This is for a simulation so mutable state is a must, and from that arises the whole need for cloning. I'll clone a whole state structure before moving the simulation time ahead.
This is my current try:
abstract trait Cloneable[A] {
  // Seems we cannot declare the prototype of a copy constructor
  //protected def this(o: A)   // to be defined by the class itself

  def myClone = new A(this)
}

class S(var x: String) extends Cloneable[S] {
  def this(o: S) = this(o.x)   // for 'Cloneable'
  def toString = x
}

object TestX {
  val s1 = new S("say, aaa")
  println(s1.myClone)
}
a. Why does the above not compile? It gives:
error: class type required but A found
    def myClone = new A(this)
                      ^
b. Is there a way to declare the copy constructor (def this(o: A)) in the trait, so that classes using the trait would be required to provide one?
c. Is there any benefit from saying abstract trait?
Finally, is there a better, standard solution for all this?
I've looked into Java cloning; it does not seem to be meant for this. Scala's copy isn't either - it's only for case classes, and those shouldn't have mutable state.
Thanks for help and any opinions.
Traits can't define constructors (and I don't think abstract has any effect on a trait).
Is there any reason it needs to use a copy constructor rather than just implementing a clone method? It might be possible to get out of having to declare the [A] type on the class, but I've at least declared a self type so the compiler will make sure that the type matches the class.
trait DeepCloneable[A] { self: A =>
  def deepClone: A
}

class Egg(size: Int) extends DeepCloneable[Egg] {
  def deepClone = new Egg(size)
}

object Main extends App {
  val e = new Egg(3)
  println(e)
  println(e.deepClone)
}
http://ideone.com/CS9HTW
I would suggest a typeclass-based approach. With this it is also possible to make existing classes cloneable:
import scala.language.{implicitConversions, reflectiveCalls}

class Foo(var x: Int)

trait Copyable[A] {
  def copy(a: A): A
}

implicit object FooCloneable extends Copyable[Foo] {
  def copy(foo: Foo) = new Foo(foo.x)
}

implicit def any2Copyable[A: Copyable](a: A) = new {
  def copy = implicitly[Copyable[A]].copy(a)
}
scala> val x = new Foo(2)
x: Foo = Foo@8d86328

scala> val y = x.copy
y: Foo = Foo@245e7588

scala> x eq y
res2: Boolean = false
a. When you define a type parameter like A, it gets erased after the compilation phase.
This means that the compiler uses type parameters to check that you use the correct types, but the resulting bytecode retains no information about A.
This also implies that you cannot use A as a real class in code but only as a "type reference", because at runtime this information is lost.
b & c. Traits cannot define constructor parameters or auxiliary constructors, and they are abstract by definition.
What you can do is define a trait body, which gets executed upon instantiation of the concrete implementation.
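A minimal illustration of that (the names here are made up for the example): statements in a trait body run as part of the constructor of every concrete class that mixes the trait in.

trait InitLogging {
  // runs each time a concrete class mixing this in is instantiated
  println(s"instantiated: ${getClass.getName}")
}

class Widget extends InitLogging

new Widget // prints "instantiated: Widget"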
One alternative solution is to define a Cloneable typeclass. For more on this you can find lots of blogs on the subject, but I have no suggestion for a specific one.
scalaz has a huge part built using this pattern; maybe you can find inspiration there: look at Order, Equal or Show to get the gist of it.