Why does spray-json use this hierarchy for RootJsonFormat? - scala

Recently, I have been reading the source code of spray-json. I noticed the following hierarchy relation in JsonFormat.scala; see the code snippet below:
/**
* A special JsonFormat signaling that the format produces a legal JSON root
* object, i.e. either a JSON array
* or a JSON object.
*/
trait RootJsonFormat[T] extends JsonFormat[T] with RootJsonReader[T] with RootJsonWriter[T]
To express my confusion more conveniently, I drew a diagram of this hierarchy.
According to my limited knowledge of Scala, I think JsonFormat[T] with is redundant and could be removed from the above code. So I cloned the spray-json repository and commented out JsonFormat[T] with:
trait RootJsonFormat[T] extends RootJsonReader[T] with RootJsonWriter[T]
Then I compiled it in SBT (using the package/compile command); compilation passed and a spray-json_2.11-1.3.4.jar was generated successfully.
However, when I ran the test cases via SBT's test command, they failed.
So I would like to know why. Thanks in advance.

I suggest not thinking of it in terms of OOP. Think of it in terms of type classes. When some entity must be serialized and deserialized at the same time, there is a type class JsonFormat that includes both JsonWriter and JsonReader. This is convenient since you don't need to search for two type class instances when you need both capabilities. But in order for this approach to work, there has to be an instance of the JsonFormat type class. This is why you can't just throw it away from the hierarchy. For instance:
def myMethod[T](t: T)(implicit format: JsonFormat[T]): Unit = {
  format.read(format.write(t))
}
If you want this method to work properly there has to be a direct descendant of JsonFormat and a concrete implicit instance of it for a specific type T.
UPD: By creating an instance of the JsonFormat type class, you get instances for JsonWriter and JsonReader type classes automatically (in case when you need both). So this is also a way to reduce boilerplate.
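For illustration, here is a rough sketch of how a single instance covers both capabilities (Person is a made-up example; DefaultJsonProtocol and jsonFormat2 are spray-json's standard helpers):
import spray.json._

object MyProtocol extends DefaultJsonProtocol {
  case class Person(name: String, age: Int)
  // One RootJsonFormat instance serves wherever a JsonWriter, JsonReader
  // or JsonFormat is required.
  implicit val personFormat: RootJsonFormat[Person] = jsonFormat2(Person)
}

import MyProtocol._
val json: JsValue = Person("Ada", 36).toJson   // uses the JsonWriter side
val back: Person  = json.convertTo[Person]     // uses the JsonReader side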

Related

Why must the first base class in the parent list be a non-trait class?

In the Scala spec, it is said that in a class template sc with mt1 with … with mtn:
Each trait reference mti must denote a trait. By contrast, the
superclass constructor sc normally refers to a class which is not a
trait. It is possible to write a list of parents that starts with a
trait reference, e.g. mt1 with … with mtn. In that case the
list of parents is implicitly extended to include the supertype of
mt1 as first parent type. The new supertype must have at least one
constructor that does not take parameters. In the following, we will
always assume that this implicit extension has been performed, so that
the first parent class of a template is a regular superclass
constructor, not a trait reference.
If I understand it correctly, I think it means:
trait Base1 {}
trait Base2 {}
class Sub extends Base1 with Base2 {}
Will be implicitly extended to:
trait Base1 {}
trait Base2 {}
class Sub extends Object with Base1 with Base2 {}
My questions are:
Is my understanding correct?
Does this requirement (that the first class in the parent list must be a non-trait class) and the implicit extension apply only to class templates (e.g. class Sub extends Mt1 with Mt2) or also to trait templates (e.g. trait Sub extends Mt1 with Mt2)?
Why are this requirement and the implicit extension necessary?
Disclaimer: I'm not and never was a member of the "Scala design committee" or anything like that, so the answer to the "why?" question is mostly speculation, but I think a useful one.
Disclaimer #2: I've written this post over several hours and in several takes, so it is probably not very consistent.
Disclaimer #3 (a shameful self-promotion for future readers): If you find this quite long answer useful, you might also take a look at another long answer of mine to another question by Lifu Huang on a similar topic.
Short answers
This is one of those complicated things for which I don't think there is a good short answer unless you already know what the answer is. Although my real answer will be long, here are my best short answers:
Why must the first base class in the parent list be a non-trait class?
Because there has to be only one non-trait base class, and it makes things easier if it is always the first one.
Is my understanding correct?
Yes, your implicit example is what will happen. However I'm not sure that it shows full understanding of the topic.
Does this requirement (that the first class in the parent list must be a non-trait class) and the implicit extension apply only to class templates (e.g. class Sub extends Mt1 with Mt2) or also to trait templates (e.g. trait Sub extends Mt1 with Mt2)?
No, the implicit extension happens for traits as well. Actually, how else could you expect Mt1 to have its own "supertype" promoted down to the class that extends it?
Actually here are two IMHO non-obvious examples proving this is true:
Example #1
trait TAny extends Any
trait TNo
// works
class CGood(val value: Int) extends AnyVal with TAny
// fails
// illegal inheritance; superclass AnyVal is not a subclass of the superclass Object
class CBad(val value: Int) extends AnyVal with TNo
This example fails because the spec says
The extends clause extends sc with mt1 with … with mtn can be omitted, in which case extends scala.AnyRef is assumed.
so TNo actually extends AnyRef which is incompatible with AnyVal.
Example #2
class CFirst
class CSecond extends CFirst
// did you know that traits can extend classes as well?
trait TFirst extends CFirst
trait TSecond extends CSecond
// works
class ChildGood extends TSecond with TFirst
// fails
// illegal inheritance; superclass CFirst is not a subclass of the superclass CSecond of the mixin trait TSecond
class ChildBad extends TFirst with TSecond
Again ChildBad fails because TSecond requires CSecond but TFirst only provides CFirst as the base class.
Why are this requirement and the implicit extension necessary?
There are three major reasons:
Compatibility with the main target platform (JVM)
Traits have "mixin" semantics: you have a class and you mix additional behavior in
Completeness, consistency and simplicity of the rest of the spec (e.g. of the linearization rules). This might be restated as follows: each class must declare 0 or 1 non-trait base classes, and after compilation the target platform enforces that there is exactly 1 non-trait base class. So it makes the rest of the spec easier if you just assume there is always exactly one base class. This way you have to write these implicit extension rules only once, rather than every time the behavior depends on the base class.
Scala spec goals/intentions
I believe that when one reads a spec there are two different sets of questions:
What exactly is written? What is the meaning of the spec?
Why is it written so? What was the intention?
Actually I think in many cases #2 is more important than #1, but unfortunately specs rarely contain explicit insights into that area. Anyway, I will start with my speculations on #2: what were the intentions/goals/limitations of the class system in Scala? The main high-level goal was to create a type system richer than the one in Java or .Net (which are quite similar) but one that can:
be compiled back to efficient code on those target platforms
allow reasonable two-way interaction between Scala code and "native" code on the target platforms
Side note: Support for .Net was dropped years ago, but it was one of the target platforms for years and this affected the design.
Single base class
Short summary: this section describes some reasons why Scala designers had a strong motivation to have the "exactly one base class" rule in the language.
A major problem with OO design and particularly with inheritance is that AFAIK the question "where exactly is the border between the "good and useful" practices and the "bad" ones?" is open. It means that each language must find its own trade-off between making impossible what is wrong and making possible (and easy) what is useful. Many believe that in C++, which obviously was a major inspiration for Java and .Net, that trade-off is shifted too much toward the "allow everything even if it is potentially harmful" zone. This made many designers of newer languages seek a more restrictive trade-off. In particular, both the JVM and the .Net platform enforce the rule that all types are split into "value types" (aka primitive types), "classes" and "interfaces", and each class, except the root class (java.lang.Object/System.Object), has exactly one "base class" and zero or more "base interfaces". This decision was a reaction to many issues of multiple inheritance, including the infamous "diamond problem" but many others as well.
Sidenote (about memory layout): Another major problem with multiple inheritance is object layout in memory. Consider the following ridiculous (and impossible in current Scala) example inspired by Achilles and the tortoise:
trait Achilles {
  def getAchillesPos: Int
  def stepAchilles(): Unit
}
class AchillesImpl(var achillesPos: Int) extends Achilles {
  def getAchillesPos: Int = achillesPos
  def stepAchilles(): Unit = {
    achillesPos += 2
  }
}
class TortoiseImpl(var tortoisePos: Int) {
  def getTortoisePos: Int = tortoisePos
  def stepTortoise(): Unit = {
    tortoisePos += 1
  }
}
class AchillesAndTortoise(handicap: Int) extends AchillesImpl(0) with TortoiseImpl(handicap) {
  def catchTortoise(): Int = {
    var time = 0
    while (getAchillesPos < getTortoisePos) {
      time += 1
      stepAchilles()
      stepTortoise()
    }
    time
  }
}
The tricky part here is how to actually lay the achillesPos and tortoisePos fields out in the memory of the object. The issue is that you probably want only one compiled copy of all the methods in memory, and you want the code to be efficient. This means that getAchillesPos and stepAchilles should know some fixed offset of achillesPos relative to the this pointer. Similarly, getTortoisePos and stepTortoise should know some fixed offset of tortoisePos relative to the this pointer. And none of the choices you have to achieve this goal look nice. For example:
You might decide that achillesPos is always first and tortoisePos is always second. But this means that in instances of TortoiseImpl tortoisePos should also be the second field, and there is nothing to fill the first field with, so you waste some memory. Moreover, if both AchillesImpl and TortoiseImpl come from pre-compiled libraries, you would also need some way to remap field access in them.
You might try to "fix" this pointer on-the-fly when you call into TortoiseImpl (AFAIK this is the way C++ really works). This becomes especially funny when TortoiseImpl is an abstract class that is aware of the trait Achilles (but not the specific class AchillesImpl) via extends and tries to call back some methods from there via this or pass this to some method that takes Achilles as an argument so this has to be "fixed back". Note that this is not the same as the "diamond problem" because there is only one copy of all fields and implementations.
You might agree to have a unique copy of the methods compiled for each specific class, each aware of that class's specific layout. This is bad for memory usage and performance because it blows CPU caches and forces the JIT to make independent optimizations for each.
You might say that no method except getters and setters can have direct access to the fields, and everything else should go through getters and setters. Or store all the fields in some kind of dictionary, which is effectively the same. This might be bad for performance (but it is the closest to what Scala does with mixin traits).
In actual Scala this issue does not exist because traits can't really declare any fields. When you declare a val or var in a trait, you actually declare getter (and setter) methods that will be implemented by the particular class that extends the trait, so each class has full control over the layout of its fields. And in terms of performance this most probably works OK because the JVM (JIT) can inline such a virtual call in many real-world scenarios.
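For instance, here is a rough sketch of what that desugaring amounts to (the names reuse the ridiculous example above; this reflects the 2.11-era compilation scheme the answer describes):
// A var declared in a trait is really just a pair of abstract accessors;
// the trait itself owns no field.
trait Achilles {
  def achillesPos: Int               // abstract getter
  def achillesPos_=(v: Int): Unit    // abstract setter
}

// The class that mixes the trait in provides the actual field and therefore
// fully controls the memory layout.
class AchillesImpl(var achillesPos: Int) extends Achilles {
  def stepAchilles(): Unit = { achillesPos += 2 }
}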
End of the Sidenote
Another major point is interoperability with the target platform. Even if Scala somehow supported true multiple inheritance, so that you could have a type that inherits from String with Date and could be passed both to methods that expect a String and to methods that expect a Date, how would this look from the Java point of view? Also, if the target platform enforces the rule that every class has to be an (indirect) subtype of the same root class (Object), you can't work around that in your higher-level language.
Traits and Mix-ins
Many think that "one class and many interfaces" trade-off that was made in Java and .Net is too restrictive. For example it makes it hard to share common default implementation of some of the interface methods between different classes. Actually over the time Java and .Net designers seem to come to the same conclusion and rolled out they own fixes for this kind of issues: Extension methods in .Net and then Default methods in Java. Scala designers added a feature called Mixins that was known to fare well in many practical cases. However unlike many other dynamic languages that has similar feature, Scala still had to meet the "exactly one base class" rule and other limitations of the target platform.
It is important to note that one important scenario where mixins are used in practice is to implement variations of the Decorator or Adapter patterns, both of which rely on the fact that you can restrict your base type to something more specific than Any or AnyRef. A prime example of such usage is the scala.collection package.
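As a rough illustration (all names are invented, not taken from scala.collection), a mixin whose base type is restricted to a specific class can decorate that class's behaviour:
abstract class Buffer {
  def read(): Int
}

// This trait can only be mixed into Buffer subclasses; `abstract override`
// lets it wrap whatever concrete read() it is stacked on top of.
trait LoggingBuffer extends Buffer {
  abstract override def read(): Int = {
    val r = super.read()
    println(s"read -> $r")
    r
  }
}

class ConstBuffer extends Buffer {
  def read(): Int = 42
}

val decorated = new ConstBuffer with LoggingBuffer  // decorator assembled at mix-in time
decorated.read()                                    // logs "read -> 42" and returns 42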
Scala syntax
So now you have following goals/restrictions:
Exactly one base class for each class
Ability to add logic to classes from mixins
Support of mixins with restricted base type
Classes from the target platform (Java), when seen from Scala, are mapped to Scala classes (because what else could they be mapped to?); they come pre-compiled and we don't want to mess with their implementation
Other good qualities such as simplicity, type safety, determinism, etc.
If you want some kind of multiple inheritance support in your language, you need to develop conflict resolution rules: what happens when several base types provide some logic that would fit the same "slot" in your class? After the prohibition of fields in traits, we are left with the following "slots":
Base class in terms of the target platform
Constructors
Methods with the same name and signature
And possible conflict resolution strategies are:
Prohibit (fail compilation)
Decide which one wins and wipes others
Somehow chain them
Somehow preserve all of them via renaming. This is not really possible on the JVM; for an example in .Net, see Explicit Interface Implementation
In a sense Scala uses all available (i.e. the first 3) strategies, but the high-level goal is: let's try to preserve as much logic as we can.
The most important part for this discussion is conflicts resolution for constructors and methods.
We want the rules to be the same for different slots because otherwise it is not clear how to achieve safety (if traits A and B both override methods foo and bar but the resolution rules for foo and bar are different, the invariants of A and B might easily be broken). Scala's approach is based on class linearization. In short, this is a way to "flatten" the hierarchy of base classes into a simple linear structure in a predictable way, based on the idea that the further left a type appears in the with chain, the more "base" (higher in the inheritance) it is. After you do this, the conflict resolution rule for methods becomes simple: you go through the list of base types and chain behavior via super calls; if super is not called, you stop chaining. This produces quite predictable semantics that people can reason about.
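A minimal sketch of that chaining (the class and trait names here are made up):
class Base {
  def describe: String = "base"
}
trait Loud extends Base {
  override def describe: String = super.describe.toUpperCase
}
trait Excited extends Base {
  override def describe: String = super.describe + "!"
}

// Linearization of Child is Child -> Excited -> Loud -> Base, so super calls
// chain from right to left through the `with` list:
class Child extends Base with Loud with Excited

new Child().describe  // "BASE!"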
Now assume you allowed a non-trait class to not be first. Consider the following example:
class CBase {
  def getValue = 2
}
trait TFirst extends CBase {
  override def getValue = super.getValue + 1
}
trait TSecond extends CBase {
  override def getValue = super.getValue * 2
}
class CThird extends CBase with TSecond {
  override def getValue = 100 - super.getValue
}
class Child extends TFirst with TSecond with CThird
In which order should TFirst.getValue and TSecond.getValue be called? Obviously CThird is already compiled and you can't change what super is for it, so it has to be moved to the first position, and there is already a TSecond.getValue call inside it. But on the other hand this breaks the rule that everything on the left is base and everything on the right is child. The simplest way to avoid such confusion is to enforce the rule that non-trait classes must go first.
The same logic applies if you just extend the previous example by substituting class CThird with a trait that extends it:
trait TFourth extends CThird
class AnotherChild extends TFirst with TSecond with TFourth
Again, the only non-trait class AnotherChild can extend is CThird, and this again makes the conflict resolution rules quite hard to reason about.
That's why Scala makes the rule much simpler: whatever provides the base class must come in the first position. And then it makes sense to extend the same rule to traits as well, so if the first position is occupied by some trait, it also defines the base class.
1) Basically yes, your understanding is correct. Like in Java, every class inherits from java.lang.Object (AnyRef in Scala). So, since you are defining a concrete class, you implicitly inherit from Object. If you check with the REPL, you get:
scala> trait Base1 {}
defined trait Base1
scala> trait Base2 {}
defined trait Base2
scala> class Sub extends Base1 with Base2 {}
defined class Sub
scala> classOf[Sub].getSuperclass
res0: Class[_ >: Sub] = class java.lang.Object
2) Yes, from the "Traits" paragraph in the spec, this applies to them as well. In the "Templates" paragraph we have:
The new supertype must have at least one constructor that does not take parameters
And then in the "Traits" paragraph:
Unlike normal classes, traits cannot have constructor parameters. Furthermore, no constructor arguments are passed to the superclass of the trait. This is not necessary as traits are initialized after the superclass is initialized.
Assume a trait D defines some aspect of an instance x of type C (i.e. D is a base class of C). Then the actual supertype of D in x is the compound type consisting of all the base classes in L(C) that succeed D.
This is needed in order to define the base constructor with no parameters.
3) As per answer (2), it's needed to define the base constructor.

Case object extending class with constructor in Scala

I am a beginner in Scala and was playing around to learn more about abstract data types. I wrote the following definitions to replicate the Option type:
sealed abstract class Maybe[+A](x:A)
case object Nothing extends Maybe[Nothing](Nothing)
case class Just[A](x:A) extends Maybe[A](x)
But I encountered the following error.
found : Nothing.type
required: Nothing
case object Nothing extends Maybe[Nothing](Nothing)
How do I pass Nothing instead of Nothing.type?
I referred to the following question for hints:
How to extend an object in Scala with an abstract class with constructor?, but it was not helpful.
Maybe more like this. Your Nothing shouldn't have a value, just the type. Also, people usually use traits instead of abstract classes.
sealed trait Maybe[+A]
case object None extends Maybe[Nothing]
case class Just[A](x:A) extends Maybe[A]
You probably shouldn't create your own Nothing; that's going to be confusing: you will confuse yourself and the compiler about whether you are referring to yours or to the one at the bottom of the type hierarchy.
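For completeness, a small usage sketch of that encoding (firstElem is a made-up helper; inside the file where this None is defined it shadows scala.None):
sealed trait Maybe[+A]
case object None extends Maybe[Nothing]
case class Just[A](x: A) extends Maybe[A]

// Covariance (+A) is what lets the single None object stand in for Maybe[A] for any A.
def firstElem[A](xs: List[A]): Maybe[A] = xs match {
  case head :: _ => Just(head)
  case Nil       => None
}

firstElem(List(1, 2, 3))  // Just(1)
firstElem(Nil)            // None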
As mentioned by Stephen, the correct way to do this would be to use a trait and not an abstract class; however, I thought it might be informative to explain why the current approach fails and how to fix it.
The main issue is with this line:
case object Nothing extends Maybe[Nothing](Nothing)
First (as mentioned), you shouldn't call your object Nothing. Secondly, you set the object to extend Maybe[Nothing]; Nothing can't have any actual values, so you can't use it as an object. Also, you can't use the object itself as the constructor parameter because that would be cyclic.
What you need is a bottom type (i.e. a type that is a subtype of every possible A) and an object of that type. Nothing is a bottom type but has no instances.
A possible solution is to limit yourself to AnyRef (i.e. nullable objects) and use the Null bottom type which has a valid object (null):
sealed abstract class Maybe[+A <: AnyRef](x:A)
case object None extends Maybe[Null](null)
This is a bit of clarification for Assaf Mendelson's answer, but it's too big for a comment.
case object Nothing extends Maybe[Nothing](Nothing)
Scala has separate namespaces for types and values. Nothing in case object Nothing is a value. Nothing in Maybe[Nothing] is a type. Since you didn't define a type called Nothing, it refers to the automatically imported scala.Nothing and you must pass a value of this type as an argument. By definition it has no values but e.g. case object Nothing extends Maybe[Nothing](throw new Exception) would compile, as the type of throw expressions is Nothing. Instead you pass the value Nothing, i.e. the same case object you are defining; its type is written as Nothing.type.
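A tiny illustration of the two namespaces (purely for demonstration; as noted elsewhere, don't actually name your own things Nothing):
case object Nothing              // a *value* named Nothing; its type is Nothing.type
val n: Nothing.type = Nothing    // refers to the case object defined just above
// scala.Nothing, the bottom *type*, is unrelated here and has no values at all.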
How do I pass Nothing instead of Nothing.type?
It seems like there is no way to do so.
As it says at http://www.scala-lang.org/api/2.9.1/scala/Nothing.html:
there exist no instances of this type.

Scala: How to invoke method with type parameter and manifest without knowing the type at compile time?

I have a function with the following signature:
myFunc[T <: AnyRef](arg: T)(implicit m: Manifest[T]) = ???
How can I invoke this function if I do not know the exact type of the argument at compile time?
For example:
val obj: AnyRef = new Foo() // At compile time obj is defined as AnyRef,
val objClass = obj.getClass // At runtime I can figure out that it is actually Foo
// Now I would need to call `myFunc[Foo](obj.asInstanceOf[Foo])`,
// but how would I do it without putting [Foo] in the square braces?
I would want to write something logically similar to:
myFunc[objClass](obj.asInstanceOf[objClass])
Thank you!
UPDATE:
The question is invalid. As @DaoWen, @Jelmo and @itsbruce correctly pointed out, the thing I was trying to do was complete nonsense! I just severely overthought the problem.
THANK YOU guys! It's too bad I cannot accept all the answers as correct :)
So, the problem was caused by the following situation:
I am using the Salat library to serialize objects to/from BSON/JSON representation.
Salat has a Grater[T] class which is used for both serialization and deserialization.
The method call for deserialization from BSON looks this way:
val foo = grater[Foo].asObject(bson)
Here, the role of the type parameter is clear. What I then tried to do was use the same Grater to serialize any entity from my domain model. So I wrote:
val json = grater[???].toCompactJSON(obj)
I immediately rushed for reflection and just didn't see the obvious solution lying on the surface, which is:
grater[Entity].toCompactJSON(obj) // where Entity...
#Salat trait Entity // is a root of the domain model hierarchy
Sometimes things are much easier than we think they are! :)
It appears that while I was writing this answer the author of the question realized that he does not need to resolve Manifests at runtime. However, in my opinion it is a perfectly legitimate problem, which I solved successfully when I was writing a Yaml [de]serialization library, so I'm leaving the answer here.
It is possible to do what you want using ClassTags or even TypeTags. I don't know about Manifests because that API is deprecated and I haven't worked with it, but I believe it would be easier with Manifests since they aren't as sophisticated as the new Scala reflection. FYI, Manifest's successor is TypeTag.
Suppose you have the following functions:
def useClasstag[T: ClassTag](obj: T) = ...
def useTypetag[T: TypeTag](obj: T) = ...
and you need to call them with obj: AnyRef as an argument while providing either a ClassTag or a TypeTag for the obj.getClass class as the implicit parameter.
ClassTag is the easiest one. You can create a ClassTag directly from a Class[_] instance:
useClasstag(obj)(ClassTag(obj.getClass))
That's all.
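A self-contained sketch of that (useClasstag is just given a trivial body here for demonstration):
import scala.reflect.ClassTag

def useClasstag[T: ClassTag](obj: T): String =
  implicitly[ClassTag[T]].runtimeClass.getName

val obj: AnyRef = new java.util.Date
// Build the ClassTag from the runtime class and pass it explicitly:
useClasstag(obj)(ClassTag(obj.getClass))  // "java.util.Date"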
TypeTags are harder. You need to use Scala reflection to obtain one from the object, and then you have to use some internals of Scala reflection.
import scala.reflect.runtime.universe._
import scala.reflect.api
import api.{Universe, TypeCreator}
// Obtain runtime mirror for the class' classloader
val rm = runtimeMirror(obj.getClass.getClassLoader)
// Obtain instance mirror for obj
val im = rm.reflect(obj)
// Get obj's symbol object
val sym = im.symbol
// Get symbol's type signature - that's what you really want!
val tpe = sym.typeSignature
// Now the black magic begins: we create TypeTag manually
// First, make so-called type creator for the type we have just obtained
val tc = new TypeCreator {
  def apply[U <: Universe with Singleton](m: api.Mirror[U]) =
    if (m eq rm) tpe.asInstanceOf[U#Type]
    else throw new IllegalArgumentException(s"Type tag defined in $rm cannot be migrated to other mirrors.")
}
// Next, create a TypeTag using runtime mirror and type creator
val tt = TypeTag[AnyRef](rm, tc)
// Call our method
useTypetag(obj)(tt)
As you can see, this machinery is rather complex. It means that you should use it only if you really need it, and, as others have said, the cases when you really need it are very rare.
This isn't going to work. Think about it this way: You're asking the compiler to create a class Manifest (at compile time!) for a class that isn't known until run time.
However, I have the feeling you're approaching the problem the wrong way. Is AnyRef really the most you know about the type of Foo at compile time? If that's the case, how can you do anything useful with it? (You won't be able to call any methods on it except the few that are defined for AnyRef.)
It's not clear what you are trying to achieve and a little more context could be helpful. Anyway, here's my 2 cents.
Using Manifest will not help you here because the type parameter needs to be known at compile time. What I propose is something along these lines:
def myFunc[T](arg: AnyRef, klass: Class[T]) = {
  val obj: T = klass.cast(arg)
  // do something with obj... but what?
}
And you could call it like this:
myFunc(obj, classOf[Foo])
Note that I don't see how you can do anything useful inside myFunc. At compile time, you cannot call any method on an object of type T besides the methods available on AnyRef. And if you want to use reflection to manipulate the argument of myFunc, then there is no need to cast it to a specific type.
This is the wrong way to work with a type-safe OO language. If you need to do this, your design is wrong.
myFunc[T <: AnyRef](arg: T)(implicit m: Manifest[T]) = ???
This is, of course, useless, as you have probably discovered. What kind of meaningful function can you call on an object which might be anything? You can't make any direct reference to its properties or methods.
I would want to write something logically similar to:
myFunc[objClass](obj.asInstanceOf[objClass])
Why? This kind of thing is generally only necessary for very specialised cases. Are you writing a framework that will use dependency injection, for example? If you're not doing some highly technical extension of Scala's capabilities, this should not be necessary.
I bet you know something more about the class, since you say you don't know the exact type. One big part of the way class-based OO works is that if you want to do something to a general type of objects (including all its subtypes), you put that behaviour into a method belonging to the class. Let subclasses override it if they need to.
Frankly, the way to do what you are attempting is to invoke the function in a context where you know enough about the type.

Can a Scala method from a base-class be renamed?

I'm rather new to Scala, and I am trying to use lift-squeryl-record in Lift. Scala is 2.8.1 and Lift is 2.3. My problem is that I wanted to use (Mega)ProtoUser from Record, but it conflicts with lift-squeryl-record.
I followed the instruction of:
lift-squeryl-record example
which did not use ProtoUser, and tried to define my user like this:
trait AbstractUser[MyType <: AbstractUser[MyType]] extends
    ProtoUser[MyType] with Record[MyType] with KeyedRecord[Long] {
NB: KeyedRecord is from package net.liftweb.squerylrecord, not net.liftweb.record
Then I get the following error:
overriding lazy value id in trait ProtoUser of type net.liftweb.record.field.LongField[MyType]; method id in trait KeyedRecord of type => Long needs `override` modifier
This is because both KeyedRecord and ProtoUser define conflicting id methods. Since I do not control the code of either trait, is there any "Scala" way around it, like renaming one of the methods? I really don't want to have to choose between the two. :(
No, you cannot rename methods in a subclass. If there are two conflicting method signatures from parent types, you will need to resort to another pattern, such as indirection through delegation (http://en.wikipedia.org/wiki/Delegation_pattern):
trait AbstractUser[MyType <: AbstractUser[MyType]] extends ProtoUser[MyType] {
  def record: Record[MyType] with KeyedRecord[Long]
}
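To make the delegation idea a bit more concrete, here is a hedged sketch with invented names (the real ProtoUser/KeyedRecord members obviously differ):
// Two parents that both want to own `id`, with incompatible types:
trait KeyedLike  { def id: Long }
trait RecordLike { def id: String }

// Rather than mixing in both, keep one of them behind a delegate and
// re-expose its member under a different name:
class User(keyed: KeyedLike) extends RecordLike {
  def id: String = "user-" + keyed.id
  def keyedId: Long = keyed.id
}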

Why does Scala complain about illegal inheritance when there are raw types in the class hierarchy?

I'm writing a wrapper that takes a Scala ObservableBuffer and fires events compatible with the Eclipse/JFace Databinding framework.
In the Databinding framework, there is an abstract ObservableList that decorates a normal Java list. I wanted to reuse this base class, but even this simple code fails:
val list = new java.util.ArrayList[Int]
val obsList = new ObservableList(list, null) {}
with errors:
illegal inheritance; anonymous class $anon inherits different type instances of trait Collection: java.util.Collection[E] and java.util.Collection[E]
illegal inheritance; anonymous class $anon inherits different type instances of trait Iterable: java.lang.Iterable[E] and java.lang.Iterable[E]
Why? Does it have to do with raw types? ObservableList implements IObservableList, which extends the raw type java.util.List. Is this expected behavior, and how can I work around it?
Having a Java raw type in the inheritance hierarchy causes this kind of problem. One solution is to write a tiny bit of Java to fix up the raw type, as in the answer to Scala class can't override compare method from Java Interface which extends java.util.comparator
For more about why raw types are problematic for Scala, see this bug: http://lampsvn.epfl.ch/trac/scala/ticket/1737 . That bug has a workaround using existential types that probably won't work for this particular case, at least not without a lot of casting, because the java.util.List type parameter is in both covariant and contravariant positions.
From looking at the javadoc the argument of the constructor isn't parameterized.
I'd try this:
val list = new java.util.ArrayList[_]
val obsList = new ObservableList(list, null) {}