I am having trouble with a class declaration in Scala:
class Class2[
  A,
  B <: Class2[A, B, C],
  C <: Class3[A, C]
]
class Class3[
  A,
  C <: Class3[A, C]
]
class Class1[
  A,
  B <: Class2[A, B, C],
  C <: Class3[A, C]
](obj: B) { ... }
This declaration is correct, but every time I want to create an instance of the class I need to specify the type parameters A and C by hand, like:
val x = new Class1[Type1, Type2, Type3](correct_object_of_Type2)
If I instead try val x = new Class1(obj), it leads to the error ...types [Nothing, B, Nothing] do not conform to [A, B, C]...
Why doesn't Scala infer types A and C from B? And how can I declare the class so that the compiler is able to work out A and C by itself? Thanks
EDIT
I'm sorry for the formulation; the original task is to correctly define Class1 like:
class Class2[
  A,
  B <: Class2[A, B, C],
  C <: Class3[A, C]
]
class Class3[
  A,
  C <: Class3[A, C]
]
??class Class1(obj : Class2) { ... }??
... so that it is correct to call val x = new Class1(obj), where obj: Class2. When I try to define it as above, I get the error Class2 takes type parameters. Any ideas?
Sorry for inaccuracy.
It's possible to have the type parameters inferred by encoding the constraints as implicit parameters:
class Class2[X, Y, Z]
class Class3[X, Y]
class Class1[A, B, C](obj: B)(implicit
  evB: B <:< Class2[A, B, C],
  evC: C <:< Class3[A, C]
)
And then:
scala> class Foo extends Class3[String, Foo]
defined class Foo
scala> class Bar extends Class2[String, Bar, Foo]
defined class Bar
scala> new Class1(new Bar)
res0: Class1[String,Bar,Foo] = Class1@ff5b51f
If you need to use instances of B or C as Class2[A, B, C] or Class3[A, C] in the definition of Class1, you can apply the appropriate evidence parameter (evB or evC) to them.
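Since <:< extends Function1, the evidence values can be applied as functions. For example, a minimal sketch of the same Class1 with one member added (asClass2 is an illustrative name, not from the question):
class Class1[A, B, C](obj: B)(implicit
  evB: B <:< Class2[A, B, C],
  evC: C <:< Class3[A, C]
) {
  // evB behaves as a function B => Class2[A, B, C], so applying it
  // upcasts obj and makes Class2's members available
  def asClass2: Class2[A, B, C] = evB(obj)
}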
Your version doesn't work because Scala's type inference system is very limited. It will first solve for A and end up at Nothing, since the argument to the constructor doesn't have an A in it. Next it will try to solve for B and not be able to find a value that satisfies the constraint, since it's already decided A is Nothing.
So you have to decide whether type inference is worth an extra bit of complexity and runtime overhead. Sometimes it is, but in my experience when you have classes like this with relationships that are already fairly complex, it's usually not.
Related
Generally, I use <: to represent a subtype relationship like A <: B, either as part of a type parameter or as a type member. While going through some code, I came across the "<:<" representation. I found it in Predef.scala and, surprisingly, it is defined as an abstract class.
Doc says that:
An instance of A <:< B witnesses that A is a subtype of B. Requiring an implicit argument of the type A <:< B encodes the generalized constraint A <: B.
Could someone please clarify exactly what the difference is between the two, given that both represent the same "subtype" relationship (AFAIK)? Also, please suggest their correct usage (i.e., where <:< is preferred over <:).
[A <: B] declares a type parameter, A, with a known restriction: it must be type B (an existing type) or a subtype thereof.
class A // A and B are unrelated
class B
// these both compile
def f1[A <: B]() = ??? // A is the type parameter, not a reference to class A
def f2[B <: A]() = ??? // B is the type parameter, not a reference to class B
[A <:< B] is used to test a relationship between existing types.
class B
class A extends B
// A and B must already exist and have this relationship or this won't compile
implicitly[A <:< B]
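A typical place where <:< is preferred over <: is when only one method of an otherwise unconstrained class needs the constraint. A minimal sketch (Container and double are illustrative names):
class Container[A](value: A) {
  // callable only when the compiler can prove A is (a subtype of) Int;
  // the evidence ev acts as a function A => Int
  def double(implicit ev: A <:< Int): Int = ev(value) * 2
}
new Container(21).double // compiles: A = Int
// new Container("x").double // does not compile: no implicit String <:< Int available
Had the bound been written as [A <: Int] on the class itself, a Container[String] could never have been constructed at all.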
Sometimes Scala has trouble inferring the types. The goal here is to understand why, and to try to help Scala do better inference.
The problem is better explained with a simple example:
trait A
trait B
class G[Ao <: A, Bo <: B](a: Ao, b: Bo)
// This is my "complex" class with 3 type parameters
class X[Ao <: A, Bo <: B, Go <: G[Ao,Bo]](a: Ao, b: Bo, g: Go)
val a = new A{}
val b = new B{}
val g = new G(a,b)
Here the type of OO below is perfectly inferred:
class OO extends X(a,b,g) // Type of X[A,B,G[A,B]]
However, if we change one of the arguments to be an Option (or any collection, for that matter) and we provide an empty collection (or None), then inference doesn't work.
case class X[Ao <: A, Bo <: B, Go <: G[Ao,Bo]](a: Ao, b: List[Bo], g: Go)
class OO extends X(a,List(),g)
<console>:XX: error: inferred type arguments [A,Nothing,G[A,B]] do not conform to class X's type parameter bounds [Ao <: A,Bo <: B,Go <: G[Ao,Bo]]
class OO extends X(a,List(),g)
^
<console>:XX: error: type mismatch;
found : A
required: Ao
class OO extends X(a,List(),g)
^
<console>:XX: error: type mismatch;
found : G[A,B]
required: Go
class OO extends X(a,List(),g)
This can be fixed by explicitly passing all of X's type parameters, like this:
class OO extends X[A,B,G[A,B]](a,List(),g)
The thing is, g already carries its types, and from the definition of X we can see that G takes the first two type parameters of X; so if we know g's type, we have everything we need to infer the parameters of X. Can I do something different to help Scala with the inference? I have a class with many more type parameters than X, and right now I always have to define them explicitly. I am trying to see if there is something I can do to help Scala successfully infer the types.
I guess you could use higher-kinded types:
case class X[Ao <: A, Bo <: B, Go[GAo <: Ao, GBo <: Bo] <: G[GAo, GBo]](a: Ao, b: List[Bo], g: Go[Ao, Bo])
class OO extends X(a,List(),g)
or, if your type parameters are covariant:
case class G[+Ao <: A, +Bo <: B](a: Ao, b: Bo)
case class X[Ao <: A, Bo <: B, Go[_ <: Ao, _ <: Bo] <: G[Ao, Bo]](a: Ao, b: List[Bo], g: Go[Ao, Bo])
I know from experience that the Scala compiler can't infer types from parameters of the same group when they depend on each other at "different levels", as in this example. A higher-kinded type allows me to explicitly declare the type bounds and help the compiler infer the right types.
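As a side note (my own observation, separate from the higher-kinded trick): the failure is triggered by List() being typed as List[Nothing], so pinning down the element type of the empty list also lets the original declaration of X infer cleanly:
class OO extends X(a, List.empty[B], g) // Bo is now inferred as B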
Say I have:
class Class[CC[A, B]]
class Thing[A, B <: Int]
class Test extends Class[Thing] // compile error here
I get the compiler error:
kinds of the type arguments (cspsolver.Thing) do not conform to the expected kinds of the type parameters (type CC) in class Class.
cspsolver.Thing's type parameters do not match type CC's expected parameters: type B's bounds <: Int are stricter than type B's declared bounds >: Nothing <: Any
However when I modify the code such that it looks like this:
class Class[CC[A, B]]
class Thing[A, B] {
  type B <: Int
}
class Test extends Class[Thing]
it compiles fine. Aren't they both functionally equivalent?
The reason is given in the compiler message. In Class you expect an unrestricted CC, while Thing has the restriction that the second type parameter must be <: Int. One possibility is to add the same constraint to Class, as in
class Class[CC[A,B <: Int]]
class Thing[A, B <: Int]
class Test extends Class[Thing]
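Note that the conformance check is one-directional: a type constructor whose own bounds are weaker (more permissive) than CC's still conforms, since it accepts everything CC promises to pass it. A small sketch of this, reusing the constrained Class above (Loose and Test2 are illustrative names):
class Loose[A, B] // B unconstrained, so it accepts every B <: Int as well
class Test2 extends Class[Loose] // compiles: Loose's bounds subsume CC's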
Elaborating on Petr Pudlák's explanation, here is what I assume happens: The compiler tries to unify CC[A, B] with Thing[A, B <: Int]. According to the declaration of B in CC, B's upper type-bound is Any, which is picked to instantiate B. The B in Thing, however, is expected to have Int as its upper type-bound, and the compiler thus fails with the error message you got.
This is necessary in order to preserve soundness of the type system, as illustrated in the following sketch. Assume that Thing defines an operation that relies on the fact that its B <: Int, e.g.,
class Thing[A, B <: Int] {
  def f(b: B) = 2 * b
}
If you declared Class as
class Class[CC[A, B]] {
  val c: CC[String, Boolean]
}
and Test as
class Test extends Class[Thing] {
  val c = new Thing[String, Boolean]
}
without the compiler complaining, then you could make the following call
new Test().c.f(true)
which is obviously not safe.
I am trying to get a better understanding of the following behaviour:
scala> class C[-A, +B <: A]
<console>:7: error: contravariant type A occurs in covariant position
in type >: Nothing <: A of type B
class C[-A, +B <: A]
^
However the following works:
scala> class C[-A, +B <% A]
defined class C
I can see that there could be issues from the variance of the bounding and bounded variables being opposite, though I am not clear on what the specific problem is.
I am even less clear on why changing the type bound to a view bound makes things OK. In the absence of applicable implicit conversions, I would expect the two definitions to have largely the same effect. If anything, I would expect a view bound to provide more opportunities for mischief.
For a bit of background, I am defining classes that are in some ways like functions, and I wanted to do something like
class CompositeFunc[-A, +B <: C, -C, +D](f1: BaseFunc[A, B], f2: BaseFunc[C, D])
  extends BaseFunc[A, D]
Arguably
class CompositeFunc[-A, +B <% C, -C, +D](f1: BaseFunc[A, B], f2: BaseFunc[C, D])
  extends BaseFunc[A, D]
is actually preferable, but I would still like to better understand what is going on here.
First the easy one:
class C[-A, +B <% A]
This is equivalent to
class C[-A, +B](implicit view: B => A)
Since view is not publicly returned, it is not in a position that would constrain the variance of A or B. E.g.
class C[-A, +B](val view: B => A) // error: B in contravariant position in view
In other words, C[-A, +B <% A] is no different than C[-A, +B] in terms of constraints; the view argument doesn't change anything.
The upper bound case C[-A, +B <: A] I'm not sure about. The Scala Language Specification in §4.5 states
The variance position of the lower bound of a type declaration or type parameter is the opposite of the variance position of the type declaration or parameter.
The variance of B does not seem to be involved, but generally the upper bound must be covariant:
trait C[-A, B <: A] // contravariant type A occurs in covariant position
This must somehow produce a problem? But I couldn't come up with an example which proves that this construction becomes unsound in a particular case....
As for the composed function, why not just
class Composite[-A, B, +C](g: A => B, h: B => C) extends (A => C) {
  def apply(a: A) = h(g(a))
}
EDIT: For example:
import collection.LinearSeq
def compose[A](g: Traversable[A] => IndexedSeq[A], h: Traversable[A] => LinearSeq[A]) =
  new Composite(g, h)
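For instance (my own illustration, assuming a Scala 2.x version where Traversable still exists), the middle type B is inferred as IndexedSeq[Int] here:
val toIndexed: Traversable[Int] => IndexedSeq[Int] = _.toIndexedSeq
val toLinear: Traversable[Int] => LinearSeq[Int] = _.toList
val both = compose(toIndexed, toLinear)
both(Vector(1, 2, 3)) // List(1, 2, 3)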
The Scala example below shows a situation where a required implicit parameter (of type TC[C]) can be provided by both implicit methods in scope, a and b. But when run, no ambiguity results, and it prints "B".
object Example extends App {
  trait A
  trait B extends A
  class C extends B
  class TC[X](val label: String)
  implicit def a[T <: A]: TC[T] = new TC[T]("A")
  implicit def b[T <: B]: TC[T] = new TC[T]("B")
  println(implicitly[TC[C]].label)
}
Note that the only difference between the implicit methods a and b is the type bounds, and both of them can match TC[C]. If method b is removed, then a is implicitly resolved instead.
While I find this behaviour convenient in practice, I'd like to understand whether it is a specified language feature, or just an implementation quirk.
Is there a language rule or principle by which the compiler prioritizes b over a, rather than viewing them as equivalent and thus ambiguous options?
There is a rule about this prioritization, which says that the more specific of the two matches gets higher priority. From the Scala Reference, under §6.26.3, 'Overloading Resolution':
The relative weight of an alternative A over an alternative B is a number from 0 to 2, defined as the sum of
1 if A is as specific as B, 0 otherwise, and
1 if A is defined in a class or object which is derived from the class or object defining B, 0 otherwise.
A class or object C is derived from a class or object D if one of the
following holds:
C is a subclass of D, or
C is a companion object of a class derived from D, or
D is a companion object of a class from which C is derived.
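In the example above it is the first clause that decides: b is more specific than a, because every type satisfying b's bound (T <: B) also satisfies a's bound (B extends A), while the reverse does not hold, so b gets the higher relative weight. The second clause is what libraries exploit deliberately with the low-priority-implicits pattern; a sketch reusing A, B, and TC from the question (LowPriorityTC is an illustrative name):
trait LowPriorityTC {
  implicit def a[T <: A]: TC[T] = new TC[T]("A")
}
object TC extends LowPriorityTC {
  // defined in an object derived from LowPriorityTC, so b now also wins by rule 2
  implicit def b[T <: B]: TC[T] = new TC[T]("B")
}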