Consider the code below.
My expectation is that T must be of type B or A, so the call lowerBound(new D) should probably not compile (?). Similar experiments with an upper bound give me the expected typecheck errors.
Thanks for any hints.
object varianceCheck {
  class A {
    override def toString = this.getClass.getCanonicalName
  }
  class B extends A
  class C extends B
  class D extends C

  def lowerBound[T >: B](param: T) = { param }
  println(lowerBound(new D)) //> varianceCheck.D
}
With your implementation you can write:
scala> def lowerBound[T >: B](param: T) = { param }
lowerBound: [T >: B](param: T)T
scala> lowerBound(new AnyRef {})
res0: AnyRef = $anon$1@2eef224
where AnyRef is a supertype of all object/reference types (it is in fact an alias for Java's Object class). And this is right: T >: B expresses that the type parameter T (or an abstract type T) refers to a supertype of type B.
Your toString example just hides the issue, because every object type has that method. If you change it to, say, someMethod (defined only on B and its subclasses), your lowerBound won't compile:
<console>:18: error: value someMethod is not a member of type parameter T
def lowerBound[T >: B](param: T) = { param.someMethod }
If you change this to T <: B, which means that the type T is a subtype of B, then everything is fine, because param is then guaranteed to have the someMethod method:
def lowerBound[T <: B](param: T) = { param.someMethod }
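To make this concrete, here is a minimal self-contained sketch; someMethod and its body are made up for illustration:

class A
class B extends A { def someMethod: String = "from B" }
class C extends B
class D extends C

// Does not compile: T is only known to be *some* supertype of B (e.g. A or AnyRef),
// and such a type need not define someMethod.
// def lower[T >: B](param: T) = param.someMethod

// Compiles: T is a subtype of B, so param certainly has someMethod.
def upper[T <: B](param: T) = param.someMethod

upper(new D) // "from B"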
Faced the same question as well. It seems the compiler does a great job of helping us shoot ourselves in the foot. If you extract the result of lowerBound you may notice that it is of type B:
val b: B = lowerBound(new D)
println(b) //> varianceCheck.D
And if you then try to request type D explicitly:
lowerBound[D](new D)
you will see the compiler error that you expected:
Error:(12, 21) type arguments [D] do not conform to method lowerBound's type parameter bounds [T >: B]
lowerBound[D](new D)
My scenario is like this:
trait A {
  type B
  def foo(b: B)
}

trait C[D <: A] {
  val d: D
  def createB(): D#B
  def bar() {
    d.foo(createB)
  }
}
In the REPL, it complains:
<console>:24: error: type mismatch;
found : D#B
required: C.this.d.B
d.foo(createB)
What's wrong with this? And (if possible at all) how can I correct this code?
D#B is a type projection, and is not the same as d.B. You have a type mismatch because in foo, B actually meant this.B, which as said is not the same as D#B (the latter being more general).
Informally, you can think of D#B as representing any possible type that the abstract type B can take for any instance of D, while d.B is the type of B for the specific instance d.
See What does the `#` operator mean in Scala? and What is meant by Scala's path-dependent types? for some context.
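Here is a small hedged sketch of the distinction (all names are made up; foo is given an explicit Unit result type to keep the sketch self-contained):

trait A { type B; def foo(b: B): Unit }

class IntA extends A { type B = Int; def foo(b: Int): Unit = println(b) }
class StrA extends A { type B = String; def foo(b: String): Unit = println(b) }

def useProjection(d: A, b: A#B): Unit = {
  // d.foo(b)  // does not compile: found A#B, required d.B
  //           // b could come from *any* instance of A, not necessarily from d
}

def usePathDependent(d: A)(b: d.B): Unit =
  d.foo(b)     // compiles: b is tied to this particular d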
One way to make it compile is by changing createB's return type to d.B:
def createB(): d.B
However in many cases such a solution is too restrictive because you are tied to the specific instance d, which might not be what you had in mind.
Another solution is then to replace the abstract type with a type parameter (though it is more verbose):
trait A[B] {
  def foo(b: B)
}

trait C[B, D <: A[B]] {
  val d: D
  def createB(): B
  def bar() {
    d.foo(createB)
  }
}
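For what it's worth, here is a hedged sketch of how the type-parameter version wires together; IntA and IntC are invented names:

class IntA extends A[Int] {
  def foo(b: Int): Unit = println(b)
}

class IntC extends C[Int, IntA] {
  val d = new IntA
  def createB(): Int = 42
}

new IntC().bar() // prints 42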
Update: given this answer, I'm not sure whether this should be considered a bug or not.
This is a bug: SI-4377. An explicit type ascription yields
trait C[D <: A] {
  val d: D
  def createB(): D#B
  def bar() {
    (d: D).foo(createB)
    // [error] found   : D#B
    // [error] required: _3.B where val _3: D
  }
}
which looks like the implementation leaking. There's a workaround which involves casting to an intersection type (dangerous, casting is wrong etc; see my other answer here)
trait A {
  type B
  def foo(b: B)
}

case object A {
  type is[A0 <: A] = A0 {
    type B = A0#B
  }
  def is[A0 <: A](a: A0): is[A0] = a.asInstanceOf[is[A0]]
}
trait C[D <: A] {
  val d: D
  def createB(): D#B
  def bar() {
    A.is(d).foo(createB) // use it here!
  }
}
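As a hedged sanity check that the workaround composes at a concrete type (IntA and IntC are invented names; A.is still performs a cast under the hood, as noted above):

class IntA extends A {
  type B = Int
  def foo(b: Int): Unit = println(b)
}

class IntC extends C[IntA] {
  val d = new IntA
  def createB(): IntA#B = 42
}

new IntC().bar() // prints 42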
I have this scala code:
class Creature {
  override def toString = "I exist"
}

class Person(val name: String) extends Creature {
  override def toString = name
}

class Employee(override val name: String) extends Person(name) {
  override def toString = name
}

class Test[T](val x: T = null) {
  def upperBound[U <: T](v: U): Test[U] = {
    new Test[U](v)
  }
  def lowerBound[U >: T](v: U): Test[U] = {
    new Test[U](v)
  }
}
We can see the hierarchy relationship between Creature, Person, and Employee:
Creature <- Person <- Employee
In the def main:
val test = new Test[Person]()
val ub = test.upperBound(new Employee("John Derp")) //#1 ok because Employee is subtype of Person
val lb = test.lowerBound(new Creature()) //#2 ok because Creature is supertype of Person
val ub2 = test.upperBound(new Creature()) //#3 error because Creature is not subtype of Person
val lb2 = test.lowerBound(new Employee("Scala Jo")) //#4 ok? how could? as Employee is not supertype of Person
What I understand is:
A <: B means A must be a subtype of, or equal to, B (upper bound)
A >: B means A must be a supertype of, or equal to, B (lower bound)
But what happened in #4? Why is there no error? Since Employee is not a supertype of Person, I expected it not to conform to the type parameter bound [U >: T].
Can anyone explain?
This example may help
scala> test.lowerBound(new Employee("Scala Jo"))
res9: Test[Person] = Test@1ba319a7
scala> test.lowerBound[Employee](new Employee("Scala Jo"))
<console>:21: error: type arguments [Employee] do not conform to method lowerBound's type parameter bounds [U >: Person]
test.lowerBound[Employee](new Employee("Scala Jo"))
^
In general, it's connected to the Liskov Substitution Principle: you can use a subtype anywhere a supertype is expected (a subtype can always be upcast to its supertype), so type inference tries to infer the nearest supertype it can (Person here).
So for ub2 there is no intersection between [Nothing..Person] and [Creature..Any], but for lb2 there is one between [Person..Any] and [Employee..Any], and its smallest element is Person. So you have to specify the type explicitly (force Employee instead of [Employee..Any]) to bypass type inference here.
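To see both intersections, a small sketch against the classes above, assuming test = new Test[Person]() as in the question:

test.upperBound[Creature](new Creature())   // does not compile: Creature is not a subtype of Person
test.lowerBound[Person](new Employee("Jo")) // compiles: U = Person, and an Employee is a Person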
An example where lb2 fails as expected, even with type inference:
scala> def aaa[T, A >: T](a: A)(t: T, a2: A) = t
aaa: [T, A >: T](a: A)(t: T, a2: A)T
scala> aaa(new Employee(""))(new Person(""), new Employee(""))
<console>:19: error: type arguments [Person,Employee] do not conform to method aaa's type parameter bounds [T,A >: T]
aaa(new Employee(""))(new Person(""), new Employee(""))
^
Here type A is inferred from the first parameter list and fixed as Employee, so the second parameter list (which triggers the error) has no choice but to use it as is.
Or the most commonly used kind of example, with an invariant O[T]:
scala> case class O[T](a: T)
defined class O
scala> def aaa[T, A >: T](t: T, a2: O[A]) = t
aaa: [T, A >: T](t: T, a2: O[A])T
scala> aaa(new Person(""), O(new Employee("")))
<console>:21: error: type mismatch;
found : O[Employee]
required: O[Person]
Note: Employee <: Person, but class O is invariant in type T.
You may wish to define T as +T instead. (SLS 4.5)
aaa(new Person(""), O(new Employee("")))
^
A is fixed to Person here (forced by T = Person and the bound A >: T), and it is not possible to pass O[Employee] where O[Person] is required, due to invariance by default.
I think it happens because you can pass any Person to your
Test[Person].lowerBound(Person)
and since Employee is a subclass of Person it is treated as a Person, which is legal here.
Consider the following code
trait Foo[T] {
  def one: Foo[_ >: T]
  def two: T
  def three(x: T)
}

def test[T](f: Foo[T]) = {
  val b = f.one
  b.three(b.two)
}
The method test fails to type check. It says:
found : (some other)_$1(in value b)
required: _$1(in value b)
val x = b.three(b.two)
If I am interpreting this correctly, the compiler thinks that b in method test has a type that looks like this (not legal syntax, but hopefully clearer):
trait Foo {
  def two: X where ∃ X >: T
  def three(x: X where ∃ X >: T)
}
What I was hoping for was that it would have a type like this:
∃ X >: T such that trait Foo {
  def two: X
  def three(x: X)
}
The intent being that while the precise type X is not known, the compiler knows that it's the same unknown type being returned by "two" and expected by "three". This seems different from what happens with normal universally quantified generics. The following compiles, but exposes the type parameter X, which I want to hide since it will vary between instances of Foo:
trait Foo[T] {
  def one[X >: T]: Foo[X]
  def two: T
  def three(x: T)
}

def test[T, X >: T](f: Foo[T]) = {
  val b = f.one[X]
  b.three(b.two)
}
Is there a way to get the same behaviour for existentially quantified generics that we get when they're universally quantified?
def one: Foo[_ >: T] is equivalent to
def one: Foo[U] forSome { type U >: T }
This one instead works:
def one: Foo[U forSome {type U >: T}]
I do not, however, understand why this would make a difference. It seems to me like it should not. (shrug)
The problem is the compiler thinks that
b.two: _ >: T
b.three(_ >: T)
i.e. two returns some supertype of T and three requires some supertype of T. But one supertype of T is not necessarily assignment-compatible with another supertype of T, as in this example:
A >: B >: C
def get:A
def put(B)
put(get) // type mismatch
So if all the information we have is that they are supertypes of T then we cannot do this safely.
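To make the schematic example concrete, a hedged sketch in which Animal, Mammal and Dog stand in for A, B and C:

class Animal
class Mammal extends Animal
class Dog extends Mammal

trait Box {
  def get: Animal          // returns *some* supertype of Dog
  def put(x: Mammal): Unit // requires *another* supertype of Dog
}

def broken(b: Box): Unit = {
  // b.put(b.get) // does not compile: an Animal is not necessarily a Mammal
}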
We have to explicitly tell the compiler that they are the same supertype of T.
trait Foo[T] {
  type U >: T
  def one: Foo[U]
  def two: T
  def three(x: T)
}
Then just set U when you implement the trait:
val x = new Foo[Dog] {
  type U = Mammal
  ...
I would prefer this approach over the existential types due to the cleaner syntax and the fact that this is core Scala and does not need the feature to be imported.
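For completeness, a hedged sketch checking that the test method from the question typechecks against the type-member version (three is given an explicit Unit result type just to keep the sketch self-contained):

trait Foo[T] {
  type U >: T
  def one: Foo[U]
  def two: T
  def three(x: T): Unit
}

def test[T](f: Foo[T]): Unit = {
  val b = f.one  // b: Foo[f.U]
  b.three(b.two) // compiles: b.two and b.three's parameter are both of type f.U
}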
I already know that:
<: is Scala's syntax for a type constraint (a bound on a type parameter),
while <:< is a type that leverages Scala implicits to achieve a type constraint,
for example:
object Test {
  // the functions foo and bar can have the same effect
  def foo[A](i: A)(implicit ev: A <:< java.io.Serializable) = i
  foo(1)    // compile error
  foo("hi")

  def bar[A <: java.io.Serializable](i: A) = i
  bar(1)    // compile error
  bar("hi")
}
But I want to know: when do we need to use <: and when <:<?
And if we already have <:, why do we need <:<?
Thanks!
The main difference between the two is that <: is a constraint on a type parameter, while <:< is a type for which the compiler has to find evidence when it is used as an implicit parameter. What that means for our program is that in the <: case, the type inferencer will try to find a type that satisfies the constraint. E.g.:
def foo[A, B <: A](a: A, b: B) = (a,b)
scala> foo(1, List(1,2,3))
res1: (Any, List[Int]) = (1,List(1, 2, 3))
Here the inferencer finds that Int and List[Int] have the common supertype Any, so it infers Any for A in order to satisfy B <: A.
<:< is more restrictive, because the type inferencer runs before implicit resolution. So the types are already fixed when the compiler tries to find the evidence. E.g.:
def bar[A,B](a: A, b: B)(implicit ev: B <:< A) = (a,b)
scala> bar(1,1)
res2: (Int, Int) = (1,1)
scala> bar(1,List(1,2,3))
<console>:9: error: Cannot prove that List[Int] <:< Int.
bar(1,List(1,2,3))
^
1. def bar[A <: java.io.Serializable](i:A) = i
<: guarantees that the type A of the argument i will be a subtype of Serializable.
2. def foo[A](i:A)(implicit ev : A <:< java.io.Serializable) = i
<:< guarantees that the execution context will contain an implicit value (for the ev parameter) witnessing that A is a subtype of Serializable.
This implicit is defined in Predef.scala; for the foo method it serves as the proof that the type parameter A is a subtype of Serializable:
implicit def conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]
A contrived example of using the <:< operator:
class Boo[A](x: A) {
  def get: A = x
  def div(implicit ev: A <:< Double) = x / 2
  def inc(implicit ev: A <:< Int) = x + 1
}
val a = new Boo("hi")
a.get // - OK
a.div // - compile time error String not subtype of Double
a.inc // - compile tile error String not subtype of Int
val b = new Boo(10.0)
b.get // - OK
b.div // - OK
b.inc // - compile time error Double not subtype of Int
val c = new Boo(10)
c.get // - OK
c.div // - compile time error Int not subtype of Double
c.inc // - OK
If we do not call the methods whose <:< condition is not satisfied, then everything compiles and runs.
There are definitely differences between <: and <:<; here is my attempt at explaining which one you should pick.
Let's take two classes:
trait U
class V extends U
The type constraint <: always acts during type inference; that's the only thing it can do: constrain the type parameter on its left-hand side.
The constrained type has to be referenced somewhere, usually in the parameter list (or return type), as in:
def whatever[A <: U](p: A): List[A] = ???
That way, the compiler will throw an error if the input is not a subclass of U, and at the same time allow you to refer to the input's type by name for later use (for example in the return type). Note that if you don't have that second requirement, all this isn't necessary (with exceptions...), as in:
def whatever(p: U): String = ??? // this will obviously only accept T <: U
The Generalized Type Constraint <:< on the other hand, has two uses:
You can use it as an after-the-fact proof that some type was inferred. As in:
class List[+A] {
  def sum(implicit ev: A =:= Int) = ???
}
You can create such a list of any type, but sum can only be called when you have the proof that A is actually Int.
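A hedged, runnable rendering of that idea; MySeq is a made-up name, kept invariant here so the =:= evidence parameter does not interact with variance checking:

class MySeq[A](elems: A*) {
  // ev is both the proof that A is Int and a function A => Int, so we can map with it
  def sum(implicit ev: A =:= Int): Int = elems.map(ev).sum
}

new MySeq(1, 2, 3).sum     // 6: the evidence A =:= Int is found
// new MySeq("a", "b").sum // does not compile: Cannot prove that String =:= Int.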
You can use the above 'proof' as a way to infer even more types. This allows you to infer types in two steps instead of one.
For example, in the above List class, you could add a flatten method:
def flatten[B](implicit ev: A <:< List[B]): List[B]
This isn't just a proof, this is a way to grab that inner type B with A now fixed.
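And a minimal hedged sketch of such a flatten on a hand-rolled covariant container (the MyList name and its List backing are made up):

class MyList[+A](val elems: List[A]) {
  def flatten[B](implicit ev: A <:< MyList[B]): MyList[B] =
    new MyList(elems.flatMap(a => ev(a).elems)) // ev converts each element to MyList[B]
}

val nested = new MyList(List(new MyList(List(1, 2)), new MyList(List(3))))
nested.flatten.elems                // List(1, 2, 3): B is inferred to be Int
// new MyList(List(1, 2, 3)).flatten // does not compile: Int is not a MyList[B]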
This can be used within the same method as well: imagine you want to write a utility sort function, and you want both the element type T and the collection type Coll. You could be tempted to write the following:
def sort[T, Coll <: Seq[T]](l: Coll): Coll
But T isn't constrained to be anything in there: it appears neither in the parameter types nor in the result type. So T will end up as Nothing, or Any, or whatever the compiler wants, really (usually Nothing). But with this version:
def sort[T, Coll](l: Coll)(implicit ev: Coll <:< Seq[T]): Coll
Now T appears in the parameter's types. There will be two inference runs (one per parameter list): Coll will be inferred to whatever was given, and then, later on, an implicit will be looked for, and if found, T will be inferred with Coll now fixed. This essentially extracts the type parameter T from the previously-inferred Coll.
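For illustration, a hedged sketch of both versions side by side; the bodies just return the input unsorted, and sortA/sortB are made-up names:

def sortA[T, Coll <: Seq[T]](l: Coll): Coll = l
def sortB[T, Coll](l: Coll)(implicit ev: Coll <:< Seq[T]): Coll = l

// sortA(List(3, 1, 2)) // does not compile: inferred [Nothing, List[Int]] does not
//                      // conform to the bounds [T, Coll <: Seq[T]]
sortB(List(3, 1, 2))    // compiles: Coll = List[Int] is fixed first, then T = Int via the implicit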
So essentially, <:< checks (and potentially infers) types as a side-effect of implicit resolution, so it can be used in different places / at different times than type parameter inference. When they happen to do the same thing, stick to <:.
After some thinking, I think there is a difference.
For example:
object TestAgain {
  class Test[A](a: A) {
    def foo[A <: AnyRef] = a
    def bar(implicit ev: A <:< AnyRef) = a
  }

  val test = new Test(1)
  test.foo // returns 1
  test.bar // error: Cannot prove that Int <:< AnyRef.
}
This means:
The scope of <: is just the method's own type parameter in foo[A <: AnyRef]. In the example, foo declares its own type parameter A, which shadows (and is unrelated to) the A of class Test[A].
The scope of <:<, on the other hand, is whatever A is visible; since bar declares no type parameter of its own, it refers to the A of class Test[A].
So I think that's the main difference.
Below, the first case succeeds and the second fails. Why do I need an explicit evidence type / why does this Scala type bound fail? What's the type inferencer's particular limitation here in solving for A?
scala> implicit def view[A, C](xs: C)(implicit ev: C <:< Iterable[A]) = new { def bar = 0 }
view: [A, C](xs: C)(implicit ev: <:<[C,scala.collection.immutable.Iterable[A]])java.lang.Object{def bar: Int}
scala> view(List(1)) bar
res37: Int = 0
scala> implicit def view[A, C <: Seq[A]](xs: C) = new { def bar = 0 }
view: [A, C <: scala.collection.immutable.Seq[A]](xs: C)java.lang.Object{def bar: Int}
scala> view(List(1)) bar
<console>:149: error: inferred type arguments [Nothing,List[Int]] do not conform to method view's type parameter bounds [A,C <: scala.collection.immutable.Seq[A]]
view(List(1)) bar
^
Type inference unfortunately does not deal well with type parameters (such as C) that are bounded by (types that contain) other type parameters in the same type parameter list (here, A).
The version that encodes the constraint using an implicit argument does not suffer from this restriction since constraints imposed by implicits are solved separately from constraints imposed by type parameter bounds.
You can also avoid the cycle by splitting up the type of xs into the type constructor that abstracts over the collection (CC) and the (proper) type (A) that abstracts over its elements, like so:
scala> implicit def view[A, CC[x] <: Seq[x]](xs: CC[A]) = new { def bar = 0 }
view: [A, CC[x] <: Seq[x]](xs: CC[A])Object{def bar: Int}
scala> view(List(1)) bar
res0: Int = 0
For more details on types like CC, please see What is a higher kinded type in Scala?
I don't really know why this happens, although my hunch is it has to do with the variance of Seq's type parameter. I could get the following to work, though:
implicit def view[A, C[~] <: Seq[~] forSome { type ~ }](xs: C[A]) =
new { def bar = 0 }
Is there a reason why you want C? I mean, do you plan to use C inside the method? Because if not, why not just
implicit def view[A](xs: Seq[A]) = new { def bar = 0 }
?