Covariance and Contravariance in Scala

I am confused about why a covariant type parameter is restricted in method parameters. I have read through many materials, but I am still unable to grasp the concept below.
class SomeThing[+T] {
  def method(a: T) = {...} // <-- produces error
}
In the above piece of code, a is of type T. Why can we not pass subtypes of T? Everything that method expects of the parameter a can be fulfilled perfectly well by a subtype of T.
Similarly, when the type T is contravariant (-T), I would expect that it cannot be passed as a method argument, yet it is allowed. Why I think it should not be allowed: say, for example, method invokes a member on a that is declared in T. When we pass a supertype of T, that member may NOT be present. Yet it is allowed by the compiler. This confuses me.
class SomeThing[-T] {
  def method(a: T) = {...} // <-- allowed
}
So, looking at the above, it seems to me that it is the covariant type that should be allowed in method arguments as well as in the return type, and the contravariant type that should not.
Can someone please help me understand?

The key thing about variance is that it affects how the class looks from the outside.
Covariance says that an instance of SomeThing[Int] can be treated as an instance of SomeThing[AnyVal] because AnyVal is a superclass of Int.
In this case your method
def method(a: Int)
would become
def method(a: AnyVal)
This is clearly a problem because you can now pass a Double to a method of SomeThing[Int] that should only accept Int values. Remember that the actual object does not change, only the way that it is perceived by the type system.
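A minimal sketch of that scenario (the names mirror the question; a getter stands in for the rejected method, and the commented-out line is what the compiler refuses):
class SomeThing[+T](init: T) {
  def get: T = init                        // fine: T in return (covariant) position
  // def method(a: T): Unit = ()           // rejected: T in parameter position
}
val intThing: SomeThing[Int] = new SomeThing(42)
val anyThing: SomeThing[AnyVal] = intThing // allowed because T is covariant
// If method(a: T) were allowed, anyThing.method(3.14) would hand a Double to an
// object that was constructed as a SomeThing[Int].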
Contravariance says that SomeThing[AnyVal] can be treated as SomeThing[Int] so
def method(a: AnyVal)
becomes
def method(a: Int)
This is OK because you can always pass an Int where AnyVal is required.
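The contravariant direction can be sketched with a consumer-only class (Consumer is an illustrative name, not from the question):
class Consumer[-T] {
  def method(a: T): Unit = ()                  // fine: T in parameter (contravariant) position
}
val anyConsumer: Consumer[AnyVal] = new Consumer[AnyVal]
val intConsumer: Consumer[Int] = anyConsumer   // allowed because T is contravariant
intConsumer.method(1)                          // an Int is always an acceptable AnyVal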
If you follow the same logic through for return types you will see that it works the other way round. It is OK to return a covariant type because it can always be treated as a value of the supertype. You can't return a contravariant type because the caller would expect a value of the narrower type, and that cannot be guaranteed.

I think you are attacking the problem backwards. The fact that you can't have a:T as an argument of a method if T is covariant comes as a constraint because otherwise some illogical code would be completely valid
class A
class B extends A
class C extends B
val myBThing = new SomeThing[B]
Here, myBThing.method accepts a B, and you are right that we can pass it anything that extends B, so myBThing.method(new C) is completely fine. However, myBThing.method(new A) isn't!
Now, since we've defined SomeThing with a covariant type parameter, I can also write this:
val myAThing: SomeThing[A] = myBThing // Valid since B <: A entails SomeThing[B] <: SomeThing[A] by definition of covariance
myAThing.method(new A) // What? You're managing to send an A to a method that was implemented to receive B and its subtypes!
You can see now why we impose the constraint of not passing T as a parameter (parameters are in a "contravariant position").
We can make a similar argument for contravariance in the return position. Remember that contravariance means B <: A entails SomeThing[A] <: SomeThing[B].
Assume you're defining the following
class A
class B extends A
class SomeThing[-T](val value: T) // The compiler won't accept this: T appears in a return position (myThing.value)
// If the class definition compiled, we could write
val myThingA: SomeThing[A] = new SomeThing(new A)
val someA: A = myThingA.value
val myThingB: SomeThing[B] = myThingA // Valid because T is contravariant
val someB: B = myThingB.value // What? I only ever stored an A!
For more details, see this answer.

In the case of class SomeThing[T], placing a + or - before the T actually affects the class itself more than the type parameter.
Consider the following:
val instanceA = new SomeThing[A]
val instanceB = new SomeThing[B]
If SomeThing is invariant on T (no + or -) then the instances will have no variance relationship.
If SomeThing is covariant on T ([+T]) then the instances will have the same variance relationship as A and B have. In other words, if A is a sub-type of B (or vice versa) then the instances will reflect that same relationship.
If SomeThing is contravariant on T ([-T]) then the instances will have the opposite variance relationship as A and B have. In other words, if A is a sub-type of B then instanceB will be a sub-type of instanceA.
But the variance indicator does affect how the type parameter can be used. If T is marked + then it can't be placed in a contravariant position and, likewise, if marked - then it can't be placed in a covariant position. We bump up against this most often when defining methods.
Scala methods are very closely related to the Scala function traits: Function0, Function1, Function2, etc.
Consider the definition of Function1:
trait Function1[-T1, +R] extends AnyRef
Now let's say you want to pass a function of this type around.
def useThisFunc(f: A => B):Unit = {...}
Because a Function1 is contravariant on its received parameter and covariant on its result, all of the following are acceptable as a useThisFunc() parameter.
val a2b : A => B = ???
val supa2b : SuperOfA => B = ???
val a2subb : A => SubOfB = ???
val supa2subb : SuperOfA => SubOfB = ???
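For reference, here is a self-contained version of the snippet above that compiles as written (SuperOfA, A, B, and SubOfB are placeholder classes, and useThisFunc is given a trivial body):
class SuperOfA; class A extends SuperOfA
class B; class SubOfB extends B
def useThisFunc(f: A => B): Unit = ()
val supa2subb: SuperOfA => SubOfB = _ => new SubOfB
useThisFunc(supa2subb) // accepted: contravariant in the argument, covariant in the result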
So, in conclusion, if SomeThing is covariant on T then you can't have T as a passed parameter of a member method because FunctionX is contravariant on its parameter types. Likewise, if SomeThing is contravariant on T then you can't have T as a member method return type because FunctionX is covariant on its return type.


Covariant type A accepted in contravariant position of function argument A => B

Consider covariant type parameter A
case class Foo[+A](a: A):
  def bar(a: A) = a // error: covariant type A occurs in contravariant position
  def zar(f: A => Int) = f(a) // ok
This is a contravariant position. Why is it OK?
Foo(41).zar(_ + 1) // : Int = 42
Why is it accepted as argument to zar when it occurs in contravariant position in A => Int?
Following the idea suggested by @sarveshseri, I will provide an intuitive explanation rather than a formal one; both because I do not know the details of a formal one and because I hope this one will be more helpful for readers.
First some disclaimers:
I may have typos or some errors; please edit the answer if you notice them.
As mentioned above the mental model I am to describe is an approximation, what really happens at compile time and at runtime will be different.
During this answer I will be referring to types and classes interchangeably. This is WRONG, and I myself have pointed it out in other answers. In this case it is for the simplicity of the mental model; but I recommend that, once variance clicks, you add the distinction between types and classes to that model: https://typelevel.org/blog/2017/02/13/more-types-than-classes.html
Connected to the previous point, I will be using concrete/simple types in my examples. Thanks to type erasure and parametricity, what I will explain also applies to type constructors and other complex types.
I will also be using methods and functions interchangeably; this is also WRONG: https://docs.scala-lang.org/tutorials/FAQ/index.html#whats-the-difference-between-methods-and-functions
Now let's get to it. First let's assume that if a method accepts a Foo then it can only accept values of type Foo and nothing else, period.
Then let's assume that subtyping is actually an "implicit" function that "casts" values. So B <: A is just B => A.
And let's imagine that such a "cast" is a disguise: the value is actually the same one but seen differently (this is basically the Liskov substitution principle).
So, when you try to pass a B to a method that expects an A, this implicit cast will be inserted by the compiler. That way, at runtime, the method receives a value that is seen as one of type A rather than one of type B; but the value is still a B (strictly speaking we would be talking about classes here, not types, but I hope you get the idea).
With that in place, let's see what happens to the bar and zar methods of your covariant class Foo.
Since Foo[Dog] <: Foo[Animal], I can cast the former as the latter; and given Cat <: Animal, I can also cast the former as the latter.
Finally, I could pass this Cat disguised as an Animal to the bar method of a Foo[Dog] disguised as a Foo[Animal], but then at runtime we would be passing a Cat to a method that expects a Dog: kataplum! Unless, of course, that method was always prepared for such a situation.
That is why bar has to be defined like [B >: A](b: B). Here we are saying that we can accept any B for which the compiler can produce the implicit cast function A => B (the opposite direction to before; thanks to Any, such a type and such a function always exist). The implementation of bar then has to work for that new type B, using the cast function where required.
This means the previous example doesn't blow up at runtime, since we could have just passed the Cat directly without going through indirect disguises; the compiler is always able to infer that B should be Animal (the LUB of Cat and Dog), so it casts the Cat as an Animal and passes that to bar, together with the cast function Dog => Animal.
Note, the presence of the A => B cast function implies that the compiler can also create the F[A] => F[B] function, if F is covariant.
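As a compiling sketch of the widened signature described above (Animal, Dog, and Cat are illustrative names, and this bar simply returns its argument rather than the stored field):
class Animal; class Dog extends Animal; class Cat extends Animal
case class Foo[+A](a: A) {
  def bar[B >: A](b: B): B = b            // accepts A or anything wider
}
val dogFoo: Foo[Dog] = Foo(new Dog)
val asAnimalFoo: Foo[Animal] = dogFoo     // the covariant "disguise"
val ok: Animal = asAnimalFoo.bar(new Cat) // B is inferred as Animal; no runtime surprise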
Now let's see what happens to zar.
Again, we would cast a Foo[Dog] as a Foo[Animal] and then try to call zar with a function Animal => Int. This works at runtime because we do not even need the disguise at all: we could have passed such a function directly to Foo[Dog], since (Animal => Int) <: (Dog => Int), because functions are contravariant in their inputs.
But how would this work at all? Simple: the intuition says that if I am able to process / consume / receive / use any Animal, then I should be able to process any Dog, since those are Animals, right? Let's see how that would work with our mental model.
I have zar, which expects a Dog => Int, and I have f: Animal => Int. What the compiler can do is create a new function g: Dog => Int = cast(Dog => Animal) andThen f and use g instead.
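Here is a compiling sketch of that composition with explicit values (the Animal/Dog names and the spelled-out cast function are illustrative, not part of the original answer):
class Animal; class Dog extends Animal
val f: Animal => Int = _ => 1                  // "I can consume any Animal"
val castDogToAnimal: Dog => Animal = d => d    // the implicit "cast", written out
val g: Dog => Int = castDogToAnimal andThen f  // what the compiler effectively builds
val g2: Dog => Int = f                         // or directly: (Animal => Int) <: (Dog => Int)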
Hope this helps, feel free to leave any question.
Your class can be viewed as
trait Function1[-Input, +Result] // Renamed the type parameters for clarity
case class Foo[+A](a: A) {
  val bar: Function1[A, A] = identity
  val zar: Function1[Function1[A, Int], Int] = { f => f(a) }
}
The approach taken by the compiler is to assign positive, negative, and neutral annotations at each type position in the signature; I'll notate the positions by putting + and - after the type in that position
First the top-level vals are positive (i.e. covariant):
case class Foo[+A](a: A) {
  val bar: Function1[A, A]+
  val zar: Function1[Function1[A, Int], Int]+
}
bar can be anything that's a subtype of Function1[A, A] and zar can be anything that's a subtype of Function1[Function1[A, Int], Int], so this makes sense (LSP, etc.).
The compiler then moves into the type arguments.
case class Foo[+A](a: A) {
  val bar: Function1[A-, A+]
  val zar: Function1[Function1[A, Int]-, Int+]
}
Since Input is contravariant, this "flips" the classification (+ -> -, - -> +, neutral unchanged) relative to its surrounding classification. Result being covariant does not flip the classification (if there was an invariant parameter in Function1, that would force the classification to neutral). Applying this a second time
case class Foo[+A](a: A) {
  val bar: Function1[A-, A+]
  val zar: Function1[Function1[A+, Int-], Int+]
}
A type parameter for the class being defined can only be used in + position if it's covariant, - position if contravariant, and anywhere if it's invariant (Int, which is not a type parameter can be considered invariant for the purpose of this analysis: i.e. we could have dispensed with annotating once we got to a type which neither was a type parameter nor had a type parameter). In bar, we have a conflict (the A-), but in zar the A+ means no conflict.
The produce/consume relationship presented in Luis's answer is a good intuitive summary. This is a partial (methods with type parameters complicate this, though I'm not totally sure how my transformation would work there...) exploration of how the compiler actually concludes that the A in zar is in a covariant position; Programming in Scala (Odersky, Spoon, Venners) describes it in more detail (in the 3rd edition, it's in section 19.4).

covariant type A occurs in contravariant position in type A of value a

I have the following class:
case class Box[+A](value: A) {
  def set(a: A): Box[A] = Box(a)
}
And the compiler complains:
Error:(4, 11) covariant type A occurs in contravariant position in type A of value a
def set(a: A): Box[A] = Box(a)
I searched a lot about the error, but could not find anything useful that helps me understand it.
Could someone please explain why the error occurs?
The error message is actually very clear once you understand it. Let's get there together.
You are declaring class Box as covariant in its type parameter A. This means that for any type X extending A (i.e. X <: A), Box[X] can be seen as a Box[A].
To give a clear example, let's consider the Animal type:
sealed abstract class Animal
case class Cat() extends Animal
case class Dog() extends Animal
If you define Dog <: Animal and Cat <: Animal, then both Box[Dog] and Box[Cat] can be seen as Box[Animal] and you can e.g. create a single collection containing both types and preserve the Box[Animal] type.
Although this property can be very handy in some cases, it also imposes constraints on the operations you can make available on Box. This is why the compiler doesn't allow you to define def set.
If you allow defining
def set(a:A): Unit
then the following code is valid:
val catBox = Box(Cat())
val animalBox: Box[Animal] = catBox // valid because `Cat <: Animal`
val dog = Dog()
animalBox.set(dog) // This is non-sensical!
The last line is obviously a problem because catBox will now contain a Dog! The arguments of a method appear in what is called "contravariant position", which is the opposite of covariance. Indeed, if you define Box[-A], then Cat <: Animal implies Box[Cat] >: Box[Animal] (Box[Cat] is a supertype of Box[Animal]). For our example, this is of course non-sensical.
One solution to your problem is to make the Box class immutable (i.e. to not provide any way to change the content of a Box), and instead use the apply method defined in your case class companion to create new boxes. If you need to, you can also define set locally and not expose it anywhere outside Box by declaring it as private[this]. The compiler will allow this because the private[this] guarantees that the last line of our faulty example will not compile since the set method is completely invisible outside of a specific instance of Box.
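A minimal sketch of that private[this] variant (assuming Scala 2, where object-private members are exempt from variance checks):
case class Box[+A](value: A) {
  // Compiles: variance checking skips object-private members, but this method
  // is invisible outside this particular instance of Box.
  private[this] def set(a: A): Box[A] = Box(a)
}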
If for some reason you do not want to create new instances using the apply method, you can also define set as follows.
def set[B >: A](b: B): Box[B] = Box(b)
Others have already explained why the code doesn't compile, but they haven't shown how to make it compile:
> case class Box[+A](v: A) { def set[B >: A](a: B) = Box(a) }
defined class Box
> trait Animal; case class Cat() extends Animal
defined trait Animal
defined class Cat
> Box(Cat()).set(new Animal{})
res4: Box[Animal] = Box($anon$1#6588b715)
> Box[Cat](Cat()).set[Animal](new Animal{})
res5: Box[Animal] = Box($anon$1#1c30cb85)
The type argument B >: A is a lower bound that tells the compiler to infer a supertype if necessary. As one can see in the example, Animal is inferred when Cat is given.
Try to understand what it means for your Box[+A] to be covariant in A:
It means that a Box[Dog] should also be a Box[Animal], so any instance of Box[Dog] should have all the methods a Box[Animal] has.
In particular, a Box[Dog] should have a method
set(a: Animal): Box[Animal]
However, it only has a method
set(a: Dog): Box[Dog]
Now, you'd think you can derive the first one from the second, but that's not the case: how would you box a Cat using only the second signature? That's not doable, and that's what the compiler tells you: a method parameter is a contravariant position (you can only put contravariant (or invariant) type parameters there).
In addition to the other answers, I'd like to provide another approach:
def set[B >: A](x: B): Box[B] = Box(x)
Basically, you can't put As in if A is covariant; you can only take them out (e.g. by returning an A). If you wish to put As in, then you need to make it contravariant (note that a contravariant Box can then no longer expose value: A):
class Box[-A] { def set(a: A): Unit = ??? }
If you want to do both, just make it invariant:
case class Box[A](value: A)
The best option is to keep it covariant, get rid of the setter, and go for an immutable approach, as sketched below.
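One possible reading of that immutable approach, sketched with illustrative Animal/Cat/Dog types and the widening set from above ("setting" builds a new Box rather than mutating):
sealed trait Animal
case class Cat() extends Animal
case class Dog() extends Animal
case class Box[+A](value: A) {
  def set[B >: A](b: B): Box[B] = Box(b)        // returns a new, possibly wider Box
}
val catBox: Box[Cat] = Box(Cat())
val animalBox: Box[Animal] = catBox.set(Dog())  // a Box[Animal]; catBox is untouched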

covariance and variance flip in scala

In Scala for the Impatient it is said that
functions are contra-variant in their arguments and covariant in their result type
This is straightforward and easy to understand; however, in the same topic it says
However, inside a function parameter, the variance flips - its parameters are covariant
and it takes the example of the foldLeft method of Iterator as:
def foldLeft[B](z : B)(op : (B, A) => B) : B
I am not clear on what this means.
I tried some blogs, such as:
http://www.artima.com/pins1ed/type-parameterization.html
http://blog.kamkor.me/Covariance-And-Contravariance-In-Scala/
http://blogs.atlassian.com/2013/01/covariance-and-contravariance-in-scala/
But I didn't get a clear understanding.
A function is always contravariant in its argument type and covariant in its return type
e.g.
trait Function1[-T1, +R] extends AnyRef
trait Function2[-T1, -T2, +R] extends AnyRef
Here, T1, T2, ..., Tn (where n <= 22) are the argument types and R is the return type.
In higher-order functions (functions that take a function as an argument), an argument's type may itself mention the enclosing covariant type parameter,
e.g.
foldLeft in trait Iterable
Iterable is declared as
trait Iterable[+A] extends AnyRef
and foldLeft is declared as
def foldLeft[B](z : B)(op : (B, A) => B) : B
Since A is declared covariant, it would normally be usable only in return-type position. But here it appears inside an argument type, which is allowed due to
trait Function2[-T1, -T2, +R] extends AnyRef
because op : (B, A) => B is literally a Function2 type.
The key to this is that trait Function2 is contravariant in its argument types.
Hence the covariant type A may appear inside the method argument because
trait Function2 is contravariant in its argument types.
This is called a variance flip:
The flip of covariance is contravariance.
The flip of contravariance is covariance.
The flip of invariance is invariance.
That's why an invariant type parameter may appear in any position (covariant or contravariant).
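A sketch of the flip in action, using a stripped-down stand-in for Iterable (MyIterable and its methods are illustrative):
trait MyIterable[+A] {
  // Compiles: A sits in the parameter position of op, which itself sits in a
  // parameter position of myFoldLeft - two flips, so the position is covariant.
  def myFoldLeft[B](z: B)(op: (B, A) => B): B
  // def add(a: A): Unit   // would be rejected: only one flip, a contravariant position
}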
It comes down to what it means for one function to be a subtype of another. It sounds like you are comfortable with the idea that A->B is a subtype of C->D if C is a subtype of A (contravariant in the input type) and B is a subtype of D (covariant in the return type).
Now consider functions that take other functions as arguments. For example, consider (A->B)->B. We just apply the same reasoning twice. The argument is a function of type A->B and the return type is B. What needs to be true to supply a function of type C->B as the input type? Since functions are contravariant in the input type C->B must be a subtype of A->B. But as we discussed in the first paragraph, that means that A must be a subtype of C. So after two applications of the reasoning in the first paragraph we find that (A->B)->B is covariant in the A position.
You can reason similarly with more complicated functions. In fact, you should convince yourself that a position is covariant if it is to the left of an even number of arrows applying to it.
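A compiling sketch of that rule (Animal and Dog are illustrative; Animal/Dog sits to the left of two arrows, hence the position is covariant):
class Animal; class Dog extends Animal
val usesDogConsumers: (Dog => Int) => Int = g => g(new Dog)
val usesAnimalConsumers: (Animal => Int) => Int = usesDogConsumers // compiles because Dog <: Animal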
To begin with, look at a function as an instance of a class - or rather, a trait. Think about its type, Function1[-A, +B].
Say we have the following:
class X
class Y extends X
Now I have two functions like the following:
val test1: X => Int = ???
val test2: Y => Int = ???
Now if I have another method like the following:
def acceptFunction(f: Y => Int) = ??? // do something with f
then, as per the type signature Function1[-A, +B], I can pass test2 into acceptFunction, and I can also pass test1, because of contravariance.
It is kind of like test1 <: test2.
This is a completely different thing than saying that the parameters to a function are covariant. This means that if you have the following:
class Fruit { def name: String="abstract" }
class Orange extends Fruit { override def name = "Orange" }
class Apple extends Fruit { override def name = "Apple" }
You can write the following:
testM(new Apple())
def testM(fruit:Fruit)={}
But you cannot write this:
testM(new Fruit())
def testM(fruit:Apple)={}

Scala type lowerbound bug?

case class Level[B](b: B) {
  def printCovariant[A <: B](a: A): Unit = println(a)
  def printInvariant(b: B): Unit = println(b)
  def printContravariant[C >: B](c: C): Unit = println(c)
}

class First
class Second extends First
class Third extends Second
//First >: Second >: Third

object Test extends App {
  val second = Level(new Second) //set B as Second

  //second.printCovariant(new First) //error and reasonable
  second.printCovariant(new Second)
  second.printCovariant(new Third)

  //second.printInvariant(new First) //error and reasonable
  second.printInvariant(new Second)
  second.printInvariant(new Third) //why no error?

  second.printContravariant(new First)
  second.printContravariant(new Second)
  second.printContravariant(new Third) //why no error?
}
It seems Scala's lower-bound type checking has bugs... for the invariant case and the contravariant case.
I wonder whether the above code has bugs or not.
Always keep in mind that if Third extends Second then whenever a Second is wanted, a Third can be provided. This is called subtype polymorphism.
Having that in mind, it's natural that second.printInvariant(new Third) compiles. You provided a Third which is a subtype of Second, so it checks out. It's like providing an Apple to a method which takes a Fruit.
This means that your method
def printCovariant[A<:B](a: A): Unit = println(a)
can be written as:
def printCovariant(a: B): Unit = println(a)
without losing any information. Due to subtype polymorphism, the second one accepts B and all its subclasses, which is the same as the first one.
Same goes for your second error case - it's another case of subtype polymorphism. You can pass the new Third because Third is actually a Second (note that I'm using the "is-a" relationship between subclass and superclass taken from object-oriented notation).
In case you're wondering why do we even need upper bounds (isn't subtype polymorphism enough?), observe this example:
def foo1[A <: AnyRef](xs: A) = xs
def foo2(xs: AnyRef) = xs
val res1 = foo1("something") // res1 is a String
val res2 = foo2("something") // res2 is an AnyRef
Now we do observe the difference. Even though subtype polymorphism will allow us to pass in a String in both cases, only method foo1 can reference the type of its argument (in our case a String). Method foo2 will happily take a String, but will not really know that it's a String. So, upper bounds can come in handy when you want to preserve the type (in your case you just print out the value so you don't really care about the type - all types have a toString method).
EDIT:
(extra details, you may already know this but I'll put it for completeness)
There are more uses of upper bounds than what I described here, but when parameterizing a method this is the most common scenario. When parameterizing a class, you can use upper bounds to describe covariance and lower bounds to describe contravariance. For example,
class SomeClass[U] {
  def someMethod(foo: Foo[_ <: U]) = ???
}
says that parameter foo of method someMethod is covariant in its type. How's that? Well, normally (that is, without tweaking variance), subtype polymorphism wouldn't allow us to pass a Foo parameterized with a subtype of its type parameter. If T <: U, that doesn't mean that Foo[T] <: Foo[U]. We say that Foo is invariant in its type. But we just tweaked the method to accept Foo parameterized with U or any of its subtypes. Now that is effectively covariance. So, as far as someMethod is concerned - if some type T is a subtype of U, then Foo[T] is a subtype of Foo[U]. Great, we achieved covariance. But note that I said "as far as someMethod is concerned". Foo is covariant in its type in this method, but in others it may be invariant or contravariant.
This kind of variance declaration is called use-site variance because we declare the variance of a type at the point of its usage (here it's used as the method parameter type of someMethod). This is the only kind of variance declaration in, say, Java. When using use-site variance, you have to watch out for the get-put principle (google it). Basically this principle says that we can only get stuff from covariant classes (we can't put) and vice versa for contravariant classes (we can put but can't get). In our case, we can demonstrate it like this:
class Foo[T] { def put(t: T): Unit = println("I put some T") }
def someMethod(foo: Foo[_ <: String]) = foo.put("asd") // won't compile
def someMethod2(foo: Foo[_ >: String]) = foo.put("asd")
More generally, we can only use covariant types as return types and contravariant types as parameter types.
Now, use-site declaration is nice, but in Scala it's much more common to take advantage of declaration-site variance (something Java doesn't have). This means that we would describe the variance of Foo's generic type at the point of defining Foo. We would simply say class Foo[+T]. Now we don't need to use bounds when writing methods that work with Foo; we proclaimed Foo to be permanently covariant in its type, in every use case and every scenario.
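A small sketch of declaration-site variance (Producer is an illustrative name, not from the original answer):
class Producer[+T](init: T) {
  def get: T = init                    // T appears only in return position, so +T is accepted
}
val p: Producer[Any] = new Producer[String]("hello") // no bounds needed at the use site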
For more details about variance in Scala feel free to check out my blog post on this topic.

Understanding the limits of Scala GADT support

The error in Test.test seems unjustified:
sealed trait A[-K, +V]
case class B[+V]() extends A[Option[Unit], V]

case class Test[U]() {
  def test[V](t: A[Option[U], V]) = t match {
    case B() => null // constructor cannot be instantiated to expected type; found : B[V] required: A[Option[U],?V1] where type ?V1 <: V (this is a GADT skolem)
  }
  def test2[V](t: A[Option[U], V]) = Test2.test2(t)
}

object Test2 {
  def test2[U, V](t: A[Option[U], V]) = t match {
    case B() => null // This works
  }
}
There are a couple ways to make the error change, or go away:
If we remove the V parameter on trait A (and case class B), the 'GADT-skolem' part of the error goes away but the 'constructor cannot be instantiated' part remains.
If we move the U parameter of the Test class to the Test.test method, the error goes away. Why? (Similarly, the error is not present in Test2.test2.)
The following link also identifies that problem, but I do not understand the provided explanation. http://lambdalog.seanseefried.com/tags/GADTs.html
Is this an error in the compiler? (2.10.2-RC2)
Thank you for any help with clarifying that.
2014/08/05: I have managed to further simplify the code, and provide another example where U is bound outside the immediate function without causing a compilation error. I still observe this error in 2.11.2.
sealed trait A[U]
case class B() extends A[Unit]

case class Test[U]() {
  def test(t: A[U]) = t match {
    case B() => ??? // constructor cannot be instantiated to expected type; found : B required: A[U]
  }
}

object Test2 {
  def test2[U](t: A[U]) = t match {
    case B() => ??? // This works
  }
  def test3[U] = {
    def test(t: A[U]) = t match {
      case B() => ??? // This works
    }
  }
}
Simplified like that, this looks more like a compiler bug or limitation. Or am I missing something?
Constructor patterns must conform to the expected type of the pattern, which means B <: A[U], a claim which is true if U is a type parameter of the method presently being called (because it can be instantiated to the appropriate type argument) but untrue if U is a previously bound class type parameter.
You can certainly question the value of the "must conform" rule. I know I have. We generally evade this error by upcasting the scrutinee until the constructor pattern conforms. Specifically,
// Instead of this
def test1(t: A[U]) = t match { case B() => ??? }
// Write this
def test2(t: A[U]) = (t: A[_]) match { case B() => ??? }
Addendum: in a comment the questioner says "The question is simple on why it doesn't works with type parameter at class level but works with at method level." Instantiated class type parameters are visible externally and have an indefinite lifetime, neither of which is true for instantiated method type parameters. This has implications for type soundness. Given Foo[A], once I'm in possession of a Foo[Int] then A must be Int when referring to that instance, always and forever.
In principle you could treat class type parameters similarly inside a constructor call, because the type parameter is still unbound and can be inferred opportunistically. That's it, though. Once it's out there in the world, there's no room to renegotiate.
One more addendum: I see people doing this a lot, taking as a premise that the compiler is a paragon of correctness and all that's left for us to do is ponder its output until our understanding has evolved far enough to match it. Such mental gymnastics have taken place under that tent that we could staff Cirque du Soleil a couple of times over.
scala> case class Foo[A](f: A => A)
defined class Foo
scala> def fail(foo: Any, x: Any) = foo match { case Foo(f) => f(x) }
fail: (foo: Any, x: Any)Any
scala> fail(Foo[String](x => x), 5)
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
at $anonfun$1.apply(<console>:15)
at .fail(<console>:13)
... 33 elided
That's the current version of scala - this is still what it does. No warnings. So maybe ask yourselves whether it's wise to be offering the presumption of correctness to a language and implementation which are so trivially unsound after more than ten years of existence.
Looks like it is a compiler caveat. From this, Martin Odersky puts it as:
In the method case, what you have here is a GADT:
Patterns determine the type parameters of corresponding methods in the scope of a pattern case.
GADTs are not available for class parameters.
As far as I know, nobody has yet explored this combination, and it looks like it would be quite tricky to get this right.
PS: Thanks to @retronym, who provided the reference from the discussion here
On why it throws an error in the case of a class parameter:
This works:
sealed trait A[-K, +V]
case class B[+V]() extends A[Option[Unit], V]

case class Test[U]() {
  def test[V, X <: Unit](t: A[Option[X], V]) = t match {
    case B() => null
  }
  def test2[V](t: A[Option[U], V]) = Test2.test2(t)
}

object Test2 {
  def test2[U, V](t: A[Option[U], V]) = t match {
    case B() => null // This works
  }
}
To explain why the compiler threw an error, try doing this:
scala> :paste
// Entering paste mode (ctrl-D to finish)
abstract class Exp[A] {
  def eval: A = this match {
    case Temp(i) => i // line-a
  }
}
case class Temp[A](i: A) extends Exp[A]
// Exiting paste mode, now interpreting.
To understand this, see §8.1.6 of the specification: Temp above is a polymorphic type. If the case class is polymorphic, then:
If the case class is polymorphic, then its type parameters are instantiated so that the instantiation of c conforms to the expected type of the pattern. The instantiated formal parameter types of c's primary constructor are then taken as the expected types of the component patterns p1, ..., pn. The pattern matches all objects created from constructor invocations c(v1, ..., vn) where each element pattern pi matches the corresponding value vi.
i.e. the compiler smartly tries to instantiate Temp in line-a so that it conforms to the expected type of this (which, had the above compiled successfully, would in the case of Temp(1) be something like Exp[Int]), and hence the compiler instantiates Temp in line-a with Int as its type parameter.
Now in our case: the compiler is trying to instantiate B. It sees that t is of type A[Option[U], V], where U is already fixed (it comes from the class parameter) and V is a generic type of the method. On trying to instantiate B, it tries to do so in such a way that it ultimately gets A[Option[U], V]. So with B() it is somehow trying to get A[Option[U], V]. But it can't, as B is an A[Option[Unit], V]. Hence it ultimately cannot instantiate B. Fixing this makes it work.
This is not required in the case of test2 because: the compiler, as explained in the process above, is trying to instantiate the type parameters of B. It knows t has type parameters [Option[U], V], where U and V are both generic with respect to the method and are obtained from the argument. It tries to instantiate B based on the argument. If the argument were a B[String], it would derive B[String], and hence Option[U] would automatically be obtained as Option[Unit]. If the argument were an A[Option[Int], String], then obviously it would not match.
The difference has to do with what information the compiler and runtime have in each case, combined with what the restrictions on the types are.
Below, the ambiguity is clarified by having U be the trait and class parameter, and X be the method type parameter.
sealed trait A[U]
case class B(u: Unit) extends A[Unit]

class Test[U]() {
  def test(t: A[U]) = t match {
    case B(u) => u // constructor cannot be instantiated to expected type; found : B required: A[U]
  }
}

object Test2 {
  def test2[X](t: A[X]) = t match {
    case B(x) => x // This works
  }
  def test3[X] = {
    def test(t: A[X]) = t match {
      case B(x) => x // This works
    }
  }
}
Test2.test2(new B(println("yo")))
In Test2.test2, the compiler knows that an instance of A[X] will be provided, with no limits on X. It can generate code that inspects the parameter provided to see if it is a B, and handle the case.
In method test of class Test, the compiler does not know that some arbitrary A[X] will be provided when called, but rather an A[U] for some specific type U. This type U can be anything. However, the pattern match is on an algebraic data type. This data type has exactly one valid variation, of type A[Unit]. But this signature is for A[U] where U is not limited to Unit. This is a contradiction. The class definition says that U is a free type parameter; the method says it is Unit.
Imagine you instantiate Test:
val t = new Test[Int]()
Now, at the use site, the method is:
def test(t: A[Int])
There is no value of type A[Int], because B only extends A[Unit]. Pattern matching on B is when the compiler checks this condition: B is incompatible with A[U] for an arbitrary U, hence "constructor cannot be instantiated to expected type; found : B required: A[U]".
The difference between the method and the class type parameter is that one is bound while the other is free at the time the method is called at runtime.
Another way to think about it is that defining the method
def test(t: A[U])
implies that U is Unit, which is inconsistent with the class declaration that U is anything.
Edit: this is not an answer to the question. I'm keeping it for reference (and the comments).
The link you provided already gives the answer.
In the case of the parameterised method, U is inferred from the argument of the actual method call. So, the fact that the case B() was chosen implies that U >: Unit (otherwise the method could not have been called with a B) and the compiler is happy.
In the case of the parameterized class, U is independent from the method argument. So, the fact that the case B() was chosen tells the compiler nothing about U and it can not confirm that U >: Unit. I think if you add such a type bound to U it should work. I haven't tried it, though.
The compiler is absolutely correct with its error message. An easy example should describe the issue well:
sealed trait A[U]
class B extends A[Unit]

class T[U] {
  val b: A[Unit] = new B
  f(b) // invalid, the compiler can't know which types are valid for `U`
  g(b) // valid, the compiler can choose an arbitrary type for `V`

  def f(t: A[U]) = g(t)
  def g[V](t: A[V]) = ???
}
f and g have different type signatures. g takes an A[V] where V is an arbitrary type parameter. On the other hand, f takes an A[U] where U is not arbitrary but dependent from the outer class T.
The compiler does not know what type U can be at the moment when it typechecks T - it needs to typecheck an instantiation of T to find out which types are used. One can say that U is a concrete type inside of T, but a generic type outside of it. This means it has to reject every code fragment inside of T that gets more concrete about the type of U.
Calling g from inside f is clearly allowed - after all an arbitrary type for V can be chosen, which is U in this case. Because U is never again referenced in g it doesn't matter what type it may have.
Your other code example runs into the same limitation - it is just a more complex example.
Btw, that this code
def test3[U] = {
  def test(t: A[U]) = t match {
    case B() => ??? // This works
  }
}
is valid looks weird to me. It doesn't matter if U is bound by a class or a method - from inside of the binding scope we can't give any guarantees about its type, therefore the compiler should reject this pattern match too.