Possible bug in Scala 2.10.3 compiler - scala

I ran into some code that looks pretty much like this:
object SomeOddThing extends App {
  val c: C = new C
  val a: A = c
  val b: B = c
  // these will all print None
  println(a.x)
  println(b.x)
  println(c.x)
}

abstract class A {
  def x: Option[String] = None
}

abstract class B extends A {
  override def x: Option[String] = Some("xyz")
}

class C extends B {
  // x now has type None.type instead of Option[String]
  override def x = None
}

trait T {
  this: B =>
  override def x = Some("ttt")
}

// this won't compile with the following error
// error: overriding method x in class C of type => None.type;
//  method x in trait T of type => Some[String] has incompatible type
// class D extends C with T {}
//       ^
class D extends C with T {}
This looks like a bug to me. Either the type of C.x should be inferred correctly or the class shouldn't compile at all.

The spec 4.6.4 only promises to infer a conforming type for the result type of the overriding method.
See https://issues.scala-lang.org/browse/SI-7212 and be thankful you didn't throw.
Maybe it will change:
https://groups.google.com/forum/#!topic/scala-internals/6vemF4hOA9A
Update:
Covariant result types are natural. It's not grotesque that it infers what it would normally do. And that you can't widen the type again. It's maybe suboptimal, as that other issue puts it. (Speaking to my comment that it's an improvement rather than a bug per se.)
My comment on the ML:
This falls under "always annotate interface methods, including the interface for extension by subclass."
There are lots of cases where using an inferred type for an API method commits you to a type you would rather have avoided. This is also true for the interface you present to subclasses.
For this problem, we're asking the compiler to retain the last explicitly ascribed type, which is reasonable, unless it isn't. Maybe we mean don't infer singleton types, or Nothing, or something like that. /thinking-aloud
scala> trait A ; trait B extends A ; trait C extends B
defined trait A
defined trait B
defined trait C
scala> trait X { def f: A }
defined trait X
scala> trait Y extends X { def f: B }
defined trait Y
scala> trait Z extends Y { def f: C }
defined trait Z
scala> trait X { def f: A = new A {} }
defined trait X
scala> trait Y extends X { override def f: B = new B {} }
defined trait Y
scala> trait Z extends Y { override def f: C = new C {} }
defined trait Z
scala> trait ZZ extends Z { override def f: A = new C {} }
<console>:13: error: overriding method f in trait Z of type => C;
method f has incompatible type
trait ZZ extends Z { override def f: A = new C {} }
^
To the question, what does it mean for None.type to conform to Option?
scala> implicitly[None.type <:< Option[_]]
res0: <:<[None.type,Option[_]] = <function1>
This shows that None.type, the singleton type of the None object, is in fact an Option.
Usually, one says that the compiler avoids inferring singleton types, even when you kind of want it to.
Programming in Scala says, "Usually such types are too specific to be useful."
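Following that advice, here is a minimal sketch (the question's own classes, with only C changed) showing that explicitly ascribing the return type in C keeps x at Option[String], so the mixin compiles:

abstract class A { def x: Option[String] = None }
abstract class B extends A { override def x: Option[String] = Some("xyz") }

class C extends B {
  // annotating the interface method keeps x at Option[String] instead of None.type
  override def x: Option[String] = None
}

trait T {
  this: B =>
  override def x = Some("ttt")
}

// compiles now: both C.x and T.x conform to Option[String]
class D extends C with T

println((new D).x) // Some(ttt)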

Related

Implement trait methods using subclasses' types

I want a trait Foo to provide transform method that would apply a function to it. Also, I want to force implementing classes to have an increment method that would somehow transform the object as well. Naive solution:
trait Foo {
  def transform(fun: Foo => Foo): Foo = fun(this)
  def increment(n: Int): Foo
}
case class A(a: Int) extends Foo {
  // expecting available: transform(fun: A => A): A
  // to be implemented: increment(n: Int): A
  ...
}
The above won't work... The inherited transform still expects Foo => Foo, not A => A, and increment still wants to return Foo, not A.
One more attempt:
trait Foo {
  def transform[C <: Foo](fun: C => C): C = fun(this.asInstanceOf[C])
  def increment[C <: Foo](n: Int): C
}
case class A(a: Int) extends Foo {
  def increment(n: Int) = A(a + n)
}
A will not compile - it will still complain about the signature.
Taking out the increment method, transform works. However, the asInstanceOf looks a bit unsafe. Also, I need to explicitly provide the type parameter to transform:
val a = A(1)
a.transform[A](x => x.copy(x.a + 1)) // returns A(2)
I wonder if there's a smart way to have it done.
The most direct way of getting what you want is to move the type parameter up to the trait declaration. That gives trait Foo[C] { ... }. However, using copy in your transform still won't work, since the Foo trait doesn't know anything about the types that extend it. You can give it a bit more information with a self type:
trait Foo[C] {
  this: C =>
  def transform(fun: C => C): C = fun(this)
  def increment(n: Int): C
}
case class A(a: Int) extends Foo[A] {
  def increment(n: Int) = A(a + n)
}
The A extends Foo[A] is a little awkward to use here, but it works, since now when you extend Foo, it provides that type information back to the trait. It's still a little bit awkward, though. It turns out there's a technique called type classes that we can use here to potentially improve things. First, you set up your trait. In a type class, there is exactly one implementation of the trait per type, so each method should also take in the instance that you want to operate on:
trait Foo[C] {
  def transform(c: C)(f: C => C): C
  def increment(c: C, inc: Int): C
}
Next, in the companion object you set up instances of the typeclass for the types you care about:
case class A(a: Int)

object Foo {
  implicit val ATransform = new Foo[A] {
    def transform(base: A)(f: A => A) = f(base)
    def increment(base: A, inc: Int) = A(base.a + inc)
  }
  // Convenience function for finding the instance for a type.
  // With this, Foo[A] is equivalent to implicitly[Foo[A]].
  def apply[C](implicit foo: Foo[C]) = foo
}
Now we can use the type class as follows:
val b = A(3)
Foo[A].transform(b)(x => x.copy(a = x.a + 1)) // A(4)
Foo[A].increment(b, 5)                        // A(8)
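If calling through Foo[A] feels verbose, here is an optional sketch (the FooSyntax and FooOps names are my own, not part of the answer) that layers an implicit class on top of the same type class to recover ordinary method syntax:

object FooSyntax {
  // enriches any C that has a Foo[C] instance with transform/increment methods
  implicit class FooOps[C](self: C)(implicit foo: Foo[C]) {
    def transform(f: C => C): C = foo.transform(self)(f)
    def increment(inc: Int): C = foo.increment(self, inc)
  }
}

import FooSyntax._
val b2 = A(3)
b2.transform(x => x.copy(a = x.a + 1)) // A(4)
b2.increment(5)                        // A(8)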

Why is this an invalid use of Scala's abstract types?

I have this code:
class A extends Testable { type Self <: A }
class B extends A { type Self <: B }

trait Testable {
  type Self
  def test[T <: Self] = {}
}

object Main {
  val h = new A
  // this line does not compile
  h.test[B]
}
And my error is:
error: type arguments [B] do not conform to method test's type parameter bounds [T <: Main.h.Self]
h.test[B]
On this question, it was said that this was due to path dependent types. Can anyone figure out how to have T <: Self, without having the path-dependent types problem?
Any help would be appreciated.
Your code would need to look like this:
// --- fictional Scala syntax ---
class A extends Testable { type Self = A }
class B extends A { override type Self = B }
But that is impossible in the current version of Scala.
I would propose a slightly longer way (not longer than using path-dependent types, just different) that meets your requirements:
a) use the type-class pattern for the test method;
b) use implicit parameters to encode the conformance relation between types.
Class hierarchy:
trait Testable
class A extends Testable
class B extends A
Conforms trait:
trait Conforms[X, Y]
Testable Type-class:
object TestableTypeClass {
  implicit def testMethod[T <: Testable](testable: T) = new {
    def test[X](implicit ev: Conforms[X, T]) = {}
  }
}
The test method's type-parameter constraints go in the companion objects:
object A {
  // P <: A is your condition (Self <: A) for class A
  implicit def r[P <: A] = new Conforms[P, A] {}
}
object B {
  // P <: B is your condition (Self <: B) for class B
  implicit def r[P <: B] = new Conforms[P, B] {}
}
Tests:
import TestableTypeClass._

val a = new A
a.test[A] // - Ok
a.test[B] // - Ok

val b = new B
// b.test[A] // - does not compile
b.test[B] // - Ok
UPDATE:
1) It is possible to collect all the implicits in one object; in that case the object holding the implicits needs to be imported (this was not needed before because of the implicit-scope rules for companion objects):
object ImplicitContainer {
  implicit def r1[P <: A] = new Conforms[P, A] {}
  implicit def r2[P <: B] = new Conforms[P, B] {}
}
and using:
import TestableTypeClass._
import ImplicitContainer._

val a = new A
a.test[A]
a.test[B]
2, 3) The trait Conforms is defined with two type parameters, X and Y:
X is used for the type constraint to be checked later (the constraint that comes from the parameterized method);
Y determines the type for which that constraint is defined.
The implicit parameter is selected by the type of the Conforms instance, and the idea of this design is to play with the combinations of X and Y. In the TestableTypeClass type class, Y is captured by the implicit conversion from Testable to the anonymous class with the test method, and X is captured at the test method call. The key feature is the invariance of the Conforms trait; this is why the implicits are not ambiguous and the bounds are enforced correctly.
For better understanding, here is one more example with stricter rules:
//...
class C extends B

object ImplicitContainer {
  implicit def r1[P <: A] = new Conforms[P, A] {}
  implicit def r2[P](implicit ev: P =:= B) = new Conforms[P, B] {}
  implicit def r3[P <: C] = new Conforms[P, C] {}
}
import TestableTypeClass._
import ImplicitContainer._

val b = new B
// b.test[A] // - does not compile
b.test[B] // - Ok
// b.test[C] // - does not compile
I think what you are trying to achieve is something like this:
// this is of course not valid Scala code
def test[T <: upperBoundOf[Self]]
But this doesn't make sense. Why? Because you can very easily circumvent such constraint, effectively rendering it pointless:
val h = new B
h.test[A] //nope, A is not a subtype of B
but...
val h: A = new B
h.test[A] //same thing, but this time it's apparently OK
Not only does the constraint provide zero additional type safety, I think it also breaks one of the most fundamental rules of OOP, the Liskov Substitution Principle. The snippet above compiles when h is of type A, but does not compile when h is of type B, even though B is a subtype of A and so, according to the LSP, everything should be fine.
So, essentially if you just leave your types like this:
class A extends Testable
class B extends A

trait Testable {
  def test[T <: A]
}
you have exactly the same level of type safety.
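A quick sketch of the simplified version in use (my own illustration; I gave test a no-op body just so it runs):

trait Testable {
  def test[T <: A]: Unit = ()
}
class A extends Testable
class B extends A

val h = new A
h.test[A] // compiles
h.test[B] // compiles, B <: A

val b = new B
b.test[A] // also compiles, matching the LSP argument above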

How to infer the right type parameter from a projection type?

I am having some trouble getting Scala to infer the right type from a type projection.
Consider the following:
trait Foo {
  type X
}
trait Bar extends Foo {
  type X = String
}

def baz[F <: Foo](x: F#X): Unit = ???
Then the following compiles fine:
val x: Foo#X = ???
baz(x)
But the following won't compile:
val x: Bar#X = ???
baz(x)
Scala sees the "underlying type String" for x, but has lost the information that x is a Bar#X. It works fine if I annotate the type:
baz[Bar](x)
Is there a way to make Scala infer the right type parameter for baz?
If not, what is the general answer that makes it impossible?
The program compiles if I add this implicit conversion to the scope:
implicit def f(x: Bar#X): Foo#X = x
As this implicit conversion is correct for any F <: Foo, I wonder why the compiler does not do that by itself.
You can also:
trait Foo {
  type X
}
trait Bar extends Foo {
  type X = String
}
class BarImpl extends Bar {
  def getX: X = "hi"
}

def baz[F <: Foo, T <: F#X](clz: F, x: T): Unit = { println("baz worked!") }

val bi = new BarImpl
val x: Bar#X = bi.getX
baz(bi, x)
but:
def baz2[F <: Foo, T <: F#X](x: T): Unit = { println("baz2 failed!") }
baz2(x)
fails with:
test.scala:22: error: inferred type arguments [Nothing,java.lang.String] do not conform to method baz2's type parameter bounds [F <: this.Foo,T <: F#X]
baz2(x)
^
one error found
I think, basically, F <: Foo tells the compiler that F has to be a subtype of Foo, but when it gets an X it doesn't know which class your particular X comes from. Your X is just a String and doesn't carry any information pointing back to Bar.
Note that:
def baz3[F <: Foo](x: F#X) = { println("baz3 worked!") }
baz3[Bar]("hi")
Also works. The fact that you defined val x: Bar#X = ??? just means that ??? is restricted to whatever Bar#X happens to be at compile time; the compiler knows Bar#X is String, so the type of x is just a String, no different from any other String.
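For completeness, a sketch of another workaround (my own, reusing the same Foo and Bar): if you pass the Foo value itself, a dependent method type lets the compiler recover the concrete X without any annotation:

trait Foo { type X }
trait Bar extends Foo { type X = String }

// the type of the second parameter depends on the first one,
// so X is fully known once f is
def baz(f: Foo)(x: f.X): Unit = println("dependent baz worked!")

val b = new Bar {}
baz(b)("hi") // compiles: b.X is String because b is a Bar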

Type upper bound in Scala Mapper/Record traits

I am puzzled by what seems to be a standard pattern in the Lift's Mapper and Record frameworks:
trait Mapper[A <: Mapper[A]] extends BaseMapper {
  self: A =>
  type MapperType = A
What does it mean with regard to the type parameter of the Mapper trait? The type A, which is a parameter of Mapper, is required to be a subclass of Mapper[A]. How is that even possible, or maybe I just don't understand the meaning of this definition?
This pattern is used to be able to capture the actual subtype of Mapper, which is useful for accepting arguments of that exact type in methods.
Traditionally you can't declare that constraint:
scala> trait A { def f(other: A): A }
defined trait A
scala> class B extends A { def f(other: B): B = sys.error("TODO") }
<console>:11: error: class B needs to be abstract,
since method f in trait A of type (other: A)A is not defined
(Note that A does not match B)
class B extends A { def f(other: B): B = sys.error("TODO") }
While when you have access to the precise type you can do:
scala> trait A[T <: A[T]] { def f(other: T): T }
defined trait A
scala> class B extends A[B] { def f(other: B): B = sys.error("TODO") }
defined class B
Note that this is also possible via bounded type members:
trait A { type T <: A; def f(other: T): T }
class B extends A { type T <: B; def f(other: T): T = sys.error("TODO") }
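To see why Lift combines the type parameter with the self type, here is a rough sketch (simplified, with made-up members rather than Lift's actual API): the self type lets code inside the trait treat this as an A, so methods can return the precise subtype:

trait Mapper[A <: Mapper[A]] { self: A =>
  type MapperType = A

  // 'this' conforms to A because of the self type, so we can return the precise type
  def touch(): A = this
}

class User extends Mapper[User] {
  var name: String = ""
}

val u: User = (new User).touch() // returns User, not just Mapper[User]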

Is it possible to override a type field?

scala> class C
defined class C
scala> class subC extends C
defined class subC
scala> class A { type T = C}
defined class A
scala> class subA extends A { override type T = subC}
<console>:10: error: overriding type T in class A, which equals C;
type T has incompatible type
class subA extends A { override type T = subC}
^
In the example above, I get an error message saying that I cannot override the type field in class A (even though the chosen type subC extends the class C).
Is overriding a type field possible at all ? And if yes, what is wrong with the example above ?
You wouldn't speak of 'overriding' with respect to types, but rather narrowing their bounds.
(1) type T ... no bounds
(2) type T <: C ... T is C or a subtype of C (which is called an upper bound)
(3) type T >: C ... T is C or a supertype of C (which is called a lower bound)
(4) type T = C ... T is exactly C (a type alias)
Therefore, if T is a type member of trait A, and SubA is a subtype of A, then in case (2) SubA may narrow T to a more specific subtype of C, whereas in case (3) it may constrain T to C or a more general supertype of C. Case (1) doesn't impose any restrictions on SubA, while case (4) means that T is 'final', so to speak.
This has consequences for the useability of T in A—whether it may appear as a method argument's type or a method's return type.
Example:
trait C { def foo = () }
trait SubC extends C { def bar = () }

trait MayNarrow1 {
  type T <: C // allows contravariant positions in MayNarrow1
  def m(t: T): Unit = t.foo // ...like this
}
object Narrowed1 extends MayNarrow1 {
  type T = SubC
}
object Narrowed2 extends MayNarrow1 {
  type T = SubC
  override def m(t: T): Unit = t.bar
}
It is possible to define method m in MayNarrow1 because type T occurs in contravariant position (as the type of a method argument), so it remains valid even if T is narrowed down in a subtype of MayNarrow1 (the method body can treat t as if it were of type C).
In contrast, type T = C inevitably fixes T, which would kind of correspond to making a method final. By fixing T, it can be used in a covariant position (as a method's return type):
trait Fixed extends MayNarrow1 {
  type T = C // make that T <: C to see that it won't compile
  final def test: T = new C {}
}
You can now easily see that it must be forbidden to further 'override' T:
trait Impossible extends Fixed {
  override type T = SubC
  test.bar // oops...
}
To be complete, here is the less common case of a lower bound:
trait MayNarrow2 {
  type T >: SubC // allows covariant positions in MayNarrow2
  def test: T = new SubC {}
}
object Narrowed3 extends MayNarrow2 {
  type T = C
  test.foo
}
object Narrowed4 extends MayNarrow2 {
  type T = C
  override def test: T = new C {}
}