I have been looking into the details of shapeless singletons and have encountered a small example that doesn't work as expected. I thought that as long as we pass a value with a singleton type to a method, there should be an implicit Witness.Aux[_] available in scope:
import shapeless._
import syntax.singleton._
object SingletonTest extends App {
def check[K](a: K)(implicit witness: Witness.Aux[K]): Unit = {
println(witness.value)
}
val b = 'key.narrow
val c: Witness.`'key`.T = b
check(c) // Works!
check(b) /* Fails: shapeless.this.Witness.apply is not a valid implicit value for shapeless.Witness.Aux[Symbol with shapeless.tag.Tagged[String("key")]] because:
hasMatchingSymbol reported error: Type argument Symbol with shapeless.tag.Tagged[String("key")] is not a singleton type */
}
I would expect the types of b and c in the example to be the same (and checking it with =:= succeeds). If I add an implicit Witness.Aux[Witness.`'key`.T] into scope manually, the code compiles.
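For reference, that manual workaround looks roughly like this (a sketch; it assumes shapeless 2.3.x, where Witness('key) materializes a witness for the literal):
// Hedged sketch: with an exact witness in scope, check(b) resolves.
implicit val keyWitness: Witness.Aux[Witness.`'key`.T] = Witness('key)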
Environment: Scala 2.11.8; Shapeless 2.3.0
The explanation is that the Scala compiler will widen the singleton type on the right hand side when inferring the type of the val on the left ... you'll find chapter and verse in the relevant section of the Scala Language Specification.
Here's an example of the same phenomenon, independent of shapeless:
scala> class Foo ; val foo = new Foo
defined class Foo
foo: Foo = Foo@8bd1b6a
scala> val f1 = foo
f1: Foo = Foo@8bd1b6a
scala> val f2: foo.type = foo
f2: foo.type = Foo@8bd1b6a
As you can see from the definition of f2 the Scala compiler knows that the value foo has the more precise type foo.type (ie. the singleton type of val foo), however, unless explicitly requested it won't infer that more precise type. Instead it infers the non-singleton (ie. widened) type Foo as you can see in the case of f1.
See also this related answer.
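A hedged sketch of one way to sidestep the widening altogether: accept the witness itself, so it is materialized at the call site before the argument's type can be widened. (Witness.Lt and the implicit Witness.apply conversion are shapeless 2.3.x API; the method name checkLt is mine.)
// Sketch: the implicit Witness.apply macro converts the literal into a
// witness whose T member is the singleton type, so nothing is widened.
def checkLt(w: Witness.Lt[Symbol]): Unit = println(w.value)
checkLt('key) // prints 'key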
Related
There is a difference in the way the Scala 2.13.3 compiler determines which overloaded function to call compared to which overloaded implicit to pick.
object Thing {
trait A;
trait B extends A;
trait C extends A;
def f(a: A): String = "A"
def f(b: B): String = "B"
def f(c: C): String = "C"
implicit val a: A = new A {};
implicit val b: B = new B {};
implicit val c: C = new C {};
}
import Thing._
scala> f(new B{})
val res1: String = B
scala> implicitly[B]
val res2: Thing.B = Thing$$anon$2@2f64f99f
scala> f(new A{})
val res3: String = A
scala> implicitly[A]
^
error: ambiguous implicit values:
both value b in object Thing of type Thing.B
and value c in object Thing of type Thing.C
match expected type Thing.A
As we can see, the overload resolution worked for the function call but not for the implicit pick. Why isn't the implicit offered by val a chosen, as occurs with function calls? If the caller asks for an instance of A, why does the compiler consider instances of B and C when an instance of A is in scope? There would be no ambiguity if the resolution logic were the same as for function calls.
Edit 2:
The Edit 1 was removed because the assertion I wrote there was wrong.
In response to the comments I added another test to see what happens when the implicit val c: C is removed. In that case the compiler doesn't complain and picks implicit val b: B even though the caller asked for an instance of A.
object Thing {
trait A { def name = 'A' };
trait B extends A { def name = 'B' };
trait C extends A { def name = 'C' };
def f(a: A): String = "A"
def f(b: B): String = "B"
implicit val a: A = new A {};
implicit val b: B = new B {};
}
import Thing._
scala> f(new A{})
val res0: String = A
scala> implicitly[A].name
val res3: Char = B
So, implicit resolution differs from the overload resolution of function calls more than I expected.
Anyway, I still don't see why the designers of Scala decided to apply different resolution logic to function overloading and to implicit selection. (Edit: I later noticed why.)
Let's see what happens in a real world example.
Suppose we are writing a JSON parser that converts a JSON string directly to Scala abstract data types, and we want it to support many standard collections.
The snippet in charge of parsing the iterable collections would be something like this:
trait Parser[+A] {
def parse(input: Input): ParseResult;
///// many combinators here
}
implicit def summonParser[T](implicit parserT: Parser[T]): Parser[T] = parserT;
/** @tparam IC iterable collection type constructor
  * @tparam E element's type */
implicit def iterableParser[IC[E] <: Iterable[E], E](
implicit
parserE: Parser[E],
factory: IterableFactory[IC]
): Parser[IC[E]] = '[' ~> skipSpaces ~> (parserE <~ skipSpaces).repSepGen(coma <~ skipSpaces, factory.newBuilder[E]) <~ skipSpaces <~ ']';
Which requires a Parser[E] for the elements and an IterableFactory[IC] to construct the collection specified by the type parameters.
So, we have to put in implicit scope an instance of IterableFactory for every collection type we want to support.
implicit val iterableFactory: IterableFactory[Iterable] = Iterable
implicit val setFactory: IterableFactory[Set] = Set
implicit val listFactory: IterableFactory[List] = List
implicit val vectorFactory: IterableFactory[Vector] = Vector
With the current implicit resolution logic implemented by the Scala compiler, this snippet works fine for Set, List, and Vector, but not for Iterable.
scala> def parserInt: Parser[Int] = ???
def parserInt: read.Parser[Int]
scala> Parser[List[Int]]
val res0: read.Parser[List[Int]] = read.Parser$$anonfun$pursue$3@3958db82
scala> Parser[Vector[Int]]
val res1: read.Parser[Vector[Int]] = read.Parser$$anonfun$pursue$3@648f48d3
scala> Parser[Iterable[Int]]
^
error: could not find implicit value for parameter parserT: read.Parser[Iterable[Int]]
And the reason is that IterableFactory is covariant in its type constructor, so the factories for the concrete collection types all conform to IterableFactory[Iterable] and tie for specificity:
scala> implicitly[IterableFactory[Iterable]]
^
error: ambiguous implicit values:
both value listFactory in object IterableParser of type scala.collection.IterableFactory[List]
and value vectorFactory in object IterableParser of type scala.collection.IterableFactory[Vector]
match expected type scala.collection.IterableFactory[Iterable]
On the contrary, if the resolution logic for implicits were like the one for function calls, this would work fine.
Edit 3: After many, many coffees I noticed that, contrary to what I said above, there is no difference between the way the compiler decides which overloaded function to call and which overloaded implicit to pick.
In the case of a function call: from all the overloads such that the type of the argument is assignable to the type of the parameter, the compiler chooses the one whose parameter type is assignable to all the others. If no overload satisfies that, a compilation error is thrown.
In the case of an implicit pick: from all the implicits in scope such that the type of the implicit is assignable to the requested type, the compiler chooses the one whose declared type is assignable to all the others. If no implicit satisfies that, a compilation error is thrown.
My mistake was that I didn't notice the inversion of the assignability.
Anyway, the resolution logic I proposed above (give me exactly what I asked for) is not entirely wrong. It solves the particular case I mentioned. But for most use cases the logic implemented by the Scala compiler (and, I suppose, by all the other languages that support type classes) is better.
As explained in the Edit 3 section of the question, there are similarities between the way the compiler decides which overloaded function to call and which overloaded implicit to pick. In both cases the compiler does two steps:
Filters out all the alternatives that are not assignable.
From the remaining alternatives, chooses the most specific one, or complains if there is more than one.
In the case of the function call, the most specific alternative is the function with the most specific parameter type; in the case of the implicit pick, it is the instance with the most specific declared type.
But if the logic in both cases were exactly the same, then why did the example in the question give different results? Because there is a difference: the assignability requirements that determine which alternatives pass the first step are opposite.
In the case of the function call, the surviving alternatives are the functions whose parameter type is more generic than the argument type; in the case of the implicit pick, the surviving alternatives are the instances whose declared type is more specific than the requested type. A minimal sketch of this inversion follows.
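Here is that sketch (illustrative names; it condenses the two examples from the question into one snippet):
object InversionDemo {
  trait A; trait B extends A
  // Overloads: the argument must be assignable *to* the parameter type.
  def g(a: A): String = "overload A"
  def g(b: B): String = "overload B"
  // Implicits: the instance's declared type must be assignable *to* the requested type.
  implicit val ia: A = new A { override def toString = "instance A" }
  implicit val ib: B = new B { override def toString = "instance B" }
  def run(): Unit = {
    println(g(new B {}))   // "overload B": most specific applicable overload wins
    println(implicitly[A]) // "instance B": most specific conforming instance wins
  }
}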
The above is enough to answer the question itself, but it doesn't solve the problem that motivated it: how do we force the compiler to pick the implicit instance whose declared type is exactly the same as the summoned type? The answer is: by wrapping the implicit instances inside an invariant wrapper, as sketched below.
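A hedged sketch of that technique, applied to the IterableFactory ambiguity above (Exactly is an illustrative name, not a library type):
import scala.collection.IterableFactory
// Invariant in A: Exactly[IterableFactory[List]] is unrelated to
// Exactly[IterableFactory[Iterable]], so only an exact match is eligible.
final case class Exactly[A](value: A)
implicit val iterableFactoryE: Exactly[IterableFactory[Iterable]] = Exactly(Iterable)
implicit val listFactoryE: Exactly[IterableFactory[List]] = Exactly(List)
implicit val vectorFactoryE: Exactly[IterableFactory[Vector]] = Exactly(Vector)
implicitly[Exactly[IterableFactory[Iterable]]].value // unambiguous now
The iterableParser above would then request an Exactly[IterableFactory[IC]] and unwrap it.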
Browsing Shapeless code, I came across this seemingly extraneous {} here and here:
trait Witness extends Serializable {
type T
val value: T {}
}
trait SingletonOps {
import record._
type T
def narrow: T {} = witness.value
}
I almost ignored it as a typo, since it seemed to do nothing, but apparently it does something. See this commit: https://github.com/milessabin/shapeless/commit/56a3de48094e691d56a937ccf461d808de391961
I have no idea what it does. Can someone explain?
Any type can be followed by a {}-enclosed sequence of type and abstract non-type member definitions. This is known as a "refinement" and is used to provide additional precision over the base type that is being refined. In practice refinements are most commonly used to express constraints on abstract type members of the type being refined.
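For instance, a typical (hypothetical) refinement pins down an abstract type member:
trait Container { type Elem; def value: Elem }
// The refinement { type Elem = Int } constrains the abstract member Elem.
def firstInt(c: Container { type Elem = Int }): Int = c.value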
It's a little known fact that this sequence is allowed to be empty, and in the form that you can see in the shapeless source code, T {} is the type T with an empty refinement. Any empty refinement is ... empty ... so doesn't add any additional constraints to the refined type and hence the types T and T {} are equivalent. We can get the Scala compiler to verify that for us like so,
scala> implicitly[Int =:= Int {}]
res0: =:=[Int,Int] = <function1>
So why would I do such an apparently pointless thing in shapeless? It's because of the interaction between the presence of refinements and type inference. If you look in the relevant section of the Scala Language Specification you will see that the type inference algorithm attempts to avoid inferring singleton types in at least some circumstances. Here is an example of it doing just that,
scala> class Foo ; val foo = new Foo
defined class Foo
foo: Foo = Foo@8bd1b6a
scala> val f1 = foo
f1: Foo = Foo@8bd1b6a
scala> val f2: foo.type = foo
f2: foo.type = Foo@8bd1b6a
As you can see from the definition of f2 the Scala compiler knows that the value foo has the more precise type foo.type (ie. the singleton type of val foo), however, unless explicitly requested it won't infer that more precise type. Instead it infers the non-singleton (ie. widened) type Foo as you can see in the case of f1.
But in the case of Witness in shapeless I explicitly want the singleton type to be inferred for uses of the value member (the whole point of Witness is to enable us to pass between the type and value levels via singleton types), so is there any way the Scala compiler can be persuaded to do that?
It turns out that an empty refinement does exactly that,
scala> def narrow[T <: AnyRef](t: T): t.type = t
narrow: [T <: AnyRef](t: T)t.type
scala> val s1 = narrow("foo") // Widened
s1: String = foo
scala> def narrow[T <: AnyRef](t: T): t.type {} = t // Note empty refinement
narrow: [T <: AnyRef](t: T)t.type
scala> val s2 = narrow("foo") // Not widened
s2: String("foo") = foo
As you can see in the above REPL transcript, in the first case s1 has been typed as the widened type String whereas s2 has been assigned the singleton type String("foo").
Is this mandated by the SLS? No, but it is consistent with it, and it makes some sort of sense. Much of Scala's type inference mechanics are implementation defined rather than spec'ed and this is probably one of the least surprising and problematic instances of that.
I was stuck for like an hour before I discovered this fact:
class Foo {
trait TypeClass[X]
object TypeClass {
implicit val gimme = new TypeClass[Int]{}
}
def foo[X : TypeClass](p: X): Unit = println("yeah " + p)
}
// compiles
val foo = new Foo()
foo.foo(4)
//does not compile
new Foo().foo(4)
could not find implicit value for evidence parameter of type _1.TypeClass[Int]
[error] new Foo().foo(4)
[error]
I can't figure out why that is. The only thing I can think of is that scalac doesn't find implicits within a type that has no stable value accessible as a prefix: the path-dependent type cannot be referenced. Scalac apparently needs a stable path like Foo.this.foo to resolve the implicits in it, which it doesn't have in this case.
I feel like if you combine type classes and path-dependent types, you are effectively doomed; you'll end up dealing with this kind of stuff. I chose this kind of design because scalac wouldn't otherwise infer types in my API methods and users would have to declare them explicitly. So types are constructed in Foo[T] and the API methods use the existing types, but I hit several really ugly problems and bugs of this kind that made my app look like overengineered crap...
Path-dependent types may be bound only to stable, immutable values, so the more obvious examples below also will not work, because stability is not guaranteed:
scala> var foo = new Foo()
foo: Foo = Foo@4bc814ba
scala> foo.foo(4)
<console>:17: error: could not find implicit value for evidence parameter of type _37.TypeClass[Int]
foo.foo(4)
^
scala> def foo = new Foo()
foo: Foo
scala> foo.foo(4)
<console>:17: error: could not find implicit value for evidence parameter of type _39.TypeClass[Int]
foo.foo(4)
^
_37 means that the type was not inferred. So it seems that Scala infers the path-dependent type only after the instance is assigned to some val. It's not actually related to implicits; this gives a clearer demonstration:
scala> class C {type K = Int}
defined class C
scala> var z = new C
z: C = C@4d151931
scala> def aaa(a: z.K) = a
<console>:16: error: stable identifier required, but z found.
def aaa(a: z.K) = a
^
scala> def z = new C
z: C
scala> def aaa(a: z.K) = a
<console>:16: error: stable identifier required, but z found.
def aaa(a: z.K) = a
^
Your new Foo expression is similar to def newFoo = new Foo, so it's considered unstable.
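So the practical fix (already visible in the question) is to bind the instance to a val first, giving the compiler a stable prefix for the path-dependent type:
val stable = new Foo() // a val is a stable identifier
stable.foo(4)          // compiles: the evidence sought is stable.TypeClass[Int]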
I'm reading the book Scala in Depth, chapter 5 about implicits. The author says this on page 102:
The implicit scope used for implicit views is the same as for implicit parameters. But when the compiler is looking for type associations, it uses the type it's attempting to convert from [my emphasis], not the type it's attempting to convert to.
And yet, a few pages later he shows an example with a complexmath.ComplexNumber class. You import i, which is a ComplexNumber, and call its * method, which takes a ComplexNumber argument.
import complexmath.i
i * 1.0
To convert 1.0 into a ComplexNumber, this finds an implicit conversion that was defined like so:
package object complexmath {
  implicit def realToComplex(r: Double): ComplexNumber = new ComplexNumber(r, 0)
  val i = ComplexNumber(0, 1)
}
But that contradicts the first statement, no? It needed to find a Double => ComplexNumber conversion. Why did it look in the complexmath package, which is part of the implicit scope for ComplexNumber but not for Double?
The spec says about views:
the implicit scope is the one of T => pt.
i.e., Function[T, pt], so implicit scope includes the classes associated with both T and pt, the source and target of the conversion.
scala> :pa
// Entering paste mode (ctrl-D to finish)
class B
class A
object A { implicit def x(a: A): B = new B }
// Exiting paste mode, now interpreting.
warning: there were 1 feature warning(s); re-run with -feature for details
defined class B
defined class A
defined object A
scala> val b: B = new A
b: B = B@63b41a65
scala> def f(b: B) = 3 ; def g = f(new A)
f: (b: B)Int
g: Int
scala> :pa
// Entering paste mode (ctrl-D to finish)
class A
class B
object B { implicit def x(a: A): B = new B }
// Exiting paste mode, now interpreting.
scala> val b: B = new A
b: B = B@6ba3b481
I think you are misunderstanding his text.
The compiler will look for the implicit conversion in all available scopes until it finds a suitable one.
In the example you gave, it'll find the one provided by the complexmath package.
However, it is the definition of 'suitable one' that we are interested in here. In the case of implicit conversions, the compiler will look for a conversion from Double to the expected type, ComplexNumber.
In other words, it'll inspect all conversions from Double until it finds one that can convert a Double to the target type.
Josh, the author, is not saying that the compiler needs a conversion defined in an object associated with Double. The conversion can be defined anywhere.
In this particular case, the conversion is defined in the package object associated with ComplexNumber. And that's natural: it is ComplexNumber that 'wants' to be compatible with Double.
And since using ComplexNumber implies importing the package complexmath, the conversion will always be in scope.
So this is more about what the compiler already knows about an expression.
You have in the example:
import complexmath.i
i * 1.0
The compiler looks at this to start with:
1) I have a type for i: it is ComplexNumber.
2) There is a method * on ComplexNumber; it takes an argument of type ComplexNumber.
3) You passed me a Double, but I need a ComplexNumber; let's see if I have an implicit conversion that gives me one.
This example works because the * method is defined on ComplexNumber.
Hope that helps!
Either source or target works:
object Foo {
implicit def bar(b: Bar): Foo = new Foo {}
implicit def foo(f: Foo): Bar = new Bar {}
}
trait Foo
trait Bar
implicitly[Foo => Bar] // ok
implicitly[Bar => Foo] // ok
val b = new Bar {}
val bf: Foo = b // ok
val f = new Foo {}
val fb: Bar = f // ok
So I think that sentence is wrong (?)
I would like to define a method parameterized with type T that has behavior dependent on what implicit argument can be found of type Box[T]. The following code has this method defined as foo. When called with foo[Int] or foo[String] it will without issue return 1 or "two" as expected.
Where things get weird is with the method bar. It is defined as returning an Int, but instead of foo[Int] I have just foo. My hope was that the compiler would infer that T must be of type Int. It does not do that and instead fails:
bash $ scalac Code.scala
Types.scala:15: error: ambiguous implicit values:
both value one in object Main of type => Main.Box[Int]
and value two in object Main of type => Main.Box[java.lang.String]
match expected type Main.Box[T]
def bar: Int = foo
^
one error found
What is causing this error? Replacing foo with foo[Int] compiles fine. The simpler situation where there is no Box[T] type also compiles fine. That example is also below and uses argle and bargle instead of foo and bar.
object Main extends Application {
case class Box[T](value: T)
implicit val one = Box(1)
implicit val two = Box("two")
def foo[T](implicit x: Box[T]): T = {
x.value
}
// does not compile:
// def bar: Int = foo
// does compile
def bar: Int = foo[Int]
println(bar)
// prints 1
// the simpler situation where there is no Box type
implicit val three = 3
implicit val four = "four"
def argle[T](implicit x: T): T = x
def bargle: String = argle
println(bargle)
// prints "four"
}
What is going on in this snippet that causes this behavior? What about this interaction of implicit arguments, type inference, and erasure is causing problems? Is there a way to modify this code such that the line def bar: Int = foo works?
Someone else will have to explain why the type inference mechanism cannot handle that case, but if you are looking to cleanup your code you could probably do this:
object Test extends App {
case class Box[T](value: T)
implicit val one: Box[Int] = Box(1)
implicit val two: Box[String] = Box("two")
def foo[T : Box]: T = implicitly[Box[T]].value
val bar = foo[Int]
}
Note that:
I removed the type annotation from bar, so you are really only indicating the type once (just in a different spot than you wanted).
I am using App instead of the deprecated Application.
foo uses a context bound in its type signature.
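For reference, the context bound T : Box desugars to an implicit parameter, so foo above is equivalent to this explicit form (using the Box type from the snippet):
// Equivalent desugared form of foo: the Box[T] evidence becomes explicit.
def fooDesugared[T](implicit box: Box[T]): T = box.value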
This might be related to SI-3346, though there it is implicit arguments to implicit conversions, and here you have a single implicit.