A diverging implicit expansion in Scala, involving chained implicits

(note: this problem is fixed as of Scala 2.13, see here: https://github.com/scala/scala/pull/6050)
I am working on a system of Scala types that involves chained implicits. This system behaves as I expected in many cases, but fails via diverging expansion in others. So far I haven't come up with a good explanation for the divergence, and I'm hoping the community can explain it for me!
Here is a simplified system of types that reproduces the problem:
object repro {
  import scala.reflect.runtime.universe._

  trait +[L, R]

  case class Atomic[V](val name: String)
  object Atomic {
    def apply[V](implicit vtt: TypeTag[V]): Atomic[V] = Atomic[V](vtt.tpe.typeSymbol.name.toString)
  }

  case class Assign[V, X](val name: String)
  object Assign {
    def apply[V, X](implicit vtt: TypeTag[V]): Assign[V, X] = Assign[V, X](vtt.tpe.typeSymbol.name.toString)
  }

  trait AsString[X] {
    def str: String
  }
  object AsString {
    implicit def atomic[V](implicit a: Atomic[V]): AsString[V] =
      new AsString[V] { val str = a.name }
    implicit def assign[V, X](implicit a: Assign[V, X], asx: AsString[X]): AsString[V] =
      new AsString[V] { val str = asx.str }
    implicit def plus[L, R](implicit asl: AsString[L], asr: AsString[R]): AsString[+[L, R]] =
      new AsString[+[L, R]] { val str = s"(${asl.str}) + (${asr.str})" }
  }

  trait X
  implicit val declareX = Atomic[X]
  trait Y
  implicit val declareY = Atomic[Y]
  trait Z
  implicit val declareZ = Atomic[Z]
  trait Q
  implicit val declareQ = Assign[Q, (X + Y) + Z]
  trait R
  implicit val declareR = Assign[R, Q + Z]
}
Following is a demo of the behavior, with some working cases and then the diverging failure:
scala> :load /home/eje/divergence-repro.scala
Loading /home/eje/divergence-repro.scala...
defined module repro
scala> import repro._
import repro._
scala> implicitly[AsString[X]].str
res0: String = X
scala> implicitly[AsString[X + Y]].str
res1: String = (X) + (Y)
scala> implicitly[AsString[Q]].str
res2: String = ((X) + (Y)) + (Z)
scala> implicitly[AsString[R]].str
<console>:12: error: diverging implicit expansion for type repro.AsString[repro.R]
starting with method assign in object AsString
implicitly[AsString[R]].str

You'll be surprised to know you haven't done anything wrong! Well, on a logical level at least. What you've encountered here is a known behavior of the Scala compiler when resolving implicits for recursive data structures. A good explanation of this behavior is given in the book The Type Astronaut's Guide to Shapeless:
Implicit resolution is a search process. The compiler uses heuristics to determine whether it is “converging” on a solution. If the heuristics don’t yield favorable results for a particular branch of search, the compiler assumes the branch is not converging and moves onto another.
One heuristic is specifically designed to avoid infinite loops. If the compiler sees the same target type twice in a particular branch of search, it gives up and moves on. We can see this happening if we look at the expansion for CsvEncoder[Tree[Int]]. The implicit resolution process goes through the following types:
CsvEncoder[Tree[Int]] // 1
CsvEncoder[Branch[Int] :+: Leaf[Int] :+: CNil] // 2
CsvEncoder[Branch[Int]] // 3
CsvEncoder[Tree[Int] :: Tree[Int] :: HNil] // 4
CsvEncoder[Tree[Int]] // 5 uh oh
We see Tree[A] twice in lines 1 and 5, so the compiler moves onto another branch of search. The eventual consequence is that it fails to find a suitable implicit.
In your case, if the compiler had kept going and not given up so early, it would eventually have reached the solution! But remember, not every diverging implicit error is a false compiler alarm; some searches really are diverging / infinitely expanding.
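Sketching the abandoned search by hand (just following the implicit definitions in the repro; the indentation and comments are mine) shows that it does in fact terminate:

AsString[R]                 // via assign[R, Q + Z]
  AsString[Q + Z]           // via plus
    AsString[Q]             // via assign[Q, (X + Y) + Z]
      AsString[(X + Y) + Z] // via plus
        AsString[X + Y]     // via plus
          AsString[X]       // via atomic -- terminates
          AsString[Y]       // via atomic -- terminates
        AsString[Z]         // via atomic -- terminates
    AsString[Z]             // via atomic -- terminates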
I know of two solutions to this issue:
Macro-based lazy evaluation of recursive implicits:
The shapeless library has a Lazy type that defers evaluation of the implicit it wraps and, as a result, prevents this diverging implicit error. Explaining it in depth is beyond the scope of this question, but you should check it out; a rough sketch of how it could apply here is given at the end of this answer.
Creating implicit checkpoints, so that the implicit for the recursive type is already available to the compiler:
implicitly[AsString[X]].str
implicitly[AsString[X + Y]].str

val asQ = implicitly[AsString[Q]]
asQ.str

{
  implicit val asQImplicitCheckpoint: AsString[Q] = asQ
  implicitly[AsString[R]].str
}
It's no shame if you are a fan of neither of these solutions. Shapeless's Lazy, while tried and true, is still a third-party library dependency, and with the removal of macros in Scala 3.0 I'm not certain what will become of all these macro-based techniques.
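For reference, here is a rough sketch (my own, assuming shapeless is on the classpath) of how the Lazy variant might look if the AsString instances in the repro were rewritten; wrapping the recursive premises in Lazy defers their resolution and should sidestep the divergence check:

import shapeless.Lazy

object AsString {
  implicit def atomic[V](implicit a: Atomic[V]): AsString[V] =
    new AsString[V] { val str = a.name }
  implicit def assign[V, X](implicit a: Assign[V, X], asx: Lazy[AsString[X]]): AsString[V] =
    new AsString[V] { val str = asx.value.str } // .value forces the deferred instance
  implicit def plus[L, R](implicit asl: Lazy[AsString[L]], asr: Lazy[AsString[R]]): AsString[+[L, R]] =
    new AsString[+[L, R]] { val str = s"(${asl.value.str}) + (${asr.value.str})" }
}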

Related

scala overloading resolution differences between function calls and implicit search

There is a difference in the way the scala 2.13.3 compiler determines which overloaded function to call compared to which overloaded implicit to pick.
object Thing {
  trait A
  trait B extends A
  trait C extends A

  def f(a: A): String = "A"
  def f(b: B): String = "B"
  def f(c: C): String = "C"

  implicit val a: A = new A {}
  implicit val b: B = new B {}
  implicit val c: C = new C {}
}
import Thing._
scala> f(new B{})
val res1: String = B
scala> implicitly[B]
val res2: Thing.B = Thing$$anon$2@2f64f99f
scala> f(new A{})
val res3: String = A
scala> implicitly[A]
^
error: ambiguous implicit values:
both value b in object Thing of type Thing.B
and value c in object Thing of type Thing.C
match expected type Thing.A
As we can see, the overload resolution worked for the function call but not for the implicit pick. Why isn't the implicit offered by val a chosen, as happens with function calls? If the caller asks for an instance of A, why does the compiler consider instances of B and C when an instance of A is in scope? There would be no ambiguity if the resolution logic were the same as for function calls.
Edit 2:
Edit 1 was removed because the assertion I wrote there was wrong.
In response to the comments, I added another test to see what happens when implicit val c: C is removed. In that case the compiler doesn't complain and picks implicit val b: B, despite the caller asking for an instance of A.
object Thing {
  trait A { def name = 'A' }
  trait B extends A { override def name = 'B' }
  trait C extends A { override def name = 'C' }

  def f(a: A): String = "A"
  def f(b: B): String = "B"

  implicit val a: A = new A {}
  implicit val b: B = new B {}
}
import Thing._
scala> f(new A{})
val res0: String = A
scala> implicitly[A].name
val res3: Char = B
So, the overload resolution for implicits differs from function calls more than I expected.
Anyway, I still couldn't see a reason why the designers of Scala decided to apply different resolution logic to function and implicit overloading. (Edit: I noticed why later.)
Let's see what happens in a real-world example.
Suppose we are writing a JSON parser that converts a JSON string directly to Scala abstract data types, and we want it to support many standard collections.
The snippet in charge of parsing the iterable collections would be something like this:
trait Parser[+A] {
  def parse(input: Input): ParseResult
  ///// many combinators here
}

implicit def summonParser[T](implicit parserT: Parser[T]) = parserT

/** @tparam IC iterator type constructor
  * @tparam E element's type */
implicit def iterableParser[IC[E] <: Iterable[E], E](
  implicit
  parserE: Parser[E],
  factory: IterableFactory[IC]
): Parser[IC[E]] =
  '[' ~> skipSpaces ~> (parserE <~ skipSpaces).repSepGen(coma <~ skipSpaces, factory.newBuilder[E]) <~ skipSpaces <~ ']'
This requires a Parser[E] for the elements and an IterableFactory[IC] to construct the collection specified by the type parameters.
So we have to put an instance of IterableFactory in implicit scope for every collection type we want to support:
implicit val iterableFactory: IterableFactory[Iterable] = Iterable
implicit val setFactory: IterableFactory[Set] = Set
implicit val listFactory: IterableFactory[List] = List
With the current implicit resolution logic implemented by the scala compiler, this snippet works fine for Set and List, but not for Iterable.
scala> def parserInt: Parser[Int] = ???
def parserInt: read.Parser[Int]
scala> Parser[List[Int]]
val res0: read.Parser[List[Int]] = read.Parser$$anonfun$pursue$3@3958db82
scala> Parser[Vector[Int]]
val res1: read.Parser[Vector[Int]] = read.Parser$$anonfun$pursue$3@648f48d3
scala> Parser[Iterable[Int]]
^
error: could not find implicit value for parameter parserT: read.Parser[Iterable[Int]]
And the reason is:
scala> implicitly[IterableFactory[Iterable]]
^
error: ambiguous implicit values:
both value listFactory in object IterableParser of type scala.collection.IterableFactory[List]
and value vectorFactory in object IterableParser of type scala.collection.IterableFactory[Vector]
match expected type scala.collection.IterableFactory[Iterable]
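(Both candidates match because IterableFactory is covariant in its type-constructor parameter, so every factory declared for a concrete subtype also conforms to the requested IterableFactory[Iterable]; a quick check:)

import scala.collection.IterableFactory

implicitly[IterableFactory[List] <:< IterableFactory[Iterable]]   // compiles
implicitly[IterableFactory[Vector] <:< IterableFactory[Iterable]] // compiles as well, hence the ambiguity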
On the contrary, if the overload resolution logic for implicits were like the one for function calls, this would work fine.
Edit 3: After many, many coffees I noticed that, contrary to what I said above, there is no difference between the way the compiler decides which overloaded function to call and which overloaded implicit to pick.
In the case of a function call: from all the overloads such that the type of the argument is assignable to the type of the parameter, the compiler chooses the one whose parameter type is assignable to all the others. If no function satisfies that, a compilation error is thrown.
In the case of an implicit pick: from all the implicits in scope such that the type of the implicit is assignable to the asked type, the compiler chooses the one whose declared type is assignable to all the others. If no implicit satisfies that, a compilation error is thrown.
My mistake was that I didn't notice the inversion of the assignability.
Anyway, the resolution logic I proposed above (give me what I asked for) is not entirely wrong. It solves the particular case I mentioned. But for most use cases the logic implemented by the Scala compiler (and, I suppose, by all the other languages that support type classes) is better.
As explained in the Edit 3 section of the question, there are similarities between the way the compiler decides which overloaded function to call and which overloaded implicit to pick. In both cases the compiler does two steps:
It filters out all the alternatives that are not assignable.
From the remaining alternatives it chooses the most specific one, or complains if there is more than one.
In the case of a function call, the most specific alternative is the function with the most specific parameter type; in the case of an implicit pick, it is the instance with the most specific declared type.
But if the logic in both cases were exactly the same, why does the example in the question give different results? Because there is a difference: the assignability requirement that determines which alternatives pass the first step is reversed.
In the case of a function call, the functions that survive the first step are those whose parameter type is more generic than the argument type; in the case of an implicit pick, the instances that survive are those whose declared type is more specific than the asked type.
The above is enough to answer the question itself, but it doesn't solve the problem that motivated it: how do you force the compiler to pick the implicit instance whose declared type is exactly the same as the summoned type? The answer is: wrap the implicit instances inside an invariant wrapper.
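A minimal sketch of that idea, applied to the IterableFactory instances from the question (my own illustration; the Exactly name is made up). Because the wrapper is invariant in its type parameter, Exactly[IterableFactory[List]] is not a subtype of Exactly[IterableFactory[Iterable]], so each summon matches only the instance declared for exactly the requested collection type:

import scala.collection.IterableFactory

// Invariant wrapper: no subtyping between Exactly[IterableFactory[List]]
// and Exactly[IterableFactory[Iterable]].
final case class Exactly[A](value: A)

object Factories {
  implicit val iterableFactory: Exactly[IterableFactory[Iterable]] = Exactly(Iterable)
  implicit val listFactory: Exactly[IterableFactory[List]] = Exactly(List)
  implicit val setFactory: Exactly[IterableFactory[Set]] = Exactly(Set)
}
import Factories._

implicitly[Exactly[IterableFactory[Iterable]]].value // unambiguously picks iterableFactory
implicitly[Exactly[IterableFactory[List]]].value     // unambiguously picks listFactory

The iterableParser would then take an Exactly[IterableFactory[IC]] and build the collection with factory.value.newBuilder[E].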

Conditionally generating implicits in scala

I am working on a system of chained implicit functions, similar to the simplified example below. The test c1.payload == c2.payload represents a test I need to do that is not in "type-space"; I had expected to drop into a macro for the definition of witnessEvidence, but Scala apparently does not allow macro definitions with implicit arguments of arbitrary type (only WeakTypeTag values!), so I am a bit stumped about how to proceed. The code below shows logically what I'd like to happen; however, an implicit function can't conditionally produce or not produce evidence (unless it is inside a macro implementation).
case class Capsule[T](payload: Int)

trait A
trait B
trait C

implicit val capa = Capsule[A](3)
implicit val capb = Capsule[B](3)
implicit val capc = Capsule[C](7)

case class Evidence[T1, T2](e: Int)

implicit def witnessEvidence[T1, T2](implicit c1: Capsule[T1], c2: Capsule[T2]): Evidence[T1, T2] = {
  if (c1.payload == c2.payload)
    Evidence[T1, T2](c1.payload)
  else
    // Do not produce the evidence
}

def foo[T1, T2](implicit ev: Evidence[T1, T2]) = ev.e

val f1 = foo[A, B] // this should compile
val f2 = foo[A, C] // this should fail with missing implicit!
This would not be possible as-is, since implicit resolution is done at compile time, while testing for value equivalence is done at runtime.
To make this work, you need to make the compiler understand values as types, so that you can ask for the type equality of the two 3s and use that to infer that capa =:= capb. To do that you can use singleton types: https://github.com/milessabin/shapeless/wiki/Feature-overview:-shapeless-2.0.0#singleton-typed-literals
If you need to do arithmetic beyond plain equality comparison, you will need to use Nat: https://github.com/milessabin/shapeless/blob/master/core/src/main/scala/shapeless/nat.scala
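As a rough sketch of that direction, here is my own adaptation of the question's code using Scala 2.13's built-in literal types rather than shapeless's singleton-typed literals (an assumption about your setup). Encoding the payload in the type turns the equality check into a type-level one, so the evidence is only derivable when the two literal types coincide:

case class Capsule[T, P <: Int with Singleton](payload: P)

trait A; trait B; trait C

implicit val capa: Capsule[A, 3] = Capsule(3)
implicit val capb: Capsule[B, 3] = Capsule(3)
implicit val capc: Capsule[C, 7] = Capsule(7)

case class Evidence[T1, T2](e: Int)

// Evidence exists only when both capsules share the same literal payload type P.
implicit def witnessEvidence[T1, T2, P <: Int with Singleton](
  implicit c1: Capsule[T1, P], c2: Capsule[T2, P]): Evidence[T1, T2] =
  Evidence[T1, T2](c1.payload)

def foo[T1, T2](implicit ev: Evidence[T1, T2]) = ev.e

val f1 = foo[A, B] // compiles: both payloads have type 3
// val f2 = foo[A, C] // does not compile: 3 and 7 are different types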

Who can explain the meaning of this scala code

I have been reading this code for a long time. I typed it into the REPL and it works as well,
but I don't have any idea of what's going on here. Why and how does this even work?!
import shapeless._

case class Size[L <: HList](get: Int)

object Size {
  implicit val hnilSize = Size[HNil](0)
  implicit def hconsSize[H, T <: HList](implicit tailSize: Size[T]) =
    Size[H :: T](1 + tailSize.get)
  def apply[L <: HList](l: L)(implicit size: Size[L]): Int = size.get
}

Size(1 :: "Foo" :: true :: HNil)
Can someone explain this step by step and help me understand what is going on here.
Yeah, that's pretty thick stuff.
The mind-bender here is that hconsSize is recursive without actually being self-referential.
Both apply and hconsSize pull in an implicit of type Size[X]. There are only two implicits that could fit that bill:
hnilSize, but only if X is the type HNil
hconsSize itself
So apply pulls in the hconsSize implicit, which adds 1 to the stack and pulls in another hconsSize implicit (not necessarily in that order). This continues until we encounter the HNil at the end of the list. Then the hnilSize implicit is pulled in, its get is zero, the stack is unrolled and all those 1's are added up.
Result: number of elements in the shapeless HList.
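To make that concrete, here is what the compiler effectively builds for the call in the question, written out by hand with the implicit arguments passed explicitly (using the Size definition above):

import shapeless._

val size: Int =
  Size(1 :: "Foo" :: true :: HNil)(
    Size.hconsSize[Int, String :: Boolean :: HNil](
      Size.hconsSize[String, Boolean :: HNil](
        Size.hconsSize[Boolean, HNil](
          Size.hnilSize))))
// size == 3, i.e. 1 + (1 + (1 + 0))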

Infinite recursion with Shapeless select[U]

I had a neat idea (well, that's debatable, but let's say I had an idea) for making implicit dependency injection easier in Scala. The problem I have is that if you call any methods which require an implicit dependency, you must also decorate the calling method with the same dependency, all the way through until that concrete dependency is finally in scope. My goal was to be able to encode a trait as requiring a group of implicits at the time it's mixed in to a concrete class, so it could go about calling methods that require the implicits, but defer their definition to the implementor.
The obvious way to do this is with some kind of self-type, a la this pseudo-Scala:
object ThingDoer {
  def getSomething(implicit foo: Foo): Int = ???
}

trait MyTrait { self: Requires[Foo and Bar and Bubba] =>
  // this normally fails to compile unless doThing takes an implicit Foo
  def doThing = ThingDoer.getSomething
}
After a few valiant attempts to actually implement a trait and[A,B] in order to get that nice syntax, I thought it would be smarter to start with shapeless and see if I could even get anywhere with that. I landed on something like this:
import shapeless._, ops.hlist._

trait Requires[L <: HList] {
  def required: L
  implicit def provide[T]: T = required.select[T]
}

object ThingDoer {
  def needsInt(implicit i: Int) = i + 1
}

trait MyTrait { self: Requires[Int :: String :: HNil] =>
  val foo = ThingDoer.needsInt
}

class MyImpl extends MyTrait with Requires[Int :: String :: HNil] {
  def required = 10 :: "Hello" :: HNil
  def showMe = println(foo)
}
I have to say, I was pretty excited when this actually compiled. But it turns out that when you actually instantiate MyImpl, you get an infinite mutual recursion between MyImpl.provide and Requires.provide.
The reason that I think it's due to some mistake I've made with shapeless is that when I step through, it's getting to that select[T] and then steps into HListOps (makes sense, since HListOps is what has the select[T] method) and then seems to bounce back into another call to Requires.provide.
My first thought was that it's attempting to get an implicit Selector[L,T] from provide, since provide doesn't explicitly guard against that. But,
The compiler should have realized that it wasn't going to get a Selector out of provide, and either chosen another candidate or failed to compile.
If I guard provide by requiring that it receive an implicit Selector[L,T] (in which case I could just apply the Selector to get the T) then it doesn't compile anymore due to diverging implicit expansion for type shapeless.ops.hlist.Selector[Int :: String :: HNil], which I don't really know how to go about addressing.
Aside from the fact that my idea is probably misguided to begin with, I'm curious to know how people typically go about debugging these kinds of mysterious, nitty-gritty things. Any pointers?
When I get confused about something related to implicits / type-level behaviour, I tend to find the reify technique useful:
scala> import scala.reflect.runtime.universe._
import scala.reflect.runtime.universe._
scala> val required: HList = HNil
required: shapeless.HList = HNil
scala> reify { implicit def provide[T]:T = required.select[T] }
res3: reflect.runtime.universe.Expr[Unit] =
Expr[Unit]({
implicit def provide[T]: T = HList.hlistOps($read.required).select[T](provide);
()
})
At this point it's easy to see what's gone wrong - the compiler thinks provide can provide any arbitrary T (because that's what you've told it), so it just calls provide to get the required Selector[L, T]. At compile time it only resolves this once, so there is no diverging implicit, no confusion at compile time - only at run-time.
The diverging implicit expansion happens because the compiler looks for a Selector[Int :: String :: HNil, T]; it thinks provide could give it one, if only it were given a Selector[Int :: String :: HNil, Selector[Int :: String :: HNil, T]]; it thinks provide could give it that one too, if given a Selector[Int :: String :: HNil, Selector[Int :: String :: HNil, Selector[Int :: String :: HNil, T]]], and at some point it realises this is an infinite loop. Where/how are you expecting it to get the Selector it needs? I think your provide is misguided because it's too general. Try making the call to ThingDoer.needsInt work with an explicit int first, before trying to make it all implicit.
This general approach does work - I've written applications that use it as a DI mechanism - though beware of quadratic compile times.
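For what it's worth, one way to keep the "requires" idea while avoiding an implicit that claims to produce any T is to hand the dependencies out through a dedicated wrapper, so only types that actually occur in the HList can be summoned. This is my own sketch, not code from the question or answer, and the Provided name is made up; note it also changes the signature of ThingDoer.needsInt:

import shapeless._, ops.hlist._

case class Provided[T](value: T)

trait Requires[L <: HList] {
  def required: L
  // The Selector evidence is resolved at the summoning site, so this method can
  // only produce members of L and can never be used to manufacture the Selector itself.
  implicit def provide[T](implicit sel: Selector[L, T]): Provided[T] =
    Provided(required.select[T])
}

object ThingDoer {
  def needsInt(implicit i: Provided[Int]) = i.value + 1
}

trait MyTrait { self: Requires[Int :: String :: HNil] =>
  val foo = ThingDoer.needsInt
}

class MyImpl extends MyTrait with Requires[Int :: String :: HNil] {
  def required = 10 :: "Hello" :: HNil
  def showMe = println(foo) // prints 11
}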

Does the order of implicit parameters matter in Scala?

Given some method
def f[A,B](p: A)(implicit a: X[A,B], b: Y[B])
Does the order of a before b within the implicit parameter list matter for type inference?
I thought only the placement of parameters within different parameter lists matters, e.g. type information flows only through parameter lists from left to right.
I'm asking because I noticed that changing the order of implicit parameters within the singly implicit list made a program of mine compile.
Real example
The following code is using:
shapeless 2.1.0
Scala 2.11.5
Here is a simple sbt build file to help along with compiling the examples:
scalaVersion := "2.11.5"
libraryDependencies += "com.chuusai" %% "shapeless" % "2.1.0"
scalaSource in Compile := baseDirectory.value
Onto the example. This code compiles:
import shapeless._
import shapeless.ops.hlist.Comapped

class Foo {
  trait NN
  trait Node[X] extends NN

  object Computation {
    def foo[LN <: HList, N <: HList, TupN <: Product, FunDT]
      (dependencies: TupN)
      (computation: FunDT)
      (implicit tupToHlist: Generic.Aux[TupN, LN], unwrap: Comapped.Aux[LN, Node, N]) = ???
      // (implicit unwrap: Comapped.Aux[LN, Node, N], tupToHlist: Generic.Aux[TupN, LN]) = ???

    val ni: Node[Int] = ???
    val ns: Node[String] = ???

    val x = foo((ni, ns))((i: Int, s: String) => s + i.toString)
  }
}
and this code fails
import shapeless._
import shapeless.ops.hlist.Comapped

class Foo {
  trait NN
  trait Node[X] extends NN

  object Computation {
    def foo[LN <: HList, N <: HList, TupN <: Product, FunDT]
      (dependencies: TupN)
      (computation: FunDT)
      // (implicit tupToHlist: Generic.Aux[TupN, LN], unwrap: Comapped.Aux[LN, Node, N]) = ???
      (implicit unwrap: Comapped.Aux[LN, Node, N], tupToHlist: Generic.Aux[TupN, LN]) = ???

    val ni: Node[Int] = ???
    val ns: Node[String] = ???

    val x = foo((ni, ns))((i: Int, s: String) => s + i.toString)
  }
}
with the following compile error
Error:(22, 25) ambiguous implicit values:
both method hnilComapped in object Comapped of type [F[_]]=> shapeless.ops.hlist.Comapped.Aux[shapeless.HNil,F,shapeless.HNil]
and method hlistComapped in object Comapped of type [H, T <: shapeless.HList, F[_]](implicit mt: shapeless.ops.hlist.Comapped[T,F])shapeless.ops.hlist.Comapped.Aux[shapeless.::[F[H],T],F,shapeless.::[H,mt.Out]]
match expected type shapeless.ops.hlist.Comapped.Aux[LN,Foo.this.Node,N]
val x = foo((ni,ns))((i: Int, s: String) => s + i.toString)
^
Error:(22, 25) could not find implicit value for parameter unwrap: shapeless.ops.hlist.Comapped.Aux[LN,Foo.this.Node,N]
val x = foo((ni,ns))((i: Int, s: String) => s + i.toString)
^
Error:(22, 25) not enough arguments for method foo: (implicit unwrap: shapeless.ops.hlist.Comapped.Aux[LN,Foo.this.Node,N], implicit tupToHlist: shapeless.Generic.Aux[(Foo.this.Node[Int], Foo.this.Node[String]),LN])Nothing.
Unspecified value parameters unwrap, tupToHlist.
val x = foo((ni,ns))((i: Int, s: String) => s + i.toString)
^
1. Normally it should not matter. If you look at the language spec, it makes no mention of resolution being dependent on parameter order.
2. I looked at the source code of shapeless, and I could not come up with any reason why this error would present itself.
3. Doing a quick search through the bug repository of the language, I found a similar issue that was apparently resolved. But it does not state whether the fix involved treating the symptom (making context bounds not break compilation) or the cause (restrictions on implicit parameter ordering).
4. Therefore I would argue that this is a compiler bug, and it is tightly related to the issue linked in point 3.
5. Also, I would suggest you submit a bug report if you can find a second opinion that results from a more rigorous analysis than my own :)
Hope this puts your mind at rest. Cheers!
According to my reading of the comments on the issue mentioned by Lorand Szakacs, I conclude that the order of implicit parameters does matter in the current version 2.11 of the Scala compiler.
This is because the developers participating in the discussion appear to assume that the order matters; they do not state it explicitly.
I'm not aware of the language spec mentioning anything about this topic.
Reordering them will only break code that explicitly passes them, as well as all compiled code. Everything else will be unaffected.
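A small self-contained illustration of that last point (my own; X and Y are stand-ins for the traits in the question's signature):

trait X[A, B]
trait Y[B]

def f[A, B](p: A)(implicit a: X[A, B], b: Y[B]): String = "ok"

implicit val xi: X[Int, String] = new X[Int, String] {}
implicit val ys: Y[String] = new Y[String] {}

f(1)         // resolved implicitly: unaffected if a and b are swapped in f's signature
f(1)(xi, ys) // passed explicitly: must become f(1)(ys, xi) if the implicit list is reordered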