I am working on a system of chained implicit functions, similar to the simplified example below. The test c1.payload == c2.payload represents a check I need to do that is not in "type-space". I had expected to drop into a macro for the definition of witnessEvidence, but Scala apparently does not allow macro definitions with implicit arguments of arbitrary type (WeakTypeTag values only!), so I am a bit stumped about how to proceed. The code below shows logically what I'd like to happen; however, an implicit function can't conditionally produce or not produce evidence (unless it is inside a macro implementation).
case class Capsule[T](payload: Int)
trait A
trait B
trait C
implicit val capa = Capsule[A](3)
implicit val capb = Capsule[B](3)
implicit val capc = Capsule[C](7)
case class Evidence[T1, T2](e: Int)
implicit def witnessEvidence[T1, T2](implicit c1: Capsule[T1], c2: Capsule[T2]): Evidence[T1, T2] = {
  if (c1.payload == c2.payload)
    Evidence[T1, T2](c1.payload)
  else
    // Do not produce the evidence
}
def foo[T1, T2](implicit ev: Evidence[T1, T2]) = ev.e
val f1 = foo[A, B] // this should compile
val f2 = foo[A, C] // this should fail with missing implicit!
This is not possible as-is, since implicit resolution is done at compile time, while testing for value equivalence is done at runtime.
To make this work, you need to make the compiler understand values as types, so that you can ask for the type equality of the two 3s and use that to infer that capa =:= capb. To do that you can use singleton types: https://github.com/milessabin/shapeless/wiki/Feature-overview:-shapeless-2.0.0#singleton-typed-literals
If you need to do arithmetic beyond plain equality comparison, you will need to use Nat: https://github.com/milessabin/shapeless/blob/master/core/src/main/scala/shapeless/nat.scala
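For concreteness, here is a minimal sketch of the singleton-type approach written with Scala 2.13 literal types (shapeless's Witness plays the same role on older versions); the payload value is lifted into a type parameter, so comparing payloads becomes a type-equality check that implicit resolution can perform:

// a sketch using literal types; the payload is now part of the Capsule's type
case class Capsule[T, P <: Int with Singleton](payload: P)

trait A; trait B; trait C

implicit val capa: Capsule[A, 3] = Capsule[A, 3](3)
implicit val capb: Capsule[B, 3] = Capsule[B, 3](3)
implicit val capc: Capsule[C, 7] = Capsule[C, 7](7)

case class Evidence[T1, T2](e: Int)

// Evidence is derivable only when both capsules carry the same payload type P
implicit def witnessEvidence[T1, T2, P <: Int with Singleton](
  implicit c1: Capsule[T1, P], c2: Capsule[T2, P]
): Evidence[T1, T2] = Evidence[T1, T2](c1.payload)

def foo[T1, T2](implicit ev: Evidence[T1, T2]) = ev.e

foo[A, B] // compiles: both payloads have type 3
// foo[A, C] // fails to compile: no Evidence[A, C] can be derived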
Related
There is a difference in the way the Scala 2.13.3 compiler determines which overloaded function to call compared to which overloaded implicit to pick.
object Thing {
  trait A
  trait B extends A
  trait C extends A
  def f(a: A): String = "A"
  def f(b: B): String = "B"
  def f(c: C): String = "C"
  implicit val a: A = new A {}
  implicit val b: B = new B {}
  implicit val c: C = new C {}
}
import Thing._
scala> f(new B{})
val res1: String = B
scala> implicitly[B]
val res2: Thing.B = Thing$$anon$2@2f64f99f
scala> f(new A{})
val res3: String = A
scala> implicitly[A]
^
error: ambiguous implicit values:
both value b in object Thing of type Thing.B
and value c in object Thing of type Thing.C
match expected type Thing.A
As we can see, overload resolution worked for the function call but not for the implicit pick. Why isn't the implicit offered by val a chosen, as happens with function calls? If the caller asks for an instance of A, why does the compiler consider instances of B and C when an instance of A is in scope? There would be no ambiguity if the resolution logic were the same as for function calls.
Edit 2:
Edit 1 was removed because the assertion I wrote there was wrong.
In response to the comments I added another test to see what happens when the implicit val c: C is removed. In that case the compiler doesn't complain and picks implicit val b: B even though the caller asked for an instance of A.
object Thing {
  trait A { def name = 'A' }
  trait B extends A { override def name = 'B' }
  trait C extends A { override def name = 'C' }
  def f(a: A): String = "A"
  def f(b: B): String = "B"
  implicit val a: A = new A {}
  implicit val b: B = new B {}
}
import Thing._
scala> f(new A{})
val res0: String = A
scala> implicitly[A].name
val res3: Char = B
So, the overload resolution of implicits differs from that of function calls more than I expected.
Anyway, I still don't see why the designers of Scala decided to apply a different resolution logic to function and implicit overloading. (Edit: I later noticed why.)
Let's see what happens in a real world example.
Suppose we are writing a JSON parser that converts a JSON string directly to Scala abstract data types, and we want it to support many standard collections.
The snippet in charge of parsing the iterable collections would be something like this:
trait Parser[+A] {
  def parse(input: Input): ParseResult
  ///// many combinators here
}
implicit def summonParser[T](implicit parserT: Parser[T]) = parserT;
/** @tparam IC iterator type constructor
  * @tparam E element's type */
implicit def iterableParser[IC[E] <: Iterable[E], E](
  implicit
  parserE: Parser[E],
  factory: IterableFactory[IC]
): Parser[IC[E]] = '[' ~> skipSpaces ~> (parserE <~ skipSpaces).repSepGen(coma <~ skipSpaces, factory.newBuilder[E]) <~ skipSpaces <~ ']'
This requires a Parser[E] for the elements and an IterableFactory[IC] to construct the collection specified by the type parameters.
So we have to put an instance of IterableFactory into implicit scope for every collection type we want to support:
implicit val iterableFactory: IterableFactory[Iterable] = Iterable
implicit val setFactory: IterableFactory[Set] = Set
implicit val listFactory: IterableFactory[List] = List
implicit val vectorFactory: IterableFactory[Vector] = Vector
With the current implicit resolution logic implemented by the Scala compiler, this snippet works fine for Set, List, and Vector, but not for Iterable.
scala> def parserInt: Parser[Int] = ???
def parserInt: read.Parser[Int]
scala> Parser[List[Int]]
val res0: read.Parser[List[Int]] = read.Parser$$anonfun$pursue$3@3958db82
scala> Parser[Vector[Int]]
val res1: read.Parser[Vector[Int]] = read.Parser$$anonfun$pursue$3@648f48d3
scala> Parser[Iterable[Int]]
^
error: could not find implicit value for parameter parserT: read.Parser[Iterable[Int]]
And the reason is:
scala> implicitly[IterableFactory[Iterable]]
^
error: ambiguous implicit values:
both value listFactory in object IterableParser of type scala.collection.IterableFactory[List]
and value vectorFactory in object IterableParser of type scala.collection.IterableFactory[Vector]
match expected type scala.collection.IterableFactory[Iterable]
On the contrary, if the overload resolution logic for implicits were like the one for function calls, this would work fine.
Edit 3: After many, many coffees I noticed that, contrary to what I said above, there is no difference between the way the compiler decides which overloaded function to call and which overloaded implicit to pick.
In the case of a function call: from all the function overloads such that the type of the argument is assignable to the type of the parameter, the compiler chooses the one whose parameter type is assignable to all the others. If no function satisfies that, a compilation error is thrown.
In the case of an implicit pick: from all the implicits in scope such that the type of the implicit is assignable to the asked-for type, the compiler chooses the one whose declared type is assignable to all the others. If no implicit satisfies that, a compilation error is thrown.
My mistake was that I didn't notice the inversion of the assignability.
Anyway, the resolution logic I proposed above (give me what I asked for) is not entirely wrong: it solves the particular case I mentioned. But for most use cases the logic implemented by the Scala compiler (and, I suppose, by all the other languages that support type classes) is better.
As explained in the Edit 3 section of the question, there are similarities between the way the compiler decides which overloaded function to call and which overloaded implicit to pick. In both cases the compiler does two steps:
Filter out all the alternatives that are not assignable.
From the remaining alternatives, choose the most specific one, or complain if there is more than one.
In the case of the function call, the most specific alternative is the function with the most specific parameter type; in the case of the implicit pick, it is the instance with the most specific declared type.
But if the logic in both cases were exactly the same, why does the example in the question give different results? Because there is a difference: the assignability requirement that determines which alternatives pass the first step is reversed.
In the case of the function call, the first step keeps the functions whose parameter type is more generic than the argument type; in the case of the implicit pick, it keeps the instances whose declared type is more specific than the asked-for type.
The above is enough to answer the question itself, but it doesn't solve the problem that motivated it: how do you force the compiler to pick the implicit instance whose declared type is exactly the same as the summoned type? The answer is: wrap the implicit instances inside a non-variant (invariant) wrapper.
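For illustration, here is a minimal sketch of that idea (NonVariantHolder is an illustrative name, not from any library). IterableFactory is covariant in its type parameter, so IterableFactory[List] is also an IterableFactory[Iterable], which is exactly what made the summon ambiguous; the invariant wrapper removes that subtyping, so a request for NonVariantHolder[Iterable] matches only the instance declared for Iterable:

import scala.collection.IterableFactory

// invariant in IC, so NonVariantHolder[List] is NOT a NonVariantHolder[Iterable]
final case class NonVariantHolder[IC[_]](factory: IterableFactory[IC])

implicit val iterableHolder: NonVariantHolder[Iterable] = NonVariantHolder(Iterable)
implicit val listHolder: NonVariantHolder[List] = NonVariantHolder(List)
implicit val vectorHolder: NonVariantHolder[Vector] = NonVariantHolder(Vector)

// the collection parser would then ask for the wrapper instead of the bare factory:
// implicit def iterableParser[IC[E] <: Iterable[E], E](
//   implicit parserE: Parser[E], holder: NonVariantHolder[IC]
// ): Parser[IC[E]] = ...

implicitly[NonVariantHolder[Iterable]].factory // resolves unambiguously to Iterable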
I'm trying to do some implicit magic in my code, but the issue is quite simple and I have extracted it here. It seems a bit strange, since from what I've read the following should work.
implicit class Foo(value: Double) {
  def twice = 2 * value
}
2.0.twice
import scala.util.{Try, Success, Failure}

implicit def strToDouble(x: String) = Try(x.toDouble) match {
  case Success(d) => d
  case Failure(_) => 0.0
}
strToDouble("2.0").twice
val a: Double = "2.0"
val b: Double = "equals 0.0"
"2.0".twice
I get a compile error
value twice is not a member of String
[error] "2.0".twice
I get you compiler, twice is defined for Doubles, not Strings. But I did tell you how to go from Strings to Doubles, and there is no ambiguity here (as far as I can tell), so shouldn't you be able to note that "2.0".twice can be done by doing strToDouble("2.0").twice?
Am I missing something here? Or is this an optimisation, so that the compiler doesn't try out all the possible permutations of implicits (which would grow super-exponentially, factorially I think)? I suppose I'm looking for a confirmation or rejection of this, really.
Thanks
If you want the extension method to be applicable even after an implicit conversion, you can fix the definition of the implicit class:
implicit class Foo[A](value: A)(implicit ev: A => Double) {
  def twice: Double = 2 * value
}
implicit def strToDouble(x: String): Double = ???
2.0.twice //compiles
"2.0".twice //compiles
I get you compiler, twice is defined for Doubles, not Strings. But I
did tell you how to go from Strings to Doubles, and there is no
ambiguity here (as far as I can tell), so shouldn't you be able to
note that "2.0".twice can be done by doing strToDouble("2.0").twice?
According to the specification, implicit conversions (views) are applied in three situations only:
1. an expression e of type T is used where a type S is expected, T does not conform to S, and some view applicable to e produces a result that conforms to S;
2. a selection e.m is made where m is not a member of e's type T, but it is a member of the result type of some view applicable to e;
3. a selection e.m(args) is made where m is a member of e's type T but is not applicable to the arguments args, while it is applicable after a view is applied to e.
https://scala-lang.org/files/archive/spec/2.13/07-implicits.html#views
The rewriting of 2.0.twice into Foo(2.0).twice is the 2nd case, and the rewriting of "2.0" into strToDouble("2.0") is the 1st case. As you can see, there is no case that allows the two to be applied together. So if you want them to be applicable together, you have to specify that explicitly, as I showed above.
Similarly, if you define conversions from A to B and from B to C, this doesn't mean you have a conversion from A to C:
case class A(i: Int)
case class B(i: Int)
case class C(i: Int)
implicit def aToB(a: A): B = B(a.i)
implicit def bToC(b: B): C = C(b.i)
A(1): B // compiles
B(1): C // compiles
// A(1): C //doesn't compile
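If you do want the A-to-C conversion, one option (a sketch, not the only way) is to derive it explicitly from the two existing views, which the compiler will happily chain once you spell it out:

// the implicit A => B and B => C views in scope satisfy ab and bc
implicit def aToC(a: A)(implicit ab: A => B, bc: B => C): C = bc(ab(a))

A(1): C // now compiles, yielding C(1)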
I've tried the following:
import shapeless.{::, HList, HNil}
import shapeless.ops.hlist.Selector

type Params = String :: Int :: HNil
implicit val params: Params = "hello" :: 5 :: HNil

// Supposed to create an implicit for String and Int if needed
implicit def memberImplicit[A](
  implicit
  params: Params,
  selector: Selector[Params, A]
): A = params.select[A]
// Summoning a string
implicitly[String] // compile-time error
However, I'm getting a diverging implicit error:
diverging implicit expansion for type String
Am I missing something here? Or is there perhaps already a built-in or better way to achieve this?
The problem is that you are way too generic:
implicit def memberImplicit[A](
  implicit // what you put here is irrelevant
): A = ...
With that you have basically provided an implicit for every type. It clashes with every other implicit you have defined, as well as with any implicit parameter that needs to be resolved.
But let's ask why the compiler cannot prove that the implicits you pass into memberImplicit are simply unavailable in the bad cases, so that it would not consider memberImplicit a viable alternative, could cut that branch of the derivation (where you don't intend it), resolve the ambiguity, and everything would work.
The thing is that the type you return is A, which matches everything. Even if you added some constraint there, e.g. A =:!= Params, which would normally work, you have just provided an implicit for every type, so type constraints stop helping: suddenly the derivation of things like Selector[Params, String] has more than one way of being instantiated. In that situation virtually any implementation you try, as long as it returns A, will fail.
In order for things to work you HAVE TO constrain the output to something that won't match everything; as a matter of fact, the less it matches the better. For instance, create a separate type class for extracting values from HLists:
trait Extractable[A] { def extract(): A }
object Extractable {
  implicit def extractHList[H <: HList, A](
    implicit
    h: H,
    selector: Selector[H, A]
  ): Extractable[A] = () => selector(h)
}

def extract[A](implicit extractable: Extractable[A]): A = extractable.extract()
and then
extract[String] // "hello"
According to the style guide, is there a rule of thumb for which one should use for type classes in Scala: context bounds or the implicit ev notation?
These two examples do the same thing.
A context bound gives a more concise function signature, but requires a val with an implicitly call to get at the instance:
def empty[T: Monoid, M[_]: Monad]: M[T] = {
  val M = implicitly[Monad[M]]
  val T = implicitly[Monoid[T]]
  M.point(T.zero)
}
The implicit ev approach passes the type class instances as named function parameters, which avoids implicitly but pollutes the method signature:
def empty[T, M[_]](implicit T: Monoid[T], M: Monad[M]): M[T] = {
  M.point(T.zero)
}
Most of the libraries I've checked (e.g. "com.typesafe.play" %% "play-json" % "2.6.2") use implicit ev.
What are you using and why?
This is very opinion-based, but one practical reason for using an implicit parameter list directly is that you perform fewer implicit searches.
When you do
def empty[T: Monoid, M[_]: Monad]: M[T] = {
  val M = implicitly[Monad[M]]
  val T = implicitly[Monoid[T]]
  M.point(T.zero)
}
this gets desugared by the compiler into
def empty[T, M[_]](implicit ev1: Monoid[T], ev2: Monad[M]): M[T] = {
  val M = implicitly[Monad[M]]
  val T = implicitly[Monoid[T]]
  M.point(T.zero)
}
so now the implicitly method needs to do another implicit search to find ev1 and ev2 in scope.
It's very unlikely that this has a noticeable runtime overhead, but it may affect your compile time performance in some cases.
If instead you do
def empty[T, M[_]](implicit T: Monoid[T], M: Monad[M]): M[T] =
  M.point(T.zero)
you're directly accessing M and T from the first implicit search.
Also (and this is my personal opinion) I prefer the body to be shorter, at the price of some boilerplate in the signature.
Most libraries I know that make heavy use of implicit parameters use this style whenever they need to access the instance, so I guess I simply became more familiar with the notation.
Bonus: if you decide on the context bound anyway, it's usually a good idea to provide an apply method on the type class's companion object that looks up the implicit instance. This allows you to write
def empty[T: Monoid, M[_]: Monad]: M[T] = {
  Monad[M].point(Monoid[T].zero)
}
More info on this technique here: https://blog.buildo.io/elegant-retrieval-of-type-class-instances-in-scala-32a524bbd0a7
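For reference, here is a minimal sketch of what such an apply method (often called a summoner) looks like, using a hypothetical Monoid type class rather than the library-provided one from the example:

trait Monoid[T] {
  def zero: T
  def append(a: T, b: T): T
}
object Monoid {
  // Monoid[T] now simply returns the instance already in implicit scope
  def apply[T](implicit instance: Monoid[T]): Monoid[T] = instance
}

With this in place, Monoid[T].zero inside a method with a T: Monoid context bound resolves the instance once, with no implicitly call.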
One caveat you need to be aware of when working with implicitly is when using dependently typed functions. I'll quote from the book "The Type Astronaut's Guide to Shapeless". It looks at the Last type class from shapeless, which retrieves the last element type of an HList:
package shapeless.ops.hlist
trait Last[L <: HList] {
  type Out
  def apply(in: L): Out
}
And says:
The implicitly method from scala.Predef has this behaviour (this
behavior means losing the inner type member information). Compare the
type of an instance of Last summoned with implicitly:
implicitly[Last[String :: Int :: HNil]]
res6: shapeless.ops.hlist.Last[shapeless.::[String,shapeless
.::[Int,shapeless.HNil]]] = shapeless.ops.hlist$Last$$anon$34@20bd5df0
to the type of an instance summoned with Last.apply:
Last[String :: Int :: HNil]
res7: shapeless.ops.hlist.Last[shapeless.::[String,shapeless
.::[Int,shapeless.HNil]]]{type Out = Int} = shapeless.ops.hlist$Last$$anon$34@4ac2f6f
The type summoned by implicitly has no Out type member. That is an important caveat, and generally why you would use the summoner pattern, which doesn't rely on context bounds and implicitly.
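To illustrate why the hand-written summoner keeps the member, here is a sketch mirroring what shapeless does (SummonLast is an illustrative name, not shapeless's exact source); the trick is the path-dependent result type:

import shapeless.HList
import shapeless.ops.hlist.Last

object SummonLast {
  // the result type Last.Aux[L, last.Out] carries the concrete Out refinement along
  def apply[L <: HList](implicit last: Last[L]): Last.Aux[L, last.Out] = last
}

Summoning through such an apply therefore yields a type with the {type Out = Int} refinement, just like Last.apply in the REPL session above.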
Other than that, I generally find that it is a matter of style. Yes, implicitly might slightly increase compile times, but if you have an implicit-rich application you'll most likely not "feel" the difference between the two at compile time.
And on a more personal note, sometimes writing implicitly[M[T]] feels "uglier" than making the method signature a bit longer, and it might be clearer to the reader when you declare the implicit explicitly as a named parameter.
Note that on top of doing the same thing, your two examples are the same: context bounds are just syntactic sugar for adding implicit parameters.
I am opportunistic, using context bounds as much as I can, i.e. when I don't already have implicit function parameters. When I already have some, it is impossible to use a context bound and I have no choice but to add to the implicit parameter list.
Note that you don't need to define vals as you did; this works just fine (but I think you should go for whatever makes the code easier to read):
def empty[T: Monoid, M[_]: Monad]: M[T] = {
  implicitly[Monad[M]].point(implicitly[Monoid[T]].zero)
}
FP libraries usually give you syntax extensions for typeclasses:
import scalaz._, Scalaz._
def empty[T: Monoid, M[_]: Monad]: M[T] = mzero[T].point[M]
I use this style as much as possible. It gives me syntax consistent with standard library methods and also lets me write for-comprehensions over generic Functors / Monads.
If that's not possible, I use the special apply on the companion object:
import cats._, implicits._ // no mzero in cats
def empty[T: Monoid, M[_]: Monad]: M[T] = Monoid[T].empty.pure[M]
I use simulacrum to provide these for my own typeclasses.
I resort to the implicit ev syntax in cases where a context bound is not enough (e.g. type classes with multiple type parameters).
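For context, a small sketch of what the simulacrum mention refers to (MySemigroup is an illustrative name; this assumes the simulacrum dependency and macro annotations are enabled): the @typeclass annotation generates the companion apply summoner and the syntax ops for you.

import simulacrum._

@typeclass trait MySemigroup[A] {
  @op("|+|") def append(x: A, y: A): A
}
// simulacrum generates MySemigroup.apply[A] (the summoner) and,
// via MySemigroup.ops._, the x |+| y syntax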
I have a big, flat, denormalized CSV file containing multiple objects on a single row, like this:
a1, a2, a3, b1, b2, b3 ...
...
and I have objects:
case class A(a1: Int, a2: String, a3: Float)
case class B...
...
and the legacy approach is writing complicated adapters to extract each class. I recently watched some talks about shapeless, and I know I can solve this with generic programming using shapeless.
There's even a CSV parser example, perfect. My plan would be:
parse the CSV into a List[String]
filter the List[String] using the objects' field information
feed the filtered List[String] into the CSV parser example
thus extracting multiple objects per row from the CSV file.
Problems I have:
I am still on Scala 2.10. I seem to have configured the compiler plugin correctly (e.g. mvn clean install works properly), but IntelliJ occasionally fails to compile and throws exceptions.
<groupId>org.scala-lang.plugins</groupId>
<artifactId>macro-paradise_2.10</artifactId>
<version>2.0.0-SNAPSHOT</version>
This code is from the shapeless example:
implicit def deriveHConsOption[V, T <: HList](
  implicit
  scv: Lazy[CSVConverter[V]],
  sct: Lazy[CSVConverter[T]]
): CSVConverter[Option[V] :: T] = new CSVConverter[Option[V] :: T] {
  override def from(s: String): Try[shapeless.::[Option[V], T]] = s.span(_ != ',')
However, I'm getting the following compiler error:
Error:(70, 28) wrong number of type arguments for ::, should be 1
):CSVConverter[Option[V] :: T] = new CSVConverter[Option[V] :: T] {
^
My attempt to filter the CSV using shapeless:
// code to filter and extract one object
def extractCSVColumnsAndParse[T] = {
  val labl = LabelledGeneric[T]
  val keys = Keys[labl.Repr].apply
  val keyNames = keys.toList.map(_.name)
}
However, it seems T can only be a concrete class type:
Error:(86, 35) could not find implicit value for parameter lgen:
shapeless.LabelledGeneric[T]
val labl = LabelledGeneric[T]
I don't know much about your IntelliJ problem, but I'll answer the other two:
2: :: is scala.collection.immutable.:: (the List constructor), unless you explicitly import shapeless.:: (the HList constructor).
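One way to apply that fix in the file defining deriveHConsOption is simply (a sketch; the rest of the converter stays as it is):

// make :: resolve to shapeless's HList cons rather than List's ::
import shapeless.{::, HList}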
3: While you're dealing with generic types, you cannot assume anything about them. They might be Int, Any, MyCaseClass, Nothing, ... So the compiler cannot find a LabelledGeneric for them (indeed, what would the LabelledGeneric for Any be?). Therefore, you must explicitly tell all your generic methods that your type has an instance of LabelledGeneric[T], and this is done by taking it as an implicit parameter (or a context bound, which is the same thing under the hood). So, for instance, you could do
// alternatively, def extractCSVColumnsAndParse[T: LabelledGeneric]
def extractCSVColumnsAndParse[T](implicit ev: LabelledGeneric[T]) = {
  val labl = LabelledGeneric[T]
  val keys = Keys[labl.Repr].apply
  val keyNames = keys.toList.map(_.name)
  ...
}
And then, when you use it, for explicit case classes:
extractCSVColumnsAndParse[MyCaseClass] //no need to pass the parameter, it is already in scope
The "magic" of shapeless is that it generates your implicit ev: LabelledGeneric[MyCaseClass] for you, but it can only do so for specific types (using macros), so you have to tell the compiler that it exists, if you're dealing with generic types.
EDIT
After that, you get an error on val keys, because the type parameter of Keys must be an HList, and LabelledGeneric[T]#Repr is not necessarily an HList, so you have to enforce this in some way. You also need to require an implicit Keys[Repr], for the same reason as with LabelledGeneric.
def extractCSVColumnsAndParse[T, Repr <: HList](implicit labl: LabelledGeneric.Aux[T, Repr], K: Keys[Repr]) = {
  val keys = K()
  val keyNames = keys.toList.map(_.name)
  ...
}
However, this makes it less easy to call with a specific case class, since you cannot do extractCSVColumnsAndParse[MyCaseClass] anymore. This is because Scala methods have only one list of type parameters, so you must give them all or none of them.
A convoluted way around this is the following pattern, assuming your method will actually take some parameter (say, the List[String] from the CSV file, or the CSV file path):
def extractCSVColumnsAndParse[T] = new Extractor[T] {}

trait Extractor[T] {
  def apply[Repr <: HList](csv: List[String])(implicit labl: LabelledGeneric.Aux[T, Repr], K: Keys[Repr]) = {
    ... // put the logic here
  }
}
Now you can call it using
extractCSVColumnsAndParse[MyCaseClass](csv)
This pattern allows you to specify only the first type parameter, the second one being inferred at compile time.