Related
I have posted this question on the scala-user forum,
https://groups.google.com/forum/#!topic/scala-user/xlr7KmlWdWI
and I received an answer that I am happy with. However, at the same time, I want to make sure that is the only conclusion. Thanks.
My question is, say, I have,
trait Combinable[A] {
def join[B](l: List[A]): B
}
When I implement this trait with A as String and B as Int, for example,
class CombineString extends Combinable[String] {
override def join[Int](strings: List[String]) = strings.size
}
Obviously, the Int next to the join method is not the Scala integer class, and compiling this code will fail. Alternatively, I could rewrite my trait as
trait Combinable[A, B] {
def join(l: List[A]): B
}
or
trait Combinable[A] {
def join[B](l: List[A])(f: List[A] => B): B
}
My question is, how can I implement the trait as defined in the first example, as it is? If the first example has no practical use because of the way it is defined, why does the compiler not complain? Thanks again.
You cannot expect the compiler to understand which type combinations make sense to you, but you can express that meaning as a relation between types, in the form of multi-parameter implicits.
The result is basically a combination of the two approaches you rejected, but with the desired syntax.
The first of the two rejected forms becomes the implicit type:
trait Combine[A, B] {
def apply(l: List[A]): B
}
Next, you can define the appropriate type combinations and their meaning:
implicit object CombineStrings extends Combine[String, String] {
def apply(l: List[String]) = l.mkString
}
implicit object CombineInts extends Combine[Int, Int] {
def apply(l: List[Int]) = l.sum
}
implicit object CombinableIntAsString extends Combine[Int, String] {
def apply(l: List[Int]) = l.mkString(",")
}
Finally, we modify the second rejected form so that the f argument is hidden behind implicit resolution:
trait Combinable[A] {
def join[B](l: List[A])(implicit combine: Combine[A, B]): B = combine(l)
}
Now you can define
val a = new Combinable[String] {}
val b = new Combinable[Int] {}
And check that
a.join[String](List("a", "b", "c"))
b.join[Int](List(1, 2, 3))
b.join[String](List(1, 2, 3))
all run nicely, while
a.join[Int](List("a", "b", "c"))
makes the compiler cry until you provide evidence of a practical relation between String and Int in the form of an implicit value.
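For instance, a hypothetical instance like the following (here simply summing the lengths of the strings; the name and combining logic are just for illustration) would be enough to make a.join[Int] compile as well:
implicit object CombineStringsAsInt extends Combine[String, Int] {
  def apply(l: List[String]) = l.map(_.length).sum
}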
My question is, how can I implement the trait as defined in the first example as it is?
class CombineString extends Combinable[String] {
override def join[B](strings: List[String]) = null.asInstanceOf[B]
// or, for that matter, anything and then .asInstanceOf[B]
}
How could the compiler know that's not what you want?
I've been experimenting with implicit conversions, and I have a decent understanding of the 'enrich-my-library' pattern that uses them. I tried to combine my understanding of basic implicits with the use of implicit evidence... But I'm misunderstanding something crucial, as shown by the method below:
import scala.language.implicitConversions
object Moo extends App {
case class FooInt(i: Int)
implicit def cvtInt(i: Int) : FooInt = FooInt(i)
implicit def cvtFoo(f: FooInt) : Int = f.i
class Pair[T, S](var first: T, var second: S) {
def swap(implicit ev: T =:= S, ev2: S =:= T) {
val temp = first
first = second
second = temp
}
def dump() = {
println("first is " + first)
println("second is " + second)
}
}
val x = new Pair(FooInt(200), 100)
x.dump
x.swap
x.dump
}
When I run the above method I get this error:
Error:(31, 5) Cannot prove that nodescala.Moo.FooInt =:= Int.
x.swap
^
I am puzzled because I would have thought that my in-scope implicit conversions would be sufficient 'evidence' that Ints can be converted to FooInts and vice versa. Thanks in advance for setting me straight on this!
UPDATE:
After being unconfused by Peter's excellent answer, below, the light bulb went on for me about one good reason you would want to use implicit evidence in your API. I detail that in my own answer to this question (also below).
=:= checks if the two types are equal, and FooInt and Int are definitely not equal, although there exist implicit conversions between values of these two types.
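For example, with the original =:=-based swap, a pair whose two sides really are the same type does compile, because the compiler can supply the Int =:= Int evidence itself:
val y = new Pair(1, 2)
y.swap   // compiles: both type parameters are Int
y.dump() // first is 2, second is 1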
I would create a CanConvert type class which can convert an A into a B:
trait CanConvert[A, B] {
def convert(a: A): B
}
We can create type class instances to transform Int into FooInt and vice versa:
implicit val Int2FooInt = new CanConvert[Int, FooInt] {
def convert(i: Int) = FooInt(i)
}
implicit val FooInt2Int = new CanConvert[FooInt, Int] {
def convert(f: FooInt) = f.i
}
Now we can use CanConvert in our Pair.swap function:
class Pair[A, B](var a: A, var b: B) {
def swap(implicit a2b: CanConvert[A, B], b2a: CanConvert[B, A]) {
val temp = a
a = b2a.convert(b)
b = a2b.convert(temp)
}
override def toString = s"($a, $b)"
def dump(): Unit = println(this)
}
Which we can use as:
scala> val x = new Pair(FooInt(200), 100)
x: Pair[FooInt,Int] = (FooInt(200), 100)
scala> x.swap
scala> x.dump
(FooInt(100), 200)
A =:= B is not evidence that A can be converted to B. It is evidence that A can be cast to B. And you have no implicit evidence anywhere that Int can be cast to FooInt or vice versa (for good reason ;).
What you are looking for is:
def swap(implicit ev: T => S, ev2: S => T) {
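A minimal sketch of the full example with that change, applying the evidence functions explicitly inside swap (the in-scope implicit conversion methods can themselves satisfy the T => S and S => T parameters):
import scala.language.implicitConversions
object Moo extends App {
  case class FooInt(i: Int)
  implicit def cvtInt(i: Int): FooInt = FooInt(i)
  implicit def cvtFoo(f: FooInt): Int = f.i
  class Pair[T, S](var first: T, var second: S) {
    // evidence that each side can be converted to the other, not that the types are equal
    def swap(implicit ev: T => S, ev2: S => T): Unit = {
      val temp = first
      first = ev2(second)
      second = ev(temp)
    }
    def dump() = {
      println("first is " + first)
      println("second is " + second)
    }
  }
  val x = new Pair(FooInt(200), 100)
  x.dump()
  x.swap
  x.dump()
}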
After working through this exercise I think I have a better understanding of WHY you'd want to use implicit evidence in your API.
Implicit evidence can be very useful when:
you have a type parameterized class that provides various methods that act on the types given by the parameters, and
when one or more of those methods only make sense when additional constraints are placed on the parameterized types.
So, in the case of the simple API given in my original question:
class Pair[T, S](var first: T, var second: S) {
def swap(implicit ev: T =:= S, ev2: S =:= T) = ???
def dump() = ???
}
We have a type Pair, which keeps two things together, and we can always call dump() to examine the two things. We can also, under certain conditions, swap the positions of the first and second items in the pair. And those conditions are given by the implicit evidence constraints.
The Programming in Scala book gives a nice example of how this technique is used in Scala collections, specifically on the toMap method of Traversables. The book points out that Map's constructor
wants key-value pairs, i.e., two-tuples, as arguments. If we have a sequence [Traversable] of pairs, wouldn’t it be nice to create a Map out of them in one step? That’s what toMap does, but we have a dilemma. We can’t allow the user to call toMap if the sequence is not a sequence of pairs.
So there's an example of a type [Traversable] that has a method [toMap] that can't be used in all situations... It can only be used when the compiler can 'prove' (via implicit evidence) that the items in the Traversable are pairs.
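A minimal sketch of the same idea on a toy collection (not the real library code; MySeq is a hypothetical name) might look like this:
class MySeq[A](elems: List[A]) {
  // toMap only compiles when the compiler can prove that A is a pair type
  def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V] =
    elems.map(ev).toMap
}
val ok = new MySeq(List("a" -> 1, "b" -> 2)).toMap // compiles
// val ko = new MySeq(List(1, 2, 3)).toMap         // rejected: Int is not a pair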
In scala, the following code compiles properly:
class a {}
class b {}
object Main {
implicit class Conv[f, t](val v: f ⇒ t) extends AnyVal {
def conv = v
}
def main(args: Array[String]) {
val m = (a: a) ⇒ new b
m.conv
}
}
But for some reason the following fails to compile:
class a {}
class b {}
object Main {
type V[f, t] = f ⇒ t
implicit class Conv[f, t](val v: V[f, t]) extends AnyVal {
def conv = v
}
def main(args: Array[String]) {
val m = (a: a) ⇒ new b
m.conv
}
}
with the following message:
value conv is not a member of a => b
m.conv
Why does this happen?
EDIT: Yes, there is still an error even with
val m: V[a,b] = new V[a,b] { def apply(a: a) = new b }
In your first example, val v: f => t is inferred to have the type signature [-A, +B], because it is shorthand for a function of one parameter. Function1 has the type signature Function1[-A, +B], i.e. a type which is Contravariant in the A parameter and Covariant in the B parameter.
Then the lambda function (a: a) => new b later in the code has its type inferred as a function from a to b. So the type signature is identical and implicit resolution works.
In your second example type V[f, t] = f => t and the parameter created from it: val v: V[f, t], have their type explicitly specified as V[f, t]. The function f => t would still be [-A, +B], but you explicitly restrict your types to being Invariant in the type signature, so the type V is Invariant in both type parameters.
Later, when you declare: val m = (a: a) => new b the type signature of this would still be [-A, +B] as in the first example. Implicit resolution fails to work because the val is Contravariant in its first type parameter but the type V is Invariant in its first type parameter.
Changing the type signature of V to either V[-f, +t] or V[-f, t] resolves this and implicit resolution works once again.
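For reference, here is the second example from the question with that variance annotation added, which then compiles:
class a {}
class b {}
object Main {
  type V[-f, +t] = f ⇒ t // variance annotations added
  implicit class Conv[f, t](val v: V[f, t]) extends AnyVal {
    def conv = v
  }
  def main(args: Array[String]) {
    val m = (a: a) ⇒ new b
    m.conv // now resolves
  }
}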
This does raise a question about why the Covariance of the second type parameter is not a problem for implicit resolution, whilst the Contravariance of the first type parameter is. I played around and did a little bit of research. I came across a few interesting links, which indicates that there are definitely some limitations/issues around implicit resolution specifically when it comes to Contravariance.
A Scala Ticket related to Contravariance and implicit resolution.
A long discussion on google groups related to the ticket
Some workaround code for Contravariance and implicit resolutions in some scenarios.
I would have to defer to someone with knowledge of the internals of the Scala compiler and the resolution mechanics for more detail, but it seems that this is running afoul of some limitations around implicit resolution in the context of Contravariance.
For your third example, I think you mean:
class a {}
class b {}
object Main {
type V[f, t] = f => t
implicit class Conv[f, t](val v: V[f, t]) extends AnyVal {
def conv = v
}
def main(args: Array[String]) {
val m: V[a,b] = new V[a,b] { def apply(a: a) = new b }
m.conv // does not compile
}
}
This is an interesting one and I think it is a slightly different cause. Type Aliases are restricted in what they can change when it comes to variance, but making the variance stricter is allowed. I can't say for sure what is happening here, but here is an interesting Stack Overflow question related to type aliases and variance.
Given the complexity of implicit resolution, combined with the additional factors of the type declaration variance versus the implied Function1 variance I suspect the compiler is just not able to resolve anything in that specific scenario.
Changing to:
implicit class Conv(val v: V[_, _]) extends AnyVal {
def conv = v
}
means that it works in all scenarios, because you are basically saying to the compiler, that for the purposes of the Conv implicit class, you don't care about the variance of the type parameters on V.
e.g: the following also works
class a {}
class b {}
object Main {
type V[f, t] = f ⇒ t
implicit class Conv(val v: V[_, _]) extends AnyVal {
def conv = v
}
def main(args: Array[String]) {
val m = (a: a) ⇒ new b
m.conv
}
}
In the following simplified sample code:
case class One[A](a: A) // An identity functor
case class Twice[F[_], A](a: F[A], b: F[A]) // A functor transformer
type Twice1[F[_]] = ({type L[α] = Twice[F, α]}) // We'll use Twice1[F]#L when we'd like to write Twice[F]
trait Applicative[F[_]] // Members omitted
val applicativeOne: Applicative[One] = null // Implementation omitted
def applicativeTwice[F[_]](implicit inner: Applicative[F]): Applicative[({type L[α] = Twice[F, α]})#L] = null
I can call applicativeTwice on applicativeOne, and type inference works, but as soon as I try to call it on applicativeTwice(applicativeOne), inference fails:
val aOK = applicativeTwice(applicativeOne)
val bOK = applicativeTwice[Twice1[One]#L](applicativeTwice(applicativeOne))
val cFAILS = applicativeTwice(applicativeTwice(applicativeOne))
The errors in scala 2.10.0 are
- type mismatch;
found : tools.Two.Applicative[[α]tools.Two.Twice[tools.Two.One,α]]
required: tools.Two.Applicative[F]
- no type parameters for method applicativeTwice:
(implicit inner: tools.Two.Applicative[F])tools.Two.Applicative[[α]tools.Two.Twice[F,α]]
exist so that it can be applied to arguments
(tools.Two.Applicative[[α]tools.Two.Twice[tools.Two.One,α]])
--- because ---
argument expression's type is not compatible with formal parameter type;
found : tools.Two.Applicative[[α]tools.Two.Twice[tools.Two.One,α]]
required: tools.Two.Applicative[?F]
Why wouldn't "?F" match with anything (of the right kind) ?
Ultimately I'd like applicativeTwice to be an implicit function, but I'd have to get the type inference working first.
I have seen similar questions, and the answers pointed to limitations in the type inference algorithms. But this case seems pretty limitative, and must be quite an annoyance in monad transformers, so I suspect I'm missing some trick to work around this.
You've hit a common annoyance: SI-2712. For clarity, I'm going to minimize your code a bit:
import language.higherKinds
object Test {
case class Base[A](a: A)
case class Recursive[F[_], A](fa: F[A])
def main(args: Array[String]): Unit = {
val one = Base(1)
val two = Recursive(one)
val three = Recursive(two) // doesn't compile
println(three)
}
}
This demonstrates the same type error as yours:
argument expression's type is not compatible with formal parameter type;
found : Test.Recursive[Test.Base,Int]
required: ?F
val three = Recursive(two) // doesn't compile
^
First a bit of syntax and terminology you probably already know:
In Scala we say that a plain, unparameterized data type (such as Int) has kind _. It's monomorphic.
Base, on the other hand, is parameterized. We can't use it as the type of a value without providing the type it contains, so we say it has kind _[_]. It's rank-1 polymorphic: a type constructor that takes a type.
Recursive goes further still: it has two parameters, F[_] and A. The number of type parameters doesn't matter here, but their kinds do. F[_] is rank-1 polymorphic, so Recursive is rank-2 polymorphic: it's a type constructor that takes a type constructor.
We call anything rank two or above higher-kinded, and this is where the fun starts.
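A quick way to see these kinds in action (needsTypeCtor is just a hypothetical helper that demands a rank-1 type constructor):
def needsTypeCtor[F[_]]: Unit = ()
needsTypeCtor[Base]          // fine: Base has kind _[_]
// needsTypeCtor[Int]        // rejected: Int has kind _
// needsTypeCtor[Recursive]  // rejected: Recursive has kind _[_[_], _]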
Scala in general doesn't have trouble with higher-kinded types. This is one of several key features that distinguishes its type system from, say, Java's. But it does have trouble with partial application of type parameters when dealing with higher-kinded types.
Here's the problem: Recursive[F[_], A] has two type parameters. In your sample code, you did the "type lambda" trick to partially apply the first parameter, something like:
val one = Base(1)
val two = Recursive(one)
val three = {
type λ[α] = Recursive[Base, α]
Recursive(two : λ[Int])
}
This convinces the compiler that you're providing something of the correct kind (_[_]) to the Recursive constructor. If Scala had curried type parameter lists, I'd definitely have used that here:
case class Base[A](a: A)
case class Recursive[F[_]][A](fa: F[A]) // curried!
def main(args: Array[String]): Unit = {
val one = Base(1) // Base[Int]
val two = Recursive(one) // Recursive[Base][Int]
val three = Recursive(two) // Recursive[Recursive[Base]][Int]
println(three)
}
Alas, it does not (see SI-4719). So, to the best of my knowledge, the most common way of dealing with this problem is the "unapply trick," due to Miles Sabin. Here is a greatly simplified version of what appears in scalaz:
import language.higherKinds
trait Unapply[FA] {
type F[_]
type A
def apply(fa: FA): F[A]
}
object Unapply {
implicit def unapply[F0[_[_], _], G0[_], A0] = new Unapply[F0[G0, A0]] {
type F[α] = F0[G0, α]
type A = A0
def apply(fa: F0[G0, A0]): F[A] = fa
}
}
In somewhat hand-wavey terms, this Unapply construct is like a "first-class type lambda." We define a trait representing the assertion that some type FA can be decomposed into a type constructor F[_] and a type A. Then in its companion object, we can define implicits to provide specific decompositions for types of various kinds. I've only defined here the specific one that we need to make Recursive fit, but you could write others.
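For example, a hypothetical extra instance for decomposing a plain rank-1 application such as Base[Int] could be sketched like this (a real library would need to manage implicit priority to avoid ambiguity with the instance above):
implicit def unapplySimple[F0[_], A0] = new Unapply[F0[A0]] {
  type F[α] = F0[α]
  type A = A0
  def apply(fa: F0[A0]): F[A] = fa
}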
With this extra bit of plumbing, we can now do what we need:
import language.higherKinds
object Test {
case class Base[A](a: A)
case class Recursive[F[_], A](fa: F[A])
object Recursive {
def apply[FA](fa: FA)(implicit u: Unapply[FA]) = new Recursive(u(fa))
}
def main(args: Array[String]): Unit = {
val one = Base(1)
val two = Recursive(one)
val three = Recursive(two)
println(three)
}
}
Ta-da! Now type inference works, and this compiles. As an exercise, I'd suggest you create an additional class:
case class RecursiveFlipped[A, F[_]](fa: F[A])
... which isn't really different from Recursive in any meaningful way, of course, but will again break type inference. Then define the additional plumbing needed to fix it. Good luck!
Edit
You asked for a less simplified version, something aware of type-classes. Some modification is required, but hopefully you can see the similarity. First, here's our upgraded Unapply:
import language.higherKinds
trait Unapply[TC[_[_]], FA] {
type F[_]
type A
def TC: TC[F]
def apply(fa: FA): F[A]
}
object Unapply {
implicit def unapply[TC[_[_]], F0[_[_], _], G0[_], A0](implicit TC0: TC[({ type λ[α] = F0[G0, α] })#λ]) =
new Unapply[TC, F0[G0, A0]] {
type F[α] = F0[G0, α]
type A = A0
def TC = TC0
def apply(fa: F0[G0, A0]): F[A] = fa
}
}
Again, this is completely ripped off from scalaz. Now some sample code using it:
import language.{ implicitConversions, higherKinds }
object Test {
// functor type class
trait Functor[F[_]] {
def map[A, B](fa: F[A])(f: A => B): F[B]
}
// functor extension methods
object Functor {
implicit class FunctorOps[F[_], A](fa: F[A])(implicit F: Functor[F]) {
def map[B](f: A => B) = F.map(fa)(f)
}
implicit def unapply[FA](fa: FA)(implicit u: Unapply[Functor, FA]) =
new FunctorOps(u(fa))(u.TC)
}
// identity functor
case class Id[A](value: A)
object Id {
implicit val idFunctor = new Functor[Id] {
def map[A, B](fa: Id[A])(f: A => B) = Id(f(fa.value))
}
}
// pair functor
case class Pair[F[_], A](lhs: F[A], rhs: F[A])
object Pair {
implicit def pairFunctor[F[_]](implicit F: Functor[F]) = new Functor[({ type λ[α] = Pair[F, α] })#λ] {
def map[A, B](fa: Pair[F, A])(f: A => B) = Pair(F.map(fa.lhs)(f), F.map(fa.rhs)(f))
}
}
def main(args: Array[String]): Unit = {
import Functor._
val one = Id(1)
val two = Pair(one, one) map { _ + 1 }
val three = Pair(two, two) map { _ + 1 }
println(three)
}
}
Note (3 years later, July 2016), scala v2.12.0-M5 is starting to implement SI-2712 (support for higher order unification)
See commit 892a6d6 from Miles Sabin
-Xexperimental mode now only includes -Ypartial-unification
It follows Paul Chiusano's simple algorithm:
// Treat the type constructor as curried and partially applied, we treat a prefix
// as constants and solve for the suffix. For the example in the ticket, unifying
// M[A] with Int => Int this unifies as,
//
// M[t] = [t][Int => t] --> abstract on the right to match the expected arity
// A = Int --> capture the remainder on the left
The test/files/neg/t2712-1.scala includes:
package test
trait Two[A, B]
object Test {
def foo[M[_], A](m: M[A]) = ()
def test(ma: Two[Int, String]) = foo(ma) // should fail with -Ypartial-unification *disabled*
}
And (test/files/neg/t2712-2.scala):
package test
class X1
class X2
class X3
trait One[A]
trait Two[A, B]
class Foo extends Two[X1, X2] with One[X3]
object Test {
def test1[M[_], A](x: M[A]): M[A] = x
val foo = new Foo
test1(foo): One[X3] // fails with -Ypartial-unification enabled
test1(foo): Two[X1, X2] // fails without -Ypartial-unification
}
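If you are on a build that ships the flag, a minimal sbt snippet to experiment with it might look like this (the version string below is only an example):
// build.sbt
scalaVersion := "2.12.0-M5"
scalacOptions += "-Ypartial-unification"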
Apparently unapply/unapplySeq in extractor objects do not support implicit parameters. Assume here an interesting parameter a and a disturbingly ubiquitous parameter b that it would be nice to hide away when extracting c.
[EDIT]: It appears something was broken in my intellij/scala-plugin installation that caused this. I cannot explain it. I was having numerous strange problems with my intellij lately. After reinstalling, I can no longer reproduce my problem. Confirmed that unapply/unapplySeq do allow for implicit parameters! Thanks for your help.
This does not work (EDIT: yes, it does):
trait A; trait C; trait B { def getC(a: A): C }
def unapply(a:A)(implicit b:B):Option[C] = Option(b.getC(a))
In my understanding of what an ideal extractor should look like, one whose intention is intuitively clear also to Java folks, this limitation basically forbids extractor objects that depend on additional parameter(s).
How do you typically handle this limitation?
So far I've got those four possible solutions:
1) The simplest solution, the one I want to improve on: don't hide b; provide parameter b along with a, as a normal parameter of unapply, in the form of a tuple:
object A1 {
  def unapply(a: (A, B)): Option[C] = Option(a._2.getC(a._1))
}
in client code:
val c1 = (a, b) match { case A1(c) => c }
I don't like it because there is extra noise obscuring the fact that the deconstruction of a into c is the important part here. Also, Java folks, who have to be convinced to actually use this Scala code, are confronted with one additional syntactic novelty (the tuple braces). They might get anti-Scala aggressions: "What's all this? ... Why not just use a normal method in the first place and check with an if?"
2) Define extractors within a class encapsulating the dependence on a particular B, and import the extractors of that instance. The import site is a bit unusual for Java folks, but at the pattern-match site b is hidden nicely and it is intuitively evident what happens. My favorite. Is there some disadvantage I missed?
class BDependent(b: B) {
  object A2 {
    def unapply(a: A): Option[C] = Option(b.getC(a))
  }
}
usage in client code:
val bDeps = new BDependent(someB)
import bDeps.A2
val a:A = ...
val c2 = a match { case A2(c) => c }
3) Declare extractor objects in the scope of the client code. b is hidden, since the extractor can use a "b" from local scope. This hampers code reuse and heavily pollutes the client code (additionally, it has to be stated before the code using it).
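A sketch of what this could look like (the enclosing clientCode method is hypothetical):
def clientCode(a: A, b: B): C = {
  object A3 {
    def unapply(a: A): Option[C] = Option(b.getC(a))
  }
  a match { case A3(c) => c }
}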
4) Have unapply return an Option of a function B => C. This allows importing and using a ubiquitous-parameter-dependent extractor without providing b directly to the extractor; instead, b is supplied to the result when it is used. Java folks may be confused by the use of function values, and b is not hidden:
object A4 {
  def unapply(a: A): Option[B => C] = Option((_: B).getC(a))
}
then in client code:
val b:B = ...
val soonAC: B => C = a match { case A4(x) => x }
val d = soonAC(b).getD ...
Further remarks:
As suggested in this answer, "view bounds" may help to get extractors to work with implicit conversions, but this doesn't help with implicit parameters. For some reason I prefer not to work around this with implicit conversions.
I looked into "context bounds", but they seem to have the same limitation, don't they?
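For reference, a context bound is only syntactic sugar for an extra implicit parameter list (TC below is a hypothetical type class), so it is subject to exactly the same rules as the explicit form:
def unapply[T: TC](t: T): Option[T] = Some(t)
// is equivalent to
def unapply[T](t: T)(implicit ev: TC[T]): Option[T] = Some(t)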
In what sense does your first line of code not work? There's certainly no arbitrary prohibition on implicit parameter lists for extractor methods.
Consider the following setup (I'm using plain old classes instead of case classes to show that there's no extra magic happening here):
class A(val i: Int)
class C(val x: String)
class B(pre: String) { def getC(a: A) = new C(pre + a.i.toString) }
Now we define an implicit B value and create an extractor object with your unapply method:
implicit val b = new B("prefix: ")
object D {
def unapply(a: A)(implicit b: B): Option[C] = Option(b getC a)
}
Which we can use like this:
scala> val D(c) = new A(42)
c: C = C@52394fb3
scala> c.x
res0: String = prefix: 42
Exactly as we'd expect. I don't see why you need a workaround here.
The problem you have is that implicit parameters are compile time (static) constraints, whereas pattern matching is a runtime (dynamic) approach.
trait A; trait C; trait B { def getC(a: A): C }
object Extractor {
def unapply(a: A)(implicit b: B): Option[C] = Some(b.getC(a))
}
// compiles (implicit is statically provided)
def withImplicit(a: A)(implicit b: B) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
// does not compile
def withoutImplicit(a: A) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
So this is a conceptual problem, and the solution depends on what you actually want to achieve. If you want something along the lines of an optional implicit, you might use the following:
sealed trait FallbackNone {
implicit object None extends Optional[Nothing] {
def toOption = scala.None
}
}
object Optional extends FallbackNone {
implicit def some[A](implicit a: A) = Some(a)
final case class Some[A](a: A) extends Optional[A] {
def toOption = scala.Some(a)
}
}
sealed trait Optional[+A] { def toOption: Option[A]}
Then where you had implicit b: B you will have implicit b: Optional[B]:
object Extractor {
def unapply(a:A)(implicit b: Optional[B]):Option[C] =
b.toOption.map(_.getC(a))
}
def test(a: A)(implicit b: Optional[B]) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
And the following both compile:
test(new A {}) // None
{
implicit object BImpl extends B { def getC(a: A) = new C {} }
test(new A {}) // Some(...)
}