Related
What's the point of using a typeclass over inheritance?
This is a function that uses the Monad typeclass by a context bound:
def f[A : Monad](x:A) = ???
(Yeah, we get a flatMap method now.)
This, however, uses inheritance with a subtype bound:
def f[A <: Monad](x:A) = ???
f(x) // where x is a CatsList which implements the Monad trait
(We also get a flatMap method now.)
Don't both achieve the same thing?
Typeclasses are more flexible. In particular, it's possible to retrofit a typeclass to an existing type easily. To do this with inheritance, you would need to use the adapter pattern, which can get cumbersome when you have multiple traits.
For example, suppose you had two libraries which added the traits Measurable and Functor, respectively, using inheritance:
trait Measurable {
def length: Int
}
trait Functor[A] {
def map[B](f: A => B): Functor[B]
}
The first library helpfully defined an adapter for List => Measurable:
class ListIsMeasurable(ls: List[_]) extends Measurable {
def length = ls.length
}
And the second library did the same for List => Functor. Now we want to write a function that takes things that have both length and map methods:
def foo[A](x: Measurable with Functor[A]) = ???
Of course, we should hope to be able to pass it a List. However, our adapters are useless here, and we have to write yet another adapter to make List conform to Measurable with Functor. In general, if you have n interfaces that might apply to List, you have 2^n possible adapters. On the other hand, if we had used typeclasses, there would have been no need for the extra boilerplate of a third typeclass instance:
def foo[A : Measurable : Functor](a: A) = ???
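To make the contrast concrete, here is a hedged sketch of how a typeclass-style Measurable could be retrofitted to List (the instance name and the foo body below are illustrative, not from any particular library); a Functor instance would be supplied the same way, so no combined adapter is ever needed:
// Typeclass version: the measured type becomes a type parameter.
trait Measurable[A] {
  def length(a: A): Int
}
// Retrofit it to List with a plain implicit instance -- no adapter classes,
// and no combinatorial explosion when further typeclasses are added later.
implicit def listIsMeasurable[T]: Measurable[List[T]] = new Measurable[List[T]] {
  def length(ls: List[T]): Int = ls.length
}
def foo[A: Measurable](a: A): Int = implicitly[Measurable[A]].length(a)
foo(List(1, 2, 3)) // 3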
What you're saying is possible for the simplest cases, but type classes can give you much better flexibility in a lot of cases.
Take for example the Monoid type class:
trait Monoid[A] {
def empty: A
def combine(x: A, y: A): A
}
If you try to express the empty function with inheritance, you'll see it is pretty much impossible. That is because type classes operate on types rather than individual class instances.
Now let's define a Monoid for something like Tuple2:
implicit def tupleMonoid[A: Monoid, B: Monoid]: Monoid[(A, B)] = ...
Now you're in territory where inheritance can't get you anywhere at all. With type classes we can add conditions to type class instances, e.g. a Tuple2 is a Monoid when both of its element types are also Monoids. With inheritance you can't do this.
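For concreteness, a hedged sketch of what such an instance could look like (illustrative, not the actual library definition):
implicit def tupleMonoid[A, B](implicit MA: Monoid[A], MB: Monoid[B]): Monoid[(A, B)] =
  new Monoid[(A, B)] {
    // empty and combine are defined componentwise, so the instance only
    // exists when instances for both A and B are available.
    def empty: (A, B) = (MA.empty, MB.empty)
    def combine(x: (A, B), y: (A, B)): (A, B) =
      (MA.combine(x._1, y._1), MB.combine(x._2, y._2))
  }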
According to the style guide - is there a rule of thumb what one should use for typeclasses in Scala - context bound or implicit ev notation?
These two examples do the same thing.
The context bound gives a more concise function signature, but requires fetching the instances with an implicitly call:
def empty[T: Monoid, M[_] : Monad]: M[T] = {
val M = implicitly[Monad[M]]
val T = implicitly[Monoid[T]]
M.point(T.zero)
}
The implicit ev approach binds the typeclass instances to named function parameters, but pollutes the method signature:
def empty[T, M[_]](implicit T: Monoid[T], M: Monad[M]): M[T] = {
M.point(T.zero)
}
Most of the libraries I've checked (e.g. "com.typesafe.play" %% "play-json" % "2.6.2") use implicit ev
What are you using and why?
This is very opinion-based, but one practical reason for using an implicit parameter list directly is that you perform fewer implicit searches.
When you do
def empty[T: Monoid, M[_] : Monad]: M[T] = {
val M = implicitly[Monad[M]]
val T = implicitly[Monoid[T]]
M.point(T.zero)
}
this gets desugared by the compiler into
def empty[T, M[_]](implicit ev1: Monoid[T], ev2: Monad[M]): M[T] = {
val M = implicitly[Monad[M]]
val T = implicitly[Monoid[T]]
M.point(T.zero)
}
so now the implicitly method needs to do another implicit search to find ev1 and ev2 in scope.
It's very unlikely that this has a noticeable runtime overhead, but it may affect your compile time performance in some cases.
If instead you do
def empty[T, M[_]](implicit T: Monoid[T], M: Monad[M]): M[T] =
M.point(T.zero)
you're directly accessing M and T from the first implicit search.
Also (and this is my personal opinion) I prefer the body to be shorter, at the price of some boilerplate in the signature.
Most libraries I know that make heavy use of implicit parameters use this style whenever they need to access the instance, so I guess I simply became more familiar with the notation.
As a bonus, if you decide to go with the context bound anyway, it's usually a good idea to provide an apply method on the typeclass companion that searches for the implicit instance. This allows you to write
def empty[T: Monoid, M[_]: Monad]: M[T] = {
Monad[M].point(Monoid[T].zero)
}
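For reference, the apply summoners this relies on are one-liners on the typeclass companions; a minimal sketch (Cats and Scalaz already ship equivalents for their own typeclasses) could look like:
object Monoid {
  // Summons the implicit instance so callers can write Monoid[T] instead of implicitly[Monoid[T]].
  def apply[A](implicit instance: Monoid[A]): Monoid[A] = instance
}
object Monad {
  def apply[M[_]](implicit instance: Monad[M]): Monad[M] = instance
}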
More info on this technique here: https://blog.buildo.io/elegant-retrieval-of-type-class-instances-in-scala-32a524bbd0a7
One caveat you need to be aware of when working with implicitly is when using dependently typed functions. I'll quote from the book "The Type Astronaut's Guide to Shapeless". It looks at the Last type class from shapeless, which retrieves the last element of an HList (its type is captured by the Out member):
package shapeless.ops.hlist
trait Last[L <: HList] {
type Out
def apply(in: L): Out
}
And says:
The implicitly method from scala.Predef has this behaviour (that is, it loses the inner type member information). Compare the type of an instance of Last summoned with implicitly:
implicitly[Last[String :: Int :: HNil]]
res6: shapeless.ops.hlist.Last[shapeless.::[String,shapeless.::[Int,shapeless.HNil]]] = shapeless.ops.hlist$Last$$anon$34@20bd5df0
to the type of an instance summoned with Last.apply:
Last[String :: Int :: HNil]
res7: shapeless.ops.hlist.Last[shapeless.::[String,shapeless.::[Int,shapeless.HNil]]]{type Out = Int} = shapeless.ops.hlist$Last$$anon$34@4ac2f6f
The type summoned by implicitly has no Out type member. That is an important caveat, and generally why you would use the summoner pattern (a custom apply on the companion object) instead of context bounds with implicitly.
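For reference, the shapeless-style summoner keeps the member by returning a refined type through an Aux alias; roughly (a sketch, the actual shapeless definition may differ in detail):
object Last {
  // Aux exposes the Out member as a second type parameter, so it survives summoning.
  type Aux[L <: HList, O] = Last[L] { type Out = O }
  def apply[L <: HList](implicit last: Last[L]): Aux[L, last.Out] = last
}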
Other than that, generally I find that it is a matter of style. Yes, implicitly might slightly increase compile times, but if you have an implicit-rich application you'll most likely not "feel" the difference between the two at compile time.
And on a more personal note, sometimes writing implicitly[M[T]] feels "uglier" than making the method signature a bit longer, and it might be clearer to the reader when you declare the implicit explicitly as a named parameter.
Note that on top of doing the same thing, your two examples are the same: a context bound is just syntactic sugar for adding implicit parameters.
I am opportunistic, using a context bound as much as I can, i.e., when I don't already have implicit function parameters. When I already have some, it is impossible to use a context bound and I have no choice but to add to the implicit parameter list.
Note that you don't need to define vals as you did; this works just fine (but I think you should go for what makes the code easier to read):
def empty[T: Monoid, M[_] : Monad]: M[T] = {
implicitly[Monad[M]].point(implicitly[Monoid[T]].zero)
}
FP libraries usually give you syntax extensions for typeclasses:
import scalaz._, Scalaz._
def empty[T: Monoid, M[_]: Monad]: M[T] = mzero[T].point[M]
I use this style as much as possible. It gives me syntax consistent with standard library methods and also lets me write for-comprehensions over generic Functors / Monads.
If not possible, I use special apply on companion object:
import cats._, implicits._ // no mzero in cats
def empty[T: Monoid, M[_]: Monad]: M[T] = Monoid[T].empty.pure[M]
I use simulacrum to provide these for my own typeclasses.
I resort to implicit ev syntax for cases where a context bound is not enough (e.g. typeclasses with multiple type parameters).
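For example, a hypothetical typeclass over two type parameters (Convert below is made up for illustration) cannot be expressed as a single context bound, so the implicit parameter list is the only option:
// A made-up two-parameter typeclass.
trait Convert[A, B] {
  def convert(a: A): B
}
// [A: Convert] would not make sense here, since Convert needs two parameters,
// so we fall back to an explicit implicit parameter:
def convertAll[A, B](as: List[A])(implicit conv: Convert[A, B]): List[B] =
  as.map(conv.convert)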
I can see in the API docs for Predef that these classes (<:<, =:=, and friends) are subclasses of a generic function type (From) => To, but that's all it says. Um, what? Maybe there's documentation somewhere, but search engines don't handle "names" like "<:<" very well, so I haven't been able to find it.
Follow-up question: when should I use these funky symbols/classes, and why?
These are called generalized type constraints. They allow you, from within a type-parameterized class or trait, to further constrain one of its type parameters. Here's an example:
case class Foo[A](a:A) { // 'A' can be substituted with any type
// getStringLength can only be used if this is a Foo[String]
def getStringLength(implicit evidence: A =:= String) = a.length
}
The implicit argument evidence is supplied by the compiler, iff A is String. You can think of it as a proof that A is String--the argument itself isn't important, only knowing that it exists. [edit: well, technically it actually is important because it represents an implicit conversion from A to String, which is what allows you to call a.length and not have the compiler yell at you]
Now I can use it like so:
scala> Foo("blah").getStringLength
res6: Int = 4
But if I tried use it with a Foo containing something other than a String:
scala> Foo(123).getStringLength
<console>:9: error: could not find implicit value for parameter evidence: =:=[Int,String]
You can read that error as "could not find evidence that Int == String"... that's as it should be! getStringLength is imposing further restrictions on the type of A than what Foo in general requires; namely, you can only invoke getStringLength on a Foo[String]. This constraint is enforced at compile-time, which is cool!
<:< and <%< work similarly, but with slight variations:
A =:= B means A must be exactly B
A <:< B means A must be a subtype of B (analogous to the simple type constraint <:)
A <%< B means A must be viewable as B, possibly via implicit conversion (analogous to the simple type constraint <%)
This snippet by @retronym is a good explanation of how this sort of thing used to be accomplished and how generalized type constraints make it easier now.
ADDENDUM
To answer your follow-up question, admittedly the example I gave is pretty contrived and not obviously useful. But imagine using it to define something like a List.sumInts method, which adds up a list of integers. You don't want to allow this method to be invoked on any old List, just a List[Int]. However, the List type constructor can't be so constrained; you still want to be able to have lists of strings, foos, bars, and whatnots. So by placing a generalized type constraint on sumInts, you can ensure that just that method has an additional constraint that it can only be used on a List[Int]. Essentially you're writing special-case code for certain kinds of lists.
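A hedged sketch of that idea (MyList and sumInts are illustrative, not the standard library's actual API):
class MyList[A](val elems: List[A]) {
  // ev witnesses that A is exactly Int; it also acts as an A => Int function,
  // which is what lets us sum the mapped list.
  def sumInts(implicit ev: A =:= Int): Int = elems.map(ev).sum
}
new MyList(List(1, 2, 3)).sumInts     // compiles, returns 6
// new MyList(List("a", "b")).sumInts // does not compile: Cannot prove that String =:= Int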
Not a complete answer (others have already answered this), I just wanted to note the following, which maybe helps to understand the syntax better: The way you normally use these "operators", as for example in pelotom's example:
def getStringLength(implicit evidence: A =:= String)
makes use of Scala's alternative infix syntax for type operators.
So, A =:= String is the same as =:=[A, String] (and =:= is just a class or trait with a fancy-looking name). Note that this syntax also works with "regular" classes, for example you can write:
val a: Tuple2[Int, String] = (1, "one")
like this:
val a: Int Tuple2 String = (1, "one")
It's similar to the two syntaxes for method calls, the "normal" with . and () and the operator syntax.
Read the other answers to understand what these constructs are. Here is when you should use them. You use them when you need to constrain a method for specific types only.
Here is an example. Suppose you want to define a homogeneous Pair, like this:
class Pair[T](val first: T, val second: T)
Now you want to add a method smaller, like this:
def smaller = if (first < second) first else second
That only works if T is ordered. You could restrict the entire class:
class Pair[T <: Ordered[T]](val first: T, val second: T)
But that seems a shame--there could be uses for the class when T isn't ordered. With a type constraint, you can still define the smaller method:
def smaller(implicit ev: T <:< Ordered[T]) = if (first < second) first else second
It's ok to instantiate, say, a Pair[File], as long as you don't call smaller on it.
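Putting the pieces together, a minimal runnable sketch (Temperature is just an example of an Ordered type, made up for illustration):
class Pair[T](val first: T, val second: T) {
  // smaller is only callable when T can be treated as Ordered[T]
  def smaller(implicit ev: T <:< Ordered[T]): T =
    if (first < second) first else second
}
class Temperature(val degrees: Int) extends Ordered[Temperature] {
  def compare(that: Temperature): Int = Integer.compare(this.degrees, that.degrees)
}
new Pair(new Temperature(30), new Temperature(20)).smaller.degrees // 20
val files = new Pair(new java.io.File("a"), new java.io.File("b")) // fine to construct
// files.smaller // does not compile: Cannot prove that File <:< Ordered[File]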
In the case of Option, the implementors wanted an orNull method, even though it doesn't make sense for Option[Int]. By using a type constraint, all is well. You can use orNull on an Option[String], and you can form an Option[Int] and use it, as long as you don't call orNull on it. If you try Some(42).orNull, you get the charming message
error: Cannot prove that Null <:< Int
It depends on where they are being used. Most often, when used while declaring types of implicit parameters, they are classes. They can be objects too in rare instances. Finally, they can be operators on Manifest objects. They are defined inside scala.Predef in the first two cases, though not particularly well documented.
They are meant to provide a way to test the relationship between the classes, just like <: and <% do, in situations when the latter cannot be used.
As for the question "when should I use them?", the answer is you shouldn't, unless you know you should. :-) EDIT: Ok, ok, here are some examples from the library. On Either, you have:
/**
* Joins an <code>Either</code> through <code>Right</code>.
*/
def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C] = this match {
case Left(a) => Left(a)
case Right(b) => b
}
/**
* Joins an <code>Either</code> through <code>Left</code>.
*/
def joinLeft[A1 >: A, B1 >: B, C](implicit ev: A1 <:< Either[C, B1]): Either[C, B1] = this match {
case Left(a) => a
case Right(b) => Right(b)
}
On Option, you have:
def orNull[A1 >: A](implicit ev: Null <:< A1): A1 = this getOrElse null
You'll find some other examples on the collections.
An implicit question to newcomers to Scala seems to be: where does the compiler look for implicits? I mean implicit because the question never seems to get fully formed, as if there weren't words for it. :-) For example, where do the values for integral below come from?
scala> import scala.math._
import scala.math._
scala> def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
foo: [T](t: T)(implicit integral: scala.math.Integral[T])Unit
scala> foo(0)
scala.math.Numeric$IntIsIntegral$@3dbea611
scala> foo(0L)
scala.math.Numeric$LongIsIntegral$@48c610af
Another question, which follows for those who decide to learn the answer to the first one, is how the compiler chooses which implicit to use in certain situations of apparent ambiguity (that nevertheless compile).
For instance, scala.Predef defines two conversions from String: one to WrappedString and another to StringOps. Both classes, however, share a lot of methods, so why doesn't Scala complain about ambiguity when, say, calling map?
Note: this question was inspired by this other question, in the hopes of stating the problem in a more general manner. The example was copied from there, because it is referred to in the answer.
Types of Implicits
Implicits in Scala refer to either a value that can be passed "automatically", so to speak, or a conversion from one type to another that is made automatically.
Implicit Conversion
Speaking very briefly about the latter type, if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m. A simple example would be the method map on String:
"abc".map(_.toInt)
String does not support the method map, but StringOps does, and there's an implicit conversion from String to StringOps available (see implicit def augmentString on Predef).
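To make the mechanism concrete, here is a tiny hypothetical conversion of our own (the Exclaims class and exclaim method are made up):
import scala.language.implicitConversions
class Exclaims(s: String) {
  def exclaim: String = s + "!"
}
// String has no exclaim method, so the compiler looks for a conversion that provides one:
implicit def stringToExclaims(s: String): Exclaims = new Exclaims(s)
"abc".exclaim // rewritten by the compiler to stringToExclaims("abc").exclaim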
Implicit Parameters
The other kind of implicit is the implicit parameter. These are passed to method calls like any other parameter, but the compiler tries to fill them in automatically. If it can't, it will complain. One can pass these parameters explicitly, which is how one uses breakOut, for example (see question about breakOut, on a day you are feeling up for a challenge).
In this case, one has to declare the need for an implicit, such as the foo method declaration:
def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
View Bounds
There's one situation where an implicit is both an implicit conversion and an implicit parameter. For example:
def getIndex[T, CC](seq: CC, value: T)(implicit conv: CC => Seq[T]) = seq.indexOf(value)
getIndex("abc", 'a')
The method getIndex can receive any object, as long as there is an implicit conversion available from its class to Seq[T]. Because of that, I can pass a String to getIndex, and it will work.
Behind the scenes, the compiler changes seq.indexOf(value) to conv(seq).indexOf(value).
This is so useful that there is syntactic sugar to write them. Using this syntactic sugar, getIndex can be defined like this:
def getIndex[T, CC <% Seq[T]](seq: CC, value: T) = seq.indexOf(value)
This syntactic sugar is described as a view bound, akin to an upper bound (CC <: Seq[Int]) or a lower bound (T >: Null).
Context Bounds
Another common pattern in implicit parameters is the type class pattern. This pattern enables the provision of common interfaces to classes which did not declare them. It can both serve as a bridge pattern -- gaining separation of concerns -- and as an adapter pattern.
The Integral class you mentioned is a classic example of type class pattern. Another example on Scala's standard library is Ordering. There's a library that makes heavy use of this pattern, called Scalaz.
This is an example of its use:
def sum[T](list: List[T])(implicit integral: Integral[T]): T = {
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
There is also syntactic sugar for it, called a context bound, which is made less useful by the need to refer to the implicit. A straight conversion of that method looks like this:
def sum[T : Integral](list: List[T]): T = {
val integral = implicitly[Integral[T]]
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
Context bounds are more useful when you just need to pass them to other methods that use them. For example, the method sorted on Seq needs an implicit Ordering. To create a method reverseSort, one could write:
def reverseSort[T : Ordering](seq: Seq[T]) = seq.sorted.reverse
Because Ordering[T] was implicitly passed to reverseSort, it can then pass it implicitly to sorted.
Where do Implicits come from?
When the compiler sees the need for an implicit, either because you are calling a method which does not exist on the object's class, or because you are calling a method that requires an implicit parameter, it will search for an implicit that will fit the need.
This search obeys certain rules that define which implicits are visible and which are not. The following list showing where the compiler will search for implicits was taken from an excellent presentation (timestamp 20:20) about implicits by Josh Suereth, which I heartily recommend to anyone wanting to improve their Scala knowledge. It has been complemented since then with feedback and updates.
The implicits available under number 1 below have precedence over the ones under number 2. Other than that, if there are several eligible arguments which match the implicit parameter's type, the most specific one will be chosen using the rules of static overloading resolution (see Scala Specification §6.26.3). More detailed information can be found in a question I link to at the end of this answer.
1. First look in current scope
Implicits defined in current scope
Explicit imports
Wildcard imports
Same scope in other files
2. Now look at associated types in
Companion objects of a type
Implicit scope of an argument's type (2.9.1)
Implicit scope of type arguments (2.8.0)
Outer objects for nested types
Other dimensions
Let's give some examples for them:
Implicits Defined in Current Scope
implicit val n: Int = 5
def add(x: Int)(implicit y: Int) = x + y
add(5) // takes n from the current scope
Explicit Imports
import scala.collection.JavaConversions.mapAsScalaMap
def env = System.getenv() // Java map
val term = env("TERM") // implicit conversion from Java Map to Scala Map
Wildcard Imports
def sum[T : Integral](list: List[T]): T = {
val integral = implicitly[Integral[T]]
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
Same Scope in Other Files
Edit: It seems this does not have a different precedence. If you have some example that demonstrates a precedence distinction, please make a comment. Otherwise, don't rely on this one.
This is like the first example, but assuming the implicit definition is in a different file than its usage. See also how package objects might be used to bring in implicits.
Companion Objects of a Type
There are two object companions of note here. First, the object companion of the "source" type is looked into. For instance, inside the object Option there is an implicit conversion to Iterable, so one can call Iterable methods on Option, or pass Option to something expecting an Iterable. For example:
for {
x <- List(1, 2, 3)
y <- Some('x')
} yield (x, y)
That expression is translated by the compiler to
List(1, 2, 3).flatMap(x => Some('x').map(y => (x, y)))
However, List.flatMap expects a TraversableOnce, which Option is not. The compiler then looks inside Option's object companion and finds the conversion to Iterable, which is a TraversableOnce, making this expression correct.
Second, the companion object of the expected type:
List(1, 2, 3).sorted
The method sorted takes an implicit Ordering. In this case, it looks inside the object Ordering, companion to the class Ordering, and finds an implicit Ordering[Int] there.
Note that companion objects of super classes are also looked into. For example:
class A(val n: Int)
object A {
implicit def str(a: A) = "A: %d" format a.n
}
class B(val x: Int, y: Int) extends A(y)
val b = new B(5, 2)
val s: String = b // s == "A: 2"
This is how Scala found the implicit Numeric[Int] and Numeric[Long] in your question, by the way, as they are found inside Numeric, not Integral.
Implicit Scope of an Argument's Type
If you have a method with an argument type A, then the implicit scope of type A will also be considered. By "implicit scope" I mean that all these rules will be applied recursively -- for example, the companion object of A will be searched for implicits, as per the rule above.
Note that this does not mean the implicit scope of A will be searched for conversions of that parameter, but of the whole expression. For example:
class A(val n: Int) {
def +(other: A) = new A(n + other.n)
}
object A {
implicit def fromInt(n: Int) = new A(n)
}
// This becomes possible:
1 + new A(1)
// because it is converted into this:
A.fromInt(1) + new A(1)
This is available since Scala 2.9.1.
Implicit Scope of Type Arguments
This is required to make the type class pattern really work. Consider Ordering, for instance: It comes with some implicits in its companion object, but you can't add stuff to it. So how can you make an Ordering for your own class that is automatically found?
Let's start with the implementation:
class A(val n: Int)
object A {
implicit val ord = new Ordering[A] {
def compare(x: A, y: A) = implicitly[Ordering[Int]].compare(x.n, y.n)
}
}
So, consider what happens when you call
List(new A(5), new A(2)).sorted
As we saw, the method sorted expects an Ordering[A] (actually, it expects an Ordering[B], where B >: A). There isn't any such thing inside Ordering, and there is no "source" type on which to look. Obviously, it is finding it inside A, which is a type argument of Ordering.
This is also how various collection methods expecting CanBuildFrom work: the implicits are found inside companion objects to the type parameters of CanBuildFrom.
Note: Ordering is defined as trait Ordering[T], where T is a type parameter. Previously, I said that Scala looked inside type parameters, which doesn't make much sense. The implicit looked for above is Ordering[A], where A is an actual type, not type parameter: it is a type argument to Ordering. See section 7.2 of the Scala specification.
This is available since Scala 2.8.0.
Outer Objects for Nested Types
I haven't actually seen examples of this. I'd be grateful if someone could share one. The principle is simple:
class A(val n: Int) {
class B(val m: Int) { require(m < n) }
}
object A {
implicit def bToString(b: A#B) = "B: %d" format b.m
}
val a = new A(5)
val b = new a.B(3)
val s: String = b // s == "B: 3"
Other Dimensions
I'm pretty sure this was a joke, but this answer might not be up-to-date. So don't take this answer as being the final arbiter of what is happening, and if you do notice it has gotten out of date, please inform me so that I can fix it.
EDIT
Related questions of interest:
Context and view bounds
Chaining implicits
Scala: Implicit parameter resolution precedence
I wanted to find out the precedence of the implicit parameter resolution, not just where it looks for, so I wrote a blog post revisiting implicits without import tax (and implicit parameter precedence again after some feedback).
Here's the list:
1) implicits visible to current invocation scope via local declaration, imports, outer scope, inheritance, package object that are accessible without prefix.
2) implicit scope, which contains all sort of companion objects and package object that bear some relation to the implicit's type which we search for (i.e. package object of the type, companion object of the type itself, of its type constructor if any, of its parameters if any, and also of its supertype and supertraits).
If at either stage we find more than one implicit, the static overloading resolution rules are used to resolve it.
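A hedged sketch of that ordering in action (the Show typeclass and its instances below are made up for illustration): an implicit visible in the invocation scope wins over one that would otherwise be found in the implicit scope.
trait Show[A] { def show(a: A): String }
object Show {
  // Found via the implicit scope: the companion object of Show.
  implicit val intShow: Show[Int] = new Show[Int] { def show(a: Int) = "companion: " + a }
}
def render[A](a: A)(implicit s: Show[A]): String = s.show(a)
render(42) // "companion: 42"
locally {
  // A local implicit belongs to the invocation scope, so it is found first.
  implicit val localShow: Show[Int] = new Show[Int] { def show(a: Int) = "local: " + a }
  render(42) // "local: 42"
}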
I have read the answer to my question about scala.math.Integral but I do not understand what happens when Integral[T] is passed as an implicit parameter. (I think I understand the implicit parameters concept in general).
Let's consider this function
import scala.math._
def foo[T](t: T)(implicit integral: Integral[T]) { println(integral) }
Now I call foo in REPL:
scala> foo(0)
scala.math.Numeric$IntIsIntegral$@581ea2
scala> foo(0L)
scala.math.Numeric$LongIsIntegral$@17fe89
How does the integral argument become scala.math.Numeric$IntIsIntegral and scala.math.Numeric$LongIsIntegral ?
The short answer is that Scala finds IntIsIntegral and LongIsIntegral inside the object Numeric, which is the companion object of the class Numeric, which is a super class of Integral.
Read on for the long answer.
Types of Implicits
Implicits in Scala refer to either a value that can be passed "automatically", so to speak, or a conversion from one type to another that is made automatically.
Implicit Conversion
Speaking very briefly about the latter type, if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m. A simple example would be the method map on String:
"abc".map(_.toInt)
String does not support the method map, but StringOps does, and there's an implicit conversion from String to StringOps available (see implicit def augmentString on Predef).
Implicit Parameters
The other kind of implicit is the implicit parameter. These are passed to method calls like any other parameter, but the compiler tries to fill them in automatically. If it can't, it will complain. One can pass these parameters explicitly, which is how one uses breakOut, for example (see question about breakOut, on a day you are feeling up for a challenge).
In this case, one has to declare the need for an implicit, such as the foo method declaration:
def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
View Bounds
There's one situation where an implicit is both an implicit conversion and an implicit parameter. For example:
def getIndex[T, CC](seq: CC, value: T)(implicit conv: CC => Seq[T]) = seq.indexOf(value)
getIndex("abc", 'a')
The method getIndex can receive any object, as long as there is an implicit conversion available from its class to Seq[T]. Because of that, I can pass a String to getIndex, and it will work.
Behind the scenes, the compiler changes seq.indexOf(value) to conv(seq).indexOf(value).
This is so useful that there is syntactic sugar to write them. Using this syntactic sugar, getIndex can be defined like this:
def getIndex[T, CC <% Seq[T]](seq: CC, value: T) = seq.indexOf(value)
This syntactic sugar is described as a view bound, akin to an upper bound (CC <: Seq[Int]) or a lower bound (T >: Null).
Please be aware that view bounds are deprecated since Scala 2.11; you should avoid them.
Context Bounds
Another common pattern in implicit parameters is the type class pattern. This pattern enables the provision of common interfaces to classes which did not declare them. It can both serve as a bridge pattern -- gaining separation of concerns -- and as an adapter pattern.
The Integral class you mentioned is a classic example of type class pattern. Another example on Scala's standard library is Ordering. There's a library that makes heavy use of this pattern, called Scalaz.
This is an example of its use:
def sum[T](list: List[T])(implicit integral: Integral[T]): T = {
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
There is also syntactic sugar for it, called a context bound, which is made less useful by the need to refer to the implicit. A straight conversion of that method looks like this:
def sum[T : Integral](list: List[T]): T = {
val integral = implicitly[Integral[T]]
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
Context bounds are more useful when you just need to pass them to other methods that use them. For example, the method sorted on Seq needs an implicit Ordering. To create a method reverseSort, one could write:
def reverseSort[T : Ordering](seq: Seq[T]) = seq.sorted.reverse
Because Ordering[T] was implicitly passed to reverseSort, it can then pass it implicitly to sorted.
Where do Implicits Come From?
When the compiler sees the need for an implicit, either because you are calling a method which does not exist on the object's class, or because you are calling a method that requires an implicit parameter, it will search for an implicit that will fit the need.
This search obeys certain rules that define which implicits are visible and which are not. The following list showing where the compiler will search for implicits was taken from an excellent presentation about implicits by Josh Suereth, which I heartily recommend to anyone wanting to improve their Scala knowledge.
First look in current scope
Implicits defined in current scope
Explicit imports
Wildcard imports
Same scope in other files
Now look at associated types in
Companion objects of a type
Companion objects of type parameters types
Outer objects for nested types
Other dimensions
Let's give examples for them.
Implicits Defined in Current Scope
implicit val n: Int = 5
def add(x: Int)(implicit y: Int) = x + y
add(5) // takes n from the current scope
Explicit Imports
import scala.collection.JavaConversions.mapAsScalaMap
def env = System.getenv() // Java map
val term = env("TERM") // implicit conversion from Java Map to Scala Map
Wildcard Imports
def sum[T : Integral](list: List[T]): T = {
val integral = implicitly[Integral[T]]
import integral._ // get the implicits in question into scope
list.foldLeft(integral.zero)(_ + _)
}
Same Scope in Other Files
This is like the first example, but assuming the implicit definition is in a different file than its usage. See also how package objects might be used to bring in implicits.
Companion Objects of a Type
There are two object companions of note here. First, the object companion of the "source" type is looked into. For instance, inside the object Option there is an implicit conversion to Iterable, so one can call Iterable methods on Option, or pass Option to something expecting an Iterable. For example:
for {
x <- List(1, 2, 3)
y <- Some('x')
} yield (x, y)
That expression is translated by the compiler into
List(1, 2, 3).flatMap(x => Some('x').map(y => (x, y)))
However, List.flatMap expects a TraversableOnce, which Option is not. The compiler then looks inside Option's object companion and finds the conversion to Iterable, which is a TraversableOnce, making this expression correct.
Second, the companion object of the expected type:
List(1, 2, 3).sorted
The method sorted takes an implicit Ordering. In this case, it looks inside the object Ordering, companion to the class Ordering, and finds an implicit Ordering[Int] there.
Note that companion objects of super classes are also looked into. For example:
class A(val n: Int)
object A {
implicit def str(a: A) = "A: %d" format a.n
}
class B(val x: Int, y: Int) extends A(y)
val b = new B(5, 2)
val s: String = b // s == "A: 2"
This is how Scala found the implicit Numeric[Int] and Numeric[Long] in your question, by the way, as they are found inside Numeric, not Integral.
Companion Objects of Type Parameters Types
This is required to make the type class pattern really work. Consider Ordering, for instance... it comes with some implicits in its companion object, but you can't add stuff to it. So how can you make an Ordering for your own class that is automatically found?
Let's start with the implementation:
class A(val n: Int)
object A {
implicit val ord = new Ordering[A] {
def compare(x: A, y: A) = implicitly[Ordering[Int]].compare(x.n, y.n)
}
}
So, consider what happens when you call
List(new A(5), new A(2)).sorted
As we saw, the method sorted expects an Ordering[A] (actually, it expects an Ordering[B], where B >: A). There isn't any such thing inside Ordering, and there is no "source" type on which to look. Obviously, it is finding it inside A, which is a type parameter of Ordering.
This is also how various collection methods expecting CanBuildFrom work: the implicits are found inside companion objects to the type parameters of CanBuildFrom.
Outer Objects for Nested Types
I haven't actually seen examples of this. I'd be grateful if someone could share one. The principle is simple:
class A(val n: Int) {
class B(val m: Int) { require(m < n) }
}
object A {
implicit def bToString(b: A#B) = "B: %d" format b.m
}
val a = new A(5)
val b = new a.B(3)
val s: String = b // s == "B: 3"
Other Dimensions
I'm pretty sure this was a joke. I hope. :-)
EDIT
Related questions of interest:
Context and view bounds
Chaining implicits
The parameter is implicit, which means that the Scala compiler will look if it can find an implicit object somewhere that it can automatically fill in for the parameter.
When you pass in an Int, it's going to look for an implicit object that is an Integral[Int] and it finds it in scala.math.Numeric. You can look at the source code of scala.math.Numeric, where you will find this:
object Numeric {
// ...
trait IntIsIntegral extends Integral[Int] {
// ...
}
// This is the implicit object that the compiler finds
implicit object IntIsIntegral extends IntIsIntegral with Ordering.IntOrdering
}
Likewise, there is a different implicit object for Long that works the same way.
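To see the resolution at work, note that the implicit is just an ordinary value, so the same call can also be made with the argument passed by hand (a small sketch):
import scala.math._
def foo[T](t: T)(implicit integral: Integral[T]): Unit = println(integral)
foo(0)                        // the compiler fills in Numeric.IntIsIntegral
foo(0)(Numeric.IntIsIntegral) // exactly the same call, with the implicit made explicit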