How can I define a function that accepts tuples of any arity (Tuple1 to Tuple22) as its argument? I have something like the following in mind:
def foo(v: Tuple) = ...
foo((1,2))
foo((1,2,3))
EDIT:
To answer the comment: I am actually trying to create a Tensor class, which is a set of values plus a set of indices. The indices can be covariant and/or contravariant (cf. the Wikipedia articles on covariance and contravariance of vectors). I wanted to have a special syntax like Tensor((1,2),(3,4),values), which would create a tensor with values, two covariant indices of lengths (1,2) and two contravariant indices of lengths (3,4). Using this syntax I could also write Tensor((1,2,3),3,values) (with an implicit Int => Tuple1).
I agree that Tuples are not suitable for this; it would be better to use Lists. However, the syntax is not as nice then...
This really isn't what tuples are for (cf. the comments and answers here). Tuples are for doing things like returning multiple values from a method, where in Java you would have to create a lightweight class. If you have an arbitrary number of elements, you should use a collection.
Another way to provide a convenient API to your users (aside from implicit conversion) is to use multiple parameter lists with varargs:
def tensor(cov: Int*)(contrav: Int*)(values: Int*) = // ...
Your examples would be written
tensor(1,2)(3,4)(values)
tensor(1,2,3)(3)(values)
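For illustration, here is a minimal, self-contained sketch of that idea. The Tensor case class and the Double element type are assumptions, just enough to make the call sites compile:

case class Tensor(cov: Seq[Int], contrav: Seq[Int], values: Seq[Double])

def tensor(cov: Int*)(contrav: Int*)(values: Double*): Tensor =
  Tensor(cov, contrav, values)

val t1 = tensor(1, 2)(3, 4)(1.0, 2.0, 3.0)  // two covariant and two contravariant indices
val t2 = tensor(1, 2, 3)(3)(1.0)            // three covariant indices, one contravariant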
There is no trait specifically for tuples, but you could use a typeclass approach, as demonstrated in this answer.
If your goal is really to have a List but allow callers to pass in tuples (for convenience), you can modify that solution so that the type class produces a List rather than a Product.
In brief, the idea is that you provide implicit conversions from the types that callers can pass to the type you're actually going to use:
def foo(x: IndexList) = x.indices

sealed case class IndexList(indices: List[Int])

object IndexList {
  implicit def val2indices(i: Int) = IndexList(List(i))

  implicit def tuple2toIndices(t: (Int, Int)): IndexList =
    product2indices(t)

  // etc.

  implicit def list2indices(l: List[Int]) = IndexList(l)

  private def product2indices(p: Product) =
    IndexList(p.productIterator.toList.asInstanceOf[List[Int]])
}
You can then call your method with any type for which you've provided a conversion:
foo(1)
foo((2,3))
foo(List(1,2,3))
All case classes, including Tuples, extend scala.Product but unfortunately there's no marker trait specifically for tuples, so someone could sneak ordinary case classes into your method. Of course, there's no way to treat all arities in a uniform way and still be typesafe, but you can use productElement(n: Int) to extract the nth value, or productIterator to iterate over all the values.
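As a small illustration of working with Product directly (a sketch; the helper name indicesOf is made up):

// Treats any tuple (or case class) uniformly via Product, at the cost of
// losing the element types; non-Int elements are simply dropped here.
def indicesOf(p: Product): List[Int] =
  p.productIterator.toList.collect { case i: Int => i }

indicesOf((1, 2))     // List(1, 2)
indicesOf((1, 2, 3))  // List(1, 2, 3)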
But... This is heresy around here, but have you considered overloading? :)
What you probably want to use is an HList, not a tuple. An HList (heterogeneous list) is basically an arbitrary-length, typed tuple.
There are a few examples of HLists in Scala (they are not part of the standard library):
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
a great and comprehensive series by Mark Harrah (of SBT fame)
Miles Sabin's github examples, taken from his recent talk at Scala eXchange
Check this out. It actually works better than I expected ;)
scala> def f[T <: Product](x: T) = x
f: [T <: Product](x: T)T
scala> f(1)
<console>:9: error: inferred type arguments [Int] do not conform to method f's type parameter bounds [T <: Product]
scala> f(1, "2") // you don't even need the extra parentheses
res0: (Int, java.lang.String) = (1,2)
scala> f(1, "2", BigInt("3"))
res1: (Int, java.lang.String, scala.math.BigInt) = (1,2,3)
I can see in the API docs for Predef that these classes (<:<, =:= and friends) are subclasses of a generic function type (From) => To, but that's all it says. Um, what? Maybe there's documentation somewhere, but search engines don't handle "names" like "<:<" very well, so I haven't been able to find it.
Follow-up question: when should I use these funky symbols/classes, and why?
These are called generalized type constraints. They allow you, from within a type-parameterized class or trait, to further constrain one of its type parameters. Here's an example:
case class Foo[A](a: A) { // 'A' can be substituted with any type
  // getStringLength can only be used if this is a Foo[String]
  def getStringLength(implicit evidence: A =:= String) = a.length
}
The implicit argument evidence is supplied by the compiler, iff A is String. You can think of it as a proof that A is String--the argument itself isn't important, only knowing that it exists. [edit: well, technically it actually is important because it represents an implicit conversion from A to String, which is what allows you to call a.length and not have the compiler yell at you]
Now I can use it like so:
scala> Foo("blah").getStringLength
res6: Int = 4
But if I try to use it with a Foo containing something other than a String:
scala> Foo(123).getStringLength
<console>:9: error: could not find implicit value for parameter evidence: =:=[Int,String]
You can read that error as "could not find evidence that Int == String"... that's as it should be! getStringLength is imposing further restrictions on the type of A than what Foo in general requires; namely, you can only invoke getStringLength on a Foo[String]. This constraint is enforced at compile-time, which is cool!
<:< and <%< work similarly, but with slight variations:
A =:= B means A must be exactly B
A <:< B means A must be a subtype of B (analogous to the simple type constraint <:)
A <%< B means A must be viewable as B, possibly via implicit conversion (analogous to the simple type constraint <%)
This snippet by @retronym is a good explanation of how this sort of thing used to be accomplished, and of how generalized type constraints make it easier now.
ADDENDUM
To answer your follow-up question, admittedly the example I gave is pretty contrived and not obviously useful. But imagine using it to define something like a List.sumInts method, which adds up a list of integers. You don't want to allow this method to be invoked on any old List, just a List[Int]. However, the List type constructor can't be so constrained; you still want to be able to have lists of strings, foos, bars, and whatnots. So by placing a generalized type constraint on sumInts, you can ensure that just that method has an additional constraint that it can only be used on a List[Int]. Essentially you're writing special-case code for certain kinds of lists.
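Here is a minimal sketch of that idea; MyList and sumInts are made-up names, not the standard library:

class MyList[A](val elems: List[A]) {
  // sumInts is only callable when A is provably Int; the evidence also
  // acts as the conversion A => Int used by map below.
  def sumInts(implicit ev: A =:= Int): Int = elems.map(ev).sum
}

new MyList(List(1, 2, 3)).sumInts      // 6
// new MyList(List("a", "b")).sumInts  // does not compile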
Not a complete answer (others have already answered this), I just wanted to note the following, which maybe helps to understand the syntax better: The way you normally use these "operators", as for example in pelotom's example:
def getStringLength(implicit evidence: A =:= String)
makes use of Scala's alternative infix syntax for type operators.
So, A =:= String is the same as =:=[A, String] (and =:= is just a class or trait with a fancy-looking name). Note that this syntax also works with "regular" classes, for example you can write:
val a: Tuple2[Int, String] = (1, "one")
like this:
val a: Int Tuple2 String = (1, "one")
It's similar to the two syntaxes for method calls, the "normal" with . and () and the operator syntax.
Read the other answers to understand what these constructs are; here is when you should use them: when you need to constrain a method to specific type arguments only.
Here is an example. Suppose you want to define a homogeneous Pair, like this:
class Pair[T](val first: T, val second: T)
Now you want to add a method smaller, like this:
def smaller = if (first < second) first else second
That only works if T is ordered. You could restrict the entire class:
class Pair[T <: Ordered[T]](val first: T, val second: T)
But that seems a shame--there could be uses for the class when T isn't ordered. With a type constraint, you can still define the smaller method:
def smaller(implicit ev: T <:< Ordered[T]) = if (first < second) first else second
It's ok to instantiate, say, a Pair[File], as long as you don't call smaller on it.
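Here is a minimal sketch putting the pieces together; the Version class is made up purely so that there is a type which actually extends Ordered:

class Pair[T](val first: T, val second: T) {
  // smaller is only callable when the compiler can prove T <:< Ordered[T]
  def smaller(implicit ev: T <:< Ordered[T]) =
    if (first < second) first else second
}

case class Version(n: Int) extends Ordered[Version] {
  def compare(that: Version): Int = this.n compare that.n
}

new Pair(Version(1), Version(2)).smaller                            // Version(1)
val files = new Pair(new java.io.File("a"), new java.io.File("b"))  // fine to construct
// files.smaller                                                    // does not compile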
In the case of Option, the implementors wanted an orNull method, even though it doesn't make sense for Option[Int]. By using a type constraint, all is well. You can use orNull on an Option[String], and you can form an Option[Int] and use it, as long as you don't call orNull on it. If you try Some(42).orNull, you get the charming message
error: Cannot prove that Null <:< Int
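For example (REPL-style, results shown as comments):

Some("hi").orNull              // "hi"
(None: Option[String]).orNull  // null
// Some(42).orNull             // does not compile: Cannot prove that Null <:< Int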
It depends on where they are being used. Most often, when used while declaring types of implicit parameters, they are classes. They can be objects too in rare instances. Finally, they can be operators on Manifest objects. They are defined inside scala.Predef in the first two cases, though not particularly well documented.
They are meant to provide a way to test the relationship between the classes, just like <: and <% do, in situations when the latter cannot be used.
As for the question "when should I use them?", the answer is you shouldn't, unless you know you should. :-) EDIT: Ok, ok, here are some examples from the library. On Either, you have:
/**
 * Joins an <code>Either</code> through <code>Right</code>.
 */
def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C] = this match {
  case Left(a)  => Left(a)
  case Right(b) => b
}

/**
 * Joins an <code>Either</code> through <code>Left</code>.
 */
def joinLeft[A1 >: A, B1 >: B, C](implicit ev: A1 <:< Either[C, B1]): Either[C, B1] = this match {
  case Left(a)  => a
  case Right(b) => Right(b)
}
On Option, you have:
def orNull[A1 >: A](implicit ev: Null <:< A1): A1 = this getOrElse null
You'll find some other examples on the collections.
I'm building a MultiSet[A] and using a TreeMap[A, Int] to keep track of the elements.
class MultiSet[A <: Ordered[A]](val tm: TreeMap[A, Int]) { ... }
Now I want to create a MultiSet[Int] using this framework. In particular, I want a method that will take a Vector[Int] and produce a TreeMap[Int, Int] that I can use to make a MultiSet[Int].
I wrote the following vectorToTreeMap, which compiles without complaint.
def vectorToTreeMap[A <: Ordered[A]](elements: Vector[A]): TreeMap[A, Int] =
  elements.foldLeft(new TreeMap[A, Int]())((tm, e) => tm.updated(e, tm.getOrElse(e, 0) + 1))
But when I try
val tm: TreeMap[Int, Int] = vectorToTreeMap(Vector(1, 2, 3))
I get compiler complaints saying that Int doesn't conform to A <: Ordered[A]. What does it take to create a TreeMap[Int, Int] in this context? (I want the more general case because the MultiSet[A] is not always MultiSet[Int].)
I also tried A <: scala.math.Ordered[A] and A <: Ordering[A] but with no better results. (I'll admit that I don't understand the differences among the three possibilities and whether it matters in this situation.)
Thanks for your help.
The problem is that Int is an alias for the Java int, which does not implement Ordered[Int]. How could it, since Java does not even know that the Ordered[T] trait exists?
There are two ways to solve your problem:
View bounds:
The first approach is to change the constraint <: to a view bound <%.
def vectorToTreeMap[A <% Ordered[A]](elements: Vector[A]): TreeMap[A, Int] =
  elements.foldLeft(new TreeMap[A, Int]())((tm, e) => tm.updated(e, tm.getOrElse(e, 0) + 1))
A <: Ordered[A] means that the method vectorToTreeMap is only defined for types that directly implement Ordered[A], which excludes Int.
A <% Ordered[A] means that the method vectorToTreeMap is defined for all types that "can be viewed as" implementing Ordered[A], which includes Int because there is an implicit conversion defined from Int to Ordered[Int]:
scala> implicitly[Int => Ordered[Int]]
res7: Int => Ordered[Int] = <function1>
Type classes
The second approach is to not require any (direct or indirect) inheritance relationship for the type A, but just require that there exists a way to order instances of type A.
Basically you always require an ordering to be able to create a TreeMap from a vector, but to avoid having to pass it every single time you call the method you make the ordering an implicit parameter.
def vectorToTreeMap[A](elements: Vector[A])(implicit ordering: Ordering[A]): TreeMap[A, Int] =
  elements.foldLeft(new TreeMap[A, Int]())((tm, e) => tm.updated(e, tm.getOrElse(e, 0) + 1))
It turns out that there are instances of Ordering[A] for all Java primitive types as well as for String, as you can see with the implicitly method in the Scala REPL:
scala> implicitly[Ordering[Int]]
res8: Ordering[Int] = scala.math.Ordering$Int$#5b748182
Scala is even able to derive orderings for composite types. For example, if you have a tuple where there exists an ordering for each element type, Scala will automatically provide an ordering for the tuple type as well:
scala> implicitly[Ordering[(Int, Int)]]
res9: Ordering[(Int, Int)] = scala.math.Ordering$$anon$11#66d51003
The second approach of using so-called type classes is much more flexible. For example, if you want a tree of plain old ints, but with reverse order, all you have to do is to provide a reverse int ordering either directly or as an implicit val.
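For example, with the implicit-parameter version above you could pass the reversed ordering explicitly (a sketch):

import scala.collection.immutable.TreeMap

val reversed: TreeMap[Int, Int] =
  vectorToTreeMap(Vector(1, 2, 3))(Ordering[Int].reverse)  // keys now iterate as 3, 2, 1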
This approach is also very common in idiomatic Scala. So there is even special syntax for it:
def vectorToTreeMap[A : Ordering](elements: Vector[A]): TreeMap[A, Int] = ???
is equivalent to
def vectorToTreeMap[A](elements: Vector[A])(implicit ordering:Ordering[A]): TreeMap[A, Int] = ???
It basically means that you want the method vectorToTreeMap defined only for types for which an ordering exists, but you do not care about giving the ordering a name. Even with the short syntax you can use vectorToTreeMap with an implicitly resolved Ordering[A], or pass an Ordering[A] explicitly.
The second approach has two big advantages:
it allows you to define functionality for types you do not "own".
it allows you to decouple the behavior regarding some aspect, e.g. ordering, from the type itself, whereas with the inheritance approach you couple the behavior to the type. For example, you can have a normal Ordering and a caseInsensitiveOrdering for a String. But if you let String extend Ordered, you must decide on one ordering behavior.
That is why the second approach is used in the Scala collections themselves to provide an ordering for TreeMap.
Edit: here is an example to provide an ordering for a type that does not have one:
scala> case class Person(name:String, surname:String)
defined class Person
scala> implicitly[Ordering[Person]]
<console>:10: error: No implicit Ordering defined for Person.
implicitly[Ordering[Person]]
^
Case classes do not have orderings automatically defined. But we can easily define one:
scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Person(name: String, surname: String)

object Person {
  // just convert to a tuple, which is ordered by the individual elements
  val nameSurnameOrdering: Ordering[Person] = Ordering.by(p => (p.name, p.surname))

  // make the nameSurnameOrdering the default that is in scope unless something else is specified
  implicit def defaultOrdering = nameSurnameOrdering
}
// Exiting paste mode, now interpreting.
defined class Person
defined module Person
scala> implicitly[Ordering[Person]]
res1: Ordering[Person] = scala.math.Ordering$$anon$9#50148190
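With that default ordering in scope, the type-class version of vectorToTreeMap from above works for Person as well (a sketch, assuming both definitions are visible):

import scala.collection.immutable.TreeMap

val people = Vector(Person("Ada", "Lovelace"), Person("Alan", "Turing"), Person("Ada", "Lovelace"))
val counts: TreeMap[Person, Int] = vectorToTreeMap(people)
// counts(Person("Ada", "Lovelace")) == 2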
An implicit question to newcomers to Scala seems to be: where does the compiler look for implicits? I mean implicit because the question never seems to get fully formed, as if there weren't words for it. :-) For example, where do the values for integral below come from?
scala> import scala.math._
import scala.math._
scala> def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
foo: [T](t: T)(implicit integral: scala.math.Integral[T])Unit
scala> foo(0)
scala.math.Numeric$IntIsIntegral$#3dbea611
scala> foo(0L)
scala.math.Numeric$LongIsIntegral$#48c610af
A natural follow-up question, for those who learn the answer to the first one, is: how does the compiler choose which implicit to use in certain situations of apparent ambiguity (which nevertheless compile)?
For instance, scala.Predef defines two conversions from String: one to WrappedString and another to StringOps. Both classes, however, share a lot of methods, so why doesn't Scala complain about ambiguity when, say, calling map?
Note: this question was inspired by this other question, in the hopes of stating the problem in a more general manner. The example was copied from there, because it is referred to in the answer.
Types of Implicits
Implicits in Scala refer to either a value that can be passed "automatically", so to speak, or a conversion from one type to another that is made automatically.
Implicit Conversion
Speaking very briefly about the latter type, if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m. A simple example would be the method map on String:
"abc".map(_.toInt)
String does not support the method map, but StringOps does, and there's an implicit conversion from String to StringOps available (see implicit def augmentString on Predef).
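As an illustration, here is a minimal user-defined conversion of the same kind; Celsius and doubleToCelsius are made-up names:

import scala.language.implicitConversions

case class Celsius(degrees: Double) {
  def describe: String = s"$degrees degrees Celsius"
}

implicit def doubleToCelsius(d: Double): Celsius = Celsius(d)

21.5.describe  // the compiler rewrites this to doubleToCelsius(21.5).describe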
Implicit Parameters
The other kind of implicit is the implicit parameter. These are passed to method calls like any other parameter, but the compiler tries to fill them in automatically. If it can't, it will complain. One can pass these parameters explicitly, which is how one uses breakOut, for example (see question about breakOut, on a day you are feeling up for a challenge).
In this case, one has to declare the need for an implicit, such as the foo method declaration:
def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
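Both of the calls below are then valid; the first lets the compiler fill in the Integral[Int] instance, while the second passes it explicitly:

foo(10)                                    // prints the implicitly resolved instance
foo(10)(scala.math.Numeric.IntIsIntegral)  // the same instance, passed by hand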
View Bounds
There's one situation where an implicit is both an implicit conversion and an implicit parameter. For example:
def getIndex[T, CC](seq: CC, value: T)(implicit conv: CC => Seq[T]) = seq.indexOf(value)
getIndex("abc", 'a')
The method getIndex can receive any object, as long as there is an implicit conversion available from its class to Seq[T]. Because of that, I can pass a String to getIndex, and it will work.
Behind the scenes, the compiler changes seq.indexOf(value) to conv(seq).indexOf(value).
This is so useful that there is syntactic sugar to write them. Using this syntactic sugar, getIndex can be defined like this:
def getIndex[T, CC <% Seq[T]](seq: CC, value: T) = seq.indexOf(value)
This syntactic sugar is described as a view bound, akin to an upper bound (CC <: Seq[Int]) or a lower bound (T >: Null).
Context Bounds
Another common pattern in implicit parameters is the type class pattern. This pattern enables the provision of common interfaces to classes which did not declare them. It can both serve as a bridge pattern -- gaining separation of concerns -- and as an adapter pattern.
The Integral class you mentioned is a classic example of type class pattern. Another example on Scala's standard library is Ordering. There's a library that makes heavy use of this pattern, called Scalaz.
This is an example of its use:
def sum[T](list: List[T])(implicit integral: Integral[T]): T = {
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
There is also syntactic sugar for it, called a context bound, which is made less useful by the need to refer to the implicit. A straight conversion of that method looks like this:
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Context bounds are more useful when you just need to pass them to other methods that use them. For example, the method sorted on Seq needs an implicit Ordering. To create a method reverseSort, one could write:
def reverseSort[T : Ordering](seq: Seq[T]) = seq.sorted.reverse
Because Ordering[T] was implicitly passed to reverseSort, it can then pass it implicitly to sorted.
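For example (the explicit instance in the second call is the standard library's Ordering.String):

reverseSort(Seq(2, 3, 1))                         // Seq(3, 2, 1)
reverseSort(Seq("b", "c", "a"))(Ordering.String)  // the instance can also be passed by hand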
Where do Implicits come from?
When the compiler sees the need for an implicit, either because you are calling a method which does not exist on the object's class, or because you are calling a method that requires an implicit parameter, it will search for an implicit that will fit the need.
This search obeys certain rules that define which implicits are visible and which are not. The following table showing where the compiler will search for implicits was taken from an excellent presentation (timestamp 20:20) about implicits by Josh Suereth, which I heartily recommend to anyone wanting to improve their Scala knowledge. It has been complemented since then with feedback and updates.
The implicits available under number 1 below have precedence over the ones under number 2. Other than that, if there are several eligible arguments which match the implicit parameter's type, the most specific one will be chosen using the rules of static overloading resolution (see the Scala Specification, §6.26.3). More detailed information can be found in a question I link to at the end of this answer.
1) First look in current scope
   Implicits defined in current scope
   Explicit imports
   Wildcard imports
   Same scope in other files
2) Now look at associated types in
   Companion objects of a type
   Implicit scope of an argument's type (2.9.1)
   Implicit scope of type arguments (2.8.0)
   Outer objects for nested types
   Other dimensions
Let's give some examples for them:
Implicits Defined in Current Scope
implicit val n: Int = 5
def add(x: Int)(implicit y: Int) = x + y
add(5) // takes n from the current scope
Explicit Imports
import scala.collection.JavaConversions.mapAsScalaMap
def env = System.getenv() // Java map
val term = env("TERM") // implicit conversion from Java Map to Scala Map
Wildcard Imports
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Same Scope in Other Files
Edit: It seems this does not have a different precedence. If you have some example that demonstrates a precedence distinction, please make a comment. Otherwise, don't rely on this one.
This is like the first example, but assuming the implicit definition is in a different file than its usage. See also how package objects might be used to bring implicits into scope.
Companion Objects of a Type
There are two object companions of note here. First, the object companion of the "source" type is looked into. For instance, inside the object Option there is an implicit conversion to Iterable, so one can call Iterable methods on Option, or pass Option to something expecting an Iterable. For example:
for {
  x <- List(1, 2, 3)
  y <- Some('x')
} yield (x, y)
That expression is translated by the compiler to
List(1, 2, 3).flatMap(x => Some('x').map(y => (x, y)))
However, List.flatMap expects a TraversableOnce, which Option is not. The compiler then looks inside Option's object companion and finds the conversion to Iterable, which is a TraversableOnce, making this expression correct.
Second, the companion object of the expected type:
List(1, 2, 3).sorted
The method sorted takes an implicit Ordering. In this case, it looks inside the object Ordering, companion to the class Ordering, and finds an implicit Ordering[Int] there.
Note that companion objects of super classes are also looked into. For example:
class A(val n: Int)

object A {
  implicit def str(a: A) = "A: %d" format a.n
}

class B(val x: Int, y: Int) extends A(y)

val b = new B(5, 2)
val s: String = b  // s == "A: 2"
This is how Scala found the implicit Numeric[Int] and Numeric[Long] in your question, by the way, as they are found inside Numeric, not Integral.
Implicit Scope of an Argument's Type
If you have a method with an argument type A, then the implicit scope of type A will also be considered. By "implicit scope" I mean that all these rules will be applied recursively -- for example, the companion object of A will be searched for implicits, as per the rule above.
Note that this does not mean the implicit scope of A will be searched for conversions of that parameter, but of the whole expression. For example:
class A(val n: Int) {
  def +(other: A) = new A(n + other.n)
}

object A {
  implicit def fromInt(n: Int) = new A(n)
}

// This becomes possible:
1 + new A(1)

// because it is converted into this:
A.fromInt(1) + new A(1)
This is available since Scala 2.9.1.
Implicit Scope of Type Arguments
This is required to make the type class pattern really work. Consider Ordering, for instance: It comes with some implicits in its companion object, but you can't add stuff to it. So how can you make an Ordering for your own class that is automatically found?
Let's start with the implementation:
class A(val n: Int)

object A {
  implicit val ord = new Ordering[A] {
    def compare(x: A, y: A) = implicitly[Ordering[Int]].compare(x.n, y.n)
  }
}
So, consider what happens when you call
List(new A(5), new A(2)).sorted
As we saw, the method sorted expects an Ordering[A] (actually, it expects an Ordering[B], where B >: A). There isn't any such thing inside Ordering, and there is no "source" type on which to look. Obviously, it is finding it inside A, which is a type argument of Ordering.
This is also how various collection methods expecting CanBuildFrom work: the implicits are found inside companion objects to the type parameters of CanBuildFrom.
Note: Ordering is defined as trait Ordering[T], where T is a type parameter. Previously, I said that Scala looked inside type parameters, which doesn't make much sense. The implicit looked for above is Ordering[A], where A is an actual type, not type parameter: it is a type argument to Ordering. See section 7.2 of the Scala specification.
This is available since Scala 2.8.0.
Outer Objects for Nested Types
I haven't actually seen examples of this. I'd be grateful if someone could share one. The principle is simple:
class A(val n: Int) {
  class B(val m: Int) { require(m < n) }
}

object A {
  implicit def bToString(b: A#B) = "B: %d" format b.m
}

val a = new A(5)
val b = new a.B(3)
val s: String = b  // s == "B: 3"
Other Dimensions
I'm pretty sure this was a joke, but this answer might not be up to date. So don't take this question as being the final arbiter of what is happening, and if you do notice it has gotten out of date, please inform me so that I can fix it.
EDIT
Related questions of interest:
Context and view bounds
Chaining implicits
Scala: Implicit parameter resolution precedence
I wanted to find out the precedence of the implicit parameter resolution, not just where it looks for, so I wrote a blog post revisiting implicits without import tax (and implicit parameter precedence again after some feedback).
Here's the list:
1) implicits visible to current invocation scope via local declaration, imports, outer scope, inheritance, package object that are accessible without prefix.
2) implicit scope, which contains all sort of companion objects and package object that bear some relation to the implicit's type which we search for (i.e. package object of the type, companion object of the type itself, of its type constructor if any, of its parameters if any, and also of its supertype and supertraits).
If at either stage we find more than one implicit, the static overloading rules are used to resolve it.
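A small sketch of the two categories (Box and the ordering names are made up): an implicit visible in the invocation scope (category 1) is found before the implicit scope of the type (category 2) is even consulted:

class Box(val n: Int)

object Box {
  // category 2: lives in the companion object, i.e. in the implicit scope of Ordering[Box]
  implicit val byValue: Ordering[Box] = Ordering.by[Box, Int](_.n)
}

def demo(): Ordering[Box] = {
  // category 1: locally declared, accessible without prefix
  implicit val byValueDesc: Ordering[Box] = Ordering.by[Box, Int](b => -b.n)
  implicitly[Ordering[Box]]  // resolves to byValueDesc; Box.byValue is never considered
}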
I have read the answer to my question about scala.math.Integral but I do not understand what happens when Integral[T] is passed as an implicit parameter. (I think I understand the implicit parameters concept in general).
Let's consider this function
import scala.math._
def foo[T](t: T)(implicit integral: Integral[T]) { println(integral) }
Now I call foo in REPL:
scala> foo(0)
scala.math.Numeric$IntIsIntegral$#581ea2
scala> foo(0L)
scala.math.Numeric$LongIsIntegral$#17fe89
How does the integral argument become scala.math.Numeric$IntIsIntegral and scala.math.Numeric$LongIsIntegral ?
The short answer is that Scala finds IntIsIntegral and LongIsIntegral inside the object Numeric, which is the companion object of the class Numeric, which is a super class of Integral.
Read on for the long answer.
Types of Implicits
Implicits in Scala refer to either a value that can be passed "automatically", so to speak, or a conversion from one type to another that is made automatically.
Implicit Conversion
Speaking very briefly about the latter type, if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m. A simple example would be the method map on String:
"abc".map(_.toInt)
String does not support the method map, but StringOps does, and there's an implicit conversion from String to StringOps available (see implicit def augmentString on Predef).
Implicit Parameters
The other kind of implicit is the implicit parameter. These are passed to method calls like any other parameter, but the compiler tries to fill them in automatically. If it can't, it will complain. One can pass these parameters explicitly, which is how one uses breakOut, for example (see question about breakOut, on a day you are feeling up for a challenge).
In this case, one has to declare the need for an implicit, such as the foo method declaration:
def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
View Bounds
There's one situation where an implicit is both an implicit conversion and an implicit parameter. For example:
def getIndex[T, CC](seq: CC, value: T)(implicit conv: CC => Seq[T]) = seq.indexOf(value)
getIndex("abc", 'a')
The method getIndex can receive any object, as long as there is an implicit conversion available from its class to Seq[T]. Because of that, I can pass a String to getIndex, and it will work.
Behind the scenes, the compiler changes seq.indexOf(value) to conv(seq).indexOf(value).
This is so useful that there is syntactic sugar to write them. Using this syntactic sugar, getIndex can be defined like this:
def getIndex[T, CC <% Seq[T]](seq: CC, value: T) = seq.indexOf(value)
This syntactic sugar is described as a view bound, akin to an upper bound (CC <: Seq[Int]) or a lower bound (T >: Null).
Please be aware that view bounds have been deprecated since 2.11; you should avoid them.
Context Bounds
Another common pattern in implicit parameters is the type class pattern. This pattern enables the provision of common interfaces to classes which did not declare them. It can both serve as a bridge pattern -- gaining separation of concerns -- and as an adapter pattern.
The Integral class you mentioned is a classic example of type class pattern. Another example on Scala's standard library is Ordering. There's a library that makes heavy use of this pattern, called Scalaz.
This is an example of its use:
def sum[T](list: List[T])(implicit integral: Integral[T]): T = {
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
There is also syntactic sugar for it, called a context bound, which is made less useful by the need to refer to the implicit. A straight conversion of that method looks like this:
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Context bounds are more useful when you just need to pass them to other methods that use them. For example, the method sorted on Seq needs an implicit Ordering. To create a method reverseSort, one could write:
def reverseSort[T : Ordering](seq: Seq[T]) = seq.sorted.reverse
Because Ordering[T] was implicitly passed to reverseSort, it can then pass it implicitly to sorted.
Where do Implicits Come From?
When the compiler sees the need for an implicit, either because you are calling a method which does not exist on the object's class, or because you are calling a method that requires an implicit parameter, it will search for an implicit that will fit the need.
This search obeys certain rules that define which implicits are visible and which are not. The following table showing where the compiler will search for implicits was taken from an excellent presentation about implicits by Josh Suereth, which I heartily recommend to anyone wanting to improve their Scala knowledge.
1) First look in current scope
   Implicits defined in current scope
   Explicit imports
   Wildcard imports
   Same scope in other files
2) Now look at associated types in
   Companion objects of a type
   Companion objects of type parameters types
   Outer objects for nested types
   Other dimensions
Let's give examples for them.
Implicits Defined in Current Scope
implicit val n: Int = 5
def add(x: Int)(implicit y: Int) = x + y
add(5) // takes n from the current scope
Explicit Imports
import scala.collection.JavaConversions.mapAsScalaMap
def env = System.getenv() // Java map
val term = env("TERM") // implicit conversion from Java Map to Scala Map
Wildcard Imports
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._  // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Same Scope in Other Files
This is like the first example, but assuming the implicit definition is in a different file than its usage. See also how package objects might be used to bring implicits into scope.
Companion Objects of a Type
There are two object companions of note here. First, the object companion of the "source" type is looked into. For instance, inside the object Option there is an implicit conversion to Iterable, so one can call Iterable methods on Option, or pass Option to something expecting an Iterable. For example:
for {
  x <- List(1, 2, 3)
  y <- Some('x')
} yield (x, y)
That expression is translated by the compiler into
List(1, 2, 3).flatMap(x => Some('x').map(y => (x, y)))
However, List.flatMap expects a TraversableOnce, which Option is not. The compiler then looks inside Option's object companion and finds the conversion to Iterable, which is a TraversableOnce, making this expression correct.
Second, the companion object of the expected type:
List(1, 2, 3).sorted
The method sorted takes an implicit Ordering. In this case, it looks inside the object Ordering, companion to the class Ordering, and finds an implicit Ordering[Int] there.
Note that companion objects of super classes are also looked into. For example:
class A(val n: Int)

object A {
  implicit def str(a: A) = "A: %d" format a.n
}

class B(val x: Int, y: Int) extends A(y)

val b = new B(5, 2)
val s: String = b  // s == "A: 2"
This is how Scala found the implicit Numeric[Int] and Numeric[Long] in your question, by the way, as they are found inside Numeric, not Integral.
Companion Objects of Type Parameters Types
This is required to make the type class pattern really work. Consider Ordering, for instance... it comes with some implicits in its companion object, but you can't add stuff to it. So how can you make an Ordering for your own class that is automatically found?
Let's start with the implementation:
class A(val n: Int)

object A {
  implicit val ord = new Ordering[A] {
    def compare(x: A, y: A) = implicitly[Ordering[Int]].compare(x.n, y.n)
  }
}
So, consider what happens when you call
List(new A(5), new A(2)).sorted
As we saw, the method sorted expects an Ordering[A] (actually, it expects an Ordering[B], where B >: A). There isn't any such thing inside Ordering, and there is no "source" type on which to look. Obviously, it is finding it inside A, which is a type parameter of Ordering.
This is also how various collection methods expecting CanBuildFrom work: the implicits are found inside companion objects to the type parameters of CanBuildFrom.
Outer Objects for Nested Types
I haven't actually seen examples of this. I'd be grateful if someone could share one. The principle is simple:
class A(val n: Int) {
  class B(val m: Int) { require(m < n) }
}

object A {
  implicit def bToString(b: A#B) = "B: %d" format b.m
}

val a = new A(5)
val b = new a.B(3)
val s: String = b  // s == "B: 3"
Other Dimensions
I'm pretty sure this was a joke. I hope. :-)
EDIT
Related questions of interest:
Context and view bounds
Chaining implicits
The parameter is implicit, which means that the Scala compiler will look if it can find an implicit object somewhere that it can automatically fill in for the parameter.
When you pass in an Int, it's going to look for an implicit object that is an Integral[Int] and it finds it in scala.math.Numeric. You can look at the source code of scala.math.Numeric, where you will find this:
object Numeric {
  // ...

  trait IntIsIntegral extends Integral[Int] {
    // ...
  }

  // This is the implicit object that the compiler finds
  implicit object IntIsIntegral extends IntIsIntegral with Ordering.IntOrdering
}
Likewise, there is a different implicit object for Long that works the same way.
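You can check which instances the compiler picks by asking for them directly with implicitly (REPL-style; the hash codes will differ on your machine):

import scala.math._

implicitly[Integral[Int]]   // scala.math.Numeric$IntIsIntegral$@...
implicitly[Integral[Long]]  // scala.math.Numeric$LongIsIntegral$@...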