I have a Stream of data and I want to throw everything away except for the final n elements. If the input doesn't have enough elements then the resulting stream is padded with None. This is what I came up with:
def lastN[T](in: Stream[T], len: Int): Stream[Option[T]] =
in.foldLeft(Vector.fill[Option[T]](len)(None))(_.tail :+ Option(_)).to[Stream]
I chose Vector for the internal buffer because of its tail and append performance characteristics.
This all works fine. Perhaps there's a better way? [Note: There's always a better way.]
But suppose Iterator is a more appropriate representation of the input data? No problem, just replace the 3 mentions of Stream with Iterator and it all works.
OK, so why not either/both?
I was hoping I could do something like this:
import scala.language.higherKinds
def lastN[T, C[U] <: TraversableOnce[U] ](in: C[T], len: Int): C[Option[T]] =
in.foldLeft(Vector.fill[Option[T]](len)(None))(_.tail :+ Option(_)).to[C]
Alas, no go.
error: Cannot construct a collection of type C[Option[T]] with
elements of type Option[T] based on a collection of type Nothing.
I've tried futzing with CanBuildFrom directly but I'm just not coming up with the magic formula.
I think it's more natural to use Queue as an internal buffer. It's more semantically suitable for this sort of processing and scala.collection.immutable.Queue is implemented with two Lists and may actually be more efficient than Vector (you'd have to make a measurement to find out if that's the case of course). Otherwise the API stays completely the same: you can just replace the mention of Vector with Queue.
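Applied to the original Stream version, that substitution is essentially a one-word change (a minimal sketch):
import scala.collection.immutable.Queue

def lastN[T](in: Stream[T], len: Int): Stream[Option[T]] =
  in.foldLeft(Queue.fill[Option[T]](len)(None))(_.tail :+ Option(_)).to[Stream]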
As for CanBuildFrom, it's used in your code to call the to method. You can consult its full signature to find out what CanBuildFrom you'd have to request:
def to[Col[_]](implicit cbf: CanBuildFrom[Nothing, A, Col[A]]): Col[A]
So, you would need CanBuildFrom[Nothing, Option[T], C[Option[T]]].
Putting it all together a possible implementation looks like this:
import scala.collection.generic.CanBuildFrom
import scala.collection.immutable.Queue
def lastN[T, C[U] <: TraversableOnce[U]](in: C[T], len: Int)(
implicit cbf: CanBuildFrom[Nothing, Option[T], C[Option[T]]]
): C[Option[T]] =
in.foldLeft(Queue.fill(len)(None: Option[T]))(_.tail :+ Option(_)).to[C]
As for your comment, the compiler knows that to call to it needs CanBuildFrom[Nothing, Option[T], C[Option[T]]], but it can't automatically find an implicit argument with abstract types.
But if you put a request CanBuildFrom[Nothing, Option[T], C[Option[T]]] in the lastN signature, then when you call for example lastN(Vector(1,2,3), 2), the compiler knows that C is Vector, and T is Int, so it has to pass a CanBuildFrom[Nothing, Option[Int], Vector[Option[Int]]].
Here all types are concrete and the compiler can find a relevant instance of CanBuildFrom using the usual implicit lookup rules. I believe it will find one in the companion object of Vector in this example.
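For example, with the definition above in scope, both of these should resolve their CanBuildFrom from the collection companions:
lastN(Vector(1, 2, 3, 4), 2)    // Vector(Some(3), Some(4))
lastN(Stream(1, 2), 4).toList   // List(None, None, Some(1), Some(2))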
If you want to take the last N elements of an Iterable, you can use the takeRight function. This should work for any collection that inherits from Iterable, so it will work for Stream. Unfortunately, it has been pointed out that this does not work for Iterator.
def lastN[T](in: Iterable[T], n: Int) = in.takeRight(n)
Iterator doesn't extend Traversable, but both share TraversableOnce, which does have a toIterable method you can use. If you really want to make it as generic as possible, you can try:
def lastN[T](in: TraversableOnce[T], n: Int) = in.toIterable.takeRight(n)
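For example (note that, unlike the fold-based version above, this variant does not pad with None; it just returns whatever is available):
lastN(Stream(1, 2, 3, 4, 5), 3).toList  // List(3, 4, 5)
lastN(Iterator(1, 2), 4).toList         // List(1, 2)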
Related
I can see in the API docs for Predef that they're subclasses of a generic function type (From) => To, but that's all it says. Um, what? Maybe there's documentation somewhere, but search engines don't handle "names" like "<:<" very well, so I haven't been able to find it.
Follow-up question: when should I use these funky symbols/classes, and why?
These are called generalized type constraints. They allow you, from within a type-parameterized class or trait, to further constrain one of its type parameters. Here's an example:
case class Foo[A](a:A) { // 'A' can be substituted with any type
// getStringLength can only be used if this is a Foo[String]
def getStringLength(implicit evidence: A =:= String) = a.length
}
The implicit argument evidence is supplied by the compiler, iff A is String. You can think of it as a proof that A is String--the argument itself isn't important, only knowing that it exists. [edit: well, technically it actually is important because it represents an implicit conversion from A to String, which is what allows you to call a.length and not have the compiler yell at you]
Now I can use it like so:
scala> Foo("blah").getStringLength
res6: Int = 4
But if I try to use it with a Foo containing something other than a String:
scala> Foo(123).getStringLength
<console>:9: error: could not find implicit value for parameter evidence: =:=[Int,String]
You can read that error as "could not find evidence that Int == String"... that's as it should be! getStringLength is imposing further restrictions on the type of A than what Foo in general requires; namely, you can only invoke getStringLength on a Foo[String]. This constraint is enforced at compile-time, which is cool!
<:< and <%< work similarly, but with slight variations:
A =:= B means A must be exactly B
A <:< B means A must be a subtype of B (analogous to the simple type constraint <:)
A <%< B means A must be viewable as B, possibly via implicit conversion (analogous to the simple type constraint <%)
This snippet by @retronym is a good explanation of how this sort of thing used to be accomplished and how generalized type constraints make it easier now.
ADDENDUM
To answer your follow-up question, admittedly the example I gave is pretty contrived and not obviously useful. But imagine using it to define something like a List.sumInts method, which adds up a list of integers. You don't want to allow this method to be invoked on any old List, just a List[Int]. However the List type constructor can't be so constrained; you still want to be able to have lists of strings, foos, bars, and whatnots. So by placing a generalized type constraint on sumInts, you can ensure that just that method has an additional constraint that it can only be used on a List[Int]. Essentially you're writing special-case code for certain kinds of lists.
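A minimal sketch of that idea as an enrichment (SumOps is a made-up name, not a real standard-library method):
implicit class SumOps[A](xs: List[A]) {
  // callable only when the compiler can prove that A is exactly Int
  def sumInts(implicit ev: A =:= Int): Int = xs.foldLeft(0)(_ + ev(_))
}

List(1, 2, 3).sumInts      // 6
// List("a", "b").sumInts  // does not compile: Cannot prove that String =:= Int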
Not a complete answer (others have already answered this), I just wanted to note the following, which maybe helps to understand the syntax better: The way you normally use these "operators", as for example in pelotom's example:
def getStringLength(implicit evidence: A =:= String)
makes use of Scala's alternative infix syntax for type operators.
So, A =:= String is the same as =:=[A, String] (and =:= is just a class or trait with a fancy-looking name). Note that this syntax also works with "regular" classes, for example you can write:
val a: Tuple2[Int, String] = (1, "one")
like this:
val a: Int Tuple2 String = (1, "one")
It's similar to the two syntaxes for method calls, the "normal" with . and () and the operator syntax.
Read the other answers to understand what these constructs are. Here is when you should use them. You use them when you need to constrain a method for specific types only.
Here is an example. Suppose you want to define a homogeneous Pair, like this:
class Pair[T](val first: T, val second: T)
Now you want to add a method smaller, like this:
def smaller = if (first < second) first else second
That only works if T is ordered. You could restrict the entire class:
class Pair[T <: Ordered[T]](val first: T, val second: T)
But that seems a shame--there could be uses for the class when T isn't ordered. With a type constraint, you can still define the smaller method:
def smaller(implicit ev: T <:< Ordered[T]) = if (first < second) first else second
It's ok to instantiate, say, a Pair[File], as long as you don't call smaller on it.
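Here is a compilable sketch of that (Version is a made-up type that happens to be Ordered, used only for illustration):
case class Version(n: Int) extends Ordered[Version] {
  def compare(that: Version): Int = this.n compare that.n
}

class Pair[T](val first: T, val second: T) {
  // the evidence also acts as an implicit view T => Ordered[T], which is what makes < available
  def smaller(implicit ev: T <:< Ordered[T]): T =
    if (first < second) first else second
}

new Pair(Version(1), Version(2)).smaller                 // Version(1)
new Pair(new java.io.File("a"), new java.io.File("b"))   // fine: construction is unconstrained
// ...but calling .smaller on the File pair does not compile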
In the case of Option, the implementors wanted an orNull method, even though it doesn't make sense for Option[Int]. By using a type constraint, all is well. You can use orNull on an Option[String], and you can form an Option[Int] and use it, as long as you don't call orNull on it. If you try Some(42).orNull, you get the charming message
error: Cannot prove that Null <:< Int
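The String case, by contrast, compiles and behaves as expected:
Some("hi").orNull                // "hi"
(None: Option[String]).orNull    // null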
It depends on where they are being used. Most often, when used while declaring types of implicit parameters, they are classes. They can be objects too in rare instances. Finally, they can be operators on Manifest objects. They are defined inside scala.Predef in the first two cases, though not particularly well documented.
They are meant to provide a way to test the relationship between the classes, just like <: and <% do, in situations when the latter cannot be used.
As for the question "when should I use them?", the answer is you shouldn't, unless you know you should. :-) EDIT: Ok, ok, here are some examples from the library. On Either, you have:
/**
* Joins an <code>Either</code> through <code>Right</code>.
*/
def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C] = this match {
case Left(a) => Left(a)
case Right(b) => b
}
/**
* Joins an <code>Either</code> through <code>Left</code>.
*/
def joinLeft[A1 >: A, B1 >: B, C](implicit ev: A1 <:< Either[C, B1]): Either[C, B1] = this match {
case Left(a) => a
case Right(b) => Right(b)
}
On Option, you have:
def orNull[A1 >: A](implicit ev: Null <:< A1): A1 = this getOrElse null
You'll find some other examples on the collections.
I want to write a polymorphic function that accepts either an IndexedSeq[A] or a ParVector[A]. Inside the function I want access to the prepend method i.e. +: which is in SeqLike. SeqLike is a rather confusing type for me since it takes a Repr, which I sort of ignored, unsuccessfully of course.
def goFoo[M[_] <: SeqLike[_,_], A](ac: M[A])(p: Int): M[A] = ???
The function should accept an empty accumulator to start with and call itself recursively p times and each time prepend an A. Here is a concrete example
def goStripper[M[_] <: SeqLike[_,_]](ac: M[PDFTextStripper])(p: Int): M[PDFTextStripper] = {
val str = new PDFTextStripper
str.setStartPage(p)
str.setEndPage(p)
if (p > 1) goStripper(str +: ac)(p-1)
else str +: ac
}
But of course this doesn't compile because I am missing something fundamental about SeqLike. Does anyone has a solution (preferably with an explanation for this?)
Thanks.
Dealing with SeqLike[A, Repr] can be a bit difficult sometimes. You really need to have a good understanding of how the collections library works (This is a great article if you are interested, http://docs.scala-lang.org/overviews/core/architecture-of-scala-collections.html). Thankfully, in your case, you actually don't need to even worry about it too much. Both IndexedSeq[A] and ParVector[A] are subclasses of scala.collection.GenSeq[A]. So you can merely write your method as follows
Simple Solution
scala> def goFoo[A, B <: GenSeq[A] with GenSeqLike[A, B]](ac: B)(p: Int): B = ac
goFoo: [A, B <: scala.collection.GenSeq[A] with scala.collection.GenSeqLike[A,B]](ac: B)(p: Int)B
scala> goFoo[Int, IndexedSeq[Int]](IndexedSeq(1))(1)
res26: IndexedSeq[Int] = Vector(1)
scala> goFoo[Int, ParVector[Int]](new ParVector(Vector(1)))(1)
res27: scala.collection.parallel.immutable.ParVector[Int] = ParVector(1)
You need to enforce that B is both a subtype of GenSeq[A] and GenSeqLike[A, Repr] so that you can provide the correct value for the Repr. You also need to enforce that the Repr in GenSeqLike[A, Repr] is B. Otherwise some of the methods won't return the correct type. Repr is the underlying representation of the collection. To really understand it, you should read the article I linked, but you can think of it as the output type of many of the collection operations, although that is very oversimplified. I talk about it more below, if you are really interested. For now, it suffices to say we want it to be the same type as the collection we are operating on.
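To see why the GenSeqLike[A, B] part matters, here is a small sketch:
import scala.collection.{GenSeq, GenSeqLike}

// With only B <: GenSeq[A], drop is typed as returning GenSeq[A] rather than B, so
//   def goBad[A, B <: GenSeq[A]](ac: B)(p: Int): B = ac.drop(1)
// does not compile. Adding the GenSeqLike[A, B] bound fixes the return type:
def goOk[A, B <: GenSeq[A] with GenSeqLike[A, B]](ac: B)(p: Int): B = ac.drop(1)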
Higher Kind Solution
Right now, the type system needs you to manually supply both generic parameters, which is fine, but we can do a little better. You can make this a little cleaner if you allow for higher kinds.
scala> import scala.language.higherKinds
import scala.language.higherKinds
scala> def goFoo[A, B[A] <: GenSeq[A] with GenSeqLike[A, B[A]]](ac: B[A])(p: Int): B[A] = ac
goFoo: [A, B[A] <: scala.collection.GenSeq[A] with scala.collection.GenSeqLike[A,B[A]]](ac: B[A])(p: Int)B[A]
scala> goFoo(IndexedSeq(1))(1)
res28: IndexedSeq[Int] = Vector(1)
scala> goFoo(new ParVector(Vector(1)))(1)
res29: scala.collection.parallel.immutable.ParVector[Int] = ParVector(1)
Now you don't have to worry about manually supplying the types.
Recursion
These solutions work with recursion as well.
scala> @tailrec
| def goFoo[A, B <: GenSeq[A] with GenSeqLike[A, B]](ac: B)(p: Int): B =
| if(p == 0){
| ac
| } else {
| goFoo[A, B](ac.drop(1))(p-1)
| }
goFoo: [A, B <: scala.collection.GenSeq[A] with scala.collection.GenSeqLike[A,B]](ac: B)(p: Int)B
scala> goFoo[Int, IndexedSeq[Int]](IndexedSeq(1, 2))(1)
res30: IndexedSeq[Int] = Vector(2)
And the higher kinded version
scala> @tailrec
| def goFoo[A, B[A] <: GenSeq[A] with GenSeqLike[A, B[A]]](ac: B[A])(p: Int): B[A] =
| if(p == 0){
| ac
| } else {
| goFoo(ac.drop(1))(p-1)
| }
goFoo: [A, B[A] <: scala.collection.GenSeq[A] with scala.collection.GenSeqLike[A,B[A]]](ac: B[A])(p: Int)B[A]
scala> goFoo(IndexedSeq(1, 2))(1)
res31: IndexedSeq[Int] = Vector(2)
Using GenSeqLike[A, Repr] Directly
TL;DR: So I just want to say, unless you have a need for a more general solution, don't do this. It is the hardest to understand and work with. We can't use SeqLike[A, Repr] because ParVector is not an instance of SeqLike, but we can use GenSeqLike[A, Repr], which both ParVector[A] and IndexedSeq[A] subclass.
That being said let's talk about how you could also solve this problem using GenSeqLike[A, Repr] directly.
Unpacking the type variables
First the easy one
A
This is just the type of the value in the collection, so for a Seq[Int] this would be Int.
Repr
This is the underlying type of the collection.
Scala collections implement most of their functionality in common traits, so that they don't have to duplicate code all over the place. Further, they wish to permit out of band types to function as though they are collections even if they don't inherit from a collections trait(I'm looking at you Array), and to allow client libraries/programs to add their own collection instances very easily while getting the bulk of the collection methods defined for free.
They are designed with two guiding constraints
Return the most specific type for the operation
Don't violate Liskov's Substitution Principle (https://en.wikipedia.org/wiki/Liskov_substitution_principle)
Note: These examples are taken from the aforementioned article and are not my own. (Linked again here for completeness: http://docs.scala-lang.org/overviews/core/architecture-of-scala-collections.html)
The first constraint can be shown in the following example. A BitSet is a set of non-negative integers. If I perform the following operation, what should the result be?
BitSet(1).map(_+1): ???
The correct answer was a BitSet. I know that seemed rather obvious, but consider the following. What is the type of this operation?
BitSet(1).map(_.toFloat): ???
It can't be a BitSet, right? Because we said that BitSet values are non-negative integers. So it turns out to be a SortedSet[Float].
The Repr parameter, combined with an appropriate CanBuildFrom instance (I explain what this is in a second), is one of the primary mechanisms that allows for returning the most specific type possible. We can see this by sort of cheating the system on the REPL. Consider the following: Vector is a subclass of both IndexedSeq and Seq. So what if we do this...
scala> val x: GenSeqLike[Int, IndexedSeq[Int]] = Vector(1)
x: scala.collection.GenSeqLike[Int,IndexedSeq[Int]] = Vector(1)
scala> 1 +: x
res26: IndexedSeq[Int] = Vector(1, 1)
See how the final type here was IndexedSeq[Int]. This was because we told the type system that the underlying representation of the collection was IndexedSeq[Int] so it tries to return that type if possible. Now watch this,
scala> val x: GenSeqLike[Int, Seq[Int]] = Vector(1)
x: scala.collection.GenSeqLike[Int,Seq[Int]] = Vector(1)
scala> 1 +: x
res27: Seq[Int] = Vector(1, 1)
Now we get a Seq out.
So Scala collections try to give you the most specific type for your operation, while still allowing for huge amounts of code reuse. They do this by leveraging the Repr type, as well as a CanBuildFrom (still getting to it). I know you're probably wondering what this has to do with your question; don't worry, we're getting to that right now. I am not going to say anything on Liskov's Substitution Principle, as it doesn't pertain much to your specific question (but you should still read about it!)
Okay, so now we understand that GenSeqLike[A, Repr] is the trait that Scala collections use to reuse the code for Seq (and other sequence-like collections). And we understand that Repr is used to store the underlying collection representation to help inform the type of collection to return. How this last point works we have yet to explain, so let's do that now!
CanBuildFrom[-From, -Elem, +To]
A CanBuildFrom instance is how the collections library knows how to build the result type of a given operation. For instance the real type of the +: method on SeqLike[A, Repr] is this.
abstract def +:[B >: A, That](elem: B)(implicit bf: CanBuildFrom[Repr, B, That]): That
This means that in order to prepend an element to a GenSeqLike[A, Repr] we need an instance of CanBuildFrom[Repr, B, That] where Repr is the type of our current collection, B is a supertype of the elements that we have in our collection, and That is the type of collection we will have after the operation is done. I am not going to get into the internals of how CanBuildFrom works (again, see the linked article for the details); for now just believe me that this is what it does.
Putting it all together
So now we are ready to build an instance of goFoo that works with GenSeqLike[A, Repr] values.
scala> def goFoo[A, Repr <: GenSeqLike[A, Repr]](ac: Repr)(p: Int)(implicit cbf: CanBuildFrom[Repr, A, Repr]): Repr = ac
goFoo: [A, Repr <: scala.collection.GenSeqLike[A,Repr]](ac: Repr)(p: Int)(implicit cbf: scala.collection.generic.CanBuildFrom[Repr,A,Repr])Repr
scala> goFoo[Int, IndexedSeq[Int]](IndexedSeq(1))(1)
res7: IndexedSeq[Int] = Vector(1)
scala> goFoo[Int, ParVector[Int]](new ParVector(Vector(1)))(1)
res8: scala.collection.parallel.immutable.ParVector[Int] = ParVector(1)
What we are saying here is that there is a CanBuildFrom that will take a subclass of GenSeqLike of type Repr over elements A and build a new Repr. This means that we can perform any operation on the Repr type that will result in a new Repr, or in this specific case a new ParVector or IndexedSeq.
Unfortunately we must provide the generic parameters manually or the type system gets confused. Thankfully we can use higher kinds again to avoid this,
scala> def goFoo[A, Repr[A] <: GenSeqLike[A, Repr[A]]](ac: Repr[A])(p: Int)(implicit cbf: CanBuildFrom[Repr[A], A, Repr[A]]): Repr[A] = ac
goFoo: [A, Repr[A] <: scala.collection.GenSeqLike[A,Repr[A]]](ac: Repr[A])(p: Int)(implicit cbf: scala.collection.generic.CanBuildFrom[Repr[A],A,Repr[A]])Repr[A]
scala> goFoo(IndexedSeq(1))(1)
res16: IndexedSeq[Int] = Vector(1)
scala> goFoo(new ParVector(Vector(1)))(1)
res17: scala.collection.parallel.immutable.ParVector[Int] = ParVector(1)
So this is nice, in that it is a little more general than using GenSeq, but it is also way more confusing. I would not recommend doing this for anything other than a thought experiment.
Conclusion
While it has hopefully been informative to learn how the Scala collections work by using GenSeqLike directly, I can hardly think of a use case where I would actually recommend it. The code is hard to understand, hard to work with, and may very well have some edge cases that I have missed. In general I would recommend avoiding interaction with Scala collections implementation traits such as GenSeqLike as much as possible, unless you are installing your own collection into the system. You still have to touch GenSeqLike lightly to get all of the operations in GenSeq by giving it the correct Repr type, but you can avoid thinking about CanBuildFrom values.
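To tie this back to the original goStripper, here is a sketch of how the prepend-style recursion could look using the higher-kinded bound together with the CanBuildFrom that +: needs (mkElem stands in for building a configured PDFTextStripper for page p, so the snippet stays library-free):
import scala.collection.{GenSeq, GenSeqLike}
import scala.collection.generic.CanBuildFrom
import scala.language.higherKinds

def goBuild[A, M[X] <: GenSeq[X] with GenSeqLike[X, M[X]]](ac: M[A])(p: Int)(mkElem: Int => A)(
    implicit cbf: CanBuildFrom[M[A], A, M[A]]): M[A] =
  if (p < 1) ac
  else goBuild(mkElem(p) +: ac)(p - 1)(mkElem)  // cbf is what lets +: return M[A]

goBuild(Vector.empty[String])(3)(p => s"page-$p")  // Vector(page-1, page-2, page-3)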
Let's say I come up with a combinator:
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
Kleisli[M, Option[A], Option[B]] {
case Some(t) => fn(t).map(_.some)
case None => Applicative[M].point(none[B])
}
This combinator maps any Kleisli[M, A, B] to Kleisli[M, Option[A], Option[B]].
However, after some time, I realize (admittedly with the help of estewei on #scalaz) this can be made to work with containers more general than just Option, namely anything for which there is a Traverse instance:
def traverseKleisli[M[_]: Applicative, F[_]: Traverse, A, B](k: Kleisli[M, A, B]) =
Kleisli[M, F[A], F[B]](k.traverse)
so that optional can now be defined as:
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
traverseKleisli[M, Option, A, B](fn)
However, I'd like to verify that at least the resulting type signature is equal to the original definition of optional, and whereas I could resort to hovering over both definitions in my IDE (Ensime in my case) and compare the response, I'd like a more solid way of determining that.
I tried:
implicitly[optional1.type =:= optional2.type]
but (obviously?) that fails due to both identifiers being considered unstable by scalac.
Other than perhaps temporarily making both of the functions objects with an apply method, are there any easy ways to compare their static types without resorting to relying on hints from IDE presentation compilers?
P.S. the name optional comes from the fact that I use that combinator as part of a validation DSL to take a Kleisli-wrapped Validation[String, T] and turn it into a Kleisli-wrapped Validation[String, Option[T]] that verifies the validity of optional values if present.
The problem you're having is that a method is not a value in scala, and values are monotyped. You can test that a particular "instance" of your method has the correct type (using a utility function from shapeless):
val optional1Fix = optional1[Future, Int, String] _
val optional2Fix = optional2[Future, Int, String] _
import shapeless.test._
sameTyped(optional1Fix)(optional2Fix)
but as with unit tests, this is somewhat unsatisfying as even if we test several instances we can't be sure it works for everything. (Note that implicitly[optional1Fix.type =:= optional2Fix.type] still doesn't work, I assume because the compiler never realizes when two path-dependent types are equal).
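(To see what sameTyped is actually checking, a trivial sketch: the first call compiles because both arguments are Option[Int], while the commented one would not.)
import shapeless.test.sameTyped

sameTyped(Option(1))(Some(2): Option[Int])
// sameTyped(Option(1))(List(1, 2))  // does not compile: List[Int] is not Option[Int]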
For a full solution we have to see the complete function as a value, so we would have to replace it with an object with an apply method as you suggest (analogous to shapeless' ~>). The only alternative I can think of is a macro, which would have to be applied to the object containing the methods, and know which methods you wanted to compare; writing a specific macro for this specific test seems a little excessive.
How can I define a function that accepts any tuple (Tuple1 to Tuple22) as an argument? I have something like the following in mind:
def foo (v=Tuple) =...
foo((1,2))
foo((1,2,3))
EDIT:
To answer the comment: I am actually trying to create a Tensor class which is a set of values and a set of indices. The indices can be covariant and/or contravariant (cf Wikipedia1 and Wikipedia2). I wanted to have a special syntax like Tensor((1,2),(3,4),values) which would create a tensor with values, two covariant indices having length (2,3) and two contravariant indices with length (3,4). So using this syntax I could also write Tensor((1,2,3),3,values) (with an implicit Int=>Tuple1).
I agree that Tuples are not suitable for this, better to use Lists. However the syntax is not so nice then...
This really isn't what tuples are for (cf. the comments and answers here). Tuples are for doing things like returning multiple values from a method, where in Java you would have to create a lightweight class. If you have an arbitrary number of elements, you should use a collection.
Another way to provide a convenient API to your users (aside from implicit conversion) is to use multiple parameter lists with varargs:
def tensor(cov: Int*)(contrav: Int*)(values: Int*) = // ...
Your examples would be written
tensor(1,2)(3,4)(values)
tensor(1,2,3)(3)(values)
There is no trait specifically for tuples, but you could use a typeclass approach, as demonstrated in this answer.
If your goal is really to have a List but allow callers to pass in tuples (for convenience), you can modify that solution so that the type class produces a List rather than a Product.
In brief, the idea is that you provide implicit conversions from the types that callers can pass to the type you're actually going to use:
def foo(x: IndexList) = x.indices
sealed case class IndexList(indices: List[Int])
object IndexList {
implicit def val2indices(i: Int) = IndexList(List(i))
implicit def tuple2toIndices(t: (Int, Int)): IndexList =
product2indices(t)
// etc
implicit def list2indices(l: List[Int]) = IndexList(l)
private def product2indices(p: Product) =
IndexList(p.productIterator.toList.asInstanceOf[List[Int]])
}
You can then call your method with any type for which you've provided a conversion:
foo(1)
foo((2,3))
foo(List(1,2,3))
All case classes, including Tuples, extend scala.Product but unfortunately there's no marker trait specifically for tuples, so someone could sneak ordinary case classes into your method. Of course, there's no way to treat all arities in a uniform way and still be typesafe, but you can use productElement(n: Int) to extract the nth value, or productIterator to iterate over all the values.
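For example, everything comes back typed as Any:
val t = (1, "two", 3.0)
t.productArity            // 3
t.productElement(1)       // "two", statically typed as Any
t.productIterator.toList  // List(1, two, 3.0)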
But... This is heresy around here, but have you considered overloading? :)
What you probably want to use is an HList, not a tuple. An HList (heterogeneous list) is basically an arbitrary-length, typed tuple.
There are a few examples of HLists in scala (they are not part of the standard library)
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
a great and comprehensive series by Mark Harrah (of SBT fame)
Miles Sabin's github examples, taken from his recent talk at Scala eXchange
Check this out. It actually works better than I expected ;)
scala> def f[T <: Product](x: T) = x
f: [T <: Product](x: T)T
scala> f(1)
<console>:9: error: inferred type arguments [Int] do not conform to method f's type parameter bounds [T <: Product]
scala> f(1, "2") // you don't even need the extra parentheses
res0: (Int, java.lang.String) = (1,2)
scala> f(1, "2", BigInt("3"))
res1: (Int, java.lang.String, scala.math.BigInt) = (1,2,3)