In a simple way, what are context and view bounds and what is the difference between them?
Some easy-to-follow examples would be great too!
I thought this was asked already, but, if so, the question isn't apparent in the "related" bar. So, here it is:
What is a View Bound?
A view bound was a mechanism introduced in Scala to enable the use of some type A as if it were some type B. The typical syntax is this:
def f[A <% B](a: A) = a.bMethod
In other words, A should have an implicit conversion to B available, so that one can call B methods on an object of type A. The most common usage of view bounds in the standard library (before Scala 2.8.0, anyway), is with Ordered, like this:
def f[A <% Ordered[A]](a: A, b: A) = if (a < b) a else b
Because one can convert A into an Ordered[A], and because Ordered[A] defines the method <(other: A): Boolean, I can use the expression a < b.
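For example, Int does not itself extend Ordered[Int], but the standard library provides an implicit conversion to it (through RichInt), so a call like this compiles and works:
f(3, 5)  // 3: a is converted to Ordered[Int] so that < can be called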
Please be aware that view bounds are deprecated; you should avoid them.
What is a Context Bound?
Context bounds were introduced in Scala 2.8.0, and are typically used with the so-called type class pattern, a pattern of code that emulates the functionality provided by Haskell type classes, though in a more verbose manner.
While a view bound can be used with simple types (for example, A <% String), a context bound requires a parameterized type, such as Ordered[A] above; a plain type like String won't do.
A context bound describes an implicit value, instead of the view bound's implicit conversion. It is used to declare that for some type A, there is an implicit value of type B[A] available. The syntax goes like this:
def f[A : B](a: A) = g(a) // where g requires an implicit value of type B[A]
This is more confusing than the view bound because it is not immediately clear how to use it. The common example of usage in Scala is this:
def f[A : ClassManifest](n: Int) = new Array[A](n)
An Array initialization on a parameterized type requires a ClassManifest to be available, for arcane reasons related to type erasure and the non-erasure nature of arrays.
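For example, with the f above (on modern Scala you would write ClassTag instead of ClassManifest, but the mechanics are the same):
f[Int](3)    // Array(0, 0, 0)
f[String](2) // Array(null, null)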
Another very common example in the library is a bit more complex:
def f[A : Ordering](a: A, b: A) = implicitly[Ordering[A]].compare(a, b)
Here, implicitly is used to retrieve the implicit value we want, one of type Ordering[A], a class which defines the method compare(a: A, b: A): Int.
We'll see another way of doing this below.
How are View Bounds and Context Bounds implemented?
It shouldn't be surprising that both view bounds and context bounds are implemented with implicit parameters, given their definition. Actually, the syntax I showed is syntactic sugar for what really happens. See below how they de-sugar:
def f[A <% B](a: A) = a.bMethod
def f[A](a: A)(implicit ev: A => B) = a.bMethod
def g[A : B](a: A) = h(a)
def g[A](a: A)(implicit ev: B[A]) = h(a)
So, naturally, one can write them in their full syntax, which is especially useful for context bounds:
def f[A](a: A, b: A)(implicit ord: Ordering[A]) = ord.compare(a, b)
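Call sites look the same either way, and the explicit-parameter form also lets you pass an instance by hand:
f(1, 2)                               // -1: uses the implicit Ordering[Int]
f("b", "a")(Ordering.String.reverse)  // the instance can also be supplied explicitly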
What are View Bounds used for?
View bounds are used mostly to take advantage of the pimp my library pattern, through which one "adds" methods to an existing class, in situations where you want to return the original type somehow. If you do not need to return that type in any way, then you do not need a view bound.
The classic example of view bound usage is handling Ordered. Note that Int is not Ordered, for example, though there is an implicit conversion. The example previously given needs a view bound because it returns the non-converted type:
def f[A <% Ordered[A]](a: A, b: A): A = if (a < b) a else b
This example won't work without view bounds. However, if I were to return another type, then I don't need a view bound anymore:
def f[A](a: Ordered[A], b: A): Boolean = a < b
The conversion here (if needed) happens before I pass the parameter to f, so f doesn't need to know about it.
Besides Ordered, the most common usage from the library is handling String and Array, which are Java classes, as if they were Scala collections. For example:
def f[CC <% Traversable[_]](a: CC, b: CC): CC = if (a.size < b.size) a else b
If one tried to do this without view bounds, passing a String would give back a WrappedString (Scala 2.8), and similarly for Array.
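For example, under the pre-2.13 collection library:
f("ab", "abc")            // "ab": a String, not a WrappedString
f(Array(1, 2), Array(1))  // Array(1), still typed as Array[Int]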
The same thing happens even if the type is only used as a type parameter of the return type:
def f[A <% Ordered[A]](xs: A*): Seq[A] = xs.toSeq.sorted
What are Context Bounds used for?
Context bounds are mainly used in what has become known as the typeclass pattern, a reference to Haskell's type classes. Basically, this pattern implements an alternative to inheritance by making functionality available through a sort of implicit adapter pattern.
The classic example is Scala 2.8's Ordering, which replaced Ordered throughout Scala's library. The usage is:
def f[A : Ordering](a: A, b: A) = if (implicitly[Ordering[A]].lt(a, b)) a else b
Though you'll usually see that written like this:
def f[A](a: A, b: A)(implicit ord: Ordering[A]) = {
  import ord.mkOrderingOps
  if (a < b) a else b
}
This takes advantage of some implicit conversions inside Ordering that enable the traditional operator style. Another example in Scala 2.8 is Numeric:
def f[A : Numeric](a: A, b: A) = implicitly[Numeric[A]].plus(a, b)
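For example, with the standard Numeric[Int] and Numeric[Double] instances:
f(2, 3)      // 5
f(2.5, 0.5)  // 3.0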
A more complex example is the new collection usage of CanBuildFrom, but there's already a very long answer about that, so I'll avoid it here. And, as mentioned before, there's the ClassManifest usage, which is required to initialize new arrays without concrete types.
Context bounds with the typeclass pattern are much more likely to be used by your own classes, as they enable separation of concerns, whereas view bounds can be avoided in your own code by good design (they are used mostly to get around someone else's design).
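As a minimal sketch of what that looks like in your own code (the Show type class and its instance here are made up for illustration):
trait Show[A] {
  def show(a: A): String
}

implicit val intShow: Show[Int] = new Show[Int] {
  def show(a: Int) = "Int(" + a + ")"
}

// the context bound pulls in whichever Show[A] instance is in implicit scope
def describe[A : Show](a: A): String = implicitly[Show[A]].show(a)

describe(42)  // "Int(42)"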
Though it has been possible for a long time, the use of context bounds has really taken off in 2010, and is now found to some degree in most of Scala's most important libraries and frameworks. The most extreme example of its usage, though, is the Scalaz library, which brings a lot of the power of Haskell to Scala. I recommend reading up on the typeclass pattern to get more acquainted with all the ways in which it can be used.
EDIT
Related questions of interest:
A discussion on types, origin and precedence of implicits
Chaining implicits
Is there any significant difference between the constraints of the two methods in the trait Foo?
trait Foo[A] {
  def barWithTypeBound[B <: A]: B
  def barWithGeneralizedTypeConstraint[B](implicit ev: B <:< A): B
}
Sometimes it is easier to produce evidence at the call site than to provide it through all the intermediate layers, e.g.
trait Flattenable[F[_]] {
  def flatten[A](ffa: F[F[A]]): F[A]
}
extension [F[_], A](fa: F[A])
  def flatten[B](using ev: A <:< F[B], F: Flattenable[F]): F[B] =
    F.flatten(fa)
To implement this extension without <:< I would have to change the definition of fa:
extension [F[_], A](fa: F[F[A]])
  def flatten(using F: Flattenable[F]): F[A] =
    F.flatten(fa)
which is possible here but not in every case. Another example would be something like:
class Wrapper[A](value: A):
  def get: A = value
  def isNatural(using ev: A <:< Int): Boolean = ev(value) >= 0
  def isTrue(using ev: A <:< Boolean): Boolean = ev(value)
which allows the caller to call the method only if A is of a particular type. (But again, you could solve these with specialized extension methods and normal constraints.) I was able to define a parametric type in one place, and slap an additional constraint on it later on. If I needed that constraint from the start, I would just use a type bound.
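For illustration, here is how the Wrapper example above behaves at the call site (a quick sketch, assuming the signatures as written):
Wrapper(5).isNatural      // true
Wrapper(true).isTrue      // true
// Wrapper("x").isNatural // does not compile: Cannot prove that String <:< Int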
Usually, there is no need to make some method (or implicit, or given) available or not based on the type. And it only proves that some particular case is true, so it doesn't work well with co- and contravariance. And, it requires you to create a dummy object. So with the exception of a few cases where such a "hack" is useful, normal constraints are preferred.
In your example there is no difference between the 2 methods, except that the second one has to pass an additional object as a parameter.
So, I have been searching the documentation for the main difference between parametric polymorphism and ad-hoc polymorphism, but I still have some doubts.
For instance, methods like head on collections are clearly parametric polymorphism, since the code used to obtain the head of a List[Int] is the same as for any other List.
trait List[T] {
  def head: T = this match {
    case x :: xs => x
    case Nil => throw new RuntimeException("Head of empty List.")
  }
}
(Not sure if that's the actual implementation of head, but it doesn't matter)
On the other hand, type classes are considered ad-hoc polymorphism, since we can provide different implementations bound to specific types.
trait Expression[T] {
  def evaluate(expr: T): Int
}
object ExpressionEvaluator {
  def evaluate[T: Expression](value: T): Int = implicitly[Expression[T]].evaluate(value)
}
implicit val intExpression: Expression[Int] = new Expression[Int] {
  override def evaluate(expr: Int): Int = expr
}
ExpressionEvaluator.evaluate(5)
// 5
In the middle, we have methods like filter, which are parameterized, but for which we can provide different behavior by passing different functions.
List(1,2,3).filter(_ % 2 == 0)
// List(2)
Are methods like filter, map, etc, considered ad-hoc polymorphism? Why or why not?
The method filter on Lists is an example of parametric polymorphism. The signature is
def filter(p: (A) ⇒ Boolean): List[A]
It works in exactly the same way for all types A. Since it can be parameterized by any type A, it's ordinary parametric polymorphism.
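For example, the very same filter implementation serves any element type:
List(1, 2, 3).filter(_ > 1)           // List(2, 3)
List("a", "bb").filter(_.length > 1)  // List(bb)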
Methods like map make use of both types of polymorphism simultaneously.
Full signature of map is:
final def map[B, That]
    (f: (A) ⇒ B)
    (implicit bf: CanBuildFrom[List[A], B, That])
    : That
This method relies on the presence of an implicit value (the CBF-gizmo), so it's ad-hoc polymorphism. However, some of the implicit methods that provide the CBFs of the right type are actually themselves parametrically polymorphic in types A and B. Therefore, unless the compiler manages to find some very special ad-hoc construct like a CanBuildFrom[List[String], Int, BitSet] in the implicit scope, it will sooner or later fall back to something like
implicit def ahComeOnGiveUpAndJustBuildAList[A, B]
  : CanBuildFrom[List[A], B, List[B]]
Therefore, I think one could say that it's a kind-of "hybrid parametric-ad-hoc polymorphism" that first attempts to find the most appropriate ad-hoc type class CanBuildFrom[List[A], B, That] in the implicit scope, but eventually falls back to ordinary parametric polymorphism, and returns a one-fits-all CanBuildFrom[List[A], B, List[B]]-solution that is parametrically polymorphic in both A and B.
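The classic way to see this hybrid in action (pre-2.13 collections) is with String, which goes through exactly the same CanBuildFrom machinery as List:
"abc".map(_.toUpper)  // "ABC": a String-specific CanBuildFrom is found
"abc".map(_.toInt)    // Vector(97, 98, 99): falls back to the generic IndexedSeq builder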
I can see in the API docs for Predef that they're subclasses of a generic function type (From) => To, but that's all it says. Um, what? Maybe there's documentation somewhere, but search engines don't handle "names" like "<:<" very well, so I haven't been able to find it.
Follow-up question: when should I use these funky symbols/classes, and why?
These are called generalized type constraints. They allow you, from within a type-parameterized class or trait, to further constrain one of its type parameters. Here's an example:
case class Foo[A](a: A) { // 'A' can be substituted with any type
  // getStringLength can only be used if this is a Foo[String]
  def getStringLength(implicit evidence: A =:= String) = a.length
}
The implicit argument evidence is supplied by the compiler, iff A is String. You can think of it as a proof that A is String--the argument itself isn't important, only knowing that it exists. [edit: well, technically it actually is important because it represents an implicit conversion from A to String, which is what allows you to call a.length and not have the compiler yell at you]
Now I can use it like so:
scala> Foo("blah").getStringLength
res6: Int = 4
But if I tried to use it with a Foo containing something other than a String:
scala> Foo(123).getStringLength
<console>:9: error: could not find implicit value for parameter evidence: =:=[Int,String]
You can read that error as "could not find evidence that Int == String"... that's as it should be! getStringLength is imposing further restrictions on the type of A than what Foo in general requires; namely, you can only invoke getStringLength on a Foo[String]. This constraint is enforced at compile-time, which is cool!
<:< and <%< work similarly, but with slight variations:
A =:= B means A must be exactly B
A <:< B means A must be a subtype of B (analogous to the simple type constraint <:)
A <%< B means A must be viewable as B, possibly via implicit conversion (analogous to the simple type constraint <%)
This snippet by @retronym is a good explanation of how this sort of thing used to be accomplished and how generalized type constraints make it easier now.
ADDENDUM
To answer your follow-up question, admittedly the example I gave is pretty contrived and not obviously useful. But imagine using it to define something like a List.sumInts method, which adds up a list of integers. You don't want to allow this method to be invoked on any old List, just a List[Int]. However, the List type constructor can't be so constrained; you still want to be able to have lists of strings, foos, bars, and whatnots. So by placing a generalized type constraint on sumInts, you can ensure that just that method has an additional constraint that it can only be used on a List[Int]. Essentially you're writing special-case code for certain kinds of lists.
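A minimal sketch of that idea (MyList and sumInts are hypothetical names, not the real List API):
class MyList[A](val elems: List[A]) {
  // ev doubles as a function A => Int, so we can map with it and then sum
  def sumInts(implicit ev: A =:= Int): Int = elems.map(ev).sum
}

new MyList(List(1, 2, 3)).sumInts     // 6
// new MyList(List("a", "b")).sumInts // does not compile: Cannot prove that String =:= Int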
Not a complete answer (others have already answered this); I just wanted to note the following, which may help in understanding the syntax better: The way you normally use these "operators", as for example in pelotom's example:
def getStringLength(implicit evidence: A =:= String)
makes use of Scala's alternative infix syntax for type operators.
So, A =:= String is the same as =:=[A, String] (and =:= is just a class or trait with a fancy-looking name). Note that this syntax also works with "regular" classes, for example you can write:
val a: Tuple2[Int, String] = (1, "one")
like this:
val a: Int Tuple2 String = (1, "one")
It's similar to the two syntaxes for method calls, the "normal" with . and () and the operator syntax.
Read the other answers to understand what these constructs are. Here is when you should use them. You use them when you need to constrain a method for specific types only.
Here is an example. Suppose you want to define a homogeneous Pair, like this:
class Pair[T](val first: T, val second: T)
Now you want to add a method smaller, like this:
def smaller = if (first < second) first else second
That only works if T is ordered. You could restrict the entire class:
class Pair[T <: Ordered[T]](val first: T, val second: T)
But that seems a shame--there could be uses for the class when T isn't ordered. With a type constraint, you can still define the smaller method:
def smaller(implicit ev: T <:< Ordered[T]) = if (first < second) first else second
It's ok to instantiate, say, a Pair[File], as long as you don't call smaller on it.
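Putting the pieces together, a small sketch (Version is a made-up type that extends Ordered; File is not ordered):
case class Version(n: Int) extends Ordered[Version] {
  def compare(that: Version) = n compare that.n
}

class Pair[T](val first: T, val second: T) {
  def smaller(implicit ev: T <:< Ordered[T]) = if (first < second) first else second
}

new Pair(Version(1), Version(2)).smaller  // Version(1)

val files = new Pair(new java.io.File("a"), new java.io.File("b"))  // fine to construct
// files.smaller  // does not compile: Cannot prove that File <:< Ordered[File]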
In the case of Option, the implementors wanted an orNull method, even though it doesn't make sense for Option[Int]. By using a type constraint, all is well. You can use orNull on an Option[String], and you can form an Option[Int] and use it, as long as you don't call orNull on it. If you try Some(42).orNull, you get the charming message
error: Cannot prove that Null <:< Int
It depends on where they are being used. Most often, when used while declaring types of implicit parameters, they are classes. They can be objects too in rare instances. Finally, they can be operators on Manifest objects. They are defined inside scala.Predef in the first two cases, though not particularly well documented.
They are meant to provide a way to test the relationship between the classes, just like <: and <% do, in situations when the latter cannot be used.
As for the question "when should I use them?", the answer is you shouldn't, unless you know you should. :-) EDIT: Ok, ok, here are some examples from the library. On Either, you have:
/**
 * Joins an <code>Either</code> through <code>Right</code>.
 */
def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C] = this match {
  case Left(a)  => Left(a)
  case Right(b) => b
}

/**
 * Joins an <code>Either</code> through <code>Left</code>.
 */
def joinLeft[A1 >: A, B1 >: B, C](implicit ev: A1 <:< Either[C, B1]): Either[C, B1] = this match {
  case Left(a)  => a
  case Right(b) => Right(b)
}
On Option, you have:
def orNull[A1 >: A](implicit ev: Null <:< A1): A1 = this getOrElse null
You'll find some other examples on the collections.
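For example, at the use site:
val nested: Either[String, Either[String, Int]] = Right(Right(12))
nested.joinRight     // Right(12): Either[String, Int]

Option("hi").orNull  // "hi": fine, because Null <:< String holds
// Option(42).orNull // does not compile: Cannot prove that Null <:< Int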
Let's say I come up with a combinator:
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
  Kleisli[M, Option[A], Option[B]] {
    case Some(t) => fn(t).map(_.some)
    case None    => Applicative[M].point(none[B])
  }
This combinator maps any Kleisli[M, A, B] to Kleisli[M, Option[A], Option[B]].
However, after some time, I realize (admittedly with the help of estewei on #scalaz) this can be made to work with containers more general than just Option, namely anything for which there is a Traverse instance:
def traverseKleisli[M[_]: Applicative, F[_]: Traverse, A, B](k: Kleisli[M, A, B]) =
  Kleisli[M, F[A], F[B]](k.traverse)
so that optional can now be defined as:
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
  traverseKleisli[M, Option, A, B](fn)
However, I'd like to verify that at least the resulting type signature is equal to the original definition of optional, and while I could resort to hovering over both definitions in my IDE (Ensime in my case) and comparing the responses, I'd like a more solid way of determining that.
I tried:
implicitly[optional1.type =:= optional2.type]
but (obviously?) that fails due to both identifiers being considered unstable by scalac.
Other than perhaps temporarily making both of the functions objects with an apply method, are there any easy ways to compare their static types without resorting to relying on hints from IDE presentation compilers?
P.S. the name optional comes from the fact that I use that combinator as part of a validation DSL to take a Kleisli-wrapped Validation[String, T] and turn it into a Kleisli-wrapped Validation[String, Option[T]] that verifies the validity of optional values if present.
The problem you're having is that a method is not a value in Scala, and values are monotyped. You can test that a particular "instance" of your method has the correct type (using a utility function from shapeless):
val optional1Fix = optional1[Future, Int, String] _
val optional2Fix = optional2[Future, Int, String] _
import shapeless.test._
sameTyped(optional1Fix)(optional2Fix)
but as with unit tests, this is somewhat unsatisfying as even if we test several instances we can't be sure it works for everything. (Note that implicitly[optional1Fix.type =:= optional2Fix.type] still doesn't work, I assume because the compiler never realizes when two path-dependent types are equal).
For a full solution we have to see the complete function as a value, so we would have to replace it with an object with an apply method as you suggest (analogous to shapeless' ~>). The only alternative I can think of is a macro, which would have to be applied to the object containing the methods, and know which methods you wanted to compare; writing a specific macro for this specific test seems a little excessive.