Possible Duplicate:
What do <:<, <%<, and =:= mean in Scala 2.8, and where are they documented?
I'm curious since I saw them in Scala library code, but I found it quite hard to Google something about them since their names are not words.
These classes are used as implicit parameters that restrict the applicability of a method. Below is a description of each class. In general they are useful to constrain a type parameter of an enclosing class within the context of a single method.
<:<[A,B] or A <:< B
The compiler can provide an implicit instance of this type only when A is a subtype of B. This is similar to A <: B in a type parameter list.
This can be useful when you want to put an additional constraint on a class type parameter in the context of a particular method. For example, the class Foo below can be used with any type, but the method bar is only valid when T is a subtype of Number.
class Foo[T](x: T) {
  // In general T could be any type
  def bar(implicit ev: T <:< Number) = {
    // This method can now only be used when T is a subtype of Number;
    // also, you can use ev to convert a T to a Number
    ev(x).doubleValue
  }
}
new Foo(123 : java.lang.Integer).bar // returns 123.0: Double
new Foo("123").bar // compile error: Cannot prove java.lang.String <:< java.lang.Number
=:=[A,B] or A =:= B
The compiler can provide an implicit instance of this type only when A is the same type as B. This doesn't have an equivalent syntax in a type parameter list, you'd just use the same type parameter twice.
This can be used much like <:< except that it requires the types to match exactly. This could be used to make a pair of methods mutually exclusive.
class Foo[T <: Number](x: T) {
  def numOnly(implicit ev: T =:= Number) = ()
  def floatOnly(implicit ev: T =:= java.lang.Float) = ()
}
val asFloat = new Foo(123.0f: java.lang.Float)
asFloat.numOnly   // Compile error
asFloat.floatOnly // Ok
val asNum = new Foo(123.0f: java.lang.Number)
asNum.numOnly     // Ok
asNum.floatOnly   // Compile error
Essentially, if the type parameter is more specific than the class bound, only the method whose constraint matches the actual type can be used.
<%<[A,B] or A <%< B
The compiler can provide an implicit instance of this type only when A can be converted to B. This is similar to A <% B in a type parameter list.
This requires that there is an implicit function available to turn an A into a B. This will always be possible when A <: B since the implicit A <:< B satisfies this constraint.
This class is actually marked as deprecated; the deprecation message says you should just use A => B instead.
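As a sketch of that replacement (Box and firstChar are made-up names, not from the question), requiring an implicit function A => B expresses the same "A is viewable as B" constraint:

```scala
class Box[A](a: A) {
  // Instead of the deprecated `implicit ev: A <%< String`,
  // require an implicit view from A to String directly.
  def firstChar(implicit ev: A => String): Char = ev(a).charAt(0)
}

// The identity view String => String is always available via Predef,
// so this compiles and returns 'h':
new Box("hello").firstChar
```

Any implicit conversion A => String in scope would satisfy the constraint the same way A <%< String used to.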
<:<, =:= and <%< are generic classes, each with two type parameters, and all of them extend Function1 (a function of one argument). They are defined in Predef.
They are intended to supply very basic conversions as implicits. The conversions are so basic that most of the time they are identity. The point of having those classes rather than plain Functions is that they can be distinguished from other functions that might be in implicit scope. Predef gives the following implicits:
For every type A, a <:<[A,A] is available. Since <:< is declared [-From, +To], a <:<[A,A] will satisfy any <:<[A,B] where A conforms to B. The implementation is identity.

For every type A, there is also a =:=[A,A], but =:= is invariant, so it will not satisfy any A =:= B unless A is exactly B. The implementation is identity.

There is also an A <%< B whenever A <% B holds. The implementation is the implicit conversion implied by A <% B.
The point is not to provide clever conversions (identity is not too clever), but to provide a library-level way to enforce a typing constraint at compile time, similar to the language-level constraints <: and <%, or to simply omitting a type argument (which is quite a strong constraint too), in places where the language constraints are not available. A typical example: in a generic class, you want a method to be available only for some values of a type parameter. Suppose Collection[A], and I want a method sort that is available only if A <: Ordered[A]. I don't want to put the constraint in the declaration of the class, because I want to accept collections whose elements are not Ordered; I just don't want them to have a sort method. I cannot put the constraint on the method sort either, because A does not even appear as a type parameter of sort. <:< provides a solution:
class MyCollection[A] {
  def sort(implicit ev: A <:< Ordered[A]) = ???
  // other methods, without constraints
}
Doing that, sort is technically available on all instances of MyCollection, but practically, when one does not pass the ev parameter explicitly, the compiler will look for an A <:< Ordered[A] in implicit scope, and conforms in Predef will provide one only if A is a subtype of Ordered[A]. As the (identity) conversion between A and Ordered[A] is made available in the implicit scope of the method, A will be usable as an Ordered[A] in the body of sort.
Using =:= would force the type to be exactly Ordered[A] (the equivalent constraint on MyCollection would be simply to make it non generic, and put the given type everywhere there is the generic parameter). <%< would ensure there is an implicit conversion. That would be the more likely one here.
Actually for this particular method sort, the proper solution would be to get an Ordering[A] in implicit context, with def sort(implicit ev: Ordering[A]) but that would not have demonstrated the point of those classes.
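For completeness, a runnable sketch of that Ordering-based alternative (SortableCollection is a hypothetical wrapper around a List, not the class from the answer):

```scala
class SortableCollection[A](val items: List[A]) {
  // Available on every SortableCollection, but only callable
  // when an Ordering[A] is in implicit scope.
  def sort(implicit ord: Ordering[A]): List[A] = items.sorted
}

new SortableCollection(List(3, 1, 2)).sort       // List(1, 2, 3)
// new SortableCollection(List(new Object)).sort // would not compile: no Ordering[Object]
```

Unlike the <:< version, this also accepts element types that merely have an Ordering instance without being Ordered themselves.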
They are generalized type constraints for type arguments. Check out their definitions along with corresponding implicit conforms() methods in Predef.
In brief it works like this:
you add an implicit parameter to your method, like implicit tc: T <:< U, which is the same as implicit tc: <:<[T, U]
the implicit conforms() method returns the required instance, which ensures that T <: U (just like with regular generics)
if T <: U does not hold, compilation of the method call fails with "implicit not found"
Look at usage sample here: http://java.dzone.com/articles/using-generalized-type
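A minimal sketch of the mechanism described above (Wrapper and firstOfPair are made-up for illustration): the evidence both gates compilation and performs the conversion:

```scala
class Wrapper[T](val value: T) {
  // Callable only when the compiler can prove T is a pair;
  // ev also converts value to that pair type.
  def firstOfPair[A, B](implicit ev: T <:< (A, B)): A = ev(value)._1
}

new Wrapper((1, "a")).firstOfPair // 1
// new Wrapper(42).firstOfPair    // error: Cannot prove that Int <:< (A, B)
```

This is the same pattern the standard library uses for methods like toMap, which requires evidence that the elements are pairs.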
Related
This code compiles and does exactly what one expects
class MyList[T](list: T)(implicit o: Ordering[T])
However this does not:
class MyList2[T](list: T) {
  val o = implicitly[Ordering[T]]
}
And I can't see why. In the first example when the class is being constructed the compiler will find the Ordering implicit because it will know the concrete type T. But in the second case it should also find the implicit since T will already be a concrete type.
In the first example when the class is being constructed the compiler will find the Ordering implicit because it will know the concrete type T.
In the first example, one needs to have an implicit Ordering[T] in scope for the compiler to find it. The compiler by itself doesn't "make up" implicits. Since you've directly required one to be available via the second parameter list, if such an implicit exists, it will be passed down to the class constructor.
But in the second case it should also find the implicit since T will already be a concrete type.
The fact that T is a concrete type at compile time doesn't help the compiler find an implicit for it. When we say T is a concrete type, you must remember that inside the class definition, T is simply a generic type parameter, nothing more. If you don't help the compiler, it can't guarantee that an implicit is in scope. You need to require one explicitly; this can be done via a context bound:
class MyList2[T: Ordering](list: T)
Which requires the existence, at compile time, of an ordering for type T. Semantically, this is equivalent to your second parameter list.
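To see the equivalence concretely, here is a sketch with both forms side by side (MyListA and MyListB are hypothetical names):

```scala
// Explicit implicit parameter:
class MyListA[T](list: T)(implicit val o: Ordering[T])

// Context bound; desugars to the version above:
class MyListB[T: Ordering](list: T) {
  val o = implicitly[Ordering[T]]
}

new MyListA(42).o.compare(1, 2) // negative: 1 < 2
new MyListB(42).o.compare(1, 2) // negative: same Ordering, reached via implicitly
```

In both cases the caller must have an Ordering[T] in scope; Ordering[Int] comes from the standard library.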
You must always tell the compiler that an implicit should be provided for your type. That is, you must always put implicit o: Ordering[T]. What implicitly does is allow you to access the implicit in case you haven't named it. Note that you can use syntax sugar (called a "context bound") for the implicit parameter, in which case implicitly becomes necessary:
class MyList2[T : Ordering](list: T) {
  val o = implicitly[Ordering[T]]
}
Type [T : Ordering] is a shorthand for "some type T for which an implicit Ordering[T] exists in scope". It's the same as writing:
class MyList2[T](list: T)(implicit o: Ordering[T])
but in that case implicitly is not needed since you can access your implicit parameter by its identifier o.
But in the second case it should also find the implicit since T will already be a concrete type.
Scala (and Java) generics don't work like C++ templates. The compiler isn't going to see MyList2[Int] elsewhere, generate
class MyList2_Int(list: Int) {
  val o = implicitly[Ordering[Int]]
}
and typecheck that definition. It is MyList2 itself which gets typechecked, with no concrete T.
For the second to work, you need to specify that there is an Ordering for type T using a context bound:
class MyList2[T : Ordering](list: T) {
  val o = implicitly[Ordering[T]]
}
Let's suppose I have a def that takes multiple type parameters:
def foo[A, B, C](b: B, c: C)(implicit ev: Writer[A])
However, the intended usage is that type parameters B and C should be inferred (based on the passed-in arguments). And the caller should only need to really specify A explicitly (e.g. to have an appropriate implicit chosen by the compiler). Unfortunately, Scala only allows all or none of the type parameters to be specified by the caller. In a sense, I want the type parameters to be curried:
def foo[A][B, C]...
Is there some trick to accomplish this in Scala?
(If my specific example doesn't make complete sense I'm happy to improve it with suggestions.)
The best way I've been able to pull this off is by defining a class which holds the curried type information and then uses its apply method to simulate the function call.
I've written about this here - http://caryrobbins.com/dev/scala-type-curry/
For your specific example, you'd need to put the implicit ev: Writer[A] in the signature of apply and not in the signature of foo, because otherwise there is ambiguity between explicitly passing the implicit argument and implicitly calling the apply method.
Here's an example implementation for your example -
object Example {
  def foo[A]: _Foo[A] = _foo.asInstanceOf[_Foo[A]]

  final class _Foo[A] private[Example] {
    def apply[B, C](b: B, c: C)(implicit ev: Writer[A]): Unit = ???
  }

  private lazy val _foo = new _Foo[Nothing]
}
You can then supply the type parameter you wish to curry, and the type parameters of the arguments passed to the apply method will be inferred.
Example.foo[Int]("bar", new Object)
If you do end up needing to specify the other type parameters, you can do so by explicitly calling apply; although, I've never seen a need to do this yet.
Example.foo[Int].apply[String, Object]("bar", new Object)
If you don't wish to use the intermediate type you can also use a structural type, which I discuss in the aforementioned post; however, this requires reflectiveCalls and an inferred type signature, both of which I like to avoid.
case class Level[B](b: B) {
  def printCovariant[A <: B](a: A): Unit = println(a)
  def printInvariant(b: B): Unit = println(b)
  def printContravariant[C >: B](c: C): Unit = println(c)
}
class First
class Second extends First
class Third extends Second
//First >: Second >: Third
object Test extends App {
  val second = Level(new Second) // set B as Second

  //second.printCovariant(new First) // error and reasonable
  second.printCovariant(new Second)
  second.printCovariant(new Third)

  //second.printInvariant(new First) // error and reasonable
  second.printInvariant(new Second)
  second.printInvariant(new Third) // why no error?

  second.printContravariant(new First)
  second.printContravariant(new Second)
  second.printContravariant(new Third) // why no error?
}
It seems Scala's lower-bound type checking has bugs, for the invariant case and the contravariant case. I wonder whether the code above exposes bugs or not.
Always keep in mind that if Third extends Second, then whenever a Second is wanted, a Third can be provided. This is called subtype polymorphism.
Having that in mind, it's natural that second.printInvariant(new Third) compiles. You provided a Third which is a subtype of Second, so it checks out. It's like providing an Apple to a method which takes a Fruit.
This means that your method
def printCovariant[A<:B](a: A): Unit = println(a)
can be written as:
def printCovariant(a: B): Unit = println(a)
without losing any information. Due to subtype polymorphism, the second one accepts B and all its subclasses, which is the same as the first one.
Same goes for your second error case - it's another case of subtype polymorphism. You can pass the new Third because Third is actually a Second (note that I'm using the "is-a" relationship between subclass and superclass taken from object-oriented notation).
In case you're wondering why we even need upper bounds (isn't subtype polymorphism enough?), observe this example:
def foo1[A <: AnyRef](xs: A) = xs
def foo2(xs: AnyRef) = xs
val res1 = foo1("something") // res1 is a String
val res2 = foo2("something") // res2 is an Anyref
Now we do observe the difference. Even though subtype polymorphism will allow us to pass in a String in both cases, only method foo1 can reference the type of its argument (in our case a String). Method foo2 will happily take a String, but will not really know that it's a String. So, upper bounds can come in handy when you want to preserve the type (in your case you just print out the value so you don't really care about the type - all types have a toString method).
EDIT:
(extra details, you may already know this but I'll put it for completeness)
There are more uses of upper bounds then what I described here, but when parameterizing a method this is the most common scenario. When parameterizing a class, then you can use upper bounds to describe covariance and lower bounds to describe contravariance. For example,
class SomeClass[U] {
  def someMethod(foo: Foo[_ <: U]) = ???
}
says that parameter foo of method someMethod is covariant in its type. How's that? Well, normally (that is, without tweaking variance), subtype polymorphism wouldn't allow us to pass a Foo parameterized with a subtype of its type parameter. If T <: U, that doesn't mean that Foo[T] <: Foo[U]. We say that Foo is invariant in its type. But we just tweaked the method to accept Foo parameterized with U or any of its subtypes. Now that is effectively covariance. So, as far as someMethod is concerned: if some type T is a subtype of U, then Foo[T] is a subtype of Foo[U]. Great, we achieved covariance. But note that I said "as far as someMethod is concerned". Foo is covariant in its type in this method, but in others it may be invariant or contravariant.
This kind of variance declaration is called use-site variance because we declare the variance of a type at the point of its usage (here it's used as the parameter type of someMethod). This is the only kind of variance declaration in, say, Java. When using use-site variance, you have to watch out for the get-put principle (google it). Basically this principle says that we can only get stuff from covariant classes (we can't put) and vice versa for contravariant classes (we can put but can't get). In our case, we can demonstrate it like this:
class Foo[T] { def put(t: T): Unit = println("I put some T") }
def someMethod(foo: Foo[_ <: String]) = foo.put("asd") // won't compile
def someMethod2(foo: Foo[_ >: String]) = foo.put("asd")
More generally, we can only use covariant types as return types and contravariant types as parameter types.
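The same principle can be sketched with both positions shown (Cell, read and put are made-up names for this illustration):

```scala
class Cell[T](var value: T)

// Covariant use-site (_ <: String): we may only *get*.
def read(c: Cell[_ <: String]): String = c.value
// Writing c.value = "x" inside read would not compile: we can't put.

// Contravariant use-site (_ >: String): we may only *put*.
def put(c: Cell[_ >: String]): Unit = c.value = "x"
// Returning c.value as a String inside put would not compile: we can't get.
```

A Cell[String] satisfies both bounds, so both methods accept it; the bounds only restrict which operations each method may perform.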
Now, use-site declaration is nice, but in Scala it's much more common to take advantage of declaration-site variance (something Java doesn't have). This means that we would describe the variance of Foo's generic type at the point of defining Foo. We would simply say class Foo[+T]. Now we don't need to use bounds when writing methods that work with Foo; we proclaimed Foo to be permanently covariant in its type, in every use case and every scenario.
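A minimal sketch of declaration-site covariance (Producer is a made-up example):

```scala
// Declared covariant once, at the definition site:
class Producer[+T](val value: T) {
  def get: T = value // T appears only in return (covariant) position, so +T is legal
}

// Subtyping now lifts automatically, with no bounds needed at use sites:
val p: Producer[Any] = new Producer[String]("hi")
```

Had Producer taken T in a parameter position (like a put method), the compiler would reject the +T annotation.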
For more details about variance in Scala feel free to check out my blog post on this topic.
I'm not sure if I've got the terminology down right in the title, but what's the difference between this:
class Container[A <% Int] { def addIt(x: A) = 123 + x }
and this:
class Container[A](value: A) { def addIt(implicit evidence: A =:= Int) = 123 + value }
I suppose the question is why would I use one form of type bound over another? Is it simply a matter of being able to apply a type bound at different parts of the code (eg in the parameter list vs the body)?
Also, the documents say that methods may ask for "evidence" for a type rather then using other objects for type checking then provide the second code snippet. What kind of evidence are they referring to?
Note: this is regarding this article on advanced types.
View bounds are for when you want to use a type that is viewable as another type. In your example, you want A to be viewable as Int, and what that means is that we want an implicit conversion A => Int in scope.
ie. it is the same as:
class Container[A](implicit evidence: A => Int) { def addIt(x: A) = 123 + x }
They are also deprecated. That should be reason enough to avoid them.
Type bound is not the correct term for your second example. Type bounds strictly bound the type of a parameter from above or below. A type bound would look like this:
class Container[A <: Int] { def addIt(x: A) = 123 + x }
In this example A must be strictly bounded above by Int. Implicit conversions cannot apply.
I'm not sure if there's really a name for your second example, but it differs from view bounds in that it requires an instance of the type class =:= seen here. It is similar to view bounds in that =:= witnesses that A is the same as Int, and therefore allows A to be converted explicitly to Int. However, it requires an instance of the type class =:=[A, Int] to exist, and not just any implicit conversion from A => Int.
Your two examples are kind of fundamentally different though. The first requires the view bound on the class itself, where the second requires the type evidence on the method. That is, the first example does not allow instances of Container[String] to exist at all (without an implicit conversion available), but the second one does. The second example happily allows you to construct a Container[String], but will not let you use the addIt method, unless you have evidence that String =:= Int.
By evidence we mean either an implicit conversion to the type we're interested in (A => Int), or an instance of the type class =:= that witnesses the equality. For types that are actually the same, we have those automatically generated in Predef (earlier link).
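Putting the two kinds of evidence side by side in a sketch (ViewContainer and EqContainer are hypothetical variants of the question's Container):

```scala
class ViewContainer[A](value: A) {
  // Any implicit view A => Int counts as evidence.
  def addIt(implicit ev: A => Int): Int = 123 + ev(value)
}

class EqContainer[A](value: A) {
  // Only the type-class instance A =:= Int counts: A must literally be Int.
  def addIt(implicit ev: A =:= Int): Int = 123 + ev(value)
}

implicit val strToInt: String => Int = _.length

new ViewContainer("abcd").addIt // 127: uses the String => Int view
new EqContainer(1).addIt        // 124: Int =:= Int is provided by Predef
// new EqContainer("abcd").addIt // error: Cannot prove that String =:= Int
```

The view-based version accepts any type with a conversion in scope; the =:= version rejects everything but Int itself.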
I was reading the (great) post A Generic Quicksort in Scala from Josh Suereth. Especially interesting was the part about the deferring of type inference for collections.
Now I wonder if this also applies for non-collections. The following 2 methods create a Foo[T,O]
def sort1[T, O](from: T, to: T)(implicit ev: O <:< Ordering[T], ord: O):Foo[T,O] = {
...
}
def sort2[T, O <: Ordering[Int]](from: T, to: T)(implicit ord: O):Foo[T,O] = {
...
}
Which of the two methods is preferred and why?
sort2(2,5) does work; with sort1(2,5) the compiler seems to find more implicits, since there is an ambiguous implicit resolution error.
Deferring type inference using generalized type constraints is all about going around limitations with type inference. These limitations are not necessarily bugs, they can be by design.
I can think of two common situations where they are useful:
You want to get the type inside another higher-kinded type, but you have no constraint to get it.
Example:
def sort[T, Coll <: SeqLike[T, Coll]](a: Coll): Coll
The compiler has no way to infer the type parameter T because it is not constrained in any way: it does not appear on the left-hand side of a <: or >: in the type parameter list, and does not appear in the arguments' types.
Generalized type constraints allow here to constraint (through an argument) the type T:
def sort[T, Coll](a: Coll)(implicit ev: Coll <:< SeqLike[T, Coll]): Coll
Note: this is only one way to do it. There is usually a way to get the same thing, or something very close, without the implicit evidence. Here it would be:
def sort[T, Coll <: SeqLike[T, Coll]](a: Coll with SeqLike[T, Coll]): Coll
You have no control over the type parameter, because it comes from the enclosing class.
For example, adding a flatten method on List[A] that only works if A is a collection itself. You cannot change the type parameter A just for that one method, but you can locally add a constraint with an implicit ev: A <:< Traversable[B] or something similar.
Note 2: this isn't what is done in the collection library, which uses implicit ev: (A) => Traversable[B], so that anything that can be converted to a collection works (like String or Array), but sometimes you do not want that.
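A sketch of that local-constraint trick on a made-up wrapper (NestedList is hypothetical; Iterable is used here in place of the older Traversable):

```scala
class NestedList[A](val items: List[A]) {
  // flattenAll is only callable when A is itself a collection of Bs;
  // the evidence parameter both proves this and performs the conversion.
  def flattenAll[B](implicit ev: A <:< Iterable[B]): List[B] =
    items.flatMap(a => ev(a))
}

new NestedList(List(List(1, 2), List(3))).flattenAll // List(1, 2, 3)
// new NestedList(List(1, 2, 3)).flattenAll          // error: Int is not a collection
```

The class parameter A stays unconstrained, so NestedList[Int] is still a perfectly valid type; only the one method is gated.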
Edit to address the sort1 vs sort2 question: adding a generalized type constraint when none is needed can create this kind of error because types become under-constrained. Since there is no constraint on O in sort1, ord: O can be anything. The implicit evidence can only be used to view an O as an Ordering[T] within the body of the method.
If you really wanted to keep the implicit evidence, you would have to re-introduce some constraint somewhere. On the type parameter like in sort2, or on ord itself:
def sort3[T, O](from: T, to: T)
(implicit ev: O <:< Ordering[T], ord: O with Ordering[T]):Foo[T,O]
in this case, sort2 seems to be the best way to do this.
Indeed, this can seem unintuitive.
The reason is that sort1 defines no constraint on O, so ord: O is ambiguous
(the constraint on the first implicit parameter only constrains the type of ev).
Hope it helps :)