Scala contravariance - real life example - scala

I understand covariance and contravariance in Scala. Covariance has many applications in the real world, but I cannot think of any applications of contravariance, except the same old examples involving functions.
Can someone shed some light on real world examples of contravariance use?

In my opinion, the two simplest examples after Function are ordering and equality. However, the first is not contravariant in Scala's standard library, and the second doesn't even exist in it. So I'm going to use the Scalaz equivalents: Order and Equal.
Next, I need some class hierarchy, preferably one that is familiar and for which, of course, both concepts above make sense. If Scala had a Number superclass of all numeric types, that would have been perfect. Unfortunately, it has no such thing.
So I'm going to try to make the examples with collections. To make it simple, let's just consider Seq[Int] and List[Int]. It should be clear that List[Int] is a subtype of Seq[Int], i.e., List[Int] <: Seq[Int].
So, what can we do with it? First, let's write something that compares two lists:
def smaller(a: List[Int], b: List[Int])(implicit ord: Order[List[Int]]) =
  if (ord.order(a, b) == LT) a else b
Now I'm going to write an implicit Order for Seq[Int]:
implicit val seqOrder = new Order[Seq[Int]] {
  def order(a: Seq[Int], b: Seq[Int]) =
    if (a.size < b.size) LT
    else if (b.size < a.size) GT
    else EQ
}
With these definitions, I can now do something like this:
scala> smaller(List(1), List(1, 2, 3))
res0: List[Int] = List(1)
Note that I'm asking for an Order[List[Int]], but I'm passing an Order[Seq[Int]]. This means that Order[Seq[Int]] <: Order[List[Int]]. Given that Seq[Int] >: List[Int], this is only possible because of contravariance.
The next question is: does it make any sense?
Let's consider smaller again. I want to compare two lists of integers. Naturally, anything that compares two lists is acceptable, but what's the logic of something that compares two Seq[Int] being acceptable?
Note in the definition of seqOrder how the things being compared become parameters to it. Obviously, a List[Int] can be a parameter to something expecting a Seq[Int]. It follows that something that compares Seq[Int] is acceptable in place of something that compares List[Int]: both can be used with the same parameters.
What about the reverse? Let's say I had a method that only compared :: (list's cons), which, together with Nil, is a subtype of List. I obviously could not use this, because smaller might well receive a Nil to compare. It follows that an Order[::[Int]] cannot be used instead of Order[List[Int]].
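To see that failure concretely, here is a sketch (the consOrder value below is made up for illustration and simply mirrors seqOrder):
implicit val consOrder = new Order[::[Int]] {
  def order(a: ::[Int], b: ::[Int]) =
    if (a.size < b.size) LT
    else if (b.size < a.size) GT
    else EQ
}
// If consOrder were the only Order in scope, smaller(List(1), List(1, 2, 3))
// would not compile: an Order[::[Int]] does not conform to Order[List[Int]],
// precisely because smaller might need to compare a Nil.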
Let's proceed to equality, and write a method for it:
def equalLists(a: List[Int], b: List[Int])(implicit eq: Equal[List[Int]]) = eq.equal(a, b)
Because Order extends Equal, I can use it with the same implicit above:
scala> equalLists(List(4, 5, 6), List(1, 2, 3)) // we are comparing lengths!
res3: Boolean = true
The logic here is the same. Anything that can tell whether two Seq[Int] are the same can, obviously, also tell whether two List[Int] are the same. From that, it follows that Equal[Seq[Int]] <: Equal[List[Int]], which is true because Equal is contravariant.

This example is from the last project I was working on. Say you have a type-class PrettyPrinter[A] that provides logic for pretty-printing objects of type A. Now if B >: A (i.e. if B is a superclass of A) and you know how to pretty-print B (i.e. have an instance of PrettyPrinter[B] available), then you can use the same logic to pretty-print A. In other words, B >: A implies PrettyPrinter[B] <: PrettyPrinter[A]. So you can declare PrettyPrinter[A] contravariant on A.
scala> trait Animal
defined trait Animal
scala> case class Dog(name: String) extends Animal
defined class Dog
scala> trait PrettyPrinter[-A] {
| def pprint(a: A): String
| }
defined trait PrettyPrinter
scala> def pprint[A](a: A)(implicit p: PrettyPrinter[A]) = p.pprint(a)
pprint: [A](a: A)(implicit p: PrettyPrinter[A])String
scala> implicit object AnimalPrettyPrinter extends PrettyPrinter[Animal] {
| def pprint(a: Animal) = "[Animal : %s]" format (a)
| }
defined module AnimalPrettyPrinter
scala> pprint(Dog("Tom"))
res159: String = [Animal : Dog(Tom)]
Some other examples would be the Ordering type-class from the Scala standard library, and the Equal, Show (isomorphic to PrettyPrinter above), and Resource type-classes from Scalaz, etc.
Edit:
As Daniel pointed out, Scala's Ordering isn't contravariant. (I really don't know why.) You may instead consider scalaz.Order, which is intended for the same purpose as scala.Ordering but is contravariant in its type parameter.
Addendum:
Addendum: The supertype-subtype relationship is but one kind of relationship that can exist between two types; many such relationships are possible. Let's consider two types A and B related by a function f: B => A (i.e. an arbitrary relation). A data type F[_] is said to be a contravariant functor if you can define an operation contramap for it that can lift a function of type B => A to a function of type F[A] => F[B].
The following laws need to be satisfied:
x.contramap(identity) == x
x.contramap(f).contramap(g) == x.contramap(f compose g)
All of the data types discussed above (Show, Equal etc.) are contravariant functors. This property lets us do useful things such as the one illustrated below:
Suppose you have a class Candidate defined as:
case class Candidate(name: String, age: Int)
You need an Order[Candidate] which orders candidates by their age. Now you know that there is an Order[Int] instance available. You can obtain an Order[Candidate] instance from that with the contramap operation:
val byAgeOrder: Order[Candidate] =
  implicitly[Order[Int]] contramap ((_: Candidate).age)
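The same trick is available in the standard library even though Ordering is invariant: Ordering.by is essentially contramap under another name. A minimal sketch, reusing the Candidate class above:
// Lift a Candidate => Int into an Ordering[Candidate], reusing Ordering[Int].
val byAgeOrdering: Ordering[Candidate] = Ordering.by((c: Candidate) => c.age)

List(Candidate("A", 42), Candidate("B", 23)).sorted(byAgeOrdering)
// List(Candidate(B,23), Candidate(A,42))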

Here is an example based on a real-world event-driven software system. Such a system is based on broad categories of events, like events related to the functioning of the system (system events), events generated by user actions (user events) and so on.
A possible event hierarchy:
trait Event
trait UserEvent extends Event
trait SystemEvent extends Event
trait ApplicationEvent extends SystemEvent
trait ErrorEvent extends ApplicationEvent
Now the programmers working on the event-driven system need to find a way to register/process the events generated in the system. They will create a trait, Sink, that is used to mark components that need to be notified when an event has been fired.
trait Sink[-In] {
  def notify(o: In)
}
As a consequence of marking the type parameter with the - symbol, the Sink type becomes contravariant.
A possible way to notify interested parties that an event happened is to write a method and to pass it the corresponding event. This method will hypothetically do some processing and then it will take care of notifying the event sink:
def appEventFired(e: ApplicationEvent, s: Sink[ApplicationEvent]): Unit = {
  // do some processing related to the event
  // notify the event sink
  s.notify(e)
}
def errorEventFired(e: ErrorEvent, s: Sink[ErrorEvent]): Unit = {
  // do some processing related to the event
  // notify the event sink
  s.notify(e)
}
Here are a couple of hypothetical Sink implementations:
trait SystemEventSink extends Sink[SystemEvent]
val ses = new SystemEventSink {
  override def notify(o: SystemEvent): Unit = ???
}
trait GenericEventSink extends Sink[Event]
val ges = new GenericEventSink {
  override def notify(o: Event): Unit = ???
}
The following method calls are accepted by the compiler:
appEventFired(new ApplicationEvent {}, ses)
errorEventFired(new ErrorEvent {}, ges)
appEventFired(new ApplicationEvent {}, ges)
Looking at the series of calls you notice that it is possible to call a method expecting a Sink[ApplicationEvent] with a Sink[SystemEvent] and even with a Sink[Event]. Also, you can call the method expecting a Sink[ErrorEvent] with a Sink[Event].
By replacing invariance with a contravariance constraint, a Sink[SystemEvent] becomes a subtype of Sink[ApplicationEvent]. Therefore, contravariance can also be thought of as a ‘widening’ relationship, since types are ‘widened’ from more specific to more generic.
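For contrast, the substitution does not work in the other direction; a brief sketch (the ErrorEventSink trait below is illustrative, not part of the original example):
trait ErrorEventSink extends Sink[ErrorEvent]
val ees = new ErrorEventSink {
  override def notify(o: ErrorEvent): Unit = ???
}
// appEventFired(new ApplicationEvent {}, ees) // does not compile:
// a Sink[ErrorEvent] only knows how to handle ErrorEvents, but appEventFired
// may feed it any ApplicationEvent.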
Conclusion
This example has been described in a series of articles about variance found on my blog.
In the end, I think it helps to also understand the theory behind it...

Short answer that might help people who were super confused like me and didn't want to read these long winded examples:
Imagine you have two classes: Animal, and Cat, which extends Animal. Now imagine that you have a type Printer[Cat] that contains the functionality for printing Cats, and you have a method like this:
def print(p: Printer[Cat], cat: Cat) = p.print(cat)
But the thing is, since a Cat is an Animal, a Printer[Animal] should also be able to print Cats, right?
Well, if Printer[T] were defined like Printer[-T], i.e. contravariant, then we could pass Printer[Animal] to the print function above and use its functionality to print cats.
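Here is a compilable sketch of that idea (the class names simply mirror the description above):
trait Animal
case class Cat(name: String) extends Animal

trait Printer[-T] { // contravariant in T
  def print(t: T): String
}

val animalPrinter = new Printer[Animal] {
  def print(a: Animal): String = s"Some animal: $a"
}

def print(p: Printer[Cat], cat: Cat): String = p.print(cat)

print(animalPrinter, Cat("Felix")) // compiles: Printer[Animal] <: Printer[Cat]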
This is why contravariance exists. Another example, from C#, is the IComparer<T> interface, which is contravariant as well. Why? Because we should be able to use Animal comparers to compare Cats, too.


Scala: Casting results of groupBy(_.getClass)

In this hypothetical, I have a list of operations to be executed. Some of the operations in that list will be more efficient if they can be batched together (e.g., looking up different rows from the same table in a database).
trait Result
trait BatchableOp[T <: BatchableOp[T]] {
  def resolve(batch: Vector[T]): Vector[Result]
}
Here we use F-bounded Polymorphism to allow the implementation of the operation to refer to its own type, which is highly convenient.
However, this poses a problem when it comes time to execute:
def execute(operations: Vector[BatchableOp[_]]): Vector[Result] = {
  def helper[T <: BatchableOp[T]](clazz: Class[T], batch: Vector[T]): Vector[Result] =
    batch.head.resolve(batch)
  operations
    .groupBy(_.getClass)
    .toVector
    .flatMap { case (clazz, batch) => helper(clazz, batch) }
}
This results in a compiler error stating inferred type arguments [BatchableOp[_]] do not conform to method helper's type parameter bounds [T <: BatchableOp[T]].
How can the Scala compiler be convinced that the group is all of the same type (which is a subclass of BatchableOp)?
One workaround is to specify the type explicitly, but in this case the type is unknown.
Another workaround relies on enumerating the child types, but I'd like to not have to update the execute method after implementing a new BatchableOp type.
I would like to approach the question systematically, so that the same solution strategy can be applied in similar cases.
First, an obvious remark: you want to work with a vector. The content of the vector can be of different types. The length of the vector is not limited. The number of types of entries of the vector is not limited. Therefore, the compiler cannot prove everything at compile time: you will have to use something like asInstanceOf at some point.
Now to the solution of the actual question:
The following compiles under Scala 2.12.4:
import scala.language.existentials
trait Result
type BOX = BatchableOp[X] forSome { type X <: BatchableOp[X] }
trait BatchableOp[C <: BatchableOp[C]] {
  def resolve(batch: Vector[C]): Vector[Result]
  // not abstract, needed only once!
  def collectSameClassInstances(batch: Vector[BOX]): Vector[C] = {
    for (b <- batch if this.getClass.isAssignableFrom(b.getClass))
      yield b.asInstanceOf[C]
  }
  // not abstract either, no additional hassle for subclasses!
  def collectAndResolve(batch: Vector[BOX]): Vector[Result] =
    resolve(collectSameClassInstances(batch))
}
def execute(operations: Vector[BOX]): Vector[Result] = {
  operations
    .groupBy(_.getClass)
    .toVector
    .flatMap { case (_, batch) =>
      batch.head.collectAndResolve(batch)
    }
}
The main problem that I see here is that in Scala (unlike in some experimental dependently typed languages) there is no simple way to write down complex computations "under the assumption of existence of a type".
Therefore, it seems difficult / impossible to transform
Vector[BatchOp[T] forSome T]
into a
Vector[BatchOp[T]] forSome T
Here, the first type says: "it's a vector of batchOps, their types are unknown, and can be all different", whereas the second type says: "it's a vector of batchOps of unknown type T, but at least we know that they are all the same".
What you want is something like the following hypothetical language construct:
val vec1: Vector[BatchOp[T] forSome T] = ???
val vec2: Vector[BatchOp[T]] forSome T =
  assumingExistsSomeType[C <: BatchOp[C]] yield {
    /* `C` now available inside this scope `S` */
    vec1.map(_.asInstanceOf[C])
  }
Unfortunately, we don't have anything like it for existential types: we can't introduce a helper type C in some scope S such that when C is eliminated, we are left with an existential (at least I don't see a general way to do it).
Therefore, the only interesting question that is to be answered here is:
Given a Vector[BatchOp[X] forSome X] for which I know that there is one common type C such that they all are actually Vector[C], where is the scope in which this C is present as a usable type variable?
It turns out that BatchableOp[C] itself has a type variable C in scope. Therefore, I can add a method collectSameClassInstances to BatchableOp[C], and this method will actually have some type C available that it can use in the return type. Then I can immediately pass the result of collectSameClassInstances to the resolve method, and then I get a completely benign Vector[Result] type as output.
Final remark: If you decide to write any code with F-bounded polymorphisms and existentials, at least make sure that you have documented very clearly what exactly you are doing there, and how you will ensure that this combination does not escape in any other parts of the codebase. It doesn't feel like a good idea to expose such interfaces to the users. Keep it localized, make sure these abstractions do not leak anywhere.
Andrey's answer has a key insight that the only scope with the appropriate type variable is on the BatchableOp itself. Here's a reduced version that doesn't rely on importing existentials:
trait Result
trait BatchableOp[T <: BatchableOp[T]] {
  def resolve(batch: Vector[T]): Vector[Result]
  def unsafeResolve(batch: Vector[BatchableOp[_]]): Vector[Result] = {
    resolve(batch.asInstanceOf[Vector[T]])
  }
}
def execute(operations: Vector[BatchableOp[_]]): Vector[Result] = {
  operations
    .groupBy(_.getClass)
    .toVector
    .flatMap { case (_, batch) =>
      batch.head.unsafeResolve(batch)
    }
}
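A quick usage sketch, assuming the definitions above (the GetUser and CountRows operations and their result types are made-up examples):
case class UserResult(id: Int) extends Result
case class GetUser(id: Int) extends BatchableOp[GetUser] {
  def resolve(batch: Vector[GetUser]): Vector[Result] =
    batch.map(op => UserResult(op.id)) // e.g. one query for the whole batch
}

case class CountResult(n: Long) extends Result
case class CountRows(table: String) extends BatchableOp[CountRows] {
  def resolve(batch: Vector[CountRows]): Vector[Result] =
    batch.map(_ => CountResult(0L))
}

execute(Vector[BatchableOp[_]](GetUser(1), CountRows("users"), GetUser(2)))
// groupBy(_.getClass) puts the two GetUser ops into one batch and the CountRows
// op into another; each batch is then resolved by its own resolve method.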

How to check covariant and contravariant position of an element in the function?

This is a code snippet from one of the articles that I read regarding contravariance and covariance in Scala. However, I fail to understand the error message thrown by the Scala compiler: "error: covariant type A occurs in contravariant position in type A of value pet2".
class Pets[+A](val pet: A) {
  def add(pet2: A): String = "done"
}
My understanding of this code snippet is that Pets is covariant and accepts objects that are subtypes of A. However, the function add takes a parameter of type A only. Being covariant means that Pets can take parameters of type A and its subtypes. Then why does this throw an error? Where does the question of contravariance even arise?
Any explanation of the above error message would be highly helpful. Thanks.
TL;DR:
Your Pets class can produce values of type A by returning the member variable pet, so a Pets[VeryGeneral] cannot be a subtype of Pets[VerySpecial], because when it produces something VeryGeneral, it cannot guarantee that it is also an instance of VerySpecial. Therefore, it cannot be contravariant.
Your Pets class can consume values of type A by passing them as arguments to add. Therefore a Pets[VerySpecial] cannot be a subtype of Pets[VeryGeneral], because it will choke on any input that is not VerySpecial. Therefore, your class cannot be covariant.
The only remaining possibility is: Pets must be invariant in A.
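For completeness, the usual way out (a standard technique, not part of the original snippet) is to keep Pets covariant and give add a lower-bounded type parameter, so that A never appears in contravariant position; a minimal sketch:
class Pets[+A](val pet: A) {
  def add[B >: A](pet2: B): String = "done" // B is a supertype of A
}

class Animal
class Dog extends Animal

val dogs: Pets[Dog] = new Pets(new Dog)
val pets: Pets[Animal] = dogs // allowed by covariance
pets.add(new Animal)          // fine: B is inferred as Animal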
### An illustration: Covariance vs. Contravariance
I'll use this opportunity to present an improved and significantly more rigorous version of this comic. It is an illustration of the covariance and contravariance concepts for programming languages with subtyping and declaration-site variance annotations (apparently, even Java people found it sufficiently enlightening, despite the fact that the question was about use-site variance). First the illustration (Figure 1), then a more detailed description with compilable Scala code.
### Explanation for Contravariance (left part of Figure 1)
Consider the following hierarchy of energy sources, from very general, to very specific:
class EnergySource
class Vegetables extends EnergySource
class Bamboo extends Vegetables
Now consider a trait Consumer[-A] that has a single consume(a: A) method:
trait Consumer[-A] {
  def consume(a: A): Unit
}
Let's implement a few examples of this trait:
object Fire extends Consumer[EnergySource] {
  def consume(a: EnergySource): Unit = a match {
    case b: Bamboo => println("That's bamboo! Burn, bamboo!")
    case v: Vegetables => println("Water evaporates, vegetable burns.")
    case c: EnergySource => println("A generic energy source. It burns.")
  }
}
object GeneralistHerbivore extends Consumer[Vegetables] {
  def consume(a: Vegetables): Unit = a match {
    case b: Bamboo => println("Fresh bamboo shoots, delicious!")
    case v: Vegetables => println("Some vegetables, nice.")
  }
}
object Panda extends Consumer[Bamboo] {
  def consume(b: Bamboo): Unit = println("Bamboo! I eat nothing else!")
}
Now, why does Consumer have to be contravariant in A? Let's try to instantiate a few different energy sources, and then feed them to various consumers:
val oilBarrel = new EnergySource
val mixedVegetables = new Vegetables
val bamboo = new Bamboo
Fire.consume(bamboo) // ok
Fire.consume(mixedVegetables) // ok
Fire.consume(oilBarrel) // ok
GeneralistHerbivore.consume(bamboo) // ok
GeneralistHerbivore.consume(mixedVegetables) // ok
// GeneralistHerbivore.consume(oilBarrel) // No! Won't compile
Panda.consume(bamboo) // ok
// Panda.consume(mixedVegetables) // No! Might contain sth Panda is allergic to
// Panda.consume(oilBarrel) // No! Pandas obviously cannot eat crude oil
The outcome is: Fire can consume everything a GeneralistHerbivore can consume, and in turn GeneralistHerbivore can consume everything a Panda can eat. Therefore, as long as we care only about the ability to consume energy sources, Consumer[EnergySource] can be substituted where a Consumer[Vegetables] is required, and Consumer[Vegetables] can be substituted where a Consumer[Bamboo] is required. Therefore, it makes sense that Consumer[EnergySource] <: Consumer[Vegetables] and Consumer[Vegetables] <: Consumer[Bamboo], even though the relationship between the type parameters is exactly the opposite:
type >:>[B, A] = A <:< B
implicitly: EnergySource >:> Vegetables
implicitly: EnergySource >:> Bamboo
implicitly: Vegetables >:> Bamboo
implicitly: Consumer[EnergySource] <:< Consumer[Vegetables]
implicitly: Consumer[EnergySource] <:< Consumer[Bamboo]
implicitly: Consumer[Vegetables] <:< Consumer[Bamboo]
### Explanation for Covariance (right part of Figure 1)
Define a hierarchy of products:
class Entertainment
class Music extends Entertainment
class Metal extends Music // yes, it does, seriously^^
Define a trait that can produce values of type A:
trait Producer[+A] {
  def get: A
}
Define various "sources"/"producers" of varying levels of specialization:
object BrowseYoutube extends Producer[Entertainment] {
  def get: Entertainment = List(
    new Entertainment { override def toString = "Lolcats" },
    new Entertainment { override def toString = "Juggling Clowns" },
    new Music { override def toString = "Rick Astley" }
  )((System.currentTimeMillis % 3).toInt)
}
object RandomMusician extends Producer[Music] {
  def get: Music = List(
    new Music { override def toString = "...plays Mozart's Piano Sonata no. 11" },
    new Music { override def toString = "...plays BBF3 piano cover" }
  )((System.currentTimeMillis % 2).toInt)
}
object MetalBandMember extends Producer[Metal] {
  def get = new Metal { override def toString = "I" }
}
The BrowseYoutube object is the most generic source of Entertainment: it could give you basically any kind of entertainment: cat videos, juggling clowns, or (accidentally) some music. This generic source of Entertainment is represented by the archetypical jester in Figure 1. The RandomMusician is already somewhat more specialized; at least we know that this object produces music (even though there is no restriction to any particular genre). Finally, MetalBandMember is extremely specialized: its get method is guaranteed to return only the very specific kind of Metal music.
Let's try to obtain various kinds of Entertainment from those three objects:
val entertainment1: Entertainment = BrowseYoutube.get // ok
val entertainment2: Entertainment = RandomMusician.get // ok
val entertainment3: Entertainment = MetalBandMember.get // ok
// val music1: Music = BrowseYoutube.get // No: could be cat videos!
val music2: Music = RandomMusician.get // ok
val music3: Music = MetalBandMember.get // ok
// val metal1: Metal = BrowseYoutube.get // No, probably not even music
// val metal2: Metal = RandomMusician.get // No, could be Mozart, could be Rick Astley
val metal3: Metal = MetalBandMember.get // ok, because we get it from the specialist
We see that all three of Producer[Entertainment], Producer[Music] and Producer[Metal] can produce some kind of Entertainment. We see that only Producer[Music] and Producer[Metal] are guaranteed to produce Music. Finally, we see that only the extremely specialized Producer[Metal] is guaranteed to produce Metal and nothing else. Therefore, Producer[Music] and Producer[Metal] can be substituted for a Producer[Entertainment]. A Producer[Metal] can be substituted for a Producer[Music].
In general, a producer of a more specific kind of product can be substituted for a less specialized producer:
implicitly: Metal <:< Music
implicitly: Metal <:< Entertainment
implicitly: Music <:< Entertainment
implicitly: Producer[Metal] <:< Producer[Music]
implicitly: Producer[Metal] <:< Producer[Entertainment]
implicitly: Producer[Music] <:< Producer[Entertainment]
The subtyping relationship between the products is the same as the subtyping relationship between the producers of the products. This is what covariance means.
Related links
A similar discussion about ? extends A and ? super B in Java 8:
Java 8 Comparator comparing() static function
Classical "what are the right type parameters for flatMap in my own Either implementation" question: Type L appears in contravariant position in Either[L, R]
Class Pets is covariant in its type parameter A (because it's marked as +A), but you are using A in a contravariant position. This is because, if you take a look at the Function trait in Scala, you will see that the input parameter type is contravariant, while the return type is covariant. Every function is contravariant in its input type and covariant in its return type.
For example, function taking one argument has this definition:
trait Function1[-T1, +R]
The thing is, for a function S to be a subtype of function F, it needs to "require (same or) less and provide (same or) more". This is also known as the Liskov substitution principle. In practice, this means that the Function trait needs to be contravariant in its input and covariant in its output. By being contravariant in its input, it requires "same or less", because it accepts either T1 or any of its supertypes (here "less" means "supertype" because we are loosening the restriction, e.g. from Fruit to Food). Also, by being covariant in its return type, it provides "same or more", meaning that it can return R or anything more specific than that (here "more" means "subtype" because we are adding more information, e.g. from Fruit to Apple).
But why? Why not the other way around? Here's an example that will hopefully explain it more intuitively - imagine two concrete functions, one being subtype of another:
val f: Fruit => Fruit
val s: Food => Apple
Function s is a valid subtype of function f, because it requires less (we "lose" information going from Fruit to Food) and provides more (we "gain" information going from Fruit to Apple). Note how s has an input type that's a supertype of f's input type (contravariance), and it has a return type that's a subtype of f's return type (covariance). Now let's imagine a piece of code that uses such functions:
def someMethod(fun: Fruit => Fruit) = // some implementation
Both someMethod(f) and someMethod(s) are valid invocations. Method someMethod uses fun internally to feed fruit to it and to receive fruit from it. Since s is a subtype of f, that means we can supply Food => Apple to serve as a perfectly good instance of fun. Code inside someMethod will at some point feed fun some fruit, which is OK, because fun takes food, and fruit is food. On the other hand, fun having Apple as its return type is also fine, because fun should return fruit, and by returning apples it complies with that contract.
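To make the example above actually compile, here is a self-contained sketch (the class hierarchy and function bodies are filled in purely for illustration):
class Food
class Fruit extends Food
class Apple extends Fruit

val f: Fruit => Fruit = fruit => fruit
val s: Food => Apple = food => new Apple

def someMethod(fun: Fruit => Fruit): Fruit = fun(new Fruit)

someMethod(f) // ok
someMethod(s) // also ok: Food => Apple is a subtype of Fruit => Fruit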
I hope I managed to clarify it a bit, feel free to ask further questions.

Map from Class[T] to T without casting

I want to map from class tokens to instances along the lines of the following code:
trait Instances {
  def put[T](key: Class[T], value: T)
  def get[T](key: Class[T]): T
}
Can this be done without having to resort to casts in the get method?
Update:
How could this be done for the more general case with some Foo[T] instead of Class[T]?
You can try retrieving the object from your map as an Any, then using your Class[T] to “cast reflectively”:
trait Instances {
  private val map = collection.mutable.Map[Class[_], Any]()
  def put[T](key: Class[T], value: T) { map += (key -> value) }
  def get[T](key: Class[T]): T = key.cast(map(key))
}
With the help of a friend of mine, we defined the map with Manifest keys instead of Class keys, which gives a better API when calling.
I didn't get your updated question about the "general case with some Foo[T] instead of Class[T]", but this should work for the cases you specified.
object Instances {
  private val map = collection.mutable.Map[Manifest[_], Any]()
  def put[T: Manifest](value: T) = map += manifest[T] -> value
  def get[T: Manifest]: T = map(manifest[T]).asInstanceOf[T]
  def main(args: Array[String]) {
    put(1)
    put("2")
    println(get[Int])
    println(get[String])
  }
}
If you want to do this without any casting (even within get) then you will need to write a heterogeneous map. For reasons that should be obvious, this is tricky. :-) The easiest way would probably be to use a HList-like structure and build a find function. However, that's not trivial since you need to define some way of checking type equality for two arbitrary types.
I attempted to get a little tricky with tuples and existential types. However, Scala doesn't provide a unification mechanism (pattern matching doesn't work). Also, subtyping ties the whole thing in knots and basically eliminates any sort of safety it might have provided:
val xs: List[(Class[A], A) forSome { type A }] = List(
  classOf[String] -> "foo", classOf[Int] -> 42)
val search = classOf[String]
val finalResult = xs collect { case (`search`, result) => result } headOption
In this example, finalResult will be of type Any. This is actually rightly so, since subtyping means that we don't really know anything about A. It's not why the compiler is choosing that type, but it is a correct choice. Take for example:
val xs: List[(Class[A], A) forSome { type A }] = List(classOf[Boolean] -> 'bippy)
This is totally legal! Subtyping means that A in this case will be chosen as Any. It's hardly what we want, but it is what you will get. Thus, in order to express this constraint without tracking all of the types individually (using an HMap), Scala would need to be able to express the constraint that a type is a specific type and nothing else. Unfortunately, Scala does not have this ability, and so we're basically stuck on the generic constraint front.
Update: Actually, it's not legal. I just tried it and the compiler kicked it out. I think that only worked because Class is invariant in its type parameter. So, if Foo is a definite type that is invariant, you should be safe from this case. It still doesn't solve the unification problem, but at least it's sound. Unfortunately, type constructors are assumed to be in a magical super-position between co-, contra- and invariance, so if it's truly an arbitrary type Foo of kind * => *, then you're still sunk on the existential front.
In summary: it should be possible, but only if you fully encode Instances as a HMap. Personally, I would just cast inside get. Much simpler!

What does "abstract over" mean?

Often in the Scala literature, I encounter the phrase "abstract over", but I don't understand the intent. For example, Martin Odersky writes
You can pass methods (or "functions") as parameters, or you can abstract over them. You can specify types as parameters, or you can abstract over them.
As another example, in the "Deprecating the Observer Pattern" paper,
A consequence from our event streams being first-class values is that we can abstract over them.
I have read that first order generics "abstract over types", while monads "abstract over type constructors". And we also see phrases like this in the Cake Pattern paper. To quote one of many such examples:
Abstract type members provide flexible way to abstract over concrete types of components.
Even relevant stack overflow questions use this terminology. "can't existentially abstract over parameterized type..."
So... what does "abstract over" actually mean?
In algebra, as in everyday concept formation, abstractions are formed by grouping things by some essential characteristics and omitting their specific other characteristics. The abstraction is unified under a single symbol or word denoting the similarities. We say that we abstract over the differences, but this really means we're integrating by the similarities.
For example, consider a program that takes the sum of the numbers 1, 2, and 3:
val sumOfOneTwoThree = 1 + 2 + 3
This program is not very interesting, since it's not very abstract. We can abstract over the numbers we're summing, by integrating all lists of numbers under a single symbol ns:
def sumOf(ns: List[Int]) = ns.foldLeft(0)(_ + _)
And we don't particularly care that it's a List either. List is a specific type constructor (takes a type and returns a type), but we can abstract over the type constructor by specifying which essential characteristic we want (that it can be folded):
trait Foldable[F[_]] {
  def foldl[A, B](as: F[A], z: B, f: (B, A) => B): B
}
def sumOf[F[_]](ns: F[Int])(implicit ff: Foldable[F]) =
  ff.foldl(ns, 0, (x: Int, y: Int) => x + y)
And we can have implicit Foldable instances for List and any other thing we can fold.
implicit val listFoldable = new Foldable[List] {
  def foldl[A, B](as: List[A], z: B, f: (B, A) => B) = as.foldLeft(z)(f)
}
implicit val setFoldable = new Foldable[Set] {
  def foldl[A, B](as: Set[A], z: B, f: (B, A) => B) = as.foldLeft(z)(f)
}
val sumOfOneTwoThree = sumOf(List(1,2,3))
What's more, we can abstract over both the operation and the type of the operands:
trait Monoid[M] {
  def zero: M
  def add(m1: M, m2: M): M
}
trait Foldable[F[_]] {
  def foldl[A, B](as: F[A], z: B, f: (B, A) => B): B
  def foldMap[A, B](as: F[A], f: A => B)(implicit m: Monoid[B]): B =
    foldl(as, m.zero, (b: B, a: A) => m.add(b, f(a)))
}
def mapReduce[F[_], A, B](as: F[A], f: A => B)
                         (implicit ff: Foldable[F], m: Monoid[B]) =
  ff.foldMap(as, f)
Now we have something quite general. The method mapReduce will fold any F[A] given that we can prove that F is foldable and that A is a monoid or can be mapped into one. For example:
case class Sum(value: Int)
case class Product(value: Int)
implicit val sumMonoid = new Monoid[Sum] {
  def zero = Sum(0)
  def add(a: Sum, b: Sum) = Sum(a.value + b.value)
}
implicit val productMonoid = new Monoid[Product] {
  def zero = Product(1)
  def add(a: Product, b: Product) = Product(a.value * b.value)
}
val sumOf123 = mapReduce(List(1,2,3), Sum)
val productOf456 = mapReduce(Set(4,5,6), Product)
We have abstracted over monoids and foldables.
To a first approximation, being able to "abstract over" something means that instead of using that something directly, you can make a parameter of it, or otherwise use it "anonymously".
Scala allows you to abstract over types, by allowing classes, methods, and values to have type parameters, and values to have abstract (or anonymous) types.
Scala allows you to abstract over actions, by allowing methods to have function parameters.
Scala allows you to abstract over features, by allowing types to be defined structurally.
Scala allows you to abstract over type parameters, by allowing higher-order type parameters.
Scala allows you to abstract over data access patterns, by allowing you to create extractors.
Scala allows you to abstract over "things that can be used as something else", by allowing implicit conversions as parameters. Haskell does similarly with type classes.
Scala doesn't (yet) allow you to abstract over classes. You can't pass a class to something, and then use that class to create new objects. Other languages do allow abstraction over classes.
("Monads abstract over type constructors" is only true in a very restrictive way. Don't get hung up on it until you have your "Aha! I understand monads!!" moment.)
The ability to abstract over some aspect of computation is basically what allows code reuse, and enables the creation of libraries of functionality. Scala allows many more sorts of things to be abstracted over than more mainstream languages, and libraries in Scala can be correspondingly more powerful.
An abstraction is a sort of generalization.
http://en.wikipedia.org/wiki/Abstraction
Not only in Scala but in many languages, there is a need for such mechanisms to reduce complexity (or at least to create a hierarchy that partitions information into easier-to-understand pieces).
A class is an abstraction over a simple data type. It is sort of like a basic type but actually generalizes it. So a class is more than a simple data type but has many things in common with it.
When he says "abstracting over" he means the process by which you generalize. So if you are abstracting over methods as parameters, you are generalizing the process of doing that. E.g., instead of passing methods to functions, you might create some generalized way to handle it (such as not passing methods at all but building up a special system to deal with it).
In this case he specifically means the process of abstracting a problem and creating an OOP-like solution to the problem. C has very little ability to abstract (you can do it, but it gets messy real quick and the language doesn't directly support it). If you wrote it in C++, you could use OOP concepts to reduce the complexity of the problem (well, it's the same complexity, but the conceptualization is generally easier, at least once you learn to think in terms of abstractions).
E.g., if I needed a special data type that was like an int but, let's say, restricted, I could abstract over it by creating a new type that could be used like an int but had the properties I needed. The process I would use to do such a thing would be called "abstracting".
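For instance, here is a minimal sketch of such a restricted int-like type (the Percentage name and its range are made up for illustration):
// Behaves like an Int where we allow it, but only accepts values in 0..100.
final case class Percentage(value: Int) {
  require(value >= 0 && value <= 100, "percentage must be between 0 and 100")
  def +(other: Percentage): Percentage =
    Percentage(math.min(100, value + other.value))
}

Percentage(40) + Percentage(30) // Percentage(70)
// Percentage(150)              // throws IllegalArgumentException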
Here is my narrow show and tell interpretation. It's self-explanatory and runs in the REPL.
class Parameterized[T] { // type as a parameter
  def call(func: (Int) => Int) = func(1) // function as a parameter
  def use(l: Long) { println(l) } // value as a parameter
}
val p = new Parameterized[String] // pass type String as a parameter
p.call((i: Int) => i + 1) // pass function increment as a parameter
p.use(1L) // pass value 1L as a parameter

abstract class Abstracted {
  type T // abstract over a type
  def call(i: Int): Int // abstract over a function
  val l: Long // abstract over a value
  def use() { println(l) }
}
class Concrete extends Abstracted {
  type T = String // specialize type as String
  def call(i: Int): Int = i + 1 // specialize function as increment function
  val l = 1L // specialize value as 1L
}
val a: Abstracted = new Concrete
a.call(1)
a.use()
The other answers already give a good idea of what kinds of abstractions exist. Let's go over the quotes one by one and provide an example:
You can pass methods (or "functions") as parameters, or you can abstract over them. You can specify types as parameters, or you can abstract over them.
Pass a function as a parameter: List(1, -2, 3).map(math.abs). Clearly abs is passed as a parameter here. map itself abstracts over a function that does a certain specialized thing with each list element. val list = List[String]() specifies a type parameter (String). You could write a collection type which uses abstract type members instead: val buffer = new Buffer { type Elem = String }. One difference is that you have to write def f(list: List[String])... but def f(buffer: Buffer)..., so the element type is kind of "hidden" in the second method.
A consequence from our event streams being first-class values is that we can abstract over them.
In Swing an event just "happens" out of the blue, and you have to deal with it here and now. Event streams allow you to do all the plumbing and wiring in a more declarative way. E.g., when you want to change the responsible listener in Swing, you have to unregister the old one and register the new one, and you have to know all the gory details (e.g. threading issues). With event streams, the source of the events becomes a thing you can simply pass around, making it not very different from a byte or char stream, hence a more "abstract" concept.
Abstract type members provide flexible way to abstract over concrete types of components.
The Buffer type above is already an example of this.
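Since Buffer was only mentioned in passing, here is a sketch of what it might look like (hypothetical, just to make the earlier snippet concrete):
// A collection that abstracts over its element type with an abstract type
// member instead of a type parameter.
trait Buffer {
  type Elem
  def elements: List[Elem]
}

// Specialize the abstract type member when instantiating:
val buffer = new Buffer {
  type Elem = String
  def elements: List[Elem] = List("a", "b")
}

// The element type stays "hidden" in the signature:
def f(buffer: Buffer): List[buffer.Elem] = buffer.elements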
Answers above provide an excellent explanation, but to summarize it in a single sentence, I would say:
Abstracting over something is the very same as neglecting it where irrelevant.

Structural Type Dispatch in Scala

I'm trying to get a better grasp of structural type dispatch. For instance, assume I have an iterable object with a summary method that computes the mean. So o.summary() gives the mean value of the list. I might like to use structural type dispatch to enable summary(o).
Is there a set of best practices regarding o.summary() vs. summary(o)?
How does Scala resolve summary(o) if I have both a method summary(o: ObjectType) and a method summary(o: { def summary: Double })?
How does structural type dispatch differ from multimethods or generic functions?
Michael Galpin gives the following description of structural type dispatch:
Structural types are Scala’s version of “responds-to” style programming as seen in many dynamic languages. So like
def sayName(x: { def name: String }) {
  println(x.name)
}
Then any object with a method called name that takes no parameters and returns a String can be passed to sayName:
case class Person(name:String)
val dean = Person("Dean")
sayName(dean) // Dean
-- 1. In your example, I wouldn't use the summary(o) version, as this is not a very object-oriented style of programming. When calling o.summary (you could drop the parentheses as it has no side effects), you are asking for the summary property of o. When calling summary(o), you are passing o to a method that calculates the summary of o. I believe that the first approach is nicer :).
I haven't used structural type dispatch much, but I assume that it is best suited (in a large system) for the case where you would have to write an interface just because one method wants a type that has some method defined. Sometimes creating that interface and forcing the clients to implement it can be awkward. Sometimes you want to use a client defined in another API which conforms to your interface but doesn't explicitly implement it. So, in my opinion, structural type dispatch serves as a nice way to implement the adapter pattern implicitly (saves on boilerplate, yay!).
-- 2. Apparently, if you call summary(o) and o is of ObjectType, summary(o: ObjectType) gets called (which does make sense). If you call summary(bar), where bar is not of ObjectType, two things can happen: the call compiles if bar has a summary method of the right name and signature; otherwise, the call doesn't compile.
Example:
scala> case class ObjectType(summary: Double)
defined class ObjectType
scala> val o = ObjectType(1.2)
o: ObjectType = ObjectType(1.2)
scala> object Test {
| def summary(o: ObjectType) { println("1") }
| def summary(o: { def summary: Double}) { println("2")}
| }
defined module Test
scala> Test.summary(o)
1
Unfortunately, something like the following does not compile due to type erasure:
scala> object Test{
| def foo(a: {def a: Int}) { println("A") }
| def foo(b: {def b: Int}) { println("B") }
| }
:6: error: double definition:
method foo:(AnyRef{def b(): Int})Unit and
method foo:(AnyRef{def a(): Int})Unit at line 5
have same type after erasure: (java.lang.Object)Unit
def foo(b: {def b: Int}) { println("B") }
-- 3. In a sense, structural type dispatch is more dynamic than generic methods, and also serves a different purpose. In a generic method you can say either: a. I want something of any type; b. I want something of a type that is a subtype of A; c. I'll take something that is a supertype of B; d. I'll take something that has an implicit conversion to type C. All of these are much stricter than just "I want a type that has the method foo with the right signature". Also, structural type dispatch does use reflection, because structural refinements are erased at compile time and there is no common interface to call through.
I don't know much about multimethods, but looking at the wikipedia article, it seems that multimethods in Scala can be achieved using pattern matching. Example:
def collide(a: Collider, b: Collider) = (a, b) match {
  case (asteroid: Asteroid, spaceship: Spaceship) => // ...
  case (asteroid1: Asteroid, asteroid2: Asteroid) => // ...
  ...
Again, you could use structural type dispatch - def collide(a: {def processCollision()}), but that depends on a design decision (and I would create an interface in this example).
-- Flaviu Cipcigan
Structural data types aren't really all that useful. That's not to say they are useless, but they are definitely a niche thing.
For instance, you might want to write a generic test case for the "size" of something. You might do it like this:
def hasSize(o: { def size: Int }, s: Int): Boolean = {
  o.size == s
}
This can then be used with any object that implements the "size" method, no matter its class hierarchy.
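For example, a quick sketch of calls that rely only on the objects having a suitable size member (note that recent Scala 2 versions warn unless scala.language.reflectiveCalls is imported):
hasSize(List(1, 2, 3), 3)   // true
hasSize(Set("a", "b"), 2)   // true
hasSize(Map(1 -> "one"), 2) // false: this Map has size 1; unrelated class hierarchies are fine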
Now, they are NOT structural type dispatches. They are NOT related to dispatching, but to type definition.
And Scala is always an object-oriented language. You must call methods on objects. Function calls are actually "apply" method calls. Things like "println" are just members of objects imported into scope.
I think you asked what Scala does with a call on a structural type. It uses reflection. For example, consider
def repeat(x: { def quack(): Unit }, n: Int) {
  for (i <- 1 to n) x.quack()
}
The call x.quack() is compiled into a lookup of the quack method, and then a call, both using Java reflection. (You can verify that by looking at the byte codes with javap. Look for a class with a funny name like Example$$anonfun$repeat$1.)
If you think about it, it's not surprising. There is no way of making a regular method call because there is no common interface or superclass.
So, the other respondents were absolutely right that you don't want to do this unless you must. The cost is very high.