I'm trying to write a dominoes game server, and while writing my core types (tiles and sets of dominoes) it occurred to me that including type information for the tile pips would let me write a much simpler function for creating chains of dominoes. I've started and abandoned this project several times because I couldn't figure this out, so I'm hoping someone has a simple, type-safe tile representation that leads to a simple domino-chain function. My current mental model is that a board in a dominoes game is just an initial tile plus one to three chains of dominoes, each beginning with, and matching the pips of, the initial tile.
Thanks so much in advance, and apologies for any imperfections in my question.
sealed case class DoubleSix[L <: Nat, R <: Nat](lPips: Int, rPips: Int) extends Tile[L, R]
object DoubleSixSet {
val zeroZero: DoubleSix[_0, _0] = DoubleSix(0, 0)
}
And an older attempt at the type-safe chaining function:
trait DominoChain[End] {
// val hasSpinner: Boolean
}
case object End extends DominoChain[Nothing] {
// val hasSpinner = false
}
case class Chain[D0, D1, X](head: Domino[D0, D1], tail: DominoChain[Domino[D1, X]]) extends DominoChain[Domino[D0, D1]] {
def hasSpinner =
    if (head.isDouble || tail.hasSpinner) true
else false
}
As you probably noticed, expressing dominoes in types is easy:
sealed trait One
sealed trait Two
sealed trait Three
sealed trait Four
sealed trait Five
sealed trait Six
sealed trait Domino[A, B] extends Product with Serializable
object Domino {
case object OneOne extends Domino[One, One]
case object OneTwo extends Domino[One, Two]
... // the other types of dominoes
}
If you want to have a linear chain it is also easy:
sealed trait Chain[A, B] extends Product with Serializable
object Chain {
case class One[A, B](domino: Domino[A, B]) extends Chain[A, B]
case class Prepend[A, B, C](head: Domino[A, B], tail: Chain[B, C]) extends Chain[A, C]
}
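A quick usage sketch of that (hypothetical: TwoThree and ThreeFour stand in for two of the elided case objects):
// the pips have to line up for the types to line up
val ok: Chain[One, Three] =
  Chain.Prepend(Domino.OneTwo, Chain.One(Domino.TwoThree))

// rejected at compile time: the tail starts with Three, not with the Two that OneTwo ends with
// Chain.Prepend(Domino.OneTwo, Chain.One(Domino.ThreeFour))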
Things get tricky if the chain is not linear, though. You might want to make a turn, and there is more than one way of doing this:
xxyy
yy
xx
xx
yy
xx
y
y
y
y
xx
and each of them would have to be expressed as a separate case. If you would like to avoid things like:
f <- f tile would have to be over or under bb tile
aabbc f
e c
edd
you would have to somehow detect such a case and prevent it. You have two options here:
don't express it in types: express it as a value and use a smart constructor which checks whether your move is valid and returns either the chain with the added tile or an error
express each turn as a different type, at the type level, and require some evidence in order to add a tile. This should be possible but much, much harder, and it would require you to know the exact type at compile time (so adding a tile dynamically on demand could be harder, as you would have to have the evidence prepared upfront for each move)
But in dominoes, besides turns, we can also have branches:
aab
bdd
cc
If you wanted to express this in types, you now have two heads you can prepend to (and one tail you can append to). During the game you could have more of them, so you would have to express both how many branches you have and which one you want to add a new tile to. Still possible, but it complicates your code even further.
You could, e.g., express the heads with some sort of HList (if you are using Shapeless) and use that representation to provide an implicit telling you which element of the HList you want to modify.
However, at this point you get very little benefit from type-level programming: you have to know your types ahead of time, you would have difficulty adding new tiles dynamically, and you would have to persist state in such a way that you could retrieve the exact type so that the type-level evidence would work...
Because of that, I suggest an approach which is still type-safe but much easier to work with: just use smart constructors:
type Position = UUID
sealed trait Chain extends Product with Serializable
object Chain {
// prevent user from accessing constructors and copy directly
sealed abstract case class One private (
domino: Domino,
position: Position
) extends Chain
sealed abstract case class PrependLine private (
domino: Domino,
position: Position,
chain: Chain
) extends Chain
sealed abstract case class Branch private (
chain1: Chain,
chain2: Chain
) extends Chain
def start(domino: Domino): Chain
// check if you can add domino at this position, recursively rewrite tree
// if needed to add it at the right branch or maybe even create a new branch
def prepend(domino: Domino, to: Chain, at: Position): Either[Error, Chain]
}
This still makes it impossible to create an "invalid" domino chain. At the same time, it is much easier to add new rules, expand functionality, and persist state between requests (you mentioned that you want to build a server).
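For illustration, here is one rough way those smart constructors could be fleshed out. Everything concrete below is an assumption of the sketch: the untyped Domino and Error types, the pip-matching rule, the private[DominoChains] visibility that lets the companion call the constructors, and the fact that branches are ignored entirely.
import java.util.UUID

object DominoChains {
  final case class Domino(left: Int, right: Int) // assumed untyped tile for this sketch
  final case class Error(message: String)
  type Position = UUID

  sealed trait Chain extends Product with Serializable
  object Chain {
    // constructors are private[DominoChains] so only the smart constructors below can build values
    sealed abstract case class One private[DominoChains] (
      domino: Domino,
      position: Position
    ) extends Chain
    sealed abstract case class PrependLine private[DominoChains] (
      domino: Domino,
      position: Position,
      chain: Chain
    ) extends Chain
    sealed abstract case class Branch private[DominoChains] (
      chain1: Chain,
      chain2: Chain
    ) extends Chain

    def start(domino: Domino): Chain =
      new One(domino, UUID.randomUUID()) {}

    // Naive version: only prepends to the head of a line whose position matches and
    // whose exposed pips line up; a real version would recurse into branches and
    // possibly create new ones.
    def prepend(domino: Domino, to: Chain, at: Position): Either[Error, Chain] =
      to match {
        case o: One if o.position == at && domino.right == o.domino.left =>
          Right(new PrependLine(domino, UUID.randomUUID(), o) {})
        case p: PrependLine if p.position == at && domino.right == p.domino.left =>
          Right(new PrependLine(domino, UUID.randomUUID(), p) {})
        case _ =>
          Left(Error(s"cannot prepend $domino at $at"))
      }
  }
}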
Consider the two pieces of code below. They accomplish the same goal: only those A[T]s where T extends C can be stored in the Container.
However, they use two different approaches to achieve this goal:
1) existentials
2) covariance
I prefer the first solution because then A remains simpler. Is there any reason why I would ever want to use the second solution (covariance)?
My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility. The second solution is also more complicated once I start to operate on A, because then I have to deal with all the stuff that comes with covariance.
What benefit would I get by using the second (more complicated, less natural) solution?
object Existentials extends App {
class A[T](var t:T)
class C
class C1 extends C
class C2 extends C
class Z
class Container[T]{
var t:T = _
}
  val c = new Container[A[_ <: C]]()
  c.t = new A(new C)
  // c.t = new Z // does not compile
  val r: A[_ <: C] = c.t
println(r)
}
object Cov extends App{
class A[+T](val t:T)
class C
class C1 extends C
class C2 extends C
class Z
class Container[T]{
var t:T = _
}
  val c: Container[A[C]] = new Container[A[C]]()
  c.t = new A(new C)
  // c.t = new A(new Z) // does not compile
  val r: A[C] = c.t
println(r)
}
EDIT (in response to Alexey's answer):
Commenting on:
"My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility."
If I have class A[T](var t: T), that means I can store only A[T]s, and not A[S] where S <: T, in a container, in any container.
However, if I have class A[+T](val t: T), then I can store A[S] where S <: T in any container as well.
So by declaring A to be either invariant or covariant, I decide what kind of A[S] can be stored in a container (as shown above), and this decision takes place at the declaration of A.
However, I think this decision should instead take place at the declaration of the container, because it is container-specific what is allowed to go into that container: only A[T]s, or also A[S] where S <: T.
In other words, changing the variance in A[T] has global effects, while changing the type parameter of a container from A[T] to A[_ <: S] has a well-defined local effect on the container itself. So the principle that "changes should have local effects" favors the existential solution here as well.
In the first case A is simpler, but in the second case its clients are. Since there is normally more than one place where you use A, this is often a worthwhile tradeoff. Your own code demonstrates it: when you need to write A[_ <: C] in the first case (in two places), you can just use A[C] in the second one.
In addition, in the first case you can write just A[C] where A[_ <: C] is really desired. Let's say you have a method
def foo(x: A[C]): C = x.t
Now you can't call foo(y) with y: A[C1] even though it would make sense: y.t does have type C.
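A small sketch of the difference (AInv and ACov are just renamed copies of the question's two A definitions, so both can sit side by side):
class C
class C1 extends C

class AInv[T](var t: T)  // invariant, as in the Existentials version
class ACov[+T](val t: T) // covariant, as in the Cov version

def fooInv(x: AInv[C]): C = x.t
def fooCov(x: ACov[C]): C = x.t

// fooInv(new AInv(new C1))         // rejected: AInv[C1] is unrelated to AInv[C]
val ok: C = fooCov(new ACov(new C1)) // accepted: ACov[C1] <: ACov[C]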
When this happens in your own code, it can be fixed, but what about third-party code?
Of course, this applies to the standard library types as well: if types like Option and List weren't covariant, either the signatures of all methods taking or returning them would have to be more complex, or many programs which are currently valid and make perfect sense would break.
it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility.
Variance isn't about what you can store in a container; it is about when A[B] is a subtype of A[C]. This argument is a bit like saying that you shouldn't have extends at all: otherwise class Apple extends Fruit allows you to store an Apple in Container[Fruit], and deciding that is Container's responsibility.
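To spell the analogy out (a small sketch reusing the Container shape from the question):
class Fruit
class Apple extends Fruit

class Container[T] { var t: T = _ }

val fruits = new Container[Fruit]
fruits.t = new Apple // fine: Apple <: Fruit; plain subtyping, not variance, is what allows this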
I am working on a Slick project and I am trying to make my database layer easily swappable between different profiles in order to run tests against an in-memory database. This question is inspired by this problem, but it doesn't have anything to do with Slick itself.
I don't have a great deal of experience with dependent types; in my case I have the following trait that I use to abstract away some types from the database:
trait Types {
type A <: SomeType
type B <: SomeOtherType
val bTag: ClassTag[B]
}
Then I have another trait which is basically a slice of my (faux) cake pattern:
trait BaseComponent {
type ComponentTypes <: Types
val a: Types#A
implicit val bTag: ClassTag[Types#B]
}
Then I have an actual implementation of my component that can be seen as follows:
trait DefaultTypes extends Types {
type A = SomeConcreteType
type B = SomeOtherConcreteType
val bTag = implicitly[ClassTag[B]]
}
trait DefaultBaseComponent extends BaseComponent {
type ComponentTypes = DefaultTypes
val ct = new ComponentTypes {}
implicit val bTag = ct.bTag
}
I need the tag because later on a service will need it (in my actual implementation I use this type to abstract over the different types of exceptions thrown by different DB libraries); I am quite sure that there is a much better way to do what I am trying to do.
If I do not instantiate the ComponentTypes trait to get the tag, and instead move the implicit-summoning code into the DefaultBaseComponent, it conjures up a null in place of the ClassTag. I need a way to refer to the actual types that I am using (the different A and B that I have in my different environments), and I need to do it in other components without knowing which actual types they are.
My solution works, compiles and passes all the tests I wrote for it; can anyone help me make it better?
Thank you!
Your example is a bit unclear with all these Defaults and Components - maybe a more concrete example (e.g. DatabaseService / MysqlDatabaseService) would make it clearer?
You need to pass the ClassTag around wherever it's abstract - you can only "summon" one when you have a concrete type. You might like to package up the notion of a value and its tag:
import scala.reflect.ClassTag

trait TaggedValue[A] { val a: A; val ct: ClassTag[A] }
object TaggedValue {
def apply[A: ClassTag](a1: A) =
new TaggedValue[A] {
val a = a1
val ct = implicitly[ClassTag[A]]
}
}
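Hypothetical usage, just to show that the tag is captured once, at a concrete call site:
val tagged = TaggedValue("some concrete value") // A is inferred as String, so a ClassTag[String] is summoned here
val stringTag: ClassTag[String] = tagged.ct     // and it now travels together with the value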
But this is just a convenience thing. You could also turn some of your traits into abstract classes, allowing you to use [A: ClassTag] to pass the tags implicitly, but obviously this affects which classes you can multiply inherit.
If you're hitting nulls, that sounds like a trait initialization order problem, though without a more specific error message it's hard to help. You might be able to resolve it by replacing some of your vals with defs, or by using early initializers.
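For what it's worth, here is a contrived sketch (not your actual code) of how that kind of null shows up and how switching to def/lazy val sidesteps it:
import scala.reflect.ClassTag

trait HasTag {
  val source: ClassTag[String]
  val copied: ClassTag[String] = source // runs before `source` is assigned in the subclass
}
object Broken extends HasTag {
  val source: ClassTag[String] = implicitly[ClassTag[String]]
}
// Broken.copied == null

trait HasTagFixed {
  def source: ClassTag[String]               // abstract def: looked up on demand
  lazy val copied: ClassTag[String] = source // deferred until first access
}
object Fixed extends HasTagFixed {
  val source: ClassTag[String] = implicitly[ClassTag[String]]
}
// Fixed.copied is the real tag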
I have a simple class Feature, currently implemented as a case class:
case class Feature(value :String)
There are multiple operations decorating a feature with different properties. For example, there is a function that counts the number of appearances of the feature, so I might need a CountedFeature. Besides counting, I might also need a WeightedFeature, an IndexedFeature and so on.
My intuition says this is suitable for traits, so I defined the following traits:
trait Counted {def count :Long}
trait Weighted {def weight :Double}
trait Indexed {def index :Int}
Two issues pop up with this:
1. Do I need to create a concrete class implementing each combination of traits (e.g. a CountedWeightedFeature, a CountedIndexedFeature and so on), or is there some way to avoid it? If I move on to more decorations it will be impossible to maintain classes for all the combinations.
2. I want to design a function that weights features based on their count. Its signature should look something like:
def computeWeightsByCount[T <: Feature with Counted](features: List[T]): List[T with Weighted] = {...}
T here may be Indexed or not, so this function should have some way to take a class and instantiate a new class that has all the traits of the original class stacked inside, plus an additional one.
Is there some elegant way to do this in Scala, or should I totally rethink this design?
The design looks fine to me, except extending case classes is not recommended. A brief summary of the reasons why can be found here: https://stackoverflow.com/a/12705634/2186890
So you might want to rewrite Feature as something like this:
trait Feature { def value: String }
Now you can define case classes for pattern matching etc. like this:
case class CountedFeature(value: String, count: Long) extends Feature with Counted
There is no easy way to avoid a combinatorial explosion of case classes like this, but you can use types such as Feature with Counted wherever you like. Keep in mind that you can easily create objects matching the type Feature with Counted on the fly. For instance:
val x: Feature with Counted = new Feature with Counted { val value = ""; val count = 0L }
Implementing computeWeightsByCount like you want is a little tricky, because there is no easy way to build a T with Weighted without knowing more about type T. But it can be done with implicit methods. Essentially, we need to have a defined path for generating a T with Weighted from a T for every Feature with Counted that you want to apply this method to. For instance, we start with this:
trait Feature { def value: String }
trait Counted { def count: Long }
trait Weighted { def weight: Double }
trait Indexed { def index: Int }
We want to define computeWeightsByCount like you did in your question, but also taking an implicit method that takes a T and a weight, and produces a T with Weighted:
def computeWeightsByCount[
T <: Feature with Counted](
features: List[T])(
implicit weighted: (T, Double) => T with Weighted
): List[T with Weighted] = {
  def weight(fc: Feature with Counted): Double = 0.0d // placeholder: the real count-based weighting would go here
features map { f => weighted(f, weight(f)) }
}
Now we need to define an implicit method to produce weighted features from the input features. Let's start with getting a Feature with Counted with Weighted from a Feature with Counted. We'll put it in the companion object for Feature:
object Feature {
implicit def weight(fc: Feature with Counted, weight: Double): Feature with Counted with Weighted = {
case class FCW(value: String, count: Long, weight: Double) extends Feature with Counted with Weighted
FCW(fc.value, fc.count, weight)
}
}
We can use it like so:
case class FC(value: String, count: Long) extends Feature with Counted
val fcs: List[Feature with Counted] = List(FC("0", 0L), FC("1", 1L))
val fcws: List[Feature with Counted with Weighted] = computeWeightsByCount[Feature with Counted](fcs)
For any type that you want to compute weights by count for, you need to define a similar implicit method.
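For instance, a sketch of the Indexed case from the question, mirroring the weight method above (FCI and FCIW are made-up names):
case class FCI(value: String, count: Long, index: Int) extends Feature with Counted with Indexed

implicit def weightIndexed(
    fci: Feature with Counted with Indexed,
    weight: Double
): Feature with Counted with Indexed with Weighted = {
  case class FCIW(value: String, count: Long, index: Int, weight: Double)
    extends Feature with Counted with Indexed with Weighted
  FCIW(fci.value, fci.count, fci.index, weight)
}

val fcis: List[Feature with Counted with Indexed] = List(FCI("0", 0L, 0))
val fciws: List[Feature with Counted with Indexed with Weighted] =
  computeWeightsByCount[Feature with Counted with Indexed](fcis)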
Admittedly, this is far from a beautiful solution. So yes, you are right, you may want to rethink the design. The advantage to this approach, however, is that any further extensions to the Feature "hierarchy" can be made without needing to make any changes to computeWeightsByCount. Whoever writes the new trait can provide the appropriate implicit methods as well.
There are two ways of defining a method for two different classes inheriting the same trait in Scala.
sealed trait Z { def minus: String }
case class A() extends Z { def minus = "a" }
case class B() extends Z { def minus = "b" }
The alternative is the following:
sealed trait Z {
  def minus: String = this match {
    case A() => "a"
    case B() => "b"
  }
}
case class A() extends Z
case class B() extends Z
The first approach repeats the method name, whereas the second repeats the class names.
I think the first approach is the better one because the code for each class is kept separate. However, I often find myself using the second one for complicated methods, because additional arguments can then be added very easily, for example like this:
sealed trait Z {
  def minus(word: Boolean = false): String = this match {
    case A() => if (word) "ant" else "a"
    case B() => if (word) "boat" else "b"
  }
}
case class A() extends Z
case class B() extends Z
What are the other differences between these practices? Are there any bugs waiting for me if I choose the second approach?
EDIT:
I was quoted the open/closed principle, but sometimes I need to modify not only the output of the functions depending on new case classes, but also the input, because of code refactoring. Is there a better pattern than the first one? If I want to add the previously mentioned functionality to the first example, this yields ugly code where the input is repeated:
sealed trait Z { def minus(word: Boolean): String ; def minus = minus(false) }
case class A() extends Z { def minus(word: Boolean) = if(word) "ant" else "a" }
case class B() extends Z { def minus(word: Boolean) = if(word) "boat" else "b" }
I would choose the first one.
Why? Merely to keep the Open/Closed Principle.
Indeed, if you want to add another subclass, let's say case class C, you'll have to modify the supertrait/superclass to insert the new condition... ugly.
Your scenario has a Java analogue: the template/strategy pattern versus conditionals.
UPDATE:
In your last scenario, you can't avoid the "duplication" of the input. Indeed, parameter types in Scala aren't inferable.
It is still better to have cohesive methods than to blend everything into one method taking as many parameters as the union of the methods expects.
Just imagine ten conditions in your supertrait method. What if you inadvertently change the behavior of one of them? Each change would be risky, and the supertrait's unit tests would have to run every time you modify it...
Moreover, inadvertently changing an input parameter (not a BEHAVIOR) is not "dangerous" at all. Why? Because the compiler will tell you that a parameter/parameter type isn't relevant any more.
And if you want to change it and do the same for every subclass... ask your IDE; it loves refactoring things like this in one click.
As this link explains:
Why open-closed principle matters:
No unit testing required.
No need to understand the sourcecode from an important and huge class.
Since the drawing code is moved to the concrete subclasses, there is a reduced risk of affecting old functionality when new functionality is added.
UPDATE 2:
Here is a sample avoiding the input duplication, fitting your expectation:
sealed trait Z {
def minus(word: Boolean): String = if(word) whenWord else whenNotWord
def whenWord: String
def whenNotWord: String
}
case class A() extends Z { def whenWord = "ant"; def whenNotWord = "a" }
case class B() extends Z { def whenWord = "boat"; def whenNotWord = "b" }
Thanks type inference :)
Personally, I'd stay away from the second approach. Each time you add a new subclass of Z you have to touch the shared minus method, potentially putting at risk the behavior tied to the existing implementations. With the first approach, adding a new subclass has no potential side effects on the existing structures. There is a bit of the Open/Closed Principle in here, and your second approach might violate it.
The Open/Closed Principle can be violated with both approaches; they are orthogonal to each other. The first one lets you easily add a new type and implement the required methods; it breaks the Open/Closed Principle if you need to add a new method to the hierarchy, or refactor method signatures to the point that client code breaks. That is, after all, the reason why default methods were added to Java 8 interfaces: so that old APIs can be extended without requiring client code to adapt.
This approach is typical for OOP.
The second approach is more typical of FP. In this case it is easy to add methods, but it is hard to add a new type (that is what breaks O/C here). It is a good approach for closed hierarchies; the typical example is Algebraic Data Types (ADTs). A standardized protocol which is not meant to be extended by clients could be a candidate.
Languages struggle to let you design APIs that have both benefits: making it easy to add new types as well as new methods. This is known as the Expression Problem. Scala offers the typeclass pattern to address it, which allows you to add functionality to existing types in an ad hoc and selective manner.
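For illustration, a minimal typeclass sketch of the minus example (not from the original post; Z, A and B are redeclared without the method, so the behaviour lives entirely in the instances):
sealed trait Z
case class A() extends Z
case class B() extends Z

trait Minus[T] {
  def minus(t: T): String
}
object Minus {
  implicit val minusA: Minus[A] = new Minus[A] { def minus(t: A) = "a" }
  implicit val minusB: Minus[B] = new Minus[B] { def minus(t: B) = "b" }
}

def minus[T](t: T)(implicit m: Minus[T]): String = m.minus(t)

// minus(A()) == "a"
// minus(B()) == "b"
// A new operation means a new typeclass; a new subtype means new instances.
// Neither requires touching Z, A, B or the existing typeclasses.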
Which one is better depends on your use case.
Starting in Scala 3, you have the possibility to use trait parameters (just like classes have parameters), which simplifies things quite a lot in this case:
trait Z(x: String) { def minus: String = x }
case class A() extends Z("a")
case class B() extends Z("b")
A().minus // "a"
B().minus // "b"
I understand covariance and contravariance in Scala. Covariance has many applications in the real world, but I cannot think of any applications of contravariance, except the same old examples involving functions.
Can someone shed some light on real-world examples of contravariance use?
In my opinion, the two simplest examples after Function are ordering and equality. However, the first is not contravariant in Scala's standard library, and the second doesn't even exist in it. So I'm going to use the Scalaz equivalents: Order and Equal.
Next, I need some class hierarchy, preferably a familiar one for which, of course, both concepts above make sense. If Scala had a Number superclass of all numeric types, that would have been perfect. Unfortunately, it has no such thing.
So I'm going to try to make the examples with collections. To keep it simple, let's just consider Seq[Int] and List[Int]. It should be clear that List[Int] is a subtype of Seq[Int], i.e., List[Int] <: Seq[Int].
So, what can we do with it? First, let's write something that compares two lists:
def smaller(a: List[Int], b: List[Int])(implicit ord: Order[List[Int]]) =
if (ord.order(a,b) == LT) a else b
Now I'm going to write an implicit Order for Seq[Int]:
implicit val seqOrder = new Order[Seq[Int]] {
def order(a: Seq[Int], b: Seq[Int]) =
if (a.size < b.size) LT
else if (b.size < a.size) GT
else EQ
}
With these definitions, I can now do something like this:
scala> smaller(List(1), List(1, 2, 3))
res0: List[Int] = List(1)
Note that I'm asking for an Order[List[Int]], but I'm passing an Order[Seq[Int]]. This means that Order[Seq[Int]] <: Order[List[Int]]. Given that Seq[Int] >: List[Int], this is only possible because of contravariance.
The next question is: does it make any sense?
Let's consider smaller again. I want to compare two lists of integers. Naturally, anything that compares two lists is acceptable, but what's the logic of something that compares two Seq[Int] being acceptable?
Note in the definition of seqOrder how the things being compared become parameters to it. Obviously, a List[Int] can be a parameter to something expecting a Seq[Int]. From that it follows that something that compares Seq[Int] is acceptable in place of something that compares List[Int]: they can both be used with the same parameters.
What about the reverse? Let's say I had a method that only compared :: (list's cons), which, together with Nil, is a subtype of List. I obviously could not use this, because smaller might well receive a Nil to compare. It follows that an Order[::[Int]] cannot be used instead of Order[List[Int]].
Let's proceed to equality, and write a method for it:
def equalLists(a: List[Int], b: List[Int])(implicit eq: Equal[List[Int]]) = eq.equal(a, b)
Because Order extends Equal, I can use it with the same implicit above:
scala> equalLists(List(4, 5, 6), List(1, 2, 3)) // we are comparing lengths!
res3: Boolean = true
The logic here is the same. Anything that can tell whether two Seq[Int] are the same can, obviously, also tell whether two List[Int] are the same. From that, it follows that Equal[Seq[Int]] <: Equal[List[Int]], which is true because Equal is contravariant.
This example is from the last project I was working on. Say you have a type class PrettyPrinter[A] that provides logic for pretty-printing objects of type A. Now, if B >: A (i.e. B is a superclass of A) and you know how to pretty-print B (i.e. you have an instance of PrettyPrinter[B] available), then you can use the same logic to pretty-print A. In other words, B >: A implies PrettyPrinter[B] <: PrettyPrinter[A]. So you can declare PrettyPrinter[A] contravariant on A.
scala> trait Animal
defined trait Animal
scala> case class Dog(name: String) extends Animal
defined class Dog
scala> trait PrettyPrinter[-A] {
| def pprint(a: A): String
| }
defined trait PrettyPrinter
scala> def pprint[A](a: A)(implicit p: PrettyPrinter[A]) = p.pprint(a)
pprint: [A](a: A)(implicit p: PrettyPrinter[A])String
scala> implicit object AnimalPrettyPrinter extends PrettyPrinter[Animal] {
| def pprint(a: Animal) = "[Animal : %s]" format (a)
| }
defined module AnimalPrettyPrinter
scala> pprint(Dog("Tom"))
res159: String = [Animal : Dog(Tom)]
Some other examples would be the Ordering type class from the Scala standard library, and the Equal, Show (isomorphic to PrettyPrinter above) and Resource type classes from Scalaz, etc.
Edit:
As Daniel pointed out, Scala's Ordering isn't contravariant. (I really don't know why.) You may instead consider scalaz.Order which is intended for the same purpose as scala.Ordering but is contravariant on its type parameter.
Addendum:
The supertype-subtype relationship is but one kind of relationship that can exist between two types; many others are possible. Let's consider two types A and B related by a function f: B => A (i.e. an arbitrary relation). A data type F[_] is said to be a contravariant functor if you can define an operation contramap for it that lifts a function of type B => A to a function of type F[A] => F[B].
The following laws need to be satisfied:
x.contramap(identity) == x
x.contramap(f).contramap(g) == x.contramap(f compose g)
All of the data types discussed above (Show, Equal etc.) are contravariant functors. This property lets us do useful things such as the one illustrated below:
Suppose you have a class Candidate defined as:
case class Candidate(name: String, age: Int)
You need an Order[Candidate] which orders candidates by their age. Now you know that there is an Order[Int] instance available. You can obtain an Order[Candidate] instance from that with the contramap operation:
val byAgeOrder: Order[Candidate] =
implicitly[Order[Int]] contramap ((_: Candidate).age)
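As a sketch of what contramap looks like when written out by hand (reusing the PrettyPrinter trait and the Candidate class from above rather than Scalaz's actual encoding):
trait PrettyPrinter[-A] { self =>
  def pprint(a: A): String

  // lift f: B => A into PrettyPrinter[A] => PrettyPrinter[B]
  def contramap[B](f: B => A): PrettyPrinter[B] = new PrettyPrinter[B] {
    def pprint(b: B): String = self.pprint(f(b))
  }
}

val intPrinter: PrettyPrinter[Int] = new PrettyPrinter[Int] {
  def pprint(a: Int): String = a.toString
}

// print a Candidate by first turning it into the Int we already know how to print
val byAgePrinter: PrettyPrinter[Candidate] = intPrinter.contramap[Candidate](_.age)
// byAgePrinter.pprint(Candidate("Jane", 42)) == "42"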
Here is an example based on a real-world event-driven software system. Such a system is built around broad categories of events, like events related to the functioning of the system (system events), events generated by user actions (user events), and so on.
A possible event hierarchy:
trait Event
trait UserEvent extends Event
trait SystemEvent extends Event
trait ApplicationEvent extends SystemEvent
trait ErrorEvent extends ApplicationEvent
Now the programmers working on the event-driven system need a way to register and process the events generated in the system. They create a trait, Sink, that is used to mark components that need to be notified when an event has been fired.
trait Sink[-In] {
def notify(o: In)
}
As a consequence of marking the type parameter with the - symbol, the Sink type became contravariant.
A possible way to notify interested parties that an event happened is to write a method and to pass it the corresponding event. This method will hypothetically do some processing and then it will take care of notifying the event sink:
def appEventFired(e: ApplicationEvent, s: Sink[ApplicationEvent]): Unit = {
// do some processing related to the event
// notify the event sink
s.notify(e)
}
def errorEventFired(e: ErrorEvent, s: Sink[ErrorEvent]): Unit = {
// do some processing related to the event
// notify the event sink
s.notify(e)
}
A couple of hypothetical Sink implementations.
trait SystemEventSink extends Sink[SystemEvent]
val ses = new SystemEventSink {
override def notify(o: SystemEvent): Unit = ???
}
trait GenericEventSink extends Sink[Event]
val ges = new GenericEventSink {
override def notify(o: Event): Unit = ???
}
The following method calls are accepted by the compiler:
appEventFired(new ApplicationEvent {}, ses)
errorEventFired(new ErrorEvent {}, ges)
appEventFired(new ApplicationEvent {}, ges)
Looking at the series of calls you notice that it is possible to call a method expecting a Sink[ApplicationEvent] with a Sink[SystemEvent] and even with a Sink[Event]. Also, you can call the method expecting a Sink[ErrorEvent] with a Sink[Event].
By replacing invariance with a contravariance constraint, a Sink[SystemEvent] becomes a subtype of Sink[ApplicationEvent]. Therefore, contravariance can also be thought of as a ‘widening’ relationship, since types are ‘widened’ from more specific to more generic.
Conclusion
This example has been described in a series of articles about variance found on my blog
In the end, I think it helps to also understand the theory behind it...
A short answer that might help people who were as confused as I was and didn't want to read the long-winded examples above:
Imagine you have two classes: Animal, and Cat, which extends Animal. Now imagine that you have a type Printer[Cat] that contains the functionality for printing Cats, and you have a method like this:
def print(p: Printer[Cat], cat: Cat) = p.print(cat)
But the thing is, since a Cat is an Animal, a Printer[Animal] should also be able to print Cats, right?
Well, if Printer[T] were defined as Printer[-T], i.e. contravariant, then we could pass a Printer[Animal] to the print function above and use its functionality to print Cats.
This is why contravariance exists. Another example, from C#, is the IComparer<T> interface, which is contravariant as well. Why? Because we should be able to use Animal comparers to compare Cats, too.
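A compilable version of that sketch (Printer is just the hypothetical type from this answer, not a library class):
class Animal
class Cat extends Animal

trait Printer[-T] {
  def print(t: T): String
}

def print(p: Printer[Cat], cat: Cat): String = p.print(cat)

val animalPrinter: Printer[Animal] = new Printer[Animal] {
  def print(t: Animal): String = s"some animal: $t"
}

// Printer[Animal] <: Printer[Cat] because of -T, so this compiles:
// print(animalPrinter, new Cat)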