This could be a very silly question, but I am not able to understand the difference even after scratching my head for a long time.
I am going through the Scala tour page on generic classes: https://docs.scala-lang.org/tour/generic-classes.html
Here, it is said that
Note: subtyping of generic types is invariant. This means that if we
have a stack of characters of type Stack[Char] then it cannot be used
as an integer stack of type Stack[Int]. This would be unsound because
it would enable us to enter true integers into the character stack. To
conclude, Stack[A] is only a subtype of Stack[B] if and only if B = A.
I understand this completely that I cannot use Char where Int is required.
But my Stack class accepts only the type A (which is invariant). If I put an Apple, a Banana, or a Fruit into it, they are all accepted.
class Fruit
class Apple extends Fruit
class Banana extends Fruit
val stack2 = new Stack[Fruit]
stack2.push(new Fruit)
stack2.push(new Banana)
stack2.push(new Apple)
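For reference, the Stack[A] class from that page looks roughly like this (paraphrasing from the tour; the exact code there may differ slightly):

class Stack[A] {
  private var elements: List[A] = Nil
  def push(x: A): Unit = elements = x :: elements
  def peek: A = elements.head
  def pop(): A = {
    val currentTop = peek
    elements = elements.tail
    currentTop
  }
}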
But on the next page (https://docs.scala-lang.org/tour/variances.html), it says that the type parameter should be covariant (+A). How then does the Fruit example work, given that it adds subtypes even though the type parameter is invariant?
I hope my question is clear. Let me know if more info needs to be added.
This has nothing to do with variance at all.
You declare stack2 to be a Stack[Fruit], in other words, you declare that you are allowed to put anything into the Stack which is a Fruit. An Apple is a (subtype of) Fruit, ergo you are allowed to put an Apple into a Stack of Fruits.
This is called subtyping and has nothing to do with variance at all.
Let's take a step back: what does variance actually mean?
Well, variance means "change" (think of words like "to vary" or "variable"). co- means "together" (think of cooperation, co-education, co-location), contra- means "against" (think of contradiction, counter-intelligence, counter-insurgency, contraceptive), and in- means "unrelated" or "non-" (think of involuntary, inaccessible, intolerant).
So, we have "change" and that change can be "together", "against" or "unrelated". Well, in order to have related changes, we need two things which change, and they can either change together (i.e. when one thing changes, the other thing also changes "in the same direction"), they can change against each other (i.e. when one thing changes, the other thing changes "in the opposite direction"), or they can be unrelated (i.e. when one thing changes, the other doesn't.)
And that's all there is to the mathematical concept of covariance, contravariance, and invariance. All we need are two "things", some notion of "change", and this change needs to have some notion of "direction".
Now, that's of course very abstract. In this particular instance, we are talking about the context of subtyping and parametric polymorphism. How does this apply here?
Well, what are our two things? When we have a type constructor such as C[A], then our two things are:
The type argument A.
The constructed type which is the result of applying the type constructor C to A.
And what is our change with a sense of direction? It is subtyping!
So, the question now becomes: "When I change A to B (along one of the directions of subtyping, i.e. make it either a subtype or a supertype), how does C[A] relate to C[B]?"
And again, there are three possibilities:
Covariance: A <: B ⇒ C[A] <: C[B]: when A is a subtype of B then C[A] is a subtype of C[B], in other words, when I change A along the subtyping hierarchy, then C[A] changes with A in the same direction.
Contravariance: A <: B ⇒ C[A] :> C[B]: when A is a subtype of B, then C[A] is a supertype of C[B], in other words, when I change A along the subtyping hierarchy, then C[A] changes against A in the opposite direction.
Invariance: there is no subtyping relationship between C[A] and C[B], neither is a sub- nor supertype of the other.
There are two questions you might ask yourself now:
Why is this useful?
Which one is the right one?
This is useful for the same reason subtyping is useful. In fact, this is just subtyping. So, if you have a language which has both subtyping and parametric polymorphism, then it is important to know whether one type is a subtype of another type, and variance tells you whether or not a constructed type is a subtype of another constructed type of the same constructor based on the subtyping relationship between the type arguments.
Which one is the right one is trickier, but thankfully, we have a powerful tool for analyzing when a subtype is a subtype of another type: Barbara Liskov's Substitution Principle tells us that a type S is a subtype of type T IFF any instance of T can be replaced with an instance of S without changing the observable desirable properties of the program.
Let's take a simple generic type, a function. A function has two type parameters, one for the input, and one for the output. (We are keeping it simple here.) F[A, B] is a function that takes in an argument of type A and returns a result of type B.
And now we play through a couple of scenarios. I have some operation O that wants to work with a function from Fruits to Mammals (yeah, I know, exciting original examples!) The LSP says that I should also be able to pass in a subtype of that function, and everything should still work. Let's say, F were covariant in A. Then I should be able to pass in a function from Apples to Mammals as well. But what happens when O passes an Orange to F? That should be allowed! O was able to pass an Orange to F[Fruit, Mammal] because Orange is a subtype of Fruit. But, a function from Apples doesn't know how to deal with Oranges, so it blows up. The LSP says it should work though, which means that the only conclusion we can draw is that our assumption is wrong: F[Apple, Mammal] is not a subtype of F[Fruit, Mammal], in other words, F is not covariant in A.
What if it were contravariant? What if we pass an F[Food, Mammal] into O? Well, O again tries to pass an Orange and it works: Orange is a Food, so F[Food, Mammal] knows how to deal with Oranges. We can now conclude that functions are contravariant in their inputs, i.e. you can pass a function that takes a more general type as its input as a replacement for a function that takes a more restricted type and everything will work out fine.
Now let's look at the output of F. What would happen if F were contravariant in B just like it is in A? We pass an F[Fruit, Animal] to O. According to the LSP, if we are right and functions are contravariant in their output, nothing bad should happen. Unfortunately, O calls the getMilk method on the result of F, but F just returned it a Chicken. Oops. Ergo, functions can't be contravariant in their outputs.
OTOH, what happens if we pass an F[Fruit, Cow]? Everything still works! O calls getMilk on the returned cow, and it indeed gives milk. So, it looks like functions are covariant in their outputs.
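Here is a minimal Scala sketch of exactly this (the Food/Fruit/Mammal classes are made up for illustration; Scala's own Function1 is declared trait Function1[-T1, +R], i.e. contravariant in its input and covariant in its output):

class Food
class Fruit extends Food
class Apple extends Fruit
class Orange extends Fruit
class Mammal { def getMilk: String = "milk" }
class Cow extends Mammal

// O expects a function from Fruit to Mammal
def o(f: Fruit => Mammal): String = f(new Orange).getMilk

val fromFoodToCow: Food => Cow = _ => new Cow
o(fromFoodToCow)  // compiles: Food => Cow is a subtype of Fruit => Mammal

// val fromApple: Apple => Mammal = _ => new Cow
// o(fromApple)   // would not compile: Apple => Mammal is not a subtype of Fruit => Mammal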
And that is a general rule that applies to variance:
It is safe (in the sense of the LSP) to make C[A] covariant in A IFF A is used only as an output.
It is safe (in the sense of the LSP) to make C[A] contravariant in A IFF A is used only as an input.
If A can be used either as an input or as an output, then C[A] must be invariant in A, otherwise the result is not safe.
In fact, that's why C♯'s designers chose to re-use the already existing keywords in and out for variance annotations and Kotlin uses those same keywords.
So, for example, immutable collections can generally be covariant in their element type, since they don't allow you to put something into the collection (you can only construct a new collection with a potentially different type) but only to get elements out. So, if I want to get a list of numbers, and someone hands me a list of integers, I am fine.
On the other hand, think of an output stream (such as a Logger), where you can only put stuff in but not get it out. For this, it is safe to be contravariant. I.e. if I expect to be able to print strings, and someone hands me a printer that can print any object, then it can also print strings, and I am fine. Other examples are comparison functions (you only put generics in, the output is fixed to be a boolean or an enum or an integer or whatever design your particular language chooses). Or predicates, they only have generic inputs, the output is always fixed to be a boolean.
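A small sketch of the output-stream case (the Printer trait below is made up for illustration):

trait Printer[-A] {                 // contravariant: A appears only as an input
  def print(a: A): Unit
}

def logStrings(p: Printer[String]): Unit = p.print("hello")

val anyPrinter: Printer[Any] = new Printer[Any] {
  def print(a: Any): Unit = println(a)
}

logStrings(anyPrinter)              // fine: Printer[Any] <: Printer[String]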
But, for example, mutable collections, where you can both put stuff in and get stuff out, are only type-safe when they are invariant. There are a great many tutorials explaining in detail how to break Java's or C♯'s type-safety using their covariant mutable arrays, for example.
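A sketch of the problem (the Cell class below is hypothetical; the point is precisely that the Scala compiler rejects it):

class Fruit
class Apple extends Fruit
class Orange extends Fruit

// A covariant *mutable* cell is rejected:
// class Cell[+A](var value: A)     // error: covariant type A occurs in contravariant position
//
// If it were allowed, this would type-check and then blow up:
// val apples: Cell[Apple] = new Cell(new Apple)
// val fruits: Cell[Fruit] = apples        // Cell[Apple] <: Cell[Fruit] under covariance
// fruits.value = new Orange               // we just put an Orange into a Cell[Apple]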
Note, however that it is not always obvious whether a type is an input or an output once you get to more complex types. For example, when your type parameter is used as the upper or lower bound of an abstract type member, or when you have a method which takes a function that returns a function whose argument type is your type parameter.
Now, to come back to your question: you only have one stack. You never ask whether one stack is a subtype of another stack. Therefore, variance doesn't come into play in your example.
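Variance would only matter if you tried to use one Stack type where another Stack type is expected, e.g. (building on your Fruit/Apple classes):

val apples: Stack[Apple] = new Stack[Apple]
// val fruits: Stack[Fruit] = apples   // does not compile: Stack is invariant in A,
//                                     // so Stack[Apple] is not a subtype of Stack[Fruit]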
One of the non-obvious things about Scala type variance is that the annotation, +A and -A, actually tells us more about the wrapper than it does about the type parameter.
Let's say you have a box: class Box[T]
Because T is invariant, a Box[Apple] is unrelated to a Box[Fruit].
Now let's make it covariant: class Box[+T]
This does two things, it restricts the way the Box code can use T internally, but, more importantly, it changes the relationship between various instances of Boxes. In particular, the type Box[Apple] is now a sub-type of Box[Fruit], because Apple is a sub-type of Fruit, and we've instructed Box to vary its type relationships in the same manner (i.e. "co-") as its type parameter.
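A quick sketch of that relationship (the Fruit classes are made up):

class Fruit
class Apple extends Fruit

class Box[+T](val v: T)

val apples: Box[Apple] = new Box(new Apple)
val fruits: Box[Fruit] = apples   // compiles only because Box is covariant in T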
... it says that the type parameter should be covariant (+A)
Actually, that Stack code can't be made co- or contra-variant. As I mentioned, variance annotation adds some restrictions to the way the type parameter is used and that Stack code uses A in ways that are contrary to both co- and contra-variance.
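A sketch of why, and of the usual workaround (this is not the tour's Stack, just an illustration):

// A mutable Stack[+A] is rejected: push(x: A) uses A in an input (contravariant) position.
// An immutable variant can be made covariant by widening the element type on push instead:
class Stack[+A](elements: List[A] = Nil) {
  def push[B >: A](x: B): Stack[B] = new Stack(x :: elements)
  def peek: A = elements.head
}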
Variance is about the relationship between parameterized (complex) types, not about passing a subtype's object where a supertype is expected, which is plain subtyping.
Explained here:
https://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29
If you want a parameterized type (say, a list) of one type argument to be accepted where a parameterized type of another (parent or child) type argument is expected, then the idea of variance comes into effect. Your example is only about passing a child object in place of a parent, so it works.
https://coderwall.com/p/dlqvnq/simple-example-for-scala-covariance-contravariance-and-invariance
Please see the code there; it is easy to follow. Let me know if you don't get it.
Related
The cats documentation on FunctionK contains:
Thus natural transformation can be implemented in terms of FunctionK. This is why a parametric polymorphic function FunctionK[F, G] is sometimes referred as a natural transformation. However, they are two different concepts that are not isomorphic.
What's an example of a FunctionK which isn't a Natural Transformation (or vice versa)?
It's not clear to me whether this statement is implying F and G need to have Functor instances, for a FunctionK to be a Natural Transformation.
The Wikipedia article on Natural Transformations says that the Commutative Diagram can be written with a Contravariant Functor instead of a Covariant Functor, which to me implies that a Functor instance isn't required?
Alternatively, the statement could be referring to impure FunctionKs, although I'd kind of expect analogies to category theory breaking down in the presence of impurity to be a given, and not to need stating explicitly?
When you have some objects (in the CT sense) in one category and some objects in another category, and you are able to come up with a way to show that each object and each arrow between objects has a correspondence in the latter, then you can say that there is a functor from one to the other. In less strict language: you can say that there is a functor from A to B if you can find a "subgraph" in B that has the same shape as A.
In such case you can "zoom out": draw a point, call it object representing category A, draw another, call it object representing B, and draw an arrow and call it functor.
If there are many ways you can do it, you can have multiple functors. And with more categories you can combine these functors like you compose arrows, which in this "zoomed out" world look like normal objects and arrows, and you can form categories with them. If you can find a mapping between these functors (loosely, a "functor on functors"), then that is a natural transformation.
When it comes to functional programming you don't work in such a generic framework. Usually, you assume that:
object is a type
arrow is a function
to define a category you would almost always have to use a generic type, or else it would be too specific to be useful as a general-purpose library (an isomorphism between e.g. the possible transitions of one enum and the transition states of another enum could be a functor, but that wouldn't necessarily fit some generic interface)
since programming languages cannot let you define a generic mapping between two arbitrary types, the functors that you'll see will be almost exclusively Id ~> F: you can lift a function A => B into List[A] => List[B], Future[A] => Future[B] and so on easily (this proves the existence of an F[A] -> F[B] arrow for a given A -> B arrow, and if A and B are generic you have provided a proof for all arrows from Id), but try finding something more complex than "given A, add a wrapper around it to get F[A]" and it's a challenge
similarly, the only natural transformations you'll see will be from Id ~> F into Id ~> G, that is "given A, change the wrapper type from F[A] to G[A]", because you have a guarantee that there is the same A hidden somehow in both F and G and you don't have to deal with modifying it (only with modifying the wrapper)
The latter is exactly a FunctionK, or just a polymorphic function in Scala 3: [A] => F[A] => G[A]. It is a concept from type theory rather than from CT (although in mathematics a lot of concepts map into each other; here FunctionK maps to a natural transformation where objects represent types and arrows represent functions between them).
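A typical example of such an Id ~> F-style transformation, assuming cats is on the classpath:

import cats.arrow.FunctionK

// "change the wrapper from List to Option, leaving the A alone"
val headOption: FunctionK[List, Option] = new FunctionK[List, Option] {
  def apply[A](fa: List[A]): Option[A] = fa.headOption
}

headOption(List(1, 2, 3))   // Some(1)

// or, in Scala 3, as a plain polymorphic function:
// val headOpt: [A] => List[A] => Option[A] = [A] => (xs: List[A]) => xs.headOption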
Category theory isn't so restrictive. As a matter of fact, it isn't even rooted in computer science and was not created with functional programmers in mind. Let's create a non-FunctionK natural transformation which models some operation on "state machine" implementations:
an object is a state in a state machine of sorts (let's say an enum value)
an arrow is a legal transition from one state to another (let's say you can model it as a pair of enum values, and it somehow incorporates transitivity)
when you have 2 models of state machines with different enums, BUT you can take one model (one enum and its allowed pairs) and translate it to another model (another enum and its pairs), you have a functor: one that doesn't implement cats.Functor, but a functor nonetheless
let's say you would model it with some class Translate[Enum1, Enum2] {...} interface
let's say that this translation also extends the model with some functionality, so it's actually Translate[Enum1, Enum2 | ExtensionX]
now, build another extension Translate[Enum1, Enum2 | ExtensionY]
if you can somehow convert Translate[A, B | ExtensionX] into Translate[A, B | ExtensionY] for a whole category of enums (described as A and B) then this would be a natural transformation
notice that it would not fit FunctionK just like Translate doesn't fit Functor
It's a bit of a stretched example, and hardly anyone implementing an isomorphism between state machines would reach for a functor as a way to describe it, but it should show that a natural transformation is not a FunctionK, and why it's so rare to see functors and natural transformations other than Id ~> F lifts and (Id ~> F) ~> (Id ~> G) rewrappings.
PS. When it comes to Scala, you can also meet CT used this way: an object is a type, an arrow is a subtyping relationship, thus covariant functors and contravariant functors appear in type parameters. Since "A is a subtype of B" translates to "A can always be upcast to B", these 2 interpretations of functors will often be mashed together - something along the lines of "don't define map if you cannot upcast and don't define contramap if you cannot downcast the parameter" (see also the narrow and widen operations in Cats).
PS2. Authors might be defending against hardcore-CT fans: from the point of view of CT Kleisli and ReaderT are 2 different things, yet in Cats they are combined together into a single Kleisli type for convenience. I saw some people complaining about it so maybe authors of the documentation felt that they need this disclaimer.
You can write down FunctionK-instances for things that aren't functors at all, neither covariant nor contravariant.
For example, given
type F[X] = X => X
you could implement a FunctionK[F, F] by
import cats.arrow.FunctionK

new FunctionK[F, F] {
  def apply[X](f: F[X]): F[X] = f andThen f
}
Here, the F cannot be considered to be a functor, because X appears with both variances. Thus, you get something that's certainly a FunctionK, but the question whether it's a natural transformation isn't even valid to begin with.
Note that this example does not depend on whether you take the general CT-definition or the narrow FP-definition of what a "functor" is, the mapping F is simply not functorial.
I would like to use the cats-saga from that repository: https://github.com/VladKopanev/cats-saga
However I am stuck on that piece of code at OrderSagaCoordinator.scala L160:
def apply[F[_]: Sync: Concurrent: Timer: Sleep: Parallel](
paymentServiceClient: PaymentServiceClient[F],
loyaltyPointsServiceClient: LoyaltyPointsServiceClient[F],
orderServiceClient: OrderServiceClient[F],
sagaLogDao: SagaLogDao[F],
maxRequestTimeout: Int
): F[OrderSagaCoordinatorImpl[F]] =
What is F, and where does it come from? Can someone explain that piece of code?
Thanks
Edit: I know what a generic type is. However, in that case the apply method is called without specifying the concrete type, and I do not see any place where it comes from.
(for {
paymentService <- PaymentServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
loyaltyPoints <- LoyaltyPointsServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
orderService <- OrderServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
xa = Transactor.fromDriverManager[IO]("org.postgresql.Driver", "jdbc:postgresql:Saga", "postgres", "root")
logDao = new SagaLogDaoImpl(xa)
orderSEC <- OrderSagaCoordinatorImpl(paymentService, loyaltyPoints, orderService, logDao, sagaMaxReqTimeout)
// ...
Think of something concrete, say 'box of chocolates'
case class Box(v: Chocolate)
Now imagine we take away the chocolate, and make the box take any kind of element A, maybe box of coins, box of candy, box of cards, etc
case class Box[A](v: A)
Here we are polymorphic in the element type of the box. Many languages can express this level of polymorphism. But Scala takes this further. In the same way how we took away the chocolate, we can take away the box itself, essentially expressing a very abstract idea of "any kind of context of any type of elements"
trait Ctx[F[_]]
As another analogy consider the following
box of chocolate -> proper type -> case class Box(v: Chocolate)
box of _ -> type constructor of first order -> case class Box[A](v: A)
_ of _ -> type constructor of higher order -> trait Ctx[F[_]]
Now focus on _ of _. Here we have "something of something", which kind of seems like we have nothing. How can we do anything with this? This is where the idea of a type class comes into play. A type class can constrain a highly polymorphic shape such as F[_]
def apply[F[_]: Sync](...)
Here [F[_]: Sync] represents this constraint. It means that the method apply accepts any first-order type constructor for which there exists evidence that it satisfies the constraints of the type class Sync. Note that the type class Sync
trait Sync[F[_]]
is considered a higher order type constructor, whilst type parameter F[_] represents a first order type constructor. Similarly
F[_] : Sync : Concurrent
specifies that the type constructor F must satisfy not only the Sync constraints but also the constraints of the Concurrent type class, and so on. These techniques are sometimes referred to by the scary-sounding name
higher order type constructor polymorphism
and yet I am confident that most programmers have all the conceptual tools already present to understand it, because
if you have ever passed a function to a function, then you can work with the concept of higher order
if you have ever used a List, then you can work with the concept of type constructors
if you have ever written a method that uses the same implementation for both Integers and Doubles, then you can work with the concept of polymorphism
Evidence that a type constructor satisfies the constraints of a type class is provided using Scala's implicit mechanisms. IMO Scala 3 has significantly simplified the concept, so consider https://dotty.epfl.ch/docs/reference/contextual/type-classes.html
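To tie it back to the question: a context bound desugars into an implicit parameter, and F only becomes a concrete type at the call site. Here is a minimal sketch (makeThing is made up, not from cats-saga; it just shows the mechanics, assuming cats-effect is on the classpath):

import cats.effect.{IO, Sync}

// def apply[F[_]: Sync](...)  desugars roughly to  def apply[F[_]](...)(implicit ev: Sync[F])
def makeThing[F[_]: Sync](n: Int): F[Int] =
  Sync[F].delay(n * 2)

// At the call site the compiler needs a concrete F; picking IO makes it
// look up the Sync[IO] instance that cats-effect provides:
val program: IO[Int] = makeThing[IO](21)

In the snippet from the question, F is presumably fixed to IO the same way: the surrounding for-comprehension is built from IO values (note Transactor.fromDriverManager[IO]), so the compiler infers F = IO and resolves the Sync/Concurrent/Timer/Parallel instances for IO.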
I mean: why isn't defining trait Option[T] the same as defining trait Option[+T]?
It's easy to see that a val humanOpt: Option[Human] could point to an Option[Student] instance, just like a val human: Human can point to a Student instance.
Maybe this seems somewhat strange, but here is why I am considering it:
In Java, methods are polymorphic (virtual) by default, whereas in C++ you have to use the virtual keyword. I think that is an important point that simplifies OO in Java.
Scala uses parameterized (higher-order) types in many more use cases than Java does, such as Option, Try, or a Cache[T] we define ourselves.
Besides, it would still comply with the Liskov Substitution Principle.
I just want to know: why not make covariance the default behavior?
or a Cache[T] we define ourselves
If your Cache has a put(T) method, it can't be covariant, exactly because of the Liskov substitution principle.
val humanCache: Cache[Human] = new Cache[Student] // legal if cache is covariant
humanCache.put(new Professor) // oops, we put a Professor into a Cache[Student]
So making all types covariant by default simply won't work.
You could use variance inference instead: if the type can be covariant make it covariant, if it can be contravariant make it contravariant, if neither make it invariant. But then just adding a method could change the variance and break a lot of code using your type. By making all variance explicit, you are forced to notice when you do that.
I know Scala Nothing is the bottom type. When I see the API it extends from "Any" which is the top in the hierarchy.
Now since Scala does not support multiple inheritance, how can we say that it is the bottom type? In other words, it does not directly inherit from all the classes or traits like Seq, List, String, Int, and so on. If that is the case, how can we say that it is the bottom of all types?
What I meant is that if we are able to assign List[Nothing] (Nil) to List[String], as List is covariant in Scala, how is that possible when there is no direct correlation between the Nothing and String types? As we know, Nothing is a bottom type, but I am having a little difficulty seeing the relation between String and Nothing, as I stated in the above example.
tl;dr summary: Nothing is a subtype of every type because the spec says so. It cannot be explained from within the language. Every language (or at least almost every language) has some things at the very core that cannot be explained from within the language, e.g. java.lang.Object having no superclass even though every class has a superclass, since even if we don't write an extends clause, the class will implicitly get a superclass. Or the "bootstrap paradox" in Ruby, Object being an instance of Class, but Class being a subclass of Object, and thus Object being an indirect instance of itself (and even more directly: Class being an instance of Class).
I know Scala Nothing is the bottom type. When I see the API it extends from "Any" which is the top in the hierarchy.
Now since Scala does not support multiple inheritance, how can we say that it is the bottom type?
There are two possible answers to this.
The simple and short answer is: because the spec says so. The spec says Nothing is a subtype of all types, so Nothing is the subtype of all types. How? We don't care. The spec says it is so, so that's what it is. Let the compiler designers worry about how to represent this fact within their compiler. Do you care how Any is able to have no superclass? Do you care how def is represented internally in the compiler?
The slightly longer answer is: Yes, it's true, Nothing inherits from Any and only from Any. But! Inheritance is not the same thing as subtyping. In Scala, inheritance and subtyping are closely tied together, but they are not the same thing. The fact that Nothing can only inherit from one class does not mean that it cannot be the subtype of more than one type. A type is not the same thing as a class.
In fact, to be very specific, the spec does not even say that Nothing is a subtype of all types. It only says that Nothing conforms to all types.
In other words, it does not directly inherit from all the classes or traits like Seq, List, String, Int, and so on. If that is the case, how can we say that it is the bottom of all types?
Again, we can say that, because the spec says we can say that.
How can we say that def defines a method? Because the spec says so. How can we say that a b c means the same thing as a.b(c) and a b_: c means the same thing as { val __some_unforgeable_id__ = a; c.b_:(__some_unforgeable_id__) }? Because the spec says so. How can we say that "" is a string and '' is a character? Because the spec says so.
What I meant is that if we are able to assign List[Nothing] (Nil) to List[String], as List is covariant in Scala, how is that possible when there is no direct correlation between the Nothing and String types?
Yes, there is a direct correlation between the types Nothing and String. Nothing is a subtype of String because Nothing is a subtype of all types, including String.
As we know, Nothing is a bottom type, but I am having a little difficulty seeing the relation between String and Nothing, as I stated in the above example.
The relation between String and Nothing is that Nothing is a subtype of String. Why? Because the spec says so.
The compiler knows Nothing is a subtype of String the same way it knows 1 is an instance of Int and has a + method, even though if you look at the source code of the Scala standard library, the Int class is actually abstract and all its methods have no implementation.
Someone, somewhere wrote some code within the compiler that knows how to handle adding two numbers, even though those numbers are actually represented as JVM primitives and don't even exist inside the Scala object system. The same way, someone, somewhere wrote some code within the compiler that knows that Nothing is a subtype of all types even though this fact is not represented (and is not even representable) in the source code of Nothing.
Now since Scala does not support multiple inheritance
Scala does support multiple inheritance, using trait mixins. This is currently not commutative, i.e. the type A with B is not identical to B with A (this changes with Dotty), but it's still a form of multiple inheritance, and indeed one of Scala's strong points, as it solves the diamond problem through its linearisation rules.
By the way, Null is another bottom type, inherited from Java (which could also be said to have a Nothing bottom type because you can throw a runtime exception in any possible place).
I think you need to distinguish between class inheritance and type bounds. There is no contradiction in defining Nothing as a bottom type, although it does not "explicitly" inherit from any type you want, such as List. It's more like a capability, the capability to throw an exception.
if we are able to assign List[Nothing] (Nil) to List[String], as List is covariant in Scala, how is that possible when there is no direct correlation between the Nothing and String types
Yes, the idea of the bottom type is that Nothing is also (among many other things) a sub-type of String. So you can write
def foo: String = throw new Exception("No")
This only works because Nothing (the type of throwing an exception) is more specific than the declared return type String.
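The same reasoning covers the List case from the question: Nil has type List[Nothing], and because List is covariant and Nothing is a subtype of String, List[Nothing] is a subtype of List[String]:

val xs: List[String] = Nil          // Nil: List[Nothing], and List[Nothing] <: List[String]
val ys: List[String] = "a" :: Nil   // prepending a String gives a List[String]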
Inspired by Real-world examples of co- and contravariance in Scala I thought a better question would be:
When designing a library, are there a specific set of questions you should ask yourself when determining whether a type parameter should be covariant or contravariant? Or should you make everything invariant and then change as needed?
Well, simple, does it make sense? Think of Liskov substitution.
Co-variance
If A <: B, does it make sense to pass a C[A] where a C[B] is expected? If so, make it C[+T]. The classic example is the immutable List, where a List[A] can be passed to anything expecting a List[B], assuming A is a subtype of B.
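A quick sketch of that (the Fruit classes are made up for illustration):

class Fruit
class Apple extends Fruit

def count(fruits: List[Fruit]): Int = fruits.size

val apples: List[Apple] = List(new Apple, new Apple)
count(apples)   // fine: List[Apple] <: List[Fruit] because List is declared List[+A]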
Two counter examples:
Mutable sequences are invariant, because it is possible to have type safety violations otherwise (in fact, Java's co-variant Array is vulnerable to just such things, which is why it is invariant in Scala).
Immutable Set is invariant, even though its methods are very similar to those of an immutable Seq. The difference lies with contains, which is typed on sets and untyped (ie, accept Any) on sequences. So, even though it would otherwise be possible to make it co-variant, the desire for an increased type safety on a particular method led to a choice of invariance over co-variance.
Contra-variance
If A <: B, does it make sense to pass a C[B] where a C[A] is expected? If so, make it C[-T]. The classic would-be example is Ordering. While some unrelated technical problems prevent Ordering from being contra-variant, it is intuitive that anything that can order a super-class of A can also order A. It follows that Ordering[B], which orders all elements of type B, a supertype of A, can be passed to something expecting an Ordering[A].
While Scala's Ordering is not contra-variant, Scalaz's Order is contra-variant as expected. Another example from Scalaz is its Equal trait.
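Here is a sketch with a simplified, contravariant ordering trait (not the real scala.math.Ordering, which is invariant for the technical reasons mentioned above):

trait Ord[-A] { def compare(x: A, y: A): Int }

class Fruit { def weight: Int = 100 }
class Apple extends Fruit

val byWeight: Ord[Fruit] = new Ord[Fruit] {
  def compare(x: Fruit, y: Fruit): Int = x.weight - y.weight
}

def smallest(apples: List[Apple], ord: Ord[Apple]): Apple =
  apples.reduceLeft((a, b) => if (ord.compare(a, b) <= 0) a else b)

smallest(List(new Apple, new Apple), byWeight)   // an Ord[Fruit] works where an Ord[Apple] is expected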
Mixed Variance?
The most visible example of mixed variance in Scala is Function1 (and 2, 3, etc). It is contra-variant in the parameter it receives, and co-variant in what it returns. Note, though, that Function1 is what is used for a lot of closures, and closures are used in a lot of places, and these places are usually where Java uses (or would use) Single Abstract Method classes.
So, if you have a situation where a SAM class applies, that's likely a place for mixed contra-variance and co-variance.