I would like to use the cats-saga library from this repository: https://github.com/VladKopanev/cats-saga
However, I am stuck on this piece of code at OrderSagaCoordinator.scala L160:
def apply[F[_]: Sync: Concurrent: Timer: Sleep: Parallel](
    paymentServiceClient: PaymentServiceClient[F],
    loyaltyPointsServiceClient: LoyaltyPointsServiceClient[F],
    orderServiceClient: OrderServiceClient[F],
    sagaLogDao: SagaLogDao[F],
    maxRequestTimeout: Int
): F[OrderSagaCoordinatorImpl[F]] =
What is F, where does it come from, and can someone explain this piece of code?
Thanks
Edit: I know what a generic type is. However, in this case the apply method is called without specifying a concrete type, and I do not see anywhere that it comes from:
(for {
  paymentService <- PaymentServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  loyaltyPoints  <- LoyaltyPointsServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  orderService   <- OrderServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  xa = Transactor.fromDriverManager[IO]("org.postgresql.Driver", "jdbc:postgresql:Saga", "postgres", "root")
  logDao = new SagaLogDaoImpl(xa)
  orderSEC <- OrderSagaCoordinatorImpl(paymentService, loyaltyPoints, orderService, logDao, sagaMaxReqTimeout)
  // ...
Think of something concrete, say 'box of chocolates'
case class Box(v: Chocolate)
Now imagine we take away the chocolate and make the box take any kind of element A: maybe a box of coins, a box of candy, a box of cards, etc.
case class Box[A](v: A)
Here we are polymorphic in the element type of the box. Many languages can express this level of polymorphism. But Scala takes this further: in the same way that we took away the chocolate, we can take away the box itself, essentially expressing the very abstract idea of "any kind of context of any type of elements"
trait Ctx[F[_]]
As another analogy, consider the following:
box of chocolate -> proper type -> case class Box(v: Chocolate)
box of _ -> type constructor of first order -> case class Box[A](v: A)
_ of _ -> type constructor of higher order -> trait Ctx[F[_]]
Now focus on _ of _. Here we have "something of something", which kind of seems like we have nothing. How can we do anything with this? This is where the idea of a type class comes into play. A type class can constrain a highly polymorphic shape such as F[_]
def apply[F[_]: Sync](...)
Here [F[_]: Sync] represents this constraint. It means that the method apply accepts any first-order type constructor for which there exists evidence that it satisfies the constraints of the type class Sync. Note that the type class Sync
trait Sync[F[_]]
is itself a higher-order type constructor, whilst the type parameter F[_] represents a first-order type constructor. Similarly,
F[_] : Sync : Concurrent
specifies that the type constructor F must satisfy not only the Sync constraints but also those of the Concurrent type class, and so on. These techniques are sometimes referred to by the scary-sounding name
higher order type constructor polymorphism
and yet I am confident that most programmers already have all the conceptual tools needed to understand it, because:
if you ever passed a function to a function, then you can work with the concept of higher order
if you ever used a List, then you can work with the concept of type constructors
if you ever wrote a method that uses the same implementation for both Integers and Doubles, then you can work with the concept of polymorphism
Evidence that a type constructor satisfies the constraints of a type class is provided using Scala's implicit mechanisms. IMO Scala 3 has significantly simplified the concept, so consider https://dotty.epfl.ch/docs/reference/contextual/type-classes.html
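To see how the F in def apply[F[_]: Sync: ...](...) gets fixed, here is a minimal sketch using a made-up Capability type class standing in for cats-effect's Sync (the names Capability, tryCapability and program are mine, purely for illustration):
import scala.util.Try

trait Capability[F[_]] {                 // higher-order: it constrains a type constructor
  def delay[A](a: => A): F[A]
}

object Capability {
  // evidence that the type constructor Try satisfies the constraint
  implicit val tryCapability: Capability[Try] = new Capability[Try] {
    def delay[A](a: => A): Try[A] = Try(a)
  }
}

// [F[_]: Capability] desugars to an extra (implicit ev: Capability[F]) parameter list
def program[F[_]: Capability](n: Int): F[Int] =
  implicitly[Capability[F]].delay(n * 2)

val a = program[Try](21)        // F fixed explicitly to Try
val b: Try[Int] = program(21)   // F inferred from the expected type; the implicit instance is found automatically
In the snippet from the question, F ends up being cats.effect.IO for the same reason: the surrounding for-comprehension is built from IO values (note Transactor.fromDriverManager[IO]), and the Sync[IO], Concurrent[IO], Timer[IO], etc. instances are supplied implicitly by cats-effect (typically via IOApp).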
This could be a very silly question, but I am not able to understand the difference even after scratching my head for a long time.
I am going through the Scala tour page on generic classes: https://docs.scala-lang.org/tour/generic-classes.html
Here, it is said that
Note: subtyping of generic types is invariant. This means that if we
have a stack of characters of type Stack[Char] then it cannot be used
as an integer stack of type Stack[Int]. This would be unsound because
it would enable us to enter true integers into the character stack. To
conclude, Stack[A] is only a subtype of Stack[B] if and only if B = A.
I understand completely that I cannot use a Char where an Int is required.
But my Stack class accepts only type A (which is invariant). If I put an Apple, a Banana, or a Fruit into it, they are all accepted.
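For reference, the Stack in question is roughly the one from the generic-classes tour page (reproduced here as a sketch):
// Sketch of the tour's Stack: A is invariant and appears both as an input (push)
// and as an output (peek/pop).
class Stack[A] {
  private var elements: List[A] = Nil
  def push(x: A): Unit = { elements = x :: elements }
  def peek: A = elements.head
  def pop(): A = {
    val currentTop = peek
    elements = elements.tail
    currentTop
  }
}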
class Fruit
class Apple extends Fruit
class Banana extends Fruit
val stack2 = new Stack[Fruit]
stack2.push(new Fruit)
stack2.push(new Banana)
stack2.push(new Apple)
But on the next page (https://docs.scala-lang.org/tour/variances.html), it says that the type parameter should be covariant (+A). So how is the Fruit example working, given that it adds subtypes even though the type parameter is invariant?
I hope my question is clear. Let me know if more info needs to be added.
This has nothing to do with variance at all.
You declare stack2 to be a Stack[Fruit]; in other words, you declare that you are allowed to put anything that is a Fruit into the Stack. An Apple is a (subtype of) Fruit, ergo you are allowed to put an Apple into a Stack of Fruits.
This is called subtyping and has nothing to do with variance at all.
Let's take a step back: what does variance actually mean?
Well, variance means "change" (think of words like "to vary" or "variable"). co- means "together" (think of cooperation, co-education, co-location), contra- means "against" (think of contradiction, counter-intelligence, counter-insurgency, contraceptive), and in- means "unrelated" or "non-" (think of involuntary, inaccessible, intolerant).
So, we have "change" and that change can be "together", "against" or "unrelated". Well, in order to have related changes, we need two things which change, and they can either change together (i.e. when one thing changes, the other thing also changes "in the same direction"), they can change against each other (i.e. when one thing changes, the other thing changes "in the opposite direction"), or they can be unrelated (i.e. when one thing changes, the other doesn't.)
And that's all there is to the mathematical concept of covariance, contravariance, and invariance. All we need are two "things", some notion of "change", and this change needs to have some notion of "direction".
Now, that's of course very abstract. In this particular instance, we are talking about the context of subtyping and parametric polymorphism. How does this apply here?
Well, what are our two things? When we have a type constructor such as C[A], then our two things are:
The type argument A.
The constructed type which is the result of applying the type constructor C to A.
And what is our change with a sense of direction? It is subtyping!
So, the question now becomes: "When I change A to B (along one of the directions of subtyping, i.e. make it either a subtype or a supertype), then how does C[A] relate to C[B]".
And again, there are three possibilities:
Covariance: A <: B ⇒ C[A] <: C[B]: when A is a subtype of B then C[A] is a subtype of C[B], in other words, when I change A along the subtyping hierarchy, then C[A] changes with A in the same direction.
Contravariance: A <: B ⇒ C[A] :> C[B]: when A is a subtype of B, then C[A] is a supertype of C[B], in other words, when I change A along the subtyping hierarchy, then C[A] changes against A in the opposite direction.
Invariance: there is no subtyping relationship between C[A] and C[B], neither is a sub- nor supertype of the other.
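In Scala the three cases look like this (a minimal sketch; the class names CoBox, Sink and Cell are made up for illustration):
class Animal
class Cat extends Animal

class CoBox[+A]   // covariant:     CoBox[Cat]   <: CoBox[Animal]
class Sink[-A]    // contravariant: Sink[Animal] <: Sink[Cat]
class Cell[A]     // invariant:     Cell[Cat] and Cell[Animal] are unrelated

val b: CoBox[Animal] = new CoBox[Cat]    // compiles
val s: Sink[Cat] = new Sink[Animal]      // compiles
// val c: Cell[Animal] = new Cell[Cat]   // does not compile
// val d: Cell[Cat] = new Cell[Animal]   // does not compile either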
There are two questions you might ask yourself now:
Why is this useful?
Which one is the right one?
This is useful for the same reason subtyping is useful. In fact, this is just subtyping. So, if you have a language which has both subtyping and parametric polymorphism, then it is important to know whether one type is a subtype of another type, and variance tells you whether or not a constructed type is a subtype of another constructed type of the same constructor based on the subtyping relationship between the type arguments.
Which one is the right one is trickier, but thankfully, we have a powerful tool for analyzing when a subtype is a subtype of another type: Barbara Liskov's Substitution Principle tells us that a type S is a subtype of type T IFF any instance of T can be replaced with an instance of S without changing the observable desirable properties of the program.
Let's take a simple generic type, a function. A function has two type parameters, one for the input, and one for the output. (We are keeping it simple here.) F[A, B] is a function that takes in an argument of type A and returns a result of type B.
And now we play through a couple of scenarios. I have some operation O that wants to work with a function from Fruits to Mammals (yeah, I know, exciting original examples!) The LSP says that I should also be able to pass in a subtype of that function, and everything should still work. Let's say, F were covariant in A. Then I should be able to pass in a function from Apples to Mammals as well. But what happens when O passes an Orange to F? That should be allowed! O was able to pass an Orange to F[Fruit, Mammal] because Orange is a subtype of Fruit. But, a function from Apples doesn't know how to deal with Oranges, so it blows up. The LSP says it should work though, which means that the only conclusion we can draw is that our assumption is wrong: F[Apple, Mammal] is not a subtype of F[Fruit, Mammal], in other words, F is not covariant in A.
What if it were contravariant? What if we pass an F[Food, Mammal] into O? Well, O again tries to pass an Orange and it works: Orange is a Food, so F[Food, Mammal] knows how to deal with Oranges. We can now conclude that functions are contravariant in their inputs, i.e. you can pass a function that takes a more general type as its input as a replacement for a function that takes a more restricted type and everything will work out fine.
Now let's look at the output of F. What would happen if F were contravariant in B just like it is in A? We pass an F[Fruit, Animal] to O. According to the LSP, if we are right and functions are contravariant in their output, nothing bad should happen. Unfortunately, O calls the getMilk method on the result of F, but F just returned it a Chicken. Oops. Ergo, functions can't be contravariant in their outputs.
OTOH, what happens if we pass an F[Fruit, Cow]? Everything still works! O calls getMilk on the returned cow, and it indeed gives milk. So, it looks like functions are covariant in their outputs.
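Scala's own Function1[-A, +B] encodes exactly these conclusions, so the scenarios above can be checked directly (a minimal sketch; the class names mirror the ones in the text, and operationO is a made-up stand-in for O):
class Food
class Fruit extends Food
class Apple extends Fruit
class Orange extends Fruit
class Mammal { def getMilk: String = "milk" }
class Cow extends Mammal

// O wants a function from Fruits to Mammals and feeds it an Orange:
def operationO(f: Fruit => Mammal): String = f(new Orange).getMilk

operationO((food: Food) => new Cow)      // compiles: Food => Cow is a Fruit => Mammal
// operationO((apple: Apple) => new Cow) // does not compile: an Apple-only function cannot take an Orange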
And that is a general rule that applies to variance:
It is safe (in the sense of the LSP) to make C[A] covariant in A IFF A is used only as an output.
It is safe (in the sense of the LSP) to make C[A] contravariant in A IFF A is used only as an input.
If A can be used either as an input or as an output, then C[A] must be invariant in A, otherwise the result is not safe.
In fact, that's why C♯'s designers chose to re-use the already existing keywords in and out for variance annotations and Kotlin uses those same keywords.
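The Scala compiler enforces this rule through variance position checks. A minimal sketch (Producer and Consumer are made-up trait names):
trait Producer[+A] { def get: A }               // A appears only as an output: +A is accepted
trait Consumer[-A] { def accept(a: A): Unit }   // A appears only as an input:  -A is accepted

// Mixing the positions is rejected:
// trait Broken[+A] { def accept(a: A): Unit }
// fails with, roughly: "covariant type A occurs in contravariant position in type A of value a"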
So, for example, immutable collections can generally be covariant in their element type, since they don't allow you to put something into the collection (you can only construct a new collection with a potentially different type) but only to get elements out. So, if I want to get a list of numbers, and someone hands me a list of integers, I am fine.
On the other hand, think of an output stream (such as a Logger), where you can only put stuff in but not get it out. For this, it is safe to be contravariant. I.e. if I expect to be able to print strings, and someone hands me a printer that can print any object, then it can also print strings, and I am fine. Other examples are comparison functions (you only put generics in, the output is fixed to be a boolean or an enum or an integer or whatever design your particular language chooses). Or predicates, they only have generic inputs, the output is always fixed to be a boolean.
But, for example, mutable collections, where you can both put stuff in and get stuff out, are only type-safe when they are invariant. There are a great many tutorials explaining in detail how to break Java's or C♯'s type safety using their covariant mutable arrays, for example.
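These cases can be seen directly with standard library types (a small sketch; the Fruit and Apple classes are just for illustration):
import scala.collection.mutable

class Fruit
class Apple extends Fruit

// Immutable List is declared List[+A], so a List[Apple] is a List[Fruit]:
val fruits: List[Fruit] = List(new Apple)                       // compiles

// A "printer" only consumes its type parameter, so contravariance is safe;
// Function1 is already contravariant in its input:
val printAnything: Any => Unit = (x: Any) => println(x)
val printFruit: Fruit => Unit = printAnything                   // compiles

// Mutable buffers support both reading and writing, so they are invariant:
// val buf: mutable.Buffer[Fruit] = mutable.Buffer(new Apple)   // does not compile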
Note, however, that it is not always obvious whether a type is an input or an output once you get to more complex types. For example, when your type parameter is used as the upper or lower bound of an abstract type member, or when you have a method which takes a function that returns a function whose argument type is your type parameter.
Now, to come back to your question: you only have one stack. You never ask whether one stack is a subtype of another stack. Therefore, variance doesn't come into play in your example.
One of the non-obvious things about Scala type variance is that the annotations, +A and -A, actually tell us more about the wrapper than they do about the type parameter.
Let's say you have a box: class Box[T]
Because T is invariant, a Box[Apple] is unrelated to a Box[Fruit].
Now let's make it covariant: class Box[+T]
This does two things, it restricts the way the Box code can use T internally, but, more importantly, it changes the relationship between various instances of Boxes. In particular, the type Box[Apple] is now a sub-type of Box[Fruit], because Apple is a sub-type of Fruit, and we've instructed Box to vary its type relationships in the same manner (i.e. "co-") as its type parameter.
... it says that type parameter should be covariant +A
Actually, that Stack code can't be made co- or contra-variant. As I mentioned, a variance annotation adds some restrictions to the way the type parameter is used, and that Stack code uses A in ways that are contrary to both co- and contra-variance.
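For completeness, an immutable stack can still be made covariant by giving push a lower-bounded type parameter, which is the same trick the standard library's List uses when prepending. This is only a sketch (CoStack, Empty and NonEmpty are made-up names, not the tour's mutable Stack):
class Fruit
class Apple extends Fruit
class Banana extends Fruit

sealed trait CoStack[+A] {
  // B >: A keeps +A sound: pushing a more general element widens the whole stack
  def push[B >: A](x: B): CoStack[B] = NonEmpty(x, this)
  def peek: Option[A]
}
case object Empty extends CoStack[Nothing] { def peek: Option[Nothing] = None }
case class NonEmpty[A](top: A, rest: CoStack[A]) extends CoStack[A] { def peek: Option[A] = Some(top) }

val s: CoStack[Fruit] = Empty.push(new Apple).push(new Banana)   // widens to CoStack[Fruit]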
Variance is about the relationship between parameterized (complex) types, not about passing an object where its supertype is expected, which is plain subtyping.
Explained here:
https://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29
The idea of variance only comes into effect when you want one parameterized type (say, a list of some element type) to be accepted as a child or parent of another parameterized type. Your example is simply about passing a child in place of its parent, so it works.
https://coderwall.com/p/dlqvnq/simple-example-for-scala-covariance-contravariance-and-invariance
Please see the code there; it is easy to follow. Respond if you do not get it.
What is the difference between List[T] forSome {type T} and List[T forSome {type T}]? How do I read them in "English"? How should I grok the forSome keyword? What are some practical uses of forSome? What are some useful practical usages that are more complex than the simple T forSome {type T}?
Attention (update 2016-12-08): The forSome keyword is very likely going away with Scala 2.13 or 2.14, according to Martin Odersky's talk at ScalaX 2016. Replace it with path-dependent types or with wildcard types (A[_]). This is possible in most cases. If you have an edge case where it is not possible, refactor your code or loosen your type restrictions.
How to read "forSome" (in an informal way)
Usually when you use a generic API, the API guarantees you, that it will work with any type you provide (up to some given constraints). So when you use List[T], the List API guarantees you that it will work with any type T you provide.
With forSome (so-called existentially quantified type parameters) it is the other way round. The API will provide a type (not you), and it guarantees that it will work with the type it provided you. The semantics is that a concrete object will give you something of type T. The same object will also accept back the things it provided you. But no other object may work with these Ts, and no other object can provide you with something of type T.
The idea of "existentially quantified" is: There exists (at least) one type T (in the implementation) to fulfill the contract of the API. But I won't tell you which type it is.
forSome can be read similarly: for some types T the API contract holds true, but it is not necessarily true for all types T. So when you provide some type T yourself (instead of the one hidden in the implementation of the API), the compiler cannot guarantee that you got the right T, so it will throw a type error.
Applied to your example
So when you see List[T] forSome {type T} in an API, you can read it like this: the API will provide you with a List of some unknown type T. It will gladly accept this list back and it will work with it. But it won't tell you what T is. You know at least that all elements of the list are of the same type T.
The second one is a little trickier. Again the API will provide you with a List, and it will use some type T without telling you what T is. But here it is free to choose a different type for each element. A real-world API would establish some constraints for T so that it can actually work with the elements of the list.
Conclusion
forSome is useful when you write an API where each object represents an implementation of the API. Each implementation will provide you with some objects and will accept these objects back. But you can neither mix objects from different implementations nor create the objects yourself. Instead you must always use the corresponding API functions to get objects that will work with that API. forSome enables a very strict kind of encapsulation. You can read forSome in the following way:
The API contract holds true for some types. But you don't know for
which types it holds true. Hence you cannot provide your own type and
you cannot create your own objects. You have to use the ones provided
through the API that uses forSome.
This is quite informal and might even be wrong in some corner cases. But it should help you to grok the concept.
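A minimal sketch of that encapsulation pattern (Handler and roundTrip are made-up names, not from any real API):
// A value that both produces and consumes some hidden type T:
case class Handler[T](produce: () => T, consume: T => Unit)

type SomeHandler = Handler[T] forSome { type T }

// A polymorphic helper re-introduces a name for the hidden type, so the compiler
// knows that produce and consume share the same T:
def roundTrip[T](h: Handler[T]): Unit = h.consume(h.produce())

val h: SomeHandler = Handler[Int](() => 42, i => println(i + 1))
roundTrip(h)             // prints 43; the caller never learns that T = Int
// h.consume("hello")    // does not compile: "hello" is not known to be the hidden T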
There are a lot of questions here, and most of them have been addressed pretty thoroughly in the answers linked in the comments above, so I'll respond to your more concrete first question.
There's no real meaningful difference between List[T] forSome { type T } and List[T forSome { type T }], but we can see a difference between the following two types:
class Foo[A]
type Outer = List[Foo[T]] forSome { type T }
type Inner = List[Foo[T] forSome { type T }]
We can read the first as "a list of foos of T, for some type T". There's a single T for the entire list. The second, on the other hand, can be read as "a list of foos, where each foo is of T for some T".
To put it another way, if we've got a list outer: Outer, we can say that "there exists some type T such that outer is a list of foos of T", where for a list of type Inner, we can only say that "for each element of the list, there exists some T such that that element is a foo of T". The latter is weaker—it tells us less about the list.
So, for example, if we have the following two lists:
val inner: Inner = List(new Foo[Char], new Foo[Int])
val outer: Outer = List(new Foo[Char], new Foo[Int])
The first will compile just fine—each element of the list is a Foo[T] for some T. The second won't compile, since there's not some T such that each element of the list is a Foo[T].
The code seems trivial, but I'm not understanding one thing about the return value:
trait JdbcTemplate {
  def query(psc: PreparedStatementCreator,
            rowMapper: RowMapper): List[_]
}
What exactly does List[_] mean here? Wouldn't using List[Any] imply the same thing? Where can I read about the differences?
Any is a specific, known (though utterly all-inclusive) type. The use of the underscore as a type parameter is shorthand for a more cumbersome and more general syntax for what is called an "existential type." Existential types are non-specific: they say there's at least one type that could go here. They are the dual of the universal quantification that is the interpretation of the more commonly used unbounded type parameters, e.g., def method[T](t: T) .... In that construct, T may be bound to any type whatsoever, though at each place where that type is instantiated (every occurrence of a call to that method), it is bound to a specific type.
Given that _ means you don't care about the type and Any is the supertype of everything, the two end up interchangeable here, because List is covariant (every List[_] is also a List[Any]). For an invariant container such as Array, however, Array[_] and Array[Any] are not the same thing.
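A small sketch of where the distinction does matter, using the invariant Array for contrast (the lengthOf*/sizeOf* method names are made up):
// Array is invariant, so Array[Any] only matches arrays whose element type is literally Any,
// while Array[_] matches an array of some unknown element type:
def lengthOfAny(a: Array[Any]): Int = a.length
def lengthOfSome(a: Array[_]): Int = a.length

lengthOfSome(Array(1, 2, 3))     // compiles: Array[Int] is an Array[_]
// lengthOfAny(Array(1, 2, 3))   // does not compile: Array[Int] is not an Array[Any]

// For the covariant List the two forms are interchangeable in practice:
def sizeOfAny(xs: List[Any]): Int = xs.size
sizeOfAny(List(1, 2, 3))         // compiles: List[Int] <: List[Any]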
Inspired by "Real-world examples of co- and contravariance in Scala", I thought a better question would be:
When designing a library, are there a specific set of questions you should ask yourself when determining whether a type parameter should be covariant or contravariant? Or should you make everything invariant and then change as needed?
Well, it's simple: does it make sense? Think of Liskov substitution.
Co-variance
If A <: B, does it make sense to pass a C[A] where a C[B] is expected? If so, make it C[+T]. The classic example is the immutable List, where a List[A] can be passed to anything expecting a List[B], assuming A is a subtype of B.
Two counter examples:
Mutable sequences are invariant, because it is possible to have type safety violations otherwise (in fact, Java's co-variant Array is vulnerable to just such things, which is why it is invariant in Scala).
Immutable Set is invariant, even though its methods are very similar to those of an immutable Seq. The difference lies with contains, which is typed on sets and untyped (i.e., accepts Any) on sequences. So, even though it would otherwise be possible to make it co-variant, the desire for increased type safety on a particular method led to a choice of invariance over co-variance.
Contra-variance
If A <: B, does it make sense to pass a C[B] where a C[A] is expected? If so, make it C[-T]. The classic would-be example is Ordering. While some unrelated technical problems prevent Ordering from being contra-variant, it is intuitive that anything that can order a super-class of A can also order A. It follows that Ordering[B], which orders all elements of type B, a supertype of A, can be passed to something expecting an Ordering[A].
While Scala's Ordering is not contra-variant, Scalaz's Order is contra-variant as expected. Another example from Scalaz is its Equal trait.
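To see the intuition in code, here is a minimal sketch with a made-up contravariant Ord trait (not Scalaz's Order):
trait Ord[-A] { def lteq(x: A, y: A): Boolean }

class Animal { def weight: Int = 1 }
class Dog extends Animal

// An ordering defined for all Animals...
val byWeight: Ord[Animal] = new Ord[Animal] {
  def lteq(x: Animal, y: Animal): Boolean = x.weight <= y.weight
}

def sortDogs(dogs: List[Dog], o: Ord[Dog]): List[Dog] = dogs.sortWith(o.lteq(_, _))

// ...can be used wherever an Ord[Dog] is expected, because Ord[Animal] <: Ord[Dog]:
sortDogs(List(new Dog, new Dog), byWeight)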
Mixed Variance?
The most visible example of mixed variance in Scala is Function1 (and 2, 3, etc). It is contra-variant in the parameter it receives, and co-variant in what it returns. Note, though, that Function1 is what is used for a lot of closures, and closures are used in a lot of places, and these places are usually where Java uses (or would use) Single Abstract Method classes.
So, if you have a situation where a SAM class applies, that's likely a place for mixed contra-variance and co-variance.