How does one derive a `BoundedEnum (Maybe a)` from a `BoundedEnum a`? - purescript

I'm getting an error, "no typeclass instance was found for Data.Enum.BoundedEnum (Maybe InstitutionContactType)". But given the existence of instance enumMaybe :: BoundedEnum a => Enum (Maybe a), instance boundedMaybe :: Bounded a => Bounded (Maybe a), and instance boundedEnumInstitutionContactType :: BoundedEnum InstitutionContactType, I would think this should work. Or maybe the direction is wrong for what I need in enumMaybe?
boundedMaybe gives us Bounded (Maybe a), and enumMaybe gives us Enum (Maybe a), so class (Bounded a, Enum a) <= BoundedEnum a should, given Bounded (Maybe a) and Enum (Maybe a), also give us BoundedEnum (Maybe a). I think I have the directionality correct, which is reversed between instances and classes in terms of what needs to be provided, if I understand correctly.

Of course, I finally realize now this doesn't work, since BoundedEnum has three additional members (cardinality, toEnum, and fromEnum) that need to be implemented on top of what is already available in Bounded and Enum.
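For completeness, here is a minimal sketch of how such an instance can be written by hand (MaybeEnum is a made-up name). PureScript rejects orphan instances, so the instance has to live on a local newtype; the three extra members can then be filled in with the linear-time defaults from Data.Enum:

import Prelude
import Data.Enum (class BoundedEnum, class Enum, defaultCardinality, defaultFromEnum, defaultToEnum)
import Data.Maybe (Maybe)

-- MaybeEnum is a hypothetical wrapper; the instance can't be written
-- directly on Maybe a here, because that would be an orphan instance.
newtype MaybeEnum a = MaybeEnum (Maybe a)

derive newtype instance eqMaybeEnum :: Eq a => Eq (MaybeEnum a)
derive newtype instance ordMaybeEnum :: Ord a => Ord (MaybeEnum a)
derive newtype instance boundedMaybeEnum :: Bounded a => Bounded (MaybeEnum a)
derive newtype instance enumMaybeEnum :: BoundedEnum a => Enum (MaybeEnum a)

-- the three extra members, using the defaults that work for any Bounded + Enum type
instance boundedEnumMaybeEnum :: BoundedEnum a => BoundedEnum (MaybeEnum a) where
  cardinality = defaultCardinality
  toEnum = defaultToEnum
  fromEnum = defaultFromEnum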

Related

PureScript - Inferred Type Causes Compiler Warning

Consider the following simple snippet of PureScript code:
module Main where

import Prelude
import Effect (Effect)
import Effect.Console (logShow)

a :: Int
a = 5
b :: Int
b = 7
c = a + b
main ∷ Effect Unit
main = do
  logShow c
The program successfully infers the type of c to be Int, and outputs the expected result:
12
However, it also produces this warning:
No type declaration was provided for the top-level declaration of c.
It is good practice to provide type declarations as a form of documentation.
The inferred type of c was:
Int
in value declaration c
I find this confusing, since I would expect the Int type for c to be safely inferred. Like it often says in the docs, "why derive types when the compiler can do it for you?" This seems like a textbook example of the simplest and most basic type inference.
Is this warning expected? Is there a standard configuration that would suppress it?
Does this warning indicate that every variable should in fact be explicitly typed?
In most cases, and certainly in the simplest cases, the types can be inferred unambiguously, and indeed, in those cases type signatures are not necessary at all. This is why simpler languages, such as F#, OCaml, or Elm, do not require type signatures.
But PureScript (and Haskell) has much more complicated cases too. Constrained types are one. Higher-rank types are another. It's a whole mess. Don't get me wrong, I love me some high-power type system, but the sad truth is, type inference works ambiguously with all of that stuff a lot of the time, and sometimes doesn't work at all.
In practice, even when type inference does work, it turns out that its results may be wildly different from what the developer intuitively expects, leading to very hard to debug issues. I mean, type errors in PureScript can be super vexing as it is, but imagine that happening across multiple top-level definitions, across multiple modules, even perhaps across multiple libraries. A nightmare!
So over the years a consensus has formed that overall it's better to have all the top-level definitions explicitly typed, even when it's super obvious. It makes the program much more understandable and puts constraints on the typechecker, providing it with "anchor points" of sorts, so it doesn't go wild.
But since it's not a hard requirement (most of the time), it's just a warning, not an error. You can ignore it if you wish, but do that at your own peril.
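In this particular case, making the warning go away is as simple as adding the declaration the compiler suggests:

c :: Int
c = a + b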
Now, another part of your question is whether every variable should be explicitly typed, and the answer is "no".
As a rule, every top-level binding should be explicitly typed (and that's where you get a warning), but local bindings (i.e. let and where) don't have to, unless you need to clarify something that the compiler can't infer.
Moreover, in PureScript (and modern Haskell), local bindings are actually "monomorphised" - that's a fancy term basically meaning they can't be generic unless explicitly specified. This solves the problem of all the ambiguous type inference, while still working intuitively most of the time.
You can notice the difference with the following example:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s x = show x
On the second line s a <> s b you get an error saying "Could not match type b with type a"
This happens because the where-bound function s has been monomorphised, meaning it's not generic, and its type has been inferred to be a -> String based on the s a usage. This means that the s b usage is ill-typed.
This can be fixed by giving s an explicit type signature:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s :: forall x. Show x => x -> String
  s x = show x
Now it's explicitly specified as generic, so it can be used with both a and b parameters.

Scala, Cats. Can someone explain what is `F` and where does it come from?

I would like to use the cats-saga from that repository: https://github.com/VladKopanev/cats-saga
However I am stuck on that piece of code at OrderSagaCoordinator.scala L160:
def apply[F[_]: Sync: Concurrent: Timer: Sleep: Parallel](
  paymentServiceClient: PaymentServiceClient[F],
  loyaltyPointsServiceClient: LoyaltyPointsServiceClient[F],
  orderServiceClient: OrderServiceClient[F],
  sagaLogDao: SagaLogDao[F],
  maxRequestTimeout: Int
): F[OrderSagaCoordinatorImpl[F]] =
What is F, where does it come from? Can someone explain that piece of code?
Thanks
Edit: I know what a generic type is. However, in this case the apply method is called without specifying a concrete type, and I do not see any place where it comes from.
(for {
  paymentService <- PaymentServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  loyaltyPoints <- LoyaltyPointsServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  orderService <- OrderServiceClientStub(randomUtil, clientMaxReqTimeout, flakyClient)
  xa = Transactor.fromDriverManager[IO]("org.postgresql.Driver", "jdbc:postgresql:Saga", "postgres", "root")
  logDao = new SagaLogDaoImpl(xa)
  orderSEC <- OrderSagaCoordinatorImpl(paymentService, loyaltyPoints, orderService, logDao, sagaMaxReqTimeout)
  // ...
Think of something concrete, say 'box of chocolates'
case class Box(v: Chocolate)
Now imagine we take away the chocolate, and make the box take any kind of element A, maybe box of coins, box of candy, box of cards, etc
case class Box[A](v: A)
Here we are polymorphic in the element type of the box. Many languages can express this level of polymorphism. But Scala takes this further. In the same way how we took away the chocolate, we can take away the box itself, essentially expressing a very abstract idea of "any kind of context of any type of elements"
trait Ctx[F[_]]
As another analogy consider the following
box of chocolate -> proper type -> case class Box(v: Chocolate)
box of _ -> type constructor of first order -> case class Box[A](v: A)
_ of _ -> type constructor of higher order -> trait Ctx[F[_]]
Now focus on _ of _. Here we have "something of something", which kind of seems like we have nothing. How can we do anything with this? This is where the idea of a type class comes into play. A type class can constrain a highly polymorphic shape such as F[_]
def apply[F[_]: Sync](...)
Here [F[_]: Sync] represents this constraint. It means that the method apply accepts any first-order type constructor for which there exists evidence that it satisfies the constraints of the type class Sync. Note that the type class Sync
trait Sync[F[_]]
is considered a higher order type constructor, whilst type parameter F[_] represents a first order type constructor. Similarly
F[_] : Sync : Concurrent
specifies that the type constructor F must satisfy not only the Sync constraints but also the constraints of the Concurrent type class, and so on. These techniques are sometimes referred to by the scary-sounding name
higher order type constructor polymorphism
and yet I am confident that most programmers have all the conceptual tools already present to understand it because
if you ever passed a function to a function, then you can work with the concept of higher order
if you ever used a List, then you can work with the concept of type constructors
if you ever wrote a method that uses the same implementation for both Integers and Doubles, then you can work with the concept of polymorphism
Evidence that a type constructor satisfies the constraints of a type class is provided using Scala's implicit mechanisms. IMO Scala 3 has significantly simplified the concept, so consider https://dotty.epfl.ch/docs/reference/contextual/type-classes.html
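To see the mechanism in a self-contained form, here is a hedged sketch (Ctx and wrap are made-up stand-ins for Sync and apply); the key point is that F gets fixed at the call site, either explicitly or by inference from the expected type:

trait Ctx[F[_]] { def pure[A](a: A): F[A] }

object Ctx {
  // evidence that Option satisfies Ctx, found by implicit search
  implicit val optionCtx: Ctx[Option] = new Ctx[Option] {
    def pure[A](a: A): Option[A] = Some(a)
  }
}

// F[_]: Ctx desugars to an extra implicit parameter of type Ctx[F]
def wrap[F[_]: Ctx, A](a: A): F[A] = implicitly[Ctx[F]].pure(a)

val explicit = wrap[Option, Int](1) // F supplied by hand: Some(1)
val inferred: Option[Int] = wrap(1) // F inferred from the expected type

In the snippet you quoted, the surrounding for comprehension runs in cats.effect.IO (note the Transactor.fromDriverManager[IO] line), so F is presumably fixed to IO there, and the compiler then locates the Sync[IO], Concurrent[IO], etc. instances the same way.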

Why T is in the covariance position or contravariance position

Given the following class definition
class X[+T] {
  def get[U >: T](default: U): U
}
why is T in the method def get[U >: T](default: U) in a covariant position?
Given the following class definition
class X[-T] {
  def get[U <: T](default: U): U
}
why is T in the method def get[U <: T](default: U) in a contravariant position?
It is hard to answer a "Why?" question without you providing more details, but I'll try. I assume that your question is really "why are the type restrictions on U not inverted?". The short answer: because this is type-safe and covers some cases that are not covered otherwise.
Your first example is probably inspired by Option[T] and its getOrElse method. Although I'm not sure why anybody needs getOrElse with U different from T, the logic for why the type restriction can only be U >: T seems obvious to me. Let's assume you have 3 classes: C which inherits B which inherits A, and you have an Option[B]. If your default value is already a B or a C, you don't need anything beyond U = T, and thus the simpler signature without the additional generic U would suffice. The only case where you can't pass a default value to the getOrElse method is if you have one of some type which is not a subtype of B (such as A). Let's extend this signature even more for a moment
def getOrElse[U, R](default:U): R
How should the types U, T and R be related? Obviously R should be some common super-type of U and T, because it should be able to contain both T and U. However such a definition would be overkill. First of all, it is really weird to have a default value of a type that is not related to T at all. Secondly, even in such a strange case, you (and the compiler) can still calculate a common super-type and use it as a new U' = R'. Thus you don't need R, but adding U adds some more flexibility (at least theoretically). But U still has to be some super-type of T, because it is also the return type.
So to sum up: adding U with U<:T will
either produce wrong code if you use U as the result type (U as a result type can't hold T)
or, if you use T as the result type, it would not extend the applicability of this method, i.e. it would not allow any code that does not compile without U to compile with this additional U.
But adding U with U >: T will allow some additional, actually type-safe code to compile, such as (yes, a stupid example, but as I said I don't know any real-life examples):
val opt = Option[Int](1)
val orElse: Any = opt.getOrElse(List())
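A hedged sketch of the variance rules at work (the class body here is made up for illustration): with a covariant T, using T as a plain method parameter type is rejected, while the lower-bounded version is accepted:

class X[+T](value: T) {
  // def bad(default: T): T = value // error: covariant type T occurs in contravariant position
  def get[U >: T](default: U): U = value // ok: T appears only as a lower bound
}

val xs: X[Int] = new X(1)
val r: Any = xs.get(List()) // U widens to Any, just like the Option example above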

Are polymorphic functions "restrictive" in Scala?

In the book Functional Programming in Scala MEAP v10, the author mentions
Polymorphic functions are often so constrained by their type that they only have one implementation!
and gives the example
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
What does he mean by this statement? Are polymorphic functions restrictive?
Here's a simpler example:
def mysteryMethod[A, B](somePair: (A, B)): B = ???
What does this method do? It turns out that there is only one thing this method can do! You don't need the name of the method, you don't need the implementation of the method, you don't need any documentation. The type tells you everything it could possibly do, and it turns out that "everything" in this case is exactly one thing.
So, what does it do? It takes a pair (A, B) and returns some value of type B. What value does it return? Can it construct a value of type B? No, it can't, because it doesn't know what B is! Can it return a random value of type B? No, because randomness is a side-effect and thus would have to appear in the type signature. Can it go out in the universe and fetch some B? No, because that would be a side-effect and would have to appear in the type signature!
In fact, the only thing it can do is return the value of type B that was passed into it, the second element of the pair. So, this mysteryMethod is really the second method, and its only sensible implementation is:
def second[A, B](somePair: (A, B)): B = somePair._2
Note that in reality, since Scala is neither pure nor total, there are in fact a couple of other things the method could do: throw an exception (i.e. return abnormally), go into an infinite loop (i.e. not return at all), use reflection to figure out the actual type of B and reflectively invoke the constructor to fabricate a new value, etc.
However, assuming purity (the return value may only depend on the arguments), totality (the method must return a value normally) and parametricity (it really doesn't know anything about A and B), then there is in fact an awful lot you can tell about a method by only looking at its type.
Here's another example:
def mysteryMethod(someBoolean: Boolean): Boolean = ???
What could this do? It could always return false and ignore its argument. But then it would be overly constrained: if it always ignores its argument, then it doesn't care that it is a Boolean and its type would rather be
def alwaysFalse[A](something: A): Boolean = false // same for true, obviously
It could always just return its argument, but again, then it wouldn't actually care about booleans, and its type would rather be
def identity[A](something: A): A = something
So, really, the only thing it can do is return a different boolean than the one that was passed in, and since there are only two booleans, we know that our mysteryMethod is, in fact, the not function:
def not(someBoolean: Boolean): Boolean = if (someBoolean) false else true
So, here, we have an example where the types don't give us the implementation, but at least they give us a (small) set of 4 possible implementations, only one of which makes sense.
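For reference, here are those four pure, total implementations of Boolean => Boolean spelled out (the names are mine):

def constTrue(b: Boolean): Boolean = true // ignores its argument
def constFalse(b: Boolean): Boolean = false // ignores its argument
def identityBool(b: Boolean): Boolean = b // doesn't use its Boolean-ness
def not(b: Boolean): Boolean = !b // the only one that truly needs a Boolean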
(By the way: it turns out that there is only one possible implementation of a method which takes an A and returns an A, and it is the identity method shown above.)
So, to recap:
purity means that you can only use the building blocks that were handed to you (the arguments)
a strong, strict, static type system means that you can only use those building blocks in such a way that their types line up
totality means that you can't do stupid things (like infinite loops or throwing exceptions)
parametricity means that you cannot make any assumptions at all about your type variables
Think about your arguments as parts of a machine and your types as connectors on those machine parts. There will only be a limited number of ways that you can connect those machine parts together in a way that you only plug together compatible connectors and you don't have any leftover parts. Often enough, there will be only one way, or if there are multiple ways, then often one will be obviously the right one.
What this means is that, once you have designed the types of your objects and methods, you won't even have to think about how to implement those methods, because the types will already dictate the only possible way to implement them! Considering how many questions on StackOverflow are basically "how do I implement this?", can you imagine how freeing it must be not having to think about that at all, because the types already dictate the one (or one of a few) possible implementation?
Now, look at the signature of the method in your question and try playing around with different ways to combine a and f in such a way that the types line up and you use both a and f and you will indeed see that there is only one way to do that. (As Chris and Paul have shown.)
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
Here, partial1 takes as parameters a value of type A, and a function that takes a parameter of type A and a parameter of type B, returning a value of type C.
partial1 must return a function taking a value of type B and returning a C. Given A, B, and C are arbitrary, we cannot apply any functions to their values. So the only possibility is to apply the function f to the value a passed to partial1, and the value of type B that is a parameter to the function we return.
So you end up with the single possibility that's in the definition, f(a, b).
To take a simpler example, consider the type Option[A] => Boolean. There's only a couple ways to implement this:
def foo1[A](x: Option[A]): Boolean = x match {
  case Some(_) => true
  case None => false
}
def foo2[A](x: Option[A]): Boolean = !foo1(x)
def foo3[A](x: Option[A]): Boolean = true
def foo4[A](x: Option[A]): Boolean = false
The first two choices are pretty much the same, and the last two are trivial, so essentially there's only one useful thing this function could do, which is tell you whether the Option is Some or None.
The space of possible implementations is "restricted" by the abstractness of the function type. Since A is unconstrained, the option's value could be anything, so the function can't depend on that value in any way, because you know nothing about it. The only "understanding" the function may have of its parameter is the structure of Option[_].
Now, back to your example. You have no idea what C is, so there's no way you can construct one yourself. Therefore the function you create is going to have to call f to get a C. And in order to call f, you need to provide arguments of types A and B. Again, since there's no way to create an A or a B yourself, the only thing you can do is use the arguments that are given to you. So there's no other possible function you could write.

When are higher kinded types useful?

I've been doing dev in F# for a while and I like it. However one buzzword I know doesn't exist in F# is higher-kinded types. I've read material on higher-kinded types, and I think I understand their definition. I'm just not sure why they're useful. Can someone provide some examples of what higher-kinded types make easy in Scala or Haskell, that require workarounds in F#? Also for these examples, what would the workarounds be without higher-kinded types (or vice-versa in F#)? Maybe I'm just so used to working around it that I don't notice the absence of that feature.
(I think) I get that instead of myList |> List.map f or myList |> Seq.map f |> Seq.toList higher kinded types allow you to simply write myList |> map f and it'll return a List. That's great (assuming it's correct), but seems kind of petty? (And couldn't it be done simply by allowing function overloading?) I usually convert to Seq anyway and then I can convert to whatever I want afterwards. Again, maybe I'm just too used to working around it. But is there any example where higher-kinded types really saves you either in keystrokes or in type safety?
So the kind of a type is its simple type. For instance Int has kind * which means it's a base type and can be instantiated by values. By some loose definition of higher-kinded type (and I'm not sure where F# draws the line, so let's just include it) polymorphic containers are a great example of a higher-kinded type.
data List a = Cons a (List a) | Nil
The type constructor List has kind * -> * which means that it must be passed a concrete type in order to result in a concrete type: List Int can have inhabitants like [1,2,3] but List itself cannot.
I'm going to assume that the benefits of polymorphic containers are obvious, but more useful kind * -> * types exist than just the containers. For instance, the relations
data Rel a = Rel (a -> a -> Bool)
or parsers
data Parser a = Parser (String -> [(a, String)])
both also have kind * -> *.
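Parser, for instance, can still be mapped over even though it doesn't "contain" elements the way a list does; here is a sketch of the usual Functor instance for this Parser type (run is my name for the wrapped function):

instance Functor Parser where
  -- map over the result of every successful parse
  fmap f (Parser run) = Parser (\s -> [ (f a, rest) | (a, rest) <- run s ])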
We can take this further in Haskell, however, by having types with even higher-order kinds. For instance we could look for a type with kind (* -> *) -> *. A simple example of this might be Shape which tries to fill a container of kind * -> *.
data Shape f = Shape (f ())
Shape [(), (), ()] :: Shape []
This is useful for characterizing Traversables in Haskell, for instance, as they can always be divided into their shape and contents.
split :: Traversable t => t a -> (Shape t, [a])
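A minimal implementation sketch of split, assuming the Shape type above: erase the contents to get the shape, and collect the contents via the Foldable superclass:

import Data.Foldable (toList)

split :: Traversable t => t a -> (Shape t, [a])
split t = (Shape (() <$ t), toList t)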
As another example, let's consider a tree that's parameterized on the kind of branch it has. For instance, a normal tree might be
data Tree a = Branch (Tree a) a (Tree a) | Leaf
But we can see that the branch type contains a Pair of Tree as and so we can extract that piece out of the type parametrically
data TreeG f a = Branch a (f (TreeG f a)) | Leaf
data Pair a = Pair a a
type Tree a = TreeG Pair a
This TreeG type constructor has kind (* -> *) -> * -> *. We can use it to make interesting other variations like a RoseTree
type RoseTree a = TreeG [] a
rose :: RoseTree Int
rose = Branch 3 [Branch 2 [Leaf, Leaf], Leaf, Branch 4 [Branch 4 []]]
Or pathological ones like a MaybeTree
data Empty a = Empty
type MaybeTree a = TreeG Empty a
nothing :: MaybeTree a
nothing = Leaf
just :: a -> MaybeTree a
just a = Branch a Empty
Or a TreeTree
type TreeTree a = TreeG Tree a
treetree :: TreeTree Int
treetree = Branch 3 (Branch Leaf (Pair Leaf Leaf))
Another place this shows up is in "algebras of functors". If we drop a few layers of abstractness this might be better considered as a fold, such as sum :: [Int] -> Int. Algebras are parameterized over the functor and the carrier. The functor has kind * -> * and the carrier kind * so altogether
data Alg f a = Alg (f a -> a)
has kind (* -> *) -> * -> *. Alg is useful because of its relation to datatypes and the recursion schemes built atop them.
-- | The "single-layer of an expression" functor has kind `(* -> *)`
data ExpF x = Lit Int
| Add x x
| Sub x x
| Mult x x
-- | The fixed point of a functor has kind `(* -> *) -> *`
data Fix f = Fix (f (Fix f))
type Exp = Fix ExpF
exp :: Exp
exp = Fix (Add (Fix (Lit 3)) (Fix (Lit 4))) -- 3 + 4
fold :: Functor f => Alg f a -> Fix f -> a
fold (Alg phi) (Fix f) = phi (fmap (fold (Alg phi)) f)
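As a hedged usage sketch (it assumes a Functor instance for ExpF, e.g. derived with DeriveFunctor, which isn't shown above), an evaluation algebra turns fold into a calculator for Exp:

-- assumes: data ExpF x = ... deriving Functor
evalAlg :: Alg ExpF Int
evalAlg = Alg phi
  where
    phi (Lit n) = n
    phi (Add x y) = x + y
    phi (Sub x y) = x - y
    phi (Mult x y) = x * y

-- fold evalAlg exp == 7 -- evaluates 3 + 4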
Finally, though they're theoretically possible, I've never seen an even higher-kinded type constructor. We sometimes see functions of that type, such as mask :: ((forall a. IO a -> IO a) -> IO b) -> IO b, but I think you'll have to dig into type prolog or dependently typed literature to see that level of complexity in types.
Consider the Functor type class in Haskell, where f is a higher-kinded type variable:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
What this type signature says is that fmap changes the type parameter of an f from a to b, but leaves f as it was. So if you use fmap over a list you get a list, if you use it over a parser you get a parser, and so on. And these are static, compile-time guarantees.
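A couple of throwaway examples of that guarantee:

ints :: [Int]
ints = fmap (+ 1) [1, 2, 3] -- [2,3,4], still a list

shown :: Maybe String
shown = fmap show (Just 42) -- Just "42", still a Maybe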
I don't know F#, but let's consider what happens if we try to express the Functor abstraction in a language like Java or C#, with inheritance and generics, but no higher-kinded generics. First try:
interface Functor<A> {
  <B> Functor<B> map(Function<A, B> f);
}
The problem with this first try is that an implementation of the interface is allowed to return any class that implements Functor. Somebody could write a FunnyList<A> implements Functor<A> whose map method returns a different kind of collection, or even something else that's not a collection at all but is still a Functor. Also, when you use the map method you can't invoke any subtype-specific methods on the result unless you downcast it to the type that you're actually expecting. So we have two problems:
The type system doesn't allow us to express the invariant that the map method always returns the same Functor subclass as the receiver.
Therefore, there's no statically type-safe manner to invoke a non-Functor method on the result of map.
There are other, more complicated ways you can try, but none of them really works. For example, you could try to augment the first try by defining subtypes of Functor that restrict the result type:
interface Collection<A> extends Functor<A> {
  <B> Collection<B> map(Function<A, B> f);
}

interface List<A> extends Collection<A> {
  <B> List<B> map(Function<A, B> f);
}

interface Set<A> extends Collection<A> {
  <B> Set<B> map(Function<A, B> f);
}

interface Parser<A> extends Functor<A> {
  <B> Parser<B> map(Function<A, B> f);
}

// …
This helps to forbid implementers of those narrower interfaces from returning the wrong type of Functor from the map method, but since there is no limit to how many Functor implementations you can have, there is no limit to how many narrower interfaces you'll need.
(EDIT: And note that this only works because Functor<B> appears as the result type, and so the child interfaces can narrow it. So AFAIK we can't narrow both uses of Monad<B> in the following interface:
interface Monad<A> {
  <B> Monad<B> flatMap(Function<? super A, ? extends Monad<? extends B>> f);
}
In Haskell, with higher-kinded type variables, this is (>>=) :: Monad m => m a -> (a -> m b) -> m b.)
Yet another try is to use recursive generics to try and have the interface restrict the result type of the subtype to the subtype itself. Toy example:
/**
 * A semigroup is a type with a binary associative operation. Law:
 *
 * > x.append(y).append(z) = x.append(y.append(z))
 */
interface Semigroup<T extends Semigroup<T>> {
  T append(T arg);
}
class Foo implements Semigroup<Foo> {
  // Since this implements Semigroup<Foo>, now this method must accept
  // a Foo argument and return a Foo result.
  Foo append(Foo arg);
}

class Bar implements Semigroup<Bar> {
  // Any of these is a compilation error:
  Semigroup<Bar> append(Semigroup<Bar> arg);
  Semigroup<Foo> append(Bar arg);
  Semigroup append(Bar arg);
  Foo append(Bar arg);
}
But this sort of technique (which is rather arcane to your run-of-the-mill OOP developer, heck to your run-of-the-mill functional developer as well) still can't express the desired Functor constraint either:
interface Functor<FA extends Functor<FA, A>, A> {
  <FB extends Functor<FB, B>, B> FB map(Function<A, B> f);
}
The problem here is this doesn't restrict FB to have the same F as FA—so that when you declare a type List<A> implements Functor<List<A>, A>, the map method can still return a NotAList<B> implements Functor<NotAList<B>, B>.
Final try, in Java, using raw types (unparametrized containers):
interface FunctorStrategy<F> {
  F map(Function f, F arg);
}
Here F will get instantiated to unparametrized types like just List or Map. This guarantees that a FunctorStrategy<List> can only return a List—but you've abandoned the use of type variables to track the element types of the lists.
The heart of the problem here is that languages like Java and C# don't allow type parameters to have parameters. In Java, if T is a type variable, you can write T and List<T>, but not T<String>. Higher-kinded types remove this restriction, so that you could have something like this (not fully thought out):
interface Functor<F, A> {
  <B> F<B> map(Function<A, B> f);
}

class List<A> implements Functor<List, A> {
  // Since F := List, F<B> := List<B>
  <B> List<B> map(Function<A, B> f) {
    // ...
  }
}
And addressing this bit in particular:
(I think) I get that instead of myList |> List.map f or myList |> Seq.map f |> Seq.toList higher kinded types allow you to simply write myList |> map f and it'll return a List. That's great (assuming it's correct), but seems kind of petty? (And couldn't it be done simply by allowing function overloading?) I usually convert to Seq anyway and then I can convert to whatever I want afterwards.
There are many languages that generalize the idea of the map function this way, by modeling it as if, at heart, mapping is about sequences. This remark of yours is in that spirit: if you have a type that supports conversion to and from Seq, you get the map operation "for free" by reusing Seq.map.
In Haskell, however, the Functor class is more general than that; it isn't tied to the notion of sequences. You can implement fmap for types that have no good mapping to sequences, like IO actions, parser combinators, functions, etc.:
instance Functor IO where
  fmap f action = do
    x <- action
    return (f x)

-- This declaration is just to make things easier to read for non-Haskellers
newtype Function a b = Function (a -> b)

instance Functor (Function a) where
  fmap f (Function g) = Function (f . g) -- `.` is function composition
The concept of "mapping" really isn't tied to sequences. It's best to understand the functor laws:
(1) fmap id xs == xs
(2) fmap f (fmap g xs) == fmap (f . g) xs
Very informally:
The first law says that mapping with an identity/noop function is the same as doing nothing.
The second law says that any result that you can produce by mapping twice, you can also produce by mapping once.
This is why you want fmap to preserve the type—because as soon as you get map operations that produce a different result type, it becomes much, much harder to make guarantees like this.
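To make that concrete, here are the two laws instantiated at lists; any lawful Functor instance must satisfy the same equations:

-- law 1: mapping with id is a no-op
fmap id [1, 2, 3] == [1, 2, 3]

-- law 2: mapping twice is mapping once with the composition
fmap (* 2) (fmap (+ 1) [1, 2, 3]) == fmap ((* 2) . (+ 1)) [1, 2, 3] -- both [4,6,8]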
I don't want to repeat information in some excellent answers already here, but there's a key point I'd like to add.
You usually don't need higher-kinded types to implement any one particular monad, or functor (or applicative functor, or arrow, or ...). But doing so is mostly missing the point.
In general I've found that when people don't see the usefulness of functors/monads/whatevers, it's often because they're thinking of these things one at a time. Functor/monad/etc operations really add nothing to any one instance (instead of calling bind, fmap, etc I could just call whatever operations I used to implement bind, fmap, etc). What you really want these abstractions for is so you can have code that works generically with any functor/monad/etc.
In a context where such generic code is widely used, this means any time you write a new monad instance your type immediately gains access to a large number of useful operations that have already been written for you. That's the point of seeing monads (and functors, and ...) everywhere; not so that I can use bind rather than concat and map to implement myFunkyListOperation (which gains me nothing in itself), but rather so that when I come to need myFunkyParserOperation and myFunkyIOOperation I can re-use the code I originally saw in terms of lists because it's actually monad-generic.
But to abstract across a parameterised type like a monad with type safety, you need higher-kinded types (as well explained in other answers here).
For a more .NET-specific perspective, I wrote a blog post about this a while back. The crux of it is, with higher-kinded types, you could potentially reuse the same LINQ blocks between IEnumerables and IObservables, but without higher-kinded types this is impossible.
The closest you could get (I figured out after posting the blog) is to make your own IEnumerable<T> and IObservable<T> and extend them both from an IMonad<T>. This would allow you to reuse your LINQ blocks if they're denoted IMonad<T>, but then it's no longer typesafe, because it allows you to mix and match IObservables and IEnumerables within the same block, which, while it may sound intriguing, would basically just get you undefined behavior.
I wrote a later post on how Haskell makes this easy. (A no-op, really--restricting a block to a certain kind of monad requires code; enabling reuse is the default).
The most-used example of higher-kinded type polymorphism in Haskell is the Monad interface. Functor and Applicative are higher-kinded in the same way, so I'll show Functor in order to show something concise.
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Now, examine that definition, looking at how the type variable f is used. You'll see that f can't mean a type that has values. You can identify values in that type signature because they're arguments to and results of functions. So the type variables a and b are types that can have values. So are the type expressions f a and f b. But not f itself. f is an example of a higher-kinded type variable. Given that * is the kind of types that can have values, f must have the kind * -> *. That is, it takes a type that can have values, because we know from previous examination that a and b must have values. And we also know that f a and f b must have values, so it returns a type that must have values.
This makes the f used in the definition of Functor a higher-kinded type variable.
The Applicative and Monad interfaces add more, but they're compatible. This means that they work on type variables with kind * -> * as well.
Working with higher-kinded types introduces an additional level of abstraction - you aren't restricted to just creating abstractions over basic types. You can also create abstractions over types that modify other types.
Why might you care about Applicative? Because of traversals.
class (Functor t, Foldable t) => Traversable t where
  traverse :: Applicative f => (a -> f b) -> t a -> f (t b)

type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
Once you've written a Traversable instance, or a Traversal for some type, you can use it for an arbitrary Applicative.
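For instance, the very same traverse works at two completely different Applicatives:

traverse Just [1, 2, 3] -- Just [1,2,3], using the Maybe Applicative
traverse (\x -> [x, -x]) [1, 2] -- [[1,2],[1,-2],[-1,2],[-1,-2]], using the list Applicative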
Why might you care about Monad? One reason is streaming systems like pipes, conduit, and streaming. These are entirely non-trivial systems for working with effectful streams. With the Monad class, we can reuse all that machinery for whatever we like, rather than having to rewrite it from scratch each time.
Why else might you care about Monad? Monad transformers. We can layer monad transformers however we like to express different ideas. The uniformity of Monad is what makes this all work.
What are some other interesting higher-kinded types? Let's say ... Coyoneda. Want to make repeated mapping fast? Use
data Coyoneda f a = forall x. Coyoneda (x -> a) (f x)
This works for any functor f passed to it. No higher-kinded types? You'll need a custom version of this for each functor. This is a pretty simple example, but there are much trickier ones you might not want to have to rewrite every time.
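A brief sketch of why this speeds up repeated mapping (the instance below matches the standard one, e.g. in the kan-extensions package): fmap on Coyoneda only composes functions, so a chain of maps costs a single real traversal when you lower back into f:

instance Functor (Coyoneda f) where
  fmap g (Coyoneda h fx) = Coyoneda (g . h) fx

-- pay for one fmap over f at the very end
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda h fx) = fmap h fx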
I recently started learning a bit about higher-kinded types. Although it's an interesting idea, to be able to have a generic that needs another generic, apart from library developers I do not see any practical use in any real app. I use Scala in business apps, and I have also seen and studied the code of some nicely designed systems and libraries like Kafka, Akka and some financial apps. Nowhere did I find any higher-kinded type in use.
It seems like they are nice for academia or similar, but the market doesn't need them, or hasn't reached a point where HKT has any practical uses or proves to be better than other existing techniques. To me it's something that you can use to impress others or write blog posts about, but nothing more than that. It's like the multiverse or string theory: looks nice on paper, gives you hours to speak about, but nothing real
(sorry if you don't have any interest in theoretical physics). One proof is that all the answers above, while they brilliantly describe the mechanics, fail to cite one true real-world case where we would need it, despite the fact that it's been 6+ years since the OP posted the question.