Variance of functions - Scala

sealed trait Sum[+A, +B] {
  def flatMap[A, C](f: B => Sum[A, C]): Sum[A, C] =
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
Isn't it said that function parameters are contravariant and results covariant? Why does the compiler say that A is in a contravariant position? I was expecting the compiler to complain that B is in a contravariant position instead.
Can someone explain why this is so? I'm feeling confused.

I assume you actually meant to write:
sealed trait Sum[+A, +B] {
  def flatMap[C](f: B => Sum[A, C]): Sum[A, C] = // No shadowing of A
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
Take a look at flatMap again:
def flatMap[C](f: B => Sum[A, C]): Sum[A, C]
Let's rewrite it a bit:
def flatMap[C]: (B => Sum[A, C]) => Sum[A, C]
Let's build up the type from the inside out.
Sum[A, C]
A is a parameter to Sum, which is normally a covariant position.
B => Sum[A, C]
Sum[A, C] is the result of a function, which is normally a covariant position. These two combine, and you have A in a covariant position still.
(B => Sum[A, C]) => Sum[A, C]
B => Sum[A, C] is also the parameter of a function, so the entire thing is in contravariant position. Since A was in a covariant position before, the variances combine and A is now in a contravariant position.
Looking at B:
B => Sum[A, C]
Parameter of a function, which is normally a contravariant position.
(B => Sum[A, C]) => Sum[A, C]
The entire function is also the parameter to another function, so the contravariances cancel out and B is sitting in a covariant position.
You can also draw a nifty analogy. Look at the definition of a covariant and contravariant type parameter:
trait Co[+A]; trait Con[-A]
They look a bit like positive and negative numbers. Now, remember the rules for multiplying signs that you learned in elementary school:
(+) * (+) = (+)
(+) * (-) = (-)
(-) * (+) = (-)
(-) * (-) = (+)
This is analogous to (if you squint a bit)
Co[Co[A]] => A is in a covariant position
Co[Con[A]] => A is in a contravariant position
Con[Co[A]] => A is in a contravariant position
Con[Con[A]] => A is in a covariant position.
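For completeness, the usual way to make the original flatMap compile is to widen A with a lower bound, so the covariant A never ends up in a contravariant position. A minimal sketch, assuming Failure wraps the A side and Success wraps the B side (their definitions are not shown in the question):
sealed trait Sum[+A, +B] {
  def flatMap[AA >: A, C](f: B => Sum[AA, C]): Sum[AA, C] =
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
final case class Failure[A](value: A) extends Sum[A, Nothing]
final case class Success[B](value: B) extends Sum[Nothing, B]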

Related

Partially applied type constructor in Scala 3?

Reading "Learn You a Haskell for Great Good" and trying to understand how Haskell concepts of the book may be written in Scala 3.
chapter 11 mentions "partially applied type constructors" in section about Functions as Functors:
Another instance of Functor that we've been dealing with all along but didn't know was a Functor is (->) r. You're probably slightly confused now, since what the heck does (->) r mean? The function type r -> a can be rewritten as (->) r a, much like we can write 2 + 3 as (+) 2 3. When we look at it as (->) r a, we can see (->) in a slightly different light, because we see that it's just a type constructor that takes two type parameters, just like Either. But remember, we said that a type constructor has to take exactly one type parameter so that it can be made an instance of Functor. That's why we can't make (->) an instance of Functor, but if we partially apply it to (->) r, it doesn't pose any problems. If the syntax allowed for type constructors to be partially applied with sections (like we can partially apply + by doing (2+), which is the same as (+) 2), you could write (->) r as (r ->)
instance Functor ((->) r) where
  fmap f g = (\x -> f (g x))
I understand all of this logic in the book, plus the fact that Haskell treats type constructors just like ordinary functions: they can be curried and partially applied.
Question:
What is the Scala 3 analogue of such a partially applied type constructor, so that we might define fmap in a way that visually resembles the Haskell definition above?
(Is it something that can be modelled with higher-kinded types?)
In Scala 3 you can use type lambdas
trait Functor[F[_]]:
  def map[A, B](f: A => B)(fa: F[A]): F[B]

given [R]: Functor[[X] =>> R => X] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
or kind projector (scalacOptions += "-Ykind-projector")
given [R]: Functor[R => *] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
Eventually this should become
given [R]: Functor[R => _] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
See also: Polymorphic method works with type lambda, but not with type wildcard in Scala 3.
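A quick usage sketch of the type-lambda instance (len and doubledLen are hypothetical example values, not part of the answer):
val len: String => Int = _.length
val functor = summon[Functor[[X] =>> String => X]]
val doubledLen: String => Int = functor.map((n: Int) => n * 2)(len)
// doubledLen("abcd") == 8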

In Scala cats-laws, why is the functor composition law different from canonical definition?

The (covariant) functor definition in cats-laws looks like this:
def covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =
  fa.map(f).map(g) <-> fa.map(f.andThen(g))
But if I translate the functor composition rule to Scala, it should be:
def covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =
  fa.map(f).andThen(fa.map(g)) <-> fa.map(f.andThen(g))
Why are they different? Which version is correct?
UPDATE 1 I'm aware of a similar implementation in Haskell, but I haven't had a chance to read it. I wonder if the Haskell version is more by the book.
F(g ∘ f) = F(g) ∘ F(f) is the same as ∀ fa, (F(g ∘ f))(fa) = (F(g) ∘ F(f))(fa) (equality of functions is equality of their images on all arguments; this is function extensionality, as in HoTT).
The latter is translated as
def covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =
  fa.map(f).map(g) <-> fa.map(f.andThen(g))
(actually, fa.map(f.andThen(g)) <-> fa.map(f).map(g)).
If you'd like to have "point-free" F(g ∘ f) = F(g) ∘ F(f) you could write _.map(f.andThen(g)) <-> _.map(f).map(g) or _.map(f.andThen(g)) <-> (_.map(f)).andThen(_.map(g)) (this is fmap (g . f) = fmap g . fmap f in Haskell, or more precisely, in some "meta-Haskell").
The 2nd code snippet in your question
def covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =
  fa.map(f).andThen(fa.map(g)) <-> fa.map(f.andThen(g))
is incorrect: fa.map(f).andThen... doesn't make sense, as was mentioned in the comments. You seem to be confusing F and F[A].
In category theory, in general categories, f: A -> B can be just arrows, not necessarily functions (e.g. related pairs in a pre-order if a category is this pre-order), so (F(g ∘ f))(fa) can make no sense. But the category of types in Scala (or Haskell) is a category where objects are types and morphisms are functions.
I think your confusion comes from the different ways the functor map property can be represented.
trait Functor[F[_]] {
  def map1[A, B](f: A => B): F[A] => F[B]
  def map2[A, B](f: A => B)(fa: F[A]): F[B]
  def map3[A, B](fa: F[A])(f: A => B): F[B]
}
Here, map1 is the Haskell-aligned definition, and hence the functor law representation used by Haskell also works with this one.
So, this Haskell
fmap (g . f) = fmap g . fmap f
translates to the following Scala:
map1( g.compose(f) ) = map1(g).compose( map1(f) )
// or
map1( f.andThen(g) ) <-> map1(f).andThen(map1(g))
But the thing is that we have a few more ways to represent the same map property, as given by map2 and map3. The overall essence is still the same; we just switched the representation.
Now, when we add the full object-oriented angle to it, the "object-oriented" Functor becomes something like the following.
trait List[+A] {
  def map[B](f: A => B): List[B]
}
So... for the "object oriented functor" like List, the same law can be represented as following
listA.map(f).map(g) <-> listA.map(f.andThen(g))
And, you are seeing exactly this.
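A concrete sanity check of that law on a plain List (a sketch; f and g are arbitrary example functions):
val listA = List(1, 2, 3)
val f: Int => Int = _ + 1
val g: Int => String = _.toString
assert(listA.map(f).map(g) == listA.map(f.andThen(g))) // both sides are List("2", "3", "4")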

How to ask Scala if evidence exists for all instantiations of type parameter?

Given the following type-level addition function on Peano numbers
sealed trait Nat
class O extends Nat
class S[N <: Nat] extends Nat

type plus[a <: Nat, b <: Nat] = a match
  case O => b
  case S[n] => S[n plus b]
say we want to prove a theorem like
for all natural numbers n, n + 0 = n
which perhaps can be specified like so
type plus_n_0 = [n <: Nat] =>> (n plus O) =:= n
then when it comes to providing evidence for the theorem, we can easily ask the Scala compiler for evidence in particular cases:
summon[plus_n_0[S[S[O]]]] // ok, 2 + 0 = 2
but how can we ask Scala if it can generate evidence for all instantiations of [n <: Nat], thus providing proof of plus_n_0?
Here is one possible approach, which is an attempt at a literal interpretation of this paragraph:
When proving a statement E:N→U about all natural numbers, it suffices to prove it for 0 and for succ(n), assuming it holds for n, i.e., we construct ez:E(0) and es:∏(n:N)E(n)→E(succ(n)).
from the HoTT book (section 5.1).
Here is the plan of what was implemented in the code below:
Formulate what it means to have a proof for a statement that "Some property P holds for all natural numbers". Below, we will use
trait Forall[N, P[n <: N]]:
  inline def apply[n <: N]: P[n]
where the signature of the apply-method essentially says "for all n <: N, we can generate evidence of P[n]".
Note that the method is declared to be inline. This is one possible way to ensure that the proof of ∀n.P(n) is constructive and executable at runtime (However, see edit history for alternative proposals with manually generated witness terms).
Postulate some sort of induction principle for natural numbers. Below, we will use the following formulation:
If
  P(0) holds, and
  whenever P(i) holds, then also P(i + 1) holds,
then
  for all n, P(n) holds.
I believe that it should be possible to derive such induction principles using some metaprogramming facilities.
Write proofs for the base case and the induction case of the induction principle.
???
Profit
The code then looks like this:
sealed trait Nat
class O extends Nat
class S[N <: Nat] extends Nat

type plus[a <: Nat, b <: Nat] <: Nat = a match
  case O => b
  case S[n] => S[n plus b]

trait Forall[N, P[n <: N]]:
  inline def apply[n <: N]: P[n]

trait NatInductionPrinciple[P[n <: Nat]] extends Forall[Nat, P]:
  def base: P[O]
  def step: [i <: Nat] => (P[i] => P[S[i]])
  inline def apply[n <: Nat]: P[n] =
    (inline compiletime.erasedValue[n] match
      case _: O => base
      case _: S[pred] => step(apply[pred])
    ).asInstanceOf[P[n]]

given liftCoUpperbounded[U, A <: U, B <: U, S[_ <: U]](using ev: A =:= B): (S[A] =:= S[B]) =
  ev.liftCo[[X] =>> Any].asInstanceOf[S[A] =:= S[B]]

type NatPlusZeroEqualsNat[n <: Nat] = (n plus O) =:= n

def trivialLemma[i <: Nat]: ((S[i] plus O) =:= S[i plus O]) =
  summon[(S[i] plus O) =:= S[i plus O]]

object Proof extends NatInductionPrinciple[NatPlusZeroEqualsNat]:
  val base = summon[(O plus O) =:= O]
  val step: ([i <: Nat] => NatPlusZeroEqualsNat[i] => NatPlusZeroEqualsNat[S[i]]) =
    [i <: Nat] => (p: NatPlusZeroEqualsNat[i]) =>
      given previousStep: ((i plus O) =:= i) = p
      given liftPreviousStep: (S[i plus O] =:= S[i]) =
        liftCoUpperbounded[Nat, i plus O, i, S]
      given definitionalEquality: ((S[i] plus O) =:= S[i plus O]) =
        trivialLemma[i]
      definitionalEquality.andThen(liftPreviousStep)

def demoNat(): Unit = {
  println("Running demoNat...")
  type two = S[S[O]]
  val ev = Proof[two]
  val twoInstance: two = new S[S[O]]
  println(ev(twoInstance) == twoInstance)
}
It compiles, runs, and prints:
true
meaning that we have successfully invoked the recursively defined
method on the executable evidence-term of type two plus O =:= two.
Some further comments
The trivialLemma was necessary so that summons inside of other givens don't accidentally generate recursive loops, which is a bit annoying.
The separate liftCo-method for S[_ <: U] was needed, because =:=.liftCo does not allow type constructors with upper-bounded type parameters.
compiletime.erasedValue + inline match is awesome! It automatically generates some sort of runtime-gizmos that allow us to do pattern matching on an "erased" type. Before I found this out, I was attempting to construct appropriate witness terms manually, but this does not seem necessary at all, it's provided for free (see edit history for the approach with manually constructed witness terms).
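As a minimal standalone illustration of that erasedValue + inline match trick (depth is a hypothetical helper, not part of the proof above):
import scala.compiletime.erasedValue

inline def depth[n <: Nat]: Int = inline erasedValue[n] match
  case _: O => 0
  case _: S[pred] => 1 + depth[pred]

// depth[S[S[O]]] unfolds at compile time to 1 + (1 + 0) == 2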

How does pattern matching work in the exists function?

This code is from the book "Functional Programming in Scala":
sealed trait Stream[+A] {
  def foldRight[B](z: => B)(f: (A, => B) => B): B = this match {
    case Cons(h, t) => f(h(), t().foldRight(z)(f))
    case _ => z
  }

  def exists(p: A => Boolean): Boolean = foldRight(false)((a, b) => p(a) || b)
}
case object Empty extends Stream[Nothing]
case class Cons[+A](h: () => A, t: () => Stream[A]) extends Stream[A]
I don't understand what a and b are in the exists function. How does Scala match the arguments to foldRight?
foldRight and foldLeft operate over a collection (a stream in your example) and receive two parameters: a base value and a function. This function in turn receives two parameters, an accumulator and an element, which it processes in each iteration.
The accumulator is on the side of the fold (i.e. on the right in foldRight and on the left in foldLeft), so in your case b is the accumulator.
The accumulator is initialized to the base value (false in your example).
The other parameter (a in your example) is each element of the stream over which you iterate.
In this case, exists will iterate until an element satisfies the predicate p; otherwise it keeps iterating until it reaches the end of the Stream.
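A small usage sketch (the cons helper here is hypothetical; it just wraps the question's Cons constructor in by-name parameters):
def cons[A](h: => A, t: => Stream[A]): Stream[A] = Cons(() => h, () => t)

val s: Stream[Int] = cons(1, cons(2, cons(3, Empty)))
println(s.exists(_ % 2 == 0)) // true; stops at 2 because b is by-name and || is lazy in its right operand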
From the API
def foldRight[B](z: B)(op: (A, B) ⇒ B): B
Applies a binary operator to all elements of this sequence and a start value, going right to left.
Note: will not terminate for infinite-sized collections.
B - the result type of the binary operator.
z - the start value.
op - the binary operator.
Returns the result of inserting op between consecutive elements of this sequence, going right to left with the start value z on the right:
op(x_1, op(x_2, ... op(x_n, z)...)) where x_1, ..., x_n are the elements of this sequence. Returns z if this sequence is empty.
You can check the whole API here
Always check the excellent Scala API (http://lampwww.epfl.ch/~hmiller/scaladoc/library/scala/collection/TraversableOnce.html).
def foldRight[B](z: B)(op: (A, B) ⇒ B): B
From the types you can figure it out:
a -> A
b -> B
As you have a Stream of type A (sealed trait Stream[+A]), a can only be one of the elements of the Stream.
b is then the value you accumulate the result in. In exists, it tracks whether some element satisfies the predicate p.

Why does the type parameter of reduceLeft contain a lower bound?

The signature of reduceLeft on some Seq[A] is
def reduceLeft [B >: A] (f: (B, A) => B): B
The type of A is known, but the lower bound >: tells us that B can be any supertype of A.
Why is it like this? Why not
def reduceLeft (f: (A, A) => A): A
We already know that the head of the sequence is of type A, so I can't think of how B could be anything other than A. Can you provide an example where B is some supertype?
Let's say your class B has a method combine(other: B): B, and A is a subclass of B. Now you call reduceLeft((b, a) => b.combine(a)) on a list of As. Since the return type of combine is B, the type parameter of reduceLeft needs to be B.
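A sketch of that situation (the Expr/Num/Add names are hypothetical):
trait Expr { def combine(other: Expr): Expr = Add(this, other) }
case class Num(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr

val nums: List[Num] = List(Num(1), Num(2), Num(3))
// combine returns Expr, so B must be Expr, a supertype of Num:
val sum: Expr = nums.reduceLeft[Expr]((b, a) => b.combine(a))
// sum == Add(Add(Num(1), Num(2)), Num(3))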