Why does the type parameter of reduceLeft contain a lower bound?

The signature of reduceLeft on some Seq[A] is
def reduceLeft [B >: A] (f: (B, A) => B): B
The type of A is known, but the lower bound >: tells us that B can be any supertype of A.
Why is it like this? Why not
def reduceLeft (f: (A, A) => A): A
We already know that the head of the sequence is of type A, so I can't think of how B could be anything other than A. Can you provide an example where B is some supertype?

Let's say your class B has a method combine(other: B): B, and A is a subclass of B. Now you call reduceLeft((b, a) => b.combine(a)) on a list of As. The head is used as the initial accumulator, widened to B, and since the return type of combine is B, the type parameter of reduceLeft needs to be B.
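A minimal sketch of that situation (A and B here are hypothetical stand-ins):

class B { def combine(other: B): B = new B }
class A extends B

val as: List[A] = List(new A, new A, new A)

// The head is used as the initial accumulator, widened from A to B.
// Without the lower bound [B >: A] this could not type-check,
// because combine returns a B, not an A.
val result: B = as.reduceLeft((b: B, a: A) => b.combine(a))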

Related

Partially applied type constructor in Scala 3?

Reading "Learn You a Haskell for Great Good" and trying to understand how Haskell concepts of the book may be written in Scala 3.
chapter 11 mentions "partially applied type constructors" in section about Functions as Functors:
Another instance of Functor that we've been dealing with all along but didn't know was a Functor is (->) r. You're probably slightly confused now, since what the heck does (->) r mean? The function type r -> a can be rewritten as (->) r a, much like we can write 2 + 3 as (+) 2 3. When we look at it as (->) r a, we can see (->) in a slightly different light, because we see that it's just a type constructor that takes two type parameters, just like Either. But remember, we said that a type constructor has to take exactly one type parameter so that it can be made an instance of Functor. That's why we can't make (->) an instance of Functor, but if we partially apply it to (->) r, it doesn't pose any problems. If the syntax allowed for type constructors to be partially applied with sections (like we can partially apply + by doing (2+), which is the same as (+) 2), you could write (->) r as (r ->)
instance Functor ((->) r) where
fmap f g = (\x -> f (g x))
I understand all of this logic in the book, plus the fact that Haskell deals with type constructors just as with ordinary functions: they can be curried and partially applied.
Question:
What is the Scala 3 analogue of such a partially applied type constructor, so that we might define fmap in a way that visually resembles the Haskell definition above?
(Is it something that can be modelled with higher-kinded types?)
In Scala 3 you can use type lambdas
trait Functor[F[_]]:
  def map[A, B](f: A => B)(fa: F[A]): F[B]

given [R]: Functor[[X] =>> R => X] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
or kind projector (scalacOptions += "-Ykind-projector")
given [R]: Functor[R => *] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
Eventually this should become
given [R]: Functor[R => _] with
  override def map[A, B](f: A => B)(fa: R => A): R => B = fa andThen f
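A quick usage sketch of the type-lambda instance above (the value names are just for illustration):

val double: Int => Int = _ * 2

// Summoning with R = Int; map post-composes the function with f.
val shown: Int => String =
  summon[Functor[[X] =>> Int => X]].map((n: Int) => s"result: $n")(double)

shown(21) // "result: 42"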
See also: Polymorphic method works with type lambda, but not with type wildcard in Scala 3

Scala - Map2 function on Option --> flatMap vs. Map vs. For-Comprehension

Option is used for dealing with partiality in Scala, but we can also lift ordinary functions into the context of Options in order to handle errors. When implementing the function map2, I am curious how to know when to use which function. Consider the following implementation:
def map2[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  ao flatMap { aa =>
    bo map { bb =>
      f(aa, bb)
    }
  }
aa is of type A, and bb is of type B, which are then fed to f, giving us a C. However, if we do the following:
def map2_1[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  ao flatMap { aa =>
    bo flatMap { bb =>
      Some(f(aa, bb))
    }
  }
aa is still of type A, and bb is still of type B, yet we have to wrap the last call in Some(f(aa, bb)) in order to get an Option[C] instead of a plain C. Why is this? What does it mean to flatten on bo here?
Last but not least, one could write the simpler:
def map2_2[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  for {
    as <- ao
    bs <- bo
  } yield f(as, bs)
I know that for-comprehensions are syntactic sugar for foreach, map, flatMap, etc., but how do I, as a developer, know that the compiler will choose map for bs <- bo, and not flatMap?
I think I am on the verge of understanding the difference, yet nested flatmaps confuse me.
Taking the last question first: the developer knows what the compiler will do with for because the behaviour is defined and predictable. All <- turn into flatMap except the last one, which becomes either map or foreach depending on whether or not there is a yield.
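Applying that rule to map2_2 above: bs <- bo is the last generator and there is a yield, so it becomes map; as <- ao comes earlier, so it becomes flatMap. The desugared form is:

ao.flatMap(as => bo.map(bs => f(as, bs)))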
The broader question seems to be about the difference between map and flatMap, which should be clear from the (simplified) signatures for List:
def map[B] (f: A => B) : List[B]
def flatMap[B](f: A => List[B]): List[B]
So map just replaces the values in a List with new values by applying f to each element of type A to generate a B.
flatMap generates a new list by concatenating the results of calling f on each element of the original List. It is equivalent to map followed by flatten (hence the name).
Intuitively, map is a one-for-one replacement whereas flatMap allows each element in the original List to generate 0 or more new elements.
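A quick illustration with List:

val xs = List(1, 2, 3)

xs.map(x => x * 10)         // List(10, 20, 30): exactly one output element per input
xs.flatMap(x => List(x, x)) // List(1, 1, 2, 2, 3, 3): each sub-list is concatenated in

The same signatures also explain the Some in map2_1: Option's flatMap expects a function that returns an Option, so the plain C produced by f(aa, bb) must be wrapped, whereas map wraps its result for you.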

Variance of function

sealed trait Sum[+A, +B] {
  def flatMap[A, C](f: B => Sum[A, C]): Sum[A, C] =
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
Isn't it said that function parameters are contravariant and results covariant? Why does the compiler say that A is in a contravariant position? I was expecting the compiler to complain that B is in a contravariant position instead.
Can someone explain why this is so? I'm feeling confused.
I assume you actually meant to write:
sealed trait Sum[+A, +B] {
  def flatMap[C](f: B => Sum[A, C]): Sum[A, C] = // No shadowing of A
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
Take a look at flatMap again:
def flatMap[C](f: B => Sum[A, C]): Sum[A, C]
Let's rewrite it a bit:
def flatMap[C]: (B => Sum[A, C]) => Sum[A, C]
Let's build up the type from the inside out.
Sum[A, C]
A is a parameter to Sum, which is normally a covariant position.
B => Sum[A, C]
Sum[A, C] is the result of a function, which is normally a covariant position. These two combine, and you have A in a covariant position still.
(B => Sum[A, C]) => Sum[A, C]
B => Sum[A, C] is also the parameter of a function, so the entire thing is in contravariant position. Since A was in a covariant position before, the variances combine and A is now in a contravariant position.
Looking at B:
B => Sum[A, C]
Parameter of a function, which is normally a contravariant position.
(B => Sum[A, C]) => Sum[A, C]
The entire function is also the parameter to another function, so the contravariances cancel out and B is sitting in a covariant position.
You can also draw a nifty analogy. Look at the definition of a covariant and contravariant type parameter:
trait Co[+A]; trait Con[-A]
They look like positive and negative numbers, just a bit. Now, remember the rules for multiplication and signs you learned in elementary:
(+) * (+) = (+)
(+) * (-) = (-)
(-) * (+) = (-)
(-) * (-) = (+)
This is analogous to (if you squint a bit)
Co[Co[A]] => A is in a covariant position
Co[Con[A]] => A is in a contravariant position
Con[Co[A]] => A is in a contravariant position
Con[Con[A]] => A is in a covariant position.
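That also suggests the usual way out, sketched below: introduce a fresh, lower-bounded type parameter so that A itself never appears inside a parameter type (Failure and Success are assumed to be the standard case classes of Sum):

sealed trait Sum[+A, +B] {
  // AA >: A absorbs the contravariant position; A only appears as a bound
  def flatMap[AA >: A, C](f: B => Sum[AA, C]): Sum[AA, C] =
    this match {
      case Failure(v) => Failure(v)
      case Success(v) => f(v)
    }
}
case class Failure[A](value: A) extends Sum[A, Nothing]
case class Success[B](value: B) extends Sum[Nothing, B]

This is the same trick the standard library's Either uses for its flatMap.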

Why is fold curried?

Couldn't the start value just be a parameter in op's argument list?
fold is defined on List as
def fold[A1 >: A](z: A1)(op: (A1, A1) => A1): A1
"Folds the elements of this traversable or iterator using the specified associative binary operator."
What would be the implications of defining fold as
def fold[A1 >: A](op: (A1, A1, A1) => A1): A1 // op's first argument takes the start value z
So in this version the initial value is passed as an argument to op instead of being supplied in a separate, curried parameter list.
If you're looking to motivate that particular signature of foldLeft, it may be worthwhile to first examine reduceLeft.
// Slightly simplified to remove the supertype constraint
def reduceLeft(f: (A, A) => A): A
reduceLeft squishes the entire collection into a single element and it takes as an argument a function that tells it how to squish each new element in the collection onto what it's got so far.
There's, however, a problem. reduceLeft is partial. In particular if the collection is empty, reduceLeft has nowhere to begin squishing things. So we can make it total, by telling reduceLeft where to begin. So we give reduceLeft an additional parameter.
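Concretely:

List(1, 2, 3).reduceLeft(_ + _)   // 6
List.empty[Int].reduceLeft(_ + _) // throws UnsupportedOperationException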
def reduceLeftTotal(initial: A, f: (A, A) => A): A
Note that if we just glommed initial as another argument to f, we wouldn't fix the partiality of reduceLeft. If this is an empty collection, we still blow up.
// This doesn't get us what we want. Where does the initial `A` come from?
def reduceLeftNotWhatWeWant(f: (A, A, A) => A): A
Okay, now that we've got reduceLeftTotal, there's an immediate new avenue for generalization. Why does the thing that we're squishing all the elements of our collection onto have to have the same type as the elements? The answer is it doesn't!
def generalReduceLeftTotal[B](initial: B, f: (B, A) => B): B
Finally, because type information from previous argument lists (but not from previous arguments in the same list) can be used to help Scala's type inference, we can reduce the number of explicit type annotations we need by currying.
// And we're back to foldLeft!
def foldLeft[B](initial: B)(f: (B, A) => B): B
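That inference benefit is easy to see in use (foldLeftUncurried below is a hypothetical uncurried variant, not a real method):

val xs = List(1, 2, 3)

// Curried: B is fixed to String by the first argument list,
// so acc and x need no annotations, and B may differ from A.
xs.foldLeft("")((acc, x) => acc + x) // "123"

// Uncurried, B would have to be inferred from both arguments at once:
// xs.foldLeftUncurried("", (acc, x) => acc + x) // error: missing parameter type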

How to extract an element from an HList with a specific (parameterized) type

I'm chaining transformations, and I'd like to accumulate the result of each transformation so that it can potentially be used in any subsequent step, and also so that all the results are available at the end (mostly for debugging purposes). There are several steps and from time to time I need to add a new step or change the inputs for a step.
HList seems to offer a convenient way to collect the results in a flexible but still type-safe way. But I'd rather not complicate the actual steps by making them deal with the HList and the accompanying business.
Here's a simplified version of the combinator I'd like to write, which isn't working. The idea is that given an HList containing an A, and the index of A, and a function from A -> B, mapNth will extract the A, run the function, and cons the result onto the list. The resulting extended list captures the type of the new result, so several of these mapNth-ified steps can be composed to produce a list containing the result from each step:
def mapNth[L <: HList, A, B]
  (l: L, index: Nat, f: A => B)
  (implicit at: shapeless.ops.hlist.At[L, index.N]): B :: L =
  f(l(index)) :: l
Incidentally, I'll also need map2Nth taking two indices and f: (A, B) => C, but I believe the issues are the same.
However, mapNth does not compile, saying l(index) has type at.Out, but f's argument should be A. That's correct, of course, so what I suppose I need is a way to provide evidence that at.Out is in fact A (or, at.Out <: A).
Is there a way to express that constraint? I believe it will have to take the form of an implicit, because of course the constraint can only be checked when mapNth is applied to a particular list and function.
You're exactly right about needing evidence that at.Out is A, and you can provide that evidence by including the value of the type member in at's type:
def mapNth[L <: HList, A, B]
  (l: L, index: Nat, f: A => B)
  (implicit at: shapeless.ops.hlist.At[L, index.N] { type Out = A }): B :: L =
  f(l(index)) :: l
The companion objects for type classes like At in Shapeless also define an Aux type that includes the output type as a final type parameter.
def mapNth[L <: HList, A, B]
  (l: L, index: Nat, f: A => B)
  (implicit at: shapeless.ops.hlist.At.Aux[L, index.N, A]): B :: L =
  f(l(index)) :: l
This is pretty much equivalent but more idiomatic (and it looks a little nicer).
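A quick usage sketch (assuming shapeless 2.x, whose literal Int-to-Nat conversion lets us pass 1 as the index; the values are just for illustration):

import shapeless._

val steps = 23 :: "foo" :: true :: HNil

// The element at index 1 is a String, so f must accept a String;
// its Int result is consed onto the front of the original list.
val extended = mapNth(steps, 1, (s: String) => s.length)
// extended: Int :: Int :: String :: Boolean :: HNil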