Dealing with Option and Either types - idiomatic conversions? - scala

I'm probably missing something that's right in the documentation, but I can't really make much sense of it - I've been teaching myself Scala mostly by trial and error.
Given a function f: A => C, what is the idiomatic way to perform the following conversions?
Either[A, B] -> Either[C, B]
Either[B, A] -> Either[B, C]
(If I have two such functions and want to convert both sides, can I do it all at once or should I apply the idiom twice sequentially?)
Option[A] -> Option[C]
(I have a feeling that this is supposed to use for (...) yield somehow; I'm probably just blanking on it, and will feel silly when I see an answer)
And what exactly is a "projection" of an Either, anyway?

You do either:
either.left.map(f)
or:
either.right.map(f)
You can also use a for-comprehension: for (x <- either.left) yield f(x)
Here's a more concrete example of doing a map on an Either[Boolean, Int]:
scala> val either: Either[Boolean, Int] = Right(5)
either: Either[Boolean, Int] = Right(5)
scala> val e2 = either.right.map(_ > 0)
e2: Either[Boolean, Boolean] = Right(true)
scala> e2.left.map(!_)
res0: Either[Boolean, Boolean] = Right(true)
EDIT:
How does it work? Say you have an Either[A, B]. Calling left or right creates a LeftProjection or a RightProjection object that is a wrapper that holds the Either[A, B] object.
For the left wrapper, a subsequent map with a function f: A => C transforms the Either[A, B] into an Either[C, B]. It does so by pattern matching under the hood to check whether the Either is actually a Left. If it is, it creates a new Left[C, B] with the mapped value. If not, it just creates a new Right[C, B] with the same underlying value.
And vice versa for the right wrapper. Effectively, saying either.right.map(f) means - if the either (Either[A, B]) object holds a Right value, map it. Otherwise, leave it as is, but change the type B of the either object as if you've mapped it.
So technically, these projections are mere wrappers. Semantically, they are a way of saying that you are doing something that assumes that the value stored in the Either object is either Left or Right. If this assumption is wrong, the mapping does nothing, but the type parameters are changed accordingly.
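Here is a minimal sketch (assuming the pre-2.12 projection-based Either; this is not the actual standard-library source) of what either.left.map(f) does under the hood:
def leftMap[A, B, C](e: Either[A, B])(f: A => C): Either[C, B] = e match {
  case Left(a)  => Left(f(a))  // the value really is on the left, so apply f
  case Right(b) => Right(b)    // keep the value; only the Left type parameter changes
}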

Given f: A=>B and xOpt: Option[A], xOpt map f produces the Option[B] you need.
Given f: A=>B and xOrY: Either[A, C], xOrY.left.map(f) produces the Either you are looking for, mapping just the first component; similarly you can deal with RightProjection of Either.
If you have two functions f: A => C and g: B => C, xOrY.fold(f, g) handles both components at once, but it collapses the Either into a single value; to stay inside Either, apply left.map and right.map one after the other.
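For instance (a small illustrative sketch with made-up values):
val xOpt: Option[String] = Some("abc")
xOpt.map(_.length)             // Some(3): Option[Int]

val xOrY: Either[String, Int] = Left("abc")
xOrY.fold(_.length, identity)  // 3: both sides collapsed into a single Int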

val e1: Either[String, Long] = Right(1)
val e2: Either[Int, Boolean] = e1.left.map(_.size).right.map(_ > 1)
// e2: Either[Int, Boolean] = Right(false)

Related

Scala - Map2 function on Option --> flatMap vs. Map vs. For-Comprehension

Option is used for dealing with partiality in Scala, but we can also lift ordinary functions into the context of Options in order to handle errors. When implementing the function map2, I am curious how to know when to use which function. Consider the following implementation:
def map2[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  ao flatMap { aa =>
    bo map { bb =>
      f(aa, bb)
    }
  }
aa is of type A, and bb is of type B, which are then fed to f, giving us a C. However, if we do the following:
def map2_1[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  ao flatMap { aa =>
    bo flatMap { bb =>
      Some(f(aa, bb))
    }
  }
aa is still of type A, and bb is still of type B, yet we have to wrap the last call in Some(f(aa, bb)) in order to get an Option[C] instead of a plain C. Why is this? What does it mean to flatMap on bo here?
Last but not least, one could do the simpler:
def map2_2[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] = for {
  as <- ao
  bs <- bo
} yield f(as, bs)
I know that for-comprehensions are syntactic sugar for foreach, map and flatMap, etc., but how do I, as a developer, know that the compiler will choose map for bs <- bo, and not flatMap?
I think I am on the verge of understanding the difference, yet nested flatmaps confuse me.
Taking the last question first, the developer knows what the compiler will do with for because the behaviour is defined and predictable: every <- generator turns into flatMap except the last one, which becomes map (with a yield) or foreach (without one).
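Concretely, map2_2 above desugars into roughly this (a sketch of the standard for-comprehension desugaring, not literal compiler output):
def map2_2[A,B,C](ao: Option[A], bo: Option[B])(f: (A,B) => C): Option[C] =
  ao.flatMap(as => bo.map(bs => f(as, bs)))  // earlier generators become flatMap, the last one becomes map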
The broader question seems to be about the difference between map and flatMap. The difference should be clear from the signatures e.g. for List these are the (simplified) signatures:
def map[B] (f: A => B) : List[B]
def flatMap[B](f: A => List[B]): List[B]
So map just replaces the values in a List with new values by applying f to each element of type A to generate a B.
flatMap generates a new list by concatenating the results of calling f on each element of the original List. It is equivalent to map followed by flatten (hence the name).
Intuitively, map is a one-for-one replacement whereas flatMap allows each element in the original List to generate 0 or more new elements.
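For example, using nothing beyond the standard library:
val xs = List(1, 2, 3)
xs.map(x => List(x, x))      // List(List(1, 1), List(2, 2), List(3, 3)) -- one element in, one element out
xs.flatMap(x => List(x, x))  // List(1, 1, 2, 2, 3, 3) -- the per-element lists are concatenated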

Why List.fill method has two group of parameters instead of one?

What is the reason behind List.fill being defined with two groups of parameters instead of one group with the n and elem parameters together?
Current definition
def fill[A](n: Int)(elem: ⇒ A): CC[A]
Proposed definition
def fill[A](n: Int, elem: ⇒ A): CC[A]
Isn't it unnecessary boilerplate? Or is it designed to use the first part (List.fill(n)) as a curried function constructor?
You can write
List.fill(10){ val r = math.random; r * r }
but you cannot write
List.fill(10, val r = math.random; r * r)
and
List.fill(10, { val r = math.random; r * r })
looks somewhat awkward.
In this case, it's almost irrelevant, but note that the way the arguments are grouped into argument lists can influence type inference quite significantly, e.g.
def map[X, Y](a: F[X])(f: X => Y): F[Y]
works perfectly fine without any type annotations most of the time, whereas
def map[X, Y](a: F[X], f: X => Y): F[Y]
is quite painful to use. Take a careful look at such methods as ap, ap2 or map2 in this piece of code, for example. There is a good reason why the argument lists are the way they are; you would notice it immediately if they were defined differently.
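To see the effect, here is a hedged sketch with hypothetical method names (the behaviour described is that of the Scala 2 type checker):
def mapCurried[X, Y](a: Option[X])(f: X => Y): Option[Y]   = a.map(f)
def mapUncurried[X, Y](a: Option[X], f: X => Y): Option[Y] = a.map(f)

mapCurried(Option(1))(_ + 1)              // compiles: X is fixed by the first argument list before f is checked
// mapUncurried(Option(1), _ + 1)         // fails to compile: "missing parameter type" for the lambda
mapUncurried[Int, Int](Option(1), _ + 1)  // works once the type arguments are written out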

Why does this function parameter need double braces on a tuple?

Note: I am using scalac. Please do not recommend to use sbt instead.
I ran into a peculiar issue that I could resolve, but I am wondering why it works that way and not the way I did it before. Here's a code snippet:
def multiply[A](r1: Vector[A], r2: Vector[A], multOp: (A,A) => A, sumOp: (A, A) => A) =
  r1.zip(r2).map(multOp).reduce(sumOp)
It does not compile, resulting in an error message like:
Error:(73, 20) type mismatch;
found : (A, A) => A
required: ((A, A)) => ?
r1.zip(r2).map(multOp).reduce(sumOp)
Changing the snippet to:
def multiply[A](r1: Vector[A], r2: Vector[A], multOp: ((A,A)) => A, sumOp: (A, A) => A) =
  r1.zip(r2).map(multOp).reduce(sumOp)
will resolve the issue.
Note that sumOp does work with only one pair of braces.
Why?
Method map is defined as taking a single parameter, and (A, A) => A has two. By converting two parameters of type A into one parameter which is a tuple of type (A, A), it compiles.
(A, A) => A // fails due to two params of type A
((A, A)) => A // works due to one param of type (A, A)
On the other hand, reduce is defined as taking two parameters of the same type, so it's happy to take sumOp which matches that description.
Here are the full signatures found in TraversableLike and TraversableOnce respectively:
def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That
def reduce[A1 >: A](op: (A1, A1) => A1): A1
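If you would rather keep multOp as an ordinary two-argument function, one alternative (a sketch, not the only fix) is to destructure the tuple in the lambda passed to map, or equivalently use multOp.tupled, which turns an (A, A) => A into an ((A, A)) => A:
def multiply[A](r1: Vector[A], r2: Vector[A], multOp: (A, A) => A, sumOp: (A, A) => A): A =
  r1.zip(r2).map { case (a, b) => multOp(a, b) }.reduce(sumOp)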
EDIT (extra info):
The reason for this is that reduce always takes a 2-arity function (that is, a function of two parameters) in order to reduce the collection to a single value by iteratively applying that function to the result of the previous application and the next value. On the other hand, map always takes a 1-arity function and maps the underlying value with that function. In the case of Option, Future, etc. there's only one single underlying value, while in the case of a Vector (like yours) there can be many, so it applies the function to every element of the collection.
In some libraries you might come across map2 which takes a two-parameter function. For example, in order to combine two Options (actually, any applicative functors, but let's leave theory aside), you might do:
// pseudocode
Option(1, 2).map2((a, b) => a + b)
which would give you an Option(3). I think this mechanism has been dropped in the favour of more easily understandable map + product
// pseudocode
(Option(1) product Option(2)) map ((a, b) => a + b)
Actual scalaz syntax for the line above would be (in Scalaz 7):
(Option(1) |@| Option(2))((a, b) => a + b)
They are equally powerful principles (whatever one can do, the other can do too, no more, no less), so the latter is usually preferred and sometimes it's the only one provided, but yes, you might come across map2 from time to time.
Alright, that's a bit of extra info. As far as map is concerned, just remember there's always just one single parameter coming in and one value coming out.
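In plain Scala 2.13+ the same product-then-map idea can be written with Option.zip; this is only an illustrative sketch, not the scalaz machinery:
val sum: Option[Int] =
  Option(1).zip(Option(2)).map { case (a, b) => a + b }  // Some(3); None if either side is None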
The first snippet will work if you define it as:
def multiply[A](r1: Vector[A], r2: Vector[A], multOp: (A,A) => A, sumOp: (A, A) => A) =
  (r1, r2).zipped.map(multOp).reduce(sumOp)
The map method of a zipped tuple takes a function with two arguments (A, A) => B, because that's the expected usage pattern.
This approach also avoids the creation of an intermediate Vector[(A, A)].

Folding union-types into Either/Coproducts

Given a pseudo-union type in Scala.js, how would I fold it into an Either[A, B] (or a coproduct)?
If you're looking for a function toEither[A, B](union: A | B): Either[A, B] you're out of luck. The easiest way to see this is to note that it must work for any choices of A and B, so if I specialize them both to Unit
toEither[A, B](union: A | B ): Either[A, B]
toEither (union: Unit | Unit): Either[Unit, Unit]
toEither (union: Unit ): Either[Unit, Unit]
it becomes clear that any such function would need to make an arbitrary choice and thus no such function would really exist. Try this exercise with other types C and A = B = C.
Generally, union types are convenient when you recognize that some Javascript function takes several different types of values and distinguishes them at runtime, but they're also less useful than Either is.
On the other hand, a function toUnion[A, B](eit: Either[A, B]): A | B does only one thing: it "forgets" whether a value is a Left or a Right, collapsing it into the undifferentiated union. With that information destroyed, you have fewer options to move forward.
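A minimal sketch of such a toUnion, assuming Scala.js's js.| (union types are erased at runtime there, so the casts below only change the static type):
import scala.scalajs.js.|

def toUnion[A, B](eit: Either[A, B]): A | B = eit match {
  case Left(a)  => a.asInstanceOf[A | B]   // A is a member of A | B
  case Right(b) => b.asInstanceOf[A | B]   // likewise for B
}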

Why is fold curried?

Could the start value just be a parameter in the op argument list?
fold is defined on List as
def fold[A1 >: A](z: A1)(op: (A1, A1) ⇒ A1): A1
Folds the elements of this traversable or iterator using the specified associative binary operator.
What would the implications of defining fold as
def fold[A1 >: A](op: (z: A1, A1, A1) ⇒ A1): A1
So in this version the initial value is passed as a value to the function instead of being curried in a separate parameter list.
If you're looking to motivate that particular signature of foldLeft, it may be worthwhile to first examine reduceLeft.
// Slightly simplified to remove the supertype constraint
def reduceLeft(f: (A, A) => A): A
reduceLeft squishes the entire collection into a single element and it takes as an argument a function that tells it how to squish each new element in the collection onto what it's got so far.
There's, however, a problem. reduceLeft is partial. In particular if the collection is empty, reduceLeft has nowhere to begin squishing things. So we can make it total, by telling reduceLeft where to begin. So we give reduceLeft an additional parameter.
def reduceLeftTotal(initial: A, f: (A, A) => A): A
Note that if we just glommed initial as another argument to f, we wouldn't fix the partiality of reduceLeft. If this is an empty collection, we still blow up.
// This doesn't get us what we want. Where does the initial `A` come from?
def reduceLeftNotWhatWeWant(f: (A, A, A) => A): A
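To see the partiality concretely in the standard library, where reduceLeft is the partial operation and foldLeft (which we arrive at below) is the total one:
List.empty[Int].reduceLeft(_ + _)   // throws UnsupportedOperationException: empty.reduceLeft
List.empty[Int].foldLeft(0)(_ + _)  // 0: the initial value tells it where to begin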
Okay, now that we've got reduceLeftTotal, there's an immediate new avenue for generalization. Why does the thing that we're squishing all the elements of our collection onto have to have the same type as the elements? The answer is it doesn't!
def generalReduceLeftTotal[B](initial: B, f: (B, A) => B): B
Finally, because type information from previous argument lists (but not from previous arguments in the same list) can be used by Scala's type inference, we can reduce the number of explicit type annotations we need by currying.
// And we're back to foldLeft!
def foldLeft[B](initial: B)(f: (B, A) => B): B
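The inference payoff is easy to see by comparing the real foldLeft with a hypothetical uncurried variant:
List(1, 2, 3).foldLeft(List.empty[Int])((acc, x) => x :: acc)  // List(3, 2, 1); B = List[Int] is inferred from the first list

// With a hypothetical uncurried foldLeft[B](initial: B, f: (B, A) => B), the lambda is
// type-checked before B is pinned down, so its parameters would typically need annotations:
//   xs.foldLeftUncurried(List.empty[Int], (acc: List[Int], x: Int) => x :: acc)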