Idiomatic Scala for applying functions in a chain if Option(s) are defined

Is there a pre-existing / Scala-idiomatic / better way of accomplishing this?
def sum(x: Int, y: Int) = x + y
var x = 10
x = applyOrBypass(target=x, optValueToApply=Some(22), sum)
x = applyOrBypass(target=x, optValueToApply=None, sum)
println(x) // will be 32
My applyOrBypass could be defined like this:
def applyOrBypass[A, B](target: A, optValueToApply: Option[B], func: (A, B) => A) = {
  optValueToApply map { valueToApply =>
    func(target, valueToApply)
  } getOrElse {
    target
  }
}
Basically I want to apply operations depending on whether certain Option values are defined or not. If they are not, I should get the pre-existing value. Ideally I would like to chain these operations without having to use a var.
My intuition tells me that folding or reducing would be involved, but I am not sure how it would work. Or maybe there is another approach with monadic-fors...
Any suggestions / hints appreciated!

Scala has a way to do this with for comprehensions (the syntax is similar to Haskell's do notation, if you are familiar with it):
(for (v <- optValueToApply)
  yield func(target, v)).getOrElse(target)
Of course, this is more useful if you have several variables that you want to check the existence of:
(for ( v1 <- optV1
     ; v2 <- optV2
     ; v3 <- optV3
     ) yield func(target, v1, v2, v3)).getOrElse(target)
If you are trying to accumulate a value over a list of options, then I would recommend a fold, so your optional sum would look like this:
val vs = List(Some(1), None, None, Some(2), Some(3))
(target /: vs) ( (x, v) => x + v.getOrElse(0) )
// => 6 + target
You can generalise this, provided your operation func has some identity value, identity:
(target /: vs) ( (x, v) => func(x, v.getOrElse(identity)) )
Mathematically speaking, this condition is that (func, identity) forms a Monoid. But that's by-the-by. The actual effect is that whenever a None is reached, applying func to it and x will always produce x (Nones are ignored, and Some values are unwrapped and applied as normal), which is what you want.
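For instance, a quick sketch (not from the original answer) with multiplication, whose identity element is 1:
def mul(x: Int, y: Int) = x * y
val vs = List(Some(2), None, Some(3))
(target /: vs) ( (x, v) => mul(x, v.getOrElse(1)) )
// => target * 6 (the None contributes the identity, 1, and is effectively skipped)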

What I would do in a case like this is use partially applied functions and identity:
def applyOrBypass[A, B](optValueToApply: Option[B], func: B => A => A): A => A =
  optValueToApply.map(func).getOrElse(identity)
You would apply it like this:
def sum(x: Int)(y: Int) = x + y
var x = 10
x = applyOrBypass(optValueToApply=Some(22), sum)(x)
x = applyOrBypass(optValueToApply=None, sum)(x)
println(x)
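Since the helper now returns an A => A, you can also avoid the var entirely by composing the resulting functions. A minimal sketch, assuming the curried sum above:
val chained: Int => Int =
  applyOrBypass(optValueToApply = Some(22), sum) andThen
    applyOrBypass(optValueToApply = None, sum)
println(chained(10)) // 32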

Yes, you can use fold. If you have multiple optional operands, there are some useful abstractions in the Scalaz library I believe.
var x = 10
x = Some(22).fold(x)(sum(_, x))
x = None .fold(x)(sum(_, x))

If you have multiple functions, it can be done with Scalaz.
There are several ways to do it, but here is one of the most concise.
First, add your imports:
import scalaz._, Scalaz._
Then, create your functions (this way isn't worth it if your functions are always the same, but if they are different, it makes sense)
val s = List(Some(22).map((i: Int) => (j: Int) => sum(i, j)),
             None    .map((i: Int) => (j: Int) => multiply(i, j)))
Finally, apply them all:
(s.flatten.foldMap(Endo(_)))(x)

Related

Scala - access collection members within map or flatMap

Suppose that I use a sequence of various maps and/or flatMaps to generate a sequence of collections. Is it possible to access information about the "current" collection from within any of those methods? For example, without knowing anything specific about the functions used in the previous maps or flatMaps, and without using any intermediate declarations, how can I get the maximum value (or length, or first element, etc.) of the collection upon which the last map acts?
List(1, 2, 3)
.flatMap(x => f(x) /* some unknown function */)
.map(x => x + ??? /* what is the max element of the collection? */)
Edit for clarification:
In the example, I'm not looking for the max (or whatever) of the initial List. I'm looking for the max of the collection after the flatMap has been applied.
By "without using any intermediate declarations" I mean that I do not want to use any temporary collections en route to the final result. So, the example by Steve Waldman below, while giving the desired result, is not what I am seeking. (I include this condition is mostly for aesthetic reasons.)
Edit for clarification, part 2:
The ideal solution would be some magic keyword or syntactic sugar that lets me reference the current collection:
List(1, 2, 3)
.flatMap(x => f(x))
.map(x => x + theCurrentList.max)
I'm prepared to accept the fact, however, that this simply is not possible.
Maybe just define the list as a val, so you can name it? I don't know of any facility built into map(...) or flatMap(...) that would help.
val myList = List(1, 2, 3)
myList
.flatMap(x => f(x) /* some unknown function */)
.map(x => x + myList.max /* what is the max element of the List? */)
Update: By this approach at least, if you have multiple transformations and want to see the transformed version, you'd have to name that. You could get away with
val myList = List(1, 2, 3).flatMap(x => f(x) /* some unknown function */)
myList.map(x => x + myList.max /* what is the max element of the List? */)
Or, if there will be multiple transformations, get in the habit of naming the stages.
val rawList = List(1, 2, 3)
val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified
Update 2: Watch it work in the REPL even with heterogeneous types:
scala> def f( x : Int ) : Vector[Double] = Vector(x * math.random, x * math.random )
f: (x: Int)Vector[Double]
scala> val rawList = List(1, 2, 3)
rawList: List[Int] = List(1, 2, 3)
scala> val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
smordified: List[Double] = List(0.40730853571901315, 0.15151641399798665, 1.5305929709857609, 0.35211231420067435, 0.644241939254793, 0.15530230501048903)
scala> val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
scala> maxified
res3: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
It is possible, but not pretty, and not likely something you want if you are doing it for "aesthetic reasons."
import scala.math.max
def f(x: Int): Seq[Int] = ???
List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), List[Int]())) {
    case (x, (xs, Nil))       => ((x :: xs), List.fill(xs.size + 1)(x))
    case (x, (xs, xMax :: _)) => ((x :: xs), List.fill(xs.size + 1)(max(x, xMax)))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }
// Or alternately, a slightly more efficient version using Streams.
List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), Stream[Int]())) {
    case (x, (xs, Stream())) =>
      ((x :: xs), Stream.continually(x))
    case (x, (xs, curXMax #:: _)) =>
      val newXMax = max(x, curXMax)
      ((x :: xs), Stream.continually(newXMax))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }
Seriously though, I just took this on to see if I could do it. While the code didn't turn out as bad as I expected, I still don't think it's particularly readable. I'd discourage using this over something similar to Steve Waldman's answer. Sometimes, it's simply better to just introduce a val, rather than being dogmatic about it.
You could define a mapWithSelf (resp. flatMapWithSelf) operation along these lines and add it as an implicit enrichment to the collection. For List it might look like:
// Scala 2.13 APIs
object Enrichments {
  implicit class WithSelfOps[A](val lst: List[A]) extends AnyVal {
    def mapWithSelf[B](f: (A, List[A]) => B): List[B] =
      lst.map(f(_, lst))
    def flatMapWithSelf[B](f: (A, List[A]) => IterableOnce[B]): List[B] =
      lst.flatMap(f(_, lst))
  }
}
The enrichment basically fixes the value of the collection before the operation and threads it through. It should be possible to generify this (at least for the strict collections), though it would look a little different in 2.12 vs. 2.13+.
Usage would look like
import Enrichments._
val someF: Int => IterableOnce[Int] = ???
List(1, 2, 3)
  .flatMap(someF)
  .mapWithSelf { (x, lst) =>
    x + lst.max
  }
So at the usage site, it's aesthetically pleasant. Note that if you're computing something which traverses the list, you'll be traversing the list every time (leading to a quadratic runtime). You can get around that with some mutability or by just saving the intermediate list after the flatMap.
One somewhat-simple way of referencing prior output within the current map/collect operation is to use a named reference outside the map, then reference it from within the map block:
var prevOutput = ... // starting value of whatever is referenced within the map
myValues.map { elem =>
  prevOutput = ... // expression that uses `elem` and references the prior `prevOutput`
  prevOutput       // return the value computed above for the map to collect
}
This draws attention to the fact that we're referencing prior elements while building the new sequence.
This would be more messy, though, if you wanted to reference arbitrarily previous values, not just the previous one.
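A concrete sketch of that pattern (hypothetical values, not from the original answer), keeping a running maximum that the map body can read and update:
var runningMax = Int.MinValue
val result = List(3, 1, 4, 1, 5).map { x =>
  runningMax = math.max(runningMax, x)
  x + runningMax // each element plus the maximum seen so far
}
// result == List(6, 4, 8, 5, 10)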

Partially applied/curried function vs overloaded function

Whilst I understand what a partially applied/curried function is, I still don't fully understand why I would use such a function vs simply overloading a function. I.e. given:
def add(a: Int, b: Int): Int = a + b
val addV = (a: Int, b: Int) => a + b
What is the practical difference between
def addOne(b: Int): Int = add(1, b)
and
def addOnePA = add(1, _:Int)
// or currying
val addOneC = addV.curried(1)
Please note I am NOT asking about currying vs partially applied functions as this has been asked before and I have read the answers. I am asking about currying/partially applied functions VS overloaded functions
The difference in your example is that the overloaded function has the hardcoded value 1 for the first argument to add, i.e. it is set at compile time, while partially applied or curried functions are meant to capture their arguments dynamically, i.e. at run time. Otherwise, in your particular example, because you are hardcoding 1 in both cases, it's pretty much the same thing.
You would use a partially applied/curried function when you pass it through different contexts, and it captures/fills in arguments dynamically until it's completely ready to be evaluated. In FP this is important because much of the time you don't pass values around, but rather functions. It allows for higher composability and code reusability.
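A hypothetical sketch of that difference: the 1 in addOne is fixed when the code is written, whereas partial application can capture a value that is only known at run time.
def add(a: Int, b: Int): Int = a + b
def adderFor(n: Int): Int => Int = add(n, _: Int) // n is supplied dynamically
val adders = List(1, 10, 100).map(adderFor)       // three different "adders", built at run time
adders.map(f => f(5))                             // List(6, 15, 105)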
There are a couple of reasons why you might prefer partially applied functions. The most obvious and perhaps superficial one is that you don't have to write out intermediate functions such as addOnePA.
List(1, 2, 3, 4) map (_ + 3) // List(4, 5, 6, 7)
is nicer than
def add3(x: Int): Int = x + 3
List(1, 2, 3, 4) map add3
Even the anonymous function approach (which is what the underscore expands to under the compiler) feels a tiny bit clunky in comparison.
List(1, 2, 3, 4) map (x => x + 3)
Less superficially, partial application comes in handy when you're truly passing around functions as first-class values.
val fs = List[(Int, Int) => Int](_ + _, _ * _, _ / _)
val on3 = fs map (f => f(_, 3)) // partial application
val allTogether = on3.foldLeft{identity[Int] _}{_ compose _}
allTogether(6) // (6 / 3) * 3 + 3 = 9
Imagine if I hadn't told you what the functions in fs were. The trick of coming up with named function equivalents instead of partial application becomes harder to use.
As for currying, curried functions often let you naturally express transformations of functions that produce other functions (rather than a higher-order function that simply produces a non-function value at the end), which might otherwise be less clear.
For example,
def integrate(f: Double => Double, delta: Double = 0.01)(x: Double): Double = {
  val domain = Range.Double(0.0, x, delta)
  domain.foldLeft(0.0){ case (acc, a) => delta * f(a) + acc }
}
can be thought of and used in the way that you actually learned integration in calculus, namely as a transformation of a function that produces another function.
def square(x: Double): Double = x * x
// Ignoring issues of numerical stability for the moment...
// The underscore is really just a wart that Scala requires to bind it to a val
val cubic = integrate(square) _
val quartic = integrate(cubic) _
val quintic = integrate(quartic) _
// Not *utterly* horrible for a two line numerical integration function
cubic(1) // 0.32835000000000014
quartic(1) // 0.0800415
quintic(1) // 0.015449626499999999
Currying also alleviates a few of the problems around fixed function arity.
implicit class LiftedApply[A, B](fOpt: Option[A => B]){
  def ap(xOpt: Option[A]): Option[B] = for {
    f <- fOpt
    x <- xOpt
  } yield f(x)
}
def not(x: Boolean): Boolean = !x
def and(x: Boolean)(y: Boolean): Boolean = x && y
def and3(x: Boolean)(y: Boolean)(z: Boolean): Boolean = x && y && z
Some(not _) ap Some(false) // true
Some(and _) ap Some(true) ap Some(true) // true
Some(and3 _) ap Some(true) ap Some(true) ap Some(true) // true
By having curried functions, we've been able to "lift" a function to work on Option for as many arguments as we need. If our logic functions had not been curried, then we would have had to have separate functions to lift A => B to Option[A] => Option[B], (A, B) => C to (Option[A], Option[B]) => Option[C], (A, B, C) => D to (Option[A], Option[B], Option[C]) => Option[D] and so on for all the arities we cared about.
Currying also has some other miscellaneous benefits when it comes to type inference and is required if you have both implicit and non-implicit arguments for a method.
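For the implicit/non-implicit point, a minimal sketch (assumed names, not from the original answer): the implicit parameter has to live in its own, curried parameter list.
def maxOf[A](a: A, b: A)(implicit ord: Ordering[A]): A = ord.max(a, b)
maxOf(3, 7)     // 7 -- the Ordering[Int] is filled in implicitly
maxOf("a", "z") // "z"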
Finally, the answers to this question list out some more times you might want currying.

Correct way to work with two instances of Option together

When I have one Option[T] instance it is quite easy to perform any operation on T using monadic operations such as map() and flatMap(). This way I don't have to do checks to see whether it is defined or empty, and chain operations together to ultimately get an Option[R] for the result R.
My difficulty is whether there is a similar elegant way to perform functions on two Option[T] instances.
Let's take a simple example where I have two vals, x and y, of type Option[Int]. I want to get the maximum of them if both are defined, the one that is defined if only one is defined, and None if neither is defined.
How would one write this elegantly without involving lots of isDefined checks inside the map() of the first Option?
You can use something like this:
def optMax(op1: Option[Int], op2: Option[Int]) = op1 ++ op2 match {
  case Nil  => None
  case list => Some(list.max)
}
Or a much better one:
def f(vars: Option[Int]*) = (for( vs <- vars) yield vs).max
@jwvh, thanks for a good improvement:
def f(vars: Option[Int]*) = vars.max
Usually, you'll want to do something if both values are defined.
In that case, you could use a for-comprehension:
val aOpt: Option[Int] = getIntOpt
val bOpt: Option[Int] = getIntOpt
val maxOpt: Option[Int] =
  for {
    a <- aOpt
    b <- bOpt
  } yield max(a, b)
Now, the problem you described is not as common. You want to do something if both values are defined, but you also want to retrieve the value of an option if only one of them is defined.
I would just use the for-comprehension above, and then chain two calls to orElse to provide alternative values if maxOpt turns out to be None.
maxOpt orElse aOpt orElse bOpt
orElse's signature:
def orElse[B >: A](alternative: ⇒ Option[B]): Option[B]
Here's another fwiw:
import scala.util.Try
def maxOpt (a:Option[Int]*)= Try(a.flatten.max).toOption
It works with n arguments (including zero arguments).
Pattern matching would allow something easy to grasp, but that might not be the most elegant way:
def maxOpt[T](optA: Option[T], optB: Option[T])(implicit f: (T, T) => T): Option[T] = (optA, optB) match {
  case (Some(a), Some(b)) => Some(f(a, b))
  case (None, Some(b))    => Some(b)
  case (Some(a), None)    => Some(a)
  case (None, None)       => None
}
You end up with something like:
scala> maxOpt(Some(1), None)(Math.max)
res2: Option[Int] = Some(1)
Once you have that building block, you can use it inside for-comprehensions or other monadic operations.
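For example (a small sketch reusing maxOpt from above):
for {
  m <- maxOpt(Some(1), Some(5))(Math.max)
} yield m * 2 // Some(10)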
To get maxOpt, you can also use an applicative, which in Scalaz would look like (aOpt |#| bOpt) { max(_, _) }, and then chain orElses as @dcastro suggested.
I assume you expect Some[Int]|None as a result, not Int|None (otherwise return type has to be Any):
def maxOption(opts: Option[Int]*) = {
  val flattened = opts.flatten
  flattened.headOption.map { _ => flattened.max }
}
Actually, Scala already gives you this ability more or less directly.
scala> import Ordering.Implicits._
import Ordering.Implicits._
scala> val (a,b,n:Option[Int]) = (Option(4), Option(9), None)
a: Option[Int] = Some(4)
b: Option[Int] = Some(9)
n: Option[Int] = None
scala> a max b
res60: Option[Int] = Some(9)
scala> a max n
res61: Option[Int] = Some(4)
scala> n max b
res62: Option[Int] = Some(9)
scala> n max n
res63: Option[Int] = None
A Haskell-ish take on this question is to observe that the following operations:
max, min :: Ord a => a -> a -> a
max a b = if a < b then b else a
min a b = if a < b then a else b
...are associative:
max a (max b c) == max (max a b) c
min a (min b c) == min (min a b) c
As such, any type Ord a => a together with either of these operations is a semigroup, a concept for which reusable abstractions can be built.
And you're dealing with Maybe (Haskell for "option"), which adds a generic "neutral" element to the base a type (you want max Nothing x == x to hold as a law). This takes you into monoids, which are a subclass of semigroups.
The Haskell semigroups library provides a Semigroup type class and two wrapper types, Max and Min, that generically implement the corresponding behaviors.
Since we're dealing with Maybe, in terms of that library the type that captures the semantics you want is Option (Max a)—a monoid that has the same binary operation as the Max semigroup, and uses Nothing as the identity element. So then the function simply becomes:
maxOpt :: Ord a => Option (Max a) -> Option (Max a) -> Option (Max a)
maxOpt a b = a <> b
...which since it's just the <> operator for Option (Max a) is not worth writing. You also gain all the other utility functions and classes that work on Semigroup and Monoid, so for example to find the maximum element of a [Option (Max a)] you'd just use the mconcat function.
The scalaz library comes with a Semigroup and a Monoid trait, as well as Max, Min, MaxVal and MinVal tags that implement those traits, so in fact the stuff that I've demonstrated here in Haskell exists in scalaz as well.
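For reference, here is a dependency-free Scala sketch of the same "max, with None as the identity" monoid (assumed names, not Scalaz's actual API):
def maxOpt[A](a: Option[A], b: Option[A])(implicit ord: Ordering[A]): Option[A] =
  (a.toList ++ b.toList).reduceOption(ord.max)
maxOpt(Option(4), None)      // Some(4)
maxOpt(Option(4), Option(9)) // Some(9)
maxOpt[Int](None, None)      // None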

Refactoring a small Scala function

I have this function to compute the distance between two n-dimensional points using Pythagoras' theorem.
def computeDistance(neighbour: Point) = math.sqrt(coordinates.zip(neighbour.coordinates).map {
  case (c1: Int, c2: Int) => math.pow(c1 - c2, 2)
}.sum)
The Point class (simplified) looks like:
class Point(val coordinates: List[Int])
I'm struggling to refactor the method so it's a little easier to read, can anybody help please?
Here's another way that makes the following three assumptions:
The length of the list is the number of dimensions for the point
Each List is correctly ordered, i.e. List(x, y) or List(x, y, z). We do not know how to handle List(x, z, y)
All lists are of equal length
def computeDistance(other: Point): Double = sqrt(
  coordinates.zip(other.coordinates)
    .flatMap(i => List(pow(i._2 - i._1, 2)))
    .fold(0.0)(_ + _)
)
The obvious disadvantage here is that we don't have any safety around list length. The quick fix for this is to simply have the function return an Option[Double] like so:
def computeDistance(other: Point): Option[Double] = {
  if (other.coordinates.length != coordinates.length) {
    return None
  }
  Some(sqrt(coordinates.zip(other.coordinates)
    .flatMap(i => List(pow(i._2 - i._1, 2)))
    .fold(0.0)(_ + _)
  ))
}
I'd be curious if there is a type safe way to ensure equal list length.
EDIT
It was politely pointed out to me that flatMap(x => List(foo(x))) is equivalent to map(foo), which I forgot to refactor when I was originally playing with this. Here's a slightly cleaner version with map instead of flatMap:
def computeDistance(other: Point): Double = sqrt(
  coordinates.zip(other.coordinates)
    .map(i => pow(i._2 - i._1, 2))
    .fold(0.0)(_ + _)
)
Most of your problem is that you're trying to do math with really long variable names. It's almost always painful. There's a reason why mathematicians use single letters. And assign temporary variables.
Try this:
import math._
class Point(val coordinates: List[Int]) {
  def c = coordinates
  def d(p: Point) = {
    val delta = for ((a, b) <- c zip p.c) yield pow(a - b, 2)
    sqrt(delta.sum)
  }
}
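For instance (a quick check, assuming the class above):
val p = new Point(List(1, 2, 3))
val q = new Point(List(2, 3, 4))
p d q // sqrt(3.0) = 1.7320508075688772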
Consider type aliases and case classes, like this,
type Coord = List[Int]

case class Point(c: Coord) {
  def distTo(p: Point) = {
    val z  = (c zip p.c).par
    val pw = z.aggregate(0.0)( (a, v) => a + math.pow(v._1 - v._2, 2), _ + _ )
    math.sqrt(pw)
  }
}
so that for any two points, for instance,
val p = Point( (1 to 5).toList )
val q = Point( (2 to 6).toList )
we have that
p distTo q
res: Double = 2.23606797749979
Note method distTo uses aggregate on a parallelised collection of tuples, and combines the partial results by the last argument (summation). For high dimensional points this may prove more efficient than the sequential counterpart.
For simplicity of use, consider also implicit classes, as suggested in a comment above,
implicit class RichPoint(val c: Coord) extends AnyVal {
  def distTo(d: Coord) = Point(c) distTo Point(d)
}
Hence
List(1,2,3,4,5) distTo List(2,3,4,5,6)
res: Double = 2.23606797749979

Scala by Example - problems with currying

Not sure how to properly formulate the question, but there is a problem with currying in the merge sort example from the Scala by Example book on page 69. The function is defined as follows:
def msort[A](less: (A, A) => Boolean)(xs: List[A]): List[A] = {
  def merge(xs1: List[A], xs2: List[A]): List[A] =
    if (xs1.isEmpty) xs2
    else if (xs2.isEmpty) xs1
    else if (less(xs1.head, xs2.head)) xs1.head :: merge(xs1.tail, xs2)
    else xs2.head :: merge(xs1, xs2.tail)
  val n = xs.length / 2
  if (n == 0) xs
  else merge(msort(less)(xs take n), msort(less)(xs drop n))
}
and then there is an example of how to create other functions from it by currying:
val intSort = msort((x : Int, y : Int) => x < y)
val reverseSort = msort((x:Int, y:Int) => x > y)
however these two lines give me errors about insufficient number of arguments. And if I do like this:
val intSort = msort((x : Int, y : Int) => x < y)(List(1, 2, 4))
val reverseSort = msort((x:Int, y:Int) => x > y)(List(4, 3, 2))
it WILL work. Why? Can someone explain? Looks like the book is really dated, since this is not the first such inconsistency in its examples. Could anyone point to something more current to read? (Preferably a free e-book.)
My compiler (2.9.1) agrees, there seems to be an error here, but the compiler does tell you what to do:
error: missing arguments for method msort in object $iw;
follow this method with `_' if you want to treat it as a partially applied function
So, this works:
val intSort = msort((x : Int, y : Int) => x < y) _
Since the type of intSort is inferred, the compiler doesn't otherwise seem to know whether you intended to partially apply, or whether you missed arguments.
The _ can be omitted when the compiler can infer from the expected type that a partially applied function is what is intended. So this works too:
val intSort: List[Int] => List[Int] = msort((x: Int, y: Int) => x < y)
That's obviously more verbose, but more often you will take advantage of this without any extra boilerplate, for example if msort((x: Int, y: Int) => x < y) were the argument to a function where the parameter type is already known to be List[Int] => List[Int].
Edit: Page 181 of the current edition of the Scala Language Specification mentions a tightening of rules for implicit conversions of partially applied methods to functions since Scala 2.0. There is an example of invalid code very similar to the one in Scala by Example, and it is described as "previously legal code".