How to implement memoization in Scala without mutability?

I was recently reading Category Theory for Programmers, and in one of the challenges, Bartosz proposed to write a function called memoize which takes a function as an argument and returns essentially the same function, with the difference that, the first time this new function is called, it stores the result for that argument and then returns the stored result each time it is called with the same argument again.
def memoize[A, B](f: A => B): A => B = ???
The problem is, I can't think of any way to implement this function without resorting to mutability. Moreover, the implementations I have seen use mutable data structures to accomplish the task.
My question is, is there a purely functional way of accomplishing this? Maybe without mutability or by using some functional trick?
Thanks for reading my question and for any future help. Have a nice day!

is there a purely functional way of accomplishing this?
No. Not in the narrowest sense of pure functions and using the given signature.
TLDR: Use mutable collections, it's okay!
Impurity of g
val g = memoize(f)
// state 1
g(a)
// state 2
What would you expect to happen for the call g(a)?
If g(a) memoizes the result, an (internal) state has to change, so the state is different after the call g(a) than before.
As this could be observed from the outside, the call to g has side effects, which makes your program impure.
From the Book you referenced, 2.5 Pure and Dirty Functions:
[...] functions that
always produce the same result given the same input and
have no side effects
are called pure functions.
Is this really a side effect?
Normally, at least in Scala, internal state changes are not considered side effects.
See the definition in the Scala Book
A pure function is a function that depends only on its declared inputs and its internal algorithm to produce its output. It does not read any other values from “the outside world” — the world outside of the function’s scope — and it does not modify any values in the outside world.
The following examples of lazy computations both change their internal states, but are normally still considered purely functional as they always yield the same result and have no side effects apart from internal state:
lazy val x = 1
// state 1: x is not computed
x
// state 2: x is 1
val ll = LazyList.continually(0)
// state 1: ll = LazyList(<not computed>)
ll(0)
// state 2: ll = LazyList(0, <not computed>)
In your case, the equivalent would be something using a private, mutable Map (as in the implementations you may have found), like:
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  (a: A) => cache.getOrElseUpdate(a, f(a))
}
Note that the cache is not public.
So, for a pure function f and without looking at memory consumption, timings, reflection or other evil stuff, you won't be able to tell from the outside whether f was called twice or g cached the result of f.
In this sense, side effects are only things like printing output, writing to public variables, files etc.
Thus, this implementation is considered pure (at least in Scala).
Avoiding mutable collections
If you really want to avoid var and mutable collections, you need to change the signature of your memoize method.
This is because, if g cannot change its internal state, it won't be able to memoize anything new after it has been initialized.
An (inefficient but simple) example would be
def memoizeOneValue[A, B](f: A => B)(a: A): (B, A => B) = {
  val b = f(a)
  val g = (v: A) => if (v == a) b else f(v)
  (b, g)
}
val (b1, g) = memoizeOneValue(f)(a1)
val (b2, h) = memoizeOneValue(g)(a2)
// ...
The result of f(a1) would be cached in g, but nothing else. Then, you could chain this and always get a new function.
If you are interested in a faster version of that, see esse's answer below, which does the same but more efficiently: it uses an immutable map, so each lookup is O(log n) instead of the O(n) walk through the chain of functions built above.

Let's try (note: I have changed the return type of memoize to store the cached data):
import scala.language.existentials

type M[A, B] = A => T forSome { type T <: (B, A => T) }

def memoize[A, B](f: A => B): M[A, B] = {
  import scala.collection.immutable
  def withCache(cache: immutable.Map[A, B]): M[A, B] = a => cache.get(a) match {
    case Some(b) => (b, withCache(cache))
    case None =>
      val b = f(a)
      (b, withCache(cache + (a -> b)))
  }
  withCache(immutable.Map.empty)
}
def f(i: Int): Int = { print(s"Invoke f($i)"); i }
val (i0, m0) = memoize(f)(1) // f is only invoked the first time
val (i1, m1) = m0(1)
val (i2, m2) = m1(1)

Yes, there are purely functional ways to implement polymorphic function memoization. The topic is surprisingly deep and even summons the Yoneda Lemma, which is likely what Bartosz had in mind with this exercise.
The blog post Memoization in Haskell gives a nice introduction by simplifying the problem a bit: instead of looking at arbitrary functions it restricts the problem to functions from the integers.
The following memoize function takes a function of type Int -> a and
returns a memoized version of the same function. The trick is to turn
a function into a value because, in Haskell, functions are not
memoized but values are. memoize converts a function f :: Int -> a
into an infinite list [a] whose nth element contains the value of f n.
Thus each element of the list is evaluated when it is first accessed
and cached automatically by the Haskell runtime thanks to lazy
evaluation.
memoize :: (Int -> a) -> (Int -> a)
memoize f = (map f [0 ..] !!)
Apparently the approach can be generalised to functions of arbitrary domains. The trick is to come up with a way to use the type of the domain as an index into a lazy data structure used for "storing" previous values. This is where the Yoneda Lemma comes in, and where my own understanding of the topic becomes flimsy.
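For what it's worth, the integer-domain trick translates fairly directly to Scala, since LazyList caches each element once it has been computed. Here is a minimal sketch under that assumption (memoizeInt and slowSquare are illustrative names, not from the blog post, and only non-negative arguments are handled):
def memoizeInt[A](f: Int => A): Int => A = {
  // Each element is computed on first access and then cached by the LazyList.
  val cache = LazyList.from(0).map(f)
  n => cache(n) // indexing is O(n), so this is a sketch, not an efficient implementation
}

val slowSquare: Int => Int = n => { Thread.sleep(1000); n * n }
val fastSquare = memoizeInt(slowSquare)
fastSquare(10) // takes a second the first time
fastSquare(10) // answered immediately from the cache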

Related

Why does the List.fill method have two groups of parameters instead of one?

What is the reason behind List.fill being defined with two groups of parameters instead of a single group containing both n and elem?
Current definition
def fill[A](n: Int)(elem: ⇒ A): CC[A]
Proposed definition
def fill[A](n: Int, elem: ⇒ A): CC[A]
Isn't it unnecessary boilerplate? Or is it designed to use the first part (List.fill(n)) as a curried function constructor?
You can write
List.fill(10){ val r = math.random; r * r }
but you cannot write
List.fill(10, val r = math.random; r * r)
and
List.fill(10, {val r = math.random; r * r})
looks somewhat awkward.
In this case it's almost irrelevant, but note that the way the arguments are grouped into argument lists can influence type inference quite significantly, e.g.
def map[X, Y](a: F[X])(f: X => Y): F[Y]
works perfectly fine without any type annotations most of the time, whereas
def map[X, Y](a: F[X], f: X => Y): F[Y]
is quite painful to use. Take a careful look at methods such as ap, ap2 or map2 in this piece of code, for example. There is a good reason why the argument lists are the way they are; you would notice it immediately if they were defined differently.
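To make the inference point concrete, here is a minimal sketch (mapCurried and mapUncurried are illustrative names). In Scala 2, type information flows from one parameter list to the next, but not within a single list, so the uncurried version needs an annotation:
// X is fixed to Int by the first argument list before the lambda is type-checked
def mapCurried[X, Y](xs: List[X])(f: X => Y): List[Y] = xs.map(f)
def mapUncurried[X, Y](xs: List[X], f: X => Y): List[Y] = xs.map(f)

mapCurried(List(1, 2, 3))(x => x + 1)          // compiles
// mapUncurried(List(1, 2, 3), x => x + 1)     // Scala 2 error: missing parameter type
mapUncurried(List(1, 2, 3), (x: Int) => x + 1) // works once the parameter type is annotated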

Functor for sequence computation

I am reading the Scala with Cats book from underscore.io. It says the following about Monad and Functor:
While monads and functors are the most widely used sequencing data
types..
I can see how a Monad is used for sequencing computations, but I cannot see it for Functor at all. Could someone please show how computations are sequenced with functors?
Seq(1, 2, 3).map(_ * 2).map(_.toString).foreach(println)
Here: you have a sequence of operations on a sequence of data.
Every monad is actually a functor, because you could implement map with flatMap and unit/pure/whatever your implementation calls it. So if you agree that monads are "sequencing data types", then you should agree on functors being them too.
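A minimal sketch of that derivation (the Monad trait here is illustrative, not the one from cats):
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  // map comes for free: apply f, then lift the plain result back into F
  def map[A, B](fa: F[A])(f: A => B): F[B] = flatMap(fa)(a => pure(f(a)))
}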
Taken out of context, this statement is less clear than it could be.
A fuller version of the quote is:
While monads and functors are the most widely used sequencing data types
[...], semigroupals and applicatives are the most general.
The goal of this statement is not to erase the difference between functorial and monadic notions of "sequencing", but rather to contrast them with obviously non-sequential operations provided by Semigroupal.
Both Functor and Monad do support (different) kinds of "sequencing".
Given a value x of type F[X] for some functor F and some type X, we can "sequence" pure functions
f: X => Y
g: Y => Z
like this:
x map f map g
You can call this "sequencing", at least "elementwise". The point is that g has to wait until f produces at least a single y of type Y in order to do anything useful. However, this does not mean that all invocations of f must be finished before g is invoked for the first time, e.g. if x is a long list, one could process each element in parallel - that's why I called it "elementwise".
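Concretely, with List as the functor (the values are just for illustration):
val x: List[Int] = List(1, 2, 3)
val f: Int => Int = _ + 1
val g: Int => String = i => i.toString

x map f map g // List("2", "3", "4")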
With monads that represent monadic effects, the notion of "sequencing" is usually taken a bit more seriously. For example, if you are working with a value x of type M[X] = Writer[Vector[String], X], and you have two functions with the "writing"-effect
f: X => M[Y]
g: Y => M[Z]
and then you sequence them like this:
x flatMap f flatMap g
you really want f to finish completely, until g begins to write anything into the Vector[String]-typed log. So here, this is literally just "sequencing", without any fine-print.
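A hand-rolled sketch of such a Writer (deliberately minimal, not the cats API) makes the strict ordering visible in the log:
case class Writer[A](log: Vector[String], value: A) {
  def flatMap[B](fn: A => Writer[B]): Writer[B] = {
    val next = fn(value) // this log is complete before the next one is appended
    Writer(log ++ next.log, next.value)
  }
}

val f: Int => Writer[Int] = x => Writer(Vector(s"f($x)"), x + 1)
val g: Int => Writer[Int] = y => Writer(Vector(s"g($y)"), y * 2)

Writer(Vector("start"), 10) flatMap f flatMap g
// Writer(Vector("start", "f(10)", "g(11)"), 22)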
Now, contrast this with Semigroupal.
Suppose that F is semigroupal, that is, we can always form an F[(A, B)] from an F[A] and an F[B]. Given two functions
f: X => F[A]
g: X => F[B]
we can build a function
(x: X) => product(f(x), g(x))
that returns results of type F[(A, B)]. Note that now f and g can process x completely independently: whatever it is, it is definitely not sequencing.
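For example, a sketch of product for Option (illustrative, not the cats definition):
def product[A, B](fa: Option[A], fb: Option[B]): Option[(A, B)] =
  (fa, fb) match {
    case (Some(a), Some(b)) => Some((a, b))
    case _                  => None
  }

val f: Int => Option[Int] = x => Some(x + 1)
val g: Int => Option[String] = x => Some(x.toString)
val h: Int => Option[(Int, String)] = x => product(f(x), g(x))
h(3) // Some((4, "3")); f and g never see each other's results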
Similarly, for an Applicative F and functions
f: A => X
g: B => Y
c: (X, Y) => Z
and two independent values a: F[A], b: F[B], you can process a and b completely independently with f and g, and then combine the results in the end with c into a single F[Z]:
map2(a, b){ (a, b) => c(f(a), g(b)) }
Again: f and g don't know anything about each other's inputs and outputs, they work completely independently until the very last step c, so this is again not "sequencing".
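A matching sketch of map2 for Option (again illustrative):
def map2[A, B, Z](fa: Option[A], fb: Option[B])(c: (A, B) => Z): Option[Z] =
  (fa, fb) match {
    case (Some(a), Some(b)) => Some(c(a, b))
    case _                  => None
  }

val f: Int => Int = _ + 1
val g: Int => Int = _ * 10
val c: (Int, Int) => Int = _ + _
map2(Option(2), Option(3)) { (a, b) => c(f(a), g(b)) } // Some(33)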
I hope it clarifies the distinction somewhat.

Scala Function.tupled and Function.untupled equivalent for variable arity, or, calling variable arity function with tuple

I was trying to do some stuff last night around accepting and calling a generic function (i.e. the type is known at the call site, but potentially varies across call sites, so the definition should be generic across arities).
For example, suppose I have a function f: (A, B, C, ...) => Z. (There are actually many such fs, which I do not know in advance, and so I cannot fix the types nor count of A, B, C, ..., Z.)
I'm trying to achieve the following.
How do I call f generically with an instance of (A, B, C, ...)? If the signature of f were known in advance, then I could do something involving Function.tupled(f) or equivalent.
How do I define another function or method (for example, some object's apply method) with the same signature as f? That is to say, how do I define a g for which g(a, b, c, ...) type checks if and only if f(a, b, c, ...) type checks? I was looking into Shapeless's HList for this. From what I can tell so far, HList at least solves the problem of representing an argument list of arbitrary arity, and Shapeless would also solve the conversion to and from tuples. However, I'm still not sure I understand how this would fit in with a function of generic arity, if at all.
How do I define another function or method with a related type signature to f? The biggest example that comes to mind now is some h: (A, B, C, ...) => SomeErrorThing[Z] \/ Z.
I remember watching a conference presentation on Shapeless some time ago. While the presenter did not explicitly demonstrate these things, what they did demonstrate (various techniques around abstracting/genericizing tuples vs HLists) would lead me to believe that similar things as the above are possible with the same tools.
Thanks in advance!
Yes, Shapeless can absolutely help you here. Suppose for example that we want to take a function of arbitrary arity and turn it into a function of the same arity but with the return type wrapped in Option (I think this will hit all three points of your question).
To keep things simple I'll just say the Option is always Some. This takes a pretty dense four lines:
import shapeless._, ops.function._

def wrap[F, I <: HList, O](f: F)(implicit
  ftp: FnToProduct.Aux[F, I => O],
  ffp: FnFromProduct[I => Option[O]]
): ffp.Out = ffp(i => Some(ftp(f)(i)))
We can show that it works:
scala> wrap((i: Int) => i + 1)
res0: Int => Option[Int] = <function1>
scala> wrap((i: Int, s: String, t: String) => (s * i) + t)
res1: (Int, String, String) => Option[String] = <function3>
scala> res1(3, "foo", "bar")
res2: Option[String] = Some(foofoofoobar)
Note the appropriate static return types. Now for how it works:
The FnToProduct type class provides evidence that some type F is a FunctionN (for some N) that can be converted into a function from some HList to the original output type. The HList function (a Function1, to be precise) is the Out type member of the instance, or the second type parameter of the FnToProduct.Aux helper.
FnFromProduct does the reverse—it's evidence that some F is a Function1 from an HList to some output type that can be converted into a function of some arity to that output type.
In our wrap method, we use FnToProduct.Aux to constrain the Out of the FnToProduct instance for F in such a way that we can refer to the HList parameter list and the O result type in the type of our FnFromProduct instance. The implementation is then pretty straightforward—we just apply the instances in the appropriate places.
This may all seem very complicated, but once you've worked with this kind of generic programming in Scala for a while it becomes more or less intuitive, and we'd of course be happy to answer more specific questions about your use case.
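As a small follow-up for point 1 of the question (calling f with a tuple of the right shape), the same type classes combine with Generic; applyTupled below is an illustrative name, not part of Shapeless:
import shapeless._, ops.function._

def applyTupled[F, T <: Product, I <: HList, O](f: F, t: T)(implicit
  gen: Generic.Aux[T, I],          // converts the tuple to an HList
  ftp: FnToProduct.Aux[F, I => O]  // converts the FunctionN to an HList => O function
): O = ftp(f)(gen.to(t))

applyTupled((i: Int, s: String) => s * i, (3, "ab")) // "ababab"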

Scala: Why does a right fold on a List cause a stack overflow whereas a left fold does not?

I ran a right fold (:\) on a List of Strings, which caused a stack overflow. But when I changed it to a left fold (/:), it worked fine. Why?
Since the source is open, you can actually see the implementations in LinearSeqOptimized.scala:
override /*TraversableLike*/
def foldLeft[B](z: B)(f: (B, A) => B): B = {
  var acc = z
  var these = this
  while (!these.isEmpty) {
    acc = f(acc, these.head)
    these = these.tail
  }
  acc
}

override /*IterableLike*/
def foldRight[B](z: B)(f: (A, B) => B): B =
  if (this.isEmpty) z
  else f(head, tail.foldRight(z)(f))
What you will notice is that foldLeft uses a while-loop while foldRight uses recursion. The loop strategy is very efficient but recursion requires pushing a frame on the stack for every element in the list (as it traverses to the end). This is why foldLeft works fine but foldRight causes a stack overflow.
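A common workaround is to express a right fold as a left fold over the reversed list, which keeps the stack flat (foldRightSafe is an illustrative name; recent standard-library versions of List.foldRight do something similar internally):
def foldRightSafe[A, B](xs: List[A], z: B)(f: (A, B) => B): B =
  // reversing first lets us reuse the stack-safe foldLeft loop
  xs.reverse.foldLeft(z)((acc, a) => f(a, acc))

foldRightSafe(List(1, 2, 3), List.empty[Int])(_ :: _) // List(1, 2, 3)
foldRightSafe((1 to 1000000).toList, 0L)(_ + _)       // no stack overflow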
Folds are a general family of commonly used functions which traverse recursive data structures and typically reduce them to a single value. On sequences and lists, foldLeft (in a general sense) is tail-recursive and as such can be optimized. foldRight is not tail-recursive and thus cannot be tail-call optimized. It does, however, potentially have the benefit of being applicable to infinite series.
The implementations of foldLeft and foldRight from the Scala library (pirated from dhg's answer above) reflect this optimization, or lack thereof. foldLeft has been manually tail-call optimized using a while loop; foldRight cannot be.
override /*TraversableLike*/
def foldLeft[B](z: B)(f: (B, A) => B): B = {
  var acc = z
  var these = this
  while (!these.isEmpty) {
    acc = f(acc, these.head)
    these = these.tail
  }
  acc
}

override /*IterableLike*/
def foldRight[B](z: B)(f: (A, B) => B): B =
  if (this.isEmpty) z
  else f(head, tail.foldRight(z)(f))
I believe there is a section on folds in Programming in Scala, Second Edition by Odersky, Spoon and Venners which describes how foldLeft on Lists is tail-recursive, and how it may nonetheless be possible to foldRight on infinite lists. Unfortunately, I do not have it on me at the moment to provide page numbers. If not, it isn't very difficult to prove this.
See also the section on folds in Learn You a Haskell for Great Good by Miran Lipovača:
Back when we were dealing with recursion, we noticed a theme
throughout many of the recursive functions that operated on lists.
Usually, we'd have an edge case for the empty list. We'd introduce the
x:xs pattern and then we'd do some action that involves a single
element and the rest of the list. It turns out this is a very common
pattern, so a couple of very useful functions were introduced to
encapsulate it. These functions are called folds.

How can I write f(g(h(x))) in Scala with fewer parentheses?

Expressions like
ls map (_ + 1) sum
are lovely because they are left-to-right and not nested. But if the functions in question are defined outside the class, it is less pretty.
Following an example I tried
import scala.math.sqrt

final class DoublePlus(val self: Double) {
  def hypot(x: Double) = sqrt(self*self + x*x)
}

implicit def doubleToDoublePlus(x: Double) =
  new DoublePlus(x)
which works fine as far as I can tell, other than
A lot of typing for one method
You need to know in advance that you want to use it this way
Is there a trick that will solve those two problems?
You can call andThen on a function object:
(h andThen g andThen f)(x)
You can't call it on methods directly though, so maybe your h needs to become (h _) to transform the method into a partially applied function. The compiler will translate subsequent method names to functions automatically because the andThen method accepts a Function parameter.
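For instance, with three illustrative defs:
def h(x: Double): Double = x + 1
def g(x: Double): Double = x * 2
def f(x: Double): Double = x - 3

((h _) andThen g andThen f)(10.0) // f(g(h(10.0))) = 19.0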
You could also use the pipe operator |> (provided by libraries such as Scalaz) to write something like this:
x |> h |> g |> f
Enriching an existing class/interface with an implicit conversion (which is what you did with doubleToDoublePlus) is all about API design when some classes aren't under your control. I don't recommend doing that lightly just to save a few keystrokes or a few parentheses. So if it's important to be able to type val h = d hypot x, then the extra keystrokes should not be a concern (there may be object-allocation concerns, but that's different).
The title and your example also don't match:
f(g(h(x))) can be rewritten as f _ compose g _ compose h _ apply x if your concern is about parentheses, or f compose g compose h apply x if f, g and h are function objects rather than defs.
But ls map (_ + 1) sum isn't a nested call as you say, so I'm not sure how that relates to the title. And although it's lovely to use, the library/language designers went through a lot of effort to make it easy to use, and under the hood it is not simple (much more complex than your hypot example).
def fgh (n: N) = f(g(h(n)))
val m = fgh (n)
Maybe this; observe how a is provided:
def compose[A, B, C](f: B => C, g: A => B): A => C = (a: A) => f(g(a))
Basically, like the answer above, this combines the desired functions into an intermediate one which you can then use easily, e.g. with map.
Starting in Scala 2.13, the standard library provides the chaining operation pipe, which can be used to convert/pipe a value with a function of interest.
Using multiple pipes we can thus build a pipeline which, as mentioned in the title of your question, minimizes the number of parentheses:
import scala.util.chaining._
x pipe h pipe g pipe f