Fibonacci memoization in Scala with Memo.mutableHashMapMemo - scala

I am trying to implement the Fibonacci function in Scala with memoization.
One example, given in the question "Is there a generic way to memoize in Scala?", uses a case statement:
import scalaz.Memo

lazy val fib: Int => BigInt = Memo.mutableHashMapMemo {
  case 0 => 0
  case 1 => 1
  case n => fib(n - 2) + fib(n - 1)
}
It seems the variable n is implicitly bound to the function's argument, but I get a compilation error if I replace n with _.
Also, what advantage does the lazy keyword provide here? The function seems to work equally well with and without it.
However, I wanted to generalize this to a more generic function definition with appropriate typing:
import scalaz.Memo

def fibonachi(n: Int): Int = Memo.mutableHashMapMemo[Int, Int] {
  var value: Int = 0
  if (n <= 1) { value = n }
  else { value = fibonachi(n - 1) + fibonachi(n - 2) }
  return value
}
but I get the following compilation error
cmd10.sc:4: type mismatch;
 found   : Int => Int
 required: Int
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
                              ^
Compilation Failed
So I am trying to understand the generic way of adding a memoization annotation/wrapper to a Scala def function.

One way to achieve a Fibonacci sequence is via a recursive Stream.
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
An interesting aspect of streams is that, if something holds on to the head of the stream, the calculation results are auto-memoized. So, in this case, because the identifier fib is a val and not a def, the value of fib(n) is calculated only once and simply retrieved thereafter.
However, indexing a Stream is still a linear operation. If you want to memoize that away you could create a simple memo-wrapper.
def memo[A, R](f: A => R): A => R =
  new collection.mutable.WeakHashMap[A, R] {
    override def apply(a: A) = getOrElseUpdate(a, f(a))
  }
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
val mfib = memo(fib)
mfib(99) //res0: BigInt = 218922995834555169026
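On Scala 2.13 and later, Stream is deprecated in favour of LazyList, which auto-memoizes evaluated elements in the same way. A rough equivalent sketch, assuming 2.13+:
// Scala 2.13+: LazyList instead of Stream, same recursive definition
val fibLL: LazyList[BigInt] = BigInt(0) #:: fibLL.scan(BigInt(1))(_ + _)
val mfibLL = memo(fibLL) // reuse the memo wrapper defined above
mfibLL(99)               // BigInt = 218922995834555169026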

The more general question I am trying to ask is how to take a pre-existing def function and add a mutable/immutable memoization annotation/wrapper to it inline.
Unfortunately there is no way to do this in Scala unless you are willing to use a macro annotation, which feels like overkill to me, or some very ugly design.
The conflicting requirements are "def" and "inline". The fundamental reason is that whatever you do inline with the def can't create any new place to store the memoized values (unless you use a macro that can rewrite the code to introduce new vals/vars). You may try to work around this using some global cache, but IMHO that falls under the "ugly design" branch.
Scalaz's Memo is designed to create a val of type Function[K, V], which in Scala is usually written as just K => V, instead of a def. That way the produced val can also contain the storage for the cached values. On the other hand, the difference in call syntax between a def method and a K => V function is minimal, so this works pretty well. Since the Scala compiler knows how to convert a def method into a function, you can wrap a def with Memo, but you can't get a def back out of it. If for some reason you need a def anyway, you'll need another wrapper def.
import scalaz.Memo

object Fib {
  def fib(n: Int): BigInt = n match {
    case 0 => BigInt(0)
    case 1 => BigInt(1)
    case _ => fib(n - 2) + fib(n - 1)
  }

  // "fib _" converts the method into a function. Sometimes the "_" can be omitted
  // and the compiler infers it, but sometimes the compiler needs this explicit hint.
  lazy val fib_mem_val: Int => BigInt = Memo.mutableHashMapMemo(fib _)

  def fib_mem_def(n: Int): BigInt = fib_mem_val(n)
}
println(Fib.fib(5))
println(Fib.fib_mem_val(5))
println(Fib.fib_mem_def(5))
Note how there is no difference in the call syntax of fib, fib_mem_val and fib_mem_def, although fib_mem_val is a value.
Note: beware that some Scalaz Memo implementations are not thread-safe.
As for the lazy part, the benefit is the usual one for any lazy val: the actual value, with its underlying storage, is not created until the first use. If the function is going to be used anyway, I see no benefit in declaring it lazy.
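As a rough script-style illustration of that deferral, reusing Fib.fib from the block above:
import scalaz.Memo

// With lazy, the memoizing function (and its backing HashMap) is not
// built until the first call.
lazy val fibMemo: Int => BigInt = {
  println("building the memo table")
  Memo.mutableHashMapMemo(Fib.fib _)
}

fibMemo(5) // first use: prints "building the memo table", then returns 5
fibMemo(5) // later uses: no message, the same memoized function is reused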

Related

What's the purpose of Currying given other alternatives to return a function in Scala?

I'm currently doing a Scala course and recently I was introduced to different techniques of returning functions.
For example, given this function and method:
val simpleAddFunction = (x: Int, y: Int) => x + y
def simpleAddMethod(x: Int, y: Int) = x + y
I can return another function just by doing this:
val add7_v1 = (x: Int) => simpleAddFunction(x, 7)
val add7_v2 = simpleAddFunction(_: Int, 7)
val add7_v3 = (x: Int) => simpleAddMethod(x, 7)
val add7_v4 = simpleAddMethod(_: Int, 7)
All the add7_x values accomplish the same thing, so what's the purpose of currying then?
Why do I have to write def simpleCurryMethod(x: Int)(y: Int) = x + y if all of the above functions provide similar functionality?
I'm a newbie in functional programming and I don't know many use cases of currying apart from saving time by not having to repeat parameters. So if someone could explain to me the advantages of currying over the previous examples, or of currying in general, I would be very grateful.
That's it, have a nice day!
In Scala 2 there are only four pragmatic reasons for currying METHODS (as far as I can recall, if someone has another valid use case then please let me know).
1. (and probably the principal reason to use it) To drive type inference.
For example, when you want to accept a function or another kind of generic value whose generic type should be inferred from some plain data. For example:
def applyTwice[A](a: A)(f: A => A): A = f(f(a))
applyTwice(10)(_ + 1) // Here the compiler is able to infer that f is Int => Int
In the above example, if I hadn't curried the method I would have needed to call it like applyTwice(10, (x: Int) => x + 1), which is redundant and looks worse (IMHO).
Note: in Scala 3 type inference is improved, so this reason no longer applies there.
2. (and probably the main reason now in Scala 3) For the UX of callers.
If you expect an argument to be a function or a block, it is usually better to put it as a single argument in its own (and last) parameter list, so that it looks nice at the use site. For example:
def iterN(n: Int)(body: => Unit): Unit =
  if (n > 0) {
    body
    iterN(n - 1)(body)
  }
iterN(3) {
  println("Hello")
  // more code
  println("World")
}
Again, if I hadn't curried the previous method, the usage would have looked like this:
iterN(3, {
  println("Hello")
  // more code
  println("World")
})
Which doesn't look that nice :)
3. (in my experience weird but valid) When you know that the majority of users will call the method partially applied to get back a function.
Because val baz = foo(bar) _ looks better than val baz = foo(bar, _), and with the first one you sometimes don't even need the underscore at all, as in data.map(foo(bar)).
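A tiny sketch of that difference (foo, fooFlat and data are made-up names):
def foo(bar: Int)(x: Int): Int = bar + x     // curried method
def fooFlat(bar: Int, x: Int): Int = bar + x // uncurried method

val bar = 10
val baz  = foo(bar) _           // Int => Int, via eta-expansion
val baz2 = fooFlat(bar, _: Int) // Int => Int, via placeholder syntax

val data = List(1, 2, 3)
data.map(foo(bar))              // here the trailing underscore can be dropped entirely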
Note: disclaimer, I personally think that if this is the case, it is better to just return a function right away instead of currying.
Edit
Thanks to @jwvh for pointing out this fourth use case.
4. (useful when using path-dependent types) When you need to refer to previous parameters. For example:
trait Baz // placeholder result type, assumed to be defined elsewhere

trait Foo {
  type I
  def bar(i: I): Baz
}

def run(foo: Foo)(i: foo.I): Baz =
  foo.bar(i)
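To see why the currying matters here, a rough usage sketch (IntFoo is made up for illustration):
object IntFoo extends Foo {
  type I = Int
  def bar(i: Int): Baz = new Baz {}
}

run(IntFoo)(42) // compiles: the second parameter list can refer to foo.I

// An uncurried version is rejected by Scala 2, because a parameter cannot
// depend on another parameter in the same parameter list:
// def runFlat(foo: Foo, i: foo.I): Baz = foo.bar(i) // does not compile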

Why does each new instance of a case class evaluate lazy vals again in Scala?

From what I have understood, Scala treats val definitions as values.
So any two instances of a case class with the same parameters should be equal.
But,
case class A(a: Int) {
  lazy val k = {
    println("k")
    1
  }
}
val a1 = A(5)
println(a1.k)
Output:
k
res1: Int = 1
println(a1.k)
Output:
res2: Int = 1
val a2 = A(5)
println(a2.k)
Output:
k
res3: Int = 1
I was expecting that println(a2.k) would not print k.
Since this is not the behavior I want, how should I implement this so that, across all instances of a case class with the same parameters, a lazy val definition is executed only once? Do I need some memoization technique, or can Scala handle this on its own?
I am very new to Scala and functional programming so please excuse me if you find the question trivial.
Assuming you're not overriding equals or doing something ill-advised like making the constructor args vars, it is the case that two case class instantiations with the same constructor arguments will be equal. However, this does not mean that two case class instantiations with the same constructor arguments will point to the same object in memory:
case class A(a: Int)
A(5) == A(5) // true, same as `A(5).equals(A(5))`
A(5) eq A(5) // false
If you want the constructor to always return the same object in memory, then you'll need to handle this yourself. Maybe use some sort of factory:
case class A private (a: Int) {
  lazy val k = {
    println("k")
    1
  }
}

object A {
  private[this] val cache = collection.mutable.Map[Int, A]()
  def build(a: Int) = {
    cache.getOrElseUpdate(a, A(a))
  }
}
val x = A.build(5)
x.k // prints k
val y = A.build(5)
y.k // doesn't print anything
x == y // true
x eq y // true
If, instead, you don't care about the constructor returning the same object, but you just care about the re-evaluation of k, you can just cache that part:
case class A(a: Int) {
  lazy val k = A.kCache.getOrElseUpdate(a, {
    println("k")
    1
  })
}

object A {
  private[A] val kCache = collection.mutable.Map[Int, Int]()
}
A(5).k // prints k
A(5).k // doesn't print anything
The trivial answer is "this is what the language does according to the spec". That's the correct, but not very satisfying answer. It's more interesting why it does this.
It might be clearer that it has to do this with a different example:
case class A[B](b: B) {
  lazy val k = {
    println(b)
    1
  }
}
When you're constructing two A's, you can't know whether they are equal, because you haven't defined what it means for them to be equal (or what it means for B's to be equal). And you can't statically initialize k either, as it depends on the passed-in B.
Given that this version has to print twice, it would be rather unintuitive if printing twice only happened when k depends on b, but not when it doesn't.
When you ask
how should I implement this so that for all instances of a case class with same parameters, it should only execute a lazy val definition only once
that's a trickier question than it sounds. You make "the same parameters" sound like something that can be known at compile time without further information. It's not; you can only know it at runtime.
And if you only know that at runtime, you have to keep all previously constructed A[B] instances alive. That is a built-in memory leak, so it is no wonder Scala has no built-in way to do this.
If you really want this (and have thought long and hard about the memory leak), construct a Map[B, A[B]], try to get a cached instance from that map, and if it doesn't exist, construct one and put it in the map. A rough sketch follows below.
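A minimal sketch of that map-based cache, built on the generic A[B] above. The Any-keyed map and the cast are just to keep a single shared cache for all B, and nothing is ever evicted, which is exactly the memory leak being warned about:
case class A[B] private (b: B) {
  lazy val k = {
    println(b)
    1
  }
}

object A {
  // shared cache for every B; entries live forever
  private val cache = collection.mutable.Map.empty[Any, A[_]]

  def build[B](b: B): A[B] =
    cache.getOrElseUpdate(b, A(b)).asInstanceOf[A[B]]
}

A.build("hi").k // prints "hi"
A.build("hi").k // same cached instance, prints nothing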
I believe case classes only consider the arguments to their primary constructor (not any auxiliary constructor) to be part of their notion of equality. Consider that when you use a case class in a match statement, unapply only gives you access (by default) to the constructor parameters.
Consider anything in the body of a case class as "extra" or "side effect" stuff. I consider it a good tactic to make case classes as near-empty as possible and put any custom logic in a companion object. E.g.:
case class Foo(a: Int)

object Foo {
  def apply(s: String) = Foo(s.toInt)
}
In addition to dhg's answer, I should say I'm not aware of a functional language that memoizes constructors by default. You should understand that such memoization means all constructed instances have to stick around in memory, which is not always desirable.
Manual caching is not that hard; consider this simple code:
import scala.collection.mutable

class Doubler private (a: Int) {
  lazy val double = {
    println("calculated")
    a * 2
  }
}

object Doubler {
  val cache = mutable.WeakHashMap.empty[Int, Doubler]
  def apply(a: Int): Doubler = cache.getOrElseUpdate(a, new Doubler(a))
}
Doubler(1).double //calculated
Doubler(5).double //calculated
Doubler(1).double //most probably not calculated

why we need Free monad to interpret Action to Future

I wrote an example that uses scalaz.Free to map Action to Future, and it looks pretty cool. However, I am trying to understand its benefits. I hope I can get the answer here. Here is my code snippet.
First, I create an Action type, which is the AST:
trait Action[A]
case class GetNumberAction(x: Int) extends Action[Int]
case class GetStringAction(x: String) extends Action[String]
case class ConvertToIntAction(x: String) extends Action[Int]
case class AddAction(x: Int, y: Int) extends Action[Int]
Then I map Action into ASTMonad using Scalaz's Free and Coyoneda:
type Functor[A] = Coyoneda[Action, A]
type ASTMonad[A] = Free[Functor, A]

def toMonad[A](action: Action[A]): ASTMonad[A] = Free.liftFC[Action, A](action)

object ASTMonad {
  def getNumber(x: Int): ASTMonad[Int] = toMonad(GetNumberAction(x))
  def getString(x: String): ASTMonad[String] = toMonad(GetStringAction(x))
  def converToInt(x: String): ASTMonad[Int] = toMonad(ConvertToIntAction(x))
  def add(x: Int, y: Int): ASTMonad[Int] = toMonad(AddAction(x, y))
}
Finally, I create an interpreter to interpret Action into Future:
object Interpreter extends (Action ~> Future) {
  def apply[A](action: Action[A]): Future[A] = {
    action match {
      case GetNumberAction(x)    => Future(x)
      case GetStringAction(x)    => Future(x)
      case ConvertToIntAction(x) => Future(x.toInt)
      case AddAction(x, y)       => Future(x + y)
    }
  }
}
When I run it, I can use
val chain = for {
  number          <- ASTMonad.getNumber(x)
  str             <- ASTMonad.getString(y)
  convertedNumber <- ASTMonad.converToInt(str)
  total           <- ASTMonad.add(number, convertedNumber)
} yield total

chain.runWith(Interpreter)
It seems to work, and I think I understand the monad and interpreter parts. However, I am wondering what the benefits are compared to using Future's flatMap and map directly:
for {
  number          <- Future(x)
  str             <- Future(y)
  convertedNumber <- Future(str.toInt)
  total           <- Future(number + convertedNumber)
} yield total
The code using Future's flatMap and map looks simpler to me. So, back to my question: do we need the Free monad to interpret the business logic into Future, given that Future already provides flatMap and map? If so, can someone give me a more concrete example so I can see the benefits?
Thanks in advance
A good, motivating example for using the free applicative is command-line parsers; let's call the type CLI[A].
A value of type CLI[A] means you will get an A if you provide command-line arguments (Array[String]) and they can be parsed successfully. This functionality is isomorphic to Array[String] => Either[String, A] when using Either for error handling.
Because you made CLI applicative, you can map and apply (combine) values. You can, for example, create an Int argument count, another Int argument count2, and combine them into a final sum: CLI[Int] that holds their sum.
Suppose you apply the computation directly; this yields something that is "only" equivalent to Array[String] => Either[String, Int]. But if you want to create a help text, you have to know about both initial arguments, and that information is lost.
Free to the rescue. With Free you can retain the computation graph, which you can use to extract all the initial CLI values that are parsed directly from the arguments. You can then later run the computation, which yields the final value of sum, by providing the parse results for all the initial arguments.
Of course you could implement a special CLI that keeps track of all the initial values across computations, but Free lets you avoid this extra work.
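Another concrete payoff, using the AST from the question itself: because chain is just data, the same program can be handed to a different interpreter, for example a pure one for tests. A rough sketch (OptionInterpreter is made up here; the Monad[Option] instance comes from scalaz.std.option):
import scalaz.~>
import scalaz.std.option._ // Monad[Option], needed to run the Free program

// Interprets the same Action AST synchronously in Option, so a failing
// ConvertToIntAction short-circuits instead of failing a Future.
object OptionInterpreter extends (Action ~> Option) {
  def apply[A](action: Action[A]): Option[A] = action match {
    case GetNumberAction(x)    => Some(x)
    case GetStringAction(x)    => Some(x)
    case ConvertToIntAction(x) => scala.util.Try(x.toInt).toOption
    case AddAction(x, y)       => Some(x + y)
  }
}

// The unchanged `chain` from the question can then be run both ways:
//   chain.runWith(Interpreter)       // asynchronous, in Future
//   chain.runWith(OptionInterpreter) // pure, e.g. in a unit test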

Random as instance of scalaz.Monad

This is a follow-up to my previous question. I wrote a monad (for an exercise) that is actually a function generating random values. However, it is not defined as an instance of the type class scalaz.Monad.
Now I looked at the Rng library and noticed that it defines a scalaz.Monad instance for Rng:
implicit val RngMonad: Monad[Rng] =
  new Monad[Rng] {
    def bind[A, B](a: Rng[A])(f: A => Rng[B]) = a flatMap f
    def point[A](a: => A) = insert(a)
  }
So I wonder how exactly users benefit from that. How can we use the fact that Rng is an instance of the type class scalaz.Monad? Can you give any examples?
Here's a simple example. Suppose I want to pick a random size for a range, and then pick a random index inside that range, and then return both the range and the index. The second computation of a random value clearly depends on the first—I need to know the size of the range in order to pick a value in the range.
This kind of thing is specifically what monadic binding is for—it allows you to write the following:
val rangeAndIndex: Rng[(Range, Int)] = for {
  max   <- Rng.positiveint
  index <- Rng.chooseint(0, max)
} yield (0 to max, index)
This wouldn't be possible if we didn't have a Monad instance for Rng.
One of the benefits is that you get a lot of useful methods defined in MonadOps.
For example, Rng.double.iterateUntil(_ < 0.1) will produce only values that are less than 0.1 (values greater than or equal to 0.1 are skipped).
iterateUntil can be used to generate distribution samples with a rejection method.
E.g. this code creates a beta-distribution sample generator:
import com.nicta.rng.Rng
import java.lang.Math
import scalaz.syntax.monad._

object Main extends App {
  def beta(alpha: Double, beta: Double): Rng[Double] = {
    // Purely functional port of Numpy's beta generator: https://github.com/numpy/numpy/blob/31b94e85a99db998bd6156d2b800386973fef3e1/numpy/random/mtrand/distributions.c#L187
    if (alpha <= 1.0 && beta <= 1.0) {
      val rng: Rng[Double] = Rng.double
      val xy: Rng[(Double, Double)] = for {
        u <- rng
        v <- rng
      } yield (Math.pow(u, 1 / alpha), Math.pow(v, 1 / beta))
      xy.iterateUntil { case (x, y) => x + y <= 1.0 }.map { case (x, y) => x / (x + y) }
    } else ???
  }

  val rng: Rng[List[Double]] = beta(0.5, 0.5).fill(10)
  println(rng.run.unsafePerformIO) // Prints 10 samples of the beta distribution
}
Like any interface, declaring an instance of Monad[Rng] does two things: it provides an implementation of the Monad methods under standard names, and it expresses an implicit contract that those method implementations conform to certain laws (in this case, the monad laws).
@Travis gave an example of one thing that's implemented with these interfaces: the Scalaz implementation of map and flatMap. You're right that you could implement these directly; they're "inherited" in Monad (it's actually a little more complex than that).
For an example of a method that you definitely have to implement some Scalaz interface for, how about sequence? This is a method that turns a List (or more generally a Traversable) of contexts into a single context for a List, e.g.:
val randomlyGeneratedNumbers: List[Rng[Int]] = ...
randomlyGeneratedNumbers.sequence: Rng[List[Int]]
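For completeness, a rough sketch of what that snippet assumes: the usual scalaz Traverse syntax imports plus the Applicative[Rng] that comes with the Monad instance (randomlyGeneratedNumbers is made up here):
import com.nicta.rng.Rng
import scalaz.std.list._        // Traverse[List]
import scalaz.syntax.traverse._ // provides .sequence

val randomlyGeneratedNumbers: List[Rng[Int]] = List.fill(3)(Rng.positiveint)
val together: Rng[List[Int]] = randomlyGeneratedNumbers.sequence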
But this actually only uses Applicative[Rng] (which is a superclass), not the full power of Monad. I can't actually think of anything that uses Monad directly (there are a few methods on MonadOps, e.g. untilM, but I've never used any of them in anger). However, you might want a Bind for a "wrapper" case where you have an "inner" Monad "inside" your Rng things, in which case MonadTrans is useful:
val a: Rng[Reader[Config, Int]] = ...
def f: Int => Rng[Reader[Config, Float]] = ...
//would be a pain to manually implement something to combine a and f
val b: ReaderT[Rng, Config, Int] = ...
val g: Int => ReaderT[Rng, Config, Float] = ...
b >>= g
To be totally honest though, Applicative is probably good enough for most Monad use cases, at least the simpler ones.
Of course all of these methods are things you could implement yourself, but like any library the whole point of Scalaz is that they're already implemented, and under standard names, making it easier for other people to understand your code.

Scala's lazy arguments: How do they work?

In the file Parsers.scala (Scala 2.9.1) from the parser combinators library I seem to have come across a lesser known Scala feature called "lazy arguments". Here's an example:
def ~ [U](q: => Parser[U]): Parser[~[T, U]] = {
  lazy val p = q // lazy argument
  (for (a <- this; b <- p) yield new ~(a, b)).named("~")
}
Apparently, there's something going on here with the assignment of the call-by-name argument q to the lazy val p.
So far I have not been able to work out what this does and why it's useful. Can anyone help?
Call-by-name arguments are evaluated every time you use them. Lazy vals are evaluated the first time they are used and then the value is stored. If you ask for it again, you'll get the stored value.
Thus, a pattern like
def foo(x: => Expensive) = {
  lazy val cache = x
  /* do lots of stuff with cache */
}
is the ultimate put-off-work-as-long-as-possible-and-only-do-it-once pattern. If your code path never takes you to need x at all, then it will never get evaluated. If you need it multiple times, it'll only be evaluated once and stored for future use. So you do the expensive call either zero (if possible) or one (if not) times, guaranteed.
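A quick sketch of that zero-or-one guarantee, with a noisy stand-in for Expensive:
def expensive(): Int = { println("computing"); 42 }

def skip(x: => Int): Unit    = { lazy val cache = x } // never touches cache
def useTwice(x: => Int): Int = { lazy val cache = x; cache + cache }

skip(expensive())     // prints nothing: zero evaluations
useTwice(expensive()) // prints "computing" once and returns 84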
The wikipedia article for Scala even answers what the lazy keyword does:
Using the keyword lazy defers the initialization of a value until this value is used.
Additionally, what you have in this code sample with q: => Parser[U] is a call-by-name parameter. A parameter declared this way remains unevaluated until you explicitly evaluate it somewhere in your method.
Here is an example from the scala REPL on how the call-by-name parameters work:
scala> def f(p: => Int, eval : Boolean) = if (eval) println(p)
f: (p: => Int, eval: Boolean)Unit
scala> f(3, true)
3
scala> f(3/0, false)
scala> f(3/0, true)
java.lang.ArithmeticException: / by zero
at $anonfun$1.apply$mcI$sp(<console>:9)
...
As you can see, the 3/0 does not get evaluated at all in the second call. Combining a lazy val with a call-by-name parameter as above results in the following behavior: the parameter q is not evaluated immediately when the method is called. Instead, it is assigned to the lazy val p, which is also not evaluated immediately. Only later, when p is used, does this trigger the evaluation of q. And because p is a val, the parameter q is evaluated only once, and the result is stored in p for later reuse in the for comprehension.
You can easily see in the REPL that multiple evaluations can happen otherwise:
scala> def g(p: => Int) = println(p + p)
g: (p: => Int)Unit
scala> def calc = { println("evaluating") ; 10 }
calc: Int
scala> g(calc)
evaluating
evaluating
20
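For contrast, routing the same by-name parameter through a lazy val (as the ~ combinator does) evaluates it only once; a sketch in the same style of REPL session:
scala> def h(p: => Int) = { lazy val cached = p; println(cached + cached) }
h: (p: => Int)Unit

scala> h(calc)
evaluating
20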