Elegant way to reverse a list using foldRight? - scala

I was reading about fold techniques in Programming in Scala book and came across this snippet:
def reverseLeft[T](xs: List[T]) = (List[T]() /: xs) {
  (ys, y) => y :: ys
}
As you can see, it was done using foldLeft, i.e. the /: operator. Curious how it would look if I did it using :\, I came up with this:
def reverseRight[T](xs: List[T]) = (xs :\ List[T]()) {
  (y, ys) => ys ::: List(y)
}
As I understand it, ::: doesn't seem to be as fast as :: and has a linear cost depending on the size of the operand list. Admittedly, I don't have a background in CS and no prior FP experience. So my questions are:
How do you recognise/distinguish between foldLeft/foldRight in problem approaches?
Is there a better way of doing this without using :::?

Since foldRight on List in the standard library is strict and implemented using linear recursion, you should avoid using it, as a rule. An iterative implementation of foldRight would be as follows:
def foldRight[A, B](f: (A, B) => B, z: B, xs: List[A]) =
  xs.reverse.foldLeft(z)((x, y) => f(y, x))
A recursive implementation of foldLeft could be this:
def foldLeft[A, B](f: (B, A) => B, z: B, xs: List[A]) =
  xs.reverse.foldRight(z)((x, y) => f(y, x))
So you see, if both are strict, then one or the other of foldRight and foldLeft is going to be implemented (conceptually anyway) with reverse. Since the way lists are constructed with :: associates to the right, the straightforward iterative fold is going to be foldLeft, and foldRight is simply "reverse then foldLeft".
Intuitively, you might think that this would be a slow implementation of foldRight, since it folds the list twice. But:
"Twice" is a constant factor anyway, so it's asymptotically equivalent to folding once.
You have to go over the list twice anyway. Once to push computations onto the stack and again to pop them off the stack.
The implementation of foldRight above is faster than the one in the standard library.
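And for the original question, avoiding ::: follows directly once you fold in the cheap direction: fold from the left and prepend with ::. A minimal sketch (the standalone name reverse is mine):
def reverse[T](xs: List[T]): List[T] =
  xs.foldLeft(List.empty[T])((acc, x) => x :: acc)
Each prepend is constant time, so the whole reversal is linear, versus the quadratic cost of building the result with ::: in reverseRight.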

Operations on a List are intentionally not symmetric. The List data structure is a singly-linked list where each node (both data and pointer) is immutable. The idea behind this data structure is that you perform modifications on the front of the list by taking references to internal nodes and adding new nodes that point to them -- different versions of the list will share the same nodes for the end of the list.
The ::: operator, which concatenates two lists, has to create a new copy of the entire left-hand list, because otherwise it would modify other lists that share nodes with it. This is why ::: takes time linear in the length of the list being copied.
Scala has a data structure called a ListBuffer that you can use instead of the ::: operator to make appending to the end of a list faster. Basically, you create a new ListBuffer and it starts with an empty list. The ListBuffer maintains a list completely separate from any other list that the program knows about, so it's safe to modify it by adding on to the end. When you're finished adding on to the end, you call ListBuffer.toList, which releases the list into the world, at which point you can no longer add on to the end without copying it.
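A hedged sketch of that workflow (the values are arbitrary):
import scala.collection.mutable.ListBuffer

val buf = ListBuffer.empty[Int]
buf += 1 // constant-time append: the buffer owns its internal list
buf += 2
buf += 3
val xs: List[Int] = buf.toList // releases the list; appending after this forces a copy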
foldLeft and foldRight share a similar asymmetry. foldRight requires you to walk the entire list to get to the end, keeping track of everywhere you've visited on the way there, so that you can visit the nodes in reverse order. This is usually done recursively, and it can lead to foldRight causing stack overflows on large lists. foldLeft, on the other hand, deals with nodes in the order they appear in the list, so it can forget the ones it has already visited and only needs to know about one node at a time. Though foldLeft is also usually implemented recursively, it can take advantage of an optimization called tail call elimination, in which the compiler transforms the recursive call into a loop, because the function doesn't do anything after returning from the recursive call. Thus foldLeft doesn't overflow the stack, even on very long lists.
EDIT: foldRight in Scala 2.8 is actually implemented by reversing the list and running foldLeft on the reversed list -- so the tail recursion issue is not an issue, and you could choose either one. (You do get into a circularity if you define reverse itself in terms of foldRight: that's harmless if you're defining your own reverse method for the fun of it, but you wouldn't have the foldRight option at all if you were defining Scala's reverse method.)
Thus, you should prefer foldLeft and :: over foldRight and :::.
(In an algorithm that combines foldLeft with ::: or foldRight with ::, you need to decide for yourself which is more important: stack space or running time. Or you should use foldLeft with a ListBuffer.)
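That last combination deserves a sketch. Assuming a hypothetical helper named mapVia, foldLeft with a ListBuffer gives left-to-right traversal, constant stack depth, and constant-time appends:
import scala.collection.mutable.ListBuffer

// Order-preserving map via foldLeft and a ListBuffer, avoiding both
// ::: (quadratic time) and a strict foldRight (deep stack).
def mapVia[A, B](xs: List[A])(f: A => B): List[B] =
  xs.foldLeft(ListBuffer.empty[B])((buf, x) => buf += f(x)).toList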

Related

Scala: Is there any reason to prefer `filter+map` over `collect`?

Can there be any reason to prefer filter+map:
list.filter (i => aCondition(i)).map(i => fun(i))
over collect? :
list.collect(case i if aCondition(i) => fun(i))
The one with collect (single pass) looks faster and cleaner to me. So I would always go for collect.
Most of Scala's collections eagerly apply operations and (unless you're using a macro library that does this for you) will not fuse operations. So filter followed by map will usually create two collections (and even if you use Iterator or somesuch, the intermediate form will be transiently created, albeit only an element at a time), whereas collect will not.
On the other hand, collect uses a partial function to implement the joint test, and partial functions are slower than predicates (A => Boolean) at testing whether something is in the collection.
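Conceptually, collect's per-element work looks like this hedged sketch (the name collectish is mine; the real implementation uses PartialFunction#runWith to fuse the definedness test and the application into a single pattern match):
// Test each element with the partial function, apply it where defined.
def collectish[A, B](xs: List[A])(pf: PartialFunction[A, B]): List[B] =
  xs.foldLeft(List.empty[B]) { (acc, x) =>
    if (pf.isDefinedAt(x)) pf(x) :: acc else acc
  }.reverse

collectish(List(1, 2, 3, 4)) { case i if i % 2 == 0 => i * 2 } // List(4, 8)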
Additionally, there can be cases where it is simply clearer to read one than the other and you don't care about performance or memory usage differences of a factor of 2 or so. In that case, use whichever is clearer. Generally if you already have the functions named, it's clearer to read
xs.filter(p).map(f)
xs.collect{ case x if p(x) => f(x) }
but if you are supplying the closures inline, collect generally looks cleaner
xs.filter(x => foo(x, x)).map(x => bar(x, x))
xs.collect{ case x if foo(x, x) => bar(x, x) }
even though it's not necessarily shorter, because you only refer to the variable once.
Now, how big is the difference in performance? That varies, but if we consider a collection like this:
val v = Vector.tabulate(10000)(i => ((i%100).toString, (i%7).toString))
and you want to pick out the second entry based on filtering the first (so the filter and map operations are both really easy), then we get the following table.
Note: one can get lazy views into collections and gather operations there. You don't always get your original type back, but you can always use to (e.g. .to[Vector]) to get the right collection type. So xs.view.filter(p).map(f).toVector would, because of the view, not create an intermediate collection. That is tested below also. It has also been suggested that one can use xs.flatMap(x => if (p(x)) Some(f(x)) else None) and that this is efficient. That is not so. It's also tested below. And one can avoid the partial function by explicitly creating a builder: val vb = Vector.newBuilder[String]; xs.foreach(x => if (p(x)) vb += f(x)); vb.result, and the results for that are also listed below.
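Spelled out against v above (p and f are my stand-ins for the cheap test and projection), the tested variants look like this:
def p(x: (String, String)): Boolean = x._1 != "0" // stand-in predicate
def f(x: (String, String)): String = x._2         // stand-in projection

val filterMap = v.filter(p).map(f)
val collected = v.collect { case x if p(x) => f(x) }
val viaView   = v.view.filter(p).map(f).toVector
val viaFlat   = v.flatMap(x => if (p(x)) Some(f(x)) else None)
val viaBuild  = {
  val vb = Vector.newBuilder[String]
  v.foreach(x => if (p(x)) vb += f(x))
  vb.result()
}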
In the table below, three conditions have been tested: filter out nothing, filter out half, filter out everything. The times have been normalized to filter/map (100% = same time as filter/map, lower is better). Error bounds are around +- 3%.
Performance of different filter/map alternatives
====================== Vector ========================
filter/map   collect   view filt/map   flatMap   builder
      100%       44%             64%      440%       30%   filter out none
      100%       60%             76%      605%       42%   filter out half
      100%      112%            103%     1300%       74%   filter out all
Thus, filter/map and collect are generally pretty close (with collect winning when you keep a lot), flatMap is far slower under all situations, and creating a builder always wins. (This is true specifically for Vector. Other collections may have somewhat different characteristics, but the trends for most will be similar because the differences in operations are similar.) Views in this test tend to be a win, but they don't always work seamlessly (and they aren't really better than collect except for the empty case).
So, bottom line: prefer filter then map if it aids clarity when speed doesn't matter, or prefer it for speed when you're filtering out almost everything but still want to keep things functional (so don't want to use a builder); and otherwise use collect.
I guess this is rather opinion based, but given the following definitions:
scala> val l = List(1,2,3,4)
l: List[Int] = List(1, 2, 3, 4)
scala> def f(x: Int) = 2*x
f: (x: Int)Int
scala> def p(x: Int) = x%2 == 0
p: (x: Int)Boolean
Which of the two do you find nicer to read:
l.filter(p).map(f)
or
l.collect{ case i if p(i) => f(i) }
(Note that I had to fix your syntax above, as you need the bracket and case to add the if condition).
I personally find the filter+map much nicer to read and understand. It's all a matter of the exact syntax that you use, but given p and f, you don't have to write anonymous functions when using filter or map, while you do need them when using collect.
You also have the possibility to use flatMap:
l.flatMap(i => if(p(i)) Some(f(i)) else None)
Which is likely to be the most efficient among the 3 solutions, but I find it less nice than map and filter.
Overall, it's very difficult to say which one would be faster, as it depends a lot on which optimizations end up being performed by scalac and then the JVM. All 3 should be pretty close, and definitely not a factor in deciding which one to use.
One case where filter/map looks cleaner is when you want to flatten the result of filter
def getList(x: Int) = {
  List.range(x, 0, -1)
}
val xs = List(1,2,3,4)
//Using filter and flatMap
xs.filter(_ % 2 == 0).flatMap(getList)
//Using collect and flatten
xs.collect{ case x if x % 2 == 0 => getList(x)}.flatten

Stack - scala implementation/performance issue

I've been looking at the scala-lang notes on various data structures and their performance characteristics.
I have noticed that immutable.Stack has C (constant) complexity for both appending and prepending, while mutable.Stack has C complexity for prepending and L (linear) complexity for appending. This surprised me a bit.
I take it that "appending" means just push() to the top of the stack. And since the complexities for prepending and appending are different, does it mean that "prepending" is in fact putting something on the bottom of a Stack? Why does it perform better (C for mutable) than appending (L for mutable)? And also, how can I even prepend to the stack? I can't see any suitable method for this in the scaladoc.
EDIT.
As @Łukasz noted in the comments, you can prepend and append to the stack with the +: and :+ operators. The question remains, though: why does prepending work better (faster) than appending to the Stack? Should I add to the bottom instead of pushing to the top?
It looks like there is a mistake in the table, or I am missing something. If you look at the implementation, push for both the mutable and immutable Stack takes constant time, and :+ for both takes linear time, because :+ comes from SeqLike, which does it in linear time; that is very reasonable for a stack as a data structure.
Both the mutable and immutable stacks use an immutable List internally and rely on the :: operation, which is constant time. List's append operation is linear (L), so there is no way Stack can do better.
For immutable stack it is:
def push[B >: A](elem: B): Stack[B] = new Stack(elem :: elems)
And for mutable it is:
def push(elem: A): this.type = { elems = elem :: elems; this }
Also, please note that immutable Stack has been deprecated since 2.11.
P.S. I even checked the latest sources of 2.12, but it seems the code hasn't changed since 2.11.
P.P.S. I couldn't find any insert implementation for Stack, and looking at the table it seems weird that only Stack among the immutable structures can insert data, so I'd guess the L in that column should have been in the append column.
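To tie the +: and :+ operators from the question back to that internal list, a quick sketch (using the deprecated immutable.Stack purely for illustration):
import scala.collection.immutable.Stack

val s  = Stack(1, 2, 3) // 1 is the top; backed by List(1, 2, 3)
val s1 = 0 +: s         // prepend lands on top, like push: constant time
val s2 = s :+ 4         // append goes to the bottom: linear time, copies the list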

Why do we need flatMap (in general)?

I have been looking into FP languages (off and on) for some time and have played with Scala, Haskell, F#, and some others. I like what I see and understand some of the fundamental concepts of FP (with absolutely no background in Category Theory - so don't talk Math, please).
So, given a type M[A] we have map which takes a function A=>B and returns a M[B]. But we also have flatMap which takes a function A=>M[B] and returns a M[B]. We also have flatten which takes a M[M[A]] and returns a M[A].
In addition, many of the sources I have read describe flatMap as map followed by flatten.
So, given that flatMap seems to be equivalent to flatten compose map, what is its purpose? Please don't say it is to support 'for comprehensions' as this question really isn't Scala-specific. And I am less concerned with the syntactic sugar than I am in the concept behind it. The same question arises with Haskell's bind operator (>>=). I believe they both are related to some Category Theory concept but I don't speak that language.
I have watched Brian Beckman's great video Don't Fear the Monad more than once and I think I see that flatMap is the monadic composition operator but I have never really seen it used the way he describes this operator. Does it perform this function? If so, how do I map that concept to flatMap?
BTW, I had a long writeup on this question with lots of listings showing experiments I ran trying to get to the bottom of the meaning of flatMap and then ran into this question which answered some of my questions. Sometimes I hate Scala implicits. They can really muddy the waters. :)
FlatMap, known as "bind" in some other languages, is, as you said yourself, for function composition.
Imagine for a moment that you have some functions like these:
def foo(x: Int): Option[Int] = Some(x + 2)
def bar(x: Int): Option[Int] = Some(x * 3)
The functions work great, calling foo(3) returns Some(5), and calling bar(3) returns Some(9), and we're all happy.
But now you've run into the situation that requires you to do the operation more than once.
foo(3).map(x => foo(x)) // or just foo(3).map(foo) for short
Job done, right?
Except not really. The output of the expression above is Some(Some(7)), not Some(7), and if you now want to chain another map on the end you can't because foo and bar take an Int, and not an Option[Int].
Enter flatMap
foo(3).flatMap(foo)
Will return Some(7), and
foo(3).flatMap(foo).flatMap(bar)
Returns Some(21).
This is great! Using flatMap lets you chain functions of the shape A => M[B] to oblivion (in the previous example A and B are Int, and M is Option).
More technically speaking; flatMap and bind have the signature M[A] => (A => M[B]) => M[B], meaning they take a "wrapped" value, such as Some(3), Right('foo), or List(1,2,3) and shove it through a function that would normally take an unwrapped value, such as the aforementioned foo and bar. It does this by first "unwrapping" the value, and then passing it through the function.
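For Option, that unwrap-then-apply step can be sketched in a few lines (flatMapOpt is my name; the real method is defined on Option itself):
def flatMapOpt[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m match {
  case Some(a) => f(a) // f already returns an Option, so no double wrapping
  case None    => None // failure short-circuits past f entirely
}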
This unwrapping and re-wrapping behavior means that if I were to introduce a third function that doesn't return an Option[Int] and tried to flatMap it to the sequence, it wouldn't work because flatMap expects you to return a monad (in this case an Option)
def baz(x: Int): String = x + " is a number"
foo(3).flatMap(foo).flatMap(bar).flatMap(baz) // <<< ERROR
To get around this, if your function doesn't return a monad, you'd just have to use the regular map function
foo(3).flatMap(foo).flatMap(bar).map(baz)
Which would then return Some("21 is a number")
It's the same reason you provide more than one way to do anything: it's a common enough operation that you may want to wrap it.
You could ask the opposite question: why have map and flatten when you already have flatMap and a way to store a single element inside your collection? That is,
x map f
x filter p
can be replaced by
x flatMap ( xi => x.take(0) :+ f(xi) )
x flatMap ( xi => if (p(xi)) x.take(0) :+ xi else x.take(0) )
so why bother with map and filter?
In fact, there are various minimal sets of operations you need to reconstruct many of the others (flatMap is a good choice because of its flexibility).
Pragmatically, it's better to have the tool you need. Same reason why there are non-adjustable wrenches.
The simplest reason is to compose an output set where each entry in the input set may produce more than one (or zero!) outputs.
For example, consider a program which outputs addresses for people to generate mailers. Most people have one address. Some have two or more. Some people, unfortunately, have none. Flatmap is a generalized algorithm to take a list of these people and return all of the addresses, regardless of how many come from each person.
The zero output case is particularly useful for monads, which often (always?) return exactly zero or one results (think Maybe: zero results if the computation fails, one if it succeeds). In that case you want to perform an operation on "all of the results", which, it just so happens, may be one or many.
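A sketch of that mailer example (the Person type and the data are mine):
case class Person(name: String, addresses: List[String])

val people = List(
  Person("Ann", List("1 Main St")),
  Person("Bob", List("2 Oak Ave", "3 Elm Rd")),
  Person("Carol", Nil) // no address: contributes zero outputs
)

val allAddresses = people.flatMap(_.addresses)
// List("1 Main St", "2 Oak Ave", "3 Elm Rd")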
The "flatMap", or "bind", method, provides an invaluable way to chain together methods that provide their output wrapped in a Monadic construct (like List, Option, or Future). For example, suppose you have two methods that produce a Future of a result (eg. they make long-running calls to databases or web service calls or the like, and should be used asynchronously):
def fn1(input1: A): Future[B] // (for some types A and B)
def fn2(input2: B): Future[C] // (for some types B and C)
How to combine these? With flatMap, we can do this as simply as:
def fn3(a: A): Future[C] = fn1(a).flatMap(b => fn2(b))
In this sense, we have "composed" a function fn3 out of fn1 and fn2 using flatMap, which has the same general structure (and so can be composed in turn with further similar functions).
The map method would give us a not-so-convenient - and not readily chainable - Future[Future[C]]. Certainly we can then use flatten to reduce this, but the flatMap method does it in one call, and can be chained as far as we wish.
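For comparison, a sketch of that two-step route (fn3Alt is my name; it assumes Scala 2.12+, where Future#flatten exists, and an implicit ExecutionContext for map):
import scala.concurrent.{ExecutionContext, Future}

def fn3Alt[A, B, C](a: A)(fn1: A => Future[B], fn2: B => Future[C])
                   (implicit ec: ExecutionContext): Future[C] =
  fn1(a).map(fn2).flatten // same result as fn1(a).flatMap(fn2)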
This is so useful a way of working, in fact, that Scala provides the for-comprehension as essentially a short-cut for this (Haskell, too, provides a short-hand way of writing a chain of bind operations - I'm not a Haskell expert, though, and don't recall the details) - hence the talk you will have come across about for-comprehensions being "de-sugared" into a chain of flatMap calls (along with possible filter calls and a final map call for the yield).
Well, one could argue, you don't need .flatten either. Why not just do something like
import scala.annotation.tailrec

@tailrec
def flatten[T](in: Seq[Seq[T]], out: Seq[T] = Nil): Seq[T] = in match {
  case Nil => out
  case head :: tail => flatten(tail, out ++ head)
}
The same can be said about map:
@tailrec
def map[A, B](in: Seq[A], out: Seq[B] = Nil)(f: A => B): Seq[B] = in match {
  case Nil => out
  case head :: tail => map(tail, out :+ f(head))(f)
}
So, why are .flatten and .map provided by the library? Same reason .flatMap is: convenience.
There is also .collect, which is really just
list.filter(f.isDefinedAt _).map(f)
.reduce is actually nothing more than list.tail.foldLeft(list.head)(f),
.headOption is
list match {
  case Nil => None
  case head :: _ => Some(head)
}
Etc ...

Tail recursion vs head classic recursion

Listening to Scala courses and explanations I often hear: "but in real code we are not using recursion, but tail recursion".
Does it mean that in my real code I should NOT use recursion, but tail recursion, which is very much like looping and does not require that epic phrase "in order to understand recursion you first need to understand recursion"?
In reality, taking into account your stack, you would more likely use loop-like tail recursion.
Am I wrong? Is 'classic' recursion good only for education purposes, to make your brain travel back to the university past?
Or is there a place where we can use it, where the depth of the recursive calls is less than X (where X is your stack-overflow limit)? Or can we start coding with classic recursion and then, fearing the stack will blow one day, apply a couple of refactorings to make it tail recursive?
Question: can you give some real samples where you would use / have used 'classic' head recursion in your real code, which has not (yet) been refactored into a tail-recursive one?
Tail Recursion == Loop
You can take any loop and express it as a tail-recursive call.
Background: in pure FP, everything must result in some value. A while loop in Scala doesn't result in a value, only side effects (e.g. updating some variable). It exists only to support programmers coming from an imperative background. Scala encourages developers to replace while loops with recursion, which always results in some value.
So according to Scala: Recursion is the new iteration.
However, there is a problem with the previous statement: while "regular" recursive code is easier to read, it comes with a performance penalty AND carries an inherent risk of overflowing the stack. Tail-recursive code, on the other hand, will never result in stack overflow (at least in Scala), and its performance will match that of loops (in fact, Scala compiles self-recursive tail calls into plain old iterations).
Going back to the question, there is nothing wrong with sticking to "regular" recursion, unless:
The algorithm recurses deeply on large inputs (risking stack overflow)
Tail recursion brings a noticeable performance gain
There are two basic kinds of recursion:
head recursion
tail recursion
In head recursion, a function makes its recursive call and then performs some more calculations, maybe using the result of the recursive call, for example. In a tail recursive function, all calculations happen first and the recursive call is the last thing that happens.
The importance of this distinction doesn’t jump out at you, but it’s extremely important! Imagine a tail recursive function. It runs. It completes all its computation. As its very last action, it is ready to make its recursive call. What, at this point, is the use of the stack frame? None at all. We don’t need our local variables anymore because we’re done with all computations. We don’t need to know which function we’re in because we’re just going to re-enter the very same function. Scala, in the case of tail recursion, can eliminate the creation of a new stack frame and just re-use the current stack frame. The stack never gets any deeper, no matter how many times the recursive call is made. That’s the voodoo that makes tail recursion special in Scala.
Let's see an example.
def factorial1(n: Int): Int =
  if (n == 0) 1 else n * factorial1(n - 1)

def factorial2(n: Int): Int = {
  def loop(acc: Int, n: Int): Int =
    if (n == 0) acc else loop(acc * n, n - 1)
  loop(1, n)
}
Incidentally, some languages achieve a similar end by converting tail recursion into iteration rather than by manipulating the stack.
This won’t work with head recursion. Do you see why? Imagine a head recursive function. First it does some work, then it makes its recursive call, then it does a little more work. We can’t just re-use the current stack frame when we make that recursive call. We’re going to NEED that stack frame info after the recursive call completes. It has our local variables, including the result (if any) returned by the recursive call.
Here’s a question for you. Is the example function factorial1 head recursive or tail recursive? Well, what does it do? (A) It checks whether its parameter is 0. (B) If so, it returns 1, since the factorial of 0 is 1. (C) If not, it returns n multiplied by the result of a recursive call. The recursive call is the last thing we typed before ending the function. That’s tail recursion, right? Wrong. The recursive call is made, and THEN n is multiplied by the result, and this product is returned. This is actually head recursion (or middle recursion, if you like) because the recursive call is not the very last thing that happens.
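You can have the compiler check this distinction for you with the @tailrec annotation, sketched here against the factorials above:
import scala.annotation.tailrec

// Rejected by the compiler: the multiplication runs after the call returns.
// @tailrec
// def factorial1(n: Int): Int =
//   if (n == 0) 1 else n * factorial1(n - 1) // error: not in tail position

def factorial2(n: Int): Int = {
  @tailrec
  def loop(acc: Int, n: Int): Int =
    if (n == 0) acc else loop(acc * n, n - 1) // accepted: compiled to a loop
  loop(1, n)
}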
The first thing one should look at when developing software is the readability and maintainability of the code. Looking at performance characteristics is mostly premature optimization.
There is no reason not to use recursion when it helps to write high quality code.
The same goes for tail recursion vs. normal loops. Just look at this simple tail-recursive function:
def gcd(a: Int, b: Int) = {
  def loop(a: Int, b: Int): Int =
    if (b == 0) a else loop(b, a % b)
  loop(math.abs(a), math.abs(b))
}
It calculates the greatest common divisor of two numbers. Once you know the algorithm it is clear how it works - writing this with a while-loop wouldn't make it clearer. Instead you would probably introduce a bug on the first try because you forgot to store a new value into one of the variables a or b.
On the other hand, see these two functions:
def goRec(i: Int): Unit = {
  if (i < 5) {
    println(i)
    goRec(i + 1)
  }
}

def goLoop(i: Int): Unit = {
  var j = i
  while (j < 5) {
    println(j)
    j += 1
  }
}
Which one is easier to read? They are more or less equal - all the syntax sugar you gain for tail-recursive functions due to Scala's expression-based nature is gone.
For recursive functions there is another thing that comes into play: lazy evaluation. If your code is lazily evaluated it can be recursive and yet no stack overflow will happen. See this simple function:
def map(f: Int => Int, xs: Stream[Int]): Stream[Int] = (f, xs) match {
  case (_, Stream.Empty) => Stream.Empty
  case (f, x #:: xs) => f(x) #:: map(f, xs)
}
Will it crash for large inputs? I don't think so:
scala> map(_+1, Stream.from(0).takeWhile(_<=1000000)).last
res6: Int = 1000001
Trying the same with Scala's List would kill your program. But because Stream is lazy this is not a problem. In this case you could also write a tail-recursive function, but generally that is not easily possible.
There are many algorithms which will not be clear when they are written iteratively - one example is depth first search of a graph. Do you want to maintain a stack yourself just to save the values you need to go back to? No, you won't, because it is error prone and looks ugly (besides, by any definition of recursion, an iterative depth first search would count as recursion as well: it has to use a stack, just as "normal" recursion does - the stack is merely hidden from the developer and maintained by the compiler).
To come back to the point of premature optimization, I have heard a nice analogy: when you have a problem that can't be solved with Int because your numbers will get large and an overflow is likely, don't just switch to Long, because an overflow is likely there as well.
For recursion this means that there may be cases where you will blow up your stack, but it is more likely that when you switch to a purely heap-based solution you will get an out-of-memory error instead. Better advice is to find a different algorithm that doesn't perform that badly.
As conclusion, try to prefer tail recursion instead of loops or normal recursion because it will for sure not kill your stack. But when you can do better then don't hesitate to do it better.
If you're not dealing with a linear sequence, then trying to write a tail-recursive function to traverse the entire collection is very difficult. In such cases, for the sake of readability/maintainability, you usually just use normal recursion instead.
A common example of this is a traversal of a binary tree data structure. For each node you might need to recur on both the left and right child nodes. If you were to try to write such a function tail-recursively, where first the left node is visited and then the right, you'd need to maintain some sort of auxiliary data structure to track all the remaining right nodes that need to be visited. However, you can achieve the same thing just using the call stack, and it's going to be more readable.
An example of this is the iterator method from Scala's RedBlack tree:
def iterator: Iterator[(A, B)] =
  left.iterator ++ Iterator.single(Pair(key, value)) ++ right.iterator

Why should I avoid using local modifiable variables in Scala?

I'm pretty new to Scala, and before this I mostly used Java. Right now I have warnings all over my code saying that I should "Avoid mutable local variables", and I have a simple question - why?
Suppose I have small problem - determine max int out of four. My first approach was:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  var subMax1 = a
  if (b > a) subMax1 = b
  var subMax2 = c
  if (d > c) subMax2 = d
  if (subMax1 > subMax2) subMax1
  else subMax2
}
After taking into account this warning message I found another solution:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  max(max(a, b), max(c, d))
}

def max(a: Int, b: Int): Int = {
  if (a > b) a
  else b
}
It looks prettier, but what is the ideology behind this?
Whenever I approach a problem I think about it like: "OK, we start from this and then we incrementally change things and get the answer." I understand that the issue is that I'm trying to change some initial state to get an answer, but I don't understand why changing things, at least locally, is bad. How do you iterate over a collection, then, in functional languages like Scala?
For example: suppose we have a list of ints; how do you write a function that returns the sublist of ints which are divisible by 6? I can't think of a solution without a local mutable variable.
In your particular case there is another solution:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  val submax1 = if (a > b) a else b
  val submax2 = if (c > d) c else d
  if (submax1 > submax2) submax1 else submax2
}
Isn't it easier to follow? Of course I am a bit biased, but I tend to think it is. BUT don't follow that rule blindly. If you see that some code might be written more readably and concisely in a mutable style, do it that way -- the great strength of Scala is that you don't need to commit to either the immutable or the mutable approach; you can swing between them (btw the same applies to return keyword usage).
For example: suppose we have a list of ints; how do you write a
function that returns the sublist of ints which are divisible by 6?
I can't think of a solution without a local mutable variable.
It is certainly possible to write such a function using recursion, but, again, if the mutable solution looks good and works well, why not?
It's not so much related to Scala as to the functional programming methodology in general. The idea is the following: if you have constant variables (final in Java), you can use them without any fear that they are going to change. In the same way, you can parallelize your code without worrying about race conditions or thread-unsafe code.
In your example is not so important, however imagine the following example:
val variable = ...
Future { function1(variable) }
Future { function2(variable) }
Using final variables you can be sure that there will not be any problem. Otherwise, you would have to check the main thread and both function1 and function2.
Of course, it's possible to obtain the same result with mutable variables if you never actually change them. But using immutable ones you can be sure that this will be the case.
Edit to answer your edit:
Local mutables are not bad; that's the reason you can use them. However, if you try to think of approaches without them, you can arrive at solutions like the one you posted, which is cleaner and can be parallelized very easily.
How to iterate over collection then in functional languages like Scala?
You can always iterate over an immutable collection, as long as you do not change anything. For example:
val list = Seq(1, 2, 3)
for (n <- list)
  println(n)
With respect to the second thing that you said: you have to stop thinking in a traditional way. In functional programming the usage of Map, Filter, Reduce, etc. is normal; as well as pattern matching and other concepts that are not typical in OOP. For the example you give:
For example: suppose we have a list of ints; how do you write a function that returns the sublist of ints which are divisible by 6?
val list = Seq(1,6,10,12,18,20)
val result = list.filter(_ % 6 == 0)
Firstly you could rewrite your example like this:
def max(first: Int, others: Int*): Int = {
  if (others.isEmpty) first
  else {
    val curMax = Math.max(first, others.head)
    max(curMax, others.tail: _*)
  }
}
This uses varargs and tail recursion to find the largest number. Of course there are many other ways of doing the same thing.
To answer your question - it's a good question and one that I thought about myself when I first started to use Scala. Personally I think the whole immutable/functional programming approach is somewhat overhyped. But for what it's worth, here are the main arguments in favour of it:
Immutable code is easier to read (subjective)
Immutable code is more robust - it's certainly true that changing mutable state can lead to bugs. Take this for example:
for (int i = 0; i < 100; i++) {
  for (int j = 0; j < 100; i++) {
    System.out.println("i is " + i + " and j is " + j);
  }
}
This is an oversimplified example, but it's still easy to miss the bug, and the compiler won't help you
Mutable code is generally not thread safe. Even trivial and seemingly atomic operations are not safe. Take for example i++. This looks like an atomic operation, but it's actually equivalent to:
int i = 0;
int tempI = i + 1;
i = tempI;
Immutable data structures won't allow you to do something like this so you would need to explicitly think about how to handle it. Of course as you point out local variables are generally threadsafe, but there is no guarantee. It's possible to pass a ListBuffer instance variable as a parameter to a method for example
However there are downsides to immutable and functional programming styles:
Performance. It is generally slower in both compilation and runtime. The compiler must enforce the immutability and the JVM must allocate more objects than would be required with mutable data structures. This is especially true of collections.
Most Scala examples show something like val numbers = List(1,2,3), but in the real world hard-coded values are rare. We generally build collections dynamically (from a database query etc.). Since an immutable collection cannot be changed in place, Scala must create a new collection object every time you modify it. If you want to add 1000 elements to a Scala List (immutable), the JVM will need to allocate (and then GC) 1000 objects
Hard to maintain. Functional code can be very hard to read; it's not uncommon to see code like this:
val data = numbers.foreach(_.map(a => doStuff(a).flatMap(somethingElse)).foldLeft("")((a, b) => a + b))
I don't know about you but I find this sort of code really hard to follow!
Hard to debug. Functional code can also be hard to debug. Try putting a breakpoint halfway into my (terrible) example above
My advice would be to use a functional/immutable style where it genuinely makes sense and you and your colleagues feel comfortable doing it. Don't use immutable structures because they're cool or it's "clever". Complex and challenging solutions will get you bonus points at Uni but in the commercial world we want simple solutions to complex problems! :)
Your two main questions:
Why warn against local state changes?
How can you iterate over collections without mutable state?
I'll answer both.
Warnings
The compiler warns against the use of mutable local variables because they are often a cause of error. That doesn't mean this is always the case. However, your sample code is pretty much a classic example of where mutable local state is used entirely unnecessarily, in a way that not only makes it more error prone and less clear but also less efficient.
Your first code example is more inefficient than your second, functional solution. Why potentially make two assignments to submax1 when you only ever need to assign one? You ask which of the two inputs is larger anyway, so why not ask that first and then make one assignment? Why was your first approach to temporarily store partial state only halfway through the process of asking such a simple question?
Your first code example is also inefficient because of unnecessary code duplication. You're repeatedly asking "which is the biggest of two values?" Why write out the code for that 3 times independently? Needlessly repeating code is a known bad habit in OOP every bit as much as FP and for precisely the same reasons. Each time you needlessly repeat code, you open a potential source of error. Adding mutable local state (especially when so unnecessary) only adds to the fragility and to the potential for hard to spot errors, even in short code. You just have to type submax1 instead of submax2 in one place and you may not notice the error for a while.
Your second, FP solution removes the code duplication, dramatically reducing the chance of error, and shows that there was simply no need for mutable local state. It's also, as you yourself say, cleaner and clearer - and better than the alternative solution in om-nom-nom's answer.
(By the way, the idiomatic Scala way to write such a simple function is
def max(a: Int, b: Int) = if (a > b) a else b
which terser style emphasises its simplicity and makes the code less verbose)
Your first solution was inefficient and fragile, but it was your first instinct. The warning caused you to find a better solution. The warning proved its value. Scala was designed to be accessible to Java developers and is taken up by many with a long experience of imperative style and little or no knowledge of FP. Their first instinct is almost always the same as yours. You have demonstrated how that warning can help improve code.
There are cases where using mutable local state can be faster but the advice of Scala experts in general (not just the pure FP true believers) is to prefer immutability and to reach for mutability only where there is a clear case for its use. This is so against the instincts of many developers that the warning is useful even if annoying to experienced Scala devs.
It's funny how often some kind of max function comes up in "new to FP/Scala" questions. The questioner is very often tripping up on errors caused by their use of local state... which both demonstrates the often obtuse addiction to mutable state among some devs and leads me on to your other question.
Functional Iteration over Collections
There are three functional ways to iterate over collections in Scala
For Comprehensions
Explicit Recursion
Folds and other Higher Order Functions
For Comprehensions
Your question:
Suppose we have a list of ints; how do you write a function that returns the sublist of ints which are divisible by 6? I can't think of a solution without a local mutable variable
Answer: assuming xs is a list (or some other sequence) of integers, then
for (x <- xs; if x % 6 == 0) yield x
will give you a sequence (of the same type as xs) containing only those items which are divisible by 6, if any. No mutable state required. Scala just iterates over the sequence for you and returns anything matching your criteria.
If you haven't yet learned the power of for comprehensions (also known as sequence comprehensions) you really should. It's a very expressive and powerful part of Scala syntax. You can even use them with side effects and mutable state if you want (look at the final example in the tutorial I just linked to). That said, there can be unexpected performance penalties and they are overused by some developers.
Explicit Recursion
In the question I linked to at the end of the first section, I give in my answer a very simple, explicitly recursive solution to returning the largest Int from a list.
def max(xs: List[Int]): Option[Int] = xs match {
  case Nil => None
  case List(x: Int) => Some(x)
  case x :: y :: rest => max((if (x > y) x else y) :: rest)
}
I'm not going to explain how the pattern matching and explicit recursion work (read my other answer or this one). I'm just showing you the technique. Most Scala collections can be iterated over recursively, without any need for mutable state. If you need to keep track of what you've been up to along the way, you pass along an accumulator. (In my example code, I stick the accumulator at the front of the list to keep the code smaller but look at the other answers to those questions for more conventional use of accumulators).
But here is a (naive) explicitly recursive way of finding those integers divisible by 6
def divisibleByN(n: Int, xs: List[Int]): List[Int] = xs match {
  case Nil => Nil
  case x :: rest if x % n == 0 => x :: divisibleByN(n, rest)
  case _ :: rest => divisibleByN(n, rest)
}
I call it naive because it isn't tail recursive and so could blow your stack. A safer version can be written using an accumulator list and an inner helper function but I leave that exercise to you. The result will be less pretty code than the naive version, no matter how you try, but the effort is educational.
Recursion is a very important technique to learn. That said, once you have learned to do it, the next important thing to learn is that you can usually avoid using it explicitly yourself...
Folds and other Higher Order Functions
Did you notice how similar my two explicit recursion examples are? That's because most recursions over a list have the same basic structure. If you write a lot of such functions, you'll repeat that structure many times. Which makes it boilerplate; a waste of your time and a potential source of error.
Now, there are any number of sophisticated ways to explain folds but one simple concept is that they take the boilerplate out of recursion. They take care of the recursion and the management of accumulator values for you. All they ask is that you provide a seed value for the accumulator and the function to apply at each iteration.
For example, here is one way to use fold to extract the highest Int from the list xs
xs.tail.foldRight(xs.head) {(a, b) => if (a > b) a else b}
I know you aren't familiar with folds, so this may seem gibberish to you but surely you recognise the lambda (anonymous function) I'm passing in on the right. What I'm doing there is taking the first item in the list (xs.head) and using it as the seed value for the accumulator. Then I'm telling the rest of the list (xs.tail) to iterate over itself, comparing each item in turn to the accumulator value.
This kind of thing is a common case, so the Collections api designers have provided a shorthand version:
xs.reduce {(a, b) => if (a > b) a else b}
(If you look at the source code, you'll see they have implemented it using a fold).
Anything you might want to do iteratively to a Scala collection can be done using a fold. Often, the api designers will have provided a simpler higher-order function which is implemented, under the hood, using a fold. Want to find those divisible-by-six Ints again?
xs.foldRight(Nil: List[Int]) {(x, acc) => if (x % 6 == 0) x :: acc else acc}
That starts with an empty list as the accumulator, iterates over every item, only adding those divisible by 6 to the accumulator. Again, a simpler fold-based HoF has been provided for you:
xs filter { _ % 6 == 0 }
Folds and related higher-order functions are harder to understand than for comprehensions or explicit recursion, but very powerful and expressive (to anybody else who understands them). They eliminate boilerplate, removing a potential source of error. Because they are implemented by the core language developers, they can be more efficient (and that implementation can change, as the language progresses, without breaking your code). Experienced Scala developers use them in preference to for comprehensions or explicit recursion.
tl;dr
Learn For comprehensions
Learn explicit recursion
Don't use them if a higher-order function will do the job.
It is always nicer to use immutable variables, since they make your code easier to read. Writing recursive code can help solve your problem.
def max(x: List[Int]): Int = {
  if (x.isEmpty) 0
  else Math.max(x.head, max(x.tail))
}

val a_list = List(a, b, c, d)
val max_value = max(a_list)