Is recursion in Scala very necessary?

In the Coursera Scala tutorial, most examples are using top-down iterations. Partially, as I can see, iterations are used to avoid for/while loops. I'm from C++ and feel a little confused about this.
Is iteration chosen over for/while loops? Is it practical in production? Any risk of stack overflow? How about efficiency? How about bottom-up dynamic programming (especially when they are not tail recursions)?
Also, should I use fewer "if" conditions and instead use more "case" and subclasses?

Truly high-quality Scala will use very little iteration and only slightly more recursion. What would be done with looping in lower-level imperative languages is usually best done with higher-order combinators, map and flatMap most especially, but also filter, zip, fold, foreach, reduce, collect, partition, scan, groupBy, and a good few others. Iteration is best reserved for performance-critical sections, and recursion for some deep edge cases where the higher-order combinators don't quite fit (which usually aren't tail recursive, fwiw). In three years of coding Scala in production systems, I used iteration once, recursion twice, and map about five times per day.
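For instance, an imperative accumulation loop usually collapses into a combinator pipeline. A small sketch of my own (not from the course), computing the sum of the squares of the positive elements:
val xs = List(3, -1, 4, -1, 5, 9)

// Imperative style: a mutable accumulator and an explicit loop.
var acc = 0
for (x <- xs) if (x > 0) acc += x * x

// Combinator style: no mutation, and the intent is carried by the names.
val sumOfSquares = xs.filter(_ > 0).map(x => x * x).sum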

Hmm, several questions in one.
Necessity of Recursion
Recursion is not necessary, but it can sometimes provide a very elegant solution.
If the solution is tail recursive and the compiler supports tail call optimisation, then the solution can even be efficient.
As has been well said already, Scala has many combinator functions which can be used to perform the same tasks more expressively and efficiently.
One classic example is writing a function to return the nth Fibonacci number. Here's a naive recursive implementation:
def fib(n: Long): Long = n match {
  case 0 | 1 => n
  case _ => fib(n - 2) + fib(n - 1)
}
Now, this is inefficient (definitely not tail recursive) but it is very obvious how its structure relates to the Fibonacci sequence. We can make it properly tail recursive, though:
def fib(n: Long): Long = {
  @annotation.tailrec
  def fibloop(current: Long, next: Long, iteration: Long): Long = {
    if (n == iteration)
      current
    else
      fibloop(next, current + next, iteration + 1)
  }
  fibloop(0, 1, 0)
}
That could have been written more tersely, but it is an efficient recursive implementation. That said, it is not as pretty as the first and its structure is less clearly related to the original problem.
Finally, stolen shamelessly from elsewhere on this site is Luigi Plinge's streams-based implementation:
val fibs: Stream[Int] = 0 #:: fibs.scanLeft(1)(_ + _)
Very terse, efficient, elegant and (if you understand streams and lazy evaluation) very expressive. It is also, in fact, recursive; the definition of fibs refers to itself, but in a lazily-evaluated context. You certainly have to be able to think recursively to come up with this kind of solution.
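If you want to convince yourself it works, the stream can be sampled directly (Stream has since been superseded by LazyList in newer Scala versions, but the idea is identical):
val fibs: Stream[Int] = 0 #:: fibs.scanLeft(1)(_ + _)
fibs.take(10).toList  // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)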
Iteration compared to For/While loops
I'm assuming you mean the traditional C-Style for, here.
Recursive solutions can often be preferable to while loops because C/C++/Java-style while loops do not return a value and require side effects to achieve anything (this is also true for C-Style for and Java-style foreach). Frankly, I often wish Scala had never implemented while (or had implemented it as syntactic sugar for something like Scheme's named let), because it allows classically-trained Java developers to keep doing things the way they always did. There are situations where a loop with side effects, which is what while gives you, is a more expressive way of achieving something but I had rather Java-fixated devs were forced to reach a little harder for it (e.g. by abusing a for comprehension).
Simply, traditional while and for make clunky imperative coding much too easy. If you don't care about that, why are you using Scala?
Efficiency and Risk of Stack Overflow
Tail call optimisation eliminates the risk of stack overflow. Rewriting recursive solutions to be properly tail recursive can make them very ugly (particularly in any language running on the JVM).
Recursive solutions can be more efficient than more imperative solutions, sometimes surprisingly so. One reason is that they often operate on lists, in a way that only involves head and tail access. Head and tail operations on lists are actually faster than random access operations on more structured collections.
Dynamic Programming
A good recursive algorithm typically reduces a complex problem to a small set of simpler problems, picks one to solve and delegates the rest to another function (usually a recursive call to itself). Now, to me this sounds like a great fit for dynamic programming. Certainly, if I am trying a recursive approach to a problem, I often start with a naive solution which I know can't solve every case, see where it fails, add that pattern to the solution and iterate towards success.
The Little Schemer has many examples of this iterative approach to recursive programming, particularly because it re-uses earlier solutions as sub-components for later, more complex ones. I would say it is the epitome of the Dynamic Programming approach. (It is also one of the best-written educational books about software ever produced.) I can recommend it, not least because it teaches you Scheme at the same time. If you really don't want to learn Scheme (why? why would you not?), it has been adapted for a few other languages.
If versus Match
if expressions, in Scala, return values (which is very useful and why Scala has no need for a ternary operator). There is no reason to avoid simple
if (something)
  // do something
else
  // do something else
expressions. The principal reason to use match instead of a simple if...else is to use the power of case statements to extract information from complex objects. Here is one example.
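The originally linked example is not reproduced here, but a rough sketch of the idea, with hypothetical types, looks like this:
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

// The patterns both select a branch and bind the fields we need.
def area(s: Shape): Double = s match {
  case Circle(r)       => math.Pi * r * r
  case Rectangle(w, h) => w * h
}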
On the other hand, if...else if...else if...else is a terrible pattern
There's no easy way to see if you covered all the possibilities properly, even with a final else in place.
Unintentionally nested if expressions are hard to spot
It is too easy to link unrelated conditions together (accidentally or through bone-headed design)
Wherever you find you have written else if, look for an alternative. match is a good place to start.

I'm assuming that, since you say "recursion" in your title, you also mean "recursion" in your question, and not "iteration" (which cannot be chosen "over for/while loops", because those are iterative :D).
You might be interested in reading Effective Scala, especially the section on control structures, which should mostly answer your question. In short:
Recursion isn't "better" than iteration. Often it is easier to write a recursive algorithm for a given problem than it is to write an iterative algorithm (of course there are cases where the opposite applies). When "tail call optimization" can be applied to a problem, the compiler actually converts it to an iterative algorithm, thus making it impossible for a StackOverflowError to happen, and without performance impact. You can read about tail call optimization in Effective Scala, too.
The main problem with your question is that it is very broad. There are many many resources available on functional programming, idiomatic scala, dynamic programming and so on, and no answer here on Stack Overflow would be able to cover all those topics. It'd be probably a good idea to just roam the interwebs for a while, and then come back with more concrete questions :)

One of the main benefits of recursion is that it lets you create solutions without mutation. For the following example, you have to calculate the sum of all the elements of a List.
One of the many ways to solve this problem is shown below. The imperative solution to this problem uses a for loop:
scala> var total = 0
scala> for(f <- List(1,2,3)) { total += f }
And a recursive solution would look like the following:
def total(xs: List[Int]): Int = xs match {
  case Nil => 0
  case x :: ys => x + total(ys)
}
The difference is that the recursive solution doesn't use any mutable temporary variables and lets you break the problem into smaller pieces. Because functional programming is all about writing side-effect-free programs, it is always encouraged to use recursion over loops (which rely on mutating variables).
Head recursion is a traditional way of doing recursion, where you perform the recursive call first and then take the return value from the recursive function and calculate the result.
Generally, when you call a function, an entry is added to the call stack of the currently running thread. The downside is that the call stack has a fixed size, so with deep recursion you may quickly get a StackOverflowError. This is why Java prefers to iterate rather than recurse. Because Scala runs on the JVM, Scala also suffers from this problem. But starting with Scala 2.8.1, Scala gets around this limitation by doing tail call optimization, so you can do tail recursion in Scala.
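A minimal sketch of what that looks like (my own example, not from the original answer): the @tailrec annotation asks the compiler to verify that the call really is in tail position, so the function below compiles to a loop and handles a million elements without blowing the stack.
import scala.annotation.tailrec

@tailrec
def sum(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil     => acc
  case y :: ys => sum(ys, acc + y)  // tail call: nothing left to do after it returns
}

sum(List.fill(1000000)(1))  // 1000000, no StackOverflowError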
To recap: recursion is the preferred way in functional programming to avoid mutation, and tail recursion is supported in Scala, so you don't run into the StackOverflowError exceptions you would get in Java.
Hope this helps.

As for stack overflow, a lot of the time you can get away with it because of tail call elimination.
The reason Scala and other functional paradigms avoid for/while loops is that they are highly dependent on state and time. That makes it much harder to reason about complex "loops" in a formal and precise manner.

Related

for vs map in functional programming

I am learning functional programming using Scala. In general I notice that for loops are not used much in functional programs; instead they use map.
Questions
What are the advantages of using map over a for loop in terms of performance, readability, etc.?
What is the intention of introducing a map function when the same thing can be achieved using a loop?
Program 1: Using For loop
val num = 1 to 1000
val another = 1000 to 2000
for (i <- num) {
  for (j <- another) {
    println(i, j)
  }
}
Program 2 : Using map
val num = 1 to 1000
val another = 1000 to 2000
val mapper = num.map(x => another.map(y => (x, y))).flatten
mapper.map(x => println(x))
Both program 1 and program 2 do the same thing.
The answer is quite simple actually.
Whenever you use a loop over a collection it has a semantic purpose. Either you want to iterate over the items of the collection and print them, or you want to transform the elements to another type (map), or you want to change the cardinality, such as computing the sum of the elements of a collection (fold).
Of course, all of that can also be done using for-loops, but to the reader of the code it is more work to figure out which semantic purpose the loop has, compared to a well-known, named operation such as map, iter, fold, filter, ...
Another aspect is that for loops lead to the dark side of using mutable state. How would you sum the elements of a collection in a for loop without mutable state? You would not. Instead you would need to write a recursive function. So, for good measure, it is best to drop the habit of thinking in for loops early and enjoy the brave new functional way of doing things.
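For example, the sum can be expressed as a fold, with neither a mutable variable nor a hand-written recursive function (a small sketch using foldLeft):
val xs = List(1, 2, 3, 4)

// The fold threads the running total through each step instead of mutating a var.
val total = xs.foldLeft(0)(_ + _)  // 10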
I'll start by quoting Programming in Scala.
"Every for expression can be expressed in terms of the three higher-order functions map, flatMap and filter. This section describes the translation scheme, which is also used by the Scala compiler."
http://www.artima.com/pins1ed/for-expressions-revisited.html#23.4
So the reason that you noticed for-loops are not used as much is because they technically aren't needed, and any for expressions you do see are just syntactic sugar which the compiler will translate into some equivalent. The rules for translating a for expression into a map/flatMap/filter expression are listed in the link above.
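As a small illustration of that scheme (my own sketch, following the translation described in the book), a for expression with a generator and a guard becomes a withFilter followed by a map:
// What you write...
val doubledEvens = for (x <- List(1, 2, 3, 4) if x % 2 == 0) yield x * 2

// ...is roughly what the compiler produces:
val doubledEvens2 = List(1, 2, 3, 4).withFilter(_ % 2 == 0).map(_ * 2)

// Both evaluate to List(4, 8).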
Generally speaking, in functional programming there is no index variable to mutate. This means one typically makes heavy use of function calls (often in the form of recursion) such as list folds in place of a while or for loop.
For a good example of using list folds in place of while/for loops, I recommend "Explain List Folds to Yourself" by Tony Morris.
https://vimeo.com/64673035
If a function is tail-recursive (denoted with @tailrec) then it can be optimized so as to not incur the high use of the stack which is common in recursive functions. In this case the compiler can translate the tail-recursive function to the "while loop equivalent".
To answer the second part of Question 1, there are some cases where one could make an argument that a for expression is clearer (although certainly there are cases where the opposite is true too.) One such example is given in the Coursera.org course "Functional Programming with Scala" by Dr. Martin Odersky:
for {
i <- 1 until n
j <- 1 until i
if isPrime(i + j)
} yield (i, j)
is arguably more clear than
(1 until n).flatMap(i =>
(1 until i).withFilter(j => isPrime(i + j))
.map(j => (i, j)))
For more information check out Dr. Martin Odersky's "Functional Programming with Scala" course on Coursera.org. Lecture 6.5 "Translation of For" in particular discusses this in more detail.
Also, as a quick side note, in your example you use
mapper.map(x => println(x))
It is generally more accepted to use foreach in this case because you have the intent of side-effecting. Also, there is short hand
mapper.foreach(println)
As for Question 2, it is better to use the map function in place of loops (especially when there is mutation in the loop) because map is a function and it can be composed. Also, once one is acquainted and used to using map, it is very easy to reason about.
The two programs that you have provided are not the same, even if the output might suggest that they are. It is true that for comprehensions are de-sugared by the compiler, but the first program you have is actually equivalent to:
val num = 1 to 1000
val another = 1000 to 2000
num.foreach(i => another.foreach(j => println(i,j)))
It should be noted that the resultant type for the above (and your example program) is Unit.
In the case of your second program, the resultant type of the program is, as determined by the compiler, Seq[Unit] - which is now a Seq that has the length of the product of the loop members. As a result, you should always use foreach to indicate an effect that results in a Unit result.
Think about what is happening at the machine-language level. Loops are still fundamental. Functional programming abstracts the loop that is implemented in conventional programming.
Essentially, instead of writing a loop as you would in conventional or imperative programming, the use of chaining or pipelining in functional programming allows the compiler to optimize the code for the user, and map is simply mapping the function to each element as a list or collection is iterated through. Functional programming is more convenient, and abstracts the mundane implementation of "for" loops etc. There are limitations to this convenience, particularly if you intend to use functional programming to implement parallel processing.
It is arguable, depending on the software engineer or developer, whether the compiler will be more efficient and know ahead of time the situation it is implemented in. IMHO, mid-level software engineers who are familiar with functional programming, well versed in conventional programming, and knowledgeable in parallel processing will implement both conventional and functional.

Cost in Scala of iteratively manipulating functions rather than simpler types

Having spent some time exploring Haskell, I've become used to building and manipulating chains of functions - and to assessing how this will perform in non-strict evaluation. I realise I do not understand well how that style performs in Scala.
To take a simple example, imagine that the Scala List had no append method. You could concatenate two lists of integers like this:
xs.reverse.foldLeft(ys)(_.::(_))
(Yes, I know Scala recently changed foldRight so that it just does reverse.foldLeft. sigh)
An alternative is to build up a function chain like this:
xs.foldLeft(identity[List[Int]] _)((f, a) => f compose (a :: _)) (ys)
In non-strict Haskell, the second approach is often better because it is "productive" and the entire chain will not necessarily be evaluated (I know, in Haskell you would just use a right fold for this). But in Scala, these two approaches will perform similarly. Both are O(n). Both are effectively creating a reversed collection on the heap and then iteratively consing that onto ys (you can think of the second style as creating a linked list of functions).
I have a general inclination towards the second style, but my question is: are there hidden gotchas (performance penalties I had not considered, significantly higher heap usage, that kind of thing) in this approach? In Scala, what factors should I keep in mind (other than code readability) when choosing between manipulating data or functions?

Tail recursion vs head classic recursion

Listening to Scala courses and explanations, I often hear: "but in real code we are not using recursion, but tail recursion".
Does it mean that in my real code I should NOT use recursion, but tail recursion, which is very much like looping and does not require that epic phrase "in order to understand recursion you first need to understand recursion"?
In reality, taking into account your stack, you would more likely use loop-like tail recursion.
Am I wrong? Is 'classic' recursion good only for educational purposes, to make your brain travel back to your university past?
Or, for all that, is there a place where we can use it, where the depth of recursive calls is less than X (where X is your stack-overflow limit)? Or could we start coding with classic recursion and then, fearing the stack will blow up one day, apply a couple of refactorings to make it tail-recursive?
Question: can you give some real samples where you would use / have used 'classic head' recursion in your real code, which has not yet been refactored into a tail-recursive one?
Tail Recursion == Loop
You can take any loop and express it as a tail-recursive call.
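For example, a simple counting loop and its tail-recursive equivalent (a sketch of my own, summing the integers 0 to n): the loop's mutable variables simply become parameters of the recursive call.
// Imperative version: two mutable variables driven by a while loop.
def sumTo(n: Int): Int = {
  var i = 0
  var acc = 0
  while (i <= n) { acc += i; i += 1 }
  acc
}

// Tail-recursive version: same shape, but i and acc are passed along instead of mutated.
def sumToRec(n: Int): Int = {
  @annotation.tailrec
  def loop(i: Int, acc: Int): Int =
    if (i > n) acc else loop(i + 1, acc + i)
  loop(0, 0)
}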
Background: In pure FP, everything must result in some value. A while loop in Scala doesn't result in any value, only side effects (e.g. updating some variable). It exists only to support programmers coming from an imperative background. Scala encourages developers to consider replacing while loops with recursion, which always results in some value.
So according to Scala: Recursion is the new iteration.
However, there is a problem with the previous statement: while "regular" recursive code is easier to read, it comes with a performance penalty AND carries an inherent risk of overflowing the stack. On the other hand, tail-recursive code will never result in a stack overflow (at least in Scala), and the performance will be the same as loops (in fact, the Scala compiler converts self-recursive tail calls to plain old iteration).
Going back to the question, there is nothing wrong with sticking to "regular" recursion, unless:
The algorithm you are using recurses deeply on large inputs (risk of stack overflow)
Tail Recursion brings a noticeable performance gain
There are two basic kinds of recursion:
head recursion
tail recursion
In head recursion, a function makes its recursive call and then performs some more calculations, maybe using the result of the recursive call, for example. In a tail recursive function, all calculations happen first and the recursive call is the last thing that happens.
The importance of this distinction doesn’t jump out at you, but it’s extremely important! Imagine a tail recursive function. It runs. It completes all its computation. As its very last action, it is ready to make its recursive call. What, at this point, is the use of the stack frame? None at all. We don’t need our local variables anymore because we’re done with all computations. We don’t need to know which function we’re in because we’re just going to re-enter the very same function. Scala, in the case of tail recursion, can eliminate the creation of a new stack frame and just re-use the current stack frame. The stack never gets any deeper, no matter how many times the recursive call is made. That’s the voodoo that makes tail recursion special in Scala.
Let's see this with an example.
def factorial1(n: Int): Int =
  if (n == 0) 1 else n * factorial1(n - 1)

def factorial2(n: Int): Int = {
  def loop(acc: Int, n: Int): Int =
    if (n == 0) acc else loop(acc * n, n - 1)
  loop(1, n)
}
Incidentally, some languages achieve a similar end by converting tail recursion into iteration rather than by manipulating the stack.
This won’t work with head recursion. Do you see why? Imagine a head recursive function. First it does some work, then it makes its recursive call, then it does a little more work. We can’t just re-use the current stack frame when we make that recursive call. We’re going to NEED that stack frame info after the recursive call completes. It has our local variables, including the result (if any) returned by the recursive call.
Here's a question for you. Is the example function factorial1 head recursive or tail recursive? Well, what does it do? (A) It checks whether its parameter is 0. (B) If so, it returns 1, since the factorial of 0 is 1. (C) If not, it returns n multiplied by the result of a recursive call. The recursive call is the last thing we typed before ending the function. That's tail recursion, right? Wrong. The recursive call is made, and THEN n is multiplied by the result, and this product is returned. This is actually head recursion (or middle recursion, if you like) because the recursive call is not the very last thing that happens.
For more info please refer to the link.
The first thing one should look at when developing software is the readability and maintainability of the code. Looking at performance characteristics is mostly premature optimization.
There is no reason not to use recursion when it helps to write high quality code.
The same counts for tail recursion vs. normal loops. Just look at this simple tail recursive function:
def gcd(a: Int, b: Int) = {
  def loop(a: Int, b: Int): Int =
    if (b == 0) a else loop(b, a % b)
  loop(math.abs(a), math.abs(b))
}
It calculates the greatest common divisor of two numbers. Once you know the algorithm it is clear how it works - writing this with a while-loop wouldn't make it clearer. Instead you would probably introduce a bug on the first try because you forgot to store a new value into one of the variables a or b.
On the other side see these two functions:
def goRec(i: Int): Unit = {
  if (i < 5) {
    println(i)
    goRec(i + 1)
  }
}

def goLoop(i: Int): Unit = {
  var j = i
  while (j < 5) {
    println(j)
    j += 1
  }
}
Which one is easier to read? They are more or less equal - all the syntax sugar you gain for tail-recursive functions due to Scala's expression-based nature is gone.
For recursive functions there is another thing that comes into play: lazy evaluation. If your code is lazily evaluated it can be recursive and yet no stack overflow will happen. See this simple function:
def map(f: Int => Int, xs: Stream[Int]): Stream[Int] = (f, xs) match {
  case (_, Stream.Empty) => Stream.Empty
  case (f, x #:: xs) => f(x) #:: map(f, xs)
}
Will it crash for large inputs? I don't think so:
scala> map(_+1, Stream.from(0).takeWhile(_<=1000000)).last
res6: Int = 1000001
Trying the same with Scala's List would kill your program. But because Stream is lazy, this is not a problem. In this case you could also write a tail-recursive function, but generally this is not easily possible.
There are many algorithms which will not be clear when they are written iteratively - one example is depth-first search of a graph. Do you want to maintain a stack by yourself just to save the values you need to go back to? No, you won't, because it is error prone and looks ugly (besides, by that definition an iterative depth-first search would count as recursion as well, since it has to use a stack, and "normal" recursion has to use a stack too; it is just hidden from the developer and maintained by the compiler).
To come back to the point of premature optimization, I have heard a nice analogy: When you have a problem that can't be solved with Int because your numbers will get large and it is likely that you get an overflow then don't switch to Long because it is likely that you get an overflow here as well.
For recursion it means that there may be cases where you will blow up your stack, but it is more likely that when you switch to a memory-only based solution you will get an out-of-memory error instead. A better piece of advice is to find a different algorithm that doesn't perform that badly.
In conclusion, try to prefer tail recursion to loops or normal recursion because it will for sure not kill your stack. But when you can do better, don't hesitate to do it better.
If you're not dealing with a linear sequence, then trying to write a tail-recursive function to traverse the entire collection is very difficult. In such cases, for the sake of readability/maintainability, you usually just use normal recursion instead.
A common example of this is a traversal of a binary tree data structure. For each node you might need to recur on both the left and right child nodes. If you were to try to write such a function tail-recursively, where first the left node is visited and then the right, you'd need to maintain some sort of auxiliary data structure to track all the remaining right nodes that need to be visited. However, you can achieve the same thing just using the call stack, and it's going to be more readable.
An example of this is the iterator method from Scala's RedBlack tree:
def iterator: Iterator[(A, B)] =
left.iterator ++ Iterator.single(Pair(key, value)) ++ right.iterator
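For a more explicit sketch of the same point (a hypothetical toy tree, not Scala's RedBlack implementation), plain recursion into both children keeps the pending work on the call stack, which is exactly the bookkeeping a tail-recursive version would have to manage by hand:
sealed trait Tree
case class Leaf(value: Int) extends Tree
case class Node(left: Tree, right: Tree) extends Tree

// Not tail recursive: the "+ size(right)" still has to run after size(left) returns,
// so the call stack remembers where to come back to.
def size(t: Tree): Int = t match {
  case Leaf(_)           => 1
  case Node(left, right) => size(left) + size(right)
}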

Is there any advantage to avoiding while loops in Scala?

Reading Scala docs written by the experts one can get the impression that tail recursion is better than a while loop, even when the latter is more concise and clearer. This is one example
object Helpers {
  import scala.annotation.tailrec

  implicit class IntWithTimes(val pip: Int) {
    // Recursive
    def times(f: => Unit): Unit = {
      @tailrec
      def loop(counter: Int): Unit = {
        if (counter > 0) { f; loop(counter - 1) }
      }
      loop(pip)
    }

    // Explicit loop
    def :#(f: => Unit) = {
      var lc = pip
      while (lc > 0) { f; lc -= 1 }
    }
  }
}
(To be clear, the expert was not addressing looping at all, but in the example they chose to write a loop in this fashion as if by instinct, which is what raised the question for me: should I develop a similar instinct?)
The only aspect of the while loop that could be better is that the iteration variable should be local to the body of the loop, and the mutation of the variable should be in a fixed place, but Scala chooses not to provide that syntax.
Clarity is subjective, but the question is does the (tail) recursive style offer improved performance?
I'm pretty sure that, due to the limitations of the JVM, not every potentially tail-recursive function will be optimised by the Scala compiler, so the short (and sometimes wrong) answer to your question on performance is no.
The long answer to your more general question (having an advantage) is a little more contrived. Note that, by using while, you are in fact:
creating a new variable that holds a counter.
mutating that variable.
Off-by-one errors and the perils of mutability will ensure that, in the long run, you'll introduce bugs with a while pattern. In fact, your times function could easily be implemented as:
def times(f: => Unit) = (1 to pip) foreach (_ => f)
Which not only is simpler and smaller, but also avoids creating any transient variables or mutability. In fact, if the function you were calling returned results that mattered, the while construction would be even more difficult to read. Please attempt to implement the following using nothing but whiles:
def replicate(l: List[Int])(times: Int) = l.flatMap(x => List.fill(times)(x))
Then proceed to define a tail-recursive function that does the same.
UPDATE:
I hear you saying: "hey! that's cheating! foreach is neither a while nor a tail-rec call". Oh really? Take a look into Scala's definition of foreach for Lists:
def foreach[B](f: A => B) {
  var these = this
  while (!these.isEmpty) {
    f(these.head)
    these = these.tail
  }
}
If you want to learn more about recursion in Scala, take a look at this blog post. Once you are into functional programming, go crazy and read Rúnar's blog post. Even more info here and here.
In general, a directly tail recursive function (i.e., one that always calls itself directly and cannot be overridden) will always be optimized into a while loop by the compiler. You can use the @tailrec annotation to verify that the compiler is able to do this for a particular function.
As a general rule, any tail recursive function can be rewritten (usually automatically by the compiler) as a while loop and vice versa.
The purpose of writing functions in a (tail) recursive style is not to maximize performance or even conciseness, but to make the intent of the code as clear as possible, while simultaneously minimizing the chance of introducing bugs (by eliminating mutable variables, which generally make it harder to keep track of what the "inputs" and "outputs" of the function are). A properly written recursive function consists of a series of checks for terminating conditions (using either cascading if-else or a pattern match) with the recursive call(s) (plural only if not tail recursive) made if none of the terminating conditions are met.
The benefit of using recursion is most dramatic when there are several different possible terminating conditions. A series of if conditionals or patterns is generally much easier to comprehend than a single while condition with a whole bunch of (potentially complex and inter-related) boolean expressions &&'d together, especially if the return value needs to be different depending on which terminating condition is met.
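A hypothetical sketch of what that looks like in practice: each terminating condition is its own case with its own result, rather than being folded into one while condition.
import scala.annotation.tailrec

// Find the first negative number, giving up after `limit` elements.
@tailrec
def firstNegative(xs: List[Int], limit: Int): Option[Int] = xs match {
  case Nil             => None     // ran out of elements
  case _ if limit <= 0 => None     // gave up
  case x :: _ if x < 0 => Some(x)  // found one
  case _ :: rest       => firstNegative(rest, limit - 1)
}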
Did these experts say that performance was the reason? I'm betting their reasons are more to do with expressive code and functional programming. Could you cite examples of their arguments?
One interesting reason why recursive solutions can be more efficient than more imperative alternatives is that they very often operate on lists and in a way that uses only head and tail operations. These operations are actually faster than random-access operations on more complex collections.
Another reason that while-based solutions may be less efficient is that they can become very ugly as the complexity of the problem increases...
(I have to say, at this point, that your example is not a good one, since neither of your loops do anything useful. Your recursive loop is particularly atypical since it returns nothing, which implies that you are missing a major point about recursive functions. The functional bit. A recursive function is much more than another way of repeating the same operation n times.)
While loops do not return a value and require side effects to achieve anything. A while loop is a control structure which only works at all for very simple tasks. This is because each iteration of the loop has to examine all of the state to decide what to do next. The loop's boolean expression may also have to become very complex if there are multiple potential exit paths (or that complexity has to be distributed throughout the code in the loop, which can be ugly and obfuscatory).
Recursive functions offer the possibility of a much cleaner implementation. A good recursive solution breaks a complex problem down in to simpler parts, then delegates each part on to another function which can deal with it - the trick being that that other function is itself (or possibly a mutually recursive function, though that is rarely seen in Scala - unlike the various Lisp dialects, where it is common - because of the poor tail recursion support). The recursively called function receives in its parameters only the simpler subset of data and only the relevant state; it returns only the solution to the simpler problem. So, in contrast to the while loop,
Each iteration of the function only has to deal with a simple subset of the problem
Each iteration only cares about its inputs, not the overall state
Success in each subtask is clearly defined by the return value of the call that handled it.
State from different subtasks cannot become entangled (since it is hidden within each recursive function call).
Multiple exit points, if they exist, are much easier to represent clearly.
Given these advantages, recursion can make it easier to achieve an efficient solution. Especially if you count maintainability as an important factor in long-term efficiency.
I'm going to go find some good examples of code to add. Meanwhile, at this point I always recommend The Little Schemer. I would go on about why but this is the second Scala recursion question on this site in two days, so look at my previous answer instead.

Scala recursion no side effects

OK, I get that all this recursion is more functional because you are not changing the state of any object in an iteration. However, there is nothing stopping you doing this in Scala:
var magoo = 7
def mergeSort(xs: List[Int]): List[Int] = {
  ...
  magoo = magoo + 1
  mergeSort(xs1, xs2)
}
In fact, you can make recursion just as side-effect-free in Scala as you can in Java.
So is it fair to say that Scala just makes it easier to write concise recursion by using pattern matching? Like there is nothing stopping me writing any stateless recursion code in Java that I can write in Scala?
The point is really that in Scala complex recursion can be achieved with neater code.
That's all.
Correct?
There is something that will stop you from writing deeply recursive code in Java: the lack of tail call elimination (TCE). In Java it is possible to get a StackOverflowError on deep recursion, whereas in Scala tail calls will be optimized (internally represented as loops).
So is it fair to say that Scala just makes it easier to write concise recursion by using pattern matching?
I think in Scala those two concepts are orthogonal to each other.
Of course you can do complex recursion in Java. You could do complex recursion in assembler if you wanted to. But in Scala it is easier to do.
Also Scala has tail call optimisation which is very important if you want to write any arbitrary iterative algorithm as a recursive method without getting a stack overflow or performance drop.
Few programming languages actually force you to write immutable code. In fact, the only truly pure functional language might be Haskell, and even Scheme and ML have some way to use mutable values. So the functional style just encourages you to write immutable code; it is up to you to choose whether to mutate values or not.