What is the time complexity of pattern matching in Scala?

Does the time complexity depend on what is being matched on or does it get compiled down to some form of lookup table which can do O(1) lookups?

Some of Scala's match statements can be compiled to the same bytecode as Java's switch statements. There is an annotation, @switch, to ensure that.
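For instance (a minimal sketch), the @switch annotation asks the compiler to verify that the match compiles down to a JVM tableswitch or lookupswitch, and to complain if it cannot:
import scala.annotation.switch

def describe(n: Int): String = (n: @switch) match {
  case 1 => "one"
  case 2 => "two"
  case 3 => "three"
  case _ => "many" // constant-time dispatch, like a Java switch
}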
But in most cases, especially complex ones like deconstructions, a match will be compiled to the same bytecode as a series of if / else statements.
In general, I would not expect them to be a "constant" operation, but rather a "linear" operation.
In any case, since the maximum number of checks does not change with the input, and usually there will be no more than ten or so of them, formally one would say it has O(1) complexity.
See yǝsʞǝlA's answer for a more detailed explanation of that.
If you are worried about it, you can put the most common cases first and the others after. However, I would not really worry about its performance; your application will not notice it. I would favor readability and correctness of the code instead.

Pattern matching will in most cases be O(1), because you are usually matching against a small number of possible cases and each match consists of a few constant-time operations on average.
Since pattern matching is achieved by calling the unapply method on the matched object (see the docs on extractor objects) and optionally comparing the extracted values, the time complexity depends on the unapply method's implementation and can be of any complexity. No compiler optimization is possible in the general case, because some pattern matches depend on the data being passed to them.
Compare these scenarios:
List(1, 2, 3) match {
  case _ :+ last    => ... // O(n) with respect to list length
  case head :: tail => ... // O(1) w.r.t. list length
  case _            => ... // O(1) - default case, no operation needs to be done
}
Most of the time we pattern match on something like a list to get a head/tail split with :: - O(1), because unapply simply returns the head if it exists.
We usually don't use :+ because it's uncommon and expensive (library code):
/** An extractor used to init/last deconstruct sequences. */
object :+ {
  /** Splits a sequence into init :+ last.
   *  @return Some((init, last)) if sequence is non-empty. None otherwise.
   */
  def unapply[T, Coll <: SeqLike[T, Coll]](
      t: Coll with SeqLike[T, Coll]): Option[(Coll, T)] =
    if (t.isEmpty) None
    else Some(t.init -> t.last)
}
To get the last element of a sequence (t.last) we need to traverse it, which is O(n).
So it really depends on how you pattern match, but usually you match on case classes, tuples, options, and collections to get the first element rather than the last. In the overwhelming majority of cases you'll get O(1) time complexity and a ton of type safety.
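For instance, nothing stops an extractor from doing arbitrary work. A hypothetical extractor (our own illustration, not library code) whose unapply is O(n):
object SumsTo100 {
  def unapply(xs: List[Int]): Boolean = xs.sum == 100 // traverses the whole list
}

List(40, 60) match {
  case SumsTo100() => println("matched, at O(n) cost")
  case _           => println("no match")
}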
Additionally:
In the worst case there will be m patterns, each doing on average c operations to perform a match (this assumes unapply takes constant time, but there are exceptions). Additionally, there will be an object with n properties which we need to match against these patterns, which gives a total of m * c * n operations. However, since m is really small (patterns never grow dynamically and are usually written by a human), we can safely call it a constant b, giving us T(n) = b * c * n. In terms of big-O: T(n) = O(n). So we have established a theoretical bound of O(n) for the cases where we need to check all n properties of an object. As I pointed out above, in most cases we don't need to check all properties/elements, as when we use head :: tail, where n is replaced with a constant and we get O(1). Only if we always did something like init :+ last would we get O(n). The amortized cost, I believe, is still O(1) for all cases in your program.

Related

Queue implementation in Odersky Scala book. Chapter 19

I see this code on page 388 of the Odersky book on Scala:
class SlowAppendQueue[T](elems: List[T]) {
  def head = elems.head
  def tail = new SlowAppendQueue(elems.tail)
  def enqueue(x: T) = new SlowAppendQueue(elems ::: List(x))
}

class SlowHeadQueue[T](smele: List[T]) {
  def head = smele.last
  def tail = new SlowHeadQueue(smele.init)
  def enqueue(x: T) = new SlowHeadQueue(x :: smele)
}
Is the following correct to say:
Both implementations of tail take time proportional to the number of elements in the queue.
The second implementation of head is slower than the first. The second implementation takes time proportional to the length of the queue. Why is this? How is it implemented? Is it like a linked list where each element has a pointer to the next?
Why does Odersky say the second class' implementation of tail is problematic but not the first?
No. In the first case, tail works in constant time, because elems.tail is a constant-time operation (it just returns the tail of the list). The constructor new SlowAppendQueue(...) is also a constant-time operation, because it just wraps the list.
Because if smele has N > 1 elements, then smele.init must rebuild a new list with N - 1 elements from scratch. This takes linear time, therefore it is much slower than the O(1) operation from the first queue implementation.
O(N) operations are problematic because they are slow for large N, whereas O(1) operations are essentially never problematic.
I think you should take a closer look at how the immutable singly-linked list is implemented, and what it takes to prepend an element (O(1)), append an element (O(N)), access the tail (O(1)), and rebuild the init (O(N)). Then everything else becomes obvious.
No, the first tail implementation takes constant time. This is because List.tail is a constant time operation due to structural sharing, and wrapping the list in a new SlowAppendQueue is also a constant time operation.
The second implementation of head takes linear time because of the way functional linked lists (including Scala's List class) work: each list node has a link only to the node after it, so reaching the last element means walking the entire list. Likewise, in order to remove the last element via init, the entire list must be rebuilt.
In summary, List is fast when operating on the beginning, but not when solely operating on the end. See also the Scala docs for List.
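To make the asymmetry concrete, here is a small sketch of the costs involved (the comments are our own summary):
val xs = List(2, 3, 4)
val ys = 1 :: xs   // O(1): one new node pointing at xs (structural sharing)
val zs = xs :+ 5   // O(n): every node of xs must be copied
val t  = xs.tail   // O(1): just a reference to the second node
val i  = xs.init   // O(n): rebuilds the list without the last element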

Why do Scala's index methods return -1 instead of None if the element is not found?

I've always wondered why in Scala the various index methods for determining the position of an element in a collection (e.g. List.indexOf, List.indexWhere) return -1 to indicate the absence of the given element in the collection, instead of the more idiomatic Option[Int]. Is there some particular advantage to returning -1 instead of None, or is this just for historical reasons?
It is just for historical reasons, but then one wants to know what the historical reasons are: what was the history, and why did it turn out that way?
The immediate history is the java.lang.String.indexOf method, which returns the index, or -1 if no matching character is found. But this is hardly new; the Fortran SCAN function returns 0 if no character is found in a string, which is the same thing given that Fortran uses 1-indexing.
The reason to do this is that valid indices are never negative, so any negative value can be used as a None without any overhead of boxing. -1 is the most convenient negative number, so that's it.
And this can add up if the compiler isn't smart enough to realize that all the boxing and unboxing and everything is irrelevant. In particular, an object creation tends to take 5-10 ns, while a function call or comparison typically takes more like 1-2 ns, so if the collection is short, creating a new object can have a sizable fractional penalty (more so if your memory is already taxed and the GC has a lot of work to do).
If Scala had initially had an amazing optimizer, then the choice probably would have been different, as one would just write things with options, which is safer and less of a special case, and then trust the compiler to convert it into appropriately high-performance code.
Speed? (not sure)
def a(): Option[Int] = Some(Math.random().toInt)
def b(): Int = Math.random().toInt

val t0 = System.nanoTime; (0 to 1000000).foreach(_ => a()); println("" + (System.nanoTime - t0))
// 53988000
val t1 = System.nanoTime; (0 to 1000000).foreach(_ => b()); println("" + (System.nanoTime - t1))
// 49273000
And you would also still have to check for index < 0 before wrapping the result in Some(index).
There is also the benefit that just returning an Int can use Java's built-in types, whereas Option[Int] would need to wrap the integer in an object. This means not only worse speed (as indicated by @idonnie) but also more memory usage.
While Option is great as a general tool (and I use it a lot), other non-value representations such as Double.NaN or an empty string are also perfectly valid, and useful.
One of the benefits of using Option is the ability to pass it to for loops etc. as a collection. If you are not likely to do that, checking for -1 or NaN may be more concise than doing matches for None/Some.
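If you do want an Option, it is easy to recover one from the -1 convention. A hypothetical helper (the name is ours; it is not in the standard library):
def indexOfOpt[A](xs: Seq[A], elem: A): Option[Int] =
  xs.indexOf(elem) match {
    case -1 => None
    case i  => Some(i)
  }

indexOfOpt(List(1, 2, 3), 2) // Some(1)
indexOfOpt(List(1, 2, 3), 9) // None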

Why use foldLeft instead of the procedural version?

So in reading this question it was pointed out that instead of the procedural code:
def expand(exp: String, replacements: Traversable[(String, String)]): String = {
  var result = exp
  for ((oldS, newS) <- replacements)
    result = result.replace(oldS, newS)
  result
}
You could write the following functional code:
def expand(exp: String, replacements: Traversable[(String, String)]): String = {
  replacements.foldLeft(exp) {
    case (result, (oldS, newS)) => result.replace(oldS, newS)
  }
}
I would almost certainly write the first version because coders familiar with either procedural or functional styles can easily read and understand it, while only coders familiar with functional style can easily read and understand the second version.
But setting readability aside for the moment, is there something that makes foldLeft a better choice than the procedural version? I might have thought it would be more efficient, but it turns out that the implementation of foldLeft is actually just the procedural code above. So is it just a style choice, or is there a good reason to use one version or the other?
Edit: Just to be clear, I'm not asking about other functions, just foldLeft. I'm perfectly happy with the use of foreach, map, filter, etc. which all map nicely onto for-comprehensions.
Answer: There are really two good answers here (provided by delnan and Dave Griffith) even though I could only accept one:
Use foldLeft because there are additional optimizations, e.g. using a while loop which will be faster than a for loop.
Use fold if it ever gets added to regular collections, because that will make the transition to parallel collections trivial.
It's shorter and clearer - yes, you need to know what a fold is to understand it, but when you're programming in a language that's 50% functional, you should know these basic building blocks anyway. A fold is exactly what the procedural code does (repeatedly applying an operation), but it's given a name and generalized. And while it's only a small wheel you're reinventing, it's still a wheel reinvention.
And in case the implementation of foldLeft should ever get some special perk - say, extra optimizations - you get that for free, without updating countless methods.
Other than a distaste for mutable variables (even mutable locals), the basic reason to use fold in this case is clarity, with occasional brevity. Most of the wordiness of the fold version comes from having to use an explicit function definition with a destructuring bind. If each element in the list is used precisely once in the fold operation (a common case), this can be simplified to the short form. Thus the classic definition of the sum of a collection of numbers
collection.foldLeft(0)(_+_)
is much simpler and shorter than any equivalent imperative construct.
One additional meta-reason to use functional collection operations, although not directly applicable in this case, is to enable a move to using parallel collection operations if needed for performance. Fold can't be parallelized, but often fold operations can be turned into commutative-associative reduce operations, and those can be parallelized. With Scala 2.9, changing something from non-parallel functional to parallel functional utilizing multiple processing cores can sometimes be as easy as dropping a .par onto the collection you want to execute parallel operations on.
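A hedged sketch of that move, assuming Scala 2.9-2.12 where .par is available directly on standard collections (since 2.13 it lives in the separate scala-parallel-collections module):
val xs = (1 to 10000).toVector
val total = xs.par.map(_ * 2).reduce(_ + _) // reduce requires an associative operation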
One word I haven't seen mentioned here yet is declarative:
Declarative programming is often defined as any style of programming that is not imperative. A number of other common definitions exist that attempt to give the term a definition other than simply contrasting it with imperative programming. For example:
A program that describes what computation should be performed and not how to compute it
Any programming language that lacks side effects (or more specifically, is referentially transparent)
A language with a clear correspondence to mathematical logic.
These definitions overlap substantially.
Higher-order functions (HOFs) are a key enabler of declarativity, since we only specify the what (e.g. "using this collection of values, multiply each value by 2, sum the result") and not the how (e.g. initialize an accumulator, iterate with a for loop, extract values from the collection, add to the accumulator...).
Compare the following:
// Sugar-free Scala (still better than Java < 5)
def sumDoubled1(xs: List[Int]) = {
  var sum = 0                  // Initialized correctly?
  for (i <- 0 until xs.size) { // Fenceposts?
    sum = sum + (xs(i) * 2)    // Correct value being extracted?
                               // Value extraction and +/* smashed together
  }
  sum                          // Correct value returned?
}

// Iteration sugar (similar to Java 5)
def sumDoubled2(xs: List[Int]) = {
  var sum = 0
  for (x <- xs)         // We don't need to worry about fenceposts or
    sum = sum + (x * 2) // value extraction anymore; that's progress
  sum
}

// Verbose Scala
def sumDoubled3(xs: List[Int]) = xs.map((x: Int) => x * 2).            // the doubling
                                   reduceLeft((x: Int, y: Int) => x + y) // the addition

// Idiomatic Scala
def sumDoubled4(xs: List[Int]) = xs.map(_ * 2).reduceLeft(_ + _)
//                                      ^ the doubling     ^ the addition
Note that our first example, sumDoubled1, is already more declarative than (most would say superior to) C/C++/Java<5 for loops, because we haven't had to micromanage the iteration state and termination logic, but we're still vulnerable to off-by-one errors.
Next, in sumDoubled2, we're basically at the level of Java>=5. There are still a couple things that can go wrong, but we're getting pretty good at reading this code-shape, so errors are quite unlikely. However, don't forget that a pattern that's trivial in a toy example isn't always so readable when scaled up to production code!
With sumDoubled3, desugared for didactic purposes, and sumDoubled4, the idiomatic Scala version, the iteration, initialization, value extraction and choice of return value are all gone.
Sure, it takes time to learn to read the functional versions, but we've drastically foreclosed our options for making mistakes. The "business logic" is clearly marked, and the plumbing is chosen from the same menu that everyone else is reading from.
It is worth pointing out that there is another way of calling foldLeft, which takes advantage of:
The ability to use (almost) any Unicode symbol in an identifier
The feature that if a method name ends with a colon :, and is called infix, then the target and parameter are switched
For me this version is much clearer, because I can see that I am folding the expr value into the replacements collection:
def expand(expr: String, replacements: Traversable[(String, String)]): String = {
  (expr /: replacements) { case (r, (o, n)) => r.replace(o, n) }
}

Elegant way to reverse a list using foldRight?

I was reading about fold techniques in Programming in Scala book and came across this snippet:
def reverseLeft[T](xs: List[T]) = (List[T]() /: xs) {
  (y, ys) => ys :: y
}
As you can see, it was done using foldLeft, i.e. the /: operator. Curious what it would look like if I did it using :\, I came up with this:
def reverseRight[T](xs: List[T]) = (xs :\ List[T]()) {
  (y, ys) => ys ::: List(y)
}
As I understand it, ::: doesn't seem to be as fast as :: and has a linear cost depending on the size of the operand list. Admittedly, I don't have a background in CS or prior FP experience. So my questions are:
How do you recognise/distinguish between foldLeft/foldRight in problem approaches?
Is there a better way of doing this without using :::?
Since foldRight on List in the standard library is strict and implemented using linear recursion, you should avoid using it, as a rule. An iterative implementation of foldRight would be as follows:
def foldRight[A, B](f: (A, B) => B, z: B, xs: List[A]) =
  xs.reverse.foldLeft(z)((x, y) => f(y, x))
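For instance, this version survives inputs that would overflow the stack under naive linear recursion (a small usage sketch):
val big = (1 to 1000000).toList
foldRight[Int, Long]((x, acc) => x + acc, 0L, big) // 500000500000, no stack overflow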
A recursive implementation of foldLeft could be this:
def foldLeft[A, B](f: (B, A) => B, z: B, xs: List[A]) =
  xs.reverse.foldRight(z)((x, y) => f(y, x))
So you see, if both are strict, then one or the other of foldRight and foldLeft is going to be implemented (conceptually anyway) with reverse. Since the way lists are constructed with :: associates to the right, the straightforward iterative fold is going to be foldLeft, and foldRight is simply "reverse then foldLeft".
Intuitively, you might think that this would be a slow implementation of foldRight, since it folds the list twice. But:
"Twice" is a constant factor anyway, so it's asymptotically equivalent to folding once.
You have to go over the list twice anyway. Once to push computations onto the stack and again to pop them off the stack.
The implementation of foldRight above is faster than the one in the standard library.
Operations on a List are intentionally not symmetric. The List data structure is a singly-linked list where each node (both data and pointer) is immutable. The idea behind this data structure is that you perform modifications on the front of the list by taking references to internal nodes and adding new nodes that point to them -- different versions of the list will share the same nodes for the end of the list.
The ::: operator, which concatenates two lists, has to create a new copy of its entire left operand, because otherwise it would modify other lists that share nodes with the list you're appending to. This is why ::: takes linear time.
Scala has a data structure called a ListBuffer that you can use instead of the ::: operator to make appending to the end of a list faster. Basically, you create a new ListBuffer and it starts with an empty list. The ListBuffer maintains a list completely separate from any other list that the program knows about, so it's safe to modify it by adding on to the end. When you're finished adding on to the end, you call ListBuffer.toList, which releases the list into the world, at which point you can no longer add on to the end without copying it.
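A small sketch of that workflow (our own illustration of what the answer describes):
import scala.collection.mutable.ListBuffer

val buf = ListBuffer.empty[Int]
for (i <- 1 to 5) buf += i // O(1) append to the buffer's private list
val result = buf.toList    // List(1, 2, 3, 4, 5); appending after this would copy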
foldLeft and foldRight also share a similar asymmetry. foldRight requires you to walk the entire list to get to the end and keep track of everywhere you've visited on the way there, so that you can visit them in reverse order. This is usually done recursively, and it can lead to foldRight causing stack overflows on large lists. foldLeft, on the other hand, deals with nodes in the order they appear in the list, so it can forget the ones it has already visited and only needs to know about one node at a time. Though foldLeft is also usually implemented recursively, it can take advantage of an optimization called tail recursion elimination, in which the compiler transforms the recursive calls into a loop because the function doesn't do anything after returning from the recursive call. Thus, foldLeft doesn't overflow the stack even on very long lists. EDIT: foldRight in Scala 2.8 is actually implemented by reversing the list and running foldLeft on the reversed list -- so the tail recursion issue is not an issue; both folds handle tail recursion correctly, and you could choose either one. (You do get into the issue that you're now defining reverse in terms of reverse -- you don't need to worry if you're defining your own reverse method for the fun of it, but you wouldn't have the foldRight option at all if you were defining Scala's reverse method.)
Thus, you should prefer foldLeft and :: over foldRight and :::.
(If an algorithm calls for combining foldLeft with ::: or foldRight with ::, then you need to decide for yourself which is more important: stack space or running time. Or you should use foldLeft with a ListBuffer.)
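To tie this back to the question: reversing with foldLeft and :: keeps every step O(1), for O(n) overall (a sketch equivalent to the reverseLeft version above):
def reverse[T](xs: List[T]): List[T] =
  xs.foldLeft(List.empty[T])((acc, x) => x :: acc)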

Why should I avoid using local modifiable variables in Scala?

I'm pretty new to Scala, and before this I mostly used Java. Right now I have warnings all over my code saying that I should "Avoid mutable local variables" and I have a simple question - why?
Suppose I have a small problem - determine the max int out of four. My first approach was:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  var subMax1 = a
  if (b > a) subMax1 = b
  var subMax2 = c
  if (d > c) subMax2 = d
  if (subMax1 > subMax2) subMax1
  else subMax2
}
After taking into account this warning message I found another solution:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  max(max(a, b), max(c, d))
}

def max(a: Int, b: Int): Int = {
  if (a > b) a
  else b
}
It looks prettier, but what is the ideology behind this?
Whenever I approach a problem I think about it like: "OK, we start from this, and then we incrementally change things and get the answer." I understand that the issue is that I try to change some initial state to get an answer, but I do not understand why changing things, at least locally, is bad. And how do you iterate over a collection in functional languages like Scala?
For example: suppose we have a list of ints; how do you write a function that returns the sublist of ints which are divisible by 6? I can't think of a solution without a local mutable variable.
In your particular case there is another solution:
def max4(a: Int, b: Int, c: Int, d: Int): Int = {
  val submax1 = if (a > b) a else b
  val submax2 = if (c > d) c else d
  if (submax1 > submax2) submax1 else submax2
}
Isn't it easier to follow? Of course I am a bit biased, but I tend to think it is. BUT don't follow that rule blindly. If you see that some code might be written more readably and concisely in a mutable style, do it that way -- the great strength of Scala is that you don't need to commit to either the immutable or the mutable approach; you can swing between them (btw the same applies to return keyword usage).
Like an example: Suppose we have a list of ints, how to write a
function that returns the sublist of ints which are divisible by 6?
Can't think of solution without local mutable variable.
It is certainly possible to write such a function using recursion, but, again, if the mutable solution looks good and works well, why not?
It's not so much about Scala as about the functional programming methodology in general. The idea is the following: if you have constant variables (final in Java), you can use them without any fear that they are going to change. In the same way, you can parallelize your code without worrying about race conditions or thread-unsafe code.
In your example it is not so important, but imagine the following:
val variable = ...
Future { function1(variable) }
Future { function2(variable) }
Using immutable variables you can be sure that there will not be any problem. Otherwise, you would have to check the main thread and both function1 and function2.
Of course, it's possible to obtain the same result with mutable variables if you never actually change them. But using immutable ones you can be sure that this will be the case.
Edit to answer your edit:
Local mutables are not inherently bad; that's why you can use them. However, if you try to think of approaches without them, you can arrive at solutions like the one you posted, which is cleaner and can be parallelized very easily.
How to iterate over collection then in functional languages like Scala?
You can always iterate over an immutable collection, as long as you do not change anything. For example:
val list = Seq(1, 2, 3)
for (n <- list)
  println(n)
With respect to the second thing that you said: you have to stop thinking in the traditional way. In functional programming the usage of map, filter, reduce, etc. is normal, as are pattern matching and other concepts that are not typical in OOP. For the example you give:
Like an example: Suppose we have a list of ints, how to write a function that returns sublist of ints which are divisible by 6?
val list = Seq(1,6,10,12,18,20)
val result = list.filter(_ % 6 == 0)
Firstly you could rewrite your example like this:
def max(first: Int, others: Int*): Int = {
  if (others.isEmpty) first // guard: otherwise others.head would throw for a single argument
  else {
    val curMax = Math.max(first, others.head)
    if (others.size == 1) curMax else max(curMax, others.tail: _*)
  }
}
This uses varargs and tail recursion to find the largest number. Of course there are many other ways of doing the same thing.
To answer your question - it's a good one, and one that I thought about myself when I first started to use Scala. Personally I think the whole immutable/functional programming approach is somewhat overhyped. But for what it's worth, here are the main arguments in favour of it:
Immutable code is easier to read (subjective)
Immutable code is more robust - it's certainly true that changing mutable state can lead to bugs. Take this for example:
for (int i = 0; i < 100; i++) {
  for (int j = 0; j < 100; i++) {
    System.out.println("i is " + i + " and j is " + j);
  }
}
This is an over-simplified example, but it's still easy to miss the bug, and the compiler won't help you.
Mutable code is generally not thread safe. Even trivial and seemingly atomic operations are not safe. Take i++ for example: this looks like an atomic operation, but it's actually equivalent to:
int tempI = i + 1; // read i and compute the new value
i = tempI;         // write it back; another thread can interleave between these steps
Immutable data structures won't allow you to do something like this, so you would need to explicitly think about how to handle it. Of course, as you point out, local variables are generally thread safe, but there is no guarantee. It's possible to pass a ListBuffer instance variable as a parameter to a method, for example.
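A minimal sketch of that read-modify-write race (our own illustration; assumes Scala 2.12+ for the function-to-Runnable conversion):
var counter = 0
val threads = (1 to 2).map(_ => new Thread(() => {
  for (_ <- 1 to 100000) counter += 1 // read-modify-write, not atomic
}))
threads.foreach(_.start())
threads.foreach(_.join())
println(counter) // almost certainly less than 200000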
However there are downsides to immutable and functional programming styles:
Performance. It is generally slower at both compile time and run time. The compiler must enforce the immutability, and the JVM must allocate more objects than would be required with mutable data structures. This is especially true of collections.
Most Scala examples show something like val numbers = List(1,2,3), but in the real world hard-coded values are rare. We generally build collections dynamically (from a database query etc.). While Scala's immutable collections share structure between versions, a new collection object must still be created every time you modify one. If you want to add 1000 elements to a Scala List (immutable), the JVM will need to allocate (and then GC) 1000 objects.
Hard to maintain. Functional code can be very hard to read, it's not uncommon to see code like this:
val data = numbers.foreach(_.map(a => doStuff(a).flatMap(somethingElse)).foldLeft("", (a: Int, b: Int) => a + b))
I don't know about you but I find this sort of code really hard to follow!
Hard to debug. Functional code can also be hard to debug. Try putting a breakpoint halfway into my (terrible) example above
My advice would be to use a functional/immutable style where it genuinely makes sense and you and your colleagues feel comfortable doing it. Don't use immutable structures because they're cool or it's "clever". Complex and challenging solutions will get you bonus points at Uni but in the commercial world we want simple solutions to complex problems! :)
Your two main questions:
Why warn against local state changes?
How can you iterate over collections without mutable state?
I'll answer both.
Warnings
The compiler warns against the use of mutable local variables because they are often a cause of error. That doesn't mean this is always the case. However, your sample code is pretty much a classic example of where mutable local state is used entirely unnecessarily, in a way that not only makes it more error prone and less clear but also less efficient.
Your first code example is more inefficient than your second, functional solution. Why potentially make two assignments to submax1 when you only ever need to assign one? You ask which of the two inputs is larger anyway, so why not ask that first and then make one assignment? Why was your first approach to temporarily store partial state only halfway through the process of asking such a simple question?
Your first code example is also inefficient because of unnecessary code duplication. You're repeatedly asking "which is the biggest of two values?" Why write out the code for that 3 times independently? Needlessly repeating code is a known bad habit in OOP every bit as much as FP and for precisely the same reasons. Each time you needlessly repeat code, you open a potential source of error. Adding mutable local state (especially when so unnecessary) only adds to the fragility and to the potential for hard to spot errors, even in short code. You just have to type submax1 instead of submax2 in one place and you may not notice the error for a while.
Your second, FP solution removes the code duplication, dramatically reducing the chance of error, and shows that there was simply no need for mutable local state. It's also, as you yourself say, cleaner and clearer - and better than the alternative solution in om-nom-nom's answer.
(By the way, the idiomatic Scala way to write such a simple function is
def max(a: Int, b: Int) = if (a > b) a else b
whose terser style emphasises its simplicity and makes the code less verbose.)
Your first solution was inefficient and fragile, but it was your first instinct. The warning caused you to find a better solution. The warning proved its value. Scala was designed to be accessible to Java developers and is taken up by many with a long experience of imperative style and little or no knowledge of FP. Their first instinct is almost always the same as yours. You have demonstrated how that warning can help improve code.
There are cases where using mutable local state can be faster but the advice of Scala experts in general (not just the pure FP true believers) is to prefer immutability and to reach for mutability only where there is a clear case for its use. This is so against the instincts of many developers that the warning is useful even if annoying to experienced Scala devs.
It's funny how often some kind of max function comes up in "new to FP/Scala" questions. The questioner is very often tripping up on errors caused by their use of local state... which both demonstrates the often obtuse addiction to mutable state among some devs and leads me on to your other question.
Functional Iteration over Collections
There are three functional ways to iterate over collections in Scala
For Comprehensions
Explicit Recursion
Folds and other Higher Order Functions
For Comprehensions
Your question:
Suppose we have a list of ints, how to write a function that returns sublist of ints which are divisible by 6? Can't think of solution without local mutable variable
Answer: assuming xs is a list (or some other sequence) of integers, then
for (x <- xs; if x % 6 == 0) yield x
will give you a sequence (of the same type as xs) containing only those items which are divisible by 6, if any. No mutable state required. Scala just iterates over the sequence for you and returns anything matching your criteria.
If you haven't yet learned the power of for comprehensions (also known as sequence comprehensions), you really should. It's a very expressive and powerful part of Scala syntax. You can even use them with side effects and mutable state if you want (look at the final example in the tutorial I just linked to). That said, there can be unexpected performance penalties, and they are overused by some developers.
Explicit Recursion
In the question I linked to at the end of the first section, I give in my answer a very simple, explicitly recursive solution to returning the largest Int from a list.
def max(xs: List[Int]): Option[Int] = xs match {
  case Nil => None
  case List(x: Int) => Some(x)
  case x :: y :: rest => max((if (x > y) x else y) :: rest)
}
I'm not going to explain how the pattern matching and explicit recursion work (read my other answer or this one). I'm just showing you the technique. Most Scala collections can be iterated over recursively, without any need for mutable state. If you need to keep track of what you've been up to along the way, you pass along an accumulator. (In my example code, I stick the accumulator at the front of the list to keep the code smaller but look at the other answers to those questions for more conventional use of accumulators).
But here is a (naive) explicitly recursive way of finding those integers divisible by 6
def divisibleByN(n: Int, xs: List[Int]): List[Int] = xs match {
  case Nil => Nil
  case x :: rest if x % n == 0 => x :: divisibleByN(n, rest)
  case _ :: rest => divisibleByN(n, rest)
}
I call it naive because it isn't tail recursive and so could blow your stack. A safer version can be written using an accumulator list and an inner helper function but I leave that exercise to you. The result will be less pretty code than the naive version, no matter how you try, but the effort is educational.
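For reference, one possible tail-recursive sketch of that exercise (our own, not from the original answer):
import scala.annotation.tailrec

def divisibleByNSafe(n: Int, xs: List[Int]): List[Int] = {
  @tailrec
  def loop(rest: List[Int], acc: List[Int]): List[Int] = rest match {
    case Nil => acc.reverse // the accumulator is built front-first, so reverse it
    case x :: tail if x % n == 0 => loop(tail, x :: acc)
    case _ :: tail => loop(tail, acc)
  }
  loop(xs, Nil)
}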
Recursion is a very important technique to learn. That said, once you have learned to do it, the next important thing to learn is that you can usually avoid using it explicitly yourself...
Folds and other Higher Order Functions
Did you notice how similar my two explicit recursion examples are? That's because most recursions over a list have the same basic structure. If you write a lot of such functions, you'll repeat that structure many times. Which makes it boilerplate; a waste of your time and a potential source of error.
Now, there are any number of sophisticated ways to explain folds but one simple concept is that they take the boilerplate out of recursion. They take care of the recursion and the management of accumulator values for you. All they ask is that you provide a seed value for the accumulator and the function to apply at each iteration.
For example, here is one way to use fold to extract the highest Int from the list xs
xs.tail.foldRight(xs.head) {(a, b) => if (a > b) a else b}
I know you aren't familiar with folds, so this may seem gibberish to you but surely you recognise the lambda (anonymous function) I'm passing in on the right. What I'm doing there is taking the first item in the list (xs.head) and using it as the seed value for the accumulator. Then I'm telling the rest of the list (xs.tail) to iterate over itself, comparing each item in turn to the accumulator value.
This kind of thing is a common case, so the Collections api designers have provided a shorthand version:
xs.reduce {(a, b) => if (a > b) a else b}
(If you look at the source code, you'll see they have implemented it using a fold).
Anything you might want to do iteratively to a Scala collection can be done using a fold. Often, the api designers will have provided a simpler higher-order function which is implemented, under the hood, using a fold. Want to find those divisible-by-six Ints again?
xs.foldRight(Nil: List[Int]) {(x, acc) => if (x % 6 == 0) x :: acc else acc}
That starts with an empty list as the accumulator, iterates over every item, only adding those divisible by 6 to the accumulator. Again, a simpler fold-based HoF has been provided for you:
xs filter { _ % 6 == 0 }
Folds and related higher-order functions are harder to understand than for comprehensions or explicit recursion, but very powerful and expressive (to anybody else who understands them). They eliminate boilerplate, removing a potential source of error. Because they are implemented by the core language developers, they can be more efficient (and that implementation can change, as the language progresses, without breaking your code). Experienced Scala developers use them in preference to for comprehensions or explicit recursion.
tl;dr
Learn For comprehensions
Learn explicit recursion
Don't use them if a higher-order function will do the job.
It is always nicer to use immutable variables, since they make your code easier to read. Writing recursive code can help solve your problem.
def max(x: List[Int]): Int = {
  if (x.isEmpty) 0 // 0 as a default for the empty list
  else Math.max(x.head, max(x.tail))
}

val aList = List(a, b, c, d)
val maxValue = max(aList)