Fold method using List as accumulator - scala

To find prime factors of a number I was using this piece of code:
def primeFactors(num: Long): List[Long] = {
  val exists = (2L to math.sqrt(num).toLong).find(num % _ == 0)
  exists match {
    case Some(d) => d :: primeFactors(num / d)
    case None => List(num)
  }
}
But then I found a cooler, more functional approach to solve this, using this code:
def factors(n: Long): List[Long] = (2 to math.sqrt(n).toInt)
.find(n % _ == 0).fold(List(n)) ( i => i.toLong :: factors(n / i))
Earlier I was using foldLeft or fold simply to get the sum of a list or for other simple calculations, but here I can't seem to understand how fold is working and how this breaks out of the recursive function. Can somebody please explain how fold works here?

Option's fold
If you look at the signature of Option's fold function, it takes two parameters:
def fold[B](ifEmpty: => B)(f: A => B): B
It applies f to the value inside the Option if it is not empty. If the Option is empty, it simply returns ifEmpty (this is the termination condition for the recursion).
So in your case, i => i.toLong :: factors(n / i) is the f that gets evaluated when the Option is not empty, while List(n) is the termination condition.
fold used for collection / iterators
The other fold you are talking about, for getting the sum of a collection, comes from TraversableOnce and has a signature like:
def foldLeft[B](z: B)(op: (B, A) => B): B
Here, z is the starting value (for a sum it would be 0) and op is a binary operator which is applied to the accumulated value and each element of the collection, from left to right.
So both folds differ in their implementation.
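To see the difference concretely, here is a small sketch (my own example values) of each fold in action:
// Option's fold: f runs on the wrapped value, ifEmpty is the base case.
(2 to math.sqrt(12).toInt).find(12 % _ == 0).fold(List(12L))(i => List(i.toLong)) // List(2)
(2 to math.sqrt(7).toInt).find(7 % _ == 0).fold(List(7L))(i => List(i.toLong))    // List(7): no divisor found
// Collection's foldLeft: z is a seed value combined with every element.
List(1, 2, 3).foldLeft(0)(_ + _) // 6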


Correct way to work with two instances of Option together

When I have one Option[T] instance it is quite easy to perform any operation on T using monadic operations such as map() and flatMap(). This way I don't have to do checks to see whether it is defined or empty, and chain operations together to ultimately get an Option[R] for the result R.
My difficulty is whether there is a similar elegant way to perform functions on two Option[T] instances.
Let's take a simple example where I have two vals, x and y, of type Option[Int], and I want to get the maximum of them if both are defined, the one that is defined if only one is defined, and None if neither is defined.
How would one write this elegantly without involving lots of isDefined checks inside the map() of the first Option?
You can use something like this:
def optMax(op1: Option[Int], op2: Option[Int]) = op1 ++ op2 match {
  case Nil => None
  case list => Some(list.max)
}
Or a much better one:
def f(vars: Option[Int]*) = (for( vs <- vars) yield vs).max
@jwvh, thanks for a good improvement:
def f(vars: Option[Int]*) = vars.max
Usually, you'll want to do something if both values are defined.
In that case, you could use a for-comprehension:
val aOpt: Option[Int] = getIntOpt
val bOpt: Option[Int] = getIntOpt

val maxOpt: Option[Int] =
  for {
    a <- aOpt
    b <- bOpt
  } yield max(a, b)
Now, the problem you described is not as common. You want to do something if both values are defined, but you also want to retrieve the value of an option if only one of them is defined.
I would just use the for-comprehension above, and then chain two calls to orElse to provide alternative values if maxOpt turns out to be None.
maxOpt orElse aOpt orElse bOpt
orElse's signature:
def orElse[B >: A](alternative: ⇒ Option[B]): Option[B]
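Putting the two pieces together, a minimal sketch of the whole function (the name maxOption here is my own) could look like:
def maxOption(aOpt: Option[Int], bOpt: Option[Int]): Option[Int] = {
  val bothMax = for { a <- aOpt; b <- bOpt } yield math.max(a, b)
  bothMax orElse aOpt orElse bOpt // fall back to whichever single value is defined
}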
Here's another fwiw:
import scala.util.Try
def maxOpt(a: Option[Int]*) = Try(a.flatten.max).toOption
It works with n arguments (including zero arguments).
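For example (a quick check of the sketch above):
maxOpt(Some(3), None, Some(7)) // Some(7)
maxOpt()                       // None: max of an empty collection throws, and Try turns that into None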
Pattern matching would allow something easy to grasp, but that might not be the most elegant way:
def maxOpt[T](optA: Option[T], optB: Option[T])(implicit f: (T, T) => T): Option[T] = (optA, optB) match {
  case (Some(a), Some(b)) => Some(f(a, b))
  case (None, Some(b)) => Some(b)
  case (Some(a), None) => Some(a)
  case (None, None) => None
}
You end up with something like:
scala> maxOpt(Some(1), None)(Math.max)
res2: Option[Int] = Some(1)
Once you have that building block, you can use it inside for-comprehensions or other monadic operations.
To get maxOpt, you can also use an applicative, which using Scalaz would look like (aOpt |@| bOpt) { max(_, _) } and then chain orElses as @dcastro suggested.
I assume you expect an Option[Int] (Some or None) as a result, rather than a bare Int or None (otherwise the return type would have to be Any):
def maxOption(opts: Option[Int]*) = {
  val flattened = opts.flatten
  flattened.headOption.map { _ => flattened.max }
}
Actually, Scala already gives you this ability more or less directly.
scala> import Ordering.Implicits._
import Ordering.Implicits._
scala> val (a,b,n:Option[Int]) = (Option(4), Option(9), None)
a: Option[Int] = Some(4)
b: Option[Int] = Some(9)
n: Option[Int] = None
scala> a max b
res60: Option[Int] = Some(9)
scala> a max n
res61: Option[Int] = Some(4)
scala> n max b
res62: Option[Int] = Some(9)
scala> n max n
res63: Option[Int] = None
A Haskell-ish take on this question is to observe that the following operations:
max, min :: Ord a => a -> a -> a
max a b = if a < b then b else a
min a b = if a < b then a else b
...are associative:
max a (max b c) == max (max a b) c
min a (min b c) == min (min a b) c
As such, any type Ord a => a together with either of these operations is a semigroup, a concept for which reusable abstractions can be built.
And you're dealing with Maybe (Haskell for "option"), which adds a generic "neutral" element to the base a type (you want max Nothing x == x to hold as a law). This takes you into monoids, which are semigroups with an identity element.
The Haskell semigroups library provides a Semigroup type class and two wrapper types, Max and Min, that generically implement the corresponding behaviors.
Since we're dealing with Maybe, in terms of that library the type that captures the semantics you want is Option (Max a)—a monoid that has the same binary operation as the Max semigroup, and uses Nothing as the identity element. So then the function simply becomes:
maxOpt :: Ord a => Option (Max a) -> Option (Max a) -> Option (Max a)
maxOpt a b = a <> b
...which since it's just the <> operator for Option (Max a) is not worth writing. You also gain all the other utility functions and classes that work on Semigroup and Monoid, so for example to find the maximum element of a [Option (Max a)] you'd just use the mconcat function.
The scalaz library comes with a Semigroup and a Monoid trait, as well as Max, Min, MaxVal and MinVal tags that implement those traits, so in fact the stuff that I've demonstrated here in Haskell exists in scalaz as well.

What is point of /: function?

Documentation for /: includes
Note: might return different results for different runs, unless the underlying collection type is ordered or the operator
is associative and commutative.
Does this note only apply when the par version of this function is run; otherwise, is the result deterministic (the same as foldLeft)?
Also, this function calls foldLeft under the hood: def /:[B](z: B)(op: (B, A) => B): B = foldLeft(z)(op)
Their signatures are the same (apart from the parameter name, op instead of f):
def /:[B](z: B)(op: (B, A) ⇒ B): B
def foldLeft[B](z: B)(f: (B, A) ⇒ B): B
Given this, what is the point of the /: function, and when should it be used in favour of foldLeft?
Is my reasoning incorrect?
It's just an alternative syntax. Methods ending in : are called on the right hand side.
Instead of
list.foldLeft(0) { op(_, _) }
or
list./:(0) { op(_, _) }
you can
( z /: list ) { op(_, _) }
For example,
scala> val a = List(1,2,3,4)
a: List[Int] = List(1, 2, 3, 4)
scala> ( 0 /: a ) { _ + _ }
res5: Int = 10
Yes, those are aliases originating from dark times when people liked their operators like this:
val x = y |#<#|: z.
The point of the note is to remind you that for collections with unspecified iteration order the result of folds might differ. Consider a Set {1,2,3} that doesn't guarantee the same access order even if left unmodified, and an operation that is not associative (like /). Even without a par call, this might result in the following (pseudocode):
{1,2,3} foldLeft / ==> (1 / 2) / 3 ==> 1/6 = 0.1(6)
{3,1,2} foldLeft / ==> (3 / 1) / 2 ==> 3/2 = 1.5
In terms of consistency this is similar to applying non-parallelizable operations to parallel collections, though.

Binary operator with Option arguments

In Scala, how do I define addition over two Option arguments? Just to be specific, let's say they're wrappers for Int types (I'm actually working with maps of doubles but this example is simpler).
I tried the following but it just gives me an error:
def addOpt(a: Option[Int], b: Option[Int]) = {
  a match {
    case Some(x) => x.get
    case None => 0
  } + b match {
    case Some(y) => y.get
    case None => 0
  }
}
Edited to add:
In my actual problem, I'm adding two maps which are stand-ins for sparse vectors. So the None case returns Map[Int, Double] and the + is actually a ++ (with the tweak at stackoverflow.com/a/7080321/614684).
Monoids
You might find life becomes a lot easier when you realize that you can stand on the shoulders of giants and take advantage of common abstractions and the libraries built to use them. To this end, this question is basically about dealing with
monoids (see related questions below for more about this) and the library in question is called scalaz.
Using scalaz FP, this is just:
def add(a: Option[Int], b: Option[Int]) = ~(a |+| b)
What is more this works on any monoid M:
def add[M: Monoid](a: Option[M], b: Option[M]) = ~(a |+| b)
Even more usefully, it works on any number of them placed inside a Foldable container:
def add[M: Monoid, F: Foldable](as: F[Option[M]]) = ~as.asMA.sum
Note that some rather useful monoids, aside from the obvious Int, String, Boolean are:
Map[A, B: Monoid]
A => (B: Monoid)
Option[A: Monoid]
In fact, it's barely worth the bother of extracting your own method:
scala> some(some(some(1))) #:: some(some(some(2))) #:: Stream.empty
res0: scala.collection.immutable.Stream[Option[Option[Option[Int]]]] = Stream(Some(Some(Some(1))), ?)
scala> ~res0.asMA.sum
res1: Option[Option[Int]] = Some(Some(3))
Some related questions
Q. What is a monoid?
A monoid is a type M for which there exists an associative binary operation (M, M) => M and an identity I under this operation, such that mplus(m, I) == m == mplus(I, m) for all m of type M
Q. What is |+|?
This is just scalaz shorthand (or ASCII madness, ymmv) for the mplus binary operation
Q. What is ~?
It is a unary operator meaning "or identity" which is retrofitted (using scala's implicit conversions) by the scalaz library onto Option[M] if M is a monoid. Obviously a non-empty option returns its contents; an empty option is replaced by the monoid's identity.
Q. What is asMA.sum?
A Foldable is basically a datastructure which can be folded over (like foldLeft, for example). Recall that foldLeft takes a seed value and an operation to compose successive computations. In the case of summing a monoid, the seed value is the identity I and the operation is mplus. You can hence call asMA.sum on a Foldable[M : Monoid]. You might need to use asMA because of the name clash with the standard library's sum method.
Some References
Slides and Video of a talk I gave which gives practical examples of using monoids in the wild
def addOpts(xs: Option[Int]*) = xs.flatten.sum
This will work for any number of inputs.
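For example (note that the result is a plain Int, with 0 for all-empty input):
addOpts(Some(1), None, Some(3)) // 4
addOpts()                       // 0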
If they both default to 0 you don't need pattern matching:
def addOpt(a: Option[Int], b: Option[Int]) = {
  a.getOrElse(0) + b.getOrElse(0)
}
(Repeating comment above in an answer as requested)
You don't extract the content of the option the proper way. When you match with case Some(x), x is the value inside the option (of type Int), and you don't call get on that. Just do
case Some(x) => x
Anyway, if you want the content or a default, a.getOrElse(0) is more convenient.
def addOpt(ao: Option[Int], bo: Option[Int]) =
  for {
    a <- ao
    b <- bo
  } yield a + b
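Note that this version only yields a value when both options are defined, e.g.:
addOpt(Some(2), Some(3)) // Some(5)
addOpt(Some(2), None)    // None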

difference between foldLeft and reduceLeft in Scala

I have learned the basic difference between foldLeft and reduceLeft
foldLeft:
initial value has to be passed
reduceLeft:
takes first element of the collection as initial value
throws exception if collection is empty
Is there any other difference?
Any specific reason to have two methods with similar functionality?
A few things to mention here, before giving the actual answer:
Your question doesn't have anything to do with left, it's rather about the difference between reducing and folding
The difference is not the implementation at all, just look at the signatures.
The question doesn't have anything to do with Scala in particular, it's rather about the two concepts of functional programming.
Back to your question:
Here is the signature of foldLeft (could also have been foldRight for the point I'm going to make):
def foldLeft [B] (z: B)(f: (B, A) => B): B
And here is the signature of reduceLeft (again the direction doesn't matter here)
def reduceLeft [B >: A] (f: (B, A) => B): B
These two look very similar and thus caused the confusion. reduceLeft is a special case of foldLeft (which by the way means that you sometimes can express the same thing by using either of them).
When you call reduceLeft say on a List[Int] it will literally reduce the whole list of integers into a single value, which is going to be of type Int (or a supertype of Int, hence [B >: A]).
When you call foldLeft say on a List[Int] it will fold the whole list (imagine rolling a piece of paper) into a single value, but this value doesn't have to be even related to Int (hence [B]).
Here is an example:
def listWithSum(numbers: List[Int]) = numbers.foldLeft((List.empty[Int], 0)) {
  (resultingTuple, currentInteger) =>
    (currentInteger :: resultingTuple._1, currentInteger + resultingTuple._2)
}
This method takes a List[Int] and returns a Tuple2[List[Int], Int] or (List[Int], Int). It calculates the sum and returns a tuple with a list of integers and its sum. By the way the list is returned backwards, because we used foldLeft instead of foldRight.
Watch One Fold to rule them all for a more in depth explanation.
reduceLeft is just a convenience method. list.reduceLeft(op) is equivalent to
list.tail.foldLeft(list.head)(op)
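For example, both of these compute the same sum:
val list = List(1, 2, 3, 4)
list.reduceLeft(_ + _)               // 10
list.tail.foldLeft(list.head)(_ + _) // 10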
foldLeft is more generic, you can use it to produce something completely different than what you originally put in. Whereas reduceLeft can only produce an end result of the same type or super type of the collection type. For example:
List(1,3,5).foldLeft(0) { _ + _ }
List(1,3,5).foldLeft(List[String]()) { (a, b) => b.toString :: a }
foldLeft applies the closure to the last folded result (the initial value the first time) and the next value.
reduceLeft, on the other hand, first combines the first two values from the list using the closure. Then it combines the rest of the values with the cumulative result. See:
List(1,3,5).reduceLeft { (a, b) => println("a " + a + ", b " + b); a + b }
If the list is empty foldLeft can present the initial value as a legal result. reduceLeft on the other hand does not have a legal value if it can't find at least one value in the list.
For reference, reduceLeft will error if applied to an empty container with the following error.
java.lang.UnsupportedOperationException: empty.reduceLeft
Reworking the code to use
myList.foldLeft("") { (a, b) => a + b }
is one potential option. Another is to use the reduceLeftOption variant which returns an Option wrapped result.
myList reduceLeftOption { (a, b) => a + b } match {
  case None => // handle no result as necessary
  case Some(v) => println(v)
}
The basic reason they are both in Scala standard library is probably because they are both in Haskell standard library (called foldl and foldl1). If reduceLeft wasn't, it would quite often be defined as a convenience method in different projects.
From Functional Programming Principles in Scala (Martin Odersky):
The function reduceLeft is defined in terms of a more general function, foldLeft.
foldLeft is like reduceLeft but takes an accumulator z, as an additional parameter, which is returned when foldLeft is called on an empty list:
(List(x1, ..., xn) foldLeft z)(op) = (...(z op x1) op ...) op xn
[as opposed to reduceLeft, which throws an exception when called on an empty list.]
The course (see lecture 5.5) provides abstract definitions of these functions, which illustrates their differences, although they are very similar in their use of pattern matching and recursion.
abstract class List[T] { ...
  def reduceLeft(op: (T, T) => T): T = this match {
    case Nil => throw new Error("Nil.reduceLeft")
    case x :: xs => (xs foldLeft x)(op)
  }
  def foldLeft[U](z: U)(op: (U, T) => U): U = this match {
    case Nil => z
    case x :: xs => (xs foldLeft op(z, x))(op)
  }
}
Note that foldLeft returns a value of type U, which need not be related to the list's element type T, while reduceLeft returns a value of the same type as the list's elements (or a supertype of it).
To really understand what you are doing with fold/reduce, check this: http://wiki.tcl.tk/17983 (a very good explanation). Once you get the concept of fold, reduce follows from the answer above:
list.tail.foldLeft(list.head)(op)
Scala 2.13.3, Demo:
val names = List("Foo", "Bar")
println("ReduceLeft: "+ names.reduceLeft(_+_))
println("ReduceRight: "+ names.reduceRight(_+_))
println("Fold: "+ names.fold("Other")(_+_))
println("FoldLeft: "+ names.foldLeft("Other")(_+_))
println("FoldRight: "+ names.foldRight("Other")(_+_))
outputs:
ReduceLeft: FooBar
ReduceRight: FooBar
Fold: OtherFooBar
FoldLeft: OtherFooBar
FoldRight: FooBarOther

Selection sort in functional Scala

I'm making my way through "Programming in Scala" and wrote a quick implementation of the selection sort algorithm. However, since I'm still a bit green in functional programming, I'm having trouble translating to a more Scala-ish style. For the Scala programmers out there, how can I do this using Lists and vals rather than falling back into my imperative ways?
http://gist.github.com/225870
As starblue already said, you need a function that calculates the minimum of a list and returns the list with that element removed. Here is my tail recursive implementation of something similar (as I believe foldl is tail recursive in the standard library), and I tried to make it as functional as possible :). It returns a list that contains all the elements of the original list (but kind of reversed - see the explanation below) with the minimum as its head.
def minimum(xs: List[Int]): List[Int] =
  (List(xs.head) /: xs.tail) {
    (ys, x) =>
      if (x < ys.head) x :: ys
      else ys.head :: x :: ys.tail
  }
This basically does a fold, starting with a list containing only the first element of xs. If the current element of xs is smaller than the head of the accumulator ys, we prepend it to ys; otherwise, we insert it into ys as the second element. And so on recursively: we've folded our list into a new list whose head is the minimum element and whose tail contains all the other elements of xs (not necessarily in the same order). Note that this function does not remove duplicates.
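For example (only the head is guaranteed to be the minimum; the order of the tail may differ from the input):
minimum(List(3, 1, 2)) // List(1, 2, 3)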
After creating this helper function, it's now easy to implement selection sort.
def selectionSort(xs: List[Int]): List[Int] =
  if (xs.isEmpty) List()
  else {
    val ys = minimum(xs)
    if (ys.tail.isEmpty) ys
    else ys.head :: selectionSort(ys.tail)
  }
Unfortunately this implementation is not tail recursive, so it will blow up the stack for large lists. Anyway, you shouldn't use an O(n^2) sort for large lists, but still... it would be nice if the implementation were tail recursive. I'll try to think of something... I think it will look like the implementation of a fold.
Tail Recursive!
To make it tail recursive, I use quite a common pattern in functional programming - an accumulator. It works a bit backwards, as now I need a function called maximum, which does basically the same as minimum, but with the maximum element - its implementation is exactly like minimum, but using > instead of <.
def selectionSort(xs: List[Int]) = {
  def selectionSortHelper(xs: List[Int], accumulator: List[Int]): List[Int] =
    if (xs.isEmpty) accumulator
    else {
      val ys = maximum(xs)
      selectionSortHelper(ys.tail, ys.head :: accumulator)
    }
  selectionSortHelper(xs, Nil)
}
EDIT: Changed the answer to have the helper function as a subfunction of the selection sort function.
It basically accumulates the maxima into a list, which it eventually returns in the base case. You can also see that it is tail recursive by replacing the accumulator with throw new NullPointerException and then inspecting the stack trace.
Here's a step by step sorting using an accumulator. The left hand side shows the list xs while the right hand side shows the accumulator. The maximum is indicated at each step by a star.
64* 25 12 22 11 ------- Nil
11 22 12 25* ------- 64
22* 12 11 ------- 25 64
11 12* ------- 22 25 64
11* ------- 12 22 25 64
Nil ------- 11 12 22 25 64
The following shows a step by step folding to calculate the maximum:
maximum(25 12 64 22 11)
25 :: Nil /: 12 64 22 11 -- 25 > 12, so it stays as head
25 :: 12 /: 64 22 11 -- same as above
64 :: 25 12 /: 22 11 -- 25 < 64, so the new head is 64
64 :: 22 25 12 /: 11 -- and stays so
64 :: 11 22 25 12 /: Nil -- until the end
64 11 22 25 12
You are bound to have problems doing selection sort in a functional style, as it is an in-place sorting algorithm, and in-place, by definition, isn't functional.
The main problem you'll face is that you can't swap elements. Here's why this is important. Suppose I have a list (a0 ... ax ... an), where ax is the minimum value. You need to take ax out and then compose a list (a0 ... ax-1 ax+1 ... an). The problem is that you'll necessarily have to copy the elements a0 to ax-1 if you wish to remain purely functional. Other functional data structures, particularly trees, can have better performance than this, but the basic problem remains.
Here is another implementation of selection sort (generic version).
def less[T <: Comparable[T]](i: T, j: T) = i.compareTo(j) < 0
def swap[T](xs: Array[T], i: Int, j: Int) { val tmp = xs(i); xs(i) = xs(j); xs(j) = tmp }
def selectiveSort[T <: Comparable[T]](xs: Array[T]) {
  val n = xs.size
  for (i <- 0 until n) {
    val min = List.range(i + 1, n).foldLeft(i)((a, b) => if (less(xs(a), xs(b))) a else b)
    swap(xs, i, min)
  }
}
You need a helper function which does the selection. It should return the minimal element and the rest of the list with the element removed.
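A minimal sketch of such a helper and the sort built on it (the names are my own, and this version is not tail recursive, so it only suits short lists):
// Returns the minimum element and the rest of the list with that one occurrence removed.
def selectMin(xs: List[Int]): (Int, List[Int]) = {
  val m = xs.min
  val i = xs.indexOf(m)
  (m, xs.take(i) ::: xs.drop(i + 1))
}

def selectionSort(xs: List[Int]): List[Int] =
  if (xs.isEmpty) Nil
  else {
    val (m, rest) = selectMin(xs)
    m :: selectionSort(rest)
  }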
I think it's reasonably feasible to do a selection sort in a functional style, but as Daniel indicated, it has a good chance of performing horribly.
I just tried my hand at writing a functional bubble sort, as a slightly simpler and degenerate case of selection sort. Here's what I did, and this hints at what you could do:
define bubble(data)
if data is empty or just one element: return data;
otherwise, if the first element < the second,
return first element :: bubble(rest of data);
otherwise, return second element :: bubble(
first element :: (rest of data starting at 3rd element)).
Once that's finished recursing, the largest element is at the end of the list. Now,
define bubblesort [data]
apply bubble to data as often as there are elements in data.
When that's done, your data is indeed sorted. Yes, it's horrible, but my Clojure implementation of this pseudocode works.
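For reference, a minimal Scala sketch of that pseudocode (the names bubble and bubbleSort are my own):
// One bubble pass: the largest element ends up at the end of the list.
def bubble(data: List[Int]): List[Int] = data match {
  case a :: b :: rest =>
    if (a < b) a :: bubble(b :: rest)
    else b :: bubble(a :: rest)
  case _ => data // empty or single element
}

// Apply bubble as often as there are elements.
def bubbleSort(data: List[Int]): List[Int] =
  (1 to data.length).foldLeft(data)((acc, _) => bubble(acc))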
Just concerning yourself with the first element or two and then leaving the rest of the work to a recursed activity is a lisp-y, functional-y way to do this kind of thing. But once you've gotten your mind accustomed to that kind of thinking, there are more sensible approaches to the problem.
I would recommend implementing a merge sort:
Break list into two sub-lists,
either by counting off half the elements into one sublist
and the rest in the other,
or by copying every other element from the original list
into either of the new lists.
Sort each of the two smaller lists (recursion here, obviously).
Assemble a new list by selecting the smaller from the front of either sub-list
until you've exhausted both sub-lists.
The recursion is in the middle of that, and I don't see a clever way of making the algorithm tail recursive. Still, it's O(n log n) in time and doesn't place an exorbitant load on the stack.
Have fun, good luck!
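For comparison, here is a minimal Scala sketch of the merge sort just described (splitting by counting off half the elements; the merge step is not tail recursive):
def mergeSort(xs: List[Int]): List[Int] = {
  def merge(as: List[Int], bs: List[Int]): List[Int] = (as, bs) match {
    case (Nil, _) => bs
    case (_, Nil) => as
    case (a :: at, b :: bt) =>
      if (a <= b) a :: merge(at, bs) else b :: merge(as, bt)
  }
  val half = xs.length / 2
  if (half == 0) xs
  else {
    val (left, right) = xs.splitAt(half)
    merge(mergeSort(left), mergeSort(right))
  }
}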
Thanks for the hints above, they were very inspiring. Here's another functional approach to the selection sort algorithm. I tried to base it on the following idea: finding a max / min can be done quite easily by min(A)=if A=Nil ->Int.MaxValue else min(A.head, min(A.tail)). The first min is the min of a list, the second the min of two numbers. This is easy to understand, but unfortunately not tail recursive. Using the accumulator method the min definition can be transformed like this, now in correct Scala:
def min(x: Int, y: Int) = if (x < y) x else y

def min(xs: List[Int], accu: Int): Int = xs match {
  case Nil => accu
  case x :: ys => min(ys, min(accu, x))
}
(This is tail recursive)
Now a min version is needed which returns a list leaving out the min value. The following function returns a list whose head is the min value, the tail contains the rest of the original list:
def minl(xs: List[Int]): List[Int] = minl(xs, List(Int.MaxValue))

def minl(xs: List[Int], accu: List[Int]): List[Int] = xs match {
  // accu always contains the min as its head
  case Nil => accu take accu.length - 1
  case x :: ys => minl(ys,
    if (x < accu.head) x :: accu else accu.head :: x :: accu.tail)
}
Using this selection sort can be written tail recursively as:
def ssort(xs: List[Int], accu: List[Int]): List[Int] = minl(xs) match {
  case Nil => accu
  case min :: rest => ssort(rest, min :: accu)
}
(reverses the order). In a test with 10000 list elements this algorithm is only about 4 times slower than the usual imperative algorithm.
Even though, when coding Scala, I usually prefer a functional programming style (via combinators or recursion) over an imperative style (via variables and iteration), this time, for this specific problem, old-school imperative nested loops result in simpler and better-performing code.
I don't think falling back to an imperative style is a mistake for certain classes of problems, such as sorting algorithms, which usually transform the input buffer in place rather than producing a new collection.
My solution is:
package bitspoke.algo
import scala.math.Ordered
import scala.collection.mutable.Buffer
abstract class Sorter[T <% Ordered[T]] {
  // algorithm provided by subclasses
  def sort(buffer: Buffer[T]): Unit

  // check if the buffer is sorted (>= so that equal neighbours are accepted)
  def sorted(buffer: Buffer[T]) = buffer.isEmpty || buffer.view.zip(buffer.tail).forall { t => t._2 >= t._1 }

  // swap elements in buffer
  def swap(buffer: Buffer[T], i: Int, j: Int) {
    val temp = buffer(i)
    buffer(i) = buffer(j)
    buffer(j) = temp
  }
}

class SelectionSorter[T <% Ordered[T]] extends Sorter[T] {
  def sort(buffer: Buffer[T]): Unit = {
    for (i <- 0 until buffer.length) {
      var min = i
      for (j <- i until buffer.length) {
        if (buffer(j) < buffer(min))
          min = j
      }
      swap(buffer, i, min)
    }
  }
}
As you can see, to achieve parametric polymorphism, rather than using java.lang.Comparable, I preferred scala.math.Ordered and Scala view bounds rather than upper bounds. That certainly works thanks to Scala's implicit conversions of primitive types to their rich wrappers.
You can write a client program as follows:
import bitspoke.algo._
import scala.collection.mutable._
val sorter = new SelectionSorter[Int]
val buffer = ArrayBuffer(3, 0, 4, 2, 1)
sorter.sort(buffer)
assert(sorter.sorted(buffer))
A simple functional program for selection-sort in Scala
import scala.annotation.tailrec

def selectionSort(list: List[Int]): List[Int] = {
  @tailrec
  def selectSortHelper(list: List[Int], accumList: List[Int] = List[Int]()): List[Int] = {
    list match {
      case Nil => accumList
      case _ => {
        val min = list.min
        val requiredList = list.filter(_ != min)
        selectSortHelper(requiredList, accumList ::: List.fill(list.length - requiredList.length)(min))
      }
    }
  }
  selectSortHelper(list)
}
You may want to try replacing your while loops with recursion, so, you have two places where you can create new recursive functions.
That would begin to get rid of some vars.
This was probably the toughest lesson for me, trying to move more toward FP.
I hesitate to show solutions here, as I think it would be better for you to try first.
But, if possible you should be using tail-recursion, to avoid problems with stack overflows (if you are sorting a very, very large list).
Here is my point of view on this problem: SelectionSort.scala
def selectionsort[A <% Ordered[A]](list: List[A]): List[A] = {
  def sort(as: List[A], bs: List[A]): List[A] = as match {
    case h :: t => select(h, t, Nil, bs)
    case Nil => bs
  }

  def select(m: A, as: List[A], zs: List[A], bs: List[A]): List[A] =
    as match {
      case h :: t =>
        if (m > h) select(m, t, h :: zs, bs)
        else select(h, t, m :: zs, bs)
      case Nil => sort(zs, m :: bs)
    }

  sort(list, Nil)
}
There are two inner functions, sort and select, which represent the two loops in the original algorithm. The first function, sort, iterates through the elements and calls select for each of them. When the source list is empty it returns the bs list as the result, which is initially Nil. The select function searches for the maximum (not the minimum, since we build the result list in reverse order) element in the source list. It assumes that the maximum is the head by default and then replaces it with the proper value as it goes.
This is a 100% functional implementation of selection sort in Scala.
Here is my solution
import scala.annotation.tailrec

def sort(list: List[Int]): List[Int] = {
  @tailrec
  def pivotCompare(p: Int, l: List[Int], accList: List[Int] = List.empty): List[Int] = {
    l match {
      case Nil => p +: accList
      case x :: xs if p < x => pivotCompare(p, xs, accList :+ x)
      case x :: xs => pivotCompare(x, xs, accList :+ p)
    }
  }

  @tailrec
  def loop(list: List[Int], accList: List[Int] = List.empty): List[Int] = {
    list match {
      case x :: xs =>
        pivotCompare(x, xs) match {
          case Nil => accList
          case h :: tail => loop(tail, accList :+ h)
        }
      case Nil => accList
    }
  }

  loop(list)
}