Since RosettaCode's Standard ML solution is a very slow version of Quicksort according to the question (and discussion) "Why is the minimalist, example Haskell quicksort not a "true" quicksort?", what would a functional Quicksort look like in Standard ML if it behaved according to the complexity of Hoare's algorithm?
fun quicksort [] = []
  | quicksort (x::xs) =
    let
      val (left, right) = List.partition (fn y => y < x) xs
    in
      quicksort left @ [x] @ quicksort right
    end
That is, one that employs some aspects of functional programming where it makes sense. Unlike the Haskell version that needs to encapsulate its in-place partitioning, would there be any need for a Quicksort in SML to vary in any way from the C version besides syntax? Whether the function accepts an array/vector or spends O(n) time converting the list is less relevant.
Edit: Rephrased question with regards to John Coleman's comments.
Here is my attempt:
fun swap(A,i,j) =
  let
    val t = Array.sub(A,i)
  in
    Array.update(A,i,Array.sub(A,j));
    Array.update(A,j,t)
  end

(* Index of the first position at or after i whose element satisfies f. *)
fun firstAfter(A,i,f) =
  if f(Array.sub(A,i)) then i else firstAfter(A,i+1,f)

(* Index of the last position at or before j whose element satisfies f. *)
fun lastBefore(A,j,f) =
  if f(Array.sub(A,j)) then j else lastBefore(A,j-1,f)

(* Hoare-style partition; returns the split index used by quicksort below. *)
fun partition(A,lo,hi) =
  let
    fun partition'(A,lo,hi,pivot) =
      let
        val i = firstAfter(A,lo,fn k => k >= pivot)
        val j = lastBefore(A,hi,fn k => k <= pivot)
      in
        if i >= j then
          j
        else
          (
            swap(A,i,j);
            partition'(A,i+1,j-1,pivot)
          )
      end
  in
    partition'(A,lo,hi,Array.sub(A,lo))
  end

fun quicksort(A,lo,hi) =
  if hi <= lo then
    ()
  else
    let
      val p = partition(A,lo,hi)
    in
      (
        quicksort(A,lo,p);
        quicksort(A,p+1,hi)
      )
    end;

fun qsort A = quicksort(A,0,Array.length A - 1);
It follows Hoare's algorithm as described on Wikipedia fairly closely, but uses recursion rather than loops, and takes a somewhat functional approach to searching for pairs of indices to swap. There is no question that this is nowhere near as elegant as the 2 or 3 line pseudo-quicksort which is often taught in introductory treatments of functional programming, and it doesn't really showcase the powers of functional programming. Hopefully someone who knows more SML than I do can come up with a more idiomatic SML qsort.
You can find some sorting algorithms written in Standard ML in my GitHub repository: https://github.com/Khanzadeh-AH/sorting_algorithms_SML
fun pivot_split([], pivot) = ([], [])
  | pivot_split(h::t, pivot) =
    let
      val (L, R) = pivot_split(t, pivot)
    in
      if h < pivot then (h::L, R)
      else (L, h::R)
    end;
fun quicksort([]) = []
  | quicksort([a]) = [a]
  | quicksort(h::t) =
    let
      val (L, R) = pivot_split(t, h)
    in
      quicksort(L) @ [h] @ quicksort(R)
    end;
pivot_split([12,3,5,67,1,2,3,5,7],3);
quicksort([12,3,5,67,1,2,3,5,7]);
After you get a basic idea, coding dynamic programming (DP) problems in imperative style is pretty straightforward, at least for simpler DP problems. It usually involves some form of table that we iteratively fill based on some formula. This is pretty much all there is to implementing bottom-up DP.
Let's take Longest Increasing Subsequence (LIS) as a simple and common example of a problem that can be solved with bottom-up DP algorithm.
C++ implementation is straightforward (not tested):
// Where A is input (std::vector<int> of size n)
std::vector<int> DP(n, 0);
for(int i = 0; i < n; i++) {
DP[i] = 1;
for(int j = 0; j < i; j++) {
if (A[i] > A[j]) {
DP[i] = std::max(DP[i], DP[j] + 1);
}
}
}
We've just described "how to fill DP" which is exactly what imperative programming is.
If we decide to describe "what DP is", which is kind of a more FP way to think about it, it gets a bit more complicated.
Here's an example Scala code for this problem:
// Where A is input
val DP = A.zipWithIndex.foldLeft(Seq[Int]()) {
case (_, (_, 0)) => Seq(1)
case (d, (a, _)) =>
d :+ d.zipWithIndex.map { case (dj, j) =>
dj + (if (a > A(j)) 1 else 0)
}.max
}
To be quite honest, this Scala implementation doesn't seem all that idiomatic. It's just a translation of the imperative solution with immutability added to it.
I'm curious, what is a general FP way to deal with things like this?
I don't know much about DP but I have a few observations that I hope will contribute to the conversation.
First, your example Scala code doesn't appear to solve the LIS problem. I plugged in the Van der Corput sequence as found on the Wikipedia page and did not get the designated result.
Working through the problem this is the solution I came up with.
def lis(a: Seq[Int], acc: Seq[Int] = Seq.empty[Int]): Int =
if (a.isEmpty) acc.length
else if (acc.isEmpty) lis(a.tail, Seq(a.head)) max lis(a.tail)
else if (a.head > acc.last) lis(a.tail, acc :+ a.head) max lis(a.tail, acc)
else lis(a.tail, acc)
lis(Seq(0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15)) // res0: Int = 6
This can be adjusted to return the subsequence itself, and I'm sure it can be tweaked for better performance.
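For example, here is one possible adjustment that returns the subsequence itself rather than its length (a sketch along the same brute-force lines; the name lisSeq and the tie-breaking are my own choices):
def lisSeq(a: Seq[Int], acc: Seq[Int] = Seq.empty[Int]): Seq[Int] =
  if (a.isEmpty) acc
  else if (acc.isEmpty || a.head > acc.last) {
    // Either take a.head into the subsequence or skip it; keep the longer result.
    val taken   = lisSeq(a.tail, acc :+ a.head)
    val skipped = lisSeq(a.tail, acc)
    if (taken.length >= skipped.length) taken else skipped
  }
  else lisSeq(a.tail, acc)
lisSeq(Seq(0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15)).length // 6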
As for memoization, it's not hard to roll your own on an as-needed basis. Here's the basic outline to memoize any arity-2 function.
def memo[A,B,R](f: (A,B)=>R): ((A,B)=>R) = {
val cache = new collection.mutable.WeakHashMap[(A,B),R]
(a:A,b:B) => cache.getOrElseUpdate((a,b),f(a,b))
}
With this I can create a memoized version of some often called method/function.
val myfunc = memo{(s:String, i:Int) => s.length > i}
myfunc("bsxxlakjs",7) // res0: Boolean = true
Note: It used to be that WeakHashMap was recommended so that the cache could drop lesser used elements in memory-challenged environments. I don't know if that's still the case.
I'm new to Scala and was just reading Scala By Example. In chapter 2, the author gives two different versions of Quicksort.
One is imperative style:
def sort(xs: Array[Int]) {
def swap(i: Int, j: Int) {
val t = xs(i); xs(i) = xs(j); xs(j) = t
}
def sort1(l: Int, r: Int) {
val pivot = xs((l + r) / 2)
var i = l; var j = r
while (i <= j) {
while (xs(i) < pivot) i += 1
while (xs(j) > pivot) j -= 1
if (i <= j) {
swap(i, j)
i += 1
j -= 1
}
}
if (l < j) sort1(l, j)
if (j < r) sort1(i, r)
}
sort1(0, xs.length - 1)
}
One is functional style:
def sort(xs: Array[Int]): Array[Int] = {
if (xs.length <= 1) xs
else {
val pivot = xs(xs.length / 2)
Array.concat(
sort(xs filter (pivot >)),
xs filter (pivot ==),
sort(xs filter (pivot <)))
}
}
The obvious advantage the functional style has over the imperative style is conciseness. But what about performance? Since it uses recursion, do we pay a performance penalty just like we do in other imperative languages like C? Or, Scala being a hybrid language, is the "Scala way" (functional) preferred, and thus more efficient?
Note: The author did mention the functional style does use more memory.
It depends. If you look at the Scala sources, an imperative style is often used "under the hood" in order to be performant - but in many cases it is exactly these tweaks that allow you to write performant functional code. So usually you can come up with a functional solution that is fast enough, but you must be careful and know what you are doing (especially concerning your data structures). E.g. the array concat in the second example is not nice, but probably not too bad - whereas using Lists here and concatenating them with ::: would be overkill.
But that is not more than educated guessing if you don't actually measure the performance. In complex projects it's really hard to predict the performance, especially as things like object creation and method calls get more and more optimized by the compiler and the JVM.
I'd suggest starting out with the functional style. If it is too slow, profile it. Usually there is a better functional solution. If not, you can use the imperative style (or a mix of both) as a last resort.
Is there a concise, idiomatic way to express function iteration? That is, given a number n and a function f :: a -> a, I'd like to express \x -> f(...(f(x))...) where f is applied n times.
Of course, I could make my own, recursive function for that, but I'd be interested if there is a way to express it shortly using existing tools or libraries.
So far, I have these ideas:
\n f x -> foldr (const f) x [1..n]
\n -> appEndo . mconcat . replicate n . Endo
but they all use intermediate lists, and aren't very concise.
The shortest one I found so far uses semigroups:
\n f -> appEndo . times1p (n - 1) . Endo,
but it works only for positive numbers (not for 0).
Primarily I'm focused on solutions in Haskell, but I'd be also interested in Scala solutions or even other functional languages.
Because Haskell is influenced by mathematics so much, the definition from the Wikipedia page you've linked to almost directly translates to the language.
Just check this out; the definition of function iteration from Wikipedia:
f^0 = id
f^n = f ∘ f^(n-1)
Now in Haskell:
iterateF 0 _ = id
iterateF n f = f . iterateF (n - 1) f
Pretty neat, huh?
So what is this? It's a typical recursion pattern. And how do Haskellers usually treat that? We treat that with folds! So after refactoring we end up with the following translation:
iterateF :: Int -> (a -> a) -> (a -> a)
iterateF n f = foldr (.) id (replicate n f)
or point-free, if you prefer:
iterateF :: Int -> (a -> a) -> (a -> a)
iterateF n = foldr (.) id . replicate n
As you can see, there is no notion of the subject function's arguments either in the Wikipedia definition or in the solutions presented here. It is a function on another function, i.e. the subject function is treated as a value. This is a higher-level approach to the problem than an implementation involving the subject function's arguments.
Now, concerning your worries about the intermediate lists: from the source code perspective this solution turns out to be very similar to the Scala solution posted by jmcejuela, but with a key difference - the GHC optimizer throws away the intermediate list entirely, turning the function into a simple recursive loop over the subject function. I don't think it could be optimized any better.
To comfortably inspect the intermediate compiler results for yourself, I recommend using ghc-core.
In Scala:
Function chain Seq.fill(n)(f)
See scaladoc for Function. Lazy version: Function chain Stream.fill(n)(f)
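For example:
val add5 = Function.chain(Seq.fill(5)((x: Int) => x + 1))
add5(0) // res0: Int = 5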
Although this is not as concise as jmcejuela's answer (which I prefer), there is another way in Scala to express such a function without the Function module. It also works when n = 0.
def iterate[T](f: T=>T, n: Int) = (x: T) => (1 to n).foldLeft(x)((res, n) => f(res))
To avoid the creation of the intermediate list, one can use explicit recursion, which in turn requires an explicit result type annotation.
def iterate[T](f: T=>T, n: Int): T=>T = (x: T) => (if(n == 0) x else iterate(f, n-1)(f(x)))
There is an equivalent solution using pattern matching like the solution in Haskell:
def iterate[T](f: T=>T, n: Int): T=>T = (x: T) => n match {
case 0 => x
case _ => iterate(f, n-1)(f(x))
}
Finally, I prefer the short way of writing it in Caml, where there is no need to define the types of the variables at all.
let rec iterate f n x = match n with 0 -> x | n -> iterate f (n-1) (f x);;
let f5 = iterate f 5 in ...
I like pigworker's/tauli's ideas the best, but since they only gave them as comments, I'm making a CW answer out of them.
\n f x -> iterate f x !! n
or
\n f -> (!! n) . iterate f
perhaps even:
\n -> ((!! n) .) . iterate
I have been looking into recursion and TCO. It seems that TCO can make the code verbose and can also impact performance. E.g. I have implemented code which takes in a 7-digit phone number and gives back all possible words for it, e.g. 464-7328 can be "GMGPDAS ... IMGREAT ... IOIRFCU". Here is the code.
/*Generate the alphabet table*/
val alphabet = (for (ch <- 'a' to 'z') yield ch.toString).toList
/*Given the number, return the possible alphabet List of String(Instead of Char for convenience)*/
def getChars(num : Int) : List[String] = {
if (num > 1) return List[String](alphabet((num - 2) * 3), alphabet((num - 2) * 3 + 1), alphabet((num - 2) * 3 + 2))
List[String](num.toString)
}
/*Recursion without TCO*/
def getTelWords(input : List[Int]) : List[String] = {
if (input.length == 1) return getChars(input.head)
getChars(input.head).foldLeft(List[String]()) {
(l, ch) => getTelWords(input.tail).foldLeft(List[String]()) { (ll, x) => ch + x :: ll } ++ l
}
}
It is short and I didn't have to spend too much time on it. However, when I try to write it with tail calls so that it can be TCO'ed, I have to spend a considerable amount of time, and the code becomes very verbose. I won't be posting the whole code, to save space; here is a link to the git repo. I am sure quite a lot of you can write better and more concise tail-recursive code than mine. I still believe that in general TCO'ed code is more verbose (e.g. tail-recursive factorial and Fibonacci need an extra accumulator parameter; see the sketch below), yet TCO is needed to prevent stack overflows. I would like to know how you approach TCO and recursion. The Scheme implementation of Ackermann with TCO in this thread epitomizes my problem statement.
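For reference, here is the factorial example of what I mean by the extra accumulator parameter (a minimal sketch):
import scala.annotation.tailrec

// Naive version: not a tail call, because of the pending multiplication.
def fact(n: BigInt): BigInt =
  if (n == 0) 1 else n * fact(n - 1)

// Tail-recursive version: needs the extra accumulator parameter,
// which lets the compiler (checked by @tailrec) turn it into a loop.
def factTC(n: BigInt): BigInt = {
  @tailrec
  def go(n: BigInt, acc: BigInt): BigInt =
    if (n == 0) acc else go(n - 1, n * acc)
  go(n, 1)
}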
Is it possible that you're using the term "tail call optimization" when in fact you mean writing a function in iterative recursive style, or in continuation passing style, so that all the recursive calls are tail calls?
Implementing TCO is the job of a language implementer; one paper that talks about how it can be done efficiently is the classic Lambda: the Ultimate GOTO paper.
Tail call optimization is something that your language's evaluator will do for you. Your question, on the other hand, sounds like you are asking how to express functions in a particular style so that the program's shape allows your evaluator to perform tail call optimization.
As sclv mentioned in the comments, tail recursion is pointless for this example in Haskell. A simple implementation of your problem can be written succinctly and efficiently using the list monad.
import Data.Char
getChars n | n > 1     = [chr (ord 'a' + 3*(n-2)+i) | i <- [0..2]]
           | otherwise = ""
getTelNum = mapM getChars
As others have said, I would not be worried about tail calls for this case, as it does not recurse very deeply (the depth is the length of the input) compared to the size of the output. You should be out of memory (or patience) before you are out of stack.
I would probably implement it with something like:
def getTelWords(input: List[Int]): List[String] = input match {
case Nil => List("")
case x :: xs => {
val heads = getChars(x)
val tails = getTelWords(xs)
for(c <- heads; cs <- tails) yield c + cs
}
}
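For instance, reusing getChars from the question, this should yield:
getTelWords(List(2, 3))
// List(ad, ae, af, bd, be, bf, cd, ce, cf)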
If you insist on a tail-recursive one, it might be based on:
def helper(reversedPrefixes: List[String], input: List[Int]): List[String]
= input match {
case Nil => reversedPrefixes.map(_.reverse)
case (x :: xs) => helper(
for(c <- getChars(x); rp <- reversedPrefixes) yield c + rp,
xs)
}
(the actual routine should call helper(List(""), input))
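Tying it together, a hypothetical wrapper would look like:
def getTelWords(input: List[Int]): List[String] =
  helper(List(""), input)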
I run out of memory while finding the 10,001st prime number.
object Euler0007 {
def from(n: Int): Stream[Int] = n #:: from(n + 1)
def sieve(s: Stream[Int]): Stream[Int] = s.head #:: sieve(s.filter(_ % s.head != 0))
def primes = sieve(from(2))
def main(args: Array[String]): Unit = {
println(primes(10001))
}
}
Is this because after each "iteration" (is this the correct term in this context?) of primes, I increase the stack of functions to be called to get the next element by one?
One solution that I've found on the web which doesn't resort to an iterative solution (which I'd like to avoid in order to get into functional programming / idiomatic Scala) is this (Problem 7):
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i => ps.takeWhile(j => j * j <= i).forall(i % _ > 0))
From what I can see, this does not lead to the same pile-up of recursive filter calls. Is this a good way to do it, or do you know of a better way?
One reason why this is slow is that it isn't the sieve of Eratosthenes. Read http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf for a detailed explanation (the examples are in Haskell, but can be translated directly into Scala).
My old solution for Euler problem #7 wasn't the "true" sieve either, but it seems to work well enough for small numbers:
object Sieve {
  val primes = 2 #:: sieve(3)

  def sieve(n: Int): Stream[Int] =
    if (primes.takeWhile(p => p * p <= n).exists(n % _ == 0)) sieve(n + 2)
    else n #:: sieve(n + 2)

  def main(args: Array[String]) {
    println(primes(10000)) // note that indexes are zero-based
  }
}
I think the problem with your first version is that you have only defs and no val which collects the results and can be consulted by the generating function, so you always recalculate from scratch.
Yes, it is because you "increase the stack of functions to be called to get the next element by one" after each "iteration" - i.e. you add a new filter on top of the stack of filters after getting each prime. That's way too many filters.
This means that each produced prime gets tested by all its preceding primes - but only those below its square root are really needed. For instance, to get the 10001st prime, 104743, 10000 filters will be created at run time. But there are just 66 primes below 323, the square root of 104743, so only 66 filters were really needed. All the 9934 others are there needlessly, taking up memory, hard at work producing absolutely no added value.
This is the key deficiency of that "functional sieve", which seems to have originated in 1970s code by David Turner, and later found its way into the SICP book and other places. It is not that it's a trial division sieve (rather than the sieve of Eratosthenes) - that's far too remote a concern here. Trial division, when optimally implemented, is perfectly capable of producing the 10000th prime very fast.
The key deficiency of that code is that it does not postpone the creation of filters to the right moment, and ends up creating far too many of them.
Talking complexities now, the "old sieve" code is O(n^2), in n primes produced. Optimal trial division is O(n^1.5/log^0.5(n)), and the sieve of Eratosthenes is O(n*log(n)*log(log(n))). As empirical orders of growth, the first is typically seen as ~ n^2, the second as ~ n^1.45 and the third as ~ n^1.2.
You can find Python generators-based code for optimal trial division implemented in this answer (2nd half of it). It was originally discussed here dealing with the Haskell equivalent of your sieve function.
Just as an illustration, a "readable pseudocode" :) for the old sieve is
primes = sieve [2..] where
sieve (x:xs) = x : sieve [ y | y <- xs, rem y x > 0 ]
-- list of 'y's, drawn from 'xs',
-- such that (y % x > 0)
and for optimal trial division (TD) sieve, synchronized on primes' squares,
primes = sieve [2..] primes where
sieve (x:xs) ps = x : (h ++ sieve [ y | y <- t, rem y p > 0 ] qs)
where
(p:qs) = ps -- 'p' is head elt in 'ps', and 'qs' the rest
(h,t) = span (< p*p) xs -- 'h' are elts below p^2 in 'xs'
-- and 't' are the rest
and for a sieve of Eratosthenes, devised by Richard Bird, as seen in that JFP article mentioned in another answer here,
primes = 2 : minus [3..]
(foldr (\p r-> p*p : union [p*p+p, p*p+2*p..] r) [] primes)
-- function of 'p' and 'r', that returns
-- a list with p^2 as its head elt, ...
Short and fast. (minus a b is a list a with all the elts of b progressively removed from it; union a b is a list a with all the elts of b progressively added to it without duplicates; both deal with ordered, non-decreasing lists. foldr is the right fold of a list. Because the folding here is linear, this runs at ~ n^1.33; to make it run at ~ n^1.2, the tree-like folding function foldi can be used.)
The answer to your second question is also a yes. Your second code, re-written in same "pseudocode",
ps = 2 : [i | i <- [3..], all ((> 0).rem i) (takeWhile ((<= i).(^2)) ps)]
is very similar to the optimal TD sieve above - both arrange for each candidate to be tested by all primes below its square root. But while the sieve arranges that with a run-time sequence of postponed filters, the latter definition re-fetches the needed primes anew for each candidate. One might be faster than the other depending on the compiler, but both are essentially the same.
And the third is also a yes: the sieve of Eratosthenes is better,
ps = 2 : 3 : minus [5,7..] (unionAll [[p*p, p*p+2*p..] | p <- drop 1 ps])
unionAll = foldi union' [] -- one possible implementation
union' (x:xs) ys = x : union xs ys
-- unconditionally produce first elt of the 1st arg
-- to avoid run-away access to infinite lists
It looks like it can be implemented in Scala too, judging by the similarity of other code snippets (though I don't know Scala). unionAll here implements a tree-like folding structure, but it could also be implemented with a sliding array, working segment by segment along the streams of primes' multiples.
TL;DR: yes, yes, and yes.
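For illustration, here is one possible Scala rendering of the Bird-style sieve above - only a rough sketch, assuming Scala 2's Stream and using the simpler linear unionAll (so ~ n^1.33 rather than ~ n^1.2):
// Ordered-stream difference: 'a' with all elements of 'b' removed.
def minus(a: Stream[Int], b: Stream[Int]): Stream[Int] =
  if (a.head < b.head) a.head #:: minus(a.tail, b)
  else if (a.head > b.head) minus(a, b.tail)
  else minus(a.tail, b.tail)

// Ordered-stream union without duplicates.
def union(a: Stream[Int], b: Stream[Int]): Stream[Int] =
  if (a.head < b.head) a.head #:: union(a.tail, b)
  else if (a.head > b.head) b.head #:: union(a, b.tail)
  else a.head #:: union(a.tail, b.tail)

// Unconditionally produce the first element of the first stream,
// to avoid running away into the infinite stream of streams.
def unionAll(ss: Stream[Stream[Int]]): Stream[Int] =
  ss.head.head #:: union(ss.head.tail, unionAll(ss.tail))

lazy val primes: Stream[Int] =
  2 #:: 3 #:: minus(Stream.from(5, 2),
    unionAll(primes.tail.map(p => Stream.from(p * p, 2 * p))))

// primes(10000) should be 104743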
FWIW, here's a real Sieve of Eratosthenes:
def sieve(n: Int) = (2 to math.sqrt(n).toInt).foldLeft((2 to n).toSet) { (ps, x) =>
if (ps(x)) ps -- (x * x to n by x)
else ps
}
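For instance, this should give the primes up to 30:
sieve(30).toList.sorted
// List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)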
Here's an infinite stream of primes using a variation on the Sieve of Eratosthenes that preserves its fundamental properties:
case class Cross(next: Int, incr: Int)
def adjustCrosses(crosses: List[Cross], current: Int) = {
crosses map {
case cross @ Cross(`current`, incr) => cross copy (next = current + incr)
case unchangedCross => unchangedCross
}
}
def notPrime(crosses: List[Cross], current: Int) = crosses exists (_.next == current)
def sieve(s: Stream[Int], crosses: List[Cross]): Stream[Int] = {
val current #:: rest = s
if (notPrime(crosses, current)) sieve(rest, adjustCrosses(crosses, current))
else current #:: sieve(rest, Cross(current * current, current) :: crosses)
}
def primes = sieve(Stream from 2, Nil)
This is somewhat difficult to use, however, since each element of the Stream is computed using the crosses list, which holds one entry for each prime found so far, and it seems that, for some reason, these lists are kept in memory for each number in the Stream.
For example, prompted by a comment, primes take 6000 contains 56993 would throw a GC exception, whereas primes drop 5000 take 1000 contains 56993 would return a result rather fast in my tests.