Random value in Tree (Scala)

I'm working with Tree.scala as defined in Functional Programming in Scala. I have implemented all the functions I needed, but there is one more that I'm trying to write. The idea is to use a home-made RNG to generate a value between 1 and the number of leaves in the tree (the count function), where the generated value represents the position of a leaf. For example, if the generated value is 3, I return the leaf at the 3rd position from left to right.
Data structure of a Tree
So I've made this function (where n is the generated random number):
def loop[A](tree: Tree[A], n: Int): A = tree match {
  case Leaf(a) => if (n == 0) a else loop(tree, n - 1)
  case Branch(left, right) => Branch(loop(left, n), loop(right, n))
}
The idea behind this is: if n > 0 and we are on a leaf, we can decrement the position; if n == 0 we return the value; and if we are on a branch we keep traversing the tree. But it's not working and I'm out of ideas; I also tried a few other things before, but without the expected result.
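For what it's worth, a minimal sketch of one way to make this traversal type-check and terminate, assuming the book's usual Tree ADT (the Either-based nth helper below is an illustration, not the book's API): thread the remaining count through the traversal instead of re-scanning the same node.

sealed trait Tree[+A]
case class Leaf[A](value: A) extends Tree[A]
case class Branch[A](left: Tree[A], right: Tree[A]) extends Tree[A]

// Right(value) once the n-th leaf (1-based, left to right) is found,
// Left(remaining) when this subtree had fewer than n leaves.
def nth[A](tree: Tree[A], n: Int): Either[Int, A] = tree match {
  case Leaf(a) =>
    if (n == 1) Right(a) else Left(n - 1)
  case Branch(left, right) =>
    nth(left, n) match {
      case Left(remaining) => nth(right, remaining) // not found yet, keep counting
      case found => found
    }
}

For example, nth(Branch(Branch(Leaf("a"), Leaf("b")), Leaf("c")), 3) returns Right("c").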

Related

How does the contains method in a binary search tree return false when no conditions are satisfied?

I have a method for a class that implements a set of integers using the binary search tree data structure. One of the methods is a contains method that returns true if the set contains a given integer.
Both implementations of the method below are correct, but I'm trying to understand why doing it this way:
def contains(x: Int): Boolean =
  if (x < elem) left.contains(x)
  else if (elem < x) right.contains(x)
  else true
is sufficient compared to being exhaustive and doing it this way:
def contains(x: Int): Boolean =
  if (x < elem) left.contains(x)
  else if (elem < x) right.contains(x)
  else if (elem == x) true
  else false
What is the intuitive reason for the method in the first approach returning false if the integer does not belong in the set?
You are right to be suspicious because it appears that the only way to end recursion is by returning true.
However, I think you will find that the tree has different classes for node elements and leaf elements. What you have shown is the contains method for a node; the contains method for a leaf will just return false, ending the recursion.
Alternatively, it may be that left and right are Option[Node]. In that case you will get false when the Option is None, because None.contains(_) is always false.
A binary search tree is always constructed so that the left branch holds elements smaller than the node value elem, and the right branch holds elements larger than elem.
Following this logic, if your x is neither smaller nor larger than elem, you are left with only one possibility: it is equal to elem. Hence there is no need to test for equality, and the second implementation's else false will always be a dead branch.
Think about it this way: if you take two numbers and neither of them is smaller or bigger than the other, then they must be the same number.
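As a minimal sketch of the two-class design described above (the names IntSet, Empty and NonEmpty are illustrative), the empty case is what ends the recursion:

abstract class IntSet {
  def contains(x: Int): Boolean
}

// The leaf/empty case: the recursion bottoms out here with false.
object Empty extends IntSet {
  def contains(x: Int): Boolean = false
}

class NonEmpty(elem: Int, left: IntSet, right: IntSet) extends IntSet {
  def contains(x: Int): Boolean =
    if (x < elem) left.contains(x)
    else if (elem < x) right.contains(x)
    else true // neither smaller nor larger, so x == elem
}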

How to pick a random value from a collection in Scala

I need a method to pick uniformly a random value from a collection.
Here is my current impl.
implicit class TraversableOnceOps[A, Repr](val elements: TraversableOnce[A]) extends AnyVal {
  def pickRandomly: A = elements.toSeq(Random.nextInt(elements.size))
}
But this code instantiates a new collection, so it's not ideal in terms of memory. Any way to improve it?

[update] making it work with an Iterator:
implicit class TraversableOnceOps[A, Repr](val elements: TraversableOnce[A]) extends AnyVal {
  def pickRandomly: A = {
    val seq = elements.toSeq
    seq(Random.nextInt(seq.size))
  }
}
It may seem at first glance that you can't do this without counting the elements first, but you can!
Iterate through the sequence and take the i-th element f_i with probability 1/i:
def choose[A](it: Iterator[A], r: util.Random): A =
  it.zip(Iterator.iterate(1)(_ + 1)).reduceLeft((x, y) =>
    if (r.nextInt(y._2) == 0) y else x
  )._1
A quick demonstration of uniformity:
scala> ((1 to 1000000)
| .map(_ => choose("abcdef".iterator, r))
| .groupBy(identity).values.map(_.length))
res45: Iterable[Int] = List(166971, 166126, 166987, 166257, 166698, 166961)
Here's a discussion of the math I wrote a while back, though I'm afraid it's a bit unnecessarily long-winded. It also generalizes to choosing any fixed number of elements instead of just one.
The simplest way is just to think of the problem as zipping the collection with an equal-sized list of random numbers, and then extracting the element paired with the maximum random number. You can do this without actually realizing the zipped sequence, although it does require traversing the entire iterator:

val maxElement = s.maxBy(_ => Random.nextInt)

Or, for the implicit version:

implicit class TraversableOnceOps[A, Repr](val elements: TraversableOnce[A]) extends AnyVal {
  def pickRandomly: A = elements.maxBy(_ => Random.nextInt)
}
It's possible to select an element uniformly at random from a collection, traversing it once without copying the collection.
The following algorithm will do the trick:
def choose[A](elements: TraversableOnce[A]): A = {
  var x: A = null.asInstanceOf[A]
  var i = 1
  for (e <- elements) {
    // Take the new element with probability 1/i, otherwise keep the old pick.
    if (Random.nextDouble <= 1.0 / i) {
      x = e
    }
    i += 1
  }
  x
}
The algorithm works by making a choice at each iteration: take the new element with probability 1/i, or keep the previous one.
To understand why the algorithm chooses the element uniformly at random, consider one element of the collection, for example the first one (in this example the collection has only three elements).

At iteration:

1. Chosen with probability 1.
2. Chosen with probability (probability of keeping the element at the previous iteration) × (probability of keeping it at the current iteration) = 1 × 1/2 = 1/2.
3. Chosen with probability 1/2 × 2/3 = 1/3 (in other words, uniformly).

If we take another element, for example the second one:

1. 0 (not possible to choose the element at this iteration).
2. 1/2.
3. 1/2 × 2/3 = 1/3.

Finally, for the third one:

1. 0.
2. 0.
3. 1/3.
This shows that the algorithm selects an element uniformly at random. This can be proved formally using induction.
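A quick empirical check, in the same spirit as the REPL demonstration earlier in the thread; with three elements, each count should come out near 100000 (exact numbers will vary from run to run):

val counts = (1 to 300000)
  .map(_ => choose(List("a", "b", "c")))
  .groupBy(identity)
  .map { case (k, v) => k -> v.size }
// e.g. Map(a -> 100103, b -> 99897, c -> 100000)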
If the collection is large enough that you care about instantiations, here is a constant-memory solution (I assume it contains Ints, but that only matters for the initial parameter passed to the fold):

collection.foldLeft((0, 0)) {
  case ((0, _), x) => (1, x)
  // keep the current pick with probability n / (n + 1)
  case ((n, x), _) if random.nextDouble() >= 1.0 / (n + 1) => (n + 1, x)
  case ((n, _), x) => (n + 1, x)
}._2
I am not sure if this requires further explanation... Basically, it does the same thing that @svenslaggare suggested above, but in a functional way, since this is tagged as a Scala question.

Generate a DAG from a poset using strictly functional programming

Here is my problem: I have a sequence S of (nonempty but possibly not distinct) sets s_i, and for each s_i need to know how many sets s_j in S (i ≠ j) are subsets of s_i.
I also need incremental performance: once I have all my counts, I may replace one set s_i by some subset of s_i and update the counts incrementally.
Performing all this using purely functional code would be a huge plus (I code in Scala).
As set inclusion is a partial ordering, I thought the best way to solve my problem would be to build a DAG representing the Hasse diagram of the sets, with edges representing inclusion, and to attach to each node an integer value giving the size of the sub-DAG below the node, plus 1. However, I have been stuck for several days trying to develop the algorithm that builds the Hasse diagram from the partial ordering (let's not talk about incrementality!), even though I thought it would be standard undergraduate material.
Here is my data structure:
case class HNode[A](
  val v: A,
  val child: List[HNode[A]]) {
  val rank = 1 + child.map(_.rank).sum
}
My DAG is defined by a list of roots and some partial ordering:
class Hasse[A](val po: PartialOrdering[A], val roots: List[HNode[A]]) {
  def +(v: A): Hasse[A] = new Hasse[A](po, add(v, roots))

  private def collect(v: A, roots: List[HNode[A]], collected: List[HNode[A]]): List[HNode[A]] =
    if (roots == Nil) collected
    else {
      val (subsets, remaining) = roots.partition(r => po.lteq(r.v, v))
      collect(v, remaining.map(_.child).flatten,
        subsets.filter(r => !collected.exists(c => po.lteq(r.v, c.v))) ::: collected)
    }
}
I am pretty stuck here. The latest approach I came up with for adding a new value v to the DAG is:

1. Find all "root subsets" rs_i of v in the DAG, i.e., subsets of v such that no superset of rs_i is a subset of v. This can be done quite easily by performing a search (BFS or DFS) on the graph (the collect function, possibly non-optimal or even flawed).
2. Build the new node n_v, whose children are the previously found rs_i.
3. Find out where n_v should be attached: for a given list of roots, find the supersets of v. If none are found, add n_v to the roots and remove subsets of n_v from the roots. Otherwise, perform step 3 recursively on the supersets' children.
I have not yet fully implemented this algorithm, but it seems unnecessarily convoluted and suboptimal for my apparently simple problem. Is there some simpler algorithm available? (Google was clueless on this.)
After some work, I finally ended up solving my problem, following my initial intuition. The collect method and rank evaluation were flawed; I rewrote them with tail recursion as a bonus. Here is the code I obtained:
import scala.annotation.tailrec

final case class HNode[A](
  val v: A,
  val child: List[HNode[A]]) {

  val rank: Int = 1 + count(child, Set.empty)

  @tailrec
  private def count(stack: List[HNode[A]], c: Set[HNode[A]]): Int =
    if (stack == Nil) c.size
    else {
      val head :: rem = stack
      if (c(head)) count(rem, c)
      else count(head.child ::: rem, c + head)
    }
}
// ...
private def add(v: A, roots: List[HNode[A]]): List[HNode[A]] = {
  val newNode = HNode(v, collect(v, roots, Nil))
  attach(newNode, roots)
}

private def attach(n: HNode[A], roots: List[HNode[A]]): List[HNode[A]] =
  if (roots.contains(n)) roots
  else {
    val (supersets, remaining) = roots.partition { r =>
      // Strict superset to avoid creating cycles in case of equal elements
      po.tryCompare(n.v, r.v) == Some(-1)
    }
    if (supersets.isEmpty) n :: remaining.filter(r => !po.lteq(r.v, n.v))
    else supersets.map(s => HNode(s.v, attach(n, s.child))) ::: remaining
  }
@tailrec
private def collect(v: A, stack: List[HNode[A]], collected: List[HNode[A]]): List[HNode[A]] =
  if (stack == Nil) collected
  else {
    val head :: tail = stack
    if (collected.exists(c => po.lteq(head.v, c.v))) collect(v, tail, collected)
    else if (po.lteq(head.v, v)) collect(v, tail, head :: collected.filter(c => !po.lteq(c.v, head.v)))
    else collect(v, head.child ::: tail, collected)
  }
I now must check some optimizations:
- cut off branches with totally distinct sets when collecting subsets (as Rex Kerr suggested)
- see if sorting the sets by size improves the process (as mitchus suggested)

The next step is to work out the (worst-case) complexity of the add() operation.
With n the number of sets and d the size of the largest set, the complexity will probably be O(n²d), but I hope it can be refined. Here is my reasoning: if all sets are distinct, the DAG reduces to a sequence of roots/leaves, so every time I add a node to the data structure, I still have to check it for inclusion against each node already present (in both the collect and attach procedures). This leads to 1 + 2 + … + n = n(n+1)/2 ∈ O(n²) inclusion checks, and each set-inclusion test is O(d), hence the result.
Suppose your DAG G contains a node v for each set, with attributes v.s (the set) and v.count (the number of instances of the set), including a node G.root with G.root.s = union of all sets (where G.root.count=0 if this set never occurs in your collection).
Then to count the number of distinct subsets of s you could do the following (in a bastardized mixture of Scala, Python and pseudo-code):
sum(apply(lambda x: x.count, get_subsets(s, G.root)))
where
get_subsets(s, v):
  if (v.s is not a subset of s, {},
      union({v} :: apply(v.children, lambda x: get_subsets(s, x))))
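For what it's worth, here is a hedged Scala rendering of that idea (the Node type and its fields are assumptions; the recursion below keeps descending while v.s is not yet a subset of s, since the root, being the union of all sets, is a superset of s rather than a subset):

case class Node[A](s: Set[A], count: Int, children: List[Node[A]])

// All distinct nodes whose set is a subset of s. Once v.s is a subset
// of s, every descendant is one too; otherwise keep descending.
def getSubsets[A](s: Set[A], v: Node[A]): Set[Node[A]] =
  if (v.s.subsetOf(s)) descendants(v)
  else v.children.map(getSubsets(s, _)).foldLeft(Set.empty[Node[A]])(_ ++ _)

def descendants[A](v: Node[A]): Set[Node[A]] =
  v.children.map(descendants(_)).foldLeft(Set(v))(_ ++ _)

def countSubsets[A](s: Set[A], root: Node[A]): Int =
  getSubsets(s, root).toList.map(_.count).sum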
In my opinion though, for performance reasons you would be better off abandoning this kind of purely functional solution... it works well on lists and trees, but beyond that the going gets tough.

Off by one with sliding?

One of the advantages of not handling collections through indices is to avoid off-by-one errors. That's certainly not the only advantage, but it is one of them.
Now, I often use sliding in some algorithms in Scala, but I feel that it usually results in something very similar to off-by-one errors, because a sliding of m elements over a collection of size n has n - m + 1 elements. Or, more trivially, list sliding 2 is one element shorter than list.
The feeling I get is that there's a missing abstraction in this pattern, something that would be part sliding, part something more -- like foldLeft is to reduceLeft. I can't think of what that might be, however. Can anyone help me find enlightenment here?
UPDATE
Since people are not clear on what I'm talking about, let's consider this case. I want to capitalize a string: every letter that is not preceded by a letter should be upper case, and all other letters should be lower case. Using sliding, I have to special-case either the first or the last letter. For example:
def capitalize(s: String) = s(0).toUpper +: s.toSeq.sliding(2).map {
  case Seq(c1, c2) if c2.isLetter => if (c1.isLetter) c2.toLower else c2.toUpper
  case Seq(_, x) => x
}.mkString
I'm taking Owen's answer as inspiration for this.
When you want to apply a simple diff() to a list, this can be seen as equivalent to the following matrix multiplication.
a = (0 1 4 3).T

M = ( 1 -1  0  0 )
    ( 0  1 -1  0 )
    ( 0  0  1 -1 )

diff(a) = M * a = (1 3 1).T
We may now use the same scheme for general list operations, if we replace addition and multiplication (and if we generalise the numbers in our matrix M).
So, with plus being a list append operation (with flatten afterwards – or simply a collect operation), and the multiplicative equivalent being either Some(_) or None, a slide with a window size of two becomes:
M = (Some(_) Some(_) None    None   )
    (None    Some(_) Some(_) None   )
    (None    None    Some(_) Some(_))

slide(a) = M "*" a = ((0 1) (1 4) (4 3)).T

Not sure if this is the kind of abstraction you're looking for, but it would be a generalisation over a class of operations which change the number of items. diff or slide operations of order m on an input of length n need matrices of size (n - m + 1) × n.
Edit: A solution could be to transform a List[A] into a List[Some[A]] and then prepend (slideLeft) or append (slideRight) Nones to it. That way you could handle all the magic inside the map method.
list.slideLeft(2) {
  case Seq(Some(c1), Some(c2)) if c2.isLetter => if (c1.isLetter) c2.toLower else c2.toUpper
  case Seq(_, Some(x)) => x
}
I run into this problem all the time in Python/R/Matlab, where you diff() a vector and then can't line it up with the original one! It is very frustrating.
I think what's really missing is that the vector only holds the dependent variables, and assumes that you, the programmer, are keeping track of the independent variables, i.e. the dimension that the collection ranges over.
I think the way to solve this is to have the language keep track of the independent variables to some degree, perhaps statically through types, or dynamically by storing them along with the vector. Then it can check the independent axes, make sure they line up, or (I don't know if this is possible) shuffle things around to make them line up.
That's the best I've thought of so far.
EDIT
Another way of thinking about this is, why does your collection have order? Why is it not just a Set? The order means something, but the collection doesn't keep track of that -- it's basically using sequential position (which is about as informative as numerical indices) to proxy for the real meaning.
EDIT
Another consequence would be that transformations like sliding actually represent two transformations, one for the dependent variables, and one for their axis.
In your example, I think the code is made more complex because you basically want to do a map, but working with sliding introduces edge conditions in a way that doesn't compose nicely. I think a fold left with an accumulator that remembers the relevant state may be conceptually easier:
def capitalize2(s: String) = (("", true) /: s) { case ((res, notLetter), c) =>
  (res + (if (notLetter) c.toUpper else c.toLower), !c.isLetter)
}._1
I think this could be generalized so that notLetter could remember n elements where n is the size of the sliding window.
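A hedged sketch of that generalization (foldWithWindow is a made-up name): the accumulator carries the last n - 1 elements seen, so each step sees a window of predecessors without using sliding.

def foldWithWindow[A, B](xs: Seq[A], n: Int)(z: B)(f: (B, List[A], A) => B): B =
  xs.foldLeft((z, List.empty[A])) { case ((acc, prev), c) =>
    // prev holds up to n - 1 preceding elements, oldest first
    (f(acc, prev, c), (prev :+ c).takeRight(n - 1))
  }._1

// capitalize, revisited: upper-case when there is no preceding letter
def capitalize3(s: String): String =
  foldWithWindow(s, 2)("") { (acc, prev, c) =>
    val noLetterBefore = prev.headOption.forall(!_.isLetter)
    acc + (if (noLetterBefore) c.toUpper else c.toLower)
  }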
The transformation you're asking for inherently reduces the size of the data. Sorry--there's no other way to look at it. tail also gives you off-by-one errors.
Now, you might say--well, fine, but I want a convenience method to maintain the original size.
In that case, you might want these methods on List:
def initializedSliding(init: List[A]) = (init ::: this).sliding(1 + init.length)
def finalizedSliding(tail: List[A]) = (this ::: tail).sliding(1 + tail.length)
which will maintain your list length. (You can envision how to generalize to non-lists, I'm sure.)
This is the analog to fold left/right in that you supply the missing data in order to perform a pairwise (or more) operation on every element of the list.
The off-by-one problem you describe reminds me of the boundary-condition issue in digital signal processing. The problem occurs because the data (list) is finite; it doesn't occur for infinite data (a stream). In digital signal processing the issue is remedied by extending the finite signal to an infinite one, in various ways, such as repeating the data, or repeating it and reversing it on every repetition (as is done for the discrete cosine transform).
Borrowing from these approaches, a sliding variant could be an abstraction which does not exhibit the off-by-one problem:
(1::2::3::Nil).sliding(2)
would yield
(1,2), (2,3), (3,1)
for circular boundary conditions and
(1,2), (2,3), (3,2)
for boundary conditions with reversal.
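A hedged sketch of what such boundary-extending variants of sliding might look like (the names are made up):

// Circular: wrap around to the beginning of the list.
def slidingCircular[A](xs: List[A], n: Int): List[List[A]] =
  (xs ::: xs.take(n - 1)).sliding(n).toList

// Reversal: continue into the reversed data, as in the DCT.
def slidingReversal[A](xs: List[A], n: Int): List[List[A]] =
  (xs ::: xs.reverse.tail.take(n - 1)).sliding(n).toList

slidingCircular(List(1, 2, 3), 2) // List(List(1, 2), List(2, 3), List(3, 1))
slidingReversal(List(1, 2, 3), 2) // List(List(1, 2), List(2, 3), List(3, 2))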
Off-by-one errors suggest that you are trying to put the original list in one-to-one correspondence with the sliding list, but something strange is going on, since the sliding list has fewer elements.
The problem statement for your example can be roughly phrased as: "Uppercase every character if it (a) is the first character, or (b) does not follow a letter character". As Owen points out, the first character is a special case, and any abstraction should respect this. Here's a possibility:
def slidingPairMap[A, B](s: List[A], f1: A => B, f2: (A, A) => B): List[B] = s match {
  case Nil => Nil
  case x :: Nil => List(f1(x)) // lone element: sliding(2) would yield a one-element window
  case x :: _ => f1(x) +: s.sliding(2).toList.map { case List(x, y) => f2(x, y) }
}
(not the best implementation, but you get the idea). This generalizes to sliding triples, with off-by-two errors, and so on. The type of slidingPairMap makes it clear that special casing is being done.
An equivalent signature could be
def slidingPairMap[A, B](s: List[A], f: Either[A, (A, A)] => B): List[B]
Then f could use pattern matching to figure out if it's working with the first element, or with a subsequent one.
Or, as Owen says in the comments, why not make a modified sliding method that gives information about whether the element is first or not,
def slidingPairs[A](s: List[A]): List[Either[A, (A, A)]]
I guess this last idea is isomorphic to what Debilski suggests in the comments: pad the beginning of the list with None, wrap all the existing elements with Some, and then call sliding.
I realize this is an old question, but I just had a similar problem and wanted to solve it without having to append or prepend anything, and in a way that handles the last elements of the sequence seamlessly. The approach I came up with is a slidingFoldLeft. You have to handle the first element as a special case (like some others mentioned, for capitalize it is a special case), but the end of the sequence can be handled just like the other cases. Here is the implementation and some silly examples:
def slidingFoldLeft[A, B](seq: Seq[A], window: Int)(acc: B)(f: (B, Seq[A]) => B): B = {
  if (window > 0) {
    val iter = seq.sliding(window)
    iter.foldLeft(acc) {
      // Operate normally
      case (acc, next) if iter.hasNext => f(acc, next)
      // We're at the last <window> elements of the seq: handle the current
      // window, then recurse with a smaller window
      case (acc, next) =>
        slidingFoldLeft(next.tail, window - 1)(f(acc, next))(f)
    }
  } else acc
}
def capitalizeAndQuestionIncredulously(s: String) =
  slidingFoldLeft(s.toSeq, 2)("" + s(0).toUpper) {
    // Normal iteration
    case (acc, Seq(c1, c2)) if c1.isLetter && c2.isLetter => acc + c2.toLower
    case (acc, Seq(_, c2)) if c2.isLetter => acc + c2.toUpper
    case (acc, Seq(_, c2)) => acc + c2
    // Last element of string
    case (acc, Seq(c)) => acc + "?!"
  }

def capitalizeAndInterruptAndQuestionIncredulously(s: String) =
  slidingFoldLeft(s.toSeq, 3)("" + s(0).toUpper) {
    // Normal iteration
    case (acc, Seq(c1, c2, _)) if c1.isLetter && c2.isLetter => acc + c2.toLower
    case (acc, Seq(_, c2, _)) if c2.isLetter => acc + c2.toUpper
    case (acc, Seq(_, c2, _)) => acc + c2
    // Last two elements of string
    case (acc, Seq(c1, c2)) => acc + " (commercial break) " + c2
    // Last element of string
    case (acc, Seq(c)) => acc + "?!"
  }
println(capitalizeAndQuestionIncredulously("hello my name is mAtthew"))
println(capitalizeAndInterruptAndQuestionIncredulously("hello my name is mAtthew"))
And the output:
Hello My Name Is Matthew?!
Hello My Name Is Matthe (commercial break) w?!
I would map the elements with Some(_) and then prepend None. Note that the obvious way of doing it (matching on two Somes in the default case, as done in the edit by Debilski) is wrong, since we must be able to modify even the first letter. This way, the abstraction respects the fact that sometimes there simply is no predecessor. Using getOrElse(false) ensures that a missing predecessor is treated as having failed the test.
((None +: "foo1bar".toSeq.map(Some(_))) sliding 2).map {
case Seq(c1Opt, Some(c2)) if c2.isLetter => if (c1Opt.map(_.isLetter).getOrElse(false)) c2.toLower else c2.toUpper
case Seq(_, Some(x)) => x
}.mkString
res13: String = "Foo1Bar"
Acknowledgments: the idea of mapping the elements with Some(_) did come to me through Debilski's post.
I'm not sure if this solves your concrete problem, but we could easily imagine a pair of methods, e.g. slidingFromLeft(z: A, size: Int) and slidingToRight(z: A, size: Int) (where A is the collection's element type), which, when called on e.g.
List(1, 2, 3, 4, 5)
with arguments e.g. (0, 3), should produce respectively
List(0, 0, 1), List(0, 1, 2), List(1, 2, 3), List(2, 3, 4), List(3, 4, 5)
and
List(1, 2, 3), List(2, 3, 4), List(3, 4, 5), List(4, 5, 0), List(5, 0, 0)
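Neither method exists in the standard library, but a minimal sketch as standalone functions is straightforward (padding with the supplied element z):

def slidingFromLeft[A](xs: List[A], z: A, size: Int): Iterator[List[A]] =
  (List.fill(size - 1)(z) ::: xs).sliding(size)

def slidingToRight[A](xs: List[A], z: A, size: Int): Iterator[List[A]] =
  (xs ::: List.fill(size - 1)(z)).sliding(size)

slidingFromLeft(List(1, 2, 3, 4, 5), 0, 3).toList
// List(List(0, 0, 1), List(0, 1, 2), List(1, 2, 3), List(2, 3, 4), List(3, 4, 5))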
This is the sort of problem nicely suited to an array-oriented functional language like J. Basically, we generate a boolean vector with a one corresponding to the first letter of each word. To do this, we start with a boolean marking the spaces in the string. For example (lines indented three spaces are inputs; results are flush with the left margin; "NB." starts a comment):
   str=. 'now  is  the    time'   NB. Example w/extra spaces for interest
   ]whspc=. ' '=str               NB. Mark where spaces are=1
0 0 0 1 1 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0
Verify that (*.-.) ("and not") returns one only for "1 0":
   ]tt=. #:i.4       NB. Truth table
0 0
0 1
1 0
1 1
   (*.-.)/"1 tt      NB. Apply to 1-D sub-arrays (rows)
0 0 1 0              NB. As hoped.
Slide our tacit function across pairs in the boolean:
   2(*.-.)/\whspc    NB. Apply to 2-ples
0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0
But this doesn't handle the edge condition of the initial letter, so force a one into the first position. This actually helps as the reduction of 2-ples left us one short. Here we compare lengths of the original boolean and the target boolean:
   #whspc
20
   #1,2(*.-.)/\whspc
20
   1,2(*.-.)/\whspc
1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0
We get uppercase by using the index into the lowercase vector to select from the uppercase vector (after defining these two vectors):
   'lc uc'=. 'abcdefghijklmnopqrstuvwxyz';'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
   (uc,' '){~lc i. str
NOW  IS  THE    TIME
Check that insertion by boolean gives correct result:
   (1,2(*.-.)/\whspc) } str,:(uc,' '){~lc i. str
Now  Is  The    Time
Now is the time to combine all this into one statement:
   (1,2(*.-.)/\' '=str) } str,:(uc,' '){~lc i. str
Now  Is  The    Time
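For readers who don't speak J, here is a rough, hedged Scala port of the same array-oriented idea: mark the spaces, detect the space-to-nonspace transitions pairwise, force a mark at position 0, and upper-case at the marked positions (assumes a non-empty input).

def capitalizeWords(str: String): String = {
  val space = str.map(_ == ' ')
  // the "1 0" pattern: previous char is a space, current one is not
  val marks = true +: space.zip(space.tail).map { case (a, b) => a && !b }
  str.zip(marks).map { case (c, m) => if (m) c.toUpper else c }.mkString
}

capitalizeWords("now  is  the    time") // "Now  Is  The    Time"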

Why is this scala prime generation so slow/memory intensive?

I run out of memory while finding the 10,001th prime number.
object Euler0007 {
  def from(n: Int): Stream[Int] = n #:: from(n + 1)
  def sieve(s: Stream[Int]): Stream[Int] = s.head #:: sieve(s.filter(_ % s.head != 0))
  def primes = sieve(from(2))
  def main(args: Array[String]): Unit = {
    println(primes(10001))
  }
}
Is this because after each "iteration" (is this the correct term in this context?) of primes, I increase the stack of functions to be called to get the next element by one?
One solution I found on the web that doesn't resort to iteration (which I'd like to avoid, to get into functional programming and idiomatic Scala) is this (Problem 7):
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i => ps.takeWhile(j => j * j <= i).forall(i % _ > 0))
From what I can see, this does not build up the same chain of nested filters. Is this a good way to do it, or do you know of a better way?
One reason why this is slow is that it isn't the sieve of Eratosthenes. Read http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf for a detailed explanation (the examples are in Haskell, but can be translated directly into Scala).
My old solution for Euler problem #7 wasn't the "true" sieve either, but it seems to work well enough for small numbers:
object Sieve {
  val primes = 2 #:: sieve(3)

  def sieve(n: Int): Stream[Int] =
    if (primes.takeWhile(p => p * p <= n).exists(n % _ == 0)) sieve(n + 2)
    else n #:: sieve(n + 2)

  def main(args: Array[String]) {
    println(primes(10000)) // note that indexes are zero-based
  }
}
I think the problem with your first version is that you have only defs and no val that collects the results and can be consulted by the generating function, so you always recalculate from scratch.
Yes, it is because you "increase the stack of functions to be called to get the next element by one after each iteration", i.e. you add a new filter on top of the stack of filters after getting each prime. That's way too many filters.
This means that each produced prime gets tested by all its preceding primes - but only those below its square root are really needed. For instance, to get the 10001-th prime, 104743, there will be 10000 filters created, at run-time. But there are just 66 primes below 323, the square root of 104743, so only 66 filters were really needed. All the 9934 others will be there needlessly, taking up memory, hard at work producing absolutely no added value.
This is the key deficiency of that "functional sieve", which seems to have originated in 1970s code by David Turner and later found its way into the SICP book and other places. The deficiency is not that it's a trial division sieve (rather than the sieve of Eratosthenes); that's far too remote a concern for it. Trial division, when optimally implemented, is perfectly capable of producing the 10000th prime very fast.
The key deficiency of that code is that it does not postpone the creation of filters to the right moment, and ends up creating far too many of them.
Talking complexities now, the "old sieve" code is O(n²), in n primes produced. The optimal trial division is O(n^1.5 / log^0.5 n), and the sieve of Eratosthenes is O(n log n log log n). As empirical orders of growth, the first is typically seen as ~ n^2, the second as ~ n^1.45, and the third as ~ n^1.2.
You can find Python generators-based code for optimal trial division implemented in this answer (2nd half of it). It was originally discussed here dealing with the Haskell equivalent of your sieve function.
Just as an illustration, a "readable pseudocode" :) for the old sieve is
primes = sieve [2..] where
  sieve (x:xs) = x : sieve [ y | y <- xs, rem y x > 0 ]
                 -- list of 'y's, drawn from 'xs',
                 -- such that (y % x > 0)
and for the optimal trial division (TD) sieve, synchronized on the primes' squares,
primes = sieve [2..] primes where
  sieve (x:xs) ps = x : (h ++ sieve [ y | y <- t, rem y p > 0 ] qs)
    where
      (p:qs) = ps              -- 'p' is the head elt in 'ps', and 'qs' the rest
      (h,t)  = span (< p*p) xs -- 'h' are the elts below p^2 in 'xs',
                               -- and 't' are the rest
and for a sieve of Eratosthenes devised by Richard Bird, as seen in the JFP article mentioned in another answer here,
primes = 2 : minus [3..]
           (foldr (\p r -> p*p : union [p*p+p, p*p+2*p..] r) [] primes)
           -- a function of 'p' and 'r' that returns
           -- a list with p^2 as its head elt, ...
Short and fast. (minus a b is the list a with all the elements of b progressively removed from it; union a b is the list a with all the elements of b progressively added to it, without duplicates; both deal with ordered, non-decreasing lists.) foldr is the right fold over a list. Because the folding here is linear, this runs at ~ n^1.33; to make it run at ~ n^1.2, the tree-like folding function foldi can be used.
The answer to your second question is also a yes. Your second code, re-written in the same "pseudocode",
ps = 2 : [i | i <- [3..], all ((> 0).rem i) (takeWhile ((<= i).(^2)) ps)]
is very similar to the optimal TD sieve above: both arrange for each candidate to be tested by all primes below its square root. While the sieve arranges that with a run-time sequence of postponed filters, the latter definition re-fetches the needed primes anew for each candidate. One might be faster than the other depending on the compiler, but both are essentially the same.
And the third is also a yes: the sieve of Eratosthenes is better,
ps = 2 : 3 : minus [5,7..] (unionAll [[p*p, p*p+2*p..] | p <- drop 1 ps])

unionAll = foldi union' []   -- one possible implementation
union' (x:xs) ys = x : union xs ys
                   -- unconditionally produce the first elt of the 1st arg,
                   -- to avoid run-away access to infinite lists
It looks like it can be implemented in Scala too, judging by the similarity of the other code snippets (though I don't know Scala). unionAll here implements a tree-like folding structure, but it could also be implemented with a sliding array, working segment by segment along the streams of primes' multiples.
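Indeed, the minus and union helpers on ordered streams translate quite directly; a minimal sketch in Scala 2 Stream syntax (assuming non-decreasing streams, as described above):

def minus(a: Stream[Int], b: Stream[Int]): Stream[Int] = (a, b) match {
  case (x #:: xs, y #:: ys) =>
    if (x < y) x #:: minus(xs, b) // x is not in b, keep it
    else if (x > y) minus(a, ys)  // skip past smaller elements of b
    else minus(xs, ys)            // x == y: drop x
  case _ => a
}

def union(a: Stream[Int], b: Stream[Int]): Stream[Int] = (a, b) match {
  case (x #:: xs, y #:: ys) =>
    if (x < y) x #:: union(xs, b)
    else if (x > y) y #:: union(a, ys)
    else x #:: union(xs, ys) // emit duplicates only once
  case _ => a #::: b
}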
TL;DR: yes, yes, and yes.
FWIW, here's a real Sieve of Eratosthenes:
def sieve(n: Int) = (2 to math.sqrt(n).toInt).foldLeft((2 to n).toSet) { (ps, x) =>
  if (ps(x)) ps -- (x * x to n by x)
  else ps
}
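For example (a hypothetical REPL session):

scala> sieve(30).toList.sorted
res0: List[Int] = List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)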
Here's an infinite stream of primes using a variation on the Sieve of Eratosthenes that preserves its fundamental properties:
case class Cross(next: Int, incr: Int)

def adjustCrosses(crosses: List[Cross], current: Int) = {
  crosses map {
    case cross @ Cross(`current`, incr) => cross copy (next = current + incr)
    case unchangedCross => unchangedCross
  }
}

def notPrime(crosses: List[Cross], current: Int) = crosses exists (_.next == current)

def sieve(s: Stream[Int], crosses: List[Cross]): Stream[Int] = {
  val current #:: rest = s
  if (notPrime(crosses, current)) sieve(rest, adjustCrosses(crosses, current))
  else current #:: sieve(rest, Cross(current * current, current) :: crosses)
}

def primes = sieve(Stream from 2, Nil)
This is somewhat difficult to use, however, since each element of the Stream is computed using the crosses list, which has as many entries as there have been primes up to that number, and it seems that, for some reason, these lists are kept in memory for each number in the Stream.
For example, prompted by a comment, primes take 6000 contains 56993 would throw a GC exception, whereas primes drop 5000 take 1000 contains 56993 returned a result rather quickly in my tests.