How to make this code more functional? - scala

I am a newbie in functional programming. I just tried solving the following problem:
[ a rough specification ]
e.g.1:
dividend : {3,5,9}
divisor : {2,2}
radix = 10
ans (remainder) : {7}
Procedure :
dividend = 3*10^2+5*10^1+9*10^0 = 359
similarly, divisor = 22
so 359 % 22 = 7
e.g.2:
dividend : {555,555,555,555,555,555,555,555,555,555}
divisor: {112,112,112,112,112,112,112,112,112,112}
radix = 1000
ans (remainder) : {107,107,107,107,107,107,107,107,107,107}
My solution to this problem is:
object Tornedo {
  def main(args: Array[String]) {
    val radix: BigInt = 1000
    def buildNum(segs: BigInt*) = (BigInt(0) /: segs.toList) { _ * radix + _ }
    val dividend = buildNum(555,555,555,555,555,555,555,555,555,555)
    val divisor = buildNum(112,112,112,112,112,112,112,112,112,112)
    var remainder = dividend % divisor
    var rem = List[BigInt]()
    while (remainder > 0) {
      rem = (remainder % radix) :: rem
      remainder /= radix
    }
    println(rem)
  }
}
Although I am pretty satisfied with this code, I'd like to know how to eliminate the while loop and the two mutable variables and make this code more functional.
Any help would be greatly appreciated.
Thanks. :)

This tail-recursive function removes your two mutable vars and the loop:
object Tornedo {
  def main(args: Array[String]) {
    val radix: BigInt = 1000
    def buildNum(segs: BigInt*) = (BigInt(0) /: segs.toList) { _ * radix + _ }
    val dividend = buildNum(555,555,555,555,555,555,555,555,555,555)
    val divisor = buildNum(112,112,112,112,112,112,112,112,112,112)
    def breakup(n: BigInt, segs: List[BigInt]): List[BigInt] =
      if (n == 0) segs else breakup(n / radix, n % radix :: segs)
    println(breakup(dividend % divisor, Nil))
  }
}

Tail recursive solution in Scala 2.8:
def reradix(value: BigInt, radix: BigInt, digits: List[BigInt] = Nil): List[BigInt] = {
  if (value == 0) digits
  else reradix(value / radix, radix, (value % radix) :: digits)
}
The idea is generally to convert a while loop into a recursive solution where you keep track of your solution along the way (so it can be tail recursive, as it is here). If you instead used
(value % radix) :: reradix(value/radix, radix)
you would also compute the solution, but it would not be tail recursive so the partial answers would get pushed onto the stack. With default parameters, adding a final parameter that allows you to store the accumulating answer and use tail recursion is syntactically nice, as you can just call reradix(remainder,radix) and get the Nil passed in for free.
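For example, reusing dividend, divisor and radix from the question (and with the parameter named value as above), the call site stays tiny because the Nil accumulator is supplied by the default:

println(reradix(dividend % divisor, radix))   // a List of ten 107s for example 2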

Rahul, as I said, there isn't an unfold function in Scala. There is one in Scalaz, so I'm gonna show the solution using that one. The solution below is simply adapting Patrick's answer to use unfold instead of recursion.
import scalaz.Scalaz._
object Tornedo {
  def main(args: Array[String]) {
    val radix: BigInt = 1000
    def buildNum(segs: BigInt*) = (BigInt(0) /: segs.toList) { _ * radix + _ }
    val dividend = buildNum(555,555,555,555,555,555,555,555,555,555)
    val divisor = buildNum(112,112,112,112,112,112,112,112,112,112)
    val unfoldingFunction = (n: BigInt) =>
      if (n == 0) None else Some((n % radix, n / radix))
    println((dividend % divisor).unfold[List, BigInt](unfoldingFunction))
  }
}

I think it's quite an expensive way to solve the problem, but a very intuitive one IMHO:
scala> Stream.iterate(255)(_ / 10).takeWhile(_ > 0).map(_ % 10).reverse
res6: scala.collection.immutable.Stream[Int] = Stream(2, 5, 5)
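Applied to the question's numbers it would be something along these lines (my sketch, reusing dividend, divisor and radix from above):

Stream.iterate(dividend % divisor)(_ / radix).takeWhile(_ > 0).map(_ % radix).reverse.toList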

Related

Scala functional solution for spoj "Prime Generator"

I worked on the Prime Generator problem for almost 3 days.
I want to write a functional Scala solution (which means "no var", "no mutable data"), but every time it exceeds the time limit.
My solution is:
object Main {
  def sqrt(num: Int) = math.sqrt(num).toInt
  def isPrime(num: Int): Boolean = {
    val end = sqrt(num)
    def isPrimeHelper(current: Int): Boolean = {
      if (current > end) true
      else if (num % current == 0) false
      else isPrimeHelper(current + 1)
    }
    isPrimeHelper(2)
  }
  val feedMax = sqrt(1000000000)
  val feedsList = (2 to feedMax).filter(isPrime)
  val feedsSet = feedsList.toSet
  def findPrimes(min: Int, max: Int) = (min to max) filter {
    num => if (num <= feedMax) feedsSet.contains(num)
           else feedsList.forall(p => num % p != 0 || p * p > num)
  }
  def main(args: Array[String]) {
    val total = readLine().toInt
    for (i <- 1 to total) {
      val Array(from, to) = readLine().split("\\s+")
      val primes = findPrimes(from.toInt, to.toInt)
      primes.foreach(println)
      println()
    }
  }
}
I'm not sure what can be improved. I also searched a lot, but couldn't find a Scala solution (most are C/C++ ones).
Here is a nice, fully functional Scala solution using the Sieve of Eratosthenes: http://en.literateprograms.org/Sieve_of_Eratosthenes_(Scala)#chunk def:ints
Check out this elegant and efficient one liner by Daniel Sobral: http://dcsobral.blogspot.se/2010/12/sieve-of-eratosthenes-real-one-scala.html?m=1
lazy val unevenPrimes: Stream[Int] = {
  def nextPrimes(n: Int, sqrt: Int, sqr: Int): Stream[Int] =
    if (n > sqr) nextPrimes(n, sqrt + 1, (sqrt + 1) * (sqrt + 1))
    else if (unevenPrimes.takeWhile(_ <= sqrt).exists(n % _ == 0)) nextPrimes(n + 2, sqrt, sqr)
    else n #:: nextPrimes(n + 2, sqrt, sqr)
  3 #:: 5 #:: nextPrimes(7, 3, 9)
}
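As the name suggests, the stream starts at 3, so I'd expect the intended usage to prepend 2:

val primes: Stream[Int] = 2 #:: unevenPrimes
println(primes.take(10).toList)   // List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)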

Why this function call in Scala is not optimized away?

I'm running this program with Scala 2.10.3:
object Test {
  def main(args: Array[String]) {
    def factorial(x: BigInt): BigInt =
      if (x == 0) 1 else x * factorial(x - 1)
    val N = 1000
    val t = new Array[Long](N)
    var r: BigInt = 0
    for (i <- 0 until N) {
      val t0 = System.nanoTime()
      r = r + factorial(300)
      t(i) = System.nanoTime() - t0
    }
    val ts = t.sortWith((x, y) => x < y)
    for (i <- 0 to 10)
      print(ts(i) + " ")
    println("*** " + ts(N/2) + "\n" + r)
  }
}
and the call to the pure function factorial with a constant argument is evaluated during each loop iteration (a conclusion based on the timing results). Shouldn't the optimizer reuse the result of the function call after the first call?
I'm using Scala IDE for Eclipse. Are there any optimization flags for the compiler that may produce more efficient code?
Scala is not a purely functional language, so without an effect system it cannot know that factorial is pure (for example, it doesn't "know" anything about the multiplication of big ints).
You need to add your own memoization approach here. Most simply, add a val f300 = factorial(300) outside your loop.
Here is a question about memoization.
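A minimal sketch of what that could look like (the names memo and factorialMemo are mine, not from the question), caching each result in a map keyed by the argument:

val memo = scala.collection.mutable.Map.empty[BigInt, BigInt]

def factorialMemo(x: BigInt): BigInt =
  memo.get(x) match {
    case Some(r) => r
    case None =>
      val r = if (x == 0) BigInt(1) else x * factorialMemo(x - 1)
      memo(x) = r   // cache the result for later calls
      r
  }

With this, the second and later calls to factorialMemo(300) are just a map lookup.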

Scala Doubles, and Precision

Is there a function that can truncate or round a Double? At one point in my code I would like a number like 1.23456789 to be rounded to 1.23.
You can use scala.math.BigDecimal:
BigDecimal(1.23456789).setScale(2, BigDecimal.RoundingMode.HALF_UP).toDouble
There are a number of other rounding modes, which unfortunately aren't very well documented at present (although their Java equivalents are).
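For instance, two of those modes applied to the value from the question (my own quick illustration):

BigDecimal(1.23456789).setScale(2, BigDecimal.RoundingMode.FLOOR).toDouble     // 1.23
BigDecimal(1.23456789).setScale(2, BigDecimal.RoundingMode.CEILING).toDouble   // 1.24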
Here's another solution without BigDecimals
Truncate:
(math floor 1.23456789 * 100) / 100
Round (see rint):
(math rint 1.23456789 * 100) / 100
Or for any double n and precision p:
def truncateAt(n: Double, p: Int): Double = { val s = math pow (10, p); (math floor n * s) / s }
Similar can be done for the rounding function, this time using currying:
def roundAt(p: Int)(n: Double): Double = { val s = math pow (10, p); (math round n * s) / s }
which is more reusable, e.g. when rounding money amounts the following could be used:
def roundAt2(n: Double) = roundAt(2)(n)
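A quick REPL check of the curried version (output as I would expect it):

scala> roundAt2(1.23456789)
res0: Double = 1.23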
Since no one has mentioned the % operator yet, here it is. It only does truncation, and you cannot rely on the return value not having floating-point inaccuracies, but sometimes it's handy:
scala> 1.23456789 - (1.23456789 % 0.01)
res4: Double = 1.23
How about:
val value = 1.4142135623730951
//3 decimal places
println((value * 1000).round / 1000.toDouble)
//4 decimal places
println((value * 10000).round / 10000.toDouble)
Edit: fixed the problem that @ryryguy pointed out. (Thanks!)
If you want it to be fast, Kaito has the right idea. math.pow is slow, though. For any standard use you're better off with a recursive function:
def trunc(x: Double, n: Int) = {
  // 10^n as a Long (so n up to 18 stays within the range of Long)
  def p10(n: Int, pow: Long = 1): Long = if (n == 0) pow else p10(n - 1, pow * 10)
  if (n < 0) {
    val m = p10(-n).toDouble
    math.round(x / m) * m
  }
  else {
    val m = p10(n).toDouble
    math.round(x * m) / m
  }
}
This is about 10x faster if you're within the range of Long (i.e. 18 digits), so you can round anywhere between 10^18 and 10^-18.
For those who are interested, here are some timings for the suggested solutions...
Rounding
Java Formatter: Elapsed Time: 105
Scala Formatter: Elapsed Time: 167
BigDecimal Formatter: Elapsed Time: 27
Truncation
Scala custom Formatter: Elapsed Time: 3
Truncation is the fastest, followed by BigDecimal.
Keep in mind these tests were done with a normal Scala execution, not using any benchmarking tools.
object TestFormatters {
  val r = scala.util.Random
  def textFormatter(x: Double) = new java.text.DecimalFormat("0.##").format(x)
  def scalaFormatter(x: Double) = "%1.2f".format(x)
  def bigDecimalFormatter(x: Double) = BigDecimal(x).setScale(2, BigDecimal.RoundingMode.HALF_UP).toDouble
  def scalaCustom(x: Double) = {
    val roundBy = 2
    val w = math.pow(10, roundBy)
    (x * w).toLong.toDouble / w
  }
  def timed(f: => Unit) = {
    val start = System.currentTimeMillis()
    f
    val end = System.currentTimeMillis()
    println("Elapsed Time: " + (end - start))
  }
  def main(args: Array[String]): Unit = {
    print("Java Formatter: ")
    val iters = 10000
    timed {
      (0 until iters) foreach { _ =>
        textFormatter(r.nextDouble())
      }
    }
    print("Scala Formatter: ")
    timed {
      (0 until iters) foreach { _ =>
        scalaFormatter(r.nextDouble())
      }
    }
    print("BigDecimal Formatter: ")
    timed {
      (0 until iters) foreach { _ =>
        bigDecimalFormatter(r.nextDouble())
      }
    }
    print("Scala custom Formatter (truncation): ")
    timed {
      (0 until iters) foreach { _ =>
        scalaCustom(r.nextDouble())
      }
    }
  }
}
It's actually very easy to handle using the Scala f interpolator - https://docs.scala-lang.org/overviews/core/string-interpolation.html
Suppose we want to round till 2 decimal places:
scala> val sum = 1 + 1/4D + 1/7D + 1/10D + 1/13D
sum: Double = 1.5697802197802198
scala> println(f"$sum%1.2f")
1.57
You may use implicit classes:
import scala.math._
object ExtNumber extends App {
implicit class ExtendedDouble(n: Double) {
def rounded(x: Int) = {
val w = pow(10, x)
(n * w).toLong.toDouble / w
}
}
// usage
val a = 1.23456789
println(a.rounded(2))
}
Recently, I faced a similar problem and solved it using the following approach:
def round(value: Either[Double, Float], places: Int) = {
  if (places < 0) 0
  else {
    val factor = Math.pow(10, places)
    value match {
      case Left(d) => (Math.round(d * factor) / factor)
      case Right(f) => (Math.round(f * factor) / factor)
    }
  }
}

def round(value: Double): Double = round(Left(value), 0)
def round(value: Double, places: Int): Double = round(Left(value), places)
def round(value: Float): Double = round(Right(value), 0)
def round(value: Float, places: Int): Double = round(Right(value), places)
I used this SO issue. I have a couple of overloaded functions for both Float/Double and implicit/explicit options. Note that you need to explicitly mention the return type in the case of overloaded functions.
Those are great answers in this thread. In order to better show the difference, here is an example. The reason I put it here is that during my work the numbers are required NOT to be rounded half-up:
import org.apache.spark.sql.types._
val values = List(1.2345,2.9998,3.4567,4.0099,5.1231)
val df = values.toDF
df.show()
+------+
| value|
+------+
|1.2345|
|2.9998|
|3.4567|
|4.0099|
|5.1231|
+------+
val df2 = df.withColumn("floor_val", floor(col("value"))).
withColumn("dec_val", col("value").cast(DecimalType(26,2))).
withColumn("floor2", (floor(col("value") * 100.0)/100.0).cast(DecimalType(26,2)))
df2.show()
+------+---------+-------+------+
| value|floor_val|dec_val|floor2|
+------+---------+-------+------+
|1.2345| 1| 1.23| 1.23|
|2.9998| 2| 3.00| 2.99|
|3.4567| 3| 3.46| 3.45|
|4.0099| 4| 4.01| 4.00|
|5.1231| 5| 5.12| 5.12|
+------+---------+-------+------+
The floor function rounds down to the largest integer less than or equal to the current value. DecimalType by default will use HALF_UP mode, not just cut to the precision you want. If you want to cut to a certain precision without using HALF_UP mode, you can use the above solution instead (or use scala.math.BigDecimal, where you have to explicitly define the rounding mode).
I wouldn't use BigDecimal if you care about performance. BigDecimal converts the number to a string and then parses it back again:
/** Constructs a `BigDecimal` using the decimal text representation of `Double` value `d`, rounding if necessary. */
def decimal(d: Double, mc: MathContext): BigDecimal = new BigDecimal(new BigDec(java.lang.Double.toString(d), mc), mc)
I'm going to stick to math manipulations as Kaito suggested.
Since the question specified rounding for doubles specifically, this seems way simpler than dealing with big integer or excessive string or numerical operations.
"%.2f".format(0.714999999999).toDouble
A bit strange, but nice. I use String and not BigDecimal:
def round(x: Double)(p: Int): Double = {
  val parts = x.toString.split('.')
  (parts(0) + "." + parts(1).substring(0, if (p > parts(1).length) parts(1).length else p)).toDouble
}
You can do: Math.round(<double precision value> * 100.0) / 100.0
Math.round is fastest, but it breaks down badly in corner cases with either a very high number of decimal places (e.g. round(1000.0d, 17)) or a large integer part (e.g. round(90080070060.1d, 9)).
Use BigDecimal. It is a bit inefficient, as it converts the values to strings, but it is more reliable:
BigDecimal(<value>).setScale(<places>, RoundingMode.HALF_UP).doubleValue()
Use your preferred rounding mode.
If you are curious and want to know in more detail why this happens, you can read this:
I think the previous answers are:
Plainly wrong: using math.floor, for example, doesn't work for negative values.
Unnecessarily complicated.
Here is a suggestion based on @Kaito's answer (I can't comment yet):
def truncateAt(x: Double, p: Int): Double = {
val s = math.pow(10, p)
(x * s).toInt / s
}
toInt will work for positive and negative values.
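For example, I'd expect this to behave sensibly for a negative input, where the floor-based version would give -1.24 instead:

scala> truncateAt(-1.23456789, 2)
res1: Double = -1.23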

What is the fastest way to write Fibonacci function in Scala?

I've looked over a few implementations of the Fibonacci function in Scala, starting from a very simple one to more complicated ones.
I'm not entirely sure which one is the fastest. I'm leaning towards the impression that the ones that use memoization are faster; however, I wonder why Scala doesn't have native memoization.
Can anyone enlighten me toward the best and fastest (and cleanest) way to write a Fibonacci function?
The fastest versions are the ones that deviate from the usual addition scheme in some way. A very fast approach is a calculation similar to fast binary exponentiation, based on these formulas:
F(2n-1) = F(n)² + F(n-1)²
F(2n) = (2F(n-1) + F(n))*F(n)
Here is some code using it:
def fib(n: Int): BigInt = {
  def fibs(n: Int): (BigInt, BigInt) = if (n == 1) (1, 0) else {
    val (a, b) = fibs(n / 2)
    val p = (2 * b + a) * a
    val q = a * a + b * b
    if (n % 2 == 0) (p, q) else (p + q, p)
  }
  fibs(n)._1
}
Even though this is not very optimized (e.g. the inner loop is not tail recursive), it will beat the usual additive implementations.
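If you want it stack-safe as well, one way (a sketch of my own, using the standard F(0) = 0, F(1) = 1 convention) is to walk the bits of n from the most significant one down, keeping the pair (F(k), F(k+1)) for the bits processed so far:

import scala.annotation.tailrec

def fibDoubling(n: Int): BigInt = {
  @tailrec
  def loop(bit: Int, a: BigInt, b: BigInt): BigInt =   // a = F(k), b = F(k+1)
    if (bit < 0) a
    else {
      val c = a * (2 * b - a)   // F(2k)
      val d = a * a + b * b     // F(2k+1)
      if (((n >> bit) & 1) == 0) loop(bit - 1, c, d)   // next k is 2k
      else loop(bit - 1, d, c + d)                     // next k is 2k + 1
    }
  if (n <= 0) BigInt(0)
  else loop(31 - Integer.numberOfLeadingZeros(n), BigInt(0), BigInt(1))
}

It performs O(log n) BigInt multiplications and uses constant stack.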
For me, the simplest defines a tail-recursive inner function:
def fib: Stream[Long] = {
  def tail(h: Long, n: Long): Stream[Long] = h #:: tail(n, h + n)
  tail(0, 1)
}
This doesn't need to build any Tuple objects for the zip and is easy to understand syntactically.
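A quick check of the first few terms (output as I'd expect):

scala> fib.take(10).toList
res2: List[Long] = List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)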
Scala does have memoization in the form of Streams.
val fib: Stream[BigInt] = 0 #:: 1 #:: fib.zip(fib.tail).map(p => p._1 + p._2)
scala> fib take 100 mkString " "
res22: String = 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 ...
Stream is a LinearSeq so you might like to convert it to an IndexedSeq if you're doing a lot of fib(42) type calls.
However, I would question what your use case is for a Fibonacci function. It will overflow Long in less than 100 terms, so larger terms aren't much use for anything. The smaller terms you can just stick in a table and look them up if speed is paramount. So the details of the computation probably don't matter much since for the smaller terms they're all quick.
If you really want to know the results for very big terms, then it depends on whether you just want one-off values (use Landei's solution) or, if you're making a sufficient number of calls, you may want to pre-compute the whole lot. The problem here is that, for example, the 100,000th element is over 20,000 digits long. So we're talking gigabytes of BigInt values which will crash your JVM if you try to hold them in memory. You could sacrifice accuracy and make things more manageable. You could have a partial-memoization strategy (say, memoize every 100th term) which makes a suitable memory / speed trade-off. There is no clear answer for what is the fastest: it depends on your usage and resources.
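To make the partial-memoization idea a bit more concrete, here is a rough sketch of my own (the class name and step size are made up for illustration): keep only every step-th pair (F(k), F(k+1)) and recompute forward from the nearest stored checkpoint on demand.

class PartialFibMemo(maxN: Int, step: Int = 100) {
  // checkpoints(i) holds the pair (F(i*step), F(i*step + 1))
  private val checkpoints: Vector[(BigInt, BigInt)] =
    Vector.iterate((BigInt(0), BigInt(1)), maxN / step + 1) { start =>
      (1 to step).foldLeft(start) { case ((a, b), _) => (b, a + b) }
    }

  def apply(n: Int): BigInt = {
    require(n >= 0 && n <= maxN, "n outside the precomputed range")
    val start = checkpoints(n / step)
    (1 to n % step).foldLeft(start) { case ((a, b), _) => (b, a + b) }._1
  }
}

Lookups cost at most step additions, while memory stays at roughly 1/step of full memoization.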
This could work. It takes O(1) space and O(n) time to calculate a number, but has no caching.
object Fibonacci {
  def fibonacci(i: Int): Int = {
    def h(last: Int, cur: Int, num: Int): Int = {
      if (num == 0) cur
      else h(cur, last + cur, num - 1)
    }
    if (i < 0) -1
    else if (i == 0 || i == 1) 1
    else h(1, 2, i - 2)
  }
  def main(args: Array[String]) {
    (0 to 10).foreach((x: Int) => print(fibonacci(x) + " "))
  }
}
The answers using Stream (including the accepted answer) are very short and idiomatic, but they aren't the fastest. Streams memoize their values (which isn't necessary in iterative solutions), and even if you don't keep the reference to the stream, a lot of memory may be allocated and then immediately garbage-collected. A good alternative is to use an Iterator: it doesn't cause memory allocations, is functional in style, short and readable.
def fib(n: Int) = Iterator.iterate(BigInt(0), BigInt(1)) { case (a, b) => (b, a+b) }.
map(_._1).drop(n).next
A slightly simpler tail-recursive solution that can calculate Fibonacci for large values of n. The Int version is faster but is limited: when n > 46, integer overflow occurs.
import scala.annotation.tailrec

def tailRecursiveBig(n: Int): BigInt = {
  @tailrec
  def aux(n: Int, next: BigInt, acc: BigInt): BigInt = {
    if (n == 0) acc
    else aux(n - 1, acc + next, next)
  }
  aux(n, 1, 0)
}
This has already been answered, but hopefully you will find my experience helpful. I had a lot of trouble getting my mind around Scala's infinite streams. Then, I watched Paul Agron's presentation where he gave very good suggestions: (1) implement your solution with basic Lists first, and (2) if you are going to generify your solution with parameterized types, create a solution with simple types like Int first.
Using that approach I came up with a really simple (and, for me, easy to understand) solution:
def fib(h: Int, n: Int) : Stream[Int] = { h #:: fib(n, h + n) }
var x = fib(0,1)
println (s"results: ${(x take 10).toList}")
To get to the above solution I first created, as per Paul's advice, the "for-dummies" version, based on simple lists:
def fib(h: Int, n: Int): List[Int] = {
  if (h > 100) {
    Nil
  } else {
    h :: fib(n, h + n)
  }
}
Notice that I short-circuited the list version, because if I didn't it would run forever. But who cares? ;^) It is just an exploratory bit of code.
The code below is both fast and able to compute with high input indices. On my computer it returns the 10^6-th Fibonacci number in less than two seconds. The algorithm is in a functional style but does not use lists or streams. Rather, it is based on the equality \phi^n = F_{n-1} + F_n*\phi, for \phi the golden ratio. (This is a version of "Binet's formula".) The problem with using this equality is that \phi is irrational (involving the square root of five), so it will diverge due to finite-precision arithmetic if interpreted naively using floating-point numbers. However, since \phi^2 = 1 + \phi it is easy to implement exact computations with numbers of the form a + b\phi for a and b integers, and this is what the algorithm below does. (The "power" function has a bit of optimization in it but is really just iteration of the "mult"-multiplication on such numbers.)
type Zphi = (BigInt, BigInt)

val phi = (0, 1): Zphi

val mult: (Zphi, Zphi) => Zphi = {
  (z, w) => (z._1*w._1 + z._2*w._2, z._1*w._2 + z._2*w._1 + z._2*w._2)
}

val power: (Zphi, Int) => Zphi = {
  case (base, ex) if (ex >= 0) => _power((1, 0), base, ex)
  case _ => sys.error("no negative power plz")
}

val _power: (Zphi, Zphi, Int) => Zphi = {
  case (t, b, e) if (e == 0) => t
  case (t, b, e) if ((e & 1) == 1) => _power(mult(t, b), mult(b, b), e >> 1)
  case (t, b, e) => _power(t, mult(b, b), e >> 1)
}

val fib: Int => BigInt = {
  case n if (n < 0) => 0
  case n => power(phi, n)._2
}
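A sanity check of the first values, as I would expect them (with F(0) = 0, F(1) = 1):

scala> (0 to 10).map(fib)
res3: scala.collection.immutable.IndexedSeq[BigInt] = Vector(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55)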
EDIT: An implementation which is more efficient and in a sense also more idiomatic is based on Typelevel's Spire library for numeric computations and abstract algebra. One can then paraphrase the above code in a way much closer to the mathematical argument (we do not need the whole ring structure, but I think it's "morally correct" to include it). Try running the following code:
import spire.implicits._
import spire.algebra._

case class S(fst: BigInt, snd: BigInt) {
  override def toString = s"$fst + $snd" ++ "φ"
}

object S {
  implicit object SRing extends Ring[S] {
    def zero = S(0, 0): S
    def one = S(1, 0): S
    def plus(z: S, w: S) = S(z.fst + w.fst, z.snd + w.snd): S
    def negate(z: S) = S(-z.fst, -z.snd): S
    def times(z: S, w: S) = S(z.fst * w.fst + z.snd * w.snd,
                              z.fst * w.snd + z.snd * w.fst + z.snd * w.snd)
  }
}

object Fibo {
  val phi = S(0, 1)
  val fib: Int => BigInt = n => (phi pow n).snd
  def main(arg: Array[String]) {
    println( fib(1000000) )
  }
}

Scala performance - Sieve

Right now, I am trying to learn Scala. I've started small, writing some simple algorithms. I've encountered some problems when I wanted to implement the Sieve algorithm for finding all prime numbers lower than a certain threshold.
My implementation is:
import scala.math
object Sieve {
  // Returns all prime numbers until maxNum
  def getPrimes(maxNum: Int) = {
    def sieve(list: List[Int], stop: Int): List[Int] = {
      list match {
        case Nil => Nil
        case h :: list if h <= stop => h :: sieve(list.filterNot(_ % h == 0), stop)
        case _ => list
      }
    }
    val stop: Int = math.sqrt(maxNum).toInt
    sieve((2 to maxNum).toList, stop)
  }

  def main(args: Array[String]) = {
    val ap = printf("%d ", (_: Int));
    // works
    getPrimes(1000).foreach(ap(_))
    // works
    getPrimes(100000).foreach(ap(_))
    // out of memory
    getPrimes(1000000).foreach(ap(_))
  }
}
Unfortunately it fails when I want to compute all the prime numbers smaller than 1000000 (1 million). I am receiving an OutOfMemoryError.
Do you have any idea how to optimize the code, or how I can implement this algorithm in a more elegant fashion?
PS: I've done something very similar in Haskell, and there I didn't encounter any issues.
I would go with an infinite Stream. Using a lazy data structure allows you to code pretty much like in Haskell. It automatically reads as more "declarative" than the code you wrote.
import Stream._

val primes = 2 #:: sieve(3)

def sieve(n: Int): Stream[Int] =
  if (primes.takeWhile(p => p * p <= n).exists(n % _ == 0)) sieve(n + 2)
  else n #:: sieve(n + 2)

def getPrimes(maxNum: Int) = primes.takeWhile(_ < maxNum)
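For example, I would expect:

scala> getPrimes(30).toList
res4: List[Int] = List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)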
Obviously, this isn't the most performant approach. Read The Genuine Sieve of Eratosthenes for a good explanation (it's Haskell, but not too difficult). For really big ranges you should consider the Sieve of Atkin.
The code in question is not tail recursive, so Scala cannot optimize the recursion away. Also, Haskell is non-strict by default, so you can hardly compare it to Scala. For instance, whereas Haskell benefits from foldRight, Scala benefits from foldLeft.
There are many Scala implementations of Sieve of Eratosthenes, including some in Stack Overflow. For instance:
(n: Int) => (2 to n) |> (r => r.foldLeft(r.toSet)((ps, x) => if (ps(x)) ps -- (x * x to n by x) else ps))
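The |> here is a forward-pipe operator that is not in the standard library (it comes from Scalaz or a similar helper); a minimal definition you could assume, plus a quick check with a name (primesUpTo) I made up:

implicit class Piper[A](val a: A) extends AnyVal {
  def |>[B](f: A => B): B = f(a)   // pipe the value into the function
}

val primesUpTo = (n: Int) =>
  (2 to n) |> (r => r.foldLeft(r.toSet)((ps, x) => if (ps(x)) ps -- (x * x to n by x) else ps))

println(primesUpTo(30).toList.sorted)   // List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)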
The following answer is about 100 times faster than the "one-liner" answer using a Set (and the results don't need sorting into ascending order). It has more of a functional form than the other answer using an array, although it uses a mutable BitSet as a sieving array:
object SoE {
  def makeSoE_Primes(top: Int): Iterator[Int] = {
    val topndx = (top - 3) / 2
    val nonprms = new scala.collection.mutable.BitSet(topndx + 1)
    def cullp(i: Int) = {
      import scala.annotation.tailrec; val p = i + i + 3
      @tailrec def cull(c: Int): Unit = if (c <= topndx) { nonprms += c; cull(c + p) }
      cull((p * p - 3) >>> 1)
    }
    (0 to (Math.sqrt(top).toInt - 3) >>> 1).filterNot { nonprms }.foreach { cullp }
    Iterator.single(2) ++ (0 to topndx).filterNot { nonprms }.map { i: Int => i + i + 3 }
  }
}
It can be tested by the following code:
object Main extends App {
  import SoE._
  val top_num = 10000000
  val strt = System.nanoTime()
  val count = makeSoE_Primes(top_num).size
  val end = System.nanoTime()
  println(s"Successfully completed without errors. [total ${(end - strt) / 1000000} ms]")
  println(f"Found $count primes up to $top_num" + ".")
  println("Using one large mutable BitSet and functional code.")
}
The results from the above are as follows:
Successfully completed without errors. [total 294 ms]
Found 664579 primes up to 10000000.
Using one large mutable BitSet and functional code.
There is an overhead of about 40 milliseconds for even small sieve ranges, and there are various non-linear responses with increasing range as the size of the BitSet grows beyond the different CPU caches.
It looks like List isn't very efficient space-wise. You can get an out-of-memory exception by doing something like this:
1 to 2000000 toList
I "cheated" and used a mutable array. Didn't feel dirty at all.
def primesSmallerThan(n: Int): List[Int] = {
  val nonprimes = Array.tabulate(n + 1)(i => i == 0 || i == 1)
  val primes = new collection.mutable.ListBuffer[Int]
  for (x <- nonprimes.indices if !nonprimes(x)) {
    primes += x
    for (y <- x * x until nonprimes.length by x if (x * x) > 0) {
      nonprimes(y) = true
    }
  }
  primes.toList
}
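A small check, as I would expect it to behave:

scala> primesSmallerThan(30)
res5: List[Int] = List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)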