I have written this function in Scala to calculate the Fibonacci number for a given index n:
def fibonacci(n: Long): Long = {
  if (n <= 1) n
  else fibonacci(n - 1) + fibonacci(n - 2)
}
However, it is not efficient for large indexes. I therefore need to implement a function using a tuple, and this function should return two consecutive Fibonacci values as the result.
Can somebody give me any hints about this? I have never used Scala before. Thanks!
This question should maybe go to Mathematics.
There is an explicit formula for the Fibonacci sequence. If you need to calculate the Fibonacci number for n without the previous ones, this is much faster. You find it here (Binet's formula): http://en.wikipedia.org/wiki/Fibonacci_number
Here's a simple tail-recursive solution:
def fibonacci(n: Long): Long = {
  def fib(i: Long, x: Long, y: Long): Long = {
    if (i > 0) fib(i - 1, x + y, x)
    else x
  }
  fib(n, 0, 1)
}
The solution you posted takes exponential time since it creates two recursive invocation trees (fibonacci(n - 1) and fibonacci(n - 2)) at each step. By simply tracking the last two numbers, you can recursively compute the answer without any repeated computation.
Can you explain the middle part, i.e. why (i-1, x+y, x)? Sorry if I am asking too much, but I hate to copy and paste code without knowing how it works.
It's pretty simple, but my poor choice of variable names might have made it confusing.
i is simply a counter saying how many steps we have left. If we're calculating the Mth (I'm using M since I already used n in my code) Fibonacci number, then i tells us how many more terms we have left to calculate before we reach the Mth term.
x is the mth term in the Fibonacci sequence, or F(m) (where m = M - i).
y is the (m-1)th term in the Fibonacci sequence, or F(m-1).
So, on the first call fib(n, 0, 1), we have i=M, x=0, y=1. If you look up the bidirectional Fibonacci sequence, you'll see that F(0) = 0 and F(-1) = 1, which is why x=0 and y=1 here.
On the next recursive call, fib(i-1, x+y, x), we pass x+y as our next x value. This comes straight from the definition:
F(n) = F(n-1) + F(n-2)
We pass x as the next y term, since our current F(n-1) is the same as F(n-2) for the next term.
On each step we decrement i since we're one step closer to the final answer.
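For example, tracing fibonacci(5) (so M = 5), each call moves x and y one term forward:

fib(5, 0, 1)   // x = F(0) = 0, y = F(-1) = 1
fib(4, 1, 0)   // x = F(1) = 1
fib(3, 1, 1)   // x = F(2) = 1
fib(2, 2, 1)   // x = F(3) = 2
fib(1, 3, 2)   // x = F(4) = 3
fib(0, 5, 3)   // i = 0, so we return x = F(5) = 5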
I am assuming that you don't have saved values from previous computations. If so, it will be faster for you to use the direct formula based on the golden ratio instead of the recursive definition. The formula can be found in the Wikipedia page for Fibonacci number:
floor(pow(phi, n)/root_of_5 + 0.5)
where phi = (1 + sqrt(5)) / 2.
I have no knowledge of programming in Scala. I am hoping someone on SO will upgrade my pseudo-code to actual Scala code.
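For reference, a minimal Scala sketch of the pseudo-code above (Double precision only holds up to roughly n = 70, so treat this as illustrative):

// Binet's formula; fibBinet is my name for it
def fibBinet(n: Int): Long = {
  val phi = (1 + math.sqrt(5)) / 2
  math.floor(math.pow(phi, n) / math.sqrt(5) + 0.5).toLong
}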
Update
Here's another solution, again using Streams (getting memoization for free) but a bit more intuitive (i.e., without the zip/tail invocation on the fibs Stream):
val fibs = Stream.iterate((0, 1)) { case (a, b) => (b, a + b) }.map(_._1)
that yields the same output as below for:
fibs take 5 foreach println
Scala supports memoization through Stream, which is an implementation of lazy lists. This is a perfect fit for a Fibonacci implementation, which is actually provided as an example in the Scala API docs for Stream. Quoting here:
import scala.math.BigInt
object Main extends App {
val fibs: Stream[BigInt] = BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { n => n._1 + n._2 }
fibs take 5 foreach println
}
// prints
//
// 0
// 1
// 1
// 2
// 3
Related
val dimensionality = 10
val zeros = DenseVector.zeros[Int](dimensionality)
@tailrec private def specials(list: List[DenseVector[Int]], i: Int): List[DenseVector[Int]] = {
  if (i >= dimensionality) list
  else {
    val vec = zeros.copy
    vec(i to i) := 1
    specials(vec :: list, i + 1)
  }
}
val specialList = specials(Nil, 0).toVector
specialList.map(...doing my thing...)
Should I write my tail recursive function using a List as accumulator above and then write
specials(Nil, 0).toVector
or should I write my tail recursion with a Vector in the first place? What is computationally more efficient?
By the way: specialList is a list that contains DenseVectors where every entry is 0 with the exception of one entry, which is 1. There are as many DenseVectors as each vector is long (i.e., they form the standard basis vectors).
I'm not sure what you're trying to do here, but you could rewrite your code like so:
type Mat = List[Vector[Int]]
@tailrec
private def specials(mat: Mat, i: Int): Mat = i match {
  case `dimensionality` => mat
  case _ =>
    val v = zeros.copy.updated(i, 1)
    specials(v :: mat, i + 1)
}
As you are dealing with a matrix, Vector is probably a better choice.
Let's compare the performance characteristics of both variants:
List: prepending takes constant time, conversion to Vector takes linear time.
Vector: prepending takes "effectively" constant time (eC), no subsequent conversion needed.
If you compare the implementations of List and Vector, then you'll find out that prepending to a List is a simpler and cheaper operation than prepending to a Vector. Instead of just adding another element at the front as it is done by List, Vector potentially has to replace a whole branch/subtree internally. On average, this still happens in constant time ("effectively" constant, because the subtrees can differ in their size), but is more expensive than prepending to List. On the plus side, you can avoid the call to toVector.
Eventually, the crucial point of interest is the size of the collection you want to create (or in other words, the amount of recursive prepend-steps you are doing). It's totally possible that there is no clear winner and one of the two variants is faster for <= n steps, whereas the other variant is faster for > n steps. In my naive toy benchmark, List/toVector seemed to be faster for less than 8k elements, but you should perform a set of well-chosen benchmarks that represent your scenario adequately.
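For illustration, a naive timing sketch along the lines of that toy benchmark (the helper time and the element count are my choices; use a proper harness such as JMH for real measurements):

// crude wall-clock timer; returns elapsed nanoseconds
def time[A](body: => A): Long = {
  val t0 = System.nanoTime()
  body
  System.nanoTime() - t0
}

val n = 8000
// Variant 1: prepend to a List, then convert once at the end
val listNanos = time((0 until n).foldLeft(List.empty[Int])((acc, i) => i :: acc).toVector)
// Variant 2: prepend to a Vector directly, no conversion needed
val vectorNanos = time((0 until n).foldLeft(Vector.empty[Int])((acc, i) => i +: acc))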
I'm new to functional programming, so some problems seem harder to solve using a functional approach.
Let's say I have a list of numbers, like 1 to 10,000, and I want to get the items of the list whose sum is at most a number n (let's say 100). So, it would take numbers until their sum would exceed 100.
In imperative programming, it's trivial to solve this problem, because I can keep a running variable across iterations, and stop once the objective is met.
But how can I do the same in functional programming? Since the sum function operates on completed lists, and I still don't have the completed list, how can I 'carry on' the computation?
If sum was lazily computed, I could write something like that:
(1 to 10000).sum.takeWhile(_ < 100)
P.S.: Even though any answer will be appreciated, I'd like one that doesn't recompute the sum each time, since obviously the imperative version will be much faster.
Edit:
I know that I can "convert" the imperative loop approach to a functional recursive function. I'm more interested in finding if one of the existing library functions can provide a way for me not to write one each time I need something.
Use Stream.
scala> val ss = Stream.from(1).take(10000)
ss: scala.collection.immutable.Stream[Int] = Stream(1, ?)
scala> ss.scanLeft(0)(_ + _)
res60: scala.collection.immutable.Stream[Int] = Stream(0, ?)
scala> res60.takeWhile(_ < 100).last
res61: Int = 91
EDIT:
Obtaining components is not very tricky either. This is how you can do it:
scala> ss.scanLeft((0, Vector.empty[Int])) { case ((sum, compo), cur) => (sum + cur, compo :+ cur) }
res62: scala.collection.immutable.Stream[(Int, scala.collection.immutable.Vector[Int])] = Stream((0,Vector()), ?)
scala> res62.takeWhile(_._1 < 100).last
res63: (Int, scala.collection.immutable.Vector[Int]) = (91,Vector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13))
The second part of the tuple is your desired result.
As should be obvious, in this case, building a vector is wasteful. Instead, we can store just the index of the last number from the stream that contributed to the sum.
scala> ss.scanLeft(0)(_ + _).zipWithIndex
res64: scala.collection.immutable.Stream[(Int, Int)] = Stream((0,0), ?)
scala> res64.takeWhile(_._1 < 100).last._2
res65: Int = 13
The way I would do this is with recursion. On each call, add the next number. Your base case is when the sum is greater than 100, at which point you return all the way up the stack. You'll need a helper function to do the actual recursion, but that's no big deal.
This isn't hard using "functional" methods either.
Using recursion, rather than maintaining your state in a local variable that you mutate, you keep it in parameters and return values.
So, to return the longest initial part of a list whose sum is at most N:
If the list is empty, you're done; return the empty list.
If the head of the list is greater than N, you're done; return the empty list.
Otherwise, let H be the head of the list.
All we need now is the initial part of the tail of the list whose sum is at most N - H, then we can "cons" H onto that list, and we're done.
We can compute this recursively using the same procedure as we have used this far, so it's an easy step.
A simple pseudocode solution:
sum_to(n, ls) = if isEmpty ls or n < (head ls)
                then Nil
                else (head ls) :: sum_to(n - head ls, tail ls)
sum_to(100, some_list)
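A direct Scala translation of that pseudocode might look like this (sumTo is my name for it):

def sumTo(n: Int, ls: List[Int]): List[Int] = ls match {
  case h :: t if h <= n => h :: sumTo(n - h, t) // take the head, reduce the budget
  case _                => Nil                  // empty list or head too large
}

sumTo(100, (1 to 10000).toList) // List(1, 2, ..., 13)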
All sequence operations which require only one pass through the sequence can be implemented using folds (or reduce, as it is sometimes called).
I find myself using folds very often since I became used to functional programming,
so here is one possible approach.
Use an empty collection as the initial value and fold according to this strategy:
given the accumulated collection and the new value, check if their sum is low enough; if so, append the value to the collection, else do nothing.
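A sketch of that strategy on the question's example (deliberately naive; acc.sum is recomputed on every step):

val result = (1 to 10000).foldLeft(Vector.empty[Int]) { (acc, x) =>
  if (acc.sum + x <= 100) acc :+ x else acc
}
// result: Vector(1, 2, ..., 13)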
That solution is not very efficient, but I want to emphasize the following:
map, fold, filter, zip, etc. are the way to get accustomed to functional programming. Try to use them as much as possible instead of looping constructs or recursive functions; your code will be more declarative and functional.
I have the following simple code:
def fib(i:Long,j:Long):Stream[Long] = i #:: fib(j, i+j)
(0l /: fib(1,1).take(10000000)) (_+_)
And it throws an OutOfMemoryError.
I cannot understand why, because I think all the parts use constant memory, i.e. lazily evaluated streams and foldLeft...
The following code doesn't work either:
fib(1,1).take(10000000).sum, or max, min, etc.
How do I correctly implement infinite streams and perform iterative operations on them?
Scala version: 2.9.0
Also, the Scala docs say that the foldLeft operation is memory-safe for streams:
/** Stream specialization of foldLeft which allows GC to collect
* along the way.
*/
@tailrec
override final def foldLeft[B](z: B)(op: (B, A) => B): B = {
if (this.isEmpty) z
else tail.foldLeft(op(z, head))(op)
}
EDIT:
An implementation with iterators is still not useful, since it throws a StackOverflowError:
def fib(i:Long,j:Long): Iterator[Long] = Iterator(i) ++ fib(j, i + j)
How does one correctly define an infinite stream/iterator in Scala?
EDIT2:
I don't care about integer overflow, I just want to understand how to create an infinite stream/iterator etc. in Scala without side effects.
The reason to use Stream instead of Iterator is so that you don't have to calculate all the small terms in the series over again. But this means that you need to store ten million stream nodes. These are pretty large, unfortunately, so that could be enough to overflow the default memory. The only realistic way to overcome this is to start with more memory (e.g. scala -J-Xmx2G). (Also, note that you're going to overflow Long by an enormous margin; the Fibonacci series increases pretty quickly.)
P.S. The iterator implementation I have in mind is completely different; you don't build it out of concatenated singleton Iterators:
def fib(i: Long, j: Long) = Iterator.iterate((i,j)){ case (a,b) => (b,a+b) }.map(_._1)
Now when you fold, past results can be discarded.
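For example, the fold from the question then runs in constant memory (Long still overflows long before ten million terms, but there is no OutOfMemoryError):

fib(1, 1).take(10000000).foldLeft(0L)(_ + _)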
The OutOfMemoryError happens independently of the fact that you use Stream. As Rex Kerr mentioned above, Stream -- unlike Iterator -- stores everything in memory. The difference from List is that the elements of a Stream are calculated lazily, but once you reach 10000000, there will be 10000000 elements, just like in a List.
Try with new Array[Int](10000000); you will have the same problem.
To calculate the Fibonacci numbers as above, you may want to use a different approach. You can take into account the fact that you only need to keep the two most recent numbers, instead of all the Fibonacci numbers discovered so far.
For example:
scala> def fib(i:Long,j:Long): Iterator[Long] = Iterator(i) ++ fib(j, i + j)
fib: (i: Long,j: Long)Iterator[Long]
And to get, for example, the index of the first fibonacci number exceeding 1000000:
scala> fib(1, 1).indexWhere(_ > 1000000)
res12: Int = 30
Edit: I added the following to cope with the StackOverflowError.
If you really want to work with the 1 millionth Fibonacci number, the iterator definition above will not work either, due to a StackOverflowError. The following is the best I have in mind at the moment:
class FibIterator extends Iterator[BigDecimal] {
  var i: BigDecimal = 1
  var j: BigDecimal = 1
  def next = {
    val temp = i
    i = i + j
    j = temp
    j
  }
  def hasNext = true
}
scala> new FibIterator().take(1000000).foldLeft(0:BigDecimal)(_ + _)
res49: BigDecimal = 82742358764415552005488531917024390424162251704439978804028473661823057748584031
0652444660067860068576582339667553466723534958196114093963106431270812950808725232290398073106383520
9370070837993419439389400053162345760603732435980206131237515815087375786729469542122086546698588361
1918333940290120089979292470743729680266332315132001038214604422938050077278662240891771323175496710
6543809955073045938575199742538064756142664237279428808177636434609546136862690895665103636058513818
5599492335097606599062280930533577747023889877591518250849190138449610994983754112730003192861138966
1418736269315695488126272680440194742866966916767696600932919528743675517065891097024715258730309025
7920682881137637647091134870921415447854373518256370737719553266719856028732647721347048627996967...
@yura's problem:
def fib(i:Long,j:Long):Stream[Long] = i #:: fib(j, i+j)
(0l /: fib(1,1).take(10000000)) (_+_)
besides using a Long which can't possibly hold the Fibonacci of 10,000,000, it does work. That is, if the foldLeft is written as:
fib(1,1).take(10000000).foldLeft(0L)(_+_)
Looking at the Streams.scala source, foldLeft() is clearly designed for garbage collection, but /: is not def'd there.
The other answers alluded to another problem. The Fibonacci of 10 million is a big number, and if BigInt is used, instead of just overflowing as with a Long, absolutely enormous numbers are being added to each other over and over again.
Since Stream.foldLeft is optimized for GC it does look like the way to solve for really big Fibonacci numbers, rather than using a zip or tail recursion.
// Fibonacci using BigInt
def fib(i:BigInt,j:BigInt):Stream[BigInt] = i #:: fib(j, i+j)
fib(1,0).take(10000000).foldLeft(BigInt("0"))(_+_)
Results of the above code: 10,000,000 is an 8-figure number. How many figures are in fib(10000000)? 2,089,877.
fib(1,1).take(10000000) is the "this" of the method /:; it is likely that the JVM will consider the reference alive as long as the method runs, even though in this case it might get rid of it.
So you keep a reference to the head of the stream all along, and hence to the whole stream as you build it up to 10M elements.
You could just use recursion, which is about as simple:
def fibSum(terms: Int, i: Long = 1, j: Long = 1, total: Long = 2): Long = {
  if (terms == 2) total
  else fibSum(terms - 1, j, i + j, total + i + j)
}
With this, you can "fold" a billion elements in only a couple of seconds, but as Rex points out, summing the Fibonacci sequence overflows Long very quickly.
If you really wanted to know the answer to your original problem and don't mind sacrificing some accuracy you could do this:
def fibSum(terms: Int, i: Double = 1, j: Double = 1, tot: Double = 2,
           exp: Int = 0): String = {
  if (terms == 2) "%.6f".format(tot) + " E+" + exp
  else {
    val (i1, j1, tot1, exp1) =
      if (tot + i + j > 10) (i / 10, j / 10, tot / 10, exp + 1)
      else (i, j, tot, exp)
    fibSum(terms - 1, j1, i1 + j1, tot1 + i1 + j1, exp1)
  }
}
scala> fibSum(10000000)
res54: String = 2.957945 E+2089876
I run out of memory while finding the 10,001st prime number.
object Euler0007 {
  def from(n: Int): Stream[Int] = n #:: from(n + 1)
  def sieve(s: Stream[Int]): Stream[Int] = s.head #:: sieve(s.filter(_ % s.head != 0))
  def primes = sieve(from(2))
  def main(args: Array[String]): Unit = {
    println(primes(10001))
  }
}
Is this because after each "iteration" (is this the correct term in this context?) of primes, I increase the stack of functions to be called to get the next element by one?
One solution that I've found on the web which doesn't resort to an iterative solution (which I'd like to avoid in order to get into functional programming/idiomatic Scala) is this (Problem 7):
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i => ps.takeWhile(j => j * j <= i).forall(i % _ > 0))
From what I can see, this does not build up the same chain of recursive filters. Is this a good way to do it, or do you know of a better way?
One reason why this is slow is that it isn't the sieve of Eratosthenes. Read http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf for a detailed explanation (the examples are in Haskell, but can be translated directly into Scala).
My old solution for Euler problem #7 wasn't the "true" sieve either, but it seems to work well enough for small numbers:
object Sieve {
  val primes = 2 #:: sieve(3)

  def sieve(n: Int): Stream[Int] =
    if (primes.takeWhile(p => p * p <= n).exists(n % _ == 0)) sieve(n + 2)
    else n #:: sieve(n + 2)

  def main(args: Array[String]) {
    println(primes(10000)) // note that indexes are zero-based
  }
}
I think the problem with your first version is that you have only defs and no val which collects the results and can be consulted by the generating function, so you always recalculate from scratch.
Yes, it is because you "increase the stack of functions to be called to get the next element, by one" after each "iteration", i.e. you add a new filter on top of the stack of filters each time after getting each prime. That's way too many filters.
This means that each produced prime gets tested by all its preceding primes - but only those below its square root are really needed. For instance, to get the 10001-th prime, 104743, there will be 10000 filters created, at run-time. But there are just 66 primes below 323, the square root of 104743, so only 66 filters were really needed. All the 9934 others will be there needlessly, taking up memory, hard at work producing absolutely no added value.
This is the key deficiency of that "functional sieve", which seems to have originated in 1970s code by David Turner, and later found its way into the SICP book and other places. It is not that it's a trial division sieve (rather than the sieve of Eratosthenes). That's far too remote a concern for it. Trial division, when optimally implemented, is perfectly capable of producing the 10000th prime very fast.
The key deficiency of that code is that it does not postpone the creation of filters to the right moment, and ends up creating far too many of them.
Talking complexities now, the "old sieve" code is O(n^2), in n primes produced. The optimal trial division is O(n^1.5/log^0.5(n)), and the sieve of Eratosthenes is O(n*log(n)*log(log(n))). As empirical orders of growth, the first is seen typically as ~ n^2, the second as ~ n^1.45, and the third as ~ n^1.2.
You can find Python generators-based code for optimal trial division implemented in this answer (2nd half of it). It was originally discussed here dealing with the Haskell equivalent of your sieve function.
Just as an illustration, a "readable pseudocode" :) for the old sieve is
primes = sieve [2..] where
sieve (x:xs) = x : sieve [ y | y <- xs, rem y x > 0 ]
-- list of 'y's, drawn from 'xs',
-- such that (y % x > 0)
and for optimal trial division (TD) sieve, synchronized on primes' squares,
primes = sieve [2..] primes where
sieve (x:xs) ps = x : (h ++ sieve [ y | y <- t, rem y p > 0 ] qs)
where
(p:qs) = ps -- 'p' is head elt in 'ps', and 'qs' the rest
(h,t) = span (< p*p) xs -- 'h' are elts below p^2 in 'xs'
-- and 't' are the rest
and for a sieve of Eratosthenes, devised by Richard Bird, as seen in that JFP article mentioned in another answer here,
primes = 2 : minus [3..]
(foldr (\p r-> p*p : union [p*p+p, p*p+2*p..] r) [] primes)
-- function of 'p' and 'r', that returns
-- a list with p^2 as its head elt, ...
Short and fast. (minus a b is a list a with all the elts of b progressively removed from it; union a b is a list a with all the elts of b progressively added to it without duplicates; both deal with ordered, non-decreasing lists.) foldr is the right fold of a list. Because it is linear, this runs at ~ n^1.33; to make it run at ~ n^1.2, the tree-like folding function foldi can be used.
The answer to your second question is also a yes. Your second code, re-written in the same "pseudocode",
ps = 2 : [i | i <- [3..], all ((> 0).rem i) (takeWhile ((<= i).(^2)) ps)]
is very similar to the optimal TD sieve above - both arrange for each candidate to be tested by all primes below its square root. While the sieve arranges that with a run-time sequence of postponed filters, the latter definition re-fetches the needed primes anew for each candidate. One might be faster than another depending on a compiler, but both are essentially the same.
And the third is also a yes: the sieve of Eratosthenes is better,
ps = 2 : 3 : minus [5,7..] (unionAll [[p*p, p*p+2*p..] | p <- drop 1 ps])
unionAll = foldi union' [] -- one possible implementation
union' (x:xs) ys = x : union xs ys
-- unconditionally produce first elt of the 1st arg
-- to avoid run-away access to infinite lists
It looks like it can be implemented in Scala too, judging by the similarity of other code snippets. (Though I don't know Scala). unionAll here implements tree-like folding structure (click for a picture and full code) but could also be implemented with a sliding array, working segment by segment along the streams of primes' multiples.
TL;DR: yes, yes, and yes.
FWIW, here's a real Sieve of Eratosthenes:
def sieve(n: Int) = (2 to math.sqrt(n).toInt).foldLeft((2 to n).toSet) { (ps, x) =>
  if (ps(x)) ps -- (x * x to n by x)
  else ps
}
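For example (element order of the resulting Set is unspecified):

sieve(30) // Set(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)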
Here's an infinite stream of primes using a variation on the Sieve of Eratosthenes that preserves its fundamental properties:
case class Cross(next: Int, incr: Int)

def adjustCrosses(crosses: List[Cross], current: Int) = {
  crosses map {
    case cross @ Cross(`current`, incr) => cross copy (next = current + incr)
    case unchangedCross                 => unchangedCross
  }
}

def notPrime(crosses: List[Cross], current: Int) = crosses exists (_.next == current)

def sieve(s: Stream[Int], crosses: List[Cross]): Stream[Int] = {
  val current #:: rest = s
  if (notPrime(crosses, current)) sieve(rest, adjustCrosses(crosses, current))
  else current #:: sieve(rest, Cross(current * current, current) :: crosses)
}

def primes = sieve(Stream from 2, Nil)
This is somewhat difficult to use, however, since each element of the Stream is composed using the crosses list, which contains one entry for each prime found so far, and it seems that, for some reason, these lists are being kept in memory for each number in the Stream.
For example, prompted by a comment, primes take 6000 contains 56993 would throw a GC exception whereas primes drop 5000 take 1000 contains 56993 would return a result rather fast on my tests.
Say I have a function, for example the old favourite
def factorial(n:Int) = (BigInt(1) /: (1 to n)) (_*_)
Now I want to find the biggest value of n for which factorial(n) fits in a Long. I could do
(1 to 100) takeWhile (factorial(_) <= Long.MaxValue) last
This works, but the 100 is an arbitrary large number; what I really want on the left hand side is an infinite stream that keeps generating higher numbers until the takeWhile condition is met.
I've come up with
val s = Stream.continually(1).zipWithIndex.map(p => p._1 + p._2)
but is there a better way?
(I'm also aware I could get a solution recursively but that's not what I'm looking for.)
Stream.from(1)
creates a stream starting from 1 and incrementing by 1. It's all in the API docs.
A Solution Using Iterators
You can also use an Iterator instead of a Stream. The Stream keeps references to all computed values. So if you plan to visit each value only once, an iterator is a more efficient approach. The downside of the iterator is its mutability, though.
There are some nice convenience methods for creating Iterators defined on its companion object.
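For example, Iterator.from and Iterator.iterate both live on the companion object:

val naturals  = Iterator.from(1)           // 1, 2, 3, ...
val naturals2 = Iterator.iterate(1)(_ + 1) // the same sequence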
Edit
Unfortunately there's no short (library supported) way I know of to achieve something like
Stream.from(1) takeWhile (factorial(_) <= Long.MaxValue) last
The approach I take to advance an Iterator for a certain number of elements is drop(n: Int) or dropWhile:
Iterator.from(1).dropWhile( factorial(_) <= Long.MaxValue).next - 1
The - 1 works for this special purpose but is not a general solution. It should be no problem, though, to implement a last method on an Iterator using the pimp-my-library pattern. The problem is that taking the last element of an infinite Iterator could be problematic, so it should be implemented as a method like lastWith that integrates the takeWhile.
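A sketch of such a lastWith enrichment (the name and the Option result type are my choices, not standard library API):

implicit class RichIterator[A](it: Iterator[A]) {
  // returns the last element satisfying p, taken from the initial run
  // of elements for which p holds; None if p fails immediately
  def lastWith(p: A => Boolean): Option[A] = {
    var last: Option[A] = None
    var done = false
    while (!done && it.hasNext) {
      val a = it.next()
      if (p(a)) last = Some(a) else done = true
    }
    last
  }
}

Iterator.from(1).lastWith(i => factorial(i) <= Long.MaxValue) // Some(20)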
An ugly workaround can be done using sliding, which is implemented for Iterator:
scala> Iterator.from(1).sliding(2).dropWhile(_.tail.head < 10).next.head
res12: Int = 9
As @ziggystar pointed out, Stream keeps the list of previously computed values in memory, so using an Iterator is a great improvement.
To further improve the answer, I would argue that "infinite streams" are usually computed (or can be computed) based on previously computed values. If this is the case (and in your factorial stream it definitely is), I would suggest using Iterator.iterate instead.
It would look roughly like this:
scala> val it = Iterator.iterate((1,BigInt(1))){case (i,f) => (i+1,f*(i+1))}
it: Iterator[(Int, scala.math.BigInt)] = non-empty iterator
Then, you could do something like:
scala> it.find(_._2 >= Long.MaxValue).map(_._1).get - 1
res0: Int = 20
Or use @ziggystar's sliding solution...
Another easy example that comes to mind would be Fibonacci numbers:
scala> val it = Iterator.iterate((1,1)){case (a,b) => (b,a+b)}.map(_._1)
it: Iterator[Int] = non-empty iterator
In these cases, you're not computing each new element from scratch every time, but rather doing O(1) work per new element, which improves your running time even more.
The original "factorial" function is not optimal, since factorials are computed from scratch every time. The simplest/immutable implementation using memoization is like this:
val f : Stream[BigInt] = 1 #:: (Stream.from(1) zip f).map { case (x,y) => x * y }
And now, the answer can be computed like this:
println( "count: " + (f takeWhile (_<Long.MaxValue)).length )
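This prints count: 21; since the stream starts at 0! = 1, the largest n for which factorial(n) fits in a Long is 20.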
The following variant does not test the current, but the next integer, in order to find and return the last valid number:
Iterator.from(1).find(i => factorial(i+1) > Long.MaxValue).get
Using .get here is acceptable, since find on an infinite sequence will never return None.