I'm reading "Hands-on Scala", and one of its exercises is parallelizing merge sort. I want to know why the for-comprehension version, which can be translated into flatMap and map, takes more time than the zip-and-map version.
My code:
def mergeSortParallel0[T: Ordering](items: IndexedSeq[T]): Future[IndexedSeq[T]] = {
  if (items.length <= 16) Future.successful(mergeSortSequential(items))
  else {
    val (left, right) = items.splitAt(items.length / 2)
    for (
      l <- mergeSortParallel0(left);
      r <- mergeSortParallel0(right)
    ) yield merge(l, r)
  }
}
The standard answer provided by the book:
def mergeSortParallel0[T: Ordering](items: IndexedSeq[T]): Future[IndexedSeq[T]] = {
  if (items.length <= 16) Future.successful(mergeSortSequential(items))
  else {
    val (left, right) = items.splitAt(items.length / 2)
    mergeSortParallel0(left).zip(mergeSortParallel0(right)).map {
      case (sortedLeft, sortedRight) => merge(sortedLeft, sortedRight)
    }
  }
}
flatMap and map are sequential operations on a Scala Future and on their own have nothing to do with running things in parallel. They can be viewed as simple callbacks executed when a Future completes. In other words, the code provided inside map(...) or flatMap(...) will start to execute only when the previous Future has finished.
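To see why the for-comprehension version is sequential, here is roughly what it desugars to (a sketch of the compiler's translation, using the names from the question):

mergeSortParallel0(left).flatMap { l =>
  // the right half is not even constructed, let alone started,
  // until the Future for the left half has completed
  mergeSortParallel0(right).map { r =>
    merge(l, r)
  }
}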
zip, on the other hand, lets your Futures run in parallel: both recursive calls are evaluated (and their Futures started) before zip combines them, and the result is returned as a tuple once both are complete. Similarly, you could use zipWith, which takes a function to transform the results of the two Futures (it combines the zip and map operations):
mergeSortParallel0(left).zipWith(mergeSortParallel0(right)) {
  case (sortedLeft, sortedRight) => merge(sortedLeft, sortedRight)
}
Another way to achieve parallelism is to declare the Futures outside the for-comprehension. This works because Futures in Scala are 'eager': they start running as soon as you declare them (assign them to a val):
def mergeSortParallel0[T: Ordering](items: IndexedSeq[T]): Future[IndexedSeq[T]] = {
  if (items.length <= 16) Future.successful(mergeSortSequential(items))
  else {
    val (left, right) = items.splitAt(items.length / 2)
    val leftF = mergeSortParallel0(left)
    val rightF = mergeSortParallel0(right)
    for {
      sortedLeft <- leftF
      sortedRight <- rightF
    } yield {
      merge(sortedLeft, sortedRight)
    }
  }
}
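For completeness, a hypothetical driver for any of the parallel versions above (a sketch: it assumes merge and mergeSortSequential from the book's chapter are in scope, plus the implicit global ExecutionContext):

import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Random

val input = IndexedSeq.fill(100000)(Random.nextInt())
val sorted = Await.result(mergeSortParallel0(input), Duration.Inf)
assert(sorted == input.sorted)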
Background
I have the following scenario: I want to execute a method of a class from an external library repeatedly, until a certain timeout condition and a result condition (compared to the previous result) are met. Furthermore, I want to collect the return values, including the one from the "failed" run (the run whose "failing" result condition should interrupt further execution).
So far I have accomplished this by initializing an empty var result: Result and a var stop: Boolean, and using a while loop that runs while the conditions hold and modifies the outer state. I would like to get rid of this and use a functional approach.
Some context: each run is expected to take from 0 to 60 minutes, and the total iteration time is capped at 60 minutes. Theoretically there is no bound on how many times it executes in this period, but in practice it is generally 2-60 times.
The problem is that the runs take a long time, so I need to stop the execution. My idea is to use some kind of lazy Iterator or Stream coupled with scanLeft and Option.
Code
Boilerplate
This code isn't particularly relevant, but it is used in my approach samples and provides identical, somewhat random pseudo-runtime results.
import scala.collection.mutable.ListBuffer
import scala.util.Random

val r = Random
r.setSeed(1)

val sleepingTimes: Seq[Int] = (1 to 601)
  .map(x => Math.pow(2, x).toInt * r.nextInt(100))
  .toList
  .filter(_ > 0)
  .sorted

val randomRes = r.shuffle((0 to 600).map(x => r.nextInt(10)).toList)

case class Result(val a: Int, val slept: Int)

class Lib() {
  def run(i: Int) = {
    println(s"running ${i}")
    Thread.sleep(sleepingTimes(i))
    Result(randomRes(i), sleepingTimes(i))
  }
}

case class Baz(i: Int, result: Result)

val lib = new Lib()
val timeout = 10 * 1000
While approach
val iteratorStart = System.currentTimeMillis()
val iterator = for {
  i <- (0 to 600).iterator
  if System.currentTimeMillis() < iteratorStart + timeout
  f = Baz(i, lib.run(i))
} yield f

val iteratorBuffer = ListBuffer[Baz]()
if (iterator.hasNext) iteratorBuffer.append(iterator.next())

var run = true
while (run && iterator.hasNext) {
  val next = iterator.next()
  run = iteratorBuffer.last.result.a < next.result.a
  iteratorBuffer.append(next)
}
Stream approach (Scala 2.12)
Full example
val streamStart = System.currentTimeMillis()
val stream = for {
  i <- (0 to 600).toStream
  if System.currentTimeMillis() < streamStart + timeout
} yield Baz(i, lib.run(i))

var last: Option[Baz] = None
val head = stream.headOption
val tail = if (stream.nonEmpty) stream.tail else stream

val streamVersion = (tail
  .scanLeft((head, true))((x, y) => {
    if (x._1.exists(_.result.a > y.result.a)) (Some(y), false)
    else (Some(y), true)
  })
  .takeWhile {
    case (baz, continue) =>
      if (!baz.eq(head)) last = baz
      continue
  }
  .map(_._1)
  .toList :+ last).flatten
LazyList approach (Scala 2.13)
Full example
val lazyListStart = System.currentTimeMillis()
val lazyList = for {
  i <- (0 to 600).to(LazyList)
  if System.currentTimeMillis() < lazyListStart + timeout
} yield Baz(i, lib.run(i))

var last: Option[Baz] = None
val head = lazyList.headOption
val tail = if (lazyList.nonEmpty) lazyList.tail else lazyList

val lazyListVersion = (tail
  .scanLeft((head, true))((x, y) => {
    if (x._1.exists(_.result.a > y.result.a)) (Some(y), false)
    else (Some(y), true)
  })
  .takeWhile {
    case (baz, continue) =>
      if (!baz.eq(head)) last = baz
      continue
  }
  .map(_._1)
  .toList :+ last).flatten
Result
Both approaches appear to yield the correct end result:
List(Baz(0,Result(4,170)), Baz(1,Result(5,208)))
and they interrupt execution as desired.
Edit: The desired outcome is to not execute the next iteration but still return the result of the iteration that caused the interruption. Thus the desired result is
List(Baz(0,Result(4,170)), Baz(1,Result(5,208)), Baz(2,Result(2,256)))
and lib.run(i) should only run 3 times.
This is achieved by the while approach, as well as the LazyList approach but not the Stream approach which executes lib.run 4 times (Bad!).
Question
Is there another stateless approach, which is hopefully more elegant?
Edit
I realized my examples were faulty and not returning the "failing" result, which it should, and that they kept executing beyond the stop condition. I rewrote the code and examples but I believe the spirit of the question is the same.
I would use something higher level, like fs2.
(or any other high-level streaming library, like: monix observables, akka streams or zio zstreams)
def runUntilOrTimeout[F[_]: Concurrent: Timer, A](work: F[A], timeout: FiniteDuration)
                                                 (stop: (A, A) => Boolean): Stream[F, A] = {
  val interrupt =
    Stream.sleep_(timeout)

  val run =
    Stream
      .repeatEval(work)
      .zipWithPrevious
      .takeThrough {
        case (Some(p), c) if stop(p, c) => false
        case _                          => true
      } map {
        case (_, c) => c
      }

  run mergeHaltBoth interrupt
}
You can see it working here.
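As a rough illustration of how the sketch above might be driven, mirroring the question's Lib and Result (hypothetical names; assuming fs2 2.x on cats-effect 2, where IO needs an implicit ContextShift and Timer, and leaving out the question's per-iteration index handling):

import cats.effect.{ContextShift, IO, Timer}
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
implicit val ioTimer: Timer[IO] = IO.timer(ExecutionContext.global)

// one invocation of the external library wrapped in an effect (index fixed here for brevity)
val oneRun: IO[Result] = IO(lib.run(0))

// collect every result, including the one that triggers the stop condition,
// or stop after 10 seconds, whichever comes first
val collected: IO[List[Result]] =
  runUntilOrTimeout(oneRun, 10.seconds)((prev, cur) => prev.a >= cur.a)
    .compile
    .toList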
Consider the following method, which has been verified to be properly tail recursive:
@tailrec
def getBoundaries(grps: Seq[(BigDecimal, Int)], groupSize: Int, curSum: Int = 0, curOffs: Seq[BigDecimal] = Seq.empty[BigDecimal]): Seq[BigDecimal] = {
  if (grps.isEmpty) curOffs
  else {
    val (id, cnt) = grps.head
    val newSum = curSum + cnt.toInt
    if (newSum % 50 == 0) { println(s"id=$id newsum=$newSum") }
    if (newSum >= groupSize) {
      getBoundaries(grps.tail, groupSize, 0, curOffs :+ id) // r1
    } else {
      getBoundaries(grps.tail, groupSize, newSum, curOffs) // r2
    }
  }
}
This is running very slowly - about 75 loops per second. When I pause and look at the stack trace (a nice feature of IntelliJ), almost every time the line currently being invoked is the second tail-recursive call, r2. That makes me suspicious of the purported "Scala unwraps the recursive calls into a while loop": if the unwrapping were occurring, why are we seeing so much time in the invocations themselves?
Beyond having a properly structured tail-recursive method, are there other considerations for getting a recursive routine to approach the performance of a direct iteration?
The performance will depend on the underlying type of the Seq.
If it is a List, then the problem is appending (:+) to the List: this gets very slow with long lists because it has to scan the whole list to find the end.
One solution is to prepend to the list (+:) each time and then reverse at the end. This can give very significant performance improvements, because adding to the start of a list is very quick.
Other Seq types will have different performance characteristics, but you can convert to a List before the recursive call so that you know how it is going to perform.
Here is sample code
def getBoundaries(grps: Seq[(BigDecimal, Int)], groupSize: Int): Seq[BigDecimal] = {
  @tailrec
  def loop(grps: List[(BigDecimal, Int)], curSum: Int, curOffs: List[BigDecimal]): List[BigDecimal] =
    if (grps.isEmpty) curOffs
    else {
      val (id, cnt) = grps.head
      val newSum = curSum + cnt.toInt
      if (newSum >= groupSize) {
        loop(grps.tail, 0, id +: curOffs) // r1
      } else {
        loop(grps.tail, newSum, curOffs) // r2
      }
    }

  loop(grps.toList, 0, Nil).reverse
}
This version gives a 10x performance improvement over the original code, using the test data provided by the questioner in his own answer to the question.
The issue is not in the recursion but in the array manipulation. With the following test case it runs at about 200K recursions per second:
type Fgroups = Seq[(BigDecimal, Int)]

test("testGetBoundaries") {
  val N = 200000
  val grps: Fgroups = (N to 1 by -1).flatMap { x => Array.tabulate(x % 20) { x2 => (BigDecimal(x2 * 1e9), 1) } }
  val sgrps = grps.sortWith { case (a, b) =>
    a._1.longValue.compare(b._1.longValue) < 0
  }
  val bb = getBoundaries(sgrps, 100)
  println(bb.take(math.min(50, bb.length)).mkString(","))
  assert(bb.length == 1900)
}
My production data sample has a similar number of entries (an Array with 233K rows) but runs three orders of magnitude more slowly. I am looking into the tail operation and other culprits now.
Update: the following reference from Alvin Alexander indicates that the tail operation should be very fast for immutable collections, but deadly slow for long mutable ones, including Arrays!
https://alvinalexander.com/scala/understanding-performance-scala-collections-classes-methods-cookbook
Wow! I had no idea about the performance implications of using mutable collections in Scala!
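A rough way to convince yourself of this (a sketch, not a rigorous benchmark): List.tail just returns the next cons cell, while tail on an array-backed Seq (WrappedArray in Scala 2.12) copies the remaining elements, so draining a collection by repeated tail calls is roughly linear for a List and quadratic for the array-backed version.

def drainByTail[T](xs: Seq[T]): Int = {
  var cur = xs
  var n = 0
  while (cur.nonEmpty) { cur = cur.tail; n += 1 } // tail is O(1) for List, O(n) for WrappedArray
  n
}

val asList: Seq[Int] = (1 to 100000).toList
val asArray: Seq[Int] = (1 to 100000).toArray.toSeq // WrappedArray under the hood in 2.12

// crude timing with System.nanoTime around drainByTail(asList) vs drainByTail(asArray)
// shows the gap described in the linked article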
Update: by adding code to convert the Array to an (immutable) Seq, I see the three-orders-of-magnitude performance improvement on the production data sample:
val grps = if (grpsIn.isInstanceOf[mutable.WrappedArray[_]] || grpsIn.isInstanceOf[Array[_]]) {
  Seq(grpsIn: _*)
} else grpsIn
The (now fast, ~200K recursions/sec) final code is:
type Fgroups = Seq[(BigDecimal, Int)]

val cntr = new java.util.concurrent.atomic.AtomicInteger

@tailrec
def getBoundaries(grpsIn: Fgroups, groupSize: Int, curSum: Int = 0, curOffs: Seq[BigDecimal] = Seq.empty[BigDecimal]): Seq[BigDecimal] = {
  val grps = if (grpsIn.isInstanceOf[mutable.WrappedArray[_]] || grpsIn.isInstanceOf[Array[_]]) {
    Seq(grpsIn: _*)
  } else grpsIn
  if (grps.isEmpty) curOffs
  else {
    val (id, cnt) = grps.head
    val newSum = curSum + cnt.toInt
    if (cntr.getAndIncrement % 500 == 0) { println(s"[${cntr.get}] id=$id newsum=$newSum") }
    if (newSum >= groupSize) {
      getBoundaries(grps.tail, groupSize, 0, curOffs :+ id)
    } else {
      getBoundaries(grps.tail, groupSize, newSum, curOffs)
    }
  }
}
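A slightly simpler variant of the same fix would be to convert once up front (a sketch; getBoundariesFromArray is just an illustrative name), so the per-call isInstanceOf checks become unnecessary:

def getBoundariesFromArray(grpsIn: Fgroups, groupSize: Int): Seq[BigDecimal] =
  getBoundaries(grpsIn.toList, groupSize) // List.tail is O(1), so the recursion stays fast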
I am a Java developer learning Scala at the moment. It is generally accepted that Java is more verbose than Scala. I just need to call 2 or more methods concurrently and then combine the results. The official Scala documentation at docs.scala-lang.org/overviews/core/futures.html suggests using a for-comprehension for that, so I used that out-of-the-box solution straightforwardly. Then I thought about how I would do it with CompletableFuture and was surprised that it produced more concise and faster code than Scala's Future.
Let's consider a basic concurrent case: summing up the values in an array. For simplicity, let's split the array into 2 parts (hence there will be 2 worker threads). Java's sumConcurrently takes only 4 LOC, while Scala's version requires 12 LOC. Also, Java's version is 15% faster on my computer.
Complete code, not benchmark optimised.
Java impl.:
public class CombiningCompletableFuture {

    static int sumConcurrently(List<Integer> numbers) throws ExecutionException, InterruptedException {
        int mid = numbers.size() / 2;
        return CompletableFuture.supplyAsync(() -> sumSequentially(numbers.subList(0, mid)))
                .thenCombine(CompletableFuture.supplyAsync(() -> sumSequentially(numbers.subList(mid, numbers.size()))),
                        (left, right) -> left + right)
                .get();
    }

    static int sumSequentially(List<Integer> numbers) {
        try {
            Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException ignored) { }
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        List<Integer> from1toTen = IntStream.rangeClosed(1, 10).boxed().collect(toList());
        long start = System.currentTimeMillis();
        long sum = sumConcurrently(from1toTen);
        long duration = System.currentTimeMillis() - start;
        System.out.printf("sum is %d in %d ms.", sum, duration);
    }
}
Scala's impl.:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object CombiningFutures extends App {

  def sumConcurrently(numbers: Seq[Int]) = {
    val splitted = numbers.splitAt(5)
    val leftFuture = Future {
      sumSequentally(splitted._1)
    }
    val rightFuture = Future {
      sumSequentally(splitted._2)
    }
    val totalFuture = for {
      left <- leftFuture
      right <- rightFuture
    } yield left + right
    Await.result(totalFuture, Duration.Inf)
  }

  def sumSequentally(numbers: Seq[Int]) = {
    Thread.sleep(1000)
    numbers.sum
  }

  val from1toTen = 1 to 10
  val start = System.currentTimeMillis
  val sum = sumConcurrently(from1toTen)
  val duration = System.currentTimeMillis - start
  println(s"sum is $sum in $duration ms.")
}
Any explanations and suggestions how to improve Scala code without impacting readability too much?
A verbose Scala version of your sumConcurrently:
def sumConcurrently(numbers: List[Int]): Future[Int] = {
  val (v1, v2) = numbers.splitAt(numbers.length / 2)
  for {
    sum1 <- Future(v1.sum)
    sum2 <- Future(v2.sum)
  } yield sum1 + sum2
}
A more concise version
def sumConcurrently2(numbers: List[Int]): Future[Int] =
  numbers.splitAt(numbers.length / 2) match {
    case (l1, l2) => Future.sequence(List(Future(l1.sum), Future(l2.sum))).map(_.sum)
  }
And all this is because we have to partition the list. Let's say we had to write a function which takes a few lists and returns the sum of their sums using multiple concurrent computations:
def sumConcurrently3(lists: List[Int]*): Future[Int] =
  Future.sequence(lists.map(l => Future(l.sum))).map(_.sum)
If the above looks cryptic... then let me de-crypt it,
def sumConcurrently3(lists: List[Int]*): Future[Int] = {
  val listOfFuturesOfSums = lists.map { l => Future(l.sum) }
  val futureOfListOfSums = Future.sequence(listOfFuturesOfSums)
  futureOfListOfSums.map { l => l.sum }
}
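For example, a hypothetical call (with the implicit global ExecutionContext from the question in scope):

val total: Future[Int] = sumConcurrently3(List(1, 2, 3), List(4, 5), List(6, 7, 8, 9))
total.foreach(t => println(s"total = $t")) // prints "total = 45" once all partial sums complete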
Now, whenever you use the result of a Future (let's say the future completes at time t1) in a computation, that computation is bound to happen after time t1. You can do it with blocking, like this in Scala:
val sumFuture = sumConcurrently(List(1, 2, 3, 4))
val sum = Await.result(sumFuture, Duration.Inf)
val anotherSum = sum + 100
println("another sum = " + anotherSum)
But what is the point of all that? You are blocking the current thread while waiting for the computation on those threads to finish. Why not move the whole computation into the future itself?
val sumFuture = sumConcurrently(List(1, 2, 3, 4))
val anotherSumFuture = sumFuture.map(s => s + 100)
anotherSumFuture.foreach(s => println("another sum = " + s))
Now, you are not blocking anywhere and the threads can be used anywhere required.
The Future implementation and API in Scala are designed to enable you to write your program avoiding blocking as far as possible.
For the task at hand, the following is probably not the tersest option either:
def sumConcurrently(numbers: Vector[Int]): Future[Int] = {
  val (v1, v2) = numbers.splitAt(numbers.length / 2)
  Future(v1.sum).zipWith(Future(v2.sum))(_ + _)
}
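For instance, plugged into the question's driver (Await used only to keep the demo shape of the original code):

val total = Await.result(sumConcurrently((1 to 10).toVector), Duration.Inf)
println(s"sum is $total") // 55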
As I mentioned in my comment there are several issues with your example.
I have written a quicksort (method quicksortF()) that uses Scala's Future to let the recursive sorting of the partitions run concurrently. I have also implemented a regular quicksort (method quicksort()). Unfortunately, the Future version ends up in a deadlock (apparently blocking forever) when the list to sort has more than about 1000 elements (900 works). The source is shown below.
I am relatively new to Actors and Futures. What is going wrong here?
Thanks!
import util.Random
import actors.Futures._

/**
 * Quicksort with and without using the Future pattern.
 * @author Markus Gumbel
 */
object FutureQuickSortProblem {

  def main(args: Array[String]) {
    val n = 1000 // works for n = 900 but not for 1000 anymore.

    // Create a random list of size n:
    val list = (1 to n).map(c => Random.nextInt(n * 10)).toList
    println(list)

    // Sort it with regular quicksort:
    val sortedList = quicksort(list)
    println(sortedList)

    // ... and with quicksort using Future (which hangs):
    val sortedListF = quicksortF(list)
    println(sortedListF)
  }

  // This one works.
  def quicksort(list: List[Int]): List[Int] = {
    if (list.length <= 1) list
    else {
      val pivot = list.head
      val leftList = list.filter(_ < pivot)
      val middleList = list.filter(pivot == _)
      val rightList = list.filter(_ > pivot)
      val sortedLeftList = quicksort(leftList)
      val sortedRightList = quicksort(rightList)
      sortedLeftList ::: middleList ::: sortedRightList
    }
  }

  // Almost the same as quicksort() except that Future is used.
  // However, this one hangs.
  def quicksortF(list: List[Int]): List[Int] = {
    if (list.length <= 1) list
    else {
      val pivot = list.head
      val leftList = list.filter(_ < pivot)
      val middleList = list.filter(pivot == _)
      val rightList = list.filter(_ > pivot)

      // Same as quicksort() but here we are using a Future
      // to sort the left and right partitions independently:
      val sortedLeftListFuture = future {
        quicksortF(leftList)
      }
      val sortedRightListFuture = future {
        quicksortF(rightList)
      }
      sortedLeftListFuture() ::: middleList ::: sortedRightListFuture()
    }
  }
}

class FutureQuickSortProblem // If not defined, IntelliJ won't find the main method.?!
Disclaimer: I've never personally used the (pre-2.10) standard library's actors or futures in any serious way, and there are a number of things I don't like (or at least don't understand) about the API there, compared for example to the implementations in Scalaz or Akka or Play 2.0.
But I can tell you that the usual approach in a case like this is to combine your futures monadically instead of claiming them immediately and combining the results. For example, you could write something like this (note the new return type):
import scala.actors.Futures._

def quicksortF(list: List[Int]): Responder[List[Int]] = {
  if (list.length <= 1) future(list)
  else {
    val pivot = list.head
    val leftList = list.filter(_ < pivot)
    val middleList = list.filter(pivot == _)
    val rightList = list.filter(_ > pivot)
    for {
      left <- quicksortF(leftList)
      right <- quicksortF(rightList)
    } yield left ::: middleList ::: right
  }
}
Like your vanilla implementation, this won't necessarily be very efficient, and it will also blow the stack pretty easily, but it shouldn't run out of threads.
As a side note, why does flatMap on a Future return a Responder instead of a Future? I don't know, and neither do some other folks. For reasons like this I'd suggest skipping the now-deprecated pre-2.10 standard library actor-based concurrency stuff altogether.
As I understand it, calling apply on the Future (as you do when concatenating the results of the recursive calls) will block until the result is available.
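For comparison, with the post-2.10 scala.concurrent.Future the same shape can be written without ever blocking on apply() (a minimal sketch, not the original code; like the vanilla version it still isn't stack-safe for badly skewed inputs):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def quicksortF(list: List[Int]): Future[List[Int]] =
  if (list.length <= 1) Future.successful(list)
  else {
    val pivot = list.head
    val leftF = quicksortF(list.filter(_ < pivot))  // both futures are started eagerly
    val rightF = quicksortF(list.filter(_ > pivot))
    val middle = list.filter(_ == pivot)
    for {
      left <- leftF
      right <- rightF
    } yield left ::: middle ::: right
  }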
Much like this question:
Functional code for looping with early exit
Say the code is
def findFirst[T](objects: List[T]): Option[T] = {
  for (obj <- objects) {
    if (expensiveFunc(obj) != null) return /*???*/ Some(obj)
  }
  None
}
How do I yield a single element from a for loop like this in Scala?
I do not want to use find, as proposed in the original question; I am curious about whether and how it could be implemented using the for loop.
* UPDATE *
First, thanks for all the comments, but I guess I was not clear in the question. I am shooting for something like this:
val seven = for {
  x <- 1 to 10
  if x == 7
} return x
And that does not compile. The two errors are:
- return outside method definition
- method main has return statement; needs result type
I know find() would be better in this case; I am just learning and exploring the language. And in a more complex case with several iterators, I think finding with for can actually be useful.
Thanks, commenters, I'll start a bounty to make up for the bad posing of the question :)
If you want to use a for loop, which uses a nicer syntax than chained invocations of .find, .filter, etc., there is a neat trick. Instead of iterating over strict collections like List, iterate over lazy ones like iterators or streams. If you're starting with a strict collection, make it lazy with, e.g., .toIterator.
Let's see an example.
First let's define a "noisy" int, that will show us when it is invoked
def noisyInt(i : Int) = () => { println("Getting %d!".format(i)); i }
Now let's fill a list with some of these:
val l = List(1, 2, 3, 4).map(noisyInt)
We want to look for the first element which is even.
val r1 = for (e <- l; v = e(); if v % 2 == 0) yield v
The above line results in:
Getting 1!
Getting 2!
Getting 3!
Getting 4!
r1: List[Int] = List(2, 4)
...meaning that all elements were accessed. That makes sense, given that the resulting list contains all even numbers. Let's iterate over an iterator this time:
val r2 = (for (e <- l.toIterator; v = e(); if v % 2 == 0) yield v)
This results in:
Getting 1!
Getting 2!
r2: Iterator[Int] = non-empty iterator
Notice that the loop was executed only up to the point where it could figure out whether the result was an empty or non-empty iterator.
To get the first result, you can now simply call r2.next.
If you want a result of an Option type, use:
if(r2.hasNext) Some(r2.next) else None
Edit: your second example in this encoding is just:
val seven = (for {
  x <- (1 to 10).toIterator
  if x == 7
} yield x).next
...of course, you should be sure that there is always at least a solution if you're going to use .next. Alternatively, use headOption, defined for all Traversables, to get an Option[Int].
You can turn your list into a Stream, so that any filters the for-comprehension contains are only evaluated on demand. However, yielding from the Stream will always return a Stream, and what you want is, I suppose, an Option; so, as a final step, you can check whether the resulting Stream has at least one element and return its head as an Option. The headOption function does exactly that.
def findFirst[T](objects: List[T], expensiveFunc: T => Boolean): Option[T] =
  (for (obj <- objects.toStream if expensiveFunc(obj)) yield obj).headOption
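For example (a quick sanity check):

findFirst(List("a", "ab", "abcd", "abcde"), (s: String) => s.length > 3) // Some("abcd")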
Why not do exactly what you sketched above, that is, return from the loop early? If you are interested in what Scala actually does under the hood, run your code with -print. Scala desugars the loop into a foreach and then uses an exception to leave the foreach prematurely.
So what you are trying to do is break out of a loop after your condition is satisfied. The answer here might be what you are looking for: How do I break out of a loop in Scala?
Overall, a for-comprehension in Scala is translated into map, flatMap and filter operations, so it will not be possible to break out of these functions unless you throw an exception.
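For instance, the approach from that link, using scala.util.control.Breaks (which throws a control exception under the hood), applied to the seven example from the question looks roughly like this:

import scala.util.control.Breaks._

var seven: Option[Int] = None
breakable {
  for (x <- 1 to 10) {
    if (x == 7) { seven = Some(x); break() } // break() aborts the enclosing breakable block
  }
}
// seven is now Some(7)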
If you are wondering, this is how find is implemented in LinearSeqOptimized.scala, which List inherits:
override /*IterableLike*/
def find(p: A => Boolean): Option[A] = {
  var these = this
  while (!these.isEmpty) {
    if (p(these.head)) return Some(these.head)
    these = these.tail
  }
  None
}
This is a horrible hack. But it would get you the result you wished for.
Idiomatically you'd use a Stream or View and just compute the parts you need.
def findFirst[T](objects: List[T]): T = {
  def expensiveFunc(o: T) = // unclear what should be returned here
  case class MissusedException(val data: T) extends Exception

  try {
    (for (obj <- objects) {
      if (expensiveFunc(obj) != null) throw new MissusedException(obj)
    })
    objects.head // T must be returned from loop, dummy
  } catch {
    case MissusedException(obj) => obj
  }
}
Why not something like
object Main {
  def main(args: Array[String]): Unit = {
    val seven = (for (
      x <- 1 to 10
      if x == 7
    ) yield x).headOption
  }
}
The variable seven will be an Option holding Some(value) if some value satisfies the condition.
I hope this helps. These are, I think, implementations without 'return':
object TakeWhileLoop extends App {
  println("first non-null: " + func(Seq(null, null, "x", "y", "z")))

  def func[T](seq: Seq[T]): T =
    if (seq.isEmpty) null.asInstanceOf[T]
    else seq(seq.takeWhile(_ == null).size)
}

object OptionLoop extends App {
  println("first non-null: " + func(Seq(null, null, "x", "y", "z")))

  def func[T](seq: Seq[T], index: Int = 0): T =
    if (seq.isEmpty) null.asInstanceOf[T]
    else Option(seq(index)) getOrElse func(seq, index + 1)
}

object WhileLoop extends App {
  println("first non-null: " + func(Seq(null, null, "x", "y", "z")))

  def func[T](seq: Seq[T]): T =
    if (seq.isEmpty) null.asInstanceOf[T]
    else {
      var i = 0
      def obj = seq(i)
      while (obj == null)
        i += 1
      obj
    }
}
objects.iterator filter { obj => expensiveFunc(obj) != null } next
The trick is to get some lazily evaluated view on the collection: either an iterator, a Stream, or objects.view. The filter will only execute as far as needed.
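The same idea with a view looks roughly like this (a sketch; findFirstLazily is an illustrative name, and expensiveFunc is assumed to signal "no match" by returning null, as in the question):

def findFirstLazily[T](objects: List[T])(expensiveFunc: T => AnyRef): Option[T] =
  objects.view.filter(obj => expensiveFunc(obj) != null).headOption // evaluates expensiveFunc only until the first hit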