Scala Infinite Iterator OutOfMemory

I'm playing around with Scala's lazy iterators, and I've run into an issue. What I'm trying to do is read in a large file, do a transformation, and then write out the result:
import java.io.PrintWriter
import scala.io.Source

object FileProcessor {
  def main(args: Array[String]) {
    val inSource = Source.fromFile("in.txt")
    val outSource = new PrintWriter("out.txt")
    try {
      // this "basic" lazy iterator works fine
      // val iterator = inSource.getLines
      // ...but this one, which incorporates my process method,
      // throws OutOfMemoryExceptions
      val iterator = process(inSource.getLines.toSeq).iterator
      while (iterator.hasNext) outSource.println(iterator.next)
    } finally {
      inSource.close()
      outSource.close()
    }
  }

  // processing in this case just means upper-casing every line
  private def process(contents: Seq[String]) = contents.map(_.toUpperCase)
}
So I'm getting an OutOfMemoryException on large files. I know you can run afoul of Scala's lazy Streams if you keep around references to the head of the Stream. So in this case I'm careful to convert the result of process() to an iterator and throw away the Seq it initially returns.
Does anyone know why this still causes O(n) memory consumption? Thanks!
Update
In response to fge and huynhjl, it seems like the Seq might be the culprit, but I don't know why. As an example, the following code works fine (and I'm using Seq all over the place). This code does not produce an OutOfMemoryException:
import java.io.PrintWriter
import scala.io.Source

object FileReader {
  def main(args: Array[String]) {
    val inSource = Source.fromFile("in.txt")
    val outSource = new PrintWriter("out.txt")
    try {
      writeToFile(outSource, process(inSource.getLines.toSeq))
    } finally {
      inSource.close()
      outSource.close()
    }
  }

  @scala.annotation.tailrec
  private def writeToFile(outSource: PrintWriter, contents: Seq[String]) {
    if (!contents.isEmpty) {
      outSource.println(contents.head)
      writeToFile(outSource, contents.tail)
    }
  }

  private def process(contents: Seq[String]) = contents.map(_.toUpperCase)
}

As hinted by fge, modify process to take an iterator and remove the .toSeq; inSource.getLines is already an iterator.
Converting to a Seq causes the items to be retained: I think it wraps the iterator in a Stream, so every element that has been produced stays reachable.
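A minimal sketch of that change, reusing the names from the question's FileProcessor:
// process now maps lazily over the iterator, so nothing is retained
private def process(contents: Iterator[String]): Iterator[String] =
  contents.map(_.toUpperCase)

// in main: no .toSeq and no intermediate Seq, lines stream straight through
val iterator = process(inSource.getLines)
while (iterator.hasNext) outSource.println(iterator.next)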
Edit: Ok, it's more subtle. You are doing the equivalent of Iterator.toSeq.iterator by calling iterator on the result of process. This can cause an out of memory exception.
scala> Iterator.continually(1).toSeq.iterator.take(300*1024*1024).size
java.lang.OutOfMemoryError: Java heap space
It may be the same issue as reported here: https://issues.scala-lang.org/browse/SI-4835. Note my comment at the end of the bug, this is from personal experience.

Related

Scala hashmap not getting appended

I don't understand what is wrong with the code below. This works fine and hashmap typeMap gets updated if my input data frame is not partitioned. But if the code below is executed in a partitioned environment, typeMap is always empty and not updated. What is wrong with this code? Thanks for all your help.
var typeMap = new mutable.HashMap[String, (String, Array[String])]

case class Combiner(..., mapTypes: mutable.HashMap[String, (String, Array[String])]) {

  def execute() {
    <...>
    val combinersResult = dfInput.rdd.aggregate(combiners.toArray)(incrementCount, mergeCount)
  }

  def updateTypes(arr: Array[String], tempMapTypes: mutable.HashMap[String, (String, Array[String])]): Unit = {
    <...>
    typeMap ++= tempMapTypes
  }

  def incrementCount(combiners: Array[Combiner], row: Row): Array[Combiner] = {
    for (i <- 0 until row.length) {
      val array = getMyType(row(i), tempMapTypes)
      combiners(i).updateTypes(array, tempMapTypes)
    }
    combiners
  }
It is a really bad idea to use mutable values in distributed computing. With Spark in particular, RDD operations are shipped from the driver to the executors and are executed in parallel on all the different machines in the cluster. Updates made to your mutable.HashMap are never sent back to the driver, so you are stuck with the empty map that got constructed on the driver in the first place.
So you need to rethink your data structures, preferring immutability, and remember that operations running on the executors are independent and parallel.
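As a hedged sketch of that direction (inferType is a hypothetical stand-in for the question's getMyType logic, and the names are illustrative), the per-row type information can be folded into an immutable map that aggregate builds on the executors and returns to the driver:
import org.apache.spark.sql.Row

type TypeMap = Map[String, (String, Array[String])]

// hypothetical helper: derives the type entries contributed by one cell
def inferType(value: Any): TypeMap = ???

// applied on the executors, row by row, within each partition
def seqOp(acc: TypeMap, row: Row): TypeMap =
  (0 until row.length).foldLeft(acc) { (m, i) => m ++ inferType(row(i)) }

// merges the per-partition results
def combOp(left: TypeMap, right: TypeMap): TypeMap = left ++ right

// the final immutable map is handed back to the driver by aggregate itself
val typeMap: TypeMap =
  dfInput.rdd.aggregate(Map.empty[String, (String, Array[String])])(seqOp, combOp)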

How to get truly atomic update for TrieMap.getOrElseUpdate

As I understand it, TrieMap.getOrElseUpdate is still not truly atomic: the fix only made the returned result consistent (before it, different callers could receive different instances), so the updater function may still be called several times, as the documentation (for 2.11.7) says:
Note: This method will invoke op at most once. However, op may be invoked without the result being added to the map if a concurrent process is also trying to add a value corresponding to the same key k.
*I've checked that manually for 2.11.7, still "at least once"
How to guarantee one-time call (if I use TrieMap for factories)?
I think this solution should work for my requirements:
import java.util.concurrent.atomic.AtomicInteger
import scala.collection.concurrent.TrieMap

trait LazyComp { val get: Int }

val map = new TrieMap[String, LazyComp]()
val count = new AtomicInteger() // just for the test, you don't need it

def getSingleton(key: String) = {
  val v = new LazyComp {
    lazy val get = {
      // compute something
      count.incrementAndGet() // just for the test, you don't need it
    }
  }
  map.putIfAbsent(key, v).getOrElse(v).get
}
I believe lazy val actually uses synchronized internally, and the code inside get should also be safe with respect to exceptions.
However, performance could be improved in the future: SIP-20
Test:
scala> (0 to 10000000).par.map(_ => getSingleton("zzz")).last
res8: Int = 1
P.S. Java has a computeIfAbsent method on ConcurrentHashMap which I could use as well.
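If you go that route, a minimal sketch from Scala could look like the following (assumes Java 8+; an anonymous Function is used instead of a lambda so it also compiles on Scala 2.11 without SAM conversion):
import java.util.concurrent.ConcurrentHashMap
import java.util.function.{Function => JFunction}

val singletons = new ConcurrentHashMap[String, Int]()

def getSingleton(key: String): Int =
  singletons.computeIfAbsent(key, new JFunction[String, Int] {
    // the whole call is atomic: concurrent callers for the same key
    // block rather than recompute the value
    def apply(k: String): Int = {
      // compute something expensive here
      42
    }
  })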

Source.fromInputStream exception handling during reading lines

I have created a function that takes an InputStream as a parameter and returns an Iterator[String]. I accomplish this as follows:
def lineEntry(fileInputStream: InputStream): Iterator[String] = {
  Source.fromInputStream(fileInputStream).getLines()
}
I use the method as follows:
val fStream = getSomeInputStreamFromSource()
lineEntry(fStream).foreach {
  processTheLine(_)
}
Now it is quite possible that the method lineEntry might blow up if it encounters a bad character while it's iterating over the inputstream using the foreach.
What are some of the ways to counter this situation?
Quick solution (for Scala 2.10):
def lineEntry(fileInputStream: InputStream): Iterator[String] = {
  implicit val codec = Codec.UTF8 // or any other you like
  codec.onMalformedInput(CodingErrorAction.IGNORE)
  Source.fromInputStream(fileInputStream).getLines()
}
In Scala 2.9 there's a small difference:
implicit val codec = Codec(Codec.UTF8)
Codec also has a few more configuration options with which you can tune its behaviour in such cases.
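For example (a sketch; check scala.io.Codec for the full list of knobs), you can replace bad byte sequences instead of silently dropping them:
import java.io.InputStream
import java.nio.charset.CodingErrorAction
import scala.io.{Codec, Source}

def lineEntry(fileInputStream: InputStream): Iterator[String] = {
  implicit val codec = Codec.UTF8
  codec.onMalformedInput(CodingErrorAction.REPLACE)      // substitute instead of failing
  codec.onUnmappableCharacter(CodingErrorAction.REPLACE)
  codec.decodingReplaceWith("?")                         // what the bad input becomes
  Source.fromInputStream(fileInputStream).getLines()
}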

Memory consumption of a parallel Scala Stream

I have written a Scala (2.9.1-1) application that needs to process several million rows from a database query. I am converting the ResultSet to a Stream using the technique shown in the answer to one of my previous questions:
class Record(...)

val resultSet = statement.executeQuery(...)

new Iterator[Record] {
  def hasNext = resultSet.next()
  def next = new Record(resultSet.getString(1), resultSet.getInt(2), ...)
}.toStream.foreach { record => ... }
and this has worked very well.
Since the body of the foreach closure is very CPU intensive, and as a testament to the practicality of functional programming, if I add a .par before the foreach, the closures get run in parallel with no other effort, except to make sure that the body of the closure is thread safe (it is written in a functional style with no mutable data except printing to a thread-safe log).
However, I am worried about memory consumption. Is the .par causing the entire result set to load in RAM, or does the parallel operation load only as many rows as it has active threads? I've allocated 4G to the JVM (64-bit with -Xmx4g) but in the future I will be running it on even more rows and worry that I'll eventually get an out-of-memory.
Is there a better pattern for doing this kind of parallel processing in a functional manner? I've been showing this application to my co-workers as an example of the value of functional programming and multi-core machines.
If you look at the scaladoc of Stream, you will notice that the defining class of par is the Parallelizable trait... and, if you look at the source code of this trait, you will notice that it takes each element from the original collection and puts it into a combiner; thus, you will load every row into a ParSeq:
def par: ParRepr = {
  val cb = parCombiner
  for (x <- seq) cb += x
  cb.result
}

/** The default `par` implementation uses the combiner provided by this method
 *  to create a new parallel collection.
 *
 *  @return a combiner for the parallel collection of type `ParRepr`
 */
protected[this] def parCombiner: Combiner[A, ParRepr]
A possible solution is to parallelize your computation explicitly, for example with actors. You can take a look at this example from the akka documentation, which might be helpful in your context.
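If you would rather stay with plain collections than move to actors, one hedged workaround is to pull bounded chunks from the iterator and parallelize each chunk, so only one chunk's worth of rows is materialized at a time (the chunk size below is arbitrary):
// `records` is the Iterator[Record] built over the ResultSet as in the question
val chunkSize = 1000 // bounds how many rows live in memory at once; tune to taste
records.grouped(chunkSize).foreach { chunk =>
  chunk.par.foreach { record =>
    // CPU-intensive work on a single record goes here
  }
}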
The new akka stream library is the fix you're looking for:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Source, Sink}

def iterFromQuery(): Iterator[Record] = {
  val resultSet = statement.executeQuery(...)
  new Iterator[Record] {
    def hasNext = resultSet.next()
    def next = new Record(...)
  }
}

def cpuIntensiveFunction(record: Record) = {
  ...
}

implicit val actorSystem = ActorSystem()
implicit val materializer = ActorMaterializer()
implicit val execContext = actorSystem.dispatcher

val poolSize = 10 // number of Records in memory at once

val stream =
  Source.fromIterator(() => iterFromQuery()).runWith(Sink.foreachParallel(poolSize)(cpuIntensiveFunction))

stream onComplete { _ => actorSystem.shutdown() }

scala lazy val: a good way to go easy on the garbage collector?

I'm wondering if it makes sense to use lazy val to prevent unnecessary heap allocations, thereby making the GC's job a little easier. For instance, I am in the habit of writing code like so:
lazy val requiredParameterObject = new Foo {
  val a = new ...
  val b = new ...
  // etc
}

for (a <- collection) a.someFunction(requiredParameterObject)
The idea here is that by making requiredParameterObject lazy, I am saving a heap allocation in the case that collection is empty. But I am wondering: do the internal details of lazy make this not an effective performance win?
You can pass lazy val by name:
lazy val requiredParameterObject = new Foo {
  val a = new ...
  val b = new ...
  // etc
}

class C {
  def someFunction(f: => Foo) {
    f // (1)
  }
}

val collection = List(new C, new C)

for (a <- collection) a.someFunction(requiredParameterObject)
In the code above, Foo is created once, when the first item of the collection accesses it at (1), or never if the collection is empty.
You might also get better results if requiredParameterObject is an ordinary function (a def), so that it is evaluated only when it is needed. However, it will then be evaluated as many times as there are elements in the collection.
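A small sketch of that trade-off (Foo's fields are stand-ins, since the original constructor body is elided):
// sketch: `Foo` stands in for the question's parameter object
class Foo(val a: AnyRef, val b: AnyRef)

// a def is re-evaluated on every call: nothing is allocated when `collection`
// is empty, but a fresh Foo is built for every element that asks for it
def requiredParameterObject: Foo = new Foo(new Object, new Object)

for (a <- collection) a.someFunction(requiredParameterObject)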
A similar question was asked here a while back, but somebody points out there that the implementation of lazy varies between Scala compiler versions. Probably the most authoritative answer you'll get to this question is to write a minimal snippet, like:
def test(pred: Boolean) {
  lazy val v = // ...
  if (pred)
    doSomething(v)
}
run it through your version of scalac and then peek at the classfile with scalap or javap to see what you get.
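For reference, on the 2.x compilers what you typically see is equivalent to double-checked locking guarded by a volatile bitmap flag; a rough hand-written sketch of the emitted pattern (details vary across compiler versions, which is exactly why checking with javap is worthwhile):
class Test {
  @volatile private[this] var bitmap0 = false
  private[this] var v0: Int = _

  // roughly what `lazy val v = compute()` turns into
  def v: Int = {
    if (!bitmap0) synchronized {
      if (!bitmap0) {
        v0 = compute()
        bitmap0 = true
      }
    }
    v0
  }

  private def compute(): Int = 42 // stand-in for the real initializer
}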