I'm trying to crash my program (run in IntelliJ) with an OutOfMemoryError:
def OOMCrasher(acc: String): String = {
OOMCrasher(acc + "ADSJKFAKLWJEFLASDAFSDFASDFASERASDFASEASDFASDFASERESFDHFDYJDHJSDGFAERARDSHFDGJGHYTDJKXJCV")
}
OOMCrasher("")
However, it just runs for a very long time. My suspicion is that it simply takes a very long time to fill up all the gigabytes of memory allocated to the JVM with a string, so I'm looking at how to make IntelliJ allocate less memory to the JVM. Here's what I've tried:
In Run Configurations -> VM options:
--scala.driver.memory 1k || --driver.memory 1k
Both of these cause crashes with:
Unrecognized option: --scala.driver.memory
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I've also tried putting the options under Edit Configurations -> Program Arguments. This causes the program to run for a very long time again, without yielding an OutOfMemoryError.
EDIT:
I will accept any answer that successfully explains how to allocate less memory to the program, since that is the main question.
UPDATE:
Changing the function to:
def OOMCrasher(acc: HisList[String]): HisList[String] = {
OOMCrasher(acc.add("Hi again!"))
}
OOMCrasher(Cons("Hi again!", Empty))
where HisList is a simple linked-list implementation, together with running with -Xmx3m, caused the desired OutOfMemoryError.
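For reference, a minimal sketch of what such a HisList might look like (the actual implementation was not posted; the Cons, Empty and add names come from the snippet above, the rest is an assumption):

sealed trait HisList[+A] {
  // prepend an element, as used by OOMCrasher above
  def add[B >: A](elem: B): HisList[B] = Cons(elem, this)
}
case object Empty extends HisList[Nothing]
final case class Cons[+A](head: A, tail: HisList[A]) extends HisList[A]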
Reaching an OutOfMemoryError functionally is harder than it looks, because recursive functions almost always run into a StackOverflowError first.
But there is a mutable approach that will guarantee an OutOfMemoryError: doubling a List over and over again. Scala's Lists are not limited by the maximum array size and thus can expand until there is simply no more memory left.
Here's an example:
def explodeList[A](list: List[A]): Unit = {
  var mlist = list
  while (true) {
    mlist = mlist ++ mlist // each pass doubles the number of elements the list retains
  }
}
To answer your actual question, try to fiddle with the JVM option -Xmx___m (e.g. -Xmx256m). This defines the maximum heap size the JVM is allowed to allocate.
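To make that concrete, here is a small self-contained sketch combining the doubling list with a reduced heap; the object name and the exact -Xmx value are illustrative assumptions, any small heap size will do:

object OomDemo extends App {
  // Assumed setup: put "-Xmx16m" into Run Configurations -> VM options in IntelliJ.
  def explodeList[A](list: List[A]): Unit = {
    var mlist = list
    while (true) {
      mlist = mlist ++ mlist // doubles the number of elements on every pass
    }
  }
  // With a heap this small, this throws java.lang.OutOfMemoryError: Java heap space within seconds.
  explodeList(List(0))
}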
Related
Allocating temporary objects on the heap every frame in Unity is costly, and we all do our best to avoid this by caching heap objects and avoiding garbage generating functions. It's not always obvious when something will generate garbage though. For example:
enum MyEnum {
Zero,
One,
Two
}
List<MyEnum> MyList = new List<MyEnum>();
MyList.Contains(MyEnum.Zero); // Generates garbage
MyList.Contains() generates garbage because the default equality comparer used by the List compares the elements as objects, which causes boxing of the enum value type.
In order to prevent inadvertent heap allocations like these, I would like to be able to detect them in my unit tests.
I think there are 2 requirements for this:
A function to return the amount of heap allocated memory
A way to prevent garbage collection occurring during the test
I haven't found a clean way to ensure #2. The closest thing I've found for #1 is GC.GetTotalMemory()
[UnityTest]
IEnumerator MyTest()
{
    long before = GC.GetTotalMemory(false);
    const int numObjects = 1;
    for (int i = 0; i < numObjects; ++i)
    {
        System.Version v = new System.Version(); // allocate a small heap object
    }
    long after = GC.GetTotalMemory(false);
    Assert.That(before == after);
    yield return null; // the IEnumerator test body needs at least one yield to compile
}
The problem is that GC.GetTotalMemory() returns the same value before and after in this test. I suspect that Unity/Mono only allocates memory from the system heap in chunks, say 4 KB, so you can allocate up to 4 KB before Unity/Mono actually requests more memory from the system heap, at which point GC.GetTotalMemory() will return a different value. I confirmed that if I change numObjects to 1000, GC.GetTotalMemory() returns different values for before and after.
So, in summary: 1. How can I accurately get the amount of heap-allocated memory, accurate to the byte? 2. Can the garbage collector run during the body of my test, and if so, is there any non-hacky way of disabling GC for the duration of the test?
Thanks for your help!
I posted the same question over on Unity answers and got a reply:
https://answers.unity.com/questions/1535588/is-there-a-way-to-accurately-measure-heap-allocati.html
No, basically it's not possible. You could run your unit test a bunch of times in a loop and hope that it generates enough garbage to cause a change in the value returned by GC.GetTotalMemory(), but that's about it.
A mutable Set's retain method is implemented as follows:
def retain(p: A => Boolean): Unit =
  for (elem <- this.toList) // SI-7269 toList avoids ConcurrentModificationException
    if (!p(elem)) this -= elem
But if I implement my own method that doesn't make a copy for iterating, nothing blows up.
import scala.collection.mutable

def dumbRetain[A](self: mutable.Set[A], p: A => Boolean): Unit =
  for (elem <- self)
    if (!p(elem)) self -= elem

dumbRetain(mutable.HashSet(1, 2, 3, 4, 5, 6), Set(2, 4, 6))
// everything is ok
I see that SI-7269's test case uses the JavaConversions wrapper around a java Set/Map, and it seems like the issue arises from the underlying java collection.
I know there will never be a java collection passed to my algorithm, so can I use dumbRetain without worrying about the ConcurrentModificationException? Or is this "coincidental behavior" that I shouldn't rely on?
Edit: to clarify, I would be using dumbRetain as an implementation detail in an algorithm which would be in full control of what it passes to dumbRetain, and this would be run in a single-threaded context.
This seems to rely on the specific implementation of mutable.HashSet, and there is nothing in the API that guarantees that it would work for all other implementations of mutable.Set, even if we exclude all wrappers for the Java collections.
The for-loop
for (elem <- self) {
...
}
is desugared into foreach, which for mutable.HashSet is implemented as follows:
override def foreach[U](f: A => U) {
  var i = 0
  val len = table.length
  while (i < len) {
    val curEntry = table(i)
    if (curEntry ne null) f(entryToElem(curEntry))
    i += 1
  }
}
Essentially, it simply loops through the Array of the underlying FlatHashTable and invokes the passed function f on every element. The whole foreach does not contain a single line that could throw; it doesn't check for concurrent [footnote-1] modifications at all.
A ConcurrentModificationException seems to be the less troubling case: at least your program fails fast, and even gives you a detailed stack trace pointing to the line where the problem occurred. It would actually be much worse if it simply deteriorated into undefined behavior without throwing anything; that would be the worst case. However, this worst case shouldn't occur for collections from the standard library: Throw ConcurrentModificationException exception's in scala collections? #188
Quote:
In scala/scala#5295 (merged in to 2.12.x) I made sure that removing the element last returned from an iterator would not cause a problem for the iterator.
So, as long as you clearly state in the documentation that only collections from the standard library are supported, you will most likely not have any problems using it in your own code. But if you use it in a public interface, it would be an invitation for a bug analogous to the SI-7269 quoted in your question.
[footnote-1] "concurrent" as in "ConcurrentModificationException", not as in "concurrently executed threads".
EDIT: I've tried to choose less ambiguous formulations. Many thanks to @Dima for the feedback and the numerous suggestions.
Yes, you can do it, as long as you are sure this is Scala's native HashSet implementation, not a wrapper around a Java one ... and with the understanding that this is not thread-safe and should never be used concurrently (the original HashSet.retain behaves the same way, as do the other mutators).
Better yet, just use the immutable Set's filter, unless you actually have real, hard evidence (not just intuition) demonstrating that your specific case absolutely requires a mutable container.
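For comparison, a minimal sketch of the immutable alternative (the element values just mirror the example above):

val evens: Set[Int] = Set(1, 2, 3, 4, 5, 6).filter(_ % 2 == 0)
// evens == Set(2, 4, 6); the original set is left untouched, so there is no mutation-during-iteration question at all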
EDIT, Summary: In the long chain of back and forth, I think the "final answer" is a little hard to find. In essence, however, Yuval pointed out that the incremental allocation of a large amount of memory forces a heap resize (actually two, by the look of the graph). A heap resize on a normal JVM involves a full GC, the most expensive, time-consuming collection possible. So the reality is that my process isn't collecting lots of garbage per se; rather, it's doing heap resizes, which inherently trigger expensive GCs as part of the heap reorganization process. Those of us more familiar with Java than Scala are more likely to have allocated a simple ArrayList, which, even if it causes heap resizing, is only a few objects (and likely allocated directly into old-gen if it's a big array), which would be far less work for a full GC anyway, because it's far fewer objects. The moral is likely that some other structure would be more appropriate for very large "lists".
I was trying to experiment with some of Scala's data structures (actually, with the parallel stuff, but that's not relevant to the problem I bumped into). I'm trying to create a fairly long list (with the intention of processing it purely sequentially). But try as I might, I'm failing to create a simple list without invoking vast quantities of garbage collection. I'm fairly sure that I'm simply pre-pending the new items to the existing tail, but the GC load suggests that I'm not. I've tried a couple of techniques so far (I'm starting to suspect that I'm misunderstanding something truly fundamental about this structure :( )
Here's the first effort:
import scala.annotation.tailrec

val myList = {
  @tailrec
  def addToList(remaining: Long, acc: List[Double]): List[Double] =
    if (remaining > 0) addToList(remaining - 1, 0 :: acc)
    else acc
  addToList(10000000, Nil)
}
And when I began to doubt I knew how to do recursion after all, I came up with this mutating beast.
val myList = {
  var rv: List[Double] = Nil
  var count = 10000000
  while (count > 0) {
    rv = 0.0 :: rv
    count -= 1
  }
  rv
}
They both give the same effect: 8 cores running flat out doing GC (according to jvisualvm) and memory allocation peaking at just over 1 GB, which I assume is the real space required for the data; but along the way, it creates a seemingly vast amount of garbage.
Am I doing something horribly wrong here? Am I somehow forcing the recreation of the entire list with every new element (I'm trying very hard to only do "prepend" type operations, which I thought should avoid that).
Or maybe, I have half a memory of hearing that Scala List does something odd to help it transform into a mutable list, or a parallel list, or something. Really don't recall what. Is this something to do with that? And if so, what the heck was "that" anyway?
Oh, and here's the image of the GC process. Notice the front-loading on the otherwise triangular rise of the memory that represents the "real" allocated data. That huge hump, and the associated CPU usage are my problem:
EDIT: I should clarify, I'm interested in two things. First, if my creation of the list is intrinsically faulty (i.e. if I'm not in fact only performing prepend operations) then I'd like to understand why, and how I should do this "right". Second, if my construction is sound and the odd behavior is intrinsic in the List, I'd like to understand the List better, so I know what it's doing, and why. I'm not particularly interested (at this point) in alternative ways to build a sequential data structure that sidesteps this problem. I anticipate using List a lot, and would like to know what's happening. (Later, I might well want to investigate other structures in this level of detail, but not right now).
First, if my creation of the list is intrinsically faulty (i.e. if I'm not in fact only performing prepend operations) then I'd like to understand why
You are constructing the list properly, there's no problem there.
Second, if my construction is sound and the odd behavior is intrinsic in the List, I'd like to understand the List better, so I know what it's doing, and why
List[A] in Scala is based on a linked-list implementation, where you have a head of type A and a tail of type List[A]. List[A] is an abstract class with two implementations: one representing the empty list, called Nil, and one called "Cons", or ::, representing a list which has a head value and a tail, which can itself be either non-empty or empty. Prepending is done via the :: method on List:
def ::[B >: A](x: B): List[B] =
  new scala.collection.immutable.::(x, this)
If we look at the implementation for ::, we can see that it is a simple case class with two fields:
final case class ::[B](override val head: B, private[scala] var tl: List[B]) extends List[B] {
  override def tail: List[B] = tl
  override def isEmpty: Boolean = false
}
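A small illustration of why prepending is cheap: each :: call allocates exactly one new cell and shares the existing tail (the values here are arbitrary):

val tail = List(2.0, 3.0)
val longer = 1.0 :: tail      // allocates a single :: cell pointing at the existing list
println(longer.tail eq tail)  // true: the tail is shared, not copied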
A quick look using the memory tab in IntelliJ shows that we have ten million Double values and ten million instances of the :: case class, which in itself has additional overhead for being a case class (the compiler "enhances" these classes with additional structure).
Your JVisualVM instance doesn't show the GC graph being fully utilized; rather, it shows that your CPU is overworked from generating the large list of items. During the allocation process you generate a lot of intermediate lists until you reach your fully generated list, which means data has to be evicted between the different GC generations (Eden, Survivor and Old, assuming you're running the JVM flavor of Scala).
If we want a bit more information, we can use Mission Control to look into what's causing the memory pressure. This is a sample generated from a 30-second profile run of:
import scala.annotation.tailrec

def main(args: Array[String]): Unit = {
  def myList: List[Double] = {
    @tailrec
    def addToList(remaining: Long, acc: List[Double]): List[Double] =
      if (remaining > 0) addToList(remaining - 1, 0 :: acc)
      else acc
    addToList(10000000, Nil)
  }
  while (true) {
    myList
  }
}
We see a call to BoxesRunTime.boxToDouble, which happens because :: is a generic class and isn't @specialized for Double. Each element goes scala.Int -> scala.Double -> java.lang.Double.
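If the boxing itself turns out to matter for your use case, a hedged sketch of one workaround: back the data with a primitive array rather than a generic cons list (this is a different data structure, so it only applies if a flat, sequentially processed collection is acceptable):

// ten million unboxed Double values in one flat allocation; no java.lang.Double instances are created
val unboxed: Array[Double] = Array.fill(10000000)(0.0)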
I ran into a tricky problem related to a sharp increase in execution time.
I run my Scala code in local Spark, part of which builds an n*n matrix.
When running on a small dataset, it takes just 5 s to finish. The most time-consuming part is building a 2000*2000 matrix, and this part is executed within map, which only deals with array data structures.
However, just out of curiosity, I added a println inside the matrix-building code to see the number of iterations. Suddenly, the whole running time increased to 1 min 23 s.
And the final results are the same.
I am new to Spark and have no idea what really causes this situation.
The code is simply:
val x = someRDD.map(buildMatrix)

def buildMatrix(stringVect: Array[String]): Array[Array[Double]] = {
  //var count = 0
  val num = stringVect.length
  var simi_matrix = Array[Array[Double]]()
  for (i <- 0 until num - 1) {
    for (j <- (i + 1) until num) {
      // build the matrix with some computation
      //println(count)
      //count += 1
    }
  }
  simi_matrix
}
TL;DR
This doesn't have anything to do with Spark. I/O access to the console is synchronized and costly, and it will slow down any program on the JVM (Scala/Java/Clojure/...).
println defaults to java.lang.System.out, which is a PrintStream. println delegates to PrintStream#println, hence entering the synchronized block of the println implementation to output to the console. There are two expenses:
Getting a synchronized lock
I/O to the console OutputStream
The slowdown observed is to be expected. Just don't use println in hot parts of the code (like a tight loop in this case).
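A hedged sketch of how the matrix code from the question could keep its iteration count without printing inside the hot loop (the actual computation is still elided, as in the original; the single println per record is only for illustration):

def buildMatrix(stringVect: Array[String]): Array[Array[Double]] = {
  val num = stringVect.length
  val simi_matrix = Array.ofDim[Double](num, num)
  var count = 0L
  for (i <- 0 until num - 1; j <- (i + 1) until num) {
    // ... fill simi_matrix(i)(j) with some computation, as in the question ...
    count += 1
  }
  // one println per record instead of one per inner-loop iteration
  println(s"built a ${num}x$num matrix with $count inner iterations")
  simi_matrix
}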
Consider the following Set benchmark:
import scala.collection.immutable._

object SetTest extends App {
  def time[a](f: => a): (a, Double) = {
    val start = System.nanoTime()
    val result: a = f
    val end = System.nanoTime()
    (result, 1e-9 * (end - start))
  }

  for (n <- List(1000000, 10000000)) {
    println("n = %d".format(n))
    val (s2, t2) = time((Set() ++ (1 to n)).sum)
    println("sum %d, time %g".format(s2, t2))
  }
}
Compiling and running produces
tile:scalafab% scala SetTest
n = 1000000
sum 1784293664, time 0.982045
n = 10000000
Exception in thread "Poller SunPKCS11-Darwin" java.lang.OutOfMemoryError: Java heap space
...
I.e., Scala is unable to represent a set of 10 million Ints on a machine with 8 GB of memory. Is this expected behavior? Is there some way to reduce the memory footprint?
Generic immutable sets do take a lot of memory. The default is only a 256M heap, which leaves only about 26 bytes per object. The hash trie for immutable sets generally takes one to two hundred bytes per object, plus an extra 60 or so bytes per element. If you add -J-Xmx2G on your command line to increase the heap space to 2G, you should be fine.
(This level of overhead is one reason why there are bitsets, for example.)
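As a hedged illustration of the bitset remark (this trick only applies when the elements are small non-negative Ints, as they are in this benchmark):

import scala.collection.mutable

// Membership for 10 million small Ints fits in ~1.25 MB of packed longs,
// versus on the order of a hundred bytes per element for a generic immutable Set[Int].
val bits = mutable.BitSet(1 to 10000000: _*)
println(bits.size) // 10000000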
I'm not that familiar with Scala, but here's what I think is happening:
First off, the integers are being stored on the heap (as they must be, since the data structure itself is stored on the heap). So we are talking about available heap memory, not stack memory at all (just to clarify the validity of what I'm about to say next).
The real kicker is that Java's default heap size is pretty small - I believe it's only 128 megabytes (this is probably a really old number, but the point is that the limit exists, and it's quite small).
So it's not that your program uses too much memory - it's more like Java just doesn't give you enough in the first place. There is a solution, though: the minimum and maximum heap sizes can be set with the -Xms and -Xmx command line options. They can be used like:
java -Xms32m -Xmx128m MyClass (starts MyClass with a minimum heap of 32 megabytes, maximum of 128 megabytes)
java -Xms1g -Xmx3g MyClass (executes MyClass with a minimum heap of 1 gigabyte, maximum of 3 gigabytes)
If you use an IDE, there are probably options in there to change the heap size as well.
Building the whole Set will always blow up the heap; holding such a large collection in memory is not required in this case. If you just want the sum, use an iterator or a range:
val (s2,t2) = time( (1 to n).sum)
The above line completes in about a second without exhausting the heap.
You can always increase the memory allocation as described in the other answers.