Consider the following Set benchmark:
import scala.collection.immutable._

object SetTest extends App {
  def time[A](f: => A): (A, Double) = {
    val start = System.nanoTime()
    val result: A = f
    val end = System.nanoTime()
    (result, 1e-9 * (end - start))
  }
  for (n <- List(1000000, 10000000)) {
    println("n = %d".format(n))
    val (s2, t2) = time((Set() ++ (1 to n)).sum)
    println("sum %d, time %g".format(s2, t2))
  }
}
Compiling and running produces
tile:scalafab% scala SetTest
n = 1000000
sum 1784293664, time 0.982045
n = 10000000
Exception in thread "Poller SunPKCS11-Darwin" java.lang.OutOfMemoryError: Java heap space
...
I.e., Scala is unable to represent a set of 10 million Ints on a machine with 8 GB of memory. Is this expected behavior? Is there some way to reduce the memory footprint?
Generic immutable sets do take a lot of memory. The default is only a 256M heap, which leaves only about 26 bytes per object. The hash trie for immutable sets generally takes one to two hundred bytes per object, plus an extra 60 or so bytes per element. If you add -J-Xmx2G on your command line to increase the heap space to 2G, you should be fine.
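For example, with the scala runner (the object here is SetTest, as in the code above):
scala -J-Xmx2G SetTest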
(This level of overhead is one reason why there are bitsets, for example.)
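For comparison, here is a rough sketch (not part of the original answer) of the same measurement with an immutable BitSet, which stores each small non-negative Int as a single bit, so 10 million elements need only about 1.25 MB:
// same timing harness as in the question, but a BitSet instead of a generic Set
val (s3, t3) = time((scala.collection.immutable.BitSet.empty ++ (1 to n)).sum)
println("bitset sum %d, time %g".format(s3, t3))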
I'm not that familiar with Scala, but here's what I think is happening:
First off, the integers are being stored on the heap (as they must be, since the data structure itself lives on the heap). So we are talking about available heap memory, not stack memory at all (just to clarify the validity of what I'm about to say next).
The real kicker is that Java's default heap size is pretty small - I believe it's only 128 megabytes (that is probably a really old number, but the point is that a default limit exists, and it's quite small).
So it's not that your program uses too much memory - it's more that Java just doesn't give you enough in the first place. There is a solution, though: the minimum and maximum heap sizes can be set with the -Xms and -Xmx command line options. They can be used like:
java -Xms32m -Xmx128m MyClass (starts MyClass with a minimum heap of 32 megabytes and a maximum of 128 megabytes)
java -Xms1g -Xmx3g MyClass (starts MyClass with a minimum heap of 1 gigabyte and a maximum of 3 gigabytes)
If you use an IDE, there are probably options in there to change the heap size as well.
Building a set this large will always exhaust the heap, and holding all of those values in memory is not required in this case. If all you want is the sum, use an iterator or a Range:
val (s2,t2) = time((1 to n).sum)
The above line completes in about a second without running out of memory.
You can always increase the memory allocation as described in the other answers.
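Note that either way the sum itself overflows Int for these values of n: the true sum of 1 to 1000000 is 500000500000, which wraps to the 1784293664 shown in the output above. A small sketch (not from the original answer) of summing into a Long instead, if the exact value matters:
val (sl, tl) = time((1L to n).sum) // NumericRange[Long], so the sum is a Long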
I'm trying to crash my program (run in IntelliJ) with an OutOfMemoryException:
def OOMCrasher(acc: String): String = {
OOMCrasher(acc + "ADSJKFAKLWJEFLASDAFSDFASDFASERASDFASEASDFASDFASERESFDHFDYJDHJSDGFAERARDSHFDGJGHYTDJKXJCV")
}
OOMCrasher("")
However, it just runs for a very long time. My suspicion is that it simply takes a very long time to fill up all the gigabytes of memory allocated to the JVM with a string. So I'm looking at how to make IntelliJ allocate less memory to the JVM. Here's what I've tried:
In Run Configurations -> VM options:
--scala.driver.memory 1k || --driver.memory 1k
Both of these cause crashes with:
Unrecognized option: --scala.driver.memory
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I've also tried to put the options in Edit Configurations -> Program Arguments. This causes the program to run for a very long time again, without yielding an OutOfMemoryException.
EDIT:
I will accept any answer that successfully explains how to allocate less memory to the program, since that is the main question.
UPDATE:
Changing the function to:
def OOMCrasher(acc: HisList[String]): HisList[String] = {
OOMCrasher(acc.add("Hi again!"))
}
OOMCrasher(Cons("Hi again!", Empty))
where HisList is a simple linked-list implementation, together with running with -Xmx3m, produced the desired exception.
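(For reference, a minimal sketch of what such a HisList might look like; the actual implementation is not shown in the question, so the shape of Cons, Empty and add below is inferred from the snippet above:)
sealed trait HisList[+A] {
  // prepend an element; signature inferred from the acc.add(...) call above
  def add[B >: A](elem: B): HisList[B] = Cons(elem, this)
}
case object Empty extends HisList[Nothing]
case class Cons[+A](head: A, tail: HisList[A]) extends HisList[A]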
To functionally reach an OutOfMemoryError is harder than it looks, because recursive functions almost always run into a StackOverflowError first.
But there is a mutable approach that will guarantee an OutOfMemoryError: doubling a List over and over again. Scala's Lists are not limited by the maximum array size and thus can expand until there is just no more memory left.
Here's an example:
def explodeList[A](list: List[A]): Unit = {
  var mlist = list
  while (true) {
    mlist = mlist ++ mlist
  }
}
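For example (a sketch; the exact heap size needed depends on your setup), running with a small heap such as -Xmx16m:
explodeList(List(0)) // the list doubles to 2, 4, 8, ... elements until the heap is exhausted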
To answer your actual question, try to fiddle with the JVM option -Xmx___m (e.g. -Xmx256m). This defines the maximum heap size the JVM is allowed to allocate.
I am doing some experiments on the NBDCache of Rocket Chip. I want to change the cache-line size and illustrate the trade-off between performance improvement and the storage overhead of the L1 cache.
As far as I can tell, the default cache-line size in Rocket Chip is 64 bits, which is relatively small. I tried to change it by means of the parameters defined for WithNBigCores in subsystem/config.scala, but I got the following assertion failure while compiling the new code:
[error] Caused by: java.lang.IllegalArgumentException: requirement failed: rowBits(256) != cacheDataBits(64)
I am looking for the right way to change the cache-line size in Rocket Chip.
class WithNBigCores(n: Int) extends Config((site, here, up) => {
  case RocketTilesKey => {
    val big = RocketTileParams(
      dcache = Some(DCacheParams(
        rowBits = 256, // was: site(SystemBusKey).beatBits
        nMSHRs = 1,
        ...
      )))
  }
})
The cache-line size in Rocket Chip is more or less hardwired to 64 bytes (not 64 bits, as you said). It is not easy to change; the configurability of this part is not great, I am afraid.
If you really want to change it, you need to revise the corresponding places in the NBDCache, including the data array, the refill logic, and anything related to the beat size of TileLink.
Allocating temporary objects on the heap every frame in Unity is costly, and we all do our best to avoid this by caching heap objects and avoiding garbage generating functions. It's not always obvious when something will generate garbage though. For example:
enum MyEnum {
    Zero,
    One,
    Two
}
List<MyEnum> MyList = new List<MyEnum>();
MyList.Contains(MyEnum.Zero); // Generates garbage
MyList.Contains() generates garbage because the default equality comparer for List<T> works with objects, which causes boxing of the enum value type.
In order to prevent inadvertent heap allocations like these, I would like to be able to detect them in my unit tests.
I think there are two requirements for this:
1. A function to return the amount of heap-allocated memory
2. A way to prevent garbage collection from occurring during the test
I haven't found a clean way to ensure #2. The closest thing I've found for #1 is GC.GetTotalMemory():
[UnityTest]
IEnumerator MyTest()
{
    long before = GC.GetTotalMemory(false);
    const int numObjects = 1;
    for (int i = 0; i < numObjects; ++i)
    {
        System.Version v = new System.Version();
    }
    long after = GC.GetTotalMemory(false);
    Assert.That(before == after);
    yield return null; // a [UnityTest] coroutine needs at least one yield
}
The problem is that GC.GetTotalMemory() returns the same value before and after in this test. I suspect that Unity/Mono only allocates memory from the system heap in chunks of, say, 4 KB, so you can allocate up to that much before Unity/Mono actually requests more memory from the system heap, at which point GC.GetTotalMemory() will return a different value. I confirmed that if I change numObjects to 1000, GC.GetTotalMemory() returns different values for before and after.
So, in summary (TL;DR): 1. How can I accurately measure the amount of heap-allocated memory, down to the byte? 2. Can the garbage collector run during the body of my test, and if so, is there any non-hacky way of disabling GC for the duration of the test?
Thanks for your help!
I posted the same question over on Unity Answers and got a reply:
https://answers.unity.com/questions/1535588/is-there-a-way-to-accurately-measure-heap-allocati.html
No, basically it's not possible. You could run your unit test a bunch of times in a loop and hope that it generates enough garbage to cause a change in the value returned by GC.GetTotalMemory(), but that's about it.
In Scala 2.10, I create a stream, write some text into it and take its byte array:
import java.io.ByteArrayOutputStream

val stream = new ByteArrayOutputStream()
// write into a stream
val ba: Array[Byte] = stream.toByteArray
I can get the number of characters using ba.length or stream.toString().length(). Now, how can I estimate the amount of memory taken by the data? Is it 4 bytes (for the array reference) + ba.length bytes (each array cell occupying exactly one byte)?
It occupies exactly as much memory as it would in Java, since Scala arrays are Java arrays:
scala> Array[Byte](1,2,3).getClass
res1: java.lang.Class[_ <: Array[Byte]] = class [B
So the memory usage is the size of the array data plus a small overhead that depends on the architecture of the machine (32 or 64 bit) and the JVM.
To precisely measure the memory usage of a byte array on the JVM, you will have to use a Java agent library such as JAMM.
Here is a Scala project that has JAMM set up as a Java agent: https://github.com/rklaehn/scalamuc_20150707. The build.sbt shows how to configure JAMM as an agent for an sbt project, and the repository contains some example tests.
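(A minimal sketch of that kind of sbt setup, in case the link goes stale; the jamm version and jar path below are assumptions, not necessarily what the linked project uses:)
// build.sbt (sketch)
libraryDependencies += "com.github.jbellis" % "jamm" % "0.3.1" % Test
fork in Test := true
// -javaagent must point at the actual jamm jar on disk; adjust the path for your machine
javaOptions in Test += "-javaagent:lib/jamm-0.3.1.jar"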
Here is some example code and output:
import org.github.jamm.MemoryMeter

val mm = new MemoryMeter()
val test = Array.ofDim[Byte](100)
println(s"A byte array of size ${test.length} uses ${mm.measure(test)} bytes")
> A byte array of size 100 uses 120 bytes
(This is on a 64-bit Linux machine using Oracle Java 7.)
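That figure is consistent with the usual 64-bit HotSpot layout (an assumption about the JVM, not something reported by JAMM): a 16-byte array header (12 bytes of object header plus a 4-byte length field) + 100 bytes of data = 116 bytes, rounded up to the next 8-byte alignment boundary = 120 bytes.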
I am trying to find out how well Scala's hash functions scale for big hash tables (with billions of entries, e.g. to store how often a particular bit of DNA appeared).
Interestingly, however, both HashMap and OpenHashMap seem to ignore the parameter that specifies the initial size (2.9.2 and 2.10.0, latest builds).
I think this is the case because adding new elements becomes much slower after the first 800,000 or so.
I have tried increasing the entropy of the strings to be inserted (only the characters ACGT in the code below), without effect.
Any advice on this specific issue? I would also be grateful to hear your opinion on whether using Scala's inbuilt types is a good idea for a hash table with billions of entries.
import scala.collection.mutable.{ HashMap, OpenHashMap }
import scala.util.Random

object HelloWorld {
  def main(args: Array[String]) {
    val h = new collection.mutable.HashMap[String, Int] {
      override def initialSize = 8388608
    }
    // val h = new scala.collection.mutable.OpenHashMap[Int,Int](8388608);

    for (i <- 0 until 10000000) {
      val kMer = genkMer()
      if (!h.contains(kMer)) {
        h(kMer) = 0
      }
      h(kMer) = h(kMer) + 1
      if (i % 100000 == 0) {
        println(h.size)
      }
    }

    println("Exit. Hashmap size:\n")
    println(h.size)
  }

  def genkMer(): String = {
    val nucs = "A" :: "C" :: "G" :: "T" :: Nil
    var s: String = ""
    val r = new scala.util.Random
    val nums = for (i <- (1 to 55).toList) yield r.nextInt(4)
    for (i <- 0 until 55) {
      s = s + nucs(nums(i))
    }
    s
  }
}
I wouldn't use Java data structures to manage a map of billions of entries. Reasons:
The max buckets in a Java HashMap is 2^30 (~1B), so
with default load factor you'll fail when the map tries to resize after 750 M entries
you'll need to use a load factor > 1 (5 would theoretically get you 5 billion items, for example)
With a high load factor you're going to get a lot of hash collisions and both read and write performance is going to start to degrade badly
Once you actually exceed Integer.MAX_VALUE entries I have no idea what gotchas exist -- .size() on the map wouldn't be able to return the real count, for example
I would be very worried about running a 256 GB heap in Java -- if you ever hit a full GC it is going to lock the world for a long time to check the billions of objects in the old generation
If it were me I'd be looking at an off-heap solution: a database of some sort. If you're just storing (hashcode, count), then one of the many key-value stores out there might work. The biggest hurdle is finding one that can support many billions of records (some max out at 2^32).
If you can accept some error, probabilistic methods might be worth looking at. I'm no expert here, but the stuff listed here sounds relevant.
First, you can't override initialSize. I think Scala lets you write the override because it's package private in HashTable:
private[collection] final def initialSize: Int = 16
Second, if you want to set the initial size, you have to give it a HashTable of the initial size that you want. So there's really no good way of constructing this map without starting at 16, but it does grow by a power of 2, so each resize should get better.
Third, scala collections are relatively slow, I would recommend java/guava/etc collections instead.
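For example, a quick sketch (not from the original answer) of pre-sizing a plain java.util.HashMap from Scala so that the 8388608-entry case above never needs to resize:
import java.util.{HashMap => JHashMap}

// initialCapacity is chosen so that capacity * loadFactor >= the expected number of entries
val expected = 8388608
val h = new JHashMap[String, Int](math.ceil(expected / 0.75).toInt, 0.75f)
h.put("ACGT", 1) // note: a Java map boxes the Int values
println(h.get("ACGT"))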
Finally, billions of entries is a bit much for most hardware; you'll probably run out of memory. You'll most likely need to use memory-mapped files. Here's a good example (no hashing though):
https://github.com/peter-lawrey/Java-Chronicle
UPDATE 1
Here's a good drop-in replacement for the Java collections:
https://github.com/boundary/high-scale-lib
UPDATE 2
I ran your code and it did slow down around 800,000 entries, but once I boosted the Java heap size it ran fine. Try using something like this for the JVM:
-Xmx2G
Or, if you want to use every last bit of your memory:
-Xmx256G
These are the wrong data structures. You will hit a RAM limit pretty fast (unless you have 100+ GB, and even then you will still hit limits very quickly).
I don't know whether suitable data structures exist for Scala, although someone has probably built something like this in Java.