Optimal HashSet Initialization (Scala | Java)

I'm writing an A.I. to solve a "Maze of Life" puzzle. Storing explored states in a HashSet slows everything down; it's actually faster to run without the set of explored states. I'm fairly confident my node (state storage) implements equals and hashCode correctly, since tests show that a HashSet doesn't add duplicate states. I may need to rework the hashCode function, but I believe what's slowing things down is the HashSet rehashing and resizing.
I've tried setting the initial capacity to a very large number, but it's still extremely slow:
val initCapacity = java.lang.Math.pow(initialGrid.width*initialGrid.height,3).intValue()
val frontier = new QuickQueue[Node](initCapacity)
Here is the quick queue code:
class QuickQueue[T](capacity: Int) {
  val hashSet = new HashSet[T](capacity)
  val queue = new Queue[T]
  // methods below
For more info, here is the hash function. I store the grid values in bytes in two arrays and access it using tuples:
override def hashCode(): Int = {
  var sum = Math.pow(grid.goalCoords._1, grid.goalCoords._2).toInt
  for (y <- 0 until grid.height) {
    for (x <- 0 until grid.width) {
      sum += Math.pow(grid((x, y)).doubleValue(), x.toDouble).toInt
    }
    sum += Math.pow(sum, y).toInt
  }
  return sum
}
Any suggestions on how to set up a HashSet that won't slow things down? Or perhaps another suggestion for how to remember explored states?
P.S. I'm using java.util.HashSet, and even with the initial capacity set, it takes 80 seconds vs. < 7 seconds without the set.

Okay, for a start, please replace
override def hashCode(): Int =
with
override lazy val hashCode: Int =
so you don't calculate (grid.height*grid.width) floating point powers every time you need to access the hash code. That should speed things up by an enormous amount.
Then, unless you somehow rely upon close cells having close hash codes, don't reinvent the wheel. Use scala.util.hashing.MurmurHash3.seqHash or something similar to calculate your hash. This should speed your hash up by another factor of 20 or so. (Still keep the lazy val.)
Then you only have the overhead from the required set operations. Right now, unless you have a lot of 0x0 grids, you are spending the overwhelming majority of your time waiting for math.pow to give you a result (and risking everything becoming Double.PositiveInfinity or 0.0, depending on how big the values are, which creates hash collisions that slow things down still further).
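A minimal sketch of what that combination might look like. The Grid class below is just a stand-in for the asker's grid type (its apply((x, y)) is assumed to return the stored cell byte), and the asker's existing equals is left out:

import scala.util.hashing.MurmurHash3

// Stand-in for the asker's grid type, just so the sketch compiles.
class Grid(val width: Int, val height: Int, cells: Array[Byte]) {
  def apply(xy: (Int, Int)): Byte = cells(xy._2 * width + xy._1)
}

class Node(val grid: Grid) {
  // Computed once per Node and cached, instead of on every HashSet operation.
  override lazy val hashCode: Int = {
    val bytes = for {
      y <- 0 until grid.height
      x <- 0 until grid.width
    } yield grid((x, y))
    MurmurHash3.seqHash(bytes)
  }
}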

Note that the following assumes all your objects are immutable. This is a sane assumption when using hashing.
Also, you should profile your code before applying optimizations (use e.g. the free jvisualvm that comes with the JDK).
Memoization for fast hashCode
Computing the hash code is usually a bottleneck. By computing the hash code only once for each object and storing the result, you reduce the cost of hash code computation to a minimum (once, at object creation) at the expense of a modest increase in space consumption. To achieve this, turn the def hashCode into a lazy val or val.
Interning for fast equals
Once you have the cost of hashCode eliminated, computing equals becomes a problem. equals is particularly expensive for collection fields and deep structures in general.
You can minimize the cost of equals by interning. This means that you acquire new objects of the class through a factory method, which checks whether the requested object already exists and, if so, returns a reference to the existing one. If you ensure that every object of this type is constructed in this way, you know that there is only one instance of each distinct object, and equals becomes equivalent to object identity: a cheap reference comparison (eq in Scala).
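A rough sketch of such a factory, with hypothetical names (State, cells); it is not the asker's class, just an illustration of the pattern. Because the private constructor forces everything through the factory, the default reference-based equals and hashCode are enough for the explored-states set:

import scala.collection.mutable

// Hypothetical immutable state class; the private constructor means every
// instance must come from the interning factory below, so two equal states
// are always the very same object.
final class State private (val cells: Vector[Byte])

object State {
  private val interned = mutable.HashMap.empty[Vector[Byte], State]

  // Returns the canonical instance for these cells.
  def apply(cells: Vector[Byte]): State =
    interned.getOrElseUpdate(cells, new State(cells))
}

// Usage: two requests with equal contents yield the same object reference.
// State(Vector[Byte](1, 0, 1)) eq State(Vector[Byte](1, 0, 1))  // true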

Related

Scala Array.view memory usage

I'm learning Scala and have been trying some LeetCode problems with it, but I'm having trouble with the memory limit being exceeded. One problem I have tried goes like this:
A swap is defined as taking two distinct positions in an array and swapping the values in them.
A circular array is defined as an array where we consider the first element and the last element to be adjacent.
Given a binary circular array nums, return the minimum number of swaps required to group all 1's present in the array together at any location.
and my attempted solution looks like
object Solution {
  def minSwaps(nums: Array[Int]): Int = {
    val count = nums.count(_ == 1)
    if (count == 0) return 0
    val circular = nums.view ++ nums.view
    circular.sliding(count).map(_.count(_ == 0)).min
  }
}
however, when I submit it, I'm hit with Memory Limit Exceeded for one of the test cases where nums is very large.
My understanding is that, because I'm using .view, I shouldn't be allocating more than O(1) memory. Is that understanding incorrect? To be clear, I realise this is not the most time-efficient way of solving this, but I didn't expect it to be memory inefficient.
The version used is Scala 2.13.7, in case that makes a difference.
Update
I did some inspection of the types, and it seems circular is only a View unless I replace ++ with concat, which makes it an IndexedSeqView. Why is that? I thought ++ was just an alias for concat.
If I make the above change, and replace circular.sliding(count) with (0 to circular.size - count).view.map(i => circular.slice(i, i + count)) it "succeeds" in hitting the time limit instead, so I think sliding might not be optimised for IndexedSeqView.
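For reference, the variant described in that update would look roughly like this. This is only a sketch of what the question describes, not a recommended fix; as noted, it trades the memory limit for the time limit:

object Solution {
  def minSwaps(nums: Array[Int]): Int = {
    val count = nums.count(_ == 1)
    if (count == 0) return 0
    // concat keeps the static type IndexedSeqView, so slice stays cheap per call.
    val circular = nums.view.concat(nums.view)
    (0 to circular.size - count).view
      .map(i => circular.slice(i, i + count).count(_ == 0))
      .min
  }
}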

Scala large list allocation causes heavy garbage collection

EDIT, Summary: So, in the long chain of back and forth, I think the "final answer" is a little hard to find. In essence, however, Yuval pointed out that the incremental allocation of a large amount of memory forces a heap resize (actually two, by the look of the graph). A heap resize on a normal JVM involves a full GC, the most expensive, time-consuming collection possible. So the reality is that my process isn't collecting lots of garbage per se; rather, it's doing heap resizes, which inherently trigger an expensive GC as part of the heap reorganization process. Those of us more familiar with Java than Scala are more likely to have allocated a simple ArrayList which, even if it causes heap resizing, is only a few objects (and likely allocated directly into old gen if it's a big array), so it would be far less work for a full GC anyway, because it's far fewer objects. The moral is likely that some other structure would be more appropriate for very large "lists".
I was trying to experiment with some of Scala's data structures (actually, with the parallel stuff, but that's not relevant to the problem I bumped into). I'm trying to create a fairly long list (with the intention of processing it purely sequentially). But try as I might, I'm failing to create a simple list without invoking vast quantities of garbage collection. I'm fairly sure that I'm simply prepending the new items to the existing tail, but the GC load suggests that I'm not. I've tried a couple of techniques so far (I'm starting to suspect that I'm misunderstanding something truly fundamental about this structure :( )
Here's the first effort:
val myList = {
  @tailrec
  def addToList(remaining: Long, acc: List[Double]): List[Double] =
    if (remaining > 0) addToList(remaining - 1, 0 :: acc)
    else acc
  addToList(10000000, Nil)
}
And when I began to doubt I knew how to do recursion after all, I came up with this mutating beast.
val myList = {
  var rv: List[Double] = Nil
  var count = 10000000
  while (count > 0) {
    rv = 0.0 :: rv
    count -= 1
  }
  rv
}
They both give the same effect: 8 cores running flat out doing GC (according to jvisualvm) and memory allocation peaking at just over 1GB, which I assume is the real allocated space required for the data, but on the way it creates a seemingly vast amount of trash.
Am I doing something horribly wrong here? Am I somehow forcing the recreation of the entire list with every new element (I'm trying very hard to only do "prepend" type operations, which I thought should avoid that)?
Or maybe, I have half a memory of hearing that Scala List does something odd to help it transform into a mutable list, or a parallel list, or something. Really don't recall what. Is this something to do with that? And if so, what the heck was "that" anyway?
Oh, and here's the image of the GC process. Notice the front-loading on the otherwise triangular rise of the memory that represents the "real" allocated data. That huge hump, and the associated CPU usage are my problem:
EDIT: I should clarify, I'm interested in two things. First, if my creation of the list is intrinsically faulty (i.e. if I'm not in fact only performing prepend operations) then I'd like to understand why, and how I should do this "right". Second, if my construction is sound and the odd behavior is intrinsic in the List, I'd like to understand the List better, so I know what it's doing, and why. I'm not particularly interested (at this point) in alternative ways to build a sequential data structure that sidesteps this problem. I anticipate using List a lot, and would like to know what's happening. (Later, I might well want to investigate other structures in this level of detail, but not right now).
First, if my creation of the list is intrinsically faulty (i.e. if I'm not in fact only performing prepend operations) then I'd like to understand why
You are constructing the list properly, there's no problem there.
Second, if my construction is sound and the odd behavior is intrinsic in the List, I'd like to understand the List better, so I know what it's doing, and why
List[A] in Scala is based on a linked-list implementation, where you have a head of type A and a tail of type List[A]. List[A] is an abstract class with two implementations: one representing the empty list, called Nil, and one called "Cons", or ::, representing a list which has a head value and a tail, which can be either full or empty:
def ::[B >: A](x: B): List[B] =
  new scala.collection.immutable.::(x, this)
If we look at the implementation for ::, we can see that it is a simple case class with two fields:
final case class ::[B](override val head: B, private[scala] var tl: List[B]) extends List[B] {
  override def tail: List[B] = tl
  override def isEmpty: Boolean = false
}
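Since each prepend only wraps the existing list in one new :: cell, the old list is shared rather than copied. A tiny illustration of that sharing (not from the original answer):

val tail: List[Double] = List(2.0, 3.0)
val longer = 1.0 :: tail        // allocates exactly one new :: cell

// The old list is reused as the tail of the new one, not copied:
println(longer.tail eq tail)    // true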
A quick look using the memory tab in IntelliJ shows that we have ten million Double values, and ten million instances of the :: case class, which in itself has additional overhead for being a case class (the compiler "enhances" these classes with additional structure).
Your JVisualVM instance doesn't show the GC graph being fully utilized; rather, it shows that your CPU is overworked from generating the large list of items. During the allocation process, you generate a lot of intermediate lists until you reach your fully generated list, which means data has to be evicted between the different GC levels (Eden, Survivor and Old, assuming you're running the JVM flavor of Scala).
If we want a bit more information, we can use Mission Control to look into what's causing the memory pressure. This is a sample generated from a 30-second profile of:
def main(args: Array[String]): Unit = {
  def myList: List[Double] = {
    @tailrec
    def addToList(remaining: Long, acc: List[Double]): List[Double] =
      if (remaining > 0) addToList(remaining - 1, 0 :: acc)
      else acc
    addToList(10000000, Nil)
  }
  while (true) {
    myList
  }
}
We see that we have a call to BoxesRunTime.boxToDouble, which happens because :: is a generic class and doesn't have an @specialized annotation for Double. We go scala.Int -> scala.Double -> java.lang.Double.
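Tying this back to the edit summary at the top of the question (that some other structure may be more appropriate for very large "lists"), a minimal sketch of the kind of alternative meant there, assuming the values really are just ten million doubles:

// One contiguous allocation of unboxed doubles: no per-element cons cells,
// no java.lang.Double boxes, and far less work for the garbage collector.
val asArray: Array[Double] = Array.fill(10000000)(0.0)

// If an immutable sequence is preferred, Vector still boxes the doubles but
// allocates in chunks rather than one cell per element.
val asVector: Vector[Double] = Vector.fill(10000000)(0.0)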

Why do Scala's index methods return -1 instead of None if the element is not found?

I've always been wondering why in Scala the various index methods for determining the position of an element in a collection (e.g. List.indexOf, List.indexWhere) return -1 to indicate the absence of the given element in the collection, instead of a more idiomatic Option[Int]. Is there some particular advantage to returning -1 instead of None, or is this just for historical reasons?
It is just for historical reasons, but then one wants to know what the historical reasons are: what was the history, and why did it turn out that way?
The immediate history is the java.lang.String.indexOf method, which returns the index, or -1 if no matching character is found. But this is hardly new; the Fortran SCAN function returns 0 if no character is found in a string, which is the same thing given that Fortran uses 1-indexing.
The reason to do this is that valid indices are never negative, so any negative value can serve as the "None" case without any boxing overhead. -1 is the most convenient negative number, so that's it.
And this can add up if the compiler isn't smart enough to realize that all the boxing and unboxing and everything is irrelevant. In particular, an object creation tends to take 5-10 ns, while a function call or comparison typically takes more like 1-2 ns, so if the collection is short, creating a new object can have a sizable fractional penalty (more so if your memory is already taxed and the GC has a lot of work to do).
If Scala had initially had an amazing optimizer, then the choice probably would have been different, as one would just write things with options, which is safer and less of a special case, and then trust the compiler to convert it into appropriately high-performance code.
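If you do want the Option at a call site today, a tiny wrapper over the sentinel is easy enough. The name indexOfOption below is hypothetical, not a standard library method:

// Wrap the -1 sentinel in an Option at the call site.
def indexOfOption[A](xs: List[A], elem: A): Option[Int] =
  Some(xs.indexOf(elem)).filter(_ >= 0)

indexOfOption(List("a", "b", "c"), "b")   // Some(1)
indexOfOption(List("a", "b", "c"), "z")   // None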
Speed? (not sure)
def a(): Option[Int] = Some(Math.random().toInt)
def b(): Int = Math.random().toInt
val t0 = System.nanoTime; (0 to 1000000).foreach(_ => a()); println("" + (System.nanoTime - t0))
// 53988000
val t0 = System.nanoTime; (0 to 1000000).foreach(_ => b()); println("" + (System.nanoTime - t0))
// 49273000
And you also always have to check for index < 0 before wrapping it in Some(index).
There is also the benefit that just returning an Int can use Java's built-in types, whereas Option[Int] would need to wrap the integer in an Object. This means both worse speed (as indicated by @idonnie) and more memory usage.
While Option is great as a general tool (and I use it a lot), other non-value representations such as Double.NaN or an empty string are also perfectly valid, and useful.
One of the benefits of using Option is the ability to pass it to for loops etc. as a collection. If you are not likely to do that, checking for -1 or NaN may be more concise than doing matches for None/Some.
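A small illustration of that last point, with made-up values, just to show the two styles side by side:

val maybeIndex: Option[Int] = Some(2)

// An Option can be traversed like a zero-or-one element collection:
for (i <- maybeIndex) println(s"found at $i")

// ...whereas the sentinel style needs an explicit guard:
val index = -1
if (index >= 0) println(s"found at $index") else println("not found")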

Scala: Hash ignores initial size (fast hash table for billions of entries)

I am trying to find out how well Scala's hash functions scale for big hash tables (with billions of entries, e.g. to store how often a particular bit of DNA appeared).
Interestingly, however, both HashMap and OpenHashMap seem to ignore the parameters which specify initial size (2.9.2. and 2.10.0, latest build).
I think that this is so because adding new elements becomes much slower after the first 800,000 or so.
I have tried increasing the entropy in the strings which are to be inserted (only the chars ACGT in the code below), without effect.
Any advice on this specific issue? I would also be grateful to hear your opinion on whether using Scala's inbuilt types is a good idea for a hash table with billions of entries.
import scala.collection.mutable.{ HashMap, OpenHashMap }
import scala.util.Random

object HelloWorld {
  def main(args: Array[String]) {
    val h = new collection.mutable.HashMap[String, Int] {
      override def initialSize = 8388608
    }
    // val h = new scala.collection.mutable.OpenHashMap[Int,Int](8388608);

    for (i <- 0 until 10000000) {
      val kMer = genkMer()
      if (!h.contains(kMer)) {
        h(kMer) = 0;
      }
      h(kMer) = h(kMer) + 1;
      if (i % 100000 == 0) {
        println(h.size);
      }
    }

    println("Exit. Hashmap size:\n");
    println(h.size);
  }

  def genkMer(): String = {
    val nucs = "A" :: "C" :: "G" :: "T" :: Nil
    var s: String = "";
    val r = new scala.util.Random
    val nums = for (i <- 1 to 55 toList) yield r.nextInt(4)
    for (i <- 0 until 55) {
      s = s + nucs(nums(i))
    }
    s
  }
}
I wouldn't use Java data structures to manage a map of billions of entries. Reasons:
- The max number of buckets in a Java HashMap is 2^30 (~1B), so:
  - with the default load factor you'll fail when the map tries to resize after about 750M entries
  - you'd need to use a load factor > 1 (5 would theoretically get you 5 billion items, for example)
  - with a high load factor you're going to get a lot of hash collisions, and both read and write performance will start to degrade badly
  - once you actually exceed Integer.MAX_VALUE entries, I have no idea what gotchas exist; .size() on the map wouldn't be able to return the real count, for example
- I would be very worried about running a 256 GB heap in Java; if you ever hit a full GC it is going to lock the world for a long time to check the billions of objects in old gen
If it were me, I'd be looking at an off-heap solution: a database of some sort. If you're just storing (hashcode, count), then one of the many key-value stores out there might work. The biggest hurdle is finding one that can support many billions of records (some max out at 2^32).
If you can accept some error, probabilistic methods might be worth looking at. I'm no expert here, but the stuff listed here sounds relevant.
First, you can't override initialSize; I think Scala lets you write the override only because it's package private in HashTable:
private[collection] final def initialSize: Int = 16
Second, if you want to set the initial size, you have to give it a HashTable of the initial size that you want. So there's really no good way of constructing this map without starting at 16, but it does grow by a power of 2, so each resize should get better.
Third, Scala collections are relatively slow; I would recommend Java/Guava/etc. collections instead (see the sketch below).
Finally, billions of entries is a bit much for most hardware; you'll probably run out of memory. You'll most likely need to use memory-mapped files; here's a good example (no hashing though):
https://github.com/peter-lawrey/Java-Chronicle
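As a concrete illustration of the third point, here is a rough sketch of a pre-sized java.util.HashMap used from Scala; the ten-million figure simply mirrors the question's loop, so adjust it for your real data:

import java.util.{HashMap => JHashMap}

// Pre-size so the table never rehashes: capacity >= expectedEntries / loadFactor.
val expectedEntries = 10000000
val loadFactor      = 0.75f
val initialCapacity = math.ceil(expectedEntries / loadFactor).toInt

val h = new JHashMap[String, Int](initialCapacity, loadFactor)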
UPDATE 1
Here's a good drop-in replacement for Java collections:
https://github.com/boundary/high-scale-lib
UPDATE 2
I ran your code and it did slow down around 800,000 entries, but then I boosted the Java heap size and it ran fine. Try using something like this for the JVM:
-Xmx2G
Or, if you want to use every last bit of your memory:
-Xmx256G
These are the wrong data structures. You will hit a RAM limit pretty fast (unless you have 100+ GB, and even then you will still hit limits very fast).
I don't know if suitable data structures exist for Scala, although someone has probably done something along those lines in Java.

Complexity of List.reverse?

In Scala, there is a reverse method for lists. What is the complexity of this method? Is it better to simply use the original list and always remember that the list is the reverse of what we expect, or to explicitly use reverse before operating on it?
EDIT: What I am really interested in is to get the last two elements of the original list (or the first two of the reversed list).
So I would do something like:
val myList = origList.reverse
val a = myList(0)
val b = myList(1)
This is not in a loop, just a one-time thing in my library... but if someone else uses the library and puts it in a loop, it is not under my control.
Looking at the source, it's O(n) as you might reasonably expect:
override def reverse: List[A] = {
  var result: List[A] = Nil
  var these = this
  while (!these.isEmpty) {
    result = these.head :: result
    these = these.tail
  }
  result
}
If in your code you're able to iterate through the list in reverse order at the same cost of iterating in forward order, then it would be more efficient to do this rather than reversing the List.
In fact, if your alternative operation which involves using the original list works in less than O(n) time, then there's a real argument for going with that. Making an algorithm asymptotically faster will make a huge difference if you ever rely on it more (especially if used inside other loops, as oxbow_lakes points out below).
On the whole though I'd expect that anything where you're reversing a list means that you care about the relative ordering of a non-trivial number of elements, and so whatever you're doing is inherently O(n) anyway. (This might not be true for other data structures such as a binary tree; but lists are linear, and in the extreme case even reverse . head can't be done in O(1) time with a singly-linked list.)
So if you're choosing between two O(n) options - for the vast majority of applications, shaving a few nanoseconds off the iteration time isn't going to really gain you anything. Hence it would be "best" to make your code as readable as possible - which means calling reverse and then iterating, if that's closest to your intention.
(And if your app is too slow, and profiling shows that this list manipulation is a hotspot, then you can think about how to make it more efficient. Which by that point may well involve a different option to both of your current candidates, given the extra context you'll have at that point.)
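Given the edit in the question (only the last two elements are needed), the two O(n) options being weighed look roughly like this, using a stand-in list:

val origList = List(1, 2, 3, 4)   // stand-in data

// Option 1: reverse, then index from the front (closest to the question's code).
val reversed = origList.reverse
val (a1, b1) = (reversed(0), reversed(1))    // (4, 3)

// Option 2: take the two-element suffix directly; still O(n) on a List,
// but it avoids materialising the whole reversed copy.
val (b2, a2) = origList.takeRight(2) match {
  case List(x, y) => (x, y)                  // (3, 4)
  case _          => sys.error("list has fewer than 2 elements")
}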