SBT out of memory with subprojects - scala

We're using SBT 0.13 and a Java 8 JVM on CircleCI to build a Play application with several subprojects. We were occasionally getting out of memory issues on CircleCI, where it aborted our build because it exceeded 4 GB of memory usage.
Yesterday, I added a new subproject to our build, and almost all builds fail now on the out of memory issue. It looks like adding subprojects also adds to the amount of memory used for our build.
I've tried several things to reduce our memory load:
Add _JAVA_OPTIONS: "-Xms512m -Xmx2048m" to circle.yml as described on CircleCI's documentation pages. (I noticed from the log that the JVM does pick up on this setting.)
Add a -mem parameter to the SBT call.
Add concurrentRestrictions in Global += Tags.limit(Tags.Test, 1) to the top of the SBT file, to make sure that at least the memory isn't used all at once.
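For reference, that last setting is the single line near the top of our build.sbt, shown here as a simplified sketch:

    // build.sbt (sbt 0.13): run test tasks one at a time across the whole build,
    // so that their memory usage is not cumulative
    concurrentRestrictions in Global += Tags.limit(Tags.Test, 1)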
All of these measures seemed to help, but I haven't found a definitive solution to the problem yet.
What else can I do to keep SBT's memory usage under control?
EDIT: Our project has 5 subprojects, with about 14,000 lines of Scala code (plus 21,000 lines of Java code that we 'inherited'). The out-of-memory error usually (but not always) occurs while running static analysis with FindBugs, which we use together with the FindSecurityBugs plugin to find security issues.

There are two concerns here that are getting mixed up:
CircleCI not picking up the configured memory limits
SBT using an excessive amount of memory
The first issue has to be addressed by looking at the CircleCI documentation and examples. To investigate why you are using so much memory, run sbt locally with a memory limit lower than the 4 GB (e.g. sbt -mem 2048). You will find yourself in one of two cases:
Your tests are genuinely using too much memory, perhaps because of memory leaks. The JVM exits with a java.lang.OutOfMemoryError: GC overhead limit exceeded. Run the build locally with a profiler and see what is causing the problem (database connections that are never closed?)
Your tests are using too much memory because of SBT's ability to reload classes dynamically: SBT can completely reload a class inside the same JVM (for example, you can launch console, load the class, edit the file, recompile, relaunch console and load the class again). As described in the Oracle documentation, there is no limit on the maximum metaspace in Java 8 by default, so you should set one such that heap + metaspace stays below 4 GB. See https://blogs.oracle.com/poonam/entry/about_g1_garbage_collector_permanent
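As a rough sketch of what that might look like: the flags can be passed to the sbt JVM itself through the launcher (for example via SBT_OPTS, or the -mem option you already use for the heap), and if the memory is consumed in forked test JVMs the caps can go into build.sbt. The values below are illustrative, not a recommendation:

    // build.sbt (sbt 0.13): illustrative caps for a forked test JVM, chosen so that
    // heap + metaspace together stay well below the 4 GB container limit
    fork in Test := true
    javaOptions in Test ++= Seq(
      "-Xmx2g",                      // heap cap
      "-XX:MaxMetaspaceSize=512m"    // metaspace cap (unbounded by default on Java 8)
    )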

Related

Building tests in Intellij for Play Framework is very slow

Is there a way to speed up the build time of unit tests for Play Framework in IntelliJ? I am doing TDD. Whenever I execute a test, it takes about 30-60 seconds to compile; even a simple Hello World test takes that long. Rerunning the same test, even without any change, still starts the make process.
I am on IntelliJ 14.1, on Play 2.3.8, written in Scala.
I have already tried setting the Java compiler to Eclipse, and also tried setting the Scala compiler to SBT.
In IntelliJ 14.1.2, the workaround I use is to:
1) Remove Make from the test run configuration (Edit Configurations -> Defaults -> Scala Test -> Before launch -> (-) Make)
2) Start activator (or play) with ~test:compile (e.g. activator ~test:compile or sbt ~test:compile)
This prevents IntelliJ from calling the Play compilation server every time a make is invoked. Compilation is delegated to an external sbt/activator/play process that does the continuous compilation. The disadvantage is that if you run your test before the compilation completes, you may get a NoClassDefFoundError. You also need to monitor an extra process. This setup, however, is much faster than the default IntelliJ setup (for now). Hope this helps.
I'm going to assume that you know that the problem is build-time - that the actual run-time for the tests themselves is negligible.
What do you have for hardware? In my experience, 4 GB of RAM is not enough for IntelliJ Scala to perform well - it needs a big disk cache (which the OS uses free RAM for), I think. An SSD helps, too. Use Performance Monitor, or your OS's equivalent, to see whether the time is going to disk, CPU, or network. If it's CPU, consider whether heap size may be a problem.
What is your build process like? Are there sbt plugins? How big is your project?
UPDATE
Triggering a full rebuild without changes is wrong. Is there something in your tests that is modifying the project directories? If you run a dummy no-op test, does it do the same thing? Are you maybe writing logs into the project tree, for instance?
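For example, something as trivial as this (a sketch assuming ScalaTest is on the test classpath; a specs2 equivalent works just as well):

    import org.scalatest.FlatSpec

    // A deliberately empty test: if running this alone still triggers a full rebuild,
    // the problem is in the build setup rather than in your tests.
    class NoOpSpec extends FlatSpec {
      "a no-op" should "pass without touching anything" in {
        assert(true)
      }
    }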
In my limited experience, full Play builds under IntelliJ are orders of magnitude slower than a pure Scala build - I'd guess because of all the SBT plugins (the view/template compiler, the various script and stylesheet compilers, etc.) that have to run. But incremental builds aren't that painful.
On OSX, read "Activity Monitor" for "Performance Monitor".
UPDATE
See IntelliJ issue SCL-8235 for other people's experience and workarounds for slow incremental Play builds. Vote for the issue to raise its priority and get it fixed more quickly.
What about unmarking the existing tests and leaving only yours? Right-click on the test directory (which should be green) and choose Unmark as Test Source Root.

How to Use VM argument while creating build of an Application?

My JavaFX application was using a lot of memory.
I read about the -Xmx VM argument in Eclipse and applied it.
It gave me a good result: after using it, maximum memory usage was 76000k. But when I created a build of that project, it again started consuming more than 10000k and kept increasing continuously.
Please suggest some ideas, and edit my post if my wording is off.

Will running everything from RAM disk speed up scala compile time?

Scenario:
The machine I use for development has 32 GB of DDR3 RAM, an i7 3770 and an SSD. The project is large; Scala compiles fast most of the time during incremental compilation, but sometimes a single change leads to recompilation of hundreds of files, which then takes some time to compile, and JRebel takes a good while to reload all the changed files.
Question:
Will putting everything on a RAMFS (Mac) make compile and jrebel reload significantly faster?
My plan was to put everything directly related to the project on a RAMFS partition (.ivy2, project sources, .sbt, maybe even a copy of the JDK, etc.). I would create a script to do all that at boot, or run it manually; that won't be a problem. I would also set up file-sync tasks, so losing a change won't be a concern in case of an OS failure.
Updates:
1) the log says around 400 Java and Scala sources are compiled after a clean.
2) after changing a file in a core module, it recompiles 130 files in 50s.
3) JRebel takes 72s to reload after #1 and 50s after #2.
4) adding -Drebel.check_class_hash=true made the JRebel reload after #2 instantaneous.
I am quite happy with these results, but I am still interested in how to make Scala compilation even faster: CPU usage peaks at about 70% for only about 5 seconds of a compilation that takes 170s, and overall CPU usage during the compilation is around 20%.
UPDATE:
After putting the JVM, sources, .ivy2 and .sbt folders on a RAM disk, I noticed only a small improvement in compile time: from 132s to 122s (after a clean). So, not worth the trouble.
NOTE:
That excludes dependency resolution, since I am using this approach to avoid losing dependency resolution after a clean.
I have no idea what speedup you can expect on a Mac, but I have seen speedups on Linux compiling the Scala compiler itself that are encouraging enough to try. My report (warning: quite Linux-specific) is there.
You can try setting a VM argument -Drebel.check_class_hash=true which will check the checksum before reloading the classes.
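Where to put the flag depends on how you launch the application. If you run it through sbt with a forked JVM, one option is build.sbt (a sketch, assuming fork is enabled for run; otherwise the flag goes into whatever script or SBT_OPTS starts the JVM that JRebel is attached to):

    // build.sbt: pass the JRebel flag to the forked application JVM
    fork in run := true
    javaOptions in run += "-Drebel.check_class_hash=true"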
There's often very little point in a RAM disk if you're working on Linux or OSX: those OSes cache files in RAM anyway.
https://unix.stackexchange.com/a/66402/141286

GWT compiler is running out of memory. How do I configure it within IntelliJ?

I'm running GWT from within IntelliJ (plain GWT, IntelliJ v9). I'm able to run my application via my "GWT development mode" configuration, but when I try to run it through my "local tomcat" configuration, I get a bunch of incomprehensible error messages referring to Oracle and all sorts of other things I don't use, followed by this error, which is sorted to the bottom of all the others:
Error: Out of memory; to increase the amount of memory, use the -Xmx flag at startup (java -Xmx128M ...)
I'm guessing that this error is the root cause.
As I understand it, there is a GWT compilation step which runs in a JVM separate from both IntelliJ and Tomcat, so I'm unsure where to set the -Xmx parameter.
My question is: where do I find this -Xmx parameter? (And: am I on the right track, taking this error message at face value, or is it a symptom of deeper problems?)
You can configure the heap size in the GWT facet settings.
As CrazyCoder said, you can increase the amount of memory for the GWT compiler in the GWT facet settings. The GWT compiler running out of memory on bigger apps is very common if you don't increase its memory. Most projects should be fine with 512m, but on large projects I have needed more.
So this is not an out-of-memory error caused by bad design on your part.
In my application I set it to 1024, but it still gave me out-of-memory errors after 7-9 page refreshes in dev mode. After that I tried to optimize with a performance hack: I created a separate *.gwt.xml for every GWT module for specific browsers, for example:
MySampleModule_FF.gwt.xml
<set-property name="user.agent" value="gecko1_8"/>
and I run dev mode with this gwt.xml for Firefox. In addition, I also split out internationalization.
After that, my out-of-memory issues decreased considerably.
A 32-bit JDK can also cause out-of-memory errors. Try using a 64-bit JDK.
To those who get the same error under Eclipse: try restoring the cache files (if they were accidentally removed from the WAR folder); that is how I solved the same problem in my Eclipse project.

STS slow build when loading xyz-context.xml files

I recently started using STS on a 64-bit Windows machine. Often when I "clean" my project, STS becomes unresponsive or takes minutes to build while loading the *-context.xml files.
How can I fix this? Is it looking for resources on the web and waiting for timeouts?
EDIT: I noticed that during the build process my network usage goes up. Not sure yet what is going on there...
EDIT: Possibly STS is loading all of the referenced springsource XSD files for XML validation? If so, how can I disable this validation (apart from copying the files and referencing them locally, of course)? I've already tried disabling all of the preferences related to "Validation" in STS - to no avail.
Often this happens because Java is running out of free memory and needs to run the garbage collector very often.
You can see the free memory in the bottom right corner of Eclipse if you enable Window/Preferences/General/"Show heap status".
If you can confirm that it is a memory problem, then you can increase the memory in sts.ini (-Xmx).
It is said that the 64-bit Java version needs up to a third more memory than the 32-bit version, but I don't know whether that rumour is correct.