How to use VM arguments when creating a build of an application? - eclipse

My JavaFX application was using too much memory.
I read about the -Xmx VM argument in Eclipse and applied it.
It gave a good result: after using it, the maximum memory was 76000k. But when I created a build of the project, it again started consuming more than 10000k and continuously increasing.
Please suggest some ideas, and correct me if my wording is wrong.
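Note: a jar exported from Eclipse does not carry the run configuration's VM arguments with it; when launching the build they would need to be passed again explicitly, roughly along these lines (the jar name and value are only placeholders):

java -Xmx256m -jar MyJavaFxApp.jar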

Related

SBT out of memory with subprojects

We're using SBT 0.13 and a Java 8 JVM on CircleCI to build a Play application with several subprojects. We were occasionally getting out of memory issues on CircleCI, where it aborted our build because it exceeded 4 GB of memory usage.
Yesterday, I added a new subproject to our build, and almost all builds fail now on the out of memory issue. It looks like adding subprojects also adds to the amount of memory used for our build.
I've tried several things to reduce our memory load; a rough sketch of these settings follows the list:
Add _JAVA_OPTIONS: "-Xms512m -Xmx2048" to circle.yml as described on CircleCI's documentation pages. (I noticed from the log that the JVM does pick up on this setting.)
Add a -mem parameter to the SBT call.
Add concurrentRestrictions in Global += Tags.limit(Tags.Test, 1) to the top of the SBT file, to make sure that at least the memory isn't used all at once.
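Roughly, with illustrative values and file layout:

# circle.yml
machine:
  environment:
    _JAVA_OPTIONS: "-Xms512m -Xmx2048m"

# sbt invocation with an explicit memory limit (in MB)
sbt -mem 2048 test

# at the top of build.sbt
concurrentRestrictions in Global += Tags.limit(Tags.Test, 1)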
All of these measures seemed to have helped, but I haven't found the definitive solution to this problem yet.
What else can I do to keep SBT's memory usage under control?
EDIT: Our project has 5 subprojects, with about 14000 lines of Scala code (and also 21000 lines of Java code that we 'inherited'). The out-of-memory error usually (but not always) occurs while executing static analysis using FindBugs; we use that in conjunction with the FindSecurityBugs plugin to find security issues.
There are two concerns here that are getting mixed up:
CircleCI not picking up the values for the memory limits
SBT using an excessive amount of memory
The first issue has to be addressed by looking at the CircleCI documentation and examples. To investigate why you are using so much memory, you can run sbt locally with a memory limit lower than 4 GB (e.g. 2 GB). You will find yourself in one of these two cases:
Your tests are really using too much memory, maybe because of memory leaks. Your JVM exits with a java.lang.OutOfMemoryError: GC overhead limit exceeded. You should run the build locally with a profiler and see what is causing the problem (database connections not being closed?).
Your tests are using too much memory due to SBT's ability to reload classes dynamically: in SBT it is possible to completely reload a class inside the same JVM (for example, you can launch the console, load a class, edit the file, recompile, relaunch the console and reload the class). As described in the Oracle documentation, there is no limit on the maximum Metaspace in Java 8, so you should set one such that heap + metaspace < 4 GB. See https://blogs.oracle.com/poonam/entry/about_g1_garbage_collector_permanent
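As a rough sketch (exact values depend on the build), heap and metaspace caps can be passed to the sbt launcher together, for example via a .sbtopts file in the project root, where -J-prefixed options go straight to the JVM:

-J-Xms512m
-J-Xmx2048m
-J-XX:MaxMetaspaceSize=512m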

GWT compiler is running out of memory. How do I configure it within IntelliJ?

I'm running GWT from within IntelliJ (plain GWT, IntelliJ v9). I'm able to run my application via my "GWT development mode" configuration, but when I try to run it through my "local Tomcat" configuration, I get a bunch of incomprehensible error messages referring to Oracle and all sorts of weird stuff I don't use, followed by this error, which sorts to the bottom of all the others:
Error: Out of memory; to increase the amount of memory, use the -Xmx flag at startup (java -Xmx128M ...)
I'm guessing that this error is the root cause.
According to my understanding, there is a GWT compilation step which runs in a JVM separate from both IntelliJ and Tomcat, so I'm unsure where to set the -Xmx parameter.
My question is: where do I find this -Xmx parameter? (And: am I on the right track, taking this error message at face value, or is it a symptom of deeper problems?)
You can configure the heap size in the GWT facet settings.
As CrazyCoder just said, you can increase the amount of memory for the GWT compiler in the GWT facet settings. The GWT compiler running out of memory on bigger apps is very common if you don't increase the amount of memory. Most projects should be fine with 512m, but on large projects I have needed more.
So this is not an out-of-memory error caused by bad design on your part.
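For reference, the facet setting controls the heap of the separate JVM that runs the GWT compiler; an equivalent hand-run compile would look roughly like this (the classpath and module name are placeholders):

java -Xmx1024m -cp <project-classpath> com.google.gwt.dev.Compiler com.example.MyModule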
In my application I set it to 1024, but it still gave me out-of-memory errors after 7-9 page refreshes in dev mode. After that I tried to optimize with a performance hack: I separated my *.gwt.xml for every GWT module for specific browsers by setting:
MySampleModule_FF.gwt.xml
<set-property name="user.agent" value="gecko1_8"/>
And I run my dev mode with this gwt.xml for Firefox. In addition, I separated internationalization as well.
After that, my out-of-memory issues decreased considerably.
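For reference, a browser-restricted module descriptor along those lines might look like this (the module and package names are made up):

MySampleModule_FF.gwt.xml:
<module>
  <inherits name="com.example.MySampleModule"/>
  <set-property name="user.agent" value="gecko1_8"/>
</module>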
A 32-bit JDK also sometimes causes out-of-memory errors. Try using a 64-bit JDK.
For those who get the same error under Eclipse: try restoring the cache files (if they were accidentally removed from the WAR folder); that is how I solved the same problem in my Eclipse project.

STS slow build when loading xyz-context.xml files

I recently started using STS on a 64-bit Windows machine. Often when I "clean" my project, STS gets unresponsive or just takes minutes to build while loading context.xml files.
How can I fix this? Is it looking for resources on the web and waiting for timeouts?
EDIT: I noticed that during the build process my network usage goes up. Not sure yet what is going on there...
EDIT: Possibly STS is loading all of the referenced SpringSource XSD files for XML validation? If so, how can I disable this validation (apart from copying the files and referencing them locally, of course)? I've already tried disabling all of the preferences related to "Validation" in STS, to no avail.
Often this happens because Java is running out of free memory and needs to run the garbage collector very often.
You can see the free memory in the bottom right corner of Eclipse if you enable Window/Preferences/General/"Show heap status".
If you can confirm that it is a memory problem, then you can increase the memory in sts.ini (-Xmx).
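For example, the relevant part of sts.ini might look like this (everything after -vmargs is passed to the JVM; the values are just a starting point):

-vmargs
-Xms256m
-Xmx1024m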
It is said that the 64-bit Java version needs up to a third more memory than the 32-bit version, but I don't know whether this rumour is true.

Running a mapreduce jar on Hadoop cluster

I'm trying to run the MapReduce implementation of the quadratic sieve algorithm on Hadoop. For this purpose I'm using the Karmasphere Hadoop community plugin with NetBeans. The program works fine using the plugin, but I'm unable to run it on an actual cluster.
I'm running this command
bin/hadoop jar MRIF.jar 689
Where MRIF.jar is the jar file made by building the NetBeans project and 689 is the number to be factored. The input and output directories are hard-coded in the program itself. When running on the actual cluster, it appears that the inner Java classes are not being processed: reduce reaches 100% while map is still at 0%, and the input and output files are created with no content.
But this works fine when run using the Karmasphere plugin.
Try running it as bin/hadoop -jar MRIF.jar 689. The -jar forces it to run locally and displays information to the console as well as logging to that machine. You can also check the Hadoop logs to see if they have any indication of why it's not happening correctly.
When using -jar you can use System.out.println(...); to display information on the console, further helping to debug.
You can also use Hadoop Counters to assist in troubleshooting when running (pseudo-)distributed.
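As a rough sketch of what a counter looks like inside a mapper (the class and counter names below are made up, not taken from the question's code):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SieveMapper extends Mapper<LongWritable, Text, Text, Text> {
    // Counters show up in the job output and the web UI, which confirms the map code actually ran.
    public enum Debug { RECORDS_SEEN }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.getCounter(Debug.RECORDS_SEEN).increment(1);
        // ... real map logic goes here ...
    }
}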
I admit this post isn't a 'solution' to the problem; without more information about what is happening and where, there is a wide range of things that could be going on. If it is, as you mention, not processing the 'inner Java classes', then it is likely your implementation, which we can't see in order to make suggestions, etc.
More data about the issue, such as logs, errors or output, will likely help you get actual solutions instead of debugging tips. :)
EDIT: Thanks for the link to the files. I think your call is missing a component.
I looked at the run.sh and think this might get it to work for you:
bin/hadoop jar mrif.jar com.javiertordable.mrif.MapReduceQuadraticSieve 689

Out of Memory error starting JBoss with Portal from Eclipse

I cannot get JBoss Portal to start from Eclipse, though the AS alone starts fine, and the Portal also starts correctly when started from the command line rather than from within Eclipse. I'm running on Windows with 3 GB of RAM. Suggestions? Thank you.
I've spent hours discovering this, and almost gave up and started using JBoss outside of Eclipse.
In order to increase the JBoss VM arguments when starting it from Eclipse, you have to change the JBoss launch configuration. If you change standalone.conf, nothing happens, because Eclipse doesn't use it.
So, to change the JBoss VM arguments in Eclipse, go to the "Servers" tab, right-click on your JBoss instance, and select "Open".
A new window will appear. In the first section, there is an option called "Open launch configuration". When you click it, you'll see the text box where you can change the VM arguments.
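For example, the VM arguments box could contain something along these lines (the values are illustrative and depend on your application):

-Xms512m -Xmx1024m -XX:MaxPermSize=512m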
Hope this helps you!
There are different types of OutOfMemory errors:
java.lang.OutOfMemoryError: Java heap space
Increase -Xms and -Xmx. I'd make sure they are set to at least 256m, and generally it's a good idea to set them to the same value.
java.lang.OutOfMemoryError: PermGen space
Add either -XX:+CMSPermGenSweepingEnabled or increase the PermGen size: -XX:PermSize=256m
java.lang.OutOfMemoryError: GC overhead limit exceeded
Add more heap; the garbage collector can't free enough memory with each cycle. Also try turning on GC logging.
java.lang.OutOfMemoryError: unable to create new native thread
Decrease your heap :) This means that so much memory is allocated to the heap that the OS doesn't have enough memory left to create native threads.
Two last things: the above can be configured in jboss/bin/run.conf.
Also, when starting JBoss, check which -X parameters are being passed to the JVM (it prints this information by default) and verify that they are what you expect.
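As a rough sketch, the corresponding line in jboss/bin/run.conf might look like this (values are illustrative):

JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -verbose:gc"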
You need to increase the memory you're allocating to Java, in particular heap space and PermGen. This article is highly relevant. It mentions that this issue often occurs with Eclipse and JBoss (since both are fairly large), and provides a solution (adjusting the command-line flags).
What are you using to run the Portal from Eclipse? Maybe JBoss Tools can help you:
http://www.jboss.org/tools
According to my experiments, the vmargs options set in eclipse.ini apply only once, when a new workspace is created. When you want to change the options in an existing workspace, use the run/debug configuration as in https://stackoverflow.com/a/10814631/715269; the vmargs in the ini file won't be read any more.
Be careful: you should set -XX:MaxPermSize=...M, not -XX:PermSize=...; the latter sets the minimal, starting PermGen size.
Regarding Jeremy's answer: it is senseless to set the minimums and maximums to the same value; you deprive Eclipse of adaptability. -Xms and -Xmx (heap), and PermGen and MaxPermGen, should be different. (MaxPermGen = 256 by default.)