Memory in Eclipse

I'm getting the java.lang.OutOfMemoryError exception in Eclipse. I know that Eclipse
by default uses heap size of 256M. I'm trying to increase it but nothing happens.
For example:
eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g
I also tried different settings: using only the -Xmx option, different cases of g, G, m, M, and different memory sizes, but nothing helps. I also tried specifying the values in the eclipse.ini file. No matter which parameters I specify, the heap exception is thrown at the same point, so I assume I'm doing something wrong and Eclipse is ignoring the -Xmx parameter. I'm using a machine with 32 GB of RAM and trying to execute something very simple, such as:
double[][] a = new double[15000][15000];
It only works when I reduce the array size to roughly 10000 x 10000.
I'm working on Linux and using the top command I can see how much memory the Java
process is consuming; it's less than 2%.
Thanks!
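For a rough sense of scale (my own back-of-the-envelope estimate, not part of the original post), the array alone needs far more memory than a small default heap:

public class ArraySizeEstimate {
    public static void main(String[] args) {
        // 15000 x 15000 doubles at 8 bytes each, ignoring per-row object headers
        long bytes = 15000L * 15000L * 8L;
        System.out.printf("~%.1f GB for the element data alone%n", bytes / 1e9);
        // prints ~1.8 GB, so the allocation cannot succeed until -Xmx of the
        // launched JVM (not of Eclipse itself) is raised well above that
    }
}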

Okay, I found a solution after reading
Why does heap space run out only when running JUnit tests?
When I specify -Xmx inside Eclipse by going to Run > Run Configurations > VM arguments and setting -Xmx there, everything works fine :)

Error java.lang.OutOfMemoryError: GC overhead limit exceeded

This question already has answers here:
Error java.lang.OutOfMemoryError: GC overhead limit exceeded
(22 answers)
Closed 3 years ago.
I am getting this error in a program that creates several hundred thousand HashMap objects with a few (15-20) text entries each. All of these Strings have to be collected (without breaking them up into smaller batches) before being submitted to a database.
According to Sun, the error happens "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown."
Apparently, one could use the command line to pass arguments to the JVM for
Increasing the heap size, via "-Xmx1024m" (or more), or
Disabling the error check altogether, via "-XX:-UseGCOverheadLimit".
The first approach works fine, the second ends up in another java.lang.OutOfMemoryError, this time about the heap.
So, question: is there any programmatic alternative to this, for the particular use case (i.e., several small HashMap objects)? If I use the HashMap clear() method, for instance, the problem goes away, but so do the data stored in the HashMap! :-)
The issue is also discussed in a related topic in StackOverflow.
You're essentially running out of memory to run the process smoothly. Options that come to mind:
Specify more memory, as you mentioned; try something in between, like -Xmx512m, first
Work with smaller batches of HashMap objects to process at once, if possible
If you have a lot of duplicate strings, use String.intern() on them before putting them into the HashMap
Use the HashMap(int initialCapacity, float loadFactor) constructor to tune for your case (see the sketch below)
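As a small illustration of the last two points (a sketch of my own; the field layout is made up), interning repeated strings and pre-sizing each map can cut the per-map overhead:

import java.util.HashMap;
import java.util.Map;

public class SmallRecordExample {
    // Builds one of the many small maps; intern() collapses duplicate Strings
    // into a single shared instance, and pre-sizing avoids rehashing.
    static Map<String, String> buildRecord(String[][] fields) {
        Map<String, String> record = new HashMap<>(32, 0.75f); // ~20 entries expected
        for (String[] kv : fields) {
            record.put(kv[0].intern(), kv[1].intern());
        }
        return record;
    }

    public static void main(String[] args) {
        Map<String, String> r = buildRecord(new String[][] {
            {"city", "Berlin"}, {"country", "DE"}
        });
        System.out.println(r);
    }
}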
The following worked for me. Just add this snippet:
dexOptions {
    javaMaxHeapSize "4g"
}
To your build.gradle:
android {
    compileSdkVersion 23
    buildToolsVersion '23.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 14
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
        multiDexEnabled true
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    packagingOptions {
    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}
#takrl: The default setting for this option is:
java -XX:+UseConcMarkSweepGC
which means, this option is not active by default. So when you say you used the option
"+XX:UseConcMarkSweepGC"
I assume you were using this syntax:
java -XX:+UseConcMarkSweepGC
which means you were explicitly activating this option.
For the correct syntax and default settings, see the Java HotSpot VM Options documentation.
For the record, we had the same problem today. We fixed it by using this option:
-XX:-UseConcMarkSweepGC
Apparently, this modified the strategy used for garbage collection, which made the issue disappear.
Ummm... you'll either need to:
Completely rethink your algorithm & data-structures, such that it doesn't need all these little HashMaps.
Create a facade which allows you to page those HashMaps in and out of memory as required. A simple LRU cache might be just the ticket (see the sketch after this list).
Up the memory available to the JVM. If necessary, even purchasing more RAM might be the quickest, CHEAPEST solution, if you manage the machine that hosts this beast. Having said that: I'm generally not a fan of "throw more hardware at it" solutions, especially if an alternative algorithmic solution can be thought up within a reasonable timeframe. If you keep throwing more hardware at every one of these problems you soon run into the law of diminishing returns.
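A minimal sketch of option 2 (my own illustration; a real facade would write evicted maps out to disk or a database rather than simply dropping them):

import java.util.LinkedHashMap;
import java.util.Map;

// Keeps at most maxEntries values in memory; the least-recently-used entry is
// evicted once the limit is exceeded (this is where paging-out would happen).
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}

For the question's use case this could hold, say, new LruCache<String, Map<String, String>>(10000) of the small HashMaps at a time.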
What are you actually trying to do anyway? I suspect there's a better approach to your actual problem.
Use an alternative HashMap implementation (Trove). The standard Java HashMap has >12x memory overhead.
You can read the details here.
Don't store the whole structure in memory while waiting to get to the end.
Write intermediate results to a temporary table in the database instead of hashmaps - functionally, a database table is the equivalent of a hashmap, i.e. both support keyed access to data, but the table is not memory bound, so use an indexed table here rather than the hashmaps.
If done correctly, your algorithm should not even notice the change - correctly here means to use a class to represent the table, even giving it a put(key, value) and a get(key) method just like a hashmap.
When the intermediate table is complete, generate the required SQL statement(s) from it instead of from memory.
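A rough sketch of such a table-backed class (my own illustration; the table name, column types and how you obtain the JDBC Connection are assumptions about your setup):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Looks like a hashmap to the algorithm, but keeps the data in an indexed table
// (assumed schema: CREATE TABLE staging (k VARCHAR(255) PRIMARY KEY, v VARCHAR(255))).
class DbBackedMap {
    private final Connection conn;

    DbBackedMap(Connection conn) {
        this.conn = conn;
    }

    void put(String key, String value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO staging (k, v) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }

    String get(String key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT v FROM staging WHERE k = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}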
The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection. In particular, if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
If you're creating hundreds of thousands of hash maps, you're probably using far more than you actually need; unless you're working with large files or graphics, storing simple data shouldn't overflow the Java memory limit.
You should try to rethink your algorithm. I would offer more help on that subject, but I can't give any more information until you provide more context about the problem.
If you have Java 8 and can use the G1 garbage collector, then run your application with:
-XX:+UseG1GC -XX:+UseStringDeduplication
This tells G1 to find similar Strings and keep only one copy of each in memory; the others become references to that single copy.
This is useful when you have a lot of repeated strings. The solution may or may not work, depending on the application.
More info on:
https://blog.codecentric.de/en/2014/08/string-deduplication-new-feature-java-8-update-20-2/
http://java-performance.info/java-string-deduplication/
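As an illustration (my own example; yourapp.jar is a placeholder, and the statistics flag ships with the same Java 8u20+ G1 builds), you can check whether deduplication is actually paying off:

java -Xmx2g -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar yourapp.jar

The statistics printed at each GC show how many strings were deduplicated and roughly how much memory that saved.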
Fix memory leaks in your application with the help of profiling tools like Eclipse MAT or VisualVM.
With JDK 1.7.x or later versions, use G1GC, which spends 10% of time on garbage collection, unlike the 2% in other GC algorithms.
Apart from setting heap memory with -Xms1g -Xmx2g, try:
-XX:+UseG1GC
-XX:G1HeapRegionSize=n
-XX:MaxGCPauseMillis=m
-XX:ParallelGCThreads=n
-XX:ConcGCThreads=n
Have a look at the Oracle article on fine-tuning these parameters.
Some questions related to G1GC on Stack Exchange:
Java 7 (JDK 7) garbage collection and documentation on G1
Java G1 garbage collection in production
Aggressive garbage collector strategy
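For illustration only (my own example; the numbers are placeholders to be tuned per the Oracle article, and yourapp.jar is a stand-in), a full command line might look like:

java -Xms1g -Xmx2g -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 -jar yourapp.jar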
For this, use the code below in your app's Gradle file, inside the android closure.
dexOptions {
    javaMaxHeapSize "4g"
}
In case of the error:
"Internal compiler error: java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.AbstractStringBuilder"
increase the Java heap space to 2 GB, i.e. -Xmx2g.
You need to increase the memory size in JDeveloper. Go to setDomainEnv.cmd and set:
set WLS_HOME=%WL_HOME%\server
set XMS_SUN_64BIT=256
set XMS_SUN_32BIT=256
set XMX_SUN_64BIT=3072
set XMX_SUN_32BIT=3072
set XMS_JROCKIT_64BIT=256
set XMS_JROCKIT_32BIT=256
set XMX_JROCKIT_64BIT=1024
set XMX_JROCKIT_32BIT=1024
if "%JAVA_VENDOR%"=="Sun" (
set WLS_MEM_ARGS_64BIT=-Xms256m -Xmx512m
set WLS_MEM_ARGS_32BIT=-Xms256m -Xmx512m
) else (
set WLS_MEM_ARGS_64BIT=-Xms512m -Xmx512m
set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx512m
)
and
set MEM_PERM_SIZE_64BIT=-XX:PermSize=256m
set MEM_PERM_SIZE_32BIT=-XX:PermSize=256m
if "%JAVA_USE_64BIT%"=="true" (
set MEM_PERM_SIZE=%MEM_PERM_SIZE_64BIT%
) else (
set MEM_PERM_SIZE=%MEM_PERM_SIZE_32BIT%
)
set MEM_MAX_PERM_SIZE_64BIT=-XX:MaxPermSize=1024m
set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=1024m
In my case, increasing the memory using the -Xmx option was the solution.
I was reading a 10 GB file in Java and kept getting the same error. It happened when the value in the RES column of the top command reached the value set with the -Xmx option. After increasing the memory with -Xmx, everything went fine.
There was another point as well. When I set JAVA_OPTS or CATALINA_OPTS in my user account and increased the amount of memory, I got the same error again. I then printed the values of those environment variables from my code, and they were different from what I had set. The reason was that Tomcat was the parent of that process, and since I was not a sudoer, I asked the admin to increase the memory in Tomcat's catalina.sh.
This helped me get rid of the error. The following option disables explicit garbage-collection calls (System.gc()):
-XX:+DisableExplicitGC

Recursive call stack depth

I have a recursive function that works for input where the call stack depth is up to 1000, but fails for bigger inputs. I converted the function to be tail recursive and that allowed it to get to about 1350.
What are the limits and is there any way to increase that limit?
I am working with pure functions and would like to avoid having to use operations. I have a solution that breaks up the problem into a composition of steps, each of which has a smaller stack depth, but it is rather contrived since its only purpose is to avoid the issue and it is more complex.
This is my mistake again... the setting for the Java stack is -Xss (the -Xms setting is the starting heap size), sorry. So if you use the JVM Arguments section in the Debugger tab of the launcher, and set something like -Xss5m, you should get further.
In a simple experiment with a recursive function, the default stack allowed me a depth of 227 calls. Using -Xss5m gave me 4020 calls, and -Xss10m gave me 8050 calls. Note that these stack sizes are somewhat less than the GB sizes you were trying - 5 MB of stack is a lot of calls!
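For reference, a depth-counting experiment along those lines can be reproduced with a small Java program (my own sketch, unrelated to Overture's interpreter, so the absolute numbers will differ); run it as java DepthTest, then again as java -Xss5m DepthTest:

public class DepthTest {
    static int depth = 0;

    static void recurse() {
        depth++;      // count how deep we get before the stack runs out
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Maximum depth reached: " + depth);
        }
    }
}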
Overture does not impose a stack limit over the underlying Java stack limit, so it will simply respect the -Xms JVM argument. I think the regular execution stack for the interpreter comes from the Overture.ini file (top level), where you see the -Xmx argument to set the maximum heap. Can you try adding (say) -Xms128m, or a size of your choice, and see whether that gets you further?
It sounds like you are asking about how to increase the Java Stack Limit in the Overture debugger and not in the Overture IDE (overture.ini).
To pass additional arguments to the Overture debugger, you need to add them to the launch configuration:
Open the launch configuration
Select the "Debugger" tab
Then add your arguments to the box shown next to "Arguments:" at the top
(screenshot: Overture launch configuration)
I have tried with -Xms and -Xmx both set to 2048m, but without any impact. I have also tried Overture 2.3.0 on both Mac OS X and Windows 10, with the same result.
To take my project out of the loop, I created a new project with one very simple function:
countdown(n:nat) res:nat
== if n=0 then n else countdown(n-1)
On both Windows and Mac I can call this with value 807 and be successful, while with 808 it fails with error:
internal error
Main 206: Error evaluating code
Detailed Message: internal error

Out of Memory in MATLAB

I have two huge matrices, dat1 (87093x59) and dat2 (99802x59). I tried to do the following operation, R = dat1*dat1', but MATLAB throws me an error:
??? Out of memory. Type HELP MEMORY for your options.
I have increased the Java heap memory to 2012 MB but the problem remains. Can anyone help me out?
System config: Windows 7 64-bit, 8 GB RAM, MATLAB R2010a 32-bit.
Given dat1 is 87093x59, doing R = dat1*dat1' produces an 87093x87093 output. If you really meant dat1*dat2', it's even worse: 87093x99802.
Assuming dat1 is double precision (8 bytes per element), dat1*dat1' is 60,681,525,192 bytes (about 60GB). With dat1*dat2' it's close to 69 GB. I'd say give up or re-evaluate your approach.
Try using jconsole or jvisualvm which are bundled in the bin folder of your JDK. Then view your running java process. It could be that your PermGen is the culprit here OR possibly a memory leak, in which case you can dump the memory via these tools and use a heap analysis tool to find out what might be causing the extreme memory usage.

Eclipse heap space (out of memory error)

I am facing a memory issue in Eclipse. Initially I was getting this error: ‘Unhandled event loop exception: java heap space’ and also sometimes ‘An out of memory error has occurred’.
I somehow managed to increase my heap size up to -Xmx990m, but it's still not working. When I try to increase the heap size beyond that, I get the error ‘Unable to create virtual machine’ while starting Eclipse.
I tried to make other changes in the eclipse.ini file. When I change XXMaxPermSize, it gives me a ‘PermGen memory error’. A few times I got other kinds of errors, like ‘Unhandled event loop exception: GC overhead limit exceeded’ and 2-3 more types. Please help; anything that can be done would be great!
Jeshurun's somewhat flippant comment about buying more RAM is actually fairly accurate. Eclipse is a memory HOG! On my machine right now Eclipse is using 2.1GB; no joke. If you want to be able to use Eclipse really effectively, with all the great features, you really need lots of memory.
That being said, there are ways to use Eclipse with less memory. The biggest helper I've found is disabling ALL validators (check "Suspend all validators" under Window>Preferences>Validation; just disabling the individual ones doesn't help enough). Another common source of memory-suckage is plugins. If you're going to stay at your current memory limit, I strongly recommend that you:
Uninstall your current Eclipse
Download the core/standalone/just-Java version of Eclipse (the one with the smallest file size and no plug-ins built in)
Try using just that for a while, and see how the performance is. If it's OK, try installing the plug-ins you like, one at a time. Never install multiple at once, and give each one a week or two of trial.
You'll likely find that some plug-ins dramatically increase memory usage; don't use those (or if you do, get more RAM).
Hope that helps.
I also faced the same problem. I resolved it by doing the build with the following steps:
Right-click on the project and select Run As > Run Configurations.
Select your project as the base directory.
In place of Goals, give eclipse:eclipse install.
In the second tab, give -Xmx1024m as the VM arguments.
I faced a similar situation. My program had to run a simulation for 10000 trials.
I tried -Xmx1024m: it still crashed.
Then I realized that, since my program was writing so much output to the console, the console display buffer might be what was blowing up.
Simple solution: right-click the console > Preferences > check 'Limit console output' > enter the buffer size in characters [default: 80000].
I had unchecked it for analyzing a single run, but when the final run had 10000 trials, it started to crash past 500 trials.
Today was the day: three times I thought about how programming in Java lets me skip the whole job of memory deallocation, and cursed C for the same. And here I am, having spent the last 2.5 hours finding out how to force GC and how to deallocate a variable (by the way, neither was required).
Have a good day!

JBoss5.X out of memory error

JBoss crashed with an out of memory error; how do I prevent this? I modified the values in run.bat but the result is the same.
"- Xms1024 Xmx1024 PermGen512"
You might have a resource leak, in which case anything but finding and removing the leak will only delay the error, not prevent it. jhat & -XX:+HeapDumpOnOutOfMemoryError will let you inspect the objects in your heap at the time of the OOM, which is a decent start to figuring out if you have a leak & where your leak is.
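For instance (a sketch with placeholder names; the dump file is written to the working directory and its name contains the real PID):

java -XX:+HeapDumpOnOutOfMemoryError -Xmx1024m -jar yourapp.jar
jhat java_pid12345.hprof

jhat then serves the dump on http://localhost:7000, where you can browse the object instances and their reference chains.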
As for run.bat, the options you list may not be working the way you intend. I would be sure to specify the "m"egabyte (kilobyte? gigabyte? mb seemed most likely here) suffix explicitly, and to set the max size before the initial size. So, -Xmx1024m -Xms1024m -XX:MaxPermSize=512M.
512 megabytes, by the way, is a big size for a permanent generation. Maybe you meant KB? You can either use jstat or add -XX:+PrintGCDetails to your run.bat to see how much permanent generation space is actually being used.
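For example (the PID is illustrative), jstat can sample heap utilisation, including the permanent generation, once a second:

jstat -gcutil 12345 1000

On pre-Java-8 JVMs the P column shows the permanent generation utilisation as a percentage of its current capacity.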
Your problem might be related to the problem explained here: JVM: Solving OutOfMemoryError with less Memory
In JBoss version 5.0.0.GA, while running the application in JBoss, I faced the out of memory error because the application processes a large amount of data.
To resolve it, you can either optimize the code so that there is less data in heap memory during processing, or you can increase the heap memory of JBoss:
JAVA_OPTS="-Xmx4096m -Xms4096m -XX:MaxNewSize=896m -XX:NewSize=896m"
You can change the memory values as per your requirement.
If the out of memory error is about PermGen space, you can restart the server to resolve it for the moment, and you can prevent it by changing the memory value of the variable mentioned below:
-XX:MaxPermSize=256m
Thanks,
Ankit Adlakha
Might be related to this.
https://issues.jboss.org/browse/JBAS-7553
Apparently, when running as a service, JBoss might ignore -Xms.