Eclipse heap space (out of memory error)

I am facing a memory issue in Eclipse. Initially I was getting the error 'Unhandled event loop exception: java heap space', and sometimes 'An out of memory error has occurred'.
I somehow managed to increase my heap size up to -Xmx990m, but it is still not working. When I try to increase the heap size beyond that, I get the error 'Unable to create virtual machine' while starting Eclipse.
I tried making other changes in the eclipse.ini file. When I change -XX:MaxPermSize, it gives me a PermGen memory error. A few times I got other kinds of errors, such as 'Unhandled event loop exception: GC overhead limit exceeded', and 2-3 more. Any help would be great!
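For reference, the memory settings live at the end of eclipse.ini, after the -vmargs line; a typical block looks like this (the values are illustrative, not recommendations):

-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=256m

Also, failing with 'Unable to create virtual machine' at around 1 GB is characteristic of a 32-bit JVM, which cannot reserve a larger contiguous heap; pointing Eclipse at a 64-bit JVM (via the -vm entry in eclipse.ini) lifts that ceiling.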

Jeshurun's somewhat flippant comment about buying more RAM is actually fairly accurate. Eclipse is a memory HOG! On my machine right now Eclipse is using 2.1GB; no joke. If you want to be able to use Eclipse really effectively, with all the great features, you really need lots of memory.
That being said, there are ways to use Eclipse with less memory. The biggest helper I've found is disabling ALL validators (check "Suspend all validators" under Window>Preferences>Validation; just disabling the individual ones doesn't help enough). Another common source of memory-suckage is plugins. If you're going to stay at your current memory limit, I strongly recommend that you:
1. Uninstall your current Eclipse.
2. Download the core/standalone/just-Java version of Eclipse (the one with the smallest file size / no plug-ins built in).
3. Try using just that for a while, and see how the performance is. If it's OK, try installing the plug-ins you like, one at a time. Never install several at once, and give each one a week or two of trial.
You'll likely find that some plug-ins dramatically increase memory usage; don't use those (or if you do, get more RAM).
Hope that helps.

I also faced the same problem. I resolved it by doing the build with the following steps:
1. Right-click the project and select Run As > Run Configurations.
2. Select your project as the base directory.
3. In place of goals, give eclipse:eclipse install.
4. In the second tab, give -Xmx1024m as the VM arguments.
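Equivalently, when running the same build outside Eclipse, Maven's JVM heap can be raised through the MAVEN_OPTS environment variable (the value is illustrative):

export MAVEN_OPTS=-Xmx1024m
mvn eclipse:eclipse install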

I faced a similar situation. My program had to run a simulation for 10000 trials.
I tried -Xmx1024m: it still crashed.
Then I realized that, given how much output my program put on the console, the console display buffer might be going out of bounds.
Simple solution => right-click the console > Preferences > check "Limit console output" > enter the buffer size in characters [default: 80000].
I had unchecked it for analyzing a single run, but when the final run had 10000 trials, it started to crash past 500 trials.
Today was the day: I had thought, three times, about how programming in Java lets me skip the whole job of memory deallocation, and cursed C for the same. And here I am, having spent the last 2.5 hours finding out how to force GC and how to deallocate a variable (by the way, neither was required).
Have a good day!

Error java.lang.OutOfMemoryError: GC overhead limit exceeded [duplicate]

I am getting this error in a program that creates several hundred thousand HashMap objects, each with a few (15-20) text entries. These Strings all have to be collected (without breaking them up into smaller amounts) before being submitted to a database.
According to Sun, the error happens "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown".
Apparently, one could use the command line to pass arguments to the JVM for
Increasing the heap size, via "-Xmx1024m" (or more), or
Disabling the error check altogether, via "-XX:-UseGCOverheadLimit".
The first approach works fine, the second ends up in another java.lang.OutOfMemoryError, this time about the heap.
So, question: is there any programmatic alternative to this, for the particular use case (i.e., several small HashMap objects)? If I use the HashMap clear() method, for instance, the problem goes away, but so do the data stored in the HashMap! :-)
The issue is also discussed in a related topic on Stack Overflow.
You're essentially running out of memory to run the process smoothly. Options that come to mind:
Specify more memory like you mentioned; try something in between, like -Xmx512m, first
Work with smaller batches of HashMap objects to process at once, if possible
If you have a lot of duplicate strings, use String.intern() on them before putting them into the HashMap
Use the HashMap(int initialCapacity, float loadFactor) constructor to tune for your case (the last two suggestions are sketched below)
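A minimal sketch of those last two suggestions, assuming maps of 15-20 entries as described above (the sizing numbers are illustrative, not tuned values):

import java.util.HashMap;
import java.util.Map;

public class SmallMaps {
    // Presizing to 32 with the default load factor (0.75) means a map of
    // up to 24 entries never needs to rehash: 32 * 0.75 = 24 >= 20.
    static Map<String, String> newEntryMap() {
        return new HashMap<>(32, 0.75f);
    }

    public static void main(String[] args) {
        Map<String, String> row = newEntryMap();
        // intern() makes duplicate strings share one canonical instance,
        // so hundreds of thousands of maps don't each hold their own copy.
        row.put("color".intern(), "blue".intern());
        System.out.println(row);
    }
}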
The following worked for me. Just add this snippet:
dexOptions {
    javaMaxHeapSize "4g"
}
to your build.gradle:
android {
    compileSdkVersion 23
    buildToolsVersion '23.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 14
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
        multiDexEnabled true
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    packagingOptions {
    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}
#takrl: The default setting for this option is:
java -XX:-UseConcMarkSweepGC
which means this option is not active by default. So when you say you used the option
"+XX:UseConcMarkSweepGC"
I assume you were using this syntax:
java -XX:+UseConcMarkSweepGC
which means you were explicitly activating this option.
For the correct syntax and default settings of Java HotSpot VM options, see this document.
For the record, we had the same problem today. We fixed it by using this option:
-XX:+UseConcMarkSweepGC
Apparently, this modified the strategy used for garbage collection, which made the issue disappear.
Ummm... you'll either need to:
Completely rethink your algorithm & data structures, such that they don't need all these little HashMaps.
Create a facade which allows you to page those HashMaps in and out of memory as required. A simple LRU cache might be just the ticket (see the sketch after this list).
Up the memory available to the JVM. If necessary, even purchasing more RAM might be the quickest, CHEAPEST solution, if you have the management of the machine that hosts this beast. Having said that: I'm generally not a fan of the "throw more hardware at it" solutions, especially if an alternative algorithmic solution can be thought up within a reasonable timeframe. If you keep throwing more hardware at every one of these problems you soon run into the law of diminishing returns.
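A minimal sketch of such an LRU facade, built on LinkedHashMap's access-order mode (the class name and capacity are made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);  // true = iterate in access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the cap is exceeded;
        // a real facade would spill the evicted entry to disk or a DB here.
        return size() > maxEntries;
    }
}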
What are you actually trying to do anyway? I suspect there's a better approach to your actual problem.
Use an alternative HashMap implementation (e.g. Trove). The standard Java HashMap has >12x memory overhead.
One can read the details here.
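For example, with Trove 3 on the classpath, THashMap is close to a drop-in replacement (a sketch, assuming Trove 3's gnu.trove.map.hash package):

import gnu.trove.map.hash.THashMap;
import java.util.Map;

public class TroveDemo {
    public static void main(String[] args) {
        // THashMap implements java.util.Map but uses open addressing,
        // avoiding the per-entry wrapper objects of java.util.HashMap.
        Map<String, String> entries = new THashMap<>();
        entries.put("key", "value");
        System.out.println(entries.get("key"));
    }
}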
Don't store the whole structure in memory while waiting to get to the end.
Write intermediate results to a temporary table in the database instead of hashmaps - functionally, a database table is the equivalent of a hashmap, i.e. both support keyed access to data, but the table is not memory bound, so use an indexed table here rather than the hashmaps.
If done correctly, your algorithm should not even notice the change - correctly here means to use a class to represent the table, even giving it a put(key, value) and a get(key) method just like a hashmap.
When the intermediate table is complete, generate the required sql statement(s) from it instead of from memory.
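A sketch of such a table-backed facade over plain JDBC (the table name is made up, the MERGE upsert is H2 syntax, and connection handling is left to the caller — all assumptions to adapt):

import java.sql.*;

public class DbBackedMap implements AutoCloseable {
    private final Connection conn;

    public DbBackedMap(Connection conn) throws SQLException {
        this.conn = conn;
        try (Statement st = conn.createStatement()) {
            // k is indexed via the primary key, so get(key) stays fast.
            st.execute("CREATE TABLE IF NOT EXISTS kv_buffer ("
                     + "k VARCHAR(255) PRIMARY KEY, v VARCHAR(4096))");
        }
    }

    public void put(String key, String value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "MERGE INTO kv_buffer (k, v) KEY (k) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }

    public String get(String key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT v FROM kv_buffer WHERE k = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    @Override public void close() throws SQLException { conn.close(); }
}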
The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection. In particular, if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
If you're creating hundreds of thousands of hash maps, you're probably using far more than you actually need; unless you're working with large files or graphics, storing simple data shouldn't overflow the Java memory limit.
You should try to rethink your algorithm. I would offer more help on that subject, but I cannot say anything until you provide more context about the problem.
If you have Java 8 and can use the G1 garbage collector, run your application with:
-XX:+UseG1GC -XX:+UseStringDeduplication
This tells G1 to find similar Strings and keep only one of them in memory; the others become pointers to that one String.
This is useful when you have a lot of repeated strings. The solution may or may not work; it depends on the application.
More info on:
https://blog.codecentric.de/en/2014/08/string-deduplication-new-feature-java-8-update-20-2/
http://java-performance.info/java-string-deduplication/
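Put together on a command line, the flags above look like this (the jar name is a placeholder; the extra flag just reports how much memory deduplication recovered):

java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar yourapp.jar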
Fix memory leaks in your application with the help of profiling tools like Eclipse MAT or VisualVM.
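Both tools can open a heap dump captured with the JDK's jmap (the file name and <pid> are placeholders); adding -XX:+HeapDumpOnOutOfMemoryError to the JVM options captures one automatically at the moment of failure:

jmap -dump:live,format=b,file=heap.hprof <pid>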
With JDK 1.7.x or later, use G1GC, which spends 10% of time on garbage collection, unlike the 2% in other GC algorithms.
Apart from setting heap memory with -Xms1g -Xmx2g, try:
-XX:+UseG1GC
-XX:G1HeapRegionSize=n
-XX:MaxGCPauseMillis=m
-XX:ParallelGCThreads=n
-XX:ConcGCThreads=n
Have a look at the Oracle article on fine-tuning these parameters.
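Filled in, an invocation might look like this (the region size, pause goal, thread counts, and jar name are illustrative placeholders, not recommendations):

java -Xms1g -Xmx2g -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -jar yourapp.jar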
Some questions related to G1GC on Stack Exchange:
Java 7 (JDK 7) garbage collection and documentation on G1
Java G1 garbage collection in production
Aggressive garbage collector strategy
For this, use the code below in your app's build.gradle file, under the android closure:
dexOptions {
    javaMaxHeapSize "4g"
}
In case of the error:
"Internal compiler error: java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.AbstractStringBuilder"
increase the Java heap space to 2 GB, i.e., -Xmx2g.
You need to increase the memory size in JDeveloper. Go to setDomainEnv.cmd and set:
set WLS_HOME=%WL_HOME%\server
set XMS_SUN_64BIT=256
set XMS_SUN_32BIT=256
set XMX_SUN_64BIT=3072
set XMX_SUN_32BIT=3072
set XMS_JROCKIT_64BIT=256
set XMS_JROCKIT_32BIT=256
set XMX_JROCKIT_64BIT=1024
set XMX_JROCKIT_32BIT=1024
if "%JAVA_VENDOR%"=="Sun" (
set WLS_MEM_ARGS_64BIT=-Xms256m -Xmx512m
set WLS_MEM_ARGS_32BIT=-Xms256m -Xmx512m
) else (
set WLS_MEM_ARGS_64BIT=-Xms512m -Xmx512m
set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx512m
)
and
set MEM_PERM_SIZE_64BIT=-XX:PermSize=256m
set MEM_PERM_SIZE_32BIT=-XX:PermSize=256m
if "%JAVA_USE_64BIT%"=="true" (
set MEM_PERM_SIZE=%MEM_PERM_SIZE_64BIT%
) else (
set MEM_PERM_SIZE=%MEM_PERM_SIZE_32BIT%
)
set MEM_MAX_PERM_SIZE_64BIT=-XX:MaxPermSize=1024m
set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=1024m
For my case, increasing the memory using the -Xmx option was the solution.
I was reading a 10 GB file in Java, and each time I got the same error. It happened when the value in the RES column of the top command reached the value set in the -Xmx option. Increasing the memory with the -Xmx option made everything go fine.
There was another point as well. When I set JAVA_OPTS or CATALINA_OPTS in my user account and increased the amount of memory, I got the same error again. I printed the values of those environment variables in my code, and they were different from what I had set. The reason was that Tomcat was the root of that process; since I was not a sudoer, I asked the admin to increase the memory in catalina.sh in Tomcat.
This helped me get rid of the error. The following option disables explicit garbage collection (calls to System.gc() from application code become no-ops):
-XX:+DisableExplicitGC

MATLAB LAPACK/BLAS loading error: "dlopen: cannot load any more object with static TLS" [duplicate]

For the last couple of days, I have constantly received the same error while using MATLAB; it happens at some point within dlopen. I am pretty new to MATLAB, and that is why I don't know what to do. Google doesn't seem to be helping me either. When I try to compute eigenvectors, I get this:
Error using eig
LAPACK loading error:
dlopen: cannot load any more object with static TLS
I also get this when doing a multiplication:
Error using *
BLAS loading error:
dlopen: cannot load any more object with static TLS
I did of course look for solutions to this problem, but I don't understand much of them and don't know what to do. These are the threads I found:
How do I use the BLAS library provided by MATLAB?
http://www.mathworks.de/de/help/matlab/matlab_external/calling-lapack-and-blas-functions-from-mex-files.html
Can someone help me, please?
Examples of function calls demonstrating this error
>> randn(3,3)
ans =
2.7694 0.7254 -0.2050
-1.3499 -0.0631 -0.1241
3.0349 0.7147 1.4897
>> eig(ans)
Error using eig
LAPACK loading error:
dlopen: cannot load any more object with static TLS
That's bug no. 961964 of MATLAB, known since R2012b (8.0). MATLAB dynamically loads some libs with static TLS (thread-local storage; e.g. see the gcc compiler flag -ftls-model). Loading too many such libs => no space left.
Until now, MathWorks' only workaround is to load the important(!) libs first by using them early (they suggest putting "ones(10)*ones(10);" in startup.m). I'd better not comment on this "solution strategy".
Since R2013b (8.2.0.701) with Linux x86_64, my experience is: don't use "doc" (the graphical help system)! I think this doc utility (libxul, etc.) uses a lot of static TLS memory.
Here is an update (2013/12/31)
All the following tests were done with Fedora 20 (with glibc-2.18-11.fc20) and Matlab 8.3.0.73043 (R2014a Prerelease).
For more information on TLS, see
Ulrich Drepper, ELF handling For Thread-Local Storage, Version 0.21, 2013,
currently available at Akkadia and Redhat.
What happens exactly?
MATLAB dynamically (with dlopen) loads several libraries that need TLS initialization. All those libs need a slot in the dtv (dynamic thread vector). Because MATLAB loads several of these libs dynamically at runtime, the linker (at MathWorks) had no chance at compile/link time to count the slots needed (that's the important part). Now it's the task of the dynamic lib loader to handle such a case at runtime. But this is not easy. To cite dl-open.c:
For static TLS we have to allocate the memory here and
now. This includes allocating memory in the DTV. But we
cannot change any DTV other than our own. So, if we
cannot guarantee that there is room in the DTV we don't
even try it and fail the load.
There is a compile time constant (called DTV_SURPLUS, see glibc-source/sysdeps/generic/ldsodefs.h) in the glibc's dynamic lib loader for reserving a number of additional slots for such a mess (dynamically loading libs with static TLS in a multithreading program). In the glibc-Version of Fedora 20 this value is 14.
Here are the first libs (running MATLAB) that needed dtv slots in my case:
matlabroot/bin/glnxa64/libut.so
/lib64/libstdc++.so.6
/lib64/libpthread.so.0
matlabroot/bin/glnxa64/libunwind.so.8
/lib64/libuuid.so.1
matlabroot/sys/java/jre/glnxa64/jre/lib/amd64/server/libjvm.so
matlabroot/sys/java/jre/glnxa64/jre/lib/amd64/libfontmanager.so
matlabroot/sys/java/jre/glnxa64/jre/lib/amd64/libt2k.so
matlabroot/bin/glnxa64/mkl.so
matlabroot/sys/os/glnxa64/libiomp5.so
/lib64/libasound.so.2
matlabroot/sys/jxbrowser/glnxa64/xulrunner/xulrunner-linux-64/libxul.so
/lib64/libselinux.so.1
/lib64/libpixman-1.so.0
/lib64/libEGL.so.1
/lib64/libGL.so.1
/lib64/libglapi.so.0
Yes, more than 14 => too many => no slot left in the dtv. That's what the error message tries to tell us, and especially MathWorks.
For the record: in order not to violate MATLAB's license, I didn't debug, decompile or disassemble any part of the binaries shipped with MATLAB. I only debugged the free and open glibc binaries of Fedora 20 that MATLAB was using to dynamically load the libs.
What can be done, to solve this problem?
There are 3 options:
(a)
Rebuild MATLAB so that it does not dynamically load those libs
(with the initial-exec TLS model) but instead links against them (then the linker
can count the required slots!)
(b)
Rebuild those libs and ensure they are NOT using the initial-exec tls model.
(c)
Rebuild glibc and increase DTV_SURPLUS in
glibc/sysdeps/generic/ldsodefs.h
Obviously, options (a) and (b) can only be done by MathWorks.
Option (c) needs no MATLAB source, and thus can be done without MathWorks.
What is the status at mathworks?
I really tried to explain this to the "MathWorks Technical Support Department". But my impression is: they don't understand me. They closed my support ticket and suggested a telephone(!) conversation in January 2014 with a technical support manager.
I'll do my very best to explain this, but to be honest: I'm not very confident.
Update (2014/01/10): Currently MathWorks is trying option (b).
Update (2014/03/19): For the file libiomp5.so you can download a newly compiled version (without static TLS) at MathWorks, bug report 961964. And the other libs? No improvement there. So don't be surprised to get "dlopen: cannot load any more object with static TLS" with "doc", e.g. see bug report 1003952.
Restarting Matlab solved the problem for me.
Long story short: in the directory that you start MATLAB from, create a file
startup.m with the content ones(10)*ones(10);. Restart MATLAB and it will be taken care of.
This is, as I find, an age-old problem yet unsolved by MathWorks.
Here are my two cents, which worked for me (when I wanted IT++ external libraries, with MEX).
Let the library that you found to be the cause of the problem be "libXYZ.so", and that you know where it lies on your system.
The solution is to make MATLAB load the specific library at the very beginning of its startup. The reason for this error is apparently a lack of slots for thread-local storage (TLS), because they have already been filled up.
Because the latest compilation suddenly required a new library that was not loaded earlier during startup, MATLAB throws this error.
Pity that MathWorks never cared to resolve this problem for so long.
Fortunately, the solution is a single, very simple terminal command.
Typical steps on a Linux machine are as follows:
Open command prompt (Ctrl+Alt+T in Ubuntu)
Issue the following command
export LD_PRELOAD=<PATH-TO-libxyz.so>
e.g.: export LD_PRELOAD=/usr/local/lib/libitpp.so
Start matlab from the same terminal
matlab &
Running your program now should resolve the issue, as it did in my case.
Good luck!
Reference:
[1] http://au.mathworks.com/matlabcentral/answers/125117-openmp-mex-files-static-tls-problem
http://www.mathworks.de/support/bugreports/961964 was updated on 2014/01/30.
There is a zip file attached with libiomp5.so.
I tested it on Mageia 4 x86_64 with MATLAB R2013b.
I can now use the MATLAB documentation to open a demo without any problem.
I had the same problem and I think I just solved it.
When installing MATLAB, use the custom installation (I did not do this the first time), and choose to create symbolic links to the MATLAB scripts in the predefined folder (/usr/local/bin). This did the trick for me!
I had the same problem with both MATLAB 2013b and MATLAB 2014a. The fix provided by MathWorks for libiomp5.so only removed the problem of LAPACK not working. However, I could not use external libraries which use OpenMP (such as VL_FEAT): I still get the error
"dlopen: cannot load any more object with static TLS."
The only thing which worked for me was downgrading to Matlab 2012b.
I came across this problem when "bar" (for bar plots) with an array gave me just a single blue block, with no errors thrown. At first, rebooting solved the problem. But after a memory error (after processing a very large file), I just could not get past this blue-block problem.
Using "hist" on a matrix input gave me the "BLAS loading error" problem and led me to this thread. The MathWorks workaround fixed the hist and bar problems.
Just wanted to bring recognition to the extent of this bug's influence.
I had the same problem and solved it by increasing my Java heap memory: go to Preferences > General > Java Heap Memory, and increase the allocated memory.
Increasing Java heap memory (to 512 mb) also worked for me on R2013b/Ubuntu 12.04. The "BLAS loading error" began when I processed an 11 GB file (with 16 GB RAM), and has not recurred after increasing java heap memory and restarting matlab.

Do scalac "-deprecation" and "-unchecked" compiler options make it slower?

Anecdotally, our builds seem slower after enabling these options. I've searched online a bit and tried to do some comparisons but found nothing conclusive. Wondering if anyone knows offhand.
A great way to answer your own question is to try to measure it. For instance, I compiled with SBT (which gives the build time in seconds). I took a medium-sized project (78 Scala source files) that I compiled with and without the flags. I started by doing 3 clean/compile invocations to warm up the disks (to be sure that everything is cached properly by the controller and the OS). Then I measured the build time 3 times to get an average.
For both cases (with and without the flags), the build time was identical. However, it is interesting to note that the first warm-up build was really slow: almost 7x slower! It is therefore very difficult to rely on impressions, because the build time is dominated by the way you access your source files.
Unless your desktop is a teletype with particularly slow electromechanical relay switches, you're safe - it does the same work either way, so if there were a difference it'd be in how long it takes to display the deprecation/unchecked warnings.

Memory in Eclipse

I'm getting the java.lang.OutOfMemoryError exception in Eclipse. I know that Eclipse
by default uses a heap size of 256 MB. I'm trying to increase it, but nothing happens.
For example:
eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g
I also tried different settings: using only the -Xmx option, using different cases
of g, G, m, M, and different memory sizes, but nothing helps. I also tried to specify the values in the eclipse.ini file. No matter which params I specify, the heap exception is thrown at the same time, so I assume I'm doing something wrong that makes Eclipse ignore the -Xmx parameter. I'm using a machine with 32 GB of RAM and trying to execute something very simple, such as:
double[][] a = new double[15000][15000];
It only works when I reduce the array size to around 10000 by 10000.
I'm working on Linux, and using the top command I can see how much memory the Java
process is consuming; it's less than 2%.
Thanks!
Thanks!
Okay, I found a solution after reading
Why does heap space run out only when running JUnit tests?
When I specify the -Xmx inside Eclipse by going to Run > Run Configurations > VM arguments
and set the -Xmx there, everything works fine :)
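That behaviour makes sense: eclipse -vmargs sizes the heap of the IDE's own JVM, while a program launched from Eclipse runs in a separate JVM whose heap comes from the Run Configuration. A quick back-of-the-envelope check (a sketch; the class name is made up) shows why the array above needs a bigger-than-default heap:

public class ArraySizeCheck {
    public static void main(String[] args) {
        long cells = 15000L * 15000L;       // 225 million doubles
        long bytes = cells * Double.BYTES;  // 8 bytes each
        System.out.printf("~%.2f GB of heap needed%n", bytes / 1e9);
        // Roughly 1.8 GB, plus 15000 row objects and the row-pointer
        // array, so something like -Xmx2g in the Run Configuration's
        // VM arguments covers it.
    }
}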

JBoss 5.x out of memory error

JBoss crashed with an out of memory error; how do I prevent this? I modified the values in run.bat, but the result is the same:
"- Xms1024 Xmx1024 PermGen512"
You might have a resource leak, in which case anything but finding and removing the leak will only delay the error, not prevent it. jhat & -XX:+HeapDumpOnOutOfMemoryError will let you inspect the objects in your heap at the time of the OOM, which is a decent start to figuring out if you have a leak & where your leak is.
As for run.bat, the options you list may not be working the way you intend. I would be sure to specify the "m"egabyte (kilobyte? gigabyte? mb seemed most likely here) suffix explicitly, and to set the max size before the initial size. So, -Xmx1024m -Xms1024m -XX:MaxPermSize=512M.
512 megabytes, btw, is a big size for a permanent generation. Maybe you meant kb?. You can either use jstat or add -XX:-PrintGCDetails to your run.bat to see how much permanent generation space is actually being used.
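Spelled out, the corrected options would look something like this (a sketch for run.bat; depending on your JBoss 5.x setup, JAVA_OPTS may be defined in run.conf.bat instead, so adjust to wherever yours is set):

set JAVA_OPTS=%JAVA_OPTS% -Xmx1024m -Xms1024m -XX:MaxPermSize=512m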
Your problem might be related to the problem explained here: JVM: Solving OutOfMemoryError with less Memory
In JBoss version 5.0.0.GA, while running the application in JBoss, I faced the out of memory error because of large data processing in the application.
To resolve it, either optimize the code so that there is less data in heap memory while processing, or increase the heap memory of JBoss:
JAVA_OPTS="-Xmx4096m -Xms4096m -XX:MaxNewSize=896m -XX:NewSize=896m"
You can change the memory values as per your requirement.
If the out of memory error comes with a PermGen space issue, you can restart the server to resolve it, and you can prevent it by changing the memory value of the variable below:
-XX:MaxPermSize=256m
Thanks,
Ankit Adlakha
Might be related to this.
https://issues.jboss.org/browse/JBAS-7553
Apparently, when running as a service, JBoss might ignore -Xms.