Proper strategy when no heap dump is generated - CentOS

I have a server running a Java application on OpenJDK under CentOS.
However, the application seems to have a memory leak and crashes every few weeks.
The problem is that although -XX:+HeapDumpOnOutOfMemoryError is specified, no heap dump is generated.
If I create an artificial memory leak that crashes the application immediately, a proper heap dump is generated.
Now, I'm not asking for a complete solution to this problem, but for a good strategy.
Is there a way to pull a heap dump on demand while the application is running, after a week for example? Is there a way to figure out what's going wrong in OpenJDK? Do you have any alternative suggestions on how to approach this?
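For reference, the relevant part of the command line looks roughly like this (the dump path and jar name are illustrative, not my exact setup):

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp/dumps -jar app.jar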

OpenJDK ships with a tool called jmap which can create a heap dump given a process ID. For the exact syntax you would have to look at the jmap man page. If there is a memory leak, it should be visible in a heap dump taken even before the app crashes. I can also recommend the Eclipse Memory Analyzer (MAT) to browse the heap dump and get a list of leak suspects.
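For example, assuming the application runs as process ID 1234 (jps will list the real PID), something like this should work on most OpenJDK builds:

    jps -l                                             # find the PID of the Java process
    jmap -dump:live,format=b,file=/tmp/heap.hprof 1234 # write a binary heap dump

The live option forces a full GC first, so the dump contains only reachable objects; drop it if you want to see the garbage too. The resulting .hprof file opens directly in the Eclipse Memory Analyzer.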

Related

Heap profiler's reports vs. task manager's reports: which to believe?

I have a Chrome extension that appears to have a memory leak: Chrome's Task Manager reports a gradually increasing memory footprint. But when I use the DevTools profiler to take heap snapshots and compare them, I see net negative size deltas. So why don't these two tools agree?
Is Task Manager measuring something in addition to, or different from, the heap? I see mentions of "JavaScript heap" vs "native memory" -- how do these affect what the tools report? And what is meant by native memory in this context?
This is basically the same question as in Interpretation of memory usage in chrome task manager and Chrome Heap Snapshot - Why it doesn't show all the memory allocated?, but neither of those got a conclusive answer. So I'm hoping the third try is the charm, and someone can lay it all out.
Additional information: OK, I see you can add more columns to Task Manager, including "JavaScript memory." (Also: "goats teleported.") Presumably that column should correlate pretty closely with what I see in the heap profiler. And indeed it climbs a little, then goes down, and is generally acting reassuringly non-leaky. So what is in the Task Manager memory column? The JavaScript and the DOM and the HTML and the CSS and ... what else?

Using Instruments to find a stack overflow in code

As the documentation says, the Allocations instrument gives a heap analysis of memory.
However, what I suspect is that my app is crashing because it stores a lot of data on the stack, which might be overflowing.
How do I analyze that? Please help. Thanks!
First, build your app for profiling (Command+I) and run it. Select the Allocations tool and play around with (use) the application.
In Allocations you will find a Live Bytes section; this is the current RAM utilization of your application (the data on the stack is, I suppose, the RAM you are talking about in your question).
Releasing objects that are no longer in use will reduce Live Bytes.
Overall Bytes is all bytes: those already created and destroyed plus the currently live bytes.
For further reference, see the Instruments Programming Guide.
Creating and comparing "heapshots" is a good way to start narrowing down the code parts that show no obvious memory management errors at first glance. See my answer on this question for some further reading or check out this great article directly.

mod_perl memory leak

I recently discovered that one of our sites has a memory leak. It's very strange because it happened all of a sudden. I've used GTop to measure the memory size per process, and it tells me that the real value is somewhere around 65 MB per request (on the server), plus an additional 5 MB shared.
I tried preloading the modules in the startup.pl file as indicated in the performance-tuning article for mod_perl. Nothing happened; in fact, the shared memory decreased to 3.7 MB. At this point I thought my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value is in fact 64 MB. My questions are: Is there a default preallocation of memory for each process? Is there a configuration issue? Is mod_perl leaking memory?
Thank you very much.
This is probably a question that's better asked on the mod_perl mailing list. There are too many variables involved here (versions of Perl/mod_perl/Apache, what modules you're loading, what OS you're running, which MPM, the Apache configuration, etc.) to really help in a Q/A forum like this, since there is no single "right" answer.
In mod_perl you can also cause memory leaks by calling the core exit() function (CORE::exit()): mod_perl normally overrides exit() so that its cleanup handlers still run, and calling CORE::exit() directly bypasses that override.

Xcode iPhone build fails with "Out of memory"

Sometimes the project compiles, and sometimes it fails with
"Out of memory allocating 4072 bytes after a total of 0 bytes"
If the project does compile, when it starts it immediately throws a bad access exception when attempting to access the first (allocated and retained) object, or throws the error "unable to access memory address xxxxxxxx", where xxxxxxxx is a valid memory address.
Has anyone seen similar symptoms and knows of workarounds?
Thanks in advance.
If compilation or linking is failing with an out-of-memory error like that, it is likely one of two issues.
First, does your boot drive or the drive you are building your source on have free space (they may be the same drive)? If not, that error may arise when the VM subsystem tries to map in a file or, more likely if the boot drive is full, when it tries to allocate more disk for swap space.
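A quick way to check both from a terminal (the project path is just an example):

    df -h /            # free space on the boot drive
    df -h ~/projects   # free space where the source lives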
Secondly, is your application just absolutely gigantic? I.e. is it the linker that is failing as it tries to assemble something really really large?
There is also the possibility that the system has some bad RAM in it. Unlikely, though, given that the symptoms are so consistent.
In any case, without more details, it is hard to give a more specific answer.
I've seen this; it is not usually an actual memory error in your code.
What is happening is that your Xcode target's build setting "Optimization Level" is set to Fast, Faster, or Fastest.
There appears to be a bug in there somewhere. Set it to None, or try Os or O3 (I don't think Fastest is affected).
This will very likely solve the problem for someone who comes across this thread. Definitely try "None" first; that will confirm that this is what is happening in your case.
I can tell that McPragma is having this problem for sure, because he/she describes changing from Debug to Release and that causing it (Debug is already set to None, and Release is set to something else). When that is the case, it is definitely this particular build setting.
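If you want to test this quickly without editing the project file, you can override the setting from the command line (the target and configuration names are examples; GCC_OPTIMIZATION_LEVEL is the setting behind the "Optimization Level" UI):

    # build Release with optimization forced to None (-O0)
    xcodebuild -target MyApp -configuration Release GCC_OPTIMIZATION_LEVEL=0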

Running Eclipse under Valgrind

Has anybody here succeeded in running Eclipse under Valgrind? I'm battling a particularly hairy crash involving JNI code, and was hoping that Valgrind perhaps could (again) prove its excellence, but when I run Eclipse under Valgrind, the JVM terminates with an error message about not being able to create the initial object heap (I currently don't have access to the exact error message; I'll edit this post as soon as I do.)
Does it work if you run valgrind with --smc-check=all?
Also -- valgrind increases a program's memory requirements pretty dramatically. With something as large as Eclipse, there's plenty of room for trouble; hopefully you're 64-bit native (and thus have plenty of address space) and have lots of RAM and/or swap.
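A minimal sketch of such a run (assuming the stock eclipse launcher; --trace-children=yes is there in case the launcher spawns the JVM as a child process):

    valgrind --smc-check=all --trace-children=yes ./eclipse

The --smc-check=all flag matters because the JVM's JIT compiler generates and patches machine code at runtime, and by default Valgrind only watches for self-modifying code on the stack.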
If there is a crash in native code, then gdb might be a better choice.
It should even stop execution automatically on a crash and might show you the stack trace (command bt).
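A rough sketch of that workflow (the PID is an example; you could equally start Eclipse under gdb from the beginning):

    gdb -p 12345          # attach to the running JVM process
    (gdb) continue        # let it run until the crash
    (gdb) bt              # print the stack trace after the fault

One caveat: HotSpot-based JVMs use SIGSEGV internally (for implicit null checks, among other things), so gdb may stop on harmless segfaults long before the real crash; continuing past them hands the signal back to the JVM's own handler.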