-verbose:gc garbage collector output interpretation - JBoss

I'm running JBoss 5.1 and I get this GC data:
34.098: [GC 197635K->91639K(236480K), 0.0356348 secs]
37.139: [GC 217911K->100951K(239936K), 0.0541968 secs]
37.194: [Full GC 100951K->97239K(304704K), 0.3325776 secs]
38.602: [GC 214271K->97547K(285568K), 0.0488937 secs]
41.395: [GC 220811K->111699K(304512K), 0.0334592 secs]
42.734: [GC 235155K->115815K(304384K), 0.0208743 secs]
43.722: [GC 239271K->115801K(303872K), 0.0166861 secs]
44.373: [GC 241049K->118266K(304128K), 0.0106151 secs]
Can somebody explain why, when the Full GC occurs, there is such a small difference between the before and after heap sizes? Shouldn't the difference be bigger for a full GC? The line before the Full GC is a "normal" GC, and it shows a large difference (and a small collection time). I also noticed that the timestamps of those two lines are very close together.

What you see are both kinds of collections: those happening in the young/eden space and those in the old space. (Each line reads: seconds since JVM start, collection type, heap occupancy before -> after the collection, total heap capacity in parentheses, and pause time.) The two spaces use different kinds of collectors because the objects in those areas have different characteristics.
Those Full GCs are usually the biggest problem, because they are slow and they halt your VM. Luckily you do not have a big issue yet, but your question is what caused the Full GC.
Unfortunately, from this log alone it is impossible to say.
If you want to know more, you should activate the switch -XX:+PrintGCDetails.
However, we can guess. As you correctly point out, the heap was not full (roughly 100MB out of 300MB) and the Full GC does not collect much (only about 3.6MB).
So I am guessing that this Full GC was caused by a call to System.gc(). You can use -XX:+DisableExplicitGC to prevent such calls from having any effect, or find the code that actually makes the call and remove it.
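To illustrate (a minimal, hypothetical sketch, not taken from the asker's application): running something like the following with -verbose:gc produces a "Full GC" line with a small before/after difference, shortly after a normal young-generation collection, just like the log in the question.

    // Hypothetical sketch: an explicit System.gc() forces a Full GC even when
    // the heap is far from full, so the Full GC reclaims very little memory.
    public class ExplicitGcDemo {
        public static void main(String[] args) {
            for (int i = 0; i < 64; i++) {
                byte[] chunk = new byte[1024 * 1024]; // churn to trigger normal young GCs
            }
            // Forces a Full GC; with -XX:+DisableExplicitGC this becomes a no-op.
            System.gc();
        }
    }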

Ever-increasing memory usage in NetLogo headless BehaviorSpace

I'm trying to run a NetLogo model in BehaviorSpace, in headless mode, on a Linux server.
My NetLogo version is 5.3.1 (the 64-bit version).
The server has 32 cores with 64 GB of RAM.
I'm setting -Xmx to 3072m.
After a few runs (~300) the memory usage is so high that I get a Java heap space error.
Surprisingly, the memory usage grows steadily, as if no flush-like function were called between runs. And it goes beyond the point it should reach if I understand things correctly (for example, with 15 parallel threads it reaches 64000 MB and beyond, when it should stay around 15 * 3072 = 46080 MB).
I'm using ca at setup so I thought everything was supposed to be flushed out between runs. I'm not opening any file from the code (I use the standard behaviorspace output, in table format, not spreadsheet), and I'm not using any extension.
I'm kind of puzzled here. Is there something in BehaviorSpace-specific parameterization I should look at that keeps track of variables, turtles, etc. between runs? I couldn't find such a thing.
Could someone help me?
Thanks a lot !
Thomas

Broccoli.js and Ember-cli, Long compile times with Less

When compiling with Ember-cli, Broccoli, and broccoli-less-single, my compile times are extremely long. I am using a template with Bootstrap 3 and all the dependent Less files. The number of Less files is admittedly excessive, but the compile times are 20+ seconds.
It is recompiling the Less files on every save, which seems excessive, as the Less files are not the ones I am editing. How do I go about troubleshooting this issue?
Thanks for any insight on this. It's apparently also a known issue: https://github.com/stefanpenner/ember-cli/issues/538

How to identify places accumulating memory use in a Perl script?

My Perl script accumulates occupied memory at a high rate while it runs. I have tried clearing suspect variables as soon as they are no longer needed, but the problem cannot be fixed that way. Is there any method to monitor the change in memory occupation before and after executing a block?
I have recently had to troubleshoot an out-of-memory situation in one of my programs. While I do not claim to be an expert in this matter by any means, I'm going to share my findings in the hope that they will benefit someone.
1. High, but stable, memory usage
First, you should ensure that you do not just have a case of high, but stable, memory usage. If memory usage is stable, even if your process does not fit in available memory, the discussion below won't be of much help. Here are some notes worth reading in Perl's documentation here and here, in this SO question, in this PerlMonks discussion. There is an interesting analysis here if you're familiar with Perl internals. A lot of deep information is to be found in Tim Bunce's presentation. You should be aware that Perl may not return memory to the system even if you undef stuff. Finally, there's this opinion from a Perl developer that you shouldn't worry too much about memory usage.
2. Steadily growing memory usage
In case memory usage steadily grows, it may eventually cause an out-of-memory situation. My problem turned out to be a case of circular references. According to this answer on StackOverflow, circular references are a common source of memory leaks in Perl. The underlying reason is that Perl uses a reference-counting mechanism and cannot release circularly referenced memory until program exit. (Note: I haven't been able to find a more up-to-date statement of that last claim in Perl's documentation.)
You can use Scalar::Util::weaken to 'weaken' a circular reference chain (see also http://perlmaven.com/eliminate-circular-reference-memory-leak-using-weaken).
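A minimal sketch of the leak and the fix (the data structure is made up for illustration):

    use strict;
    use warnings;
    use Scalar::Util qw(weaken);

    my $parent = { name => 'parent' };
    my $child  = { name => 'child', parent => $parent };
    $parent->{child} = $child;        # circular: parent <-> child

    # Without weakening, neither hash is ever freed, because each keeps
    # the other's reference count above zero until program exit.
    weaken($child->{parent});         # the back-reference no longer counts

    # Now, when $parent and $child go out of scope, both structures are freed.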
3. Further reading
Tim Bunce's presentation (slides here); also in this blog post
http://www.perlmonks.org/?node_id=472366
Perl memory usage profiling and leak detection?
and of course the link given by @mpapec: http://perlmaven.com/how-much-memory-does-the-perl-application-use
4. Tools
On Unix, you could do system("ps -p $$ -o vsz,rsz,sz,size"). Caution: as explained in Tim Bunce's presentation, you'll want to track VSIZE rather than RSS.
How to find the amount of physical memory occupied by a hash in Perl?
https://metacpan.org/pod/Devel::Size (a minimal usage sketch follows this list)
and a more recent take by Tim Bunce, which adds the possibility of estimating the total interpreter memory size: https://metacpan.org/pod/Devel::SizeMe
in test scripts, you can use https://metacpan.org/pod/Test::LeakTrace and https://metacpan.org/pod/Test::Memory::Cycle; an example here
https://metacpan.org/pod/Devel::InterpreterSize
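As promised above, a minimal Devel::Size sketch (assuming the module is installed from CPAN; the hash is made up for illustration):

    use strict;
    use warnings;
    use Devel::Size qw(size total_size);

    my %hash;
    $hash{$_} = [ (1) x 100 ] for 1 .. 1000;

    # size() reports only the memory of the top-level structure itself;
    # total_size() follows references and counts everything reachable.
    printf "size:       %d bytes\n", size(\%hash);
    printf "total_size: %d bytes\n", total_size(\%hash);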

How to profile (timing) in PowerShell

My PowerShell script runs slowly; is there any way to profile the script?
Posting your script here would really help in giving an accurate answer.
You can use Measure-Command to see how much time each statement in your script is taking. However, you have to wrap each statement in Measure-Command.
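For example (a minimal sketch; the statements being timed are placeholders):

    # Wrap each suspect statement in Measure-Command and compare the elapsed times.
    $t1 = Measure-Command { Get-ChildItem -Recurse "$env:windir\System32\drivers" | Out-Null }
    $t2 = Measure-Command { Start-Sleep -Milliseconds 200 }

    "Step 1 took $($t1.TotalMilliseconds) ms"
    "Step 2 took $($t2.TotalMilliseconds) ms"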
Trace-Command can also be used to trace what is happening when the script runs. The output from this cmdlet can be quite verbose.
http://www.jonathanmedd.net/2010/06/powershell-2-0-one-cmdlet-at-a-time-104-trace-command.html
You can do random-pausing in the PowerShell debugger. Get the script running, and while it's running, press Ctrl-C. It will halt, and then you can display the call stack. That will tell you where it is, what it's doing, and why. Do this several times, not just once.
Suppose it is taking twice as long as it could. That means each time you interrupt it the probability you will catch it doing the slow thing is 50%. So if you interrupt it 10 times, you should see that on about 5 samples.
Suppose it is taking 5 times as long as it could. That means 4/5 of the time is being wasted, so you should see it about 8 times out of 10.
Even if as little as 1/5 of the time is being wasted, you should see it about 2 times out of 10. Anything you see on as few as 2 samples, if you can find a faster way to do it, will give you a good speed improvement.
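In general, if a fraction f of the total run time is wasted, each random pause lands in the wasted code with probability f, so in n pauses you expect to catch it about n * f times (e.g. 10 * 0.2 = 2 in the last case).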
Here's a recent blog about speeding up for loops that shows you how to build a "test harness" for timing loops:
http://www.dougfinke.com/blog/index.php/2011/01/16/make-your-powershell-for-loops-4x-faster/
A quick and simple poor-man's profiler is simply to step through the code in the ISE debugger. You can sometimes feel how slow a part of the code is just by stepping over it or by running to some breakpoint.

Perl, waiting for non-child process to exit

I have a script which is used to redeploy a couple of programs in a custom server environment (i.e., not an established standard container with code hot-swapping). To do this, it takes down the server processes, but these take some time to fully close all their connections. They aren't child processes of the Perl script. They normally run for hundreds of days at a time, so I'd rather not have to wrap the server processes in Perl scripts just so I can fork them in order to shut them down elegantly months or years later.
So currently, to wait on them to die during redeployment, I'm parsing the output of ps -ef, grabbing the pid field, killing that pid, waiting 60 seconds (which seems a reasonable time for these processes), rechecking ps -ef to make sure they're dead, etc., and then going on with copies, chmods, and so forth.
This solution feels lame/clunky to me. I've googled all over and have not seen anything on this particular topic; there's a pile of material about waiting on forked children, and waitpid would be perfect if only it worked this way.
From reading How to wait for exit of non-children processes (which is C-specific), I'm guessing there's really not much else I can do, apart from polling /proc/<pid> instead, but I thought maybe there'd be a Perl-specific solution out there somewhere. Any ideas?
You can use kill 0, $pid (which returns 1 on success and 0 on failure) instead of rechecking ps -ef, but that has the possible gotcha that the pid may have been reused; a short sketch follows below.
If you already have ps-parsing code, it's probably not worth it to switch, but there's Proc::ProcessTable.
Other than that, no ideas.
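A minimal sketch of the kill 0 approach (the 60-second timeout mirrors the one in the question; the subroutine name is made up):

    use strict;
    use warnings;

    # Poll a non-child pid with `kill 0` until it exits or we give up.
    # Caveats from above apply: the pid could be reused, and `kill 0`
    # can also fail with EPERM if we lack permission to signal the process.
    sub wait_for_pid_exit {
        my ($pid, $timeout) = @_;
        for (1 .. $timeout) {
            return 1 unless kill(0, $pid);  # false once the pid is gone
            sleep 1;
        }
        return 0;                           # still alive after $timeout seconds
    }

    wait_for_pid_exit($ARGV[0], 60)
        or die "process $ARGV[0] is still running\n";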
In Unix/Linux, only the parent process gets a signal when a process exits. This is an OS feature, not something language-specific.
Other solutions will be equivalent to yours: checking the process table for the existence of the process (although the specific method may vary, e.g. using ps or querying the kernel directly).