Computationally intensive scala process using actors hangs uncooperatively - scala

I have a computationally intensive Scala application that hangs. By "hangs" I mean it sits in the process table using 1% CPU but does not respond to kill -QUIT, nor can it be attached to with jdb attach.
It runs 2-12 hours at 800-900% CPU before it gets stuck.
The application is using ~10 scala.actors.
Until now I have had great success with kill -QUIT, but I am a bit stumped as to how to proceed.
The actors write a fair amount to stdout using println, which is redirected to a text file, but that has not been helpful diagnostically so far.
I am just hoping there is some obvious technique, for when kill -QUIT fails, that I am ignorant of.
Or just confirmation that having multiple actors println asynchronously is a really bad idea (though I've been doing it for a long time, only recently with these results).
Details
scala 2.8.1 & 2.8.0
Mac OS X 10.6.5
java version "1.6.0_22"
Thanks

If you just want to remove the process from the run queue, the obvious choice is
kill -9
Of course you want to avoid having to do that in the first place, but you do not provide any information that would allow us to help you with that. Writing to stdout from multiple actors is indeed a bad idea, but the worst thing that could come of it is garbled output.
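A common way to avoid the garbling is to route all output through a single consumer, so that only one thread ever touches stdout. A minimal sketch (the names Log and drainForever are illustrative; on Scala 2.8 a dedicated scala.actors logger actor would serve the same role):

```scala
import java.util.concurrent.LinkedBlockingQueue

// All producers enqueue whole lines; exactly one thread owns stdout,
// so output from many actors can never interleave mid-line.
object Log {
  val queue = new LinkedBlockingQueue[String]() // exposed here only for illustration

  // Call this from any actor instead of println.
  def apply(msg: String): Unit = queue.put(msg)

  // Run this in one dedicated thread (or make it a logger actor):
  def drainForever(): Unit =
    while (true) println(queue.take())
}
```

Because put and take are atomic per message, each logged line reaches the file intact, and the redirect to a text file works unchanged.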

Most times I have seen the JVM react like that is when it runs out of PermGen space. It then becomes incapable of doing anything (even dying). Do you find any trace of that in your printout? Have you tried increasing the PermGen space (e.g. -XX:MaxPermSize=256m on the java command line)?
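You can also watch the pool from inside the process via the standard java.lang.management API. A sketch, assuming a HotSpot VM of that era (on a 1.6 VM one pool is named along the lines of "PS Perm Gen"; on Java 8+ PermGen was replaced by Metaspace, so the name differs):

```scala
import java.lang.management.ManagementFactory

// Periodically calling dump() from the app (or a watchdog thread)
// shows whether any memory pool is creeping toward its max.
object MemCheck {
  def dump(): Unit = {
    val it = ManagementFactory.getMemoryPoolMXBeans.iterator
    while (it.hasNext) {
      val pool = it.next()
      val u = pool.getUsage
      println(pool.getName + ": used=" + u.getUsed + " max=" + u.getMax)
    }
  }
}
```

If the permanent generation's used value sits at its max right before the hang, that points strongly at the no-PermGen-space failure mode.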

Related

program execution is too slow in eclipse and was fast just yesterday for the same program

I am executing a Java program via Eclipse. I ran the exact same program yesterday and it took only 10 minutes; today the same program is taking more than an hour, and I did not change a single thing in my code. Could you please give me a solution to get back to yesterday's execution time?
If you did not change anything in your source code, I see the following possible reasons for this:
Side effects on the machine you are running the program on, such as other (possibly hidden) processes soaking up CPU time and slowing down your program.
This could also be the machine itself being slower (thermal slowdown from too much heat, etc.).
Your code is doing some "random" things that sometimes require longer runs (sounds unlikely, though).
Somehow Eclipse is causing an issue (try running your program without it).
Your Java runtime might be causing a problem (also unlikely, but updating it to the newest version might help).

exe stops execution after couple of hours

I have an exe which collects some information and saves it on the local machine once collected. I have arranged the loop so that it performs the same task indefinitely.
But the exe stops executing after a couple of hours (approx. 5-6 hours); it neither crashes nor throws an exception.
I tried to find the reason in WinDbg, but I haven't found any exception in it.
Could anyone help me diagnose the problem?
Should I go for a Sysinternals tool or something else? Which debugging tool should I use?
A few things jump to mind that could be killing your program:
Out of memory condition
Stack overflow
Integer wrap in loop counter
Programs that run forever are notoriously difficult to write correctly, because your memory management must be perfect. Without more information though, it's impossible to answer this question.
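The loop-counter wrap listed above is easy to demonstrate (Scala shown here for consistency with the first question; in the native C/C++ of the original exe, signed overflow is undefined behavior rather than a guaranteed wrap, but the symptom is similar):

```scala
// 32-bit signed ints wrap silently on overflow: a loop counter that
// only ever increments eventually becomes negative, which can break
// conditions like `while (i < limit)` or array indexing.
val max = Int.MaxValue      // 2147483647
val wrapped = max + 1       // no exception: wraps around
println(wrapped)            // prints -2147483648 (Int.MinValue)
```

A counter incremented once per iteration in a tight loop can reach this point within hours, which is consistent with a program that silently stops after 5-6 hours.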
If the executable is not yours and is native C/C++ code, you may want to capture first-chance exception dumps or monitor the exe using the Windows debugging tools (such as DebugDiag or ADPlus).
Alternatively, if you have access to the developer of the executable, they could add more tracing to the exe (ETW or otherwise) to narrow down the possible failure points in the code.

How can I write a dtrace script to dump the stack of a crashing process on Solaris 10?

I have a process, running on Solaris 10, that is terminating due to a SIGSEGV. For various uninteresting reasons it is not possible for me to get a backtrace by the usual means (gdb, backtrace call, corefile are all out). But I think dtrace might be useable.
If so, I'd like to write a dtrace script that will print the thread stacks of the process when the process is killed. I'm not very familiar with dtrace, but this seems like it must be pretty easy for someone who knows it. I'd like to be able to run this in such a way as to monitor a particular process. Any thoughts?
In case anyone else stumbles across this, I'm making some progress experimenting on OS X with the following script I've cobbled together:
#!/usr/sbin/dtrace -s

/* Fire on any machine fault (including the SIGSEGV) in the target
   process; the pid is passed as the first argument to the script. */
proc:::fault
/pid == $1/
{
    ustack();
}
I'll update this with a complete solution when I have one.
A couple of Solaris engineers wrote a script for using Dtrace to capture crash data and published an article on using it, which can now be found at Oracle Technology Network: Enabling User-Controlled Collection of Application Crash Data With DTrace.
One of the authors also published a number of updates to his blog, which can still be read at https://blogs.oracle.com/gregns/, but since he passed away in 2007, there haven't been any further updates.

Solaris process taking up large CPU

I have a Java process that seems to be taking up all the CPU processing power.
I am thinking of killing it, but how do I know which program is actually causing such huge CPU usage on Solaris?
Try prstat -- it should show you how much CPU each process on your system is using.
Although this question was asked some time back, I'm posting this so anyone can refer to it in future.
You can use the top command to investigate the issue. However, the thing with Solaris is that you have to turn off Irix mode to get the real process information. While the top tool is running, press I to toggle Irix mode.
More information can be found here -> How to investigate CPU utilization in Solaris

Running Eclipse under Valgrind

Has anybody here succeeded in running Eclipse under Valgrind? I'm battling a particularly hairy crash involving JNI code, and was hoping that Valgrind could perhaps (once again) prove its excellence, but when I run Eclipse under Valgrind, the JVM terminates with an error message about not being able to create the initial object heap. (I currently don't have access to the exact error message; I'll edit this post as soon as I do.)
Does it work if you run valgrind with --smc-check=all?
Also -- Valgrind increases a program's memory requirements pretty dramatically. With something as large as Eclipse, there's plenty of room for trouble; hopefully you're running a native 64-bit build (and thus have plenty of address space) and have lots of RAM and/or swap.
If the crash is in native code, then gdb might be a better choice.
It should even stop execution automatically on the crash and can show you the stack trace (command bt).