I'm having an issue with my JBoss server. When I run it, it stops responding (there is no fixed time, so I cannot predict when it will stop responding after startup), and after that it writes nothing to the log file. My problem is similar to the one described on the JBoss community forum at the link below, but that thread doesn't have an answer. Please help.
http://community.jboss.org/message/526193
--Ravi
It sounds like your JBoss server is running out of threads to allocate and is waiting for a new one to become available. Try triggering a thread dump (Ctrl-\) and see if you find any threads suspiciously locked and waiting in your code. Quite possibly you have a deadlock or memory leak somewhere in your code that is causing old threads to lock up and never be released.
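If the server is running in the background rather than in a console, you can also request the dump from another shell, roughly like this (the pid and file name are just placeholders):
# find the process id of the JBoss JVM
jps -l
# ask the JVM for a thread dump; it is written to the console/log that JBoss was started from
kill -QUIT <pid>
# or capture the dump to a file with the JDK's jstack tool
jstack -l <pid> > threads.txt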
Alternatively, try what the poster you linked to did, i.e. increase the number of threads available.
edit: For some more basic advice, this post might be of use to you.
I have a kind of proxy server running on a WebServer module and I noticed that this server is being killed due to its memory consumption.
Every time the server gets a new request it creates a child client process; the problem I see is that the process remains alive indefinitely.
Here is the server I'm using:
server.js
I thought response.close() was closing and killing client connections, but it is not.
Here is the list of child processes displayed on htop:
(There are even more of these processes; this is just a fragment of the list.)
I really need to kill those processes because they are using all the free memory. Am I missing something?
I could simply restart the server, but the memory will still be wasted.
Thank you!
EDIT:
The processes I mentioned before are actually threads, not independent processes as I thought (check this).
Every HTTP request creates a new thread, and that's fine, but the thread is not being killed after the script ends.
Also, I found out that no new threads are created if the request handler doesn't run casper (I mean casper.run(..)).
So new threads are created only if the server runs a casper instance; the problem is that this instance doesn't end after the run function does.
I tried casper.done() as mentioned below, but it kills the whole process instead of just the currently running thread. (I did not find any documentation for this function.)
When I execute other casper scripts outside the server on the same machine, the spawned threads and the whole phantom process end successfully. What could be happening?
I am using Phantom 2.1.1 and Casper 1.1.1.
Please ask me anything if you want more or specific information.
Thanks again for reading!
This is a well known issue with casper:
https://github.com/casperjs/casperjs/issues/1355
It has not been fixed by the casper guys and is currently marked as an enhancement. I guess it's not on their priority list.
Anyway, the workaround is to write a server-side component, e.g. a node.js server, to handle the incoming requests and, for every request, run the casper script that does the scraping in a new child process. That child process is closed when casper finishes its job. While this works, it is not an optimal solution: spawning a child process for every request is not cheap, and it will be hard to scale an approach like this very far. It is, however, a sufficient workaround; there is more on this approach in the issue linked above, and a rough sketch follows below.
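As a minimal sketch of that idea (not a drop-in solution): assume the scraping logic lives in a hypothetical casper script called scrape.js and that the casperjs binary is on the PATH. The node.js server then just shells out once per request:
// node.js server that runs casperjs in a separate child process per request,
// so every PhantomJS thread dies when that child process exits
var http = require('http');
var execFile = require('child_process').execFile;

http.createServer(function (req, res) {
  // scrape.js is a placeholder for your casper script; req.url is passed to it as an argument
  execFile('casperjs', ['scrape.js', req.url], { timeout: 60000 }, function (err, stdout, stderr) {
    if (err) {
      res.writeHead(500, { 'Content-Type': 'text/plain' });
      res.end('scrape failed: ' + err.message);
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(stdout); // whatever the casper script printed to stdout
  });
}).listen(8080);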
I have multiple proxies in a message flow. Is there a way in OSB to monitor the memory utilization of each proxy? I'm getting OOM errors and want to investigate which proxy is eating most of the memory.
Thanks!
If you're getting OOME then it's either because a proxy is not freeing up all the memory it uses (so it will eventually fail even with one request at a time), or because you use too much memory per invocation and it dies over a certain threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread starvation issue due to a platform work manager bug, and the last one was due to a Muxer bug when used on Exalogic).
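Assuming the OSB domain is running on a HotSpot JVM, the standard way to get that dump is to add something like the following to the server's JVM arguments (the dump path is just an example):
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/osb-dumps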
If it just performs poorly under load, then you'll need to do the usual OSB optimisations: use fewer Assign steps (but assign more per step), and do a lot more in XQuery rather than in proxy steps. In particular, loops that don't need a Service Callout can easily be rolled into a for loop in XQuery, as in the sketch below; you know, all the standard stuff.
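For instance, instead of iterating over a repeating element with per-iteration Assign/Replace steps, a single XQuery resource can build the whole result in one go. This is only an illustration; $body and the element names are made up:
(: hypothetical transformation: one FLWOR expression replaces a per-item loop in the proxy :)
declare variable $body external;

<orders>
{
  for $line in $body/order/lineItem
  return
    <line id="{ $line/@id }">
      <total>{ xs:decimal($line/amount) * xs:decimal($line/qty) }</total>
    </line>
}
</orders>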
I have an issue where supervisord is crashing due to out of memory conditions. I know what is causing this (other processes consuming too much memory), but it's going to take a little time to fix. In the meantime, I would like to bring supervisor back up if it crashes. The problem here seems to be that it will not pick up where it left off and forgets about all the existing processes. It seems to just attempt to restart all processes instead of recognizing that they were already running.
Is this a problem in my config (pid files not in the right place or something, it seems correct though) or is this just how supervisord works? Are there any workarounds to get the manager process to continue where it left off before it crashed?
We are trying to troubleshoot app pool hang scenarios, so one of the queues we thought of monitoring was the http.sys queue. We need to check different parameters like app pool status and the number of requests in the queue.
Http.sys request queue counters are available in perfmon. Is there any way I can ping an application pool and check its status at each stage/request load?
We are dealing with this issue in two phases:
1. Remove the node from the HLB (we have a script) once the node is not responding, hung, or slow, before end users complain (we get a lot of complaints) - priority 1.
2. Troubleshoot the cause of the hang - priority 2.
Thanks in advance.
EDIT:
This article looks promising, but I am not able to find out how to execute this. Any help on this, please?
http://msdn.microsoft.com/en-us/library/ms691445(v=vs.90).aspx
I'm not sure an app pool's state will tell you if it is hung, just if it is started, stopped, or changing states.
I think you'll want to look at the IIS performance counters. I've never had to do anything like that, but the Get-Counter cmdlet is probably what you'll use.
Looks like there is another Stack Overflow question/answer that has some sample code:
Get-Counter "\\$ServerName\web service($SiteName)\current connections"
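Building on that, here is a rough PowerShell sketch of what the polling could look like, assuming the WebAdministration module (IIS 7+) and the "HTTP Service Request Queues" counter set are available; the pool name is a placeholder:
# report the app pool's state together with its HTTP.sys queue length
Import-Module WebAdministration

$pool  = 'DefaultAppPool'   # placeholder application pool name
$state = (Get-WebAppPoolState -Name $pool).Value
$queue = (Get-Counter "\HTTP Service Request Queues($pool)\CurrentQueueSize").CounterSamples[0].CookedValue

Write-Output "$(Get-Date -Format o)  state=$state  queuedRequests=$queue"
You could wrap that in a loop and combine it with your phase 1 script to pull a node out of the HLB when the queue keeps growing.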
I have a process, running on Solaris 10, that is terminating due to a SIGSEGV. For various uninteresting reasons it is not possible for me to get a backtrace by the usual means (gdb, backtrace call, corefile are all out). But I think dtrace might be useable.
If so, I'd like to write a dtrace script that will print the thread stacks of the process when the process is killed. I'm not very familiar with dtrace, but this seems like it must be pretty easy for someone who knows it. I'd like to be able to run this in such a way as to monitor a particular process. Any thoughts?
In case anyone else stumbles across this, I'm making some progress experimenting on OS X with the following script I've cobbled together:
#!/usr/sbin/dtrace -s
/*
 * Print the user-land stack of the faulting thread whenever the traced
 * process (pid given as $1, the script's first argument) takes a fault,
 * e.g. the access violation behind a SIGSEGV.
 */
proc:::fault
/pid == $1/
{
    ustack();
}
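To try it, I save that as (say) segv-stack.d, make it executable, and run it as root with the target pid as the script's only argument (the file name and pid below are placeholders):
sudo ./segv-stack.d 12345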
I'll update this with a complete solution when I have one.
A couple of Solaris engineers wrote a script for using Dtrace to capture crash data and published an article on using it, which can now be found at Oracle Technology Network: Enabling User-Controlled Collection of Application Crash Data With DTrace.
One of the authors also published a number of updates to his blog, which can still be read at https://blogs.oracle.com/gregns/, but since he passed away in 2007, there haven't been any further updates.