I have a kind of proxy server running on a WebServer module, and I noticed that the server is being killed due to its memory consumption.
Every time the server gets a new request it creates a child client process; the problem I see is that these processes remain alive indefinitely.
Here is the server I'm using:
server.js
I thought response.close() was closing and killing client connections, but it is not.
Here is the list of child processes displayed in htop:
(There are even more processes than this; it is just a fragment of the list.)
I really need to kill those processes because they are using all the free memory. Am I missing something?
I could simply restart the server, but the memory will still be wasted.
Thank you!
EDIT:
The processes I mentioned before are threads, not independent processes as I thought (check this).
Every HTTP request creates a new thread, and that's OK, but this thread is not being killed after the script ends.
Also, I found out that no new threads are created if the request handler doesn't run casper (I mean casper.run(..)).
So new threads are created only if the server runs a casper instance; the problem is that this instance doesn't end after the run function does.
I tried casper.done() as mentioned below, but it kills the whole process instead of the currently running thread. (I did not find any documentation for this function.)
When I execute other casper scripts outside the server, on the same machine, the spawned threads and the whole phantom process end successfully. What could be happening?
I am using PhantomJS 2.1.1 and CasperJS 1.1.1.
Please ask me anything if you want more or specific information.
Thanks again for reading!
This is a well-known issue with casper:
https://github.com/casperjs/casperjs/issues/1355
It has not been fixed by the casper guys and is currently marked as an enhancement. I guess it's not on their priority list.
Anyway, the workaround is to write a server-side component, e.g. a Node.js server, to handle the incoming requests and, for every request, run a casper script in a new child process to do the scraping. This child process will be closed when casper finishes its job. While this is a workaround, it is not an optimal solution: the cost of spawning a child process for every request is not cheap, and it will be hard to heavily scale an approach like this. However, it is a sufficient workaround. More on this fully sensible approach is in the link above.
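To make the shape of that workaround concrete, here is a minimal sketch of such a Node.js front server. It is only an illustration under assumptions: the casperjs binary is on the PATH, and scrape.js is a hypothetical casper script that takes the target URL as an argument and prints its result to stdout.

var http = require('http');
var execFile = require('child_process').execFile;

http.createServer(function (request, response) {
  // Spawn a fresh casperjs process per request; when the script finishes,
  // the process exits and its memory goes back to the OS instead of piling
  // up inside one long-lived PhantomJS server.
  execFile('casperjs', ['scrape.js', request.url], { timeout: 60000 }, function (err, stdout, stderr) {
    if (err) {
      response.writeHead(500, { 'Content-Type': 'text/plain' });
      response.end('casper run failed: ' + err.message);
      return;
    }
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end(stdout);
  });
}).listen(8080);

The timeout guards against a hung casper script leaving orphaned PhantomJS processes behind; tune it and the port to your setup.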
Related
We use Retrofit/OkHttp3 for all network traffic from our Android application. So far everything seems to run quite smoothly.
However, we have now occasionally had our app/process run out of file handles.
Android allows for a max of 1024 file handles per process
OkHttp will create a new thread for each async call
Each thread created this way will (from our observation) be responsible for 3 new file handles (2 pipes and one socket).
We were able to debug this exactly: each dispatched async call using .enqueue() leads to an increase of 3 open file handles.
The problem is, that the ConnectionPool in OkHttp seems to be keeping the connection threads around for much longer than they are actually needed. (This post talks of five minutes, though I haven't seen this specified anywhere.)
That means if you are quickly dispatching requests, the connection pool will grow in size, and so will the number of file handles - until you reach 1024, where the app crashes.
I've seen that it is possible to limit the number of parallel calls with Dispatcher.setMaxRequests() (although it seems unclear whether this actually works, see here) - but that still doesn't quite address the issue with the open threads and file handles piling up.
How could we prevent OkHttp from creating too many file handles?
I am answering my own question here to document this issue we had. It took us a while to figure this out and I think others might encounter this too and might be glad for this answer.
Our problem was that we created one OkHttpClient per request, as we used its builder/interceptor API to configure some per-request parameters like HTTP headers or timeouts.
By default each OkHttpClient comes with its own connection pool, which of course blows up the number of connections/threads/file handles and prevents proper reuse in the pool.
Our solution
We solved the problem by manually creating a global ConnectionPool in a singleton, and then passing that to the OkHttpClient.Builder object which builds the actual OkHttpClient.
This still allows for per-request configuration using the OkHttpClient.Builder, and it makes sure all OkHttpClient instances still share a common connection pool.
We were then able to properly size the global connection pool.
I have a WCF Windows Workflow (4.5) Workflow Service hosted under IIS and using AppFabric 1.1. The workflow instances are long-running (up to about a week), but much of the time is spent in Delay activities.
This seemed to work fine at first, but when running multiple instances of the workflow at the same time (2+ instances causes this), some of them just never wake up once they've unloaded from memory during the Delay step. When I look at the logs, the errors I find all look like this:
System.OperationCanceledException: The execution of InstancePersistenceCommands has been canceled because the InstanceHandle was freed.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activities.Dispatcher.DurableInstanceManager.WaitAndHandleStoreEventsCallback(IAsyncResult result)
Unfortunately, I'm not finding any useful information on that error message.
The SuspensionExceptionName and SuspensionReason fields in the AppFabric Persisted Instances Table show System.NullReferenceException: Object reference not set to an instance of an object. But this doesn't happen inside my workflow, only outside.
Additional Info:
I'm running the activity as a Fire & Forget (receive activity, no send)
My workflow calls into other WCF services to fetch data.
I am running this on Server 2012 R2, IIS 8 (not azure)
Workflow persistence is working. I can reset IIS, reboot... it's just when I run 2 instances that it has problems.
I'm definitely not hitting any kind of throttling limits. While the workflow deals with a few MB of data, this issue happens at 2+ instances.
Any idea what might be happening here?
Edit:
I realized I had found more information on how the issue behaves and never added it to the question. When the delay issue happens, it behaves a lot like a static variable getting written by 2 threads.
Here's a visualization:
WF1 Start ---->Do Stuff--->Sleep------------*1----->Cancelled Exception at some point
------WF 2 Start---->Do Stuff------->Sleep->Wake up---------*2------>More Stuff---->End Successfully
*1 - When WF Instance 1 Should Wake up (Same time as WF 2 wakes)
*2 - When WF Instance 2 Should have woken up (Seems to be ignored)
Before anyone asks... I got rid of every static variable, method, class in my code. Nothing is static anymore.
I've been struggling with similar issues for quite a while. I use WFW4 and I find similar errors when a workflow instance is in a long delay.
I don't know what the cause of the problem is, but I have a work around that you might find helpful.
In my case, the errors I get are from Workflow Management Service and say:
Failed to invoke service management endpoint at 'net.pipe://.svc' to activate service '/Alerts/Workflows/.xamlx'. Exception: 'Access is denied.'
These errors start happening sometime between 6 and 30 hours after the instance goes into a long delay.
I have found that if I create a new instance of the workflow when the first instance is in delay and the errors are happening, then Workflow Management Service is able to resume interacting with the first sleeping instance.
So, I made a new workflow whose sole purpose is to periodically launch and then kill instances of the workflow that contains the long delay.
It actually gets a bit more complicated to make this work. I wanted this new workflow to also go to sleep between the times it creates and kills an instance of the first workflow, but that sleep causes the new workflow's instance to suffer the same problem as the first workflow. So I modified the new workflow so that it does the following:
-- delay for some rather short period, such as 30 minutes
-- create an instance of the first workflow
-- wait a minute
-- kill the just-created instance of the first workflow
-- create a new instance of this new error-preventing workflow
-- terminate
Since having done this, I no longer get the Access is Denied error from Workflow Management Service!
Hope this helps
Turns out my first answer was not correct, but I believe this answer is right, and solves the issue ChrisG is having.
My workaround did not actually work. It took a while for the problem to resurface - 29 hours to be precise, the default time it takes for an app pool to recycle.
So for me, the solution was to make my app pool not recycle. When an app pool recycles while a workflow instance is in a delay activity, the Workflow Management Service is not able to wake up the instance and throws Access is Denied errors. If you create a new instance of the workflow after the app pool has recycled, the first instance will pick up where it left off, but sometimes it still has problems, which is what I believe is happening to ChrisG.
ChrisG, looking at your visualization, is it possible that an app pool is recycling during the time WF1 is sleeping? I believe that is the cause of the exception. If you then launch a new WF instance after *2 has passed (and if an app pool recycle happened prior to *1), that will wake up both WF1 and WF2, but WF1 won't work properly (at least in my experience).
Also, this happens after iisresets and server reboots. To handle those, you need to use IIS 7, which allows the web application (as well as the web site) hosting the xamlx files to autostart after an iisreset or server reboot. This option is not available in IIS 6. See http://www.postseek.com/meta/991815402b369e71ce925cde47ac907d for details.
Hope this helps!
We are trying to troubleshoot app pool hang scenarios, so one of the queues we thought of monitoring was the http.sys queue. We need to check different parameters like app pool status and the number of requests in the queue.
Http.sys request queues can be read from perfmon. Is there any way I can ping an application pool and check its status at each stage/request load?
We are dealing with this issue in two phases:
1. Remove the node from the HLB (we have a script) once a node is not responding, hung, or slow, before end users complain (we get a lot of complaints) - priority 1
2. Troubleshoot the cause of the hang - priority 2
Thanks in advance.
EDIT:
This article looks promising, but I'm not able to find out how to execute this. Any help on this, please?
http://msdn.microsoft.com/en-us/library/ms691445(v=vs.90).aspx
I'm not sure an app pool's state will tell you if it is hung, just if it is started, stopped, or changing states.
I think you'll want to look at the IIS performance counters. I've never had to do anything like that, but the Get-Counter cmdlet is probably what you'll use.
Looks like there is another Stack Overflow question/answer that has some sample code:
Get-Counter "\\$ServerName\web service($SiteName)\current connections"
I have exactly one Node server, which is currently running some code. This code is now outdated. How can I switch to the new code without any server down-time? Do I need to get another server to act as a buffer?
Basically you "kill" the old process and immediately start the server again; read the following article for more details and a code sample:
http://codegremlins.com/28/Graceful-restart-without-downtime
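In case the article goes away, here is a rough sketch of one common pattern for this, using Node's built-in cluster module. It is not necessarily the exact code from the linked article; the SIGUSR2 signal and the port are arbitrary choices for illustration. The master process keeps accepting connections while workers running the old code are replaced one at a time with workers running the new code.

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  for (var i = 0; i < os.cpus().length; i++) cluster.fork();

  // On SIGUSR2, replace workers one by one: fork a worker running the new
  // code, wait until it is listening, then let the old worker finish its
  // in-flight requests and exit.
  process.on('SIGUSR2', function () {
    var ids = Object.keys(cluster.workers);
    (function replaceNext(i) {
      if (i >= ids.length) return;
      var oldWorker = cluster.workers[ids[i]];
      var newWorker = cluster.fork();
      newWorker.once('listening', function () {
        oldWorker.disconnect();
        oldWorker.once('exit', function () { replaceNext(i + 1); });
      });
    })(0);
  });
} else {
  // Worker: the actual server. A newly forked worker re-reads the main module
  // from disk, so it serves whatever code the latest deploy put there.
  http.createServer(function (req, res) {
    res.end('hello from the current version\n');
  }).listen(3000);
}

With this in place a deploy becomes: copy the new code over the old files, then send SIGUSR2 to the master process; at no point is the port left without a listener.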
I'm having an issue with a JBoss server. When I run the JBoss server, it stops responding (there is no fixed time, so I cannot predict when it will stop responding after starting), and after that it doesn't write anything to the log file. My problem is similar to the problem described on the JBoss community forum (link given below), but that thread doesn't have the answer. Please help.
http://community.jboss.org/message/526193
--Ravi
It sounds like your JBoss server is running out of threads to allocate and is waiting for a new one to become available. Try triggering a thread dump (Ctrl-\) and see if you find any threads suspiciously locked and waiting in some of your code. Quite possibly you have a deadlock or memory leak somewhere in your code which is causing old threads to lock up and never be released.
Alternatively, try what the guy you linked to did, i.e. increase the number of threads available.
Edit: For some more basic advice, this post might be of use to you.