After a supervisord crash, how do I restart it without killing all processes?

I have an issue where supervisord is crashing due to out-of-memory conditions. I know what is causing this (other processes consuming too much memory), but it's going to take a little time to fix. In the meantime, I would like to bring supervisord back up whenever it crashes. The problem is that it does not pick up where it left off: it forgets about all the existing child processes and simply attempts to restart them, instead of recognizing that they are already running.
Is this a problem in my config (pid files in the wrong place or something, though it looks correct to me), or is this just how supervisord works? Are there any workarounds to get the manager process to continue where it left off before it crashed?
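For illustration, a stopgap while the real fix lands could be a cron-driven watchdog that restarts the manager when it dies. The sketch below is hedged: the paths are assumptions that should match your supervisord.conf, and it only limits downtime; it does not make supervisord re-adopt children that survived the crash.

    #!/bin/sh
    # Restart supervisord if the process behind its pidfile is gone.
    # PIDFILE and CONF are assumptions; adjust to your installation.
    PIDFILE=/var/run/supervisord.pid
    CONF=/etc/supervisord.conf

    if ! kill -0 "$(cat "$PIDFILE" 2>/dev/null)" 2>/dev/null; then
        supervisord -c "$CONF"
    fi

Run it from cron every minute or so; kill -0 merely tests that the pid is alive without sending a signal.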

Related

VS Code rg processes taking all the CPU

My VS Code is behaving strangely. When I check out or pull with many changed files, it creates many processes called rg that drive the CPU to 100% usage. This problem persists even if I kill VS Code; I have to kill the processes manually.
I found some old threads about disabling symlinks with "search.followSymlinks": false, but it didn't help. Might it be some indexing problem?
I have noticed that "Initializing JS/TS language features" keeps spinning but never completes, and the whole UI lags. Happy to provide more details (extensions, etc.).
I couldn't find a recent (2021/22) thread with the same problem, so sorry if this is a duplicate.
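In the meantime, the stray processes can be inspected and cleaned up from a shell. A minimal sketch, assuming the leftovers really are ripgrep (check the pgrep output before killing anything):

    # List leftover ripgrep processes with their full command lines,
    # then kill the ones VS Code left behind. -x matches the name exactly.
    pgrep -ax rg
    pkill -x rg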

How to make libFuzzer run without stopping, similar to AFL?

I have been trying to fuzz using both AFL and libFuzzer. One of the distinct differences I have come across is that when AFL is executed, it runs continuously until it is manually stopped by the developer.
libFuzzer, on the other hand, stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N flag; however, those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any flag that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without stopping.
It can be accomplished with the flag -fork=<N of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C no longer works in this mode: it is treated as a crash and just spawns a new job. I think this is a bug, see here.
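For illustration, a full invocation combining the two flags might look like this; the target binary name and corpus directory are placeholders:

    # Four parallel jobs in fork mode; crashes are logged but do not
    # stop the run. ./my_fuzzer and corpus/ are placeholder names.
    ./my_fuzzer -fork=4 -ignore_crashes=1 corpus/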

PhantomJS not killing webserver client connections

I have a kind of proxy server running on the WebServer module, and I noticed that this server is being killed due to its memory consumption.
Every time the server gets a new request, it creates a child client process; the problem I see is that the process remains alive indefinitely.
Here is the server I'm using:
server.js
I thought response.close() was closing and killing client connections, but it is not.
Here is the list of child processes displayed on htop:
(There are even more processes than this; it is just a fragment of the list.)
I really need to kill those processes because they are using all the free memory. Am I missing something?
I could simply restart the server, but the memory will still be wasted.
Thank you!
EDIT:
The processes I mentioned before are threads, not independent processes as I thought (check this).
Every HTTP request creates a new thread, and that's OK, but this thread is not being killed after the script ends.
Also, I found out that no new threads are created if the request handler doesn't run casper (I mean casper.run(..)).
So new threads are created only if the server runs a casper instance; the problem is that this instance doesn't end after the run function does.
I tried casper.done() as mentioned below, but it kills the whole process instead of the currently running thread. (I did not find any documentation for this function.)
When I execute other casper scripts outside the server on the same machine, the spawned threads and the whole phantom process end successfully. What could be happening?
I am using PhantomJS 2.1.1 and CasperJS 1.1.1.
Please ask me anything if you want more or specific information.
Thanks again for reading!
This is a well-known issue with casper:
https://github.com/casperjs/casperjs/issues/1355
It has not been fixed by the casper maintainers and is currently marked as an enhancement, so I guess it's not on their priority list.
Anyway, the workaround is to write a server-side component, e.g. a Node.js server, to handle the incoming requests and, for every request, run a casper script that does the scraping in a new child process. That child process is closed when casper finishes its job. This is a workaround rather than an optimal solution: spawning a child process for every request is not cheap, and it will be hard to scale such an approach heavily. It is, however, sufficient, and this fully sensible approach is covered in more detail in the link above.
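For illustration, the per-request child in that design is just a plain casperjs invocation like the one below (the script name and URL are placeholders). Because the child exits when the script finishes, the OS reclaims its memory instead of it accumulating inside one long-lived PhantomJS process.

    # One scrape = one short-lived child process; memory is freed on exit.
    # scrape.js and the URL are placeholders for your own script and target.
    casperjs scrape.js "http://example.com/page"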

Computationally intensive scala process using actors hangs uncooperatively

I have a computationally intensive Scala application that hangs. By hangs I mean it sits in the process list using 1% CPU, but it does not respond to kill -QUIT, nor can it be attached to via jdb attach.
It runs 2-12 hours at 800-900% CPU before it gets stuck.
The application is using ~10 scala.actors.
Until now I have had great success with kill -QUIT, but I am a bit stumped as to how to proceed.
The actors write a fair amount to stdout using println, which is redirected to a text file, but this has not been diagnostically helpful so far.
I am just hoping there is some obvious technique for when kill -QUIT fails that I am ignorant of.
Or just confirmation that having multiple actors call println asynchronously is a really bad idea (though I've been doing it for a long time, and only recently with these results).
Details
Scala 2.8.1 & 2.8.0
Mac OS X 10.6.5
Java version "1.6.0_22"
Thanks
If you just want to remove the process from the run queue, the obvious choice is
kill -9
Of course you want to avoid having to do that in the first place, but you do not provide any information that would allow us to help you with that. Writing to stdout from multiple actors is indeed a bad idea, but the worst thing that could come of it is garbled output.
Most times I have seen the JVM react like that, it had run out of PermGen space. It then becomes incapable of anything (even dying). Don't you find any traces of that in your printout? Have you tried increasing the PermGen space?
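For reference, two things worth trying here; the pid and the size below are illustrative. jstack -F can often force a thread dump out of a JVM that ignores kill -QUIT, and PermGen can be enlarged with the usual Java 6-era flag, which the scala launcher forwards to the JVM via -J:

    # Force a thread dump when kill -QUIT (SIGQUIT) is ignored:
    jstack -F <pid>
    # Raise PermGen; 256m is an illustrative size, and the scala
    # launcher passes -J options straight to the underlying JVM:
    scala -J-XX:MaxPermSize=256m yourapp.Main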

JBoss eventually stops responding to requests, but no OOME

I'm having an issue with my JBoss server. When I run it, it stops responding (at no fixed time, so I cannot predict when it will stop responding after starting), and after that it doesn't write anything to the log file. My problem is similar to the problem described on the JBoss community site, linked below, but that thread doesn't have an answer. Please help.
http://community.jboss.org/message/526193
--Ravi
It sounds like your JBoss server is running out of threads to allocate and is waiting for a new one to become available. Try triggering a thread dump (Ctrl-\) and see if you find any threads suspiciously locked and waiting in some of your code. Quite possibly you have a deadlock or memory leak somewhere in your code which is causing old threads to lock up and never be released.
Alternatively, try what the person you linked to did, i.e. increase the number of threads available.
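For reference, the same thread dump can be triggered without console access; <pid> here is a placeholder for the JBoss java process id:

    # SIGQUIT has the same effect as Ctrl-\ in the console: the JVM
    # dumps all thread stacks to its console/log output.
    kill -3 <pid>
    # Or capture the dump to a file with the JDK tool:
    jstack <pid> > threads.txt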
Edit: For some more basic advice, this post might be of use to you.