I have a simple script borrowed from the supervisor docs (http://supervisord.org/events.html#event-listeners-and-event-notifications) just to test whether the event listener is getting any updates from the process it's subscribed to. No matter how the state of the process changes (I'm sending SIGSEGV to the program), I only ever see the "READY" state and none of the other data.
Question: is the listener script supposed to be called manually?
If not, I assumed the following permissions should be enough to make it executable:
-rwxr-xr-x 1 root root 699 Oct 23 11:33 mylistener.py
Add

autorestart=true

to your [eventlistener:x] section and check the logs. (Note: don't set redirect_stderr=true on an event listener; stdout carries the event protocol, so send listener logging to stderr_logfile instead.)
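For reference, the docs' sample listener boils down to the loop below. The key points: stdout is reserved for the protocol (print READY, read the header line, consume exactly len bytes of payload, acknowledge with RESULT 2\nOK), and anything you want to see must go to stderr. This is essentially the snippet from the linked page, lightly commented:

import sys

def write_stdout(s):
    # stdout is reserved for the event protocol
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    # stderr is where debug output belongs
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    while True:
        write_stdout('READY\n')              # signal we can take an event
        line = sys.stdin.readline()          # header, e.g. "ver:3.0 ... len:71"
        write_stderr(line)
        headers = dict(x.split(':') for x in line.split())
        payload = sys.stdin.read(int(headers['len']))  # read exactly len bytes
        write_stderr(payload + '\n')
        write_stdout('RESULT 2\nOK')         # acknowledge the event

if __name__ == '__main__':
    main()

If you run this by hand it will just print READY and block, since supervisord is what writes events to its stdin; it is not meant to be called manually.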
I have a Python app behind supervisor and uWSGI.
At a certain point, my app stopped answering queries, with this message in the logs:
Tue Sep 6 11:06:53 2022 - *** uWSGI listen queue of socket "127.0.0.1:8200" (fd: 3) full !!! (101/100) ***
In my use case:

- if a query is not answered within 1 s, the answer no longer matters; the client app will automatically retry the request
- restarting the whole uWSGI stack takes around half an hour

Thus I'd rather lose a few requests than restart.
Questions:

Is it possible to detect a full queue from inside the Python app?
Is it possible to clear the queue from inside?
To be clear: this question is not about fixing the underlying issue; I'm working on that separately. It is only about whether this particular workaround is possible and how to implement it.
I'm using uWSGI 2.0.20. Looking at the queue framework does not help, since the uwsgi module has no attribute such as queue_slot. Are the docs outdated?
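What I have in mind would be something like the sketch below. I'm assuming some builds expose the current backlog via uwsgi.listen_queue() (the queue framework with its queue_slot seems to be a separate, shared-memory feature), so treat this as a sketch rather than a confirmed API:

import uwsgi  # only available when running under uWSGI

def backlog_nearly_full(threshold=90):
    # uwsgi.listen_queue(0) returns the current backlog of socket 0
    # on builds that expose it; treat its absence as "unknown".
    if hasattr(uwsgi, 'listen_queue'):
        return uwsgi.listen_queue(0) >= threshold
    return False

The idea would be to call this at the top of a request and fail fast instead of doing the slow work, so the queue drains.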
EDIT
I can reproduce the error with this simple bash script:
#!/bin/bash
# Fire 201 background POSTs, 0.2 s apart, to fill the listen queue
# faster than the app can drain it.
for i in {0..200}
do
    echo "Number: $i"
    sleep 0.2
    curl -X POST "http://localhost:1103/my_app" &
done
(my app accepts POST, not GET)
So I was trying to start some extra processes in my supervisord setup, as I have done before, but I got this message:
<class 'xmlrpclib.ProtocolError'>, <ProtocolError for 127.0.0.1/RPC2: 500 Internal Server Error>: file: /usr/lib/python2.6/site-packages/supervisor-3.0-py2.6.egg/supervisor/xmlrpc.py line: 470
I restarted the whole thing and got the same error when starting the same number of processes. Is there a way to add a second RPC interface, or will I have to run a second supervisord process? If so, will I need to define something other than the pid and sock files in its conf?
Change your port:
socket=tcp://localhost:8003
I happened across this error myself and discovered that I had set the command directive in my supervisor ini file for that process to nothing, because I had accidentally moved the value to the next line:
[program:myapp]
command=
/myapp/command/to/start
directory=/path/to/myapp/
user=myapp_user
autostart=true
autorestart=true
redirect_stderr=true
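For comparison, the corrected section simply keeps the value on the same line as the directive (paths are the placeholders from above):

[program:myapp]
command=/myapp/command/to/start
directory=/path/to/myapp/
user=myapp_user
autostart=true
autorestart=true
redirect_stderr=true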
I think I was editing something and forgot to put it back. Maybe this is related to your problem?
I wrote a quick PHP page to handle 502 requests. Nginx will redirect to this page when a 502 is encountered, and an email is fired off.
The problem is that most of the time a 502 is encountered, it's because PHP has died, so writing to the DB and sending an email using PHP is no longer possible. Tweaks to PHP-FPM settings have done a lot to help (restarting PHP, etc.), but I'd still like a fallback.
There are numerous ways to send an email outside of PHP, but I am curious what others out there are doing with good success? I'd like to keep it simple for configuration (i.e. not have yet another complex dependency to worry about on the servers) and reliability reasons.
Googling and searching SO didn't turn up much, probably because "dies" and "fail" bring back a lot of false positives for my scenario.
What about using a cron job (bash based) to parse the error_log file periodically (every x hours) and send an email (mutt/mail) when it finds something like "resuming normal operations" within the last period (x hours)? I think it's simple and effective...
[Thu Dec 27 14:37:52 2012] [notice] caught SIGTERM, shutting down
[Thu Dec 27 14:37:53 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.6-2~precise+1 configured -- resuming normal operations
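A minimal sketch of that cron job, tracking how many restart notices have been seen so far (the log path, state file, and recipient are assumptions; adjust for your setup):

#!/bin/bash
# Compare the current count of restart notices against the last run;
# note the counter resets if the log is rotated.
LOG=/var/log/apache2/error.log
STATE=/var/tmp/last_restart_count
current=$(grep -c "resuming normal operations" "$LOG")
previous=$(cat "$STATE" 2>/dev/null || echo 0)
if [ "$current" -gt "$previous" ]; then
    echo "Apache restarted ($((current - previous)) new notice(s))" \
        | mail -s "Apache restart on $(hostname)" admin@example.com
fi
echo "$current" > "$STATE"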
UPDATE:
@Brian As @takeshin says, cron jobs can run as often as every minute if you want, but some sysadmins could bite you... :|
Here is what I've ended up doing. I've not rolled it out to our prod servers yet, but all testing thus far looks good.
Nginx does not support CGI natively, so you need another means to do it. thttpd fit the bill nicely. There is a good write-up on the nginx wiki showing how to use it.
I configured thttpd with the following:
dir=/var/www/htdocs
user=thttpd
logfile=/var/log/thttpd.log
pidfile=/var/run/thttpd.pid
port=8000
cgipat=**.cgi
And added this to my nginx config:
error_page 502 @thttpd;
location @thttpd {
    include proxy.include;
    proxy_pass http://127.0.0.1:8000;
}
Finally, I created a basic CGI script that calls PHP on the command line and passes it my already-written PHP script. This was an ideal solution for me because the script was already set up to log to our alerts table and fire off an email. This is also real-time, as the script executes as soon as nginx returns a 502 code (subsequent 502s will not hammer me with emails, per the logic of the script).
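The wrapper itself can be as small as this (a sketch; the PHP binary path and the alert script name are placeholders for your own):

#!/bin/sh
# 502.cgi - emit CGI headers, then run the existing PHP alert script
echo "Content-Type: text/html"
echo ""
/usr/bin/php /var/www/htdocs/alert_502.php

It lives under thttpd's dir and matches the cgipat=**.cgi pattern above; just make sure it is executable.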
I was able to run some simulation tests by forcing nginx to return a 502 (see more here).
I'm going to continue tweaking this, but I'm pretty happy with the relative ease of deploying it and that I could re-use existing code.
We have a dual solution.
We use a shell script to send out email notifications if PHP dies. The script checks whether the php service is running using a shell command; if it is not running, it fires off a shell command to send an email.
This is all a few lines of shell script. Not too hard.
Of course, set it up in cron.
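Something along these lines (the service name and recipient are assumptions):

#!/bin/bash
# Email if no php-fpm process is found; run from cron, e.g. every minute.
if ! pgrep -x php-fpm > /dev/null; then
    echo "php-fpm is not running on $(hostname)" \
        | mail -s "PHP died" admin@example.com
fi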
I'm looking at hosting a number of small, static websites and have been looking at a few alternatives including G-WAN. At the moment I'm just trying to get a feel for how well each server suits my needs before picking one.
G-WAN seems to do exactly what I want, though I'm running into problems with updating the configuration (by adding new folders) after the server's started. I can't find anything in the documentation or online about this, so I don't know if I'm doing anything dumb, running an unsupported configuration, or whether it's a feature that doesn't exist in G-WAN.
Here's my setup:
G-WAN 3.3.28 64-bit on Ubuntu 12.04.1 LTS.
I have what I think is the required minimal folder structure:
0.0.0.0_80
    #0.0.0.0
        www
    $site.com
        www
    $othersite.com
        www
I start up gwan via (I'm still messing around, so hopefully that's fine):
sudo ./gwan -d
Everything works brilliantly. I add $thirdsite.com/, $thirdsite.com/www/, and $thirdsite.com/www/index.html; then when I try to visit thirdsite.com it gives me the root host (i.e. it doesn't seem to pick up the changes).
To reload the modified configuration, I have to either do:
sudo ./gwan -k; sudo ./gwan -d
or kill the non-angel process (kill -s 15) to restart the child process.
Can G-WAN reload the host definitions another way? If so, does that work out of the box, or is there a command that can cycle the server without dropping requests made to other hosts? (Is it safe to kill -s 15 the non-angel process, and if so, is there a reliable way to identify that process?) Thanks in advance!
G-WAN loads the host definitions at startup and does not re-check them over time to reload them dynamically.
To force a reload, you have to stop the child process (when in daemon mode); v3.9+ keeps the old child alive long enough to process any pending requests while the new child accepts new connections.
Since stopping the child can also be done from the maintenance script, from a handler, or from a servlet by just calling exit(0), there is no need for a dedicated command.
Note that when you use kill you can pick the pid file from the gwan directory:

- the parent process starts with a capital letter: Gwan_xxxx.pid
- the child process starts with a lowercase letter: gwan_xxxx.pid

That will make your life easier.
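So cycling just the child (and therefore reloading the host definitions) can be a one-liner, assuming a single instance and that you run it from the gwan directory:

# lowercase g = child; the angel process respawns it with the new config
kill -s 15 "$(cat gwan_*.pid)"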
I'm working with an embedded computer that runs Debian. I already managed to run a command just after it has booted, playing the "bell" to signal that it is ready to work and, for example, to try to connect to a service.
The problem is that I need to play the bell (or run any command/program) when the system is halted, so it is safe to unplug the power. Is there any run script that runs just after halt?
If you have a look in /etc/init.d, you'll see a script called halt. I'm pretty certain that when /sbin/halt is called in a runlevel other than 0 or 6, it calls /sbin/shutdown, which runs this script (unless called with an -n flag). So maybe you could add your own hook into that script? Obviously, it would run before the final halt is called, but nothing runs after that, so maybe that's OK.
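For instance (a sketch; where exactly to hook in depends on your Debian version), just before the script's final halt invocation you could add:

printf '\a' > /dev/console   # ring the console bell: safe to unplug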
Another option would be to use the fact that, at shutdown, all running processes get sent a SIGTERM followed (a second or so later) by a SIGKILL. So you could write a simple daemon that just sits there until given a SIGTERM, at which point it goes "ping" and dies.
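A minimal sketch of that daemon (the beep command is an assumption; any notification command works):

#!/usr/bin/env python
import signal
import subprocess
import sys
import time

def on_sigterm(signum, frame):
    # shutdown sent us SIGTERM: make some noise, then exit cleanly
    subprocess.call(['beep'])   # or: sys.stdout.write('\a')
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)

while True:
    time.sleep(60)  # idle until shutdown sends SIGTERM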