Best way to send email when PHP process dies

I wrote a quick PHP page to handle 502 requests. Nginx redirects to this page when a 502 is encountered, and an email is fired off.
The problem is, most of the time a 502 is encountered it is because PHP has died, so writing to the DB and sending an email using PHP is no longer possible. Tweaks to PHP-FPM settings have done a lot to help (restarting PHP, etc.), but I'd still like a fall-back.
There are numerous ways to send an email outside of PHP, but I am curious what others out there are doing with good success. I'd like to keep it simple for configuration and reliability reasons (i.e. not have yet another complex dependency to worry about on the servers).
Googling and searching SO didn't turn up much, probably because "dies" and "fail" bring back a lot of false positives for my scenario.

What about using a cron job (bash based) to parse the error_log file periodically (every x hours) and send an email (mutt/mail) when it finds something like "resuming normal operations" in the last period (x hours)? I think it is simple and effective (see the sketch after the log excerpt below)...
[Thu Dec 27 14:37:52 2012] [notice] caught SIGTERM, shutting down
[Thu Dec 27 14:37:53 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.6-2~precise+1 configured -- resuming normal operations
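A minimal sketch of that idea, assuming an Apache-style error_log and a working local mail command (the log path, search window, and recipient address are placeholders):
#!/bin/bash
# check-error-log.sh - mail any recent "resuming normal operations" lines from the error log
LOG=/var/log/apache2/error.log
RECIPIENT=ops@example.com
# only look at the tail of the log; size the window to match how often cron runs this
MATCHES=$(tail -n 1000 "$LOG" | grep "resuming normal operations")
if [ -n "$MATCHES" ]; then
    echo "$MATCHES" | mail -s "PHP/Apache restart detected on $(hostname)" "$RECIPIENT"
fi
And a matching crontab entry, here running hourly:
0 * * * * /usr/local/bin/check-error-log.sh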
UPDATE:
@Brian As @takeshin says, cron jobs can run as often as every minute if you want, but some sysadmins could bite you... :|

Here is what I've ended up doing. I've not rolled it out to our prod servers yet, but all testing thus far looks good.
Nginx does not support CGI natively, so you need another means to do it; thttpd fit the bill nicely. There is a good write-up on the nginx wiki showing how to use it.
I configured thttpd with the following:
dir=/var/www/htdocs
user=thttpd
logfile=/var/log/thttpd.log
pidfile=/var/run/thttpd.pid
port=8000
cgipat=**.cgi
And added this to my nginx config:
error_page 502 @thttpd;
location @thttpd {
    include proxy.include;
    proxy_pass http://127.0.0.1:8000;
}
Finally, I created a basic CGI script that calls PHP on the command line and passes in my already-written PHP script. This was an ideal solution for me because the script was already set up to log to our alerts table and fire off an email. This is also real-time, as the script will execute as soon as nginx returns a 502 code (subsequent 502s will not hammer me with emails, per the logic of the script).
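A sketch of that kind of CGI wrapper, assuming the PHP CLI binary lives at /usr/bin/php and the existing alert logic in a hypothetical /var/www/scripts/alert_502.php (both placeholders):
#!/bin/sh
# 502.cgi - show a minimal error page, then run the existing PHP alert script
echo "Content-type: text/html"
echo ""
echo "<html><body><h1>Sorry, something went wrong. We have been notified.</h1></body></html>"
# fire off the already-written PHP alert/email script via the CLI binary,
# which still works even when PHP-FPM itself is down
/usr/bin/php /var/www/scripts/alert_502.php >/dev/null 2>&1 &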
I was able to run some simulation tests by forcing nginx to return a 502 (see more here).
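One straightforward way to force a 502 for this kind of test is a throwaway location block in the nginx config (the location name is arbitrary); requests to that path should then be picked up by the error_page/@thttpd fallback above:
location /test-502 {
    return 502;
}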
I'm going to continue tweaking this, but I'm pretty happy with the relative ease of deploying it and that I could re-use existing code.

We have a dual solution.
We use a shell script to send out email notifications if PHP dies. The script checks whether the php service is running with a shell command; if it is not running, it fires off a shell command to send an email.
This is all just a few lines of shell script (a rough sketch is below). Not too hard.
Of course, set it up in cron.
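A rough sketch of that kind of check, assuming PHP-FPM and a working local mail command (the process name, recipient, and paths are placeholders; adjust the pgrep pattern to match your PHP-FPM binary name):
#!/bin/bash
# check-php.sh - email an alert if the PHP-FPM process has disappeared
RECIPIENT=ops@example.com
if ! pgrep -x php-fpm > /dev/null; then
    echo "php-fpm is not running on $(hostname) at $(date)" \
        | mail -s "ALERT: php-fpm down on $(hostname)" "$RECIPIENT"
fi
And in cron, e.g. run it every minute:
* * * * * /usr/local/bin/check-php.sh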

Related

503 Server Unavailable - Dynamics CRM Web Service down - how to diagnose?

I provide support for a large application across multiple servers. System has been running live for 6+ months.
8th December: total system failure. iisreset across each of the servers sorted it out. Everything back to normal.
Post failure investigation showed various processes not able to get a response from a particular server which hosts an instance of Dynamics CRM (2011 R11). Specifically it seems the SOAP service was not responding (Organization.svc). 503 - Server Unavailable (really it was just the web service). I suspect it died.
Having the exact time of the error, I checked the event logs on the server, but these did not have anything of use. The last error prior to the failure was a report rendering error, which was 9 minutes before the system actually went down. Surely if the web service crashed this would be reflected in the event log?
Fast forward to today, 8th January, and the system fails again. The 8th of the month again! iisreset fixes it... again!
Again, completely useless event logs showing no errors prior to failure.
I entertained the idea of Dynamics CRM trace logging, but this is out of the question due to the performance hit.
Apart from the event logs, where else should I look? Are there possible external factors or causes? I'm trying to find the root cause but have run out of ideas!
While this may not address the source of your problem, maybe it can help minimize the symptoms. May I suggest that you configure the IIS server to recycle the application pool at a scheduled interval within your production environment.
http://technet.microsoft.com/en-us/library/cc753179%28v=ws.10%29.aspx
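For example, adding a nightly recycle time to a pool from the command line with appcmd might look like this (the pool name and time are placeholders; the same setting is exposed in IIS Manager under the application pool's Recycling options):
%windir%\system32\inetsrv\appcmd set apppool "CrmAppPool" /+recycling.periodicRestart.schedule.[value='03:00:00']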

Mojolicious response caching?

Hello, I am experimenting with some small things in Mojolicious and I have the following question:
What happens when a request is received?
Is there some caching like in mod_perl, or is the code compiled each time?
It depends on the server it runs under.
If you use a pre-forking app server or fastcgi server then you'll get one or more processes re-used for multiple requests.
You can run a simple CGI, launching the script for each request, but it wouldn't be common.
Deployment options are in the manual.
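For example, with Mojolicious's bundled servers (the script path assumes a standard full-app layout, e.g. one generated by "mojo generate app"):
# development server, restarts on code changes
morbo ./script/my_app
# pre-forking production server; each worker compiles the app once and reuses it for many requests
hypnotoad ./script/my_app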

Spawn external process from a CGI script

I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.
I have a perl CGI script that accepts a file upload. It looks at the file and determines how it should be processed and then calls a second non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file uploading part works fine but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user that sent the file (other than it sends an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec, system (passing a 1 as the first parameter), system (without using 1 as first parameter and calling 'start'), and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
Nothing else has given me any errors but they just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does. No log entry is being created.
It works fine on my local machine under Omni webserver but not on the actual server machine running Apache. Is there an Apache config that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you other implementation ideas where you will not face potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (with POE usually) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (well, the notification part may not be straightforward but that's another topic.)

PHP Slow to process soap request via browser but fine on the command line

I am trying to connect to an external SOAP service using PHP and have written a small php test script that just connects to the service and performs a simple request to check everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40s to establish the initial connection. When I do the same request using the exact same script on the command line, it goes through straight away.
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the WSDL in /tmp. If you run from the command line first, the cache file will be owned by whatever user you're running the script as, and Apache won't be able to read the cache. The WSDL will have to be downloaded and parsed every time, which will be slow.
Check the permissions of /tmp/wsdl*.
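A quick way to check (assuming the default cache location of /tmp):
# see which user owns the cached WSDL files
ls -l /tmp/wsdl*
# if they belong to your CLI user, remove them so the web server user can rebuild its own cache
sudo rm /tmp/wsdl*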
Maybe the external SOAP service is trying to check your IP, and your server has ICMP allowed while your local network does not.
Anyway, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.inis that are being used?
On a standard Ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
//edit:
Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the SOAP class that was supposed to be included (it didn't include anything, because the search paths weren't valid), but it used the built-in SoapClient class instead.

Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application's STDERR is supposed to be logging to. It isn't reaching nginx (I suppose that makes sense), but I can't find much documentation as to where Starman may be logging it - if anywhere. I did have a look at Plack's Middleware modules but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl or simply a subclass of Catalyst::Log with overridden _send_to_log -- either one will allow you to send the logging output somewhere other than STDERR.
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this, it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
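For the FastCGI option, a minimal invocation might look like this (the socket path, worker count, and app file are placeholders; the web server side then needs a matching FastCGI/fastcgi_pass configuration):
# run the PSGI app as a standalone FastCGI daemon on a unix socket
plackup -s FCGI --listen /tmp/myapp.sock --nproc 5 myapp.psgi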
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you would discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and re-directs them to /dev/null, AND that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as names of files to use instead.
So if you start your catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
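For example, in whatever environment starts the daemon (the log paths and the service script name are placeholders):
# send the daemonized process's output to real files instead of /dev/null
export MX_DAEMON_STDOUT=/var/log/myapp/stdout.log
export MX_DAEMON_STDERR=/var/log/myapp/stderr.log
/etc/init.d/myapp-starman start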
Today Starman has a --error-log command line option which allows you to redirect error messages to a file.
See the starman documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
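For example (the listen address, app file, and log path are placeholders):
starman --daemonize --listen :5000 --error-log /var/log/myapp/error.log myapp.psgi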