Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have successfully managed to serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application's STDERR is supposed to be logging. It isn't reaching nginx (I suppose that makes sense) but I can't find much documentation as to where Starman may be logging it, if anywhere. I did have a look at Plack's Middleware modules but only saw options for access logs.
Can someone help me?

It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log -- either one will let you send the logging output somewhere other than STDERR (see the sketch after this list).
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this; it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
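For the first option, a minimal subclass might look like this. This is only a sketch: the package name and log path are made up, and _send_to_log is a private method of Catalyst::Log, so check it against the Catalyst version you run.

package MyApp::Log;
use strict;
use warnings;
use base 'Catalyst::Log';

# Hypothetical log path; adjust for your deployment.
my $logfile = '/var/log/myapp/error.log';

sub _send_to_log {
    my $self = shift;
    # Append to a file instead of printing to STDERR; don't die inside the logger.
    open my $fh, '>>', $logfile or return;
    print {$fh} @_;
    close $fh;
}

1;

Then wire it up in MyApp.pm before the setup call: __PACKAGE__->log( MyApp::Log->new );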

I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you will discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and redirects them to /dev/null, AND that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as names of files to use instead.
So if you start your Catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
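For example (the file name and app name are illustrative):

MX_DAEMON_STDERR=/var/log/myapp/stderr.log starman --background myapp.psgi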

Today Starman has an --error-log command-line option which allows you to redirect error messages to a file.
See the starman documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
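A typical invocation then looks like this (paths are illustrative):

starman --daemonize --error-log /var/log/myapp/error.log myapp.psgi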


Spawn external process from a CGI script

I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.
I have a perl CGI script that accepts a file upload. It looks at the file and determines how it should be processed and then calls a second non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file uploading part works fine but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user that sent the file (other than it sends an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec, system (passing a 1 as the first parameter), system (without using 1 as first parameter and calling 'start'), and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
Nothing else has given me any errors but they just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does. No log entry is being created.
It works fine on my local machine under Omni webserver but not on the actual server machine running Apache. Is there an Apache config that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you an alternative implementation that avoids the potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (with POE, usually) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (Well, the notification part may not be straightforward, but that's another topic.)
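As a rough illustration of that decoupling - not POE itself, just a minimal spool-directory variant (all paths and file names here are made up):

# upload.cgi side: after saving the upload, drop a job file and return at once.
use strict;
use warnings;
use File::Temp qw(tempfile);

my $spool         = 'd:/wwwroot/spool';                  # hypothetical queue directory
my $uploaded_path = 'd:/wwwroot/uploads/somefile.dat';   # wherever upload.cgi saved it

my ($fh, $jobfile) = tempfile( DIR => $spool, SUFFIX => '.job' );
print {$fh} "$uploaded_path\n";                          # whatever the worker needs
close $fh;

# worker side, a separate always-running process: poll the spool and process.
while (1) {
    for my $job ( glob 'd:/wwwroot/spool/*.job' ) {
        # read the job file and run the existing processing code here
        unlink $job;
    }
    sleep 5;
}

The CGI script finishes immediately, and the worker never inherits Apache's environment, which sidesteps the spawning problem entirely.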

Is there a perl function similar to lsof command in linux?

I have a shell script which archives log files based on whether a process is using them. If a log file is not being used by the process, I archive it. Until now I've been using lsof to find whether the log file is in use, but in future I would like to do this in perl.
Is there a perl module that can do what lsof does on linux?
There is a perl module that wraps around lsof: see Unix::Lsof.
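Its use is roughly like this (going from memory of the module's synopsis, so check its docs; the file name is illustrative):

use Unix::Lsof;

# Ask which processes hold the file open.
my ($output, $error) = lsof('/var/log/myapp.log');
my @pids = keys %$output;    # one entry per process holding the file
print @pids ? "in use by: @pids\n" : "not in use\n";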
As I see it, the big problem with not using lsof is that a replacement would need to be written separately for each operating system. Using lsof lets the perl programmer work with a consistent tool, which gives operating system independence.
Having a perl module developer rewrite lsof would, in effect, mean writing lsof as a library and then linking that into perl - which is much more work than just using the existing binary.
One could also use the fuser command, which shows the process IDs that have the file open. There is also a module which seeks to implement the same functionality. Note from its perldoc:
The way that this works is highly unlikely to work on any other OS
other than Linux and even then it may not work on other than 2.2.*
kernels.
One might try walking /proc/*/fd and looking at the file descriptors in there to see if any point to the file in question. If you know the process ID of the running process that would have the log file open, it is even easier: just look at that one process. Note that this is how the fuser module works.
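A minimal sketch of that walk (Linux only; the log file name is made up, and reading other users' fd directories generally needs root):

use strict;
use warnings;
use Cwd qw(abs_path);

my $target = abs_path('/var/log/myapp.log');    # hypothetical log file
my $in_use = 0;

for my $fd ( glob '/proc/[0-9]*/fd/*' ) {
    my $path = readlink $fd or next;    # may fail without sufficient permission
    if ( $path eq $target ) { $in_use = 1; last }
}
print $in_use ? "still open - skip archiving\n" : "safe to archive\n";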
That said, it should be asked "why do you want to move away from lsof"?

PHP Slow to process soap request via browser but fine on the command line

I am trying to connect to an external SOAP service using PHP and have written a small php test script that just connects to the service and performs a simple request to check everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40s to establish the initial connection. When I make the same request using the exact same script on the command line, it goes through straight away.
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the wsdl in /tmp. If you run from the command line first, the cache file will be owned by whatever user you ran the script as, and apache won't be able to read the cache. The wsdl will then have to be downloaded and parsed every time, which will be slow.
Check the permissions of /tmp/wsdl*.
Maybe the external SOAP service is trying to check your IP, and your server has ICMP allowed while your local network does not.
Anyway, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.inis that are being used?
On a standard ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
Edit:
Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the soap class that was supposed to be included (it didn't include anything, because the search paths weren't valid), but silently used the built-in soap_client class instead.

Perl & Apache HTTP server: Can't do Tie MLDBM when the cgi script is executed from the server, but okay when executed from the command line. Why?

Please help! I'm really going nuts over this problem!
I have a CGI perl script and it always fails at the following line when executed from the Apache HTTP server:
tie %db, 'MLDBM', "$data_path/$db_name.db", O_RDONLY, 0640 or die $!
and the error is Permission denied:
Software error:
Permission denied at /var/www/cgi-bin/rich/pages/display line 381.
For help, please send mail to the webmaster (root@localhost), giving this error message and the time and date of the error.
But when executed from the command line, it works without any problem.
I have ensured that the directories and the file to tie have the correct permissions.
So what else have I missed? What configuration in Apache's httpd.conf could I be getting wrong? Admittedly, I didn't have any previous experience with the Apache HTTP server, so this is pretty much my first time playing around with it. However, I have read the manuals more than once to look for things I could have got wrong, but I didn't notice anything. But I could be wrong, of course.
Thanks!!
Have you verified that $data_path and $db_name contain what you think they do?
Is $data_path an absolute path which is not reliant on the active user's identity or home directory?
What does ls -l $data_path/$db_name.db show for the file's ownership and permissions?
I've never run across (or heard of) anything in apache that would prevent a CGI process from having permission to open files, so I highly doubt that it's an apache config issue. Most likely it's either looking for the wrong file or the file's permissions are incorrect for the user that apache is running the CGI process as.
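One quick way to check all of the above from inside the script is to drop a few warn lines just above the tie call; under the server, their output lands in Apache's error log. A sketch, reusing the question's own variables:

my $file = "$data_path/$db_name.db";
warn "running as uid=$<, euid=$>\n";
warn "tie target: $file\n";
warn 'exists: ',         ( -e $file       ? "yes\n" : "no\n" );
warn 'readable: ',       ( -r $file       ? "yes\n" : "no\n" );
warn 'dir searchable: ', ( -x $data_path  ? "yes\n" : "no\n" );

If "dir searchable" comes back "no", remember that every directory in $data_path needs the execute bit for the user Apache runs as, not just the file itself.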

What causes "suexec policy violation" when Perl is called via server side include?

I'm working on a Perl script which is called from a server side include on an Apache 2 server. The script is displaying the generic "Internal Server Error" page rather than showing me the actual error. When I check the Apache error log, I see these messages:
unable to include "/foobar/index.pl" in parsed file /home/foouser/domains/foosite.com/public_html/foobar/index.shtml, referer: http://www.foosite.com/foobar/
suexec policy violation: see suexec log for more details, referer: http://www.foosite.com/foobar/
Premature end of script headers: settings.pl, referer: http://www.foosite.com/foobar/
How do I get a Perl script to show an error rather than "Internal Server Error"?
Update:
I should have asked a separate question for this, because I have since learnt that this does send errors to the browser (thanks brian):
use CGI::Carp qw(fatalsToBrowser);
However, if the problem is with the Apache config rather than the Perl script, then the error will not be sent to the browser because the Perl code is not being interpreted. In this case, we can tell that I am experiencing an Apache error (rather than a Perl error) because of this line:
suexec policy violation: see suexec log for more details
This occurs when Apache is running in SUexec mode (which seems to be common for shared hosting). I'm not sure what exactly has been changed to cause this error, but that's what I'm trying to find out.
Probably you are using shared hosting, and you have this problem because your scripts directory or the script file has permissions other than 755.
Here is one case translated from Dutch.
Use CGI::Carp's fatalsToBrowser.
use CGI::Carp qw(fatalsToBrowser);
You might also want to see my Troubleshooting Perl CGI scripts.
From the error message, I'm guessing that you aren't allowed to execute CGI scripts from server side includes. Which version of Apache are you running? If it's an old Apache, see the suexec docs for Apache 1.3; if it's a newer one, see the suexec docs for Apache 2.0.
It's not for user friendliness, but often for security that we don't show users the exact error when the user can't do anything about it. For example, imagine that a back end server is unavailable. What can I, as a user, do to fix that in your web application?
In some cases, error messages will contain useful information, like "SQL Error: illegal syntax. Unmatched ' ". If the user had input a quote in their input, this feedback would indicate a SQL injection vulnerability.
Other benign-looking messages are bad to show to users as well. The key thing the attacker wants to know is that "something different happened." If the application prints one error for one input and another error for another input, then the attacker knows that something different has gone wrong, and that this is an interesting place to focus.
In a production site, errors should be logged to file, and, if appropriate, downloadable through your web interface - but be very careful to sanitize any output to the browser to avoid cross site scripting. And there should be no user-submitted option to reconfigure this between debug and production (don't control it via a POST or CGI parameter, but by a configuration file option).
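With CGI::Carp specifically, the file-logging counterpart of fatalsToBrowser is carpout, used roughly like this (the log path is made up; the web server user must be able to write to it):

BEGIN {
    use CGI::Carp qw(carpout);
    open my $log, '>>', '/var/log/myapp/cgi-error.log'
        or die "Cannot open log file: $!";
    carpout($log);
}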
This could be one of 3 factors:
rwx permission levels set wrong (execute/write bits)
UID/GID do not match the Apache suexec settings
A combination of the 2 above.
Check the Apache suexec log and error log for details.
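The suexec log mentioned in the error usually lives alongside the other Apache logs; the exact path varies by distribution, for example:

tail /var/log/httpd/suexec.log      # Red Hat style layout (sometimes suexec_log)
tail /var/log/apache2/suexec.log    # Debian/Ubuntu style layout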