I have an example and question regarding unix/apache session scope. Here is the test script I am using:
#!/usr/bin/perl -I/gcne/etc

# Each request gets its own PID, so give it its own working directory.
$pid = $$;
system("mkdir -p /gcne/var/nick/hello.$pid");
chdir "/gcne/var/nick/hello.$pid";

# Create three marker files, sleeping 5 seconds between each one.
$num = 3;
while ($num--) {
    system("> blah.$pid.$num");
    #sleep(5);
    system("sleep 5");
}

# Final marker so we can tell the run completed.
system("> blahDONE.$pid");
I have noticed that if I call this script TWICE from a web browser, the two requests execute in sequence, taking a total of about 30 seconds. How do Perl and unix deal with parallel execution when system commands are involved? Is there a possibility of cross-session problems when using system calls, or does Apache treat each of these requests as a separate process, as if it were a new console session?
In this example, I'm basically trying to test whether or not different PID files would be created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then each time you request the web page your script is run as a fresh process, so each request gets a new process ID. For CGIs, the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDOUT and STDERR. Otherwise it is a normal command invocation.
The situation is a little different when you use a mechanism like mod_perl, but it seems you are not doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
P.S. The environment variables differ a little between a call from Apache and a call from the shell, but this is not relevant to your question (though you may wonder why e.g. USER or similar variables are missing).
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
A browser may only issue one such call at a time (tested with Firefox), so when testing it may appear that requests are handled one after another. This is not server related; it is caused by the web browser.
Related
I need some guidance here. I need to write a CGI script which has to connect to a unix host and execute a set of perl scripts.
I am new to the CGI world and I have a couple of questions for which I don't know the perl solution.
How do I connect to the Unix host from the CGI script? I believe Net::SSH would work; is there a better module for this?
Let's assume I have connected to the server. Now, how would I execute the script, and how would I know the status (running/success/failure) of the script?
a. While it's running, I would like to see the output that gets generated. Is it possible to view the script output in real time?
b. If it fails, I should be notified with the reason for the failure, and the next script in the sequence should not be triggered.
If someone has a similar setup already available and is ready to show the code/setup, I would be much happier :)
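For what it's worth, here is a minimal sketch of the kind of thing you would write. I have used Net::OpenSSH rather than Net::SSH (either can work), and the user, host and remote script path are invented:

#!/usr/bin/perl
# Sketch only: user, host and remote script path are made up.
use strict;
use warnings;
use Net::OpenSSH;

my $ssh = Net::OpenSSH->new('deploy@unixhost.example.com');
$ssh->error and die "Could not connect: " . $ssh->error;

# Stream the remote script's output as it is produced (roughly "real time").
my ($out, $pid) = $ssh->pipe_out('/opt/app/bin/step1.pl');
while (my $line = <$out>) {
    print "remote: $line";
}
close $out;
waitpid $pid, 0;

# ssh propagates the remote exit status; stop the sequence on failure.
if ($? != 0) {
    die "step1.pl failed with exit status " . ($? >> 8) . "\n";
}

Whether the remote script failed, and why, would come from that exit status plus anything it wrote to STDERR, which you could capture separately.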
I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.
I have a perl CGI script that accepts a file upload. It looks at the file and determines how it should be processed and then calls a second non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file uploading part works fine but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user that sent the file (other than it sends an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec, system (passing a 1 as the first parameter), system (without using 1 as first parameter and calling 'start'), and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
Nothing else has given me any errors but they just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does. No log entry is being created.
It works fine on my local machine under Omni webserver but not on the actual server machine running Apache. Is there an Apache config that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
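For reference, a Win32::Process::Create call that detaches a worker from the CGI request usually looks something like the sketch below. The perl.exe path and the worker script name are guesses, not taken from your configuration:

# Sketch only: paths below are guesses and need to match your install.
use strict;
use warnings;
use Win32;
use Win32::Process;

my $worker;
Win32::Process::Create(
    $worker,
    'C:\\Perl\\bin\\perl.exe',                       # full path to perl.exe
    'perl D:\\wwwroot\\scripts\\process_upload.pl',  # command line as one string
    0,                                               # do not inherit handles
    DETACHED_PROCESS | NORMAL_PRIORITY_CLASS,        # no console, normal priority
    'D:\\wwwroot\\scripts',                          # working directory
) or die 'Create failed: ' . Win32::FormatMessage(Win32::GetLastError());

# The CGI script can return immediately; $worker->GetProcessID gives the
# child PID if you want to log it.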
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you other implementation ideas with which you will not face potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (with POE usually) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (well, the notification part may not be straightforward but that's another topic.)
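As a rough illustration of that idea (not the setup described above; the port number and the one-line protocol are invented), a minimal POE::Component::Server::TCP listener that accepts a job request from the web tier could look like this:

# Sketch only: port and protocol are invented.
use strict;
use warnings;
use POE qw(Component::Server::TCP);

POE::Component::Server::TCP->new(
    Port        => 8123,
    ClientInput => sub {
        my ($heap, $input) = @_[HEAP, ARG0];
        # $input carries the job parameters sent by the web server, e.g.
        # "process /uploads/file123". Start or queue the work here, then
        # acknowledge so the CGI side can return to the user immediately.
        $heap->{client}->put("ACCEPTED $input");
    },
);

POE::Kernel->run;

The web-side CGI then only has to open a socket, send one line and read the acknowledgement, which keeps the long-running work out of the Apache process entirely.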
I have managed successfully to serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application STDERR is supposed to be logging to. It isn't reaching nginx (I suppose that makes sense) but I can't find much documentation as to where Starman may be logging it - if anywhere. I did have a look at Plack's Middleware modules but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log; either one will allow you to send the logging output somewhere other than STDERR (see the sketch after this list).
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this, it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
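As an illustration of the first option, a subclass could look roughly like the sketch below; the package name and log path are made up, and opening the file on every call is only for brevity:

package MyApp::Log;
# Sketch of option 1: the package name and log path are made up.
use strict;
use warnings;
use parent 'Catalyst::Log';

sub _send_to_log {
    my ($self, @messages) = @_;
    if (open my $fh, '>>', '/var/log/myapp/error.log') {
        print {$fh} @messages;
        close $fh;
    }
    else {
        $self->SUPER::_send_to_log(@messages);   # fall back to STDERR
    }
}

1;

# In MyApp.pm, before setup() is called:
#   __PACKAGE__->log( MyApp::Log->new );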
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you will discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and redirects them to /dev/null, and that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as the names of files to use instead.
So if you start your catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
Today Starman has a --error-log command-line option which allows you to redirect error messages to a file.
See the starman documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
I've got a project running under mod_perl that shows some information on a host. On the page is a text box with a dropdown that allows users to ping/nslookup/traceroute the host. The output is shown in the text box, like a tail -f.
It works great under CGI. When the user requests a ping it would make an AJAX call to the server, where it essentially starts the ping with the output going to a temp file. Then subsequent ajax calls would 'tail' the file so that the output was updated until the ping finished. Once the job finished, the temp file would be removed.
However, under mod_perl, no matter what I do I can't stop it from creating zombie processes. I've tried everything: double forking, using IPC::Run, etc. And in any case, system calls are not encouraged under mod_perl.
So my question is, maybe there's a better way to do this? Is there a CPAN module available for creating command line jobs and tailing output that will work under mod_perl? I'm just looking for some suggestions.
I know I could probably create some sort of 'job' daemon that I signal with details and get updates from. It would run the commands and keep track of their status etc. But is there a simpler way?
Thanks in advance.
I had a short timeframe on this one and no luck on CPAN, so I'll provide my solution here (I probably re-invented the wheel); I had to get something done right away.
I'll use ping in this example.
When a ping is requested by the user, the AJAX script creates a record in a database with the details of the ping (host, interval, count, etc.). The record has an auto-incrementing ID field. It then sends a SIGHUP to a job daemon, which is just a daemonised perl script.
This job daemon receives the SIGHUP, looks for new jobs in the database and processes each one. When it gets a new job, it forks, writes the PID and 'running' status to the DB record, opens up stdout/stderr files based on the unique job ID and uses IPC::Run to direct STDOUT/STDERR to these files.
The job daemon keeps track of the forked jobs, killing them if they run too long etc.
To tail the output, the AJAX script sends the job ID back to the browser. Then, on a Javascript timer, the AJAX script is called again; it checks the status of the job via the database record and tails the output files.
When the ping finishes, the job daemon sets the record status to 'done'. The AJAX script checks for this on its regular status checks.
One of the reasons I did it this way is that the AJAX script and the job daemon talk through an authenticated channel (the DB).
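A stripped-down sketch of the fork-plus-IPC::Run step described above might look like this; the jobs table layout, column names and spool directory are simplified assumptions, not the actual schema:

# Sketch only: table layout, column names and spool directory are assumptions.
use strict;
use warnings;
use IPC::Run ();

sub run_job {
    my ($dbh, $job) = @_;   # e.g. $job = { id => 42, command => ['ping', '-c', '5', 'somehost'] }

    my $child = fork();
    die "fork failed: $!" unless defined $child;

    if ($child == 0) {
        # Child: don't let this copy tear down the parent's DB connection.
        $dbh->{InactiveDestroy} = 1;

        # Run the command with stdout/stderr captured to per-job files.
        my $out = "/var/spool/pingjobs/$job->{id}.out";
        my $err = "/var/spool/pingjobs/$job->{id}.err";
        IPC::Run::run($job->{command}, '>', $out, '2>', $err);
        exit($? >> 8);
    }

    # Parent (the daemon): record the PID and mark the job as running.
    $dbh->do("UPDATE jobs SET pid = ?, status = 'running' WHERE id = ?",
             undef, $child, $job->{id});
    return $child;
}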
Can I run my mod_perl application as an ordinary user, similar to running a plain vanilla CGI application under suexec?
From the source:
Is it possible to run mod_perl enabled Apache as suExec?
The answer is No. The reason is that you can't "suid" a part of a process. mod_perl lives inside the Apache process, so its UID and GID are the same as the Apache process.
You have to use mod_cgi if you need this functionality.
Another solution is to use a crontab to call some script that will check whether there is something to do and will execute it. The mod_perl script will be able to create and update this todo list.
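To make the crontab suggestion concrete, a worker along these lines could run from the ordinary user's crontab (say, once a minute); the todo-list location and its one-command-per-line format are assumptions:

#!/usr/bin/perl
# Sketch of a cron-driven worker for the todo-list approach quoted above.
# The file location and one-command-per-line format are assumptions.
use strict;
use warnings;

my $todo = '/home/appuser/todo.list';
exit 0 unless -s $todo;                 # nothing queued

# Take the current list; the mod_perl side keeps appending to a fresh file.
rename $todo, "$todo.work" or exit 0;

open my $fh, '<', "$todo.work" or die "open $todo.work: $!";
while (my $cmd = <$fh>) {
    chomp $cmd;
    next unless length $cmd;
    system($cmd) == 0 or warn "job failed ($?): $cmd\n";
}
close $fh;
unlink "$todo.work";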
A more nuanced answer with some possible workarounds, from the "Practical mod_perl" book:
(I hope that's not pirated content; if it is, please edit it out)
mod_perl 2.0 improves the situation, since it allows a pool of Perl interpreters to be dedicated to a single virtual host. It is possible to set the UIDs and GIDs of these interpreters to be those of the user for which the virtual host is configured, so users can operate within their own protected spaces and are unable to interfere with other users.
Additional solutions from the same book are in appendix C2.
As mod_perl runs within the apache process, I would think the answer is generally no. You could, for example, run a separate apache process as this ordinary user and use the main apache process as a proxy for it.