I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.
I have a Perl CGI script that accepts a file upload. It looks at the file, determines how it should be processed, and then calls a second, non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file upload part works fine, but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user who sent the file (other than sending an email when it's done). I want the CGI script to run the second script in a separate process and then go away.
So far I've tried exec, system with 1 as the first argument, system without the leading 1 but invoking 'start', and Win32::Process. Using system with 1 as the first argument gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
None of the other approaches gives any errors, but they don't seem to work either. One of the first things the second script does is log a message to the Windows event log, and no such entry is being created.
It works fine on my local machine under the Omni web server but not on the actual server machine running Apache. Is there an Apache config setting that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir, but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
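For reference, my Win32::Process attempt looks roughly like the sketch below; the executable and script paths here are illustrative, not my real ones.

use strict;
use warnings;
use Win32;
use Win32::Process;

# Illustrative paths; substitute the real Perl and script locations.
my $perl   = 'C:\\Perl\\bin\\perl.exe';
my $script = 'D:\\wwwroot\\scripts\\process_upload.pl';

my $proc;
Win32::Process::Create(
    $proc,
    $perl,                 # full path to the executable
    qq{perl "$script"},    # command line as the child will see it
    0,                     # don't let the child inherit the CGI's handles
    DETACHED_PROCESS,      # no console, and the CGI doesn't wait
    '.',                   # working directory for the child
) or die Win32::FormatMessage(Win32::GetLastError());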
There may be a security-related problem, but if so, it should be apparent in the logs.
This won't exactly answer your question, but it may give you an alternative implementation idea that avoids the potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (usually with POE) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (Well, the notification part may not be straightforward, but that's another topic.)
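A bare-bones sketch of that idea using POE::Component::Server::TCP is below; the port, the one-line protocol, and process_upload are all placeholders, not a prescription.

#!/usr/bin/perl
use strict;
use warnings;
use POE qw(Component::Server::TCP);

# Minimal job server: the CGI connects, sends one line describing the
# uploaded file, and disconnects; the heavy processing happens here,
# outside the web server's process.
POE::Component::Server::TCP->new(
    Port        => 9999,
    ClientInput => sub {
        my ($heap, $input) = @_[HEAP, ARG0];
        process_upload($input);        # e.g. $input is the file's path
        $heap->{client}->put('OK');    # acknowledge the job
    },
);

sub process_upload {
    my ($path) = @_;
    # ... the long-running work goes here ...
}

POE::Kernel->run;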
I do know about the double-hop issue. My scenario: script1, which I run remotely, calls script2 on a network share, and script2 calls script3 on a second network share in a different domain.
Currently I am using CredSSP (I've read there can be security issues, but this environment is not public facing) to pass credentials for the first network share, the one holding script2. I do not have access to the computer in the second domain, so I cannot set up CredSSP on it. To work around that, inside script2 I run "net use" against the share holding script3 so the path can be resolved, and then use "Copy-Item" to copy script3 onto the machine running script2 (the remote machine).
Up to this step, everything works when I run script1: I can see that script3 is copied onto the remote machine. When script3 is called, it should make a web request and write text to stdout (which script2 pipes to Out-File). However, whenever I try to run the copy of script3 (located on the remote machine) from script2 (running on the remote machine), it does not seem to do anything. If I run script2 locally on the remote machine, it works fine (the file is generated from script3's output).
Any ideas on why this won't work? I've tried running script3 using several variations of Invoke-Expression, Invoke-Command, Start-Process, and even running it through cmd. I'm also having trouble getting output on what exactly is causing the issue (stdout and stderr are often empty with the different commands). Am I missing some command or tool that would make this easier to troubleshoot? It almost seems as if script3 is still running into a double-hop issue, even though it only makes a web request; and if it were, I would have expected an error to be returned.
There may be a better design for what I'm trying to do. I'm fairly new to PowerShell and may be overcomplicating this.
Edit: Rewrote my scripts in Python and got it working.
I need some guidance here. I have to write a CGI script which connects to a Unix host and executes a set of Perl scripts.
I am new to the CGI world and have a couple of questions for which I don't know the Perl solution.
How do I connect to the Unix host from a CGI script? I believe Net::SSH can do it; is there a better module for this?
Let's assume I have connected to the server; how would I execute a script, and how would I know the status (running/success/failure) of the script?
a. While it's running, I would like to see the output that gets generated. Is it possible to view the script output in real time?
b. If it fails, I should be notified with the reason for the failure, and the next script in the sequence should not be triggered.
If someone already has a similar setup and is ready to show the code/setup, I would be much happier :) The sketch below shows the sort of thing I'm imagining.
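To make the question concrete: this untested skeleton is roughly what I have in mind, using Net::OpenSSH (the host name, user, and script paths are just placeholders).

#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

$| = 1;                                   # unbuffer so output streams out as it is produced
print "Content-Type: text/plain\r\n\r\n";

my $ssh = Net::OpenSSH->new('unixhost.example.com', user => 'deploy');
$ssh->error and die "Connection failed: " . $ssh->error;

for my $script ('/opt/jobs/step1.pl', '/opt/jobs/step2.pl') {
    # pipe_out streams the remote script's stdout line by line (question a).
    my ($fh, $pid) = $ssh->pipe_out($script)
        or die "Could not start $script: " . $ssh->error;
    print while <$fh>;
    close $fh;

    # ssh exits with the remote command's status, so after the close
    # $? tells us whether the script failed (question b).
    if ($? != 0) {
        print "$script failed with exit status ", $? >> 8, "; stopping.\n";
        last;                             # don't trigger the next script
    }
}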
I have an example and question regarding unix/apache session scope. Here is the test script I am using:
#!/usr/bin/perl -I/gcne/etc
use strict;
use warnings;

# Each CGI request runs as its own process, so $$ differs per request.
my $pid = $$;

# Work inside a directory named after this process's PID.
system("mkdir -p /gcne/var/nick/hello.$pid");
chdir "/gcne/var/nick/hello.$pid" or die "chdir failed: $!";

# Create three empty marker files, sleeping 5 seconds between each.
my $num = 3;
while ($num--) {
    system("> blah.$pid.$num");    # shell redirection creates an empty file
    system("sleep 5");
}

# Final marker showing this run completed.
system("> blahDONE.$pid");
I have noticed that if I call this script twice from a web browser, the requests execute in sequence, for a total of 30 seconds. How do Perl and Unix deal with parallel execution and system commands? Is there a possibility of cross-session problems when using system calls? Or does Apache treat each of these requests as a new process?
In this example, I'm basically testing whether PID files could end up being created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then each time you request the page your script is run afresh, which means it gets a new process ID each time. Basically, for CGIs the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDOUT and STDERR. Otherwise it is an ordinary command invocation.
The situation is a little different when you use a mechanism like mod_perl, but it seems you aren't doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
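To see this for yourself, a throwaway CGI along these lines (hypothetical, not from the question) prints the process ID serving each request; request it twice and you get two different PIDs.

#!/usr/bin/perl
use strict;
use warnings;

# Each request is served by a fresh process, so $$ changes every time.
print "Content-Type: text/plain\r\n\r\n";
print "Served by process $$\n";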
P.S. The environment variables differ a bit between a call from Apache and one from the shell; this is not relevant to your question, but don't be surprised if e.g. USER or similar variables are missing.
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
A browser may issue only one call at a time (tested with Firefox), so when testing, requests may appear to be handled one after another. This is not server-related; it is caused by the web browser.
I am trying to connect to an external SOAP service using PHP and have written a small PHP test script that just connects to the service and performs a simple request to check that everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40s to establish the initial connection. When I make the same request using the exact same script on the command line, it goes through straight away.
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the WSDL in /tmp. If you run from the command line first, the cache file will be owned by whatever user you ran the script as, and Apache won't be able to read the cache. The WSDL then has to be downloaded and parsed on every request, which is slow.
Check the permissions of /tmp/wsdl*.
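If you want a quick look at who owns those cache files, ls -l will do, or something like this little Perl sketch (the /tmp/wsdl* pattern is the stock default; adjust it if soap.wsdl_cache_dir is set differently):

#!/usr/bin/perl
use strict;
use warnings;

# Print mode and owner of each cached WSDL file; if they belong to your
# shell user rather than the Apache user, that's the problem.
for my $f (glob '/tmp/wsdl*') {
    my ($mode, $uid) = (stat $f)[2, 4];
    printf "%s  mode=%04o  owner=%s\n", $f, $mode & 07777, scalar(getpwuid($uid));
}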
Maybe the external SOAP service is trying to probe your IP, and your server allows ICMP while your local network does not.
Anyway, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.inis that are being used?
On a standard Ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
Edit:
Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the SOAP class that was supposed to be included (it didn't include anything, because the search paths weren't valid), so it fell back to the built-in SoapClient class.
I've developed a PowerShell script to deploy updates to a suite of applications, including SQL Server database updates.
Next I need a way to execute these scripts on 100+ servers without manually connecting to each one. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. They execute, but I need to know about exceptions. The scripts create a log file; is there a way to send its contents back to the "client" (the local computer making the remote calls)?
The quick answer is no. The long version: it's possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year; the remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from OK or failed.
Alternatives I considered included PsExec, which is very much non-standard and may be blocked by a firewall, and system-management tools such as Microsoft's System Center, but that's just a big hammer for a tiny nail. So you have to pick your poison...
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to send console output to a file. We have a small snippet at the start of all our scripts that writes a log file with each script's console output to a central file share, naming the log file with the script name and execution date so that we have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. This probably won't solve all your problems, but it should definitely help with the "getting data back" part.
best regards,
Trond