CGI script to connect to a Unix host, trigger Perl scripts, and report back the status - perl

I need some pointers here. I need to write a CGI script which has to connect to a Unix host and execute a set of Perl scripts.
I am new to the CGI world and I have a couple of questions for which I don't know the Perl solution.
How do I connect to the Unix host from a CGI script? I believe Net::SSH would work; is there a better module for this?
Let's assume I have connected to the server; how would I execute a script, and how would I know the status (running/success/failure) of the script?
a. While it's running I would like to see the output that gets generated. Is it possible to view the script output in real time?
b. If it fails, I should be notified with the reason for the failure, and the next script in the sequence should not be triggered.
If someone already has a similar setup and is willing to share the code/setup, I would be much happier :)
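Roughly, I imagine something along these lines (a sketch using Net::OpenSSH rather than Net::SSH; the host name, key path and script path below are made-up placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

$| = 1;                                   # try to push output to the browser as it arrives
print "Content-Type: text/plain\n\n";

# Placeholder host/credentials - adjust for the real environment.
my $ssh = Net::OpenSSH->new('unixhost.example.com',
                            user     => 'deploy',
                            key_path => '/home/apache/.ssh/id_rsa');
die "SSH connection failed: " . $ssh->error if $ssh->error;

# Run one script on the remote host, streaming its output as it is produced.
my ($out, $pid) = $ssh->pipe_out('/opt/scripts/step1.pl');
print while <$out>;
close $out;
waitpid($pid, 0);

# ssh propagates the remote exit status, so stop the sequence on failure.
my $status = $? >> 8;
die "step1.pl failed with exit status $status\n" if $status;

Whether the output really shows up in the browser in real time also depends on buffering in Apache/mod_cgi, so that part may need extra care.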

Related

How can I implement Webhooks with ASP.NET (or some other technology) to execute a powershell script?

I would like to be able to execute a PowerShell script on a Win10 box called MyServices: a request with either HTTP POST or HTTP GET to http://MyServices/foobar/<hex hash code> should execute a PowerShell script that resides on MyServices.
Alternatively, and preferably, is there a utility that works over my local network so that I can send a command from another machine to execute the script for me? I don't want to reinvent the wheel if something like this already exists.
ETA: I'm running a Linux machine that's going to initiate the PowerShell script. And to make it more complicated, this is a service running under Docker.

How to run powershell script remotely using chef?

I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server can never remotely access servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does that mean Chef is also running on the Windows server?
If yes, why not use psexec from the Windows PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook/files folder (assuming the script will be identical on all computers it runs on) or in your cookbook/templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps1>'
  <other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps1.erb>'
  variables(<hash of variables and values>)
  <other options>
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner, you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as shell_out! and the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on using it is a bit spottier, though, so spend time experimenting with it and googling.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
Whichever option you end up going with, you will probably want to guard your resource with conditions on when it should not run, such as when a file already exists, a registry key is set, or whatever else your script changes that you can check. If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
It's important to note that this is not an exhaustive list of how to run a powershell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.

Scope of system calls using Perl script on apache/unix

I have an example and a question regarding Unix/Apache session scope. Here is the test script I am using:
#! /usr/bin/perl -I/gcne/etc
$pid = $$;                                     # PID of this CGI process
system("mkdir -p /gcne/var/nick/hello.$pid");  # per-request working directory
chdir "/gcne/var/nick/hello.$pid";
$num = 3;
while ($num--) {
    system("> blah.$pid.$num");                # create an empty marker file
    #sleep(5);
    system("sleep 5");
}
system("> blahDONE.$pid");                     # mark completion
I have noticed that if I call this script twice from a web browser, it will execute the requests in sequence, taking a total of 30 seconds. How does Perl/Unix deal with parallel execution when using system commands? Is there a possibility that I get cross-session problems when using system calls? Or does Apache treat each of these server calls as a new console session process?
In this example, I'm basically trying to test whether or not different PID files would be created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then each time you request a web page your script is called, and each time it gets a new process ID. Basically, for CGIs the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDOUT and STDERR. Otherwise everything is a normal command call.
The situation is a little different when you use a mechanism like mod_perl, but it seems you aren't doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
P.S. The environment variables differ a bit between a call from Apache and one from a shell, but this is not relevant to your question (though you may wonder why e.g. USER or similar variables are missing).
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
A browser may only issue one call at a time (tested with Firefox), so when testing, it may appear that requests are handled one after another. This is not server-related but caused by the web browser.
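If you do need to make sure two requests never run the critical part in parallel, one common pattern is an exclusive lock on a shared lock file. A minimal sketch (the lock-file path is made up):

use strict;
use warnings;
use Fcntl qw(:flock);

# Hypothetical lock file shared by every instance of the CGI script.
open my $lock, '>', '/gcne/var/nick/cgi.lock' or die "cannot open lock file: $!";
flock($lock, LOCK_EX) or die "cannot take lock: $!";   # blocks until other requests release it

# ... work that must not run in parallel ...

flock($lock, LOCK_UN);
close $lock;

That said, since every request gets its own $$, the per-PID directories in the test script above already keep the files of concurrent requests apart.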

Spawn external process from a CGI script

I've searched and found several very similar questions to mine, but nothing in those answers has worked for me yet.
I have a perl CGI script that accepts a file upload. It looks at the file and determines how it should be processed and then calls a second non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file uploading part works fine but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user that sent the file (other than it sends an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec, system (passing a 1 as the first parameter), system (without 1 as the first parameter, calling 'start'), and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
Nothing else has given me any errors but they just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does. No log entry is being created.
It works fine on my local machine under Omni webserver but not on the actual server machine running Apache. Is there an Apache config that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
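For reference, my Win32::Process attempt looked roughly like this (the perl.exe and script paths are just illustrative, not the exact ones I use):

use strict;
use warnings;
use Win32;
use Win32::Process;

# Spawn the processing script detached from the CGI process and return immediately.
my $proc;
Win32::Process::Create(
    $proc,
    'C:\\Perl\\bin\\perl.exe',                               # full path to the interpreter
    'perl d:\\wwwroot\\scripts\\process_upload.pl file.dat', # command line as one string
    0,                                                       # don't inherit handles
    DETACHED_PROCESS | NORMAL_PRIORITY_CLASS,
    'd:\\wwwroot\\scripts'                                    # working directory
) or die 'Win32::Process::Create failed: ' . Win32::FormatMessage(Win32::GetLastError());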
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you other implementation ideas that avoid the potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (usually with POE) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (Well, the notification part may not be straightforward, but that's another topic.)
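To make the idea concrete, a bare-bones version of such an application server could look something like this (the port number and the job-processing step are placeholders, not a production design):

use strict;
use warnings;
use POE qw(Component::Server::TCP);

# Tiny job server: the CGI connects, sends one line describing the job,
# and gets a one-line reply when the work is done.
POE::Component::Server::TCP->new(
    Port        => 12345,
    ClientInput => sub {
        my ($heap, $job) = @_[HEAP, ARG0];
        # ... do the actual processing for $job here ...
        $heap->{client}->put("done: $job");
    },
);

POE::Kernel->run();

The CGI script then only has to open a socket to localhost:12345, write the job description, and exit, so the web server process never blocks on the heavy work.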

Perl expect - how to control timeout on target machine

I am a newbie to Perl. I am using the Perl Expect module to spawn a shell on a remote system and execute a set of commands there, one after another, using the send method (like $exp->send("my command as a string goes here\n")). The problem is that the commands I execute take some time to process, and before all the commands finish, the remote machine times out and I come back to my host machine's prompt. Can you please help me handle this?
I have one more question. I have a command which returns 2 values after execution (say I print 2 values on the remote machine). I want to capture these 2 values and pass them as arguments to the next command using send. How do I do this?
Please help me with this problem.
Thanks.
I just found out something about the Expect module. There is an undef option that can be used with expect, like $exp->expect(undef). This will wait indefinitely and lets all commands finish their processing. The problem is that it does not return control to the host machine. There is one more option of using expect with 'eof', which will wait until it encounters an EOF and then returns to the host machine, although I have no idea precisely how to use it. An elegant solution that I found is to use ssh to run commands on the remote machine rather than using Expect, in which case we do not have to deal with timeouts. :)
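For what it's worth, the pattern I ended up experimenting with was to give each command a generous timeout and wait for the shell prompt after every send (the host, prompt regex, and command names below are placeholders):

use strict;
use warnings;
use Expect;

my $prompt  = '\$\s*$';     # adjust to match the real remote shell prompt
my $timeout = 300;          # generous per-command timeout in seconds

my $exp = Expect->spawn('ssh user@remotehost') or die "cannot spawn ssh: $!";
$exp->expect($timeout, '-re', $prompt) or die "never saw the remote prompt\n";

# Run a long command and wait (up to $timeout) for the prompt to come back.
$exp->send("long_running_command\n");
$exp->expect($timeout, '-re', $prompt) or die "command timed out\n";

# Everything printed before the prompt reappeared, e.g. the two values to reuse.
my ($val1, $val2) = $exp->before() =~ /(\S+)\s+(\S+)\s*$/;
$exp->send("next_command $val1 $val2\n");
$exp->expect($timeout, '-re', $prompt);
$exp->soft_close();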