How to start a server-monitoring Perl script and execute the client-side code in the same script - perl

I need to launch a server script which will not exit, and after the server is ready I need to start the client code to run some automated tests.
I tried the following, but it does not work: the server process does not go into the background, and the client code is never executed.
system("$server &");
Is it possible to use Parallel::ForkManager to handle this, and if so, how? All the examples I have found cover repetitive tasks, while my case is a server plus a client.

Parallel::ForkManager isn't really designed for this; there are various other distributions that support what a server needs to do. Daemon::Daemonize looks like it does the fewest things beyond just running your designated server code in the background.
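If you would rather avoid an extra distribution entirely, here is a minimal dependency-free sketch using only core fork/exec. The server command, the port used for the readiness check, and the test driver are assumptions to adapt to your setup:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $server_cmd = './my_server.pl';    # hypothetical server script

# Fork; the child becomes the server, the parent runs the client tests.
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec($server_cmd) or die "exec failed: $!";    # never returns on success
}

# Parent: poll until the server accepts connections (assumes port 3000).
my $sock;
for (1 .. 30) {
    $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => 3000,
        Timeout  => 1,
    );
    last if $sock;
    sleep 1;
}
die "server never became ready" unless $sock;
close $sock;

system('./run_client_tests.pl');    # hypothetical client test driver

kill 'TERM', $pid;                  # stop the server when the tests are done
waitpid $pid, 0;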

Related

How to run a PowerShell script remotely using Chef?

I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server can never remotely access servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does that mean Chef is also running on the Windows server?
If yes, why not use PsExec from the Windows PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook/files folder (assuming the script will be identical on all computers it runs on) or in your cookbook/templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following code snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps1>'
  <other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps1.erb>'
  variables(<hash of variables and values>)
  <other options>
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, with the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on using it is a bit spottier, though, so spend some time experimenting with it and googling. A minimal example follows the documentation links below.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
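As a rough sketch of the powershell_script route (the resource name, script body, and marker path are made-up placeholders; the not_if guard anticipates the advice in the next paragraph):

powershell_script 'run my script' do
  code <<-EOH
    Write-Output 'doing the work'
    New-Item -ItemType File -Path 'C:\chef_ran_marker.txt'
  EOH
  not_if { ::File.exist?('C:\chef_ran_marker.txt') }
end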
Whichever option you end up going with, you will probably want to guard your resource with conditions on when it should not run, such as when a file already exists, a registry key is set, or whatever else your script changes that you can use. If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
It's important to note that this is not an exhaustive list of how to run a PowerShell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.

Terminate a running Perl script started with CGI

I am writing a Perl script that creates a Net::WebSocket::Server on port 3000. I had the (not so brilliant) idea to start the script in the browser via CGI, so now it runs in the background and can't be stopped. However, I have to restart the script whenever I modify it.
Is it possible to stop a CGI script stuck in an endless loop, other than by restarting the computer?
You didn't say what operating system you are on, so we cannot give you specific advice on how to find and kill the process. But you can always restart the web server application. CGI scripts are children of the server process (probably Apache) that starts them. If you simply restart the Apache server, they should all be terminated.
Please don't put code that is supposed to run persistently in your cgi-bin directory. That's a bad idea, as you discovered.
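For the longer term, it helps if the daemon makes itself easy to find and stop. Here is a sketch, assuming Net::WebSocket::Server as in the question and its documented shutdown() method; the PID-file path is an arbitrary choice:

#!/usr/bin/perl
use strict;
use warnings;
use Net::WebSocket::Server;

# Record the PID so the process can be found and killed later.
open my $pidfh, '>', '/tmp/ws_server.pid' or die "pidfile: $!";
print {$pidfh} $$;
close $pidfh;

my $server = Net::WebSocket::Server->new(
    listen     => 3000,
    on_connect => sub { },    # your handlers go here
);

# On TERM, stop the listener so start() returns and the script exits,
# instead of looping forever.
$SIG{TERM} = sub { $server->shutdown };

$server->start;

Then kill "$(cat /tmp/ws_server.pid)" stops it cleanly, without a reboot or an Apache restart.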

CGI script to connect to a Unix host, trigger Perl scripts, and report back the status

I need some pointers here. I need to write a CGI script which has to connect to a Unix host and execute a set of Perl scripts.
I am new to the CGI world and I have a couple of questions for which I don't know the Perl solution.
How will I connect to the Unix host from the CGI script? I believe using Net::SSH; is there any better module to do this?
Let's assume I have connected to the server; now how would I execute the script, and how would I know the status (running/success/failure) of the script?
a. When it's running I would like to see the output that gets generated. Is it possible to view the script output in real time?
b. If it fails then I should be notified with the reason for the failure, and the next script in the sequence should not be triggered.
If someone has a similar setup already available and is ready to show the code/setup, I would be much happier :)
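One common approach is Net::OpenSSH (often recommended over Net::SSH): it can run the remote script, stream its output as it is produced (question 2a), and report the exit status so you can stop the sequence on failure (question 2b). A sketch, with the host name and script paths as placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

# Assumes key-based login is already set up for the CGI user.
my $ssh = Net::OpenSSH->new('user@unixhost');
$ssh->error and die "Connection failed: " . $ssh->error;

# Stream the remote script's output line by line (2a).
my ($fh, $pid) = $ssh->pipe_out('perl /path/to/first_script.pl 2>&1')
    or die "Remote command failed: " . $ssh->error;
while (my $line = <$fh>) {
    print "remote: $line";    # live output as the script runs
}
close $fh;

# waitpid gives us the exit status (2b): stop the sequence on failure.
waitpid $pid, 0;
if ($? != 0) {
    die "first_script.pl failed with status " . ($? >> 8) . ", aborting\n";
}

$ssh->system('perl /path/to/next_script.pl')
    or die "next_script.pl failed: " . $ssh->error;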

Scope of system calls using a Perl script on Apache/Unix

I have an example and a question regarding Unix/Apache session scope. Here is the test script I am using:
#! /usr/bin/perl -I/gcne/etc
$pid = $$;
system("mkdir -p /gcne/var/nick/hello.$pid");
chdir "/gcne/var/nick/hello.$pid";
$num = 3;
while ($num--) {
    system("> blah.$pid.$num");
    #sleep(5);
    system("sleep 5");
}
system("> blahDONE.$pid");
I have noticed that if I call this script twice from a web browser, it executes the requests in sequence, taking a total of 30 seconds. How does Perl/Unix deal with parallel execution and system commands? Is there a possibility that I get cross-session problems when using system calls? Or does Apache treat each of these server calls as a new console session process?
In this example, I'm basically trying to test whether or not different PID files would be created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then each time you request a web page your script is called. This means each time it gets a new process ID. Basically, for CGIs the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDOUT and STDERR. Otherwise it is a normal command invocation.
The situation is a little different when you use a mechanism like mod_perl, but it seems you are not doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
P.S. The environment variables differ a bit between a call from Apache and one from a shell, but this is not relevant to your question (though you may wonder why e.g. USER or similar variables are missing).
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
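To see this interface directly, here is a minimal sketch of a CGI that reports its own process ID; requesting it repeatedly should print a different PID each time, confirming that each request is a separate process:

#!/usr/bin/perl
use strict;
use warnings;

# Minimal CGI: Apache passes the request via environment variables and
# reads our response from STDOUT. Each request runs in a fresh process.
print "Content-Type: text/plain\r\n\r\n";
print "My PID is $$\n";
print "Handled by: $ENV{SERVER_SOFTWARE}\n" if $ENV{SERVER_SOFTWARE};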
A browser may only issue one call at a time to the same URL (tested with Firefox), so when testing it may appear that requests are handled one after another. This is not server related, but caused by the web browser.
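One way to rule the browser out is to fire both requests from Perl instead. A sketch, assuming the test script is reachable at the URL below (adjust to your setup); if Apache really runs them in parallel, both forked clients should report roughly 15 seconds rather than 15 and 30:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# Hypothetical URL of the test script above; adjust to your server.
my $url = 'http://localhost/cgi-bin/test.pl';

my @pids;
for my $i (1, 2) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        my $start = time;
        LWP::UserAgent->new->get($url);    # blocks until the CGI finishes
        printf "request %d finished after %d seconds\n", $i, time - $start;
        exit 0;
    }
    push @pids, $pid;
}
# If Apache handles the requests in parallel, both report ~15 seconds;
# if they were serialized, the second reports ~30.
waitpid $_, 0 for @pids;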

How to shut down Perl Dancer applications nicely

I run several Perl Dancer applications at the same time under the same user in FCGI mode (Apache). If I understand correctly, Apache (or any other web server) will fork a new Dancer application if the current one(s) are busy.
To ensure that no visitor is interrupted by the shutdown, I would like Dancer to handle the current connection until it is finished and only then exit the process.
How do I shut down a Perl Dancer application using the HUP kill signal to perform such a graceful shutdown?
To roll out a new version of a Dancer application I use pkill -HUP perl as the dancer user to "shut down" the processes. But currently (due to the missing signal handler) it's more like shooting them down than shutting them down.
The solution by mugen kenichi works (Starman):
If you are able to change your infrastructure, you could try one of the Plack web servers that support your need. Starman and Hypnotoad both do graceful restarts on SIGHUP.
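For reference, a rough sketch of that approach; the worker count, PID-file path, and PSGI entry point are assumptions to adapt to your deployment:

# start the app under Starman, recording its PID
starman --workers 4 --pid /var/run/myapp.pid --daemonize bin/app.psgi

# graceful restart: workers finish their current requests before being replaced
kill -HUP $(cat /var/run/myapp.pid)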
There are a few shortcomings regarding <% request.uri_base %>, so we have to develop with hard-coded URI paths. Not very handsome, but necessary.
If I read your question correctly, you are concerned that Apache/FCGI might kill the Dancer app while it is in the middle of handling a request. Is that correct?
If so, don't worry about it. Apache/FCGI doesn't do that. When it forks a new instance of the handler because existing ones are busy, that's a new one in addition to the existing instances. The existing ones are left alone to finish what they're doing.