Perl's backticks/system giving "tcsetattr: Input/output error"

I'm writing a perl script that kicks off a script on several different servers using ssh. The remote script needs to run as long as this script is running:
#!/usr/bin/perl
require 'config.cfg';

# @servers is defined in config.cfg
# Contains server info as [username,hostname]
#
# @servers = ([username,server1.test.com],[username,server2.test.com])
#

foreach $server ( @servers ) {
    my $pid = fork();
    if ( $pid == 0 ) {
        $u = ${$server}[0];
        $h = ${$server}[1];
        print "Running script on $h \n";
        print `ssh -tl $u $h perl /opt/scripts/somescript.pl`;
        exit 0;
    } else {
        die "Couldn't start the process: $!\n";
    }
}
[...]
When I run this script, I get the following output:
./brokenscript.pl
Running script on server01
$ tcsetattr: Input/output error
Connection to server01 closed.
The same result occurs when running with system (and backticks are preferred anyways since I want the output of the script). When I run the exact command between the backticks on the command line, it works exactly as expected. What is causing this?

The tcsetattr: Input/output error message comes from ssh when it tries to put the local terminal into “raw” mode (which involves a call to tcsetattr; see enter_raw_mode in sshtty.c, called from client_loop in clientloop.c).
From IEEE Std 1003.1, 2004 (POSIX) Section 11.1.4: Terminal Access Control, tcsetattr may return -1 with errno == EIO (i.e. "Input/output error") if the calling process is in an orphaned (or background?) process group.
Effectively, ssh is trying to change the settings of the local terminal even though it is not in the foreground process group (due to your fork and the local script exiting, as evidenced by the apparent shell prompt that comes immediately before the error message in your quoted output).
If you just want to avoid the error message, you can use ssh -ntt (redirect stdin from /dev/null, but ask the remote side to allocate a tty anyway) instead of ssh -t (add your -l and any other options you need back in too, of course).
More likely, you are interested in keeping the local script running as long as some of the remote processes are still running. For that, you need to use the wait function (or one of its "relatives") to wait for each forked process to exit before you exit the program that forked them (this will keep them in the foreground process group as long as the program that started them is in it). You may still want to use -n though, since it would be confusing if the multiple instances of ssh that you forked all tried to use (read from, or change the settings of) the local terminal at the same time.
As a simple demonstration, you could have the local script do sleep 30 after forking off all the children so that the ssh command(s) have time to start up while they are part of the foreground process group. This would suppress the error message, but it would not address your stated goal. You need wait for that (if I am interpreting your goal correctly).
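Putting those two suggestions together, here is a minimal sketch of what the questioner's loop might look like with -ntt and an explicit waitpid; the server list is illustrative, not taken from the real config.cfg:

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative stand-in for the @servers list from config.cfg.
my @servers = ( ['user', 'server1.test.com'], ['user', 'server2.test.com'] );

my @pids;
foreach my $server (@servers) {
    my $pid = fork();
    die "Couldn't fork: $!\n" unless defined $pid;
    if ($pid == 0) {
        my ($u, $h) = @$server;
        print "Running script on $h \n";
        # -n: stdin from /dev/null; -tt: have the remote side allocate a tty anyway
        print `ssh -ntt -l $u $h perl /opt/scripts/somescript.pl`;
        exit 0;
    }
    push @pids, $pid;
}

# Stay alive (and in the foreground process group) until every child is done.
waitpid($_, 0) for @pids;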

That probably happens because you are forcing SSH to allocate a tty when stdin/stdout are not really ttys. SSH tries to call some tty-specific function on those handles (probably forwarded from the remote side) and the call fails, returning an error.
Is there any reason why you should be allocating a tty?
Is there also any reason to use the obsolete version 1 of the SSH protocol?

Related

Get the output of a command executed via $self->send() on a remote host in Perl Expect module

I am using the Expect module in Perl to script an interactive session and execute a command on a remote machine. Request your help on this.
Here are the steps I used:
Do a switch user to another account.
Send the password to log in.
Once I get the shell for the new user, execute an ssh command to connect to a remote machine.
Then I want to execute a command on that remote machine and get its response.
I am able to execute the command on the remote machine. I am seeing the output on my terminal too.
But I am not able to capture it in a variable so that I can compare it against a value.
use Expect;

my $exp = Expect->new;
$exp->raw_pty(1);
$exp->spawn('su - crazy_user') or die "Cannot spawn switch user cmd: $!\n";
$exp->expect($timeout,
    [ qr/Password:/i,
      sub { my $self = shift;
            $self->send("$passwd\n");
            exp_continue;
      }],
    [ qr/\-bash\-4.1\$/i,
      sub { my $self = shift;
            $self->send("ssh $REMOTE_MACHINE\n");
            $self->send("$COMMAND1\n");
            exp_continue;
      }]
);
$exp->soft_close();
How can I get the result of the $COMMAND1 that I executed on the remote machine via $self->send("$COMMAND1\n") ?
I am by no means an expert on this, but as no one else has answered so far, let me attempt it.
Your Expect command is the su, and as such, normal expecting will only work on whatever that command answers back to your original shell. That, however, is only the password prompt and maybe some error messages. You can still send commands, but their responses show up on the shell of the new user, not the shell the Expect command has been executed in. That is why they show up on screen but not in the stream available to your Expect object. Note that you would likely encounter the very same problem if you were to ssh directly (I am not sure why you would su and then ssh anyway; could you not ssh crazy_user@remote_machine directly?).
The solution is probably to get rid of the su and ssh directly into the user you need on the remote machine, employing Net::SSH::Expect instead of plain Expect, as it gives you everything written to the remote console in its output stream. But be careful: if I remember correctly, the syntax for inspecting the stream is slightly different.
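For reference, a rough sketch of what that Net::SSH::Expect approach could look like; the host, user, and password here are hypothetical placeholders:

use Net::SSH::Expect;

# Hypothetical connection details; substitute your own.
my $ssh = Net::SSH::Expect->new(
    host     => 'remote.example.com',
    user     => 'crazy_user',
    password => 'secret',
    raw_pty  => 1,
);
$ssh->login();

# exec() runs the command and returns whatever it printed
# (within the default read timeout), so it can be captured.
my $output = $ssh->exec('uname -a');
print "Captured: $output\n";

$ssh->close();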

How to disable the Net::SSH::Expect timeout

My master script executes on a gateway server. It invokes another script and executes it on a remote server; however, there is no definite run time for the remote script, so I would like to disable the timeout.
Ok, let me explain.
sub rollout
{
    my ($path, $address) = @_;
    my $errorStatus = 0;
    print "Rollout process being initiated on netsim\n";

    # Creating SSH object
    my $ssh = Net::SSH::Expect->new(
        host    => "netsim",
        user    => 'root',
        raw_pty => 1
    );

    # Starting SSH process
    $ssh->run_ssh() or die "SSH process couldn't start: $!";

    # Start the interactive session
    $ssh->exec("/tmp/rollout.pl $path $address", 30);
    return $errorStatus;
}
When this part of the code gets executed, the rollout.pl script is invoked; however, after a second the process gets killed on the remote server.
Without knowing exactly what is going wrong, let me take a wild guess at an answer. Should I misinterpret your question, I will edit this later:
The default and enforced timeout of Net::SSH::Expect is only a timeout for read operations. It does not mean that a started script will be killed when its time has run out; it just means that the last read operation stops waiting for further output.
You can start your remote script, for example, with a timeout of 1 second, and the exec call may return after one second. The string returned by exec will then only contain the first second of output by the remote script. But the remote script may continue to run; you just need to start a new read operation (there are several to choose from) to capture future output. The remote script, if everything works as expected, should continue to run until it is finished or you close your ssh connection.
That means it will be killed if you just exec, wait for that to return, and then close your connection. One way to go would be to have the remote script print something when it is finished ("done" comes to mind) and wait for that string with a waitfor with a very high timeout, and only close the connection when waitfor has returned TRUE. Or, well, if the timeout was reasonably long, also when it returns FALSE. If you want to monitor for your script crashing, I would suggest using pgrep or ps on the remote machine. Or make sure your remote script says something when dying, for example using eval{} or die "done".
edit:
One thing to remember is that this exec is in fact not something like system. It basically sends the command to the remote shell and hits enter. At least that is how you might visualize what is happening. But then, to appear more like system, it waits for the default timeout of 1 second for any output and stores that in its own return value. Sadly it is a rather dumb read operation, as it just sits there and listens, so it is not a good idea to give it a high timeout or actually use it to interact with the script.
I would suggest using exec with a timeout of 0 and then using waitfor with a very high timeout to wait for whatever pattern your remote script puts as output once finished. And only then close your connection.
edit 2:
Just replace your exec with something like:
$ssh->exec("/tmp/rollout.pl $path $address", 0);
my $didItReturn = $ssh->waitfor("done", 300);
And make sure rollout.pl prints "done" when it is finished, and not before. Now this will start the script and then wait for up to 5 minutes for the "done". If a "done" shows up, $didItReturn will be true; if it timed out, it will be false. It will not wait the full 5 minutes if it sees a "done" (as opposed to giving the timeout to exec).
edit 3:
If you want no timeout at all:
$ssh->exec("/tmp/rollout.pl $path $address", 0);
my $didItReturn = 0;
while (!$didItReturn)
{
$didItReturn = $ssh->waitfor("done", 300);
}
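For this to work, rollout.pl itself must emit the marker as its very last output. A minimal sketch of that side (the body is a placeholder):

#!/usr/bin/perl
# /tmp/rollout.pl (sketch)
use strict;
use warnings;

my ($path, $address) = @ARGV;

# ... the actual rollout work goes here ...

# The agreed-upon marker that the caller's waitfor() is watching for;
# print it last, and only on success.
print "done\n";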

Odd behavior with Perl system() command

Note that I'm aware that this is probably not the best or most optimal way to do this but I've run into this somewhere before and I'm curious as to the answer.
I have a perl script that is called from an init that runs and occasionally dies. To quickly debug this, I put together a quick wrapper perl script that basically consists of
# $path set from library call.
while (1) {
    system("$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");
    sleep 30;  # Added this one later. See below...
}
Fire this up from the command line and it runs fine and as expected. command.pl is called and the script basically halts there until the child process dies then goes around again.
However, when called from a start script (actually via start-stop-daemon), the system command returns immediately, leaving command.pl running. Then it goes around for another go. And again and again. (This was not fun without the sleep command.). ps reveals the parent of (the many) command.pl to be 1 rather than the id of the wrapper script (which it is when I run from the command line).
Anyone know what's occurring?
Maybe the command.pl is not being run successfully. Maybe the file doesn't have execute permission (do you need to say perl command.pl?). Maybe you are running the command from a different directory than you thought, and the command.pl file isn't found.
There are at least three things you can check:
standard error output of your command. For now you are swallowing it by saying 2>&1. Remove that part and observe what errors the system command produces.
the return value of system. The command may run and system may still return a non-zero exit code, but if system returns 0, you know the command was successful.
Perl's error variable $!. If there was a problem, Perl will set $!, which may or may not be helpful.
To summarize, try:
my $ec = system("command.pl >> /var/log/outlog");
if ($ec != 0) {
    warn "exit code was $ec, \$! is $!";
}
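One detail worth adding (hedged, since the answer above glosses over it): system's return value is the raw wait status, the same value as $?, not the bare exit code. A sketch of unpacking it, reusing the wrapper's command from the question:

my $rc = system("$path/command.pl", @ARGV);   # list form bypasses the shell
if ($rc == -1) {
    warn "failed to launch: $!";              # fork/exec itself failed
} elsif ($? & 127) {
    warn "child died on signal ", ($? & 127);
} elsif ($? >> 8) {
    warn "child exited with status ", ($? >> 8);
}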
Update: if multiple instance of the command keep showing up in your ps output, then it sounds like the program is forking and running itself in the background. If that is indeed what the command is supposed to do, then what you do NOT want to do is run this command in an endless loop.
Perhaps when run from a daemon the system command is using a different shell than the one used when you are running as yourself. Maybe the shell used by the daemon does not recognize the >& construct.
Instead of system("..."), try the exec("...") function, if that works for you.
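Note, though, that exec never returns on success: it replaces the current process image, so the wrapper's while loop would be gone. A minimal sketch of the caveat:

# exec replaces the wrapper process entirely; code after a successful
# exec (including the restart loop) is never reached.
exec("$path/command.pl", @ARGV) or die "exec failed: $!";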

Export PATH using Net::SSH::Perl

I am writing a shell script to automate some of the tedious tasks that we perform. I need to ssh to a server and change the PATH variable remotely, and have that variable persist for the next commands to be executed. Code below:
sub ab_tier {
    my $ssh = Net::SSH::Perl->new($host);
    $ssh->login($user2, $user2);
    my $PATH;
    my ($stdout, $stderr, $exit) = $ssh->cmd("export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH");
    my ($PATH, $stderr, $exit) = $ssh->cmd("echo $PATH");
    print $PATH; # Check the path for correctness: does not change
}
However, the PATH does not change. Is there another way to implement this, or am I doing something wrong? I need to automate tasks, so I don't think $ssh->shell would help here. Please suggest.
I made changes as per the suggestions and everything works fine. However, I am noticing another issue when trying to display environment variables.
my $cmd_ref_pri = {
    cmd0 => "echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp"
};
Now I am connecting to a remote server using Net::SSH::Perl, and the value returned by $ENV{'HOME'} is the value of my home directory and not that of the remote server. However, if I add a command, as in:
my $cmd_ref_pri = {
    cmd0 => "whoami ; echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp"
};
Then the user id displayed is that of the user with which I ssh to the remote server. I do not have other modules installed and the only one available is Net::SSH::Perl, hence I am forced to use this.
Routine for executing a command:
sub ssh_cmd {
    # $cmd_sub contains the command, $ssh contains the object ref
    my ($cmd_sub, $ssh) = @_;
    my ($stdout, $stderr, $exit) = $ssh->cmd("bash", $cmd_sub);
    if ($exit != 0) {
        print $stdout;
        print "ERROR-> $stderr";
        exit 1;
    }
    return 0;
}
Any suggestions as to why this could happen?
cmd() is not passing your commands into one shell. It executes them in separate shells (or without any shell; the manual is not clear about it). As soon as you finish your export PATH, that shell exits and the new PATH is lost.
It looks like it is possible to pass all the relevant commands to a single shell process as separate lines of $stdin:
my $stdin='export A=B
echo $A
';
$ssh->cmd("bash",$stdin);
This would work just like an interactive login (but without terminal control, so commands that talk directly to the terminal would likely fail).
Anyway, Net::SSH::Perl does not look like the best tool for the job. I would rather use Expect for automation.
Set PATH on every command call:
$ssh->cmd('PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH echo $PATH');
And BTW, Net::SSH::Perl is not maintained anymore; nowadays Net::SSH2 and Net::OpenSSH are better alternatives.
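To illustrate the Net::OpenSSH route, a hedged sketch; the host and user are hypothetical stand-ins for the question's $host and $user2, and the single quotes keep $PATH from being interpolated locally:

use strict;
use warnings;
use Net::OpenSSH;

my ($host, $user2) = ('remote.example.com', 'someuser');  # hypothetical values

my $ssh = Net::OpenSSH->new($host, user => $user2);
$ssh->error and die "Connection failed: " . $ssh->error;

# Both commands run in one remote shell invocation, so the export
# is still in effect when echo runs.
my $out = $ssh->capture('export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH; echo $PATH');
print $out;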
Write commands to a remote temp file, then execute that one. Or, skip the $PATH thing and use the full path for subsequent commands (assuming you know it).

How do I daemonize an arbitrary script in unix?

I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon.
There are two common cases I'd like to deal with:
I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case).
I have a simple script or command line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once.
Of course it's trivial to write a while(true) loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly, since that applies to the script in case 1 as well (you may just want a shorter pause, or none, if the script is not intended to ever die; and of course, if the script really never dies, then the pause doesn't actually matter).
Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts.
More specifically, I'd like a program "daemonize" that I can run like
% daemonize myscript arg1 arg2
or, for example,
% daemonize 'echo `date` >> /tmp/times.txt'
which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly).
NB: The daemonize script will need to remember the command string it is daemonizing so that if the same command string is daemonized again it does not launch a second copy.
Also, the solution should ideally work on both OS X and linux but solutions for one or the other are welcome.
EDIT: It's fine if you have to invoke it with sudo daemonize myscript myargs.
(If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)
PS: In case it's useful, here's a similar question specific to python.
And this answer to a similar question has what appears to be a useful idiom for quick-and-dirty daemonizing of an arbitrary script:
You can daemonize any executable in Unix by using nohup and the & operator:
nohup yourScript.sh script args &
The nohup command allows you to shut down your shell session without it killing your script, while the & places your script in the background so you get a shell prompt to continue your session. The only minor problem with this is that standard out and standard error both get sent to ./nohup.out, so if you start several scripts in this manner, their output will be intertwined. A better command would be:
nohup yourScript.sh script args >script.out 2>script.error &
This will send standard out to the file of your choice and standard error to a different file of your choice. If you want to use just one file for both standard out and standard error, you can use this:
nohup yourScript.sh script args >script.out 2>&1 &
The 2>&1 tells the shell to redirect standard error (file descriptor 2) to the same file as standard out (file descriptor 1).
To run only one copy of a command at a time and restart it if it dies, you can use this script:
#!/bin/bash

if [[ $# < 1 ]]; then
    echo "Name of pid file not given."
    exit
fi

# Get the pid file's name.
PIDFILE=$1
shift

if [[ $# < 1 ]]; then
    echo "No command given."
    exit
fi

echo "Checking pid in file $PIDFILE."

# Check to see if process running.
PID=$(cat $PIDFILE 2>/dev/null)
if [[ $? = 0 ]]; then
    ps -p $PID >/dev/null 2>&1
    if [[ $? = 0 ]]; then
        echo "Command $1 already running."
        exit
    fi
fi

# Write our pid to file.
echo $$ >$PIDFILE

# Get command.
COMMAND=$1
shift

# Run command until we're killed.
while true; do
    $COMMAND "$@"
    sleep 10 # if command dies immediately, don't go into un-ctrl-c-able loop
done
The first argument is the name of the pid file to use. The second argument is the command. And all other arguments are the command's arguments.
If you name this script restart.sh this is how you would call it:
nohup restart.sh pidFileName yourScript.sh script args >script.out 2>&1 &
I apologise for the long answer (please see comments about how my answer nails the spec). I'm trying to be comprehensive, so you have as good of a leg up as possible. :-)
If you are able to install programs (have root access), and are willing to do one-time legwork to set up your script for daemon execution (i.e., more involved than simply specifying the command-line arguments to run on the command line, but only needing to be done once per service), I have a way that's more robust.
It involves using daemontools. The rest of the post describes how to set up services using daemontools.
Initial setup
Follow the instructions in How to install daemontools. Some distributions (e.g., Debian, Ubuntu) already have packages for it, so just use that.
Make a directory called /service. The installer should have already done this, but verify it, or create it yourself if you installed manually. If you dislike this location, you can change it in your svscanboot script, although most daemontools users are used to using /service and will get confused if you don't use it.
If you're using Ubuntu or another distro that doesn't use standard init (i.e., doesn't use /etc/inittab), you will need to use the pre-installed inittab as a base for arranging svscanboot to be called by init. It's not hard, but you need to know how to configure the init that your OS uses.
svscanboot is a script that calls svscan, which does the main work of looking for services; it's called from init so init will arrange to restart it if it dies for any reason.
Per-service setup
Each service needs a service directory, which stores housekeeping information about the service. You can also make a location to house these service directories so they're all in one place; usually I use /var/lib/svscan, but any new location will be fine.
I usually use a script to set up the service directory, to save lots of manual repetitive work. e.g.,
sudo mkservice -d /var/lib/svscan/some-service-name -l -u user -L loguser "command line here"
where some-service-name is the name you want to give your service, user is the user to run that service as, and loguser is the user to run the logger as. (Logging is explained in just a little bit.)
Your service has to run in the foreground. If your program backgrounds by default, but has an option to disable that, then do so. If your program backgrounds without a way to disable it, read up on fghack, although this comes at a trade-off: you can no longer control the program using svc.
Edit the run script to ensure it's doing what you want it to. You may need to place a sleep call at the top, if you expect your service to exit frequently.
When everything is set up right, create a symlink in /service pointing to your service directory. (Don't put service directories directly within /service; it makes it harder to remove the service from svscan's watch.)
Logging
The daemontools way of logging is to have the service write log messages to standard output (or standard error, if you're using scripts generated with mkservice); svscan takes care of sending log messages to the logging service.
The logging service takes the log messages from standard input. The logging service script generated by mkservice will create auto-rotated, timestamped log files in the log/main directory. The current log file is called current.
The logging service can be started and stopped independently of the main service.
Piping the log files through tai64nlocal will translate the timestamps into a human-readable format. (TAI64N is a 64-bit atomic timestamp with a nanosecond count.)
Controlling services
Use svstat to get the status of a service. Note that the logging service is independent, and has its own status.
You control your service (start, stop, restart, etc.) using svc. For example, to restart your service, use svc -t /service/some-service-name; -t means "send SIGTERM".
Other signals available include -h (SIGHUP), -a (SIGALRM), -1 (SIGUSR1), -2 (SIGUSR2), and -k (SIGKILL).
To down the service, use -d. You can also prevent a service from automatically starting at bootup by creating a file named down in the service directory.
To start the service, use -u. This is not necessary unless you've downed it previously (or set it up not to auto-start).
To ask the supervisor to exit, use -x; usually used with -d to terminate the service as well. This is the usual way to allow a service to be removed, but you have to unlink the service from /service first, or else svscan will restart the supervisor.
Also, if you created your service with a logging service (mkservice -l), remember to also exit the logging supervisor (e.g., svc -dx /var/lib/svscan/some-service-name/log) before removing the service directory.
Summary
Pros:
daemontools provides a bulletproof way to create and manage services. I use it for my servers, and I highly recommend it.
Its logging system is very robust, as is the service auto-restart facility.
Because it starts services with a shell script that you write/tune, you can tailor your service however you like.
Powerful service control tools: you can send most any signal to a service, and can bring services up and down reliably.
Your services are guaranteed a clean execution environment: they will execute with the same environment, process limits, etc., as what init provides.
Cons:
Each service takes a bit of setup. Thankfully, this only needs doing once per service.
Services must be set up to run in the foreground. Also, for best results, they should be set up to log to standard output/standard error, rather than syslog or other files.
Steep learning curve if you're new to the daemontools way of doing things. You have to restart services using svc, and cannot run the run scripts directly (since they would then not be under the control of the supervisor).
Lots of housekeeping files, and lots of housekeeping processes. Each service needs its own service directory, and each service uses one supervisor process to auto-restart the service if it dies. (If you have many services, you will see lots of supervise processes in your process table.)
In balance, I think daemontools is an excellent system for your needs. I welcome any questions about how to set it up and maintain it.
You should have a look at daemonize. It can detect a second copy (but it uses a file-locking mechanism). It also works on different UNIX and Linux distributions.
If you need to automatically start your application as a daemon, then you need to create an appropriate init script.
You can use the following template:
#!/bin/sh
#
# mydaemon     This shell script takes care of starting and stopping
#              the <mydaemon>
#

# Source function library
. /etc/rc.d/init.d/functions

# Do preliminary checks here, if any
#### START of preliminary checks #########

##### END of preliminary checks #######

# Handle manual control parameters like start, stop, status, restart, etc.
case "$1" in
  start)
    # Start daemons.
    echo -n $"Starting <mydaemon> daemon: "
    echo
    daemon <mydaemon>
    echo
    ;;
  stop)
    # Stop daemons.
    echo -n $"Shutting down <mydaemon>: "
    killproc <mydaemon>
    echo
    # Do clean-up works here like removing pid files from /var/run, etc.
    ;;
  status)
    status <mydaemon>
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart}"
    exit 1
esac

exit 0
I think you may want to try start-stop-daemon(8). Check out the scripts in /etc/init.d in any Linux distro for examples. It can find started processes by the command line invoked or by PID file, so it matches all your requirements except being a watchdog for your script. But you can always start another daemon watchdog script that just restarts your script if necessary.
As an alternative to the already mentioned daemonize and daemontools, there is the daemon command from the libslack package.
daemon is quite configurable and takes care of all the tedious daemon stuff, such as automatic restart, logging, and pidfile handling.
If you're using OS X specifically, I suggest you take a look at how launchd works. It will automatically check to ensure your script is running and relaunch it if necessary. It also includes all sorts of scheduling features, etc. It should satisfy both requirement 1 and 2.
As for ensuring only one copy of your script can run, you need to use a PID file. Generally I write a file to /var/run/<name>.pid that contains the PID of the currently running instance. If the file exists when the program runs, it checks whether the PID in the file is actually running (the program may have crashed or otherwise forgotten to delete the PID file). If it is, abort. If not, start running and overwrite the PID file.
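A minimal Perl sketch of that PID-file dance; the path is a hypothetical placeholder following the /var/run/<name>.pid convention described above:

use strict;
use warnings;

# Hypothetical location for this script's PID file.
my $pidfile = '/var/run/myscript.pid';

# If a previous instance left a PID behind and that process still exists, abort.
if (open my $fh, '<', $pidfile) {
    chomp(my $old = <$fh> // '');
    exit 0 if $old =~ /^\d+$/ and kill 0, $old;  # kill 0 only tests existence
}

# Otherwise claim (or overwrite) the PID file and keep running.
open my $out, '>', $pidfile or die "Cannot write $pidfile: $!";
print {$out} "$$\n";
close $out;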
Daemontools ( http://cr.yp.to/daemontools.html ) is a set of pretty hard-core utilities used to do this, written by D. J. Bernstein. I have used this with some success. The annoying part about it is that none of the scripts return any visible results when you run them - just invisible return codes. But once it's running it's bulletproof.
First get createDaemon() from http://code.activestate.com/recipes/278731/
Then the main code:
import subprocess
import sys
import time

createDaemon()

while True:
    subprocess.call(" ".join(sys.argv[1:]), shell=True)
    time.sleep(10)
You could give immortal a try. It is a *nix cross-platform (OS-agnostic) supervisor.
For a quick try on macOS:
brew install immortal
In case you are using FreeBSD from the ports or by using pkg:
pkg install immortal
For Linux by downloading the precompiled binaries or from source: https://immortal.run/source/
You can either use it like this:
immortal -l /var/log/date.log date
Or via a YAML configuration file, which gives you more options, for example:
cmd: date
log:
  file: /var/log/date.log
  age: 86400 # seconds
  num: 7 # int
  size: 1 # MegaBytes
  timestamp: true # will add a timestamp to the log
If you would also like to keep the standard error output in a separate file, you could use something like:
cmd: date
log:
  file: /var/log/date.log
  age: 86400 # seconds
  num: 7 # int
  size: 1 # MegaBytes
stderr:
  file: /var/log/date-error.log
  age: 86400 # seconds
  num: 7 # int
  size: 1 # MegaBytes
  timestamp: true # will add a timestamp to the log
This is a working version, complete with an example, which you can copy into an empty directory and try out (after installing the CPAN dependencies, which are Getopt::Long, File::Spec, File::Pid, and IPC::System::Simple -- all pretty standard, and highly recommended for any hacker: you can install them all at once with cpan <modulename> <modulename> ...).
keepAlive.pl:
#!/usr/bin/perl

# Usage:
# 1. put this in your crontab, to run every minute:
#    keepAlive.pl --pidfile=<pidfile> --command=<executable> <arguments>
# 2. put this code somewhere near the beginning of your script,
#    where $pidfile is the same value as used in the cron job above:
#        use File::Pid;
#        File::Pid->new({file => $pidfile})->write;
# If you want to stop your program from restarting, you must first disable the
# cron job, then manually stop your script. There is no need to clean up the
# pidfile; it will be cleaned up automatically when you next call
# keepAlive.pl.

use strict;
use warnings;

use Getopt::Long;
use File::Spec;
use File::Pid;
use IPC::System::Simple qw(system);

my ($pid_file, $command);
GetOptions("pidfile=s" => \$pid_file,
           "command=s" => \$command)
    or do { print "Usage: $0 --pidfile=<pidfile> --command=<executable> <arguments>\n"; exit; };

my @arguments = @ARGV;

# check if process is still running
my $pid_obj = File::Pid->new({file => $pid_file});

if ($pid_obj->running())
{
    # process is still running; nothing to do!
    exit 0;
}

# no? restart it
print "Pid " . $pid_obj->pid . " no longer running; restarting $command @arguments\n";
system($command, @arguments);
example.pl:
#!/usr/bin/perl

use strict;
use warnings;

use File::Pid;

File::Pid->new({file => "pidfile"})->write;
print "$0 got arguments: @ARGV\n";
Now you can invoke the example above with: ./keepAlive.pl --pidfile=pidfile --command=./example.pl 1 2 3 and the file pidfile will be created, and you will see the output:
Pid <random number here> no longer running; restarting ./example.pl 1 2 3
./example.pl got arguments: 1 2 3
You might also try Monit. Monit is a service that monitors and reports on other services. While it's mainly used as a way to notify (via email and sms) about runtime problems, it can also do what most of the other suggestions here have advocated. It can auto (re)start and stop programs, send emails, initiate other scripts, and maintain a log of output that you can pick up. In addition, I've found it's easy to install and maintain since there's solid documentation.
I have made a series of improvements on the other answer.
stdout of this script is made up purely of the stdout coming from its child, UNLESS it exits because it detected that the command is already running
cleans up its pidfile when terminated
optional configurable timeout period (accepts any positive numeric argument, passed to sleep)
usage prompt on -h
arbitrary command execution, rather than single command execution. The last arg, OR the remaining args (if there is more than one arg left), are sent to eval, so you can construct any sort of shell script as a string to send to this script as the trailing arg(s) for it to daemonize
argument count comparisons done with -lt instead of <
Here is the script:
#!/bin/sh

# this script builds a mini-daemon, which isn't a real daemon because it
# should die when the owning terminal dies, but what makes it useful is
# that it will restart the command given to it when it completes, with a
# configurable timeout period elapsing before doing so.

if [ "$1" = '-h' ]; then
    echo "timeout defaults to 1 sec.\nUsage: $(basename "$0") sentinel-pidfile [timeout] command [command arg [more command args...]]"
    exit
fi

if [ $# -lt 2 ]; then
    echo "No command given."
    exit
fi

PIDFILE=$1
shift

TIMEOUT=1
if [[ $1 =~ ^[0-9]+(\.[0-9]+)?$ ]]; then
    TIMEOUT=$1
    [ $# -lt 2 ] && echo "No command given (timeout was given)." && exit
    shift
fi

echo "Checking pid in file ${PIDFILE}." >&2

# Check to see if process running.
if [ -f "$PIDFILE" ]; then
    PID=$(< $PIDFILE)
    if [ $? = 0 ]; then
        ps -p $PID >/dev/null 2>&1
        if [ $? = 0 ]; then
            echo "This script is (probably) already running as PID ${PID}."
            exit
        fi
    fi
fi

# Write our pid to file.
echo $$ >$PIDFILE

cleanup() {
    rm $PIDFILE
}
trap cleanup EXIT

# Run command until we're killed.
while true; do
    eval "$@"
    echo "I am $$ and my child has exited; restart in ${TIMEOUT}s" >&2
    sleep $TIMEOUT
done
Usage:
$ term-daemonize.sh pidfilefortesting 0.5 'echo abcd | sed s/b/zzz/'
Checking pid in file pidfilefortesting.
azzzcd
I am 79281 and my child has exited; restart in 0.5s
azzzcd
I am 79281 and my child has exited; restart in 0.5s
azzzcd
I am 79281 and my child has exited; restart in 0.5s
^C
$ term-daemonize.sh pidfilefortesting 0.5 'echo abcd | sed s/b/zzz/' 2>/dev/null
azzzcd
azzzcd
azzzcd
^C
Beware that if you run this script from different directories, it may use different pidfiles and not detect any existing running instances. Since it is designed to run and restart ephemeral commands provided through an argument, there is no way to know whether something has already been started, because who is to say whether it is the same command or not? To improve on this enforcement of only running a single instance of something, a solution specific to the situation is required.
Also, for it to function as a proper daemon, you must use (at the bare minimum) nohup, as the other answer mentions. I have made no effort to provide any resilience to signals the process may receive.
One more point to take note of is that killing this script (if it was called from yet another script which is killed, or with a signal) may not succeed in killing the child, especially if the child is yet another script. I am uncertain why this is, but it seems to be related to the way eval works, which is mysterious to me. So it may be prudent to replace that line with something that accepts only a single command, like in the other answer.
There is also a very simple double-fork + setsid approach to detach any script from its parent process:
( setsid my-regular-script arg [arg ...] 1>stdout.log 2>stderr.log & )
setsid is part of the standard util-linux package, which has been with Linux since birth. This works when launched in any POSIX-compatible shell I know.
Another double-fork based approach doesn't even require any extra executables or packages and relies purely on a POSIX-based shell:
( my-regular-script arg [arg ...] 1>stdout.log 2>stderr.log & ) &
It also survives becoming an orphan when the parent process leaves the stage.
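For completeness, the same double-fork/setsid idea can be expressed in Perl; a minimal sketch, where the script name and log paths are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

# First fork: the parent returns to the shell immediately.
my $pid = fork() // die "fork: $!";
exit 0 if $pid;

# New session: detach from the controlling terminal.
setsid() != -1 or die "setsid: $!";

# Second fork: the session leader exits, so the daemon can
# never reacquire a controlling terminal.
$pid = fork() // die "fork: $!";
exit 0 if $pid;

# Point the standard handles somewhere harmless (paths are placeholders).
open STDIN,  '<',  '/dev/null'  or die $!;
open STDOUT, '>>', 'stdout.log' or die $!;
open STDERR, '>>', 'stderr.log' or die $!;

# Replace this process with the actual long-running script (hypothetical name).
exec 'my-regular-script', 'arg' or die "exec: $!";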