Perl: Waiting for background process to finish

My problem is that when I run the following, it reports that the bash script finished successfully, but it doesn't actually wait for the script to finish; if it quits too early, it moves a file that the script still needs. So what am I doing wrong that keeps it from waiting for the background process to finish before moving the files?
my $pid = fork();
if ($pid == -1) {
    die;
}
elsif ($pid == 0) {
    #system(@autoDeploy) or die;
    logit("Running auto deploy for $bundleApp");
    exec("./deployer -d $domain.$enviro -e $enviro >> /tmp/$domain.$enviro &")
        or logit("Couldn't run the script.");
}
while (wait() != -1) {
}
logit("Ran autoDeploy");
logit("Moving $bundleApp, to $bundleDir/old/$bundleApp.$date.bundle");
move("$bundleDir/$bundleApp", "$bundleDir/old/$bundleApp.$date.bundle");
delete $curBundles{$bundleApp};

The simplest thing you're doing wrong is putting & at the end of the exec command line: that means you're forking twice, and the process you're waiting on will exit immediately.
I don't actually see what purpose fork/exec serves you here at all, though, if you're not redirecting I/O and not doing anything but waiting for the exec'd process to finish; that's what system is for.
system("./deployer -d $domain.$enviro -e $enviro >> /tmp/$domain.$enviro")
and logit("Problem running deployer: $?");
will easily serve to replace the first twelve lines of your code.
And just as a note in passing, fork doesn't return -1 on failure; it returns undef, so that whole check is entirely bogus.
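If you do want to keep the fork, here is a minimal sketch with both of those issues fixed, assuming $domain and $enviro are defined as in your script:
my $pid = fork();
die "fork failed: $!" unless defined $pid;   # fork returns undef on failure, not -1
if ($pid == 0) {
    # Child: no trailing &, so the parent can actually wait for the deployer.
    exec("./deployer -d $domain.$enviro -e $enviro >> /tmp/$domain.$enviro")
        or die "Couldn't exec deployer: $!";
}
waitpid($pid, 0);   # Parent: block until that specific child exits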

You don't need to use & in your exec parameters, as you're already running under a fork.


How to use Perl to check when a Unix command has finished processing

I am working on a capstone project and am hoping for some insight.
This is the first time I've worked with Perl; it's pretty much a basic script that automates a few different Unix commands which need to be executed in a specific order. There are two lines in the script that execute a Unix command that needs to finish processing before it is acceptable for the rest of the script to run (the data will be incorrect otherwise).
How can I use Perl (or maybe this is a Unix question?) to print a simple string once the Unix command has finished processing? I am looking into ways to read in the Unix command name, but I am not sure how to check whether the process is no longer running and to print a string such as "X command has finished processing" upon its completion.
Example:
system("nohup scripts_pl/RunAll.pl &");
This runs a command in the background that takes time to process. I am asking how I can use Perl (or Unix?) to print a string once the process has finished.
I'm sorry if I've misunderstood the context of your question.
But couldn't you use Perl's fork function instead of & if you want to run the processes in parallel?
use feature 'say';   # for say()

# parent process
if (my $pid = fork) {
    # this block behaves as the normal (foreground) process
    system("nohup scripts_pl/RunAll2.pl");   # you can run another command here (like RunAll2.pl)
    wait;                                    # wait for the background child to finish
    say 'finished both';
}
# child process
else {
    # this block behaves as a background process
    system("nohup scripts_pl/RunAll.pl");    # note: no trailing &
}
You could try to use IPC::Open3 instead of system:
use IPC::Open3;
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
waitpid( $pid, 0 );
Or, if you need to run nohup through the shell:
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'bash','-c', 'nohup scripts_pl/RunAll.pl & wait');
Update: Thanks to @ikegami. A better approach if you would like STDIN to stay open after running the command:
open(local *CHILD_STDIN, "<&", '/dev/null') or die $!;
my $pid = open3("<&CHILD_STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');

Wait for the child process to complete in system command in perl

part of my script looks like this.
my @args = ("/bin/updateServer & ");
system(@args) == 0 or die "system @args failed: $?";
reloadServer;
My requirement is that reloadServer must be called only after updateServer finishes.
In my case, reloadServer runs immediately, without waiting for updateServer to finish.
updateServer runs for around 4 hours, so I have to run it in the background with "&".
How can I change my code to run reloadServer only after updateServer has completed?
Can someone please help me do this?
Just:
@args = ("/bin/updateServer");
Remove the & from the command to avoid starting the process in the background.
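Putting it together, a minimal sketch of the corrected sequence (assuming reloadServer is the sub from your script):
# system() blocks until updateServer exits, so reloadServer only runs afterwards
my @args = ("/bin/updateServer");
system(@args) == 0 or die "system @args failed: $?";
reloadServer();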
Instead of running the system command in the background, create a thread to run it and then reload:
use threads;
my $thread = threads->create(sub {
    my @args = ("/bin/updateServer");
    system(@args) == 0 or die "system @args failed: $?";
    reloadServer;
});
# Store $thread somewhere so you can check $thread->error/is_running for it failing/completing.
# Continue doing other things.
The thread will run in the background and run reloadServer once the (now blocking) system command completes.
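For example, the main code might later poll the thread before shutting down, roughly like this:
# ... do other work in the main thread ...
while ($thread->is_running) {
    sleep 1;   # or do something more useful between checks
}
$thread->join;   # reap the finished thread
warn "update/reload failed: ", $thread->error if $thread->error;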

Running script constantly in background: daemon, lock file with crontab, or simply loop?

I have a Perl script that
queries a database for a list of files to process
processes the files
and then exits
Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running.
The above process works fine but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed then a new instance will not execute properly.
I would appreciate some advice on the following:
Is using the lock file a good approach or is there a better/more robust way to do this?
Is using crontab for this a good idea, or would I be better off writing an endless loop with sleep()?
Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?
Let's assume you take the continuous loop route. You rejigger your program to be one infinite loop. You sleep for a certain amount of time, then wake up and process your database files, and then go back to sleep.
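A rough sketch of that loop (the sub name and interval here are just placeholders):
# Endless-loop variant: do one pass of work, then sleep until the next pass.
while (1) {
    process_database_files();   # placeholder for what the script currently does once per run
    sleep 60;                   # wake up roughly once a minute
}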
You now need a mechanism to make sure your program is still up and running. This could be done via something like inetd.
However, your program basically does a single task, and does that task repeatedly through the day. This is what crontab is for. The inetd mechanism is for servers that are waiting for a client, like httpd or sshd. In these cases, you need a mechanism to recreate the server process as soon as it dies.
One way you can improve your lockfile mechanism is to include the PID with it. For example, in your Perl script you do this:
open my $lock_file_fh, ">", LOCK_FILE_NAME;
say {$lock_file_fh} "$$";
close $lock_file_fh;
Now, if your crontab sees the lock file, it can test to see if that process ID is still running or not:
if [ -f $lock_file ]
then
    pid=$(cat $lock_file)
    if ! ps -p $pid
    then
        # stale lock: the process is gone, so clean up and restart
        rm $lock_file
        restart_program
    fi
else
    restart_program
fi
Using a lock file is a fine approach if using cron, although I would recommend a database if you can install and use one easily (MySQL/Postgres/whatever. NOT SQLite). This is more portable than a file on a local filesystem, among other reasons, and can be re-used.
You are indeed correct. cron is not the best idea for this scenario just for the reason you described - if the process dies prematurely, it's hard to recover (you can, by checking timestamps, but not very easily).
What you should use cron for is a "start_if_daemon_died" job instead.
This is already well covered on Stack Overflow, e.g. in "How can I run a Perl script as a system daemon in linux?" and other posts.
This is not meant as a new answer but simply a worked out example in Perl of David W.'s accepted answer.
my $LOCKFILE = '/tmp/precache_advs.lock';

create_lockfile();
do_something_interesting();
remove_lockfile();

sub create_lockfile {
    check_lockfile();
    open my $fh, ">", $LOCKFILE or die "Unable to open $LOCKFILE: $!";
    print $fh "$$";
    close $fh;
    return;
}

sub check_lockfile {
    if ( -e $LOCKFILE ) {
        my $pid = `cat $LOCKFILE`;
        if ( system("ps -p $pid") == 0 ) {
            # script is still running, so don't start a new instance
            exit 0;
        }
        else {
            remove_lockfile();
        }
    }
    return;
}

sub remove_lockfile {
    unlink $LOCKFILE or die "Unable to remove $LOCKFILE: $!";
    return;
}

Executing a Bash command asynchronously from a Perl script

I have to run a Bash command. But this command will take a few minutes to run.
If I execute this command normally (synchronously), my application will hang until the command is finished running.
How do I run Bash commands asynchronously from a Perl script?
You can use threads to start Bash asynchronously,
use threads;

my $t = async {
    return scalar `.. long running command ..`;
};
and later manually test if thread is ready to join, and get output in a non-blocking fashion,
my $output = $t->is_joinable() && $t->join();
If you do not care about the result, you can just use system("my_bash_script &");. It will return immediately and the script does what is needed to be done.
I have two files:
$ cat wait.sh
#!/usr/bin/bash
for i in {1..5}; { echo "wait#$i"; sleep 1;}
$ cat wait.pl
#!/usr/bin/perl
use strict; use warnings;
my $t = time;
system("./wait.sh");
my $t1 = time;
print $t1 - $t, "\n";
system("./wait.sh &");
print time - $t1, "\n";
Output:
wait#1
wait#2
wait#3
wait#4
wait#5
5
0
wait#1
wait#2
wait#3
wait#4
wait#5
It can be seen that the second call returns immediately, but the script keeps writing to stdout.
If you need to communicate with the child, then you need to use fork and redirect STDIN and STDOUT (and STDERR), or you can use the IPC::Open2 or IPC::Open3 modules. In any case, it is always good practice to wait for the child to exit before the caller exits.
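For example, a minimal IPC::Open2 sketch (using bc purely as a stand-in command) might look roughly like this:
use IPC::Open2;

# Attach to the child's STDIN and STDOUT; 'bc' is just a stand-in command.
my $pid = open2(my $child_out, my $child_in, 'bc');
print {$child_in} "2+2\n";
close $child_in;                 # send EOF so the child can finish
my $answer = <$child_out>;
print "child said: $answer";
waitpid($pid, 0);                # always reap the child before exiting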
If you want to wait for the executed processes you can try something like this in Bash:
#!/usr/bin/bash
cpid=()
for exe in script1 script2 script3; do
    $exe &
    cpid[$!]="$exe";
done
while [ ${#cpid[*]} -gt 0 ]; do
    for i in ${!cpid[*]}; do
        [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i]
    done
    echo DO SOMETHING HERE; sleep 2
done
This script first launches the scripts asynchronously and stores their PIDs in an array called cpid. Then there is a loop that checks whether each one is still running (i.e. whether /proc/<PID> still exists). If one no longer exists, the text UNSET <PID> is printed and that PID is removed from the array.
It is not bulletproof: if the DO SOMETHING HERE part runs for a very long time, the same PID could be reused by an unrelated process. But it works well in the average environment.
But this risk can also be reduced:
#!/usr/bin/bash
# Enable job control and handle SIGCHLD
set -m
remove() {
    for i in ${!cpid[*]}; do
        [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i] && break
    done
}
trap "remove" SIGCHLD

# Start background processes
cpid=()
for exe in "script1 arg1" "script2 arg2" "script3 arg3"; do
    $exe &
    cpid[$!]=$exe;
done

# Non-blocking wait for background processes to stop
while [ ${#cpid[*]} -gt 0 ]; do
    echo DO SOMETHING; sleep 2
done
This version enables the script to receive the SIGCHLD signal when an asynchronous subprocess exits. When SIGCHLD is received, the handler looks for the first process that no longer exists and removes it from the array, so the waiting while-loop is much simpler.
The normal way to do this is with fork. You'll have your script fork, and the child would then call either exec or system on the Bash script (depending on whether the child needs to handle the return code of the Bash script, or otherwise interact with it).
Then your parent would probably want a combination of wait and/or a SIGCHLD handler.
The exact specifics of how to handle it depend a lot on your situation and exact needs.
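A rough sketch of that pattern (the script name here is just a placeholder):
use POSIX ":sys_wait_h";   # for WNOHANG

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: replace ourselves with the long-running bash script (placeholder name).
    exec('bash', 'long_running.sh') or die "exec failed: $!";
}

# Parent: either block here with waitpid($pid, 0), or reap asynchronously
# via a SIGCHLD handler while doing other work.
$SIG{CHLD} = sub {
    while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
        print "child $kid finished\n";
    }
};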

perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which, when run, does a fork; the parent process outputs its PID to a file and then exits, while the child process outputs something to STDOUT and then goes into a while loop.
$pid = fork();
if ( ! defined $pid )
{
    die "Failed to fork.";
}
#Parent process
elsif ($pid)
{
    if (!open (PID, ">>running_PIDs"))
    {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
#child process
else
{
    print "Output started";
    while ($loopControl)
    {
        #Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things and then returns control back to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned to the shell (nor is the "Output started" line ever printed),
i.e.:
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under debug and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, & stderr. Otherwise ssh will wait for the backgrounded process to exit. FAQ.
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
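For instance, rather than closing the handles outright, the child could re-point them at /dev/null and a log file (the paths here are just examples):
# In the child, right after printing "Output started": detach from the ssh
# session by re-pointing the standard handles.
open STDIN,  '<',  '/dev/null'       or die "can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT          or die "can't dup STDERR to STDOUT: $!";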
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I need to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.