perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which, when run, forks: the parent process writes its PID to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( !defined $pid )
{
    die "Failed to fork.";
}
# Parent process
elsif ($pid)
{
    if ( !open( PID, ">>running_PIDs" ) )
    {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# Child process
else
{
    print "Output started";
    while ($loopControl)
    {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things, then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call it via ssh, control is never returned to the shell (nor is the "Output started" line ever printed), i.e.:
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is that the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under the debugger and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.

Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr. Otherwise ssh will wait for the backgrounded process to exit (see the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started"; line. If your child process needs to print output periodically during its run, you'll need to redirect STDOUT and STDERR to a log file instead, as sketched below.
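For example, a minimal sketch of that redirection in the child branch might look like this (the log file path /tmp/script.log is just a placeholder; pick whatever suits you):
# child process: detach from the terminal's handles, then send any
# further output to a log file instead of the pipe ssh is holding
open STDIN,  '<',  '/dev/null'       or die "Can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "Can't reopen STDOUT: $!";   # placeholder path
open STDERR, '>&', \*STDOUT          or die "Can't reopen STDERR: $!";
print "Output started\n";            # now goes to the log file
while ($loopControl) {
    # Do some stuff
}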

You aren't able to exit because there's still a process attached to the terminal. You need to nohup it:
ssh username@example.com 'nohup perl script.pl'

What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r

To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's not to do with Perl; it's because of the way SSH handles any long-running process you're trying to background.
In my case I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.
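If you would rather do the detaching inside Perl instead of on the ssh command line, a rough equivalent of that nohup line is sketched below (POSIX::setsid plus the /dev/null and log-file redirections do the detaching; treat it as a sketch, not a drop-in):
use POSIX qw(setsid);

# minimal daemonize sketch: fork, start a new session, detach std handles
my $pid = fork();
die "Failed to fork: $!" unless defined $pid;
exit 0 if $pid;    # parent returns control to ssh immediately

setsid() or die "Can't start a new session: $!";
open STDIN,  '<',  '/dev/null'           or die "Can't reopen STDIN: $!";
open STDOUT, '>>', '/path/to/output.log' or die "Can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT              or die "Can't reopen STDERR: $!";

# long-running work goes here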

Related

Running script constantly in background: daemon, lock file with crontab, or simply loop?

I have a Perl script that
queries a database for a list of files to process
processes the files
and then exits
Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running.
The above process works fine, but I am not very happy with the robustness of this approach. Specifically, if the script exits prematurely for some reason and the lock file is not removed, then subsequent instances will refuse to run even though nothing is actually running.
I would appreciate some advice on the following:
Is using the lock file a good approach or is there a better/more robust way to do this?
Is using crontab for this a good idea, or would I be better off writing an endless loop with sleep()?
Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?
Let's assume you take the continuous loop route. You rejigger your program to be one infinite loop. You sleep for a certain amount of time, then wake up and process your database files, and then go back to sleep.
You now need a mechanism to make sure your program is still up and running. This could be done via something like inetd.
However, your program basically does a single task, and does that task repeatedly throughout the day. This is what crontab is for. The inetd mechanism is for servers that are waiting for a client, like httpd or sshd. In those cases, you need a mechanism to recreate the server process as soon as it dies.
One way you can improve your lockfile mechanism is to include the PID with it. For example, in your Perl script you do this:
use feature 'say';
# LOCK_FILE_NAME is a constant holding the lock file path
open my $lock_file_fh, ">", LOCK_FILE_NAME or die "Cannot open lock file: $!";
say {$lock_file_fh} "$$";
close $lock_file_fh;
Now, if your crontab sees the lock file, it can test to see if that process ID is still running or not:
if [ -f $lock_file ]
then
    pid=$(cat $lock_file)
    if ! ps -p $pid
    then
        rm $lock_file
    fi
    restart_program
else
    restart_program
fi
Using a lock file is a fine approach if using cron, although I would recommend a database if you can install and use one easily (MySQL/Postgres/whatever. NOT SQLite). This is more portable than a file on a local filesystem, among other reasons, and can be re-used.
You are indeed correct. cron is not the best idea for this scenario just for the reason you described - if the process dies prematurely, it's hard to recover (you can, by checking timestamps, but not very easily).
What you should use cron for is a "start_if_daemon_died" job instead.
This is already well covered on Stack Overflow, e.g. in "How can I run a Perl script as a system daemon in linux?" and other posts.
This is not meant as a new answer but simply a worked-out example in Perl of David W.'s accepted answer.
my $LOCKFILE = '/tmp/precache_advs.lock';

create_lockfile();
do_something_interesting();
remove_lockfile();

sub create_lockfile {
    check_lockfile();
    open my $fh, ">", $LOCKFILE or die "Unable to open $LOCKFILE: $!";
    print $fh "$$";
    close $fh;
    return;
}

sub check_lockfile {
    if ( -e $LOCKFILE ) {
        chomp( my $pid = `cat $LOCKFILE` );    # strip the trailing newline
        if ( system("ps -p $pid") == 0 ) {
            # script is still running, so don't start a new instance
            exit 0;
        }
        else {
            remove_lockfile();
        }
    }
    return;
}

sub remove_lockfile {
    unlink $LOCKFILE or die "Unable to remove $LOCKFILE: $!";
    return;
}

Perl -- command executing inside a script hangs

When I run the following script, it does exactly what I want it to do and exits:
setDisplay.sh:
#!/bin/bash
Xvfb -fp /usr/share/fonts/X11/misc/ :22 -screen 0 1024x768x16 2>&1 &
export DISPLAY=:22
When I run ./setDisplay.sh, everything works fine.
OK, here's where the fun starts...
I have a Perl script that calls setDisplay...
Here is the eamorr.pl script:
#!/usr/bin/perl
use strict;
use warnings;
my $homeDir="/home/eamorr/Dropbox/site/";
my $cmd;
my $result;
print "-----Setting display...\n";
$cmd="sh $homeDir/setDisplay.sh";
print $cmd."\n";
$result=`$cmd`;
print $result;
It just hangs when I run ./eamorr.pl
I'm totally stuck...
When you do this:
$result=`$cmd`;
a pipe is created connecting the perl process to the external command, and perl reads from that pipe until EOF.
Your external command creates a background process which still has the pipe on its stdout (and also its stderr since you did 2>&1). There will be no EOF on that pipe until the background process exits or closes its stdout and stderr or redirects them elsewhere.
If you intend to collect the stdout and stderr of Xvfb into the perl variable $result, you'll naturally have to wait for it to finish. If you didn't intend that, I can't guess what you were trying to do with the 2>&1.
Also, a script that ends with an export command is suspect: it can only modify its own environment, and it exits immediately afterwards, so there is no lasting effect. Usually that's a sign that someone is trying to modify the parent process's environment, which is not possible.
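If all you actually need is Xvfb running in the background and DISPLAY set for the Perl process, one way around both issues is sketched below (assuming you don't need to capture Xvfb's output at all):
# set DISPLAY in the Perl process itself; the child script's export can't do this
$ENV{DISPLAY} = ':22';

# run the script without capturing its output; with this redirection in place
# even a backtick capture would see EOF as soon as setDisplay.sh exits,
# because Xvfb no longer holds the pipe open
my $cmd = "sh $homeDir/setDisplay.sh > /dev/null 2>&1";
system($cmd) == 0 or warn "setDisplay.sh exited with status $?";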

Perl not executing command when in debugger or as a Win32::Daemon

Synopsis
I execute a shell command from Perl and when run from the command line it works, but when run in the debugger it does not work. Running it as a Win32::Daemon shows the same behaviour.
The Source Code
I execute a command either with backticks
print `$cmd`
or like this:
open FH, "$cmd |" or die "Couldn't execute $cmd: $!\n";
while ( defined( my $line = <FH> ) ) {
    chomp($line);
    print "$line\n";
}
close FH;
The command reads like this:
$cmd = '"C:\path\to\sscep.exe" getca -f "C:\path\to\config\capi_sscep.cnf"'
Even a small test script that just executes this command only works when run from the command line.
The System
Windows x64
Active Perl v5.16.0, MSWin32-x64-multi-thread
Eclipse Juno 20120614-1722
What works
It works to open an administrator prompt (necessary for script execution) and run:
perl script.pl
Output gets printed to screen, $? is 0.
What does not work
Starting Eclipse and running a debug session with the same perl script.pl call.
Also not working is executing the command from a service created with Win32::Daemon. The daemon itself works perfectly fine and starts the Perl script as expected; only the command does not get executed. $? is 13568, or 53 if shifted with $? >> 8, and no output gets printed. The exit code does not belong to the program.
Further Details
The tool I am calling is sscep, which I have extended. It uses the OpenSSL API and loads the capi engine (the Windows CryptoAPI). The command itself does at least print output before any serious action starts. I can happily provide the source code for this, but I doubt it will help.
I was able to narrow this down further: the problem only exists in the combination of the Perl program (CertNanny) and the binary (sscep). Calling dir inside CertNanny works, and calling sscep from a small test Perl script works too. So what could possibly be done in Perl that prevents a single binary from being called...?
Any ideas where this problem might originate from or how I can possibly narrow it down?
Here is what I believe the problem to be: when you run your program on the command line, the system() command goes through the shell (cmd.exe); when you run your program elsewhere, it does not. Unfortunately, the two methods handle command line arguments differently. Here is an article that seems like it should help you solve the problem.
In my experience, this sort of thing is a mess in Windows. I have had trouble with this issue in Perl, also.
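One way to take the shell's argument handling out of the equation is to pass the program and its arguments as a list, so Perl launches the binary directly without cmd.exe. This is only a sketch and may or may not fix the service case, but it removes one variable:
# list form of system(): no cmd.exe, arguments passed through unchanged
my @cmd = (
    'C:\path\to\sscep.exe',
    'getca',
    '-f', 'C:\path\to\config\capi_sscep.cnf',
);
system(@cmd) == 0
    or warn "sscep failed, exit status " . ( $? >> 8 ) . "\n";
If you also need the output, redirect it to a temporary file inside the command or wrap the system() call in something like Capture::Tiny.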

ssh to open a file on a remote machine in perl

I am having problems ssh'ing to a remote machine and opening a text file on that machine using Perl. I am currently tailing the file as seen below:
my $remote_filename = '/export/home/fsv/sample.txt';
my $remote_host = 'bs16-s1.xyz.com';
my $cmd = "ssh -l $sshUser $remote_host tail -f $remote_filename |";
open $inFile, $cmd or die "Couldn't spawn [$cmd]: $!/$?";
The connection times out and I see that the file is not even close to being opened. I tried using Net::SSH and Remote::File as well, to no avail. It would be great if I could get some assistance on this.
Thanks for your time.
You are actually blocking later in the program than you claim. Specifically, you block where you read from $inFile, and that read only returns EOF when ssh exits, which only happens when tail exits. Since tail -f never exits (unless terminated by a signal), you block forever. That's why switching to cat worked.
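If the intent really is to follow the file, a sketch that processes each line as it arrives (instead of waiting for an EOF that never comes) would be:
my $cmd = "ssh -l $sshUser $remote_host tail -f $remote_filename |";
open my $inFile, $cmd or die "Couldn't spawn [$cmd]: $!/$?";

# read one line at a time; this loop runs for as long as tail -f keeps
# the connection open, so do the per-line work right here
while ( my $line = <$inFile> ) {
    chomp $line;
    print "got: $line\n";    # placeholder for the real processing
}
close $inFile;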

jsvc (tomcat) does not daemonize properly when run with backticks and then defuncts

In Debian Lenny, when running /etc/init.d/tomcat5.5 start, it runs jsvc and expects it to daemonize itself.
From a simple bash shell, this works fine.
However, from a script it can get completely stuck.
For example, the following works like a charm:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
system($cmd);
However, the following gets stuck as jsvc does not daemonize:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
`$cmd`;
It also gets stuck when running it using backticks in bash:
#!/bin/bash
CMD='/etc/init.d/tomcat5.5 start'
`$CMD`
Is this a bug in jsvc? Any idea why this works in a shell or using system(), but not using backticks? I am actually getting defunct/zombie processes because of this issue.
Just a hunch -- for a job to become a daemon it needs to close any file descriptors that were opened in its parent process. Perhaps this is easier to do with system than with backticks/readpipe, though I can't come up with any good reasons why that would be so. What if you used the backticks like:
`$CMD < /dev/null > /dev/null 2>&1`
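In the Perl version of the question, that suggestion would look something like this (just a sketch; the redirections detach jsvc's standard handles from the pipe that the backticks create):
my $cmd = '/etc/init.d/tomcat5.5 start';

# with stdin/stdout/stderr pointed at /dev/null, the backticks see EOF
# as soon as the init script exits, even though jsvc keeps running
`$cmd < /dev/null > /dev/null 2>&1`;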
Backticks evaluate to the output of the command; if there's lots of data, you may fill the pipe buffer. There's no need to use backticks if you don't want to evaluate or capture the output in the script itself.
For example, this bash script should work:
#!/bin/bash
CMD="/etc/init.d/tomcat5.5 start"
# note no backticks
$CMD
Also, please define "daemonize": do you want this nohup'd and asynchronous?