ssh to open a file on a remote machine in perl

I am having problems ssh'ing to a remote machine and opening a text file on that machine using Perl. I am currently tailing the file as shown below:
my $remote_filename = '/export/home/fsv/sample.txt';
my $remote_host = 'bs16-s1.xyz.com';
my $cmd = "ssh -l $sshUser $remote_host tail -f $remote_filename |";
open $inFile, $cmd or die "Couldn't spawn [$cmd]: $!/$?";
The connection times out and the file never gets opened. I tried using Net::SSH and Remote::File as well, to no avail. It would be great if I could get some assistance with this.
Thanks for your time.

You are actually blocking later in the program than you claim. Specifically, you block where you read from $inFile until the handle reaches EOF, which only happens when ssh exits, which in turn only happens when tail exits. Since tail -f never exits (unless killed by a signal), you never reach EOF either. That's why switching to cat worked.
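For what it's worth, a minimal sketch of reading such a pipe line by line as the data arrives, rather than waiting for EOF (the host and path are taken from the question; the ssh login name is illustrative):

use strict;
use warnings;

my $sshUser         = 'fsv';                        # illustrative login name
my $remote_host     = 'bs16-s1.xyz.com';
my $remote_filename = '/export/home/fsv/sample.txt';

my $cmd = "ssh -l $sshUser $remote_host tail -f $remote_filename |";
open my $inFile, $cmd or die "Couldn't spawn [$cmd]: $!/$?";

while ( my $line = <$inFile> ) {    # blocks only until the next line, not until EOF
    chomp $line;
    print "remote: $line\n";        # process each line as it arrives
}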

Related

How can I resolve a "no such file or directory" in perl?

I've been working on a script for the last 2 weeks that has been running just fine. Suddenly yesterday, it stopped working.
I was using the command mkdir -p $somepath. It started hanging, so I inserted an "or die". Then I figured the error might be due to me not checking whether the directory already exists, so I changed the code to:
unless (-e $somepath) {
    system("mkdir -p $somepath") or die "Couldn't make $somepath: $!";
}
NOTE: I have also done this using Perl's built-in mkdir instead of system, but at this point I have tried everything since nothing was working.
Here's the strange part. It actually IS creating the directory and then dying with the error
Couldn't make "insert path here": No such file or directory
So it seems like it is doing the mkdir, checking that it does exist, and then dying. I don't know why it started doing this because I hadn't changed a single thing in my script and it was working fine yesterday. Please let me know if I've forgotten to include any information.
Edit: I figured out my issue. It has absolutely nothing to do with any of the mkdirs. For some reason it was coming from enabling an option that allows a user to supply a path instead of having a direct path in the code. The dirty fix was to disable the feature for now. So the problem was occurring even before I got to the mkdir lines.
The problem is the ... or die you added.
system returns 0 on success (which is false in boolean context), so your code will only die if mkdir succeeds.
Also, $! doesn't contain a meaningful status code if system reports a non-zero exit status. The whole No such file or directory message is a red herring.
Better:
system(...) == 0
or die "mkdir -p $somepath returned $?";
Calling system with a single argument goes through the shell, which can cause problems if $somepath contains e.g. spaces, *, or other special characters.
Better:
system('mkdir', '-p', '--', $somepath) == 0
or die "mkdir -p $somepath returned $?";
There is no point in checking -e $somepath beforehand; mkdir -p takes care of that for you.
Finally, you don't need to run a separate program just to create a directory hierarchy:
use File::Path qw(make_path);
make_path($somepath);
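By default make_path may raise a fatal error on failure; if you would rather collect diagnostics yourself, File::Path also accepts an error option. A short sketch, assuming $somepath is already set:

use File::Path qw(make_path);

make_path($somepath, { error => \my $errors });
if (@$errors) {
    for my $diag (@$errors) {
        my ($dir, $message) = %$diag;    # each entry maps a path (or '') to its error message
        warn "Problem creating '$dir': $message\n";
    }
    die "Could not create $somepath\n";
}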

How to get the PID of a program executed within a Perl script

This answer explains how to get the pid of the new process when using Perl's exec(). The pid doesn't even change, so all you need to do is get the pid of the original script. But it doesn't work if I redirect the output to a file as part of the command, which I need to do.
say "my pid is $$";
exec("childscript.pl"); # same pid
But if I redirect the output as part of the command:
say "my pid is $$";
exec("childscript.pl > log.txt"); # different pid, usually old pid + 1
exec("childscript.pl > log.txt 2>&1 &"); # same
then the new pid is one higher than the old one (which is probably just because the processes were spawned in succession, and is not a reliable way to find it). I tested this both by looking at the output and by inserting a sleep 30 into "childscript.pl" so that I could see it with ps -e.
My guess here is that redirecting the output causes a new process to do the writing. But I need the pid of the program, and I have no control over the program except for the fact that I can execute it. (It needs to run in the background too.)
When you call exec with a single argument (and that argument contains shell metacharacters), perl automatically runs sh -c ARG for you. In your case:
exec("childscript.pl > log.txt");
# really means:
exec("/bin/sh", "-c", "childscript.pl > log.txt");
I.e. your script loads sh into the currently running process, keeping its PID. The shell performs output redirection, then runs childscript.pl as a child process (with a new PID).
There are two ways to attack this problem:
Do the output redirection in Perl and don't spawn a shell:
open STDOUT, ">", "log.txt" or die "$0: log.txt: $!\n";
exec "childscript.pl";
die "$0: childscript.pl: $!\n";
Tell the shell to also use exec and not spawn a child process:
exec "exec childscript.pl > log.txt";
die "$0: childscript.pl: $!\n";

Breaking whole chain of commands in perl through Net::OpenSSH

I have a perl script which is using Net::OpenSSH. At one point I have the following code:
$ssh->system(@cmd) or die "Failed to execute command on remote system";
For various reasons I might want to kill the command, and when I press ^C I'd like the whole chain to be terminated. With the above command only the local process is terminated.
After Googling the problem I found that I need to allocate a pseudo-terminal. I tried to use:
$ssh->system({tty => 1}, @cmd) or die "Failed to execute command on remote system";
This worked partially: it terminated the remote process but not the local one (and I couldn't find a way to check for an error that would distinguish the two). I tried spawn as well, thinking that signal blocking might have something to do with it:
my $pid = $ssh->spawn({tty => 1}, @cmd) or die "Failed to execute command on remote system";
waitpid($pid, 0);
die "Failed to execute command on remote system" unless ($? == 0);
How do I stop everything on ^C, or when the local command is killed?
PS: the command I'm executing is a Perl script I have control over, if that helps.
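Not an answer from the original thread, but a hedged sketch of the kind of signal forwarding the spawn variant seems to call for: install a SIGINT handler around the spawn/waitpid pair so that ^C in the local script is passed on to the local ssh process (which, with tty => 1, should let the remote side see the interrupt too). @cmd is the command list from the question.

# Hedged sketch: forward ^C to the spawned ssh process.
my $pid = $ssh->spawn({tty => 1}, @cmd)
    or die "Failed to execute command on remote system";

local $SIG{INT} = sub { kill INT => $pid };   # pass the interrupt on to ssh

waitpid($pid, 0);
die "Failed to execute command on remote system" unless $? == 0;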

Running script constantly in background: daemon, lock file with crontab, or simply loop?

I have a Perl script that
queries a database for a list of files to process
processes the files
and then exits
Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running.
The above process works fine but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed then a new instance will not execute properly.
I would appreciate some advice on the following:
Is using the lock file a good approach or is there a better/more robust way to do this?
Is using crontab for this a good idea, or would I be better off writing an endless loop with sleep()?
Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?
Let's assume you take the continuous loop route. You rejigger your program to be one infinite loop. You sleep for a certain amount of time, then wake up and process your database files, and then go back to sleep.
You now need a mechanism to make sure your program is still up and running. This could be done via something like inetd.
However, your program basically does a single task, and does that task repeatedly through the day. This is what crontab is for. The inetd mechanism is for servers that are waiting for a client, like https or sshd. In these cases, you need a mechanism to recreate the server process as soon as it dies.
One way you can improve your lockfile mechanism is to include the PID with it. For example, in your Perl script you do this:
use feature qw(say);    # for say()
open my $lock_file_fh, ">", LOCK_FILE_NAME    # LOCK_FILE_NAME: constant holding the lock file path
    or die "Cannot open lock file: $!";
say {$lock_file_fh} $$;
close $lock_file_fh;
Now, if your crontab sees the lock file, it can test to see if that process ID is still running or not:
if [ -f $lock_file ]
then
    pid=$(cat $lock_file)
    if ! ps -p $pid
    then
        # stale lock file: the old process is gone, so clean up and restart
        rm $lock_file
        restart_program
    fi
else
    restart_program
fi
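The same liveness test can be done from Perl rather than from the shell; a small sketch (assuming the LOCK_FILE_NAME constant from above) that uses kill with signal 0, which only checks whether the process exists:

# Sketch: decide from Perl whether the lock file is stale.
open my $fh, "<", LOCK_FILE_NAME or die "Cannot read lock file: $!";
chomp( my $pid = <$fh> );
close $fh;

if ( $pid && kill 0, $pid ) {
    exit 0;                       # previous instance is still alive, so back off
}
else {
    unlink LOCK_FILE_NAME;        # stale lock file; safe to start a new run
}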
Using a lock file is a fine approach if using cron, although I would recommend a database if you can install and use one easily (MySQL/Postgres/whatever. NOT SQLite). This is more portable than a file on a local filesystem, among other reasons, and can be re-used.
You are indeed correct. cron is not the best idea for this scenario just for the reason you described - if the process dies prematurely, it's hard to recover (you can, by checking timestamps, but not very easily).
What you should use cron for is a "start_if_daemon_died" job instead.
This is well covered on Stack Overflow already, e.g. in "How can I run a Perl script as a system daemon in linux?" and other posts.
This is not meant as a new answer but simply a worked out example in Perl of David W.'s accepted answer.
use strict;
use warnings;

my $LOCKFILE = '/tmp/precache_advs.lock';

create_lockfile();
do_something_interesting();
remove_lockfile();

sub create_lockfile {
    check_lockfile();
    open my $fh, ">", $LOCKFILE or die "Unable to open $LOCKFILE: $!";
    print $fh "$$";
    close $fh;
    return;
}

sub check_lockfile {
    if ( -e $LOCKFILE ) {
        my $pid = `cat $LOCKFILE`;
        if ( system("ps -p $pid") == 0 ) {
            # script is still running, so don't start a new instance
            exit 0;
        }
        else {
            remove_lockfile();
        }
    }
    return;
}

sub remove_lockfile {
    unlink $LOCKFILE or warn "Unable to remove $LOCKFILE: $!";
    return;
}
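One hedged variation on the example above: do the cleanup in an END block so the lock file is removed even if do_something_interesting() dies partway through (this still won't help after kill -9 or a power failure, which is what the PID check is for). The $owner_pid guard is there so an instance that exits early, because another copy is already running, does not delete the other copy's lock file.

# Hedged sketch: remove the lock file on any normal or die()-triggered exit,
# but only if this process is the one that wrote it.
my $owner_pid = $$;

END {
    if ( defined $LOCKFILE && -e $LOCKFILE ) {
        my $pid_in_file = `cat $LOCKFILE`;
        chomp $pid_in_file;
        unlink $LOCKFILE if $pid_in_file && $pid_in_file == $owner_pid;
    }
}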

perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which forks when run: the parent process writes its pid to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( ! defined $pid )
{
    die "Failed to fork.";
}
#Parent process
elsif ($pid)
{
    if (!open (PID, ">>running_PIDs"))
    {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
#child process
else
{
    print "Output started";
    while ($loopControl)
    {
        #Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things and then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned to the shell (nor is the "Output started" line ever printed),
i.e.:
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under debug and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr; otherwise ssh will wait for the backgrounded process to exit (this is covered in the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
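If the child does need to keep writing output during its run, a hedged sketch of reopening the standard handles instead of closing them (the log path is purely illustrative):

# Sketch: detach from the inherited handles without losing output entirely.
open STDIN,  "<",  "/dev/null"       or die "Cannot reopen STDIN: $!";
open STDOUT, ">>", "/tmp/script.log" or die "Cannot reopen STDOUT: $!";   # illustrative path
open STDERR, ">&", \*STDOUT          or die "Cannot reopen STDERR: $!";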
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.