Breaking a whole chain of commands in Perl through Net::OpenSSH

I have a Perl script which is using Net::OpenSSH. At one point I have the following code:
$ssh->system(@cmd) or die "Failed to execute command on remote system";
For various reasons I might want to kill the command, and when I press ^C I'd like the whole chain to be terminated. With the above command only the local process is terminated.
After Googling the problem I found that I need to allocate a pseudo-terminal. I tried to use:
$ssh->system({tty => 1}, @cmd) or die "Failed to execute command on remote system";
This worked partially: it terminated the remote process but not the local one (and I couldn't find a way to check for an error that would distinguish the two cases). I tried spawn as well, thinking that blocked signals might have something to do with it:
my $pid = $ssh->spawn({tty => 1}, @cmd) or die "Failed to execute command on remote system";
waitpid($pid, 0);
die "Failed to execute command on remote system" unless ($? == 0);
How do I stop everything on ^C, or when the local command is killed?
PS: if it helps, the command I'm executing is a Perl script I have control over.
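One direction worth exploring (a minimal sketch, not a verified answer; it assumes the tty forwards ^C to the remote command and that the interrupt then shows up in $? of the local ssh process, either as a signal death or as a non-zero exit status):
$ssh->system({tty => 1}, @cmd);
my $signal = $? & 127;    # low 7 bits of $?: terminating signal, if any
die "Interrupted (ssh killed by signal $signal)\n" if $signal;
die "Failed to execute command on remote system\n" if $? >> 8;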

Related

How do I execute an external program from Perl and have the perl script continue

I am running a Perl script from a cron job on Ubuntu. As part of the script it needs to execute an external program without waiting for the program to complete, and then continue executing the script. I have tried the following, but as near as I can tell it is not executing the program, and it also seems not to continue the script.
exec("/usr/bin/dotnet /usr/local/myprogram/myprogram.dll arg1, arg2, moreargs")
or print STDERR "Couldn't exec myprogram";
To call your program, you should pass the program arguments as separate arguments to exec. (See https://perldoc.perl.org/functions/exec.html.)
Since you want the script to proceed without waiting for the program to return, you should use fork (see https://perldoc.perl.org/functions/fork.html) so that you have two separate processes running in parallel (one running the program, one running the rest of the script).
So:
my $child_pid = fork();
die "Couldn't fork" unless defined $child_pid;
if (!$child_pid) {    # child process: replace it with the external program
    exec '/usr/bin/dotnet', '/usr/local/myprogram/myprogram.dll', 'arg1', 'arg2', 'moreargs';
    die "Couldn't exec myprogram: $!";    # only reached if exec itself fails
}
# rest of script
wait();    # reap the child once the rest of the script is done

Perl: implementing socket programming (system() never returns)

My aim: implement socket programming such that the client tries connecting to the server; if the server is not installed on the remote machine, the client (host) transfers a tar file and a Perl script to the server (target) machine. This Perl script untars the folder and runs a script (the server Perl script). Now the problem is: this server script has to run forever (serving multiple clients) until the machine restarts or something untoward happens.
So the script runs properly, but since it is continuously running, control doesn't go back to the client, which will again try to connect to the server (on some predefined socket). Basically I want to somehow run the server but bring control back to my host, which is the client in this case.
Here is the code:
my $sourcedir = "$homedir/host_client/test.tar";
my $sourcedir2 = "$homedir/host_client/sabkuch.pl";
my $remote_path = "/local/home/hanmaghu";
# Main subroutines
my $ssh = Net::OpenSSH->new($hostmachine, user => $username, password => $password);
$ssh->scp_put($sourcedir, $sourcedir2, $remote_path)
    or die "scp failed \n" . $ssh->error;
# test() is similar to system() in the Net::OpenSSH package
my $rc = $ssh->test('perl sabkuch.pl');
# check whether the test function returned -> this is never executed
if ($rc == 1) {
    print "test was ok, server established \n";
}
else {
    print "return from test = $rc \n";
}
exit;
The other script, which invokes our server script, is:
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
#system('perl utpsm_lts_server.pl');
# Tried with system but in both cases it doesn't return,
# this xxx_server.pl is my server script
exit;
The server script is:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
# flush after every write
$| = 1;
my $socket = IO::Socket::INET->new(
    LocalHost => '0.0.0.0',
    LocalPort => '7783',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1,
);
die "cannot create socket $! \n" unless $socket;
# note: method calls don't interpolate inside double-quoted strings
print "server waiting for client on port " . $socket->sockport() . "\n";
while (1) {
    # wait for a new client connection
    my $client_socket = $socket->accept();

    # get info about the newly connected client
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port \n";

    # read up to 1024 characters from the connected client
    my $data = "";
    $client_socket->recv($data, 1024);
    print "received data = $data";

    # write response data to the connected client
    $data = "ok";
    $client_socket->send($data);

    # notify the client that the response has been sent
    shutdown($client_socket, 1);
}
$socket->close();
Please help me execute this: in terms of design this is what I want, but I'm having this issue with the implementation. Can I work around it some other way?
In short, your 'driver' sabkuch.pl starts the server using exec, which never returns. From exec:
The "exec" function executes a system command and never returns; ...
(Emphasis from the quoted documentation.) Once an exec is used, the program running in the process is replaced by the other program, see exec wiki. If that server keeps running the exit you have there is never reached, that is unless there are errors. See Perl's exec linked above.
So your $ssh->test() will block forever (well, until the server does exit somehow). You need a non-blocking way to start the server. Here are some options.
Run the driver in the background
my $rc = $ssh->test('perl sabkuch.pl &');
This starts a separate subshell and spawns sabkuch.pl in it, then returns control so that test can complete. The sabkuch.pl runs exec and thus turns into the other program (the server), which runs indefinitely. See Background processes in perlipc. Also see it in perlfaq8, and the many good links there. Note that there is no need for perl ... if sabkuch.pl can be made executable.
See whether Net::OpenSSH has a method to execute commands so that it doesn't block.
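For instance, Net::OpenSSH provides spawn() (used in the first question above), which starts the remote command without waiting for it. A sketch, under the assumption that we simply never wait on the returned pid:
my $pid = $ssh->spawn('perl sabkuch.pl')
    or die "spawn failed: " . $ssh->error;
# deliberately no waitpid(): the driver carries on while the server runs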
One way to 'fire-and-forget' is to fork and then exec in the child, while the parent can then do what it wants (exit in this case). Then there is more to consider. Plenty of (compulsory) information is found in perlipc, while examples abound elsewhere as well (search for fork and exec). This should not be taken lightly as errors can lead to bizarre behavior and untold consequences. Here is a trivial example.
#!/usr/bin/perl
use strict;
use warnings;

system('tar -xvf test.tar') == 0 or die "Error with system(...): $!";

my $pid = fork;
die "Can't fork: $!" if not defined $pid;

# Two processes running now. For the child $pid is 0, for the parent a large integer
if ($pid == 0) {    # child; the parent won't get into this block
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
    die "exec should've not returned: $!";
}

# Can only be the parent here, since the child exec-ed and can't get here. Otherwise,
# put parent-only code in else { } and wait for the child or handle $SIG{CHLD}
# Now the parent can do what it needs to...
exit;    # in your case
Normally a concern when forking is to wait for children. If we'd rather not, this can be solved by double-forking or by handling SIGCHLD (see waitpid as well), for example, as in the sketch below. Please study perlfaq8 linked above, Signals in perlipc, docs for all calls used, and everything else you can lay your hands on. In this case the parent should by all means exit first; the child process is then re-parented by init and all is well. The exec-ed process gets the same $pid, but since cd will trigger the shell (sh -c cd) the server will eventually run with a different PID.
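A hedged sketch of that double-fork idiom (the setsid() call is an illustrative addition, not taken from the answer above):
use POSIX qw(setsid);

my $pid = fork;
die "Can't fork: $!" unless defined $pid;
if ($pid == 0) {                  # first child
    my $pid2 = fork;
    die "Can't fork: $!" unless defined $pid2;
    if ($pid2 == 0) {             # grandchild: becomes the server
        setsid();                 # detach from the controlling terminal
        exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
        die "exec failed: $!";
    }
    exit 0;                       # first child exits immediately
}
waitpid($pid, 0);                 # reap the short-lived first child
# the grandchild is re-parented by init, so there is no zombie to wait for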
With system('command &') we need not worry about waiting for a child.
This is related only to your direct question, not the rest of the shown code.
Well, I figured the best way would be to fork a child process and have the parent exit; the child can then go on running server.pl forever.
But it still is not working. Please let me know where in this code I am going wrong:
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
my $child_pid = fork;
if (!defined $child_pid) {
    print "couldn't fork \n";
}
else {
    print "in child , now executing \n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child \n";
}
The output: my script still hangs, and the print statement "in child, now executing" gets run twice; I don't understand why.
I work mostly in assembly language, so all of this is new to me.
Help will be appreciated.
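For reference, the double print happens because the else branch above runs in both the parent and the child: fork returns 0 in the child and the child's PID in the parent, while the code only checks for undef. A corrected sketch, following the pattern in the earlier answer:
my $child_pid = fork;
die "couldn't fork: $!" unless defined $child_pid;

if ($child_pid == 0) {    # child only: fork returned 0 here
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
    die "can't run server.pl in sabkuch child: $!";
}
# parent: free to exit (or carry on) while the child runs the server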

[Perl][Net::SSH2] How to keep the SSH connection open while executing a remote command

I'm working on a Perl script using Net::SSH2 to make an SSH connection to a remote server.
(I'm working on Windows.)
I chose Net::SSH2 because I had to make some SFTP connections in the same script.
For now, my SFTP connections work perfectly. The problem is when I try to execute a "long-duration" command, i.e. a command whose execution can take more than 30 seconds.
$ssh2 = Net::SSH2->new();
$ssh2->connect('HOST') or die;
if ($ssh2->auth(username => 'USER', password => 'PSWD')) {
    $sftp = Net::SFTP::Foreign->new(ssh2 => $ssh2, backend => 'Net_SSH2');
    $sftp->put('local_path', 'remote_path');
    $channel = $ssh2->channel();
    ##
    $channel->shell('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
    # OR (I tried both, both failed :( )
    $channel->exec('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
    ##
    $channel->wait_closed();
    $channel->close();
    print "End of command";
    $sftp->disconnect();
}
$ssh2->disconnect();
When I execute this script, the connection is successful and the file is correctly sent, but the execution is not (completely) performed. I mean, I think the command is sent for execution but terminated immediately, or not sent at all; I'm not sure.
What I want is for the script to wait until the command has completely finished before disconnecting everything (just because sometimes I need the result of the command execution).
Does anyone know how to solve this? :( The CPAN documentation is not very explicit about this.
Thanks!
PS: I'm open to any remarks or suggestions :)
Edit: After some tests, I can say that the command is sent but is interrupted. My test was to start another Perl script on the remote server. This script writes to a flat file. In this test, the script is started but the file is only half-filled; it is brutally cut off in the middle.
On the other hand, when I performed a "sleep(10)" just after the "$channel->exec()", the script ran to the end successfully.
The problem is that I can't just write a "sleep(10)"; I don't know whether it will take 9 or 11 seconds (or more, you see my point).
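One pattern worth trying before switching libraries (a hedged sketch based on Net::SSH2's documented channel interface; it is not part of the answer below): instead of sleeping, keep reading the channel until EOF, so the script blocks for exactly as long as the command runs.
$channel->exec('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
$channel->send_eof();        # nothing more to write to the command's stdin
print while <$channel>;      # consume output until the command ends (EOF)
$channel->wait_closed();
$channel->close();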
You can try using Net::SSH::Any instead.
It provides a higher level and easier to use API and can use Net::SSH2 or Net::OpenSSH to handle the SSH connection.
For instance:
use Net::SSH::Any;
my $ssh = Net::SSH::Any->new($host, user => $user, password => $password);
$ssh->error and die $ssh->error;
my $sftp = $ssh->sftp;
$sftp->put('local_path', 'remote_path');
my $output = $ssh->capture($cmd);
print "command $cmd output:\n$output\n\n";
$sftp->put('local_path1', 'remote_path1');
# no need to explicitly disconnect, connections will be closed when
# both $sftp and $ssh go out of scope.
Note that SFTP support (via Net::SFTP::Foreign) was added in version 0.03, which I have just uploaded to CPAN.

ssh to open a file on a remote machine in perl

I am having problems ssh'ing to a remote machine and opening a text file on that machine using Perl. I am currently tailing the file as shown below:
my $remote_filename = '/export/home/fsv/sample.txt';
my $remote_host = 'bs16-s1.xyz.com';
my $cmd = "ssh -l $sshUser $remote_host tail -f $remote_filename |";
open $inFile, $cmd or die "Couldn't spawn [$cmd]: $!/$?";
The connection times out and I can see that the file is not even close to being opened. I tried using Net::SSH and Remote::File as well, to no avail. It would be great if I could get some assistance on this.
Thanks for your time.
You are actually blocking later in the program than you claim. Specifically, you block where you read from $inFile until the handle returns EOF, which is why ssh exits, which is when tail exits. Since tail -f never exits (unless terminated by a signal), you never exit either. That's why switching to cat worked.
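A minimal sketch of working with that behavior (the stop condition and the signal choice are illustrative assumptions): read the pipe line by line and bail out on your own condition instead of waiting for an EOF that never comes, then kill the ssh process before closing the handle.
my $pid = open my $inFile, $cmd or die "Couldn't spawn [$cmd]: $!";
while (my $line = <$inFile>) {
    print $line;
    last if $line =~ /some stop condition/;    # hypothetical condition
}
kill 'TERM', $pid;    # stop ssh (and with it the remote tail -f)
close $inFile;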

perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which, when run, forks; the parent process writes its PID to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( !defined $pid ) {
    die "Failed to fork.";
}
# parent process
elsif ($pid) {
    if ( !open(PID, ">>running_PIDs") ) {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# child process
else {
    print "Output started";
    while ($loopControl) {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things and then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call it via ssh, control is never returned to the shell (nor is the "Output started" line ever printed),
i.e.:
$ ssh username@example.com 'perl script.pl'
Interestingly, though, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under the debugger and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr. Otherwise ssh will wait for the backgrounded process to exit (see the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead, as sketched below.
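A sketch of that redirection (the log file path is an illustrative assumption):
open STDIN,  '<',  '/dev/null'       or die "Can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "Can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT          or die "Can't reopen STDERR: $!";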
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.