Perl: implementing socket programming (system() never returns)

My aim: implement socket programming such that the client tries connecting to the server; if the server is not installed on the remote machine, the client (host) transfers a tar file and a Perl script to the server (target) machine. That Perl script untars the folder and runs a script (the server Perl script). Now the problem is: this server script has to run forever (serving multiple clients) until the machine restarts or something untoward happens.
So the script runs properly, but since it is continuously running, control doesn't go back to the client, which will again try to connect to the server (on some predefined socket). Basically I want to somehow run the server but bring control back to my host, which is the client in this case.
Here is the code:
my $sourcedir = "$homedir/host_client/test.tar";
my $sourcedir2 = "$homedir/host_client/sabkuch.pl";
my $remote_path = "/local/home/hanmaghu";
# Main subroutines
my $ssh = Net::OpenSSH->new ( $hostmachine, user =>$username, password => $password);
$ssh->scp_put($sourcedir,$sourcedir2,$remote_path)
or die "scp failed \n" . $ssh->error;
# test() is similar to system(), from the Net::OpenSSH package
my $rc = $ssh->test('perl sabkuch.pl');
# check if test function returned or not -> this is never executed
if ($rc == 1) {
    print "test was ok , server established \n";
}
else {
    print "return from test = $rc \n";
}
exit;
The other script, which invokes our server script, is:
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
#system('perl utpsm_lts_server.pl');
# Tried with system but in both cases it doesn't return,
# this xxx_server.pl is my server script
exit;
The server script is:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
#flush after every write
$| = 1;
my $socket = IO::Socket::INET->new(
    LocalHost => '0.0.0.0',
    LocalPort => '7783',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1,
);
die "cannot create socket $! \n" unless $socket;
print "server waiting for client on port $socket->LocalPort \n";
while (1) {
    # wait for a new client connection
    my $client_socket = $socket->accept();

    # get info about the newly connected client
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port \n";

    # read up to 1024 characters from the connected client
    my $data = "";
    $client_socket->recv($data, 1024);
    print "received data = $data";

    # write response data to the connected client
    $data = "ok";
    $client_socket->send($data);

    # notify the client that the response has been sent
    shutdown($client_socket, 1);
}
$socket->close();
Please help me figure out how to do this: in terms of design this is what I want, but I am running into this issue in the implementation. Can I do it some other way, as a workaround?

In short, your 'driver' sabkuch.pl starts the server using exec -- which never returns. From exec:
The "exec" function executes a system command and never returns; ...
(Emphasis from the quoted documentation.) Once exec is used, the program running in the process is replaced by the other program; see the exec wiki. If that server keeps running, the exit you have there is never reached, unless there are errors. See Perl's exec, linked above.
So your $ssh->test() will block forever (well, until the server exits somehow). You need a non-blocking way to start the server. Here are some options:
Run the driver in the background
my $rc = $ssh->test('perl sabkuch.pl &');
This starts a separate subshell and spawns sabkuch.pl in it, then returns control so test can complete. sabkuch.pl runs exec and thus turns into the other program (the server), which runs indefinitely. See Background processes in perlipc; also see perlfaq8 and the many good links there. Note that there is no need for perl ... if sabkuch.pl can be made executable.
See whether Net::OpenSSH has a method to execute commands so that it doesn't block.
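For instance, Net::OpenSSH has a spawn method that starts the remote command and returns without waiting for it. A minimal sketch, assuming the same $ssh object as above (check the docs of your installed version):
# spawn() starts the remote command in the background and returns
# the pid of the local ssh process, so test()-style blocking is avoided
my $server_pid = $ssh->spawn('perl sabkuch.pl')
    or die "spawn failed: " . $ssh->error;
# the client can go on and connect to the server socket later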
One way to 'fire-and-forget' is to fork and then exec in the child, while the parent goes on to do what it wants (exit, in this case). There is more to consider, though. Plenty of (compulsory) information is found in perlipc, and examples abound elsewhere as well (search for fork and exec). This should not be taken lightly, as errors can lead to bizarre behavior and untold consequences. Here is a trivial example.
#!/usr/bin/perl
use strict;
use warnings;

system('tar -xvf test.tar') == 0 or die "Error with system(...): $!";

my $pid = fork;
die "Can't fork: $!" if not defined $pid;

# Two processes running now. For the child $pid is 0, for the parent a large integer
if ($pid == 0) {  # child; the parent won't get into this block
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
    die "exec should've not returned: $!";
}

# Can only be the parent here, since the child exec-ed and can't get here. Otherwise,
# put parent-only code in else { } and wait for the child or handle $SIG{CHLD}
# Now the parent can do what it needs to...
exit;  # in your case
Normally a concern when forking is waiting for children. If we'd rather not, this can be solved by double-forking or by handling SIGCHLD (see waitpid as well), for example. Please study perlfaq8 linked above, Signals in perlipc, the docs for all calls used, and everything else you can lay your hands on. In this case the parent should by all means exit first; the child process is then re-parented by init and all is well. The exec-ed process gets the same $pid, but since cd triggers the shell (sh -c cd) the server will eventually run with a different PID.
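For reference, a minimal sketch of the double-fork pattern mentioned above (hedged; the server command is the one from the question):
use POSIX qw(setsid);

my $pid = fork // die "Can't fork: $!";
if ($pid == 0) {                      # first child
    setsid();                         # optional: detach from the controlling terminal
    my $pid2 = fork // die "Can't fork again: $!";
    if ($pid2 == 0) {                 # grandchild becomes the server
        exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
            or die "exec failed: $!";
    }
    exit 0;                           # first child exits immediately
}
waitpid($pid, 0);                     # parent reaps only the short-lived child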
With system('command &') we need not worry about waiting for a child.
This is related only to your direct question, not the rest of the shown code.

Well, I figured the best way would be to fork out a child process and have the parent exit; the child can then go on forever running the server.pl.
But it still is not working. Please let me know where in this code I am going wrong:
#!/usr/bin/perl
use strict;
use warnings;

system('tar -xvf test.tar');

my $child_pid = fork;
if (!defined $child_pid) {
    print "couldn't fork \n";
}
else {
    print "in child , now executing \n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child \n";
}
The output: my script still hangs, and the print statement "in child , now executing" gets run twice; I don't understand why.
I work mostly in assembly language, so all of this is new to me.
Help will be appreciated.
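For what it's worth, the double print happens because fork returns 0 in the child and the child's PID in the parent; both values are defined, so both processes fall into the else branch, print, and call exec (the parent's exec is what keeps control from coming back). A hedged sketch of the fix:
my $child_pid = fork;
die "couldn't fork: $!" unless defined $child_pid;

if ($child_pid == 0) {    # only the child (fork returned 0) execs the server
    print "in child , now executing \n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child \n";
}
# the parent falls through, so control returns to the caller
exit;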

Related

perl open() always returns the PID of the sh instead of the underlying program

I have to kill a program that I am opening via
$pid = open(FH, "program|")
or
$pid = open(FH, "-|", "program")
However, the program (mosquitto_sub, to be specific) still lingers around in the background, because open is returning the PID of the sh that perl uses to run the program, so I am only killing the sh wrapper instead of the actual program.
Is there a way to get the program's real PID? What is the point of getting the sh's PID?
There are a few ways to deal with this.
First, you can use the list form to open a process; then no shell is involved, so the child process (with the pid returned by open) is precisely the one running the program you need to stop:
my @cmd = ('progname', '-arg1', ...);
my $pid = open(my $fh, '-|', @cmd)
    or die "Can't open \"@cmd\": $!";
...
my $num_signaled = kill 15, $pid;
This sketch needs some checks added. Please see the linked documentation (look for "pipe").
If this isn't suitable for some reason -- perhaps you need the shell to run that program -- then you can find the program's pid, and the Proc::ProcessTable module is good for this. A basic demo:
use Proc::ProcessTable;

my $prog_name = ...
my $pid;

my $pt = Proc::ProcessTable->new();
foreach my $proc (@{$pt->table}) {
    if ($proc->cmndline =~ /\Q$prog_name/) {  # is this enough to identify it?
        $pid = $proc->pid;
        last;
    }
}
my $num_signaled = kill 15, $pid;
Please be careful with identifying the program by its name -- on a modern system there may be all kinds of processes running that contain the name of the program you want to terminate. For more detail and discussion please see this post and this post, for starters.
Finally, you can use a module to run your external programs; then you'll be able to manage and control them far more nicely. Here I'd recommend IPC::Run.
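A hedged sketch of the IPC::Run route, reusing the hypothetical progname from above: start() runs the command without a shell and hands back a harness that tracks the real child processes, so no PID guessing is needed:
use IPC::Run qw(start);

my @cmd = ('progname', '-arg1');
my ($in, $out, $err) = ('', '', '');
my $h = start \@cmd, \$in, \$out, \$err;   # no shell involved
# ... read $out as it fills via $h->pump, then stop the process tree:
$h->kill_kill;    # sends TERM, then KILL if the process won't die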

Perl Behavioral Differences Closing Child Process Spawned with open() vs. IPC::Open3

I'm trying to figure this out but haven't been able to wrap my head around it. I need to open a piped subprocess and read from its output. Originally I was using the standard open() call like this:
#!/usr/bin/perl
use warnings;
use strict;
use Scalar::Util qw(openhandle);
use IPC::Open3;
my $fname = "/var/log/file.log.1.gz";
my $pid = open(my $fh, "-|:encoding(UTF-8)", "gunzip -c \"$fname\" | tac");
# Read one line from the file
while (my $row = <$fh>) {
    print "Row: $row\n";
    last;  # Bail out early
}

# Check if the PID is valid and kill it if so
if (kill(0, $pid) == 1) {
    kill(15, $pid);
    waitpid($pid, 0);
    $pid = 0;
}

# Close the filehandle if it is still open
if (openhandle($fh)) {
    close $fh;
}
The above works, except that I get errors from tac in the logs saying:
tac: write error
From what I can tell from the testing and research I've done, this is happening because killing the PID returned from open() kills only the first child process (not the second), so when I then close the filehandle, tac is still writing to it, hence the "write error" due to the broken pipe. The strange thing is that at times, when the close() call returns false and I check ($? >> 8), it returns 141, indicating it received a SIGPIPE (backing up my theory above). However, other times it returns 0, which is strange.
Furthermore, if I run the same command but without a double-pipe (only a single one), like this (everything else the same as above):
my $pid = open(my $fh, "-|:encoding(UTF-8)", "gunzip -c \"$fname\"");
...I'll get an error in the logs like this:
gzip: stdout: Broken pipe
...but in this case, gunzip/gzip was the only process (which I killed via the returned PID), so I'm not sure why it would still be writing to the pipe when I close the filehandle (since it was supposed to be killed already, AND waited for with waitpid()).
I'm trying to repro this in the Perl debugger, but it's difficult because I can't get the stderr of the child process with plain open() (the way I'm seeing the external process's stderr in prod is in the apache2 logs - this is a CGI script).
I understand from reading the docs that I can't get the PIDs of all child processes in a multi-piped open with open(), so I decided to resort to a different method so that I could close all processes cleanly. I tried open3(), and interestingly, without making any other changes (literally running basically the same exact scenario as above, but with open3() instead of open()):
my $pid = open3(my $in, my $fh, undef, "gunzip -c \"$fname\"");
...and then killing it just like I did above, I don't get any errors. This holds true for both the single piped process as shown above, as well as the double-piped process that involves piping to "tac".
Therefore, I'm wondering what I am missing here. I know there are differences in the way open() and open3() work, but are there differences in the way child processes are spawned by them? In both cases I can see that the initial child (the PID returned) is itself a child of the Perl process. But it's almost as if the process spawned by open() is not getting properly killed and/or cleaned up (via waitpid()), while the same process spawned by open3() is, and that's the part I can't figure out.
And, more to the bigger picture and the issue at hand: what is the suggested best way to cleanly close a multi-piped process in this sort of scenario? Am I spending more time than is warranted on this? The script itself works as it should aside from these errors, so if it turns out that the tac and gzip errors I'm seeing are inconsequential, should I just live with them and move on?
Any help is much appreciated!
If you just want to read the last line of a gzipped file, it's easy to do it in pure perl without calling an external program:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;
use IO::Uncompress::Gunzip qw/$GunzipError/;
my $fname = 'foo.txt.gz';
my $z = IO::Uncompress::Gunzip->new($fname)
    or die "Couldn't open file: $GunzipError\n";

my $row;
while (<$z>) {
    $row = $_;
}
say "Row: $row";
This happens because either your perl script or its parent is ignoring the SIGPIPE signal, and ignored signal dispositions are inherited by children.
Here is a simpler testcase for your condition:
$ perl -e '$SIG{PIPE}="IGNORE"; open my $fh, "-|", "seq 100000 | tac; true"; print scalar <$fh>'
100000
tac: write error
$ (trap "" PIPE; perl -e 'open my $fh, "-|", "seq 100000 | tac"; print scalar <$fh>')
100000
tac: write error
$ (trap "" PIPE; perl -e 'my $pid = open my $fh, "-|", "seq 100000 | tac"; print scalar <$fh>; kill 15, $pid; waitpid $pid, 0')
100000
$ tac: write error
The latter version does the same kill as the version from the OP, which will kill neither the right nor the left side of the pipeline, but the shell running and waiting for both (some shells will exec through the left side of a pipeline; with such shells, a ; exit $? could be appended to the command in order to reproduce the example).
A case where SIGPIPE is ignored upon entering a perl script is when it is run via fastcgi -- which sets the SIGPIPE disposition to ignore and expects the script to handle it. In that case, simply setting a SIGPIPE handler instead of IGNORE (even an empty handler) would work, since then the signal disposition will be reset to default upon executing external commands:
$SIG{PIPE} = sub { };
open my $fh, '-|', 'trap - PIPE; ... | tac';
When run as a standalone script it could be some setup bug (I've seen it happen in questions related to containerization on Linux), or someone trying to exploit buggy programs that run with elevated privileges and don't bother to handle write(2) errors (EPIPE in this case).
my $pid = open3(my $in, my $fh, undef, "gunzip -c \"$fname\"");
...and then killing it just like I did above, I don't get any errors.
Where should you get the errors from, if you're redirecting its stderr to the same $fh you only read the first line from?
The thing is absolutely no different with open3:
$ (trap "" PIPE; perl -MIPC::Open3 -e 'my $pid = open3 my $in, my $out, my $err, "seq 100000 | tac 2>/dev/tty"; print scalar <$out>')
100000
$ tac: write error

Can I spawn a child process using expect from a perl script invoked by nginx/ FCGI

In a Perl script invoked by nginx/FCGI, if I do ssh user@ip <command>, it works, i.e. it connects to the remote machine.
However, if I do expect->spawn(ssh user@ip), FCGI returns an error on doing an open (that code is pretty generic and I don't expect bugs in it). If I run the script using expect standalone, it works.
That leads me to believe that FCGI perhaps does not allow a process to be spawned by expect, though I am not sure. Googling has not helped so far. Any inputs appreciated.
The reason it fails is that Expect.pm tries to dup STDIN and FCGI.pm doesn't support the operation. The next release of FCGI.pm will have a clearer error message than: Not a GLOB reference at /path/to/perl/lib/....
The solution is to avoid using the globals: STDIN, STDOUT, STDERR and %ENV when doing inter-process communication.
use FCGI;
use IO::Handle;
use Expect;

my ($stdin, $stdout, $stderr, $env) =
    (IO::Handle->new, IO::Handle->new, IO::Handle->new, {});

my $r = FCGI::Request($stdin, $stdout, $stderr, $env)
    or die "Couldn't create FCGI request: '$!'";

while ($r->Accept >= 0) {
    my $exp = Expect->spawn('ssh', 'user@ip')
        or die "Cannot spawn command: $!\n";
    print $stdout "OK\n";
}

How can I run a system command in Perl asynchronously?

I currently have a Perl script that runs an external command on the system, gathers the output, and performs some action based on what was returned. Right now, here is how I run this (where $cmd is a string with the command set up):
@output = `$cmd`;
I'd like to change this so that if the command hangs and does not return a value after so much time, I kill the command. How would I go about running this asynchronously?
There's a LOT of ways to do this:
You can do this with a fork (perldoc -f fork)
or using threads (perldoc threads). Both of these make passing the returned information back to the main program difficult.
On systems that support it, you can set an alarm (perldoc -f alarm) and then clean up in the signal handler.
You can use an event loop like POE or Coro.
Instead of the backticks, you can use open(), or open2/open3 (cf. IPC::Open2, IPC::Open3), to start a program while getting its STDOUT/STDERR via a file handle, and run non-blocking read operations on it. (perldoc -f select, and probably google "perl nonblocking read")
As a more powerful variant of the openX()'s, check out IPC::Run/IPC::Cmd; a sketch follows this list.
Probably tons I can't think of in the middle of the night.
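As a hedged sketch of the IPC::Run option from the list above, using its run() and timeout() helpers (the command here is made up):
use IPC::Run qw(run timeout);

my @cmd = ('some_command', '--flag');    # hypothetical command
my ($in, $out, $err) = ('', '', '');
# timeout(10) makes run() die if the command takes longer than 10 seconds
eval {
    run \@cmd, \$in, \$out, \$err, timeout(10);
};
warn "command timed out or failed: $@" if $@;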
If you really just need to put a timeout on a given system call that is a much simpler problem than asynchronous programming.
All you need is alarm() inside of an eval() block.
Here is a sample code block that puts these into a subroutine you could drop into your code. The example calls sleep, so it isn't exciting in terms of output, but it does show the timeout functionality you were interested in.
Output of running it is:
/bin/sleep 2 failure: timeout at
./time-out line 15.
$ cat time-out
#!/usr/bin/perl
use warnings;
use strict;

my $timeout = 1;
my @cmd = qw(/bin/sleep 2);

my $response = timeout_command($timeout, @cmd);
print "$response\n" if (defined $response);

sub timeout_command {
    my $timeout = (shift);
    my @command = @_;
    undef $@;
    my $return = eval {
        local($SIG{ALRM}) = sub { die "timeout"; };
        alarm($timeout);
        my $response;
        open(CMD, '-|', @command) || die "couldn't run @command: $!\n";
        while (<CMD>) {
            $response .= $_;
        }
        close(CMD) || die "Couldn't close execution of @command: $!\n";
        $response;
    };
    alarm(0);
    if ($@) {
        warn "@command failure: $@\n";
    }
    return $return;
}
If your external program doesn't take any input, look for the following words in the perlipc manpage:
Here's a safe backtick or pipe open for read:
Use the example code and guard it with an alarm (which is also explained in perlipc).
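For illustration, a hedged sketch combining that perlipc-style safe pipe open (fork plus exec, no shell) with the alarm guard; the command is made up:
my $pid = open(my $kid, '-|') // die "can't fork: $!";
if ($pid == 0) {                        # child: exec the program directly
    exec('some_command', '--flag') or die "can't exec: $!";
}
my @output = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm(10);
    my @lines = <$kid>;                 # read everything the child prints
    alarm(0);
    @lines;
};
kill 'TERM', $pid if $@;                # stop the child if we timed out
close $kid;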
I wrote the code below to run rsync on 20 directories simultaneously (in parallel, instead of sequentially, which would have required me to wait hours for it to complete):
use threads;

for my $user ( keys %users ) {
    my $host = $users{$user};
    async {
        system <<~ "SHELL";
            ssh $host \\
                rsync_user $user
            SHELL
    };
}
$ pgrep -lf rsync | wc -l
20
Not sure if it's the best or even a good solution, but I was glad that it worked for my use case.
With this you get mixed output on screen (which I ignored anyway), but it does its job successfully.
The threads pragma exports the (very useful) async function by default.
rsync_user is my Perl script that wraps the rsync command, with options and the source and target directories set.
Ran on FreeBSD 13.1 with Perl 5.32.1.

How can I display the execution status of the command "ssh-copy-id" from Perl?

I have a Perl script which does this: It generates a ssh authentication key on my system and then copies this key to a remote Linux system for passwordless ssh connects. The code is as below:
# Generate an rsa key and store it in the given file
system("ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa 1>/dev/null");
# Copy the generated key to a remote system whose username
# is stored in variable $uname and IP address is stored in variable $ip
system("ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\#$ip 2>&1 1>/dev/null");
The problem I have is this: the ssh-copy-id command takes quite some time to copy the key to the remote system, so when this Perl script is run it will appear as if the script has hung.
Therefore, I want to display a "progress message": while the copy is in progress, display "SSH authentication key copy in progress"; if the copy failed, display "Failed to copy"; and if it succeeded, display "Copy succeeded".
How do I go about doing this?
One more (based on Chas's answer):
I tried the code as Chas suggested:
die "could not fork: $!" unless defined(my $pid = fork);
#child sleeps then exits with a random exit code
unless ($pid) {
print "Connecting to server ";
exec "ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\#$ip 2>&1 1>/dev/null";
exit int rand 255;
}
#parent waits for child to finish
$| = 1;
print "waiting: ";
my #throbber = qw/ . o O o . /;
until ($pid = waitpid(-1, WNOHANG)) {
#get the next frame
my $frame = shift #throbber;
#display it<br />
print $frame;
#put it at the end of the list of frames
push #throbber, $frame;
#wait a quarter second<br />
select undef, undef, undef, .25;<br />
#backspace over the frame<br />
print "\b";<br />
}
The problem is this:
Now ssh-copy-id asks for a password while connecting to the remote server, so the "throbber" output (i.e. the circle of varying size that gets displayed) comes up around the password input, which looks weird. This is how it looks:
CURRENT OUTPUT
Connecting to remote server o0O0o #This is the throbber. The output doesn't exactly look like this but I can't print dynamically changing output, can I
Password:o0O0oXXXXXo0O0o #You get it right, the throbber unnecessarily comes at the Password prompt too
THE OUTPUT THAT I WANT:
Connecting to remote server o0O0o #The throbber should be displayed HERE ONLY, NOWHERE ELSE
Password:XXXXX
Any ideas, anyone?
It is fairly simple to fork off another process to do work for you while the main process lets the user know things haven't stopped happening:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

die "could not fork: $!" unless defined(my $pid = fork);

# child sleeps then exits with a random exit code
unless ($pid) {
    # your code replaces this code
    # in this case, it should probably just be
    # exec "ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\@$ip 2>&1 1>/dev/null";
    # as that will replace the child process with ssh-copy-id
    sleep 5;
    exit int rand 255;
}

# parent waits for child to finish
$| = 1;
print "waiting: ";
my @throbber = qw/ . o O o . /;
until ($pid = waitpid(-1, WNOHANG)) {
    # get the next frame
    my $frame = shift @throbber;
    # display it
    print $frame;
    # put it at the end of the list of frames
    push @throbber, $frame;
    # wait a quarter second
    select undef, undef, undef, .25;
    # backspace over the frame
    print "\b";
}

# exit code is in bits 8 - 15 of $?, so shift them down to 0 - 7
my $exit_code = $? >> 8;
print "got exit code of $exit_code\n";
system() returns the exit status of the program as returned by the wait call; a successful call returns 0, which is false.
Thus you can see code like this:
system( 'ls' ) and die "Unable to call ls: $?";
which is very unintuitive ;-)
I normally do the following:
my $status = system( 'ls' );
die "Unable to call ls: $?" if $status;
However if you look at the perldoc you see a nicer alternative:
system( 'ls' ) == 0
or die "Unable to call ls: $?"
I would go with this method. But it would be remiss of me not to mention the method suggested in the Perl Best Practices book:
use Perl6::Builtins qw( system );
system 'ls' or die "Unable to run ls: $?";
However, note that the PBP recommendation list points you away from this, towards using autodie:
use autodie qw( system );
eval { system 'ls' };
die "Unable to run ls: $#" if $#;
So this is probably the canonical way going forward.
I don't think it's possible in an easy way (without resorting to forking, timers and whatnot) to add a progress bar to a single external command run via system() that doesn't generate output.
On the other hand, I think the reason your ssh-copy-id takes a long time to complete is an improper DNS setup (check the sshd logs on the server side for a clue). The ssh server probably tries to resolve the reverse mapping for the client IP and times out. Fixing that will probably speed things up quite a lot.
As for your messages: can't you just print before running the system() command, and use the return value from system to print the completion message (as already suggested in some of the other answers)?
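A minimal sketch of that last suggestion, reusing the messages and the command from the question:
$| = 1;    # flush so the progress message shows up right away
print "SSH authentication key copy in progress\n";
my $status = system("ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\@$ip 2>&1 1>/dev/null");
print $status == 0 ? "Copy succeeded\n" : "Failed to copy\n";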