Open3 outputting to std error only and not asynchronously - perl

I am relatively new to Perl programming and I am trying to figure out how open3 works. Here is the code:
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;

my $dir = "/home/vman/Documents/Dev/perl_scripts/Provenance/temp";
my $fileHandle;
my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid)    # Parent process
{
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command which happens in the child process
    open3(\*WRITE, \*READ, \*ERROR, "strace", "-f", "-F", "-e", "trace=open,execve", "-p", $bashPid, "-s", "2097152", "-q");
    while (<READ>)
    {
        print("Here1\n");
        print("$_");
    }
    while (<ERROR>)
    {
        print("$_");
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0)
{
    if (scalar(@ARGV) == 0)
    {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else
{
    die("Could not fork.");
}

close(READ);
close(WRITE);
close(ERROR);

waitpid($bashPid, 0);
print "End of main program\n";
I want to run strace on a bash process and capture all of its output as it is produced. I will then parse that output to see which files are being changed by which process, and save those changes in a MySQL database. For now, all I am trying to do is attach strace to an existing bash process and print strace's output in the terminal, just to make sure the output is being read asynchronously.
One of the problems is that I am getting the output through the ERROR filehandle. I am a little confused as to why this is happening. Am I using the correct order of arguments for open3, and if there is an error, why is the correct output even making it to stderr?
The second problem I have is that I am getting the output only when exec ends, which is no good since it needs to be done while exec is running. I thought open3 runs asynchronously.
As suggested, this is what I did, and it works perfectly.
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Run3;

my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid)    # Parent process
{
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command which happens in the child process
    my $command = "strace -fFe trace=open,execve -p $bashPid -s 2097152 -q";
    run3($command, \*STDIN, \*STDOUT, \*STDERR);
    if ($?)
    {
        die "something went horribly wrong";
    }
    while (<STDERR>)
    {
        print($_);
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0)    # Child process
{
    if (scalar(@ARGV) == 0)
    {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else
{
    die("Could not fork.");
}

close(STDIN);
close(STDOUT);
close(STDERR);

waitpid($bashPid, 0);
print "End of main program\n";

One of the problems is that I am getting the output through the ERROR filehandle
Correct. strace writes to STDERR.
The second problem I have is that I am getting the output only when exec ends, which is no good since it needs to be done while exec is running. I thought open3 runs asynchronously.
That's because you only start reading from the child's STDERR after the child closes its STDOUT when it ends.
In fact, you're lucky you haven't deadlocked yet. By reading one handle at a time as you are currently doing, you'll deadlock when strace has produced enough output to fill the pipe.
You need to read from both the child's STDOUT and STDERR as the output comes in. You could do this with the help of select, by polling non-blocking handles, or with threads. None of those options is as simple as ditching open3 and using a higher-level module that handles this for you. The simpler IPC::Run3 and the fully featured IPC::Run are good choices.
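For example, here is a minimal sketch using IPC::Run (the strace arguments are taken from the question; the OUT:/ERR: prefixes are purely illustrative). Code-reference targets are called each time a chunk of output arrives, so the data is processed while the child is still running:

use IPC::Run qw(run);

my @cmd = ('strace', '-f', '-F', '-e', 'trace=open,execve',
           '-p', $bashPid, '-s', '2097152', '-q');

run \@cmd,
    '<',  '/dev/null',                    # child's STDIN: nothing
    '>',  sub { print "OUT: $_[0]" },     # called per chunk of child's STDOUT
    '2>', sub { print "ERR: $_[0]" };     # called per chunk of child's STDERR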

Related

IO::Select loop is somehow keeping child from exiting?

I have a Perl IO::Select loop for running a command (with no stdin) and saving its output into a variety of logs.
It's worked great for a number of years, but suddenly it's misbehaving with one particular command. The command runs to completion and exits properly if it's run from the command line, but when I run it from this Perl code, then after it's done (as seen from the logs) the child process won't exit, and it just sits in a hung state. I don't have strace rights on the systems this is happening on, so I can't see exactly what is stuck.
The code is essentially:
# Start process
open(local *nostdin, '<', '/dev/null') or die $!;
my $pid = open3('<&nostdin',
                my $outH = gensym(),
                my $errH = gensym(),
                $command
);

# Setup select
my $sel = IO::Select->new();
$sel->add($outH);
$sel->add($errH);

# Select loop
while (my @ready = $sel->can_read()) {
    foreach my $handle (@ready) {
        my $bytesRead = sysread($handle, my $buf = '', 1024);
        if (!defined($bytesRead) || $bytesRead <= 0) {
            warn("Error reading command out $!\n Command: $command\n")
                unless defined $bytesRead;
            $sel->remove($handle);
            next;
        }
        if ($handle == $outH) {
            syswrite(SYSOUT, $buf) if $stdoutOpen;
            syswrite(SYSLOG, $buf) if $logOpen;
        }
        if ($handle == $errH) {
            syswrite(SYSERR, $buf) if $stderrOpen;
            syswrite(SYSLOG, $buf) if $logOpen;
        }
    }
}
Is there anything I could be doing wrong? And even if there is a mistake, how can the Perl I/O loop keep the child process from exiting? It seems that once the child exits it would close all its I/O and we'd finish this loop (as has always been the case with this code).
I've done some tests using waitpid and some output, and it looks like $outH is closing, but $errH never has any data and never closes, and then I can see from waitpid that the child has exited.
So I open two pipes, for child out and child err. The child uses the output pipe and never the error pipe. Then the child closes the output pipe and exits, but somehow this exit isn't closing the error pipe. How is this happening? Is this something a child could even do on purpose?
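One way this can happen (a hypothetical illustration, not necessarily what your command does): if the child forks a background process before exiting, the grandchild inherits the write end of the stderr pipe, so the parent never sees EOF on $errH even though the child itself is gone. A minimal sketch:

use IPC::Open3;
use Symbol 'gensym';

# The child exits immediately, but first forks a grandchild that closes
# its STDOUT yet keeps the inherited STDERR pipe end open while it sleeps.
my $err = gensym();
my $pid = open3('<&STDIN', my $out = gensym(), $err,
                $^X, '-e', 'fork or do { close STDOUT; sleep 60; exit }; exit');

# Reads on $out reach EOF promptly, but reads on $err block for ~60 seconds,
# because the sleeping grandchild still holds the write end of that pipe.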

Perl how to properly handle System Commands (including Timeout with Kill & capture of RC/STDERR/STDOUT)

From a Perl script I want to execute various system commands and process the output in my script.
The script will be run automatically, so I want to make sure that no commands are hanging etc.
I'm open to any kind of feedback.
My requirements for the command execution:
Timeout -> If the command runs longer than XX seconds, it should kill its process(es)
If the command returns information sooner, it should not have to wait for the end of the timeout
I want to capture the exit status, STDERR, STDOUT in the script.
Here is an example I worked out from another Stack Overflow question: Kill a hung child process
What's not working for me at the moment:
cannot capture exit status of executed command
cannot capture STDERR of executed command
Code:
my $cmd = "sleep 15";    # other tests I use -> "echo bla" and "alkjdsf"
my $TIMEOUT = 10;

my $pid = open my $proc, '-|', "$cmd";

if (fork() == 0) {
    my $poor_mans_alarm = "sleep 1,kill 0,$pid ||exit for 1..$TIMEOUT;kill 9,$pid";
    # run poor man's alarm in a background process
    exec($^X, '-e', "$poor_mans_alarm");
}

my $process_output = "";
while (<$proc>) {
    $process_output .= $_;
}
If you either have a trick for this code or recommend a completely different solution, let me know.
Thanks and cheers
Addition:
Got a working example with IPC::Open3, but future readers should check out IPC::Run, which has timeout functionality included, as mentioned by James Green.
Working example with IPC::Open3:
my $output = '';
my $pid = open3(\*WRITE, \*READ, \*ERROR, "$command");

if (fork() == 0) {
    my $poor_mans_alarm = "sleep 1,kill 0,$pid ||exit for 1..10;kill 9,$pid";
    # run poor man's alarm in a background process
    exec($^X, '-e', "$poor_mans_alarm");
}

# get all the STDOUT and STDERR from the child
while (<READ>) {
    $output .= $_;
}
while (<ERROR>) {
    $output .= $_;
}

waitpid($pid, 0);
if ($?) {
    my $rc = $? >> 8;
    if ($rc != 1) {
        print "Some error $?\n";
    }
}
It looks like IPC::Run provides pretty much everything you're after, including timeouts and capture of both STDOUT and STDERR. Docs are at https://metacpan.org/pod/IPC::Run including some usage examples.
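For instance, a minimal sketch with IPC::Run (the sleep 15 command and 10-second limit mirror the question; the exact wording of the timeout exception is an assumption, so the match below is kept loose). timeout() throws an exception when the limit is hit, and the exit status and both output streams are still captured:

use IPC::Run qw(run timeout);

my ($out, $err);
eval {
    run ['sleep', '15'], '<', '/dev/null', '>', \$out, '2>', \$err, timeout(10);
};
if ($@) {
    die $@ unless $@ =~ /timeout/i;    # propagate unexpected errors
    print "command timed out\n";
}
elsif ($?) {
    printf "exit code %d, STDERR: %s\n", $? >> 8, $err;
}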

How can Perl's `system` proceed without waiting for completion?

In Perl, `system("command")` waits until the command completes. Is there a way to make it wait only 20 seconds? One scenario is the following:
The command is an infinite loop and won't finish, so it freezes and the program can't proceed. I want the program not to be blocked by the command.
I know Ruby has a way to do this. Does Perl have a solution?
Thanks,
=Y
Use alarm:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };    # NB: \n required
    alarm 20;
    system("<Your command>");
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";    # propagate unexpected errors
    # timed out
}
else {
    # didn't
}
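One caveat worth noting (a sketch under that assumption, not part of the answer above): when the alarm fires, only the parent's wait is interrupted; the command started by system keeps running in the background. If the command must actually be stopped, fork and exec it yourself so there is a pid to kill:

my $pid = fork();
die "Cannot fork: $!" unless defined $pid;
if ($pid == 0) {
    exec("<Your command>") or die "exec failed: $!";
}

eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm 20;
    waitpid($pid, 0);
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";
    kill 'KILL', $pid;    # timed out: stop the child for real
    waitpid($pid, 0);     # reap it so there is no zombie
}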
#!/usr/bin/perl -w
use strict;
use 5.010;

my $foo = 123;
my $pidChild = fork();    # All objects before this fork statement will be copied.

given ($pidChild)
{
    when (!defined($_)) {
        die "Cannot fork: $!";
    }
    when ($_ == 0) {
        # The child process goes here.
        $foo = 456;    # This is a duplicate.
        system 'subprocess options';    # Or: exec 'subprocess options';
    }
    default {
        # The original process goes here.
        waitpid($_, 0);    # Whether to wait or not is up to you.
        say $foo;          # Output: 123
    }
}
If inter-process communication (IPC) is needed, the built-in function pipe can be used before the invocation of fork to create two handles, one for reading and one for writing; they'll be shared by the original process and the subprocess.
There's surely more than one way to do IPC. The built-in function open, the open2 subroutine offered by the module IPC::Open2, and open3 offered by IPC::Open3 can all run a subprocess asynchronously.
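A minimal sketch of that pipe-based IPC (the message text is hypothetical; each side closes the end it doesn't use):

pipe(my $reader, my $writer) or die "pipe failed: $!";

my $pid = fork();
die "Cannot fork: $!" unless defined $pid;

if ($pid == 0) {          # child: writes into the pipe
    close $reader;
    print $writer "hello from child $$\n";
    close $writer;
    exit 0;
}

close $writer;            # parent: reads from the pipe
print while <$reader>;
close $reader;
waitpid($pid, 0);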

want to get the die error message from a command executed via open in Perl

I am trying to fork off a command like below:
my $h = IO::Handle->new;
die "IO::Handle->new failed." unless defined $h;
$self->{-handle} = $h;
$self->{-pid} = open $h, $self->{-command} . ' 2>&1 |';
$self->fileevent($h, 'readable' => [\&_read_cmd_op, $self]);

sub _read_cmd_op {
    my $self = shift;
    my $h = $self->{-handle};
    if (sysread $h, $_, 4096) {
        my $t = $self->Subwidget('text');
        $t->insert('end', $_);
        $t->yview('end');
    } else {
        $self->{-finish} = 1;
    }
}
Now the problem is that $self->{-command} invokes another Perl script, and I want to know if that script dies.
Note that $self->{-pid} still exists even if the command dies.
The above code is in a Perl/Tk app, where the output of $self->{-command} is captured in a text widget.
Somehow I don't get the die message, even in the text widget.
I see it on stdout.
Two questions:
How can I get the command's output/error into the text widget?
How can I know that the command fired via IO::Handle died?
$self->{-pid} is just the pid of the forked process, not some magic object which goes away if the command exits.
I cannot reproduce the problem of not getting the die() message. If the snippet above is called with 'perl -e "die 123"', then "123" appears in the text widget (at least on a Unix system).
For getting the exit code you can use something like the following.
} else {
    $mw->fileevent($h, 'readable', '');
    my $pid = waitpid($self->{-pid}, 0);
    warn "pid $pid finished";
    warn "retcode is " . ($? >> 8);
    $self->{-finish} = 1;
}
The fileevent call with the empty callback stops further selects on this filehandle. With the waitpid call you wait for the termination of the child process. Once this happens, the exit code is available in the $? variable, like after a normal system() call. So for a non-zero exit code you know that the command died or exited with a false value.
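For completeness, the standard decoding of $? after waitpid looks like this (this is ordinary wait-status decoding, nothing Tk-specific):

if ($? == -1) {
    warn "failed to execute: $!";
}
elsif ($? & 127) {
    warn sprintf "child died with signal %d", $? & 127;
}
else {
    warn sprintf "child exited with value %d", $? >> 8;
}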

perl: handle die before the framework

I am working with a Perl framework which installs its own $SIG{__DIE__} handler. My code is executed by the framework, so my exception-handling code cannot run: the framework is the first to detect the exception, and it then terminates the script.
frame.pm
sub execute
{
    $SIG{__DIE__} = \&_handleDie;
    eval {
        # execute myscript.pl's sub main
        $rv = &$pFunct(@args);
    };
    if ($@) { processException($@) }
    print "myscript.pl success executed";
}
myscript.pl
use frame;

frame->execute( \&main );

sub main
{
    %codes that redirect STDOUT to a file%
    # if the API cmd below raises no exception, hide its output;
    # otherwise output the API cmd's STDERR msg
    %codes that call the API of another module%
    try
    {
        die("sth wrong");
    }
    catch
    {
        %codes that restore STDOUT to the terminal%
        print "error msg, but this line will not be executed; how do I get it to execute?";
    }
}
The script first redirects STDOUT to a file to discard some useless output.
What I want to implement is: if an exception happens (the die line), the script should restore STDOUT to the terminal and then print the error there. Right now the exception is handled by frame and printed to STDOUT, not STDERR, so I need to restore STDOUT before frame prints to it.
With ruakh's solution, myscript.pl now bypasses frame's signal handler, and the exception is caught by frame's line if ($@) { processException($@) }. That is, when myscript's die() executes, control goes to frame's processException, not to myscript's catch block.
=====================
I finally found this works for me:
myscript.pl
frame->execute( \&main );

sub main
{
    open my $stdOri, ">&STDOUT";
    my $tmpFile = "/tmp/.output.txt.$$";
    open STDOUT, ">$tmpFile";

    # Overwrite the exception handling provided by frame.
    local $SIG{__DIE__} = sub { close STDOUT; open STDOUT, ">&", $stdOri; };

    # Cause an exception. It will be processed by the 'local $SIG{__DIE__}'
    # handler, which restores STDOUT; then frame's eval catches the exception
    # and prints it to the terminal.
    my $c = 5 / 0;
}
Thanks to ruakh for the inspiration.
Assuming that you don't want to modify the framework, you can locally override the signal-handler:
use frame;

frame->execute( \&main );

sub main
{
    try
    {
        local $SIG{__DIE__};    # remove signal-handler
        die("sth wrong");
    }
    catch
    {
        print STDERR "error msg";
        die $@;    # pass control back to framework's signal handler
    }
}
Disclaimer: tested with an eval-block, rather than with try/catch, since I don't have TryCatch installed. My understanding is that TryCatch depends on eval, and not on $SIG{__DIE__}, but I could be wrong about that.
The framework's $SIG{__DIE__} handler is wrong wrong wrong. It shouldn't be eating exceptions inside of an eval. It should do die @_ if $^S, as suggested by perldoc -f die.
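In other words, a well-behaved handler re-throws while an eval is active. A sketch (processException here stands in for the framework's own routine shown above):

$SIG{__DIE__} = sub {
    die @_ if $^S;           # inside eval/try-catch: don't interfere
    processException(@_);    # only handle truly uncaught exceptions
};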