Unable to close print filehandle in Perl

I have the following perl snippet:
open FH, " | lpr ";
print FH "Hello";
print FH "This is a print test";
close FH or die "can't close: $! $?";
I am getting the following output:
can't close: 256 at <filename> line 4
Any help would be appreciated..
Thanks in advance :)

From perldoc -f close:

If the file handle came from a piped open, "close" will additionally return false if one of the other system calls involved fails, or if the program exits with non-zero status.
The missing error message is probably because your lpr process didn't finish cleanly, or something else went wrong there. Did the print actually work?

If the filehandle came from a piped open, close returns false if one of the other syscalls involved fails or if its program exits with non-zero status. If the only problem was that the program exited non-zero, $! will be set to 0. Closing a pipe also waits for the process executing on the pipe to exit (in case you wish to look at the output of the pipe afterwards) and implicitly puts the exit status value of that command into $? and ${^CHILD_ERROR_NATIVE}.
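Putting the quote together with the output above: an empty $! and a $? of 256 mean that no syscall failed, but lpr itself exited with status 1 (256 >> 8). A minimal sketch of decoding that at close time (the lpr command is the one from the question; the unpacking of $? follows perldoc -f system):

use strict;
use warnings;

open my $fh, '|-', 'lpr' or die "can't open pipe to lpr: $!";
print $fh "Hello\n"      or die "can't write to lpr: $!";
unless (close $fh) {
    die "can't close: $!" if $!;    # an actual syscall failure
    my $sig  = $? & 127;            # low bits: fatal signal, if any
    my $exit = $? >> 8;             # high byte: the child's exit status
    die "lpr killed by signal $sig" if $sig;
    die "lpr exited with status $exit";    # here: 256 >> 8 == 1
}

The next step would be to find out why lpr exits with 1 in the first place (for example, no default printer being configured).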

Related

Perl Behavioral Differences Closing Child Process Spawned with open() vs. IPC::Open3

I'm trying to figure this out but haven't been able to wrap my head around it. I need to open a piped subprocess and read from its output. Originally I was using the standard open() call like this:
#!/usr/bin/perl
use warnings;
use strict;
use Scalar::Util qw(openhandle);
use IPC::Open3;
my $fname = "/var/log/file.log.1.gz";
my $pid = open(my $fh, "-|:encoding(UTF-8)", "gunzip -c \"$fname\" | tac");
# Read one line from the file
while (my $row = <$fh>) {
print "Row: $row\n";
last; # Bail out early
}
# Check if the PID is valid and kill it if so
if (kill(0, $pid) == 1) {
kill(15, $pid);
waitpid($pid, 0);
$pid = 0;
}
# Close the filehandle if it is still open
if (openhandle($fh)) {
close $fh;
}
The above works, except that I get errors from tac in the logs saying:
tac: write error
From the testing and research I've done, this seems to be happening because killing the PID returned from open() kills only the first child process (not the second), so when I then close the filehandle, tac is still writing to it, hence the "write error" from the broken pipe. The strange thing is that at times, when the close() call returns false and I check ($? >> 8), it returns 141, indicating a SIGPIPE (backing up my theory above). Other times, however, it returns 0, which is strange.
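For reference, the check I'm doing when close() returns false looks roughly like this (a sketch):

# after the kill/waitpid sequence above
unless (close $fh) {
    my $status = $? >> 8;    # 141 == 128 + 13, i.e. the shell reporting a SIGPIPE death
    warn "pipeline exited with status $status";
}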
Furthermore, if I run the same command but without a double-pipe (only a single one), like this (everything else the same as above):
my $pid = open(my $fh, "-|:encoding(UTF-8)", "gunzip -c \"$fname\"");
...I'll get an error in the logs like this:
gzip: stdout: Broken pipe
...but in this case, gunzip/gzip was the only process (which I killed via the returned PID), so I'm not sure why it would still be writing to the pipe when I close the filehandle (since it was supposed to be killed already, AND waited for with waitpid()).
I'm trying to repro this in the Perl debugger, but it's difficult because I can't get the stderr of the child process with plain open() (the way I'm seeing the external process's stderr in prod is in the apache2 logs - this is a CGI script).
I understand from reading the docs that I can't get the PIDs of all child processes in a multi-piped open with open(), so I decided to resort to a different method so that I could close all processes cleanly. I tried open3(), and interestingly, without making any other changes (running essentially the same scenario as above, but with open3() instead of open()):
my $pid = open3(my $in, my $fh, undef, "gunzip -c \"$fname\"");
...and then killing it just like I did above, I don't get any errors. This holds true both for the single-piped process shown above and for the double-piped process that pipes to "tac".
Therefore, I'm wondering what I am missing here? I know there are differences in the way open() and open3() work, but are there differences in the way that child processes are spawned from them? In both cases I can see that the initial child (the PID returned) is itself a child of the Perl process. But it's almost as if the process spawned by open() is not getting properly killed and/or cleaned up (via waitpid()) while the same process spawned by open3() is, and that's the part I can't figure out.
And, more to the bigger picture and the issue at hand: what is the best way to cleanly close a multi-piped process in this sort of scenario? Am I spending more time on this than is warranted? The script itself works as it should aside from these errors, so if it turns out that the tac and gzip errors I'm seeing are inconsequential, should I just live with them and move on?
Any help is much appreciated!
If you just want to read the last line of a gzipped file, it's easy to do it in pure perl without calling an external program:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;
use IO::Uncompress::Gunzip qw/$GunzipError/;
my $fname = 'foo.txt.gz';
my $z = IO::Uncompress::Gunzip->new($fname) or die "Couldn't open file: $GunzipError\n";
my $row;
while (<$z>) {
$row = $_;
}
say "Row: $row";
This happens because either your perl script or its parent is ignoring the SIGPIPE signal, and the ignore signal dispositions are inherited by the children.
Here is a simpler testcase for your condition:
$ perl -e '$SIG{PIPE}="IGNORE"; open my $fh, "-|", "seq 100000 | tac; true"; print scalar <$fh>'
100000
tac: write error
$ (trap "" PIPE; perl -e 'open my $fh, "-|", "seq 100000 | tac"; print scalar <$fh>')
100000
tac: write error
$ (trap "" PIPE; perl -e 'my $pid = open my $fh, "-|", "seq 100000 | tac"; print scalar <$fh>; kill 15, $pid; waitpid $pid, 0')
100000
$ tac: write error
The latter version does the same kill as the version from the OP: it kills neither the right nor the left side of the pipeline, but rather the shell that is running and waiting for both. (Some shells will exec through the left side of a pipeline; with such shells, a "; exit $?" could be appended to the command in order to reproduce the example.)
A case where SIGPIPE is ignored upon entering a Perl script is when it is run via FastCGI, which sets the SIGPIPE disposition to ignore and expects the script to handle it. In that case, simply setting a SIGPIPE handler instead of IGNORE (even an empty handler) would work, since the signal disposition will then be reset to the default when external commands are executed:
$SIG{PIPE} = sub { };
open my $fh, '-|', 'trap - PIPE; ... | tac';
When run as a standalone script, it could be some setup bug (I've seen it happen in questions related to containerization on Linux), or someone trying to exploit buggy programs that run with elevated privileges and don't bother to handle write(2) errors (EPIPE in this case).
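If all you need in such an environment is the default behavior back, resetting the disposition at the top of the script should be enough (a minimal sketch):

$SIG{PIPE} = 'DEFAULT';    # undo an inherited "ignore" disposition
open my $fh, '-|', 'seq 100000 | tac' or die "open: $!";
print scalar <$fh>;        # tac now dies quietly of SIGPIPE instead of logging a write error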
my $pid = open3(my $in, my $fh, undef, "gunzip -c \"$fname\"");
...and then killing it just like I did above, I don't get any errors.
Where should you get the errors from, if you're redirecting its stderr to the same $fh you only read the first line from?
The thing is absolutely no different with open3:
$ (trap "" PIPE; perl -MIPC::Open3 -e 'my $pid = open3 my $in, my $out, my $err, "seq 100000 | tac 2>/dev/tty"; print scalar <$out>')
100000
$ tac: write error
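For what it's worth, if the goal is to actually capture the child's stderr separately rather than merge it into the output handle, open3 needs a pre-created glob for the error argument (an autovivified lexical or undef won't do); a sketch along the lines of IPC::Open3's docs:

use IPC::Open3;
use Symbol 'gensym';

my $fname  = '/var/log/file.log.1.gz';    # path from the question
my $err    = gensym;                      # a real glob, so stderr stays separate
my $pid    = open3(my $in, my $out, $err, qq{gunzip -c "$fname" | tac});
my $first  = <$out>;                      # read what you need from stdout
my @errors = <$err>;                      # stderr lines, kept apart from $out
waitpid $pid, 0;

With more than a trivial amount of output on both handles this can deadlock on full pipe buffers, so IPC::Run or select-based reading is safer in the general case.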

perl parent process hangs waiting for child process to read stdin

I have a perl script which emulates a tee command so I can get output written to the terminal and a log file. It works something like this (error checking &c omitted).
$pid = open(STDOUT, '|-');
# Above is perl magic that forks a process and sets up a pipe with the
# child's STDIN being one end of the pipe and the parent's STDOUT (in
# this case) being the other.
if ($pid == 0)
{
# Child.
# Open log file
while (<STDIN>)
{
# print to STDOUT and log file
}
#close log files
exit;
}
# parent
open STDERR, '>&STDOUT';
# do lots of system("...") calls
close STDERR;
close STDOUT;
exit;
This sometimes hangs, and invariably if you look at the processes and the stacks of said processes, the parent is hanging in one of the closes, waiting for the child to exit, whereas the child is hanging reading something from a file (which has to be STDIN, because there's no other file).
I'm rather at a loss as to how to deal with this. The problem seems to happen if you are running the program from a shell that isn't attached to a console - running the script in a normal shell works fine - and the only piece of code that has changed recently in that script is the addition of an open/close of a file just to touch it (and it's before the script gets to this 'tee' code).
Has anybody had problems like this before and/or have a suggestion as to what I can do to fix this? Thanks.
Well, after some experimentation, it seems that opening STDOUT directly is at least part of the reason. My code now reads like this:
$pid = open($handle, '|-');
if ($pid == 0)
{
# Child.
# Open log file
while (<STDIN>)
{
# print to STDOUT and log file
}
#close log files
exit;
}
# parent
open my $oldout, '>&STDOUT';
open my $olderr, '>&STDERR';
open STDOUT, '>&', $handle;
open STDERR, '>&', $handle;
# do lots of system("...") calls
open STDOUT, '>&', $oldout;
open STDERR, '>&', $olderr;
close $handle or die "Log child exited unexpectedly: $!\n";
exit;
which, if nothing else, looks cleaner (but is still messier than I'd like, as I don't know what to do if any of those dups fails). But I'm still unclear as to why opening and closing a handle much earlier in the code made such a difference to this bit.
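As for not knowing what to do when one of the dups fails: the usual answer is simply to die, since there is no sensible way to continue without the handles. A sketch of the same sequence with error checks (same $handle as above):

open my $oldout, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
open my $olderr, '>&', \*STDERR or die "Can't dup STDERR: $!";
open STDOUT, '>&', $handle      or die "Can't redirect STDOUT: $!";
open STDERR, '>&', $handle      or die "Can't redirect STDERR: $!";
# ... system("...") calls ...
open STDOUT, '>&', $oldout      or warn "Can't restore STDOUT: $!";
open STDERR, '>&', $olderr      or warn "Can't restore STDERR: $!";
close $handle or die "Log child exited unexpectedly: $!\n";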

Unable to capture stderr into a variable with Net::OpenSSH

I want to capture, into a variable, the standard error that is displayed on the host machine after $ssh->capture.
For example, when I try:
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host);
my $out=$ssh->capture("cd /home/geek");
$ssh->error and
die "remote cd command failed: " . $ssh->error;
The output is:
child exited with code 1 at ./change_dir.pl line 32
I am not able to see what the error is; I get "no such file or directory" on the terminal, and I want to capture that same "no such file or directory" in $out.
Example 2:
my ($stdout, $stderr) = $ssh->capture("cd /home/geek");
if ($stderr) {
    print "Error = $stderr";
} else {
    print "$stdout";
}
i see "Error=" printed but does not seee that $stderr on the screen.
i see $stdout is printed on success but print $stderr does not get printed only"Error= " gets printed.
When an error occurs, it is most likely not going to be on STDOUT, and if it is on STDERR you are not catching that. You need to get to the application's exit code, in the following way. (Given the update to the question, which I only saw now: see the end for how to get STDERR.)
After the capture method you want to examine $? for errors (see Net::OpenSSH). Unpack that to get to the exit code returned by whatever was actually run by $ssh, and then look in that application's docs to see what that code means.
my $exit_code = $?;
if ($exit_code) {
    my $app_exit = $exit_code >> 8;
    warn "Error, bit-shift \$? --> $app_exit";
}
The code to investigate is $app_exit.
An example. I use zip in a project and occasionally catch the error of 3072 (that is the $?). When that's unpacked as above I get 12, which is zip's actual exit code. I look up its docs and it nicely lists its exit codes, and 12 means "Nothing to update". That's the design decision for zip: to exit with 12 if it had no files to update in the archive. That exit status then gets packed into the upper byte of a two-byte number, which is what comes back in $?.
Failure modes in general, from system in Perl docs
if ($? == -1) { warn "Failed to execute -- " }
elsif ($? & 127) {
    my $msg = sprintf("\tChild died with signal %d, %s coredump -- ",
        ($? & 127), ($? & 128) ? 'with' : 'without');
    warn $msg;
} else {
    my $msg = sprintf("\tChild exited with value %d -- ", $? >> 8);
    warn $msg;
}
The actual exit code $? >> 8 is supplied by whatever ran and so its interpretation is up to that application. You need to look through its docs and hopefully its exit codes are documented.
Note that $ssh->error seems designed for this task. From the module's docs
my $output = $ssh->capture({ timeout => 10 }, "echo hello; sleep 20; echo bye");
$ssh->error and warn "operation didn't complete successfully: ". $ssh->error;
The printed error needs further investigation. Docs don't say what it is, but I'd expect the unpacked code discussed above (the question update indicates this). Here $ssh only runs a command and it doesn't know what went wrong. It merely gets back the command's exit code, to be looked at.
Or, you can modify the command to get the STDERR on the STDOUT, see below.
The capture method is an equivalent of Perl's backticks (qx). There is a lot on SO on how to get STDERR from backticks, and Perl's very own FAQ has that nicely written up in perlfaq8. A complication here is that this isn't qx but a module's method and, more importantly, it runs on another machine. However, the "output redirection" method should still work without modifications. The command (run by $ssh) can be written so that its STDERR is redirected to its STDOUT.
my $cmd_all_output = 'your_whole_command 2>&1';
my $stdout = $ssh->capture($cmd_all_output);
Now you will get the error that you see at the terminal ("no such file or directory") printed on STDOUT, and so it will wind up in your $stdout. Note that one must use sh shell syntax, as above. There is a bit more to it, so please look it up (but this should work as it stands). Most of the time it is the same message as in the exit code description.
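Alternatively, Net::OpenSSH provides a capture2 method that returns stdout and stderr separately, which is what your second example was reaching for (a sketch; check the docs of your installed version):

my ($stdout, $stderr) = $ssh->capture2("cd /home/geek");
$ssh->error and die "remote command failed: " . $ssh->error;
print length($stderr) ? "Error = $stderr" : $stdout;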
The check that you have in your code is good, the first line of defense: One should always check $? when running external commands, and for this the command to run need not be touched.

Why system() returns 0 even though the program it executes dies

I'm trying to test a piece of code ($code) that should make sure that only one instance of the program is running at a time:
#!/usr/bin/perl
# test_lock
use strict;
use warnings;
( my $code = <<'CODE') =~ s/^\s+//gm;
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
# Make sure only one instance of the program is running at a time.
open our $Lock, '<', $0 or die "Can't lock myself $0: $!";
flock $Lock, LOCK_EX | LOCK_NB
or die "Another instance of $0 is already running. Exiting ...\n";
sleep(2);
CODE
my $progfile = '/tmp/x';
open my $fh, '>', $progfile or die $!;
print $fh $code;
close $fh;
$|++;
my $ex1 = system("perl $progfile &");
print "First system(): $ex1\n";
my $ex2 = system("perl $progfile");
print "Second system(): $ex2\n";
I expected that the second call to system() would return a non-zero value ($ex2) as it can't get the lock and dies. However I get:
$ perl test_lock
First system(): 0
Another instance of /tmp/x is already running. Exiting ...
Second system(): 0
What is wrong with my assumption? (Is there a better way to test the $code?)
I think it's likely because you have a race condition. How do you know that error is actually coming from your second process?
Because if you for example, run:
perl /tmp/x & perl /tmp/x ; echo $?
You may get a zero return, because the 'winner' of the race may well be the latter process (whose return code you're catching). (Try it a few times, and you'll see different results.)
You also have a slight difference in what the shell is doing between the two commands - from the docs:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
So you should actually see an invocation of sh before perl in your first command, which means it is more likely to take longer to get to the lock point.
That means your command is more like:
sh -c "perl /tmp/x"& perl /tmp/x; echo $?
Run that a few times and see how many times you get non-zero error codes. It won't be often, because the 'delay' of the shell startup is usually enough to ensure that the second instance wins the race.
If you're on Linux, try strace -fTt yourscript, which will trace the execution flow. Or you can make judicious use of $$ to report the process PID while running.
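If you want the test itself to be deterministic rather than racy, the simplest fix is to let the first instance take the lock before contending for it (a sketch, with the crude assumption that one second is enough):

my $ex1 = system("perl $progfile &");
sleep 1;    # let the first instance grab the flock before we contend for it
my $ex2 = system("perl $progfile");
print "Second system(): $ex2\n";    # now reliably non-zero (the die in the child)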
In both cases, you are obtaining the exit code of the shell you launch. Roughly speaking, the shell returns the exit code of the last program it ran.
Since the shell created by system("perl $progfile &") doesn't wait for the child to end, it will virtually always return 0 since launching perl in the background is unlikely to result in an error.
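You can see this in isolation (a sketch; false(1) simply exits with status 1):

my $bg = system("false &");    # the shell backgrounds false and exits 0 immediately
my $fg = system("false");      # the shell waits for false: $? == 1 << 8
print "bg=$bg fg=$fg\n";       # typically prints "bg=0 fg=256"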
So if the second instance of perl managed to obtain the lock first, you'll get the outcome you got. This race condition can be seen more clearly if you identify the source of the exception.
#!/usr/bin/perl
# test_lock
use strict;
use warnings;
( my $code = <<'CODE') =~ s/^\s+//gm;
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
# Make sure only one instance of the program is running at a time.
open our $Lock, '<', $0 or die "Can't lock myself $0: $!";
flock $Lock, LOCK_EX | LOCK_NB
or die "$ARGV[0]: Another instance of $0 is already running. Exiting ...\n";
sleep(2);
CODE
my $progfile = 'b.pl';
open my $fh, '>', $progfile or die $!;
print $fh $code;
close $fh;
$|++;
my $ex1 = system("perl $progfile 1 &");
print "First system(): $ex1\n";
my $ex2 = system("perl $progfile 2");
print "Second system(): $ex2\n";
Output:
$ perl a.pl
First system(): 0
1: Another instance of b.pl is already running. Exiting ...
Second system(): 0
$ perl a.pl
First system(): 0
2: Another instance of b.pl is already running. Exiting ...
Second system(): 2816

How can I redirect the output of Perl's system() to a filehandle?

With the open command in Perl, you can use a filehandle, but I have trouble getting back the exit code that way.
With the system command in Perl, I can get back the exit code of the program I'm running, but I want to redirect just its STDOUT to some filehandle (no STDERR).
My stdout is going to be a line-by-line output of key-value pairs that I want to insert into a map in Perl. That is why I want to redirect only the stdout of my Java program. Is that possible?
Note: if I get errors, they are printed to stderr. One possibility is to check whether anything gets printed to stderr so that I can quit the Perl script.
Canonically, if you're trying to get at the text output of a forked process, my understanding is that's what the backticks are for. If you need the exit status as well, you can check it with the $? special variable afterward, e.g.:
open my $fh, '>', "output.txt" or die $!;
print {$fh} `echo "Hello!"`;
print "Return code: $?\n";
Output to STDERR from the command in backticks will not be captured, but will instead be written directly to STDERR in the Perl program it's called from.
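If you truly don't care about STDERR, you can also discard it inside the backticks with shell redirection; a sketch, with a hypothetical java invocation standing in for your program:

open my $fh, '>', 'output.txt' or die $!;
my @pairs = `java MyKeyValueProgram 2>/dev/null`;    # stdout only; stderr discarded
die "command failed with status " . ($? >> 8) if $?;
print {$fh} @pairs;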
You may want to check out IPC::System::Simple -- it gives you many options for executing external commands, capturing their output and return values, and optionally dying if a bad result is returned.
This is one of the ways to do it.
open my $fh, '>', $file or die "open: $!";
defined(my $pid = fork) or die "fork: $!";
if (!$pid) {
    # child: point STDOUT at the file, then exec the command
    open STDOUT, '>&', $fh or die "dup: $!";
    exec $command, @args;
    die "exec failed: $!";
}
waitpid $pid, 0;
print $? == 0 ? "ok\n" : "nok\n";
Use open in -| mode. When you close the filehandle, the exit status will be in $?.
open my $fh, '-|', $command or die "open: $!";   # older 2-arg form: open my $fh, "$command |";
my @command_output = <$fh>;
close $fh;
my $command_status = $?;
From perldoc -f close:

If the file handle came from a piped open, "close" will additionally return false if one of the other system calls involved fails, or if the program exits with non-zero status. (If the only problem was that the program exited non-zero, $! will be set to 0.) Closing a pipe also waits for the process executing on the pipe to complete, in case you want to look at the output of the pipe afterwards, and implicitly puts the exit status value of that command into $? and "${^CHILD_ERROR_NATIVE}".