Error in perlipc documentation? - perl

I'm trying to puzzle through something I see in the perlipc documentation.
If you're writing to a pipe, you should also trap SIGPIPE. Otherwise,
think of what happens when you start up a pipe to a command that
doesn't exist: the open() will in all likelihood succeed (it only
reflects the fork()'s success), but then your output will
fail--spectacularly. Perl can't know whether the command worked
because your command is actually running in a separate process whose
exec() might have failed. Therefore, while readers of bogus commands
return just a quick end of file, writers to bogus commands will trigger
a signal they'd better be prepared to handle. Consider:
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: $!";
That won't blow up until the close, and it will blow up with a
SIGPIPE. To catch it, you could use this:
$SIG{PIPE} = 'IGNORE';
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: status=$?";
If I'm reading that correctly, it says that the first version will probably not die until the final close.
However, that's not happening on my OS X box (Perl versions 5.8.9 through 5.15.9). It blows up on the open with a "can't fork: No such file or directory" regardless of whether or not I have the $SIG{PIPE} line in there.
What am I misunderstanding?

This was a change implemented back during the development of 5.6 so that system() could detect when it failed to fork/exec the child:
https://github.com/mirrors/perl/commit/d5a9bfb0fc8643b1208bad4f15e3c88ef46b4160
It is also documented in http://search.cpan.org/dist/perl/pod/perlopentut.pod#Pipe_Opens
which itself points back to perlipc, but perlipc does seem to be missing this detail.
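In other words, on any perl from 5.6 onward the first snippet never reaches the close; a small sketch of the behavior described above (same as the perlipc example, just with a lexical filehandle):
# On 5.6+ the parent learns about the child's failed exec, so the open itself
# fails with the child's errno ("No such file or directory").
$SIG{PIPE} = 'IGNORE';    # only matters for commands that start and then die

open(my $fh, "|bogus") or die "can't fork: $!";   # dies here on a modern perl
print {$fh} "bang\n"   or die "can't write: $!";
close $fh              or die "can't close: status=$?";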

Same output on my Perl, v5.14.2, Ubuntu 12.04:
$ perl
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: $!";
can't fork: No such file or directory at - line 1.
And that error is odd, since it has nothing to do with forking -- they must have added some look-ahead guessing to see whether bogus could be executed at all. (Which would be a race condition at best, but one likely to provide significantly better error handling in the common case, and only as inconvenient as the old behavior when the race is lost.)
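For what it's worth, the SIGPIPE behavior the documentation describes still shows up when the command does start but goes away before reading; a rough sketch, assuming a Unix-ish perl (the perl -e 'exit 0' child is just a stand-in):
use IO::Handle;                          # for autoflush() on the pipe handle

$SIG{PIPE} = sub { warn "caught SIGPIPE\n" };

# The command exists, so the open succeeds...
open(my $fh, '|-', $^X, '-e', 'exit 0') or die "can't fork: $!";
$fh->autoflush(1);                       # push each print straight to the pipe
sleep 1;                                 # ...but the child exits without reading,
print {$fh} "bang\n" or warn "can't write: $!";    # so this write sees SIGPIPE/EPIPE
close $fh or warn "can't close: status=$?";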

Related

In a Perl script, I can open, write to, and close a file, but I get "bad file descriptor" when I try to flock it

I can OPEN the file using the file handle, but when I try to FLOCK using the same file handle I get "bad file descriptor."
my $file='/Library/WebServer/Documents/myFile.txt';
open(my $fh, '>', $file) or die "Could not open '$file' - $!";
# I DO NOT GET AN ERROR FROM OPENING THE FILE
flock($fh, LOCK_EX) or die "Could not lock '$file' - $!";
# HERE IS WHERE I GET THE "BAD FILE DESCRIPTOR" ERROR
# IF I COMMENT THIS LINE OUT, THE PRINT AND CLOSE COMMANDS BELOW EXECUTE NORMALLY
print $fh "hello world";
close($fh) or die "Could not write '$file' - $!";
It's the same file handle, so why do OPEN and PRINT work, but not FLOCK? I have tried setting the permissions for the file to 646, 666, and 777, but I always get the same results.
Thanks!
Did you import the constant LOCK_EX per the flock documentation?
use Fcntl ':flock';
If not, LOCK_EX doesn't mean anything and the flock call will fail. Using strict and/or warnings would have identified a problem with this system call.
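For reference, a corrected sketch of the snippet above with the import in place (path unchanged from the question):
use strict;
use warnings;
use Fcntl ':flock';    # imports LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

my $file = '/Library/WebServer/Documents/myFile.txt';
open(my $fh, '>', $file) or die "Could not open '$file' - $!";
flock($fh, LOCK_EX)      or die "Could not lock '$file' - $!";
print $fh "hello world";
close($fh)               or die "Could not close '$file' - $!";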

What is the easiest way to test error handling when writing to a file in Perl?

I have a bog standard Perl file writing code with (hopefully) adequate error handling, of the type:
open(my $fh, ">", "$filename") or die "Could not open file $filname for writing: $!\n";
# Some code to get data to write
print $fh $data or die "Could not write to file $filname: $!\n";
close $fh or die "Could not close file $filname afterwriting: $!\n";
# No I can't use File::Slurp, sorry.
(I just wrote this code from memory, pardon any typos or bugs)
It is somewhat easy to test error handling in the first "die" line (for example, create a non-writable file with the same name you plan to write).
How can I test error handling in the second (print) and third (close) "die" lines?
The only way I know of to induce an error when closing is to run out of space on the filesystem while writing, which is NOT easy to do as a test.
I would prefer integration test type solutions rather than unit test type (which would involve mocking IO methods in Perl).
Working with a bad filehandle will make them both fail:
use warnings;
use strict;
use feature 'say';
my $file = shift || die "Usage: $0 out-filename\n";
open my $fh, '>', $file or die "Can't open $file: $!";
$fh = \*10;    # clobber $fh with a reference to an unopened filehandle
say $fh 'writes ok, ', scalar(localtime) or warn "Can't write: $!";
close $fh or warn "Error closing: $!";
Prints
say() on unopened filehandle 10 at ...
Can't write: Bad file descriptor at ...
close() on unopened filehandle 10 at ...
Error closing: Bad file descriptor at ...
If you don't want to see perl's warnings, capture them with $SIG{__WARN__} and print your own messages to a file (or STDOUT), for example.
Riffing on zdim's answer ...
Write to a file handle opened for reading.
Close a file handle that has already been closed.
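A hedged sketch of both suggestions (filenames are placeholders; $0, the script itself, stands in for any readable file):
use strict;
use warnings;

# 1. Write to a filehandle opened for reading: the print should fail with
#    "Bad file descriptor" (plus a "Filehandle opened only for input" warning).
open my $ro, '<', $0 or die "Can't open $0: $!";
print {$ro} "nope\n" or warn "print to read-only handle failed: $!";
close $ro;

# 2. Close a filehandle that has already been closed: the second close fails.
open my $fh, '>', 'scratch.txt' or die "Can't open: $!";
close $fh or die "Can't close: $!";
close $fh or warn "second close failed: $!";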

Perl STDERR redirect failing

A common functions script that our systems use relies on a simple STDERR redirect to create user-specific error logs. It goes like this:
# re-route standard error to a text file
close STDERR;
open STDERR, '>>', 'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
or die "couldn't redirect STDERR: $!";
Now, I copy-pasted this into my own functions script for a system-specific error log, and while it compiles, it breaks the scripts that require it. Oddly enough, it doesn't even print the errors that the child scripts are throwing. My slightly modified version looks like this:
close STDERR;
open (STDERR, '>>', 'err/STDERR_SPORK.txt')
or die print "couldn't redirect STDERR: $!";
Everything compiles fine at the command prompt, -c returns ok, and if I throw a warn into the function script and compile, it outputs properly. I still do not understand, though, why this kills the children. I cut out the redirect, and sure enough they work. Any thoughts?
die (and warn) write to STDERR. If you close STDERR and then need to die while attempting to reopen it, where would you expect to see the error message?
Since this is Perl, there are many ways to address this issue. Here are a couple.
Open the file to a temporary filehandle first, and reassign it to STDERR only if that succeeds:
if (open my $tmp_fh, '>>',
        'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt') {
    close STDERR;
    *STDERR = *$tmp_fh;
} else {
    die "couldn't redirect STDERR: $!";
}
Use con. For programs that you run from a command line, most systems have a concept of "the current terminal". In Unix systems, it's /dev/tty and on Windows, it's con. Open an output stream to this terminal pseudo-file.
open STDERR, '>>',
    'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or do {
        open my $tty_fh, '>', 'con';
        print $tty_fh "couldn't redirect STDERR: $!";
        exit 1;
    };
After changing nothing in the script and just poking around on the server, it now works as expected. I don't know what to say, to be honest.

Perl pipe and C process as child [Windows ]

I want to fork a child (my C executable) and share a pipe between the Perl and C processes. Is it possible to use STDOUT and STDIN as the pipe?
I tried the following code, but the child process just keeps running.
use IPC::Open2;
use Symbol;

my $CHILDPROCESS = "chile.exe";
$WRITER = gensym();
$READER = gensym();
my $pid = open2($READER, $WRITER, $CHILDPROCESS);

while (<STDIN>) {
    print $WRITER $_;
}
close($WRITER);

while (<$READER>) {
    print STDOUT "$_";
}
The Safe Pipe Opens section of the perlipc documentation describes a nice feature for doing this:
The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you've opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
But according to the perlport documentation
open
open to |- and -| are unsupported. (Win32, RISC OS)
EDIT: This might only work for Linux. I have not tried it for Windows. There might be a way to emulate it in Windows though.
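On a Unix-ish perl, the minus-pipe open that the quote describes looks roughly like this (a sketch of the perlipc idea, not portable to Win32 per the perlport note above):
# "-|" forks a child whose STDOUT is connected to the filehandle we get back.
my $pid = open(my $from_kid, '-|');
die "can't fork: $!" unless defined $pid;

if ($pid) {                  # parent: read whatever the child prints
    while (my $line = <$from_kid>) {
        print "child said: $line";
    }
    close $from_kid;
} else {                     # child: its STDOUT feeds the parent's $from_kid
    print "hello from pid $$\n";
    exit 0;
}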
Here is what you want I think:
use IO::Handle;    # autoflush() on bareword handles needs this on older perls

# Set up pipes to talk to the child.
pipe(FROM_PERL, TO_C)    or die "pipe: $!\n";
pipe(FROM_C,    TO_PERL) or die "pipe: $!\n";

# Autoflush so we don't have (some) problems with deadlocks.
TO_C->autoflush(1);
TO_PERL->autoflush(1);

if ($pid = fork()) {
    # parent
    close(FROM_PERL) or die "close: $!\n";
    close(TO_PERL)   or die "close: $!\n";
}
else {
    # child
    die "Error on fork.\n" unless defined($pid);

    # Redirect the child's standard I/O onto the pipe ends.
    open STDIN,  "<&FROM_PERL";
    open STDOUT, ">&TO_PERL";
    open STDERR, ">&TO_PERL";

    close(TO_C)   or die "close: $!\n";
    close(FROM_C) or die "close: $!\n";

    exec("./cprogram");    # start the program
}
Now the parent can communicate with the child via FROM_C and TO_C, reading the child's output from the former and writing the child's input to the latter.
This Q&A over on Perlmonks suggests that open2 runs fine on Windows, provided you manage it carefully enough.

How can I reinitialize Perl's STDIN/STDOUT/STDERR?

I have a Perl script which forks and daemonizes itself. It's run by cron, so in order to not leave a zombie around, I shut down STDIN,STDOUT, and STDERR:
open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
if (!fork()) {
    do_some_fork_stuff();
}
The question I have is: I'd like to restore at least STDOUT after this point (it would be nice to restore the other 2). But what magic symbols do I need to use to re-open STDOUT as what STDOUT used to be?
I know that I could use "/dev/tty" if I was running from a tty (but I'm running from cron and depending on stdout elsewhere). I've also read tricks where you can put STDOUT aside with open SAVEOUT,">&STDOUT", but just the act of making this copy doesn't solve the original problem of leaving a zombie around.
I'm looking to see if there's some magic like open STDOUT,"|-" (which I know isn't it) to open STDOUT the way it's supposed to be opened.
# keep a copy of the original STDERR
open(CPERR, ">&STDERR");
# redirect STDERR into a log file
open(STDERR, ">>xyz.log") || die "Error stderr: $!";
# later, close the redirected filehandle
close(STDERR) || die "Can't close STDERR: $!";
# and restore STDERR from the saved copy
open(STDERR, ">&CPERR") || die "Can't restore stderr: $!";
If it's still useful, two things come to mind:
You can close STDOUT/STDERR/STDIN in just the child process (i.e., inside the if (!fork()) block). This allows the parent to keep using them, because they are still open there.
I think you can use the simpler close(STDOUT) instead of reopening it to /dev/null.
For example:
if (!fork()) {
    close(STDIN)  or die "Can't close STDIN: $!\n";
    close(STDOUT) or die "Can't close STDOUT: $!\n";
    close(STDERR) or die "Can't close STDERR: $!\n";
    do_some_fork_stuff();
}
Once closed, there's no way to get it back.
Why do you need STDOUT again? To write messages to the console? Use /dev/console for that, or write to syslog with Sys::Syslog.
Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later. It does solve the "zombie" problem, since you can then redirect fd 0 (and 1 & 2) to /dev/null.
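A sketch of that save-and-restore dance, using open's ">&" dup syntax (variable names are illustrative):
use strict;
use warnings;

# Save a duplicate of the original STDOUT before pointing it at /dev/null.
open(my $saved_stdout, '>&', \*STDOUT) or die "Can't dup STDOUT: $!";

open(STDOUT, '>>', '/dev/null') or die "Can't write to /dev/null: $!";

# ... daemonized work happens here, with fd 1 aimed at /dev/null ...

# Later, restore STDOUT from the saved duplicate.
open(STDOUT, '>&', $saved_stdout) or die "Can't restore STDOUT: $!";
print "back on the original stdout\n";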