Perl STDERR redirect failing

A common functions script used by our systems does a simple STDERR redirect in order to create user-specific error logs. It goes like this:
# re-route standard error to text file
close STDERR;
open STDERR, '>>', 'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
or die "couldn't redirect STDERR: $!";
Now, I copy-pasted this into my own functions script for a system-specific error log, and while it compiles, it breaks the scripts that require it. Oddly enough, it doesn't even print the error that the child scripts are throwing. My slightly modified version looks like this:
close STDERR;
open (STDERR, '>>', 'err/STDERR_SPORK.txt')
or die print "couldn't redirect STDERR: $!";
Everything compiles fine at the command prompt, -c returns OK, and if I throw a warn into the functions script and compile, it outputs properly. I still don't understand why this kills the child scripts, though. I cut out the redirect, and sure enough they work. Any thoughts?

die (and warn) writes to STDERR. If you close STDERR and then need to die as you attempt to reopen it, where would you expect to see the error message?
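Here is a minimal sketch of that failure mode (the unwritable path is made up for illustration): once STDERR is closed, the die message simply has nowhere to go, and the script appears to exit silently.
close STDERR;
open STDERR, '>>', '/no/such/dir/err.txt'
    or die "couldn't redirect STDERR: $!";   # this message is lost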
Since this is Perl, there are many ways to address this issue. Here are a couple.
Open the file to a temporary filehandle first, and reassign it to STDERR if everything goes OK:
if (open my $tmp_fh, '>>',
        'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt') {
    close STDERR;
    *STDERR = *$tmp_fh;
} else {
    die "couldn't redirect STDERR: $!";
}
Use con. For programs that you run from a command line, most systems have a concept of "the current terminal". In Unix systems, it's /dev/tty and on Windows, it's con. Open an output stream to this terminal pseudo-file.
open STDERR, '>>',
    'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or do {
        open my $tty_fh, '>', 'con';
        print $tty_fh "couldn't redirect STDERR: $!";
        exit 1;
    };
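On a Unix system the same fallback would open /dev/tty instead of con; here is a sketch under that assumption, with $log_path standing in for the long path expression above:
open STDERR, '>>', $log_path
    or do {
        open my $tty_fh, '>', '/dev/tty';
        print $tty_fh "couldn't redirect STDERR: $!";
        exit 1;
    };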

After poking around on the server and changing nothing in the script, it now works as expected. I don't know what to say, to be honest.

Related

How to redirect output of some commands to a file but keep some in the console?

I am using the Perl script below:
open(STDOUT, '>', "log.txt") || die "ERROR: opening log.txt\n";
print "\n inside";
close (STDOUT);
print "\noutside";
I need the string "inside" to be printed inside log.txt and the string "outside" to be printed on the console.
With my script, "inside" is printed inside log.txt, but "outside" is not printed on the console.
Can anyone help me with this?
You redirect the standard output stream to a file with open STDOUT, '>', $file. After that there is no simple way to print to the console.
Some ways to print to both a file and the console
Save STDOUT before redirecting and restore it later when needed. Again, see open
open SAVEOUT, ">&STDOUT" or warn "Can't dup STDOUT: $!";
open *STDOUT, ">", $logfile or warn "Can't redirect STDOUT to $logfile: $!";
say "goes to file";
...
open STDOUT, ">&SAVEOUT" or warn "Can't reopen STDOUT: $!"; # restore
say "goes to console";
Print what you intend for the console to a variable; use select to switch the default filehandle that print writes to
open my $fh_so, '>', \my $for_stdout;
select $fh_so;
say "goes to variable \$for_stdout";
say STDOUT "goes to console";
...
select STDOUT; # make STDOUT default again
say $for_stdout; # goes to console (all accumulated $fh_so prints)
With this you can still reach the console by specifying STDOUT explicitly; otherwise you put out all the STDOUT-intended prints at once when you select STDOUT back as the default.
Print logs to a file directly, as in Jens's answer
open my $fh_log, '>', $logfile or die "Can't open $logfile: $!";
say $fh_log "goes to $logfile";
say "goes to console";
...
close $fh_log;
where the default print (or rather say above) keeps going to the console
The first two seem a little cumbersome, don't they? I'd recommend printing logs directly to a log file, unless STDOUT needs to be redirected for most of the program, or you have a compelling reason to select filehandles around.
For say, which adds a newline to what is printed, you need use feature 'say'; at the beginning.
Always start your programs with use warnings; and use strict;.
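Putting those recommendations together, a typical script would begin with:
use strict;
use warnings;
use feature 'say';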

Error in perlipc documentation?

I'm trying to puzzle through something I see in the perlipc documentation.
If you're writing to a pipe, you should also trap SIGPIPE. Otherwise,
think of what happens when you start up a pipe to a command that
doesn't exist: the open() will in all likelihood succeed (it only
reflects the fork()'s success), but then your output will
fail--spectacularly. Perl can't know whether the command worked
because your command is actually running in a separate process whose
exec() might have failed. Therefore, while readers of bogus commands
return just a quick end of file, writers to bogus command will trigger
a signal they'd better be prepared to handle. Consider:
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: $!";
That won't blow up until the close, and it will blow up with a
SIGPIPE. To catch it, you could use this:
$SIG{PIPE} = 'IGNORE';
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: status=$?";
If I'm reading that correctly, it says that the first version will probably not die until the final close.
However, that's not happening on my OS X box (Perl versions 5.8.9 through 5.15.9). It blows up on the open with a "can't fork: No such file or directory" regardless of whether or not I have the $SIG{PIPE} line in there.
What am I misunderstanding?
This was a change implemented back during the development of 5.6 so that system() could detect when it failed to fork/exec the child:
https://github.com/mirrors/perl/commit/d5a9bfb0fc8643b1208bad4f15e3c88ef46b4160
It is also documented in perlopentut's "Pipe Opens" section (http://search.cpan.org/dist/perl/pod/perlopentut.pod#Pipe_Opens), which itself points to perlipc, but perlipc does seem to be missing this.
Same output on my Perl, v5.14.2, Ubuntu 12.04:
$ perl
open(FH, "|bogus") or die "can't fork: $!";
print FH "bang\n" or die "can't write: $!";
close FH or die "can't close: $!";
can't fork: No such file or directory at - line 1.
And that error is odd, since it has nothing to do with forking -- they must have added some lookahead-guessing to see if it could execute bogus. (Which would be a race condition at best, but one likely to provide significantly better error handling in the common case; and only be as inconvenient as the old behavior when the race is lost.)

Different results for command line and CGI for perl web form script

The code for my sub:
sub put_file
{
    my ($host, $placement_directory, $tar_directory, $filename, $user, $pass) = @_;
    my $ftp = Net::FTP->new($host) or die "cannot connect to localhost";
    $ftp->login($user, $pass) or die "cannot log in";
    $ftp->cwd($placement_directory);
    print $tar_directory . "/" . $filename;
    $ftp->put("$tar_directory/$filename") or die "cannot put file ", $ftp->message;
    print "File has been placed \n";
}
When this sub is called from a test script (run from the command line) that uses the same config file and does all of the same things as the CGI script, no errors occur and the file is placed correctly. When the sub is called from my CGI script, the script outputs $tar_directory."/".$filename but not "File has been placed \n", and $ftp->message outputs "cannot put file Directory successfully changed.", which seems to come from the cwd line before it.
Other info:
I have tried running the test script as multiple users with the same result.
I use strict and warnings.
The tar file that is being moved is created by the script.
I'm new to Perl, so any advice is helpful because I'm stuck on this and can't find any help using the power of The Google.
Just a guess. Your ftp->put is failing, triggering the die. Unless you have:
use CGI::Carp qw(carpout fatalsToBrowser);
you won't see the die message in your browser. Since you died, you don't see the final print statement either.
Check your webserver log for the die output, or just change "die" to "print".
Net::FTP can put() from a filehandle as well as a file name:
open my $fh, '<', $tar_directory . '/' . $filename or die "Could not open file: $!";
$ftp->put($fh, $filename) or die "cannot put file ", $ftp->message;
If the problem is on your side then the open should fail and you should get an error message of some kind that will, hopefully, tell you what is wrong; if the problem is on the remote side then the put should fail and you'll see the same thing you're seeing now.
The fact that $ftp->message only has the success message from the cwd indicates that everything is fine on the remote side and the put isn't even reaching the remote server.
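If you want to see exactly how far the FTP conversation gets under CGI, one option (a hedged suggestion) is Net::FTP's Debug flag, which echoes the protocol exchange to STDERR and therefore into the web server's error log:
my $ftp = Net::FTP->new($host, Debug => 1)
    or die "cannot connect to $host: $@";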

How can I redirect the output of Perl's system() to a filehandle?

With the open command in Perl, you can run a program and read its output through a filehandle. However, I have trouble getting back the exit code that way.
With the system command in Perl, I can get back the exit code of the program I'm running. However, I want to redirect just the STDOUT to some filehandle (not STDERR).
The stdout is going to be line-by-line output of key-value pairs that I want to insert into a map in Perl. That is why I want to redirect only the stdout from my Java program in Perl. Is that possible?
Note: if I get errors, the errors get printed to STDERR. One possibility is to check whether anything gets printed to STDERR so that I can quit the Perl script.
Canonically, if you're trying to get at the text output of a forked process, my understanding is that's what the backticks are for. If you need the exit status as well, you can check it with the $? special variable afterward, e.g.:
open my $fh, '>', "output.txt" or die $!;
print {$fh} `echo "Hello!"`;
print "Return code: $?\n";
Output to STDERR from the command in backticks will not be captured, but will instead be written directly to STDERR in the Perl program it's called from.
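If you also need to notice whether the command wrote anything to STDERR, one option is to redirect it to a file inside the backticks and inspect that file afterwards; a sketch, with the java command line only as a placeholder:
my $out = `java -jar MyApp.jar 2>stderr.txt`;
die "command failed: $?" if $?;
die "errors were reported" if -s 'stderr.txt';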
You may want to check out IPC::System::Simple -- it gives you many options for executing external commands, capturing its output and return value, and optionally dying if a bad result is returned.
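A rough sketch of what that looks like -- capture grabs STDOUT only (STDERR passes through) and by default dies if the command exits non-zero; the java command line is again a placeholder:
use IPC::System::Simple qw(capture $EXITVAL);
my @lines = capture('java', '-jar', 'MyApp.jar');  # dies on failure
print "exit value: $EXITVAL\n";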
This is one of the ways to do it.
open my $fh, '>', $file or die "can't open $file: $!";
defined(my $pid = fork) or die "fork: $!";
if (!$pid) {
    open STDOUT, '>&', $fh or die "can't redirect STDOUT: $!";
    exec($command, @args) or die "exec: $!";
}
waitpid $pid, 0;
print $? == 0 ? "ok\n" : "nok\n";
Use open in -| mode. When you close the filehandle, the exit status will be in $?.
open my $fh, '-|', $command or die "can't run $command: $!"; # older version: open my $fh, "$command |";
my @command_output = <$fh>;
close $fh;
my $command_status = $?;
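Note that $? holds the raw wait status; to get the command's actual exit code, shift off the low byte, e.g. continuing the snippet above:
my $exit_code = $command_status >> 8;
print "command exited with $exit_code\n";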
From perldoc -f close
If the file handle came from a piped open, "close" will
additionally return false if one of the other system calls
involved fails, or if the program exits with non-zero status.
(If the only problem was that the program exited non-zero, $!
will be set to 0.) Closing a pipe also waits for the process
executing on the pipe to complete, in case you want to look at
the output of the pipe afterwards, and implicitly puts the exit
status value of that command into $? and
"${^CHILD_ERROR_NATIVE}".

How can I reinitialize Perl's STDIN/STDOUT/STDERR?

I have a Perl script which forks and daemonizes itself. It's run by cron, so in order not to leave a zombie around, I shut down STDIN, STDOUT, and STDERR:
open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
if (!fork()) {
    do_some_fork_stuff();
}
The question I have is: I'd like to restore at least STDOUT after this point (it would be nice to restore the other 2). But what magic symbols do I need to use to re-open STDOUT as what STDOUT used to be?
I know that I could use "/dev/tty" if I was running from a tty (but I'm running from cron and depending on stdout elsewhere). I've also read tricks where you can put STDOUT aside with open SAVEOUT,">&STDOUT", but just the act of making this copy doesn't solve the original problem of leaving a zombie around.
I'm looking to see if there's some magic like open STDOUT,"|-" (which I know isn't it) to open STDOUT the way it's supposed to be opened.
# make a copy of the STDERR file descriptor
open(CPERR, ">&STDERR");
# redirect STDERR into the warning file
open(STDERR, ">>xyz.log") || die "Error stderr: $!";
# close the redirected filehandle
close(STDERR) || die "Can't close STDERR: $!";
# restore STDERR from the saved copy
open(STDERR, ">&CPERR") || die "Can't restore stderr: $!";
If it's still useful, two things come to mind:
You can close STDOUT/STDERR/STDIN in just the child process (i.e. inside the if (!fork()) branch). This will allow the parent to still use them, because they'll still be open there.
I think you can use the simpler close(STDOUT) instead of opening it to /dev/null.
For example:
if (!fork()) {
    close(STDIN) or die "Can't close STDIN: $!\n";
    close(STDOUT) or die "Can't close STDOUT: $!\n";
    close(STDERR) or die "Can't close STDERR: $!\n";
    do_some_fork_stuff();
}
Once closed, there's no way to get it back.
Why do you need STDOUT again? To write messages to the console? Use /dev/console for that, or write to syslog with Sys::Syslog.
Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later. It does solve the "zombie" problem, since you can then redirect fd 0 (and 1 & 2) to /dev/null.
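A sketch of that save-and-restore approach for STDOUT (the lexical handle name is made up; /dev/null as in the question):
open my $saved_stdout, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
open STDOUT, '>>', '/dev/null' or die "Can't write to /dev/null: $!";
# ... daemonized work ...
open STDOUT, '>&', $saved_stdout or die "Can't restore STDOUT: $!";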