How to keep open STDOUT and STDERR with Daemon::Simple - perl

I've been using Daemon::Simple to quickly daemonise some scripts and also make them start/stop capable. However, some of these scripts are written in Python, so I generally call Daemon::Simple::init() and then exec() the Python script.
However, I've found that Daemon::Simple::init() closes STDOUT and STDERR, and it seems that as a result the Python scripts break (they just exit) when they write to STDOUT and STDERR. Reopening STDOUT and STDERR and redirecting them to a file before the exec does not help.
What I've found does help is changing the source of Daemon::Simple from this:
close(STDOUT);
close(STDERR);
to:
open STDOUT, "/dev/null";
open STDERR, "/dev/null";
If I then open STDOUT and STDERR again and redirect them to real files after calling Daemon::Simple::init(), as before, this time it works. It seems that closing STDOUT and STDERR and reopening them somehow breaks them after the exec(), but opening them to /dev/null and then reopening them works fine.
Is there any way I can reopen or keep open STDOUT and STDERR, without modifying Daemon::Simple, that survives an exec()?

Are those the same lines you were using to try to reopen STDERR and STDOUT? If so, the problem is likely that you're opening them for reading, not for writing. Try using
open STDOUT, '>', '/dev/null';
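and likewise for STDERR. A minimal sketch of reopening both streams for writing (the log path here is just a placeholder of mine):
open STDOUT, '>', '/dev/null'       or die "Can't reopen STDOUT: $!";
open STDERR, '>', '/tmp/daemon.log' or die "Can't reopen STDERR: $!";  # placeholder path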

Does this work for you?
use Daemon::Simple;
# Save duplicates of the handles before init() closes them.
open(SAVEOUT, ">&STDOUT");
open(SAVEIN, "<&STDIN");
Daemon::Simple::init("daemon");
# Restore the saved handles so they are open again for the exec().
open(STDOUT, ">&SAVEOUT");
open(STDIN, "<&SAVEIN");
exec("python-script") or die "exec failed: $!";

Related

Meaning of open( STDERR, ">&STDOUT" )

I found this in a sample script, then I searched Google and found the following explanation:
Note that you cannot simply open STDERR to be a dup of STDOUT in your Perl
program and avoid calling the shell to do the redirection. This doesn't
work:
open(STDERR, ">&STDOUT");
This fails because the open() makes STDERR go to where STDOUT was going at
the time of the open(). The backticks then make STDOUT go to a string, but
don't change STDERR (which still goes to the old STDOUT).
Now I am confused. What exactly is the meaning of open(STDERR, ">&STDOUT"); ?
With the & in the mode >& in the call
open STDERR, ">&STDOUT"; # or: open STDERR, ">&", \*STDOUT
the first given filehandle is made a copy of the second one. See open, and see man 2 dup2, since this goes via the dup2 syscall. The notation follows the shell's I/O redirection.
Since here the first filehandle exists (STDERR)† it is first closed.
The effect is that prints to STDERR will go to where STDOUT was going before this was done, with the side effect of the original STDERR being closed.
This is legitimate and does not result in errors, but it is not a good way to redirect STDERR in general, because after that we cannot restore STDERR any more. See open for how to redirect STDERR.
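For instance, a save-and-restore sketch along those lines (the lexical handle name is my own):
open my $saved_err, '>&', \*STDERR or die "Can't dup STDERR: $!";
open STDERR, '>&', \*STDOUT        or die "Can't point STDERR at STDOUT: $!";
# ... prints to STDERR now go where STDOUT goes ...
open STDERR, '>&', $saved_err      or die "Can't restore STDERR: $!";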
The rest of the comment clearly refers to a situation where backticks (see qx), which redirect STDOUT of the executed command(s) to the program, are used after that open call. All this seems to refer to an idea of redirecting STDERR to STDOUT in this way.
Alas, the STDERR, made by that open call to go where STDOUT was going, doesn't get redirected by the backticks and thus still goes "there." In my case prints to STDERR wind up on the terminal as I see the warning (ls: cannot access...) with
perl -we'open STDERR, ">&STDOUT"; $out = qx(ls no_such)'
(unlike with perl -we'$out = qx(ls no_such 2>&1)'). Explicit prints to STDERR also go to the terminal as STDOUT (add such prints and redirect output to a file to see).
This may be expected, since & made a copy of the filehandle, so the "new" one (the former STDERR) still goes where STDOUT was going, that is, to the terminal. That is of course unintended in this case and thus an error.
† Every program in UNIX gets connected to standard streams stdin, stdout, and stderr, with file descriptors 0, 1, and 2 respectively. In a Perl program we then get ready filehandles for these, like the STDERR (for fd 2).
Some generally useful posts on manipulations of file descriptors in the shell:
What does “3>&1 1>&2 2>&3” do in a script?
In the shell, what does “ 2>&1 ” mean?
File descriptors & shell scripting
Order of redirections
Shell redirection i/o order
It's basically dup2(fileno(STDOUT), fileno(STDERR)). See your system's dup2 man page.
In short, it associates STDERR with the same stream as STDOUT at the system level. After the call, writing to either of them is the same as writing to STDOUT before the change.
Unless someone's messed with STDOUT or STDERR, it's equivalent to the shell command
exec 2>&1
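If it helps to see the system-call level spelled out, roughly the same effect can be produced explicitly with the core POSIX module (a sketch, not literally what open does internally):
use POSIX qw(dup2);
# make fd 2 (STDERR) a duplicate of fd 1 (STDOUT)
defined( dup2(fileno(STDOUT), fileno(STDERR)) )
    or die "dup2 failed: $!";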

Perl STDERR redirect failing

A common functions script that our systems use does a simple STDERR redirect in order to create user-specific error logs. It goes like this:
# re-route standard error to text file
close STDERR;
open STDERR, '>>', 'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or die "couldn't redirect STDERR: $!";
Now, I copy-pasted this into my own functions script for a system-specific error log, and while it compiles, it breaks the scripts that require it. Oddly enough, it doesn't even print the error that the child scripts are throwing. My slightly modified version looks like this:
close STDERR;
open (STDERR, '>>', 'err/STDERR_SPORK.txt')
    or die print "couldn't redirect STDERR: $!";
Everything compiles fine at the command prompt, -c returns OK, and if I throw a warn into the functions script and compile, it outputs properly. I still do not understand, though, why this kills the child scripts. I cut out the redirect, and sure enough they work. Any thoughts?
die (and warn) writes to STDERR. If you close STDERR and then need to die as you attempt to reopen it, where would you expect to see the error message?
Since this is Perl, there are many ways to address this issue. Here are a couple.
Open the file to a temporary filehandle first, and reassign it to STDERR only if that succeeds:
if (open my $tmp_fh, '>>',
        'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt') {
    close STDERR;
    *STDERR = *$tmp_fh;
} else {
    die "couldn't redirect STDERR: $!";
}
Use con. For programs that you run from a command line, most systems have a concept of "the current terminal". In Unix systems, it's /dev/tty and on Windows, it's con. Open an output stream to this terminal pseudo-file.
open STDERR, '>>',
    'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or do {
        open my $tty_fh, '>', 'con';
        print $tty_fh "couldn't redirect STDERR: $!";
        exit 1;
    };
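On a Unix box the same fallback would presumably use /dev/tty instead of con, e.g.:
open my $tty_fh, '>', '/dev/tty';
print $tty_fh "couldn't redirect STDERR: $!";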
After changing nothing in the script, just poking around on the server, it now works as expected. I don't know what to say, to be honest.

How to redirect STDERR within script?

I can redirect STDERR from the command line by doing:
perl myscript.pl 2> err.txt
How can I do this within the script so that the person running the script doesn't have to do it?
This is what I do
open STDERR, '>', "$errfile"
or warn "Cannot redirect STDERR to $errfile: $!";
If this is in a library, call carp instead of warn. That requires use Carp; (Carp is a core module).
If the STDERR stream will need to be restored later, save it first, as shown in open.
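A short sketch of that save-and-restore (the handle name is mine):
open my $old_stderr, '>&', \*STDERR or warn "Can't save STDERR: $!";
open STDERR, '>', $errfile          or warn "Cannot redirect STDERR to $errfile: $!";
# ... run with STDERR going to $errfile ...
open STDERR, '>&', $old_stderr      or warn "Can't restore STDERR: $!";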

perl parent process hangs waiting for child process to read stdin

I have a Perl script which emulates a tee command so I can get output written to the terminal and a log file. It works something like this (error checking etc. omitted).
$pid = open(STDOUT, '-|');
# Above is perl magic that forks a process and sets up a pipe with the
# child's STDIN being one end of the pipe and the parent's STDOUT (in
# this case) being the other.
if ($pid == 0)
{
    # Child.
    # Open log file
    while (<STDIN>)
    {
        # print to STDOUT and log file
    }
    # close log files
    exit;
}
# parent
open STDERR, '>&STDOUT';
# do lots of system("...") calls
close STDERR;
close STDOUT;
exit;
This sometimes hangs, and invariably if you look at the processes and the stacks of said processes, the parent is hanging in one of the closes, waiting for the child to exit, whereas the child is hanging reading something from a file (which has to be STDIN, because there's no other file).
I'm rather at a loss as to how to deal with this. The problem seems to happen if you are running the program from a shell that isn't attached to a console - running the script in a normal shell works fine - and the only piece of code that has changed recently in that script is the addition of an open/close of a file just to touch it (and it's before the script gets to this 'tee' code).
Has anybody had problems like this before and/or have a suggestion as to what I can do to fix this? Thanks.
Well, after some experimentation, opening STDOUT directly appears to be at least part of the reason. My code now reads like this:
$pid = open($handle, '|-');
if ($pid == 0)
{
    # Child.
    # Open log file
    while (<STDIN>)
    {
        # print to STDOUT and log file
    }
    # close log files
    exit;
}
# parent
open my $oldout, '>&STDOUT';
open my $olderr, '>&STDERR';
open STDOUT, '>&', $handle;
open STDERR, '>&', $handle;
# do lots of system("...") calls
open STDOUT, '>&', $oldout;
open STDERR, '>&', $olderr;
close $handle or die "Log child exited unexpectedly: $!\n";
exit;
which, if nothing else, looks cleaner (but still messier than I'd like, as I don't know what to do if any of those dups has an error). But I'm still unclear as to why opening and closing a handle much earlier in the code made such a difference to this bit.
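For the error-handling worry, one possibility (just a sketch) is to check each dup and bail out before touching the real streams:
open my $oldout, '>&', \*STDOUT or die "Can't save STDOUT: $!";
open my $olderr, '>&', \*STDERR or die "Can't save STDERR: $!";
open STDOUT, '>&', $handle      or die "Can't send STDOUT to the log pipe: $!";
open STDERR, '>&', $handle      or die "Can't send STDERR to the log pipe: $!";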

How can I reinitialize Perl's STDIN/STDOUT/STDERR?

I have a Perl script which forks and daemonizes itself. It's run by cron, so in order not to leave a zombie around, I shut down STDIN, STDOUT, and STDERR:
open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
if (!fork()) {
    do_some_fork_stuff();
}
The question I have is: I'd like to restore at least STDOUT after this point (it would be nice to restore the other 2). But what magic symbols do I need to use to re-open STDOUT as what STDOUT used to be?
I know that I could use "/dev/tty" if I was running from a tty (but I'm running from cron and depending on stdout elsewhere). I've also read tricks where you can put STDOUT aside with open SAVEOUT, ">&STDOUT", but just the act of making this copy doesn't solve the original problem of leaving a zombie around.
I'm looking to see if there's some magic like open STDOUT,"|-" (which I know isn't it) to open STDOUT the way it's supposed to be opened.
# save a copy of the STDERR file descriptor
open(CPERR, ">&STDERR");
# redirect stderr into the log file
open(STDERR, ">>xyz.log") || die "Error stderr: $!";
# close the redirected filehandle
close(STDERR) || die "Can't close STDERR: $!";
# restore stderr from the saved copy
open(STDERR, ">&CPERR") || die "Can't restore stderr: $!";
If it's still useful, two things come to mind:
You can close STDOUT/STDERR/STDIN in just the child process (i.e. inside the if (!fork()) block). This will allow the parent to still use them, because they'll still be open there.
I think you can use the simpler close(STDOUT) instead of opening it to /dev/null.
For example:
if (!fork()) {
    close(STDIN)  or die "Can't close STDIN: $!\n";
    close(STDOUT) or die "Can't close STDOUT: $!\n";
    close(STDERR) or die "Can't close STDERR: $!\n";
    do_some_fork_stuff();
}
Once closed, there's no way to get it back.
Why do you need STDOUT again? To write messages to the console? Use /dev/console for that, or write to syslog with Sys::Syslog.
Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later. It does solve the "zombie" problem, since you can then redirect fd 0 (and 1 & 2) to /dev/null.
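A sketch of that save-then-restore for STDOUT (the handle name is my own):
open my $saved_out, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
open STDOUT, '>>', '/dev/null'     or die "Can't write to /dev/null: $!";
# ... daemonised work with STDOUT silenced ...
open STDOUT, '>&', $saved_out      or die "Can't restore STDOUT: $!";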