I found this in a sample script, then searched Google and found the following explanation:
Note that you cannot simply open STDERR to be a dup of STDOUT in your Perl
program and avoid calling the shell to do the redirection. This doesn't
work:
open(STDERR, ">&STDOUT");
This fails because the open() makes STDERR go to where STDOUT was going at
the time of the open(). The backticks then make STDOUT go to a string, but
don't change STDERR (which still goes to the old STDOUT).
Now I am confused. What exactly is the meaning of open(STDERR, ">&STDOUT"); ?
With the & in the mode >& in the call
open STDERR, ">&STDOUT"; # or: open STDERR, ">&", \*STDOUT
the first given filehandle is made a copy of the second one. See open, and see man 2 dup2, since this goes via the dup2 syscall. The notation follows the shell's I/O redirection.
Since here the first filehandle exists (STDERR)† it is first closed.
The effect is that prints to STDERR will go to where STDOUT was going before this was done, with the side effect of the original STDERR being closed.
This is legit and does not result in errors but is not a good way to redirect STDERR in general -- after that we cannot restore STDERR any more. See open for how to redirect STDERR.
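For reference, a minimal sketch of a restorable redirect, following the save-and-restore approach documented in perldoc -f open (the filename err.log is just a placeholder):

# Duplicate STDERR first so it can be restored afterwards
open(my $saved_err, '>&', \*STDERR) or die "Can't dup STDERR: $!";
open(STDERR, '>', 'err.log')        or die "Can't redirect STDERR: $!";
print STDERR "this line lands in err.log\n";
open(STDERR, '>&', $saved_err)      or die "Can't restore STDERR: $!";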
The rest of the comment clearly refers to a situation where backticks (see qx), which redirect STDOUT of the executed command(s) to the program, are used after that open call. All this seems to refer to an idea of redirecting STDERR to STDOUT in this way.
Alas, the STDERR, made by that open call to go where STDOUT was going, doesn't get redirected by the backticks and thus still goes "there." In my case prints to STDERR wind up on the terminal as I see the warning (ls: cannot access...) with
perl -we'open STDERR, ">&STDOUT"; $out = qx(ls no_such)'
(unlike with perl -we'$out = qx(ls no_such 2>&1)'). Explicit prints to STDERR also go to the terminal as STDOUT (add such prints and redirect output to a file to see).
This is to be expected, since & made a copy of the filehandle, so the "new" one (the former STDERR) still goes where STDOUT was going, that is, to the terminal. That is of course unintended in this case, and thus an error.
† Every program in UNIX gets connected to standard streams stdin, stdout, and stderr, with file descriptors 0, 1, and 2 respectively. In a Perl program we then get ready filehandles for these, like the STDERR (for fd 2).
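For example, fileno reports the descriptor behind each standard handle:

# Prints 0, 1, and 2 respectively
print fileno(STDIN),  "\n";
print fileno(STDOUT), "\n";
print fileno(STDERR), "\n";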
Some generally useful posts on manipulations of file descriptors in the shell:
What does “3>&1 1>&2 2>&3” do in a script?
In the shell, what does “ 2>&1 ” mean?
File descriptors & shell scripting
Order of redirections
Shell redirection i/o order
It's basically dup2(fileno(STDOUT), fileno(STDERR)). See your system's dup2 man page.
In short, it associates STDERR with the same stream as STDOUT at the system level. After the call is performed, writing to either handle is the same as writing to STDOUT before the change.
Unless someone's messed with STDOUT or STDERR, it's equivalent to the shell command
exec 2>&1
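A sketch of the same operation done explicitly through the POSIX module (assuming a POSIX system):

use POSIX qw(dup2);
# Make fd 2 (STDERR) a copy of fd 1 (STDOUT), like the shell's 2>&1
defined( dup2(fileno(STDOUT), fileno(STDERR)) ) or die "dup2 failed: $!";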
I have a Perl CGI program, myperlcgi.pl.
Within this program I have the following:
my $retval = system('extprogram');
extprogram has a print statement within.
The output from extprogram is being included within myperlcgi.pl
I tried adding
> output.txt 2>&1
to the system call but it did not change anything.
How do I prevent the output from extprogram being included in myperlcgi.pl?
I am surprised that stdout from the system call ends up in myperlcgi.pl's output.
The system command just doesn’t give you complete control over capturing STDOUT and STDERR of the executed command.
Use backticks or open to execute the command instead. That will capture the STDOUT of the command’s execution. If the command also outputs to STDERR, then you can append 2>&1 to redirect STDERR to STDOUT for capture in backticks, like so:
my $output = `$command 2>&1`;
If you really need the native return status of the executed command, you can get that information using $? or ${^CHILD_ERROR_NATIVE}. See perldoc perlvar for details.
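For example, the standard $? decoding from perldoc -f system looks like this (some_command is a hypothetical placeholder):

my $output = `some_command 2>&1`;
if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d\n", $? & 127;
}
else {
    printf "child exited with value %d\n", $? >> 8;
}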
Another option is to use the IPC::Open3 Perl library, but I find that method to be overkill for most situations.
I'm having the opposite problem of so many posts I've seen on here.
I'm running a perl command written by someone else, and the output is all being forced to the screen despite using the ">" redirection operator.
Windows clearly knows what I'm intending, because the file name I give is created fresh every time I execute my command, but the log file is empty, 0 bytes long.
My perl executable lives in a different place than my perl routine/.pl file.
I tried running as administrator and not.
This is not something wrong with the program. Some of my coworkers execute it just fine and there is no output to their screens.
The general syntax is:
F:\git\repoFolderStructure\bin>
F:\git\repoFolderStructure\bin>perl alog.pl param1 param2 commaSeparatedParam3 2020-12-17,18:32:33 2020-12-17,18:33:33 > mylogfile.log
>>>>>Lots and lots of output I wish was in a file
I also attempted this from the directory containing my perl.exe, giving the full path to my repo folder's bin.
Is there something weird about windows that could create/prevent the > operator behavior?
Here's the kicker: I did ipconfig > out.txt just fine, though...nothing written to the screen.
Thanks for any tips for what I could do to try and change the behavior!
It could be that the output is being sent to STDERR, while you are capturing STDOUT. Append 2>&1 to capture both to the same file.
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout
STDERR
>type stdout
STDOUT
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" 2>stderr
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout 2>stderr
>type stdout
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >both 2>&1
>type both
STDERR
STDOUT
Note that 2>&1 must come after you redirect STDOUT if you want to combine both streams.
I am calling an external program from my perl code using backticks
print `<some long running program>`
The long running program prints detailed log messages onto standard output.
The problem I'm having is that due to buffering, the output from the long running program is printed all at once after it has finished its execution.
I tried making the STDOUT filehandle "hot" but that did not help.
Is there any way I can have my program print continuously onto the screen?
Open as an exec pipe rather than using backticks.
open ( my $prog_stdout, "-|", "/your/program" ) or die $!;
This will fork and exec but give you access to $prog_stdout to do things with.
E.g.
while ( <$prog_stdout> ) {
    print;
}
(It'll close if your external program exits, so the while will terminate).
You may also want to include autoflushing of the filehandle. http://perldoc.perl.org/IO/Handle.html
But that may not be necessary, as output won't be buffered indefinitely.
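A minimal sketch of the relay loop with autoflush enabled on our own STDOUT, assuming the IO::Handle method interface:

use IO::Handle;        # provides the autoflush method
STDOUT->autoflush(1);  # push each relayed line out immediately
while ( my $line = <$prog_stdout> ) {
    print $line;
}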
It might not be buffering but the fact that backticks return only when the external program finishes.
You can, however, use a read pipe to read the external program's output line by line:
use autodie;
open my $pipe, "-|", "<some long running program>";
# $pipe->autoflush();
while (<$pipe>) { .. }
I've been using Daemon::Simple to quickly daemonise some scripts and also make them start/stop capable. However, some of these scripts are written in Python, so I generally call Daemon::Simple::init() and then exec() the Python script.
I've found, however, that Daemon::Simple::init() closes STDOUT and STDERR, and it seems that as a result the Python scripts break (just exit) when they write to STDOUT or STDERR. Reopening STDOUT and STDERR and redirecting them to a file before the exec does not help.
What I've found does help is changing the source of Daemon::Simple from this:
close(STDOUT);
close(STDERR);
to:
open STDOUT, "/dev/null";
open STDERR, "/dev/null";
If I then open STDOUT and STDERR again and redirect them to real files after calling Daemon::Simple::init() as before, this time it works. It seems that closing STDOUT and STDERR and then reopening them somehow breaks them after the exec(), but opening them to /dev/null and then reopening them works fine.
Is there any way I can reopen or keep open STDOUT and STDERR, without modifying Daemon::Simple, so that they survive an exec()?
Are those the same lines you were using to try to reopen STDERR and STDOUT? If so, the problem is likely that you're opening them for reading, not for writing. Try using
open STDOUT, '>', '/dev/null';
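and likewise for STDERR, since both handles need to be reopened for writing:
open STDERR, '>', '/dev/null';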
Does this work for you?
use Daemon::Simple;
open(SAVEOUT, ">&STDOUT");
open(SAVEIN, "<&STDIN");
Daemon::Simple::init("daemon");
open(STDOUT, ">&SAVEOUT");
open(STDIN, "<&SAVEIN");
exec("python-script") or die "exec failed: $!";
I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors.
The code looks something like this:
use IPC::Open3;
my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php'); # any website will do here
my ($wget_pid,$wget_in,$wget_out,$wget_err);
if (!($wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd))) {
print STDERR "failed to run open3\n";
exit(1)
}
close($wget_in);
my @wget_outs = <$wget_out>;
my @wget_errs = <$wget_err>;
print STDERR "wget stderr: ".join('', @wget_errs);
# page and errors outputted on the next line, seems wrong
print STDERR "wget stdout: ".join('', @wget_outs);
#clean up after this, not shown is running the filtering command, closing and waitpid'ing
When I run that wget command directly from the command-line and redirect stderr to a file, something sane happens - the stdout will be the downloaded page, the stderr will contain the info about opening the given page.
wget -O - http://10.10.1.72/index.php 2> stderr_test_file
When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another.
I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything went wrong.
Other important things:
I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output.
It's key that I be able to see what went wrong, not just reading $? >> 8 (i.e. I have to tell the user, hey, that IP address is wrong, or isn't the right kind of website, or whatever).
Finally, I'm choosing system/open3/exec over other perl-isms (e.g. backticks) because some of the input is provided by untrustworthy users.
You are passing an undefined value as the error handle argument to open3, and as IPC::Open3 says:
If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle (this means that an autovivified lexical cannot be used for the STDERR filehandle, see SYNOPSIS) ...
The workaround is to initialize $wget_err to something before calling open3:
my ($wget_pid, $wget_in, $wget_out, $wget_err);
use Symbol qw(gensym);
$wget_err = gensym();
if (!($wget_pid = open3( ... ))) { ...
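Putting it together, a minimal sketch of the corrected call (URL taken from the question; note that open3 raises an exception on failure rather than returning false, and draining stdout before stderr can deadlock if the command writes a lot to stderr):

use IPC::Open3;
use Symbol qw(gensym);

my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php');
my ($wget_in, $wget_out);          # autovivified by open3
my $wget_err = gensym();           # a real glob, so stderr gets its own handle

my $wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd);
close $wget_in;

my @wget_outs = <$wget_out>;       # the page
my @wget_errs = <$wget_err>;       # wget's diagnostics
waitpid $wget_pid, 0;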