I'm trying to redirect STDOUT and STDERR from a perl script - executed from a bash script - to both screen and log file.
perlscript.pl
#!/usr/bin/perl
print "This is a standard output";
print "This is a second standard output";
print STDERR "This is an error";
bashscript.sh
#!/bin/bash
./perlscript.pl 2>&1 | tee -a logfile.log
If I execute the Perl script directly, the screen output is printed in the correct order:
This is a standard output
This is a second standard output
This is an error
But when I execute the bash script, the STDERR line is printed first (both on screen and in the file):
This is an error
This is a standard output
This is a second standard output
With a bash script as the child, the output is ordered flawlessly. Is this a bug in perl or tee? Am I doing something wrong?
The usual trick to turn off buffering is to set the $| variable. Add the line below at the beginning of your script.
$| = 1;
This turns buffering off. Also refer to this excellent article by MJD explaining buffering in Perl: Suffering from Buffering?
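For example, applying this to the perlscript.pl from the question (a minimal sketch; the tee pipeline in bashscript.sh stays unchanged):
#!/usr/bin/perl
# Turn off output buffering so STDOUT is flushed after every print,
# keeping it correctly interleaved with the unbuffered STDERR.
$| = 1;
print "This is a standard output\n";
print "This is a second standard output\n";
print STDERR "This is an error\n";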
I guess this has to do with the way STDOUT and STDERR buffers are flushed. Try
autoflush STDOUT 1;
at the beginning of your perl script so that STDOUT is flushed after each print statement.
Related
I have a Perl CGI program. myperlcgi.pl
Within this program I have the following:
my $retval = system('extprogram');
extprogram has a print statement within.
The output from extprogram is being included in the output of myperlcgi.pl.
I tried adding
> output.txt 2>&1
to the system call, but it did not change anything.
How do I prevent the output from extprogram from appearing in the output of myperlcgi.pl?
I'm surprised that stdout from the system call ends up in myperlcgi.pl's output.
The system command just doesn’t give you complete control over capturing STDOUT and STDERR of the executed command.
Use backticks or open to execute the command instead. That will capture the STDOUT of the command’s execution. If the command also outputs to STDERR, then you can append 2>&1 to redirect STDERR to STDOUT for capture in backticks, like so:
my $output = `$command 2>&1`;
If you really need the native return status of the executed command, you can get that information using $? or ${^CHILD_ERROR_NATIVE}. See perldoc perlvar for details.
Another option is to use the IPC::Open3 Perl library, but I find that method to be overkill for most situations.
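As a rough sketch of how the backticks approach might look inside the CGI script (extprogram stands in for the external command from the question; the error handling is only illustrative):
# Capture both STDOUT and STDERR of the external program so neither
# leaks into the CGI response.
my $output = `extprogram 2>&1`;
my $status = $? >> 8;    # exit status of extprogram

if ($status != 0) {
    warn "extprogram exited with status $status: $output";
}
# $output is now an ordinary string you can log, parse, or discard.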
I'm having the opposite problem of so many posts I've seen on here.
I'm running a Perl command written by someone else, and the output is all being forced to the screen despite using the ">" operator.
Windows clearly knows what I'm intending, because the file I name is created fresh every time I execute my command, but the log file stays empty, 0 bytes long.
My perl executable lives in a different place than my perl routine/.pl file.
I tried running as administrator and not.
This is not something wrong with the program. Some of my coworkers execute it just fine and there is no output to their screens.
The general syntax is:
F:\git\repoFolderStructure\bin>
F:\git\repoFolderStructure\bin>perl alog.pl param1 param2 commaSeparatedParam3 2020-12-17,18:32:33 2020-12-17,18:33:33 > mylogfile.log
>>>>>Lots and lots of output I wish was in a file
I also tried running it from the directory containing my perl.exe, giving the path to my repo folder's bin.
Is there something weird about windows that could create/prevent the > operator behavior?
Here's the kicker: I did ipconfig > out.txt just fine, though...nothing written to the screen.
Thanks for any tips for what I could do to try and change the behavior!
It could be that the output is being sent to STDERR, while you are capturing STDOUT. Append 2>&1 to capture both to the same file.
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout
STDERR
>type stdout
STDOUT
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" 2>stderr
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout 2>stderr
>type stdout
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >both 2>&1
>type both
STDERR
STDOUT
Note that 2>&1 must come after you redirect STDOUT if you want to combine both streams.
I completely don't understand this behaviour.
I have a very simple Perl script:
#!/usr/bin/perl
use strict;
use warnings;
print "line 1\n";
print STDERR "line 2\n";
If I run it from the console, I get what I expect:
$ perl a.pl
line 1
line 2
But if I redirect it to a file, I get the lines in reverse order:
$ perl a.pl &> a
$ cat a
line 2
line 1
How can I capture all the output (STDOUT + STDERR) to the file in the same order I get in the console?
This is an effect of buffering. When outputting to a terminal, Perl (just like stdio) uses line-based buffering, which gives the expected effect. When outputting to a file, it uses block buffering (typically 4k or 8k blocks), so for a short script it only flushes at program end.
The difference you're observing is because STDERR is unbuffered, unlike most other handles.
The easiest way to circumvent this issue is to enable autoflushing on STDOUT, using the $| special variable. See perlvar for more details.
Don't use $|; use autoflush instead.
STDOUT->autoflush(1);
STDERR->autoflush(1);
See http://perldoc.perl.org/IO/Handle.html
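Putting that together with the script from the question, a minimal sketch might look like this (use IO::Handle is only needed on older Perls, but it does no harm):
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;          # provides the autoflush method on older Perls

STDOUT->autoflush(1);    # flush STDOUT after every print
STDERR->autoflush(1);    # STDERR is already unbuffered; this just makes it explicit

print "line 1\n";
print STDERR "line 2\n";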
I am running unix commands in perl as
$cmd="ls -l";
$result=`$cmd`;
print $log_file $result;
but the output from the execution is also printed on the screen.
How can I execute the command without printing the result on the screen?
The standard output stream isn't printed to screen -- it's captured to $result. But there is a second output stream called the standard error stream that programs can write to, and which also by default goes to the screen. Many programs use this stream for logging, or (in the case of ls) for writing error messages. To capture this in addition, use
$cmd = "ls -l 2>&1";
To discard it instead, use
$cmd = "ls -l 2>/dev/null";
I think there's a mistake. Try this in a shell:
perl -e 'my $x = `ls -l`;'
That doesn't produce any output.
It probably happens because $log_file is somehow attached to STDOUT, duplicated from it, something like:
open(my $log_file, ">&STDOUT");
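If that is the case, opening the log file as an ordinary file keeps the captured output off the screen (a sketch; the file name is just a placeholder):
# Open a real log file instead of a duplicate of STDOUT.
open(my $log_file, '>', 'command.log') or die "Cannot open command.log: $!";
print $log_file `ls -l 2>&1`;   # capture STDOUT and STDERR, write to the file only
close $log_file;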
I'm writing a Perl script that should execute commands in a shell and parse their output. I intend to use csh as the shell. I started with this:
my $out = `cmd`;
but it doesn't capture STDERR, which I also need. Running sh with output redirection does nothing:
my $out = `sh -c "cmd 2>&1"`
still captures only STDOUT, not STDERR.
Even redirecting to a file in csh doesn't work for me:
tcsh$ cmd >& logfile.log
still captures STDOUT only %)
The command I'm trying to execute is actually an sh script, and some commands in this script print to STDERR; I want to capture that output. If I execute sh -c "cmd 2>/dev/null", STDERR actually goes to /dev/null and only STDOUT is printed in the terminal.
Could anyone help me with this?
I believe there is something you are not telling us. Are you on cygwin? Or Windows? Do you have a PERL5SHELL environment variable set?
There is something that you are not telling us because both of these work fine on the five platforms I can easily test on:
% perl -le '$out = `sh -c "grep missing /dev/nowhere 2>&1" | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
But in fact, there is no reason to call sh(1) explicitly for shelling out. That's because Perl always calls sh(1) for all its backtick, pipe-open, and system() shell-outs:
% perl -le '$out = `grep missing /dev/nowhere 2>&1 | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
The only exception to this I can think of occurs on non-Unix systems, where, because they have no /bin/sh, something else is defined.
But under no circumstances will simple shell-outs be calling tcsh(1) behind your back. You’d’ve had to’ve seriously hacked the perl(1) source to get that to happen. I also rather doubt you could (easily) hack the binary, since the string "/bin/tcsh" is going to be longer than "/bin/sh", and it isn’t very often going to be found in /bin/ anyway.
That you can’t get stderr redirection working even from the shell says something pretty weird is going on. I think we need more information.
Here, you are capturing the STDOUT of sh, which is not the STDERR of cmd:
my $out = `sh -c "cmd 2>&1"`;
Can you just run cmd directly?
my $out = `cmd 2>&1`;
Backquotes capture STDOUT, not STDERR.
system will send both stdout and stderr to the parent's filehandles.
If you want to capture STDERR, you need something like IPC::Open3:
Extremely similar to open2(), open3() spawns the given $cmd and connects CHLD_OUT for reading from the child, CHLD_IN for writing to the child, and CHLD_ERR for errors. If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle.
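A rough sketch of that approach (cmd again stands in for the actual script being run; error handling kept minimal):
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

# STDERR needs its own, pre-created filehandle; an autovivified lexical won't do.
my $err = gensym;
my $pid = open3(my $in, my $out, $err, 'cmd');

close $in;                      # nothing to send to the child
my @stdout = <$out>;            # everything the child wrote to STDOUT
my @stderr = <$err>;            # everything the child wrote to STDERR

waitpid($pid, 0);
my $status = $? >> 8;           # child's exit status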
You said that running the command cmd >& logfile.log in tcsh sends only cmd's stdout to the log file, not its stderr. That doesn't make sense.
Try replacing cmd with the following script:
#!/bin/sh
echo stdout
echo STDERR 1>&2
Both "stdout" and "STDERR" should show up in logfile.log.
If so, then perhaps your "cmd" is doing something odd. My best guess is that cmd is writing to /dev/tty, not to either stdout or stderr; that wouldn't be affected by redirection.
To see what I mean, add this line to the above script:
echo tty > /dev/tty
I don't really have time to mock up an example as I normally would, nor even test one. I am thinking that you might try using Capture::Tiny to see if that helps.
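For what it's worth, a minimal, untested sketch of what that could look like (cmd is the same placeholder command as above):
use strict;
use warnings;
use Capture::Tiny qw(capture);

# capture runs the block and returns whatever was written to STDOUT and
# STDERR, plus the block's return value (here, system's exit value).
my ($stdout, $stderr, $exit) = capture {
    system('cmd');
};

print "out: $stdout";
print "err: $stderr";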