ksh script: how to redirect both stdout and stderr to the same file using exec, adding a different prefix to each?

I'd like to, in ksh, using exec at the beginning of the script, redirect both stdout and stderr to the same file (and up to here I've managed it), but...
also add a different "prefix" to each text line: one for stdout lines and one for stderr lines.
Let's say, if stdout produces "hallo stdout" and stderr produces "hallo stderr", in my dump file I'd like to have:
FROM STDOUT : hallo stdout
FROM STDERR : hallo stderr
PS: hi everyone. This is my first question, so forgive my bad English.
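For what it's worth, a sketch of one way to do this, assuming ksh93 or bash since it relies on process substitution (the file name dump.log is illustrative; the prefixes are the ones from the question): each stream is routed through its own sed process, which adds the prefix before appending to the common file.

```shell
#!/bin/ksh
# Sketch: prefix stdout and stderr lines differently, both appended to dump.log.
# Needs process substitution (ksh93/bash); because each stream goes through a
# separate process, the relative order of the lines in the file isn't guaranteed.
exec 1> >(sed 's/^/FROM STDOUT : /' >> dump.log) \
     2> >(sed 's/^/FROM STDERR : /' >> dump.log)

echo "hallo stdout"
echo "hallo stderr" >&2
```

After this runs, dump.log contains both prefixed lines, though possibly in either order, since the two sed processes write independently.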

Related

Windows command prompt creating but not redirecting output to file

I'm having the opposite problem of so many posts I've seen on here.
I'm running a Perl command written by someone else, and the output is all being forced to the screen despite using the ">" operator.
Windows clearly knows what I'm intending, because the file name I give is created fresh every time I execute my command, but the log file is empty and 0 bytes long.
My perl executable lives in a different place than my perl routine/.pl file.
I tried running as administrator and not.
This is not something wrong with the program. Some of my coworkers execute it just fine and there is no output to their screens.
The general syntax is:
F:\git\repoFolderStructure\bin>
F:\git\repoFolderStructure\bin>perl alog.pl param1 param2 commaSeparatedParam3 2020-12-17,18:32:33 2020-12-17,18:33:33 > mylogfile.log
>>>>>Lots and lots of output I wish was in a file
I also attempted this from the directory containing my perl.exe, giving the full path to my repo folder's bin.
Is there something weird about Windows that could create the file but prevent the > operator from redirecting into it?
Here's the kicker: ipconfig > out.txt worked just fine, though... nothing written to the screen.
Thanks for any tips for what I could do to try and change the behavior!
It could be that the output is being sent to STDERR, while you are capturing STDOUT. Append 2>&1 to capture both to the same file.
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout
STDERR
>type stdout
STDOUT
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" 2>stderr
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >stdout 2>stderr
>type stdout
STDOUT
>type stderr
STDERR
>perl -e"print qq{STDOUT\n}; warn qq{STDERR\n};" >both 2>&1
>type both
STDERR
STDOUT
Note that 2>&1 must come after you redirect STDOUT if you want to combine both streams.

Meaning of open( STDERR, ">&STDOUT" )

I found this in a sample script, and then a Google search turned up the following:
Note that you cannot simply open STDERR to be a dup of STDOUT in your Perl
program and avoid calling the shell to do the redirection. This doesn't
work:
open(STDERR, ">&STDOUT");
This fails because the open() makes STDERR go to where STDOUT was going at
the time of the open(). The backticks then make STDOUT go to a string, but
don't change STDERR (which still goes to the old STDOUT).
Now I am confused. What exactly is the meaning of open(STDERR, ">&STDOUT"); ?
With the & in the mode >& in the call
open STDERR, ">&STDOUT"; # or: open STDERR, ">&", \*STDOUT
the first given filehandle is made a copy of the second one. See open, and see man 2 dup2 since this goes via dup2 syscall. The notation follows the shell's I/O redirection.
Since here the first filehandle exists (STDERR)† it is first closed.
The effect is that prints to STDERR will go to where STDOUT was going before this was done, with the side effect of the original STDERR being closed.
This is legit and does not result in errors but is not a good way to redirect STDERR in general -- after that we cannot restore STDERR any more. See open for how to redirect STDERR.
The rest of the comment clearly refers to a situation where backticks (see qx), which redirect STDOUT of the executed command(s) to the program, are used after that open call. All this seems to refer to an idea of redirecting STDERR to STDOUT in this way.
Alas, STDERR, which that open call made go where STDOUT was going, doesn't get redirected by the backticks and thus still goes "there." In my case, prints to STDERR wind up on the terminal, as I see the warning (ls: cannot access...) with
perl -we'open STDERR, ">&STDOUT"; $out = qx(ls no_such)'
(unlike with perl -we'$out = qx(ls no_such 2>&1)'). Explicit prints to STDERR also go to the terminal as STDOUT (add such prints and redirect output to a file to see).
This may be expected, since & made a copy of the filehandle, so the "new" one (the former STDERR) still goes where STDOUT was going, that is, to the terminal. That is of course unintended in this case, and thus an error.
† Every program in UNIX gets connected to standard streams stdin, stdout, and stderr, with file descriptors 0, 1, and 2 respectively. In a Perl program we then get ready filehandles for these, like the STDERR (for fd 2).
Some generally useful posts on manipulations of file descriptors in the shell:
What does “3>&1 1>&2 2>&3” do in a script?
In the shell, what does “ 2>&1 ” mean?
File descriptors & shell scripting
Order of redirections
Shell redirection i/o order
It's basically dup2(fileno(STDOUT), fileno(STDERR)). See your system's dup2 man page.
In short, it associates STDERR with the same stream as STDOUT at the system level. After the call, writing to either is the same as writing to STDOUT before the change.
Unless someone's messed with STDOUT or STDERR, it's equivalent to the shell command
exec 2>&1

Perl STDERR printed in the wrong order with Tee

I'm trying to redirect STDOUT and STDERR from a perl script - executed from a bash script - to both screen and log file.
perlscript.pl
#!/usr/bin/perl
print "This is a standard output\n";
print "This is a second standard output\n";
print STDERR "This is an error\n";
bashscript.sh
#!/bin/bash
./perlscript.pl 2>&1 | tee -a logfile.log
If I execute the perlscript directly the screen output is printed in the correct order :
This is a standard output
This is a second standard output
This is an error
But when I execute the bash script the STDERR is printed first (in both screen and file) :
This is an error
This is a standard output
This is a second standard output
With a bash script as the child, the output is ordered flawlessly. Is this a bug in perl or tee? Am I doing something wrong?
The usual trick to turn off buffering is to set the variable $|. Add the line below at the beginning of your script.
$| = 1;
This would turn the buffering off. Also refer to this excellent article by MJD explaining buffering in perl. Suffering from Buffering?
I guess this has to do with the way STDOUT and STDERR buffers are flushed. Try
autoflush STDOUT 1;
at the beginning of your perl script so that STDOUT is flushed after each print statement.

Capturing the output of STDERR while piping STDOUT to a file

I have a rather odd situation. I'm trying to automate the backup of a collection of SVN repositories with Perl. I'm shelling out to the svnadmin dump command, which sends the dump to STDOUT, and any errors it encounters to STDERR.
The command I need to run will be of the form:
svnadmin dump $repo -q >$backupFile
STDOUT will go to the backup file, but, STDERR is what I need to capture in my Perl script.
What's the right way to approach this kind of situation?
EDIT:
To clarify:
STDOUT will contain the SVN Dump data
STDERR will contain any errors that may happen
STDOUT needs to end up in a file, and STDERR needs to end up in Perl. At no point can ANYTHING but the original content of STDOUT end up in that stream or the dump will be corrupted and I'll have something worse than no backup at all, a bad one!
Well, there are generic ways to do it within Perl too, but the bash solution (which the above makes me think you're looking for) is to redirect stderr to stdout first, and then redirect stdout to a file. Intuitively this doesn't make a whole lot of sense until you see what's happening internally in bash, but this works:
svnadmin dump $repo -q 2>&1 >$backupFile
However, do not do it the other way around (i.e., put the 2>&1 at the end), or else all the output of both stdout and stderr will go to your file.
Edit to avoid some people's confusion that this doesn't work:
What you want is this:
# perl -e 'print STDERR "foo\n"; print "bar\n";' 2>&1 > /tmp/f
foo
# cat /tmp/f
bar
and specifically you don't want this:
# perl -e 'print STDERR "foo\n"; print "bar\n";' > /tmp/f 2>&1
# cat /tmp/f
foo
bar
Here's one way:
{
local $/; # slurp mode: read all of stderr as a single chunk
open(CMD, "svnadmin dump $repo -q 2>&1 1>$backupFile |") or die "...";
$errinfo = <CMD>; # read the stderr from the above command
close(CMD);
}
In other words, use the shell 2>&1 mechanism to get stderr to a place where Perl can easily read it, and use 1> to get the dump sent to the file. The stuff I wrote about $/ and reading the stderr as a single chunk is just for convenience -- you could read the stderr you get back any way you like of course.
While tchrist is certainly correct that you can use handle redirection and backticks to make this work, I can also recommend David Golden's Capture::Tiny module. It gives generic interfaces for capturing or tee-ing STDOUT and STDERR; from there you can do with them what you will.
This stuff is really easy. It’s what backticks were invented for, for goodness’ sake. Just do:
$his_error_output = `somecmd 2>&1 1>somefile`;
and voilà you’re done!
I don’t understand what the trouble is. Didn’t you have your gazzintas drilled into you as a young child, the way Jethro did? :)
From perldoc perlop for qx:
To read both a command's STDOUT and its STDERR separately, it's
easiest to redirect them separately to files, and then read from those
files when the program is done:
system("program args 1>program.stdout 2>program.stderr");

How can I run an external command and capture its output in Perl?

I'm new to Perl and want to know of a way to run an external command (call it prg) in the following scenarios:
Run prg, get its stdout only.
Run prg, get its stderr only.
Run prg, get its stdout and stderr, separately.
You can use backticks to execute your external program and capture its stdout and stderr.
By default, backticks capture only the stdout of the external program; stderr is not captured and still goes wherever the parent's stderr goes. So
$output = `cmd`;
will capture the stdout of the program cmd without touching stderr.
To capture only stderr you can use the shell's file descriptors as:
$output = `cmd 2>&1 1>/dev/null`;
To capture both stdout and stderr you can do:
$output = `cmd 2>&1`;
Using the above, you'll not be able to differentiate stderr from stdout. To separate stdout from stderr, redirect each to its own file and read the files:
`cmd 1>stdout.txt 2>stderr.txt`;
In most cases you can use the qx// operator (or backticks). It interpolates strings and executes them with the shell, so you can use redirections.
To capture a command's STDOUT (STDERR is unaffected):
$output = `cmd`;
To capture a command's STDERR and STDOUT together:
$output = `cmd 2>&1`;
To capture a command's STDERR but discard its STDOUT (ordering is important here):
$output = `cmd 2>&1 1>/dev/null`;
To exchange a command's STDOUT and STDERR in order to capture the STDERR but leave its STDOUT to come out the old STDERR:
$output = `cmd 3>&1 1>&2 2>&3 3>&-`;
To read both a command's STDOUT and its STDERR separately, it's easiest to redirect them separately to files, and then read from those files when the program is done:
system("program args 1>program.stdout 2>program.stderr");
You can use IPC::Open3 or IPC::Run. Also, read How can I capture STDERR from an external command from perlfaq8.
Beware of the answer by Eugene above (I can't comment on his answer): the syntax to exchange STDOUT and STDERR is valid on Unix-like shells (such as ksh, or bash I guess) but not under Windows CMD (error: 3>& was unexpected at this time.).
The appropriate syntax under Windows CMD with Perl on Windows is:
perl -e "$r=qx{nslookup 255.255.255.255 2>&1 1>&3 3>&2};"
Note that the command:
nslookup 255.255.255.255
will produce (something like) on STDOUT:
Server: mymodem.lan
Address: fd37:c01e:a880::1
and on STDERR:
*** mymodem.lan can't find 255.255.255.255: Non-existent domain
You can test that this syntax works with the following CMD/Perl syntax:
First:
perl -e "$r=qx{nslookup 255.255.255.255 2>&1 1>&3 3>&2}; $r=~s/[\n\r]//eg; print qq{on STDOUT qx result=[$r]};"
you get:
Server: mymodem.lan
Address: fd37:c01e:a880::1
on STDOUT qx result=[*** mymodem.lan can't find 255.255.255.255: Non-existent domain]
Then
perl -e "$r=qx{nslookup 255.255.255.255 2>&1 1>&3 3>&2}; $r=~s/[\n\r]//eg; print STDOUT qq{on STDOUT qx result=[$r]};" 2>&1 1>NUL:
you get:
Server: mymodem.lan
Address: fd37:c01e:a880::1
QED.
Note that it is not possible to tell stderr and stdout apart in the string returned by a qx or backticks command. If you know for sure that the error text returned by your spawned command is exactly N lines long, you can still redirect STDERR to STDOUT as described by Eugene and others, but capture the qx return value in an array instead of a scalar string. The STDERR stream will be returned into the array before the STDOUT, so that the first N lines of your array are the STDERR lines. Like:
@r=qx{nslookup 255.255.255.255 2>&1};
$r[0] is "*** mymodem.lan can't find 255.255.255.255: Non-existent domain"
But of course you must be sure that there is error text on STDERR, and that it is exactly N lines long (stored in @r[0..N-1]). If not, the only solution is using temp files as described above.