Msys: how to keep STDERR on the screen and at the same time copy both STDOUT and STDERR to files - sh

I would like to be able to:
Display STDERR on the screen
Copy STDOUT and STDERR to files (and if possible to the same file)
For information, I am using Msys.
After doing some research on SO, I have tried to use something like:
<my command> > >(tee stdout.log) 2> >(tee stderr.log)
But I got the following error:
sh: syntax error near unexpected token `>'
Any idea on how to do that?

There might be no direct solution in Msys, but the >(tee ...) construct works fine on *nix, OS X, and probably Cygwin.
The workaround is to grep for the errors and warnings we want to keep on the screen.
I have successfully used the following command for a makefile to compile C code:
make 2>&1 | tee make.log | grep -E "(([Ee]rror|warning|make):|In function|undefined)"
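For what it's worth, if two separate log files are acceptable, plain POSIX redirection (no process substitution) should also work under Msys' sh; a minimal sketch:
# stderr goes into the pipe, so tee shows it on screen and saves it to stderr.log;
# stdout goes only to stdout.log
<my command> 2>&1 1>stdout.log | tee stderr.log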

I have a simple script (test.sh) that generates STDOUT and STDERR:
#!/bin/bash
echo hello
rm something
exit
Then, to do what you want, execute it as follows:
./test.sh > stdout.log 2> >(tee stderr.log >&2)
You'll get STDERR on the screen, and two separate log files with STDERR and STDOUT. I used part of the answer given here.
Note that I am assuming you don't have a file called something in the current directory :)
If you want both STDOUT and STDERR to go to the same file, use the -a option on tee:
./test.sh > std.log 2> >(tee -a std.log >&2)
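As a quick check with the test.sh above (assuming no file named something exists; the exact rm message depends on your platform), you would expect something like:
$ ./test.sh > stdout.log 2> >(tee stderr.log >&2)
rm: cannot remove 'something': No such file or directory
$ cat stdout.log
hello
$ cat stderr.log
rm: cannot remove 'something': No such file or directory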

Related

tee stderr to file (csh/tcsh shell)

How to redirect STDERR to a file while both STDOUT and STDERR still show on the screen?
I found many methods on the web, but they do not work in the csh/tcsh shell.
However, my command cannot be run in a bash shell.
I know something like the following:
(command > /dev/null) >& stderr.log
But this will mask the screen display.
tcsh's I/O redirection options only allow redirecting stdout and stderr together, or stdout alone.
One option is to redirect stdout to /dev/tty and then dup stderr into stdout and tee it.
% (command > /dev/tty) |& tee stderr.log
Note that this will always write to the console, even if used in a script which you then pipe somewhere else.
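A variation that additionally keeps a copy of stdout, under the same always-writes-to-the-console caveat, might look like this (an untested sketch, not from the original answer):
% (command | tee stdout.log > /dev/tty) |& tee stderr.log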
You can call other shells from inside tcsh, so it's worth mentioning what they can do.
Bourne-compatible shells can only pipe stdout, but they do have syntax for manipulating file descriptors; by using a third file descriptor as temporary storage we can swap stdout and stderr. The sequence 3>&2 2>&1 1>&3 3>&- accomplishes this and then closes file descriptor 3.
$ (command 3>&2 2>&1 1>&3 3>&- | tee stderr.log) 3>&2 2>&1 1>&3 3>&-
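Reading the swap left to right (my annotation, not part of the original answer):
# 3>&2  make fd 3 a copy of fd 2 (save where stderr currently points)
# 2>&1  make fd 2 a copy of fd 1 (stderr now goes where stdout went)
# 1>&3  make fd 1 a copy of fd 3 (stdout now goes where stderr went)
# 3>&-  close the temporary fd 3
# Inside the parentheses the streams are swapped, so tee sees the original
# stderr; the identical sequence outside the parentheses swaps them back.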
Bash and some other shells like zsh support process substitution, which lets you use a command as a source <(...) or destination >(...).
$ command 2> >(tee stderr.log)
If you don't mind rants, http://www.grymoire.com/unix/CshTop10.txt and http://www.perl.com/doc/FMTEYEWTK/versus/csh.whynot have some decent information about some of the rough edges of (t)csh quoting, redirection, conditionals, and aliases. Some of the information about specific bugs is long out of date, however.

redirect stdout and stderr to separate files and to screen

I have been through quite a few variants of this question but none seems to do exactly what I want. I would like to start csh, have STDOUT go to one file and to the screen, and have STDERR go to a different file, and to the screen as well.
This command:
csh > stdout_file.txt 2> stderr_file.txt
gets STDOUT and STDERR to different files, but then nothing goes to the screen. How can I get what is written to the files to go to the screen as well? I assume tee should be part of this, but whatever I try gives me syntax errors.
((csh | tee f1) >`tty`) 2>&1 | tee f2
or maybe
((csh | tee f1) >`tty`) |& tee f2
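If you can start csh from bash (or another shell with process substitution) instead of from csh itself, a sketch along the lines of the other answers on this page also does what you describe: both streams stay on the screen and each goes to its own file.
csh > >(tee stdout_file.txt) 2> >(tee stderr_file.txt >&2)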

Logging stdout and stderr separately and together simultaneously

I have a specific problem I'd like to solve, and I'm running system() through Perl, which I think runs in bash:
Show stdout and stderr in both.log. Add "done" if it finished.
Append stderr to stderr.log
Don't print to the terminal, except the original command and "done" if the command finished.
So once we combine stdout and stderr, can we separate them again? OR can we first capture stderr and then combine it with stdout after?
bash: system("((" . $cmd . " 2>&1 1>&3 | tee -a stderr.log) 3>&1) > both.log; echo done | tee -a both.log");
Even though we use tee for stderr.log, it doesn't tee the error output to the terminal (which is what I want). I don't completely understand how it works, but I guess tee sends it to the log file and then on to the next redirection.
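My reading of the redirections in that command (an annotation, not from the original post):
# > both.log   the outer subshell's stdout goes to both.log
# 3>&1         fd 3 inside the inner subshell therefore also points at both.log
# 2>&1         $cmd's stderr is sent into the pipe (stdout's target at that moment)
# 1>&3         $cmd's stdout is then pointed at fd 3, i.e. straight into both.log
# tee -a       appends the piped stderr to stderr.log and passes it on, and its
#              stdout also ends up in both.log
# Net effect: stdout -> both.log; stderr -> stderr.log and both.log; nothing
# reaches the terminal, which is why no errors are shown on screen.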
Bonus
I found this here in the comments: http://wiki.bash-hackers.org/howto/redirection_tutorial
This can be used to do the same thing, but all output is also teed to the terminal (it doesn't print "done").
bash:
(($command 2>&1 1>&3 | tee stderr.log) 3>&1 ) | tee both.log
Using Perl 5.14.1.
tcsh alone would not suffice here. It does not support redirecting stderr without stdout.
Use another shell or a designated script.
More here.
UPDATE:
Can this be ported to tcsh?
No. tcsh does not support redirecting stderr without stdout.

How is this bash script resulting in an infinite loop?

From some Googling (I'm no bash expert by any means) I was able to put together a bash script that allows me to run a test suite and output a status bar at the bottom while it runs. It typically takes about 10 hours, and the status bar tells me how many tests passed and how many failed.
It works great sometimes; however, occasionally I run into an infinite loop, which is bad (mmm-kay?). Here's the code I'm using:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_test_suite 2>&1) | tee out.txt |
while IFS=read -r line;
do
printf "%$(tput cols)s\r" " ";
printf "%s\n" "$line";
printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
printf "${WHITE}Failed Tests: ${RED}$( grep -c FAILED out.txt)${WHITE}\r" 2>&1;
done
What happens when I encounter the bug is I'll have an error message repeat infinitely, causing the log file (out.txt) to become some multi-megabyte monstrosity (I think it got into the GB's once). Here's an example error that repeats (with four lines of whitespace between each set):
warning caused by MY::Custom::Perl::Module::TEST_FUNCTION
print() on closed filehandle GEN3663 at /some/CPAN/Perl/Module.pm line 123.
I've tried taking out the 2>&1 redirect, and I've tried changing while IFS=read -r line; to while read -r line;, but I keep getting the infinite loop. What's stranger is that this seems to happen most of the time, but there have been times when the long test suite finished without any problems.
EDIT:
The reason I'm writing this is to upgrade from a black & white test suite to a color-coded test suite (hence the ANSI codes). Previously, I would run the test suite using
run_test_suite > out.txt 2>&1 &
watch 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'
Running it this way gets the same warning from Perl, but prints it to the file and moves on, rather than getting stuck in an infinite loop. Using watch also prints stuff like [32m rather than actually rendering the text as green.
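As an aside: if the watch on your system comes from procps-ng, its -c/--color option makes it interpret those ANSI sequences instead of printing the raw [32m codes (check your watch version before relying on this):
watch -c 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'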
I was able to fix the perl errors and the bash script seems to work well now after a few modifications. However, it seems this would be a safer way to run the test suite in case something like that were to happen in the future:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
run_full_test > out.txt 2>&1 &
tail -f out.txt | while IFS= read line; do
printf "%$(tput cols)s\r" " ";
printf "%s\n" "$line";
printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
There are some downsides to this. Mainly, if I hit Ctrl-C to stop the test, it appears to have stopped, but really run_full_test is still running in the background and I need to remember to kill it manually. Also, when the test is finished tail -f is still running. In other words there are two processes running here and they are not in sync.
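One way to mitigate both issues, assuming GNU tail (which supports --pid), is to tie tail's lifetime to the background job and clean the job up on interrupt; a rough sketch:
run_full_test > out.txt 2>&1 &
test_pid=$!
# kill the background test if the script is interrupted
trap 'kill "$test_pid" 2>/dev/null' INT TERM
# tail exits on its own once the test process dies
tail -f --pid="$test_pid" out.txt | while IFS= read -r line; do
    printf "%s\n" "$line"
done
wait "$test_pid"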
Here is the original script, slightly modified, which addresses those problems, but isn't foolproof (i.e. can get stuck in an infinite loop if run_full_test has issues):
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_full_test 2>&1) | tee out.txt | while IFS= read line; do
printf "%$(tput cols)s\r" " ";
printf "%s\n" "$line";
printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
The bug is in your script. That's not an IO error; that's an illegal argument error. That error happens when the variable you provide as a handle isn't a handle at all, or is one that you've closed.
Writing to a broken pipe results in the process being killed by SIGPIPE or in print returning false with $! set to EPIPE.

How to check if a Perl script doesn't have any compilation errors?

I am calling many Perl scripts in my Bash script (sometimes from csh also).
At the start of the Bash script I want to put a test which checks if all the Perl scripts are devoid of any compilation errors.
One way of doing this would be to actually call the Perl script from the Bash script and grep for "compilation error" in the piped log file, but this becomes messy as different Perl scripts are called at different points in the code, so I want to do this at the very start of the Bash script.
Is there a way to check if the Perl script has no compilation error?
Beware!!
Using the below command to check compilation errors in your Perl program can be dangerous.
$ perl -c yourperlprogram
Randal has written a very nice article on this topic which you should check out:
Sanity-checking your Perl code (Linux Magazine Column 91, Mar 2007)
Quoting from his article:
Probably the simplest thing we can tell is "is it valid?". For this, we invoke perl itself, passing the compile-only switch:
perl -c ourprogram
For this operation, perl compiles the program, but stops just short of the execution phase. This means that every part of the program text is translated into the internal data structure that represents the working program, but we haven't actually executed any code. If there are any syntax errors, we're informed, and the compilation aborts.
Actually, that's a bit of a lie. Thanks to BEGIN blocks (including their layered-on cousin, the use directive), some Perl code may have been executed during this theoretically safe "syntax check". For example, if your code contains:
BEGIN { warn "Hello, world!\n" }
then you will see that message, even during perl -c! This is somewhat surprising to people who consider "compile only" to mean "executes no code". Consider the code that contains:
BEGIN { system "rm", "-rf", "/" }
and you'll see the problem with that argument. Oops.
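A quick way to see this BEGIN-block behaviour for yourself (a demonstration, not part of the quoted article):
$ perl -c -e 'BEGIN { warn "this ran during -c!\n" }'
this ran during -c!
-e syntax OK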
Apart from perl -c program.pl, it's also better to find warnings using the command:
perl -w program.pl
(or perl -wc program.pl if you want the warnings without actually executing the script).
For details see: http://www.perl.com/pub/2004/08/09/commandline.html
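To fail fast at the very start of the Bash script, one option is to run perl -c over every script up front and abort on the first failure. A sketch (the script names and any -I paths are placeholders for your own layout):
#!/bin/bash
# abort early if any of the Perl scripts fails to compile
for script in script1.pl script2.pl helpers/*.pl; do
    if ! perl -c "$script" > /dev/null 2>&1; then
        echo "compilation error in $script" >&2
        perl -c "$script"   # rerun without the redirect to show the diagnostics
        exit 1
    fi
done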
I use the following part of a bash function for larger Perl projects:
# foreach perl app in the src/perl dir
while read -r dir ; do
    echo -e "\n"
    echo "start compiling $dir ..."
    cd "$product_instance_dir/src/perl/$dir"
    # run the autoloader utility
    find . -name '*.pm' -exec perl -MAutoSplit -e 'autosplit($ARGV[0], $ARGV[1], 0, 1, 1)' {} \;
    # foreach perl file check the syntax by setting the correct INC dirs
    while read -r file ; do
        perl -MCarp::Always -I `pwd` -I `pwd`/lib -wc "$file"
        ret=$?
        # run the perltidy inline
        # perltidy -b "$file"
        # sleep 3
        test $ret -ne 0 && break 2
    done < <(find "." -type f \( -name "*.pl" -or -name "*.pm" \))
    test $ret -ne 0 && break
    echo "stop compiling $dir ..."
    echo -e "\n\n"
    cd "$product_instance_dir"
done < <(ls -1 "src/perl")
When you need to check errors/warnings before running, but your file depends on multiple other files, you can add the -I option:
perl -I /path/to/dependency/lib -c /path/to/file/to/check
Edit: from man perlrun
Directories specified by -I are prepended to the search path for modules (@INC).