I am trying to capture the output of a command. It works fine when the command succeeds, but when there is an error I am unable to capture what gets displayed on the command line.
E.g.:
$ out=`/opt/torque/bin/qsub submitscript`
qsub: Unauthorized Request MSG=group ACL is not satisfied: user abc#xyz.org, queue home
$ echo $out
$
I want $out to contain that message.
Thanks!
Errors are on stderr, so you need to redirect them into stdout so the backticks will capture them:
out=`/opt/torque/bin/qsub submitscript 2>&1`
if [ $? -gt 0 ] ; then
    # By convention, this is sent to stderr, but if you need it on
    # stdout, just remove the >&2 redirection
    echo "Error: $out" >&2
else
    echo "Success: $out"
fi
You should test the exit status of the command to figure out what the output represents (one way is shown above). It is similar in Perl, with slightly different syntax of course.
Have you tried doing it like this?
$ out=`/opt/torque/bin/qsub submitscript 2>&1 > /dev/null`
$ echo $out
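This works because the order of redirections matters: 2>&1 first points stderr at the original stdout (which the backticks capture), and > /dev/null then discards the command's own stdout, so only the error text lands in $out. A quick self-contained check, using ls on a path that presumably doesn't exist (the exact message wording varies by platform):
$ out=`ls /nonexistent 2>&1 > /dev/null`
$ echo "$out"
ls: cannot access '/nonexistent': No such file or directory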
In my script I need to work with the exit status of the non-last command of a pipeline:
do_real_work 2>&1 | tee real_work.log
To my surprise, $? contains the exit code of tee. Indeed, the following command:
false 2>&1 | tee /dev/null ; echo $?
outputs 0. Surprising, because csh's (almost) equivalent
false |& tee /dev/null ; echo $status
prints 1.
How do I get the exit code of the non-last command of the most recent pipeline?
Bash has set -o pipefail which uses the first non-zero exit code (if any) as the exit code of a pipeline.
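For example, re-running the earlier test under bash:
$ set -o pipefail
$ false 2>&1 | tee /dev/null ; echo $?
1
Bash also records every stage's exit status in the PIPESTATUS array, so echo "${PIPESTATUS[0]}" right after the pipeline yields the first command's exit code even without pipefail.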
POSIX shell doesn't have such a feature AFAIK. You could work around that with a different approach:
tail -F -n0 real_work.log &        # follow the (initially empty) log in the background
do_real_work > real_work.log 2>&1
status=$?                          # save the exit code before kill overwrites $?
kill $!                            # stop the background tail
That is, start following the as-yet nonexistent file before running the command, then kill the background tail once the command finishes; the command's exit code is saved into status first, since kill would overwrite $?.
I would like to be able to
Display STDERR on the screen
Copy STDOUT and STDERR to files (and if possible to the same file)
For information, I am using Msys to do that.
After doing some research on SO, I tried to use something like
<my command> > >(tee stdout.log) 2> >(tee stderr.log)
But I got the following error:
sh: syntax error near unexpected token `>'
Any idea on how to do that?
There might be no direct solution in Msys, but the >(tee ... ) solution works fine on *nix, OS X, and probably Cygwin.
The workaround is to grep for the errors and warnings we want to keep on the screen.
I have successfully used the following command for a makefile to compile C code:
make 2>&1 | tee make.log | grep -E "(([Ee]rror|warning|make):|In function|undefined)"
I have a simple script (test.sh) that generates STDOUT and STDERR:
#!/bin/bash
echo hello
rm something
exit
Then, to do what you want, execute the following:
./test.sh > stdout.log 2> >(tee stderr.log >&2)
You'll get the STDERR on the screen, and two separate log files with STDERR and STDOUT. I used part of the answer given here
Note that I am assuming you don't have a file called something in the current directory :)
If you want both STDOUT and STDERR to go to the same file, use the -a option on tee:
./test.sh > std.log 2> >(tee -a std.log >&2)
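You can check the combined variant with the test.sh above (again assuming no file called something exists; the rm message wording, and the exact interleaving of the two writers into std.log, may vary):
$ ./test.sh > std.log 2> >(tee -a std.log >&2)
rm: cannot remove 'something': No such file or directory
$ cat std.log
hello
rm: cannot remove 'something': No such file or directory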
I know this has been an issue for a while, and I found a lot of discussion about it, but I never figured out how to finally get it done: piping both stdout and stderr. In bash, this would be simply:
cmd 2>&1 | cmd2
That syntax works in fish too. A demo:
$ function cmd1
echo "this is stdout"
echo "this is stderr" >&2
end
$ function cmd2
rev
end
$ cmd1 | cmd2
this is stderr
tuodts si siht
$ cmd1 &| cmd2
rredts si siht
tuodts si siht
Docs: https://fishshell.com/docs/current/language.html#redirects
There's also a handy shortcut, per these docs
&>
Here's the relevant quote (emphasis and white space, mine):
As a convenience, the redirection &> can be used to direct both stdout
and stderr to the same destination. For example:
echo hello &> all_output.txt
redirects both stdout and stderr to the file
all_output.txt. This is equivalent to echo hello > all_output.txt 2>&1.
I have a specific problem I'd like to solve, and I'm running system() through Perl, which I think runs in bash:
Show stdout and stderr in both.log. Add "done" if it finished.
Append stderr to stderr.log
Don't print to the terminal, except the original command and "done" if the command finished.
So once we combine stdout and stderr, can we separate them again? Or can we first capture stderr and then combine it with stdout afterwards?
bash: system("((" . $cmd . " 2>&1 1>&3 | tee -a stderr.log) 3>&1) > both.log; echo done | tee -a both.log");
Even though we use tee for stderr.log, it doesn't tee the error output to the terminal (which is what I want). I don't completely understand how it works, but I guess the tee sends it both to the log file and on to the next file descriptor redirection.
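Unrolled, the plumbing looks roughly like this, with do_work standing in for $cmd (the name is illustrative). The outer > both.log sets the subshell's stdout; 3>&1 then makes fd 3 a second handle on both.log; 2>&1 1>&3 sends do_work's stderr into the pipe and its stdout to fd 3; so tee appends the stderr lines to stderr.log while its own copy lands in both.log rather than on the terminal:
( (
    do_work 2>&1 1>&3 | tee -a stderr.log
) 3>&1 ) > both.log
echo done | tee -a both.log   # only this tee has the terminal as its stdout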
Bonus
I found this here in the comments: http://wiki.bash-hackers.org/howto/redirection_tutorial
This can be used to do the same thing, except that all output is also teed to the terminal, because here the final tee's stdout is the terminal itself (it doesn't print "done"):
bash:
(($command 2>&1 1>&3 | tee stderr.log) 3>&1 ) | tee both.log
Using Perl 5.14.1.
tcsh alone would not suffice here. It does not support redirecting stderr without stdout.
Use another shell or a designated script.
More here.
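For example, from tcsh you could hand the whole job to a POSIX shell, which can split the streams freely (cmd is a placeholder for the real command):
sh -c 'cmd > stdout.log 2> stderr.log'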
UPDATE:
Can this be ported to tcsh?
No. tcsh does not support redirecting stderr without stdout.
From some Googling (I'm no bash expert by any means) I was able to put together a bash script that allows me to run a test suite and output a status bar at the bottom while it runs. It typically takes about 10 hours, and the status bar tells me how many tests passed and how many failed.
It works great sometimes, however occasionally I will run into an infinite loop, which is bad (mmm-kay?). Here's the code I'm using:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_test_suite 2>&1) | tee out.txt |
while IFS=read -r line;
do
printf "%$(tput cols)s\r" " ";
printf "%s\n" "$line";
printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
printf "${WHITE}Failed Tests: ${RED}$( grep -c FAILED out.txt)${WHITE}\r" 2>&1;
done
What happens when I encounter the bug is that an error message repeats infinitely, causing the log file (out.txt) to become a multi-megabyte monstrosity (I think it got into the GBs once). Here's an example error that repeats (with four lines of whitespace between each set):
warning caused by MY::Custom::Perl::Module::TEST_FUNCTION
print() on closed filehandle GEN3663 at /some/CPAN/Perl/Module.pm line 123.
I've tried taking out the 2>&1 redirect, and I've tried changing while IFS=read -r line; to while read -r line;, but I keep getting the infinite loop. What's stranger is this seems to happen most of the time, but there have been times I finish the long test suite without any problems.
EDIT:
The reason I'm writing this is to upgrade from a black & white test suite to a color-coded test suite (hence the ANSI codes). Previously, I would run the test suite using
run_test_suite > out.txt 2>&1 &
watch 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'
Running it this way gets the same warning from Perl, but prints it to the file and moves on rather than getting stuck in an infinite loop. Using watch also prints things like [32m rather than actually rendering the text as green.
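As an aside, that [32m noise can be avoided if your watch is the procps-ng version, which has a -c/--color flag to interpret ANSI sequences (an assumption worth checking with watch --help on your system):
watch -c 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'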
I was able to fix the Perl errors, and the bash script seems to work well now after a few modifications. However, it seems this would be a safer way to run the test suite in case something like that were to happen in the future:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
# Run the suite in the background, capturing stdout and stderr in out.txt
run_full_test > out.txt 2>&1 &
# Follow the log and redraw the status line for each new line of output
tail -f out.txt | while IFS= read -r line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
There are some downsides to this. Mainly, if I hit Ctrl-C to stop the test, it appears to have stopped, but run_full_test is actually still running in the background and I need to remember to kill it manually. Also, when the test is finished, tail -f is still running. In other words, there are two processes running here and they are not in sync.
Here is the original script, slightly modified, which addresses those problems, but isn't foolproof (i.e. can get stuck in an infinite loop if run_full_test has issues):
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_full_test 2>&1) | tee out.txt | while IFS= read -r line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
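Alternatively, the background-plus-tail variant can be kept in sync if your tail is GNU coreutils: its --pid option makes tail exit once the watched process dies, and an INT trap covers the Ctrl-C case. A minimal sketch under those assumptions (status-line printing omitted for brevity):
#!/bin/bash
run_full_test > out.txt 2>&1 &
test_pid=$!
trap 'kill "$test_pid" 2>/dev/null' INT   # Ctrl-C also stops the background suite
# GNU tail exits on its own once run_full_test is gone
tail -f --pid="$test_pid" out.txt | while IFS= read -r line; do
    printf "%s\n" "$line"
done
wait "$test_pid"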
The bug is in your script. That's not an IO error; that's an illegal argument error. That error happens when the variable you provide as a handle isn't a handle at all, or is one that you've closed.
Writing to a broken pipe results in the process being killed by SIGPIPE or in print returning false with $! set to EPIPE.