How to redirect standard output to multiple commands in Fish?

This question is the same as this one, https://unix.stackexchange.com/questions/28503/how-can-i-send-stdout-to-multiple-commands, but pertains to the Fish shell. I'd like to extract multiple field values from some JSON output and save them to different files. Here is an example that works in Bash:
bash-3.2$ echo '{"foo": "bar", "baz": "bam"}' | tee >(jq -r '.foo' > foo.txt) >(jq -r '.baz' > baz.txt)
{"foo": "bar", "baz": "bam"}
Note that the files have been saved successfully:
bash-3.2$ cat foo.txt
bar
bash-3.2$ cat baz.txt
bam
However, if I attempt to do the same in Fish, it hangs:
> echo '{"foo": "bar", "baz": "bam"}' | tee >(jq '.foo' > foo.txt) >(jq '.bar' > bar.txt)
Any idea what the difference between Fish and Bash is that is causing this?
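A guess at the cause, plus a minimal workaround sketch (assuming jq is installed): fish treats (...) as command substitution rather than bash-style process substitution, so the embedded jq most likely ends up reading from the terminal and blocks. Capturing the JSON once and running jq per field sidesteps the construct entirely:
# minimal sketch in fish; field and file names match the example above
set json '{"foo": "bar", "baz": "bam"}'
echo $json | jq -r '.foo' > foo.txt
echo $json | jq -r '.baz' > baz.txt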

Related

Inconsistent External Command Output

The terminal transcript speaks for itself:
iMac:~$ echo -n a | md5
0cc175b9c0f1b6a831c399e269772661
iMac:~$ perl -e 'system "echo -n a | md5"'
c3392e9373ccca33629d82b17699420f
Note that the MD5 hash of a is 0cc175b9c0f1b6a831c399e269772661, the first
result. Why does it turn out to be different when the same command is called
by perl?
By the way, perl is perl 5, version 12, subversion 4 (v5.12.4) built for darwin-thread-multi-2level. And the system: Mac OS 10.8, Darwin 12.0
When in the /bin/sh shell on Mac, echo does not treat -n as an option the way it does in /bin/bash; it prints it literally. You can see this if you drop into /bin/sh and run echo -n a; your output should look like this:
sh-3.2$ echo -n a
-n a
so you're literally getting -n a instead of the desired a. Since Perl's system runs /bin/sh to evaluate your command, -n a (followed by a newline) is being passed into md5 instead of your desired a.
The specific question has already been answered, but I want to point out that od is useful to help understand exactly what any command outputs or file contains. This is useful especially to show otherwise non-printing characters.
$ echo -n a | od -tc
0000000 a
0000001
$ perl -e 'system "echo -n a | od -tc";'
0000000 - n a \n
0000005
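One portable workaround is to use printf rather than echo -n, since printf does not suffer from the option-handling difference between /bin/sh and /bin/bash; the command below should therefore report the expected hash, 0cc175b9c0f1b6a831c399e269772661:
perl -e 'system "printf %s a | md5"'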

Change multiple files

The following command is correctly changing the contents of 2 files.
sed -i 's/abc/xyz/g' xaa1 xab1
But what I need to do is change several such files dynamically, and I do not know the file names. I want to write a command that will read all the files in the current directory whose names start with xa* and have sed change their contents.
I'm surprised nobody has mentioned the -exec argument to find, which is intended for this type of use-case, although it will start a process for each matching file name:
find . -type f -name 'xa*' -exec sed -i 's/asd/dsg/g' {} \;
Alternatively, one could use xargs, which will invoke fewer processes:
find . -type f -name 'xa*' | xargs sed -i 's/asd/dsg/g'
Or, more simply, use the + variant of -exec instead of ; so that find provides more than one file per sed invocation:
find . -type f -name 'xa*' -exec sed -i 's/asd/dsg/g' {} +
Better yet:
for i in xa*; do
sed -i 's/asd/dfg/g' "$i"
done
because nobody knows how many files there are, and it's easy to hit command-line length limits.
Here's what happens when there are too many files:
# grep -c aaa *
-bash: /bin/grep: Argument list too long
# for i in *; do grep -c aaa $i; done
0
... (output skipped)
#
You could use grep and sed together. This allows you to search subdirectories recursively.
Linux: grep -r -l <old> * | xargs sed -i 's/<old>/<new>/g'
OS X: grep -r -l <old> * | xargs sed -i '' 's/<old>/<new>/g'
For grep:
-r recursively searches subdirectories
-l prints file names that contain matches
For sed:
-i extension (Note: An argument needs to be provided on OS X)
Those commands won't work in the default sed that comes with Mac OS X.
From man 1 sed:
-i extension
Edit files in-place, saving backups with the specified
extension. If a zero-length extension is given, no backup
will be saved. It is not recommended to give a zero-length
extension when in-place editing files, as you risk corruption
or partial content in situations where disk space is exhausted, etc.
Tried
sed -i '.bak' 's/old/new/g' logfile*
and
for i in logfile*; do sed -i '.bak' 's/old/new/g' $i; done
Both work fine.
@PaulR posted this as a comment, but people should view it as an answer (and this answer works best for my needs):
sed -i 's/abc/xyz/g' xa*
This will work for a moderate number of files, probably on the order of tens, but probably not on the order of millions.
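The limit in question is the operating system's maximum argument-list size; if you are curious how much headroom you have, you can inspect it with getconf (the exact value varies by system):
getconf ARG_MAX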
Another more versatile way is to use find:
sed -i 's/asd/dsg/g' $(find . -type f -name 'xa*')
I'm using find for similar task. It is quite simple: you have to pass it as an argument for sed like this:
sed -i 's/EXPRESSION/REPLACEMENT/g' `find -name "FILE.REGEX"`
This way you don't have to write complex loops, and it is simple to see which files you are going to change: just run find before you run sed.
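For example, to preview which files would be touched before editing anything, run the same find on its own:
find -name "FILE.REGEX"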
You can search for the text 'xxxx' and replace it with 'yyyy' like this:
grep -Rn 'xxxx' /path | awk -F: '{print $1}' | xargs sed -i 's/xxxx/yyyy/'
There are some good answers above. I thought I'd throw in one more that is succinct and parallelizable, using GNU parallel, which I often prefer to xargs:
parallel sed -i 's/abc/xyz/g' {} ::: xa*
Combine this with the -j N option to run N jobs in parallel.
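For example, a sketch that limits the run to four concurrent sed processes:
parallel -j 4 sed -i 's/abc/xyz/g' {} ::: xa*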
If you are able to run a script, here is what I did for a similar situation:
Using a dictionary/hashMap (associative array) and variables for the sed command, we can loop through the array to replace several strings. Including a wildcard in name_pattern allows replacing in place in files matching a pattern (something like name_pattern='File*.txt') in a specific directory (source_dir).
All the changes are written to the logfile in destin_dir.
#!/bin/bash
source_dir=source_path
destin_dir=destin_path
logfile='sedOutput.txt'
name_pattern='File.txt'
echo "--Begin $(date)--" | tee -a $destin_dir/$logfile
echo "Source_DIR=$source_dir destin_DIR=$destin_dir "
declare -A pairs=(
['WHAT1']='FOR1'
['OTHER_string_to replace']='string replaced'
)
for i in "${!pairs[#]}"; do
j=${pairs[$i]}
echo "[$i]=$j"
replace_what=$i
replace_for=$j
echo " "
echo "Replace: $replace_what for: $replace_for"
find "$source_dir" -name "$name_pattern" | xargs sed -i "s/$replace_what/$replace_for/g"
find "$source_dir" -name "$name_pattern" | xargs -I{} grep -n "$replace_for" {} /dev/null | tee -a $destin_dir/$logfile
done
echo " "
echo "----End $(date)---" | tee -a $destin_dir/$logfile
First, the pairs array is declared; each pair is a search/replacement pair, so WHAT1 will be replaced with FOR1 and OTHER_string_to replace will be replaced with string replaced in the file File.txt. In the loop the array is read, the first member of the pair is retrieved as replace_what=$i and the second as replace_for=$j. The find command searches the directory for the filename (which may contain a wildcard) and the sed -i command performs the replacement in the matching file(s) as previously defined. Finally I added a grep redirected to the logfile to log the changes made in the file(s).
This worked for me in GNU Bash 4.3 and sed 4.2.2, and is based upon VasyaNovikov's answer for Loop over tuples in bash.
The Silver Searcher Solution
I'm adding another option for those people who don't know about the amazing tool called The Silver Searcher (command line tool is ag).
Note: You can use grep and other tools to do the same thing here, but The Silver Searcher is fantastic :)
TLDR
ag -l 'abc' | xargs sed -i 's/abc/xyz/g'
Install The Silver Searcher
sudo apt install silversearcher-ag # Debian / Ubuntu
sudo pacman -S the_silver_searcher # Arch / EndeavourOS
sudo yum install epel-release the_silver_searcher # RHEL / CentOS
Demo Files
Paste the following into your terminal to create some demonstration files:
mkdir /tmp/food
cd /tmp/food
content="Everybody loves to abc this food!"
echo "$content" > ./milk
echo "$content" > ./bread
mkdir ./fastfood
echo "$content" > ./fastfood/pizza
echo "$content" > ./fastfood/burger
mkdir ./fruit
echo "$content" > ./fruit/apple
echo "$content" > ./fruit/apricot
Using 'ag'
The following ag command will recursively find all the files that contain the string 'abc'. It ignores the .git directory, .gitignore files, and other ignore files:
$ ag 'abc'
milk
1:Everybody loves to abc this food!
bread
1:Everybody loves to abc this food!
fastfood/burger
1:Everybody loves to abc this food!
fastfood/pizza
1:Everybody loves to abc this food!
fruit/apple
1:Everybody loves to abc this food!
fruit/apricot
1:Everybody loves to abc this food!
To just list the files that contain the string 'abc', use the -l switch:
$ ag -l 'abc'
bread
fastfood/burger
fastfood/pizza
fruit/apricot
milk
fruit/apple
Changing Multiple Files
Finally, using xargs and sed, we can replace the 'abc' string with another string:
ag -l 'abc' | xargs sed -i 's/abc/eat/g'
In the above command, ag lists all the files that contain the string 'abc'. The xargs command takes those file names and passes them as arguments to the sed command.
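If the file names might contain spaces, and your build of ag supports null-delimited output (-0/--null), a safer variant of the same pipeline is:
ag -0 -l 'abc' | xargs -0 sed -i 's/abc/eat/g'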

grep all lines from start of file to line containing a string

If I have an input file containing
statementes
asda
rertte
something
nothing here
I want to grep / extract (without using awk) every line from the start until I get the string "something". How can I do this? grep -B does not work, since it needs the exact number of lines.
Desired output:
statementes
asda
rertte
something
it's not completely robust, but sure -B works... just make the -B count huge:
grep -B `wc -l <filename>` -e 'something' <filename>
You could use a bash while loop and exit early when you hit the string:
$ cat file | while read line; do
> echo $line
> if echo $line | grep -q something; then
> exit 0
> fi
> done
head -n `grep -n -e 'something' <filename> | cut -d: -f1` <filename>
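Another compact option, assuming the matching line itself should be included, is to let sed print every line and quit at the first match:
sed '/something/q' filename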

how to trim wipe output with sed?

I want to trim the output of the wipe command with sed.
I tried to use this one:
wipe -vx7 /dev/sdb 2>&1 | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
but it doesn't work for some reason.
When I use echo and sed to simulate the output of the wipe command, it works!
echo "/dev/sdb: 10%" | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
What am I doing wrong?
Thanks!
That looks like a progress indicator. They are often output directly to the tty instead of to stdout or stderr. You may be able to use the expect script called unbuffer (source) or some other method to create a pseudo tty. Be aware that there will probably be more junk such as \r, etc., that you may need to filter out.
Demonstration:
$ cat foo
#!/bin/sh
echo hello > /dev/tty
$ a=$(./foo)
hello
$ echo $a
$ a=$(unbuffer ./foo)
$ echo $a
hello
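Applied to the wipe example, an untested sketch might combine unbuffer with tr to turn the progress indicator's carriage returns into newlines before the original sed expression:
unbuffer wipe -vx7 /dev/sdb 2>&1 | tr '\r' '\n' | sed -u 's/.*\ \([0-9]\+\).*/\1/g'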

Is `xargs -t` output stderr or stdout, and can you control it?

Say I have a directory with hi.txt and blah.txt, and I execute the following command on a Linux-ish command line:
ls *.* | xargs -t -i{} echo {}
The output you will see is:
echo blah.txt
blah.txt
echo hi.txt
hi.txt
I'd like to redirect the stderr output (say 'echo blah.txt' fails...), leaving only the output from the xargs -t command written to stdout, but it looks as if that goes to stderr as well.
ls *.* | xargs -t -i{} echo {} 2> /dev/null
Is there a way to control it, to make it output to stdout?
Use:
ls | xargs -t -i{} echo {} 2>&1 >/dev/null
The 2>&1 sends the standard error from xargs to where standard output is currently going; the >/dev/null sends the original standard output to /dev/null. So, the net result is that standard output contains the echo commands, and /dev/null contains the file names. We can debate about spaces in file names and whether it would be easier to use a sed script to put 'echo' at the front of each line (with no -t option), or whether you could use:
ls | xargs -i{} echo echo {}
(Tested: Solaris 10, Korn Shell ; should work on other shells and Unix platforms.)
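A sketch of the sed variant mentioned above, which prefixes each file name with echo so the command listing itself lands on stdout (the same caveats about spaces in file names apply):
ls | sed 's/^/echo /'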
If you don't mind seeing the inner workings of the commands, I did manage to segregate the error output from xargs and the error output of the command executed.
al * zzz | xargs -t 2>/tmp/xargs.stderr -i{} ksh -c "ls -dl {} 2>&1"
The (non-standard) command al lists its arguments one per line:
for arg in "$@"; do echo "$arg"; done
The first redirection (2>/tmp/xargs.stderr) sends the error output from xargs to the file /tmp/xargs.stderr. The command executed is 'ksh -c "ls -dl {} 2>&1"', which uses the Korn shell to run ls -ld on the file name with any error output going to standard output.
The output in /tmp/xargs.stderr looks like:
ksh -c ls -dl x1 2>&1
ksh -c ls -dl x2 2>&1
ksh -c ls -dl xxx 2>&1
ksh -c ls -dl zzz 2>&1
I used 'ls -ld' in place of echo to ensure I was testing errors - the files x1, x2, and xxx existed, but zzz does not.
The output on standard output looked like:
-rw-r--r-- 1 jleffler rd 1020 May 9 13:05 x1
-rw-r--r-- 1 jleffler rd 1069 May 9 13:07 x2
-rw-r--r-- 1 jleffler rd 87 May 9 20:42 xxx
zzz: No such file or directory
When run without the command wrapped in 'ksh -c "..."', the I/O redirection was passed as an argument to the command ('ls -ld'), and it therefore reported that it could not find the file '2>&1'. That is, xargs did not itself use the shell to do the I/O redirection.
It would be possible to arrange for various other redirections, but the basic problem is that xargs makes no provision for separating its own error output from that of the commands it executes, so it is hard to do.
The other rather obvious option is to use xargs to write a shell script, and then have the shell execute it. This is the option I showed before:
ls | xargs -i{} echo echo {} >/tmp/new.script
You can then see the commands with:
cat /tmp/new.script
You can run the commands to discard the errors with:
sh /tmp/new.script 2>/dev/null
And, if you don't want to see the standard output from the commands either, append 1>&2 to the end of the command.
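That is, to silence both streams:
sh /tmp/new.script 2>/dev/null 1>&2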
So I believe what you want to have as stdout is:
the stdout from the utility that xargs executes, and
the listing of commands generated by xargs -t.
You want to ignore the stderr stream generated by the executed utility.
Please correct me if I'm wrong.
First, let's create a better testing utility:
% cat myecho
#!/bin/sh
echo STDOUT $@
echo STDERR $@ 1>&2
% chmod +x myecho
% ./myecho hello world
STDOUT hello world
STDERR hello world
% ./myecho hello world >/dev/null
STDERR hello world
% ./myecho hello world 2>/dev/null
STDOUT hello world
%
So now we have something that actually outputs to both stdout and stderr, so we
can be sure we're only getting what we want.
A tangential way to do this is not to use xargs, but rather, make. Echoing a command
and then doing it is kind of what make does. That's its bag.
% cat Makefile
all: $(shell ls *.*)
$(shell ls): .FORCE
./myecho $@ 2>/dev/null
.FORCE:
% make
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% make >/dev/null
%
If you're tied to using xargs, then you need to modify your utility that
xargs uses so it suppresses stderr. Then you can use the 2>&1 trick others
have mentioned to move the command listing generated by xargs -t from stderr
to stdout.
% cat myecho2
#!/bin/sh
./myecho $@ 2>/dev/null
% chmod +x myecho2
% ./myecho2 hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1 | tee >/dev/null
%
So this approach works, and collapses everything you want to stdout (leaving out what you don't want).
If you find yourself doing this a lot, you can write a general utility to suppress stderr:
% cat surpress_stderr
#!/bin/sh
$@ 2>/dev/null
% ./surpress_stderr ./myecho hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./surpress_stderr ./myecho {} 2>&1
./surpress_stderr ./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./surpress_stderr ./myecho hi.txt 2>/dev/null
STDOUT hi.txt
%
xargs -t echoes the commands to be executed to stderr before executing them. If you want them to go to stdout instead, you can redirect stderr to stdout with the 2>&1 construct:
ls *.* | xargs -t -i{} echo {} 2>&1
It looks like xargs -t goes to stderr, and there's not much you can do about it.
You could do:
ls | xargs -t -i{} echo "Foo: {}" >stderr.txt | tee stderr.txt
to display only the stderr data on your terminal as your command runs, and then grep through stderr.txt after to see if anything unexpected occurred, along the lines of grep -v Foo: stderr.txt
Also note that on Unix, ls *.* isn't how you display everything. If you want to see all the files, just run ls on its own.
As I understand your problem, using GNU Parallel http://www.gnu.org/software/parallel/ would do the right thing:
ls *.* | parallel -v echo {} 2> /dev/null