How can I split piped output into pairs of arguments using xargs, where the first argument of each pair goes to a printout command (e.g. echo to stdout/stderr) and the second goes to another command (call it CMD)?
The output, i.e. the source to be piped, can come either as 2 arguments on a single line or as 1 argument per line; everything below is only an illustration:
echo -e 'foo bar\nfoo1 bar1\nfoo2 bar2\n' # ... much more
or
echo -e 'foo\nbar\nfoo1\nbar1\nfoo2\nbar2\n'
So how is
echo -e 'foo bar\nfoo1 bar1\nfoo2 bar2\n' |xargs echo |xargs CMD
supposed to work in reality?
Expected stdout/stderr printout:
foo
{output of CMD being fed with bar}
foo1
{output of CMD being fed with bar1}
foo2
{output of CMD being fed with bar2}
# ...
You can use xargs -n2 to take the arguments two at a time, then (ab)use a shell scriptlet to handle $1 and $2, e.g. (using ls as CMD just as an example):
$ echo -e 'foo\nbar\nfoo1\nbar1\nfoo2\nbar2' | xargs -n2 sh -c 'echo "$1"; ls "$2"' --
foo
ls: cannot access 'bar': No such file or directory
foo1
ls: cannot access 'bar1': No such file or directory
foo2
ls: cannot access 'bar2': No such file or directory
NOTE the required -- as the last xargs argument: it becomes $0 inside the scriptlet, so that $1 and $2 receive the piped values.
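The same scriptlet also handles the two-arguments-per-line form, since xargs splits on any whitespace before grouping by -n2; a minimal sketch, again with ls standing in for CMD:
$ echo -e 'foo bar\nfoo1 bar1\nfoo2 bar2' | xargs -n2 sh -c 'echo "$1"; ls "$2"' --
foo
ls: cannot access 'bar': No such file or directory
foo1
ls: cannot access 'bar1': No such file or directory
foo2
ls: cannot access 'bar2': No such file or directory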
I want to use Perl's open function to run the following commands:
> echo "toto\ntata"
toto
tata
> echo -e "toto\ntata"
toto
tata
> echo -E "toto\ntata"
toto\ntata
> echo -n "toto\ntata"
toto\ntata
Note that for the last command, there is no trailing new line.
So I have the following script:
use strict;
sub run_cmd {
my ($cmd) = @_;
my $fcmd;
print("Run $cmd\n");
open($fcmd, "$cmd |");
while ( my $line = <$fcmd> ) {
print "-> $line";
}
close $fcmd;
}
eval{run_cmd('echo "toto\ntata"')};
eval{run_cmd('echo -e "toto\ntata"')};
eval{run_cmd('echo -E "toto\ntata"')};
eval{run_cmd('echo -n "toto\ntata"')};
But when I run it, I get these results:
Run echo "toto\ntata"
-> toto
-> tata
Run echo -e "toto\ntata"
-> -e toto
-> tata
Run echo -n "toto\ntata"
-> toto
-> tataRun echo -E "toto\ntata"
-> -E toto
-> tata
We can see the -n option is correctly interpreted by open, because there is no newline at the end of the text, but that is not the case for the options -e and -E. Worse, they are printed by echo.
Why are my options not interpreted by open every time? What should I do to get the correct output?
When you provide a shell command to system, qx, open, etc., it is treated as an sh shell command.
Here's the documentation for the echo builtin of dash, which is used by some Linux distros as sh:
echo [-n] args...
Print the arguments on the standard output, separated by spaces. Unless the -n option is present, a newline is output following the arguments.
If any of the following sequences of characters is encountered during output, the sequence is not output. Instead, the specified action is performed:
...
\n Output a newline character.
...
It has no -e or -E option.
Running the commands from dash (aka sh in your case) results in exactly the same output as you received.
$ echo "toto\ntata"
toto
tata
$ echo -e "toto\ntata"
-e toto
tata
$ echo -E "toto\ntata"
-E toto
tata
$ echo -n "toto\ntata"
toto
tata$
Your test was probably run in the bash shell. The following will execute the command using bash instead:
open(my $pipe, "-|", "bash", "-c", $cmd)
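Applied to the script above, a minimal sketch of run_cmd using the list form of open (assuming bash is on your PATH):
sub run_cmd {
    my ($cmd) = @_;
    print("Run $cmd\n");
    # list form of open: the command goes straight to bash, not sh
    open(my $fcmd, "-|", "bash", "-c", $cmd) or die "Cannot run $cmd: $!";
    while ( my $line = <$fcmd> ) {
        print "-> $line";
    }
    close $fcmd;
}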
Is there any reason why you use the external program echo when Perl has many ways to output information to the console or a file?
At your disposal you have print, printf, and say.
Perl input and output functions
Programming Perl
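For example, the echo variants above map directly onto Perl builtins (say needs use feature 'say' on Perl 5.10+); a minimal sketch:
use feature 'say';
print "toto\ntata\n";   # like echo -e "toto\ntata": \n interpreted, trailing newline
print "toto\\ntata\n";  # like echo -E: a literal backslash-n, plus trailing newline
print "toto\\ntata";    # like echo -n: literal text, no trailing newline
say   "toto\ntata";     # say appends the trailing newline for you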
I'm using this to find files of a particular name in subdirectories, then editing some content:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^varname.*$, varname = \"$value\"," {} +
How can I get the name of the current directory (not the directory the script is executed in, rather the directory the file is found in) and insert it into the replace text? Something like:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^ varname.*$, varname = \"$value/$dirname\"," {} +
I'm hoping to keep it as a one-liner. My most recent attempt was this, but the replacement didn't work and I feel there must be a simpler syntax:
find prod -type f -name "file.txt" -exec sh -c '
for file do
dirname=${file%/*}
done' sed -i '' -e "s,^varname.*$, varname = \"$value/$dirname\"," {} +
Example:
value=bar
file.txt input:
varname = "foo"
file.txt output:
varname = "bar/directory_name"
You can do this with GNU awk in the same way. The sed command you make use of can be replaced with:
$ awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1'
So your find would read:
$ find prod -type f -name "file.txt" -exec awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1' {} \;
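For readability, here is the same awk program spread out with comments (behavior unchanged; note it writes the value unquoted, so escape double quotes in the replacement if you want the quotes the sed version produced):
find prod -type f -name "file.txt" -exec awk --inplace -v v="$value" '
  (FNR==1) {                    # once per input file:
    d = FILENAME                #   start from the full path
    sub("/[^/]*$", "", d)       #   strip the last component, leaving the directory
  }
  /^varname/ {                  # on lines starting with varname
    $0 = "varname = " v "/" d   #   rebuild the line as: varname = value/dir
  }
  1                             # print every line, modified or not
' {} \;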
This might work for you (GNU sed & parallel):
find prod -type f -name "file.txt" |
parallel -qa- --link sed -i 's#\(varname *= *\).*#\1"{2}/{1//}"#' {1} ::: $value
We supply 2 sources to the parallel command. The first source is the list of files from the find command, using the parallel option -a -. The second source is the variable $value; being only a single value, it is linked to the first source using the parallel option --link. The sed command is quoted using the parallel option -q, and normal regexp rules apply except that the values {2} and {1//} are first interpreted by parallel to represent the second source and the directory of the first source, respectively.
N.B. To check the commands to parallel are as you desire, use the --dryrun option and check the output before running for real.
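For instance, to inspect first, prepend --dryrun to the options above:
find prod -type f -name "file.txt" |
parallel --dryrun -qa- --link sed -i 's#\(varname *= *\).*#\1"{2}/{1//}"#' {1} ::: $value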
You need to use -execdir and spawn a shell:
find ... -execdir \
bash -c 'sed -i "" -e "s,^ varname.*$, varname = \"$value/${PWD}\"," "$1"' -- {} +
-execdir runs sed in the parent folder of the file instead of the folder from where you run find. This allows the use of $PWD.
Further note: I am calling bash with two arguments:
-exec bash -c '... code ...' -- {}
^^ ^^
I'm passing the -- as a placeholder. When called with -c, bash starts to index arguments at $0 instead of $1 ($0 would normally contain the script's name). That allows the use of $1 for the filename from {}, which is imo more readable and understandable.
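A quick way to see that indexing, with a hypothetical path as the argument; note also that $value must be exported (export value) for the inner bash to see it, since the sed expression sits inside the single-quoted script:
$ bash -c 'echo "\$0=$0 \$1=$1"' -- /some/file
$0=-- $1=/some/file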
"ls" behaves differently when its output is being piped:
> ls ???
bar foo
> ls ??? | cat
bar
foo
How does it know, and how would I do this in Perl?
In Perl, the -t file test operator indicates whether a filehandle (including STDIN) is connected to a terminal.
There is also the -p test operator to indicate whether a filehandle is attached to a pipe.
$ perl -e 'printf "term:%d, pipe:%d\n", -t STDIN, -p STDIN'
term:1, pipe:0
$ perl -e 'printf "term:%d, pipe:%d\n", -t STDIN, -p STDIN' < /tmp/foo
term:0, pipe:0
$ echo foo | perl -e 'printf "term:%d, pipe:%d\n", -t STDIN, -p STDIN'
term:0, pipe:1
File test operator documentation at perldoc -f -X.
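To mimic what ls does, test the output side rather than STDIN; a minimal sketch:
# one name per line when piped, space-separated on a terminal (crude ls imitation)
my @names = qw(bar foo);
if (-t STDOUT) {
    print join(" ", @names), "\n";
} else {
    print "$_\n" for @names;
}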
use IO::Interactive qw(is_interactive);
is_interactive() or warn "Being piped\n";
Say I have a directory with hi.txt and blah.txt, and I execute the following command on a Linux-ish command line:
ls *.* | xargs -t -i{} echo {}
The output you will see is:
echo blah.txt
blah.txt
echo hi.txt
hi.txt
I'd like to redirect the stderr output (say 'echo blah.txt' fails...), leaving only the trace output from the xargs -t command written to stdout, but it looks as if that goes to stderr as well:
ls *.* | xargs -t -i{} echo {} 2> /dev/null
Is there a way to control it, to make it output to stdout?
Use:
ls | xargs -t -i{} echo {} 2>&1 >/dev/null
The 2>&1 sends the standard error from xargs to where standard output is currently going; the >/dev/null sends the original standard output to /dev/null. So, the net result is that standard output contains the echo commands, and /dev/null contains the file names. We can debate about spaces in file names and whether it would be easier to use a sed script to put 'echo' at the front of each line (with no -t option), or whether you could use:
ls | xargs -i{} echo echo {}
(Tested: Solaris 10, Korn Shell ; should work on other shells and Unix platforms.)
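The order of the two redirections matters; a minimal demonstration with a compound command that writes to both streams:
$ { echo out; echo err >&2; } 2>&1 >/dev/null    # stderr survives, stdout is dropped
err
$ { echo out; echo err >&2; } >/dev/null 2>&1    # reversed order: both are dropped
$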
If you don't mind seeing the inner workings of the commands, I did manage to segregate the error output from xargs and the error output of the command executed.
al * zzz | xargs -t 2>/tmp/xargs.stderr -i{} ksh -c "ls -dl {} 2>&1"
The (non-standard) command al lists its arguments one per line:
for arg in "$@"; do echo "$arg"; done
The first redirection (2>/tmp/xargs.stderr) sends the error output from xargs to the file /tmp/xargs.stderr. The command executed is 'ksh -c "ls -dl {} 2>&1"', which uses the Korn shell to run ls -ld on the file name with any error output going to standard output.
The output in /tmp/xargs.stderr looks like:
ksh -c ls -dl x1 2>&1
ksh -c ls -dl x2 2>&1
ksh -c ls -dl xxx 2>&1
ksh -c ls -dl zzz 2>&1
I used 'ls -dl' in place of echo to ensure I was testing errors - the files x1, x2, and xxx existed, but zzz did not.
The output on standard output looked like:
-rw-r--r-- 1 jleffler rd 1020 May 9 13:05 x1
-rw-r--r-- 1 jleffler rd 1069 May 9 13:07 x2
-rw-r--r-- 1 jleffler rd 87 May 9 20:42 xxx
zzz: No such file or directory
When run without the command wrapped in 'ksh -c "..."', the I/O redirection was passed as an argument to the command ('ls -ld'), and it therefore reported that it could not find the file '2>&1'. That is, xargs did not itself use the shell to do the I/O redirection.
It would be possible to arrange for various other redirections, but the basic problem is that xargs makes no provision for separating its own error output from that of the commands it executes, so it is hard to do.
The other rather obvious option is to use xargs to write a shell script, and then have the shell execute it. This is the option I showed before:
ls | xargs -i{} echo echo {} >/tmp/new.script
You can then see the commands with:
cat /tmp/new.script
You can run the commands to discard the errors with:
sh /tmp/new.script 2>/dev/null
And, if you don't want to see the standard output from the commands either, append 1>&2 to the end of the command.
So I believe what you want to have as stdout is:
the stdout from the utility that xargs executes
the listing of commands generated by xargs -t
You want to ignore the stderr stream generated by the executed utility.
Please correct me if I'm wrong.
First, let's create a better testing utility:
% cat myecho
#!/bin/sh
echo STDOUT "$@"
echo STDERR "$@" 1>&2
% chmod +x myecho
% ./myecho hello world
STDOUT hello world
STDERR hello world
% ./myecho hello world >/dev/null
STDERR hello world
% ./myecho hello world 2>/dev/null
STDOUT hello world
%
So now we have something that actually outputs to both stdout and stderr, so we can be sure we're only getting what we want.
A tangential way to do this is not to use xargs, but rather make. Echoing a command and then running it is kind of what make does. That's its bag.
% cat Makefile
all: $(shell ls *.*)
$(shell ls): .FORCE
./myecho $# 2>/dev/null
.FORCE:
% make
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% make >/dev/null
%
If you're tied to using xargs, then you need to modify the utility that xargs runs so it suppresses stderr. Then you can use the 2>&1 trick others have mentioned to move the command listing generated by xargs -t from stderr to stdout.
% cat myecho2
#!/bin/sh
./myecho "$@" 2>/dev/null
% chmod +x myecho2
% ./myecho2 hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1 | tee >/dev/null
%
So this approach works, and collapses everything you want to stdout (leaving out what you don't want).
If you find yourself doing this a lot, you can write a general utility to suppress stderr:
% cat suppress_stderr
#!/bin/sh
"$@" 2>/dev/null
% ./suppress_stderr ./myecho hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./suppress_stderr ./myecho {} 2>&1
./suppress_stderr ./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./suppress_stderr ./myecho hi.txt 2>/dev/null
STDOUT hi.txt
%
xargs -t echoes the commands to be executed to stderr before executing them. If you want them echoed to stdout instead, you can redirect stderr to stdout with the 2>&1 construct:
ls *.* | xargs -t -i{} echo {} 2>&1
It looks like xargs -t goes to stderr, and there's not much you can do about it.
You could do:
ls | xargs -t -i{} echo "Foo: {}" 2>&1 >/dev/null | tee stderr.txt
to display only the stderr data on your terminal as your command runs, and then grep through stderr.txt afterwards to see if anything unexpected occurred, along the lines of grep -v Foo: stderr.txt
Also note that on Unix, ls *.* isn't how you display everything. If you want to see all the files, just run ls on its own.
As I understand your problem, using GNU Parallel http://www.gnu.org/software/parallel/ would do the right thing:
ls *.* | parallel -v echo {} 2> /dev/null
I'm trying to run the following command:
find . -iname '.#*' -print0 | xargs -0 -L 1 foobar
where "foobar" is an alias or function defined in my .bashrc file (in my case, it's a function that takes one parameter). Apparently xargs doesn't recognize these as things it can run. Is there a clever way to remedy this?
Since only your interactive shell knows about aliases, why not just run the alias without forking out through xargs?
find . -iname '.#*' -print0 | while read -r -d '' i; do foobar "$i"; done
If you're sure that your filenames don't have newlines in them (ick, why would they?), you can simplify this to
find . -iname '.#*' -print | while read -r i; do foobar "$i"; done
or even just find -iname '.#*' | ..., since the default directory is . and the default action is -print.
One more alternative:
IFS=$'\n'; for i in `find -iname '.#*'`; do foobar "$i"; done
telling Bash that words are only split on newlines (default: IFS=$' \t\n'). You should be careful with this, though; some scripts don't cope well with a changed $IFS.
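If you do change it, scope the change to a subshell (or save and restore the old value) so the rest of the script keeps normal word splitting; a minimal sketch:
( IFS=$'\n'; for i in $(find . -iname '.#*'); do foobar "$i"; done )  # IFS change dies with the subshell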
Using Bash you may also specify the number of args being passed to your alias (or function) like so:
alias myFuncOrAlias='echo' # alias defined in your ~/.bashrc, ~/.profile, ...
echo arg1 arg2 | xargs -n 1 bash -cil 'myFuncOrAlias "$1"' arg0
echo arg1 arg2 | xargs bash -cil 'myFuncOrAlias "$#"' arg0
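If foobar is a shell function rather than an alias, bash can also export it to the child shell with export -f, which avoids the interactive/login flags; a minimal sketch:
foobar() { echo "got: $1"; }
export -f foobar
find . -iname '.#*' -print0 | xargs -0 -n 1 bash -c 'foobar "$1"' _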
Adding a trailing space to the command being aliased causes other aliased commands to expand:
alias xargs='xargs ' # aliased commands passed to xargs will be expanded
See this answer for more info:
https://stackoverflow.com/a/59842439/11873710
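A quick demonstration of the trailing-space trick (in an interactive shell, or after shopt -s expand_aliases in a script; ll is a hypothetical alias):
$ alias xargs='xargs '    # trailing space: the word after xargs is alias-checked too
$ alias ll='ls -ld'
$ echo /tmp | xargs ll    # runs: ls -ld /tmp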
This doesn't work because xargs expects to be able to exec the program given as its parameter.
Since foobar in your case is just a bash alias or function, there's no program to execute.
Although it involves starting bash for each file returned by find, you could write a small shell script thus:
#!/bin/bash
. "$HOME/.bashrc"
foobar "$@"
and then pass the name of that script as the parameter to xargs
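For example, if the script is saved as ~/bin/run_foobar (a hypothetical path) and made executable:
find . -iname '.#*' -print0 | xargs -0 -L 1 ~/bin/run_foobar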
I usually use find like this:
find . -iname '' -exec cmd '{}' \;
'{}' will get replaced with the filename, and \; is necessary to terminate the execution chain. However, if that doesn't work with your function, you might need to run it through bash:
find .. |sed -e "s/.*/cmd '&'/"|bash
find prints each file on its own line, sed just prefixes it with your command, and the result is piped to bash for execution. Skip the |bash first to see what will happen.
try
find . -iname '.#*' -print0 | xargs -0 -L 1 $(foobar)