I've written an svn hook for text files. The content test looks like this:
svnlook cat -t $txn $repos $file 2>/dev/null | file - | egrep -q 'text$'
and I was wondering if this could be done with Perl. However, something like this doesn't work:
svnlook cat -t $txn $repos $file 2>/dev/null | perl -wnl -e '-T' -
I'm testing the exit status of this invocation ($?) to see if the given file was text or binary. Since I'm getting the content out of svn, I can't use Perl's normal file test on a filename.
I've done a simulation with the file program and with perl, using a text file and a binary file (text.txt, icons.png):
find -type f | xargs -i /bin/bash -c 'if $(cat {} | file - | egrep -q "text$"); then echo "{}: text"; else echo "{}: binary"; fi'
./text.txt: text
./icons.png: binary
find -type f | xargs -i /bin/bash -c 'if $(cat {} | perl -wln -e "-T;"); then echo "{}: text"; else echo "{}: binary"; fi'
./text.txt: text
./icons.png: text
You're testing perl's exit code, but you never set it. You need
perl -le'exit(-T STDIN ?0:1)' < file
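Plugged back into the hook, that would look something like this (a sketch reusing the question's variables; -T STDIN applies Perl's text heuristic to the piped content, so $? is 0 for text and 1 for binary, just like the file/egrep version):
svnlook cat -t "$txn" "$repos" "$file" 2>/dev/null | perl -le 'exit(-T STDIN ? 0 : 1)'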
I would like my generated Makefile to have these new tasks for linting:
perl:
-for f in **/*.pl; do perl -MO=Lint -cw $$f 2>&1 | grep -v "syntax OK"; done
-for f in **/*.pm; do perl -MO=Lint -cw $$f 2>&1 | grep -v "syntax OK"; done
perlcritic:
-perlcritic . | grep -v "source OK"
lint: perl perlcritic
I tried writing a Makefile.PL, but when I run it, the resulting Makefile still lacks the lint task.
use ExtUtils::MakeMaker;
sub MY::lint {
return <<'END';
lint:
echo "Linting!!!!!!!!!!!1"
END
}
WriteMakefile;
I tried reading the CPAN docs, but like most docs, they give snippets without sufficient context, so I'm not even sure if I should declare the subs before or after WriteMakefile.
Also posted on Reddit.
Thanks to briandfoy:
$ cat Makefile.PL
#!/usr/bin/env perl
use strict;
use warnings;
use ExtUtils::MakeMaker;
WriteMakefile;
sub MY::postamble {
return <<'END';
perlwarn:
-find . -type f -name '*.pl' -exec perl -MO=Lint -cw {} \; 2>&1 | grep -v "syntax OK" | grep -v "Can't locate"
-find . -type f -name '*.pm' -exec perl -MO=Lint -cw {} \; 2>&1 | grep -v "syntax OK" | grep -v "Can't locate"
-find . -type f -name '*.t' -exec perl -MO=Lint -cw {} \; 2>&1 | grep -v "syntax OK" | grep -v "Can't locate"
perlcritic:
-perlcritic . | grep -v "source OK"
lint: perlwarn perlcritic
END
}
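With that postamble in place, the extra targets appear after regenerating the Makefile, so the usual flow would be something like:
perl Makefile.PL
make lint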
A solution that works and is slightly easier to maintain is to put the make targets in a separate makefile; that way you benefit from your text editor's capabilities, and it is a bit easier to read:
# In Makefile.PL
use ExtUtils::MakeMaker;
use File::Slurp;
WriteMakefile;
sub MY::postamble {
    # Pull the extra targets in verbatim from a separate, plain makefile
    my $targets = read_file('./script/additional.make');
    return $targets;
}
# In /script/additional.make
perl:
for f in **/*.pl; do perl -MO=Lint -cw $$f 2>&1 | grep -v "syntax OK"; done
for f in **/*.pm; do perl -MO=Lint -cw $$f 2>&1 | grep -v "syntax OK"; done
perlcritic:
perlcritic . | grep -v "source OK"
lint: perl perlcritic
Note for later readers: I am using Module::Install, and it was necessary to use :: as the separator because it seems that Module::Install forbids mixing : and ::. It also disallows the use of -.
I have a bunch of image files that were incorrectly named 'something#x2.png' and they need to be 'something#2x.png'. They're spread across multiple directories like so:
/images
something#x2.png
/icons
icon#x2.png
/backgrounds
background#x2.png
How can I use grep + sed to find/replace as needed?
Ruby (1.9+)
$ ruby -e 'Dir["**/*#x2.png"].each{|x| File.rename( x, x.sub(/#x2/,"#2x") ) }'
Look at qmv and rename
find -iname '*.png' -print0 | xargs -0 qmv -d
will launch your default editor and allow you to interactively edit the names
rename s/#x2/#2x/ *.png
Slashes look linuxy/unixoid to me. Do you have find and rename?
find -name "*#x2*" -execdir rename 's/#x2/#2x/' {} +
rename is worth installing; it ships in a Perl package on many distributions.
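If yours is the Perl rename, it also accepts -n for a dry run (print what would be renamed, change nothing), so you can preview the find invocation above first:
find . -name "*#x2*" -execdir rename -n 's/#x2/#2x/' {} +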
With bash 2.x/3.x
#!/bin/bash
while IFS= read -r -d $'\0' file; do
  # the # must be escaped in the pattern: an unescaped ${file/#x2/...} anchors "x2" to the start of the string
  echo mv "$file" "${file/\#x2/#2x}"
done < <(find images/ -type f -name "*#x2*.png" -print0)
With bash 4.x
#!/bin/bash
shopt -s globstar
for file in images/**; do
[[ "$file" == something*#x2*.png ]] && echo mv "$file" "${file/#x2/#2x}"
done
Note:
In each case I left in an echo so you can do a dry run; remove the echo once the output looks right.
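Assuming icons/ and backgrounds/ sit below images/ (as the loops above imply; the question's layout is ambiguous), the dry run would print something like:
mv images/something#x2.png images/something#2x.png
mv images/icons/icon#x2.png images/icons/icon#2x.png
mv images/backgrounds/background#x2.png images/backgrounds/background#2x.png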
If I have an input file containing
statementes
asda
rertte
something
nothing here
I want to grep / extract (without using awk) every line from the start of the file until I get the string "something". How can I do this? grep -B does not work, since it needs an exact number of lines.
Desired output:
statementes
asda
rertte
something
It's not completely robust, but sure, -B works... just make the -B count huge:
grep -B `wc -l <filename>` -e 'something' <filename>
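With the five-line sample above (assuming it is saved as file), the backticks expand to 5, so this becomes:
$ grep -B 5 -e 'something' file
statementes
asda
rertte
something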
You could use a bash while loop and exit early when you hit the string:
$ cat file | while read line; do
> echo $line
> if echo $line | grep -q something; then
> exit 0
> fi
> done
head -n `grep -n -e 'something' <filename> | cut -d: -f1` <filename>
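On the same sample (again saved as file), the inner pipeline yields the match's line number, so this is equivalent to head -n 4:
$ grep -n -e 'something' file | cut -d: -f1
4
$ head -n 4 file
statementes
asda
rertte
something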
How is it possible to do a dry run with sed?
I have this command:
find ./ -type f | xargs sed -i 's/string1/string2/g'
But before I really substitute in all the files, I want to check what it WOULD substitute. Copying the whole directory structure just to check is not an option!
Remove the -i and pipe it to less to paginate through the results. Alternatively, you can redirect the whole thing to one large file by removing the -i and appending > dryrun.out.
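For example, a minimal dry run of the question's command would be:
find ./ -type f | xargs sed 's/string1/string2/g' | less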
I should note that this script of yours will fail miserably with files that contain spaces in their name or other nefarious characters like newlines or whatnot. A better way to do it would be:
while IFS= read -r -d $'\0' file; do
sed -i 's/string1/string2/g' "$file"
done < <(find ./ -type f -print0)
I would prefer to use the p-option:
find ./ -type f | xargs sed 's/string1/string2/gp'
This can be combined with the --quiet parameter so that only the substituted lines are printed:
find ./ -type f | xargs sed --quiet 's/string1/string2/gp'
From man sed:
p:
Print the current pattern space.
--quiet:
suppress automatic printing of pattern space
I know this is a very old thread and the OP doesn't really need this answer, but I came here looking for a dry-run mode myself, so I thought I'd add the advice below for anyone who comes here in the future. What I wanted was to avoid stomping on the backup file unless something really changes. If you blindly run sed with the -i option and a backup suffix, the existing backup file gets overwritten even when nothing is substituted.
The way I ended up doing is to pipe sed output to diff and see if anything changed and then rerun sed with in-place update option, something like this:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
    sed -i.bak -e 's/string1/string2/g' "$fpath"
fi
As per the OP's question, if the requirement is just to see what would change, then instead of running the in-place sed, you can do the diff again with some informative messages:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
    echo "File $fpath will change with the below diff:"
    sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -
fi
You could also capture the output in a variable to avoid doing it twice:
diff=$(sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -)
if [[ $? -ne 0 ]]; then
    echo "File $fpath will change with the below diff:"
    echo "$diff"
fi
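Tying this back to the question's find command, the whole dry run could be sketched with the space-safe loop shown in an earlier answer:
while IFS= read -r -d $'\0' fpath; do
    diff=$(sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -)
    if [[ $? -ne 0 ]]; then
        echo "File $fpath will change with the below diff:"
        echo "$diff"
    fi
done < <(find ./ -type f -print0)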
Say I have a directory with hi.txt and blah.txt, and I execute the following command on a Linux-ish command line:
ls *.* | xargs -t -i{} echo {}
The output you will see is:
echo blah.txt
blah.txt
echo hi.txt
hi.txt
I'd like to redirect the stderr output of the executed commands (say 'echo blah.txt' fails...), leaving only the trace output from xargs -t written to stdout, but it looks as if the trace goes to stderr as well:
ls *.* | xargs -t -i{} echo {} 2> /dev/null
Is there a way to control it, to make the trace output go to stdout?
Use:
ls | xargs -t -i{} echo {} 2>&1 >/dev/null
The 2>&1 sends the standard error from xargs to where standard output is currently going; the >/dev/null sends the original standard output to /dev/null. So, the net result is that standard output contains the echo commands, and /dev/null contains the file names. We can debate about spaces in file names and whether it would be easier to use a sed script to put 'echo' at the front of each line (with no -t option), or whether you could use:
ls | xargs -i{} echo echo {}
(Tested: Solaris 10, Korn Shell ; should work on other shells and Unix platforms.)
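The sed-script variant mentioned above could be as simple as this sketch, which writes the echo commands to stdout without involving -t at all:
ls | sed 's/^/echo /'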
If you don't mind seeing the inner workings of the commands, I did manage to segregate the error output from xargs and the error output of the command executed.
al * zzz | xargs -t 2>/tmp/xargs.stderr -i{} ksh -c "ls -dl {} 2>&1"
The (non-standard) command al lists its arguments one per line:
for arg in "$@"; do echo "$arg"; done
The first redirection (2>/tmp/xargs.stderr) sends the error output from xargs to the file /tmp/xargs.stderr. The command executed is 'ksh -c "ls -dl {} 2>&1"', which uses the Korn shell to run ls -ld on the file name with any error output going to standard output.
The output in /tmp/xargs.stderr looks like:
ksh -c ls -dl x1 2>&1
ksh -c ls -dl x2 2>&1
ksh -c ls -dl xxx 2>&1
ksh -c ls -dl zzz 2>&1
I used 'ls -ld' in place of echo to ensure I was testing errors - the files x1, x2, and xxx existed, but zzz does not.
The output on standard output looked like:
-rw-r--r-- 1 jleffler rd 1020 May 9 13:05 x1
-rw-r--r-- 1 jleffler rd 1069 May 9 13:07 x2
-rw-r--r-- 1 jleffler rd 87 May 9 20:42 xxx
zzz: No such file or directory
When run without the command wrapped in 'ksh -c "..."', the I/O redirection was passed as an argument to the command ('ls -ld'), and it therefore reported that it could not find the file '2>&1'. That is, xargs did not itself use the shell to do the I/O redirection.
It would be possible to arrange for various other redirections, but the basic problem is that xargs makes no provision for separating its own error output from that of the commands it executes, so it is hard to do.
The other rather obvious option is to use xargs to write a shell script, and then have the shell execute it. This is the option I showed before:
ls | xargs -i{} echo echo {} >/tmp/new.script
You can then see the commands with:
cat /tmp/new.script
You can run the commands to discard the errors with:
sh /tmp/new.script 2>/dev/null
And, if you don't want to see the standard output from the commands either, append 1>&2 to the end of the command.
So I believe what you want to have on stdout is:
- the stdout from the utility that xargs executes
- the listing of commands generated by xargs -t
and you want to ignore the stderr stream generated by the executed utility. Please correct me if I'm wrong.
First, let's create a better testing utility:
% cat myecho
#!/bin/sh
echo STDOUT "$@"
echo STDERR "$@" 1>&2
% chmod +x myecho
% ./myecho hello world
STDOUT hello world
STDERR hello world
% ./myecho hello world >/dev/null
STDERR hello world
% ./myecho hello world 2>/dev/null
STDOUT hello world
%
So now we have something that actually outputs to both stdout and stderr, so we
can be sure we're only getting what we want.
A tangential way to do this is not to use xargs, but rather, make. Echoing a command
and then doing it is kind of what make does. That's its bag.
% cat Makefile
all: $(shell ls *.*)
$(shell ls): .FORCE
./myecho $# 2>/dev/null
.FORCE:
% make
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% make >/dev/null
%
If you're tied to using xargs, then you need to modify your utility that
xargs uses so it suppresses stderr. Then you can use the 2>&1 trick others
have mentioned to move the command listing generated by xargs -t from stderr
to stdout.
% cat myecho2
#!/bin/sh
./myecho "$@" 2>/dev/null
% chmod +x myecho2
% ./myecho2 hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1
./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./myecho hi.txt 2>/dev/null
STDOUT hi.txt
% ls *.* | xargs -t -i{} ./myecho2 {} 2>&1 | tee >/dev/null
%
So this approach works, and collapses everything you want to stdout (leaving out what you don't want).
If you find yourself doing this a lot, you can write a general utility to suppress stderr:
% cat surpress_stderr
#!/bin/sh
"$@" 2>/dev/null
% ./surpress_stderr ./myecho hello world
STDOUT hello world
% ls *.* | xargs -t -i{} ./surpress_stderr ./myecho {} 2>&1
./surpress_stderr ./myecho blah.txt 2>/dev/null
STDOUT blah.txt
./surpress_stderr ./myecho hi.txt 2>/dev/null
STDOUT hi.txt
%
xargs -t echoes the commands to be executed to stderr before executing them. If you want them echoed to stdout instead, you can redirect stderr to stdout with the 2>&1 construct:
ls *.* | xargs -t -i{} echo {} 2>&1
It looks like xargs -t goes to stderr, and there's not much you can do about it.
You could do:
ls | xargs -t -i{} echo "Foo: {}" >stderr.txt | tee stderr.txt
to display only the stderr data on your terminal as your command runs, and then grep through stderr.txt after to see if anything unexpected occurred, along the lines of grep -v Foo: stderr.txt
Also note that on Unix, ls *.* isn't how you display everything. If you want to see all the files, just run ls on its own.
As I understand your problem, using GNU Parallel http://www.gnu.org/software/parallel/ would do the right thing:
ls *.* | parallel -v echo {} 2> /dev/null
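The difference is where the trace goes: parallel -v prints the command line on stdout rather than stderr, so it survives the redirect. A quick comparison (sketch):
ls *.* | xargs -t -i{} echo {} 2>/dev/null      # trace on stderr: discarded
ls *.* | parallel -v echo {} 2>/dev/null        # trace on stdout: kept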