Can find tell me if no files exist?

On my FTP server, I look for files delivered in the past day and remove their header and trailer records in place.
find . -type f -name "CDC*" -ctime -1 -exec sed -i'' -e '1d' -e '$d' '{}' \;
This works well.
I want to automate this in a script. But how can I send myself an email notification if no files are found? I am thinking of doing something like:
find . -type f -name "CDC*" -ctime -1 -exec sed -i'' -e '1d' -e '$d' '{}' \;
EXIT=$?
case $EXIT in
0) ...do stuff... ;;
*) mail... ; exit ;;
esac
There has to be a better way, right?

I'm pretty sure that you could take whatever command you need to do the search and pipe a wc -l onto the end of it, then use an if statement to check for zero. So, using your example above:
NUMLINES=`find . -type f -name "CDC*" -ctime -1 -exec sed -i'' -e '1d' -e '$d' '{}' \; -print | wc -l`
if [ "$NUMLINES" -eq 0 ] ; then
foo
fi
Or something like that. I didn't check whether that syntax is correct, but I'm sure you get my drift.
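Putting the pieces together, a minimal sketch of the automated version might look like the following; the mail subject, the recipient address you@example.com, and the availability of the mail command are assumptions, not part of the original question:
#!/bin/sh
# Strip header/trailer records from recent CDC files and count how many were touched.
NUMLINES=`find . -type f -name "CDC*" -ctime -1 -exec sed -i'' -e '1d' -e '$d' '{}' \; -print | wc -l`
if [ "$NUMLINES" -eq 0 ] ; then
    # Nothing delivered in the past day: send yourself a notification.
    echo "No CDC files found in the past day" | mail -s "CDC file check" you@example.com
fi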

Related

Issue with sed "no input files" when using xargs grep

I am trying to create a script which looks for files that are x days old and contain a specific string, removes that string, and logs the files it has changed.
My way is probably not the best way, but I am new to this, so I am looking for some help. I have got to the stage where the script works, but it does not log the file names it has worked on.
The working script is:
find /home/test -mtime +5 -type f ! -size 0 | xargs grep -E -l '(abc_Pswd_[1-9])' | xargs -n1 sed -i '/abc_Pswd_[1-9].*/d'
I am trying to get the file names from the second part of the script. I have tried a few things:
find /home/test -mtime +7 -type f ! -size 0 | xargs grep -E -l '(abc.1x.[1-9] )' > /home/test/tst.log| xargs -n1 sed -i '/abc_Pswd_[1-9].*/d'
This works in terms of logging the result, but it exits with the error "sed: no input files".
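A likely cause (not spelled out in the original post) is that the > /home/test/tst.log redirect consumes grep's output, so the second xargs receives nothing on stdin and invokes sed with no file names. One sketch that both logs the names and keeps passing them along is to insert tee, reusing the paths and the pattern from the working script:
find /home/test -mtime +7 -type f ! -size 0 | xargs grep -E -l '(abc_Pswd_[1-9])' | tee /home/test/tst.log | xargs -n1 sed -i '/abc_Pswd_[1-9].*/d'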

find -ctime bash alternative in Perl

Kind of new to Perl, still navigating my way through.
Is there another way to write the bash command below in "Perl"?
find $INPUT_DIR -ctime -$DAYS_NUM -type f -exec grep -hs EDI_DC {} \; |
grep -i -v xml >> $OUTPUT_DIR/$OUTPUT_FILENAME
where INPUT_DIR, DAYS_NUM, OUTPUT_DIR and OUTPUT_FILENAME are arguments passed during runtime.
When you try to convert a find command to Perl, consider using the find2perl script.
It generates the Perl code.
find2perl 'INPUT_DIR' -ctime -'DAYS_NUM' -type f -exec grep -hs EDI_DC {} \;
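find2perl writes an equivalent Perl program to standard output rather than running it, so one way to use it (a sketch; it assumes the arguments are shell variables, and the file name find_edi.pl is just an example) is:
# Generate the Perl equivalent of the find part of the pipeline:
find2perl "$INPUT_DIR" -ctime -"$DAYS_NUM" -type f -exec grep -hs EDI_DC {} \; > find_edi.pl
# Run it and post-process exactly as in the original bash command:
perl find_edi.pl | grep -i -v xml >> "$OUTPUT_DIR/$OUTPUT_FILENAME"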

Using Sed and Find with Grep Linux

I am writing a script that will search for PHP files that contain a phrase, and I would like that phrase replaced with a new one. Below is my little script, but it is not working: it searches OK, but the search-and-replace section does not work.
find . -type f -name "*.php" -exec grep -H "define('DB_HOST', 'localhost');" {} \; | xargs sed -i "define('DB_HOST', 'localhost');/define('DB_HOST', '10.0.0.1');/g"
Can someone explain to me what I am doing wrong?
Many thanks,
Joe
Did you forget the 's/' at the beginning of the sed expression? As in:
sed 's/expression1/expression2/g'
You seem to have
sed 'expression1/expression2/g'
Edit
Another thing: you don't need to use xargs here. You can use multiple -exec flags, and find will run each one only if all the previous ones succeeded:
find . -name '*.php' -exec grep 'whatever' {} \; -exec sed -i 's/whatever/you want/g' {} \;
This will work:
find . -type f -name "*.php" -exec grep -l "define('DB_HOST', 'localhost');" {} \; | xargs sed -i "s/define('DB_HOST', 'localhost');/define('DB_HOST', '10.0.0.1');/g"
Corrections
Added the missing s/ in the sed search-and-replace command
Used grep -l instead of grep -H
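The grep flag matters because the output is handed to xargs as file names. A small illustration (config.php is a hypothetical file, not from the question):
grep -l "define('DB_HOST', 'localhost');" config.php
# prints:  config.php   (just the file name, which is what xargs should pass to sed)
grep -H "define('DB_HOST', 'localhost');" config.php
# prints:  config.php:define('DB_HOST', 'localhost');   (file name plus matching line, not a valid path)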

sed command in dry run

How is it possible to make a dry run with sed?
I have this command:
find ./ -type f | xargs sed -i 's/string1/string2/g'
But before I really substitute in all the files, I want to check what it WOULD substitute. Copying the whole directory structure to check is not an option!
Remove the -i and pipe it to less to paginate through the results. Alternatively, you can redirect the whole thing to one large file by removing the -i and appending > dryrun.out.
I should note that this script of yours will fail miserably with files that contain spaces in their name or other nefarious characters like newlines or whatnot. A better way to do it would be:
while IFS= read -r -d $'\0' file; do
sed -i 's/string1/string2/g' "$file"
done < <(find ./ -type f -print0)
I would prefer to use the p-option:
find ./ -type f | xargs sed 's/string1/string2/gp'
Could be combined with the --quiet parameter for less verbose output:
find ./ -type f | xargs sed --quiet 's/string1/string2/gp'
From man sed:
p:
Print the current pattern space.
--quiet:
suppress automatic printing of pattern space
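As a quick illustration of why --quiet matters (notes.txt is a made-up example file), only lines where a substitution actually happened get printed; without --quiet, every line is printed and substituted lines appear twice:
printf 'foo\nstring1 here\nbar\n' > notes.txt
sed --quiet 's/string1/string2/gp' notes.txt
# prints only:  string2 here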
I know this is a very old thread and the OP doesn't really need this answer, but I came here looking for a dry-run mode myself, so I thought I'd add the advice below for anyone coming here in the future. What I wanted to do was to avoid stomping on the backup file unless something really changes. If you blindly run sed with the -i option and a backup suffix, the existing backup file gets overwritten even when nothing is substituted.
The way I ended up doing it is to pipe the sed output to diff to see if anything changed, and then rerun sed with the in-place option, something like this:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
sed -i.bak -e 's/string1/string2/g' "$fpath"
fi
As per the OP's question, if the requirement is just to see what would change, then instead of running the in-place sed you could do the diff again with some informative messages:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
echo "File $fpath will change with the below diff:"
sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -
fi
You could also capture the output in a variable to avoid doing it twice:
diff=$(sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -)
if [[ $? -ne 0 ]]; then
echo "File $fpath will change with the below diff:"
echo "$diff"
fi
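Combining this with the find loop from the earlier answer gives a simple whole-tree dry run (a sketch using the same string1/string2 placeholders; nothing is modified unless you swap the diff for the in-place sed):
while IFS= read -r -d $'\0' fpath; do
    if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
        echo "File $fpath would change with the below diff:"
        sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -
    fi
done < <(find ./ -type f -print0)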

Odd Sed Error Message

bash-3.2$ sed -i.bakkk -e "s#/sa/#/he/#g" .*
sed: .: in-place editing only works for regular files
I am trying to replace every /sa/ with /he/ in every dot file in a folder. How can I get it working?
Use find -type f to match only regular files named .* and to exclude the directories . and .., and use -maxdepth 1 to prevent find from recursing into subdirectories. You can then use -exec to run the sed command, with a {} placeholder telling find where the file names should go.
find . -maxdepth 1 -type f -name '.*' -exec sed -i.bakkk -e "s#/sa/#/he/#g" {} +
Using -exec is preferable over using backticks or xargs as it'll work even on weird file names containing spaces or even newlines—yes, "foo bar\nfile" is a valid file name. An honorable mention goes to find -print0 | xargs -0
find . -maxdepth 1 -type f -name '.*' -print0 | xargs -0 sed -i.bakkk -e "s#/sa/#/he/#g"
which is equally safe. It's a little more verbose, though, and less flexible since it only works for commands where the file names go at the end (which is, admittedly, 99% of them).
Try this one:
sed -i.bakkk -e "s#/sa/#/he/#g" `find .* -maxdepth 0 -type f -print`
This should ignore all directories (e.g., .elm, .pine, .mozilla) and not just . and .. which I think the other solutions don't catch.
The glob pattern .* includes the special directories . and .., which you probably didn't mean to include in your pattern. I can't think of an elegant way to exclude them, so here's an inelegant way:
sed -i.bakkk -e "s#/sa/#/he/#g" $(ls -d .* | grep -vE '^\.{1,2}$')