sed is appending code in the wrong place in all but one file in a directory - sed

I have the following tab-indented code in a bunch of files in a directory:
'oracleServers': dcoDatabaseServers,
'oracleHomes': dcoDatabaseHomes,
'sysPasswords': dcoSYSPasswords,
I want to add 'useOracleDriver': useOracleDriver, after the 'oracleHomes' line in all files. I have this command:
sed -i "/oracleHomes/ a \\\t\t\t\t\t\t\t\t'useOracleDriver': useOracleDriver," $(find . -type f -name 'tc*')
When I run the command, the text gets appended properly in the first file (alphabetically) with a tc* name:
'oracleServers': dcoDatabaseServers,
'oracleHomes': dcoDatabaseHomes,
'useOracleDriver': useOracleDriver,
'sysPasswords': dcoSYSPasswords,
but in all the other files beginning with tc*, the 'useOracleDriver': useOracleDriver, line is there, but it's appended to the very end of the file. Any idea how to get the command to append in the proper place in all the other files?

Try running the sed command on each file individually, instead of passing them all to a single sed invocation:
find . -type f -name 'tc*' -exec sed -i "/oracleHomes/ a \\\t\t\t\t\t\t\t\t'useOracleDriver': useOracleDriver," {} \;
or
for f in $(find . -type f -name 'tc*'); do sed -i "/oracleHomes/ a \\\t\t\t\t\t\t\t\t'useOracleDriver': useOracleDriver," $f; done
The advantage of the second format is that, in the event you're not using GNU sed (and therefore have no -i), you can change it to
for f in $(find . -type f -name 'tc*'); do sed "/oracleHomes/ a \\\t\t\t\t\t\t\t\t'useOracleDriver': useOracleDriver," $f > $f.tmp; mv $f.tmp $f; done
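Separately, note that the for-loop form word-splits the output of find, so it can misbehave if any of the tc* file names contain spaces. A safer sketch (bash; it still assumes GNU sed for -i and the \t escapes, exactly as above):
find . -type f -name 'tc*' -print0 |
while IFS= read -r -d '' f; do
    sed -i "/oracleHomes/ a \\\t\t\t\t\t\t\t\t'useOracleDriver': useOracleDriver," "$f"
done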

Related

Jenkinsfile, "find", ignoring some hidden directories and other folders

I am now working with a Jenkinsfile.
I need to do a find by file extension and then run sed -i on the results, ignoring some hidden directories and other folders.
I don't know the correct syntax.
Example:
def replacePath() {
sh 'sed -i "s/A\\/B/C\\/D\\/E\\/F\\/G\\/A\\/B\\/opt\\/C/g" \$(find . -type f -name "*.json" not path ..... -print0) '
Try using xargs, like so:
find . -type f -name '*.json' ... -print0 | xargs -0 sed -i 's/pattern/replacement/g'
Using xargs has fewer problems than passing arguments on the command line with $(...), particularly when used with -print0, since xargs -0 can cope with file names containing whitespace and shell metacharacters.
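For example, a sketch of how the exclusions might look (./.git and ./node_modules are just hypothetical directories to skip; adjust the -not -path patterns to your layout):
find . -type f -name '*.json' -not -path './.git/*' -not -path './node_modules/*' -print0 |
    xargs -0 sed -i 's/pattern/replacement/g'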

Get current directory in find command and use in sed - one-liner

I'm using this to find files of a particular name in subdirectories, then editing some content:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^varname.*$, varname = \"$value\"," {} +
How can I get the name of the current directory (not the directory the script is executed in, rather the directory the file is found in) and insert it into the replace text? Something like:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^varname.*$, varname = \"$value/$dirname\"," {} +
I'm hoping to keep it as a one-liner. My most recent attempt was this, but the replacement didn't work and I feel there must be a simpler syntax:
find prod -type f -name "file.txt" -exec sh -c '
for file do
dirname=${file%/*}
done' sed -i '' -e "s,^varname.*$, varname = \"$value/$dirname\"," {} +
Example:
value=bar
file.txt input:
varname = "foo"
file.txt output:
varname = "bar/directory_name"
You can do this with GNU awk in much the same way; the sed command you are using can be replaced with:
$ awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1'
So your find would read:
$ find prod -type f -name "file.txt" -exec awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1' {} \;
This might work for you (GNU sed & parallel):
find prod -type f -name "file.txt" |
parallel -qa- --link sed -i 's#\(varname=\).*#\1"{2}{1//}"#' {1} ::: $value
We supply two sources to the parallel command. The first source is the list of files from the find command, read from standard input via the parallel option -a -. The second source is the variable $value; being only a single value, it is linked to the first source using the parallel option --link. The sed command is quoted using the parallel option -q, and normal regexp rules apply, except that the values {2} and {1//} are first interpreted by parallel to represent the second source and the directory of the first source, respectively.
N.B. To check that the commands parallel will run are what you want, use the --dryrun option and inspect the output before running for real.
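To get a feel for how the replacement strings expand, here is a standalone illustration with made-up arguments (--dryrun prints the commands instead of running them):
$ parallel --dryrun --link echo {1//} {2} ::: dir1/a.txt dir2/b.txt ::: X Y
echo dir1 X
echo dir2 Y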
You need to use -execdir and spawn a shell:
find ... -execdir \
bash -c 'sed -i "" -e "s,^varname.*$, varname = \"$value/${PWD}\"," "$1"' -- {} \;
-execdir runs the command in the folder containing the matched file, instead of the folder from which you run find. That lets you use $PWD to refer to the file's directory.
Further note: I am calling bash with two extra arguments:
-exec bash -c '... code ...' -- {}
^^ ^^
I'm passing the -- as a placeholder. When called with -c, bash starts indexing arguments at $0 instead of $1 ($0 would normally contain the script's name). Passing -- there lets $1 hold the filename from {}, which is, in my opinion, more readable and understandable.
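A quick way to see that $0/$1 behaviour in isolation (the file name is just a made-up example):
$ bash -c 'echo "zeroth: $0, first: $1"' -- /some/file.txt
zeroth: --, first: /some/file.txt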

unix find and replace text in dir and subdirs

I'm trying to change the name of "my-silly-home-page-name.html" to "index.html" in all documents within a given master directory and subdirs.
I saw this: Shell script - search and replace text in multiple files using a list of strings.
And this: How to change all occurrences of a word in all files in a directory
I have tried this:
grep -r "my-silly-home-page-name.html" .
This finds the lines on which the text exists, but now I would like to replace 'my-silly-home-page-name' with 'index'.
How would I do this with sed or perl?
Or do I even need sed/perl?
Something like:
grep -r "my-silly-home-page-name.html" . | sed 's/$1/'index'/g'
?
Also; I am trying this with perl, and I try the following:
perl -i -p -e 's/my-silly-home-page-name\.html/index\.html/g' *
This works, but I get an error when perl encounters directories, saying "Can't do inplace edit: SOMEDIR-NAME is not a regular file, <> line N"
Thanks,
jml
find . -type f -exec \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g' {} +
Or if your find doesn't support -exec +,
find . -type f -print0 | xargs -0 \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g'
Both pass as many file names as possible to Perl at a time, as command-line arguments. Both work with any file name, including those that contain newlines.
If you are on Windows and you are using a Windows build of Perl (as opposed to a cygwin build), -i won't work unless you also do a backup of the original. Change -i to -i.bak. You can then go and delete the backups using
find . -type f -name '*.bak' -delete
This should do the job:
find . -type f -print0 | xargs -0 sed -e 's/my-silly-home-page-name\.html/index\.html/g' -i
Basically, it recursively gathers all the files under the given directory (. in the example) with find and, through xargs, runs sed with the same substitution command as the perl command in the question.
Regarding the question about sed vs. perl, I'd say that you should use the one you're more comfortable with since I don't expect huge differences (the substitution command is the same one after all).
There are probably better ways to do this but you can use:
find . -name oldname.html | perl -e 'map { s/[\r\n]//g; $old = $_; s/oldname\.html$/newname.html/; rename $old, $_ } <>'
FYI, grep searches for a pattern; find searches for files.
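To make that distinction concrete, both run from the top of the tree:
grep -rl 'my-silly-home-page-name\.html' .    # lists files whose contents match the pattern
find . -type f -name '*.html'                 # lists files whose names match *.html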

Using sed to grab filename from full path?

I'm new to sed, and need to grab just the filename from the output of find. I need find to output the whole path for another part of my script, but I want to print just the filename without the path. I also need to match starting from the beginning of the line, not from the end. In English: I want to match the first group of characters ending with ".txt" that does not contain a "/". Here's my attempt, which doesn't work:
ryan@fizz:~$ find /home/ryan/Desktop/test/ -type f -name \*.txt
/home/ryan/Desktop/test/two.txt
/home/ryan/Desktop/test/one.txt
ryan@fizz:~$ find /home/ryan/Desktop/test/ -type f -name \*.txt | sed s:^.*/[^*.txt]::g
esktop/test/two.txt
ne.txt
Here's the output I want:
two.txt
one.txt
OK, so the solutions offered answered my original question, but I guess I asked it wrong. I don't want to kill the rest of the line past the file suffix I'm searching for.
So, to be more clear, if the following:
bash$ new_mp3s=`find mp3Dir -type f -name \*.mp3` && cp -rfv $new_mp3s dest
`/mp3Dir/one.mp3' -> `/dest/one.mp3'
`/mp3Dir/two.mp3' -> `/dest/two.mp3'
What I want is:
bash$ new_mp3s=`find mp3Dir -type f -name \*.mp3` && cp -rfv $new_mp3s dest | sed ???
`one.mp3' -> `/dest'
`two.mp3' -> `/dest'
Sorry for the confusion. My original question just covered the first part of what I'm trying to do.
2nd edit:
here's what I've come up with:
DEST=/tmp && cp -rfv `find /mp3Dir -type f -name \*.mp3` $DEST | sed -e 's:[^\`].*/::' -e "s:$: -> $DEST:"
This isn't quite what I want though. Instead of setting the destination directory as a shell variable, I would like to change the first sed operation so it only changes the cp output before the "->" on each line, so that I still have the 2nd part of the cp output to operate on with another '-e'.
3rd edit:
I haven't figured this out using only sed regex's yet, but the following does the job using Perl:
cp -rfv `find /mp3Dir -type f -name \*.mp3` /tmp | perl -pe "s:.*/(.*.mp3).*\`(.*/).*.mp3\'$:\$1 -> \$2:"
I'd like to do it in sed though.
Something like this should do the trick:
find yourdir -type f -name \*.txt | sed 's/.*\///'
or, slightly clearer,
find yourdir -type f -name \*.txt | sed 's:.*/::'
Why don't you use basename instead?
find /mydir | xargs -I{} basename {}
No need for external tools if you're using GNU find:
find /path -name "*.txt" -printf "%f\n"
I landed on the question based on the title: using sed to grab filename from fullpath.
So, using sed, the following is what worked for me...
FILENAME=$(echo "$FULLPATH" | sed -n 's/^\(.*\/\)*\(.*\)/\2/p')
The first group captures any leading directories in the path; this part is discarded.
The second group captures the text following the last slash (/); this is what gets returned.
Examples:
echo "/test/file.txt" | sed -n 's/^\(.*\/\)*\(.*\)/\2/p'
file.txt
echo "/test/asd/asd/entrypoint.sh" | sed -n 's/^\(.*\/\)*\(.*\)/\2/p'
entrypoint.sh
echo "/test/asd/asd/default.json" | sed -n 's/^\(.*\/\)*\(.*\)/\2/p'
default.json
find /mydir | awk -F'/' '{print $NF}'
path="parentdir2/parentdir1/parentdir0/dir/FileName"
name=${path##*/}
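A quick check of that expansion (##*/ strips everything up to and including the last slash):
path="parentdir2/parentdir1/parentdir0/dir/FileName"
echo "${path##*/}"    # prints FileName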

Odd Sed Error Message

bash-3.2$ sed -i.bakkk -e "s#/sa/#/he/#g" .*
sed: .: in-place editing only works for regular files
I'm trying to replace every /sa/ with /he/ in every dot file in a folder. How can I get it working?
Use find -type f to find only regular files matching the name .*; directories (including . and ..) are skipped. -maxdepth 1 prevents find from recursing into subdirectories. You can then use -exec to execute the sed command, using a {} placeholder to tell find where the file names should go.
find . -type f -maxdepth 1 -name '.*' -exec sed -i.bakkk -e "s#/sa/#/he/#g" {} +
Using -exec is preferable to using backticks or plain xargs, as it'll work even on weird file names containing spaces or even newlines (yes, "foo bar\nfile" is a valid file name). An honorable mention goes to find -print0 | xargs -0:
find . -type f -maxdepth 1 -name '.*' -print0 | xargs -0 sed -i.bakkk -e "s#/sa/#/he/#g"
which is equally safe. It's a little more verbose, though, and less flexible since it only works for commands where the file names go at the end (which is, admittedly, 99% of them).
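For the rare commands where the file names cannot go last, xargs -I offers a workaround at the cost of one invocation per file (/some/backup/dir is just a hypothetical target here):
find . -maxdepth 1 -type f -name '.*' -print0 | xargs -0 -I{} cp {} /some/backup/dir/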
Try this one:
sed -i.bakkk -e "s#/sa/#/he/#g" `find .* -type f -maxdepth 0 -print`
This should ignore all directories (e.g., .elm, .pine, .mozilla) and not just . and .. which I think the other solutions don't catch.
The glob pattern .* includes the special directories . and .., which you probably didn't mean to include in your pattern. I can't think of an elegant way to exclude them, so here's an inelegant way:
sed -i.bakkk -e "s#/sa/#/he/#g" $(ls -d .* | grep -v '^\.$\|^\.\.$')
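If you'd rather avoid parsing ls output altogether, a glob-based sketch (bash; the .[!.]* and ..?* patterns match dot files while skipping . and ..):
shopt -s nullglob
for f in .[!.]* ..?*; do
    [ -f "$f" ] && sed -i.bakkk -e "s#/sa/#/he/#g" "$f"
done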