I need to run a search and replace command of something like:
s3.amazonaws.com/username/
changing to:
something.cloudfront.net
in the entire home directory's folders and files, like:
/home/username/
/home/username/folder
/home/username/folder/js
/home/username/folder/file.php
etc.
How would I do that? The files have different extensions, like .php and .js, etc., so I need to do the replacement in all files, like
*.*
I found a way to do the replacement via a perl command line, but it's not working because of the "/" slash.
perl -p -i -e 's/s3.amazonaws.com/username//something.cloudfront.net/g' `find ./ -name *`
Here's an example using curly brackets as delimiters, and being more careful with the arguments from find; you have to watch out for spaces and special characters in the filenames.
find . -type f -print0 |
xargs -0 perl -i -pe's{\Qs3.amazonaws.com/username/}{something.cloudfront.net}g'
On POSIX systems, you don't even need xargs.
find . -type f -exec \
perl -i -pe's{\Qs3.amazonaws.com/username/}{something.cloudfront.net}g' {} +
(The \Q escapes the metacharacters that follow it, so the '.' means "one dot" as opposed to "any one character".)
Recursive file traversals in perl are best done with File::Find. It lets you specify a callback subroutine to execute on every file.
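For example, here is a minimal sketch (untested against your tree, and using the URL strings from the question) that walks the directory with File::Find and edits each regular file in place:
perl -MFile::Find -e '
    find(sub {
        return unless -f;     # regular files only
        local @ARGV = ($_);   # File::Find chdirs into each directory, so $_ is the bare file name
        local $^I  = "";      # enable in-place editing, no backup
        while (<>) { s{\Qs3.amazonaws.com/username/}{something.cloudfront.net}g; print }
    }, ".");
'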
However, if you're really set on using perl-as-sed, can I point out that you can use a character other than a slash as the delimiter?
s/somepattern/some_other_pattern/g;
is functionally equivalent to:
s,somepattern,some_other_pattern,g;
Or you can 'escape' the slash with a backslash, but I prefer using different delimiters for clarity.
So your pattern:
's/s3.amazonaws.com/username//something.cloudfront.net/g'
Would become:
's,s3.amazonaws.com/username/,something.cloudfront.net,g'
(Assuming I've got that middle 'slash' in the right place)
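Putting it together with the find/-exec form from above (just a sketch; I've also escaped the dots so they match literally):
find . -type f -exec \
perl -i -pe's,s3\.amazonaws\.com/username/,something.cloudfront.net,g' {} +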
I have nearly 300 files in 60 folders.
As per the C++ coding guidelines, I need to replace the lines below in *.cpp and *.cl files (I want to remove the extra space between the if/for keyword and the opening parenthesis):
for (* .....)
with
for(* .....)
and also
if (* .....)
with
if(* .....)
Can anyone suggest a grep command to do the search and replace for all the files?
Edited:
I tried the command below:
sed -i 's/for (/for(/g' *.cpp
But got an error like this:
sed: can't read *.cpp: No such file or directory
I think you need the sed command (stream editor, see man sed on your machine). It is more suitable for file editing.
sed -i -E 's/(for|if)[ ]+(\(.*\))/\1\2/g'
Let me explain:
-i stands for in-place, which means the changes are made and saved directly in the file
-E is needed to use extended regular expressions with sed
s/(for|if)[ ]+(\(.*\))/\1\2/g
s stands for substitute
/ is a separator between the parts of the command. Between the first / and the second / is the pattern you need to find (and then replace). Between the second / and the third / is what you want to have after the substitution.
g at the end stands for global, which means every match on a line is replaced, not just the first one.
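A quick way to sanity-check the pattern before touching any real files (the sample lines here are invented):
printf 'if (x > 0) {}\nfor (int i = 0; i < 10; ++i) {}\n' | sed -E 's/(for|if)[ ]+(\(.*\))/\1\2/g'
which should print:
if(x > 0) {}
for(int i = 0; i < 10; ++i) {}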
How do you apply it to every file you need?
This question has already been answered elsewhere, so in the end you need to run the following command in the directory where your files are stored:
find ./ -type f -exec sed -i -E 's/(for|if)[ ]+(\(.*\))/\1\2/g' {} \;
I hope this helps :)
I have created the file "brol.txt" with the following content:
for (correct
for(wrong
if (correct
if(wrong
I have launched the following grep command:
grep -E "for \(|if \(" brol.txt
With the following result:
for (correct
if (correct
Explanation:
grep -E means extended grep (it allows searching for expression1 OR expression2, separated by a pipe character).
\( means searching for a literal round bracket; the backslash is an escape character.
I am porting an sh script that was apparently written for the GNU implementation of sed to the BSD implementation of sed. The exact line in the script, with its original comment, is:
# escape dot in file extension to grep it
ext="$(echo $ext | sed 's/\./\\./' -)"
I am able to reproduce the results with the following (obviously I am not exhausting all possible values of ext):
ext=.h; ext="$(echo $ext | sed 's/\./\\./' -)"; echo [$ext]
Using GNU's implementation of sed the following is returned:
[\.h]
Using BSD's implementation of sed the following is returned:
sed: -: No such file or directory
[]
Executing ext=.h; ext="$(echo $ext | sed 's/\./\\./')"; echo [$ext] returns [\.h] for both implementations of sed.
I have looked at both the GNU and BSD sed man pages and have not found anything about the trailing "-". Googling for sed with a "-" is not very fruitful either.
Is the "-" a typo?
Is the "-" needed for some an unexpected value of $ext?
Is the issue not with sed, but rather with sh?
Can someone direct me to what I should be looking at, or even better, explain what the purpose of the "-" is?
On my system, that syntax isn't documented in the man page, but it is in the
'info' page:
sed OPTIONS... [SCRIPT] [INPUTFILE...]
If you do not specify INPUTFILE, or if INPUTFILE is '-', sed filters the contents of the standard input.
Given that particular usage, I think you could leave off the '-' and it should
still work.
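For instance, with GNU sed both of the following print \.h, while BSD sed only accepts the first form (as the output in the question shows):
echo .h | sed 's/\./\\./'
echo .h | sed 's/\./\\./' -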
You got your specific question answered BUT your script is all wrong. Take a look at this:
# escape dot in file extension to grep it
ext="$(echo $ext | sed 's/\./\\./')"
The main problems with that are:
You're not quoting your variable ($ext), so it will go through filename expansion, and if it contains spaces it will be passed to echo as multiple arguments instead of one. Do this instead:
ext="$(echo "$ext" | sed 's/\./\\./')"
You're using an external command (sed) and a pipe to do something the shell can do trivially itself. Do this instead:
ext="${ext/./\.}"
Worst of all: you're escaping the RE metacharacter (.) in your variable so you can pass it to grep for an RE search as if it were a plain string. That doesn't make much sense and becomes intractable in the general case, where your variable could contain any combination of RE metacharacters. Just do a string search instead of an RE search and you don't need to escape anything. Skip both of the substitution commands above and use one of these instead of grep "$ext" file:
grep -F "$ext" file
fgrep "$ext" file
awk -v ext="$ext" 'index($0,ext)' file
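To see why the fixed-string search matters, compare the two on the same file (file is the placeholder name used above): with an RE search the unescaped dot matches any character, so a pattern like .h also hits lines containing xh or _h; with -F only a literal .h matches.
ext=.h
grep "$ext" file       # RE search: "." matches any character
grep -F "$ext" file    # fixed-string search: only a literal ".h" matches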
I'm trying to create a symbolic link (soft link) from the results of a find command. I'm using sed to remove the ./ that precedes the file name. I'm doing this so I can paste the file name to the end of the path where the link will be saved. I'm working on this with Ubuntu Server 8.04.
I learned from this post, which is kind of the solution to my problem, but not quite:
How do I selectively create symbolic links to specific files in another directory in LINUX?
The resulting file name didn't work, though, so I started trying to learn awk and then decided on sed.
I'm using a one-line loop to accomplish this. The problem is that the loop splits the filename on spaces, creating a link for each word in the filename. There are quite a few files, and I would like to automate the process with each link taking the filename of the file it links to.
I'm comfortable with basic bash commands but I'm far from being a command line expert. I started this with ls and awk and moved to find and sed. My sed syntax could probably be better but I've learned this in two days and I'm kind of stuck now.
for t in `find -type f -name "*txt*" | sed -e 's/.//' -e 's$/$$'`; do echo ln -s $t ../folder2/$t; done
Any help or tips would be greatly appreciated. Thanks.
Easier:
Go to the folder where you want the links to be created and do:
find /path/with/files -type f -name "*txt*" -exec ln -s {} . ';'
Execute your for loop like this:
(IFS=$'\n'; for t in `find -type f -name "*txt*" | sed 's|.*/||'`; do ln -s $t ../folder2/$t; done)
By setting the IFS to only a newline, you should be able to read the entire filename without it getting split at spaces.
The parentheses make sure the loop is executed in a subshell, so the IFS of the current shell does not get changed.
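If you'd rather not touch IFS at all, here is a rough alternative along the same lines (it assumes bash and a find that supports -print0; ../folder2 is taken from your loop):
find /path/with/files -type f -name "*txt*" -print0 |
while IFS= read -r -d '' f; do
    ln -s "$f" ../folder2/"$(basename "$f")"
done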
How can I replace text with other text using GNU sed? I was hacked and am just trying to see if I can remove some of the code that was placed into my php files. The text is of the
eval(base64_decode('blah'));
variety. All of them are identical; I would just like to find and replace all of them in all files. I have tried some commands, but they either needlessly alter and damage other text in the files or simply fail to run at all.
sed -i 's/text/other text/g' filename
(sed -i "s/eval(base64_decode('blah'))/huh/g" filename in your case).
find . -name \*.php -exec sed -i "s/text/other/g" {} \;
You may want to do a dry run first: leave off the -i and redirect the output to a file as a test.
On a Mac (BSD sed), plain -i doesn't work; it requires a backup suffix, e.g. -i '' for no backup.
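For the dry run, one way (somefile.php is just a placeholder name) is to write the output of a non-in-place sed to a new file and diff it against the original:
sed "s/eval(base64_decode('blah'))/huh/g" somefile.php > somefile.php.new
diff somefile.php somefile.php.new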
I'm trying to change the name of "my-silly-home-page-name.html" to "index.html" in all documents within a given master directory and subdirs.
I saw this: Shell script - search and replace text in multiple files using a list of strings.
And this: How to change all occurrences of a word in all files in a directory
I have tried this:
grep -r "my-silly-home-page-name.html" .
This finds the lines on which the text exists, but now I would like to replace 'my-silly-home-page-name' with 'index'.
How would I do this with sed or perl?
Or do I even need sed/perl?
Something like:
grep -r "my-silly-home-page-name.html" . | sed 's/$1/'index'/g'
?
Also; I am trying this with perl, and I try the following:
perl -i -p -e 's/my-silly-home-page-name\.html/index\.html/g' *
This works, but I get an error when perl encounters directories, saying "Can't do inplace edit: SOMEDIR-NAME is not a regular file, <> line N"
Thanks,
jml
find . -type f -exec \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g' {} +
Or if your find doesn't support -exec +,
find . -type f -print0 | xargs -0 \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g'
Both pass as many names to Perl as arguments at a time as possible. Both work with any file name, including those that contain newlines.
If you are on Windows and you are using a Windows build of Perl (as opposed to a Cygwin build), -i won't work unless you also make a backup of the original. Change -i to -i.bak. You can then go and delete the backups using
find . -type f -name '*.bak' -delete
This should do the job:
find . -type f -print0 | xargs -0 sed -e 's/my-silly-home-page-name\.html/index\.html/g' -i
Basically it recursively gathers all the files under the given directory (. in the example) with find and, via xargs, runs sed with the same substitution command as the perl one-liner in the question.
Regarding the question about sed vs. perl, I'd say that you should use the one you're more comfortable with since I don't expect huge differences (the substitution command is the same one after all).
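One caveat if you are on macOS or another BSD: that sed needs a (possibly empty) backup suffix after -i, so the invocation would look more like:
find . -type f -print0 | xargs -0 sed -i '' -e 's/my-silly-home-page-name\.html/index\.html/g'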
There are probably better ways to do this but you can use:
find . -name oldname.html | perl -ne 'chomp; my $old = $_; s/oldname\.html$/newname.html/; rename $old, $_'
Fyi, grep searches for a pattern; find searches for files.