Use grep / sed for filename search & replace? - command-line

I have a bunch of image files that were incorrectly named 'something#x2.png' and they need to be 'something#2x.png'. They're spread across multiple directories like so:
/images
    something#x2.png
/icons
    icon#x2.png
/backgrounds
    background#x2.png
How can I use grep + sed to find/replace as needed?

Ruby (1.9+)
$ ruby -e 'Dir["**/*#x2.png"].each{|x| File.rename( x, x.sub(/#x2/,"#2x") ) }'

Look at qmv and rename
find -iname '*.png' -print0 | xargs -0 qmv -d
will launch your default editor and allow you to interactively edit the names
rename s/#x2/#2x/ *.png

The forward slashes look Linux/Unix-like to me. Do you have find and rename?
find -name "*#x2*" -execdir rename 's/#x2/#2x/' {} +
rename is worth installing; it ships in a Perl package.
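If you want a dry run first, the Perl rename script also accepts -n (no-act), which only prints what it would rename, for example:
find . -name "*#x2*" -execdir rename -n 's/#x2/#2x/' {} +
Drop the -n once the output looks right.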

With bash 2.x/3.x
#!/bin/bash
while IFS= read -r -d $'\0' file; do
    echo mv "$file" "${file/\#x2/#2x}"
done < <(find images/ -type f -name "something*#x2*.png" -print0)
With bash 4.x
#!/bin/bash
shopt -s globstar
for file in images/**; do
    [[ "$file" == *something*#x2*.png ]] && echo mv "$file" "${file/\#x2/#2x}"
done
Note:
In each case I left in an echo so you can do a dry run; remove the echo once the output looks right. Also note the backslash in ${file/\#x2/#2x}: without it, a leading # in the pattern is treated as a "match at the start of the string" anchor and nothing gets replaced.
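A quick way to see the anchoring behaviour for yourself (illustrative only):
$ file='images/something#x2.png'
$ echo "${file/#x2/#2x}"     # unescaped leading # anchors the pattern: nothing changes
images/something#x2.png
$ echo "${file/\#x2/#2x}"    # escaped: the literal #x2 is replaced
images/something#2x.png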

Related

How to remove some text in long filename from bunch of files in directory

I can't boot my Windows PC today, so I'm on my second OS, Linux Mint. With my limited knowledge of Linux and shell scripts, I really don't know how to do this.
I have a bunch of files in a directory, generated by my system, and I need to remove the 12 characters immediately before ".txt".
Sample filenames:
filename1--2c4wRK77Wk.txt
filename2-2ZUX3j6WLiQ.txt
filename3-8MJT42wEGqQ.txt
filename4-sQ5Q1-l3ozU.txt
filename5--Way7CDEyAI.txt
Desired result:
filename1.txt
filename2.txt
filename3.txt
filename4.txt
filename5.txt
Any help would be greatly appreciated.
Here is a programmatic way of doing this while still trying to account for pesky edge cases:
#!/bin/sh
set -e
find . -name "filename*" > /tmp/filenames.list
while read -r FILENAME; do
    NEW_FILENAME="$(
        echo "$FILENAME" | \
            awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}' | \
            awk -F '/' '{print $NF}' | \
            awk -F '-' '{print $1}'
    )"
    EXTENSION="$(echo "$FILENAME" | awk -F '.' '{print $NF}')"
    if [[ "$EXTENSION" == "backup" ]]; then
        continue
    else
        cp "$FILENAME" "${FILENAME}.backup"
    fi
    if [[ -z "$EXTENSION" ]]; then
        mv "$FILENAME" "$NEW_FILENAME"
    else
        mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
    fi
done < /tmp/filenames.list
Create a List of Files to Edit
First up, create a list of the files that you would like to edit (assuming that they all start with filename) under the current working directory (.):
find . -name "filename*" > /tmp/filenames.list
If they don't start with filename, fret not; you could always use a broader find command like:
find . -type f > /tmp/filenames.list
Iterate over a list of files
To accomplish this, we use a while read loop:
while read -r LINE; do
    # perform action
done < file
If you have bash available, you could use process substitution instead:
while read -r LINE; do
    # perform action
done < <(
    find . -type f
)
Create a rename variable
Next, we create a NEW_FILENAME variable and use awk to strip off the file extension, along with the stray spaces awk introduces when it rebuilds the line:
awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}'
We could just use the following if you knew for certain that there were no other periods in the path (note, though, that the leading ./ printed by find already contains a period, so $1 would be empty here unless that prefix is stripped off first):
awk -F '.' '{print $1}'
The leading ./ is stripped off via
awk -F '/' '{print $NF}'
although this could also have been done easily with basename
With the following command, we strip everything after the first -:
awk -F '-' '{print $1}'
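For example, running those three stages by hand on one of the sample paths (purely illustrative):
$ echo "./filename4-sQ5Q1-l3ozU.txt" | awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}'
/filename4-sQ5Q1-l3ozU
$ echo "/filename4-sQ5Q1-l3ozU" | awk -F '/' '{print $NF}'
filename4-sQ5Q1-l3ozU
$ echo "filename4-sQ5Q1-l3ozU" | awk -F '-' '{print $1}'
filename4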
Creating backups
Feel free to remove this if you deem it unnecessary:
if [[ "$EXTENSION" == "backup" ]]; then
continue
else
cp "$FILENAME" "${FILENAME}.backup"
fi
One thing that we definitely don't want is to make backups of backups. The above logic accounts for this.
Renaming the files
One thing that we don't want to do is append a period to a filename that doesn't have an extension. This accounts for that.
if [[ -z "$EXTENSION" ]]; then
    mv "$FILENAME" "$NEW_FILENAME"
else
    mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
fi
Other things of note
Odds are that your Linux Mint installation has a bash shell, so you could simplify some of these commands with parameter expansion. For instance, echo "$FILENAME" | awk -F '.' '{print $NF}' would become "${FILENAME##*.}".
[[ is not defined in POSIX sh so you will likely just need to replace [[ with [, but review this document first:
https://mywiki.wooledge.org/BashFAQ/031
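As an aside, for this particular set of names a much shorter pure-bash sketch would also do the job, assuming everything before the first - is the part you want to keep (this is my illustration, not part of the script above; keep the echo until the dry run looks right):
for f in filename*.txt; do
    echo mv -- "$f" "${f%%-*}.txt"
done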
From the pattern of the filenames, it looks like the first token (the part before the first "-") can be picked out. After changing to the directory where the files are located, use the following command to rename them:
for srcFile in `ls -1`; do fileN=`echo "$srcFile" | cut -d"-" -f1`; targetFile="$fileN.txt"; mv "$srcFile" "$targetFile"; done
If that observation is wrong, the following command removes exactly the 12 characters before ".txt" (which is itself 4 characters, so the last 16 characters overall):
for srcFile in `ls -1`; do fileN=`echo "$srcFile" | rev | cut -c17- | rev`; targetFile="$fileN.txt"; mv "$srcFile" "$targetFile"; done
A pattern can be added to ls -1 to filter the files in the current directory, if required.
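The "remove exactly 12 characters" variant can also be written with bash parameter expansion alone, which avoids parsing ls output (a sketch; it assumes every name has at least 12 characters before .txt, as in the samples, and the echo makes it a dry run):
for f in *.txt; do
    echo mv -- "$f" "${f%????????????.txt}.txt"
done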

Fish Shell: Delete All Except

Using Fish, how can I delete the contents of a directory except for certain files (or directories)? Something like rm !(file1|file2) in bash, but fishier.
There is no such feature in fish - that's issue #1444.
You can do something like
rm (string match -rv '^file1$|^file2$' -- *)
Note that this will fail on filenames with newlines in them.
Or you can do the uglier:
set -l files *
for file in file1 file2
    if set -l index (contains -i -- $file $files)
        set -e files[$index]
    end
end
rm $files
which should work no matter what the filenames contain.
Or, as mentioned in that issue, you can use find, e.g.
find . -mindepth 1 -maxdepth 1 -type f -a ! \( -name 'file1' -o -name 'file2' \)
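Note that this find invocation only lists the matches; to actually delete them you could, for example, hand them to rm with -exec:
find . -mindepth 1 -maxdepth 1 -type f -a ! \( -name 'file1' -o -name 'file2' \) -exec rm -- {} +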

How to rename file extensions in fish in a for loop?

Here is the equivalent bash script that I am trying to convert to fish:
for j in *.md; do mv -v -- "$j" "${j%.md}.txt"; done
Here is what I tried:
for file in *.md
    mv -v -- "$file" "{{$file}%.md}.txt"
end
But it simply ends up renaming all of the files like so:
‘amazon.md’ -> ‘{{amazon.md}%.md}.txt’
How do I do this correctly?
I found an alternative solution to this:
for file in *.md
    mv -v -- "$file" (basename $file .md).txt
end
It works like a charm!
To do this just with fish:
for j in *.md
    mv -v -- $j (string replace -r '\.md$' .txt $j)
end
The fish shell doesn't support bash-style parameter expansion. The philosophy of the fish shell is to let existing commands do the work instead of reinventing the wheel. You can use sed, for example:
for file in *.md
    mv "$file" (echo "$file" | sed '$s/\.md$/.txt/')
end

Replace a given string in folders throughout a directory

I want to do the equivalent of this, but [maybe recursively] for all, say, .md files within a directory tree.
perl -pi -e 's/FOO/BAR/g' *.md
Use find:
find /path -name "*.md" -exec perl -pi -e 's/FOO/BAR/g' {} \;
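If the tree contains a lot of files, you can also end the -exec with + instead of \; so that perl is invoked with batches of files rather than once per file:
find /path -name "*.md" -exec perl -pi -e 's/FOO/BAR/g' {} +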
A simple, pure-bash one-line solution using parameter expansion:
$ cd ~/
$ mkdir test
$ cd test/
$ touch foo{1..10}.md
$ ls
foo1.md foo10.md foo2.md foo3.md foo4.md foo5.md foo6.md foo7.md foo8.md foo9.md
$ for file in ./*.md; do mv "$file" "${file/foo/bar}"; done
$ ls
bar1.md bar10.md bar2.md bar3.md bar4.md bar5.md bar6.md bar7.md bar8.md bar9.md
Of course it can be combined with find as suggested by devnull:
$ files=($(find ./test -name "*.md"))
$ for file in "${files[@]}"; do mv "$file" "${file/foo/bar}"; done
Or pipe the output of find into a while read loop, which also copes with spaces in the names:
$ find ./test -name "*.md" -print0 | while IFS= read -r -d '' file; do mv "$file" "${file/foo/bar}"; done

sed command in dry run

How is it possible to do a dry run with sed?
I have this command:
find ./ -type f | xargs sed -i 's/string1/string2/g'
But before I really substitute in all the files, I want to check what it WOULD substitute. Copying the whole directory structure just to check is not an option!
Remove the -i and pipe the output to less to paginate through the results. Alternatively, you can redirect the whole thing to one large file by removing the -i and appending > dryrun.out.
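In other words, something like this (a sketch of both variants):
find ./ -type f | xargs sed 's/string1/string2/g' | less
find ./ -type f | xargs sed 's/string1/string2/g' > dryrun.out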
I should note that this script of yours will fail miserably with files that contain spaces in their names, or other nefarious characters like newlines. A better way to do it would be:
while IFS= read -r -d $'\0' file; do
    sed -i 's/string1/string2/g' "$file"
done < <(find ./ -type f -print0)
I would prefer to use the p flag of the s command:
find ./ -type f | xargs sed 's/string1/string2/gp'
Could be combined with the --quiet parameter for less verbose output:
find ./ -type f | xargs sed --quiet 's/string1/string2/gp'
From man sed:
p:
Print the current pattern space.
--quiet:
suppress automatic printing of pattern space
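So on a single file (somefile.txt here is just a placeholder), the combination prints only the lines that would actually change:
sed --quiet 's/string1/string2/gp' somefile.txt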
I know this is a very old thread and the OP doesn't really need this answer, but I came here looking for a dry-run mode myself, so I thought I'd add the advice below for anyone arriving here in the future. What I wanted was to avoid stomping on the backup file unless something is really changing: if you blindly run sed with the -i option and a backup suffix, the existing backup file gets overwritten even when nothing is substituted.
The way I ended up doing it is to pipe the sed output to diff to see whether anything changed, and only then rerun sed with the in-place option, something like this:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
    sed -i.bak -e 's/string1/string2/g' "$fpath"
fi
As per the OP's question, if the requirement is just to see what would change, then instead of running the in-place sed you can do the diff again with an informative message:
if ! sed -e 's/string1/string2/g' "$fpath" | diff -q "$fpath" - > /dev/null 2>&1; then
    echo "File $fpath will change with the below diff:"
    sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -
fi
You could also capture the output in a variable to avoid doing it twice:
diff=$(sed -e 's/string1/string2/g' "$fpath" | diff "$fpath" -)
if [[ $? -ne 0 ]]; then
    echo "File $fpath will change with the below diff:"
    echo "$diff"
fi