I have files that are named CULT_2009_BARRIERS_EXP_Linear.dbf
and would like to rename them to
CULT_BARRIERS_EXP_Linear.dbf.
The files have a date in the name, which is always different, indicating when the data was captured.
I have tried to remove the date with regular expressions: I want to test whether the string contains numbers and then rename. I have used
if [[ $file =~ [0-9] ]]; then rename -v "s/[0-9]//g" * && rename -v "s/[_]_/_/" *; fi
which partially works. But I would ideally like to have a single rename command, as that is better practice.
A single rename command is enough. Just run the command below in the directory where the .dbf files are actually stored.
rename -v "s/_[0-9]+//g" *.dbf
[0-9]+ matches one or more digits, whereas [0-9] alone matches a single digit character; + repeats the previous token one or more times.
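Because the pattern includes the leading underscore, the match on the sample name is _2009 and nothing else, so no doubled _ is left behind (which is what forced the second rename in your two-step attempt). If you'd like a dry run first, the Perl rename (the one assumed by this answer) also accepts -n to show what would happen without renaming anything:
rename -n "s/_[0-9]+//g" *.dbf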
I need help renaming multiple files in a directory based on the delimiter.
Sample:
From
R01235-XYZ-TRAIL.PDF
TO
R01235-TRAIL.PDF
and
From
XYZ-C12345-TRAIL.PDF
TO
C12345-TRAIL.PDF
is it possible to delete based on the - delimiter?
I am not specifically removing XYZ, but rather anything before the first - and the middle occurrence between two -'s. XYZ is just a representation of the characters in that field.
Thanks!
I tried sed, ls, mv, and also rename, but I couldn't get any of them working.
This might work for you:
rename -n 's/XYZ-//' file
This removes XYZ- from the file name.
If this meets your requirements, remove the -n option for the renaming to take place.
In retrospect, perhaps:
rename -n 's/([A-Z][0-9]{5}-).*-/$1/;s/^.*-([A-Z][0-9]{5}-)/$1/' file
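To preview what those two substitutions do to the sample names without touching any files, you can run the same expressions over plain text with perl -pe:
$ printf '%s\n' R01235-XYZ-TRAIL.PDF XYZ-C12345-TRAIL.PDF | perl -pe 's/([A-Z][0-9]{5}-).*-/$1/;s/^.*-([A-Z][0-9]{5}-)/$1/'
R01235-TRAIL.PDF
C12345-TRAIL.PDF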
With sed:
sed -E 's/^([A-Z][0-9]{5})-[^-]*-(.*)$/mv & \1-\2/; s/^[^-]*-([A-Z][0-9]{5}-.*)$/mv & \1/' file
Check the results and then:
sed -E 's/^([A-Z][0-9]{5})-[^-]*-(.*)$/mv & \1-\2/; s/^[^-]*-([A-Z][0-9]{5}-.*)$/mv & \1/' file | sh
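On the two sample names, the first part of the pipeline prints the mv commands for inspection:
$ printf '%s\n' R01235-XYZ-TRAIL.PDF XYZ-C12345-TRAIL.PDF | sed -E 's/^([A-Z][0-9]{5})-[^-]*-(.*)$/mv & \1-\2/; s/^[^-]*-([A-Z][0-9]{5}-.*)$/mv & \1/'
mv R01235-XYZ-TRAIL.PDF R01235-TRAIL.PDF
mv XYZ-C12345-TRAIL.PDF C12345-TRAIL.PDF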
I'm very, very, very new to programming and trying to learn how to make tedious analysis tasks a little faster. I have a master folder (Master) with 50 experiment folders, and within each experiment folder is another set of folders holding text files. I want to extract 2 lines from one of the text files (experiment title on line 7, slope on line 104) and copy them to a new single file.
So far, all I have learned is how to extract the lines and add to a new file.
sed -n '7p; 104p' reco.txt >> results.txt
How can I extract these two lines from every file 'reco.txt' in the subfolders of the folder 'Master' and export them into a single text file?
As much explanation as you can bear would be great to help me learn.
You can use find in combination with xargs for this. On its own, you can get a list of all relevant files:
find . -name reco.txt -print
This finds all files named reco.txt in the current directory (.) or any subdirectories and writes them to standard output.
Now, normally you could use find's -exec argument, which runs a program on the files found; typically, though, multiple results are combined into a single execution (appended to one command line). Your particular invocation of sed only works on one file at a time, because without the -s option sed numbers lines continuously across all of its input files.
So, instead of -exec, you can use xargs, which is essentially the same thing but with more control.
find Master -name reco.txt -print0 | xargs -0 -n1 sed -n '7p; 104p' > results.txt
This does the following:
Searches in the directory Master or subdirectories for any file named reco.txt.
Outputs each filename with a null terminator instead of a newline (-print0) -- this allows the full path to contain characters that usually need escaping (such as spaces)
Pipes the result into xargs, which does the following:
Accepts null-terminated strings (-0)
Only puts at most one file into each command (-n1)
Runs sed -n '7p; 104p' on that file
Entire output is redirected to results.txt, which will overwrite any existing contents in the file.
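As an aside, you can get the same one-file-at-a-time behaviour from find alone by using -exec terminated with \; (as opposed to +, which batches many files into one invocation):
find Master -name reco.txt -exec sed -n '7p; 104p' {} \; > results.txt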
I'm using zsh, and am trying to write a function to operate on a URL and a pathname:
function my-function
{
    somecommand --url $1 $(readlink -f $2)
}
(to complicate things somewhat, the function actually uses sh syntax, as it is sourced from my ~/.zshrc using a trick like this). The readlink is there to expand symlinks and ensure directories such as . are evaluated correctly (the directory name is stored for later use by somecommand).
When I type a command from the command-line like this:
my-function http://example.org/example /tmp/myexampledirectory
... it works fine, even if I autocomplete the directory name. However, if the directory name contains spaces, zsh completes it like this:
my-function http://example.org/example /tmp/My\ Example\ Directory
For most "normal" commands (cp, mv, etc.) that never seems to cause a problem. However, in my case, somecommand sees $2 as only being /tmp/My - presumably the rest is seen as another argument.
How can I avoid this situation? I would prefer not to alter the standard zsh autocompletion, but rather find a way for my function to handle this.
The zsh completion system works very well here, and the solution is very simple: just put double quotes around the readlink argument in the script:
somecommand --url $1 $(readlink -f "$2")
The point is that, without quotes, the backslashes which escape the whitespace are stripped by the shell, so readlink never sees them. Compare three results:
1. Without backslashes and quotes, readlink -f assumes there are three different files/directories (resolved relative to the current directory) and produces
$ readlink -f /tmp/My Example Directory
/tmp/My
/home/jimmij/Example
/home/jimmij/Directory
2. With escaping backslashes but without quotes, readlink -f understands that there is only one directory, but the backslashes are gone from the output, so somecommand takes three separate arguments
$ readlink -f /tmp/My\ Example\ Directory
/tmp/My Example Directory
3. With backslashes and with double quotes, readlink -f gives the output with the backslashes preserved, which is (most probably) what somecommand expects
$ readlink -f "/tmp/My\ Example\ Directory"
/tmp/My\ Example\ Directory
BTW, as a rule of thumb: if there are any problems with whitespace in shell scripts (bash, zsh, whatever), the first thing to play with is the quoting around variables.
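Putting it all together, a fully quoted version of the function might look like the sketch below. Quoting the command substitution as well (an extra safeguard on top of the answer above) also protects the readlink output from any later word splitting:
function my-function
{
    # both the positional parameter and the command substitution are quoted
    somecommand --url "$1" "$(readlink -f "$2")"
}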
I have multiple files in this format:
file_1.pdf
file_2.pdf
...
file_100.pdf
My question is how can I rename all files, that look like this:
file_001.pdf
file_002.pdf
...
file_100.pdf
I know you can rename multiple files with 'rename', but I don't know how to do it in this case.
You can do this using the Perl tool rename from the shell prompt. (There are other tools with the same name which may or may not be able to do this, so be careful.)
rename 's/(\d+)/sprintf("%03d", $1)/e' *.pdf
If you want to do a dry run to make sure you don't clobber any files, add the -n switch to the command.
Note
If you run the following command (linux)
$ file $(readlink -f $(type -p rename))
and you have a result like
.../rename: Perl script, ASCII text executable
then this seems to be the right tool =)
This seems to be the default rename command on Ubuntu.
To make it the default on Debian and derivatives like Ubuntu:
sudo update-alternatives --set rename /path/to/rename
Explanations
s/// is the basic substitution expression: s/to_replace/replacement/; check perldoc perlre
(\d+) captures, with the parentheses (), at least one digit (\d) or more (+) into $1
sprintf("%03d", $1): sprintf is like printf, but instead of printing it formats a string using the same syntax. %03d zero-pads the number to three digits, and $1 is the captured string. Check perldoc -f sprintf
the latter Perl function call is permitted by the e modifier at the end of the expression
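If you want to see the zero-padding in isolation, here is a quick one-off check from the shell:
$ perl -e 'printf "%03d\n", 7'
007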
If you want to do it with pure bash:
for f in file_*.pdf; do x="${f##*_}"; echo mv "$f" "${f%_*}$(printf '_%03d.pdf' "${x%.pdf}")"; done
(note the debugging echo; remove it to actually perform the renames)
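For reference, the parameter expansions in that loop break down as follows (a quick illustration using a hypothetical name file_42.pdf):
f=file_42.pdf
echo "${f##*_}"                     # 42.pdf -- strips the longest prefix ending in _
echo "${f%_*}"                      # file   -- strips the shortest suffix starting at _
x=42.pdf
printf '_%03d.pdf\n' "${x%.pdf}"    # _042.pdf -- strips .pdf, zero-pads to 3 digits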
I have a bunch of .txt files that need to be made into one big file that can be read by programs such as Microsoft Excel.
The problem is that the files currently do not have a break at the end of them, so they end up in one long line.
Here's an example of what I have (the numbers represent the line number):
1. | first line of txt file
2. | second line
Here's what I want to turn that into:
1. | first line of txt file
2. | second line
3. |
I have around 3000 of these files in a folder, all in the same format. Is there any way to take these files and add a blank line to the end of them all? I'd like to do this without the need for complicated code (PHP, etc.). I know there are similar things you can do using the terminal (I'm on CentOS), but if something does specifically what I require, I'm missing it.
The simplest way to achieve this is with a bash for-loop:
for file in *.txt; do
echo >> "$file"
done
This iterates over all .txt files in the current directory and appends a newline to each file. It can also be written on one line; you only need to add a ; before the done, as shown below.
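For example, the one-line form with the same quoting:
for file in *.txt; do echo >> "$file"; done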
Note that $file is quoted to handle files with spaces and other funny characters in their names.
If the files are spread across many directories and not all in the same one, you can replace *.txt with **/*.txt to iterate over all .txt files in all subdirectories of the current folder (in bash this requires enabling the globstar option first, via shopt -s globstar).
An alternative way is to use sed:
sed -i "$ s:$:\n:" *.txt
The -i flag tells sed to edit the files in-place. $ matches the last line, and then the s command substitutes the end of the line (again $) with a new line (\n), thus appending a line to the end of the file.
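A quick way to convince yourself on a throwaway file (GNU sed assumed; sample.txt is just an illustrative name):
$ printf 'one\ntwo' > sample.txt    # two lines, no trailing newline
$ wc -l sample.txt                  # wc counts newline characters
1 sample.txt
$ sed -i '$ s:$:\n:' sample.txt
$ wc -l sample.txt                  # the appended newline brings the count to 2
2 sample.txt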
Try this snippet:
for f in *; do (cat "$f" && echo "") > "$f.tmp"; done && rename -f 's/\.tmp$//' *.tmp
This basically takes any file in the folder (for f in *; do).
Outputs the file on stdout (cat "$f") followed by a newline (echo "")
and redirects the output into filename.tmp (> "$f.tmp")
and then moves the *.tmp files back over the original files (rename -f 's/\.tmp$//' *.tmp).
Edit:
Or even simpler:
for f in *; do echo "" >> "$f"; done
This basically takes any file in the folder (for f in *; do).
Outputs a newline (echo "")
and appends it to the file (>> "$f")
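One caveat: this appends a newline on every run, so running it repeatedly keeps growing the files. If that matters, a guarded variant (my sketch, not part of the original answer) only appends when the last byte is not already a newline:
for f in *; do [ -n "$(tail -c 1 "$f")" ] && echo "" >> "$f"; done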