How can I remove a specific prefix from files in one directory with fish?

I want to loop over the files in one directory and remove a specific prefix from each filename in batch. How can I do that using the fish shell?

Assume you have a directory like this:
mkdir /tmp/example
touch /tmp/example/prefix-file(seq 9)
Then you can do the following:
for i in /tmp/example/prefix-*
mv $i (echo $i | sed 's/prefix-//')
end
There are many different ways to do this, but I think this is the most straightforward.
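If your fish is new enough to have the string builtin (fish 2.3+), the same loop can avoid spawning sed. A minimal sketch:
for i in /tmp/example/prefix-*
mv $i (string replace 'prefix-' '' $i)
end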

Related

sed -n function calling in same line repeatedly

I'm a complete novice with regard to Unix and writing shell scripts, so apologies if the solution to my problem is quite banal.
Essentially, though, I'm working on a shell script that reads from a TextEdit file called "sursecout.txt" and runs it through another script called "sursec.x" (where sursec.x is simply a series of FORTRAN integrations). It then creates a folder named after a certain Jacobi integral ("CJ ="), and stores the ten SurSec[n] files there (where n is an integer). My problem is that the different folders are created correctly, with appropriate names, but are each filled with identical output files. My suspicion is that something is wrong with my sed command, in that it's reading the same two lines over and over again (whereas it should be reading the first two lines of sursecout.txt, then the next two, and so on).
Here are the commands for the first two folders I want to make, but I have 30, so any help would be appreciated:
./sursec.x < ./sursecout.txt
sed -n '1,2p;3q' sursecout.txt
cd ..
mv ./data ./CJ=3.029990
mkdir data
cd SurSec
./sursec.x < ./sursecout.txt
sed -n '3,4p;5q' sursecout.txt
cd ..
mv ./data ./CJ=3.030659
mkdir data
cd SurSec
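One likely culprit: ./sursec.x < ./sursecout.txt feeds the whole file to sursec.x every time, while the sed output is only printed to the terminal and never used. A hedged sketch of piping each two-line slice into sursec.x instead (assuming sursec.x reads its input from stdin and sursecout.txt holds 30 pairs of lines):
i=1
while [ "$i" -le 59 ]; do
    sed -n "$i,$((i+1))p;$((i+2))q" sursecout.txt | ./sursec.x
    # move the resulting SurSec files into their CJ folder here
    i=$((i+2))
done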

Delete files in a folder using Perl

I want to delete all files in a folder which contain the word TRAR in their filename. I have tried the following:
CONFIG_DIR=`pwd`
VENDOR=ericsson-msc
RELEASE=v1
BASE_DIR=/appl/virtuo/gways
system ("cd /appl/virtuo/gways/config/ericsson-msc/v1/spool/input_d; rm-rf *TRAR");
Remove all your config lines (are they even Perl?):
CONFIG_DIR=`pwd`
VENDOR=ericsson-msc
RELEASE=v1
BASE_DIR=/appl/virtuo/gways
and
system ("cd /appl/virtuo/gways/config/ericsson-msc/v1/spool/input_d; rm -rf *TRAR")
should work, but you should really be using Perl code (unlink, etc.).
I suspect you are confusing the usage of Perl with how you would use awk in bash scripts.
As @Steffen Ullrich said, that isn't Perl or shell. But I'll try to make it a little more Perlish for you:
First, note that
variables in Perl start with a $
strings need "quotes around them"
statements end with a ;
spaces around = are ok and make it all easier to read
so
$CONFIG_DIR = `pwd`;
$VENDOR = "ericsson-msc";
$RELEASE = "v1";
$BASE_DIR = "/appl/virtuo/gways";
Next, see how you can combine these into a single string like this (I'm guessing that's what you want to do)
$DIR_FOR_CLEANING = "$BASE_DIR/config/$VENDOR/$RELEASE/spool/input_d";
Lastly, be really careful whenever you combine rm's -r option with a wildcard like *. Look up the man page for rm and see whether -r is something you actually want. I don't think you need it here, unless you have directories named *TRAR that you want to recurse into and remove. I'll bet you only have files named *TRAR in that input_d directory.
Also, as written, the command could fail the cd if that directory doesn't exist, and would then proceed to recursively remove *TRAR from whatever directory you ran the script from. But you don't need to change directories at all. Try something like this:
system ("echo rm -f $DIR_FOR_CLEANING/*TRAR");
If the echo command lists the files you do in fact want it to remove, then remove the "echo" and the rm will start deleting stuff.
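If you'd rather skip system() entirely, here is a minimal pure-Perl sketch of the unlink route (untested; it reuses $DIR_FOR_CLEANING from above and keeps the dry-run-first idea):
# Dry run: list what would be removed
print "$_\n" for glob("$DIR_FOR_CLEANING/*TRAR");
# When the list looks right, delete for real:
# unlink glob("$DIR_FOR_CLEANING/*TRAR");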

Recursively replace colons with underscores in Linux

First of all, this is my first post here and I must specify that I'm a total Linux newb.
We have recently bought a QNAP NAS box for the office; on this box we have a large amount of data which was copied off an old Mac XServe machine. A lot of files and folders originally had forward slashes in their names (HFS+ should never have allowed this in the first place), which, when copied to the NAS, were all replaced with colons.
I now want to replace all the colons with underscores, and have found the following commands in another thread here: pitfalls in renaming files in bash
However, the flavour of Linux on this box does not understand the rename command, so I'm having to use mv instead. I have tried the code below, but it only works for the files in the current folder. Is there a way to change this to include all subfolders?
for f in *.*; do mv -- "$f" "${f//:/_}"; done
I have found that I can locate all the files and folders in question using the find command, as follows.
Files:
find . -type f -name "*:*"
Folders:
find . -type d -name "*:*"
I have been able to export a list of the results above by using
find . -type f -name "*:*" > files.txt
I tried using the command below, but I'm getting an error message from find saying it doesn't understand the -exec switch. So is there a way to pipe this all into one command, or could I somehow use the files I exported previously?
find . -depth -name "*:*" -exec bash -c 'dir=${1%/*} base=${1##*/}; mv "$1" "$dir/${base//:/_}"' _ {} \;
Thank you!
Vincent
So your for-loop code works, but only in the current directory. Also, you are able to use find to build a file listing everything with : in its name.
Since you've already done all this, I would just loop over each line of your file and perform the same mv command.
Something like this:
for f in `cat files.txt`; do mv $f "${f//:/_}"; done
EDIT:
As pointed out by tripleee, using a while loop is a better solution
For example:
while read -r f; do mv "$f" "${f//:/_}"; done <files.txt
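Note that files.txt was built with -type f, so it only lists files; the folders with colons need renaming too, and depth-first, so that children are renamed before their parent directories. Since -exec isn't understood by the find on your box, here is a sketch that pipes find into the same loop (assuming bash, as in your -exec attempt, and no newlines in filenames):
find . -depth -name "*:*" | while IFS= read -r f; do
    dir=${f%/*}
    base=${f##*/}
    mv "$f" "$dir/${base//:/_}"
done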
Hope this helps.
Will

Concatenate txt file contents and/or add break to all

I have a bunch of .txt files that need to be made into one big file that can be read by programs such as Microsoft Excel.
The problem is that the files currently do not have a break at the end of them, so they end up in one long line.
Here's an example of what I have (the numbers represent the line number):
1. | first line of txt file
2. | second line
Here's what I want to turn that into:
1. | first line of txt file
2. | second line
3. |
I have around 3000 of these files in a folder, all in the same format. Is there any way to take these files and add a blank line to the end of them all? I'd like to do this without the need for complicated code (e.g. PHP). I know there are similar things you can do using the terminal (I'm on CentOS), but if something does specifically what I require, I'm missing it.
The simplest way to achieve this is with a bash for-loop:
for file in *.txt; do
echo >> "$file"
done
This iterates over all .txt files in the current directory and appends a newline to each file. It can be written on one line; you only need to add a ; before the done.
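For example, as a one-liner:
for file in *.txt; do echo >> "$file"; done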
Note that $file is quoted to handle files with spaces and other funny characters in their names.
If the files are spread across many directories and not all in the same one, you can replace *.txt with **/*.txt to iterate over all .txt files in all subdirectories of the current folder (in bash this requires shopt -s globstar first, available since bash 4).
An alternative way is to use sed:
sed -i "$ s:$:\n:" *.txt
The -i flag tells sed to edit the files in place (a GNU sed extension). $ addresses the last line, and the s command then substitutes the end of that line (again $) with a newline (\n), thus appending a blank line to the end of the file.
Try this snippet:
for f in *; do (cat "$f" && echo "") > "$f.tmp"; done && rename -f 's/\.tmp$//' *.tmp
This basically takes any file in the folder (for f in *; do).
Outputs the file on STDOUT (cat $f) followed by a newline (echo "")
and redirects the output into filename.tmp (> $f.tmp)
and then renames the *.tmp files back to their original names (rename -f 's/\.tmp$//' *.tmp).
Edit:
Or even simpler:
for f in *; do echo "" >> "$f"; done
This basically takes any file in the folder (for f in *; do).
Outputs a newline (echo "")
and appends it to the file (>> $f)

using grep and find commands - basic questions to help me sort it out in my simple mind

I am back with a second no-brainer question, but I would like to get this straight in my head.
I have an assignment in which I am charged with providing a command to find a file named test in my home directory (one command using find, and one using grep). I understand that using find is just 'find ~/test', but using grep, wouldn't I have to search out a pattern within the file 'test'? Or is there a way to search for the file (using grep), even if the file is empty?
ls ~ | grep test
I understand that using find is just 'find ~/test'
No. find ~/test will also have a match for every file or directory under the directory $HOME/test/. Rather, use find ~ -type f -name test.
The assignment sounds unclear. But yes, if you give any filenames to grep, it will look at the contents of the files and ignore the names of the files. Perhaps you can grep the output of another command? Maybe ls, as @Reese suggested, or maybe a different find command.
ls -R ~ | grep test
Explanation: ls -R ~ will recursively list all files and directories in your home folder. grep test will narrow down that list to files (and directories) that have "test" in their name.
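If the assignment allows it, a find equivalent of the same name filtering (matching any file or directory with "test" in its name):
find ~ -name "*test*"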