I'm trying to make a script that finds all files of a specific type in a folder and stores their names in a file.
When I run myScript.sh:
$ ./myScript.sh *.txt
It should save the names of all files with the .txt extension into files.txt, but it doesn't work for me. It only saves the first file.
myScript.sh:
var=`find $1`
for file in $var
do
echo $var >> files.txt
done
This is just an exercise for practice.
Do it like this:
for file in "$@"
do
echo "$file" >> files.txt
done
Using "$@", you can get all the arguments passed to the script. (Note that $# is only the number of arguments; "$@" expands to each argument as a separate word.)
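For example, the shell expands *.txt before the script even runs, so the script simply receives the matching names as arguments (the file names below are hypothetical):
$ ./myScript.sh *.txt    # the shell expands this to, say, a.txt b.txt
$ cat files.txt
a.txt
b.txt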
You don't need a loop at all. Just redirect the output of find to your file:
find . -name "$1" >> files.txt
Finally, I fixed this issue by passing .txt instead of *.txt and changing my script like this:
$ ./myScript.sh .txt
var=`find *$1`
for file in $var
do
echo "$file" >> files.txt
done
It works for me.
Thank you all for answering!
I don't understand why you are passing arguments to the script at all. If you want all the files in the current directory whose name ends in ".txt", you could do any of:
ls *.txt > files.txt
or
find . -name '*.txt' > files.txt # descends the directory tree
or (not standard)
find . -maxdepth 1 -name '*.txt' > files.txt
None of these will handle files with embedded newlines particularly well, but that use case is not handled well when storing names in a file.
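If you do need to handle such names, one hedge is to store the list NUL-delimited instead (using GNU find's -print0; the file name files.nul is my own choice):
find . -name '*.txt' -print0 > files.nul
# read the list back safely in bash; "$f" holds each name intact
while IFS= read -r -d '' f; do
printf '%s\n' "$f"
done < files.nul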
Related
I generate a list of everything in a directory (subdirectories and all files) with
ls -R $DIRPATH | awk '/:$/&&f{s=$0;f=0} /:$/&&!f{sub(/:$/,"");s=$0;f=1;next} NF&&f{ print s"/"$0 }' > filelist
and I would like to filter out all names not ending in a certain file extension, for example .h. I am trying this with
sed -ne '/.h$/p' filelist > filelist_h
but this lets through names like C:/dev/boost/boost_1_59_0/boost/graph, which ends in h but not .h. How do I get this working with .h and not just h?
find is the tool you are looking for:
find "$DIRPATH" -type f -name '*.h'
I used this line to try and split all of the files in a directory into smaller files and put them in a new directory.
cd /path/to/files/ && find . -maxdepth 1 -type f -exec split -l 1000 '{}' "path/to/files/1/prefix {}" \;
The result was 'no file found', so how do I make this work so that I split all of the files in a directory into smaller 1000-line files and place them in a new directory?
Later on...
I tried many variations and this is not working. I read in another article that split cannot operate on multiple files. Do I need to make a shell script, or how do I do this?
I had a bright idea to use a loop. So, I researched the 'for' loop and the following worked:
for f in *.txt.*; do echo "Professing $f file..."; split -l 1000 $f 1split.ALLEMAILS.txt. ; done
'.txt.' is in all of the file names in the working directory. The 'echo' command was optional. For the 'split' command, instead of naming one file, I replaced that with $f as defined by the 'for' line.
The only thing I would still have liked is to move all of the output files to another directory in the same command.
Right now, I am stuck on the find command for moving all matching files. This is what I have done so far that is not working:
find . -type f -name '1split.*' -exec mv {} new/directory/{} \;
I get the error 'not a directory'; or I tried:
find . -type f -name '1split.*' -exec mv * . 1/ \;
and I get 'no such file or directory'.
Any ideas?
I found that this command moved ALL of the files to the new directory instead of just the ones matching the criteria '1split.*'.
So, the answers to my questions are:
for f in *.txt.*; do echo "Professing $f file..."; split -l 1000 $f 1split.ALLEMAILS.txt. ; done
and
mv *searchcriteria /new/directory/path/
I did not need a find command for this after all. So, combining both of these would have done the trick:
for f in *.txt.*; do echo "Professing $f file..."; split -l 1000 $f 1split.ALLEMAILS.txt. ; done
mv *searchcriteria /new/directory/path/ && echo "done."
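For what it's worth, split writes its pieces wherever the prefix points, so the separate mv step can be avoided by baking the target directory into the prefix. A sketch (it assumes the directory 1/ already exists, and uses a per-file prefix so that chunks from different input files don't overwrite one another):
for f in *.txt.*; do split -l 1000 "$f" "1/1split.$f."; done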
Later on...
I found that this basically took 1 file and processed it.
I fixed that with a small shell script:
#!/bin/sh
for f in /file/path/*searchcriteria ; ## this was 'split.*' in my case
do echo "Processing $f in /file/path/..." ;
perl script.pl --script=options "$f" >> output.file ; ## append, so each file's output is kept
done ;
echo "done."
I want to replace the string "Solve the problem" with "Choose the best answer" in only the XML files which exist in the subfolders of a folder. I have put together a script which helps me do this, but there are three problems:
1. It also replaces the content of the script itself.
2. It replaces the text in all files in the subfolders (but I want only the XML files to change).
3. I want to display error messages (text output preferably) if the text mismatch happens in a particular subfolder and file.
So can you please help me modify my existing script so that I can solve the above three problems?
The script I have is:
find -type f | xargs sed -i "s/Solve the problem/Choose the best answer/g"
Using bash and sed:
search='Solve the problem'
replace='Choose the best answer'
for file in `find . -name '*.xml'`; do
if grep -q "$search" "$file"; then
sed -i "s/$search/$replace/g" "$file"
else
echo "Search string not found in $file!"
fi
done
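Looping over the output of find still word-splits names that contain spaces. A more robust sketch of the same logic runs one sh per file, with the file name passed in as $0:
find . -type f -name '*.xml' -exec sh -c '
if grep -q "Solve the problem" "$0"; then
sed -i "s/Solve the problem/Choose the best answer/g" "$0"
else
echo "Search string not found in $0!"
fi' {} \;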
find . -type f -name "*.xml" -print0 | xargs -0 sed -i "s/Solve the problem/Choose the best answer/g"
Not sure I understand issue 3.
I'm trying to change the name of "my-silly-home-page-name.html" to "index.html" in all documents within a given master directory and subdirs.
I saw this: Shell script - search and replace text in multiple files using a list of strings.
And this: How to change all occurrences of a word in all files in a directory
I have tried this:
grep -r "my-silly-home-page-name.html" .
This finds the lines on which the text exists, but now I would like to substitute 'my-silly-home-page-name' for 'index'.
How would I do this with sed or perl?
Or do I even need sed/perl?
Something like:
grep -r "my-silly-home-page-name.html" . | sed 's/$1/'index'/g'
?
Also, I am trying this with perl:
perl -i -p -e 's/my-silly-home-page-name\.html/index\.html/g' *
This works, but I get an error when perl encounters directories, saying "Can't do inplace edit: SOMEDIR-NAME is not a regular file, <> line N"
Thanks,
jml
find . -type f -exec \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g' {} +
Or if your find doesn't support -exec +,
find . -type f -print0 | xargs -0 \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g'
Both pass to Perl as many names at a time as possible. Both work with any file name, including those that contain newlines.
If you are on Windows and you are using a Windows build of Perl (as opposed to a cygwin build), -i won't work unless you also do a backup of the original. Change -i to -i.bak. You can then go and delete the backups using
find . -type f -name '*.bak' -delete
This should do the job:
find . -type f -print0 | xargs -0 sed -e 's/my-silly-home-page-name\.html/index\.html/g' -i
Basically it recursively gathers all the files under the given directory (. in the example) with find, and through xargs runs sed with the same substitution command as in the perl command in the question.
Regarding the question about sed vs. perl, I'd say that you should use the one you're more comfortable with since I don't expect huge differences (the substitution command is the same one after all).
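One portability note: BSD/macOS sed requires an explicit (possibly empty) suffix argument after -i, so there the same command is spelled:
find . -type f -print0 | xargs -0 sed -i '' -e 's/my-silly-home-page-name\.html/index\.html/g'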
There are probably better ways to do this, but you can use:
find . -name oldname.html | perl -e 'map { s/[\r\n]//g; $old = $_; s/oldname\.html$/newname.html/; rename $old,$_ } <>'
FYI, grep searches for a pattern; find searches for files.
I was trying to do some quick filename cleanup at the shell (zsh, if it matters): renaming files. (I'm using cp instead of mv just to be safe.)
foreach f (\#*.ogg)
cp $f `echo $f | perl -pe 's/\#\d+ (.+)$/"\1"/'`
end
Now, I know there are tools to do stuff like this, but for personal interest I'm wondering how I can do it this way. Right now, I get an error:
cp: target `When.ogg"' is not a directory
Where 'When.ogg' is the last part of the filename. I've tried adding quotes (see above) and escaping the spaces, but nonetheless this is what I get.
Is there a reason I can't use the output of a perl one-liner as the final argument to another command line tool?
It looks like you have a space in the file names being processed, so each of your cp command lines evaluates to something like
cp \#nnnn When.ogg When.ogg
When the cp command sees more than two arguments, the last one must be the name of a target directory for all the files to be copied into; hence the error message. Because your source filename ($f) contains a space, it is treated as two arguments: cp sees three args rather than the two you intend.
If you put double quotes around the first $f that should prevent the two 'halves' of the name from being treated as separate file names:
cp "$f" `echo ...
This is what you need in bash; I hope it works in zsh too.
cp "$f" "`echo $f | perl -pe 's/\#\d+ (.+)$/\1/'`"
If the filename contains spaces, you also have to quote the second argument of cp.
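Putting both quoting fixes back into the original zsh loop (still using cp rather than mv, as in the question):
foreach f (\#*.ogg)
cp "$f" "`echo $f | perl -pe 's/\#\d+ (.+)$/\1/'`"
end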
I often use
dir /b ... | perl -nle"$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n"
The -l chomps the input.
The -e check is to avoid accidentally renaming all the files to one name. I've done that a couple of times.
In zsh (in bash you would write for f in ...; do ...; done instead), that would be
foreach f (...)
echo "$f" | perl -nle'$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n'
end
or
find . -maxdepth 1 -name '...' \
| perl -nle'$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n'
or
find . -maxdepth 1 -name '...' -exec \
perl -e'for (@ARGV) {
$o=$_; s/.../.../; $n=$_;
rename $o,$n if !-e $n;
}' {} +
The last supports file names with newlines in them.
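As a concrete (hypothetical) instance of that last form, stripping a leading '#nnn ' prefix as in the question above:
find . -maxdepth 1 -name '#*.ogg' -exec \
perl -e'for (@ARGV) {
$o=$_; s/#\d+ //; $n=$_;
rename $o,$n if !-e $n;
}' {} +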