$ find /tmp/a1
/tmp/a1
/tmp/a1/b2
/tmp/a1/b1
/tmp/a1/b1/x1
Simply trying
find /tmp/a1 -exec tar -cvf dirall.tar {} \;
doesn't work.
Any help?
The command specified for -exec is run once for each file found. As such, you're recreating dirall.tar every time the command is run. Instead, you should pipe the output of find to tar.
find /tmp/a1 -print0 | tar --null -T- -cvf dirall.tar
Note that if you're simply using find to get a list of all the files under /tmp/a1 and not doing any sort of filtering, it's much simpler to use tar -cvf dirall.tar /tmp/a1.
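Either way, you can double-check what actually ended up in the archive with:
tar -tvf dirall.tar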
You're one character away from the solution. The find command's exec option will execute the command for each file found, so you should replace -c with -r to put tar into append mode. Each time find invokes it, it'll tack on one more file:
rm -f dirall.tar
find /tmp/a1 -exec tar -rvf dirall.tar {} \;
I'd think something like "find /tmp/a1 | xargs tar cvf foo.tar" would work. But make sure you have backups first!
Does HP-UX have cpio? That will take a list of files on stdin, and some versions will write output in tar format.
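If it does, something along these lines should produce a tar-format archive from the file list (the -H ustar format flag is GNU cpio syntax; HP-UX's own cpio may spell it differently, so treat this as a sketch):
find /tmp/a1 -print | cpio -o -H ustar > dirall.tar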
I want to copy files matching a regex to another folder while keeping part of the folder structure. All the file paths contain src/main/java/, but the part before that is different for most files.
I know that I can use
find . -iregex ".*HeadersConstants\.java" -exec cp {} ./destination/ \;
to copy a file, but then I lose the file path in the destination dir.
Are you on Linux? Historically, cpio would have been an obvious choice, but these days rsync is likely to be better:
find . -iregex ".*HeadersConstants\.java" |\
rsync -v --files-from=- ./ ${destination}/
It's probably not a good idea for the destination to be inside . as your question's code suggests, but we can stop find looking there with:
find . -path ./destination -prune \
-o -iregex ".*HeadersConstants\.java" -print |\
rsync -v --files-from=- ./ ./destination/
(You may want to investigate why the -print is required.)
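Whichever variant you use, rsync's -n (--dry-run) flag lets you preview what would be copied before doing it for real, for example:
find . -path ./destination -prune \
-o -iregex ".*HeadersConstants\.java" -print |\
rsync -vn --files-from=- ./ ./destination/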
In the meantime I got it to work (probably not the cleanest way)
javaRe='(.*)\/src\/main\/java\/(.*)\/'
find . -name "HeadersConstants\.java" | while read f
do
if [[ ${f} =~ ${javaRe} ]]; then
path=${BASH_REMATCH[2]}
fullpath=${destination}${srcDir}${path}
mkdir -p "$fullpath"
cp "$f" "$fullpath"
fi
done
How do I undo a gzip command in CentOS?
sudo gzip -r plugins
If I try sudo gunzip -r plugins, it gives me an error: not in gzip format.
What I want to do is zip the directory.
tar -zcvf archive.tar.gz directory/
Check these answers: https://unix.stackexchange.com/a/93158, https://askubuntu.com/a/553197 and https://www.centos.org/docs/2/rhl-gsg-en-7.2/s1-zip-tar.html
sudo find . -name "*.gz" -exec gunzip {} \;
I think you have two questions:
1. How do I undo what I did?
2. How do I zip a directory?
Have you even looked at man gzip or gzip --help?
Answers
1. find plugins -type f -name "*.gz" | xargs gunzip
2. tar -zcvf plugins.tar.gz plugins
2b. I suspect that your level of Linux experience is low, so you'd probably be more comfortable using zip (see the example below). (Remember to do a zip --help or man zip before coming for more advice.)
Explanation. gzip only zips up one file. If you want to do a bunch of files, you have to smush them up into one file first (using tar) and then compress that using gzip.
What you did was recursively gzip up each individual file in plugins/.
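If you do go the zip route, a minimal invocation (assuming the zip package is installed) looks like:
zip -r plugins.zip plugins
The -r flag recurses into the directory, so you end up with a single plugins.zip.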
How can I delete all the files that end with 0x0.jpg in CentOS? I need to delete multiple files nested in folders and subfolders.
I assume you have a shell - try
find /mydirectory -type f -print | grep '0x0.jpg$' | xargs -n1 rm -f
There is probably a more elegant solution but that should work.
However I would put an echo in before rm on the first run to ensure that the right files are going to be removed.
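For example, a dry run that only prints the rm commands instead of executing them:
find /mydirectory -type f -print | grep '0x0.jpg$' | xargs -n1 echo rm -f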
Ed Heal's answer works just fine, but neither the grep nor the xargs call is necessary. The following should work just as well and be a good bit more efficient for large numbers of files.
find /mydirectory -name '*0x0.jpg' -type f -exec rm -f {} +
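If your find is GNU find (as it will be on CentOS), the -delete action does the same job without spawning rm at all:
find /mydirectory -name '*0x0.jpg' -type f -delete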
I have a perl script which is used to process some data files from a given directory. I have written the bash script below to look for the last updated file in the given directory and process that file.
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} \;
Sometimes users copy multiple files to the data dir, and the earlier ones are skipped: the perl script executes only on the last updated file. Can you please suggest how to fix this using a bash script?
Try
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} +
Note the termination of -exec with a + vs your \;
From the man page
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end;
Now that you'll have one or more file names passed into your perl script, you can alter your perl script to iterate over each passed in file name.
If I understood the question correctly, you need to process any files that were created or modified in a directory since the last time your script was run.
In my opinion find is not the right tool to determine those files, because it has no notion of which files it has already seen.
Using any of the -atime/-ctime/-mtime options will either produce duplicates if you run your script twice in the specified period, or miss some files if it is not executed at the right time. The timing intricacies of using these options for something like this are not easy to deal with.
I can propose a few alternatives:
a) Use three directories instead of one: incoming/ processing/ done/. Your users should only be allowed to put files in incoming/. You move any files in there to processing/ with a simple mv incoming/* processing/ before running your perl script. Then you move them from processing/ to done/ when it's over (a rough sketch follows below).
In my opinion this is the simplest and best solution, and the one used by mail servers etc when dealing with this issue. If I were you and there were not any special circumstances preventing you from doing this, I'd stop reading here.
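A rough sketch of option (a), assuming script.pl can take several file names at once (as discussed further below) and reusing the $data_dir variable from the question:
#!/bin/bash
cd "$data_dir" || exit 1
mkdir -p incoming processing done
# Claim whatever has arrived so far; ignore the error if incoming/ is empty.
mv incoming/* processing/ 2>/dev/null
# Process everything we claimed, then archive it.
find processing -type f -exec ./script.pl {} +
mv processing/* done/ 2>/dev/null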
b) Have your finder script touch a special file (e.g. .timestamp, perhaps in a different directory, so that your users will not tamper with it) when it's done. This will allow your script to remember the last time it was run. Then use
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' ';'
to run your perl script for each file. You should modify your perl script so that it can run repeatedly with a different file name each time. If you can modify it to accept multiple files in one go, you can also run it with
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' +
which will minimise the number of ./script.pl processes. Take care to handle the first run of the find script, when the .timestamp file is missing. A good solution would be to simply ignore it by not using the -*newer options at all in that case. Also keep in mind that there is a race condition where files added after find was started but before touching the timestamp file will not be processed.
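A minimal sketch of that first-run handling, run from the data directory (the ! -name .timestamp test keeps the marker file itself out of the results):
if [ -e .timestamp ]; then
    find . \( -cnewer .timestamp -o -newer .timestamp \) -type f ! -name .timestamp -exec ./script.pl '{}' +
else
    # First run: no timestamp yet, so process everything.
    find . -type f ! -name .timestamp -exec ./script.pl '{}' +
fi
# Record this run; files added while find was running will be missed
# (the race condition mentioned above).
touch .timestamp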
c) As a variation of (b), have your script update the timestamp with the time of the processed file that was created/modified most recently. This is tricky, because find cannot order its output on its own. You could use a wrapper around your perl script to handle this:
#!/bin/bash
for i in "$#"; do
find "$i" \( -cnewer .timestamp -o -newer .timestamp \) -exec touch -r '{}' .timestamp ';'
done
./script.pl "$#"
This will update the timestamp if it is called to process a file with a newer mtime or ctime, minimising (but not eliminating) the race condition. It is however somewhat awkward - unavoidable since bash's [[ -nt ]] test seems to only check the mtime. It might be better if your perl script handled that on its own.
d) Have your script store each processed filename and its timestamps somewhere and then skip duplicates. That would allow you to just pass all files in the directory to it and let it sort out the mess. Kinda tricky though...
e) Since you are using Linux, you might want to have a look at inotify and the inotify-tools package - specifically the inotifywait tool. With a bit of scripting it would allow you to process files as they are added to the directory:
inotifywait -e MOVED_TO -e CLOSE_WRITE -m -r testd/ | grep --line-buffered -e MOVED_TO -e CLOSE_WRITE | while read d e f; do ./script.pl "$f"; done
This has no race conditions, as long as your users do not create/copy/move any directories rather than just files.
The perl script will only execute against the file which find gives it. Perhaps you should remove the -mtime -1 option from the find command so that it picks up all the files in the directory?
I want to create a tar file containing all the files found by a find command.
I tried the following command:
find . \(-name "*.log" -o -name "*.log.*" \) -mtime +7 -exec tar cvf test.tar.gz {} \;
But it is including only the last found file in the test.tar file. How do I include all the files in test.tar?
Regards
Chaitanya
Use command substitution:
tar cf test.tar $(find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7)
What this does is run the command inside $() and make its output the command-line arguments of the outer command.
This uses the more modern bash notation. If you are not using bash, you can also use backticks which should work with most shells:
tar cf test.tar `find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7`
While backticks are more portable, the $() notation is easier if you need to nest command line substitution.
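For instance, nesting reads naturally with $() (this is just an illustration, unrelated to the tar command above):
echo $(basename $(pwd))
whereas the backtick version needs escaped inner backticks.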
You want to pipe the file names found by find into tar.
find . \(-name "*.log" -o -name "*.log.*" \) -mtime +7 -exec tar cvf test.tar.gz {} \;
But it is including only the last found file in the test.tar file.
That's because for every file it finds it is running a new tar command that overwrites the tar file from the previous command.
You can make find batch the files together by changing the \; to a +, but if there are more files than can be listed at once, find will still run multiple commands, each overwriting the tar file from the previous one. You could pipe the output through xargs, but it has the same issue of possibly running the command multiple times. The command substitution recommended above is the safest way I know of to do it, ensuring that tar can only be called once -- but if too many files are found, it may give an error about the command line being too long.
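One robust alternative, assuming GNU tar (which accepts --null and -T - to read a NUL-separated file list from stdin), pipes the names straight into a single tar invocation:
find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7 -print0 | tar --null -T - -cvf test.tar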
This should work equally well:
find . -name "*.log" -o -name "*.log.*" -mtime +7 -exec tar cvf test.tar {} +
Note the "+" at the end vs "\;".
For a reliable way when a very large number of files will match the search:
find . -name "*.log" -o -name "*.log.*" -mtime +7 > /tmp/find.out
tar cvf test.tar -I /tmp/find.out