I have several directories with files of various sizes. I would like to archive only those files over 100 megabytes in size. Any ideas of a simple command line argument to do this?
Like this
find . -size +100M -exec gzip {} \;
If you are thinking of running it regularly, you may wish to exclude already gzipped files like this
find . ! -name "*.gz" -size +100M -exec gzip {} \;
And if you have lots of big files and (say) a quad core CPU, you could do 4 at a time like this
find . -size +100M | xargs -n 1 -P 4 gzip
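If the filenames may contain spaces, and you also want to skip files that are already compressed, a null-delimited variant is safer; a sketch combining the two ideas above (assumes GNU or BSD find and xargs):
# Skip existing .gz files; NUL-delimited names survive spaces.
find . ! -name "*.gz" -size +100M -print0 | xargs -0 -n 1 -P 4 gzip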
Check out the -size option to the find command.
Documentation: find(1)
How to undo gzip command in CentOS?
sudo gzip -r plugins
If I try sudo gunzip -r plugins, it gives me an error: not in gzip format.
What I want to do is zip the directory.
tar -zcvf archive.tar.gz directory/
Check these answers: https://unix.stackexchange.com/a/93158, https://askubuntu.com/a/553197 and https://www.centos.org/docs/2/rhl-gsg-en-7.2/s1-zip-tar.html
sudo find . -name "*.gz" -exec gunzip {} \;
I think you have two questions:
1. How do I undo what I did?
2. How do I zip a directory?
Have you even looked at man gzip or gzip --help?
Answers
1. find plugins -type f -name "*.gz" | xargs gunzip
2. tar -zcvf plugins.tar.gz plugins
2b. I suspect that your level of Linux experience is low, so you'd probably be more comfortable using zip. (Remember to do a zip --help or man zip before coming for more advice.)
Explanation. gzip only zips up one file. If you want to do a bunch of files, you have to smush them up into one file first (using tar) and then compress that using gzip.
What you did was recursively gzip up each individual file in plugins/.
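For completeness, the zip route suggested in 2b comes down to a single command (assuming the zip package is installed):
# Recursively pack the plugins/ directory into plugins.zip.
zip -r plugins.zip plugins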
I need to free up some disk space on my web server and would like to ask if running the command below would break anything?
My server is running CentOS 6 with cPanel/WHM.
$ find / -type f -name "*.tar.gz" -exec rm -i {} \;
Any help or advice will be greatly appreciated.
Thanks.
Well, you'll lose logs if they have already been compressed, and any uploaded files matching the pattern. By default there shouldn't be any such files on a freshly installed system. Personally, I think it's wrong to just jettison whatever you can instead of trying to find the cause.
You can try finding what's occupying space by running:
du -hs / # shows how much root directory occupies
Compare that to the output of:
df -h # shows used space on disks
If the numbers differ significantly, you probably have deleted files that are still held open by some process, and a simple reboot will reclaim this space for you.
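To confirm that before rebooting, lsof can list files that were deleted but are still held open (a sketch; assumes lsof is installed):
# Open files with a link count of 0, i.e. deleted but still open;
# restarting the offending process frees the space without a reboot.
lsof +L1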
If not you can proceed by recursively doing:
cd <dir>; du -hs * # enter directory and calculate size of its contents
You can do that starting from / and proceeding each time into the biggest directory. Eventually you'll find the source of the missing space. :)
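A quicker way to rank the biggest top-level directories in one shot (a sketch; sort -h needs a reasonably recent GNU coreutils):
# Largest directories first; -x keeps du on one filesystem.
du -xhs /* 2>/dev/null | sort -rh | head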
PS: CentOS doesn't compress logs by default. You will not detect those logs by searching for archived files, but they can be huge. Compressing them is an easy way to get some space:
Turn on compression in /etc/logrotate.conf:
compress
Compress already rotated logs with:
cd /var/log; find . -type f | grep '.*-[0-9]\+$' | xargs -n1 gzip -9
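If any of the rotated log names contain spaces, a null-delimited variant of the same match is safer (a sketch; assumes GNU find):
# -regex matches the whole path, so this is anchored like the grep above.
cd /var/log && find . -type f -regex '.*-[0-9][0-9]*' -print0 | xargs -0 -n1 gzip -9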
I have a cron job that generates a daily file named like data.log.YYYYMMDD, and I want to use logrotate only to delete those files older than 5 days.
I tried this, but it doesn't work. Any ideas? Thanks.
/log/data.log.* {
    daily
    missingok
    rotate 0
    maxage 5
}
It's not (easily) feasible... take a look at these posts:
logrotate-to-clean-up-date-stamped-files
logrotate-files-with-date-in-the-file-name
The easiest way to do that is just to make a cron task; see this example, basically something like:
$ /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -print -exec /bin/rm {} \;
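Adapted to the filenames in the question, the crontab entry could look like this (a sketch; the /log path and the 03:00 schedule are assumptions):
# Delete data.log.YYYYMMDD files not modified for more than 5 days.
0 3 * * * /usr/bin/find /log -name 'data.log.*' -mtime +5 -exec /bin/rm {} \;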
Say with a directory structure such as:
toplev/
    file2.txt
    file5.txt
    midlev/
        test.txt
        anotherdirec/
            other.dat
            myfile.txt
            furtherdown/
                morefiles.txt
        otherdirec/
            myfile4.txt
            file7.txt
How would you delete all files (not directories and not recursively) from 'anotherdirec'? In this example it would delete 2 files (other.dat, myfile.txt).
I have tried the command below from within the 'midlev' directory, but it gives this error (find: bad option -maxdepth find: [-H | -L] path-list predicate-list):
find anotherdirec/ -type f -maxdepth 1
I'm running SunOS 5.10.
rm anotherdirec/*
should work for you.
Rob's answer (rm anotherdirec/*) will probably work, but it generates a bunch of error messages when rm hits the subdirectories. The underlying problem is that you are using a version of find that does not support the -maxdepth option. If you want to avoid the error messages that rm anotherdirec/* gives, you can do:
for i in anotherdirec/*; do test -f $i && rm $i; done
Note that neither of these solutions will work if any of the files contain spaces or other special characters. You can put double quotes around $i if that is an issue.
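With the quoting applied, the loop becomes:
for i in anotherdirec/*; do test -f "$i" && rm "$i"; done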
find is sensitive to option order. Try this:
find anotherdirec/ -maxdepth 1 -type f -exec rm {} \;
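On versions of find without -maxdepth, such as the SunOS 5.10 one in the question, the classic -prune idiom gives the same depth-1 behaviour (a sketch):
# -prune stops find from descending into subdirectories, so only
# the files directly inside anotherdirec/ are removed.
find anotherdirec/. ! -name . -prune -type f -exec rm {} \;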
rm toplev/midlev/anotherdirec/* if you want to delete only files.
rm -rf toplev/midlev/anotherdirec/* if you want to delete files and subdirectories as well.
What's the easiest/best way to find and remove empty (zero-byte) files using only tools native to Mac OS X?
Easy enough:
find . -type f -size 0 -exec rm -f '{}' +
To ignore any file having xattr content (assuming the macOS find implementation):
find . -type f -size 0 '!' -xattr -exec rm -f '{}' +
That said, note that many xattrs are not particularly useful (for example, com.apple.quarantine exists on all downloaded files).
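To see which xattrs a given file actually carries before deciding, macOS ships an xattr tool (somefile is a placeholder):
# List the file's extended attribute names and values.
xattr -l somefile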
You can lower the potentially huge number of forks to run /bin/rm by:
find . -type f -size 0 -print0 | xargs -0 /bin/rm -f
The above command is quite portable, running on most modern versions of Unix rather than just Linux boxes. For long file lists, several /bin/rm commands may be executed to keep the list from overrunning the command-line length limit.
A similar effect can be achieved with less typing on more recent OSes, using a + in find to replace the most common use of xargs in a style that still lends itself to other actions besides /bin/rm. In this case, find will handle splitting truly long file lists into separate /bin/rm commands. The {} is customarily quoted to keep the shell from doing anything to it; the quotes aren't always required, but the intricacies of shell quoting are too involved to cover here, so when in doubt, include the apostrophes:
find . -type f -size 0 -exec /bin/rm -f '{}' +
In Linux, briefer approaches are usually available using -delete. Note that recent versions of find implement the -delete primary directly with unlink(2), so it doesn't spawn a zillion /bin/rm commands, or even the few that xargs and + do. Mac OS find also has the -delete and -empty primaries.
find . -type f -empty -delete
To stomp empty (and newly-emptied) files - directories as well - many modern Linux hosts can use this efficient approach:
find . -empty -delete
First, list the empty files and directories:
find /path/to/stuff -empty
If that's the list of files you're looking for then make the command:
find /path/to/stuff -empty -exec rm {} \;
Be careful! There won't be any way to undo this!
Use:
find . -type f -size 0c -exec rm {} ';'
with all the other possible variations to limit what gets deleted.
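For example, restricting the deletion by name and age (a sketch; the *.tmp pattern and the 30-day cutoff are just illustrative assumptions):
# Only zero-byte *.tmp files untouched for more than 30 days.
find . -type f -name '*.tmp' -size 0c -mtime +30 -exec rm {} ';'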
A very simple solution in case you want to do it inside ONE particular folder:
Go inside the folder, then right-click -> View -> As List.
Now you'll see all the files in a list. Click on the "Size" column heading; this will sort all the files by size.
Finally, all the zero-byte files will be grouped at the end. Just select those and delete them!