I'm trying to delete files inside a certain folder but it's throwing an error:
rm -rf /usr/html/sched/downloads/*
-bash: /bin/rm: Argument list too long
I searched online and found this solution, but I'm afraid to try it since this is a production server, and I don't know how to put the path in correctly:
find . -name '*' | xargs rm -v
How can I delete thousands of files within the /downloads directory? FYI, there are no subdirectories.
You can check the question "Faster way to delete a large number of files [duplicate]" for how to handle this; with a very large number of files you may need to delete them in batches, for example by age:
find ./cache -mtime +0.5 -print0 | xargs -0 rm -f
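If you simply want to empty the directory from the question, here is a minimal sketch (assuming GNU find; -maxdepth 1 is just a safety net, since there are no subdirectories):
find /usr/html/sched/downloads/ -maxdepth 1 -type f -print0 | xargs -0 rm -f
# or, with GNU find alone:
find /usr/html/sched/downloads/ -maxdepth 1 -type f -delete
The "Argument list too long" limit only applies to arguments expanded on a command line, so streaming the file names to xargs (or letting find delete them itself) never hits it.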
Is there a way to use fish with a command like
rm -rf *.pdf *.gz
even though no pdf files are available? Currently I get the response
fish: No matches for wildcard '*.pdf'. See `help expand`.
For zsh I found the possibility to run setopt +o nomatch, but this isn't recognized in the fish shell. I'm using fish version 3.0.2.
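One workaround, a sketch based on fish's documented exception that wildcards with no matches expand to nothing (instead of raising an error) when used as arguments to set, count, or for:
for f in *.pdf *.gz
    rm -rf $f
end
If neither pattern matches anything, the loop simply runs zero times.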
This may be a duplicate question, but I could not find the solution. I want to copy the last 3 months' files AND subdirectories from one disk to another, but I could only find how to list the files using the following command. I really don't know how to copy the files using -mtime. I'm new to Linux, please help me.
find . -mtime -90 -exec cp {} targetdir \;
But how do I copy directories with their subdirectories and files too? (Please don't suggest the rsync command; I don't have it on this instance.) Regards, S.
cp needs a recursive option specified to handle the subdirectories:
$ find testroot # shows some dirs and files
testroot
testroot/sub1
testroot/sub1/subtestfile
testroot/sub2
testroot/testf
$ find target # empty at this stage
target
$ find ./testroot/ -exec cp -R {} target/ \;
$ find target
target
target/sub1
target/sub1/subtestfile
target/sub2
target/subtestfile
target/testf
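Applied to the question, a minimal sketch combining -mtime with the recursive flag (note from the listing above that find also visits the files inside directories it has already matched, so some content may additionally be copied to the top level of targetdir):
find . -mtime -90 -exec cp -R {} targetdir \;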
Previously, gsutil appeared not to upload hidden files. Now hidden files cannot be prevented from uploading. Using the -x option with either
.*/\..* or
.*/[.].* still uploads both hidden files and directories.
This is with a local directory up to a bucket.
Is there a different expression that is required?
The -x exclude option should work:
gsutil rsync -x '\..*|./[.].*$' source-dir gs://your-bucket
You can learn more about it in the official documentation.
This works for both hidden files and directories, at any spot in the path:
gsutil rsync -x '.*/\..*|^\..*' source dest
The other answer didn't work for me.
As the regexp is not anchored to the edges of the string, the .*'s at the beginning and at the end are not necessary, plus we can use grouping to simplify (sic!) a bit:
gsutil rsync -x '(^|/)\.' source dest
Where \. is the dot itself and (^|/) states that the dot should follow either the beginning of the file name (^) or a /, i.e. a dot file in a subfolder.
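To verify which files a pattern will skip before syncing for real, gsutil rsync supports a dry run; a short sketch reusing the placeholder names from the answers above:
gsutil rsync -n -r -x '(^|/)\.' source-dir gs://your-bucket
# -n lists the operations that would be performed without actually copying anything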
I have the following directory structure on server 1:
data
  company1
    unique_folder1
    other_folder
    ...
  company2
    unique_folder1
    ...
  ...
And I want to duplicate this folder structure on server 2, but copy only the directories/subdirectories of unique_folder1. I.e. the result must be:
data
  company1
    unique_folder1
  company2
    unique_folder1
  ...
I know that rsync is very good for this.
I've tried 'include/exclude' options without success.
E.g. I've tried:
rsync -avzn --list-only --include '*/unique_folder1/**' --exclude '*' -e ssh user@server.com:/path/to/old/data/ /path/to/new/data/
But as a result, I don't see any files/directories:
receiving file list ... done
sent 43 bytes received 21 bytes 42.67 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
What's wrong? Ideas?
Additional information:
I have sudo access to both servers. One idea I have is to use the find command and cpio together to copy the content I need into a new directory, and after that use rsync. But this is very slow, as there are a lot of files, etc.
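For reference, that find/cpio idea would look something like this sketch; cpio's pass-through mode (-p) recreates the leading directories (-d) under a destination, here a hypothetical /path/to/staging directory that must already exist:
cd /path/to/old/data && find . -path '*/unique_folder1/*' | cpio -pdm /path/to/staging
It works, but it touches every file once for the local copy and again for the rsync, which is why it is slow.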
I've found the reason. It wasn't clear to me that rsync works this way.
So the correct command (for the company1 directory only) must be:
rsync -avzn --list-only --include 'company1/' --include 'company1/unique_folder1/***' --exclude '*' -e ssh user@server.com:/path/to/old/data/ /path/to/new/data
I.e. we need to include each parent company directory. And of course we cannot write all these company directories manually on the command line, so we save the list into a file and use it.
Final things we need to do:
1. Generate the include file on server 1, so that its content will be as follows (I've used ls and awk; a sketch follows after these steps):
+ company1/
+ company1/unique_folder1/***
...
+ companyN/
+ companyN/unique_folder1/***
2. Copy include.txt to server 2 and use a command like this:
rsync -avzn \
--list-only \
--include-from '/path/to/new/include.txt' \
--exclude '*' \
-e ssh user@server.com:/path/to/old/data/ \
/path/to/new/data
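A sketch of step 1's generation with ls and awk, assuming the company directories sit directly under /path/to/old/data on server 1:
ls /path/to/old/data | awk '{ print "+ " $0 "/"; print "+ " $0 "/unique_folder1/***" }' > include.txt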
If the first matching pattern excludes a directory, then all its descendants will never be traversed. When you want to include a deep directory e.g. company*/unique_folder1/** but exclude everything else *, you need to tell rsync to include all its ancestors too (src/ and dst/ below are placeholders for your source and destination):
rsync -r -v --dry-run \
--include='/' \
--include='/company*/' \
--include='/company*/unique_folder1/' \
--include='/company*/unique_folder1/**' \
--exclude='*' src/ dst/
You can use bash’s brace expansion to save some typing. After brace expansion, the following command is exactly the same as the previous one:
rsync -r -v --dry-run --include=/{,'company*/'{,unique_folder1/{,'**'}}} --exclude='*' src/ dst/
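You can preview the expansion with echo before running rsync; the quoted parts keep the globs from matching local files:
echo --include=/{,'company*/'{,unique_folder1/{,'**'}}}
# prints: --include=/ --include=/company*/ --include=/company*/unique_folder1/ --include=/company*/unique_folder1/**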
An alternative to Andron's answer, which is simpler to both understand and implement in many cases, is to use the --files-from=FILE option. For the current problem:
rsync -arv --files-from='list.txt' old_path/data new_path/data
Where list.txt is simply
company1/unique_folder1/
company2/unique_folder1/
...
Note the -r flag must be included explicitly since --files-from turns off this behaviour of the -a flag. It also seems to me that the path construction is different from other rsync commands, in that company1/unique_folder1/ matches but /data/company1/unique_folder1/ does not.
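A sketch of building list.txt for the layout in the question, run from the source data directory on server 1 (the glob assumes the company directories containing unique_folder1 sit directly under it):
cd /path/to/old/data && ls -d */unique_folder1/ > list.txt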
For example, if you only want to sync target/classes/ and target/lib/ to a remote system, do
rsync -vaH --delete --delete-excluded --include='classes/***' --include='lib/***' \
--exclude='*' target/ user@host:/deploy/path/
The important things to watch:
Don't forget the "/" at the end of the paths, or you will get a copy into a subdirectory.
The order of the --include and --exclude options counts.
Contrary to the other answers, starting an include/exclude parameter with "/" is unneeded; they are automatically anchored to the source directory (target/ in the example).
To test what exactly will happen, we can use the --dry-run flag, as the other answers say.
--delete-excluded will delete all content in the target directory except the subdirectories we specifically included. It should be used wisely! For this reason a --delete alone is not enough: by default it does not delete the excluded files on the remote side (everything else, yes), so --delete-excluded should be given alongside the ordinary --delete.