I want to download some folders under the same directory using wget. Here is the structure of the Apache directory:
example.com/main/eschool/PhotoAlbum/Album/2008-10-13-citieneducationcenter/
example.com/main/eschool/PhotoAlbum/Album/2009-11-12-snfkdgjndfk/
example.com/main/eschool/PhotoAlbum/Album/2012-10-9-dsngosdgndfk/
...
There is a pattern:
example.com/main/eschool/PhotoAlbum/Album/20*. Is it possible to download all those folders?
If you want to download everything under example.com/main/eschool/PhotoAlbum/Album/, but not above it, you can use the --recursive and --no-parent options:
wget --no-parent --recursive http://example.com/main/eschool/PhotoAlbum/Album/
That will download everything below the Album directory. If you want to limit how deep wget dives into the subdirectories, you can specify the --level option:
wget --no-parent --recursive --level=3 http://example.com/main/eschool/PhotoAlbum/Album/
That will drill down up to 3 subdirectories below Album.
However, neither of these methods filters by name: they blindly download everything in a directory and its subdirectories. If you want more control (e.g. to download only albums beginning with 20*), you'll have to use a shell script or a scripting language, as in the sketch below.
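Here is a minimal sketch of that approach; it assumes the server returns a standard Apache-style index page whose links contain the album directory names:
base="http://example.com/main/eschool/PhotoAlbum/Album/"
# Fetch the index, keep only links starting with "20", and download each album.
wget -q -O - "$base" \
  | grep -o 'href="20[^"]*/"' \
  | sed 's/^href="//; s/"$//' \
  | while read -r album; do
      wget --no-parent --recursive "$base$album"
    done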
I have a GitHub release and I want to download the latest assets with the version tag.
I want to save the .exe file with the version included, but that prevents me from downloading the latest release with the same single link every time.
I want to download the latest released Outdated-Snake.Setup.exe with the tag name included (i.e. something like Outdated-Snake.Setup.v2.0.1.exe).
Can I do it by editing the link somehow, or do I have to change the .exe file name? What should I do?
You can't do this when you're downloading via the web interface unless you use your browser's Save As functionality.
However, if you're downloading with curl from the command line, you can use the -o option to specify a file name that you'd like to use to save the file. For example, if I wanted to download the latest Git LFS Windows installer to foo.exe, I could do this:
$ curl -L -o foo.exe \
https://github.com/git-lfs/git-lfs/releases/download/v2.13.3/git-lfs-windows-v2.13.3.exe
You can also write a small shell function to extract the tag from the URL (say, with sed's s command) and then use that to name the file. For example, with the Git LFS file I mentioned above, you could do something like this:
download () {
  url="$1"
  # Pull the version tag (e.g. v2.13.3) out of the URL path.
  version=$(echo "$url" | sed -e 's!^.*/\(v[0-9]*\.[0-9]*\.[0-9]*\)/.*$!\1!')
  curl -L -o "foo-$version.exe" "$url"
}
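For example, calling it with the Git LFS URL above saves the file as foo-v2.13.3.exe:
download https://github.com/git-lfs/git-lfs/releases/download/v2.13.3/git-lfs-windows-v2.13.3.exe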
Since you haven't linked to the repository from which you're trying to download, I can't provide an example that will work with that specific repository, but you can make appropriate adjustments to suit your situation.
Trying to download a folder from git.
I tried wget --no-parent -r http://WEBSITE.com/DIRECTORY, and also without --no-parent, but it did not work. curl works fine with single files; I thought wget should get the folder, but it does everything except that.
I tried many of the options suggested in "Using wget to recursively fetch a directory with arbitrary files in it", but none worked.
You should try:
git clone <SSH> or <HTTPS>
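For example, a minimal sketch of that approach (USER, REPO, and FOLDER are placeholders): clone the repository and copy out only the folder you need:
git clone https://github.com/USER/REPO.git
cp -r REPO/FOLDER /path/you/want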
Maybe this can help you in a simple way:
DownGit
So, if you intend to use wget to download a directory, just try this.
It will pack your target directory into a .zip, so you can curl or wget it.
MinhasKamal/DownGit (GitHub)
By default, fileName and rootDirectory are set to the name of the file or directory being downloaded. If you do not want the directory itself included in the zip, set rootDirectory=false. For example, this link: https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/MinhasKamal/DownGit/tree/master/res/images&rootDirectory=false will download a file named images.zip, but the root directory "images" will not be included in the zip.
If you want to download the file https://github.com/MinhasKamal/DownGit/blob/master/res/images/downgit.png as DownGitIcon.zip, then the link will be https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/MinhasKamal/DownGit/blob/master/res/images/downgit.png&fileName=DownGitIcon
By the way, I used to use SVN to download files/directories from a Git system by routing the URL to its trunk, but it's very inconvenient.
A bit late, but in case someone stumbles here later:
You can use the following tools:
Download-Directory
DownGit
In both tools, you can just enter your URL to download directly or to create a download link.
For those who prefer GUI tools, there is another easy way to download a folder using CodeSandbox.
Navigate to the folder and replace github with githubbox in the URL. Then, on CodeSandbox, go to the Files pane on the left and hover the mouse over the down arrow; it will show a tooltip "Export to Zip". Just click on it to download the folder as a zip file.
reference: Download a single folder or directory from a BRANCH in GitHub repo
I am running macOS Sierra and am in the process of moving all my dotfiles into one directory. I have successfully exported environment variables for various installations (Vagrant, Composer, oh-my-zsh, etc.) that allow me to install to a sub-directory of my choice.
Unfortunately, programs like npm, subversion, homestead, git, and others do not offer such configurations.
I use a dotfiles repository, where I keep my configuration files under Git. The idea is not new. What I did is move them to another directory and then create symlinks to them in the home directory. It does not clean up the home directory as you wanted, since that is the standard place for configuration files, as stated by Norman Gray, but at least you can version them and share them across machines.
Example:
cd ~
mkdir dotfiles
mv .gitconfig dotfiles/.gitconfig
ln -s ~/dotfiles/.gitconfig ~/.gitconfig
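A minimal sketch for re-creating those links in bulk on a new machine (it assumes everything in ~/dotfiles is a hidden file that belongs directly in $HOME):
for f in ~/dotfiles/.[!.]*; do
    # Force-create a symlink in $HOME with the same name as the file.
    ln -sfn "$f" ~/"$(basename "$f")"
done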
Check out stow. That's what I use.
I have a ~/dotfiles/ directory which has folders in it like vim/, X/, etc.
Now, for example, vim/ will have a .vimrc file in it, and from ~/dotfiles I can run stow vim/ and it will automatically manage the symlinks to the home directory.
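The layout looks roughly like this (a sketch; the .xinitrc under X/ is just an illustrative guess at the contents):
~/dotfiles/
    vim/
        .vimrc
    X/
        .xinitrc
Running stow vim from ~/dotfiles then creates ~/.vimrc as a symlink into ~/dotfiles/vim/.vimrc, since stow's default target is the parent of the stow directory.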
I can also run
cd ~/dotfiles
for folder in */
do [[ -d $folder ]] && stow -R "$folder"
done
to update all my dotfiles (-R deletes symlinks that are no longer needed before stowing again).
There is a good introduction here: http://brandon.invergo.net/news/2012-05-26-using-gnu-stow-to-manage-your-dotfiles.html
When I download the master.zip of my project on GitHub I get a .zip file containing a folder named something-master that contains all the source code.
Is it possible to download the master.zip in a way that the whole source code is in the "root" of the zip, without a subfolder?
No, but if you download the tar.gz (as file 'master') instead of the zip, you can extract it in any folder you want without the top folder:
tar xvf master -C yourFolder --strip-components 1
Here, something is the name of the repository.
As the zip file is built by GitHub, you cannot customize the way it's done just by clicking the download button.
Anyhow, you can always unzip everything into the same directory (discarding any subfolders) by running this command:
unzip -j something-master.zip
from terminal/console
@VonC's answer did the trick. All together:
Download the repo via tar file:
curl -L -o reponame.tar.gz https://github.com/username/reponame/archive/master.tar.gz
Extract without the top directory to YourTargetDir:
tar xvf reponame.tar.gz -C YourTargetDir --strip-components 1
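If YourTargetDir does not exist yet, create it first; the two steps can also be combined into a single pipe (same placeholder username/reponame as above):
mkdir -p YourTargetDir
curl -L https://github.com/username/reponame/archive/master.tar.gz \
    | tar xzf - -C YourTargetDir --strip-components 1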
I'm trying to mirror files on FTP server.
Those files can be very large so downloads might be interrupted.
I'd like to keep the original files while downloading partial files to a temporary folder, and only overwrite the older local versions once the download has completed.
Can I do this? How?
Is there another easy-to-use (command-line) tool that I can use?
First, download the files to a temp directory. Use -c so you can resume.
After the download has completed, use copy, rename or rsync to copy the files to the final place.
Note: Consider using rsync for the whole process, because it was designed for exactly this use case and it puts much less strain on the server and the network. Most site admins are happy to give you rsync access for just this reason.
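A minimal sketch of that workflow (the host and paths are placeholders):
mkdir -p /tmp/ftp-staging
cd /tmp/ftp-staging
wget -c -r ftp://ftp.example.com/pub/files/         # -c resumes interrupted downloads
# After the download has finished, sync the files into their final place:
rsync -a ftp.example.com/pub/files/ /data/mirror/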
Looking at the wget manual, I can't see this functionality; however, you could write a bash script to do what you want, which would essentially run an individual wget for each file and then move it into place with a normal mv command.
Alternatively, have a look at rsync; according to the manual there is a parameter that sets a temp dir:
-T --temp-dir=DIR create temporary files in directory DIR
I am not 100% sure whether this is where it puts the files during downloads, as I have not had a chance to try it out.
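A minimal sketch of that idea (the rsync module and paths are placeholders):
rsync -av --partial --temp-dir=/tmp/rsync-staging \
    rsync://ftp.example.com/module/ /data/mirror/
Here --temp-dir keeps the in-progress copies out of the destination directory, and --partial preserves interrupted transfers so they can be resumed later.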