Setting up a PowerShell script to auto-download applications with varying URLs.
I had a batch file script that downloaded certain applications to ensure my USB Toolkit was always up to date. However, I want to switch to PowerShell because I found it has the wget command to download directly from a URL. What I was hoping was to have the URL always fetch the latest version.
wget https://download.ccleaner.com/ccsetup556.exe -O ccleaner.exe
Note that the 556 in the URL is the part I would like to vary so that it always selects the highest version.
They have two download pages: one with a direct link, and one where the download starts after 2-5 seconds. However, when I point wget at the latter page, it just downloads the HTML of the web page itself.
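Conceptually, what I have in mind is something like this rough shell sketch (the page I scrape and its layout are guesses on my part, and I assume the same idea would translate to PowerShell's Invoke-WebRequest):

# Guess: the CCleaner download page embeds a link of the form ccsetupNNN.exe.
page_url="https://www.ccleaner.com/ccleaner/download/standard"
# Pull out every ccsetupNNN occurrence and keep the highest build number.
latest=$(curl -s "$page_url" | grep -oE 'ccsetup[0-9]+' | grep -oE '[0-9]+' | sort -n | tail -n 1)
wget "https://download.ccleaner.com/ccsetup${latest}.exe" -O ccleaner.exe

Is that roughly the right approach, or is there a cleaner way to always grab the newest build?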
Related
I'm trying to download the network models of a neural-network project used to detect NSFW images. The packed models are available in the assets section of a release at this URL: https://github.com/notAI-tech/NudeNet/releases
I would like to create a Dockerfile that downloads these assets when built. I started with the ADD command, but the files didn't get downloaded entirely (only a few kB out of the 120 MB of some files). So I tried wget and curl from my Linux CLI, but nothing worked as expected. For example, the command:
curl -OJL https://github.com/notAI-tech/NudeNet/releases/download/v0/classifier_model.onnx
starts the download but only downloads an HTML file instead of the actual ONNX file. It seems GitHub is doing some kind of redirection, and I don't know how to handle it with curl/wget and, ultimately, with the ADD command of a Dockerfile.
I visited
https://github.com/notAI-tech/NudeNet/releases/download/v0/classifier_model.onnx
in my browser and I got a login page, so apparently it is not publicly available. That would explain why you got a small HTML file (a file with a login form).
Github is doing some kind of redirection and I don't know how I can
handle it with (...) wget
You need to provide authentication data. I do not know exactly how it is done in this case, but I suspect they use one of the popular methods: basic authentication (see the wget options --http-user=user and --http-password=pass) or a cookie-based solution (see the wget options --load-cookies file, --save-cookies file, and --keep-session-cookies).
The mentioned options are described in the wget man page, which you can access by clicking the link or by running man wget in a terminal.
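For example, one of these shapes might work (USER, PASS, and the login URL are placeholders; I have not checked which method this site actually uses):

# Basic authentication:
wget --http-user=USER --http-password=PASS https://github.com/notAI-tech/NudeNet/releases/download/v0/classifier_model.onnx

# Cookie-based session: log in once, save the cookies, then reuse them.
wget --save-cookies cookies.txt --keep-session-cookies --post-data 'user=USER&pass=PASS' https://example.com/login
wget --load-cookies cookies.txt https://github.com/notAI-tech/NudeNet/releases/download/v0/classifier_model.onnx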
I'm trying to install PostgreSQL via Ansible on a Windows machine, which means I'll need a URL. I know this shows me a download link, and I can usually right-click on the link and get the actual URL. Clicking on that link takes you to this page, and right-clicking on the "start the download now" link on that page doesn't give me a direct URL either, but a link to a JavaScript. I even looked at the HTML source in the Brave browser's developer tools. I also found an example here, but the FTP URL shown there doesn't have Windows installers. I also searched for "Postgres download URL" here, to no avail.
What the heck is the actual download URL, say for version 13?
On the page you linked to, if you click on "Download", it takes you to this page (https://www.enterprisedb.com/postgresql-tutorial-resources-training?cid=437), which typically starts the download automatically.
However, there is a message
If your download does not begin automatically, start the download now
which is the direct link to the installer (in this case, for 13.3).
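If you need the URL non-interactively (for Ansible, say), curl can reveal where a redirecting link finally lands; the link below is a placeholder for the one behind "start the download now":

# Print the final URL after following all redirects (headers only, nothing is saved):
curl -fsSLI -o /dev/null -w '%{url_effective}\n' "https://link-you-copied-from-the-page"
# Then download the installer from that resolved URL:
curl -fL -o postgresql-13-windows-x64.exe "https://resolved-direct-url"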
I am trying to download a file from the website adf.ly to a remote server, but when I try wget it just downloads the website's HTML code. Help?
First start the download on your own computer (and watch the ad and everything). Then, once it's downloading the file, you should be able to get the URL it's downloading from by looking at your "Downloads" section (in Chrome, for example). Copy that URL and use it to download from the remote server.
Make sure you are wget'ing the link to the actual file, not the adf.ly link. Adf.ly has it set up this way to force you to watch their ads.
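On the remote server it then comes down to just this (both the output name and the URL are placeholders; substitute the URL you copied from the browser):

wget -O myfile.zip "https://direct-url-copied-from-your-browser"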
I'm trying to automate uploading and downloading from an FTP site using cURL inside MATLAB, but I'm having difficulties. Essentially, I want one computer continuously uploading new files to the FTP site, but since there is a disk quota on the FTP server, I want another computer continuously downloading and removing those same files from it.
Easy enough, but my problem arises from wanting to make sure that I don't download a file that is still being uploaded, thereby resulting in an incomplete file.
First off, is there a way in cURL to make a file unavailable for download from the FTP site until the entire file has been uploaded?
One way around this would be to upload files to one directory and, once they are finished uploading, transfer them to a "Finished" directory on the FTP site. The download program would then only look for files inside that "Finished" directory. However, I don't know how to transfer files within an FTP site using cURL.
Is it possible to transfer files between directories on an FTP site using cURL without having to download the file first?
And if anyone else has better ideas on how to perform this task, I'd love to hear em!
Thanks!
You can upload the files using a special name and then rename them when done, and have the download client only download files with that special "upload completed" naming style.
Or you can move them between directories, just as you say (which is essentially a rename as well, just changing the directory too).
With command-line curl, you can send "raw" commands after the upload with the -Q option, and you can even find a tiny example in the curl FAQ: http://curl.haxx.se/docs/faq.html#Can_I_use_curl_to_delete_rename
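A rough sketch of the first variant with command-line curl (host, credentials, and file names are made up; the leading dash on the -Q argument makes curl send the command after the transfer):

# Upload under a temporary name, then rename it in place once the transfer is done.
curl -T report.csv "ftp://ftp.example.com/upload/report.csv.part" --user user:password \
  -Q "-RNFR report.csv.part" -Q "-RNTO report.csv"

The downloading side then only fetches files without the .part suffix. Moving the finished file into a "Finished" directory instead is the same idea with a different path in the RNTO target (how relative paths resolve there depends on the server).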
How do I download a site for offline viewing without a specific folder?
For example, I want to download the site without the http://site.com/forum/ sub-directory.
wget --help
might lead you to
-nH, --no-host-directories don't create host directories.
I'd try that first, but I'm not sure whether it will do what you want.
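Untested, but the run I would experiment with looks roughly like this (/forum is taken from your example). Note that -nH only stops wget from creating a top-level site.com/ folder; actually skipping the /forum/ sub-directory is the job of a different option, --exclude-directories (-X):

# Mirror the site, keep links working offline, skip the /forum directory,
# and don't wrap everything in a site.com/ host directory.
wget --mirror --convert-links --page-requisites -nH -X /forum http://site.com/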