I'm trying to print HTTP headers to a text file. I tried:
wget -S --spider -O SESSIONS.txt 'mysite.com'
wget -S --spider 'mysite.com' > SESSIONS.txt
In both cases SESSIONS.txt remains empty. Why?
"--spider" option does not download anything.
You can try this -
wget -S --spider -q mysite.com 2>Sessions.txt
This will save only the headers to "Sessions.txt".
However, if you fetch several URLs, you will have to use echo or similar commands to mark which request generated which headers.
Or, you can remove the -q option and then parse the file to remove the unnecessary lines.
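For several URLs, a minimal sketch along those lines (the URLs are placeholders, not from the question) could label each block of headers before appending it, since wget writes the header output to stderr:
for url in https://example.com/a https://example.com/b; do
    echo "== $url ==" >> Sessions.txt
    wget -S --spider -q "$url" 2>> Sessions.txt
done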
Another way is to use "curl -I". However, this sends a HEAD request instead of a GET request. So, it will only work if the server supports and responds to HEAD requests.
I'm trying to download a file using:
wget https://huggingface.co/distilbert-base-uncased/blob/main/vocab.txt
I'm expecting to get the .txt file; however, I get the page HTML instead.
I tried wget --max-redirect=2 --trust-server-names <url> based on the suggestions here, and wget -m <url>, which downloads the entire website, and a few other variations that also don't work.
wget https://huggingface.co/distilbert-base-uncased/blob/main/vocab.txt
This points wget at an HTML page even though the URL ends in .txt. After visiting the page I found there is a link to the text file itself under "raw", which you should be able to use with wget as follows:
wget https://huggingface.co/distilbert-base-uncased/raw/main/vocab.txt
If you need to reveal the true type of a file without downloading it, you can use the --spider option. In this case,
wget --spider https://huggingface.co/distilbert-base-uncased/blob/main/vocab.txt
gives output containing
Length: 7889527 (7,5M) [text/html]
and
wget --spider https://huggingface.co/distilbert-base-uncased/raw/main/vocab.txt
gives output containing
Length: 231508 (226K) [text/plain]
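If you only care about the reported type, a small sketch building on the output above could filter the --spider output (which wget prints on stderr) down to the Length/type line:
wget --spider https://huggingface.co/distilbert-base-uncased/raw/main/vocab.txt 2>&1 | grep 'Length:'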
I have a download link to a large file.
You need to be logged in to the site, so a cookie is used.
The download link redirects to another URL.
I'm able to download the file with wget but I only want the output of the "real" direct download link.
wget does exactly this before starting the download
Location: https://foo.com/bar.zip [following]
Is there a way to make wget stop and not actually download the file?
The solutions I found recommend redirecting to /dev/null, but that would still download the file. What I want is for wget to follow the redirects but not actually start the download.
I couldn't find a way to do it with wget, but I found a way to do it with curl:
curl https://openlibrary.org/data/ol_dump_latest.txt.gz -s -L -I -o /dev/null -w '%{url_effective}'
This only requests the headers of the page (and sends them to /dev/null), so the file itself is never downloaded.
(src: https://stackoverflow.com/a/5300429/2317712 )
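Since the question also mentions needing a login cookie, here is a hedged sketch of how that could be combined with the curl approach above (the cookie file name and URL are assumptions, not from the question):
final_url=$(curl -s -L -I -o /dev/null -b cookies.txt -w '%{url_effective}' https://foo.com/download)
echo "$final_url"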
Going off of qqilihq's comment on the curl answer, this first picks out the line starting with "Location:" and then removes the "Location: " prefix and the " [following]" suffix using awk. I'm not sure I would use this, as it looks like a small change in the wget output could break it. I would use the curl answer myself.
wget --max-redirect=0 http://example.com/link-to-get-redirec-url-from 2>&1 | awk '/Location: /,// { print }' | awk '{print $2}'
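A hedged alternative sketch of the same idea, assuming the "Location: ... [following]" line format shown above stays stable, using a single sed expression instead of two awk passes:
wget --max-redirect=0 http://example.com/link-to-get-redirec-url-from 2>&1 \
  | sed -n 's/^Location: \(.*\) \[following\]$/\1/p'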
I tried '-N' and '--no-clobber', but the only result I get is a new copy of the existing example.exe with a number appended, using the syntax 'example.exe.1'. This is not what I want. I just need to download and overwrite example.exe in the same folder where I already saved a copy of it, without wget checking whether mine is older or newer than the example.exe already present in my download folder. Do you think this is possible, or do I need to create a script that deletes the example.exe file first, or maybe something that changes its modification date, etc.?
If you specify the output file using the -O option it will overwrite any existing file.
For example:
wget -O index.html bbc.co.uk
Run it multiple times and it will keep overwriting index.html.
wget doesn't let you overwrite an existing file unless you explicitly name the output file on the command line with option -O.
I'm a bit lazy and I don't want to type the output file name on the command line when it is already known from the downloaded file. Therefore, I use curl like this:
curl -O http://ftp.vim.org/vim/runtime/spell/fr.utf-8.spl
Be careful when downloading files like this from unsafe sites. The above command will write a file named however the connected web site wishes to name it (inside the current directory, though). The final name may be hidden through redirections and PHP scripts, or be obfuscated in the URL. You might end up overwriting a file you don't want to overwrite.
And if you ever find a file named ls or any other enticing name in the current directory after using curl that way, refrain from executing the downloaded file. It may be a trojan downloaded from a rogue or corrupted web site!
wget --backups=1 google.com
This renames the original file with a .1 suffix and writes the new file to the intended filename.
It's not exactly what was requested, but it could be handy in some cases.
-c or --continue
From the manual:
If you use '-c' on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and overwrite the existing file entirely.
I like the -c option. I started with the man page, then the web, and I've searched for this several times. It's useful, for example, if you're relaying a webcam and the image always needs to be named image.jpg. It seems like this should be clearer in the man page.
I've been using this for a couple of years to download things in the background, sometimes combined with "limit-rate = " in my wgetrc file:
while true
do
    wget -c -i url.txt && break
    echo "Restarting wget"
    sleep 2
done
Make a little file called url.txt and paste the file's URL into it. Set this script up in your path or maybe as an alias and run it. It keeps retrying the download until there's no error. Sometimes at the end it gets into a loop displaying
416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
but that's harmless, just ctrl-c it. I think it's always gotten the file I wanted even if wget runs out of retries or the connection temporarily goes away. I've downloaded things for days at a time with it. A CD image on dialup, yes, always with wget.
My use case involves two different URLs, sometimes the second one doesn't exist, but if it DOES exist, I want it to overwrite the first file.
The problem with using wget -O is that, when the second file DOESN'T exist, it will overwrite the first file with a BLANK file.
So the only way I could find is with an if statement:
--spider checks whether a file exists, and returns 0 if it does
--quiet fails quietly, with no output
-nv is quiet, but still reports errors
wget -nv https://example.com/files/file01.png -O file01.png
# quietly check if a different version exists
wget --quiet --spider https://example.com/custom-files/file01.png
if [ $? -eq 0 ] ; then
    # A different version exists, so download and overwrite the first
    wget -nv https://example.com/custom-files/file01.png -O file01.png
fi
It's verbose, but I found it necessary. I hope this is helpful for someone.
Here is an easy way to get it done with shell parameter expansion (trimming the URL down to the file name):
url=https://example.com/example.exe ; wget -nv "$url" -O "${url##*/}"
Or you can use basename
url=https://example.com/example.exe ; wget -nv "$url" -O "$(basename "$url")"
For those who do not want to use -O and want to specify the output directory only, the following command can be used.
wget \
    --directory-prefix "$dest" \
    --backups 0 \
    -- "$link"
The first command will download from the source with wget; the second command will remove the older file.
wget \
    --directory-prefix "$dest" \
    --backups 0 \
    -- "$link"
rm -f "$file.1"
I understand that the -i flag takes a file (which may contain a list of URLs), and I know that -O followed by a name can be used to rename an item being downloaded using wget.
example:
wget -i list_of_urls.txt
wget -O my_custom_name.mp3 http://example.com/some_file.mp3
I have a file that looks like this:
file name: list_of_urls.txt
http://example.com/some_file.mp3
http://example.com/another_file.mp3
http://example.com/yet_another_file.mp3
I want to use wget to download these files with the -i flag but also save each file as 1.mp3, 2.mp3 and so on.
Can this be done?
You can use any scripting language (PHP or Python) to generate a batch file. In this batch file, each line would run wget with a URL and the -O option.
Or you can write a loop in a bash script.
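For example, a minimal sketch in bash, assuming the list_of_urls.txt file from the question, that saves each download as 1.mp3, 2.mp3, and so on:
n=1
while IFS= read -r url; do
    wget -O "${n}.mp3" "$url"
    n=$((n + 1))
done < list_of_urls.txt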
I ran a web search again and found https://superuser.com/questions/336669/downloading-multiple-files-and-specifying-output-filenames-with-wget
Wget can't seem to do it, but curl can with the -K flag; the file supplied can contain URLs and output names. See http://curl.haxx.se/docs/manpage.html#-K
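For illustration, a hedged sketch of such a curl config file (the file name and URL/output pairing are assumptions; url and output are curl's documented config directives):
# save as downloads.cfg and run:  curl -K downloads.cfg
url = "http://example.com/some_file.mp3"
output = "1.mp3"
url = "http://example.com/another_file.mp3"
output = "2.mp3"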
If you are willing to use some shell scripting then https://unix.stackexchange.com/questions/61132/how-do-i-use-wget-with-a-list-of-urls-and-their-corresponding-output-files has the answer.
I have a question illustrated by the following command line interaction:
$ wget www.google.com -nv >> out.log
2014-10-28 21:41:43 URL:http://www.google.com/ [17700] -> "index.html.1" [1]
So I ran wget www.google.com with -nv (non-verbose, but still printing error information) and redirected all the output to out.log, so nothing should print on stdout, but information still gets printed to the terminal, which I can only assume is coming from stderr. Does anyone know why wget does that? How would I go about turning it off while still preserving error logging when there are actual errors?
Thanks a lot!
Jason
As the manual says, the option you are looking for is -q. "Non-verbose" merely turns off verbose status reporting.
The somewhat weird design decisions in wget are one reason to prefer curl.
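If it is enough to know that a download failed, without keeping wget's own error text, a minimal sketch (URL from the question) can rely on the exit status, since -q silences wget's error messages too:
wget -q www.google.com || echo "wget failed with exit status $?" >&2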
Use cURL instead:
$ curl -Ss http://www.stackoverflow.com -o /dev/null
(no output)
$ curl -Ss http://www.stackoverflow.invalid -o /dev/null
curl: (6) Couldn't resolve host 'www.stackoverflow.invalid'
If you, for whatever reason, really need to use wget, you can capture the output and only show it on failure:
errors=$(2>&1 wget -nv http://www.stackoverflow.com) || echo "$errors" >&2