Recursive/no-parent wget doesn't follow links with cookie-based authentication - wget

I'm running wget --recursive --no-parent --adjust-extension --convert-links --page-requisites --restrict-file-names=windows --keep-session-cookies --load-cookies cookies.txt http://DOMAIN/private/ and it correctly downloads the private/index.html file.
I inspected this file and it is the correct page, the one shown only after successful authentication. It contains markup like:
<ul><li><a class="CP___PAGEID_56400" href="http://DOMAIN/private/page1.html">My private page</a></li>...
However, after fetching all the page resources (images etc.), wget seems to think it is finished and shuts down after 'converting links'.
If I drop --no-parent it keeps going. So is the --no-parent flag somehow confusing wget about subpages?

I finally realized that wget was obeying robots.txt! I changed my command to wget -e robots=off --wait 0.25 --recursive --no-parent ... and got it working. I added --wait 0.25 since I didn't want to hammer the server either.
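Putting it together, the full working command was presumably something like the following (the same flags as the original invocation above, plus -e robots=off and --wait 0.25):
wget -e robots=off --wait 0.25 --recursive --no-parent --adjust-extension --convert-links --page-requisites --restrict-file-names=windows --keep-session-cookies --load-cookies cookies.txt http://DOMAIN/private/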

Related

wget converted some of the links - how to convert links after download?

I used
wget --mirror --convert-links http://example.com/ 2>&1 | tee -a wget.log
to download a website. It turns out that only some of the links were converted. How can I have all of the links converted, even after the download? I do not want to download all of the contents again.
Firstly, please be aware that --convert-links does its job after everything has been downloaded, so if you inspect a downloaded file before wget has finished working you may see unconverted links.
I do not want to download all of the contents again.
Then you should use --no-clobber. But according to the man page, --mirror is equivalent to -r -N -l inf --no-remove-listing, and --no-clobber and -N are mutually exclusive, so you must not use --mirror itself but rather its component options excluding -N. Taking this into account, your command should look like this:
wget -r --no-clobber -l inf --no-remove-listing --convert-links http://example.com/

wget not following links with mirror

I'm trying to semi mirror a site. What I want is to download all of the MP3s and make sure I'm not redownloading those that I already have (hence the "mirror" part). I've typed in the following:
wget -m -nd -e robots=off --random-wait -A "*.mp3" -P FOLDER http://www.example.com/
And it downloads all the MP3s on the current page. It never follows the links to the "Next Page" or the like. I've replaced the -m with -N -c -r without success. What other options can I use?
Try:
wget --execute robots=off --recursive --accept mp3,MP3 --random-wait --no-parent --continue --no-clobber http://site.com/

use wget to get script links of a website

I've been using wget in spider mode to collect all of a website's links, but it will not return the paths to scripts. Is there any way to do this? My current wget command:
wget --spider --recursive --no-verbose --output-file=wget.txt https://www.example.com
Add the -p or --page-requisites option. That makes wget fetch all the extra assets (scripts, stylesheets, images) needed by each page.
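For example, added to the spider command above it would look something like this (the other options unchanged):
wget --spider --recursive --page-requisites --no-verbose --output-file=wget.txt https://www.example.com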

how to use wget on a site with many folders and subfolders

I'm trying to download this site with this command:
wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off tenshi.spb.ru/anime-ost/
But I only get the index and the contents of the first folder, not the subfolders. Can anyone help?
I use this command to download sites including their subfolders:
wget --mirror -p --convert-links -P . [site address]
A little explanation:
--mirror is a shortcut for -N -r -l inf --no-remove-listing.
--convert-links makes links in downloaded HTML or CSS point to local files.
-p makes wget get all images, etc. needed to display the HTML pages.
-P specifies that the next argument is the directory the files will be saved to.
I found the command at:
http://www.thegeekstuff.com/2009/09/the-ultimate-wget-download-guide-with-15-awesome-examples/
You used -l1, also known as --level=1, which limits recursion to one level. Set that to a higher level to download more pages. BTW, I like long options like --level because it's easier to see what you are doing without going back to the man pages.
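For example, to recurse two levels deep instead of one (2 is just an illustrative value), your original command would become:
wget -r --level=2 -H -t1 -nd -N -np -A.mp3 -e robots=off tenshi.spb.ru/anime-ost/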

WGET problem downloading pdfs from website

I'm trying to download all of the PDFs and PPTs from this website: http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
In Cygwin, I run:
wget --no-parent -A.pdf,.pptx -r -l1 http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
but no files are downloaded.
What do I need to change in the above wget command for this to work?
I needed to add the -e robots=off option, so this worked:
wget -e robots=off -A.pdf,.pptx -r -l1 http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
Also, in general, use the --debug flag to get more detail about what wget is doing.
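For example, to capture the debug output to a file for inspection (the log file name here is just an example), something like:
wget --debug --output-file=wget-debug.log -e robots=off -A.pdf,.pptx -r -l1 "http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides"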