I want to use wget to mirror an RSS site, following all links up to 3 levels deep:
wget -m -k -l 3 http://www.cnn.com/services/rss/
However, in the output I only see index.html and terms.html. Is there anything special about RSS sites, or is there some reason I cannot mirror this site?
Thanks in advance.
Related
I'm trying to scrape a forum site, to build a read-only archive.
I understand how to use -A and -R to limit the pages I retrieve, but is there a way to also retrieve page requisites (e.g., icons and such)?
Thanks!
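A minimal sketch of what might work, assuming wget's --page-requisites (-p) option covers the icons and stylesheets; the forum URL and the accept patterns below are placeholders, and whether -A/-R also filters the requisites can vary between wget versions:
wget -r -l 2 -k -p -A '*viewtopic*,*.css,*.png,*.gif,*.ico' http://forum.example.com/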
I'm trying to recursively fetch all pages linked from a Moin wiki page. I've tried many different wget recursive options, which all have the same result: only the html file from the given URL gets downloaded, not any of the pages linked from that html page.
If I use the --convert-links option, wget correctly translates the unfetched links to the right web links. It just doesn't recursively download those linked pages.
wget --verbose -r https://wiki.gnome.org/Outreachy
--2017-03-02 10:34:03-- https://wiki.gnome.org/Outreachy
Resolving wiki.gnome.org (wiki.gnome.org)... 209.132.180.180, 209.132.180.168
Connecting to wiki.gnome.org (wiki.gnome.org)|209.132.180.180|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘wiki.gnome.org/Outreachy’
wiki.gnome.org/Outreachy [ <=> ] 52.80K 170KB/s in 0.3s
2017-03-02 10:34:05 (170 KB/s) - ‘wiki.gnome.org/Outreachy’ saved [54064]
FINISHED --2017-03-02 10:34:05--
Total wall clock time: 1.4s
Downloaded: 1 files, 53K in 0.3s (170 KB/s)
I'm not sure if it's failing because the wiki's html links don't end with .html. I've tried using various combinations of --accept='[a-zA-Z0-9]+', --page-requisites, and --accept-regex='[a-zA-Z0-9]+' to work around that, with no luck.
I'm not sure if it's failing because the wiki has html pages like https://wiki.gnome.org/Outreachy that link to page URLs like https://wiki.gnome.org/Outreachy/Admin and https://wiki.gnome.org/Outreachy/Admin/GettingStarted. Maybe wget is confused because there would need to be an HTML page and a directory with the same name? I also tried using --nd, but no luck.
The linked html pages are all relative to the base wiki URL (e.g. the Outreachy history page). I've also tried adding --base="https://wiki.gnome.org/" with no luck.
At this point, I've tried a whole lot of different wget options, read several stack overflow and unix.stackexchange.com questions, and nothing I've tried has worked. I'm hoping there's a wget expert who can look at this particular wiki page and figure out why wget is failing to recursively fetch linked pages. The same options work fine on other domains.
I've also tried httrack, with the same result. I'm running Linux, so please don't suggest Windows or proprietary tools.
This seems to be caused by the following tag in the wiki:
<meta name="robots" content="index,nofollow">
If you are sure you want to ignore the tag, you can make wget ignore it using -e robots=off:
wget -e robots=off --verbose -r https://wiki.gnome.org/Outreachy
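If you want to confirm that this meta tag is what is blocking recursion before overriding it, a quick check (a sketch using standard tools, not specific to this wiki) is:
curl -s https://wiki.gnome.org/Outreachy | grep -i 'name="robots"'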
I'm attempting to create a mirror of a WordPress site with clean URLs (i.e. http://example.org/foo not http://example.org/foo.php). When Wget mirrors the site, it gives all pages and links a ".html" extension (i.e. http://example.org/foo.html).
Is it possible to set options for Wget to create a clean URL structure, so that the mirrored file corresponding to the page "http://example.org/foo" would be "/foo/index.html" and the link to that page would be "http://example.org/foo"? If so, how?
If I understand your question correctly, you're asking for what is the default behaviour of Wget.
Wget will only add the extension to the local copy if the --adjust-extension option has been passed to it. Quoting the man page for Wget:
--adjust-extension
If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://example.com/article.cgi?25 will be saved as article.cgi?25.html.
However, what you seem to be asking for, namely that Wget saves example.org/foo as /foo/index.html, is actually the default behaviour. If you're seeing some other output, you should post the complete output of Wget with the --debug switch.
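For reference, a minimal mirroring sketch without --adjust-extension (example.org is the placeholder host from the question); note that a clean URL such as http://example.org/foo/ is saved as foo/index.html only when it is served with a trailing slash, otherwise it is saved as a file named foo:
wget --mirror --convert-links --page-requisites --no-parent http://example.org/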
I have this RSS feed URL:
http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php
A client is trying to access this RSS feed programmatically via Perl, like this:
# Fetch the content available in source HTTP URL
`curl -g --compressed "$source_url" > $tempRSSFile`;
Where $source_url is http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php
But they said that they couldn't access the feed this way with my URL. I know nothing about Perl, so could you point me in the right direction to make a compatible URL for the feed?
Thanks a lot!
The problem has nothing to do with Perl. If you run the curl command from the command line, you get an Error 406 - Not Acceptable response. One possibility is to trick mod_security by using another User-Agent header. This works right now:
curl --user-agent Mozilla/5.0 -g --compressed http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php > /tmp/feed.rss
But the better fix, as amon already said, is to configure the server to allow the RSS download for curl as well.
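If the client wants to keep the Perl wrapper unchanged apart from this workaround, the same User-Agent trick can be applied inside the backticks (a sketch based on the snippet above):
`curl --user-agent Mozilla/5.0 -g --compressed "$source_url" > $tempRSSFile`;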
I have the following problem: I need to mirror a password-protected site. Sounds like a simple task:
wget -m -k -K -E --cookies=on --keep-session-cookies --load-cookies=myCookies.txt http://mysite.com
In myCookies.txt I keep the proper session cookie. This works until wget comes across the logout page - then the session is invalidated and, effectively, further mirroring is useless.
I tried to add the --reject option, but it works only with file types - I can block only html or swf file downloads; I can't say
--reject http://mysite.com/*.php?type=Logout*
Any ideas how to skip certain URLs in wget? Maybe there is another tool that can do the job (it must work on MS Windows).
What if you first download (or even just touch) the logout page, and then
wget --no-clobber --your-original-arguments
This should skip the logout page, as it has already been downloaded.
(Disclaimer: I didn't try this myself)
I have also encountered this problem and later solved it with "--reject-regex logout" (see the wget-dev tips for more).
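Applied to the command above, a hedged sketch of the full invocation (assuming, as in the earlier pattern, that the logout URL contains type=Logout) would be:
wget -m -k -K -E --keep-session-cookies --load-cookies=myCookies.txt --reject-regex "type=Logout" http://mysite.com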