Allow all files in webroot, and disallow all directories unless specifically allowed - robots.txt

I would like to disallow everything except:
All files in the web root
Specified directories in the web root.
I have seen this example in another answer:
Allow: /public/section1/
Disallow: /
But does the above allow crawling of all files in the web root?
I want to allow all files in the web root.

If you want to disallow directories without disallowing files, you will need to use wildcards:
User-agent: *
Allow: /public/section1/
Disallow: /*/
The above will allow all of the following:
http://example.com/
http://example.com/somefile
http://example.com/public/section1/
http://example.com/public/section1/somefile
http://example.com/public/section1/somedir/
http://example.com/public/section1/somedir/somefile
And it will disallow all of the following:
http://example.com/somedir/
http://example.com/somedir/somefile
http://example.com/somedir/otherdir/somefile
Just be aware that wildcards are not part of the original robots.txt specification, and are not supported by all crawlers. They are supported by all of the major search engines, but there are many other crawlers out there that don't support them.
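If it helps to see why, here is a minimal Python sketch of how a wildcard-aware crawler would evaluate the two rules above. It assumes Google-style matching (the most specific, i.e. longest, matching rule wins, and Allow wins a tie); the original robots.txt standard defines none of this, so it only models crawlers that support wildcards:

import re

# Rules from the robots.txt above, in (directive, pattern) form.
RULES = [
    ("Allow", "/public/section1/"),
    ("Disallow", "/*/"),
]

def pattern_to_regex(pattern):
    # '*' matches any sequence of characters; '$' anchors the end of the URL.
    parts = []
    for ch in pattern:
        parts.append(".*" if ch == "*" else "$" if ch == "$" else re.escape(ch))
    return re.compile("^" + "".join(parts))

def is_allowed(path):
    matches = [(len(p), d) for d, p in RULES if pattern_to_regex(p).match(path)]
    if not matches:
        return True  # nothing matches: crawling is allowed by default
    # Longest pattern wins; Allow beats Disallow on a tie.
    matches.sort(key=lambda m: (m[0], m[1] == "Allow"), reverse=True)
    return matches[0][1] == "Allow"

for path in ["/", "/somefile", "/public/section1/somefile",
             "/somedir/", "/somedir/somefile"]:
    print(path, "->", "allowed" if is_allowed(path) else "disallowed")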

Related

Robots.txt - prevent index of .html files

I want to prevent indexing of *.html files on our site, so that only clean URLs are indexed.
So I would like www.example.com/en/login indexed, but not www.example.com/en/login/index.html.
Currently I have:
User-agent: *
Disallow: /
Disallow: /**.html - not working
Allow: /$
Allow: /*/login*
I know I can just disallow individual files, e.g. Disallow: /*/login/index.html, but my issue is that I have a number of these .html files that I do not want indexed, so I wondered whether there is a way to disallow them all instead of listing them individually?
First of all, you keep using the word "indexed", so I want to make sure you're aware that robots.txt is only a convention for suggesting to automated crawlers that they avoid certain URLs on your domain. Pages listed in a robots.txt file can still show up in a search engine's index if the engine has other data about them; for instance, Google explicitly states that it may still index and list a URL even when it is not allowed to crawl it. I mention this in case you are using "indexed" to mean "listed in a search engine" rather than "crawled by an automated program".
Secondly, there's no standard way to accomplish what you're asking for. Per "The Web Robots Pages":
Note also that globbing and regular expression are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: bot", "Disallow: /tmp/*" or "Disallow: *.gif".
That being said, it's a common extension that many crawlers do support. For example, in Google's documentation of the directives they support, they describe pattern-matching support that handles * as a wildcard. So you could add a Disallow: /*.html$ directive, and Google would then not crawl URLs ending in .html, though they could still end up in search results.
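To illustrate what that single pattern would do to the two URLs from the question, here is a small Python sketch of the pattern matching that wildcard-supporting crawlers describe ('*' matches any sequence of characters, '$' anchors the end of the URL); it looks at this one directive in isolation, not the whole file:

import re

# "Disallow: /*.html$" translated into a regular expression.
disallow_html = re.compile(r"^/.*\.html$")

for path in ["/en/login", "/en/login/index.html"]:
    blocked = bool(disallow_html.match(path))
    print(path, "->", "blocked" if blocked else "crawlable")
# /en/login -> crawlable
# /en/login/index.html -> blocked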
But, if your primary goal is telling search engines what URL you consider "clean" and preferred, then what you're actually looking for is specifying Canonical URLs. You can put a link rel="canonical" element on each page with your preferred URL for that page, and search engines that use that element will use it in order to determine which path to prefer when displaying that page.

Where to place the robots.txt file with G-WAN?

I want to disallow robots from crawling the csp folder and plan to use the following robots.txt file:
User-agent: *
Disallow: /csp
So, my question is twofold:
Is the syntax correct for G-WAN?
With G-WAN, where should I place this file?
The robots.txt file should be placed in G-WAN's /www folder, if you want to use this feature. Keep in mind that robots.txt is only a hint for robots, and many of them do not respect it (so it is much safer to define file-system permissions, or to put an index.html file in the folders that you don't want browsed).
The /csp directory cannot be crawled by any HTTP client (including robots). Only the /www directory can.
This separation has worked pretty well in terms of simplicity, design and security so far, avoiding the pitfall of deciding what is executable and what is the presentation layer.

Robots.txt, how to allow access only to domain root, and no deeper? [closed]

I want to allow crawlers to access my domain's root directory (i.e. the index.html file), but nothing deeper (i.e. no subdirectories). I do not want to have to list and deny every subdirectory individually within the robots.txt file. Currently I have the following, but I think it is blocking everything, including stuff in the domain's root.
User-agent: *
Allow: /$
Disallow: /
How can I write my robots.txt to accomplish what I am trying for?
Thanks in advance!
There's nothing that will work for all crawlers. There are two options that might be useful to you.
Robots that allow wildcards should support something like:
Disallow: /*/
The major search engine crawlers understand the wildcards, but unfortunately most of the smaller ones don't.
If you have relatively few files in the root and you don't often add new files, you could use Allow to allow access to just those files, and then use Disallow: / to restrict everything else. That is:
User-agent: *
Allow: /index.html
Allow: /coolstuff.jpg
Allow: /morecoolstuff.html
Disallow: /
The order here is important. Crawlers are supposed to take the first match. So if your first rule was Disallow: /, a properly behaving crawler wouldn't get to the following Allow lines.
If a crawler doesn't support Allow, then it's going to see the Disallow: / and not crawl anything on your site, provided, of course, that it ignores things in robots.txt that it doesn't understand.
All the major search engine crawlers support Allow, and a lot of the smaller ones do, too. It's easy to implement.
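If you want to sanity-check a file like this, Python's standard urllib.robotparser module understands Allow lines and follows the first-match behaviour described above (it does not handle wildcards, but none are needed here). A quick sketch:

from urllib import robotparser

robots_txt = """\
User-agent: *
Allow: /index.html
Allow: /coolstuff.jpg
Allow: /morecoolstuff.html
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for url in ["http://example.com/index.html",
            "http://example.com/coolstuff.jpg",
            "http://example.com/somedir/page.html"]:
    print(url, "->", rp.can_fetch("*", url))
# http://example.com/index.html -> True
# http://example.com/coolstuff.jpg -> True
# http://example.com/somedir/page.html -> False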
In short, no, there is no way to do this nicely using the robots.txt standard. Remember that Disallow specifies a path prefix; wildcards and Allow are non-standard extensions.
So the following approach (a kludge!) will work; a short script for generating it follows the listing.
User-agent: *
Disallow: /a
Disallow: /b
Disallow: /c
...
Disallow: /z
Disallow: /A
Disallow: /B
Disallow: /C
...
Disallow: /Z
Disallow: /0
Disallow: /1
Disallow: /2
...
Disallow: /9
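If you do go down that road, it is less tedious to generate the file than to type it out. A minimal Python sketch (remember each value is a prefix, so Disallow: /a also blocks /about and everything else starting with "a"):

import string

# One Disallow line per leading letter and digit, as in the listing above.
lines = ["User-agent: *"]
lines += ["Disallow: /" + c for c in
          string.ascii_lowercase + string.ascii_uppercase + string.digits]
print("\n".join(lines))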

Robots.txt Disallow Certain Folder Names

I want to disallow robots from crawling any folder named this-folder, at any position in the URL.
Examples to disallow:
http://mysite.com/this-folder/
http://mysite.com/houses/this-folder/
http://mysite.com/some-other/this-folder/
http://mysite.com/no-robots/this-folder/
This is my attempt:
Disallow: /.*this-folder/
Will this work?
Officially globbing and regular expressions are not supported:
http://www.robotstxt.org/robotstxt.html
but some major search engines do support simple * wildcards (though not full regular expressions).
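Assuming a crawler with Google-style wildcard support, the pattern would be written with * rather than regex syntax, i.e. Disallow: /*this-folder/. A rough Python sketch of how such a crawler would match it against the URLs from the question:

import re

# Translate "/*this-folder/" the way wildcard-aware crawlers describe it:
# '*' matches any sequence of characters, everything else is literal.
pattern = "/*this-folder/"
regex = re.compile("^" + ".*".join(re.escape(part) for part in pattern.split("*")))

for path in ["/this-folder/", "/houses/this-folder/",
             "/some-other/this-folder/", "/no-robots/this-folder/",
             "/other-folder/page.html"]:
    print(path, "->", "disallowed" if regex.match(path) else "allowed")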

How to disallow all dynamic URLs in robots.txt [closed]

How do I disallow all dynamic URLs in robots.txt?
Disallow: /?q=admin/
Disallow: /?q=aggregator/
Disallow: /?q=comment/reply/
Disallow: /?q=contact/
Disallow: /?q=logout/
Disallow: /?q=node/add/
Disallow: /?q=search/
Disallow: /?q=user/password/
Disallow: /?q=user/register/
Disallow: /?q=user/login/
I want to disallow everything that starts with /?q=
The answer to your question is to use
Disallow: /?q=
The best (currently accessible) source on robots.txt I could find is on Wikipedia. (The supposedly definitive source is http://www.robotstxt.org, but the site is down at the moment.)
According to the Wikipedia page, the standard defines just two fields: User-agent: and Disallow:. The Disallow: field does not allow explicit wildcards; each "disallowed" path is actually a path prefix, i.e. it matches any path that starts with the specified value.
The Allow: field is a non-standard extension, and any support for explicit wildcards in Disallow would be a non-standard extension. If you use these, you have no right to expect that a (legitimate) web crawler will understand them.
This is not a matter of crawlers being "smart" or "dumb": it is all about standards compliance and interoperability. For example, any web crawler that did "smart" things with explicit wildcard characters in a "Disallow:" would be bad for (hypothetical) robots.txt files where those characters were intended to be interpreted literally.
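For what it's worth, Python's standard urllib.robotparser treats Disallow values as plain prefixes, so you can use it to check the rule against URLs from the question (a quick sketch):

from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow: /?q=
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "http://example.com/?q=admin/"))       # False
print(rp.can_fetch("*", "http://example.com/?q=user/login/"))  # False
print(rp.can_fetch("*", "http://example.com/about"))           # True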
As Paul said, a lot of robots.txt interpreters are not too bright and might not interpret wildcards in the path the way you intend.
That said, some crawlers try to skip dynamic pages on their own, worrying they might get caught in infinite loops on links with varying urls. I am assuming you are asking this question because you face a courageous crawler who is trying hard to access those dynamic paths.
If you have issues with specific crawlers, you can investigate how each one handles robots.txt and add a robots.txt section specific to it.
If you generally just want to disallow such access to your dynamic pages, you might want to rethink your robots.txt design.
More often than not, dynamic parameter-handling "pages" live under a specific directory or a specific set of directories, which is why it is normally enough to simply Disallow: /cgi-bin or /app and be done with it.
In your case you seem to have mapped the root to an area that handles parameters. You might want to reverse the logic of robots.txt and say something like:
User-agent: *
Allow: /index.html
Allow: /offices
Allow: /static
Disallow: /
This way your Allow list overrides your Disallow list by specifically adding what crawlers should index. Note that not all crawlers are created equal, and you may want to refine this robots.txt later, adding a specific section for any crawler that still misbehaves.