Disallow /*foo but allow /*bar?foo=foo (i.e., how do I disallow an endpoint when the query string might contain the same name?)

I want to disallow the /*foo endpoint regardless of its query string, but allow /*bar regardless of its query string.
A robots.txt like the one below would also disallow /*bar?foo=foo (where foo only appears in the query string) and URLs where foo appears higher up in the path, such as /foo/bar.
User-agent: *
Disallow: /*foo
How should I set up robots.txt in this case? Does putting $ at the end of the pattern work in this scenario?
The "standard" robots.txt doesn't accept wildcards, so I'm talking about the ones like used by Google.

Related

Allowing certain URLs and denying the rest with robots.txt

I need to allow only some particular directories and deny the rest. My understanding is that you should allow first and then disallow the rest. Is what I have set up here right?
Allow: /word-lists/words-that-start-with/letter/z/
Allow: /word-lists/words-that-end-with/letter/z/
Disallow: /word-lists/words-that-start-with/letter/
Disallow: /word-lists/words-that-end-with/letter/
Your snippet looks OK; just don't forget to add a User-agent line at the top.
The order of the allow/disallow keywords doesn't matter currently, but it's up to the client to make the correct choice. See the Order of precedence for group-member records section in our robots.txt documentation:
[...] for allow and disallow directives, the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule.
The original RFC does state that clients should evaluate rules in the order they're found; however, I don't recall any crawler that actually does that. Instead they play it safe and follow the most restrictive rule:
To evaluate if access to a URL is allowed, a robot must attempt to
match the paths in Allow and Disallow lines against the URL, in the
order they occur in the record. The first match found is used. If no
match is found, the default assumption is that the URL is allowed.
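As a quick sanity check (with the User-agent: * line added as noted above), Python's standard urllib.robotparser evaluates rules in the order they appear, i.e. the original first-match behaviour, and because the Allow lines come first it reaches the same result Google's longest-match rule would:
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /word-lists/words-that-start-with/letter/z/
Allow: /word-lists/words-that-end-with/letter/z/
Disallow: /word-lists/words-that-start-with/letter/
Disallow: /word-lists/words-that-end-with/letter/
"""
rp = RobotFileParser()
rp.parse(rules.splitlines())
# The z/ directories stay crawlable, everything else under letter/ is blocked.
print(rp.can_fetch("*", "http://example.com/word-lists/words-that-start-with/letter/z/zebra"))  # True
print(rp.can_fetch("*", "http://example.com/word-lists/words-that-start-with/letter/a/apple"))  # False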

Disallow dynamic URL in robots.txt

Our URL is:
http://example.com/kitchen-knife/collection/maitre-universal-cutting-boards-rana-parsley-chopper-cheese-slicer-vegetables-knife-sharpening-stone-ham-stand-ham-stand-riviera-niza-knives-block-benin.html
I want to disallow URLs from being crawled after collection, but the categories that come before collection are generated dynamically.
How would I disallow URLs after /collection in robots.txt?
This is not possible in the original robots.txt specification.
But some (!) parsers extend the specification and define a wildcard character (typically *).
For those parsers, you could use:
Disallow: /*/collection
Parsers that understand * as a wildcard will stop crawling any URL whose path starts with a slash, followed by anything (which may be nothing), followed by /collection, e.g.,
http://example.com/foo/collection/
http://example.com/foo/collection/bar
http://example.com/foo/bar/collection/
Parsers that don’t understand * as a wildcard (i.e., those that follow the original specification) will stop crawling any URL whose path literally starts with /*/collection, e.g.,
http://example.com/*/collection/
http://example.com/*/collection/bar
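Python's standard urllib.robotparser is one example of a parser in the second camp: it treats the * literally, so in the quick check below (test URLs are my own) it leaves the real product URLs alone and only blocks paths that literally start with /*/collection. A wildcard-aware crawler such as Googlebot interprets the rule as described above.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /*/collection".splitlines())
# A literal-minded parser does not block the real URLs...
print(rp.can_fetch("*", "http://example.com/kitchen-knife/collection/some-product.html"))  # True
# ...only paths that literally contain /*/collection:
print(rp.can_fetch("*", "http://example.com/*/collection/bar"))  # False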

Does wget check if specified user agent is allowed in robots.txt?

If I specify a custom user agent for wget, e.g. "MyBot (info#mybot...)", will wget check robots.txt for that name as well (in case the bot is banned), or only for the general robot exclusions?
No, if you specify your own user agent, Wget does not check for it in the robots.txt file. In fact, I believe I've found another bug in Wget while trying to answer your question. Even if you specify a custom User Agent, Wget seems to adhere to its own User Agent rules when parsing robots.txt. I have created a test case for this and will fix the implementation in Wget ASAP.
Now for the authoritative answer to your original question. The answer is no, because in the Wget source you can see the following comment preceding the function that parses the robots file for rules:
/* Parse textual RES specs beginning with SOURCE of length LENGTH.
Return a specs objects ready to be fed to res_match_path.
The parsing itself is trivial, but creating a correct SPECS object is
trickier than it seems, because RES is surprisingly byzantine if you
attempt to implement it correctly.
A "record" is a block of one or more User-Agent' lines followed by
one or moreAllow' or Disallow' lines. Record is accepted by Wget if
one of theUser-Agent' lines was "wget", or if the user agent line
was "*".
After all the lines have been read, we examine whether an exact
("wget") user-agent field was specified. If so, we delete all the
lines read under "User-Agent: *" blocks because we have our own
Wget-specific blocks. This enables the admin to say:
User-Agent: *
Disallow: /

User-Agent: google
User-Agent: wget
Disallow: /cgi-bin
This means that to Wget and to Google, /cgi-bin is disallowed, whereas
for all other crawlers, everything is disallowed. res_parse is
implemented so that the order of records doesn't matter. In the case
above, the "User-Agent: *" could have come after the other one. */

Disallow URLs with empty parameters in robots.txt

Normally I have this URL structure:
http://example.com/team/name/16356**
But sometimes my CMS generates URLs without the name part:
http://example.com/team//16356**
and then it's a 404.
How can I disallow such URLs when the name part is empty?
It could probably be done with a regex for the empty segment, but I don't want to mess things up with Googlebot, so I'd rather get it right from the beginning.
If you want to block URLs like http://example.com/team//16356**, where the number part can vary, you could use the following robots.txt:
User-agent: *
Disallow: /team//
This will block crawling of any URL whose path starts with /team//.
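If you want to sanity-check this, Python's standard urllib.robotparser (plain prefix matching, which is all that is needed here) agrees; the test URLs below are my own stand-ins for the patterns above:
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /team//".splitlines())
print(rp.can_fetch("*", "http://example.com/team//16356"))      # False (double slash is blocked)
print(rp.can_fetch("*", "http://example.com/team/name/16356"))  # True  (normal URLs stay crawlable)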

Help with creating robots.txt correctly

I have dynamic URLs like these:
mydomain.com/?pg=login
mydomain.com/?pg=reguser
mydomain.com/?pg=aboutus
mydomain.com/?pg=termsofuse
When a page is requested, e.g. mydomainname.com/?pg=login, index.php includes the login.php file.
Some of the URLs are converted to static URLs, like:
mydomain.com/aboutus.html
mydomain.com/termsofuse.html
I need to allow indexing of mydomainname.com/aboutus.html and mydomainname.com/termsofuse.html,
and disallow mydomainname.com/?pg=login and mydomainname.com/?pg=reguser. Please help me manage my robots.txt file.
I also have mydomainname.com/posted.php?details=50 (details can be any number), which I converted to mydomainname.com/details/50.html.
I need to allow all URLs of this type as well.
If you wish to only index your static pages, you can use this:
Disallow: /*?
This will disallow all URLs which contain a question mark.
If you wish to keep indexing posted.php?details=50 URLs, and you have a finite set of params you wish to disallow, you can create a disallow entry for each, like this:
Disallow: /?pg=login
Or just prevent everything starting with /?
Disallow: /?*
You can use a tool like this to test a sampling of URLs to see if it will match them or not.
http://tools.seobook.com/robots-txt/analyzer/
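As a quick sanity check of the per-parameter variant, Python's standard urllib.robotparser (plain prefix matching only, so the /*? and /?* forms above would need a wildcard-aware tester such as the one linked) gives the expected results for the URLs from the question:
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /?pg=login
Disallow: /?pg=reguser
"""
rp = RobotFileParser()
rp.parse(rules.splitlines())
print(rp.can_fetch("*", "http://mydomain.com/?pg=login"))        # False (dynamic page blocked)
print(rp.can_fetch("*", "http://mydomain.com/aboutus.html"))     # True  (static page crawlable)
print(rp.can_fetch("*", "http://mydomain.com/details/50.html"))  # True  (rewritten detail pages crawlable)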