Robots.txt disallowing particular type of URL

I want to exclude this URL from bots:
test.com/p/12345/qwerty
But allow this URL:
test.com/p/12345
Will this line work?
User-Agent: *
Disallow: */p/*/*
Thanks

According to this tutorial there is no need for the * at the beginning. It should be:
User-Agent: *
Disallow: /p/*/*
Note that this will have the side effect of blocking bots on an address like test.com/p/abc/def
In order to get exactly the functionality you are asking for and nothing more (i.e. no side effects), use this:
User-Agent: *
Disallow: /p/12345/qwerty
test.com/p/12345 will be allowed by default.
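If you want to sanity-check how Google-style matching treats these rules, here is a minimal PHP sketch (the helper function and the test paths are my own, not part of any standard API): a rule matches from the start of the URL path, * stands for any sequence of characters, and a trailing $ anchors the end.
<?php
// Hypothetical helper: emulate Google-style robots.txt matching for a single rule.
function ruleMatches(string $rule, string $path): bool
{
    $anchored = substr($rule, -1) === '$';          // trailing '$' anchors the end of the path
    if ($anchored) {
        $rule = substr($rule, 0, -1);
    }
    $regex = str_replace('\*', '.*', preg_quote($rule, '#')); // '*' means "any characters"
    return (bool) preg_match('#^' . $regex . ($anchored ? '$' : '') . '#', $path);
}

var_dump(ruleMatches('/p/*/*', '/p/12345/qwerty')); // bool(true)  -> blocked
var_dump(ruleMatches('/p/*/*', '/p/12345'));        // bool(false) -> still crawlable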

Related

How to block fake Googlebots?

I guess a fake Googlebot visited my site. Here is the entry log:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
66.249.73.72
I think so because it crawled some addresses that do not exist! In fact, they were never created by me at all.
The fake bot follows a pattern: it prepends a specific word to my URL paths.
For instance, this page exists:
https://stackoverflow.com/user
but the bot crawled:
https://stackoverflow.com/some-word-user
https://stackoverflow.com/some-word-jobs
And here is my robots.txt:
User-agent: *
Disallow: /search?q=*
Disallow: *?replytocom
Disallow: /*add-to-cart=*
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: -----
First, you should know that Googlebot crawls non-existing addresses too, e.g. when it is trying to discover new content.
Second, I would personally rather live with fake Googlebots than risk excluding the real Googlebot by blocking its IP. Google keeps adding new IPs for Googlebot. Again: don't risk it.
In my experience, Googlebot requests always come from a Googlebot IP address that resolves to something like crawl-xx-xxx-xxx-xxx.googlebot.com.
So a possible method is to check whether the user agent includes Googlebot/2.1 AND the remote host includes googlebot.com; if both hold, it is valid. If the agent claims Googlebot but the host does not match, it's a fake.
Here is the code:
$agent  = $_SERVER['HTTP_USER_AGENT'];
// Reverse-resolve the client IP if the server has not already supplied the host name
$remote = isset($_SERVER['REMOTE_HOST']) ? $_SERVER['REMOTE_HOST'] : gethostbyaddr($_SERVER['REMOTE_ADDR']);
$value  = "googlebot";
// strpos() returns false when the needle is not found, so compare with !== false
$claimsGooglebot   = strpos(strtolower($agent), $value) !== false;
$resolvesGooglebot = strpos(strtolower((string) $remote), $value) !== false;
if ($claimsGooglebot && !$resolvesGooglebot) {
    // The user agent pretends to be Googlebot but the host is not *.googlebot.com: serve an error page
    require_once($_SERVER['DOCUMENT_ROOT'] . '/errorpage.php');
    exit();
}
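For what it's worth, Google's documented way to verify Googlebot is a reverse DNS lookup on the client IP followed by a forward lookup that must resolve back to the same IP. Here is a rough PHP sketch of that check (verifyGooglebot() is a made-up helper, and the 403 response is just one way to handle a fake):
<?php
// Sketch: verify a client that claims to be Googlebot via reverse + forward DNS.
function verifyGooglebot(string $ip): bool
{
    $host = gethostbyaddr($ip);                         // e.g. crawl-66-249-73-72.googlebot.com
    if ($host === false || $host === $ip) {
        return false;                                   // reverse lookup failed
    }
    if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
        return false;                                   // not a Google-owned host name
    }
    return gethostbyname($host) === $ip;                // forward lookup must point back to the IP
}

$claimsGooglebot = stripos($_SERVER['HTTP_USER_AGENT'] ?? '', 'googlebot') !== false;
if ($claimsGooglebot && !verifyGooglebot($_SERVER['REMOTE_ADDR'])) {
    http_response_code(403);                            // treat as a fake Googlebot
    exit();
}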

Disallow /*foo but allow /*bar?foo=foo (i.e. how to disallow an API endpoint if the query string might contain the same name?)

I want to disallow the /*foo endpoint regardless of its query string, but allow /*bar regardless of its query string.
A robots.txt like the one below would also disallow /*bar?foo=foo (because of its query string), as well as URLs with an earlier path segment containing foo, such as /foo/bar.
User-agent: *
Disallow: /*foo
How should I set up robots.txt in this case? Does putting $ at the end work in this scenario?
The "standard" robots.txt syntax doesn't accept wildcards, so I'm talking about the extended syntax used by Google.

Robots.txt blocking URLs with a page parameter higher than 10

I was checking already for similar questions, but I don't think this specific case has been asked and answered yet.
I'd like to block all URLs with a page parameter higher than 10 (I will probably choose a value lower than 10).
Disallow: /events/world-wide/all-event-types/all?page=11
Allow : /events/world-wide/all-event-types/all?page=3
I have a lot of similar URLs where the other "parameters" can change, and some of these lists have up to almost 150 pages.
Disallow: /events/germany/triathlon/all?page=13
Allow : /events/germany/triathlon/all?page=4
How can I accomplish this without listing all the URLs (which is basically impossible)?
Let me emphasize again that the page parameter is the important thing here.
I can probably do something like this:
Disallow: *?page=
Allow: *?page=(1-10)
What's the proper approach here?
The robots.txt "regEx" syntax is fairly limited so unfortunately it can result in unnecessarily large robots.txt files. Although the other answers address the primary use case, you might want to also consider adding some variants to account for shuffling of additional parameters.
Disallow: *?page=
Disallow: *&page=
Allow: *?page=1$
Allow: *?page=2$
Allow: *?page=3$
...
Allow: *?page=1&
Allow: *?page=2&
Allow: *?page=3&
...
Allow: *&page=1&
Allow: *&page=2&
Allow: *&page=3&
....
You can do it this way:
Allow: /*?page=1
Allow: /*?page=2
Allow: /*?page=3
Allow: /*?page=4
Allow: /*?page=5
Allow: /*?page=6
Allow: /*?page=7
Allow: /*?page=8
Allow: /*?page=9
Allow: /*?page=10
Disallow: /*?page=1*
Disallow: /*?page=2*
Disallow: /*?page=3*
Disallow: /*?page=4*
Disallow: /*?page=5*
Disallow: /*?page=6*
Disallow: /*?page=7*
Disallow: /*?page=8*
Disallow: /*?page=9*
So we allow pages from 1 to 10,
and disallow pages higher than 10.
You can read the Google docs there.
Thanks @Bazzilio for the nice try, but we programmers are lazy and try to avoid writing code as much as possible. The best I can come up with for now is the following (which works):
Disallow: *?page=
Allow: *?page=1$
Allow: *?page=2$
Allow: *?page=3$
Allow: *?page=4$
....
But isn't there a way to combine the Allow statements?
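The reason Disallow: *?page= plus Allow: *?page=1$ etc. works is Google's precedence rule: among all rules that match a URL, the most specific (longest) one wins, and on a tie Allow beats Disallow. As far as I know, Google's syntax only supports * and $, so there is no range shorthand that would collapse the Allow lines further. Here is a small PHP sketch of that precedence logic (isAllowed() and the rule list are my own illustration, not a library API):
<?php
// Sketch of Google's documented precedence: the longest matching rule wins;
// on a tie, Allow beats Disallow. Everything is allowed by default.
function isAllowed(array $rules, string $path): bool
{
    $best = ['len' => -1, 'allow' => true];
    foreach ($rules as [$type, $rule]) {
        $anchored = substr($rule, -1) === '$';
        $regex = str_replace('\*', '.*', preg_quote(rtrim($rule, '$'), '#'));
        if (preg_match('#^' . $regex . ($anchored ? '$' : '') . '#', $path)) {
            $len = strlen($rule);
            if ($len > $best['len'] || ($len === $best['len'] && $type === 'allow')) {
                $best = ['len' => $len, 'allow' => ($type === 'allow')];
            }
        }
    }
    return $best['allow'];
}

$rules = [['disallow', '*?page='], ['allow', '*?page=2$']];
var_dump(isAllowed($rules, '/events/germany/triathlon/all?page=2'));  // bool(true)  -> crawlable
var_dump(isAllowed($rules, '/events/germany/triathlon/all?page=13')); // bool(false) -> blocked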

Disallow URLs with empty parameters in robots.txt

Normally I have this URL structure:
http://example.com/team/name/16356**
But sometimes my CMS generates URLs without name:
http://example.com/team//16356**
and then it's a 404.
How do I disallow such URLs when the name segment is empty?
It would probably be possible with a regex for an empty segment here, but I don't want to mess things up with Googlebot; it's better to get this right from the beginning.
If you want to block URLs like http://example.com/team//16356**, where the number part can be different, you could use the following robots.txt:
User-agent: *
Disallow: /team//
This will block crawling of any URL whose path starts with /team//.
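As a quick sanity check (a PHP sketch of my own, with paths made up to mirror the question), /team// is a plain prefix rule, so it only catches paths whose name segment is empty:
<?php
// '/team//' is a simple prefix match; no wildcards are needed.
var_dump(strpos('/team//16356', '/team//') === 0);      // bool(true)  -> blocked
var_dump(strpos('/team/name/16356', '/team//') === 0);  // bool(false) -> still crawlable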

Disallow dynamic urls using robots.txt

I have URLs like example.com/post/alai-fm-sri-lanka-listen-online-1467/
I want to remove all URLs which contain the word "post" using robots.txt.
So which is the correct format?
Disallow: /post-*
Disallow: /?page=post
Disallow: /*page=post
(Note that the file has to be called robots.txt; I corrected it in your question.)
You only included one example URL, where "post" is the first path segment. If all your URLs look like that, the following robots.txt should work:
User-agent: *
Disallow: /post/
It would block the following URLs:
http://example.com/post/
http://example.com/post/foobar
http://example.com/post/foo/bar
…
The following URLs would still be allowed:
http://example.com/post
http://example.com/foo/post/
http://example.com/foo/bar/post
http://example.com/foo?page=post
http://example.com/foo?post=1
…
Googlebot and Bingbot both handle limited wildcarding, so this will work:
Disallow: /*post
Of course, that will also disallow any URL that contains the words "compost", "outpost", or "poster", or anything else containing the substring "post".
You could try to make it a little better. For example:
Disallow: /*/post   # any later path segment that starts with "post"
Disallow: /*?post=  # the post query parameter
Disallow: /*=post   # any parameter value that starts with "post"
Understand, though, that not all bots support wildcards, and of those that do, some are buggy. Bing and Google handle them correctly, but there's no guarantee that other bots do.
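To make that trade-off concrete, here is a small PHP check of my own contrasting the strict /post/ prefix rule with the broad /*post wildcard:
<?php
// '/post/' is a prefix rule; '/*post' behaves roughly like the regex '^/.*post'.
var_dump(strpos('/compost-guide', '/post/') === 0);          // bool(false) -> not blocked by '/post/'
var_dump((bool) preg_match('#^/.*post#', '/compost-guide')); // bool(true)  -> blocked by '/*post'
var_dump((bool) preg_match('#^/.*post#', '/post/foobar'));   // bool(true)  -> blocked by both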