robots.txt: disallow a specific page but not other pages starting with it

I want to disallow a specific page:
example.com/10
but not other pages starting with /10:
example.com/101
example.com/102
example.com/103
How can I do this?

You can use the Allow directive together with the $ end-of-URL anchor to achieve this:
User-agent: *
Allow: /10*
Disallow: /10$
Results from http://tools.seobook.com/robots-txt/analyzer/:
Url: /10
Multiple robot rules found
Robots disallowed: All robots
Url: /101
Robots allowed: All robots
Url: /102
Robots allowed: All robots
Url: /103
Robots allowed: All robots
However, older robots may interpret it incorrectly, for example by reading only the first matching line.
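If you want to sanity-check the $ rule before relying on it, a small script can mimic how a single Google-style pattern is matched: the value is a prefix match against the URL path, * matches any run of characters, and $ anchors the end of the URL. The sketch below only models that single-rule matching (not how crawlers choose between conflicting Allow and Disallow rules), and the function name ruleMatches is made up for illustration:
<?php
// Rough sketch of Google-style robots.txt pattern matching for a single rule:
// prefix match, "*" = any run of characters, "$" = end-of-URL anchor.
function ruleMatches(string $pattern, string $path): bool
{
    $anchored = substr($pattern, -1) === '$';
    if ($anchored) {
        $pattern = substr($pattern, 0, -1);   // strip the trailing "$"
    }
    // Escape regex metacharacters, then turn the escaped "*" into ".*"
    $regex = str_replace('\*', '.*', preg_quote($pattern, '#'));
    return preg_match('#^' . $regex . ($anchored ? '$' : '') . '#', $path) === 1;
}

var_dump(ruleMatches('/10$', '/10'));   // true  -> /10 is caught by Disallow: /10$
var_dump(ruleMatches('/10$', '/101'));  // false -> /101 is not
var_dump(ruleMatches('/10$', '/103'));  // false -> /103 is not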

Related

How to block fake Googlebots?

I think a fake Googlebot visited my site. Here is the log entry:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
66.249.73.72
I suspect this because it crawled some addresses that do not exist; they were never created by me at all.
The fake bot follows a pattern: it adds a specific word to the beginning of my URLs.
For instance, this page exists:
https://stackoverflow.com/user
but the bot crawled:
https://stackoverflow.com/some-word-user
https://stackoverflow.com/some-word-jobs
And here is my robots.txt:
User-agent: *
Disallow: /search?q=*
Disallow: *?replytocom
Disallow: /*add-to-cart=*
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: -----
First, you should know that Googlebot crawls non-existing addresses too, for example when trying to discover new content.
Second, I would personally rather live with fake Googlebots than risk excluding the real Googlebot by blocking its IPs. Google keeps adding new IP addresses to Googlebot. Again: don't risk it.
In my experience, Googlebot requests always come from an address that reverse-resolves to a Googlebot host name such as crawl-xx-xxx-xxx-xxx.googlebot.com.
So a possible method is to check that if the user agent includes Googlebot/2.1 AND the remote host includes googlebot.com, then it is valid; if not, it's a fake.
Here is the code:
$agent  = $_SERVER['HTTP_USER_AGENT'];
// Use the reverse-resolved host name; fall back to a lookup if the server didn't do it
$remote = isset($_SERVER['REMOTE_HOST']) ? $_SERVER['REMOTE_HOST'] : gethostbyaddr($_SERVER['REMOTE_ADDR']);
$value  = "googlebot";
$pos1   = strpos(strtolower($remote), $value); // "googlebot" in the host name?
$pos2   = strpos(strtolower($agent), $value);  // "googlebot" in the user agent?
// The user agent claims to be Googlebot but the host name doesn't match: treat it as fake
if ($pos1 === false && $pos2 !== false) {
    require_once($_SERVER['DOCUMENT_ROOT'].'/errorpage.php');
    exit();
}
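For a stricter test, Google's documentation on verifying Googlebot describes forward-confirmed reverse DNS: reverse-resolve the IP, check that the host name ends in googlebot.com or google.com, then resolve that host name forward and confirm it maps back to the same IP. Below is a rough sketch of that check; the function name isRealGooglebot is made up for illustration, and errorpage.php is the same error page used above:
<?php
// Sketch of forward-confirmed reverse DNS for Googlebot verification.
function isRealGooglebot(string $ip): bool
{
    $host = gethostbyaddr($ip);              // reverse lookup: IP -> host name
    if ($host === false || $host === $ip) {
        return false;                        // no usable PTR record
    }
    // The host name must belong to googlebot.com or google.com
    if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
        return false;
    }
    // Forward-confirm: the host name must resolve back to the same IP
    return gethostbyname($host) === $ip;
}

if (stripos($_SERVER['HTTP_USER_AGENT'], 'googlebot') !== false
        && !isRealGooglebot($_SERVER['REMOTE_ADDR'])) {
    // Claims to be Googlebot but fails verification: treat as fake
    require_once($_SERVER['DOCUMENT_ROOT'] . '/errorpage.php');
    exit();
}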

Disallow /*foo but allow /*bar?foo=foo (i.e. how to disallow an API endpoint if the query string might contain the same name?)

I want to disallow the /*foo endpoint regardless of its query string, but allow /*bar regardless of its query string.
A robots.txt like the one below would also disallow /*bar?foo=foo (because of the query string) or a URL whose higher-level path contains foo, such as /foo/bar.
User-agent: *
Disallow: /*foo
How should I set robots.txt in this case? Does putting $ at the end work in this scenario?
The "standard" robots.txt doesn't accept wildcards, so I'm talking about the ones like used by Google.

Robots.txt disallowing a particular type of URL

I want to exclude this URL from bots:
test.com/p/12345/qwerty
But allow this URL:
test.com/p/12345
Will this line work?
User-Agent: *
Disallow: */p/*/*
Thanks
According to this tutorial there is no need for the * at the beginning. It should be:
User-Agent: *
Disallow: /p/*/*
Note that this will have the side effect of blocking bots on an address like test.com/p/abc/def
In order to do the exact functionality you are asking for and nothing more (i.e. no side effects), use this:
User-Agent: *
Disallow: /p/12345/qwerty
test.com/p/12345 will be allowed by default.

Disallow URLs with empty parameters in robots.txt

Normally I have this URL structure:
http://example.com/team/name/16356**
But sometimes my CMS generates URLs without the name:
http://example.com/team//16356**
and then it’s a 404.
How can I disallow such URLs when the name segment is empty?
It would probably be possible with a regex for the empty segment, but I don't want to mess things up with Googlebot; better to get it right from the beginning.
If you want to block URLs like http://example.com/team//16356**, where the number part can be different, you could use the following robots.txt:
User-agent: *
Disallow: /team//
This will block crawling of any URL whose path starts with /team//.

Disallow dynamic URLs using robots.txt

I have URLs like example.com/post/alai-fm-sri-lanka-listen-online-1467/
I want to remove all URLs that have the word post in them using robots.txt.
So which is the correct format?
Disallow: /post-*
Disallow: /?page=post
Disallow: /*page=post
(Note that the file has to be called robots.txt; I corrected it in your question.)
You only included one example URL, where "post" is the first path segment. If all your URLs look like that, the following robots.txt should work:
User-agent: *
Disallow: /post/
It would block the following URLs:
http://example.com/post/
http://example.com/post/foobar
http://example.com/post/foo/bar
…
The following URLs would still be allowed:
http://example.com/post
http://example.com/foo/post/
http://example.com/foo/bar/post
http://example.com/foo?page=post
http://example.com/foo?post=1
…
Googlebot and Bingbot both handle limited wildcarding, so this will work:
Disallow: /*post
Of course, that will also disallow any URL containing "compost", "outpost", "poster", or anything else with the substring "post".
You could try to make it a little better. For example:
Disallow: /*/post   # any later path segment that starts with "post"
Disallow: /*?post=  # the post query parameter
Disallow: /*=post   # any query value that starts with "post"
Understand, though, that not all bots support wildcards, and of those that do, some are buggy. Bing and Google handle them correctly, but there's no guarantee that other bots do.
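To make that warning concrete: a bot that only implements the original robots.txt specification treats the Disallow value as a literal path prefix, so a rule like Disallow: /*post will in practice match nothing for it, while a wildcard-aware bot expands the * as described above. Here is a rough sketch of the two matching models (the function names are made up for illustration, and the "$" end anchor is not modeled):
<?php
// Original robots.txt: the Disallow value is a literal path prefix.
function prefixBlocked(string $rule, string $path): bool
{
    return strncmp($path, $rule, strlen($rule)) === 0;
}

// Google/Bing extension: "*" in the rule matches any run of characters.
function wildcardBlocked(string $rule, string $path): bool
{
    $regex = '#^' . str_replace('\*', '.*', preg_quote($rule, '#')) . '#';
    return preg_match($regex, $path) === 1;
}

// "Disallow: /post/" behaves the same for every bot:
var_dump(prefixBlocked('/post/', '/post/foobar'));        // true
var_dump(prefixBlocked('/post/', '/foo/post/'));          // false
// "Disallow: /*post" only works as intended on wildcard-aware bots:
var_dump(wildcardBlocked('/*post', '/foo/bar/post'));     // true
var_dump(wildcardBlocked('/*post', '/composting-guide')); // true  (the "compost" problem)
var_dump(prefixBlocked('/*post', '/foo/bar/post'));       // false (literal "/*post" prefix)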