Disallow robots for different purposes with one record - robots.txt

Can I combine the two records below into one, as shown under them, and will Googlebot or bingbot still follow my robots.txt? I have recently seen bingbot not following the second record, and I am wondering whether it might follow the rules if I combine them.
Original
User-agent:*
Disallow: /folder1/
Disallow: /folder2/
User-agent: *
Disallow: /*.png
Disallow: /*.jpg
I want to change it to this:
User-agent:*
Disallow: /folder1/
Disallow: /folder2/
Disallow: /*.png
Disallow: /*.jpg

You may only have one record with User-agent: *:
If the value is '*', the record describes the default access policy for any robot that has not matched any of the other records. It is not allowed to have multiple such records in the "/robots.txt" file.
If there is more than one such record, bots that are not matched by a more specific record might only follow the first one in the file.
So you have to use this record:
User-agent: *
Disallow: /folder1/
Disallow: /folder2/
Disallow: /*.png
Disallow: /*.jpg
Note that the * in a Disallow value has no special meaning in the original robots.txt specification, but some consumers use it as a wildcard.
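If you want to sanity-check prefix rules like these, Python's standard library ships urllib.robotparser. Note that it follows the original specification's literal prefix matching and does not expand * wildcards in Disallow values, so only the folder rules can be verified this way:

```python
import urllib.robotparser

# The combined record: a single "User-agent: *" group.
rules = """\
User-agent: *
Disallow: /folder1/
Disallow: /folder2/
Disallow: /*.png
Disallow: /*.jpg
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The folder prefixes are blocked for every bot...
print(rp.can_fetch("bingbot", "https://example.com/folder1/page.html"))  # False
print(rp.can_fetch("bingbot", "https://example.com/about.html"))         # True

# ...but this parser matches Disallow values as literal prefixes,
# so "/*.png" does NOT block "/images/pic.png" here. Wildcard-aware
# crawlers like Googlebot would block it.
print(rp.can_fetch("bingbot", "https://example.com/images/pic.png"))     # True
```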

Related

Robots.txt file to allow all root php files except one and disallow all subfolders content

I seem to be struggling with a robots.txt file in the following scenario. I would like all *.php files in the root folder to be indexed except for one (exceptions.php), and I would like all content in all subdirectories of the root folder not to be indexed.
I have tried the following, but it still allows crawling of .php files in subdirectories, even though the subdirectories themselves are disallowed:
# robots.txt
User-agent: *
Allow: /*.php
disallow: /*
disallow: /exceptions.php
Can anyone help with this?
For crawlers that interpret * in Disallow values as a wildcard (it’s not part of the original robots.txt specification, but many crawlers support it anyway), this should work:
User-agent: *
Disallow: /exceptions.php
Disallow: /*/
This disallows URLs like:
https://example.com/exceptions.php
https://example.com//
https://example.com/foo/
https://example.com/foo/bar.php
And it allows URLs like:
https://example.com/
https://example.com/foo.php
https://example.com/bar.html
For crawlers that don’t interpret * in Disallow values as a wildcard, you would have to list all subfolders (at the first level):
User-agent: *
Disallow: /exceptions.php
Disallow: /foo/
Disallow: /bar/
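The wildcard-free variant can be checked with Python's stdlib urllib.robotparser, which implements exactly this original-spec prefix matching (the folder names /foo/ and /bar/ below are placeholders for your real subdirectories):

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /exceptions.php
Disallow: /foo/
Disallow: /bar/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/index.php"))       # True: root .php stays crawlable
print(rp.can_fetch("Googlebot", "https://example.com/exceptions.php"))  # False: the one exception
print(rp.can_fetch("Googlebot", "https://example.com/foo/bar.php"))     # False: listed subfolder
```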

Need to stop indexing the URL parameters for custom build CMS

I would like for Google to ignore URLs like this:
https://www.example.com/blog/category/web-development?page=2
These links are getting indexed by Google, and I need to stop that. What should I use to keep them out?
This is my current robots.txt file:
Disallow: /cgi-bin/
Disallow: /scripts/
Disallow: /privacy
Disallow: /404.html
Disallow: /500.html
Disallow: /tweets
Disallow: /tweet/
Can I use this to disallow them?
Disallow: /blog/category/*?*
With robots.txt, you can prevent crawling, not necessarily indexing.
If you want to disallow Google from crawling URLs whose paths start with /blog/category/ and that contain a query component (e.g., ?, ?page, ?page=2, ?foo=bar&page=2, etc.), then you can use this:
Disallow: /blog/category/*?
You don’t need another * at the end because Disallow values represent the start of the URL (beginning from the path).
But note that this is not supported by all bots. According to the original robots.txt spec, the * has no special meaning. Conforming bots would interpret the above line literally (* as part of the path). If you were to follow only the rules from the original specification, you would have to list every occurrence:
Disallow: /blog/category/c1?
Disallow: /blog/category/c2?
Disallow: /blog/category/c3?
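To see how wildcard-aware crawlers evaluate a rule like Disallow: /blog/category/*?, here is a small sketch of the commonly documented matching logic (disallow_matches is my own illustrative helper, not part of any crawler's code): * matches any run of characters, a trailing $ anchors the end of the URL, and otherwise the rule matches any path it is a prefix of.

```python
import re

def disallow_matches(pattern: str, path: str) -> bool:
    """Match a path against a wildcard-style Disallow value:
    '*' matches any sequence of characters, a trailing '$' anchors
    the end of the URL, and otherwise the rule matches any path
    that begins with the pattern."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# The suggested rule blocks category URLs with a query string...
print(disallow_matches("/blog/category/*?", "/blog/category/web-development?page=2"))  # True
# ...but leaves the plain category pages alone.
print(disallow_matches("/blog/category/*?", "/blog/category/web-development"))         # False
```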

Why is googlebot blocking all my urls if the only disallow I selected in robots.txt was for iisbot?

I have had the following robots.txt for over a year, seemingly without issues:
User-Agent: *
User-Agent: iisbot
Disallow: /
Sitemap: http://iprobesolutions.com/sitemap.xml
Now I'm getting the following error from the robots.txt Tester
Why is googlebot blocking all my urls if the only disallow I selected was for iisbot ?
Consecutive User-Agent lines are added together, so the Disallow applies to User-Agent: * as well as User-Agent: iisbot.
Sitemap: http://iprobesolutions.com/sitemap.xml
User-Agent: iisbot
Disallow: /
You actually don't need the User-Agent: *.
Your robots.txt is not valid (according to the original robots.txt specification).
You can have multiple records.
Records are separated by empty lines.
Each record must have at least one User-agent line and at least one Disallow line.
The spec doesn’t define how invalid records should be treated. So user-agents might either interpret your robots.txt as having one record (ignoring the empty line), or they might interpret the first record as allowing everything (at least that would be the likely assumption).
If you want to allow all bots (except "iisbot") to crawl everything, you should use:
User-Agent: *
Disallow:
User-Agent: iisbot
Disallow: /
Alternatively, you could omit the first record, as allowing everything is the default anyway. But I’d prefer to be explicit here.
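You can verify the fixed file with Python's stdlib urllib.robotparser (the URL below is the one from the question):

```python
import urllib.robotparser

# The explicit version: allow everyone, then block iisbot.
rules = """\
User-Agent: *
Disallow:

User-Agent: iisbot
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("iisbot", "http://iprobesolutions.com/"))     # False: iisbot is blocked
print(rp.can_fetch("Googlebot", "http://iprobesolutions.com/"))  # True: everyone else may crawl
```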

Allow only one file of directory in robots.txt?

I want to allow only one file in the directory /minsc and disallow the rest of that directory.
My robots.txt currently contains this:
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/
The file that I want to allow is /minsc/menu-leaf.png
I'm afraid of breaking something, so I don't know whether I should use:
A)
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/
Allow: /minsc/menu-leaf.png
or
B)
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/*    # added "*"
Allow: /minsc/menu-leaf.png
?
Thanks and sorry for my English.
According to the robots.txt website:
To exclude all files except one
This is currently a bit awkward, as there is no "Allow" field. The
easy way is to put all files to be disallowed into a separate
directory, say "stuff", and leave the one file in the level above this
directory:
User-agent: *
Disallow: /~joe/stuff/
Alternatively you can explicitly disallow all disallowed pages:
User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
According to Wikipedia, if you are going to use the Allow directive, it should go before the Disallow for maximum compatibility:
Allow: /directory1/myfile.html
Disallow: /directory1/
Furthermore, you should put Crawl-delay last, according to Yandex:
To maintain compatibility with robots that may deviate from the
standard when processing robots.txt, the Crawl-delay directive needs
to be added to the group that starts with the User-Agent record right
after the Disallow and Allow directives.
So, in the end, your robots.txt file should look like this:
User-agent: *
Allow: /minsc/menu-leaf.png
Disallow: /minsc/
Crawl-delay: 10
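Python's stdlib urllib.robotparser understands both Allow and Crawl-delay, so you can check this result. Note that this parser applies the first matching rule in a group, which is one reason the Allow-before-Disallow ordering matters; wildcard-aware crawlers like Googlebot instead pick the most specific (longest) matching rule, which happens to give the same answer here.

```python
import urllib.robotparser

rules = """\
User-agent: *
Allow: /minsc/menu-leaf.png
Disallow: /minsc/
Crawl-delay: 10
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/minsc/menu-leaf.png"))  # True: the one allowed file
print(rp.can_fetch("Googlebot", "https://example.com/minsc/other.png"))      # False: rest of the directory
print(rp.crawl_delay("Googlebot"))                                           # 10
```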
Robots.txt is a sort of 'informal' standard that can be interpreted differently. The only interesting 'standard' is really how the major players interpret it.
I found this source saying that globbing ('*'-style wildcards) is not supported:
Note also that globbing and regular expression are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: bot", "Disallow: /tmp/*" or "Disallow: *.gif".
http://www.robotstxt.org/robotstxt.html
So according to this source you should stick with your alternative (A).

Multiple User-agents: * in robots.txt

Related question: Multiple User Agents in Robots.txt
I'm reading a robots.txt file on a certain website and it seems to be contradictory to me (but I'm not sure).
User-agent: *
Disallow: /blah
Disallow: /bleh
...
...
...several more Disallows
User-agent: *
Allow: /
I know that you can address certain robots by specifying multiple User-agents, but this file seems to be saying that all robots are disallowed from a bunch of paths yet also allowed to access everything? Or am I reading this wrong?
This robots.txt is invalid, as there must only be one record with User-agent: *. If we fix it, we have:
User-agent: *
Disallow: /blah
Disallow: /bleh
Allow: /
Allow is not part of the original robots.txt specification, so not all parsers understand it (those that don't should ignore the line).
For parsers that understand Allow, this line simply means: allow everything (else). But that is the default anyway, so this robots.txt has the same meaning:
User-agent: *
Disallow: /blah
Disallow: /bleh
Meaning: everything is allowed except those URLs whose paths start with /blah or /bleh.
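For parsers that do understand Allow, Python's stdlib urllib.robotparser shows the fixed record behaving exactly this way:

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /blah
Disallow: /bleh
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("somebot", "https://example.com/blah/page"))  # False: matches Disallow: /blah
print(rp.can_fetch("somebot", "https://example.com/other"))      # True: everything else is allowed
```

With this parser the first matching rule wins, so moving Allow: / above the Disallow lines would allow everything.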
If the Allow line came before the Disallow lines, some parsers might ignore the Disallow lines. But since Allow is not part of the original specification, this can differ from parser to parser.