Prevent indexing of images containing a given string - robots.txt

I am a photographer and I need to prevent the indexing (and thus the discovery) of my clients' images, which are displayed in a password-protected shop.
I could include a specific string, e.g. WWWWW, in the file names to mark the files I want to hide.
Does this robots.txt do the job?
User-agent: *
Disallow: /*WWWWW*
How can I test whether it does?
Thanks

User-agent: Googlebot-Image
Disallow: /*.gif$
Alternatively, you can deny access entirely with an .htaccess rule:
Deny from all

You can test your existing robots.txt file using, for example, https://en.ryte.com/free-tools/robots-txt/ or Google's own tester: https://support.google.com/webmasters/answer/6062598?hl=en
The following will disallow a specific directory:
User-agent: *
Disallow: /path/to/images/
You can also use a wildcard (*):
User-agent: *
Disallow: /*.jpg # Disallows any JPEG images
Disallow: /*/images/ # Disallows parsing of all */images/* directories
There's no need for trailing wildcards; they are ignored: /*/path/* is equivalent to /*/path/.
You don't want to maintain an extensive list of every single file to disallow, because the contents of the robots.txt file are publicly available. It is therefore good practice to prefer directory rules over individual file paths.
See https://developers.google.com/search/reference/robots_txt#url-matching-based-on-path-values for examples of paths/wildcards, and what they actually match.
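If you also want to test a wildcard rule locally, you can approximate Google-style pattern matching by translating the rule into a regular expression: * matches any sequence of characters, and a trailing $ anchors the end of the URL. This is only a sketch of the matching logic under those assumptions, not any crawler's actual implementation:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check whether a robots.txt path pattern matches a URL path.

    Approximates Google-style matching: '*' matches any character
    sequence, a trailing '$' anchors the end of the URL, and the
    pattern otherwise matches as a prefix.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn '*' back into '.*'
    regex = "^" + re.escape(pattern).replace(r"\*", ".*")
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# The question's rule: Disallow: /*WWWWW*
print(robots_pattern_matches("/*WWWWW*", "/shop/img_WWWWW_client1.jpg"))  # True
print(robots_pattern_matches("/*WWWWW*", "/shop/img_public.jpg"))         # False
# The Googlebot-Image rule: Disallow: /*.gif$
print(robots_pattern_matches("/*.gif$", "/images/photo.gif"))     # True
print(robots_pattern_matches("/*.gif$", "/images/photo.gif?v=2")) # False
```

Note that, as mentioned above, the robots.txt file itself is public, so the marker string WWWWW would be visible to anyone who reads it.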

Related

Robots.txt file to allow all root php files except one and disallow all subfolders content

I seem to be struggling with a robots.txt file in the following scenario. I would like all *.php files in the root folder to be indexed, except for one (exceptions.php), and I would like all content in all subdirectories of the root folder not to be indexed.
I have tried the following, but it still allows PHP files in subdirectories to be crawled, even though the subdirectories themselves are not indexed:
# robots.txt
User-agent: *
Allow: /*.php
disallow: /*
disallow: /exceptions.php
Can anyone help with this?
For crawlers that interpret * in Disallow values as a wildcard (it's not part of the robots.txt spec, but many crawlers support it anyway), this should work:
User-agent: *
Disallow: /exceptions.php
Disallow: /*/
This disallows URLs like:
https://example.com/exceptions.php
https://example.com//
https://example.com/foo/
https://example.com/foo/bar.php
And it allows URLs like:
https://example.com/
https://example.com/foo.php
https://example.com/bar.html
For crawlers that don't interpret * in Disallow values as a wildcard, you would have to list all subfolders (at the first level):
User-agent: *
Disallow: /exceptions.php
Disallow: /foo/
Disallow: /bar/
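Under wildcard-aware matching, the Disallow: /*/ rule behaves like the regular expression ^/.*/ applied to the URL path. A quick sketch to check the URL lists above (this models the matching only, not any crawler's implementation):

```python
import re

# Models 'Disallow: /*/' for wildcard-aware crawlers
subdir_rule = re.compile(r"^/.*/")

# Disallowed (path contains a second slash):
print(bool(subdir_rule.match("/foo/")))         # True
print(bool(subdir_rule.match("/foo/bar.php")))  # True
# Allowed (root-level files never match):
print(bool(subdir_rule.match("/foo.php")))      # False
print(bool(subdir_rule.match("/bar.html")))     # False
```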

Allow only one file of directory in robots.txt?

I want to allow only one file in the directory /minsc and disallow the rest of that directory.
My robots.txt currently contains:
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/
The file that I want to allow is /minsc/menu-leaf.png
I'm afraid of doing damage, so I don't know whether I should use:
A)
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/
Allow: /minsc/menu-leaf.png
or
B)
User-agent: *
Crawl-delay: 10
# Directories
Disallow: /minsc/*   # added "*"
Allow: /minsc/menu-leaf.png
?
Thanks and sorry for my English.
According to the robots.txt website:
To exclude all files except one
This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:
User-agent: *
Disallow: /~joe/stuff/
Alternatively you can explicitly disallow all disallowed pages:
User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
According to Wikipedia, if you are going to use the Allow directive, it should go before the Disallow for maximum compatibility:
Allow: /directory1/myfile.html
Disallow: /directory1/
Furthermore, you should put Crawl-delay last, according to Yandex:
To maintain compatibility with robots that may deviate from the standard when processing robots.txt, the Crawl-delay directive needs to be added to the group that starts with the User-Agent record, right after the Disallow and Allow directives.
So, in the end, your robots.txt file should look like this:
User-agent: *
Allow: /minsc/menu-leaf.png
Disallow: /minsc/
Crawl-delay: 10
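You can sanity-check this file with Python's built-in robots.txt parser, which applies the rules of a group in order, first match wins — the same ordering argument made above (example.com is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /minsc/menu-leaf.png",
    "Disallow: /minsc/",
    "Crawl-delay: 10",
])

# The allowed file is fetchable; the rest of /minsc/ is not
print(rp.can_fetch("*", "https://example.com/minsc/menu-leaf.png"))  # True
print(rp.can_fetch("*", "https://example.com/minsc/other.png"))      # False
print(rp.crawl_delay("*"))                                           # 10
```

Because the Allow line comes first, it wins for menu-leaf.png; every other path in /minsc/ falls through to the Disallow line.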
Robots.txt is sort of an 'informal' standard that can be interpreted differently. The only interesting 'standard' is really how the major players are interpreting it.
I found this source saying that globbing ('*'-style wildcards) is not supported:
Note also that globbing and regular expression are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: bot", "Disallow: /tmp/*" or "Disallow: *.gif".
http://www.robotstxt.org/robotstxt.html
So according to this source you should stick with your alternative (A).

Is wildcard in Robots.txt in middle of string recognized?

I need a rule in my robots.txt like:
Disallow: /article/*/
but I don't know whether this is the proper way to do it.
For example, these URLs:
/article/hello
/article/123
should still be crawlable, BUT these:
/article/hello/edit
/article/123/768&goshopping
should not be followed.
Wildcards are not part of the original robots.txt specification, but they are supported by all of the major search engines. If you just want to keep Google/Bing/Yahoo from crawling these pages, then the following should do it:
User-agent: *
Disallow: /article/*/
Older crawlers that do not support wildcards will simply ignore this line.
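To see concretely what a wildcard-aware crawler would do with Disallow: /article/*/, you can model it as the regular expression ^/article/.*/ (a sketch of the matching behavior only, not any crawler's actual code):

```python
import re

pattern = re.compile(r"^/article/.*/")  # models Disallow: /article/*/

# Still crawlable (no further slash after the article segment):
print(bool(pattern.match("/article/hello")))              # False
print(bool(pattern.match("/article/123")))                # False
# Blocked (the wildcard is matched mid-string):
print(bool(pattern.match("/article/hello/edit")))         # True
print(bool(pattern.match("/article/123/768&goshopping"))) # True
```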

robot.txt to block directory showing

A few questions:
How can you effectively block directories and their contents using robots.txt?
Is it ok to do:
User-agent: *
Disallow: /group
Disallow: /home
Do you have to put a trailing slash, for example:
User-agent: *
Disallow: /group/
Disallow: /home/
Also what is the difference between Disallow in robots.txt and adding ?
If I want google not to show specific pages and folders in a directory, what should I do?
Is it ok to do:
User-agent: * Disallow: /group Disallow: /home
You must place these on separate lines.
It is highly recommended that you put a trailing slash if you are trying to exclude the directories home and group.
I would do something like this:
User-agent: *
Disallow: /group/
Disallow: /home/
About the trailing slash, yes, you should add it according to http://www.thesitewizard.com/archive/robotstxt.shtml:
Remember to add the trailing slash ("/") if you are indicating a directory. If you simply add
User-agent: *
Disallow: /privatedata
the robots will be disallowed from accessing privatedata.html as well as privatedataandstuff.html, as well as the directory tree beginning from /privatedata/ (and so on). In other words, there is an implied wildcard character following whatever you list in the Disallow line.
If you do not want google to show specific pages or directories, add a Disallow line for each of these pages or directories.
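Python's built-in robots.txt parser uses the same implied-prefix matching, so you can observe the difference the trailing slash makes (example.com is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Without a trailing slash: the implied prefix blocks files too
rp_noslash = RobotFileParser()
rp_noslash.parse(["User-agent: *", "Disallow: /privatedata"])
print(rp_noslash.can_fetch("*", "https://example.com/privatedata.html"))       # False
print(rp_noslash.can_fetch("*", "https://example.com/privatedata/file.html"))  # False

# With a trailing slash: only the directory tree is blocked
rp_slash = RobotFileParser()
rp_slash.parse(["User-agent: *", "Disallow: /privatedata/"])
print(rp_slash.can_fetch("*", "https://example.com/privatedata.html"))       # True
print(rp_slash.can_fetch("*", "https://example.com/privatedata/file.html"))  # False
```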

Multiple User-agents: * in robots.txt

Related question: Multiple User Agents in Robots.txt
I'm reading a robots.txt file on a certain website and it seems to be contradictory to me (but I'm not sure).
User-agent: *
Disallow: /blah
Disallow: /bleh
...
...
...several more Disallows
User-agent: *
Allow: /
I know that you can exclude certain robots by specifying multiple User-agents, but this file seems to be saying that all robots are disallowed from a bunch of paths yet also allowed to access all files? Or am I reading this wrong?
This robots.txt is invalid, as there must only be one record with User-agent: *. If we fix it, we have:
User-agent: *
Disallow: /blah
Disallow: /bleh
Allow: /
Allow is not part of the original robots.txt specification, so not all parsers will understand it (those have to ignore the line).
For parsers that understand Allow, this line simply means: allow everything (else). But that is the default anyway, so this robots.txt has the same meaning:
User-agent: *
Disallow: /blah
Disallow: /bleh
Meaning: Everything is allowed except those URLs whose paths start with blah or bleh.
If the Allow line came before the Disallow lines, some parsers might ignore the Disallow lines. But as Allow is not part of the original specification, this can differ from parser to parser.
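The equivalence claimed above — that the trailing Allow: / is redundant because everything not disallowed is allowed by default — is easy to verify with Python's built-in parser (a quick sketch; real crawlers may differ in details):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /blah", "Disallow: /bleh"])

print(rp.can_fetch("*", "https://example.com/blah/page"))  # False
print(rp.can_fetch("*", "https://example.com/bleh"))       # False
# No 'Allow: /' needed -- unmatched paths are allowed by default:
print(rp.can_fetch("*", "https://example.com/other"))      # True
```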