Is a wildcard in the middle of a robots.txt string recognized?

I need a rule in my robots.txt like:
Disallow: /article/*/
but I don't know whether this is the proper way to do it.
For example, these URLs:
/article/hello
/article/123
may be crawled, but these:
/article/hello/edit
/article/123/768&goshopping
should not be.

Wildcards are not part of the original robots.txt specification, but they are supported by all of the major search engines. If you just want to keep Google/Bing/Yahoo from crawling these pages, then the following should do it:
User-agent: *
Disallow: /article/*/
Older crawlers that do not support wildcards will simply ignore this line.
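If you want to sanity-check how wildcard-aware crawlers would apply such a rule, here is a minimal Python sketch of the Google-style semantics (an assumption based on Google's documentation: * matches zero or more characters, and a rule matches any URL whose path starts with the pattern):

import re

def robots_pattern_to_regex(pattern):
    # '*' matches zero or more characters; everything else is literal.
    # (A full implementation would also treat a trailing '$' as an
    # end-of-URL anchor; this sketch omits that.)
    return re.compile("".join(".*" if ch == "*" else re.escape(ch)
                              for ch in pattern))

rule = robots_pattern_to_regex("/article/*/")
for path in ["/article/hello", "/article/123",
             "/article/hello/edit", "/article/123/768&goshopping"]:
    # re.match anchors at the start only, i.e. the rule matches a prefix
    print(path, "->", "blocked" if rule.match(path) else "allowed")
# /article/hello              -> allowed
# /article/123                -> allowed
# /article/hello/edit         -> blocked
# /article/123/768&goshopping -> blocked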

Related

Prevent indexing of images containing a given string

I am a photographer and I need to prevent the indexing (and thus the finding) of my clients' images, which are displayed in a password-protected shop.
I could include in the file names a specific string like ... WWWWW ... that would mark the files I want to hide.
Will this robots.txt do the job?
User-agent: *
Disallow: /*WWWWW*
How can I test whether it does?
For example, to block Google's image crawler from all GIF files:
User-agent: Googlebot-Image
Disallow: /*.gif$
Or you can block access entirely at the server level with an .htaccess rule:
Deny from all
You can test your existing robots.txt file with, for example, https://en.ryte.com/free-tools/robots-txt/ or Google's own tester: https://support.google.com/webmasters/answer/6062598?hl=en
The following will disallow a specific directory:
User-agent: *
Disallow: /path/to/images/
You can also use a wildcard *:
User-agent: *
Disallow: /*.jpg # Disallows any JPEG images
Disallow: /*/images/ # Disallows crawling of any */images/ directory
There's no need for trailing wildcards; they are ignored, so /*/path/* matches exactly the same URLs as /*/path/.
You don't want to make an extensive list of every single file to disallow, because the contents of the robots.txt file are publicly available. It is therefore good practice to prioritize directories over file paths.
See https://developers.google.com/search/reference/robots_txt#url-matching-based-on-path-values for examples of paths/wildcards, and what they actually match.
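If you prefer to test locally, Python's standard library can check a robots.txt as well, with one big caveat: urllib.robotparser implements the original spec's plain prefix matching and does not honour Google-style * wildcards, so wildcard rules such as Disallow: /*WWWWW still need Google's tester. A minimal sketch for the directory rule above:

from urllib.robotparser import RobotFileParser

# Prefix-only rule, which the stdlib parser handles correctly
robots_txt = """\
User-agent: *
Disallow: /path/to/images/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "https://example.com/path/to/images/client1.jpg"))  # False (blocked)
print(rp.can_fetch("*", "https://example.com/gallery/public.jpg"))          # True  (allowed)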

robots.txt: Does the wildcard match zero characters too?

I have the following example robots.txt and a question about the wildcard:
User-agent: *
Disallow: /*/admin/*
Does this rule apply to both of these pages:
http://www.example.org/admin
and http://www.example.org/es/admin
So can the wildcard stand for zero characters?
In the original robots.txt specification, * in Disallow values has no special meaning, it’s just a character like any other. So, bots following the original spec would crawl http://www.example.org/admin as well as http://www.example.org/es/admin.
Some bots support extensions of the original robots.txt spec, and a popular extension is interpreting * in Disallow values as a wildcard. However, these extensions aren't standardized anywhere; each bot could interpret it differently.
The most popular definition is arguably the one from Google Search (Google says that Bing, Yahoo, and Ask use the same definition):
* designates 0 or more instances of any valid character
Your example
When interpreting the * according to the above definition, both of your URLs would still be allowed to be crawled, though.
Your /*/admin/* requires three slashes in the path, but http://www.example.org/admin has only one, and http://www.example.org/es/admin has only two.
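You can see this by translating the pattern the way wildcard-aware bots do; in this sketch each * becomes the regex .*, and the rule is matched against the start of the path:

import re

rule = re.compile(r"/.*/admin/.*")  # '/*/admin/*' with '*' -> '.*'

for path in ["/admin", "/es/admin", "/es/admin/users"]:
    print(path, "->", "blocked" if rule.match(path) else "allowed")
# /admin          -> allowed (only one slash, the pattern needs three)
# /es/admin       -> allowed (no slash after 'admin')
# /es/admin/users -> blocked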
(Also note that the empty line between the User-agent and the Disallow lines is not allowed.)
You might want to use this:
User-agent: *
Disallow: /admin
Disallow: /*/admin
This would block at least the same URLs, but possibly more than you want to block (it depends on your URLs):
User-agent: *
Disallow: /*admin
Keep in mind that bots that follow the original robots.txt spec would ignore it, as they interpret * literally. If you want to cover both kinds of bots, you would have to add multiple records: a record with User-agent: * for the bots that follow the original spec, and a record listing all user agents (in User-agent lines) that support the wildcard, as sketched below.
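A sketch of what such a file could look like (Googlebot and Bingbot are just example wildcard-aware crawlers here; extend the lists to match the bots and language paths you actually have):

# Record for bots that follow the original spec: * in paths is literal,
# so every known prefix has to be listed explicitly
User-agent: *
Disallow: /admin
Disallow: /es/admin

# Record for wildcard-aware bots; these use this record instead of the
# User-agent: * record above
User-agent: Googlebot
User-agent: Bingbot
Disallow: /admin
Disallow: /*/admin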

Need to stop indexing of URL parameters for a custom-built CMS

I would like for Google to ignore URLs like this:
https://www.example.com/blog/category/web-development?page=2
These links are getting indexed by Google and I need to stop that. What should I use to keep them from being indexed?
This is my current robots.txt file:
Disallow: /cgi-bin/
Disallow: /scripts/
Disallow: /privacy
Disallow: /404.html
Disallow: /500.html
Disallow: /tweets
Disallow: /tweet/
Can I use this to disallow them?
Disallow: /blog/category/*?*
With robots.txt, you can prevent crawling, not necessarily indexing.
If you want to disallow Google from crawling URLs
whose paths start with /blog/category/, and
that contain a query component (e.g., ?, ?page, ?page=2, or ?foo=bar&page=2),
then you can use this:
Disallow: /blog/category/*?
You don’t need another * at the end because Disallow values represent the start of the URL (beginning from the path).
But note that this is not supported by all bots. According to the original robots.txt spec, the * has no special meaning. Conforming bots would interpret the above line literally (* as part of the path). If you were to follow only the rules from the original specification, you would have to list every occurrence:
Disallow: /blog/category/c1?
Disallow: /blog/category/c2?
Disallow: /blog/category/c3?
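If you want to convince yourself that the trailing * really is redundant, here is a small Python sketch using the common * -> .* translation (an assumption about how wildcard-aware bots match, consistent with Google's documented semantics):

import re

def to_regex(pattern):
    # '*' -> '.*', everything else literal; matched against the start
    # of the path plus query string, like a Disallow value
    return re.compile("".join(".*" if c == "*" else re.escape(c)
                              for c in pattern))

url = "/blog/category/web-development?page=2"
print(bool(to_regex("/blog/category/*?").match(url)))   # True
print(bool(to_regex("/blog/category/*?*").match(url)))  # True
# Both rules match exactly the same URLs: a Disallow value already applies
# to any URL that merely starts with it, so the trailing '*' adds nothing.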

How To Use a Wildcard in robots.txt

Is it possible to:
User-agent: *
Disallow: /apps/abc*/
in a robots.txt file, to disallow abc123, abc-xyz, etc.?
Quoting Wikipedia:
The Robot Exclusion Standard does not mention anything about the "*" character in the Disallow: statement. Some crawlers like Googlebot and Slurp recognize strings containing "*", while MSNbot and Teoma interpret it in different ways.
See the cited Wikipedia article for further details.
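For the bots that do support the wildcard (per the Google-style semantics discussed above), the rule would behave like this sketch; note that the trailing / in /apps/abc*/ means a URL without a slash after the name is not matched:

import re

rule = re.compile("/apps/abc" + ".*" + "/")  # '/apps/abc*/' with '*' -> '.*'

for path in ["/apps/abc123/", "/apps/abc-xyz/settings", "/apps/abc123"]:
    print(path, "->", "blocked" if rule.match(path) else "allowed")
# /apps/abc123/          -> blocked
# /apps/abc-xyz/settings -> blocked
# /apps/abc123           -> allowed (no trailing slash)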

Can I use robots.txt to block certain URL parameters?

Before you tell me 'what have you tried' and 'test this yourself', I would like to note that robots.txt updates awfully slowly on search engines for any site, so an answer from theory or experience would be appreciated.
For example, is it possible to allow:
http://www.example.com
And block:
http://www.example.com/?foo=foo
I'm not very sure.
Help?
According to Wikipedia, "The robots.txt patterns are matched by simple substring comparisons", and since the query string is part of the URL, you should be able to just add:
Disallow: /?foo=foo
or something fancier, like
Disallow: /*?*
to block all query strings. The asterisk is a wildcard symbol, so it matches zero or more characters of anything.
Example of a robots.txt with dynamic URLs.
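Since Disallow: /?foo=foo contains no wildcard, it relies only on plain prefix matching, which you can verify locally with Python's standard library (the fancier /*?* variant would need a wildcard-aware tester such as Google's):

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /?foo=foo
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "http://www.example.com/"))          # True  (homepage still allowed)
print(rp.can_fetch("*", "http://www.example.com/?foo=foo"))  # False (query URL blocked)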