robots.txt to remove an entire subdomain/directory

I have a subdomain forums.example.com
It is in public_html/forums
If I put the following robots.txt in public_html/forums, will it remove all the forums from the index? (I migrated the forums to a different provider and want to remove all the forum pages from Google's index.)
User-agent: *
Disallow: /

User-agent: *
Disallow: /forums/
If you leave out the folder name "forums", you will block Google and the other crawlers from all of the content on that web host, not just the forums.
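If you want to double-check what those rules actually block before deploying them, Python's built-in robots.txt parser works as a quick sanity check. A minimal sketch, assuming the file is served at https://forums.example.com/robots.txt (the thread URL is a made-up example):
from urllib.robotparser import RobotFileParser

# The rules proposed in the question, as served from the forums subdomain.
rules = "User-agent: *\nDisallow: /\n"

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Every path on the forums subdomain is now off-limits to compliant crawlers.
print(rp.can_fetch("*", "https://forums.example.com/viewtopic.php?t=1"))  # False
print(rp.can_fetch("Googlebot", "https://forums.example.com/"))           # False
Keep in mind this only confirms that crawling is blocked; as noted further down, blocking crawling does not by itself remove pages that are already indexed.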

Why does Google not index my "robots.txt"?

I am trying to allow the Googlebot webcrawler to index my site. My robots.txt initially looked like this:
User-agent: *
Disallow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
And I changed it to:
User-agent: *
Allow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
But Google is still not indexing my links.
I am trying to allow the Googlebot webcrawler to index my site.
Robots rules have nothing to do with indexing! They are ONLY about crawling. A page can be indexed even if it is forbidden to be crawled!
The Host directive is supported only by Yandex.
If you want all bots to be able to crawl your site, your robots.txt file should be placed at https://www.sitename.com/robots.txt, be available with status code 200, and contain:
User-agent: *
Disallow:
Sitemap: https://www.sitename.com/sitemap.xml
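A quick way to verify all three conditions at once is a short check with Python's standard library. This is only a sketch, and www.sitename.com is the placeholder domain from the question, so you would need to run it against your real host:
import urllib.request
from urllib.robotparser import RobotFileParser

robots_url = "https://www.sitename.com/robots.txt"

with urllib.request.urlopen(robots_url) as resp:
    print("status:", resp.status)        # should be 200
    body = resp.read().decode("utf-8")

rp = RobotFileParser()
rp.parse(body.splitlines())
print(rp.can_fetch("Googlebot", "https://www.sitename.com/"))  # should be True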
From the docs:
Robots.txt syntax can be thought of as the “language” of robots.txt files. There are five common terms you’re likely to come across in a robots file. They include:
User-agent: The specific web crawler to which you’re giving crawl instructions (usually a search engine). A list of most user agents can be found here.
Disallow: The command used to tell a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
Allow (Only applicable for Googlebot): The command to tell Googlebot it can access a page or subfolder even though its parent page or subfolder may be disallowed.
Crawl-delay: How many seconds a crawler should wait before loading and crawling page content. Note that Googlebot does not acknowledge this command, but crawl rate can be set in Google Search Console.
Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Note this command is only supported by Google, Ask, Bing, and Yahoo.
Try specifically mentioning Googlebot in your robots.txt directives, such as:
User-agent: Googlebot
Allow: /
or allow all web crawlers access to all content:
User-agent: *
Disallow:
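If you want to see how the two variants are matched against different user agents, here is a minimal sketch using Python's standard-library parser (it does not model every engine's extensions, but it handles simple files like these; the page URL is hypothetical):
from urllib.robotparser import RobotFileParser

def allowed(robots_txt, agent, url):
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

open_to_all    = "User-agent: *\nDisallow:\n"
googlebot_only = "User-agent: Googlebot\nAllow: /\n"

url = "https://www.sitename.com/some-page"   # hypothetical page

print(allowed(open_to_all, "Googlebot", url))     # True
print(allowed(googlebot_only, "Googlebot", url))  # True
print(allowed(googlebot_only, "Bingbot", url))    # True: no rule targets other agents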

Robots.txt different from the one in the directory

I can't figure out why Google reads my robots.txt file as Disallow: /.
This is what I have in my robots.txt file that is in the main root directory:
User-agent: *
Allow: /
But if I type the URL into a browser, it shows Disallow: /: http://revita.hr/robots.txt
I tried everything: I submitted the sitemap and added a meta robots "index, follow" tag to the <head>, but it's always the same.
Any ideas?
You seem to serve a different robots.txt file over HTTPS (→ Allow) than over HTTP (→ Disallow).
By the way, you don’t need to state
User-agent: *
Allow: /
because allowing everything is the default. As Allow is not part of the original robots.txt specification, you might want to use this instead:
User-agent: *
Disallow:
Also note that you should not have a blank line inside a record.
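One way to confirm that the two schemes really serve different files is to fetch both and compare them. A sketch using only Python's standard library (the URLs are taken from the question, and this obviously needs network access):
import urllib.request

# Note: urlopen follows redirects, so a permanent HTTP-to-HTTPS redirect
# would make both outputs identical.
for scheme in ("http", "https"):
    url = scheme + "://revita.hr/robots.txt"
    with urllib.request.urlopen(url) as resp:
        print("---", url, "status", resp.status, "---")
        print(resp.read().decode("utf-8", errors="replace"))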

What is the "unique" keyword in robots.txt?

I have the following code in robots.txt to allow crawling by everyone for now:
User-agent: *
Disallow:
Before I changed it, the file looked like the one below. I've been looking for details about "unique" and can't find any. Has anyone seen this before, and what is "unique" doing exactly?
User-agent: *
Disallow: /unique/
It's not a keyword; it's a directory on your server that shouldn't be visited by web crawlers.
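In other words, the old file only kept crawlers out of that one directory. A minimal check with Python's standard-library parser (the file names below are made up for illustration):
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /unique/\n".splitlines())

print(rp.can_fetch("*", "/unique/private-report.html"))  # False: inside the blocked directory
print(rp.can_fetch("*", "/index.html"))                  # True: everything else stays crawlable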

Remove multiple URLs of the same type from Google Webmaster Tools

I accidentally left some URLs of the form www.example.com/abc/?id=1, where the value of id can vary from 1 to 200. I don't want these to appear in search, so I am using the Remove URLs feature of Google Webmaster Tools. How can I remove all these URLs in one shot? I tried www.example.com/abc/?id=* but that didn't work!
Just block them using robots.txt, e.g.:
User-agent: *
Disallow: /junk.html
Disallow: /foo.html
Disallow: /bar.html
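For the URLs in the question specifically, Google matches Disallow rules as a prefix against the path plus query string, so a single rule ending just before the variable id value should cover all 200 URLs. A sketch with Python's standard-library parser (which also matches by prefix), assuming the /abc/?id= pattern from the question:
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /abc/?id=\n".splitlines())

print(rp.can_fetch("*", "http://www.example.com/abc/?id=1"))    # False
print(rp.can_fetch("*", "http://www.example.com/abc/?id=200"))  # False
print(rp.can_fetch("*", "http://www.example.com/abc/"))         # True: the listing itself stays crawlable
Combined with the Remove URLs request you are already using, that one rule covers the whole id range.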

"URL blocked by robots.txt" message in Google Webmaster Tools

I have a WordPress site on the root domain. Now I've added a forum in a subfolder, mydomain/forum,
which generates a sitemap at mydomain/forum/sitemap_index.xml.
After submitting that sitemap to Google, it seems Google can't access the sub-sitemaps and reports "URL blocked by robots.txt" - Value: mydomain/forum/sitemap-forums.xml?page=1 --- Value: mydomain/forum/sitemap-index.xml?page=1.
This is my robots.txt:
User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /category/*/*
Disallow: */trackback
Disallow: */feed
Disallow: */comments
Disallow: /*?*
Disallow: /*?
Allow: /wp-content/uploads
# Google Image
User-agent: Googlebot-Image
Disallow:
Allow: /*
Sitemap: mydomain/sitemap_index.xml
Sitemap: mydomain/forum/sitemap_index.xml
What should I add to robots.txt? Any help would be greatly appreciated.
Thanks in advance
Just to clarify, I'm assuming 'mydomain' in your example is a stand-in for the scheme plus fully qualified domain name, correct? (e.g. "http://www.whatever.com", not "whatever.com" or "www.whatever.com") I figure this must be the case because you have it in the Google error message in the same format.
The error message suggests that Google is getting the URL from somewhere other than your robots.txt file. The robots.txt file lists the sitemap URL as:
mydomain/forum/sitemap_index.xml
but the error message shows that Google is trying to load the URL:
mydomain/forum/sitemap-index.xml?page=1
This second URL is getting blocked, because your robots.txt file blocks any URL that contains a question mark:
Disallow: /*?*
Disallow: /*?
(Incidentally, these two lines do exactly the same thing; you can safely delete the first one.) Google should still be able to read the sitemap file using the simpler URL, however, so the pages will probably still be crawled. If you really want to get rid of the error message, you could always add:
Allow: /forum/sitemap-index.xml?page=1
This will override the disallows for just the sitemap URL. (This will work on Google at least - YMMV for any other search engines)
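To see why that Allow line wins for the sitemap URL while other query-string URLs stay blocked, here is a small illustration of Google-style matching; it is not Google's actual code, just a sketch of the documented behaviour: '*' matches any run of characters, '$' anchors the end, and the longest matching rule wins, with Allow winning ties.
import re

def to_regex(pattern):
    # Translate a Google-style robots.txt pattern into a regular expression.
    parts = [".*" if ch == "*" else "$" if ch == "$" else re.escape(ch) for ch in pattern]
    return re.compile("^" + "".join(parts))

def allowed(rules, path):
    # rules: list of (directive, pattern); path includes the query string.
    matches = [(len(pattern), directive == "allow")
               for directive, pattern in rules
               if to_regex(pattern).match(path)]
    if not matches:
        return True                  # nothing matches: crawling is allowed
    length, is_allow = max(matches)  # longest pattern wins; Allow wins ties
    return is_allow

rules = [
    ("disallow", "/*?*"),
    ("disallow", "/*?"),
    ("allow", "/forum/sitemap-index.xml?page=1"),
]

print(allowed(rules, "/forum/sitemap-index.xml?page=1"))  # True: the Allow pattern is longest
print(allowed(rules, "/forum/viewtopic.php?f=1"))         # False: only the Disallow rules match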