A few days ago, I submitted my sitemap to Google Webmaster Tools, but not a single page has been indexed.
Here's my sitemap: ghost.asepmaulanaismail.com/sitemap.xml
Sitemap Result: http://i.imgur.com/eHaWUy3.png
My robots.txt:
User-agent: *
Sitemap: http://ghost.asepmaulanaismail.com/sitemap.xml
Disallow: /ghost/
Am I missing something?
I am doing web scraping using Selenium.
What does this robots.txt content mean?
User-Agent: *
Disallow: /go/
Disallow: /launch-announcement/
Can I run web scraping on all folders except /go/ and /launch-announcement/?
What is a robots.txt file?
Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as “follow” or “nofollow”).
In practice, robots.txt files indicate whether certain user agents (web-crawling software) can or cannot crawl parts of a website. These crawl instructions are specified by “disallowing” or “allowing” the behavior of certain (or all) user agents.
The Disallow: directive tells the robot that it should not visit the mentioned pages on the site.
Can I run web scraping on all folders except /go/ and /launch-announcement/?
Yes, you can scrape all the other pages; only these two paths are off-limits.
According to the basic robots.txt guide, the rule
User-Agent: *
Disallow: /go/
Disallow: /launch-announcement/
means crawling /go/ and /launch-announcement/ (and their subdirectories) is disallowed for all user agents.
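If you want to check this programmatically before scraping, here is a minimal sketch using Python's urllib.robotparser (the example paths below are made up):

import urllib.robotparser

# The rules quoted above, parsed locally instead of fetched from the site.
robots_txt = """\
User-Agent: *
Disallow: /go/
Disallow: /launch-announcement/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Everything outside /go/ and /launch-announcement/ is allowed for any user agent.
for path in ("/", "/blog/some-post", "/go/redirect", "/launch-announcement/2016"):
    print(path, "->", "allowed" if rp.can_fetch("*", path) else "disallowed")
# /                          -> allowed
# /blog/some-post            -> allowed
# /go/redirect               -> disallowed
# /launch-announcement/2016  -> disallowed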
I am trying to allow the Googlebot webcrawler to index my site. My robots.txt initially looked like this:
User-agent: *
Disallow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
And I changed it to:
User-agent: *
Allow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
But Google is still not indexing my links.
I am trying to allow the Googlebot webcrawler to index my site.
Robots rules have nothing to do with indexing! They are ONLY about the ability to crawl. A page can be indexed even if it is forbidden to be crawled!
The Host directive is supported only by Yandex.
If you want all bots to be able to crawl your site, your robots.txt file should be located at https://www.sitename.com/robots.txt, be served with status code 200, and contain:
User-agent: *
Disallow:
Sitemap: https://www.sitename.com/sitemap.xml
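If you want to verify those points from code, here is a minimal sketch using Python's standard library (www.sitename.com is the placeholder domain from the question; note that urlopen raises an error instead of returning on non-200 responses):

import urllib.request
import urllib.robotparser

robots_url = "https://www.sitename.com/robots.txt"

# 1. The file must be reachable and answer with status code 200.
with urllib.request.urlopen(robots_url) as response:
    print("status:", response.status)  # expect 200

# 2. The rules must not block the pages you want crawled.
rp = urllib.robotparser.RobotFileParser(robots_url)
rp.read()
print("Googlebot may fetch /:", rp.can_fetch("Googlebot", "https://www.sitename.com/"))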
From the docs:
Robots.txt syntax can be thought of as the “language” of robots.txt files. There are five common terms you’re likely to come across in a robots file. They include:
User-agent: The specific web crawler to which you’re giving crawl instructions (usually a search engine). A list of most user agents can be found here.
Disallow: The command used to tell a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
Allow (Only applicable for Googlebot): The command to tell Googlebot it can access a page or subfolder even though its parent page or subfolder may be disallowed.
Crawl-delay: How many seconds a crawler should wait before loading and crawling page content. Note that Googlebot does not acknowledge this command, but crawl rate can be set in Google Search Console.
Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Note this command is only supported by Google, Ask, Bing, and Yahoo.
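These directives can also be read programmatically; here is a small sketch with Python's urllib.robotparser (the sample robots.txt is made up for illustration, and site_maps() needs Python 3.8+):

import urllib.robotparser

sample = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
Sitemap: https://www.example.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(sample.splitlines())

print(rp.can_fetch("*", "/private/page"))  # False - Disallow applies
print(rp.can_fetch("*", "/public/page"))   # True  - not disallowed
print(rp.crawl_delay("*"))                 # 10
print(rp.site_maps())                      # ['https://www.example.com/sitemap.xml']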
Try specifically mentioning Googlebot in your robots.txt directives, such as:
User-agent: Googlebot
Allow: /
or allow all web crawlers access to all content
User-agent: *
Disallow:
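The practical difference between the original Disallow: / and an empty Disallow: is easy to check locally; here is a minimal sketch with Python's urllib.robotparser (the URL is the placeholder from the question):

import urllib.robotparser

def may_fetch(robots_txt, agent, url):
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

blocking = "User-agent: *\nDisallow: /\n"  # the original file: blocks all crawling
open_all = "User-agent: *\nDisallow:\n"    # empty Disallow: nothing is blocked

print(may_fetch(blocking, "Googlebot", "https://www.sitename.com/"))  # False
print(may_fetch(open_all, "Googlebot", "https://www.sitename.com/"))  # True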
In Google Webmaster Tools, when I use 'Fetch as Google', it tells me there are 2 resources blocked by robots.txt:
https://dash.reviews.co.uk/[cut]
https://googleads.g.doubleclick.net/[cut]
But I cannot see how these are blocked in my robots.txt, which contains the following:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /category/
Disallow: /tag/
Disallow: /tools/
Any clues?
You don't have to worry about those resources being blocked, because they are on domains that you don't control and are being blocked by their own robots.txt files.
Google Webmaster Tools is showing you that, for the page you had it fetch, it can't see all the resources, which is fairly common. Google and many large sites block many of their own resources via robots.txt. (DoubleClick is a Google-owned property.)
As long as you can see the entirety of your content when you "fetch and render" you're in good shape.
I want to allow the Google robot:
1) to see only the main page
2) to show a description in the search results for the main page
I have the following code, but it seems that it doesn't work:
User-agent: *
Disallow: /feed
Disallow: /site/terms-of-service
Disallow: /site/rules
Disallow: /site/privacy-policy
Allow: /$
Am I missing something, or do I just need to wait for the Google robot to visit my site?
Or maybe some action is required from the Google Webmaster panel?
Thanks in advance!
Your robots.txt should work (and yes, it takes time), but you might want to make the following changes:
It seems you want to target only Google’s bot, so you should use User-agent: Googlebot instead of User-agent: * (which targets all bots that don’t have a specific record in your robots.txt).
It seems that you want to disallow crawling of all pages except the home page, so there is no need to specify a few specific path beginnings in Disallow.
So it could look like this:
User-agent: Googlebot
Disallow: /
Allow: /$
Google’s bot may only crawl your home page, nothing else. All other bots may crawl everything.
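To see why this works, here is a rough sketch of Google's documented rule matching: the most specific (longest) matching rule wins, "*" matches any characters, and "$" anchors the end of the URL path. It is hand-rolled purely for illustration (the standard library's robotparser does simple prefix matching and may not honor the $ wildcard):

import re

# The Googlebot record suggested above.
rules = [("allow", "/$"), ("disallow", "/")]

def rule_to_regex(path):
    # "*" -> any characters, trailing "$" -> end of path.
    pattern = re.escape(path).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile(pattern)

def allowed(url_path):
    matches = [(len(path), verdict) for verdict, path in rules
               if rule_to_regex(path).match(url_path)]
    if not matches:
        return True  # no rule matches: crawling is allowed by default
    # Longest (most specific) rule wins; ties go to "allow".
    _, verdict = max(matches, key=lambda m: (m[0], m[1] == "allow"))
    return verdict == "allow"

for path in ("/", "/feed", "/site/rules"):
    print(path, "->", "crawlable" if allowed(path) else "blocked")
# /            -> crawlable  (Allow: /$ is more specific than Disallow: /)
# /feed        -> blocked
# /site/rules  -> blocked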
User-agent: *
Sitemap: https://somedomain.com/sitemap.xml
Disallow: /
Allow: /sitemap.xml
Allow: /some-page
Allow: /some-other-page
After submitting the sitemap manually via Google Webmaster Tools, it says that it can't read the allowed pages because they are blocked by robots.txt.
How should I modify robots.txt to allow them to be indexed, while leaving the rest of the portal's pages non-indexed?
It’s probably just a matter of time until Google recognizes the new/updated robots.txt.
You can "ask Google to more quickly crawl and index a new robots.txt file for your site" in the Google Webmaster Tools: Submit your updated robots.txt to Google.
Side note: As the Sitemap field does not belong to a single record (as the protocol defines: "independent of the user-agent line"), you might want to structure your robots.txt like this:
User-agent: *
Disallow: /
Allow: /sitemap.xml
Allow: /some-page
Allow: /some-other-page
Sitemap: https://somedomain.com/sitemap.xml