I want to allow the indexing of a PDF page in a Magento 2 directory. I navigate to Content -> Configuration -> Edit (first row).
Under Search Engine Robots -> Edit custom instruction of robots.txt File, I have the following:
User-agent: *
Allow: /
Sitemap: example.com/sitemap.xml
If the PDF's name is 2018-document.pdf, how can I add it to the above so that it is indexed along with the sitemap?
You don't need to (and can't) add a PDF reference to robots.txt. You should include a link to it in your sitemap.xml instead.
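For example, a minimal sitemap entry for the PDF might look like this (the /media/ path is an assumption; use whatever URL the file is actually served from):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/media/2018-document.pdf</loc>
  </url>
</urlset>

Note that Magento generates its sitemap.xml automatically, so you may need to list the PDF in a separate, hand-maintained sitemap and reference both sitemap files from robots.txt.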
I am trying to do web scraping using Selenium.
What does this robots.txt content mean?
User-Agent: *
Disallow: /go/
Disallow: /launch-announcement/
Can I run web scraping in all folders except /go/ and /launch-announcement/?
What is a robots.txt file?
Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as “follow” or “nofollow”).
In practice, robots.txt files indicate whether certain user agents (web-crawling software) can or cannot crawl parts of a website. These crawl instructions are specified by “disallowing” or “allowing” the behavior of certain (or all) user agents.
The Disallow directive tells the robot that it should not visit the mentioned path on the site.
Can I run web scraping in all folders except go and launch-announcement?
Yes, you can scrape all the other pages except these two paths.
According to the basic robots.txt guide, the rule
User-Agent: *
Disallow: /go/
Disallow: /launch-announcement/
means crawling /go/ and /launch-announcement/ (and their subdirectories) is disallowed for all user agents.
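If you want your scraper to respect these rules automatically, Python's standard library can parse the file for you. A minimal sketch, assuming the rules above are served at https://example.com/robots.txt (the checked paths are made up for illustration):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

print(rp.can_fetch("*", "https://example.com/some-article"))          # True
print(rp.can_fetch("*", "https://example.com/go/partner"))            # False
print(rp.can_fetch("*", "https://example.com/launch-announcement/"))  # False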
I have a subdomain, e.g. blog.example.com, and I don't want it to be indexed by Google or any other search engine. I put my robots.txt file in the 'blog' folder on the server with the following configuration:
User-agent: *
Disallow: /
Will this be enough to keep it from being indexed by Google?
A few days ago a site:blog.example.com search showed 931 links, but now it shows 1320 pages. I am wondering, if my robots.txt file is correct, why is Google still indexing my domain?
If I am doing anything wrong, please correct me.
Rahul,
Not sure if your robots.txt is verbatim, but generally the directives are on TWO lines:
User-agent: *
Disallow: /
This file must be accessible from http://blog.example.com/robots.txt - if it is not accessible from that URL, the search engine spider will not find it.
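A quick way to verify that (a minimal sketch in Python, using the subdomain from the question):

import urllib.request

# The spider fetches exactly this URL; if this request fails, so will the crawler's.
with urllib.request.urlopen("http://blog.example.com/robots.txt") as resp:
    print(resp.status)                  # expect 200
    print(resp.read().decode()[:200])   # the first lines of the served file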
If you have pages that have already been indexed by Google, you can also try using Google Webmaster Tools to manually remove pages from the index.
This question is actually about how to prevent indexing of a subdomain, and here your robots.txt is actually preventing your site from being noindexed: a crawler that is blocked can never see a noindex directive.
Don’t use a robots.txt file as a means to hide your web pages from Google search results.
(From “Introduction to robots.txt: What is a robots.txt file used for?”, Google Search Central documentation)
For the noindex directive to be effective, the page or resource must not be blocked by a robots.txt file, and it has to be otherwise accessible to the crawler. If the page is blocked by a robots.txt file or the crawler can’t access the page, the crawler will never see the noindex directive, and the page can still appear in search results, for example if other pages link to it.
(From “Block Search indexing with noindex”, Google Search Central documentation)
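For reference, the noindex rule the documentation refers to is set either with a meta tag in the page's HTML head:

<meta name="robots" content="noindex">

or, for non-HTML resources, with an HTTP response header:

X-Robots-Tag: noindex

Either form only works if crawlers are allowed to fetch the page, which is exactly why the Disallow: / rule above defeats it.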
I would like to know if it is possible to block all robots from my site. I am running into some trouble because I redirect exsample.com to www.exsample.com. The robots.txt checker tools say I don't have a robots.txt file on exsample.com but do have one on www.exsample.com.
Hope someone can help me out :)
Just make a text file named robots.txt and write the following in it:
User-agent: *
Disallow: /
and put it in your www or public_html folder.
This asks all search engines to stay away from all content of the website. Not every search engine will obey this protocol, but the most important ones will read it and do as you asked.
Robots.txt works per host.
So if you want to block URLs on http://www.example.com, the robots.txt must be accessible at http://www.example.com/robots.txt.
Note that the subdomain matters, so you can’t block URLs on http://example.com with a robots.txt only available on http://www.example.com/robots.txt.
I am trying to disallow a specific page and its parameters along with a parameter on the entire site. Below I have the exact examples.
We now have a page that redirects to and tracks external URLs. Any external URL we want to track will be linked like /redirect?u=http://example.com. We do not want to add rel="nofollow" to every link.
Last but not least (our biggest SEO and indexing issue), every single page gets an auto-generated URL to disable or enable the mobile version. So it can appear on any page, like /?mobileVersion=off (or on) or /accounts?login_to=%2Fdashboard&mobileVersion=off.
Basically, the easy way to disallow the two parameters would be to disallow mobileVersion and u on any page. (u is the parameter needed for the redirect and is only valid on /redirect.)
My current robots.txt config:
User-Agent: *
Disallow: /redirect
Disallow: / *?*mobileVersion=off
If you want to see our full robots.txt files its located at http://spicethymeinc.com/robots.txt.
You could change
Disallow: / *?*mobileVersion=off
to
Disallow: /*mobileVersion=off
though your current rule looks like it should work too.
I'm going off the wildcard section and examples on this page:
http://tools.seobook.com/robots-txt/
Edit: I have tested with Googlebot and Googlebot-Mobile. They are blocked by both your current robots.txt and my suggested change. Google Webmaster Tools has a handy robots.txt checker you can use to test.
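If you want to sanity-check the wildcard semantics yourself (a * matches any run of characters, a trailing $ anchors the end of the URL path), here is a minimal sketch; the rule_matches helper is mine, not part of any library:

import re

def rule_matches(pattern, path):
    # Translate a robots.txt pattern into a regex: '*' becomes '.*',
    # a trailing '$' stays an end anchor, everything else is literal.
    regex = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, path) is not None

# Both mobile-toggle URLs from the question match the suggested rule:
print(rule_matches("/*mobileVersion=off", "/?mobileVersion=off"))  # True
print(rule_matches("/*mobileVersion=off", "/accounts?login_to=%2Fdashboard&mobileVersion=off"))  # True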
I have a website (e.g. www.examplesite.com), and I am creating another site as a separate, stand-alone site in IIS. This second site's URL will make it look like it's part of my main site: www.examplesite.com/anothersite. This is accomplished by creating a virtual directory under my main site that points to the second site.
I am allowing my main site (www.examplesite.com) to be indexed in search engines, but I do not want my second, virtual directory site to be seen by search engines. Can I allow my second site to have its own robots.txt file, and disallow all pages for that site there? Or do I need to modify my main site's robots.txt file and tell it to disallow the virtual directory?
You can't have a separate robots.txt for directories. Only a "host" can have its own robots.txt: example.com, www.example.com, sub.example.com, sub.sub.example.com, …
So if you want to set rules for www.example.com/anothersite, you have to use the robots.txt at www.example.com/robots.txt.
If you want to block all pages of the sub-site, simply add:
User-agent: *
Disallow: /anothersite
This will block all URL paths that start with "/anothersite". E.g., these links would all be blocked:
www.example.com/anothersite
www.example.com/anothersite.html
www.example.com/anothersitefoobar
www.example.com/anothersite/foobar
www.example.com/anothersite/foo/bar/
…
Note: If your robots.txt already contains User-agent: *, you'd have to add the Disallow line to that existing block instead of adding a new one (bots stop reading the robots.txt as soon as they find a block that matches them).
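For example, assuming your existing file already disallows a (hypothetical) /private/ path, the merged block would look like this:

User-agent: *
Disallow: /private/
Disallow: /anothersite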