Stop spiders and malicious bots from crawling my site - robots.txt

I have created a robots.txt file, and in it I have put the following:
User-agent: *
Disallow: /
I have uploaded this to my root folder.
If the "spider" does not check robots.txt then it is likely malicious, in which case would spoof the user-agent.
Is there away to stop this? or even make it harder for them?

Related

Why does Google not index my "robots.txt"?

I am trying to allow the Googlebot webcrawler to index my site. My robots.txt initially looked like this:
User-agent: *
Disallow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
And I changed it to:
User-agent: *
Allow: /
Host: www.sitename.com
Sitemap: https://www.sitename.com/sitemap.xml
But Google is still not indexing my links.
"I am trying to allow the Googlebot webcrawler to index my site."
Robots.txt rules have nothing to do with indexing! They are only about whether a page may be crawled. A page can be indexed even if it is forbidden to be crawled; conversely, blocking crawling does not remove a page from the index (keeping a page out of the index is done with a noindex robots meta tag or X-Robots-Tag header, not with robots.txt).
The Host directive is supported only by Yandex.
If you want all bots to be able to crawl your site, your robots.txt file should be located at https://www.sitename.com/robots.txt, be served with a 200 status code, and contain:
User-agent: *
Disallow:
Sitemap: https://www.sitename.com/sitemap.xml
From the docs:
Robots.txt syntax can be thought of as the “language” of robots.txt files. There are five common terms you’re likely to come across in a robots file. They include:
User-agent: The specific web crawler to which you’re giving crawl instructions (usually a search engine). A list of most user agents can be found here.
Disallow: The command used to tell a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
Allow (Only applicable for Googlebot): The command to tell Googlebot it can access a page or subfolder even though its parent page or subfolder may be disallowed.
Crawl-delay: How many seconds a crawler should wait before loading and crawling page content. Note that Googlebot does not acknowledge this command, but crawl rate can be set in Google Search Console.
Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Note this command is only supported by Google, Ask, Bing, and Yahoo.
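To make those terms concrete, here is a purely illustrative file that uses all five of them (the paths, delay value and sitemap URL below are placeholders, not taken from any site mentioned above):
User-agent: *
Disallow: /private/
Crawl-delay: 10

User-agent: Googlebot
Disallow: /private/
Allow: /private/public-report.html

Sitemap: https://www.example.com/sitemap.xml
Here every crawler is asked to stay out of /private/ and, if it honours Crawl-delay, to wait 10 seconds between requests; Googlebot, which follows only its own group, is kept out of /private/ except for the one explicitly allowed page.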
Try specifically mentioning Googlebot in your robots.txt directives, such as:
User-agent: Googlebot
Allow: /
or allow all web crawlers access to all content:
User-agent: *
Disallow:

robots.txt - Disallow folder but allow files within folder

I seem to have a conflict between my sitemap.xml and my robots.txt
All the images on my site are stored in the folder /pubstore
When Google crawls that folder it finds nothing, because I am not including a listing of the files in that folder.
This in turn generates hundreds of 404 errors in Google Search Console.
What I decided to do is block Google from crawling the folder by adding:
Disallow: /pubstore/
What now happens is that files within that folder, or in a sub-directory of that folder, are blocked for Google, and thus Google is not indexing my images.
So an example scenario,
I have a page that uses the image /pubstore/12345/image.jpg
Google doesn't fetch it because /pubstore is blocked.
My end result is that I want the actual files to be crawlable but not the folder or its subdirectories.
Allow:
/pubstore/file.jpg
/pubstore/1234/file.jpg
/pubstore/1234/543/file.jpg
/pubstore/1234/543/132/file.jpg
Disallow:
/pubstore/
/pubstore/1234/
/pubstore/1234/543/
/pubstore/1234/543/132/
How can this be achieved?
If you don’t link to /pubstore/ and /pubstore/folder/ on your site, there is typically no reason to care about 404s for them. It’s the correct response for such URLs (as there is no content).
If you still want to use robots.txt to prevent these folder URLs from being crawled while keeping the files crawlable, you have to use Allow, which is not part of the original robots.txt specification but is supported by Google.
For example:
User-agent: Googlebot
Disallow: /pubstore/
Allow: /pubstore/*.jpg$
Allow: /pubstore/*.JPG$
Or in case you want to allow many different file types, maybe just:
User-agent: Googlebot
Disallow: /pubstore/
Allow: /pubstore/*.
This would allow all URLs whose path starts with /pubstore/, followed by any string, followed by a ., followed by any string.
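To make the effect concrete, here is how a few hypothetical URLs would be treated under that second example (Google generally lets the more specific, i.e. longer, matching rule win):
/pubstore/1234/image.jpg  - crawlable, because Allow: /pubstore/*. is the longer match
/pubstore/1234/           - blocked, only Disallow: /pubstore/ matches
/pubstore/                - blocked, only Disallow: /pubstore/ matches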

Robots.txt: Allow everything but the root directory

I have a site that is meant to have http://domain.com/blog as the root directory, and any traffic to http://domain.com is redirected to http://domain.com/blog.
This causes a problem because when I go to Google and do site:domain.com, I see the root URL with the title of one of the first articles on the page. How can I block the root from being crawled, so it does not show up in search?
In Webmaster Tools I added the site as http://domain.com, but I only use Fetch as Google on the /blog directory and other static pages. Is that correct?
I usually know how to do this but this time the site has a sub-directory as the intended root so it's a bit different.
Can someone verify if this will do what I am trying to achieve?
User-agent: *
Allow: /$
Disallow: /
Robots.txt does NOT block a crawler from crawling certain webpages. Robots.txt is simply a text file with a set of guidelines that you ask the crawler to follow; it does not at any time block a crawler. If you want to block a certain webpage from being crawled/visited, you will have to block all access to that page, which includes other users that are not crawlers. But since you already have the redirect in place, I see no issue.
Also, the $ is not part of a unified standard, and neither is Allow (technically). Try to make the file target specific bots: Google and Bing recognise the Allow keyword, but many other bots do not.
Also, your proposed robots.txt says: do not crawl any page except the root.
I recommend this as your robots.txt
User-agent: *
Disallow: /
User-agent: googlebot
Disallow: /$
This tells all bots other than Google not to crawl your site, and it tells the Google crawler not to crawl the root URL, while everything else is allowed.
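To illustrate, assuming the bot supports the $ end-of-URL anchor (as Google does), the recommended file works out like this for a few example URLs:
http://domain.com/          - blocked for Googlebot (matches /$), blocked for all other bots
http://domain.com/blog/     - allowed for Googlebot, blocked for all other bots
http://domain.com/blog/post - allowed for Googlebot, blocked for all other bots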

robots.txt disallow all with crawl-delay

I would like to get information from a certain site, and checked to see if I were allowed to crawl it. The robots.txt file had considerations for 15 different user agents and then for everyone else. My confusion comes from the everyone else statement (which would include me). It was
User-agent: *
Crawl-delay: 5
Disallow: /
Disallow: /sbe_2020/pdfs/
Disallow: /sbe/sbe_2020/2020_pdfs
Disallow: /newawardsearch/
Disallow: /ExportResultServlet*
If I read this correctly, the site is asking that no unauthorized user-agents crawl it. However, the fact that they included a Crawl-delay seems odd. If I'm not allowed to crawl it, why would there even be a crawl-delay consideration? And why would they need to include any specific directories at all? Or perhaps I've read the "Disallow: /" incorrectly?
Yes, this record would mean the same if it were reduced to this:
User-agent: *
Disallow: /
A bot matched by this record is not allowed to crawl anything on this host (having an unneeded Crawl-delay doesn’t change this).
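As a side note on why such files single out specific user agents at all: a crawler follows only the group whose User-agent line matches it most specifically, so a hypothetical record like the one below would shut out every bot except Googlebot, which would only be barred from /newawardsearch/:
User-agent: Googlebot
Disallow: /newawardsearch/

User-agent: *
Disallow: /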

Stopping Google's crawl of my site

Google has started crawling my site, but from a temporary domain (beta.mydomain instead of just mydomain), and I only want it to crawl some of my pages. Therefore, I want to stop the current crawl and only let Google crawl the pages I specify in a sitemap. How can I do that? (I know how to add a sitemap, but how can I stop the current crawling and request that they crawl just the sitemap?)
Update: If I kill the subdomain beta.mydomain, will that be "fine" by them, or will they continue to go over all the killed pages and "not like" them? Can I specify that in each page's header?
Create a single text file called 'robots.txt' in the root folder for your site. Inside...
User-agent: *
Disallow: /thisfolder/
Disallow: /foo.html
Disallow: /andthisfoldertoo/
Disallow: /andthisfile.html
I use this for project files. In fact, as I write this I think I'll change the way I work on projects and always put them in a sub-directory called /projects/project1/ so one line will do...
Disallow: /projects/
AND I also add a line for my image files. I don't like my images all over the web...
Disallow: /imgs/
You could start with a robots.txt file.
See Google's info here.
I presume you have already looked at Webmaster Tools and sitemaps, from what you say? Do be aware that while a sitemap will help tell Google WHAT to crawl, it won't work very well for telling them what NOT to crawl.
For that you will want to use the robots.txt file to block certain pages / folders.
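Because robots.txt is evaluated per host, one option for the temporary domain specifically (a sketch, assuming you can serve a separate file at beta.mydomain/robots.txt) is to block that subdomain outright while leaving the main domain's robots.txt permissive:
User-agent: *
Disallow: /
The robots.txt on mydomain itself would stay as it is, so the pages you actually want crawled remain reachable.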
Use a robots.txt, see this site.