Google AdSense is crawling this page, http://www.finewallpaperss.com/doubleclick, which does not exist on my site.
The following is a URL that Google AdSense has requested many times:
http://www.finewallpaperss.com/doubleclick/DARTIframe.html?gtVersion=200_26&mediaserver=http%3A%2F%2Fs0.2mdn.net%2F879366&xpc=%7B%22cn%22%3A%22peerIframe1377068657642%22%2C%22tp%22%3Anull%2C%22osh%22%3Anull%2C%22pru%22%3A%22http%3A%2F%2Fwww.finewallpaperss.com%2Fdoubleclick%2FDARTIframe.html%3FgtVersion%3Drelay_200_26%26mediaserver%3Dhttp%3A%2F%2Fs0.2mdn.net%2F879366%22%2C%22ppu%22%3A%22http%3A%2F%2Fgoogleads.g.doubleclick.net%2Frobots.txt%22%2C%22lpu%22%3A%22http%3A%2F%2Fwww.finewallpaperss.com%2Frobots.txt%22%7D
Is there any solution for this?
To fix the DoubleClick DARTIframe issue, you will need to create a new folder named doubleclick in the root directory of your website (for example, the one we have here is http://techie-buzz.com/doubleclick/) and then upload the DARTIframe.html file that Google provides into that directory.
https://support.google.com/richmedia/answer/156583
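If it helps to visualize it, the goal is simply that the URL Google keeps requesting stops returning a 404. Assuming a typical Apache-style document root (the path below is only an illustration), the layout would look something like this:

/var/www/html/                  <- web root of www.finewallpaperss.com
    doubleclick/
        DARTIframe.html         <- the file provided on the Google support page above

so that http://www.finewallpaperss.com/doubleclick/DARTIframe.html returns the page instead of a 404.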
Related
I recently migrated my static website to Google Cloud: www.vizitdata.com. Any page that is in the root folder renders. However, I created a subdirectory called "Cubs", and none of the HTML pages in that directory render; instead I am redirected to GoDaddy, where my domain is hosted. This makes me think that Google Cloud is interpreting my subdirectory as a subdomain. Is this assumption correct?
And is there a way to fix this issue?
I figured this one out. I'm guessing I need to do something in GoDaddy so that vizitdata.com (without the www) is recognized, because once I added the "www" the page resolved. Cheers
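For reference, here is a rough sketch of what the DNS side usually looks like when a static site is served from a Google Cloud Storage bucket (that hosting setup is an assumption here; App Engine or a load balancer would be configured differently). The www host is a CNAME to Google's storage endpoint, and the bare domain is handled with GoDaddy's domain forwarding, since a bare domain cannot be a CNAME:

www   IN   CNAME   c.storage.googleapis.com.
; plus a GoDaddy "Domain Forwarding" rule sending vizitdata.com to http://www.vizitdata.com

That would explain why only the www version resolves until the forwarding is in place.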
This is my site link:
www.englishact.com
This is the current status of the sitemap: Google is showing no errors for the sitemap or for any other pages, but the number of indexed pages has been 0 for about 3 months. I have also uploaded new sitemaps, which behave the same way, with nothing indexed.
NB: I am using a paid 1and1 hosting package. Also, Google has accepted AdSense for this site. Now what can I do? Any suggestions?
Your website is indexed on Google; I just searched for site:www.englishact.com and got many results.
Check whether the links in your XML sitemap are valid or are redirecting to other URLs.
You also need to resolve the duplication in your URLs: the website can be accessed both with and without www, and you have two URLs for the homepage, http://englishact.com/ and http://www.englishact.com/index.php.
After fixing these errors your website should be healthy and Google will understand its structure.
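A minimal sketch of how the duplication is commonly fixed on Apache with mod_rewrite in an .htaccess file (this assumes your 1and1 package allows .htaccess overrides; the rules are an illustration, not a drop-in config):

RewriteEngine On
# send the bare domain to the www version
RewriteCond %{HTTP_HOST} ^englishact\.com$ [NC]
RewriteRule ^(.*)$ http://www.englishact.com/$1 [R=301,L]
# collapse /index.php onto the homepage so there is one canonical URL
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php
RewriteRule ^index\.php$ http://www.englishact.com/ [R=301,L]

The THE_REQUEST condition keeps the second rule from looping when Apache serves index.php internally for the homepage.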
I got a notification from Google Webmaster Tools that the number of 404 errors has gone up considerably. On inspecting the crawl errors I see a lot of errors for something that shouldn't be there:
(screenshot of the crawl errors report in Webmaster Tools)
I checked the source code but didn't find any mention of said URL, so I don't know where Google is getting it from. This plugin directory doesn't even exist. It's a WordPress installation, so there's a wp-content/plugins folder but no plugins/ folder.
What could be going on? Why is Google trying to index a non-existent URL and getting a 404?
Site URL is http://ladiesnightandbrunchesdubai.com
Any help would be appreciated.
This URL comes from the Facebook Comments plugin. Because it is not an absolute URL, the Google crawler thinks it points to your website.
This probably didn't happen before, for one of two reasons:
1) The Google crawler recently started executing more and more JavaScript (see http://googlewebmastercentral.blogspot.be/2015/10/deprecating-our-ajax-crawling-scheme.html). If this is the case, we could run into more problems like this with third-party scripts on our websites.
2) Maybe the Facebook Comments plugin didn't use relative URLs before.
Solution:
Tell Google not to crawl these URLs by adding a rule for them to robots.txt:
User-agent: *
Disallow: /plugins/feedback.php
I'm seeing the same thing on my WordPress site. The first occurrence was 11/23, and there are now around 500 URLs and growing.
I've grepped the WordPress codebase and can't find where that path is being constructed and discovered by Google.
To clear the 404 report in Webmaster Tools, I've added a 301 redirect from '^/plugins/feedback.php' to the homepage, and then marked all the errors as 'Fixed' in Webmaster Tools.
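For anyone doing the same, here's roughly what that redirect looks like in an Apache .htaccess file (a sketch only, assuming mod_rewrite; on nginx or another server the syntax differs):

RewriteEngine On
# send the phantom Facebook Comments path back to the homepage
RewriteRule ^plugins/feedback\.php$ / [R=301,L]

Place it above WordPress's own rewrite block so the redirect fires before the request falls through to index.php.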
A site I'm working on has been hacked. The CMS (which I didn't build) was accessed and some files (e.g. "km2jk4.php.jpg") were uploaded in image fields. I have since deleted them (a week ago). Now, when I search for the site on Google, then click the result, it either:
a) simply redirects me to the Google search page
OR
b) a download dialogue appears, asking me to download a zip file, with the source domain being something like gb.celebritytravelnetwork.com
Clearly the site's been compromised. But if I simply type the URL in the address bar, the site loads fine. This only happens when I click through Google results.
There is no .htaccess file on the server, and this is not a virus on my computer, since many other people have reported the same thing happening, so this question is not relevant.
Any ideas please?
Thanks.
Your source files have been changed.
Check all the files included in the index page; they might be the header and footer files.
Also try using Fetch as Google in Webmaster Tools.
I own some webspace which is registered with a university. Google has unfortunately found my CV (resume) on the site, but has mis-indexed it as a scholarly publication, which is screwing up things like citation counts on Google Scholar. I tried uploading a robots.txt into my local subdirectory. The problem is that Google ignores this file and instead uses the rules listed for the school's domain.
That is, the URL looks like
www.someschool.edu/~myusername/mycv.pdf
I have uploaded a robots.txt, which can be found here
www.someschool.edu/~myusername/robots.txt
And Google is ignoring it and instead using the robots.txt for the school's domain
www.someschool.edu/robots.txt
How can I make Googlebot ignore my CV?
Sadly, robots.txt is defined to be whatever you get when you GET /robots.txt, so you can't use it for your subdirectory.
What you can do is use the X-Robots-Tag HTTP header, if you can use custom .htaccess files. Here's Google's documentation on X-Robots-Tag.
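As a rough sketch (assuming Apache with mod_headers enabled and .htaccess overrides allowed in your directory), an .htaccess file placed next to the PDF could send the header for just that one file:

<Files "mycv.pdf">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

Once the header is in place, you can verify it by checking the response headers for the PDF's URL, and then ask Google to recrawl or remove the file in Webmaster Tools.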