Blocked errors in Google Webmaster Tools for smartphones - google-search-console

I am getting blocked errors in GWT for smartphones. All of the pages listed as blocked errors in GWT are already disallowed in robots.txt, but I don't understand why GWT reports these pages under blocked errors (Smartphone).
My robots.txt uses:
User-agent: *
Disallow: /directory/
Can anyone help?

These errors are mainly due to the pages (or the desktop versions they correspond to) being blocked by robots.txt.
Examine the robots.txt: the User-agent: * group applies to every crawler, so the pages are blocked for the smartphone crawler as well (see the robots.txt sketch below).
For more details:
https://support.google.com/webmasters/answer/4066608?hl=en&ref_topic=2446029
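For reference, Google's smartphone crawler obeys the same Googlebot rules, so a catch-all User-agent: * group blocks it too. If the intent is to keep the directory blocked, these reports are expected; if you actually want Google to crawl it on smartphones, a more specific group takes precedence for Google's crawlers. A sketch only, to be adapted to your actual intent:
# Googlebot (desktop and smartphone use the same token) gets its own group,
# which overrides the * group; all other crawlers stay blocked.
User-agent: Googlebot
Allow: /directory/

User-agent: *
Disallow: /directory/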

Unwanted 303 Redirects prevent Facebook Sharing

I run a website which seems to have a problem with 303 Redirects.
The CMS is TYPO3 9.5.24.
I don't know where the redirects are coming from. Unfortunately the 303 redirects are not listed in the network tab of the console (tested in Chrome and Firefox). Why not?
The problem is that Facebook is not able to scrape the pages. Their Sharing Debugger (https://developers.facebook.com/tools/debug/) tells me "URL requested a HTTP redirect, but it could not be followed."
I checked with https://www.redirect-checker.org/index.php, there I get a loop of 303 redirects.
I can view the website in any browser just fine, no problems there.
I checked .htaccess and the TYPO3 Backend for 303 redirects, but found nothing.
I suspected a server (nginx) misconfiguration but can't figure it out. Other websites on the same server do not have that problem.
Has anyone experienced similar problems?
Found the redirection in our custom code. It had nothing to do with TYPO3.
Sorry for the confusion.
Thanks Peter Kraume, the curl check helped me to find the problem (a script equivalent to that check is sketched after this post).
Apparently modern browsers handle the 303 redirect loop silently, so I was not able to see anything in the browser console.
Can be closed.
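A minimal sketch of such a check in Python (assumes the requests package; the start URL is a placeholder): follow redirects one hop at a time, so a 303 loop that the browser hides becomes visible.
from urllib.parse import urljoin

import requests

url = "https://www.example.com/"  # placeholder, replace with the affected page
seen = set()

for _ in range(20):  # safety limit on redirect hops
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(resp.status_code, url)
    if resp.status_code not in (301, 302, 303, 307, 308):
        break
    if url in seen:
        print("Redirect loop detected at", url)
        break
    seen.add(url)
    url = urljoin(url, resp.headers["Location"])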

"/recaptcha/api2/logo_48.png" blocked by Google

I have a contact form on my site and it all works fine. I'm also using Google reCAPTCHA for that form.
When I go to my Google Search Console to make sure all is fine, I see one error stating:
Googlebot couldn't get all resources for this page. Here's a list:
https://www.gstatic.com/recaptcha/api2/logo_48.png << Blocked
I went to my robots.txt file and added the following, but that didn't help:
Allow: https://www.gstatic.com/recaptcha/api2/logo_48.png
Allow: /recaptcha/api2/logo_48.png
Your own robots.txt is only for URLs from your host.
The message is about a URL from a different host (www.gstatic.com). This host would have to edit its robots.txt file to allow crawling of /recaptcha/api2/logo_48.png. It’s currently disallowed.
In other words: You can’t control the crawling of files that are hosted by someone else.
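A minimal sketch (Python standard library only) that makes this concrete: ask www.gstatic.com's own robots.txt whether the resource may be fetched. Only that host can change the answer.
from urllib import robotparser

# Fetch and parse the other host's robots.txt, then test the blocked path.
rp = robotparser.RobotFileParser("https://www.gstatic.com/robots.txt")
rp.read()
print(rp.can_fetch("Googlebot",
                   "https://www.gstatic.com/recaptcha/api2/logo_48.png"))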

slew of 404 errors on google webmaster tools to plugins/feedback.php file

Got a notification from Google Webmaster Tools that the number of 404 errors has gone up considerably. On inspecting the crawl errors, I see a lot of errors for something that shouldn't be there:
(screenshot of the Webmaster Tools crawl errors)
I checked the source code but didn't find any mention of said URL, so I don't know where Google is getting it from. This plugin directory doesn't even exist. It's a WordPress installation, so there's a wp-content/plugins folder, but no plugins/ folder.
What could be going on? Why is Google trying to index a non-existent URL and getting a 404?
Site URL is http://ladiesnightandbrunchesdubai.com
Any help would be appreciated.
This URL comes from the Facebook Comments plugin. As it is not an absolute URL, the Google crawler thinks it is pointing to your website.
This probably didn't happen before, either because:
1) the Google crawler recently started executing more and more JavaScript - http://googlewebmastercentral.blogspot.be/2015/10/deprecating-our-ajax-crawling-scheme.html If this is the case, we could encounter more problems like this with third-party scripts on our websites.
2) the Facebook Comments plugin may not have used relative URLs before.
Solution:
Tell Google not to crawl these URLs by adding them to robots.txt
Disallow: /plugins/feedback.php
I'm seeing the same thing on my WordPress site. First occurrence was 11/23. There are now around 500 URLs and growing.
I've grepped the WordPress codebase and can't find where that path is being constructed and discovered by Google.
To fix the 404 report in Webmaster Tools I've added a 301 redirect on '^/plugins/feedback.php' to the homepage (a sketch of such a rule is below), and then marked all errors as 'Fixed' in Webmaster Tools.
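A sketch of such a rule, assuming the WordPress site runs on Apache with an .htaccess file (the pattern mirrors the one from the post; the target "/" stands in for the homepage):
# Hypothetical .htaccess rule (Apache mod_alias): 301 the phantom
# Facebook Comments path back to the homepage.
RedirectMatch 301 ^/plugins/feedback\.php$ /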

Why robots.txt doesn't work when I do redirection from http to https

Today I experienced a problem with search on Google.
When I type "trakopolis" into Google, it shows my page (so it is indexed by Google's robots), but the description of the page is not available. It is very important for my website to have a description.
The website is:
https://trakopolis.com
The robots.txt file is as follows, so I allow everything:
User-agent: *
Allow: /
https://www.google.com.ua/?gws_rd=cr#gs_rn=23&gs_ri=psy-ab&tok=O7cIXclKCSxtMd3uDVRVhg&cp=2&gs_id=h&xhr=t&q=trakopolis&es_nrs=true&pf=p&output=search&sclient=psy-ab&oq=tr&gs_l=&pbx=1&bav=on.2,or.r_qf.&bvm=bv.50165853,d.bGE&fp=d3f611552977418f&biw=1680&bih=949
But as you see, the description is not available. I'm confused :( Sorry if the question is stupid.
As I can see from Google Webmaster Tools, Google uses this robots.txt file, so maybe the issue is with the redirection from http to https? The website doesn't allow http and we use https. And on the main page I redirect to a Login.aspx page if the user isn't authenticated.
Google shows a description when searching for "trakopolis".
It seems that your robots.txt disallowed crawling of your site some time ago, as some other search engines still display that they are not allowed to show your description, e.g. DuckDuckGo.
Note that your robots.txt uses Allow, which is not part of the original robots.txt specification (but many parsers understand it anyway). It’s equivalent to:
User-agent: *
Disallow:
(But because parsers have to ignore unknown fields, you should have no problem using Allow. An empty or nonexistent robots.txt always allows crawling of everything.)
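To rule out the http-to-https redirect as a factor, a quick check like this sketch (Python with the requests package, using the hostname from the question) shows where a plain-http request for the robots.txt ends up:
import requests

# Fetch the robots.txt over both schemes and show the final URL after redirects.
for scheme in ("http", "https"):
    resp = requests.get(scheme + "://trakopolis.com/robots.txt", timeout=10)
    print(scheme, "->", resp.status_code, resp.url)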

security warning in IE9 "Show all content"

I'm implementing the Facebook Comments plugin on my site. Users get the warning "Show all content" in IE9.
This other publisher is using the same plugin and it does not bring up the warning.
Can someone please help me with this?
Asking users to turn off the mixed content warning in their IE9 is not an option.
We were just looking at this today and our workaround for now was to include the Facebook library over https (even when the page itself is viewed over http). Although not ideal, it gets rid of the mixed content warnings in IE9 until they have fixed their bug.
That seems to be how it was accomplished at www.vg.no, linked in the original question: the library is loaded via https.
From their code:
<script src="https://connect.facebook.net/nb_NO/all.js"></script>
I have the same problem:
I have a page that's 100% http. But the Facebook JavaScript (which I load over http) is returning assets (.js, images) over https, which is generating security warnings for IE9 users.
I have figured out it's the Comments widget from Facebook.
Here's an example of a live page on http with the error:
http://app.gophoto.com/p?id=10173&rkey=CD01891B287792415384&s=1&a=6940
Here's one of the assets that Facebook returns over HTTPS:
https://s-static.ak.facebook.com/rsrc.php/v1/y8/r/7Htnnss1mJY.js
(I'm unable to comment on Joel's answer, for some reason. But his suggestion to fetch the initial all.js over https on http sites does not actually work. I've tried it, and it also looks inherently incorrect, since even the initial js fetch mixes http and https content.)