There are some errors reported in my GWT account (Google Search Console).

GWT has sent the below message:
Dear webmaster,
Your smartphone users will be happier if they don’t land on error- or non-existent pages, so we recommend you make sure that your pages return the appropriate HTTP code. Currently, Googlebot for smartphones detects a significant increase in URLs returning a 200 (available) response code, but we think they should return an HTTP 404 (page not found) code.
Recommended actions
Check the Smartphone Crawl Errors page in Webmaster Tools.
Return a 404 (page not found) or 410 (gone) HTTP response code in response to a request for a non-existent URL.
Improve the user experience by configuring your site to display a custom 404 page when returning a 404 response code.
Now, how do I resolve this?

Have you made any significant changes lately, like changing the URLs of all your pages?
First off, make sure your pages are available and working at their URLs. Try searching for yourself on Google with "site:yourdomain.com". Are these pages correct, or do they not exist?
You should also check that if a page does not exist (yourdomain.com/blahblah), it returns HTTP 404 (Not Found) and not HTTP 200 (OK). You can see this in Chrome Developer Tools: go to the Network tab, reload the page, and check the Status column for your HTML page.
How you change the HTTP code depends on your web server and language. In PHP you can use header().
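For example, here is a minimal PHP sketch of that (the 404.php template name is a hypothetical placeholder for your custom error page):

<?php
// Send the 404 status before any output is written.
http_response_code(404);             // PHP >= 5.4
// header('HTTP/1.1 404 Not Found'); // equivalent on older PHP
// Render a friendly custom 404 page, per Google's recommendation.
include __DIR__ . '/404.php';
exit;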

Redirects and Meta Description

I am currently redirecting a website to a new website. My question is: does a redirect automatically move my metadata (titles, taglines, meta descriptions, focus keyphrases) to the new site, or do I have to manually copy and paste the metadata to the new site?
Better put: do my rankings from the old site, earned as a result of that metadata, automatically transfer to my new site upon redirecting?
Thanks.
You are going to want to look into using a specific kind of redirect, the HTTP 301 redirect. That status code means it's a permanent redirect; most search engines will recognize that, and it will help preserve your SEO.
If you do an HTTP 302 redirect, known as a temporary redirect, the destination will technically be considered a new URL in the eyes of a search engine.
So when you are testing your redirect, make sure you open Chrome Dev Tools (F12) and click the Network tab. Refresh the page and click the first row that appears in the table. It should specify the HTTP status code, most likely 200 (OK), 301/302 (permanent/temporary redirect), 404 (not found), or 500 (server error).
So essentially, make sure your redirect is a 301 redirect, if it is permanent of course!
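As a minimal illustration (the new URL is a hypothetical placeholder), a permanent redirect in PHP looks like this:

<?php
// Issue a permanent (301) redirect so search engines transfer the old
// URL's ranking signals to the new location.
header('Location: https://new-site.example.com/new-page', true, 301);
exit;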

How to remove Google search results for a 303 redirect?

I run a dynamic site that may or may not redirect a certain route based on user preferences.
Let's say it's http://clientname.example.com/maybe. Our backend has a response for /maybe, but if the client decides they would rather use their own site for the information on that page, we instead use a 303 redirect to their page on a separate domain.
All of our content pages use the <meta name="robots" content="noindex"> tag, so Google will not index any of our pages. HOWEVER, when I search Google for "site:our_domain_name.com", I get a bunch of results that all trace back to those dynamic routes that return a 303. When I click on those search results in Google, the 303 is followed as expected and I arrive at the client's site. What I want is for my piece of the puzzle not to show up in results at all.
I was troubleshooting this morning, and I realized that our noindex meta tag was obviously not being seen by the robot, as it was following the redirect, so I added a rule on the server that adds the 'X-Robots-Tag: noindex' header to redirect responses.
Is that enough? If I wait long enough, will those search results be removed?
No, because if an external page links to your site, Google will follow the link to your site, then your 303 (if you return such a code), and won't see the noindex.
Don't return a 303 for Google's bots and you should be fine. It may take a bit of time, because Google needs to reprocess the page and see the noindex before removing it.
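For reference, the header approach the asker describes could look like this minimal PHP sketch (the client destination URL is a hypothetical placeholder):

<?php
// X-Robots-Tag lives in the response headers, so a crawler can see it
// even on a response that has no HTML body, such as a redirect.
header('X-Robots-Tag: noindex', true);
// Hypothetical client destination on the separate domain.
header('Location: https://clientname.example.com/their-page', true, 303);
exit;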

Page renders in browsers but throws "404 page not found" for SEO crawlers and/or requests made by a program. What could be the issue?

I built a site with ZF (Zend Framework) and installed it fine on my server. I have the MVC structure and use custom routing (for SEO purposes), as below:
mysite.com/controller.html
mysite.com/controller/action.html
Generally, everything is working fine; the only problem is that SE crawlers won't find any of the .html pages. If I open the "Activity" window in Safari, I see all the CSS and other files being referenced/read fine, but not the page itself.
So the page renders fine in a browser, but SE crawlers, or any program that makes the request, won't find the page. I'm wondering if it's an Apache issue. My .htaccess file is the same file that shipped with ZF.
I really appreciate any advice/suggestions/comments!
Is it possible that your app is serving all pages with a 404 status code? So browsers and crawlers are getting the same thing, but the browser renders the content whereas crawlers respect the status code and ignore it. I've seen some people use the Error Controller in ZF as a way of doing routing (not a good idea), where the Error Controller 'catches' all requests and then examines the params to determine what to display (a condensed sketch of that pattern follows below).
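For context, this is roughly what that anti-pattern looks like in a ZF1-style ErrorController (names follow the standard ZF1 skeleton; this is illustrative, not your actual code):

<?php
// ZF1 skeleton: unknown routes are dispatched to ErrorController, which
// sets a 404 status and then renders a view. If 'real' pages are routed
// through here, a browser still renders the body, but crawlers see 404.
class ErrorController extends Zend_Controller_Action
{
    public function errorAction()
    {
        $errors = $this->_getParam('error_handler');
        // The skeleton sets the status code for unknown routes:
        $this->getResponse()->setHttpResponseCode(404);
        // ...then the params are examined to decide what to display.
        $this->view->message = 'Page not found';
    }
}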
If this isn't your problem, please could you edit your question to include:
How it is that you know crawlers are getting a 404
Some more info on how you are doing your routing
Also, if you can provide an example URL, we can check the headers that are being returned (a quick way to check this yourself is sketched below).
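As a rough self-check, PHP's get_headers() prints the status line a crawler would receive (using the placeholder URL pattern from the question):

<?php
// Fetch only the response headers for one of the routed .html pages
// and print the status line, e.g. "HTTP/1.1 404 Not Found".
$headers = get_headers('http://mysite.com/controller/action.html');
echo $headers[0], "\n";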

Silent failure loading page application in iframe over HTTPS

Problem
I have an application driving a tab on a client's page. The application works correctly if the user has not enabled FB's "secure browsing" feature. When attempting to view it over HTTPS, the iframe doesn't even appear (no errors, no mixed-content warnings). When it loads correctly over HTTP, the div with the id "pagelet_app_runner" has an iframe inserted into it, and the application content is loaded inside there. Over HTTPS, this div remains empty and the iframe is not inserted into the page. There are no JavaScript errors appearing in Firebug or Chrome's equivalent console.
Why I'm Asking Here
The host has a valid SSL certificate and there is no 'mixed content' at the URL in question. I can successfully view the content over HTTP or HTTPS by visiting the URL directly, and I can do the same by visiting apps.facebook.com/canvasURL/tabURL. It is only when attempting to view within a Page Tab that the HTTPS load fails as described above. My application is configured with both regular and secure canvas and tab URLs.
Attempted Debugging
I've recorded some sessions with Charles, but since the iframe isn't being inserted into the page, I think I'm coming at the problem after it's already occurred. I'm no Charles expert, so I'm happy to be corrected here.
Apache isn't seeing any request (in either the regular or SSL logs) for the affected loads; non-SSL loads come through as expected in access_log.
Plea for Help
I'm out of ideas for debugging this. Does anybody have any suggestions? What really obvious and stupid mistake might I have made? :)
Your app canvas URL is https://skinnycomp.nextstudio.com.au/skinnycowcomps/, which sends a 404 error to the Facebook proxy (the request goes through the proxy when viewing an app via a tab). When viewing your app via apps (https://apps.facebook.com/122381834451561/), again a 404... maybe the Facebook proxy is ignoring the 404 and rendering a blank page...
Try changing the canvas URL to https://skinnycomp.nextstudio.com.au/skinnycowcomps/tab. You can also check whether your app is being accessed via a page tab: the signed_request should contain a page_id (see the sketch after the trace below).
23:51:15.379[549ms][total 1667ms] Status: 404[Not Found]
GET https://skinnycomp.nextstudio.com.au/skinnycowcomps/
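As a rough sketch of that signed_request check, based on Facebook's documented format (YOUR_APP_SECRET is a placeholder for your app's secret):

<?php
// signed_request is two base64url-encoded parts: an HMAC-SHA256
// signature and a JSON payload, joined by a dot.
function parse_signed_request($signed_request, $app_secret) {
    list($encoded_sig, $payload) = explode('.', $signed_request, 2);
    $sig  = base64_decode(strtr($encoded_sig, '-_', '+/'));
    $data = json_decode(base64_decode(strtr($payload, '-_', '+/')), true);
    $expected = hash_hmac('sha256', $payload, $app_secret, true);
    return hash_equals($expected, $sig) ? $data : null;
}

$data = parse_signed_request($_POST['signed_request'], 'YOUR_APP_SECRET');
if ($data !== null && isset($data['page']['id'])) {
    // The app is being loaded inside a page tab.
}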
This is a real longshot since I'm sure you've triple-checked all the settings, but a blank page can happen if an invalid URL is specified in the Page Tab URL field in the app settings. Since it only happens on HTTPS, that would imply something specific to the Secure Page Tab URL entry. It might be worth checking that again, and maybe even re-saving it or changing it to something else to see if that helps.
I was using relative URLs for the regular and secure tab URL fields. From memory, relative URLs here were mandatory at some point in the past. It appears now that a relative URL will still work for HTTP but not for HTTPS. Fix: absolute URLs. Hopefully FB updates their field validation to match what's required, too.

Security warning in IE9: "Show all content"

I'm implementing the Facebook Comments plugin on my site. Users get the warning "Show all content" in IE9.
Another publisher is using the same plugin, and it does not bring up the warning.
Can someone please help me with this?
Asking users to turn off the mixed-content warning in their IE9 is not an option.
We were just looking at this today, and our workaround for now was to include the Facebook library over HTTPS (even when the page itself is viewed over HTTP). Although not ideal, it gets rid of the mixed-content warnings in IE9 until they have fixed their bug.
That seems to be how it was accomplished at www.vg.no, linked in the original question; the library is linked via HTTPS.
From their code:
<script src="https://connect.facebook.net/nb_NO/all.js"></script>
I have the same problem:
I have a page that's 100% HTTP. But the Facebook JavaScript (which I call over HTTP) is returning assets (.js, images) over HTTPS, which is generating security warnings for IE(9) users.
I have figured out it's the Comments widget from Facebook.
Here's an example of a live page on HTTP with the error:
http://app.gophoto.com/p?id=10173&rkey=CD01891B287792415384&s=1&a=6940
Here's one of the assets that Facebook returns over HTTPS:
https://s-static.ak.facebook.com/rsrc.php/v1/y8/r/7Htnnss1mJY.js
(I'm unable to comment (for some reason?) on Joel's answer. But his suggestion to fetch the initial all.js over HTTPS on HTTP sites does not actually work. I've tried it, and it also looks inherently incorrect, since even that initial JS fetch mixes HTTP and HTTPS content.)