Facebook Lint/Debugger 403 and 503 response codes (WordPress site)

Humbly asking for any assistance people have time to give me on this one. Let me start by saying that I am aware there are previous questions about this here and elsewhere on the web; I have read a lot of them, and they are either unanswered or unresolved, had a particular cause that doesn't apply to me, or suggest things I have already done.
Over the past few days, Facebook has suddenly stopped scraping my website posts successfully, so when I paste a link into Facebook it pulls nothing through: no thumbnail or description. I run the links through the FB lint/debugger and it alternates between 403 and 503 response codes, but mainly 403. Previous links that Facebook has cached/successfully scraped still display with thumbnails and descriptions, but they still present a 403 or 503 response.
My site is http://21stcenturyburlesque.com
One of the new URLs I have been testing is : http://21stcenturyburlesque.com/the-burlesque-top-50-2013/
I have checked with the server/host people. Nothing has changed and everything is fine on their end.
I have tried with the default WordPress theme. No change.
I have read threads about BulletProof Security causing issues, although why it suddenly would, I don't know. It was deactivated on my site anyway, but I went through the removal process to remove the .htaccess file containing the BPS code. I then ran the debugger without an .htaccess file present, and with a very basic .htaccess present (shown below). No change.
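For reference, the "very basic" .htaccess I tested with was nothing more than the stock rewrite block that WordPress itself generates for a root install:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress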
Hotlinking protection is disabled in my cpanel.
I have experimented with adding/removing www. and / when I paste the link into lint as someone suggested. No change.
I use the Facebook OGP WordPress plugin. I spoke to the creator and he says the plugin is working as it should and that I should contact my host/server. See the first point above.
I tried creating a new FB App and using the new App Id number with the OGP plugin. No change.
I checked the cPanel error log. This came up three times tonight:
[Fri Nov 01 21:47:53 2013] [error] [client 193.242.149.35] File does not exist: /home/**/public_html/403.shtml
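As I understand it, 403.shtml is just cPanel's default custom error page, so this entry means Apache served a 403 and then failed to find the error document; it confirms the 403 rather than causing it. A diagnostic sketch, using Facebook's published crawler user agent, to reproduce what the scraper sees:

# If this returns 403 while a normal browser gets 200, something
# server-side (e.g. mod_security or a firewall rule) is filtering
# by user agent or by Facebook's IP ranges
curl -I -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" http://21stcenturyburlesque.com/the-burlesque-top-50-2013/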
There are a few other things I ruled out, but I've been at this for so long that I can't remember all of them, so if someone suggests something I've already tried, I apologise in advance for not mentioning it here.
If anyone can suggest anything else, I would really appreciate it. I manage to fix most technical problems I come up against, but this one has stumped me and my much more experienced colleague, and it is really affecting my click-through rates and site traffic. If it comes down to adding things to my .htaccess file, I would appreciate guidance on what to add or remove. Many thanks in advance.

I had the same problem. It drove me crazy for hours (maybe days). In your FB app settings, make sure that the URL at the top includes the http:// prefix.

Related

ColdFusion - Redirect website if it hits /folder/index.cfm?

Very new to ColdFusion, but not to web development, so hopefully this is an easy question.
We recently changed a link on our website that used to take us to /folder/index.cfm. I want to make sure that when someone types www.ourwebsite.com/folder, it doesn't take them to /folder/index.cfm but instead redirects them to another website.
Any pointers?
There are at least three ways to do this.
Don't even bother with ColdFusion. Have your web server do the redirect. You will need to know whether it is Apache or IIS or something else; you can then search for how that web server does redirects.
This might help you with some of that: Custom 404 error page not working on IIS 8.5
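If it happens to be Apache, for example, this is a two-line job for mod_alias in .htaccess or the vhost config (the target URL here is a placeholder):

# Send both the old page and the bare folder to the other site
Redirect 301 /folder/index.cfm http://www.othersite.com/
RedirectMatch 301 ^/folder/?$ http://www.othersite.com/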
You can make a file at /folder/index.cfm that contains nothing but:

<cflocation url="newpage.cfm" addtoken="no" statuscode="301">

Or with cfscript:
<cfscript>
location("newpage.cfm", false, 301)
</cfscript>
Note that addtoken and statuscode are optional. addtoken="no" helps because it stops ColdFusion appending its CFID/CFTOKEN client token to the URL, which almost no CF website uses. The status code helps because 301 tells the browser that this is a permanent move.
You could intercept the request in Application.cfc. In fact, in some systems all requests are checked for validity in Application.cfc. You might still need a blank page at the target, but at least some ColdFusion is processed; a sketch follows.
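A minimal sketch of that interception (the component name and target URL are illustrative, not from the original setup):

// Application.cfc
component {
    this.name = "MyApp";

    public boolean function onRequestStart(required string targetPage) {
        // Send requests for the retired page to the other site
        if (findNoCase("/folder/index.cfm", arguments.targetPage)) {
            location("http://www.othersite.com/", false, 301);
        }
        return true;
    }
}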
Of all the options, 1 is my favorite, because there really isn't a lot that can be done with requests for missing pages, and the list of potential missing pages is unlimited.

Thumbnails won't show from specific server. Identical page's thumbs works from other server

I've placed two identical files on two different servers to test how the thumbnails work when sharing on FB. One domain works, the other doesn't. The FB debugger gives a 503 and the scraper says "Document returned no data".
Works:
https://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.muikkumedia.fi%2Ffacebook%2F
Doesn't work (other provider):
https://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fartblog.pa-la.fi%2Ffacebook%2F
As there is a dash (-) in the failing domain's (and server's) name, I was thinking it might be the "dash bug", but the debugger can find info from the root domain, so that doesn't seem to be the problem either. I'd post the link directly here too, but I don't have enough reputation on this site.
Anyway, the server provider's answer is: "I'd suggest you contact facebook's support on this issue, as everything looks ok on our end with the info they provide."
Any ideas?
I changed the provider, and it works now.

Domain blocked and no data scraped

I recently purchased the domain www.iacro.dk from UnoEuro and installed WordPress, planning to integrate blogging with Facebook. However, I cannot even share a link to the domain.
When I try to share any link on my timeline, it gives the error "The content you're trying to share includes a link that's been blocked for being spammy or unsafe: iacro.dk". Searching, I came across Sucuri SiteCheck, which showed that McAfee TrustedSource had marked the site as having malicious content. Strange, considering that I had just bought it, it contains nothing but WordPress, and I can't find any previous history of ownership. But I got McAfee to reclassify it, and it now shows up green at SiteCheck. However, a few days later, Facebook still blocks it. Clicking the "let us know" link in the FB block dialog took me to a "Blocked from Adding Content" form that I submitted, but this just triggered a confirmation mail stating that individual issues are not processed.
I then noticed the same behavior as described here and here: when I type any iacro.dk link on my timeline, it generates a blank preview with "(No Title)". It doesn't matter if it's the front page, an .htm document or even an image; nothing is returned. So I tried the debugger, which returns the very generic "Error Parsing URL: Error parsing input URL, no data was scraped.". Searching this site, a lot of people suggest that missing og: tags can prevent scraping. I installed a WP plugin for that and verified the tag generation, but nothing changed. And since FB can't even scrape a plain .htm or .jpg from the domain, I assume the tags can be ruled out.
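For reference, the basic Open Graph tags in question look like this (the values here are placeholders, not the plugin's actual output):

<meta property="og:title" content="Page title" />
<meta property="og:type" content="website" />
<meta property="og:url" content="http://www.iacro.dk/" />
<meta property="og:image" content="http://www.iacro.dk/some-image.jpg" />
<meta property="og:description" content="A short description." />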
Here someone suggests that 301 redirects can be a problem, but I haven't set up any redirection; I don't even have a .htaccess file.
So, my questions are: Is this all because the domain has been marked as "spammy"? If so, how can I get the FB ban lifted? I have seen examples of other "spammy" sites where the preview is generated just fine, e.g. http://dagbok.nu, described in this question. So if the blacklist is not the only problem, what else is wrong?
This is driving me nuts so thanks a lot in advance!
I don't know the details, but it is a problem that Facebook has with web sites hosted on shared servers, i.e. where the server hosting your web site also hosts a number of other web sites.

Facebook can't read my page

I run a site called http://www.theinspiration.com.
A few days ago my Facebook share button stopped working. I can still share, but I don't get any FB meta data with it when sharing.
When I run it through the linter:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.theinspiration.com%2F2011%2F11%2Ftime-lapse-view-from-space-by-nasa%2F
I get: Critical Errors That Must Be Fixed
Error Scraping Page: Bad Response Code
If I just copy the source code into a plain HTML file, post it on a server and run it through the linter, it works with all the meta data. (Really, I just need the FB thumbnail image to work when sharing.)
I run W3 Total Cache and a CDN (Amazon), and I read that this might be the cause, but when I disable W3 Total Cache, I still get the error.
I spent 10 hours trying to figure it out today. Can someone help me?
Thanks.
Daniel
I have had problems with this myself. The errors you receive are utterly useless, and I am fairly sure the problem is not really a bad response code.
I am truly sorry that I do not remember my earlier fix for this, but I will try to remember it.
Just make sure you have added an app_id and an administrator ID to your metadata, for example:
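Something like this in the page head (the IDs are placeholders; use your own app and profile IDs):

<meta property="fb:app_id" content="123456789012345" />
<meta property="fb:admins" content="100000123456789" />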
Facebook can suddenly change their required parameters, so they might have done just that.
Good luck!

How to tell Google that a specific page of my website disappeared and won't come back?

I have a website where 50% of the pages have a limited lifetime.
To give an idea, 4,000 pages appear each week and the same number disappear.
By "appearing" and "disappearing", I mean that the appearing pages are completely new ones, and disappearing pages are removed from the website forever. There is no "this new page replaces this old page".
I naively used a 410 code on every URL where a page had disappeared.
That is, the URL http://mywebsite/this-page-was-present-until-yesterday.php returned a 200 OK code until yesterday, and now returns a 410 Gone code.
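For what it's worth, the 410 can be served straight from .htaccess with mod_alias (the path is the example one above):

# "gone" makes Apache answer 410 Gone for this URL
Redirect gone /this-page-was-present-until-yesterday.php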
I didn't use a redirect, because I want to tell the user that the URL he accessed isn't wrong, just expired.
The problem is: Google won't acknowledge this information. It keeps crawling the pages, and Webmaster Tools alerts me as if the pages were broken 404s. This significantly affects my "reputation".
Did I do something wrong? How should I proceed?
It's always a very good idea to make your own error page. This can save you a lot of the visits that arrive through broken links.
.htaccess error pages
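Custom error documents take only a couple of lines in .htaccess (the paths are placeholders):

ErrorDocument 404 /errors/404.html
ErrorDocument 410 /errors/410.html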
Google's Webmaster Tools lets you request removal of certain pages.
You can find this under "crawler access".
Try adding a noindex header.
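For example, via mod_headers in .htaccess, scoped to one of the retired pages (the filename is the example one above):

<Files "this-page-was-present-until-yesterday.php">
    Header set X-Robots-Tag "noindex"
</Files>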