Dynamically changing the meta tag values works fine with the Debug Tool but not while posting with the Graph API Explorer - facebook

I am posting an Open Graph "Level Up" action, but while testing I am getting strange results. I have set the code to dynamically change the title so that it says Level "1", Level "2", etc.:
url.php?level=6
This works perfectly in the Debug Tool, which updates the title with whatever parameter value I pass in. The problem comes when I actually try to post using the Graph API Explorer tool. Whatever parameter I pass, e.g. level=1 or level=2, it doesn't seem to take the parameter value. Has anybody encountered the same problem?

You can make the POST request on Windows using Fiddler. Don't forget to set the User-Agent header (if Facebook really checks it).
I have the same issue. I checked my logs: FB does not even try to load my object from the specified URL! After checking the link in the Debug Tool or making a request from the Graph API Explorer, it works.
There's already a bug report on FB.

Each URL needs to be accessible by Facebook's Debug Tool and needs to be internally consistent, without redirects or loops.
If a page has an og:url tag, Facebook will load that URL instead; so if your URL includes parameters which control the output of the meta tags, the og:url tag needs to include the same parameters that loaded the page in the first place.
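As a rough sketch of that rule (names and URL here are illustrative, not from the question): the server-side code that picks the title from the query string should also emit an og:url that carries the same query string, so the scraper following og:url lands on the identical variant.

```python
# Minimal sketch: generate OG meta tags whose og:url echoes the same
# "level" parameter that selected this page's content.
from urllib.parse import urlencode

BASE_URL = "http://example.com/url.php"  # assumption: stands in for the real object page

def og_tags(level: int) -> str:
    # og:url must point back at THIS exact variant, parameters included;
    # otherwise Facebook's scraper follows og:url and sees the default tags.
    url = f"{BASE_URL}?{urlencode({'level': level})}"
    return "\n".join([
        f'<meta property="og:title" content="Level {level}" />',
        f'<meta property="og:url" content="{url}" />',
    ])

print(og_tags(6))
```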
If you're not sure what the Debug Tool is seeing, or don't trust it for some reason, you can make a manual request using curl on the command line to see what Facebook is detecting:
curl -A "facebookexternalhit/1.1" -i [URL GOES HERE]

Related

Googlebot vs "Google Plus +1 Share Button bot"?

Site Setup
I have a fully client-side, single-page web app that is dynamically updated and routed on the client side. I redirect any #! requests to a headless server that renders the request with JavaScript executed and returns the final HTML to the bot. The head of the site also contains:
<meta name="fragment" content="!">
Fetch as Google works
Using the Fetch as Google webmaster tool, in the Fetch Status page, I can see that the jQuery I used to update the og:title, og:image, and og:description was executed and the default values replaced. Everything looks good, and if I mouseover the URL, the screenshot is correct.
However, with the Google Plus button, no matter what values og:title, og:image, and og:description tags are updated to, the share pop-up always uses the default/initial values.
Attempted use
I call this each time the site content is updated, rerouted, and the og meta content updated.
gapi.plusone.render("plusone-div");
I was assuming that if this approach works for the Googlebot, it should also work for the +1 button. Is there a difference between the Googlebot and whatever is used by +1 to retrieve the site metadata?
edit:
Passing a URL containing the #! results in a 'site not found':
gapi.plusone.render("plusone-div", {"href": 'http://www.site.com/#!city/Paris'});
The Google crawler does not render the snippet when the +1 button is rendered, but rather when a user clicks the +1 button (or share button). What you should try to determine is what your server sends to the Google crawler during this user-initiated, asynchronous load.
You can emulate this by using the following cURL command:
curl -A "Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20110814 Firefox/6.0 Google (+https://developers.google.com/+/web/snippet/)" http://myurl.com/path/to/page
You can output that command to a file by adding -o testoutput.html to the command.
This will give you an idea of what the Google crawler sees when it encounters your page. The structured data testing tool can also give you hints.
What you'll likely see is that unless you do your snippet preparation in a static file or on the server side, you're not going to get the snippet you desire.
If you can provide real URLs to test, I can probably provide more specific feedback.
Google+ fetches pages using the _escaped_fragment_ query parameter, but without the equals sign.
So it would fetch http://www.site.com/?_escaped_fragment_ and NOT http://www.site.com/?_escaped_fragment_=
The Google Search crawler still uses the fragment with the equals sign; this behaviour is specific to the Google+ crawler.
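The rewrite both crawlers perform can be sketched as follows (URLs are illustrative; the no-equals variant reflects the behaviour reported in this answer, not documented behaviour):

```python
# Sketch of the AJAX-crawling rewrite: Google Search maps "#!state" to
# "?_escaped_fragment_=state"; per the answer above, the Google+ crawler
# was observed requesting the parameter with no "=" at all.
from urllib.parse import quote

def escaped_fragment(url: str, with_equals: bool = True) -> str:
    base, sep, frag = url.partition("#!")
    if not sep:
        return url  # no hashbang, nothing to rewrite
    suffix = "_escaped_fragment_"
    if with_equals:
        suffix += "=" + quote(frag, safe="")
    joiner = "&" if "?" in base else "?"
    return base + joiner + suffix

print(escaped_fragment("http://www.site.com/#!city/Paris"))
# → http://www.site.com/?_escaped_fragment_=city%2FParis
print(escaped_fragment("http://www.site.com/#!city/Paris", with_equals=False))
# → http://www.site.com/?_escaped_fragment_
```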

Why does Object debugger say my URL is a facebook URL and isn't "scrapable"

In trying to create an "object" page for my first facebook app, I've run into some difficulty. I followed Facebook's Open Graph Tutorial nearly exactly.
After creating an "object" html page with the appropriate <meta property="og:... tags I tried running the URL through the Debugger Tool as suggested in the tutorial but I'm given the following error:
"Facebook URLs aren't scrapable by this Debugger. Try your own."
This page is in the same directory on my company's linux box as the canvas page, and is certainly not a "Facebook URL". If it matters, I'm using an IP instead of a domain name: xx.x.x.xxx/app/obj.html
...
I continued the tutorial anyway, but ultimately it does not seem to want to post a new action/object (is this even right?). I did however manage to get something to work, as in the app timeline view I apparently actioned one of those objects a couple hours ago. I assume this happened when I was pasting curl POST commands into the terminal.
I'm pretty new to the whole open graph, and facebook APIs, etc., so I'm probably operating under false assumptions of some sort, and I've been all over trying different things, but this error seems pretty bizarre to me and I can't seem to resolve it.
UPDATE
I just took the object page and put it on my own personal shared hosting acct. The debugger worked (inexplicably) fine on it, but I couldn't go too far since it's a different domain than the one authorized by my app.
Make sure og:url inside your html page does not point to facebook.
Also, make sure to look at the Open Graph protocol page (to see that you formatted the og tags correctly).
Also, make sure the page is accessible to everyone, not just yourself.
Without knowing the URL it's hard to be sure, but most likely your URL is either including an og:url tag pointing to a facebook.com address, or returning an HTTP 301/302 redirect to Facebook instead.
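Both failure modes can be checked locally before reaching for the debugger. This is an assumed helper, not a Facebook tool: given a page URL and the HTML it serves, it flags an og:url that points at facebook.com, a missing tag, or a mismatch with the page URL.

```python
# Rough self-check for the two problems described above.
import re
from urllib.parse import urlparse

def check_og_url(page_url: str, html: str) -> str:
    m = re.search(r'<meta\s+property="og:url"\s+content="([^"]+)"', html)
    if not m:
        return "no og:url tag found"
    og_url = m.group(1)
    if urlparse(og_url).netloc.endswith("facebook.com"):
        return "og:url points at facebook.com -- the debugger will refuse it"
    if og_url != page_url:
        return f"og:url ({og_url}) differs from the page URL -- the scraper will follow it"
    return "ok"

html = '<meta property="og:url" content="http://example.com/app/obj.html" />'
print(check_og_url("http://example.com/app/obj.html", html))  # → ok
```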

Facebook like is broken, adds a trailing # to the final url

I'm running my site on Cargo Collective and trying to have likes per page. I cannot modify the code in the head tag, only within the body tag.
When I debug a page I get the following:
Response Code 206
Fetched URL http://www.iamneuron.com/Break-this
Canonical URL http://www.iamneuron.com/Break-this
URL for Likes http://www.iamneuron.com/#Break-this
Final URL http://www.iamneuron.com/#Break-this
I can't figure this out and I have been searching for a while now. Even if I explicitly specify the URL, rather than leaving the code to figure it out, Facebook still adjusts the URL to one that doesn't work, with the trailing #.
Originally I was trying to create the Like code via Facebook, but I have now switched to this, which works better with Cargo but still produces the same error:
http://randomcodescraps.tumblr.com/post/1363402555/js-dynamic-like-button-on-cargo-collective-projects
Anybody any ideas? Thanks in advance btw!
Your page doesn't have any OG tags that Facebook can use, and those that are in your page are commented out. Add proper OG tags, then use the link tool to check your tags and see if you still have the same problem.

URLs redirect to spyware site

We are developing an app that makes posts on behalf of our users to Facebook. Within those posts, we want to put links to external (non-Facebook) websites.
Looking at the links in the status bar of the browser (usually Chrome), the correct URL is displayed. However, Facebook seems to wrap the actually-clicked link into some extra bells-and-whistles. Usually, this works correctly.
Sometimes, however, this URL wrapping ends up sending the click to a URL like:
http: //spywaresite.info/0/go.php?sid=2
(space added to make it non-browsable!) which generates Chrome's severe warning message.
This happens very occasionally in Chrome, but much more often in the iOS browser on the iPhone.
Does anyone have any pointers as to how to deal with this?
EDIT
For example, the URL we put in the link is
http://www.example.com/some/full/path/somewhere
but the URL that actually gets clicked is:
http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=374274329267054&v=1&size=z&cksum=fc1c17ed464a92bc53caae79e5413481&src=http%3A%2F%2Fwww.example.com%2Fsome%2Ffull%2Fpath%2Fsomewhere
There seems to be some JavaScript goodness in the page that unscrambles that and usually redirects correctly.
EDIT2
The links above are put on the image and the blue text to the right of the image in the screenshot below.
Mousing over the links (or the image) in the browser shows the correct link. Right-clicking on the link and selecting "Copy Link Address" gets the fbcdn.net link above (or one like it). Actually clicking on the link seems to set off some JavaScript processing of the fbcdn.net link into the right one... but sometimes that processing fails.
I'm not 100% sure what you're asking here, but I'll tell you what I know: are you referring to this screen on Facebook?
(or rather, the variation of that screen which doesn't allow clickthrough?)
If you manually send a user to facebook.com/l.php?u=something they'll always see that message - it's a measure to prevent an open redirector.
If your users are submitting such links, including the l.php link, you'll need to extract the destination URL (in the 'u' parameter).
If you're seeing the l.php URLs come back from the API, this is probably a bug.
If links clicked on facebook.com end up on that screen, it's because Facebook has detected the link as suspicious (e.g. for URL-redirector sites the screen will allow clickthrough but warn the user first) or malicious/spammy (it will not allow clickthrough).
In your app you won't be able to post links to the latter (an error will come back saying the URL is blocked), and the former may throw a captcha sometimes (if you're using the Feed dialog this should be transparent to the app code; the user will enter the captcha and the dialog will return as normal).
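Extracting the destination from an l.php link is a one-liner with a URL parser. A hypothetical helper (the wrapped URL below is made up for illustration):

```python
# Recover the real destination from a facebook.com/l.php redirector link
# by reading its 'u' query parameter; other links pass through untouched.
from urllib.parse import urlparse, parse_qs

def unwrap_l_php(link: str) -> str:
    parsed = urlparse(link)
    if parsed.netloc.endswith("facebook.com") and parsed.path == "/l.php":
        u = parse_qs(parsed.query).get("u")  # parse_qs also percent-decodes
        if u:
            return u[0]
    return link  # not a redirector link; leave it alone

wrapped = "https://www.facebook.com/l.php?u=http%3A%2F%2Fwww.example.com%2Fpage&h=abc"
print(unwrap_l_php(wrapped))  # → http://www.example.com/page
```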
If this isn't exactly what you were asking about, please clarify and I'll update my answer.
Rather than add to the question, I thought I'd put more details here.
It looks like the Facebook mention in the original title was mis-directed, so I've removed it.
We still haven't got to the bottom of the issue.
However, we used both Wireshark and Fiddler to look at the HTTP traffic between the Chrome browser (on the PC) and Facebook. Both showed that Facebook was returning a refresh to the correct URL.
Here's what Wireshark showed:
What we saw on Fiddler was that our server is issuing a redirect to the spywaresite.info site:
We are working with our ISP to figure out what is happening here.

Caching OG data in the debugger, forcing og:type website, and authorization popup for "See exactly what our scraper sees for your URL" woes

Having issues getting Timeline to work. It is a two-part problem.
First, there is an issue of caching parts of the OG meta tags. When the debugger goes to my URL, I know it is hitting it correctly, because the og:url it spits back is correct, which means it has been processed on my end (e.g. I send it to og.php?og=read&chapter=799, and it spits back the right book_id for the og:url, meaning my script processed it). But all the other information seems to be cached.
I originally and erroneously had an fb:app_id and og:site_url for an object, so I removed those. The output still shows them as having an existing site_url, which is throwing an error: having an fb:app_id forces an og:type of 'website', whereas I have set the type (correctly) to my namespace and object. When I try to POST the action, I get an OAuthException error back, saying that an og:type of 'website' isn't valid for an object. Once again, that should be fixed, but it keeps caching the old OG data. I have tried adding ?fbrefresh=1, but that did nothing.
Another issue, possibly related: even though I know it got there, and my script processed the request, Facebook doesn't report that. When I click on "See exactly what our scraper sees for your URL" it shows the authentication URL (see below), as though it never got there and the popup was initiated, which isn't even how the code for og.php works! My guess is they got that from the base domain name itself (example.com) before trying the full request with example.com/og.php.
window.parent.location='https://www.facebook.com/dialog/oauth?client_id=164431733642252&redirect_uri=http%3A%2F%2Fapps.facebook.com%2Fexample%2F%3Fpage%3D&state=064bd26ff582a9ec7c96729e6b69bbd2&canvas=1&fbconnect=0&scope=email%2Cpublish_stream%2Cpublish_actions%2C';
I figured it out. I thought the og:url was the URL you wanted people to use to get to the correct page in your app, like an action link. It is, but it isn't. I now have it match the OBJECT_URL you send to timeline.
I had a different URL (an action link into the app), which, when redirected, can't be reached by the crawler because it sits behind the application's authorization wall. This is what caused the og:type of 'website' and the data to appear cached.
To fix it, the OBJECT_URL I post to Timeline and the og:url in the meta tag are now the same. You can still tell whether a request comes from the crawler or from an action link by looking for the query string ?fb_action_ids=SOME_ID, which is sent from a link on the Timeline. If the request contains that, I forward it to the required application page from there.
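That routing trick can be sketched as below (the fb_action_ids parameter name comes from this answer; the URLs and function are assumptions for illustration): the scraper fetches the bare object URL and gets the OG tags, while clicks from the Timeline arrive with ?fb_action_ids=..., and only those get forwarded into the app.

```python
# Distinguish a scraper hit on the object URL from a user click
# arriving via a Timeline story.
from urllib.parse import urlparse, parse_qs

def is_timeline_click(url: str) -> bool:
    return "fb_action_ids" in parse_qs(urlparse(url).query)

print(is_timeline_click("http://example.com/og.php?og=read&chapter=799"))       # → False
print(is_timeline_click("http://example.com/og.php?og=read&fb_action_ids=42"))  # → True
```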
I'm having similar problems to you. It kept complaining about og:site_url being set, even though I never set it. It appears the error messages it sends are actually inaccurate: the problem is not that og:site_url is set, but that the og:url is different from the object URL. Sometimes a wrong error message is worse than no error message!
A further question is why an object URL has to correspond to a live page that a user will see. An object is a logical unit, but it doesn't necessarily correspond to a single user-visible page. Your redirection trick might work, but it is not the proper way to do it. When I post an action related to an object, the object URL should be used to draw the object's information, but I should be able to send the user somewhere else. If this design was intended, I think it is a mistake.