Facebook Insights not working with domain that has a homepage redirect? - facebook-insights

I have a domain that 301-redirects from http://www.domainname.com to http://www.domainname.com/subfolder/
The Facebook meta property tag is present on every page of the subfolder, and it was recognized by Facebook when I added it in my Insights dashboard. However, no Insights data is showing for anything: not the website, the Like button, or the comment box.
Any advice? Could it be related to the fact that the domain homepage redirects itself to a subfolder?

If Facebook's Debug Tool can't read the tags, you can't claim the domain. Make sure you exempt the Facebook crawler from your redirect; otherwise it won't be able to access your meta tags.
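For instance, if the homepage redirect is issued from PHP rather than from server config, a minimal sketch of the exemption might look like the following (the subfolder URL is a placeholder; the key part is the facebookexternalhit user-agent check):

    <?php
    // Sketch: skip the homepage redirect when Facebook's crawler visits,
    // so it can read the Open Graph / Insights meta tags on this page.
    // The subfolder URL below is a placeholder for your own setup.
    $userAgent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $isFacebookCrawler = (stripos($userAgent, 'facebookexternalhit') !== false);

    if (!$isFacebookCrawler) {
        // Regular visitors still get the usual 301 to the subfolder.
        header('Location: http://www.domainname.com/subfolder/', true, 301);
        exit;
    }
    // The crawler falls through and is served this page (with its meta tags) directly.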

Related

Facebook Share on my website keeps redirecting to the website home?

I have a page on my website that I want to share to Facebook:
https://pvp5.com/item/12057/Sakura-Transient-House
But when Facebook fetches the URL, it always fetches the home URL instead:
https://pvp5.com/home
I have a screenshot to prove it. I also did some basic troubleshooting, turning off my caching mechanism and website page rules. Why is this happening?
Why is this happening?
Because you explicitly said so, by specifying the Canonical URL as https://pvp5.com/home for all pages.
The canonical URL should be set to the individual URL of that article/piece of content you want to share.
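For illustration, a minimal sketch in PHP of emitting a per-item canonical and og:url (the item id, title, and URL pattern are assumptions based on the example URL above, not your actual code):

    <?php
    // Sketch: emit a per-page canonical / og:url instead of a site-wide one.
    // $itemId and $itemTitle are hypothetical values from your own routing/data layer.
    $itemId    = 12057;
    $itemTitle = 'Sakura-Transient-House';
    $pageUrl   = 'https://pvp5.com/item/' . $itemId . '/' . rawurlencode($itemTitle);
    ?>
    <link rel="canonical" href="<?php echo htmlspecialchars($pageUrl); ?>">
    <meta property="og:url" content="<?php echo htmlspecialchars($pageUrl); ?>">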

Open Graph scraping base URL instead of the URL it's given

The Facebook OpenGraph debug tool is scraping the wrong page.
If I give it a full URL (pointing to an individual page on my site) that I want it to scrape, instead of scraping that page and finding its meta tags, it scrapes my site's main page and returns those meta tags (which are obviously wrong in this context).
The weird thing is, it will even find and scrape my site's main page even if it's not located at the root of my domain. For example:
I want it to scrape http://mydomain.com/myhomepage/specific_page.html
Instead, it scrapes http://mydomain.com/myhomepage/
This implies to me that the error must be a setting somewhere, either on my site or in my Facebook App settings. Would the App settings do that? Redirect to whatever URL is set if a requested URL is a descendant of it?
The URL I'm requesting is not doing a 302 or anything; I can even click the link from the FB debug tool and it will take me to the appropriate page.
A few notes:
specific_page.html is not an actual file; it is routed through index.php using mod_rewrite in Apache's .htaccess. I tried being specific with http://mydomain.com/myhomepage/index.php/specific_page.html and it did not work then either.
Another SO question led me to believe that the user agent might be getting redirected if it doesn't allow cookies (as the Facebook web crawler does not), so I opened a fresh browser, disabled cookies, tried again, and I still reached the appropriate page.
As mentioned in the comments above, in your case this was due to an og:url meta tag, which redirected Facebook's crawler to that URL.
In general, cases like this are usually caused by the og:url tag, an HTTP redirect, or a canonical meta tag pointing at the 'other' / 'wrong' URL; Facebook's crawler follows those redirects looking for the final URL.
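If you want to check what the crawler is actually being served, one rough way (a sketch only; the real crawler also sends other headers) is to fetch the page yourself with the facebookexternalhit user agent and inspect any Location header and the og:url / canonical tags in the response:

    <?php
    // Sketch: fetch a page roughly the way Facebook's crawler would,
    // to check for redirects or og:url / canonical tags pointing elsewhere.
    $url = 'http://mydomain.com/myhomepage/specific_page.html'; // page you expect to be scraped

    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HEADER         => true,   // include response headers so redirects are visible
        CURLOPT_FOLLOWLOCATION => false,  // do NOT follow redirects; we want to see them
        CURLOPT_USERAGENT      => 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)',
    ));
    $response = curl_exec($ch);
    curl_close($ch);

    echo $response; // inspect Location headers and <meta property="og:url" ...> in the body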

Facebook open graph url redirect issue

I am trying to use the Open Graph API to publish an action. The URL that I provide in the og:url meta property can be accessed by authenticated users only. Facebook is trying to scrape the URL and is ending up on a sign_in page due to a 302 redirect.
Do I have to construct a page just for Facebook scraping with meta tags in it? Isn't this URL linked to the content published on Facebook?
If you want your articles to be sharable or do any SEO, you shouldn't be using 302 redirects. Bots will only see the content from the destination of the redirect.
You want to have just one URL for each piece of content. If an unauthenticated user, the Facebook scraper, or Googlebot visits that URL, you want them to see all your meta tags and some teaser content.
If the user isn't authenticated, use a server-side scripting language to display a register/sign_in dialog instead of the premium content. If the user is authenticated, then you show them the full content.
This is better even from a UX perspective: Say I follow the link from Facebook and register. When I sign in, how are you going to get me back to the content I wanted to see in the first place?
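A minimal sketch of that approach in PHP, assuming a session-based login check and placeholder article data (both are illustrations, not your actual code):

    <?php
    // Sketch: one URL per article. Unauthenticated visitors and crawlers get the
    // meta tags plus a teaser; authenticated users get the full content.
    // The session check and the $article values are placeholders for your own app.
    session_start();
    $isLoggedIn = isset($_SESSION['user_id']); // placeholder auth check

    $article = array(
        'title'  => 'Example article title',
        'url'    => 'https://example.com/articles/123',
        'teaser' => 'The first paragraph or two, visible to everyone...',
        'body'   => 'The full premium content...',
    );
    ?>
    <html>
    <head>
        <title><?php echo htmlspecialchars($article['title']); ?></title>
        <meta property="og:title" content="<?php echo htmlspecialchars($article['title']); ?>">
        <meta property="og:url" content="<?php echo htmlspecialchars($article['url']); ?>">
    </head>
    <body>
    <?php if ($isLoggedIn): ?>
        <article><?php echo htmlspecialchars($article['body']); ?></article>
    <?php else: ?>
        <article><?php echo htmlspecialchars($article['teaser']); ?></article>
        <p>Register or sign in to read the full article.</p>
    <?php endif; ?>
    </body>
    </html>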

Make Facebook scraper not redirect?

When using the Facebook dev tool to create a Like button for my website, the button likes the final URL (visitors to my site are automatically redirected to the most recent post). I don't want to change the redirect.
What do I have to do to get the Facebook scraper to ignore the redirect?
You don't; you need to serve content with the Facebook meta tags to the scraper, and not serve it a redirect.
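A minimal sketch, assuming the redirect to the most recent post happens in PHP and that the URL and og: values below are placeholders for your own data: regular visitors keep the redirect, while requests whose user agent contains facebookexternalhit get a small page with the meta tags instead.

    <?php
    // Sketch: humans keep getting redirected to the most recent post,
    // while the Facebook scraper is served a small page containing the meta tags.
    // $latestPostUrl and the og: values are placeholders.
    $latestPostUrl = 'http://example.com/posts/latest-post';
    $userAgent     = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

    if (stripos($userAgent, 'facebookexternalhit') === false) {
        header('Location: ' . $latestPostUrl, true, 302);
        exit;
    }
    ?>
    <html>
    <head>
        <meta property="og:title" content="My site">
        <meta property="og:url" content="http://example.com/">
        <meta property="og:description" content="Description shown when the homepage is liked or shared.">
    </head>
    <body></body>
    </html>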

How to force facebookexternalhit to reaccess?

I'm playing with Facebook links by posting them. The first time, an access from a user agent containing "facebookexternalhit" will visit your site and look for some meta tags.
So far so good, that works. But if I try to repost the link, no further calls happen. How can I trigger Facebook to read the page again?
Are there any API calls I can use to trigger an update?
In the documentation for the Like Button it says:
When does Facebook scrape my page?
Facebook needs to scrape your page to know how to display it around the site.
Facebook scrapes your page every 24 hours to ensure the properties are up to date. The page is also scraped when an admin for the Open Graph page clicks the Like button and when the URL is entered into the Facebook URL Linter. Facebook observes cache headers on your URLs - it will look at "Expires" and "Cache-Control" in order of preference. However, even if you specify a longer time, Facebook will scrape your page every 24 hours.
The user agent of the scraper is: "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"
The Linter is now known as the Facebook Debugger; when you use it on a URL, it will clear Facebook's cache for that URL and then cache the new result.
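If you want to trigger that programmatically rather than through the Debugger, the Graph API also supports forcing a re-scrape by POSTing the URL with scrape=true (an access token is required); a rough sketch in PHP using cURL, with placeholder values:

    <?php
    // Sketch: ask Facebook to re-scrape a URL via the Graph API.
    // Requires an access token (an app access token works for this call).
    $pageUrl     = 'http://example.com/some-page';   // placeholder
    $accessToken = 'YOUR_ACCESS_TOKEN';               // placeholder

    $ch = curl_init('https://graph.facebook.com/');
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'id'           => $pageUrl,
            'scrape'       => 'true',
            'access_token' => $accessToken,
        )),
    ));
    $result = curl_exec($ch);
    curl_close($ch);

    echo $result; // JSON describing the freshly scraped Open Graph object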
One trick you can use is to simply append a "random" GET parameter to the URL that you're sharing. It won't have any effect on the page's content, but will cause Facebook's scraper bot to reaccess your site.
Original URLs:
http://example.com
http://example.com?param=1
New URLs that will force a "reaccess":
http://example.com?cache_buster=784932789532
http://example.com?param=1&cache_buster=784932789532