Facebook URL Linter gives "Error parsing input URL, no data was scraped" [closed] - facebook

I don't usually post questions on forums, but this time I have no other solution...
The Facebook URL Linter gives me "Error parsing input URL, no data was scraped" for this URL: http://phrasesbox.com/test.html
But this URL is OK: http://jeudephrases.com/test.html
My problem is that these two domains are hosted on the same server, and the "test.html" file is the same one (both domains point to the same folder).
This is a very big problem for me, because no preview (title, description, image) is shown when content is shared on Facebook.
Everything was working fine until a month ago. It's as if my phrasesbox domain were on a blacklist, but when I share content there is no spam notice.
The same problem occurs with two other domains that also point to the same folder.
Any ideas?
/* EDIT */
I have already lost 70% of my visits...
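One way to narrow this down is to request the page exactly as Facebook's scraper does and compare the two domains. The sketch below is only a debugging aid, assuming LWP::UserAgent is available and that the server might be answering the facebookexternalhit user agent differently per domain; it just prints the status line and body for both URLs.

use strict;
use warnings;
use LWP::UserAgent;

# Fetch each page with the User-Agent string Facebook's scraper sends,
# so the responses of the working and failing domains can be compared.
my $ua = LWP::UserAgent->new(
    agent => 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)',
);

for my $url ( 'http://phrasesbox.com/test.html', 'http://jeudephrases.com/test.html' ) {
    my $res = $ua->get($url);
    print "$url -> ", $res->status_line, "\n";
    print $res->decoded_content, "\n\n";
}

A redirect, an error page, or markup missing the Open Graph tags on the failing domain would explain why one URL scrapes and the other does not.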

Related

PDF embed api UNAUTHORIZED_CLIENT [closed]

We are using the Adobe PDF Embed API to display PDF files on our website.
I created the API key on the Adobe website. It displays the PDF file content at first, but then it stops showing it.
I'm getting the error below during JWT authentication.
I have created the right API key and registered the right domain, but it is still not working.
{"reason":"UNAUTHORIZED_CLIENT","message":"Client is not authorized for this domain"}

My entire website is not being displayed on github [closed]

I'm not sure why my entire website is not displaying. I've tried everything that was recommended, but still nothing works. My banner image and text are not showing up now that I have published the website on GitHub.
What my website should look like:
file:///Users/tamannahoque/Documents/Skincare/index.html#products
(if that works)
The code:
https://github.com/TamannaHoque/the-ordinary.github.io
The website when published on GitHub:
https://tamannahoque.github.io/the-ordinary.github.io/
The basic website is being displayed on GitHub, but your images and the other items the website refers to use links that don't point to the locations where GitHub Pages publishes them, so all you are getting is the plain HTML.
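As a rough illustration of that (assuming the broken references are root-relative paths such as src="/images/banner.jpg", which is an assumption about the repository, not something taken from it), a small script can list the attributes in index.html that ignore the /the-ordinary.github.io/ prefix of a GitHub project page:

use strict;
use warnings;

# Sketch: flag root-relative src/href values in index.html. On a project page
# they resolve under https://tamannahoque.github.io/ instead of
# https://tamannahoque.github.io/the-ordinary.github.io/, so they 404.
open my $fh, '<', 'index.html' or die "Cannot open index.html: $!";
while ( my $line = <$fh> ) {
    while ( $line =~ /(?:src|href)\s*=\s*"(\/[^"]+)"/g ) {
        print "line $.: $1 skips the /the-ordinary.github.io/ prefix\n";
    }
}
close $fh;

Relative paths (images/banner.jpg) or paths that include the project name would resolve correctly.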

Facebook App ID - Multiple Site URL's [closed]

I have a product consisting of hundreds of separate websites - each of which has a unique URL. Unfortunately it is not feasible for me to create a separate Facebook App ID for each website. Is it possible to configure one Facebook App ID that can be shared/used across all websites?
I know it's possible with subdomains but haven't had any luck finding much information about my situation.
Thanks!
There isn't. The only way is to redirect to a login page that is the same for all of the sites.
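For illustration, a sketch of that shared entry point is below; the app ID and the login host are placeholders, not values from a real configuration. Every site links to the same OAuth dialog URL, so only the shared login host has to match the app's configured Site URL:

use strict;
use warnings;
use URI;

# Build the Facebook OAuth dialog URL. Each of the separate websites links
# here, and the shared redirect_uri is what the single app is configured for.
my $dialog = URI->new('https://www.facebook.com/dialog/oauth');
$dialog->query_form(
    client_id    => 'YOUR_APP_ID',                               # placeholder
    redirect_uri => 'https://login.example.com/facebook/callback', # placeholder shared login host
    state        => 'per-request-csrf-token',
);
print $dialog->as_string, "\n";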

Need help scraping a website in perl [closed]

I am new to the world of Perl, and right now I am trying to scrape a web page. I have done some scraping before using WWW::Mechanize. The pages I scraped before were fairly simple, so I took the page source and extracted the data I needed from it. But now I have a different website that seems to contain frames: http://www.usgbc-illinois.org/membership/directory/
I am not asking for any code, but some ideas or modules I could use to extract data from the website above.
Thanks
You may find some useful information on this Web Scraping site, and you can also take a look at the Web::Scraper module.
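For example, a rough Web::Scraper sketch could fetch the frame's own URL first and then scrape that page; the frame-detection regex and the selector below are placeholders, since the directory page's real markup would need to be inspected:

use strict;
use warnings;
use URI;
use WWW::Mechanize;
use Web::Scraper;

# The directory page uses frames, so grab the first frame's own URL first.
my $mech = WWW::Mechanize->new;
$mech->get('http://www.usgbc-illinois.org/membership/directory/');
my ($frame_src) = $mech->content =~ /<i?frame[^>]+src\s*=\s*"([^"]+)"/i;
die "No frame found\n" unless $frame_src;

# Then scrape the frame content itself. The 'a' selector is a placeholder;
# pick the right one after inspecting the real markup.
my $directory = scraper {
    process 'a', 'links[]' => '@href';
};
my $result = $directory->scrape( URI->new_abs( $frame_src, $mech->uri ) );
print "$_\n" for @{ $result->{links} || [] };

WWW::Mechanize is enough to reach the frame; Web::Scraper then does the extraction with CSS selectors or XPath.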

Dynamically generated page URLs don't work this morning [closed]

This morning, dynamically generated links to Facebook pages from our app stopped working. A link to a page like http://www.facebook.com/pages/125567350861413 used to work just fine; now it requires the page name: http://www.facebook.com/pages/Daniels-Real-Estate/125567350861413
Why would this be changed? Was there a problem with the old page link format?
We tweaked our code to take the page name into account and made it work. But if a user changes the name of a page, the link to that page will break until we refresh the list of pages in our DB. We'll write a cron job that refreshes the page names for all users of our app multiple times a day, but we'd prefer not to have to do that.
Has anyone else run into this issue? What was your workaround (other than the above)?
While this is probably an off-topic question, you can get the page URL by querying the Graph API at http://graph.facebook.com/125567350861413 and looking at the link property. As to why Facebook changed this, you would have to ask Facebook in their developer group or log a bug.
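A minimal sketch of that lookup (assuming a public page object could still be read without an access token, as it generally could at the time) might look like this:

use strict;
use warnings;
use LWP::UserAgent;
use JSON;

# Look up the page object and read its canonical link, which already includes
# the page name, so a stored numeric ID keeps working even if the name changes.
my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://graph.facebook.com/125567350861413');
die 'Graph API request failed: ' . $res->status_line . "\n"
    unless $res->is_success;

my $page = decode_json( $res->decoded_content );
print "$page->{name}: $page->{link}\n";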