How to determine if a given tweet is a video? - twitter-search

Let’s say I have two tweets:
https://twitter.com/BuzzFeed/status/917922958307295233
https://twitter.com/BuzzFeed/status/876083996026916865
I want to be able to quickly determine if the tweet contains a video. What is the best way to do that?
I’ve tried using the oEmbed API, but it doesn’t give me the information I need.
https://dev.twitter.com/web/embedded-timelines/oembed

Posting a temporary answer
I have access to the embedded HTML of the tweet. Thus, the first tweet example will have the embedded HTML of:
<blockquote class=\"twitter-tweet\" data-lang=\"en\"><a href=\"https://twitter.com/BuzzFeed/status/917922958307295233?ref_src=twsrc%5Etfw\" /></blockquote>
and the second tweet example will have the embedded HTML of:
<blockquote class=\"twitter-video\" data-lang=\"en\"><a href=\"https://twitter.com/BuzzFeed/status/876083996026916865?ref_src=twsrc%5Etfw\" /></blockquote>
I can simply look at the blockquote class name to determine whether the tweet contains a video or is a simple tweet.
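Here is a minimal sketch of that check in Python, assuming the embed HTML is fetched from Twitter's oEmbed endpoint at publish.twitter.com/oembed and that its JSON response carries the markup in an "html" field:

import json
import urllib.parse
import urllib.request

# Sketch only: fetch the oEmbed payload for a tweet URL and inspect the
# blockquote class. "twitter-video" marks video tweets, "twitter-tweet"
# marks plain ones.
def tweet_has_video(tweet_url):
    oembed_url = ("https://publish.twitter.com/oembed?url="
                  + urllib.parse.quote(tweet_url, safe=""))
    with urllib.request.urlopen(oembed_url) as resp:
        payload = json.load(resp)
    return 'class="twitter-video"' in payload.get("html", "")

print(tweet_has_video("https://twitter.com/BuzzFeed/status/876083996026916865"))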
This solution, however, will not work if you do not have access to the embedded HTML of the tweet, thus leaving this question still open for answers.

You can use the filter:videos advanced search operator in Twitter search to return only tweets containing videos, for example from:BuzzFeed filter:videos. This filter can be combined with other operators and/or Boolean arguments to narrow the search down to only the tweets you wish to verify.
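As a rough sketch, the same operator can be passed as the query string of the standard search API; the endpoint version and bearer token below are assumptions you would adapt to your own setup:

import requests

# Assumed setup: v1.1 standard search endpoint and an app-only bearer token.
BEARER_TOKEN = "..."

resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
    params={"q": "from:BuzzFeed filter:videos", "result_type": "recent"},
)
for tweet in resp.json().get("statuses", []):
    print(tweet["id_str"], tweet["text"])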

Related

Facebook API Multiple Search Type

Hi and thanks in advance.
I am currently working with the Facebook API, and I want to incorporate the search feature in my system, but the only problem is that I would like to do a search with one or more keywords for one or more types of object.
In other words, I would like to be able to search not only for posts but also for users and probably even events.
I have tried setting the type parameter like this: type='post,user', but it doesn't return anything; the result is empty.
Is there a way to do this? The Facebook API manual doesn't say much about search.

Facebook Graph API SEO Comments and Profanity Filter

I'm trying to integrate the Facebook comments left on our site in a way in which the content can be crawled by search engines and also for people (although I highly doubt there will be many) who don't have Javascript enabled on their browser.
Currently our Facebook comments are displayed via the Facebook comments social plugin (using the <fb:comments href="MY_URL" num_posts="50" width="665"></fb:comments> tag). This ends up rendering an iFrame (which is mostly ignored by search engine crawlers), so the plan is to render this information and format it with basic HTML. To do this, the comments are pulled using the Graph API - this is then only displayed to crawlers and people with JavaScript disabled.
This all works nicely using the Graph API call (https://graph.facebook.com/comments/?ids=MY_URL), parsing the JSON result and displaying it on the page. The problem is that the <fb:comments> approach filters our results based on a blacklist we have set up on one of our Facebook Apps. The AppId with the relevant blacklist is stored on the page using metadata (<meta property="fb:app_id" content="APP_ID"/>) which the <fb:comments> control obviously must somehow use to filter the comments.
The problem is the Graph API method does not filter any results, as I guess no blacklist (or App ID containing a blacklist) is specified. Does anyone know how to specify a Facebook App ID in the API call URL, or of another way to avoid fetching back comments that violate the blacklist?
On a side note, I know the debate about filtering content in comments rages on, but it is a management decision to implement the blacklist, and one that I have no influence in changing - just in case anyone felt the need to explain the reasons why content filtering is or isn't a good idea!
Any thoughts on a solution?
Unfortunately there's no way to access a filtered list of comments using the API - it might be a reasonable request to have this in the API, so you should file a wishlist item in Facebook's bug tracker.
Otherwise, the only solution I can think of is to implement your own filter on your side when retrieving and displaying the comments from the API.
According to the Comments plugin documentation the filter on Facebook's side is implemented as a simple substring match, so it should be trivial to implement.
A fairly simple regular expression match should be able to check each comment against a relatively long list quickly.
(Unfortunately, the tradeoff here is that implementing a filter is easy, but you'd also need to write an interface so that whoever's updating the list of disallowed words can maintain the list for both the Facebook plugin, and your own filtering.)
Quote from docs:
The comment is checked via substring matching. This means if you blacklist the
word 'at', if the comment contains the sequence 'a' 't' anywhere it will be
marked with limited visibility; e.g. if the comment contained the words 'bat',
'hat', 'attend', etc it would be caught.
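Here is a minimal sketch of that kind of filter in Python, assuming BLACKLIST mirrors the word list configured for the Facebook App and that comments is the list already parsed from the Graph API JSON response:

# Plain substring matching, as the docs quoted above describe: blacklisting
# "at" would also catch "bat", "hat", "attend", etc.
BLACKLIST = ["badword", "anotherbadword"]

def is_blacklisted(comment_text):
    text = comment_text.lower()
    return any(word in text for word in BLACKLIST)

comments = [{"message": "Nice article!"}, {"message": "this contains a badword"}]
visible_comments = [c for c in comments if not is_blacklisted(c.get("message", ""))]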
Pretty sure there is no current way of doing this from the Graph API; the only thing I can suggest is taking the blacklist and building your own filter.

Facebook Graph API: Getting the total number of posts

I've been using the Facebook Graph API to display user posts. When I get the initial "page" of posts, the resulting data object has a paging property object with a previous and next URL property. I was hoping to generate navigation links based on this available paging information. However, sometimes these URLs point to an empty set of data, so I obviously don't want to navigate the user to an empty page.
Is there a way to find the total count of objects in a collection so that better navigation can be derived? Is there any way to get smarter paging data?
Update:
Sorry if my post isn't clear. To illustrate, look at the data at https://graph.facebook.com/7901103/posts and its paging property URLs. Then follow those URLs to see the issue: empty pages of data.
Since it pages the data on a date-time basis, you can't know whether there is more data until you actually send the request. But you can preload the data from the previous URL to determine whether it is suitable to display a previous link on your web page.
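A rough sketch of that preloading idea, assuming the paging URLs already carry the access token:

import json
import urllib.request

# Fetch a paging URL ahead of time and only render the link if the page
# actually contains data.
def page_has_data(paging_url):
    with urllib.request.urlopen(paging_url) as resp:
        payload = json.load(resp)
    return len(payload.get("data", [])) > 0

# Usage: `posts` is the JSON already fetched from /{user-id}/posts.
# show_previous = "previous" in posts.get("paging", {}) and page_has_data(posts["paging"]["previous"])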
Why be dependent on Facebook?
Why don't you preload all data for a user and save it into a database? Then you fetch the posts from the database and show them to the user. This way you have full control over how many posts there are and how to manage next and previous links.
I was going to try to post this as a comment to your question, but I can't seem to do so...
I know that the Graph API returns JSON, and while I've never come across a way to have the total number of posts returned, depending on what technology you are using to process the response, you might be able to capture the size of the JSON array containing the posts.
For example, if I were using a Java application I could use the libraries available at json.org (or Google GSON, or XStream with the JSON driver) to populate an object and then simply use the JSONArray.length() method to check the number of posts returned.
see:
http://www.json.org/javadoc/org/json/JSONArray.html
It might seem like a bit of a simplistic solution, but it might be the type of workaround you require if you can't find a way to have Facebook return that data.
Can you specify what technology your application is based in?

Converting "tweets" to HTML

I'm looking for a solution to "linkify" (Twitter's term, not mine) tweets - to ensure that #names are correctly linked, and that HTML links are ready for use too, prior to displaying the tweet in a UIWebView.
I can see a couple of potential routes
An NSScanner-based solution where I look for a # and then the next " " and link everything in between. Do the same for http://
Use Twitter Anywhere linkifier in a UIWebView, which just feels wrong
Some clever regex
So before I reinvent any wheels, has anyone got any advice, done this, know of a prewritten class, or anything else I've forgotten?
Check out this page from the twitter documentation:
http://dev.twitter.com/pages/tweet_entities
Are you using the API? I believe there's now an entities payload on each tweet that identifies where things like URLs, hashtags and usernames are within the text of a tweet.
I'd verify that's there myself, but I'm on a locked down corporate network at the moment.
Edit:
Here's a link to their announcement of the functionality - http://groups.google.com/group/twitter-api-announce/browse_thread/thread/9b869a9fe4d4252e?hl=en#
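To illustrate what that entities payload gives you (a language-agnostic sketch in Python, assuming the tweet JSON was requested with include_entities=true so the indices of URLs, hashtags and mentions are present), you can rebuild the tweet text with anchors:

def linkify(tweet):
    # Rebuild the tweet text, replacing each entity span with an anchor tag.
    text = tweet["text"]
    spans = []
    for url in tweet["entities"].get("urls", []):
        start, end = url["indices"]
        spans.append((start, end, f'<a href="{url["url"]}">{url["url"]}</a>'))
    for tag in tweet["entities"].get("hashtags", []):
        start, end = tag["indices"]
        link = f'https://twitter.com/search?q=%23{tag["text"]}'
        spans.append((start, end, f'<a href="{link}">#{tag["text"]}</a>'))
    for user in tweet["entities"].get("user_mentions", []):
        start, end = user["indices"]
        link = f'https://twitter.com/{user["screen_name"]}'
        spans.append((start, end, f'<a href="{link}">@{user["screen_name"]}</a>'))
    # Work from the end of the string so earlier indices stay valid.
    for start, end, markup in sorted(spans, reverse=True):
        text = text[:start] + markup + text[end:]
    return text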

How does facebook's Share a link feature work?

I'm trying to implement a feature like that where a user inputs a url and when displaying that url I want to have a custom display (an embed object if it's a video from youtube, a thumbnail if it's an image link, title and excerpt of body if it's a normal link).
How can such a feature be realized?
There is a new idea called oEmbed that a few sites support (Flickr, Vimeo and a few others) that addresses this problem; see the oEmbed site.
Otherwise, just check the site against a list of ones you pick and then pull out the relevant bits to construct an embed link.
I liked the idea of oEmbed a lot, but unfortunately it doesn't have that much adoption yet.
oohEmbed tries to solve this issue by building oEmbed for many websites.
For the feature to work, it needs server interaction; I believe the following scenario is how it works.
Assume that we have the site humanzz.com and that it provides such feature
A user enters a url on the humanzz.com's webpage and presses a button like facebooks' preview button
An AJAX call is made to a dedicated page on humanzz.com
humanzz.com then calls the remote website and gets its data
The AJAX call now returns the page's data (oEmbed JSON object)
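As a rough sketch of that dedicated page (my assumptions: a Flask app on humanzz.com and a single hard-coded oEmbed provider, rather than real endpoint discovery), it could look like this:

import json
import urllib.parse
import urllib.request

from flask import Flask, jsonify, request

app = Flask(__name__)

# The page calls this endpoint via AJAX; the server fetches the remote
# site's oEmbed data and relays the JSON back to the browser.
@app.route("/preview")
def preview():
    target = request.args.get("url", "")
    oembed = ("https://vimeo.com/api/oembed.json?url="
              + urllib.parse.quote(target, safe=""))
    with urllib.request.urlopen(oembed) as resp:
        return jsonify(json.load(resp))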
This involves a lot of server overhead.
I really wanted to do it using JavaScript, as the server's role is only to bypass the "Same Origin Policy" restrictions.
oohEmbed allows bypassing the server step by specifying a callback parameter, so that the JSON object returned is passed to a callback function on your page.
An example illustrating this is as follows
Add a script tag dynamically to your page
<script type="text/javascript" src="http://oohembed.com/oohembed/?url=http%3A//www.amazon.com/Myths-Innovation-Scott-Berkun/dp/0596527055/&callback=myCallBack"></script>
This would result in executing myCallBack(oEmbedJSONObject), which is great.
The problem with that solution is you still have to have a fallback for websites that don't have oEmbed representations.
For the embedded things, I have been using auto_html (https://github.com/dejan/auto_html) with great success (Vimeo, YouTube, images) and even added SoundCloud myself. But I am still looking for a Facebook-like "thumbnail" generation with an image and text.
I guess you have to construct it by yourself by manually parsing the kind of URL you get.
If it is an image URL, you just have to rescale it, and in case the user clicks on it, handle that by opening the original one somehow.
If it is a link to some YouTube video, then you have to take a look at how the embedding of YouTube videos works. You can just copy the embed code that is provided by YouTube itself, and then swap in the URL of the video you got from your user.
I never implemented something like that, but I assume it should work somewhat like this.
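For the YouTube case, a rough sketch of that swap could look like this (the regex only covers the common watch?v= and youtu.be forms, so treat it as an illustration):

import re

# Pull the video id out of a watch URL and drop it into YouTube's iframe
# embed markup.
def youtube_embed(url, width=560, height=315):
    match = re.search(r"(?:v=|youtu\.be/)([\w-]{11})", url)
    if not match:
        return None
    video_id = match.group(1)
    return (f'<iframe width="{width}" height="{height}" '
            f'src="https://www.youtube.com/embed/{video_id}" '
            f'frameborder="0" allowfullscreen></iframe>')

print(youtube_embed("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))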