I've been trying to use facebook graph api public search.
It works just fine for english search queries, for example,
http://graph.facebook.com/search?q=watermelon%20&type=post
On the other hand, when using a non-English search query, I still receive only English results; none of the results are in the language of the query.
For example,
http://graph.facebook.com/search?q=ביבי&type=post
Does not return any relevant result (the search query is "ביבי" , a word in Hebrew. None of the returned results are in Hebrew).
What could I do to fix it?
Any suggestion will be helpful.
Thanks in advance.
To receive the response in a locale different from your computer's default locale, you should specify it with the request. Right now there are two ways to do that:
use &locale=he_IL in the URL (list of Facebook locales), e.g.
https://graph.facebook.com/search?q=YOUR_QUERY&type=post&fields=message&locale=he_IL
use Accept-Language header with your request, e.g.
'Accept-Language': 'he_IL,he,iw;q=0.9'
Recently, the first approach had a bug (it now works correctly), so I would use both of them (the first one takes priority over the other).
Note: Search across the specified locale will return posts available in this locale.
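A minimal sketch (Python here, purely for illustration) of building the request with both mechanisms; the endpoint, parameters, and locale values are taken from above, and note that current Graph API versions may additionally require an access token:

```python
from urllib.parse import urlencode

# Sketch: build the search URL with the locale parameter (first mechanism)
# and an Accept-Language header (second mechanism, as a fallback).
def build_search_request(query, locale="he_IL"):
    params = {
        "q": query,
        "type": "post",
        "fields": "message",
        "locale": locale,  # first mechanism: locale query parameter
    }
    url = "https://graph.facebook.com/search?" + urlencode(params)
    headers = {
        # second mechanism: Accept-Language header
        "Accept-Language": "he_IL,he,iw;q=0.9",
    }
    return url, headers
```

The non-ASCII query is percent-encoded by `urlencode`, so the Hebrew example from the question can be passed in directly.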
I need to use the Bing Maps API to check if a UK postcode (Not sure how different it is for other countries) is valid.
It seems that I can put any nonsense into the field for postcode and I still get a response.
E.g. http://dev.virtualearth.net/REST/v1/Locations/GB/aregsfdgsdfgsdfgdsf?key=BINGMAPSKEYHERE
Gives a result that has a lat and long of 53.9438323974609, -2.55055809020996 in the "point" field, despite that clearly not being a valid postcode.
Is there a way that I can simply test the validity of a postcode?
If you look at the response object for your request you will see a matchCode value, which indicates whether the match is good or not. In this case it says "UpHierarchy", which means the geocoder didn't find an exact result, so it went up the address hierarchy until it found one; the result being returned is for the United Kingdom. Additionally, the results have an entityType value which tells you the type of location that was found. In this case it says CountryRegion; you want an entityType value of "PostalCode". By checking these two values you can determine whether the returned result is a postal code or not. More details on the geocode response object are documented here: https://msdn.microsoft.com/en-us/library/ff701725.aspx
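As a sketch, here is how that check might look in Python once the JSON response is parsed. The field names (resourceSets, resources, matchCodes, entityType) follow the documented response format, but treat this as an illustration rather than an official client:

```python
# Minimal check of a parsed Bing Maps Locations response (the dict you get
# from response.json()): accept the result as a valid postcode only when
# the entityType is "PostalCode" and the match codes include "Good".
def is_valid_postcode(response):
    for resource_set in response.get("resourceSets", []):
        for resource in resource_set.get("resources", []):
            match_codes = resource.get("matchCodes", [])
            entity_type = resource.get("entityType")
            if entity_type == "PostalCode" and "Good" in match_codes:
                return True
    return False
```

With this, the "UpHierarchy"/CountryRegion result from the nonsense query above is rejected, while a genuine postcode match passes.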
One thing I would highlight is that the URL format you are using is a bit of a legacy one and isn't as accurate as passing in a single string query (i.e. &q=YOURQUERY). This is highlighted in the best practices docs. If you are using .NET, I highly recommend using the Bing Maps .NET REST toolkit. It makes things really easy and implements best practices for you.
I am attempting to preview a track via the 7digital api. I have utilised the reference app to test the endpoint here:-
http://7digital.github.io/oauth-reference-page/
I have specified what I consider to be the correct format query, as in:-
http://previews.7digital.com/clip/8514023?oauth_consumer_key=MY_KEY&country=gb&oauth_nonce=221946762&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1456932878&oauth_version=1.0&oauth_signature=c5GBrJvxPIf2Kci24pq1qD31U%2Bs%3D
and yet, regardless of what parameters I enter, I always get an invalid signature as a response. I have also incorporated this into my JavaScript code using the same OAuth signature library as the reference page, and I still get the same invalid signature returned.
Could someone please shed some light on what I may be doing incorrectly?
Thanks.
I was able to sign it using:
url = http://previews.7digital.com/clip/8514023
valid consumer key & consumer secret
field 'country' = 'GB'
Your query string parameters look a bit out of order. For OAuth, the base string used for signing must list its parameters in alphabetical order, so country would come first in this case. Once the signature is generated, the order in the final request doesn't matter, but the tool above applies them back in the same order (so country is first).
Can you make sure there aren't any spaces around your key/secret? It doesn't appear to strip white space.
If you have more specific problems it may be best to get in touch with 7digital directly - https://groups.google.com/forum/#!forum/7digital-api
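To illustrate the ordering point, here is a hedged Python sketch of the HMAC-SHA1 base string construction for a 2-legged OAuth 1.0 request (no token); MY_KEY and MY_SECRET are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

# Sketch of OAuth 1.0 HMAC-SHA1 signing. The key point: the parameters in
# the base string are sorted alphabetically, so "country" sorts before the
# oauth_* parameters regardless of their order in the final request URL.
def sign_oauth_request(method, url, params, consumer_secret, token_secret=""):
    pairs = sorted((quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])
    key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode(), base_string
```

If your library produces a different signature for the same inputs, comparing its base string against one built like this usually reveals the mismatch (ordering, encoding, or stray whitespace in the secret).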
I have a collection of cities for which I'm creating a REST API (my first REST API). Each city has a number of language independent things, such as founding date and population count. Cities also have things that depend on the language, such as the title and a short description. Internally the city documents have this format:
{
"population": 9042,
"name": {
"en": "Berlin",
"nl": "Berlijn",
// ...
},
// ...
}
The users of my API always want to view the city info for a specific language only, and get back something like:
{
"population": 9042,
"name": Berlin,
// ...
}
I've made these accessible via /cities/:id/:language_code, for instance cities/123/en. Now I want to implement listing of cities: GET /cities. There the requested language is also needed. Since this would result in /cities/:language_code, I'm getting the impression that putting this at the end of the URL is not a good idea, and I suspect I'll be better off with /en/cities/...whatever....
How is this typically done in REST APIs? Any big, nicely implemented, API out there that is a good example?
REST APIs are built on the HTTP protocol, so they can use the headers that are traditionally used to convey the preferred locale.
So I would use the Accept-Language header for this.
# Request
GET /cities/.... HTTP/1.1
Host: www.example.org
Accept-Language: en,en-US,fr;q=0.6
Would give :
{
"population": 9042,
"name": Berlin,
// ...
}
It depends on your clients. If the clients are applications, then #Orabîg's answer is 100% correct. If your clients are web browsers, though, you should track language as a user preference. The reason is that a user might be using a non-personal machine where the browser is set to a different language, and they may not know how to, or be able to, change that setting. Rather than forcing them to use an unfamiliar language, you build the preference into your API.
In that case, I would start with the language provided in Accept-Language until the user has identified themselves. Once they are passing an identifying token in a header with each request, use that to figure out what language responses should be in.
Just to mention the related W3C article about using headers for locale determination (not only language, which seems to be the original point of the question):
The HTTP Accept-Language header was originally only intended to specify the user's language. However, since many applications need to know the locale of the user, common practice has used Accept-Language to determine this information. It is not a good idea to use the HTTP Accept-Language header alone to determine the locale of the user. If you use Accept-Language exclusively, you may handcuff the user into a set of choices not to his liking.
For a first contact, using the Accept-Language value to infer regional settings may be a good starting point, but be sure to allow them to change the language as needed and specify their cultural settings more exactly if necessary. Store the results in a database or a cookie for later visits.
Some of the potential issues include the following:
Many users never change the defaults for Accept-Language. They are set when the user agent is installed. Unless they are multilingual or have some other reason to adjust language preferences, they may not even know such settings exist. Hence, the user may never have ratified the Accept-Language setting.
A user agent may send a request that specifies only a language and not a region; for example, you may not get a header with de-DE, de-CH or de-AT to indicate German as spoken in Germany, Switzerland or Austria, respectively. On the other hand, you might only get de, indicating a preference for German. If you were planning to use the region to decide what currency to use, you are now in a bind. Your particular circumstances might allow you to make assumptions such as "Germany has 83 million people, Switzerland has 7 million but only 63% speak German, Austria has 8 million, so this user probably uses the Euro. If we're wrong we'll only offend 4.6% of our German-speaking customers, or just over 4 million people." Feel free to make such an assumption, if you can accept the risk. It's a lot simpler to ask the user for more information. Also, the math gets more difficult for Spanish or English, for instance.
People borrow machines from friends, or rent them from internet cafes. In these cases the inferred locale may be inappropriate, and care should be taken to allow the user to select the appropriate language and locale from whatever page they are looking at.
Full article: Accept-Language used for locale setting
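The language-without-region issue above is usually handled with simple negotiation. A hedged Python sketch, assuming you maintain a set of supported locales server-side:

```python
# Sketch: parse an Accept-Language header into a preference-ordered list,
# then match against supported locales, falling back from region-specific
# tags (de-DE) to the bare language (de).
def parse_accept_language(header):
    ranked = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            tag, weight = piece, 1.0
        ranked.append((weight, tag.strip()))
    ranked.sort(key=lambda pair: -pair[0])  # stable: ties keep header order
    return [tag for _, tag in ranked]

def negotiate(header, supported):
    for tag in parse_accept_language(header):
        if tag in supported:
            return tag
        base = tag.split("-")[0]  # de-CH -> de
        if base in supported:
            return base
    return None
```

As the article says, use the negotiated value only as a first guess and let the user override it.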
I've written an app to publish some events via the open graph api to facebook. For most of the events this works fine. But some events facebook denies:
"OAuthException: (#100) Invalid event name specified: event_info-name"
I searched the Facebook docs but I couldn't find a detailed description of how the name has to look. I convert it to UTF-8 with utf8_encode (PHP). I guess that the string length is limited. If so: how long can the string be? Are there some other restrictions?
Thanks, Michael
I created events with different name lengths and it seems that the max event name size is 74 characters (one with a length of 75 or more throws the "(#100) Invalid event name specified").
I think the characters in the name are pretty flexible. My titles had " and ' among others and showed up fine, without encoding, on the event page.
What events do you have coded for your application in the application settings? See: https://developers.facebook.com/apps/{YOUR_APP_ID}/opengraph
I get the same error, but there appear to be STOP keywords, though I'm not sure what they are. If anyone is getting this error you might also want to look at Facebook Graph Error (#100) Invalid event name specified: event_info-name
As of January 2015 there's no character limit for event names. There's been a torrent of troll events here in Poland where people copy-pasted whole 100k-character-long books, or pi with 100k or so decimal places. The names were cut short on the event pages, but on the notifications page the whole names are displayed, cluttering it into oblivion.
I have a couple search forms, 1 with ~50 fields and the other with ~100. Typically, as the HTML spec says, I do searches using the GET method as no data is changed. I haven't run into this problem yet, but I'm wondering if I will run out of URL space soon?
The limit of Internet Explorer is 2083 characters. Other browsers have a much higher limit. I'm running Apache, so the limit there is around 4000 characters, while IIS's is 16384 characters.
At 100 fields, with an average field name length of 10 characters, that's already 5000 characters... amazingly, I haven't had any errors yet on the 100-field form. (25% of the fields are multiple selects, so the field length is much longer.)
So, I'm wondering what my options are. (Shortening the forms is not an option.) Here my ideas:
Use POST. I don't like this as much because at the moment users can bookmark their searches and perform them again later--a really dang nice feature.
Have JavaScript loop through the form to determine which fields are different than default, populate another form and submit that one. The user would of course bookmark the shortened version.
Any other ideas?
Also, does anyone know if the length is the encoded length or just plain text?
I'm developing in PHP, but it probably doesn't make a difference.
Edit: I am unable to remove any fields; I am unable to shorten the form. This is what the client has asked for and they often do use a range of fields, in the different categories. I know that it's hard to think of a form that looks nice with this many fields, but the users don't have a problem understanding how it works.
Are your users actually going to be using all 50-100 fields to do their searches? If they're only using a few, why not POST the search to an "in between" page which header()-redirects them to the results page with only the user-changed fields in the URL? The results page would then use the default values for the fields that don't exist in the URL.
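A hedged sketch of that in-between step in Python (the framework-specific redirect itself is omitted): drop the fields that still hold their defaults and build the shorter GET URL:

```python
from urllib.parse import urlencode

# Sketch: given the form's default values and what the user submitted,
# keep only the changed fields and build the shortened results URL.
# "/search" is a placeholder path, not from the original question.
def shortened_search_url(defaults, submitted, base="/search"):
    changed = {k: v for k, v in submitted.items() if defaults.get(k) != v}
    query = urlencode(sorted(changed.items()))
    return f"{base}?{query}" if query else base
```

The results page then falls back to the defaults for any field absent from the URL, and the shortened URL stays bookmarkable.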
To indirectly address your question, if I was faced with a 100-field form to fill in on one page, I'd most likely close my browser, it sounds like a complete usability nightmare.
My answer is, if there's a danger that I'm getting anywhere near that limit for normal usage of the form, I'm probably Doing It Wrong.
In order of preference, I would
Split the form up and use some server-side state retention
Switch to POST, then generate and redirect to a shorter URL on POST that resolves to the same result
Give up ;)
You mention in a comment that many of the fields "are hidden and can be opened as required".
If you are willing to discard graceful degradation, you could always actually add and remove the fields from the form, rather than just hiding and showing them: the browser won't submit the ones that aren't included in the form.
This is a variant of the "Make and model" forms that online insurance etc. pages use -- select the make, submit back to the server and get the list of models for that manufacturer.
If you don't mind using JavaScript then you could have it work out the length of the query string and, if it is too long, switch to a POST. Then have some sort of URL mapper to allow them to bookmark these posted searches.
Use post and if the user bookmarks the search, save it in a database and give it a unique token, then redirect to the search page using GET and passing the token as parameter.
TinyURL is a nice example: You give it a very long URL, it saves it to a DB, gives you a unique identifier for that URL and later you can request the long URL using that identifier.
In PHP it would be something along the lines of:
<?php
// Assumes a PDO connection in $db; performSearch() and showError() are
// application helpers. Prepared statements replace the original
// addslashes()-based escaping, which is unsafe.
if (isset($_GET['token']))
{
    $stmt = $db->prepare('SELECT fields FROM searches WHERE token = ?');
    $stmt->execute([$_GET['token']]);
    if ($row = $stmt->fetch(PDO::FETCH_ASSOC))
    {
        performSearch(unserialize($row['fields']));
        exit;
    }
    showError('Your saved search has been removed because it hasn\'t been used in a while');
    exit;
}

$fields = serialize($_POST);
$token = sha1($_SERVER['REMOTE_ADDR'].rand());
$stmt = $db->prepare('INSERT INTO searches (token, fields, save_time) VALUES (?, ?, NOW())');
$stmt->execute([$token, $fields]);
header('Location: ?token='.$token);
exit;
?>
And run a script daily:
<?php
$db->exec('DELETE FROM searches WHERE save_time < DATE_ADD(NOW(), INTERVAL -200 DAY)');
?>
Also, does anyone know if the length
is the encoded length or just plain text?
My guess was the encoded length. I made a simple test: a textarea and a submit button posting to a simplistic PHP script.
I loaded the page in IE6 and pasted some French text into the textarea, 2000 characters. When I hit the submit button, nothing happened. I had to reduce the length of the text to be able to submit.
In other words, the 2083-character limit is exactly the maximum length of the URL found in the address bar after submitting the GET request.
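This is easy to confirm programmatically (a Python sketch for illustration): measure the query string after percent-encoding, since each non-ASCII character expands to several %XX escapes:

```python
from urllib.parse import urlencode

# Sketch: the length that counts against the browser limit is the
# percent-encoded query string. An accented character like "é" is two
# bytes in UTF-8, so it encodes to six characters ("%C3%A9").
def encoded_query_length(fields):
    return len(urlencode(fields))
```

So 2000 characters of accented French text can easily exceed 2083 characters once encoded, matching the observed behaviour.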
I would go for the JavaScript solution: on submit, analyze the form, create a secondary form with hidden attributes, and submit that.
Some strategies on shortening the output:
As you point out, you can already skip all values left to default (no field, no value).
If you have a form like the one at the Processing forum search, you can group all checkbox states into one variable only, e.g. using letter encoding.
Use short value attributes (in select for example).
Note: if the search page is actually composed of several independent forms, where users fill only one section or another, you can make several separate forms.
Might not apply to your case and might seems obvious but worth mentioning for the record... ^_^
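The checkbox-grouping idea can be sketched like this (Python for illustration): pack the checkbox states into one integer and render it compactly in base 36:

```python
# Sketch: encode a list of checkbox states as bits of one integer, then
# render the integer in base 36 so dozens of checkboxes collapse into a
# few query-string characters.
def pack_checkboxes(states):
    value = 0
    for i, checked in enumerate(states):
        if checked:
            value |= 1 << i
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if value == 0:
        return "0"
    out = ""
    while value:
        out = digits[value % 36] + out
        value //= 36
    return out

def unpack_checkboxes(encoded, count):
    value = int(encoded, 36)
    return [bool(value >> i & 1) for i in range(count)]
```

Fifty checkboxes become at most a 10-character parameter instead of fifty name=value pairs.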
One could philosophically look at the search submission POST as the creation of a saved search (especially when a search is as complex an object as the one your users are making). In this case, you could accept the post for the creation of a search and then redirect using a GET to fetch the appropriate search results (post/redirect/get).
This would also allow the users to bookmark the search results (GET) to coming back at any time to re-run the search.
GET has one more advantage if your search results can be shared: with a POST request, if you send the link to someone, that person won't see any search results.