It appears that calls to the SoundCloud API with a query containing a space will fail, as the search appears to only match against the Permalink field. Example:
http://api.soundcloud.com/users.json?q=memory%20echo&consumer_key=CONSUMERKEY&limit=20&offset=0
This is not returning the artist Memory Echo at all. If I just search for the word "memory" then I can find it after paging through 8 pages of 20 users.
How can I get results that include an artist's name containing a space as part of the larger set of results associated with the words "memory" and "echo"? Is it possible to tell the API to search the Username field specifically?
In answer to your question: Probably not.
You probably already know this, but if there is a way to do it, it isn't documented by SoundCloud. As much as it may suck, you probably are going to have to continue to crunch through search results yourself to find the correct username unless you can convince the folks at SoundCloud to add that feature onto their API.
You may be able to optimize your search by guessing the user's permalink (which appears to often be the username in lowercase with spaces removed) and falling back to a full search if the search with the permalink fails to find the user you are looking for.
Using this scheme, searching for "Memory Echo" would first call /users.json?q=memoryecho and then fall back to /users.json?q=Memory Echo if the first query failed to turn up the correct user.
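To make that concrete, here's a rough PHP sketch of the two-step lookup (the helper name, the exact-match test, and the consumer key handling are just assumptions for illustration):
// Try the guessed permalink first, then fall back to the full name.
function find_soundcloud_user($name, $consumer_key)
{
    $guess = strtolower(str_replace(' ', '', $name)); // "Memory Echo" -> "memoryecho"
    foreach (array($guess, $name) as $query) {
        $url = 'http://api.soundcloud.com/users.json?q=' . urlencode($query)
             . '&consumer_key=' . $consumer_key . '&limit=20';
        $users = json_decode(file_get_contents($url), true);
        foreach ((array) $users as $user) {
            // Accept the first result whose username matches exactly (ignoring case).
            if (strcasecmp($user['username'], $name) === 0) {
                return $user;
            }
        }
    }
    return null; // not found; you would have to page through the full results
}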
Also, FYI, I tried the link provided by #bnz and also got "Memory Echo" as the first response. When I searched "memoryecho", username "Memory Echo" was the only result.
We use Nationbuilder for our website and discovered that when Nationbuilder encodes links (for tracking), it will break them if they contain multiple query parameters.
For instance, say we insert the following link in an email in Nationbuilder:
<a href="http://www.example.com?a=1&b=2">click me</a>
Assuming our Nationbuilder website is hosted at www.website.org, then Nationbuilder will rewrite the link as such:
<a href="http://www.website.org/...?u=http://www.example.com?a=1&b=2">click me</a>
When one clicks the link above, Nationbuilder processes it and records the click event in their system, but then incorrectly redirects to http://www.example.com?a=1 and discards &b=2.
Most people will immediately identify the problem -- our original url, passed as the "u" query parameter above, was not properly encoded by Nationbuilder. At the very least the ampersand before "b" should have been encoded, if not the equal signs as well, so that our entire original url would be captured in the "u" parameter. The correct link created by Nationbuilder, with the proper encoding, should have been this:
<a href="http://www.website.org/...?u=http%3A%2F%2Fwww.example.com%3Fa%3D1%26b%3D2">click me</a>
Shockingly, Nationbuilder tech support and their engineers say this behavior is "working as expected". We pointed out that no one would expect a working link to become a broken link, but they refuse to treat it as a bug or even as a design error.
Does anyone have a suggestion for how we can get around this Nationbuilder "feature" of breaking links with query parameters? We use query parameters extensively in our URLs. We were thinking of shortening every link through bit.ly so they would have no query parameters but that seems like a lot of unnecessary work.
Thanks!
Yeah, simply take your link, parameters and all, and encode it using a tool like http://meyerweb.com/eric/tools/dencoder/ so your URLs are not broken by NB's processing.
so
example.com/page?a=1&b=2
becomes
example.com%2Fpage%3Fa%3D1%26b%3D2
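If you're generating these links in code rather than pasting them by hand, PHP's rawurlencode() does the same thing as that tool (a tiny sketch, using the example URL from the question):
// Encode the whole destination URL before dropping it into the NB email
$target  = 'http://www.example.com?a=1&b=2';
$encoded = rawurlencode($target);
// $encoded is now "http%3A%2F%2Fwww.example.com%3Fa%3D1%26b%3D2"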
If none of the provided solutions work, you could use a URL shortening service such as bit.ly, and link to those shortened URLs from your NB email blast, which will then redirect to your full URLs as provided to the shortening service.
I have a Facebook app, and I'd like to allow my users to invite their Facebook friends to my app. The proper endpoint is /me/invitable_friends which is working well. But towards the bottom of that doc page, they recommend implementing a "search box" to filter the results, yet they don't offer any example of how to do this. I've searched around and haven't found anything.
It doesn't appear as though you can pass additional params for filtering the results. Obviously I can filter the results after the fact, but that's not scalable since the API only returns ~20 users at a time. That limit is modifiable (I believe), though it's of course not wise to bump it too high.
So how can I build a search box interface if I can't pass the search text to the endpoint? I must be missing something.
Thanks in advance.
PS - I'm using the JS SDK.
You should probably file a bug.
Based on the documentation, the default size is 1000 records (the average Facebook friend list is 300-400).
If you don't see a next parameter under paging at the end of the result, there are no more results.
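If you do end up paging, the logic is just to follow paging.next until it disappears and then filter locally against the search box text. A rough sketch (written in PHP with an assumed user access token; the same loop applies to the JS SDK's response objects):
// Collect every page of invitable friends, then filter by the search box text.
function get_invitable_friends($access_token)
{
    $friends = array();
    $url = 'https://graph.facebook.com/me/invitable_friends?access_token=' . $access_token;
    while ($url) {
        $page = json_decode(file_get_contents($url), true);
        $friends = array_merge($friends, $page['data']);
        // No paging.next means we have seen the last page.
        $url = isset($page['paging']['next']) ? $page['paging']['next'] : null;
    }
    return $friends;
}

// Simple case-insensitive name match against whatever was typed into the search box.
function filter_friends($friends, $search)
{
    return array_filter($friends, function ($friend) use ($search) {
        return stripos($friend['name'], $search) !== false;
    });
}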
I'm a composer/producer, and for a group of songs I created dedicated Facebook Pages.
Now that they have reached 50 Likes, I wanted to set a custom URL for each page.
But in one case it is still not possible: the tune is called "STRIPTEASE", and FB says that address is not available, even though there is not yet any page with that URL.
I also tried
striptease.tune
striptease.song
but nothing.
To be sure it was a unique URL, I also appended the name of the band:
striptease.tune.name_of_the_band
Nothing... FB always says it is not an available address.
So I think there are "forbidden" words that cannot be used in addresses, and "striptease" is probably one of them. Can anyone verify this please?
Thank you so much for your help
The easiest way to check would be to try a name other than "striptease"; if that works, then yes, there's probably a word filter somewhere.
I'm trying to integrate the Facebook comments left on our site in a way in which the content can be crawled by search engines and also for people (although I highly doubt there will be many) who don't have Javascript enabled on their browser.
Currently our Facebook comments are displayed via the Facebook comments social plugin (using the <fb:comments href="MY_URL" num_posts="50" width="665"></fb:comments> tag). This ends up rendering an iFrame (which is mostly ignored by search engine crawlers), so the plan is to render this information and format it with basic HTML. To do this, the comments are pulled using the Graph API; this is then only displayed to crawlers and people with Javascript disabled.
This all works nicely using the Graph API call (https://graph.facebook.com/comments/?ids=MY_URL), parsing the JSON result and displaying it on the page. The catch is that the <fb:comments> approach filters the results based on a blacklist we have set up on one of our Facebook Apps. The App ID with the relevant blacklist is stored on the page as metadata (<meta property="fb:app_id" content="APP_ID"/>), which the <fb:comments> control obviously must somehow use to filter the comments.
The problem is that the Graph API method does not filter any results, as I guess no blacklist (or App ID containing a blacklist) is specified. Does anyone know how to specify a Facebook App ID in the API call URL, or of another way to avoid fetching comments that violate the blacklist?
On a side note, I know the debate about filtering content in comments rages on, but implementing the blacklist is a management decision, and one that I have no influence in changing; just in case anyone felt the need to explain why content filtering is or isn't a good idea!
Any thoughts on a solution?
Unfortunately there's no way to access a filtered list of comments using the API. It might be a reasonable request to have this in the API, so you should file a wishlist item in Facebook's bug tracker.
Otherwise, the only solution I can think of is to implement your own filter on your side when retrieving and displaying the comments from the API.
According to the Comments plugin documentation, the filter on Facebook's side is implemented as a simple substring match, so it should be trivial to reproduce.
A fairly simple regular expression match should be able to check each comment against a relatively long list quickly.
(Unfortunately, the tradeoff here is that implementing a filter is easy, but you'd also need to write an interface so that whoever's updating the list of disallowed words can maintain the list for both the Facebook plugin, and your own filtering.)
Quote from the docs:
The comment is checked via substring matching. This means if you blacklist the word 'at', if the comment contains the sequence 'a' 't' anywhere it will be marked with limited visibility; e.g. if the comment contained the words 'bat', 'hat', 'attend', etc it would be caught.
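So once you have pulled the comments from graph.facebook.com/comments/?ids=MY_URL and decoded the JSON, reproducing the filter on your side could look something like this (a sketch only; $blacklist is assumed to mirror the terms configured against your App ID, and I've made the match case-insensitive, so adjust if you need it stricter):
// A comment is hidden if it contains any blacklisted term as a plain substring.
function is_blacklisted($message, array $blacklist)
{
    foreach ($blacklist as $term) {
        if (stripos($message, $term) !== false) { // case-insensitive substring match
            return true;
        }
    }
    return false;
}

// $comments is assumed to be the decoded array of comment objects, each with a 'message' field
$visible = array_filter($comments, function ($comment) use ($blacklist) {
    return !is_blacklisted($comment['message'], $blacklist);
});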
Pretty sure there is no current way of doing this from the Graph API; the only thing I can suggest is taking the blacklist and building your own filter.
Hey, so this is one of those questions that seems obvious, and I'm probably going to feel stupid, but here goes:
I'm doing a CodeIgniter site with a search. Think of a Google-type input, where you'd search for "white huskies." I have a search results page that takes a URI (MySite.com/dogs/white huskies), takes the third part, and performs the search on that term. I'd like this to be done in the URI, and not by POST, so my users can bookmark results.
The problem I'm having is how to get that search button to direct to Mysite.com/dogs/WHATEVER IS IN THE INPUT. How do I get what's in the input into the anchor href? I know I could do this with Javascript, but I've heard it's bad practice to force people to have Javascript for something this small.
Thanks for the help!
Read: Form redirect to URL containing query term? - pure HTML or Django
(asked for Django, but answer fits here too)
You could have an intermediate POST page that collects the form inputs and concatenates them into a valid URL which you can then redirect to. I'm not sure if this is good or bad SEO practice however, but I can't see another way of doing this without some Javascript intervention.
Perhaps you could look at doing the intermediate POST page which takes the values and redirects you to /search/dog/white/huskies, but also have a Javascript equivalent that does this on the fly on form submit and does a window.location redirect to the same /search/dog/white/huskies?
Just my 2 pennies worth ;)
It is possible to have CodeIgniter work with $_GET variables and URI segments securely.
A workaround I have used in the past is to collect the search term using POST, build the required URL for use with URI segments, and then redirect the user to that page.
// Build a site-relative URI; CodeIgniter's redirect() prepends the base URL itself
$url = 'search/' . urlencode($this->input->post('query'));
redirect($url);
This shouldn't affect SEO, but something like the URL of a search result is unlikely to have any effect on SEO anyway. Clean URLs are only really meant for permanent content. If you're going to display the search term on the page, remember to run it through xss_clean(); I've seen a few people make that fatal mistake before.
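For completeness, a minimal sketch of the receiving end (the route, controller and view names here are made up; adapt them to your app):
// application/config/routes.php
$route['search/(:any)'] = 'search/index/$1';

// application/controllers/search.php
class Search extends CI_Controller {
    public function index($term = '')
    {
        // Decode the URI segment back into the original search text,
        // then sanitise it before echoing it anywhere in the page.
        $term = $this->security->xss_clean(urldecode($term));
        $this->load->view('results', array('term' => $term));
    }
}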