When calling this API:
https://api.soundcloud.com/tracks.json?offset=0&q=ben+pearce&filter=streamable&order_by=hotness&consumer_key=[XXX]
I get a track that is NOT streamable, even though I've added &filter=streamable to the request. Am I doing it wrong?
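Until the server-side filter behaves, a defensive workaround is to filter the results client-side on the `streamable` flag that each track document in the /tracks response carries. A minimal Python sketch; the sample data is made up for illustration:

```python
def streamable_only(tracks):
    """Keep only the tracks the API marks as streamable."""
    return [t for t in tracks if t.get("streamable") is True]

# Made-up sample shaped like the /tracks JSON response:
tracks = [
    {"id": 1, "title": "A", "streamable": True},
    {"id": 2, "title": "B", "streamable": False},
]
print([t["id"] for t in streamable_only(tracks)])  # → [1]
```

This doesn't explain why the filter parameter is ignored, but it guarantees you never try to play a non-streamable track.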
I have been using the SoundCloud favorites endpoint api.soundcloud.com/users/${USER_ID}/favorites for the past few months. Since yesterday it has been returning a 404 when a username is used in the ID field, or at most ~35 tracks with no next_href in the response.
For example, a request with "domdolla" in the USER_ID field returns a 404:
https://api.soundcloud.com/users/domdolla/favorites?linked_partitioning=1&offset=0&limit=200&client_id=XXXXXXXXXX
while a request with domdolla's numeric user_id "627109" returns a collection of only 32 tracks and no next_href:
https://api.soundcloud.com/users/627109/favorites?linked_partitioning=1&offset=0&limit=200&client_id=XXXXXXXXXX
However, if you retrieve domdolla's profile, it shows a public_favorites_count of 1082:
https://api.soundcloud.com/users/domdolla?client_id=XXXXXXXXXX
This endpoint is still documented in the Soundcloud HTTP API Reference here:
https://developers.soundcloud.com/docs/api/reference#users
They changed the API without updating the docs.
The new endpoint is:
https://api-v2.soundcloud.com/users/{ID}/track_likes
This returns a list of "Likes".
The new api-v2 host already works with the resolve endpoint as well.
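For reference, paging through the new endpoint with linked_partitioning can be sketched in Python. The endpoint path is the one quoted above; the fetch function is injected so only the pagination logic is shown, and any real client_id is one you must supply yourself:

```python
import urllib.parse

API_V2 = "https://api-v2.soundcloud.com"

def likes_url(user_id, client_id, limit=200):
    """Build the first api-v2 track_likes URL."""
    qs = urllib.parse.urlencode(
        {"client_id": client_id, "limit": limit, "linked_partitioning": 1})
    return f"{API_V2}/users/{user_id}/track_likes?{qs}"

def all_likes(first_url, fetch):
    """Follow linked_partitioning pages. Each page is a dict with a
    'collection' list and an optional 'next_href'; `fetch` is any
    callable that GETs a URL and returns the parsed JSON
    (e.g. lambda u: requests.get(u).json())."""
    url, items = first_url, []
    while url:
        page = fetch(url)
        items.extend(page.get("collection", []))
        url = page.get("next_href")
    return items
```

Injecting `fetch` keeps the pagination walk testable without hitting the network.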
We are working on a new RESTful API using ASP.NET Web API. Many of our customers receive a nightly data feed from us. For each feed, we run a scheduled SQL Agent job that fires off a stored procedure, which executes an SSIS package and delivers files via email/FTP. Several customers would benefit from being able to run this job on demand and then receive either their binary file (xml, xls, csv, txt, etc.) or a direct transfer of the data in JSON or XML.
The main issue is that the feeds generally take a while to run. Most finish within a few minutes, but a couple can take 20 minutes (part of the project is optimizing these jobs). I need some help finding a best practice for setting up this API.
Here are our actions and the proposed REST calls:
Create Feed Request
POST ./api/feedRequest
Status 201 Created
Returns feedID in the body (JSON or XML)
We thought POST would be the correct request type because we're creating a new request.
Poll Feed Status
GET ./api/feedRequest/{feedID}
Status 102 Processing (feed is processing)
Status 200 OK (feed is completed)
Cancel Feed Request
DELETE ./api/feedRequest/{feedID}
Status 204 No Content
Cancels feed request.
Get Feed
GET ./api/feed/{feedID}
Status 200 OK
This will return the feed data. We'll probably pass parameters in the headers to specify how the customer wants their data. Setting feedType to "direct" would require a JSON or XML setting in Content-Type; setting feedType to "xml", "xls", "csv", etc., will transfer a binary data file back to the user. For some feeds this is a custom template that is already specified in the feed definition stored in our tables.
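From the client's point of view, the proposed create/poll/fetch flow can be sketched as follows. This is a minimal Python sketch under the design above; the `client` object and its three helper methods are hypothetical stand-ins for whatever HTTP layer you use:

```python
import time

def run_feed(client, poll_interval=5.0, max_polls=240):
    """Create a feed request, poll until it completes, then fetch it.

    Hypothetical client helpers mapping to the proposed endpoints:
      client.create()        POST ./api/feedRequest        -> feedID (201)
      client.status(feed_id) GET  ./api/feedRequest/{id}   -> 102 or 200
      client.fetch(feed_id)  GET  ./api/feed/{id}          -> feed data
    """
    feed_id = client.create()                  # 201 Created
    for _ in range(max_polls):
        if client.status(feed_id) == 200:      # feed completed
            return client.fetch(feed_id)
        time.sleep(poll_interval)              # still 102 Processing
    raise TimeoutError("feed did not complete in time")
```

Bounding the polling loop (rather than looping forever) matters here because some feeds run for 20 minutes.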
Questions
Does it appear that we're on the right track? Any immediate suggestions or concerns?
We are trying to decide whether to have both a /feed resource and a /feedRequest resource, or to keep everything under /feed. The scenario above is the two-resource approach. The single-resource approach would POST /feed to start a request, PUT /feed to check the status, and GET /feed when it's done. The PUT doesn't feel right, and right now we're leaning towards the two-resource solution above. Does this seem right?
We're concerned about very large dataset returns. Should we be breaking these into pieces, or will the REST service handle such large responses? Some feeds can be in excess of 100 MB.
We also have images that may be generated to accompany the feed; they're zipped into a separate file when the feed's stored procedure and package are called. We can keep this all in the same request and call GET /feed/{feedID}/images afterwards.
Does anyone know of a best practice or a good GitHub example we could look at that does something similar to this with MS technologies? (We considered moving to ASP.NET Core as well.)
Is there any possibility of "listening" to the state of SiteCatalyst GET image requests?
I'd like to run a callback function only when the requests are over; to be more precise, when they receive the 200 status code and I'm sure they're done. I'm confident no "built-in" method is available, and maybe I should hack the core s.track.s.t() function...? Thanks a lot.
You are right, there is no global "built-in" callback method for when the Adobe Analytics request is complete.
A couple notes I should mention to you about attempting to hack the core code:
1) If you are using the AppMeasurement library version 1.4.1+, in some circumstances, a POST request may be made instead of an image request.
2) Responses that are not 200/OK or otherwise completed/successful do not necessarily mean the data failed to reach Adobe. The most common scenario is an NS_BINDING_ABORTED error being returned.
The main bad effect I'm getting here is what I previously thought was a double XHR request.
It wasn't. In reality, the first request gets redirected as if it were the first visit of a new visitor (302 status), and a new visitorID is sent down by the Adobe server.
Then the redirected "200 status" request is made with this new visitorID inside it. This is bad because every XHR request results in a new visit by a new visitor, even though a previously set "s_vi" cookie is there in the browser, so the previously collected data for that user is lost. I know that XHR redirects can't be blocked, so I'm wondering if there is a way to "tell" the Adobe server that this is not the first request ever made, in order to stop the redirect and avoid using a new visitorID.
I'm trying to get a grasp on how to reverse engineer the URLs used to download streams.
I know there are already open-source tools that do this, but by copying them I don't learn the process of how to do it.
As an example, I'm trying to get a downloader for SoundCloud to work. I'm guessing the download URL should be something like api.soundcloud.com/track/... . Somewhere in between there are surely the track_id and client_id, which can be extracted from the source of the page.
But I can't seem to get further than that right now.
Before I answer my own post, I want to state that downloading streams from SoundCloud is illegal and hurts the artists. Also, playing the stream outside of SoundCloud is only allowed under their terms, so please check those first.
To grab the stream link, I first looked into the SoundCloud Python library. There I found that I can simply query the API with api.soundcloud.com/resolve?url=<URL of the desired song page>&client_id=<client_id>.
The client_id has to be sent with each API request. Searching through the code, it's really easy to find a client_id. It seems to be static, at least for unregistered users, and further searching suggested that it has been that way for at least a year.
After you call the resolve URL above, you get an XML document with the properties of the song/stream. There you will find the stream URL. You can then make a normal HTTP request for that stream URL (don't forget to append the client_id).
If for some reason the link doesn't work properly, try disabling your 302 HTTP redirections.
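The two steps above can be sketched in Python. Only the URL construction is shown (no network calls are made); the /resolve path and the need to append client_id come straight from the description above:

```python
import urllib.parse

API = "https://api.soundcloud.com"

def resolve_url(page_url, client_id):
    """Build the /resolve request for a song-page URL."""
    qs = urllib.parse.urlencode({"url": page_url, "client_id": client_id})
    return f"{API}/resolve?{qs}"

def stream_request(track, client_id):
    """Append the client_id to the stream_url found in the resolved
    track document before making the final HTTP request."""
    sep = "&" if "?" in track["stream_url"] else "?"
    return f"{track['stream_url']}{sep}client_id={client_id}"
```

You would then fetch `resolve_url(...)`, parse the returned document into a dict, and GET `stream_request(track, client_id)` to download the audio.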
I'm streaming music from the SoundCloud API, and sometimes when I call SC.stream the track does not stream and I get the error:
"GET http://api.soundcloud.com/tracks/80608808/stream?client_id=78bfc6a742a617082972ddc5ef20df2a 404 (Not Found)"
The GET request works for some tracks and not for others. I cannot figure out why or if there is an issue with the SC API.
Here is a reproduction of my problem on Plunker:
http://plnkr.co/edit/TMVWGg
Thanks!
This is because some SoundCloud tracks are not streamable via the API. You can check this for each track via the "streamable" property.
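A simple guard is to check that property before ever calling SC.stream, so you fail with a clear message instead of a 404. A Python sketch, where the track dict stands for the parsed /tracks/{id} response (the "streamable" field name is the one from the answer above):

```python
def assert_streamable(track):
    """Raise before attempting to stream a track the API says
    cannot be streamed, instead of hitting the 404 later."""
    if not track.get("streamable"):
        raise ValueError(f"track {track.get('id')} is not streamable via the API")
    return track
```

The same check works in any language: fetch the track metadata first, and only build the /stream URL when `streamable` is true.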