Searching for playlists on soundcloud api returns 500 error - soundcloud

I'm trying to use the soundcloud api to find playlists with a specific tag. My first step in doing so is pinging the soundcloud api for all playlists. I run the following in my command line to do so (client id replaced for privacy):
curl 'https://api.soundcloud.com/playlists.json?client_id=MY_CLIENT_ID'
This always returns a 500 Internal Server Error, whether I ask for JSON or plain XML:
curl 'https://api.soundcloud.com/playlists?client_id=MY_CLIENT_ID'
However, when I make the analogous request for tracks, it works fine:
curl 'https://api.soundcloud.com/tracks?client_id=MY_CLIENT_ID'
curl 'https://api.soundcloud.com/tracks.json?client_id=MY_CLIENT_ID'
What gives? Is this an error on my side or their side?

I don't think you can grab all playlists. There must be millions of them, and that is a lot to return.
If you look here at the Soundcloud API Docs, they add a playlist id to the URL.
$ curl "http://api.soundcloud.com/playlists/405726.json?client_id=YOUR_CLIENT_ID"
Hopefully this helps!
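Since the original goal was filtering by tag, here is a hypothetical sketch of checking a fetched playlist's tags locally, assuming the playlist JSON contains the `tag_list` field the legacy SoundCloud API documented (the playlist ID, tags, and response below are made up; the real network call is commented out):

```shell
# json=$(curl -s "https://api.soundcloud.com/playlists/405726.json?client_id=YOUR_CLIENT_ID")
json='{"id":405726,"title":"Demo","tag_list":"chill house summer"}'  # canned sample
tag='house'

# Extract the space-separated tag_list value and test for a whole-word match.
tags=$(printf '%s' "$json" | sed -n 's/.*"tag_list":"\([^"]*\)".*/\1/p')
case " $tags " in
  *" $tag "*) echo "playlist matches tag: $tag" ;;
esac
```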

Related

Why does Github actions rest API download artifacts by creating a temporary URL?

I am following the docs here https://docs.github.com/en/rest/actions/artifacts#download-an-artifact to use Github actions rest API to download artifacts. Given an ARTIFACT_ID and access token if the repo is private, one can call the API via cURL or the github CLI to get a response from github. The response header contains Location:... which provides a temporary URL lasting 1 minute from which the artifact can be downloaded. The artifact can then be downloaded via a second call to cURL.
I would like to know the reason for this design decision on GitHub's part. In particular, why not just return the artifact in response to the first call to cURL? Additionally, given that the first call is intended to return a temporary URL from which the artifact can be retrieved, why not return that URL directly in the response body rather than only in the Location header? Other information, such as whether the credentials are bad or the object has moved, is returned as JSON from this call, so why can't the temporary URL be included there as well?
To help clarify my question, here is some relevant code:
# The initial cURL command looks something like this:
curl -v \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID/ARCHIVE_FORMAT
# the temporary URL, which can be curled to retrieve the artifact, looks something like this:
curl https://pipelines.actions.githubusercontent.com/serviceHosts/<HEXSTRING>/_apis/pipelines/1/runs/16/\
signedartifactscontent?artifactName=<artName>&urlExpires=<date>&urlSigningMethod=HMACV2&urlSignature=<SIGNATURE>
Additionally, I am currently capturing the standard error of the cURL command and then running regex on it so as to extract the temporary URL. Is there a better way to do this? For example, is there a flag I could pass to cURL that would give me the value of Location directly?
Additionally, it is stated that the archive_format must be zip. Given that this is the case, what is the benefit of having this parameter? Is it not redundant? If so, what is the benefit of this redundancy?
This is a consequence of a design decision from 2011, described in https://github.blog/2011-08-02-nodeload2-downloads-reloaded/:
When implementing a proxy of any kind, you have to deal with clients that can’t read content as fast as you can send it.
When an HTTP server response stream can’t send any more data to you, write() returns false.
Then, you can pause the proxied HTTP request stream, until the server response emits a drain event.
The drain event means it’s ready to send more data, and that you can now resume the proxied HTTP request stream.
To avoid DDoS, it is better to serve that stream from a temporary URL rather than a fixed one.
You can use -D to display the response headers, but you would still need to post-process the output to get the redirect URL.
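There is in fact a flag for exactly this: curl's `%{redirect_url}` write-out variable (available since curl 7.18.2) prints the redirect target without following it, which avoids regexing stderr entirely. A sketch of both approaches — the GitHub request itself is commented out since it needs a token, and the header-parsing variant runs on a canned response:

```shell
# Option 1: let curl report the redirect target directly, without following it:
# curl -s -o /dev/null -w '%{redirect_url}' \
#   -H "Accept: application/vnd.github+json" \
#   -H "Authorization: token <TOKEN>" \
#   https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID/zip

# Option 2: dump headers to stdout with -D - and extract the Location line.
headers='HTTP/2 302
location: https://example.invalid/signed-artifact?sig=abc
content-length: 0'
url=$(printf '%s\n' "$headers" | awk 'tolower($1) == "location:" { print $2 }' | tr -d '\r')
echo "$url"
```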

How do I make a POST request with curl to the Vision API Product Search?

I saw that the docs ask me to send my request by choosing one of these options:
I don't understand what these options are. I just want to make a POST request, like the first command, with the JSON body as mentioned, but further down, at the curl part, I didn't understand what to do.

How do I know Splunk REST API Base URL?

We have Splunk deployed in https://splunkit.corp.company.com (url modified).
and able to access Splunk Web home page on https://splunkit.corp.company.com/en-US/app/launcher/home (url modified).
I am building a dashboard application which uses the JSON data provided by Splunk REST services.
I have gone through the documentation link and the REST endpoints listed there.
From those links I learned:
1. Make a POST request to services/auth/login with username and password. This returns a session key that is used in further API calls.
2. Make a POST request to services/search/jobs to create a search. This returns a search ID (sid).
3. Poll services/search/jobs/<sid> to check whether the search is complete.
4. Once it is complete, retrieve the results from services/search/jobs/<sid>/results.
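The steps above can be sketched with curl as follows. $BASE is the unknown base URL (an assumption worth checking: on a default install the REST management API listens on port 8089, e.g. https://splunk-host:8089/services, separately from the web UI port). The network calls are commented out, with a canned login response standing in so the XML parsing is visible:

```shell
# BASE='https://splunk-host:8089/services'   # assumption: default management port

# 1. Log in; the response is XML containing <sessionKey>:
# resp=$(curl -sk "$BASE/auth/login" -d username=admin -d password=changeme)
resp='<response><sessionKey>kq6gkXO_example</sessionKey></response>'  # canned sample
key=$(printf '%s' "$resp" | sed -n 's:.*<sessionKey>\(.*\)</sessionKey>.*:\1:p')
echo "session key: $key"

# 2. Create a search job; the response contains a search ID (sid):
# curl -sk -H "Authorization: Splunk $key" "$BASE/search/jobs" \
#      -d search='search index=_internal | head 5'
# 3. Poll "$BASE/search/jobs/<sid>" until the job reports it is done.
# 4. Fetch "$BASE/search/jobs/<sid>/results?output_mode=json".
```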
The problem I am facing is that I don't know what the base URL is. I tried constructing https://splunkit.corp.company.com/en-US/services/auth/login and so on, but it doesn't work.
Any help appreciated. Thanks
I had the same question earlier. Here is a workaround to find out the REST API base URL; I found it by accident, in fact.
In the Firefox browser, open the Web Developer / Network tool to inspect the URLs exchanged between your computer and the Splunk server.
Log on to Splunk via the web interface.
Assuming you have run a search beforehand, there will already be a job stored on the server. Click the Activity / Jobs link at the top right of the window.
A list of jobs is shown. Choose any job and click the Job / Delete Job button; the job's search result will be deleted.
In the Web Developer tool, inspect the URL of the delete request.
For me, I got a URL that looks like this:
https://the-company-splunk-server.com/en-US/splunkd/__raw/services/search/jobs/scheduler_search_RMD554b7a649e94cdf69_at_1526886000_58534?output_mode=json
The secret is this: everything up to and including /services/ is the REST API base URL. In this case, the base URL is:
https://the-company-splunk-server.com/en-US/splunkd/__raw/services/
Test the base URL
We can try logging in through this base URL with curl:
curl --insecure https://the-company-splunk-server.com/en-US/splunkd/__raw/services/auth/login -d username=your-user -d password=your-password
We got the following result:
<response>
<sessionKey>kq6gkXO_dFcJzJG2XpwZs1IwfhH8MkkYDaBsZrPxZh8</sessionKey>
</response>
So the test succeeded. We have proven that the base URL works.
Good luck.

How do I use this REST authentication style (token immediately following https://)?

I'm trying to use this API: https://bibles.org/pages/api/documentation
The docs give https://#{your API token}:bibles.org/v2/versions/eng-GNTD.xml as their example. However, this doesn't work for me; Chrome and Firefox just treat it as if I'm doing a Google search.
If I do their curl example: curl -u #{your API token}:X -k https://bibles.org/v2/versions/eng-GNTD.xml, everything works fine.
I've never seen an authentication style where I passed my token before the url. Is there a special way to do this that I just don't know about?
Looking at your curl command, it looks like you are sending basic authentication with a GET request.
Try
https://#{your API token}:X@bibles.org/v2/versions/eng-GNTD.xml
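Both forms send the same thing: an `Authorization: Basic` header whose value is base64("user:password"). A small sketch showing the equivalence (the token below is a made-up placeholder):

```shell
TOKEN='my-api-token'   # placeholder, not a real API token

# What curl -u "$TOKEN:X" (or https://$TOKEN:X@host/...) actually sends:
auth=$(printf '%s' "$TOKEN:X" | base64)
echo "Authorization: Basic $auth"

# Equivalent requests:
# curl -u "$TOKEN:X" https://bibles.org/v2/versions/eng-GNTD.xml
# curl "https://$TOKEN:X@bibles.org/v2/versions/eng-GNTD.xml"
```

This is why the browser form fails: browsers increasingly refuse or mangle user:password@ credentials in URLs, while curl handles them as ordinary basic auth.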

502 Bad Gateway on api.soundcloud.com /playlists HTTP API request

Simply put, the SoundCloud servers respond with a 502 Bad Gateway error on a /playlists API call with client_id and user_id parameters. The issue affects one specific user, so maybe something is messed up on the server or on the profile. How can I resolve this? Is there a recovery procedure or something similar to get things working again?
I ran into this issue myself and believe I have found the root cause. After trying the playlists query in a browser and with curl, I was eventually able to get a valid data response (JSON).
It turns out SoundCloud includes the full track information for every track of each playlist! Some of my playlists have the maximum (500) tracks, so the response payload size must be the issue. It must also be why we get an occasional timeout response.
I will see what I can do about limiting the data size in the response and may try other libraries (I'm using soundcloud-ruby.)