SoundCloud API: what is the item limit on the activities feed?

As it's undocumented, what's the maximum limit I can reasonably set when querying the activities feed?
For example,
curl -i "https://api.soundcloud.com/me/activities?limit=200&oauth_token={}"
returns a valid response, but:
curl -i "https://api.soundcloud.com/me/activities?limit=1000&oauth_token={}"
returns:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json;charset=utf-8
Date: Fri, 22 Jan 2016 07:22:50 GMT
Server: am/2
Content-Length: 26
{"error":"Unknown error."}

The default limit is 50 and the maximum is 200. See:
https://developers.soundcloud.com/blog/offset-pagination-deprecated
If you want to page through all activities, you need to use the linked_partitioning parameter, described in the blog post above, or in the API reference under "Pagination":
https://developers.soundcloud.com/docs/api/reference#activities
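As an illustration, a minimal paging sketch (assuming the documented linked_partitioning behavior, where each JSON response carries a next_href URL for the following page; the oauth_token placeholder is kept from the question):
curl -i "https://api.soundcloud.com/me/activities?limit=200&linked_partitioning=1&oauth_token={}"
Request the next_href returned in each response until it is no longer present. Since each page is capped at 200 items, paging like this is the only way to walk the whole feed.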

Related

Uploading a file with google cloud API with a PUT at root of server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) to http://myserver/test.txt. As you can see, I made the PUT request at the root of my server. The response I got is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did it for testing purposes. I understand all the headers returned, but I can't figure out whether my file has been uploaded because I don't know this API well enough.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 2xx range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
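As for retrieval: a quick check, assuming the object really was written to the path you PUT to (the x-goog-* headers above suggest it was stored), is to GET the same URL:
curl -i http://myserver/test.txt
A 200 response whose body is your six-byte file (matching x-goog-stored-content-length: 6 above) confirms the round trip.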

How to search using Github API with enterprise

I'm trying to search through repositories, but I can't seem to figure it out with GitHub Enterprise. I have tried the following with no results. Any suggestions?
curl -i http://my.domain.com/api/v3/repositories -H "If-Modified-Since: Mon, 16 Jun 2014 01:01:01 CST"
curl -i "http://my.domain.com/api/v3/search/repos?q=pushed:2014-06-17"
HTTP/1.1 404 Not Found
Server: GitHub.com
Date: Wed, 18 Jun 2014 16:45:58 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Status: 404 Not Found
X-GitHub-Media-Type: github.beta
X-Content-Type-Options: nosniff
Content-Length: 29
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: ETag, Link, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes
Access-Control-Allow-Origin: *
X-GitHub-Request-Id: b4eec0e7-1b1a-48b7-81d8-d63c28b55b37
{
"message": "Not Found"
}
One of the nice things about GitHub's API, both public and Enterprise, is that if you go to the API root, it will tell you what endpoints are available. On an Enterprise instance that is http://my.domain.com/api/v3/. Looking at my company's Enterprise instance (sorry, not sure of the version), I only see the legacy search API endpoints.
As a result, http://my.domain.com/api/v3/legacy/repos/search/pushed:2014-06-17 is likely the search URL you want.
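Put together as a request, that would be the answer's URL dropped into the question's curl invocation (quoted only for shell safety):
curl -i "http://my.domain.com/api/v3/legacy/repos/search/pushed:2014-06-17"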

Unable to upload a track using the soundcloud API larger than 7MB

I am unable to upload any tracks larger than about 7 MB (413 Request Entity Too Large is returned). This functionality was previously working, and the SoundCloud API documentation states that tracks can be up to 500 MB.
Here is an example using curl with a successful upload (4.9 MB) and an unsuccessful one (7.4 MB).
I have provided Dropbox links to the tracks (my own production, so no copyright issues!) if anyone wants to try to replicate this. You will need to add your own oauth_token.
successful upload = 4900kb_307sec_128kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@4900kb_307sec_128kbps_44100hz.mp3' \
-F 'track[title]=A 4.9MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, Origin
Access-Control-Allow-Methods: GET, PUT, POST, DELETE
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Date
Age: 0
Cache-Control: no-cache
Content-Type: application/json; charset=utf-8
Date: Wed, 06 Nov 2013 18:22:57 GMT
Location: https://api.soundcloud.com/tracks/118866401
Server: nginx
Via: 1.1 varnish
X-Cache: MISS
X-Cacheable: NO:Cache-Control=no-cache
X-Runtime: 436
X-Varnish: 3652774389
Content-Length: 1623
unsuccessful upload = 7400kb_307sec_192kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@7400kb_307sec_192kbps_44100hz.mp3' \
-F 'track[title]=A 7.4MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 413 Request Entity Too Large
Date: Wed, 06 Nov 2013 18:23:21 GMT
Server: ECS (lhr/4799)
Content-Length: 0
Connection: close
Thanks.
We have the same issue with SoundCloud. It seems to be an issue in their nginx.conf (webserver configuration); a 413 from nginx is typically governed by its client_max_body_size directive.
Please contact SoundCloud developer support.
This was an issue in SoundCloud's API routing and has now been fixed.
For details, see the comments at:
Soundcloud: increased 413 failures (Request Entity Too Large) on upload

How to request only for a web page header using netcat?

I am not sure whether netcat has an option for requesting only the HTTP header, or whether I should use sed to filter the result. If sed is used, do I just extract everything before the first "<" occurs?
The command line tool curl has the -I option, which fetches only the headers:
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.
Demo:
$ curl -I stackoverflow.co.uk
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 503
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 21:06:15 GMT
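If you specifically want netcat, no sed filtering is needed: send a HEAD request by hand and the server returns only the headers. A minimal sketch (host and port are illustrative):
printf 'HEAD / HTTP/1.1\r\nHost: stackoverflow.com\r\nConnection: close\r\n\r\n' | nc stackoverflow.com 80
The Connection: close header tells the server to close the connection after responding, so nc exits on its own.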

How to correctly set Expires headers on Google Cloud Storage?

The Google Cloud Storage Developer Guide explains how to set Cache-Control headers and their critical impact on the consistency behavior of the API, yet the Expires header isn't mentioned, nor did it appear to be derived from the Cache-Control configuration.
The Expires header appeared to always be equal to the request time plus one year, regardless of the Cache-Control setting, e.g.
$ gsutil setmeta -h "Cache-Control:300" gs://example-bucket/doc.html
A request was made to a document (doc.html) in the Google Cloud Storage bucket (example-bucket) via
$ curl -I http://example-bucket.storage.googleapis.com/doc.html
which produced the following headers:
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Oct 3 2012 16:52:30 (1349308350)
Date: Sat, 13 Oct 2012 00:51:13 GMT
Cache-Control: 300, no-transform
Expires: Sun, 13 Oct 2013 00:51:13 GMT
Last-Modified: Fri, 12 Oct 2012 20:08:41 GMT
ETag: "28fafe4213ae34c7d3ebf9ac5a6aade8"
x-goog-sequence-number: 82
x-goog-generation: 1347601001449082
x-goog-metageneration: 1
Content-Type: text/html
Accept-Ranges: bytes
Content-Length: 7069
Vary: Origin
Not sure why you say the Expires header shows request time plus 1 year. In your example, the Expires header shows a timestamp one hour after the request date, which is to be expected.
I just did an experiment where I set an object's max age to 3600 and then 7200 via this command:
gsutil setmeta "Cache-Control:max-age=7200" gs://marc-us/xyz.txt
Then I retrieved the object using the gsutil cat command with the -D option to see the request/response details, like this:
gsutil -D cat gs://marc-us/xyz.txt
In both experiments, the Expires header produced the expected timestamp, as per the object's max-age setting (i.e. one hour after request time and two hours after request time).
Looks like this was caused by a malformed header. Duh.
Cache-Control: 300, no-transform
should be
Cache-Control: public, max-age=300, no-transform
When things are set correctly, they work. See RFC 2616 (HTTP/1.1) Section 14.9 (Cache-Control).
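For completeness, the corrected version of the setmeta call from the question (same example bucket and object, now with a well-formed header) would be:
gsutil setmeta -h "Cache-Control:public, max-age=300, no-transform" gs://example-bucket/doc.html
After this, the served Expires header should fall 300 seconds after the request time.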