Unable to upload a track larger than 7MB using the SoundCloud API

I am unable to upload any tracks larger than about 7MB (413 Request Entity Too Large is returned). This functionality was previously working, and the SoundCloud API docs state that tracks can be up to 500MB.
Here is an example using curl with a successful upload (4.9MB) and an unsuccessful one (7.4MB).
I have provided Dropbox links to the tracks (my own production, so no copyright issues!) if anyone wants to try to replicate. You will need to add your own oauth_token.
successful upload = 4900kb_307sec_128kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@4900kb_307sec_128kbps_44100hz.mp3' \
-F 'track[title]=A 4.9MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, Origin
Access-Control-Allow-Methods: GET, PUT, POST, DELETE
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Date
Age: 0
Cache-Control: no-cache
Content-Type: application/json; charset=utf-8
Date: Wed, 06 Nov 2013 18:22:57 GMT
Location: https://api.soundcloud.com/tracks/118866401
Server: nginx
Via: 1.1 varnish
X-Cache: MISS
X-Cacheable: NO:Cache-Control=no-cache
X-Runtime: 436
X-Varnish: 3652774389
Content-Length: 1623
unsuccessful upload = 7400kb_307sec_192kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@7400kb_307sec_192kbps_44100hz.mp3' \
-F 'track[title]=A 7.4MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 413 Request Entity Too Large
Date: Wed, 06 Nov 2013 18:23:21 GMT
Server: ECS (lhr/4799)
Content-Length: 0
Connection: close
Thanks.

We have the same issue with SoundCloud. It seems to be a problem in their nginx.conf (web server configuration).
Please contact SoundCloud developer support.

This was an issue in SoundCloud's API routing, and it has now been fixed.
For details, see the comments at:
Soundcloud: increased 413 failures (Request Entity Too Large) on upload

Related

Uploading a file with the Google Cloud API with a PUT at the root of the server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) to http://myserver/test.txt. As you can see, I did the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way. I did that for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded, because I don't have enough knowledge of this API.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 2xx range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
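As for retrieving the file: assuming the PUT really did store the object at that path (the 200 OK and the x-goog-stored-content-length: 6 header suggest it did), a plain GET against the same URL should return its contents. A minimal sketch, using the same hypothetical server as in the question:
$ curl -i http://myserver/test.txt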

Unable to paginate releases in GitHub V3 API

I'm trying to list all of the releases of a public repository on GitHub using their V3 API. Here's the request I'm making:
curl -is -H 'Accept: application/vnd.github.v3+json' \
https://api.github.com/repos/ffmpeg/ffmpeg/releases
The response headers I receive back are here:
HTTP/1.1 200 OK
Server: GitHub.com
Date: Fri, 29 Jan 2016 20:23:15 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 29612
Status: 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 19
X-RateLimit-Reset: 1454099558
Cache-Control: public, max-age=60, s-maxage=60
ETag: "947039722a1073c5279a9fd39d00c0bf"
Vary: Accept
X-GitHub-Media-Type: github.v3; format=json
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-
Access-Control-Allow-Origin: *
Content-Security-Policy: default-src 'none'
Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-XSS-Protection: 1; mode=block
Vary: Accept-Encoding
X-Served-By: b0ef53392caa42315c6206737946d931
X-GitHub-Request-Id: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
Notice the lack of the Link response header? In the response body, I only get back around 7 releases and I can't seem to paginate forward or backward by manually specifying the ?page=N query parameter.
For some background, FFmpeg has about 226 releases present in its GitHub repository, but I'm only getting around 7 of those, with no way to paginate through the rest.
Is there something I'm doing wrong here that would limit my responses back from the GitHub v3 API?
GitHub conflates its proprietary Releases feature with regular Git tags. Many of the "releases" that you see for ffmpeg are actually just tags.
Here is an example of a real release. Note how it contains much more information than a tag does. Despite their web UI showing tags mixed in with releases, GitHub's release API endpoint doesn't include regular tags:
This returns a list of releases, which does not include regular Git tags that have not been associated with a release. To get a list of Git tags, use the Repository Tags API.
Using the tags endpoint as GitHub suggests gives more results, and includes a Link header as you expect:
curl -is -H 'Accept: application/vnd.github.v3+json' \
https://api.github.com/repos/ffmpeg/ffmpeg/tags
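Once you're on the tags endpoint, the usual GitHub pagination applies: per_page (up to 100) and page are the documented query parameters, and the Link header in the response carries the next/prev/last page URLs. For example:
curl -is -H 'Accept: application/vnd.github.v3+json' \
'https://api.github.com/repos/ffmpeg/ffmpeg/tags?per_page=100&page=2'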

Questions on proper REST API design, specifically on the PUT action when updating a resource

I'm creating a REST interface (aren't we all), and I want to UPDATE a resource.
So I plan to use a PUT.
So I read this.
My takeaway is that I PUT to a URL like this:
/hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c
with a payload, and then receive a permanent redirect to the URL where the client can GET the updated version of the resource.
In this case it happens to be the same URL, different action.
So my questions are:
Is my understanding correct that PUT is the right verb for updating a resource, and am I using it correctly?
When a client gets a redirect, does it repeat the same action on the redirected URL that it performed on the original URL? If the answer is "it depends", is there a standard that most clients follow?
I ask the second question because Postman and my jQuery AJAX calls are choking; jQuery fails with net::ERR_TOO_MANY_REDIRECTS. So is it redirecting and trying the PUT again, only to get another redirect?
curl blows up too, but even though it says that if it gets a 301 it will switch to a GET, it doesn't really seem to do that when I look at the output (below).
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.
CURL OUTPUT (edited for brevity; also note how it says it's going to switch to a GET [incorrectly, from a POST], but then it seems to do a PUT anyway):
curl -X PUT -H "Authorization: Basic AUTHZ==" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -H "Postman-Token: e80657f0-a8f5-af77-1d9d-d7bc22ed0b30" -d '{ JSONDATA"}' http://localhost:8080/hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c -v -L
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> PUT /hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8080
> Accept: */*
> Authorization: Basic AUTHZ==
> Content-Type: application/json
> Cache-Control: no-cache
> Postman-Token: e80657f0-a8f5-af77-1d9d-d7bc22ed0b30
> Content-Length: 203
>
* upload completely sent off: 203 out of 203 bytes
< HTTP/1.1 301 Moved Permanently
< Connection: keep-alive
< X-Powered-By: Undertow/1
< Set-Cookie: rememberMe=deleteMe; Path=/hc; Max-Age=0; Expires=Fri, 20-Feb-2015 03:53:28 GMT
< Set-Cookie: JSESSIONID=uwI3_41LAa7vlvapTsrZdw10.macbook-air; path=/hc
* Server WildFly/8 is not blacklisted
< Server: WildFly/8
< Location: /hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c
< Content-Length: 0
< Date: Sat, 21 Feb 2015 03:53:28 GMT
<
* Connection #0 to host localhost left intact
* Issue another request to this URL: 'http://localhost:8080/hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c'
* Switch from POST to GET
* Found bundle for host localhost: 0x7f9e4b415430
* Re-using existing connection! (#0) with host localhost
* Connected to localhost (127.0.0.1) port 8080 (#0)
> PUT /hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8080
> Accept: */*
> Authorization: Basic dGVzdHVzZXIxOlBhc3N3b3JkMQ==
> Content-Type: application/json
> Cache-Control: no-cache
> Postman-Token: e80657f0-a8f5-af77-1d9d-d7bc22ed0b30
>
< HTTP/1.1 500 Internal Server Error
< Connection: keep-alive
< Set-Cookie: JSESSIONID=fDXxlH2xI-0-DEaC6Dj5EhD9.macbook-air; path=/hc
< Content-Type: text/html; charset=UTF-8
< Content-Length: 8593
< Date: Sat, 21 Feb 2015 03:53:28 GMT
<
...failure ensues... It actually does a PUT
Thanks in advance.
I think you're reading too much into the 301 redirect section.
If you want to update a resource using PUT, return:
201 Created: if the resource was created
200 OK: with the updated resource
The 301 in question only applies if there actually is a redirect involved: for example, if something can be identified by name and you need to redirect to a URL that carries the ID, or if you refactored and people are still consuming the old endpoint.
So, do you really need to redirect your PUT requests? You should be sending back the updated resource in the same response with a 200, as stated above, instead of "redirecting to GET".
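For illustration, here is what a redirect-free exchange could look like (hypothetical payload and response body, same endpoint as in the question):
curl -i -X PUT -H "Authorization: Basic AUTHZ==" -H "Content-Type: application/json" \
-d '{"name":"updated event"}' \
http://localhost:8080/hc/api/v1/organizer/event/762d36c2-afc5-4c51-84eb-9b5b0ef2990c

HTTP/1.1 200 OK
Content-Type: application/json

{"id":"762d36c2-afc5-4c51-84eb-9b5b0ef2990c","name":"updated event"}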

curl, Play & the Expect: 100-continue header

Consider a web service written in Play, which accepts POST requests (for uploads). When testing this with a medium-sized image (~75K), I found some strange behaviour. Code speaks more clearly than a long explanation, so:
$ curl -vX POST localhost:9000/path/to/upload/API -H "Content-Type: image/jpeg" -d @/path/to/mascot.jpg
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9000 (#0)
> POST /path/to/upload/API HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:9000
> Accept: */*
> Content-Type: image/jpeg
> Content-Length: 27442
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Content-Length: 16
<
* Connection #0 to host localhost left intact
{"success":true}
As you can see, curl decided to send the header Content-Length: 27442, but that's not the real size; the file is 75211 bytes, and in Play I indeed got a body of only 27442 bytes. Of course, this is not the intended behaviour, so I tried a different tool: instead of curl I used the POST tool from libwww-perl:
cat /path/to/mascot.jpg | POST -uUsSeE -c image/jpeg http://localhost:9000/path/to/upload/API
POST http://localhost:9000/path/to/upload/API
User-Agent: lwp-request/6.03 libwww-perl/6.05
Content-Length: 75211
Content-Type: image/jpeg
200 OK
Content-Length: 16
Content-Type: application/json; charset=utf-8
Client-Date: Mon, 16 Jun 2014 09:21:00 GMT
Client-Peer: 127.0.0.1:9000
Client-Response-Num: 1
{"success":true}
This request succeeded, so I started to pay more attention to the differences between the tools. For starters, the Content-Length header was correct, but also, the Expect header was missing from the second try. I want the request to succeed either way. The full list of headers as seen in Play (via request.headers) is:
for curl:
ArrayBuffer((Content-Length,ArrayBuffer(27442)),
(Accept,ArrayBuffer(*/*)),
(Content-Type,ArrayBuffer(image/jpeg)),
(Expect,ArrayBuffer(100-continue)),
(User-Agent,ArrayBuffer(curl/7.35.0)),
(Host,ArrayBuffer(localhost:9000)))
for the libwww-perl POST:
ArrayBuffer((TE,ArrayBuffer(deflate,gzip;q=0.3)),
(Connection,ArrayBuffer(TE, close)),
(Content-Length,ArrayBuffer(75211)),
(Content-Type,ArrayBuffer(image/jpeg)),
(User-Agent,ArrayBuffer(lwp-request/6.03 libwww-perl/6.05)),
(Host,ArrayBuffer(localhost:9000)))
So my current thoughts are: the simpler Perl tool used a single request, which is bad practice. The better way would be to wait for a 100 Continue confirmation (especially if you're going to upload several GB of data...). curl continues to send data until it receives a 200 OK or some bad-request error code. So why does Play send the 200 OK response without waiting for the next chunk? Is it because curl specified the wrong Content-Length? If it's wrong at all... (perhaps it refers to the size of the current chunk?)
So where does the problem lie? In curl or in the Play web app? And how do I fix it?
The problem was in my curl command. I used the -d argument, which is short for --data or --data-ascii, when I should have used the --data-binary argument.
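For reference, the corrected command looks like this (same hypothetical endpoint as above):
$ curl -vX POST localhost:9000/path/to/upload/API -H "Content-Type: image/jpeg" --data-binary @/path/to/mascot.jpg
--data-binary sends the file exactly as-is, whereas -d/--data-ascii strips carriage returns and newlines from the data, which is why curl announced the smaller Content-Length.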

How to request only a web page header using netcat?

I am not sure whether netcat has some command for requesting just the HTTP header, or whether I should use sed to filter the result. If sed is used in this case, do I just have to extract everything before the first "<" occurs?
The command line tool curl has the -I option which only gets the headers:
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.
Demo:
$ curl -I stackoverflow.co.uk
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 503
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 21:06:15 GMT
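If you do want to use netcat itself, there is no dedicated flag; you write the HTTP request by hand and use the HEAD method, which asks the server for the headers only, so no sed filtering is needed. A minimal sketch, assuming plain HTTP on port 80:
$ printf 'HEAD / HTTP/1.1\r\nHost: stackoverflow.co.uk\r\nConnection: close\r\n\r\n' | nc stackoverflow.co.uk 80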