Developing the client for the icecast server - streaming

I am developing a client for the Icecast server (www.icecast.org). Can anybody tell me what format they use for streaming the content?
I have looked through their pages, but there is no information about the stream format at all.
I then checked a Wireshark trace, and as far as I understand, the audio data I receive within the 200 OK response to the GET request is just plain binary audio data with no metadata included, so compared to SHOUTcast or HTTP Live Streaming (HLS) it is a relatively simple approach.
Is that right? Any experience with it?
Wireshark trace snippet:
GET /bonton-128.mp3 HTTP/1.1
Host: icecast3.play.cz
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-US
Accept-Encoding: gzip, deflate
Connection: keep-alive
HTTP/1.0 200 OK
Content-Type: audio/mpeg
icy-br:128
ice-audio-info: ice-samplerate=44100;ice-bitrate=128;ice-channels=2
icy-br:128
icy-description:Radio Bonton
icy-genre:Pop / Rock
icy-name:Radio Bonton
icy-pub:0
icy-url:http://www.radiobonton.cz
Server: Icecast 2.3.2
Cache-Control: no-cache
The AAC or MPEG audio data then follows here.
Thanks and regards,
STeN

For your purposes, Icecast and SHOUTcast are equivalent.
They both use a bastardized version of HTTP. In fact, you can make a simple HTTP request and use standard HTTP client libraries, and it will almost always work just fine. The only thing different is that SHOUTcast will return ICY 200 OK instead of HTTP 200 OK in its response.
Now if you make the request, as you have done above, you get a standard audio stream that you can play directly. As you have pointed out, MP3 and AAC are used almost exclusively, but other formats can be used.
If you want metadata, you have to tell the server you are prepared to receive it. You have to put this header in your request:
Icy-MetaData:1
Once you do that, you will see another header come back to you in the response, such as icy-metaint:8192, which means that every 8192 bytes, you will receive a chunk of metadata.
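To make the interleaving concrete, here is a minimal Python sketch of parsing one such block; the function name and the synthetic byte stream are my own for illustration, not part of any Icecast API:

```python
import io

def read_icy_block(stream, metaint):
    """Read one audio block plus its metadata chunk from an ICY stream.

    After every `metaint` bytes of audio there is one length byte L,
    followed by L * 16 bytes of metadata padded with NUL bytes,
    typically of the form "StreamTitle='Artist - Title';".
    """
    audio = stream.read(metaint)
    meta_len = stream.read(1)[0] * 16  # length byte counts 16-byte units
    metadata = stream.read(meta_len).rstrip(b"\x00") if meta_len else b""
    return audio, metadata

# Synthetic example: 8 audio bytes, then a 16-byte metadata chunk.
fake = io.BytesIO(b"AUDIODAT" + b"\x01" + b"StreamTitle='x';")
print(read_icy_block(fake, 8))  # (b'AUDIODAT', b"StreamTitle='x';")
```

In a real client you would loop this over the socket, feed the audio bytes to your decoder, and parse the StreamTitle out of the metadata chunk when it is non-empty.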
I won't go into further details because this is already well documented. No need to re-type the wheel:
Pulling Track Info From an Audio Stream Using PHP
http://www.smackfu.com/stuff/programming/shoutcast.html
However, if you do have questions as you go, please post them on StackOverflow and tag them as icecast or shoutcast, and I will be happy to assist you.

I have just recently finished a project for a radio station that used Icecast. I want to share the radio player and some PHP wrappers that I used to get information from Centova, Icecast and Last.fm.

Related

Streaming data/file to a REST service. Files/attachments with huge size

I am trying to figure out whether there is a way to stream large files chunk by chunk between client and server using HTTP Transfer-Encoding: chunked in REST. Semantically, the REST service provider accepts only application/json, but I read about Chunked Transfer Encoding and wonder whether I can make this work with any REST client, for example Apache HttpClient.
Handling large files (memory overhead grows under normal/heavy loads) is always a challenge with a REST API during transfer, so is there an optimal solution for this?
If not chunking, is there any other way, such as reading bytes into a fixed buffer and transmitting them over HTTP? The service provider is not willing to change the REST contract and always expects the application/json media type, with the attachment being part of a multipart request.
The use case that comes to my mind is handling email attachments, which are typically big in size.
Please advise.
The main use-case for Chunked transfer encoding is sending data for which you can't accurately predict the Content-Length.
So if your JSON is built up dynamically, and you can't accurately know in advance how many bytes it will be, chunked is the way to go.
However, if you can predict the Content-Length, there is no need to use chunked encoding, and you shouldn't, as it adds a bit of overhead. Chunked is not a requirement for streaming responses.
Also note:
Most programming languages/HTTP frameworks will automatically use Chunked encoding in circumstances where the Content-Length cannot be predicted. For Node.js this is for example any time you just write to the response stream, and for PHP this is basically always unless you explicitly set a Content-Length. So in many cases there's basically nothing you need to do to switch to this.
Chunked no longer exists in HTTP/2 as HTTP/2 has its own framing protocol that supports responses with unknown sizes.
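To show what the framing itself looks like, here is a small Python sketch (the function name is my own) that frames an iterable of byte buffers in HTTP/1.1 chunked encoding, which is how a large file could be streamed in fixed-size pieces:

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks in HTTP/1.1 chunked encoding.

    Each chunk is preceded by its size in hexadecimal plus CRLF, and a
    zero-length chunk terminates the body (RFC 7230, section 4.1).
    """
    out = b""
    for chunk in chunks:
        if chunk:  # a zero-length chunk is reserved for the terminator
            out += b"%x\r\n%s\r\n" % (len(chunk), chunk)
    return out + b"0\r\n\r\n"

# Stream a payload in fixed-size buffers instead of all at once:
payload = b"x" * 10
print(encode_chunked(payload[i:i + 4] for i in range(0, len(payload), 4)))
```

In practice the HTTP library does this for you (e.g. Python's http.client when you pass an iterable body with encode_chunked=True), but the wire format is exactly this.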

Handling chunked requests in Scalatra

I'm interested in processing chunked requests with Scalatra. Does Scalatra support access to the individual chunks of a chunked HTTP request? Or do I have to wait for the end of the chunked request and process the whole request afterwards?
Scalatra is just a wrapper around Java Servlets. It allows you to access richRequest.inputStream directly. Everything else is the same as for Java.
You might need to parse chunked encoding from the input stream.
See also: Chunked http decoding in java?
You can find a wrapper for InputStream here: http://www.java2s.com/Code/Java/File-Input-Output/AnInputStreamthatimplementsHTTP11chunking.htm
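If you do end up parsing the chunked framing yourself, the logic is simple. Here is a minimal sketch in Python of the decoding side (function name mine; a Java version over the servlet InputStream would follow the same steps):

```python
import io

def decode_chunked(stream):
    """Decode an HTTP/1.1 chunked-encoded body from a binary stream."""
    body = b""
    while True:
        # Each chunk starts with its size in hex, terminated by CRLF.
        size_line = stream.readline().strip()
        size = int(size_line.split(b";")[0], 16)  # ignore chunk extensions
        if size == 0:
            break  # last chunk; optional trailers follow
        body += stream.read(size)
        stream.read(2)  # consume the CRLF after each chunk's data
    return body

raw = b"5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n"
print(decode_chunked(io.BytesIO(raw)))  # b'Hello, world'
```

Note that in most servlet containers this is unnecessary: the container typically decodes the chunked framing for you, and the input stream already yields the raw body bytes.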

Play Framework: Gzip compression/decompression of data over WebSocket

Is there a way in Play to pass data compressed with Gzip over WebSocket channels? I read in https://www.playframework.com/documentation/2.3.1/GzipEncoding about GzipFilter. Is there a way to use it with WebSocket?
Besides my server also accepts HTTP requests but I don't want to apply GZIP to them. Is there a way to use Gzip ONLY WITH WebSocket, excluding HTTP?
AFAIK there is a compression extension for WebSocket connections (https://datatracker.ietf.org/doc/html/draft-tyoshino-hybi-permessage-compression-00).
In some browsers this is implemented and enabled by default by now (Chrome).
In others (Firefox, WebKit) it is not (yet): https://bugzilla.mozilla.org/show_bug.cgi?id=792831 and https://bugs.webkit.org/show_bug.cgi?id=98840
As long as client and server support this there shouldn't be any problem just to enable Gzip.
You can configure when Gzip is used (at least to some degree). For example, if you have a JSON API and you also serve normal HTML, you can decide to gzip only the JSON data:
new GzipFilter(shouldGzip = (request, response) =>
response.headers.get("Content-Type").exists(_.startsWith("application/json")))

Google Cloud Storage and gzip-encoded data when client does not declare gzip support with Accept-Encoding

I'm comparing various options for hosting a static website. Right now I'm hesitating between Google App Engine and Google Cloud Storage.
For App Engine, I know from the documentation that it's possible to have content served compressed only for clients that declare support for this (via the HTTP Accept-Encoding header).
For Cloud Storage, I see that if you upload compressed content and set the Content-Encoding field to "gzip", Cloud Storage will correctly serve it back compressed to clients that declare support for it. My question is: what happens with Cloud Storage when a client does a GET on an object stored with "gzip" content encoding, but does not declare support for gzip-compressed data with Accept-Encoding in its request? Is the data decompressed on the fly (which is what I would hope), or is some kind of error returned, or is the data served compressed anyway (not great)?
Indeed you can store objects in Google Cloud Storage with Content-Encoding: gzip. If a subsequent request for this object does not include the Accept-Encoding: gzip header, the object will get decompressed on the fly, yes.
(Sidenote: Content-Encoding should not be confused with Content-Type, e.g., application/gzip, which is always left untouched.)

Does compressing data before encrypting it violate any web standards to fulfill the "Accept-Encoding: gzip, deflate" header in an HTTP request?

We have a REST interface that returns encrypted messages. When we receive a request with the Accept-Encoding: gzip, deflate header, we compress the data. Currently, we compress the data AFTER we encrypt the data, but we noticed that compressing the data BEFORE encrypting it drastically reduces the content length and would increase performance. Does compressing the data before encrypting it violate any web standards?
The Accept-Encoding header is about which Content-Encoding values the client will accept. You are not obliged to compress just because someone says they will accept compressed content in transit, so you are not doing anything wrong if you don't compress after encryption at all. (Which, if your encryption is worth anything, will not compress at all.)
You can't set Content-Encoding: gzip on compressed-then-encrypted data and expect the client to decrypt before it decompresses, because the Content-Encoding header says "undo this transformation before you do anything else with the content"; the client would try to gunzip the ciphertext.
However, compressing before encrypting without a Content-Encoding header is perfectly legal. Just do that: compress, then encrypt, and don't set Content-Encoding or compress again after encryption.
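The size difference the question describes is easy to reproduce. The sketch below uses a toy XOR-with-random-pad stand-in for encryption (illustration only, not a real cipher) purely because, like real ciphertext, its output is high-entropy and therefore incompressible:

```python
import os
import zlib

def xor_with_random_pad(data):
    """Toy stand-in for encryption: XOR with a fresh random pad.

    Illustration only; use a real cipher in practice. Like real
    ciphertext, the output is high-entropy and won't compress.
    """
    pad = os.urandom(len(data))
    return bytes(b ^ k for b, k in zip(data, pad))

plaintext = b'{"message": "hello, hello, hello"}' * 500

# Compress AFTER encrypting: the ciphertext has no redundancy left,
# so zlib cannot shrink it (it even grows slightly).
compress_after = zlib.compress(xor_with_random_pad(plaintext))

# Compress BEFORE encrypting: redundancy is removed first.
compress_before = xor_with_random_pad(zlib.compress(plaintext))

print(len(plaintext), len(compress_after), len(compress_before))
```

Running this shows compress-before-encrypt shrinking the repetitive payload by orders of magnitude, while compress-after-encrypt saves nothing, which is exactly why the order matters.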