How do I extract streamed "now playing" data embedded in an Icecast audio (radio) stream on a Samsung Smart TV?

I am creating a Samsung TV app for a radio station and they provide the "Now Playing" info within the Icecast stream. Is it possible to (and how do I) extract this information?

Shoutcast supports "Icy-MetaData", an additional field in the request header. When set, it's a request for the Shoutcast server to embed metadata about the stream at periodic intervals (once every "icy-metaint" bytes) in the encoded audio stream itself. The value of "icy-metaint" is decided by the Shoutcast server configuration and is sent to the client as part of the initial reply.
Check out this post on the Shoutcast Internet Radio Protocol for details on Icy-MetaData and sample code in C.
A somewhat more technical discussion is also available at http://forums.radiotoolbox.com/viewtopic.php?t=74
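As a rough illustration of the request side of the protocol described above, here is a Python sketch that builds a GET request with the "Icy-MetaData: 1" header and parses "icy-metaint" out of the server's initial reply. The host, path, and usage snippet are placeholders, not from the original answer:

```python
# Sketch: ask a Shoutcast/Icecast server to embed metadata, then read the
# metadata interval ("icy-metaint") from its initial reply headers.

def parse_icy_headers(raw):
    """Parse raw ICY/HTTP response headers into a lowercase-keyed dict."""
    headers = {}
    for line in raw.split(b"\r\n")[1:]:          # skip the "ICY 200 OK" status line
        if b":" in line:
            key, _, value = line.partition(b":")
            headers[key.strip().lower().decode()] = value.strip().decode()
    return headers

def build_request(host, path):
    """Build a GET request that asks the server to embed stream metadata."""
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"Icy-MetaData: 1\r\n"                # request embedded metadata
            f"\r\n").encode()

# Usage (network access required; host/port are placeholders):
#   import socket
#   s = socket.create_connection(("stream.example.com", 8000))
#   s.sendall(build_request("stream.example.com", "/live"))
#   raw = s.recv(4096)
#   headers = parse_icy_headers(raw.split(b"\r\n\r\n")[0])
#   metaint = int(headers["icy-metaint"])
```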

Yes, this is possible. The metadata is interleaved into the stream data at a specified interval. Basically, you read 8192 bytes (or whatever is specified by the Icy-MetaInt response header), and then you read the metadata block.
The first byte of that metadata block tells you the length of the metadata: multiply its value by 16 to get the size in bytes. A length of 0 means there is no updated metadata.
Once you read the meta block, then you go back to reading stream data.
I have all of this in more detail in my answer here: https://stackoverflow.com/a/4914538/362536 While I know you're not writing PHP, the principle is identical no matter what language.
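The read loop described above can be sketched in Python like this. It reads one audio chunk of "metaint" bytes, then the length byte, then the metadata itself, pulling out the StreamTitle value if present (the function name and stream source are illustrative):

```python
# Sketch of one audio+metadata cycle: read metaint bytes of audio, one
# length byte, then (length byte * 16) bytes of metadata.

import io
import re

def read_stream_chunk(stream, metaint):
    """Return (audio_bytes, title_or_None) for one cycle of the stream."""
    audio = stream.read(metaint)
    length_byte = stream.read(1)
    if not length_byte:
        return audio, None                 # stream ended
    meta_len = length_byte[0] * 16         # length byte counts 16-byte units
    title = None
    if meta_len:
        meta = stream.read(meta_len).rstrip(b"\x00").decode("utf-8", "replace")
        m = re.search(r"StreamTitle='([^']*)';", meta)
        if m:
            title = m.group(1)
    return audio, title
```

A metadata length of 0 (no updated metadata, as noted above) simply skips the parse and goes straight back to audio.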

The native player offers no way to get this metadata.
You could probably use the jQuery.stream plugin to fetch the metadata directly, but you would need to set up Access-Control-Allow-Origin on your Icecast server, and I have no idea if it will work.
The best solution here would be to use this script:
http://code.google.com/p/icecast-now-playing-script/
You install this script on your web server, and the SmartTV application polls it via AJAX every so often while your stream is playing.
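A minimal sketch of that polling idea (the actual SmartTV app would do this with AJAX in JavaScript; this Python version just illustrates the shape, and the URL and 10-second interval are placeholder assumptions):

```python
# Periodically fetch "now playing" text from a server-side helper script
# while the stream plays. URL and interval are placeholders.

import time
import urllib.request

def fetch_now_playing(url):
    """Fetch the current track title from the helper script as plain text."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode("utf-8").strip()

def poll(url, interval=10, cycles=3):
    """Poll the script a few times; keep playing even if a poll fails."""
    titles = []
    for _ in range(cycles):
        try:
            titles.append(fetch_now_playing(url))
        except OSError:
            titles.append(None)
        time.sleep(interval)
    return titles
```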

I just created a radio player for Icecast and Centova; it uses the Last.fm API to extract the song metadata. https://github.com/johndavedecano/Icecast-Centova-LastFM-API

If you are doing this for a radio station, then they can provide this data through the XSLT feature of Icecast; I put together some old XSLT examples for offering stream metadata at some point.
The other option is to run Icecast 2.4.1, or to add the two files (xml2json.xsl and status-json.xsl) to an older version.
Note that only Icecast 2.4.1 or newer supports adding CORS/ACAO headers that might be necessary to access data from a web app / web site.
If you are not directly cooperating with the radio station and can't ask them to do this, then disregard this answer. Someone else might find it useful though.
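If the station does expose status-json.xsl as described above, reading the title from it is straightforward. This sketch assumes the usual 2.4.x JSON layout, where "icestats.source" is a single object for one mountpoint and a list when several are active; the server address in the usage comment is a placeholder:

```python
# Sketch: extract current titles from Icecast's status-json.xsl output.

import json
import urllib.request

def titles_from_status(status):
    """Return {listen_url: title} from a parsed status-json.xsl document."""
    sources = status.get("icestats", {}).get("source", [])
    if isinstance(sources, dict):       # single mountpoint: not wrapped in a list
        sources = [sources]
    return {s.get("listenurl"): s.get("title") for s in sources}

# Usage (address is a placeholder):
#   with urllib.request.urlopen("http://radio.example.com:8000/status-json.xsl") as r:
#       print(titles_from_status(json.load(r)))
```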

Related

Swift - HLS CUE Metadata Capture

Working with HLS and therefore m3u8 files that contain ongoing metadata during the stream. I need to intercept metadata such as #EXT-X-CUE-OUT and #EXT-X-CUE-IN. I am not finding a process with AVPlayer that reports on these kinds of tags within the stream. Is there a means of capturing this sort of metadata during the streaming of the content? I know there are things like AVPlayerItemMetadataCollector, however, that does not seem to address the tags I am talking about.

(Bluemix) Conversion of audio file formats

I've created an Android application and connected different Watson services, available on Bluemix, to it: Natural Language Classifier, Visual Recognition and Speech to Text.
1) The first and second work well; I have a little problem with the third one, regarding the audio format. The app should record 30 seconds of audio, save it to memory and send it to the service to obtain the corresponding text.
I've used an instance of the MediaRecorder class to record the file. It works, but the available output formats are AAC_ADTS, AMR_WB, AMR_NB, MPEG_4, THREE_GPP, RAW_AMR and WEBM.
The service, however, accepts these input formats: FLAC, WAV, PCM.
What is the best way to convert the audio file from the first set of formats to the second? Is there a simple method to do that? For example, from THREE_GPP or MPEG_4 to WAV or PCM.
I've googled for information and ideas, but found only a few slow methods that I didn't understand well.
I'm looking for a fast method, because I want the latency of conversion and processing by the service to be as short as possible.
Is there an available library that does this? Or a simple code snippet?
2) One last thing:
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV);
System.out.println(transcript);
"transcript" is a JSON response. Is there a method to directly extract only the text, or should I parse the JSON?
Any suggestion will be appreciated!
Thanks!
To convert the audio recordings to different formats/encodings you could:
- find an audio-encoding library to include in your app that supports the required formats, though it could be very heavy to run on a mobile device (if you find the right lib)
- develop an external web application that receives your recording, encodes it, and returns it as a file or a stream
- develop a simple web application working like a live proxy, which gets the recorded file, converts it on the fly, and sends it to Watson
Both the 2nd and 3rd options expect you to use an encoding tool like ffmpeg.
The 3rd one is a little more complex, but could save your Android device two HTTP requests.
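The server-side conversion step in options 2 and 3 could be sketched like this, shelling out to ffmpeg (assumed to be installed on the server). The file paths, sample rate, and mono/16-bit settings are illustrative defaults, not requirements stated above:

```python
# Sketch: convert an uploaded recording (e.g. .3gp or .mp4) to WAV with ffmpeg.

import subprocess

def build_ffmpeg_cmd(src, dst, rate=16000):
    """Build the ffmpeg command line for a mono 16-bit WAV conversion."""
    return [
        "ffmpeg", "-y",          # overwrite the output file if it exists
        "-i", src,               # input recording from the device
        "-ac", "1",              # downmix to mono
        "-ar", str(rate),        # resample
        "-sample_fmt", "s16",    # 16-bit samples
        dst,
    ]

def to_wav(src, dst, rate=16000):
    """Run the conversion; raises CalledProcessError if ffmpeg fails."""
    subprocess.run(build_ffmpeg_cmd(src, dst, rate), check=True)
    return dst
```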

Live Stream midroll ad injection in Wowza Streaming Engine

I haven't found any way to automate inserting an ad spot into an existing live stream without stopping the streams and/or using a Flash client to interact with Wowza.
The idea is that these ads can be randomly chosen and inserted into the stream programmatically and automatically.
Can someone please point me in the right direction of how to properly change sources on the fly?
Thanks!
The following articles may be of interest to you:
https://www.wowza.com/docs/how-to-switch-streams-using-stream-class-streams
https://www.wowza.com/docs/how-to-control-stream-class-streams-dynamically-modulestreamcontrol
https://www.wowza.com/docs/how-to-use-ipublishingprovider-api-to-publish-server-side-live-streams
I've previously created a custom module for Wowza that allows you to create an output stream from a live input stream, then control the output and switch between the live input stream and other live or on demand streams.

Flex mobile project for IOS, server side proxy

I am trying to write an iPhone app that loads a video from a web server built into a camera (connected to the iPhone via Wi-Fi).
I am using Flash Builder / a Flex mobile project. I'm not particularly familiar with it, but I'm finding it easier to understand than Xcode!
The files from the camera have the wrong file extension, so they will not play in the iOS video app. Can I set up a server-side proxy in Flex mobile and use this to alter the file extension, then pass the link to the iOS video app?
If so, any help anybody could give me (examples etc.) would be gratefully received; I have been trying to get around this problem for a couple of weeks.
Cheers
Toby
I can explain, conceptually, what a server side proxy would do in this case. Let's say you are retrieving a URL, like this:
http://myserver.com/somethingSomething/DarkSide/
to retrieve a video stream from the server. You say it won't play because the file extension is wrong; so you have to, in essence, use a different URL with the right extension. Set up 'search engine friendly' URLs on the server, and do something like this:
http://myserver.com/myProxy.cfm/streamURL/somethingSomething%5CDarkSide/Name/myProxyVid.mp4
Here is some information on how to deal with Search Engine Friendly URLs in ColdFusion. Here is some information on how to deal with Search Engine Friendly URls in PHP. I'm sure Other technologies will come up in a Google Search.
In the URL above; this is what you have:
http://myserver.com/: This is your server
myProxy.cfm: This is your server side file; that is a proxy
streamURL/somethingSomething%5CDarkSide/Name/myProxyVid.mp4: This is the query string. It consists of two name/value pairs. The first is the streamURL: the URL you want to retrieve with your proxy. The second is just filler; but as long as it ends with the file extension .mp4, the URL should be seen as an 'mp4 file'.
The code behind your myProxy.cfm should be something like this, in pseudo-code:
Parse URL Query String
Retrieve Stream.
Set mimeType on return value.
Return stream data
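The pseudo-code above could be sketched in Python like this. The path layout mirrors the ColdFusion example; the function names are illustrative, and a real proxy would also relay the fetched bytes with the video/mp4 content type set:

```python
# Sketch: parse the 'search engine friendly' proxy path into name/value
# pairs, as the first step of the pseudo-code above.

import urllib.parse

def parse_proxy_path(path):
    """Turn /myProxy.cfm/streamURL/<url>/Name/<file>.mp4 into a dict."""
    parts = [urllib.parse.unquote(p) for p in path.split("/") if p]
    rest = parts[1:]                     # parts[0] is the proxy script itself
    return dict(zip(rest[0::2], rest[1::2]))

# The handler would then:
#   1. fetch params["streamURL"] from the origin server,
#   2. set "Content-Type: video/mp4" on the response,
#   3. stream the fetched bytes back to the client.
```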
I used a similar approach on TheFlexShow.com to track the number of people who watch our screencast online vs. downloading it first. I also used the same approach to keep track of impressions of advertisers' banner ads. For example, the browser can't tell that this is not a JPG image:
http://www.theflexshow.com/blog/mediaDisplay.cfm?mediaid=51
Based on this and one of your previous questions, I am not convinced this is the best solution, though. I make a lot of assumptions here: I assume that the problem with playing the file relates to the extension and not the file data, and I assume that you are not actually streaming video with an open connection on both client and server to send data back and forth.

Is it possible to inject ID3 tags into an MP3 stream?

I'm making a relay audio stream server (like Shoutcast relaying, but with customization) in PHP.
Is it possible to dynamically add ID3 tags every specified chunk of data (maybe every second, or every 64 KB)?
If it's possible, how do I do it?
ID3 tags occur at the beginning of an MP3, but since an MP3 is just a series of frames, it's possible to cut them (with, say, mp3splt) without re-encoding. That stream would be ID3 tags followed by MP3 data, and then it would repeat in the same format for the next part of the stream.
Clearly I'm ignoring a lot of the details.