I'm new to Red5. I would like to know how I can take a stream from an address and port (something like rr.tt.yy.uu:1234) and publish it using Red5. I've looked at the oflaDemo and the Simple Broadcaster examples included with Red5, but they only take input from the camera, and I need to take an existing stream. Can you help me, please? Maybe with an example or a guideline.
Thanks in advance
If it's a Flash stream you can use the RTMPClient, which is part of Red5. If it's an RTSP stream, or the output of something like an Axis camera, then you will need to consume it with something like Xuggler and probably transcode it before trying to access it with Red5.
You can't publish a stream with Red5 from that kind of information alone, i.e. an IP address and port number.
Useful link for transcoding with Red5: Xuggler & ffmpeg
Please note that the -re option (read the input at its native rate, i.e. 'near real time' mode) must come before the input file, not after it.
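For example, a rough sketch of transcoding a source and publishing it into Red5 over RTMP (the file name, codecs and the oflaDemo application/stream names are placeholders; for a live network source you would use its URL as the -i input and usually drop -re, since it only matters when reading from a file):
ffmpeg -re -i sample.mp4 -c:v libx264 -c:a aac -f flv rtmp://localhost/oflaDemo/livestream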
Well, from the Flash client side it's basically netStream.publish("someName").
I can't seem to find a way in Wowza, or any other solution like the nginx-rtmp-module, to take a live stream's RTMP metadata and insert it into the specific frames of the saved MP4 file.
Maybe I'm looking at this the wrong way, but our live streaming app only allows the broadcaster to post comments/feedback, and we want to make sure the viewers see those comments on the exact frame the comment was made on by the broadcaster. We tried keeping the two separate, with the video streamed via Wowza and the comments sent to clients via PubNub, but there is just too much variance in when the comments show up.
The only way I can think of is to include the broadcaster's comments in the metadata for the specific frame the comment was made on. Then we'd be guaranteed to have the comment show up in the right place.
Help is much appreciated.
Thanks!
I've created an Android application and I've connected different Watson services, available on Bluemix, to it: Natural Language Classifier, Visual Recognition and Speech to Text.
1) The first and the second work well; I have a small problem with the third one, regarding the audio format. The app should record 30 seconds of audio, save it to memory and send it to the service to obtain the corresponding text.
I've used an instance of the MediaRecorder class to record the file. It works, but the available output formats are AAC_ADTS, AMR_WB, AMR_NB, MPEG_4, THREE_GPP, RAW_AMR and WEBM.
The service, on the other hand, accepts these input formats: FLAC, WAV, PCM.
What is the best way to convert the audio file from the first set of outputs to the second one? Is there a simple method to do that? For example, from THREE_GPP or MPEG_4 to WAV or PCM.
I've googled for information and ideas, but I've only found a few slow methods that I don't understand well.
I'm looking for a fast method, because I want to keep the latency of the conversion and of the processing by the service as short as possible.
Is there an available library that does this? Or a simple code snippet?
2) One last thing:
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV);
System.out.println(transcript);
"transcript" is a json response. Is there a method to directly extract only the text, or should I parse the json?
Any suggestion will be appreciated!
Thanks!
To convert the audio recordings to different formats/encodings you could:
- find an audio encoder library to include in your app that supports the required formats, but it could be very heavy to run on a mobile device (if you find the right lib)
- develop an external web application to which you send your recording and which returns it, encoded, as a file or a stream
- develop a simple web application that works like a live proxy: it receives the recorded file, converts it on the fly and forwards it to Watson
Both the 2nd and the 3rd options expect you to use an encoding tool like ffmpeg (see the example command below).
The 3rd one is lighter to develop, and although it is a little more complex it could save you two HTTP requests from your Android device.
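As a rough illustration of that ffmpeg step (the file names and parameters are placeholders; Speech to Text is commonly fed 16 kHz mono WAV or FLAC, but check the service documentation for your model):
ffmpeg -i recording.3gp -ar 16000 -ac 1 recording.wav
On the second question: with the watson-developer-cloud Java SDK used in the snippet above, you normally don't have to parse the JSON by hand, because the SpeechResults object exposes the parsed structure through getters. A minimal sketch, assuming that SDK's speech_to_text v1 model classes (SpeechResults, Transcript); getter names may differ in other SDK versions:
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV);
StringBuilder text = new StringBuilder();
// Each result carries one or more alternatives; take the top-ranked one.
for (Transcript result : transcript.getResults()) {
    text.append(result.getAlternatives().get(0).getTranscript());
}
System.out.println(text.toString());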
I haven't found any way to automate inserting an ad spot into an existing live stream without stopping the streams and/or using a Flash client to interact with Wowza.
The idea is that these ads can be chosen at random and inserted into the stream programmatically, in an automated way.
Can someone please point me in the right direction of how to properly change sources on the fly?
Thanks!
The following articles may be of interest to you:
https://www.wowza.com/docs/how-to-switch-streams-using-stream-class-streams
https://www.wowza.com/docs/how-to-control-stream-class-streams-dynamically-modulestreamcontrol
https://www.wowza.com/docs/how-to-use-ipublishingprovider-api-to-publish-server-side-live-streams
I've previously created a custom module for Wowza that allows you to create an output stream from a live input stream, then control the output and switch between the live input stream and other live or on-demand streams.
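For reference, a minimal server-side sketch of the Stream class approach covered by the first two articles (the application instance, source and ad names are placeholders, and the module wiring/scheduling is omitted; see the linked docs for the complete setup):
import com.wowza.wms.application.IApplicationInstance;
import com.wowza.wms.stream.publish.Stream;

public class AdSwitcher {
    private Stream outputStream;

    public void start(IApplicationInstance appInstance) {
        // Server-side output stream that viewers actually subscribe to.
        outputStream = Stream.createInstance(appInstance, "outputStream");
        // start = -2 requests a live source, length = -1 means play until switched.
        outputStream.play("myLiveSource", -2, -1, true);
    }

    public void playAd(String adFileName) {
        // Switch the same output stream to a VOD ad; reset = true starts a fresh playlist.
        outputStream.play("mp4:" + adFileName, 0, -1, true);
    }

    public void backToLive() {
        outputStream.play("myLiveSource", -2, -1, true);
    }
}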
I am creating a Samsung TV app for a radio station and they provide the "Now Playing" info within the Icecast stream. Is it possible to (and how do I) extract this information?
Shoutcast supports "Icy-MetaData", an additional field in the request header. When set, it's a request to the Shoutcast server to embed metadata about the stream at periodic intervals (once every "icy-metaint" bytes) in the encoded audio stream itself. The value of "icy-metaint" is decided by the Shoutcast server configuration and is sent to the client as part of the initial reply.
Check out this post on Shoutcast Internet Radio Protocol for details on icy:metadata and sample code in C.
A somewhat more technical discussion is also available at
http://forums.radiotoolbox.com/viewtopic.php?t=74
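To make the mechanics concrete, the exchange looks roughly like this (the host and values are only typical examples, not guaranteed):
GET /stream HTTP/1.1
Host: radio.example.com
Icy-MetaData: 1

ICY 200 OK
icy-metaint: 8192
icy-name: Example Radio
...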
Yes, this is possible. The metadata is interleaved into the stream data at a specified interval. Basically, you read 8192 bytes (or whatever is specified by the Icy-MetaInt response header), and then you read the metadata block.
The first byte of that metadata block tells you the length of the metadata (multiply it by 16 to get the number of bytes to read). A length byte of 0 means there is no updated metadata.
Once you read the meta block, then you go back to reading stream data.
I have all of this in more detail in my answer here: https://stackoverflow.com/a/4914538/362536 While I know you're not writing PHP, the principle is identical no matter what language.
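A rough sketch of that read loop in Java (the stream URL is a placeholder, error handling and the actual audio playback are omitted, and note that some Shoutcast servers answer with a non-standard "ICY 200 OK" status line that HttpURLConnection may refuse):
import java.io.DataInputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IcyMetadataReader {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://radio.example.com:8000/stream"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Icy-MetaData", "1"); // ask the server to interleave metadata

        int metaInt = Integer.parseInt(conn.getHeaderField("icy-metaint"));
        DataInputStream in = new DataInputStream(conn.getInputStream());
        byte[] audio = new byte[metaInt];

        while (true) {
            in.readFully(audio);                          // metaInt bytes of audio data
            int metaLength = in.readUnsignedByte() * 16;  // length byte * 16 = metadata size
            if (metaLength > 0) {
                byte[] meta = new byte[metaLength];
                in.readFully(meta);
                // Typically looks like: StreamTitle='Artist - Song';StreamUrl='';
                System.out.println(new String(meta, StandardCharsets.UTF_8).trim());
            }
        }
    }
}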
From the native player there is no option to get this metadata.
You could probably use the jQuery.stream plugin to fetch the metadata directly, but you would need to set up Access-Control-Allow-Origin on your Icecast server, and I have no idea whether it will work.
The best solution here would be to use this script:
http://code.google.com/p/icecast-now-playing-script/
You install this script on your web server, and the SmartTV application polls it via AJAX every once in a while, while your stream is playing.
I just created a radio player for Icecast and Centova; it uses the Last.fm API to extract the song metadata. https://github.com/johndavedecano/Icecast-Centova-LastFM-API
If you are doing this for a radio station, then they can provide this data through the XSLT feature of Icecast. I did some random old XSLT examples for offering stream metadata at some point.
The other option is to run Icecast 2.4.1, or to add the two files (xml2json.xsl, status-json.xsl) to an older version.
Note that only Icecast 2.4.1 or newer supports adding CORS/ACAO headers that might be necessary to access data from a web app / web site.
If you are not directly cooperating with the radio station and can't ask them to do this, then disregard this answer. Someone else might find it useful though.
Is it possible to get Flash Media Interactive Server working in conjunction with MogileFS? What it boils down to is that I need FMIS to fetch the FLV files from MogileFS over HTTP. As far as I can tell, however, the FMIS can only fetch and stream files from a local store :/
Anyone have experience with this or other ideas?
Thanks!
You can use a pseudo-streaming setup with the PHP xmoov script: fetch the needed bytes of the files from MogileFS using PHP, and then push them to the client one chunk at a time.