JPG Images as Fallback from RTMP Live Stream - streaming

I would like to offer a few live broadcasts to users whose clients don't support any of the available streaming protocols, as auto-refreshing JPGs.
My idea is to use ffmpeg (or mjpg_streamer) to extract two frames per second from the RTMP live stream, Base64 encode them, and load them via JavaScript at half-second intervals, but with 5-50 concurrent streams this puts a heavy load on the server.
What would be the best way to get two (or more) images per second from multiple RTMP live streams as Base64-encoded JPGs?
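For reference, the extraction step I have in mind would look roughly like this (stream URL and output path are placeholders), overwriting one JPG per stream twice per second:

ffmpeg -i rtmp://example.com/live/stream1 -vf fps=2 -update 1 -q:v 5 /var/www/snapshots/stream1.jpg

with the client polling it along these lines (a cache-busting variant of the Base64 idea):

// naive sketch: re-request the snapshot every 500 ms
setInterval(function () {
    document.getElementById('fallback').src = '/snapshots/stream1.jpg?' + Date.now();
}, 500);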

Related

Playing streamed data from the server in client software

I'm trying to plan a learning path for a nodeJS module that streams all my output sounds to a server. Say I have a nodeJS module that forwards all outgoing sounds and music as packets to the server's port 8000. How can I connect some client's mp3 player to play back the streamed audio from the server? I mean, the buffer that is sent is just raw, messy bits; how do I make the audio player on the client recognize the format, connect to the stream, forward the packets to the player, and so on?
You need to open up a resource on the server (exposed through the response to a request) and transmit to it chunks of data from your original source according to the ranges the request asks for. So the request asks for data at some offset (in an additional field) and tries to download that resource, and you constantly fill the resource with fresh data so that it is always full.
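A rough sketch of that idea in Node (the in-memory buffer and all names are assumptions; a real server would need proper range validation):

const http = require('http');

let buffer = Buffer.alloc(0); // hypothetical: filled elsewhere with fresh audio data

http.createServer(function (req, res) {
  // e.g. "Range: bytes=1024-" sent by the client player
  const match = /bytes=(\d+)-/.exec(req.headers.range || '');
  const start = match ? parseInt(match[1], 10) : 0;
  const chunk = buffer.slice(start);
  res.writeHead(206, {
    'Content-Type': 'audio/mpeg',
    // total size is unknown for a live source, hence the "*"
    'Content-Range': 'bytes ' + start + '-' + (start + chunk.length - 1) + '/*'
  });
  res.end(chunk);
}).listen(8000);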
This is a rather complex topic but there are many examples and documentation already available.
Just doing a quick search these came out:
node (socket) live audio stream / broadcast
https://www.npmjs.com/package/audio-stream
By the way, I'm not an expert, but I think that if you want to do audio streaming, mp3 is probably not the right choice, and you may get some benefit from converting it to an intermediate streaming format.
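As a starting point, here is a minimal sketch of the serving side in Node; getAudioSource() is a hypothetical stand-in for whatever produces your encoded audio:

const http = require('http');

http.createServer(function (req, res) {
  // the Content-Type header is what lets the client's player recognize the format
  res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
  // hypothetical: getAudioSource() returns a Readable stream of already-encoded audio
  const source = getAudioSource();
  source.pipe(res); // Node sends the body chunked by default, so a live source works
  req.on('close', function () { source.unpipe(res); });
}).listen(8000);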

How can one measure video streaming latency?

We're building yet another video streaming service with an awesome killer feature™, and we need to estimate client latency to deliver off-stream events in sync. The video stream passes through several processors, including a CDN at the very end of the pipeline, so latency may vary and it's not possible to pass something along with the stream.
How can I measure the latency between the streamer and the consumer? We have a couple of weird algorithms, but they are not even close to being reliable. Reading RTMP timestamps is also not an option at the moment, and we're planning to deliver HLS as well.
One way would be to insert cue points / timed metadata into the stream and have your player read them. These can pass through the CDN, and you can use them to deliver events if you like, or just to measure the latency.
The procedure for inserting/reading cue points varies with the media server and video player. I know Wowza can insert cue points into RTMP streams and convert them to ID3 metadata for HLS streams.
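For example, if the encoder stamps each cue point with its wall-clock send time, a player built on hls.js (which surfaces ID3 metadata through its FRAG_PARSING_METADATA event) could estimate latency roughly like this; the payload format here is an assumption, and it only works if the encoder and client clocks are in sync:

hls.on(Hls.Events.FRAG_PARSING_METADATA, function (event, data) {
  data.samples.forEach(function (sample) {
    // naive: assumes the ID3 payload is just the encoder's send time in ms as text
    const sentAt = parseInt(new TextDecoder().decode(sample.data).replace(/\D/g, ''), 10);
    if (!isNaN(sentAt)) {
      console.log('approximate end-to-end latency:', Date.now() - sentAt, 'ms');
    }
  });
});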

Live Streaming Service Development

I am about to develop a service that involves interactive live audio streaming. Interactive in the sense that a moderator can have his stream paused and, upon request, stream audio coming from one of his listeners (during the streaming session).
It's more like a large pipe where the water flowing through can come in from only one of many small pipes connected to it at a time, with a moderator assigned to each stream controlling which pipe is opened. I know nothing about media streaming, and I don't know if a cloud service provides an interactive, programmable solution such as this.
I am a programmer and will be able to program the logic involved in such an interaction. The issue is that I am a novice at media streaming and don't have any knowledge of its technologies or the various server-side software used for such a purpose. Are there any books that can introduce me to the technologies employed in media streaming? I am also trying to avoid using Flash.
Clients could be web or mobile. I don't think I will have any problem integrating with the client system; my issue is implementing the server side.
You are effectively programming a switcher. Basically, you need to be able to switch from one audio stream to the other. With uncompressed PCM, this is very simple. As long as the sample rates and bit depth are equal, cut the audio on any frame (which is sample-accurate) and switch to the other. You can resample audio and apply dithering to convert between different sample rates and bit depths.
The complicated part is when lossy codecs get involved. On a similar project, I went down the road of trying to stitch streams together, and I can tell you that it is nearly impossible, even with something as simple as MP3. (The bit reservoir makes things difficult.) Plus, it sounds as if you will be supporting a wide variety of devices, meaning you likely won't be able to standardize on a codec anyway. The best thing to do is take the multiple streams and decode them at the mix point of your system. Then you can switch from stream to stream easily with PCM.
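To make the PCM switching concrete, here is a minimal sketch (all names are placeholders; it assumes every input is already decoded to interleaved 16-bit stereo PCM at the same sample rate):

const BYTES_PER_FRAME = 2 /* bytes per 16-bit sample */ * 2 /* channels */;

function makeSwitcher(output) {
  let active = null;
  let pending = Buffer.alloc(0); // holds a partial sample frame between chunks
  return {
    // switch sources; takes effect on the next whole sample frame
    setActive(source) { active = source; pending = Buffer.alloc(0); },
    write(source, chunk) {
      if (source !== active) return; // audio from closed "pipes" is dropped
      pending = Buffer.concat([pending, chunk]);
      const whole = pending.length - (pending.length % BYTES_PER_FRAME);
      output.write(pending.slice(0, whole)); // forward only whole frames: sample-accurate cuts
      pending = pending.slice(whole);
    }
  };
}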
At the output of your system, you'll want to re-encode to some lossy codec.
Due to latency, you typically don't want the server doing this switching. The switching should be done at the desk of the person encoding the stream, so that they can cue it accurately. Just write something that does all of the switching and encoding, and use SHOUTcast/Icecast for hosting your streams.

what is difference between mp4 and mpegts?

Recently I had a task to convert files to MP4 and stream them. I used ffmpeg as the transcoding tool. The MP4 file doesn't get streamed over the HTTP protocol [I have used a PHP CGI wrapper], but when the output format is changed to mpegts, the streaming occurs and works fine. A quick search on the net (http://wiki.videolan.org/MPEG) advises using mpegts for streaming. I need more insight into these two formats, their advantages and differences.
Thanks,
Peter
MPEG-TS is designed for live streaming of events over DVB and UDP multicast, but also over HTTP. It divides the stream into elementary streams, which are segmented into small chunks. System information is sent at regular intervals, so the receiver can start playing the stream at any time.
MPEG-TS isn't good for streaming files, because it doesn't provide info about the duration of the movie or song, nor about the points you can seek to.
There are some newer protocols that use MPEG-TS for streaming over HTTP and put additional metadata in files, fixing the disadvantage mentioned above. These are HTTP Live Streaming (HLS) and DASH (Dynamic Adaptive Streaming over HTTP).
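For example, an HLS playlist is just a small text file (.m3u8) that points the player at the MPEG-TS chunks; a minimal one looks like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXT-X-ENDLIST

(For a live stream, the #EXT-X-ENDLIST tag is omitted and the playlist is refreshed as new chunks appear.)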
On the other hand, MP4 has that info in a part of the stream called the moov atom. The point is that the moov must be placed before the media content and downloaded from the server first. That way the video player knows the duration and can seek to any point without downloading the whole file (this is called HTTP pseudo-streaming).
Sadly, ffmpeg places the moov at the end of the file by default. You can fix that with software like Xmoov-PHP.
Here you can find more info about pseudo-streaming.
You can reorder your MP4 file, putting the moov section at the start of it, using the following ffmpeg command:
ffmpeg -i your.mp4 -vcodec copy -acodec copy -movflags +faststart reordered.mp4
.mp4 is a file extension, while MPEG-TS is used for transport streams. MPEG-TS is a standard used in digital video broadcasting to send MPEG video and MPEG audio. There are basically two types of TS: SPTS and MPTS. An SPTS contains a single program only, whereas an MPTS contains multiple programs. TSReader and the VLC media player can be used to play MPEG-TS. If you want to know more about it, follow:
MPEG TS OR TRANSPORT STREAM MPTS SPTS
The extension for transport stream files is .ts

Darwin Streaming Server - Adaptive Bitrate?

Can anyone provide any direction or links on how to use the adaptive bitrate feature that DSS says it supports? According to the release notes for v6.0.3:
3GPP Release 6 bit rate adaptation support
I assume that this lets you include multiple video streams in the 3gp file with varying bitrates, and DSS will automatically serve the best stream based on the current bandwidth. At least that's what I hope it does.
I guess I'm not sure in what format DSS expects to receive the file. I tried just adding several streams to a 3gp file, which resulted in QuickTime being unable to play it, and VLC opening a different window for each stream in the file.
Any direction would be much appreciated.
Adaptive streaming in DSS 6.x uses a dropped-frame approach to reduce overall bandwidth, rather than dynamic, on-the-fly bitrate adjustments. The result of this can be unpredictable. DSS drops the frames itself and does not need the video to be encoded in any special way for this to work.