What is the difference between MP4 and MPEG-TS?

Recently I had a task to convert a file to MP4 and stream it. I used ffmpeg as the transcoding tool. The MP4 file doesn't get streamed over the HTTP protocol [I have used a PHP CGI wrapper], but when the output format is changed to MPEG-TS, the streaming works fine. A quick search on the net (http://wiki.videolan.org/MPEG) advises using MPEG-TS for streaming. I need more insight into these two formats, their advantages and differences.
Thanks,
Peter

MPEG-TS is designed for live streaming of events over DVB and UDP multicast, but also over HTTP.
It divides the stream into elementary streams, which are segmented into small chunks.
System information is sent at regular intervals, so the receiver can start playing the stream at any time.
MPEG-TS isn't good for streaming files, because it doesn't provide info about the duration of the movie or song, or about the points you can seek to.
There are newer protocols that use MPEG-TS for streaming over HTTP and put additional metadata in separate files, fixing the disadvantage I mentioned before.
These are HTTP Live Streaming (HLS) and DASH (Dynamic Adaptive Streaming over HTTP).
On the other hand, MP4 has that info in a part of the stream called the moov atom.
The point is that the moov must be placed before the media content and downloaded from the server first. This way the video player knows the duration and can seek to any point without downloading the whole file (this is called HTTP pseudostreaming).
Sadly, ffmpeg places the moov at the end of the file by default. You can fix that with software like Xmoov-PHP.
Here you can find more info about pseudostreaming.
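For illustration, a minimal sketch of what HLS does with MPEG-TS, assuming a local H.264/AAC input.mp4 and an ffmpeg build that includes the hls muxer (file names are placeholders):
ffmpeg -i input.mp4 -c:v copy -c:a copy -f hls -hls_time 6 -hls_list_size 0 playlist.m3u8
This cuts the file into small numbered .ts segments plus a playlist.m3u8 index, and the result can be served by any plain HTTP server.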

You can reorder your MP4 file, putting the moov section at the start of it, using the following FFmpeg command:
ffmpeg -i your.mp4 -vcodec copy -acodec copy -movflags +faststart reordered.mp4
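
To check where the moov atom actually ended up, one option (assuming a build whose MP4 demuxer logs atom types at trace level, which current ffmpeg does) is:
ffmpeg -v trace -i reordered.mp4 2>&1 | grep -o -e "type:'moov'" -e "type:'mdat'" | head -n 2
If type:'moov' is printed before type:'mdat', the moov is at the front and the file is ready for pseudostreaming.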

.mp4 is the extension of a file, while MPEG-TS is used for transport streams. MPEG-TS is a standard used in digital video broadcasting to carry MPEG video and MPEG audio. There are basically two types of TS:
SPTS and MPTS.
An SPTS contains a single program only, whereas an MPTS contains multiple programs.
TSReader and VLC media player can be used to play MPEG-TS.
If you want to know more about it, follow:
MPEG TS OR TRANSPORT STREAM MPTS SPTS
The extension for transport stream files is .ts
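As a rough example, ffmpeg's mpegts muxer writes a single-program TS (SPTS) by default, so remuxing a file looks like this (input.mp4 is a placeholder name):
ffmpeg -i input.mp4 -c copy -f mpegts single_program.ts
Building a true MPTS with several programs typically needs a dedicated multiplexer.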

Related

Playing streamed data from the server in client software

I'm trying to plan a learning curve for a Node.js module that streams all my output sounds to a server. Say I have a Node.js module that forwards all outgoing sounds and music as packets to the server's port 8000. How can I connect some client's MP3 player to play back the streamed audio from the server? I mean, the buffer that is sent is just raw, messy bits; how do I make the audio player on the client recognize the format, connect to the stream, forward the packets to the player, etc.?
You need to open up a "file", meaning a resource served as the answer to a POST request, and transmit to that file chunks of data from your original resource according to the ranges the request asks for. So the request asks for data at a given offset (in an additional field) and tries to download resource Z, and you constantly fill that resource with data so that it is always full.
This is a rather complex topic but there are many examples and documentation already available.
Just doing a quick search, these came up:
node (socket) live audio stream / broadcast
https://www.npmjs.com/package/audio-stream
By the way, I'm not an expert, but I think that if you want to do audio streaming, MP3 is probably not the right choice, and you may get some benefit from converting it to an intermediate streaming format.
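For example, one way to get from a single MP3 to such an intermediate streaming format is to repackage it as audio-only HLS with ffmpeg; a sketch, with placeholder file names:
ffmpeg -i input.mp3 -c:a aac -b:a 128k -f hls -hls_time 10 -hls_list_size 0 audio.m3u8
A client player with HLS support (for example via hls.js in the browser) can then fetch audio.m3u8 and its segments over plain HTTP instead of consuming raw socket packets.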

How can one measure video streaming latency?

We're building yet another video streaming service with an awesome killer feature™, and we need to estimate client latency to deliver off-stream events in sync. The video stream passes through several processors, including a CDN at the very end of the pipeline, so latency may vary and it's not possible to pass something along with the stream.
How can I measure latency between the streamer and the consumer? We have a couple of weird algorithms, but they are not even close to being reliable. Reading RTMP timestamps is also not an option at the moment, and we're planning to deliver HLS as well.
One way would be to insert cue points / timed metadata into the stream and have your player read them. These can pass through the CDN, and you can use them to deliver events if you like, or just to measure the latency.
The procedure for inserting/reading cue points varies with the media server and video player. I know Wowza can insert cue points into RTMP streams and convert them to ID3 metadata for HLS streams.
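A cruder alternative to cue points, useful for spot checks rather than production measurement, is to burn a wall-clock timestamp into the picture on the encoder side and compare it with a synchronized clock next to the player. A sketch, assuming an ffmpeg build with the drawtext filter and a placeholder RTMP ingest URL (some builds also need an explicit fontfile= option):
ffmpeg -re -i input.mp4 -vf "drawtext=text='%{localtime}':x=10:y=10:fontsize=48:fontcolor=white:box=1:boxcolor=black" -c:v libx264 -c:a aac -f flv rtmp://example.com/live/stream
The difference between the burned-in time and the wall clock at the viewer approximates the end-to-end (glass-to-glass) latency, including the CDN.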

JPG Images as Fallback from RTMP Live Stream

I would like to offer a few live broadcasts, as auto-refreshing JPGs, to users who don't support any of the available streaming protocols.
I got the idea to use ffmpeg (or mjpg_streamer) to extract two frames per second from the RTMP live stream, base64 encode them and load them via JavaScript at half-second intervals, but with 5-50 concurrent streams this is a hard job for the server.
What would be the best way to get two (or more) images per second from multiple RTMP live streams as base64-encoded JPGs?
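To illustrate the approach described in the question, a minimal sketch that pulls two frames per second out of one RTMP stream into a single, continuously overwritten JPG (the URL and paths are placeholders), which a script can then base64 encode:
ffmpeg -i rtmp://example.com/live/stream1 -vf fps=2 -q:v 5 -update 1 -f image2 /var/www/snapshots/stream1.jpg
base64 /var/www/snapshots/stream1.jpg > /var/www/snapshots/stream1.b64
This still means one decoder per stream, so the load concern with 5-50 concurrent streams remains; the snapshots would normally be produced once per stream and shared by all clients.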

Darwin Streaming Server - Adaptive Bitrate?

Can anyone provide any direction or links on how to use the adaptive bitrate feature that DSS says it supports? According to the release notes for v6.0.3:
3GPP Release 6 bit rate adaptation support
I assume that this lets you include multiple video streams in the 3gp file with varying bitrates, and DSS will automatically serve the best stream based on the current bandwidth. At least that's what I hope it does.
I guess I'm not sure what format DSS expects the file to be in. I tried just adding several streams to a 3gp file, which resulted in QuickTime being unable to play it, and VLC opening up a different window for each stream in the file.
Any direction would be much appreciated.
Adaptive streaming as used in DSS 6.x takes a dropped-frame approach to reduce overall bandwidth rather than making dynamic, on-the-fly bitrate adjustments. The result of this can be unpredictable. DSS drops the frames itself and does not need the video encoded in any special way for it to work.

Rebuild media file from Wireshark logs

Is it possible to recreate the media file from captured Wireshark logs? Is there any doc which explains how this needs to be done?
I am doing RTSP-based streaming from my Darwin test server, so I want to compare the quality of the original and the streamed file.
I'm not familiar with Darwin Streaming Server, but generally RTSP is only for establishing the RTP stream. RTP packets normally flow in one direction (ignoring the ACK packets for TCP).
For comparing the files I would use a tool suggested by the other users.
But to answer your question for Wireshark:
filter your stream for the destination IP by using 'ip.addr eq '
look for the RTP or UDP packets from the RTSP server
in case you see UDP packets: right-click on a packet -> 'Decode As' and choose 'RTP' in the Transport tab
choose 'Follow UDP Stream' from the context menu
now you have the whole RTP stream without the RTP headers.
But keep in mind that with H.264 you have packetization, which gives you extra bytes in the displayed stream. You cannot compare this with the original file!
Look here in chapter 5.4 for further description.
Better to use the tools mentioned by the others!
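For a scriptable variant of the Wireshark steps above, tshark can do the 'Decode As' and payload extraction on the command line; a sketch, assuming the RTP flows on UDP port 5004 and that rtp.payload is printed as hex (both assumptions on my side):
tshark -r capture.pcap -d udp.port==5004,rtp -Y rtp -T fields -e rtp.payload | tr -d ':\n' | xxd -r -p > payload.bin
As noted above, for H.264 the result still contains the RTP packetization (fragmentation unit) overhead, so it is not directly comparable with the original file.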
I don't think it is possible the way you hope, as RTSP is a sort of conversation between a client and a server (or servers). To recreate the RTSP session you would have to recreate all of this two-way traffic - it is not really comparable to opening a file in a video player.
I think you will find it easier to use VLC to stream the rtsp:// link and save it to a file. The stream will be transcoded while saving, so if you need a "true" comparison to the original file, you will want to use a lossless video codec for transcoding, and the output file could be very large.
Using Ostinato, you should be able to replay the file and capture it using VLC.