How to distinguish "RTP over TCP" from "RTP over HTTP"? - streaming

Dear all:
I want to ask some questions about streaming protocols.
1. If I get the video stream using RTSP, the stream is sent with RTP headers added. But why do some video streams also have RTP headers added when fetched via a CGI command?
2. I can distinguish "RTP over TCP" from "RTP over UDP", but what is the difference between "RTP over TCP" and "RTP over HTTP"?
I'm confused.

I think you're confusing RTP over HTTP with RTSP over HTTP (and therefore the RTP it carries): see http://developer.apple.com/quicktime/icefloe/dispatch028.html for more info.
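To illustrate the distinction, what is usually called "RTSP over HTTP" is the QuickTime-style tunnel roughly as described in the linked Apple note: the client opens one HTTP GET connection that the server uses to send RTSP replies and RTP data back, and one HTTP POST connection that carries the client's RTSP commands base64-encoded, with the two tied together by an x-sessioncookie header. A rough sketch of the two requests; the host, path, and cookie value are illustrative, not taken from the question:

```python
# Illustrative sketch of QuickTime-style RTSP-over-HTTP tunneling.
# The host, path and cookie value are made up; a real client generates a
# unique cookie and keeps both connections open for the whole session.
import base64
import socket

HOST, PORT = "192.168.0.184", 80           # hypothetical camera/server
COOKIE = "8e3a1f62d4"                      # ties the GET and POST channels together

# Connection 1: GET channel, server -> client RTSP replies and RTP data.
get_sock = socket.create_connection((HOST, PORT))
get_sock.sendall(
    (f"GET /myvideo.mpg HTTP/1.0\r\n"
     f"x-sessioncookie: {COOKIE}\r\n"
     f"Accept: application/x-rtsp-tunnelled\r\n\r\n").encode()
)

# Connection 2: POST channel, client -> server RTSP commands, base64-encoded.
post_sock = socket.create_connection((HOST, PORT))
post_sock.sendall(
    (f"POST /myvideo.mpg HTTP/1.0\r\n"
     f"x-sessioncookie: {COOKIE}\r\n"
     f"Content-Type: application/x-rtsp-tunnelled\r\n"
     f"Content-Length: 32767\r\n\r\n").encode()   # large placeholder length
)
rtsp_cmd = b"OPTIONS rtsp://192.168.0.184/myvideo.mpg RTSP/1.0\r\nCSeq: 1\r\n\r\n"
post_sock.sendall(base64.b64encode(rtsp_cmd))
```

Plain "RTP over TCP", by contrast, involves no HTTP at all: the RTP packets are simply interleaved on the RTSP TCP connection or framed with a length prefix.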


Audio + visual through TCP/IP

I'm in the process of making an application similar to Skype to interact with another computer, and I have a few questions.
I know all the basics, such as how to send data over TCP etc. in the form of an image and audio.
How do applications like Skype send live audio? Do they literally record one byte of audio, send it, play it, and then repeat the process? For me it's not instant, so I don't see how that would be possible.
How would you send a string and an image through TCP at the same time (video call + chat)? Would you use multiple ports? I can see how that would be very bad. The way I'm doing it at the moment is: when I click to receive an image, I set the socket up to receive an image so it is handled properly; if a string got sent at that time, for example, it wouldn't work, as it can't be converted to an image, if you see what I'm saying. I'm not sure how else I would do it. I could send each thing with its type at the beginning, for example "string Hello how are you", and then decipher the data type from that, but that seems a bit tedious and slow.
If anyone could give me an insight, that would be fantastic.
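For reference, the "type at the beginning" idea from the question is essentially the standard answer, just done in binary: every message gets a small header with a type tag and a length, so the receiver always knows what kind of data is coming and exactly how many bytes to read. A minimal sketch over a single TCP socket; the type codes and helper names are made up for illustration:

```python
# Minimal type + length framing over one TCP socket (illustrative).
import socket
import struct

MSG_TEXT = 1    # hypothetical type codes
MSG_IMAGE = 2

def send_message(sock: socket.socket, msg_type: int, payload: bytes) -> None:
    # 1-byte type tag + 4-byte big-endian length, then the payload itself.
    sock.sendall(struct.pack("!BI", msg_type, len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> tuple[int, bytes]:
    msg_type, length = struct.unpack("!BI", recv_exact(sock, 5))
    return msg_type, recv_exact(sock, length)

# Sender side:
#   send_message(sock, MSG_TEXT, "Hello how are you".encode())
#   send_message(sock, MSG_IMAGE, jpeg_bytes)
# The receiver dispatches on the returned msg_type, so chat text and image
# frames can share one connection without confusing each other.
```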
I can't speak for how skype does it, but this should be a starting point:
Streaming audio/video is usually transported over UDP sockets, not TCP. TCP guarantees delivery whereas UDP is best effort. If you have a temporary connection loss you care more that the video you're receiving is current, not that you receive the whole stream.
The data is usually compressed (and sometimes encrypted) using a standard compression algorithm after being received from a camera/microphone. Have a look at H264, which is commonly used to compress video.
RTP is often used to transmit audio/video. It allows multiple types of stream to be combined over a single socket.
Control traffic is usually sent separately over a different socket, usually TCP. For example, SIP, which is used by VoIP phones, initiates a control connection over a TCP or UDP port (usually 5060). The two ends then negotiate which types of stream will be supported and how those streams will be sent. For SIP, this will be an RTP stream set up on a different UDP port.
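To make the RTP point concrete: an RTP packet is just a 12-byte header carrying the version, payload type, sequence number, timestamp, and a stream identifier (SSRC), followed by the compressed media. A rough sketch of building and sending one over UDP; the payload type, SSRC, and destination address are illustrative:

```python
# Illustrative RTP packetization over UDP (fixed header layout per RFC 3550).
import socket
import struct

DEST = ("192.0.2.10", 5004)    # hypothetical receiver address/port
PAYLOAD_TYPE = 96              # dynamic payload type, agreed out of band (e.g. via SDP)
SSRC = 0x1234ABCD              # identifies this particular stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0

def send_rtp(payload: bytes, timestamp: int, marker: bool = False) -> None:
    """Prefix one encoded media frame with a 12-byte RTP header and send it."""
    global seq
    byte0 = 0x80                                   # version 2, no padding/extension/CSRC
    byte1 = (0x80 if marker else 0x00) | PAYLOAD_TYPE
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, SSRC)
    sock.sendto(header + payload, DEST)
    seq += 1

# The caller sends one encoded frame per packet and advances the timestamp by
# the number of media samples the frame represents, e.g.:
#   send_rtp(frame_bytes, ts); ts += 160   # 20 ms of 8 kHz audio
```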

RTP Multiplexing or Mixing?

I'm designing a real-time voice comm system that I want to use RTP with. Here are my general requirements:
Every user streams one audio stream to the server
The incoming stream may be compressed differently, depending on the source (a SIP trunk, an Android phone, a desktop client, etc.)
Users can pick which streams they want to receive
If the users had unlimited bandwidth and there wasn't a limited number of ports, I would just have them each open an RTP stream with the server for each stream they wanted to receive. However, a lot of the users will be over a 3G or 2G network, so my question is, how can I bundle the streams (chosen by the user) into a single RTP stream?
One option I've seen is multiplexing the streams into a single packet, but as far as I can tell, that actually goes against the RFC (however, there are working drafts for multiplexing).
Another option would be just mixing the audio together into one packet. Is that the recommended way to do this? I would have to normalize all of the chosen streams into one format first.
I'm very new to the whole VoIP/streaming media thing, so this may be a poor question.
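For reference, the mixing option described above works like this: the server acts as an RTP mixer, decodes each selected stream to a common PCM format, sums the samples with clipping, and re-encodes the result as one outgoing stream with its own SSRC. A rough sketch of the mixing step only, assuming the streams have already been decoded and resampled to 16-bit PCM at the same rate and frame size:

```python
# Illustrative mix-down of several decoded audio streams into one frame.
# Assumes all inputs are 16-bit signed little-endian PCM at the same sample
# rate and frame size (decoding/resampling happens elsewhere).
import struct

def mix_frames(frames: list[bytes]) -> bytes:
    """Sum the selected streams sample-by-sample, clamping to the int16 range."""
    n_samples = min(len(f) for f in frames) // 2
    mixed = []
    for i in range(n_samples):
        total = sum(struct.unpack_from("<h", f, i * 2)[0] for f in frames)
        mixed.append(max(-32768, min(32767, total)))       # clamp to int16
    return struct.pack(f"<{n_samples}h", *mixed)

# The mixed frame is then re-encoded (e.g. to a low-bitrate codec) and sent
# to the user as a single RTP stream, which is what an RTP "mixer" in the
# RFC 3550 sense does.
```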
I'm guessing that you want to use TCP in order not to lose data.
You can use RTP over TCP; see the RFC on framing RTP over connection-oriented transports.
This allows sending several RTP streams over a single socket, with a unique identifier (the SSRC) per stream.
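For what it's worth, the usual framing for RTP over a connection-oriented transport such as TCP (described in RFC 4571) is simply a 2-byte big-endian length in front of each RTP packet, since TCP itself has no packet boundaries. A minimal sketch of reading and writing such a stream; the socket is assumed to be connected already:

```python
# Illustrative RFC 4571-style framing: each RTP (or RTCP) packet on the TCP
# stream is preceded by a 2-byte big-endian length field.
import socket
import struct

def read_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def read_framed_rtp(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!H", read_exact(sock, 2))
    return read_exact(sock, length)        # one complete RTP packet

def write_framed_rtp(sock: socket.socket, rtp_packet: bytes) -> None:
    sock.sendall(struct.pack("!H", len(rtp_packet)) + rtp_packet)

# The SSRC field inside each RTP header then tells the receiver which of the
# bundled streams a given packet belongs to.
```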

Rebuild media file from wireshark logs

Is it possible to recreate the media file from captured Wireshark logs? Is there any doc which explains how this needs to be done?
I am doing RTSP-based streaming from my Darwin test server, so I want to compare the quality of the original file and the streamed file.
I'm not familiar with Darwin Streaming Server, but generally RTSP is only for establishing the RTP stream. The RTP packets normally flow in one direction only (ignoring the ACK packets for TCP).
For comparing the files I would use one of the tools suggested by the other users.
But to answer your question for Wireshark:
filter your stream for the destination IP by using 'ip.addr eq '
look for your RTP or UDP packets from the RTSP server
in case you see UDP packets: right-click on a packet -> 'Decode As' and choose 'RTP' in the Transport tab
choose 'Follow UDP Stream' from the context menu
now you have the whole RTP stream without the RTP headers.
But keep in mind that with H.264 you have packetization, which gives you extra bytes in the displayed stream. You cannot compare this directly with the original file!
Look in chapter 5.4 here for a further description.
Better to use the tools mentioned by the others!
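If you do want to script the extraction instead of clicking through Wireshark, a rough sketch with scapy is below. It assumes the capture has already been filtered down to the UDP port carrying the RTP stream, and it only strips the fixed 12-byte header plus any CSRC entries; for H.264 you would still have to undo the RTP packetization (FU-A/STAP-A units) before comparing anything to the original file. The file names are placeholders.

```python
# Rough sketch: dump raw RTP payloads from a pcap that contains only the
# RTP stream of interest (already filtered by UDP port in Wireshark).
from scapy.all import rdpcap, UDP   # requires scapy to be installed

packets = rdpcap("capture.pcap")    # hypothetical capture file
with open("payloads.bin", "wb") as out:
    for pkt in packets:
        if not pkt.haslayer(UDP):
            continue
        data = bytes(pkt[UDP].payload)
        if len(data) < 12 or (data[0] >> 6) != 2:     # not an RTP v2 packet
            continue
        cc = data[0] & 0x0F                           # number of CSRC entries
        header_len = 12 + 4 * cc                      # extensions/padding ignored
        out.write(data[header_len:])                  # payload bytes only
```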
I don't think it is possible the way you hope, as RTSP is a sort of conversation between a client and a server (or servers). To recreate the RTSP session you would have to recreate all of this two-way traffic - it is not really comparable to opening a file in a video player.
I think you will find it easier to use VLC to stream the rtsp:// link and save it to a file. The stream will be transcoded while saving, so if you need a "true" comparison to the original file, you will want to use a lossless video codec for transcoding, and the output file could be very large.
Using Ostinato, you should be able to replay the file and capture it using VLC.

What is the difference between RTP or RTSP in a streaming server?

I'm thinking about developing a streaming server, and I have the following question: should I do it over RTSP (example URL: rtsp://192.168.0.184/myvideo.mpg) or RTP (example URL: rtp://192.168.0.184)?
As I understand it, an RTSP server is mainly used for streaming files that already exist, i.e. not live, while an RTP server is used to broadcast.
Somebody correct me if I'm wrong. Am I right?
What I want is to develop a server that broadcasts live content from the computer screen, that is, content that is displayed at the same time it is being streamed.
You are getting something wrong... RTSP is a real-time streaming protocol. Meaning, you can stream whatever you want in real time. So you can use it to stream LIVE content (no matter what it is: video, audio, text, presentation...). RTP is a transport protocol that is used to transport the media data negotiated over RTSP.
You use RTSP to control media transmission over RTP. You use it to set up, play, pause, and tear down the stream...
So, if you want your server to just start streaming when the URL is requested, you can implement some sort of RTP-only server. But if you want more control, and if you are streaming live video, you must use RTSP, because it transmits the SDP and other important decoding data.
Read the documents I linked here, they are a good starting point.
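A minimal sketch of what that control exchange looks like from the client side is below: it sends the usual DESCRIBE/SETUP/PLAY sequence over TCP port 554 and prints the replies. The URL, track path, client ports, and session value are illustrative; a real client would parse the SDP from DESCRIBE and take the Session id from the SETUP response instead of using placeholders.

```python
# Rough sketch of an RTSP control exchange (DESCRIBE/SETUP/PLAY).
# The server URL, track path and client_port values are illustrative only.
import socket

URL = "rtsp://192.168.0.184/myvideo.mpg"
sock = socket.create_connection(("192.168.0.184", 554))

def request(lines: list[str]) -> str:
    """Send one RTSP request and return (part of) the reply.
    A real client would keep reading until the full response is received."""
    sock.sendall(("\r\n".join(lines) + "\r\n\r\n").encode())
    return sock.recv(4096).decode(errors="replace")

# 1. Ask for the SDP description of the presentation.
print(request([f"DESCRIBE {URL} RTSP/1.0", "CSeq: 1", "Accept: application/sdp"]))

# 2. Set up one RTP/RTCP pair over UDP; the track URL comes from the SDP and
#    the Session id comes from this reply in a real client.
print(request([f"SETUP {URL}/trackID=1 RTSP/1.0", "CSeq: 2",
               "Transport: RTP/AVP;unicast;client_port=5004-5005"]))

# 3. Start playback; the media itself now arrives as RTP on the client ports.
print(request([f"PLAY {URL} RTSP/1.0", "CSeq: 3", "Session: 12345678"]))  # placeholder session
```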
AFAIK, RTSP does not transmit streams at all, it is just an out-of-band control protocol with functions like PLAY and STOP.
Raw UDP or RTP over UDP are transmission protocols for streams just like raw TCP or HTTP over TCP.
To be able to stream a certain program over a given transmission protocol, an encapsulation method has to be defined for your container format. For example, the TS container can be transmitted over UDP, but Matroska cannot.
Pretty much everything can be transported through TCP though.
(Which codec you use also matters indirectly, as it restricts the container formats you can use.)
Some basics:
An RTSP server can be used for a pre-recorded ("dead") source as well as for a live source. The RTSP protocol provides you with commands (like your VCR remote), and the functionality depends upon your implementation.
RTP is a real-time protocol used for transporting audio and video in real time. The transport used can be unicast, multicast or broadcast, depending upon the transport address and port. Besides transporting, RTP does lots of things for you, like packetization, reordering, jitter control, QoS, and support for lip sync.
In your case, if you want a broadcasting streaming server, then you need both RTSP (for control) and RTP (for broadcasting the audio and video).
To start with, you can go through the sample code provided by live555.
I hear your pain. I'm going through this right now (years later).
From what I've learned, you can think of RTSP as a "VCR controller", the protocol allows you to specify which streams (presentations) you want to play, it will then send you a description of the media, and then you can use RTSP to play, stop, pause, and record the remote stream. The media itself goes over RTP. RTSP is normally implemented over a different socket or communication layer. Although it is simply a protocol, most often it's implemented by a server over a socket. For live streams, the RTSP stream you request is simply a name of a stream. It doesn't need to refer to a file on the server, the server's RTSP implementation can parse that stream, put together a live graph, and then provide the SDP (description) for that stream name. But, this is of course specific to the way the RTSP server has been implemented. For "live" streams, it's probably simpler to just use RTP, but you'll need a way to transfer the SDP from the RTP server to the client that wants to play that stream.
I think that's correct. RTSP may use RTP internally.
RTSP is widely used in IP cameras, running as an RTSP server in the camera, so that users can play (pull) the RTSP stream from the camera. It is a low-cost solution, because we don't need a central media server (think about thousands of camera streams). The architecture is below:
IP Camera      ----RTSP(pull)--->    Player
(RTSP server)                        (User Agent)
The RTSP protocol actually involves:
Signaling over TCP, on port 554, used to exchange the SDP (also used in WebRTC) describing the media capabilities.
UDP/TCP streams over several ports, generally two ports: one for RTCP and one for RTP (also used in WebRTC).
Compared to WebRTC, which is now available in HTML5:
Signaling over HTTP/WebSocket (or exchanged via any other protocol), used to exchange the SDP.
UDP streams (RTP/RTCP) over one or many ports, generally bound to one port, to make cloud load balancers happy.
From a protocol point of view, RTSP and WebRTC are similar, but the use scenarios are very different. To grossly simplify (the details are off topic here): WebRTC is designed for web conferencing, while RTSP is used for IP camera systems.
So it's clear that both RTSP and WebRTC are solutions as well as protocols, used in different scenarios, while RTP is a transport protocol, which is also used for live streaming by WebRTC.
Note: RTSP is not available in HTML5 or for internet live streaming, but we can convert it with FFmpeg and a gateway server; please see here.
RTP is the transport protocol for real-time data. It provides timestamps, sequence numbers, and other means to handle the timing issues of real-time data transport.
RTSP is a control protocol that initiates and directs the delivery of streaming multimedia data from media servers. It is the "Internet VCR remote control protocol." Its role is to provide the remote control; however, the actual data delivery is done separately, most likely by RTP.
Also, RTCP is the control part of RTP that helps with quality of service and membership management.
These three related protocols are used for real-time multimedia data over the Internet. Read the excellent full documentation at this link: RTP, RTCP & RTSP
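To see those RTP fields concretely, the sequence number and timestamp sit at fixed offsets in the 12-byte header, and they are what a receiver uses to detect loss, reorder packets, and reconstruct timing. A small sketch of pulling them out of a received packet; the field layout follows RFC 3550, the rest is illustrative:

```python
# Illustrative parse of the fixed RTP header fields a receiver relies on
# for reordering and timing (layout per RFC 3550).
import struct

def parse_rtp_header(packet: bytes) -> dict:
    byte0, byte1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": byte0 >> 6,            # should be 2
        "payload_type": byte1 & 0x7F,     # identifies the codec/stream type
        "marker": bool(byte1 & 0x80),
        "sequence_number": seq,           # used to detect loss and reorder
        "timestamp": timestamp,           # used for playout timing / lip sync
        "ssrc": ssrc,                     # which source the packet belongs to
    }

# e.g. data, addr = udp_sock.recvfrom(2048); info = parse_rtp_header(data)
```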
RTSP (actually RTP) can be used for streaming video, but also many other types of media, including live presentations. RTSP is just the protocol used to set up the RTP session.
For all the details you can check out my open source RTSP Server implementation on the following address: https://net7mma.codeplex.com/
Or my article # http://www.codeproject.com/Articles/507218/Managed-Media-Aggregation-using-Rtsp-and-Rtp
It supports re-sourcing streams as well as the dynamic creation of streams. Various RFCs are implemented, and the library achieves better performance and lower memory use than FFmpeg and just about any other solution at the transport layer, which makes it a good candidate to use as a centralized point of access for most scenarios.

Reasons to use RTP when streaming a pre-existing file?

The only reason I could think of for using RTP to transfer a pre-existing file is if you're trying to monitor the amount of time a user spends streaming the file, as on a time-based on-demand website. The other streaming solution I know of is to use HTTP to upload a media file and then provide a client that progressively plays the file. Can anyone come up with another reason to use RTP to stream media files?
You don't use RTP to transfer files; you use RTP to stream media to media players.
If you want to serve media, RTP has some advantages:
RTP-capable clients can use the stream; they might not be able to use whatever else you come up with.
It tolerates network congestion. If you serve the data over a TCP connection, the stream is quite sensitive to packet loss and congestion; TCP has long timeouts, and you might experience stalls.