For a server that tries to implement video chatting (or multimedia or text chatting, for that matter) using RTP, which protocol should be used for control: SIP or RTSP? I went through the abstracts of the corresponding RFCs, but I could only see that both of them are used just for control, and the actual transmission of data is done using other protocols such as RTP over UDP. From my understanding, SIP is for controlling the transmission of data where sessions between two users are involved, and RTSP otherwise. Of course, this is a very basic understanding. What are the actual differences between these two protocols? A simple Google search did not give me a comparison.
Both SIP and RTSP are signalling protocols. SIP can handle more diverse and sophisticated scenarios than RTSP and I can't think of anything significant that RTSP can do that SIP can't. The advantage of RTSP over SIP is that it's a lot simpler to use and implement.
RTSP is suited for client-server applications, for example where one server has a media stream to feed to multiple clients. SIP is suited for peer-to-peer scenarios where media streams need to flow both ways.
One way to think of it is that RTSP is like watching television, where the broadcaster is the server and your TV is the client: you turn your TV on and can switch between a certain number of pre-defined channels. SIP is like using the phone (not surprising, given it was mainly designed for VoIP): you can call anyone you want, or they can call you.
Both SIP and RTSP use the same media description and transfer mechanisms (SDP and RTP), so those aren't a consideration when choosing between them.
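To make the client-server flavour of RTSP concrete, here is a minimal sketch of the start of an RTSP exchange: a DESCRIBE request sent over a raw TCP socket, to which the server replies with an SDP description of the stream. The server address and stream path are hypothetical, and a real client would go on to issue SETUP and PLAY requests.

```js
// Minimal sketch: an RTSP DESCRIBE request over a raw TCP socket (Node.js).
// The server and stream URL are hypothetical examples.
const net = require('net');

const socket = net.connect(554, 'example.com', () => {
  socket.write(
    'DESCRIBE rtsp://example.com/stream RTSP/1.0\r\n' +
    'CSeq: 1\r\n' +
    'Accept: application/sdp\r\n\r\n'
  );
});

socket.on('data', (chunk) => {
  // The response body is an SDP description of the media stream.
  console.log(chunk.toString());
  socket.end();
});
```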
What's the difference between WebRTC and Jingle? I am going to build an Android-based voice calling app using XMPP with an ejabberd server. Which one of these will be the best choice for voice calling on Android?
XMPP is a messaging protocol. Jingle is the subprotocol that XMPP uses for establishing voice-over-IP calls or transferring files. WebRTC is a JavaScript API (there is also a library implementing that API).
You can use Jingle as a signaling protocol to establish a peer-to-peer connection between two XMPP clients using the WebRTC API. There are examples in JavaScript that work in Chrome and Firefox (and Microsoft Edge, if you only want audio).
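As a rough illustration of how the two fit together, here is a minimal sketch: WebRTC produces the SDP offer, and your Jingle/XMPP code carries it to the other client. `sendViaXmpp` is a hypothetical placeholder for that signaling code.

```js
// Minimal sketch: WebRTC generates the offer; Jingle (over XMPP) carries it.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

async function startCall(localStream, sendViaXmpp) {
  // sendViaXmpp is a hypothetical stand-in for your Jingle/XMPP signaling code.
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Map this SDP into a Jingle session-initiate stanza on the XMPP side.
  sendViaXmpp(pc.localDescription.sdp);
}
```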
The WebRTC code on code.google.com only contains the video and audio codecs and the RTP stack. The libjingle project contains the API of WebRTC; it sounds strange, but it's true. Besides, libjingle has the XMPP stack and the STUN/ICE implementation. If you want to make a complete VoIP solution, you have to build both.
How can I stream a live video call between 2 people to thousands of people? I would prefer to use WebRTC, but I can't find the answer to my question. The viewers should be able to watch the stream in a web app.
Streaming to thousands of people is not trivial! It's not as hard as it used to be 10 years ago but is still pretty hard.
WebRTC supports direct browser to browser (peer to peer) connections. This means that WebRTC is primarily targeted at 1:1 conversation. If you want the same conversation (video or audio) to be shared among more than 2 people you have the following options:
Connect every user to every other user. This creates a fully connected graph between the viewers. It is easy to do, because all you need is WebRTC; no special hardware or software. However, it is also very inefficient in terms of traffic and distribution, and it doesn't scale beyond 5-6 people (see the sketch after this list).
Use a WebRTC video relay like Jitsi VideoBridge. According to the official performance claims, VideoBridge can scale to 500-1000 people, given a fast and wide enough internet connection.
Direct the WebRTC stream between the two participants to a WebRTC-enabled streaming server. If needed, transcode the input stream to a suitable codec (H.264/VP8/VP9). Convert the input stream to a suitable protocol (RTMP/HLS/DASH). Distribute the content using the built-in functionality of the media server or by use of a CDN. Play the video on the client side with a player (Flowplayer/JW Player/Viblast Player/Video.js/your own custom player) or a combination of the above. This is the hardest solution, but it is also the best one in terms of scalability and platform reach. Such a solution can scale easily to thousands of people and reach all major browsers and mobile platforms.
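For the first option, here is a minimal full-mesh sketch, assuming you keep one RTCPeerConnection per remote peer; `signal` and the peer IDs are hypothetical stand-ins for whatever signaling layer you use:

```js
// Minimal full-mesh sketch: one RTCPeerConnection per remote peer.
const peers = new Map();

function connectTo(peerId, localStream, signal) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
  });
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
  pc.onicecandidate = e => {
    if (e.candidate) signal(peerId, { candidate: e.candidate });
  };
  peers.set(peerId, pc);
  return pc;
}
// With n participants every client keeps n-1 connections and uploads its
// stream n-1 times, which is why the mesh stops scaling at around 5-6 people.
```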
I guess the third alternative is the one for you. You can read more about the whole capturing/publishing/transcoding/converting business in BlogGeek's great blog post.
A WebRTC-based peer-to-peer connection is not the right choice for one-to-n streaming. As there is no broadcast in WebRTC so far, you should consider another technique.
I am working on an iOS application to stream live video from an iPhone to a media server, and then make it available to a larger audience using RTSP.
Which protocol or method should I use to send the video stream to a server?
Thanks.
HTTP Live Streaming is not designed for your needs; it's for server-to-client streaming, and I won't even comment on the huge delay it implies.
You'd better check the RTSP or RTMP protocols and the LivU blog.
For cellular:
Apple seems to make a distinction between apps that are used for just streaming content from servers and those that are used for some type of conferencing.
I think VoIP types are safe, and it seems like GoCoder presenter-type apps don't have issues either. There's no official page detailing this, but there is some mention under what Apple considers a VoIP app.
No app has issues if it's over Wi-Fi only.
I am making a video chatting web application using C# socket programming to transfer data. I want to use the Web Audio API to capture audio and video in my view page, but I don't know how to transfer the audio using sockets (which are defined in the controller class). Can the API be used for socket programming if I can capture the raw bits from it?
(I've also tried using WebRTC, but I was unable to create multiple peer connections. As my application involves multiple peers, I prefer normal socket programming.)
If you mean, can you just get access to the raw audio/video bits from getUserMedia - yes, you can. (For audio, check out any of the input demos on webaudiodemos.appspot.com - particularly AudioRecorder, which shows how to get the bits from a ScriptProcessor node.) But I would caution that streaming audio and video over the net is not a trivial task. You can't really just push the bits over the wire with no thought to buffering (or adaptive capabilities, unless you can guarantee a high-speed local network only).
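A minimal sketch of what the answer describes, assuming a modern browser: grab raw PCM samples from getUserMedia through a ScriptProcessorNode (the node the AudioRecorder demo uses; it has since been deprecated in favour of AudioWorklet, but it is the simplest way to see the raw bits):

```js
// Minimal sketch: raw audio samples from the microphone via Web Audio.
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = e => {
    const samples = e.inputBuffer.getChannelData(0); // Float32Array of raw PCM
    // These are the bits you would buffer, encode, and push over your socket.
  };

  source.connect(processor);
  processor.connect(ctx.destination); // required for onaudioprocess to fire
});
```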
I am a PHP/MySQL developer ... just a (below) average one. I am interested in the way television and radio are broadcast live over the internet. I want to know how it works and what its requirements are. I must admit that I am a complete layman, but I expect to get it done within the next half month or year or so.
There are a couple of network protocols for multimedia streaming. The most popular at the moment are RTSP and RTMP. Typically, you need to set up a streaming server that takes audio/video from some source and streams it to all connected clients. Popular streaming servers are Adobe Flash Media Server, Wowza Media Server and others.
Streaming is done over TCP or UDP, depending on your requirements. TCP guarantees that no packets are lost, but they can be significantly delayed. This can be smoothed by a large enough jitter buffer. Streaming over TCP is often used for one-way streaming from server to client, when that delay can be tolerated. UDP is used for "live" streaming, especially in chat/conference scenarios, when you cannot tolerate several seconds of delay.
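To illustrate the jitter buffer idea, a minimal sketch: hold incoming packets for a fixed delay so playback stays smooth even when the network delivers them in bursts. The `play` function and the 2-second figure are hypothetical.

```js
// Minimal jitter buffer sketch: delay playback by a fixed amount so that
// late packets still arrive in time for their playback deadline.
const JITTER_MS = 2000; // illustrative; real players tune this dynamically
const queue = [];

function onPacket(packet) {
  // Called as packets arrive off the wire, possibly in bursts.
  queue.push({ packet, playAt: Date.now() + JITTER_MS });
}

setInterval(() => {
  // Playback clock: release packets once their deadline has passed.
  while (queue.length && queue[0].playAt <= Date.now()) {
    play(queue.shift().packet); // play() is a hypothetical decoder hand-off
  }
}, 10);
```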
RTSP is an open standard. You can receive and play RTSP streams with the VLC player (free). RTSP provides the "media session setup" and goes over TCP. The actual streaming is done by the RTP/RTCP protocols over UDP, or "interleaved" with the RTSP packets over TCP.
RTMP is a proprietary protocol of Adobe.