RTSP 1.0 vs. RTSP 2.0 - streaming

The Real-Time Streaming Protocol (RTSP) Version 1.0 was published as RFC 2326 in 1998.
Now, nearly 20 years later, Version 2.0 was published as RFC 7826 in December 2016.
I am wondering whether the changes affect the performance of live streaming using RTSP (over the Real-Time Transport Protocol (RTP)).
I know that RTSP is not used to send the real-time data itself, but for session establishment and control mechanisms like playing, pausing or stopping the stream. So I guess the changes don't have an effect on the end-to-end latency between sender and receiver?
But the list of changes states, for example:
request pipelining for quick session start-up;
So my question: is there a measurable impact on performance from the introduced changes?
For example:
session start-up time (time till the stream starts playing)
end-to-end latency
RTSP traffic amount
...

It depends on what your implementation supports today. If you read the associated newsgroups, or even the first few paragraphs of the RFC, you will quickly begin to understand this.
In short, I believe that rather than having a measurable impact on performance, the changes should hopefully create better interoperability; however, that remains to be seen.
Most of the changes (oddly enough) concern creating and playing archived media, and coping with transport-layer conditions such as when insufficient bandwidth meets a requested playback rate.
The most useful changes are probably the definition of the text/parameters content type and the Accept header semantics.
Pipelining is now just more widely supported, and may already have been supported by your stack. IPv6 has not changed; NAT is handled better; UDP as a transport for RTSP messages was dropped; and another type of TCP transport was added that does without frame headers.
Overall, though, there's nothing else that makes RTSP 2.0 any better than 1.0.

Related

Should I use RTP or WebRTC for local network audio communication

I have a set of Raspberry Pi Zeros that I would like to use as a home intercom. I initially set them up to send audio to each other using golang with gRPC and bidirectional streaming, which works for short calls, but the lag builds up over time, so I think I need to switch to a real-time protocol like RTP or WebRTC. Since I already know the IP address of each device, the hardware/supported codecs are the same for each, and they are all on the same network, is there any advantage to using WebRTC over plain RTP? My understanding is that WebRTC mainly provides some additional security and connection orchestration like ICE and SDP, which I wouldn't necessarily need. I am trying to minimize resource usage, since these devices are not as powerful as a phone or desktop. If I do use WebRTC, I can do the SDP signaling with gRPC or some other direct delivery method.
Since there are more than 2 devices, I'm also curious about multicast functionality, which seems pure-RTP specific; WebRTC (which uses RTP) doesn't necessarily support multicasting, and a full mesh would require n(n-1)/2 p2p connections. I'm very unclear/unsure about this point.
Also, does either support mixing audio channels natively, or would that need to be handled in the custom software?
You could use WebRTC, but you'd need to rig a signalling server, and a STUN / TURN server. These can be super simple and low capacity because everything is on a private network, but you still need 'em. The signalling server handles the necessary SDP interchange. Going full WebRTC might be overengineering this. (But of course learning to get WebRTC working can be useful.)
You've built out a golang infrastructure. Seeing as how you're on a private network, you could change up that program to send multicast UDP packets or RTP packets. Then you can rig your listeners to listen to them.
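For illustration, here's a minimal Go sketch of that multicast idea, using only the standard library; the group address 239.0.0.1:9999 and the frame payload are made-up placeholders, and a real intercom would loop over reads and feed the audio device:

package main

import "net"

func main() {
    // Arbitrary multicast group for the LAN; every Pi uses the same one.
    group := &net.UDPAddr{IP: net.ParseIP("239.0.0.1"), Port: 9999}

    // Sender side: write each captured audio frame to the group.
    out, err := net.DialUDP("udp4", nil, group)
    if err != nil {
        panic(err)
    }
    defer out.Close()
    out.Write([]byte("one audio frame")) // stand-in for a real PCM/Opus frame

    // Receiver side: join the group and read frames from all peers.
    in, err := net.ListenMulticastUDP("udp4", nil, group)
    if err != nil {
        panic(err)
    }
    defer in.Close()
    buf := make([]byte, 1500)
    n, src, err := in.ReadFromUDP(buf)
    if err == nil {
        _ = buf[:n] // hand the frame (from src) to the playout path
        _ = src
    }
}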
No matter what you do, you'll need to deal with the lag. A good way to do it in the packet world: don't build a queue of buffers ready to play. Instead, always put each received packet as the next-to-play packet, even if you have to overwrite a previously received packet. (That is, skip ahead.) You may get a pop once in a while, but with reasonably short packets, under 50ms, it shouldn't affect the user experience significantly. And the lag won't build up.
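Here's a sketch of that "skip ahead" idea in Go, assuming the network reader calls Put and the playout loop calls Take (both names invented for this example):

package intercom

import "sync"

// LatestSlot holds at most one next-to-play packet; a newer packet
// simply overwrites an older one that was never played, so lag
// cannot accumulate into a growing queue.
type LatestSlot struct {
    mu   sync.Mutex
    next []byte
}

func (s *LatestSlot) Put(pkt []byte) {
    s.mu.Lock()
    s.next = pkt // overwrite: skip ahead instead of queueing
    s.mu.Unlock()
}

func (s *LatestSlot) Take() []byte {
    s.mu.Lock()
    pkt := s.next
    s.next = nil
    s.mu.Unlock()
    return pkt // nil means "nothing new; play silence or repeat"
}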
The oldtimey phone system ran on a continent-wide 8 kHz synchronous clock, so lag was not an issue. But it's always a problem when audio analog-to-digital and digital-to-analog clocks aren't synchronized; that's true whenever they are on different devices. The slightest drift builds up over time. (RPis don't have fifty-dollar clock parts in them with guaranteed low drift.)
If all your audio sources run at the same sample rate, you can average them to mix them. That should get you started. (If you're using WebRTC in a browser, it will mix multiple sources for you.)
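As a sketch of that averaging approach, assuming equal-length int16 PCM frames at the same sample rate (mix is a made-up name):

package intercom

// mix averages same-rate PCM frames sample by sample. Summing in a
// wider int avoids int16 overflow before the division.
func mix(frames [][]int16) []int16 {
    if len(frames) == 0 {
        return nil
    }
    out := make([]int16, len(frames[0]))
    for i := range out {
        sum := 0
        for _, f := range frames {
            sum += int(f[i])
        }
        out[i] = int16(sum / len(frames))
    }
    return out
}

Note that plain averaging attenuates each speaker as more sources join; a practical mixer might sum and clamp instead.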
Since you are using Go, check out offline-browser-communication. This removes the need for signaling and STUN/TURN; it uses mDNS and pre-generated certificates. It is also being discussed in the WICG Discourse, though I have no idea if/when it will land.
'Lag' is a pretty common problem when doing media over TCP: you have lots of queues and congestion control to deal with. WebRTC (and RTP in general) is great at solving this. You have the following standardized tools to solve it:
RTP packets carry a relative timestamp (see the sketch after this list).
RTP sender reports map that relative timestamp to an NTP timestamp. Use this for sync/timing.
RTP receiver reports give you packet loss/jitter. Use this to assess your network health.
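As a small illustration of the first point, this sketch reads those fields off an incoming packet with the standalone github.com/pion/rtp package; the handle function is just an assumed callback for whatever reads the socket:

package intercom

import (
    "fmt"

    "github.com/pion/rtp"
)

func handle(raw []byte) {
    pkt := &rtp.Packet{}
    if err := pkt.Unmarshal(raw); err != nil {
        return // not valid RTP; drop it
    }
    // Timestamp is in media-clock units (e.g. 48 kHz for Opus); the
    // RTCP sender reports map it onto NTP wall-clock time for sync.
    fmt.Println(pkt.SequenceNumber, pkt.Timestamp, pkt.SSRC)
}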
Multicast is a fantastic suggestion as well. You reduce the complexity of having to signal all those 1:1 connections, and reduce the amount of bandwidth required. It does make security a little bit more delicate/roll your own though.
With Pion we decoupled all the RTP/RTCP stuff into Pion Interceptor, so you don't have to use the full WebRTC stack to get the media-transport things mentioned above.

Protocol for sending streaming data over multiple sockets

I am working on designing an API for consuming messages from an application that will generate a very large amount of data; 10+ GB/s is likely, even for smaller clients. I am looking for a protocol that allows me to deliver this data in a way that is easy for clients to consume.
The obvious answer for me is: split up the messages so they are consumable over multiple connections. Each connection would consume a fraction of the overall load.
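For instance, a deterministic sharding rule lets a client open n connections and know which one carries which messages; the key concept and function name here are invented for illustration:

package sharding

import "hash/fnv"

// shard maps a message key to one of n parallel connections, so all
// messages for the same key stay on the same connection and keep
// their relative order.
func shard(key string, nConns int) int {
    h := fnv.New32a()
    h.Write([]byte(key))
    return int(h.Sum32() % uint32(nConns))
}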
But if I do this, there are a few things I need to account for:
How does the user know they are falling behind and need to launch more connections?
Twitter says consumers should check timestamps, which could work for us
When they launch a new connection to consume more of the data, how do they specify that this is part of the same consumption session?
We could give the session a name, correlate that with a "direct" amqp queue, and let our queue do the hard work
Is there something very important I am missing?
Probably.
For this reason, I'd much rather use a protocol that already exists.
The protocol would be considered extra awesome if it:
is websocket or streaming HTTP friendly
supports data compression
The problems you are describing are pretty much the same issues that video streaming has to deal with, which you probably already know. The key HTTP friendly streaming protocols are HLS (Apple), SmoothStreaming (Microsoft), HDS (Adobe) and MPEG-DASH (open protocol, but new).
When considering video streaming, it is also worth understanding whether your streams are more like 'live' streams or 'static' content - the former is generated on the fly and any given part of the live stream may only be available for a set time, while the latter is stored on the server in full and generally any part is available at any time (until the content is removed). How you stream and play back these is subtly different.
It may be that you can simply reuse one of the above video streaming protocols by wrapping your data as if it were video (or maybe it even is video), and implementing your own custom client on the receiving side.
Alternatively, these protocols could provide a good reference point if you wanted to create your own simpler protocol - there are several open source streaming servers you could look to for ideas or even adapt to your needs if that looks like a sensible route:
http://gstreamer.freedesktop.org
http://icecast.org
Video streaming is quite complex as you may already be aware, but if your use cases are simpler you may be able to ignore or remove much of the complexity - for example you may not need seek, multiple format and bit rate streams, accompanying streams (for subtitles etc). Being able to simplify like this might justify the effort to modify one of the above for your needs, if you are not able to use them out of the box.
One final point - video and audio streaming protocols usually have a built in way of dealing with delayed or lost packets. Depending on your application these may not be applicable to you so you should look carefully at this aspect if reusing a video or audio streaming protocol or server. For example, audio clients are typically tolerant of a small amount of packet loss, and will generally discard delayed packets rather than pause the audio (packets received outside the 'jitter buffer' window). If your application cannot tolerate any packet loss, then you will need to look carefully at the underlying solution and protocol to make sure it really meets your needs over all network conditions.
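As a sketch of that discard-delayed-packets behaviour, a receiver can compare RTP-style sequence numbers with wraparound and simply drop anything that arrives behind the playout point (the function name is assumed):

package jitter

// newer reports whether seq is ahead of lastPlayed, allowing for
// uint16 wraparound (the usual RFC 3550-style comparison). A delayed
// packet that fails this test is dropped rather than queued, so
// playback never pauses waiting for it.
func newer(seq, lastPlayed uint16) bool {
    return int16(seq-lastPlayed) > 0
}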

RTP/RTSP start up latency: Would this method help to reduce it, and if yes, why we don't have it

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using a proprietary communications protocol (DCOM-based) to send the video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that it's not us but RTSP.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
Works as designed - unfortunately it feels like a step backwards because prior user experience was actually better.
Can this be improved? If you stay within the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we are interested in (typically there's only one video and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? Any downside that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency even on a local intranet is definitely noticeable.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However, none of the clients/servers I've used implements it, AFAIK.
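To make the mechanics concrete, here is a Go sketch of pipelining the three requests over one TCP connection. camera:554, the URLs and the Session value are placeholders; a real client would have to parse the SETUP response for the actual session ID (RTSP 2.0 adds a Pipelined-Requests header precisely so PLAY can be queued before that ID is known):

package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "camera:554")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // All requests leave in one burst instead of one per round trip.
    fmt.Fprint(conn, "DESCRIBE rtsp://camera/stream RTSP/1.0\r\nCSeq: 1\r\n\r\n")
    fmt.Fprint(conn, "SETUP rtsp://camera/stream/video RTSP/1.0\r\nCSeq: 2\r\nTransport: RTP/AVP/TCP;interleaved=0-1\r\n\r\n")
    fmt.Fprint(conn, "PLAY rtsp://camera/stream RTSP/1.0\r\nCSeq: 3\r\nSession: placeholder\r\n\r\n")

    // Per the spec, responses come back in request order.
    r := bufio.NewReader(conn)
    for i := 0; i < 3; i++ {
        line, err := r.ReadString('\n')
        if err != nil {
            break
        }
        fmt.Print(line) // status line of each response; full parsing elided
    }
}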

long polling vs streaming for about 1 update/second

Is streaming a viable option?
Will there be a performance difference on the server end depending on which I choose?
Is one better than the other for this case?
I am working on a GWT application with Tomcat running on the server end. To understand my needs, imagine updating the stock prices of several stocks concurrently.
Do you want the process to be client- or server-driven? In other words, do you want to push new data to the clients as soon as it's available, or would you rather the clients request new data whenever they see fit, even though that might not be once per second? What is the likelihood that the client will be able to stick around to wait for an answer?
Even though you expect the events to occur once per second, how long does it take between a request from a client and the return from the server? If it's longer than a second, I'd expect you to lean towards pushing the events to the clients; the other way around, I'd expect polling to be okay. If the response takes longer than the interval, then you're essentially streaming anyway, since there's a new event ready by the time the client receives the last one, so the client could essentially poll continually and always receive events. In this case, streaming the data would actually be more lightweight, since you're removing the connection/negotiation overhead from the process.
I would suspect server load to be higher for a client-based (pull) subscription than for a streaming configuration, since the client would have to re-negotiate the connection each time instead of leaving a connection open - though each open connection in a streaming model requires server resources as well. It depends on the trade-off between how expensive your negotiation process is and how much memory/processing is required for each open connection. I'm no expert, though, so there may be other factors.
UPDATE: This guy talks about the trade-offs between long-polling and streaming, and he seems to say that with HTTP/1.1, the connection re-negotiation process is trivial, so that's not as much of an issue.
It doesn't really matter. The connection re-negotiation overhead is so slim with HTTP/1.1, you won't notice any significant performance differences one way or another.
The benefits of long-polling are compatibility and reliability - no issues with proxies, ports, detecting disconnects, etc.
The benefits of "true" streaming would potentially be reduced overhead, but as mentioned already, this benefit is much, much less than it's made out to be.
Personally, I find a well-designed comet server to be the best solution for large numbers of updates and/or server-push.
Certainly, if you're looking to push data, streaming would seem to provide better performance, if your server can handle the expected number of continuous connections. But there's another issue you don't address: are you on the internet or an intranet? Streaming has been reported to have some problems across proxies, much as you'd expect. So for a general-purpose solution, you would probably be better served by long polling; for an intranet, where you understand the network infrastructure, streaming is quite likely a simpler, better-performing solution for you.
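For reference, a long-poll endpoint is only a few lines in Go's standard library; the updates channel here is an assumed stand-in for whatever produces the quote events:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// updates would be fed by the quote producer; assumed for this sketch.
var updates = make(chan string)

func longPoll(w http.ResponseWriter, r *http.Request) {
    select {
    case quote := <-updates:
        fmt.Fprintln(w, quote) // answer immediately; the client re-polls
    case <-time.After(30 * time.Second):
        w.WriteHeader(http.StatusNoContent) // nothing new; poll again
    case <-r.Context().Done():
        // client gave up or disconnected
    }
}

func main() {
    http.HandleFunc("/poll", longPoll)
    http.ListenAndServe(":8080", nil)
}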
The StreamHub GWT Comet Adapter was designed exactly for this scenario of streaming stock quotes. Example here: GWT Streaming Stock Quotes. It updates the stock prices of several stocks concurrently. I think the implementation underneath is Comet, which is essentially streaming over HTTP.
Edit: It uses a different technique for each browser. To quote the website:
There are several different underlying techniques used to implement Comet including Hidden iFrame, XMLHttpRequest/Script Long Polling, and embedded plugins such as Flash. The introduction of HTML 5 WebSockets in to future browsers will provide an alternative mechanism for HTTP Streaming. StreamHub uses a "best-fit" approach utilizing the most performant and reliable technique for each browser.
Streaming will be faster because data only crosses the wire one way. With polling, the latency is at least doubled, since a request has to go out before the data comes back.
Polling is more resilient to network outages since it doesn't rely on a connection being kept open.
I'd go for polling just for the robustness.
For live stock prices I would absolutely keep the connection open, and ensure user alert/reconnection on disconnect.

What do you use when you need reliable UDP?

If you have a situation where a TCP connection is potentially too slow and a UDP 'connection' is potentially too unreliable what do you use? There are various standard reliable UDP protocols out there, what experiences do you have with them?
Please discuss one protocol per reply and if someone else has already mentioned the one you use then consider voting them up and using a comment to elaborate if required.
I'm interested in the various options here, of which TCP is at one end of the scale and UDP is at the other. Various reliable UDP options are available and each brings some elements of TCP to UDP.
I know that often TCP is the correct choice but having a list of the alternatives is often useful in helping one come to that conclusion. Things like Enet, RUDP, etc that are built on UDP have various pros and cons, have you used them, what are your experiences?
For the avoidance of doubt there is no more information, this is a hypothetical question and one that I hoped would elicit a list of responses that detailed the various options and alternatives available to someone who needs to make a decision.
What about SCTP? It's a standard protocol from the IETF (RFC 4960).
It has a chunking capability, which could help with speed.
Update: a comparison between TCP and SCTP shows that performance is comparable unless two interfaces can be used.
Update: a nice introductory article.
It's difficult to answer this question without some additional information on the domain of the problem.
For example, what volume of data are you using? How often? What is the nature of the data? (e.g., is it unique, one-off data, or a stream of sampled data?)
What platform are you developing for? (e.g., desktop/server/embedded)
To determine what you mean by "too slow", what network medium are you using?
But in (very!) general terms, I think you're going to have to try really hard to beat TCP for speed unless you can make some hard assumptions about the data that you're trying to send.
For example, if the data that you're trying to send is such that you can tolerate the loss of a single packet (e.g., regularly sampled data where the sampling rate is many times higher than the bandwidth of the signal), then you can probably sacrifice some reliability of transmission by ensuring that you can detect data corruption (e.g., through the use of a good CRC).
But if you cannot tolerate the loss of a single packet, then you're going to have to start introducing the types of techniques for reliability that TCP already has. And, without putting in a reasonable amount of work, you may find that you're starting to build those elements into a user-space solution, with all of the inherent speed issues to go with it.
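A sketch of that detect-and-drop approach in Go: append a CRC-32 to each datagram so the receiver can discard corrupted ones instead of requesting retransmission (seal/open are made-up names for this example):

package crcframe

import (
    "encoding/binary"
    "hash/crc32"
)

// seal appends a CRC-32 to the payload before it goes into a datagram.
func seal(payload []byte) []byte {
    sum := make([]byte, 4)
    binary.BigEndian.PutUint32(sum, crc32.ChecksumIEEE(payload))
    return append(payload, sum...)
}

// open splits a received datagram back into payload and checksum, and
// reports whether the payload survived the trip intact.
func open(datagram []byte) ([]byte, bool) {
    if len(datagram) < 4 {
        return nil, false
    }
    payload := datagram[:len(datagram)-4]
    want := binary.BigEndian.Uint32(datagram[len(datagram)-4:])
    return payload, crc32.ChecksumIEEE(payload) == want
}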
ENET - http://enet.bespin.org/
I've worked with ENet as a reliable UDP protocol and written an asynchronous-sockets-friendly version for a client of mine who is using it in their servers. It works quite nicely, but I don't like the overhead that the peer-to-peer ping adds to otherwise idle connections; when you have lots of connections, pinging all of them regularly is a lot of busy work.
ENet gives you the option to send multiple 'channels' of data and for the data sent to be unreliable, reliable or sequenced. It also includes the aforementioned peer-to-peer ping, which acts as a keep-alive.
We have some defense industry customers that use UDT (UDP-based Data Transfer) (see http://udt.sourceforge.net/) and are very happy with it. I see that it has a friendly BSD license as well.
Anyone who decides that the list above isn't enough and that they want to develop their OWN reliable UDP should definitely take a look at the Google QUIC spec as this covers lots of complicated corner cases and potential denial of service attacks. I haven't played with an implementation of this yet, and you may not want or need everything that it provides, but the document is well worth reading before embarking on a new "reliable" UDP design.
A good jumping off point for QUIC is here, over at the Chromium Blog.
The current QUIC design document can be found here.
RUDP - Reliable User Datagram Protocol
This provides:
Acknowledgment of received packets
Windowing and congestion control
Retransmission of lost packets
Overbuffering (Faster than real-time streaming)
It seems slightly more configurable with regard to keep-alives than ENet, but it doesn't give you as many options (i.e., all data is reliable and sequenced, not just the bits that you decide should be). It looks fairly straightforward to implement.
As others have pointed out, your question is very general, and whether or not something is 'faster' than TCP depends a lot on the type of application.
TCP is generally as fast as it gets for reliable streaming of data from one host to another. However, if your application does a lot of small bursts of traffic and waiting for responses, UDP may be more appropriate to minimize latency.
There is an easy middle ground. Nagle's algorithm is the part of TCP that coalesces small writes into fewer, larger segments so the network isn't flooded with tiny packets; the cost is added latency on small sends.
If you need the reliable, in-order delivery of TCP, and also the fast response of UDP, and you don't need to worry about the overhead of sending lots of tiny packets, you can disable Nagle's algorithm:
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* TCP_NODELAY */

int opt = 1;  /* any nonzero value enables TCP_NODELAY */
if (setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY, (char *)&opt, sizeof(opt)) < 0)
    printf("Error disabling Nagle's algorithm.\n");
If you have a situation where a TCP connection is potentially too slow and a UDP 'connection' is potentially too unreliable what do you use? There are various standard reliable UDP protocols out there, what experiences do you have with them?
The key word in your sentence is 'potentially'. I think you really need to prove to yourself that TCP is, in fact, too slow for your needs if you need reliability in your protocol.
If you want to get reliability out of UDP then you're basically going to be re-implementing some of TCP's features on top of UDP which will probably make things slower than just using TCP in the first place.
Protocol DCCP, standardized in RFC 4340, "Datagram Congestion Control Protocol", may be what you are looking for.
It seems to be implemented in Linux.
Maybe RFC 5405, "Unicast UDP Usage Guidelines for Application Designers", will be useful for you.
RUDP. Many socket servers for games implement something similar.
Did you consider compressing your data?
As stated above, we lack information about the exact nature of your problem, but compressing the data you transport could help.
It is hard to give a universal answer to the question but the best way is probably not to stay on the line "between TCP and UDP" but rather to go sideways :).
A bit more detailed explanation:
If an application needs a confirmation response for every piece of data it transmits, then TCP is pretty much as fast as it gets (especially if your messages are much smaller than the optimal MTU for your connection). And if you need to send periodic data that expires the moment you send it out, then raw UDP is the best choice, for many reasons but not particularly for speed.
Reliability is a more complex question; it is somewhat relative in both cases, and it always depends on the specific application. For a simple example: if you unplug the internet cable from your router, then good luck reliably delivering anything with TCP. What's even worse, if you don't do something about it in your code, your OS will most likely just block your application for a couple of minutes before indicating an error, and in many cases that delay is just not acceptable either.
So the question with conventional network protocols is generally not really about speed or reliability but rather about convenience: getting some features of TCP (automatic congestion control, automatic transmission unit size adjustment, automatic retransmission, basic connection management, ...) while also getting at least some of the important and useful features it misses (message boundaries - the most important one - connection quality monitoring, multiple streams within a connection, etc.) without having to implement them yourself.
From my point of view, SCTP now looks like the best universal choice, but it is not very popular, and the only realistic way to reliably pass it across today's Internet is still to wrap it inside UDP (probably using sctplib). It is also still a relatively basic and compact solution, and for some applications it may not be sufficient by itself.
As for the more advanced options: in some of our projects we used ZeroMQ, and it worked just fine. This is much more of a complete solution, not just a network protocol (under the hood it supports TCP, UDP, a couple of higher-level protocols and some local IPC mechanisms to actually deliver messages). A couple of releases ago its initial developer switched his attention to his new nanomsg and, currently, NNG libraries. These are not as thoroughly developed and tested and not very popular, but someday that may change. If you don't mind the CPU overhead and some network bandwidth loss, then some of these libraries might work for you. There are some other network-oriented message-exchange libraries available as well.
You should check MoldUDP, which has been around for decades and is used by Nasdaq's ITCH market data feed. Our messaging system CoralSequencer uses it to implement a reliable multicast event-stream from a central process.
Disclaimer: I'm one of the developers of CoralSequencer
The best way to achieve reliability using UDP is to build the reliability into the application program itself (for example, by adding acknowledgment and retransmission mechanisms).
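As a minimal illustration of that, here is a stop-and-wait sketch over UDP in Go: prefix each datagram with a sequence number, then retransmit until the matching ack comes back. sendReliable and the 200 ms timeout are illustrative choices, not a tuned protocol; a real implementation would also cap retries and window multiple packets.

package ackudp

import (
    "encoding/binary"
    "net"
    "time"
)

// sendReliable transmits one payload and blocks until the peer acks
// the sequence number, retransmitting on every timeout.
func sendReliable(conn *net.UDPConn, seq uint32, payload []byte) error {
    pkt := make([]byte, 4+len(payload))
    binary.BigEndian.PutUint32(pkt, seq)
    copy(pkt[4:], payload)

    ack := make([]byte, 4)
    for {
        if _, err := conn.Write(pkt); err != nil {
            return err
        }
        conn.SetReadDeadline(time.Now().Add(200 * time.Millisecond))
        n, err := conn.Read(ack)
        if err == nil && n == 4 && binary.BigEndian.Uint32(ack) == seq {
            return nil // acknowledged
        }
        // Timeout or stray packet: send again. (Unbounded here; a real
        // protocol would give up after some number of attempts.)
    }
}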