Does it make sense to use RTP protocol for multiple streamers and single receiver?

I am in the process of learning the RTP/RTCP protocol and trying to use it. My situation is that there are 1 to n streamers and 1 (or potentially 1 to m, if needed) receiver(s), but the streamers themselves do not know about each other (they cannot, for technical reasons such as being on different networks, limited bandwidth, etc.). So it is more like multiple unicast sessions, except that the receiver knows about them all and collects data from all of them; it is just that the senders do not know about each other.
Now, reading about the protocol, it seems to me that a huge portion of it is related to sending feedback, collision detection, and so on. So I have doubts: is RTP really applicable in this case? Is it already used in this way somewhere?
It still seems beneficial to collect the statistics about the data transfer that RTP provides (data sent, loss, timing, etc.); it just feels like most of the protocol is left unused...
I also have one additional question. Going through the various RTP libraries, they all assume that the sender will also open ports for receiving RTP/RTCP data. Does RTP forbid one-way communication, i.e. an application that only streams data and does not expect to receive anything back? The libraries (e.g. ccRTP) seem to assume two-way communication only...

RTCP is the companion protocol that provides statistics. The stream receiver (client) sends its stats to the sender (server) via RTCP Receiver Reports; the sender, in turn, sends RTCP Sender Reports, so some statistics do flow back to the client as well.
There's nothing wrong with a single client receiving multiple unicast sessions from various servers.
RTP itself does not require two-way communication; it is the usual session setup, typically RTSP, that does. Once setup is complete and the PLAY command has been sent, traffic is mostly one way. The exception is the "keep alive" requests that must be sent to the server periodically (usually every 60 seconds or so) to keep the stream going; the exact timeout value is sent to the client during setup.
But if you implement your own RTP sender, there's nothing stopping you from having the server send the stream continuously without any feedback from the client. Basically, that amounts to an infinite timeout value.
You can read about all the details in the spec, RFC 3550: RTP: A Transport Protocol for Real-Time Applications.
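For illustration, here is a minimal sketch of such a send-only stream in Python. The payload type (96), the 8 kHz clock with 20 ms packets, the dummy payload, and the receiver address are all assumptions for the example, not values mandated by the spec.

    import socket
    import struct
    import time

    # Build the fixed 12-byte RTP header from RFC 3550: version 2, no padding,
    # no extension, no CSRCs, marker bit clear.
    def rtp_packet(seq, timestamp, ssrc, payload, payload_type=96):
        header = struct.pack('!BBHII',
                             2 << 6,                 # V=2, P=0, X=0, CC=0
                             payload_type & 0x7F,    # M=0, dynamic payload type
                             seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF,
                             ssrc)
        return header + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq, ts, ssrc = 0, 0, 0x12345678                 # SSRC picked arbitrarily here
    for _ in range(50):                              # one second of dummy audio
        sock.sendto(rtp_packet(seq, ts, ssrc, b'\x00' * 160),
                    ('192.0.2.10', 5004))            # example receiver address
        seq += 1
        ts += 160                                    # 160 samples = 20 ms at 8 kHz
        time.sleep(0.02)

Nothing in this loop ever reads from the socket; whether that is acceptable is purely a question of how the receiver learns about the stream and whether you care about the feedback RTCP would give you.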

Is it possible to receive from one port and send through another in mainline dht?

I'm trying to implement Mainline DHT. While implementing it, I found it easier to use multithreading so I can handle incoming requests and send my own requests at the same time. But it's impossible for a single port to both send and receive at the same time. I thought of two solutions; one would be to use different ports for receiving and sending, but in Mainline DHT it seems that whenever you send a request, nodes remember you based on the port you sent the request from. Is it still possible to use different ports for receiving and sending?
The DHT requires that the same port is used for sending and receiving.
But it's impossible for a single port to both send and receive at the same time.
Sockets are thread-safe; you can issue send and receive syscalls on the same socket at the same time.
If you want to load-balance reading across multiple threads, you can open multiple sockets bound to the same port via SO_REUSEPORT, but that shouldn't be necessary: a regular DHT implementation will only see a dozen packets per second, perhaps with short bursts into the thousands, which a single core can comfortably handle.
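As a small sketch of that point: one UDP socket bound to a single port, a dedicated thread blocking in recvfrom, and the main thread sending from the same socket. The address and payload below are placeholders, not a real KRPC message.

    import socket
    import threading

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 6881))                 # one port for both directions

    def receive_loop():
        while True:
            data, addr = sock.recvfrom(2048)     # blocking here does not block sends
            print('received', len(data), 'bytes from', addr)

    threading.Thread(target=receive_loop, daemon=True).start()

    # The main thread keeps sending from the same socket, so remote nodes always
    # see the same source port. A real implementation would send a bencoded KRPC
    # query here instead of this placeholder.
    sock.sendto(b'placeholder-query', ('203.0.113.5', 6881))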

Game server TCP networking sockets - fairness

I'm writing a game server for a turn-based game. One criteria is that the game needs to be as fair for all players as possible.
So far it works like this:
Each client has a TCP connection. (If relevant, the connection is opened via WebSockets)
While running, continually check for incoming socket messages via epoll.
Iterate through clients with sockets ready to read:
Read all messages from the client.
Update the internal game state for each message.
Queue outgoing messages to affected clients.
At the end of each "window" (turn):
Iterate through clients and write all queued outgoing messages to their sockets
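Roughly, in Python (the selectors module is backed by epoll on Linux; WebSocket framing, partial reads and disconnects are left out, and the game-state update is reduced to a placeholder), the loop looks like this:

    import selectors
    import socket

    TURN_WINDOW = 0.1                         # seconds per turn window (example)
    sel = selectors.DefaultSelector()         # epoll-backed on Linux
    outgoing = {}                             # client socket -> bytes queued this turn

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('0.0.0.0', 9000))          # example port
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select(timeout=TURN_WINDOW):
            if key.fileobj is listener:                    # new client connecting
                conn, _ = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
                outgoing[conn] = b''
            else:                                          # client has data to read
                data = key.fileobj.recv(4096)
                # ...update the internal game state from `data`, then queue
                # outgoing messages for the affected clients:
                outgoing[key.fileobj] += b'update'         # placeholder message
        for client, buf in outgoing.items():               # end of turn: flush
            if buf:
                client.sendall(buf)
                outgoing[client] = b''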
My concern for fairness raises the following questions:
Does it matter in which order I send messages to the clients?
Calling write() on all the sockets takes only a fraction of a second for my program, but somewhere in the underlying OS or networking would it make a difference if I sorted the client list?
Perhaps I should be sending to the highest-latency clients first?
Does it matter how I write the outgoing messages to the sockets?
Currently I'm writing them as one large chunk. The size can exceed a single packet.
Would it be faster for the client to begin its processing if I sent the messages in chunks smaller than one packet?
Would it be better to write 1 packet worth to each client at a time, and iterate over the clients multiple times?
Are there any Linux/networking configurations that would have an impact here?
Thanks in advance for your feedback and tips.
Does it matter in which order I send messages to the clients?
Yes, by fractions of milliseconds. If the network interface is available for sending, the OS will start sending immediately. Why would it wait?
Perhaps I should be sending to the highest-latency clients first?
I think you should be sending in random order. Shuffle the list prior to sending. This makes it fair. I think your question is valid and this should be addressed.
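A minimal sketch of that (send_queued stands in for whatever function actually writes a client's queued messages):

    import random

    # Shuffle a copy of the client list each turn so no client is consistently
    # written to first.
    def flush_turn(clients, send_queued):
        order = list(clients)
        random.shuffle(order)
        for client in order:
            send_queued(client)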
Currently I'm writing them as one large chunk. [...]
First, realize that TCP is stream-based and that there are no packets/messages at the protocol level. On a physical level data is indeed packetized.
It is not necessary to manually split off packets because clients will read data as it arrives anyway. If a client issues a read, that read will complete immediately once the first packet has arrived. There is no artificial waiting in the OS.
Are there any Linux/networking configurations that would have an impact here?
I don't know. Be sure to disable Nagle's algorithm (the TCP_NODELAY socket option), though.
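Setting it is a one-liner per connected socket:

    import socket

    # Disable Nagle's algorithm so small turn updates are sent immediately
    # instead of being held back to coalesce with later writes.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)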

Packet drop notification in Layer-2

Is there a way I can, in user space, get a notification about a packet being dropped at Layer 2 in 802.11?
According to my understanding, when a packet is sent out on the medium, Layer-2 ACKs are received if it is delivered correctly (if not, the sender retransmits and ultimately drops the packet if it is still not delivered after several retries).
I want to be able to access this notification (in user space) and change the behavior of packet transmission.
Specifically, I want to send the packet to another host taken from the FIB rather than dropping it.
I have read about the libpcap library and netfilter hooks, which let me capture packets and inject them back into the networking stack.
But I'm not able to find hooks (if there are any for the wireless stack) that would let me capture this Layer-2 drop notification.
Please correct me if I'm not understanding something correctly. Also, any heads-up or links to read would be great.
No, you cannot, at least not using the standardised sockets interfaces. 802.11 is a link layer, and the sockets API is strictly link-layer agnostic: unless it's going to work on all link layers, it's not in sockets. There are good reasons for that: the kind of cross-layer interaction that you envision has been tried many times, and it's always turned out more trouble than it's worth.
You didn't give us any details about the application — but the best solution is most probably to change your application-layer protocol to send explicit acknowledgments, and send your data over the fallback route when you fail to receive an ACK.
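As a rough sketch of that application-layer approach (the addresses, the timeout, and the one-byte ACK convention are all illustration, not part of any standard):

    import socket

    PRIMARY = ('192.0.2.10', 9000)           # preferred next hop (example)
    FALLBACK = ('192.0.2.20', 9000)          # fallback host (example)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.2)                     # how long to wait for an app-level ACK

    def send_with_fallback(payload):
        for dest in (PRIMARY, FALLBACK):
            sock.sendto(payload, dest)
            try:
                reply, _ = sock.recvfrom(64)
                if reply == b'ACK':          # peer explicitly acknowledged
                    return dest
            except socket.timeout:
                continue                     # no ACK in time: try the fallback
        return None                          # neither destination acknowledged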

What are the required mechanisms for a reliable layer over UDP?

I've been working on writing my own networking engine for my game development side projects. It needs to support unreliable, reliable, and ordered reliable messages. I have not, however, been able to identify all of the mechanisms necessary for the reliable and ordered reliable variants.
What are the required mechanisms for a reliable layer over UDP? Additional details are appreciated.
So far, I gather that these are requirements:
Acknowledge received messages with a sequence number.
Resend unacknowledged messages after a retransmission time expires.
Track round trip times for each destination in order to calculate an appropriate retransmission time.
Identify and remove duplicate packets.
Handle overflowing sequence numbers looping around.
This has influenced my architecture: reliable message headers carry a sequence number and timestamp; acknowledgment messages echo back a received sequence and timestamp; retransmission times are tracked per destination address; and a thread a) receives messages and queues them for user receipt, b) acknowledges reliable messages, and c) retransmits unacknowledged messages whose retransmission timers have expired.
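To make that concrete, here is a small sketch of the bookkeeping. The field sizes, flag values, and names are my own assumptions for illustration, not an established format.

    import struct
    import time

    HEADER = struct.Struct('!BIQ')       # flags, 32-bit sequence, 64-bit send timestamp
    FLAG_RELIABLE = 0x01
    FLAG_ACK = 0x02

    unacked = {}                         # sequence -> (monotonic send time, packet)

    def make_reliable(seq, payload):
        packet = HEADER.pack(FLAG_RELIABLE, seq, time.time_ns()) + payload
        unacked[seq] = (time.monotonic(), packet)
        return packet

    def handle_ack(seq):
        unacked.pop(seq, None)           # duplicate or late ACKs are simply ignored

    def due_for_retransmit(rto):
        # `rto` would come from the per-address round-trip-time tracker.
        now = time.monotonic()
        return [pkt for sent, pkt in unacked.values() if now - sent > rto]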
NOTE:
Reliable UDP is not the same as TCP. Even ordered reliable UDP is not the same as TCP. I am not secretly unaware that I really want TCP. Also, before someone plays semantic games: yes, "reliable UDP" is an oxymoron. This is a layer over UDP that provides reliable delivery.
You might like to take a look at the answers to this question: What do you use when you need reliable UDP?
I'd add 'flow control' to your list. You want to be able to control the amount of data you're sending on a particular link depending on the round trip times you're getting, or you'll flood the link and just be throwing datagrams away.
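A very simple form of that is to cap the number of unacknowledged datagrams in flight and shrink the cap when the measured round trip time climbs; the thresholds below are arbitrary illustration values, not tuned numbers.

    MAX_WINDOW = 64        # upper bound on datagrams in flight (arbitrary)
    window = 32            # current allowance

    def can_send(in_flight):
        return in_flight < window

    def on_rtt_sample(rtt, smoothed_rtt):
        global window
        if rtt > 2 * smoothed_rtt:                # link looks congested: back off
            window = max(1, window // 2)
        else:
            window = min(MAX_WINDOW, window + 1)  # slowly probe for more capacity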
Note that depending on the overall protocol, it might be possible to dispense with retransmission timers. See, for example, the Quake 3 network protocol.
In Q3 reliable packets are simply sent until an ack is seen.
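Roughly (the framing and names below are illustrative, not the actual Quake 3 wire format), the sender just keeps piggybacking the pending reliable payload onto every outgoing datagram until the peer's acknowledged sequence number catches up:

    reliable_seq = 0         # sequence number of the pending reliable message
    reliable_data = b''      # its payload; empty when nothing is pending
    peer_acked_seq = 0       # highest reliable sequence the peer has confirmed

    def build_packet(unreliable_payload):
        # Re-attach the reliable payload on every send until it is acknowledged.
        pending = b'' if peer_acked_seq >= reliable_seq else reliable_data
        return (reliable_seq.to_bytes(4, 'big')
                + len(pending).to_bytes(2, 'big')
                + pending
                + unreliable_payload)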
Why are you trying to re-invent TCP? It provides all of the features you originally stated, and has been shown to work well.
EDIT - Since your comments show that you have additional requirements not originally stated, you should consider whether a hybrid model using multiple sockets would be better than trying to fulfill all of those criteria in a single application-layer protocol.
Actually it seems that what you really need is SCTP.
SCTP supports:
message based (rather than byte stream) transmissions
multiple streams over a single connection
ordered or unordered receipt of packets
... message ordering is optional in SCTP; a receiving application may choose to process messages in the order they are received instead of the order they were sent
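On Linux with kernel SCTP support, the kernel implementation is reachable through the plain sockets API (assuming your Python build exposes IPPROTO_SCTP); a minimal one-to-one style sketch, with an example peer address:

    import socket

    # Requires SCTP support in the kernel (e.g. the sctp module loaded on Linux).
    # SOCK_STREAM selects the one-to-one API style; SOCK_SEQPACKET is one-to-many.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    sock.connect(('192.0.2.10', 5000))       # example peer
    sock.send(b'hello over sctp')            # SCTP preserves message boundaries
    sock.close()

Per-stream and unordered delivery need the SCTP-specific socket options and ancillary data described in RFC 6458, or a wrapper library such as pysctp.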

how to make SIP protocol more reliable using UDP

We are doing thesis work where we need to connect 10 SIP-based VoIP phones with each other so they can call and talk among each other. We also want to add video calling. Another question: is it possible to make video calls over SIP?
SIP already has built-in reliability measures, most of which exist specifically to cope with unreliable transports such as UDP. You should read the section on transactions in the SIP RFC to gain an understanding of how it works. One aspect missing from the base SIP RFC is reliability for provisional responses; the supplementary RFC 3262 deals with that.
SIP is agnostic to the type of session it sets up, such as voice or video, so yes, it can be used to set up video calls. There are heaps of readily available SIP softphones that already provide video, one example being X-Lite.
To make it reliable you need to implement the following two features:
For Calls
You need to sequence the packets.
One end needs to tell the other end that a sequenced packet is missing if this happens, and you probably want to take jitter into account -- i.e., wait a small amount of time before you request a missing packet.
For protocol commands
You need to acknowledge command packets -- if a command is not acknowledged, it has to be sent again.
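As a sketch of that second point, close in spirit to how SIP itself retransmits requests over UDP (the doubling interval mirrors SIP's Timer A behaviour; the base interval and give-up count here are illustrative):

    import socket

    def send_until_answered(sock, request, dest, base_interval=0.5, attempts=6):
        # Resend the command until any response arrives, doubling the wait each
        # time, and give up after a fixed number of attempts.
        interval = base_interval
        for _ in range(attempts):
            sock.sendto(request, dest)
            sock.settimeout(interval)
            try:
                reply, _ = sock.recvfrom(4096)
                return reply                  # the command was acknowledged
            except socket.timeout:
                interval *= 2
        return None                           # never acknowledged: report failure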