I want to transfer a large number of messages. The messages do not need to be reliable. UDP comes to mind as a protocol choice.
Latency is important as well. I do not want to suffer from TCP head-of-line blocking.
I'm concerned I might overload the network if I just start sending messages at maximum speed (e.g. while (messagesRemaining != 0) Send(...);). If I send faster than some middle-box can relay, then, I think, large numbers of messages might be dropped. Some messages being dropped is fine, but most of them should arrive.
How can I address this issue? How can I find out how fast I can send? I want to maintain reasonable packet loss (a few percent) and otherwise maximize bandwidth.
Whether you will overload the network or not depends on what is between the sender and the receiver hosts. The iperf utility has a UDP option that could help you determine the max rate you could send for a certain level of acceptable packet loss.
That said, from personal experience:
If it's a local Gigabit network and the client and server are on the same subnet, I highly doubt you would lose any packets. I've done tests with iperf in this type of environment and never lost any; iperf is one of the fastest and most efficient ways to put UDP packets onto the wire from a PC, and we still never lost packets. We were even running the packets through an Intel Atom-based Linux host with a bridged-ports setup while doing tcpdump at the same time and still never lost packets (note that even cheap switches will perform as well as or better than a bridge setup in a PC). The only way we were ever able to get packets to be lost was with FPGA/ASIC test devices that could put packets onto the wire at true line speed for long periods of time, and even then the test setup only lost packets when they were smaller than around 500 bytes.
If you aren't on a local network, though (i.e. you're going over the internet or through routers), you will just need to do some testing with iperf to see what rate is acceptable for your environment. The problem is that the rate that can be sustained one day isn't guaranteed to be the same the next day. UDP doesn't have any congestion/flow-control algorithms like TCP does, so you will have to figure out on your own how fast you can send.
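If you do end up rolling your own pacing, one very rough approach (nothing like real congestion control) is to spread the sends out in time yourself and tune the target rate from iperf-style measurements. Below is a minimal sketch in C; the socket is assumed to already be created and connect()ed, and all the names and numbers are illustrative:

#include <string.h>
#include <time.h>
#include <sys/socket.h>

/* Send (messageCount) messages of (messageSize) bytes each over the already-
 * connect()ed UDP socket (sock), paced so that we never exceed
 * (targetBitsPerSecond).  Assumes messageSize <= sizeof(buf). */
static void paced_send(int sock, int messageCount, int messageSize,
                       double targetBitsPerSecond)
{
    char buf[1500] = {0};                       /* dummy payload */
    const double secondsPerMessage =
        (messageSize * 8.0) / targetBitsPerSecond;

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < messageCount; i++)
    {
        send(sock, buf, messageSize, 0);

        /* Earliest time the next message may be sent... */
        next.tv_nsec += (long)(secondsPerMessage * 1e9);
        while (next.tv_nsec >= 1000000000L)
        {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }

        /* ...and sleep until then (returns immediately if we're already late). */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}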
I've read many Stack Overflow questions similar to this, but I don't think any of the answers really satisfied my curiosity. I have an example below on which I would like some clarification.
Suppose the client is blocking on socket.recv(1024):
socket.recv(1024)
print("Received")
Also, suppose I have a server sending 600 bytes to the client. Let us assume that these 600 bytes are broken into 4 small packets (of 150 bytes each) and sent over the network. Now suppose the packets reach the client at slightly different times, 0.0001 seconds apart (e.g. one packet arrives at 12:00.0001 pm, the next at 12:00.0002 pm, and so on).
How does socket.recv(1024) decide when to return execution to the program and allow the print() function to execute? Does it return execution immediately after receiving the 1st packet of 150 bytes? Or does it wait for some arbitrary amount of time (eg. 1 second, for which by then all packets would have arrived)? If so, how long is this "arbitrary amount of time"? Who determines it?
Well, that will depend on many things, including the OS and the speed of the network interface. For a 100-gigabit interface, 100 µs is "forever," but for a 10 Mbit interface you can't even transmit the packets that fast. So I won't pay too much attention to the exact timing you specified.
Back in the day when TCP was being designed, networks were slow and CPUs were weak. Among the flags in the TCP header is the "Push" (PSH) flag, which signals that the payload should be delivered to the application immediately. So if we hop into the Wayback machine, the answer would have been something like: it depends on whether or not the PSH flag is set in the packets. However, there is generally no user-space API to control whether or not the flag is set. Generally what would happen is that for a single write that gets broken into several packets, the final packet would have the PSH flag set. So the answer for a slow network and a weakling CPU might be that, if it was a single write, the application would likely receive all 600 bytes at once. You might then think that using four separate writes would result in four separate reads of 150 bytes, but after the introduction of Nagle's algorithm the data from the second to fourth writes might well be sent in a single packet unless Nagle's algorithm was disabled with the TCP_NODELAY socket option, since Nagle's algorithm will wait for the ACK of the first packet before sending anything less than a full frame.
If we return from our trip in the Wayback machine to the modern age, where 100 Gigabit interfaces and 24-core machines are common, our problems are very different, and you will have a hard time finding an explicit check for the PSH flag in the Linux kernel. What is driving the design of the receive side is that networks are getting far faster while the packet size/MTU has stayed largely fixed, and CPU speed is flatlining while cores are abundant. Reducing per-packet overhead (including hardware interrupts) and distributing packets efficiently across multiple cores is imperative. At the same time, it is imperative to get the data from that 100+ Gigabit firehose up to the application ASAP. One hundred microseconds of data on such a NIC is a considerable amount of data to be holding onto for no reason.
I think one of the reasons that there are so many questions of the form "What the heck does receive do?" is that it can be difficult to wrap your head around what is a thoroughly asynchronous process, whereas the send side has a more familiar control flow: it is much easier to trace the flow of packets to the NIC, and we are in full control of when a packet will be sent. On the receive side, packets just arrive when they want to.
Let's assume that a TCP connection has been set up and is idle, there is no missing or unacknowledged data, the reader is blocked on recv, and the reader is running a fresh version of the Linux kernel. And then a writer writes 150 bytes to the socket and the 150 bytes gets transmitted in a single packet. On arrival at the NIC, the packet will be copied by DMA into a ring buffer, and, if interrupts are enabled, it will raise a hardware interrupt to let the driver know there is fresh data in the ring buffer. The driver, which desires to return from the hardware interrupt in as few cycles as possible, disables hardware interrupts, starts a soft IRQ poll loop if necessary, and returns from the interrupt. Incoming data from the NIC will now be processed in the poll loop until there is no more data to be read from the NIC, at which point it will re-enable the hardware interrupt. The general purpose of this design is to reduce the hardware interrupt rate from a high speed NIC.
Now here is where things get a little weird, especially if you have been looking at nice clean diagrams of the OSI model where higher layers of the stack fit cleanly on top of each other. Oh no, my friend, the real world is far more complicated than that. That NIC you might have been thinking of as a straightforward layer-2 device, for example, knows how to direct packets from the same TCP flow to the same CPU/ring buffer. It also knows how to coalesce adjacent TCP packets into larger packets (although this capability is not used by Linux and is instead done in software). If you have ever looked at a network capture, seen a jumbo frame, and scratched your head because you were sure the MTU was 1500, it is because this processing happens at such a low level that it occurs before netfilter can get its hands on the packet. This packet coalescing is part of a capability known as receive offloading; in particular, let's assume that your NIC/driver has generic receive offload (GRO) enabled (which is not the only possible flavor of receive offloading), the purpose of which is to reduce the per-packet overhead from your firehose NIC by reducing the number of packets that flow through the system.
So what happens next is that the poll loop keeps pulling packets off the ring buffer (as long as more data is coming in) and handing them off to GRO to consolidate if it can, and then they get handed off to the protocol layer. As best I know, the Linux TCP/IP stack is just trying to get the data up to the application as quickly as it can, so I think your question boils down to "Will GRO do any consolidation on my 4 packets, and are there any knobs I can turn to affect this?"
Well, the first thing you can do is disable any form of receive offloading (e.g. via ethtool), which I think should get you 4 reads of 150 bytes for 4 packets arriving like this in order, but I'm prepared to be told I have overlooked another reason why the Linux TCP/IP stack won't send such data straight to the application if the application is blocked on a read as in your example.
The other knob you have, if GRO is enabled, is GRO_FLUSH_TIMEOUT, a per-NIC timeout in nanoseconds which can be (and I think defaults to) 0. If it is 0, I think your packets may get consolidated (there are many details here, including the value of MAX_GRO_SKBS) if they arrive while the soft IRQ poll loop for the NIC is still active, which in turn depends on many things unrelated to the four packets in your TCP flow. If it is non-zero, they may get consolidated if they arrive within GRO_FLUSH_TIMEOUT nanoseconds, though to be honest I don't know whether this interval can span more than one instantiation of a poll loop for the NIC.
There is a nice writeup on the Linux kernel receive side here which can help guide you through the implementation.
A normal blocking receive on a TCP connection returns as soon as there is at least one byte to return to the caller. If the caller would like to receive more bytes, they can simply call the receive function again.
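If the caller actually needs a specific number of bytes (say, all 600) before doing anything useful, the usual idiom is just such a loop. Here is a minimal sketch in C; the helper name recv_fully is purely illustrative:

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly (wantedBytes) bytes have been read into
 * (buf), or until the connection closes or an error occurs.  Returns the
 * number of bytes actually read. */
static ssize_t recv_fully(int sock, char *buf, size_t wantedBytes)
{
    size_t got = 0;
    while (got < wantedBytes)
    {
        ssize_t n = recv(sock, buf + got, wantedBytes - got, 0);
        if (n <= 0) break;      /* 0 == connection closed, <0 == error */
        got += (size_t)n;
    }
    return (ssize_t)got;
}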
I would like to know how the packet delivery rate compares between MQTT and CoAP transmission. I know that TCP is more reliable than UDP, so MQTT should have a higher packet delivery rate. I just want to know: if 2000 packets are sent using each protocol separately, what would be the approximate percentage delivered in the two cases?
Please help with an example if possible.
If you dig a little, you will find that both TCP and UDP are ultimately sending IP messages, and some of those messages may be lost. For TCP, retransmission is handled by the TCP protocol without any involvement from you, and that works reasonably well (at least in many cases). For CoAP, when you use CON (confirmable) messages, CoAP does the retransmission for you, so there is also not too much to lose.
When it comes to transmissions with more message loss (e.g. bad connectivity), the reliability may also depend on the amount of data. If it fits into one IP message, the probability that it reaches the destination is higher than the probability of 4 messages all reaching their destination. That is where the difference starts:
Using TCP requires that all the messages reach the destination without gaps (e.g. 1, 2, (drop 3), 4 will not work; nothing after the gap is delivered until 3 has been retransmitted). CoAP will deliver the messages even with gaps. It depends on your application whether you can benefit from that or not.
I've been testing CoAP over DTLS 1.2 (with Connection ID) for about a year now, using an Android phone and just moving around sending requests (about 400 bytes) over Wi-Fi and the mobile network. It works very reliably. Current statistics: 2000 requests, 143 retransmissions, 4 lost. Please note: 4 lost mainly means "no connection", so don't expect TCP to do better than that, especially when moving around, where new TCP/TLS handshakes are frequently required.
So my conclusion:
If you have a stable network connection, both should work.
If you have a less stable network connection and gaps are not acceptable to your application, both will have trouble.
If you have a less stable network connection and gaps are acceptable to your application, then CoAP will do the better job.
I am currently creating server software that communicates with multiple Arduino boards. Due to the hardware, I am using the UDP protocol. I have a pretty simple mechanism that will resend packets in most of the cases where they get lost. I have two questions now:
How probable is it that UDP packets get lost in a network with no Internet access and about 20 Arduinos and one computer? Is it even necessary to have a resend method?
Is there a way I can simulate UDP packet loss in this network to check if the resend mechanisms are working?
How probable is it that UDP packets get lost in a network with no Internet access and about 20 Arduinos and one computer?
The probability is 100% that sooner or later a packet will be dropped.
If you want a more detailed statistic, like the probability of a packet being dropped within any particular period of time, the only real way to know is to try it and find out (using e.g. sequence numbers in the packets so that the receiver(s) can detect when a packet has been dropped by noting the skipped sequence number). The probability will depend very much on the size of the packets, the speed at which the packets are being sent, the CPU speed of the receivers, what other tasks the receivers are spending CPU time on, the quality of your Ethernet switch, the quality of your Ethernet cables, the phase of the moon, etc. etc.
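To make that concrete, here is a minimal sketch of the sequence-number idea (the function name, the 0, 1, 2, ... numbering and the assumption of in-order arrival are mine, purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* Receiver-side bookkeeping: call this with the sequence number carried in
 * each arriving packet.  Assumes the sender numbers packets 0, 1, 2, ... and
 * that packets are not reordered (a real scheme would handle reordering). */
static uint32_t expectedSeq  = 0;
static uint32_t droppedCount = 0;

void note_sequence_number(uint32_t seq)
{
    if (seq > expectedSeq)
    {
        droppedCount += seq - expectedSeq;   /* these numbers never showed up */
        printf("Detected %u dropped packet(s), %u total so far\n",
               seq - expectedSeq, droppedCount);
    }
    expectedSeq = seq + 1;
}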
Is it even necessary to have a resend method?
That depends on what the consequences would be to having a packet dropped. For some applications (e.g. streaming audio or video, or audio-metering data), dropping a packet is no big deal; you just ignore the fact that some data was lost, and continue on with the next packet as usual. For other applications (e.g. file transmit/receive), the loss of a packet means the loss of data that the receiver needs, so you'll want to have some way to recover from that loss, e.g. by detecting it and triggering a resend, or else the entire transfer will fail (or at least the receiver will end up with only a partial file).
Is there a way I can simulate UDP packet loss in this network to check if the resend mechanisms are working?
Sure, just put some logic into the receivers so that they occasionally pretend to not have received a packet:
int numBytesReceived = recv(...);
if ((rand() % 100) == 0)   // Simulate a 1% packet loss rate
{
   printf("Pretending to have dropped a packet!\n");
}
else
{
   // handle the incoming packet as usual
}
I know, I know. This question has been asked many times before. But I've spent an hour googling now without finding what I am looking for so I will ask it again and mention my context along with what makes the decision hard for me:
I am writing the server for a game where the response time is very important and a packet loss every now and then isn't a problem.
Judging by this, and by the fact that I, as the server, mostly have to send the same data to many different clients, the obvious answer would be UDP.
I had already started writing the code when I came across this:
In some applications TCP is faster (better throughput) than UDP.
This is the case when doing lots of small writes relative to the MTU size. For example, I read an experiment in which a stream of 300 byte packets was being sent over Ethernet (1500 byte MTU) and TCP was 50% faster than UDP.
In my case the information units I'm sending are <100 bytes, which means each one fits into a single UDP packet (which is quite pleasant for me, because I don't have to deal with fragmentation). UDP also seems much easier to implement for my purpose, because I don't have to deal with a huge number of individual connections. But my top priority is to minimize the time between
client sends something to server
and
client receives response from server
So I am willing to pick TCP if that's the faster way.
Unfortunately I couldn't find more information about the above quoted case, which is why I am asking: Which protocol will be faster in my case?
UDP is still going to be better for your use case.
The main problem with TCP and games is what happens when a packet is dropped. In UDP, that's the end of the story; the packet is dropped and life continues exactly as before with the next packet. With TCP, data transfer across the TCP stream will stop until the dropped packet is successfully retransmitted, which means that not only will the receiver not receive the dropped packet on time, but subsequent packets will be delayed also -- most likely they will all be received in a burst immediately after the resend of the dropped packet is completed.
Another feature of TCP that might work against you is its automatic bandwidth control -- i.e. TCP will interpret dropped packets as an indication of network congestion, and will dial back its transmission rate in response; potentially to the point of dialing it down to near zero, in cases where lots of packets are being lost. That might be useful if the cause really was network congestion, but dropped packets can also happen due to transient network errors (e.g. user pulled out his Ethernet cable for a couple of seconds), and you might not want to handle those problems that way; but with TCP you have no choice.
One downside of UDP is that it often takes special handling to get incoming UDP packets through the user's firewall, as firewalls are often configured to block incoming UDP packets by default. For an action game it's probably worth dealing with that issue, though.
Note that it's not a strict either/or option; you can always write your game to work over both TCP and UDP, and either use them simultaneously, or let the program and/or the user decide which one to use. That way if one method isn't working well, you can simply use the other one, and it only takes twice as much effort to implement. :)
In some applications TCP is faster (better throughput) than UDP. This is the case when doing lots of small writes relative to the MTU size. For example, I read an experiment in which a stream of 300 byte packets was being sent over Ethernet (1500 byte MTU) and TCP was 50% faster than UDP.
If this turns out to be an issue for you, you can obtain the same efficiency gain in your UDP protocol by placing multiple messages together into a single larger UDP packet, i.e. instead of sending three 100-byte packets, you'd place those three 100-byte messages together in one 300-byte packet. (You'd need to make sure the receiving program is able to correctly interpret this larger packet, of course.) That's really all that the TCP layer is doing here anyway: placing as much data into each outgoing packet as it has available and can fit, before sending it out.
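To make that concrete, here is a rough sketch of that batching idea (the function name, the three-message batch and the 2-byte length prefix are all just illustrative choices, not part of any existing protocol):

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Pack three small messages into one UDP datagram.  Each message is prefixed
 * with a 2-byte length so the receiver can split the datagram apart again.
 * (For real use you'd want a fixed byte order for the prefix, e.g. htons(),
 * and a check that everything fits; error handling is omitted here.) */
static void send_coalesced(int sock, const char *msgs[3],
                           const uint16_t msgLens[3])
{
    char datagram[1400];        /* stay comfortably below a 1500-byte MTU */
    size_t offset = 0;

    for (int i = 0; i < 3; i++)
    {
        uint16_t len = msgLens[i];
        memcpy(datagram + offset, &len, sizeof(len));           /* length prefix */
        memcpy(datagram + offset + sizeof(len), msgs[i], len);  /* payload      */
        offset += sizeof(len) + len;
    }

    send(sock, datagram, offset, 0);    /* one packet instead of three */
}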
I am writing a GUI application which continuously receives UDP packets (about 4 Gb of data) from an FPGA board; the application is a data retrieval system.
I created my own class inherited from CAsyncSocket, and on the receive notification I read packets through the ReceiveFrom API and write the data to a file.
As packets are sent continuously from the FPGA (about 400k packets of 1 KB each), my application is missing packets; I am receiving only 200k of them. But when I monitor with Wireshark, all packets are seen arriving.
Can anyone suggest a technique or algorithm to solve this problem, so that I can receive a large number of UDP packets without loss?
The first thing to understand and accept is that you cannot guarantee that no UDP packets will be dropped. It is part of the nature of the UDP transport layer that any step in the transmission is allowed to drop a UDP packet for any reason, and that this is something that will happen from time to time. In your case, it sounds like the Windows networking stack is dropping the incoming UDP packets after receiving them from the network card, probably because the incoming-UDP-packets buffer associated with your socket is too full and does not have room to store them. This could happen for example if your write-to-disk calls occasionally take a number of milliseconds to return, during which time your app is unable to read more data from the UDP socket.
That said, there are a few things you can do to make the dropping of packets somewhat less likely.
The first (and easiest) thing to do is to increase the size of your socket's incoming-packets-buffer, using setsockopt(SO_RCVBUF). This helps because the larger the buffer is, the more time your program will have to read packets out of the buffer before the networking stack fills the buffer up entirely and starts dropping packets because it has no place to put them.
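For example, a minimal sketch of that call in plain Berkeley-sockets C (I believe CAsyncSocket also exposes a SetSockOpt wrapper for the same thing; the 4 MB figure is just an illustrative starting point):

#include <stdio.h>
#include <sys/socket.h>

/* Ask the OS for a larger incoming-packet buffer on (sock).  The OS may clamp
 * the value (on Linux, see the net.core.rmem_max sysctl); 4 MB is just an
 * illustrative starting point to tune against your data rate. */
static void grow_receive_buffer(int sock)
{
    int bufferSizeBytes = 4 * 1024 * 1024;
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&bufferSizeBytes, sizeof(bufferSizeBytes)) != 0)
    {
        perror("setsockopt(SO_RCVBUF)");
    }
}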
If that isn't sufficient for your purposes, the other thing you can do is spawn a separate thread that does nothing but receive incoming UDP packets and add them to a queue (for another thread to process later). Because this thread does nothing else besides receive UDP packets, it will be able to respond quickly when new packets have arrived, and thus the incoming-sockets-buffer will be less likely to ever fill up and overflow. You'll probably want to run this thread at a high priority if possible, so that there is less chance of it being held off of the CPU in the case where other threads or programs are competing for CPU time.
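Here is a rough sketch of that two-thread arrangement, written with POSIX threads and plain sockets for brevity rather than MFC; the fixed-size queue, its dimensions and all the names are illustrative, and the consumer thread that drains the queue and writes to disk is not shown:

#include <pthread.h>
#include <string.h>
#include <sys/socket.h>

#define QUEUE_SLOTS 4096
#define MAX_PACKET  1500

/* A deliberately simple fixed-size queue protected by a mutex.  A real
 * program would also handle the queue-full case gracefully and signal the
 * consumer thread (not shown) that drains the queue and writes to disk. */
static char            queueData[QUEUE_SLOTS][MAX_PACKET];
static int             queueLens[QUEUE_SLOTS];
static int             queueHead = 0, queueCount = 0;
static pthread_mutex_t queueLock = PTHREAD_MUTEX_INITIALIZER;

/* The receiver thread does nothing but pull packets off the socket and
 * enqueue them, so the socket's incoming buffer is drained as fast as
 * possible. */
static void *receiver_thread(void *arg)
{
    int  sock = *(int *)arg;
    char packet[MAX_PACKET];

    for (;;)
    {
        int n = (int)recv(sock, packet, sizeof(packet), 0);
        if (n <= 0) continue;                 /* ignore errors in this sketch */

        pthread_mutex_lock(&queueLock);
        if (queueCount < QUEUE_SLOTS)
        {
            int tail = (queueHead + queueCount) % QUEUE_SLOTS;
            memcpy(queueData[tail], packet, (size_t)n);
            queueLens[tail] = n;
            queueCount++;
        }
        /* else: queue is full -- now the slow consumer is the one losing data */
        pthread_mutex_unlock(&queueLock);
    }
    return NULL;
}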
If you've implemented both of the above and the rate of packet loss still isn't acceptable, then you may have to step back and re-evaluate your approach. This might include switching from UDP protocol to TCP, or rewriting your code as an in-kernel driver, or switching to a real-time OS that can make better guarantees about response times.