Write big messages to socket buffer vs lots of small messages - sockets

My question is quite simple. Assume we have a TCP socket server that is going to send 10 length-prefixed messages each second to its connected clients, each message being 500 bytes. Is it better to merge those 10 messages and call socket.write() once per second with a single 5000-byte message, or is it better to call socket.write() 10 times per second? Which one causes lower latency?
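For reference, a minimal sketch of the two call patterns being compared, written with plain POSIX sockets in C++ rather than the socket.write() API from the question; the 500-byte payloads and a 4-byte length prefix come from the question, while fd and the message container are hypothetical:

    // Pattern 1: one send() per length-prefixed message (10 calls per second).
    // Pattern 2: coalesce all 10 messages into one ~5000-byte buffer and send once.
    // Return values are ignored for brevity; a real sender must handle partial writes.
    #include <arpa/inet.h>    // htonl
    #include <sys/socket.h>   // send
    #include <cstdint>
    #include <vector>

    void send_one_by_one(int fd, const std::vector<std::vector<char>>& msgs) {
        for (const auto& m : msgs) {                       // 10 calls per second
            uint32_t len = htonl(static_cast<uint32_t>(m.size()));
            send(fd, &len, sizeof(len), 0);                // 4-byte length prefix
            send(fd, m.data(), m.size(), 0);               // 500-byte payload
        }
    }

    void send_batched(int fd, const std::vector<std::vector<char>>& msgs) {
        std::vector<char> batch;                           // one ~5000-byte buffer
        for (const auto& m : msgs) {
            uint32_t len = htonl(static_cast<uint32_t>(m.size()));
            const char* p = reinterpret_cast<const char*>(&len);
            batch.insert(batch.end(), p, p + sizeof(len));
            batch.insert(batch.end(), m.begin(), m.end());
        }
        send(fd, batch.data(), batch.size(), 0);           // single write per second
    }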

Related

What happens when a process tries to read more bytes than were sent to it

Two processes communicate via sockets, and Process A sends Process B 100 bytes.
Process B tries to read 150 bytes. Later Process A sends 50 bytes.
What is the result of Process B's read?
Will Process B's read wait until it receives 150 bytes?
That depends on many factors, especially the type of socket, but also the timing.
Generally, however, the receive buffer size is considered a maximum. So, if a process executes a recv with a buffer size of 150, but the operating system has only received 100 bytes so far from the peer socket, usually the available 100 are delivered to the receiving process (and the return value of the system call will reflect that). It is the responsibility of the receiving application to go back and execute recv again if it is expecting more data.
Another related factor (which will not generally be the case with a short transfer like 150 bytes but definitely will if you're sending a megabyte, say) is that the sender's apparently "atomic" send of 1000000 bytes will not all be delivered in one packet to the receiving peer, so if the receiver has a corresponding recv with a 1000000 byte buffer, it's very unlikely that all the data will be received in one call. Again, it's the receiver's responsibility to continue calling recv until it has received all the data sent.
And it's generally the responsibility of the sender and receiver to somehow coordinate what the expected size is. One common way to do so is by including a fixed-length header at the beginning of each logical transmission telling the receiver how many bytes are to be expected.
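A minimal sketch of that receive loop, assuming POSIX stream sockets and a hypothetical 4-byte big-endian length prefix as the fixed-length header:

    #include <arpa/inet.h>    // ntohl
    #include <sys/socket.h>   // recv
    #include <sys/types.h>    // ssize_t
    #include <cstdint>
    #include <vector>

    // Read exactly `len` bytes, looping over short reads; returns false on
    // error or if the peer closes the connection early.
    bool recv_all(int fd, char* buf, size_t len) {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, buf + got, len - got, 0);
            if (n <= 0) return false;      // 0 = peer closed, -1 = error
            got += static_cast<size_t>(n);
        }
        return true;
    }

    // Read one length-prefixed message: header first, then the payload.
    bool recv_message(int fd, std::vector<char>& out) {
        uint32_t len_be;
        if (!recv_all(fd, reinterpret_cast<char*>(&len_be), sizeof(len_be)))
            return false;
        out.resize(ntohl(len_be));
        return recv_all(fd, out.data(), out.size());
    }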
Depends on what kind of socket it is. For a STREAM socket, the read will return either the amount of data currently available or the amount requested (whichever is less) and will only ever block (wait) if there is no data available.
So in this example, assuming the 100 bytes have (all) been transmitted and received into the receive buffer when B reads from the socket and the additional 50 bytes have not yet been transmitted, the read will return those 100 bytes and will not wait.
Note also, the dependency of all the data being transmitted and received -- when process A writes data to a socket it will not necessarily be sent immediately or all at once. Depending on the underlying transport, there's an MTU size and any write larger than that will be broken up. Smaller writes may also be delayed and combined with later writes to make up the MTU. So in your case the send of 100 bytes might be too large (and broken up), or might be too small and not be transmitted immediately.
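The delay-and-combine behaviour mentioned above is, on most stacks, the Nagle algorithm; if small writes need to go out immediately, it can be disabled with TCP_NODELAY. A minimal sketch, assuming a connected TCP socket descriptor fd:

    #include <netinet/in.h>    // IPPROTO_TCP
    #include <netinet/tcp.h>   // TCP_NODELAY
    #include <sys/socket.h>    // setsockopt

    void disable_nagle(int fd) {
        int on = 1;
        // Send small segments immediately instead of waiting to coalesce them.
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }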

Advantage of multiple socket connections

I keep hearing people say that to get better throughput you should create multiple socket connections. But my understanding is that however many TCP sockets you open between two endpoints, the IP layer underneath is still the same one, so I'm not sure where this additional throughput comes from.
The additional throughput comes from increasing the amount of data sent in the first couple of round-trip times (RTTs). TCP can send only IW (initial window) packets in the first RTT; the amount is then doubled each RTT (slow start). If you open 4 connections, you can send 4 * IW packets in the first RTT, so the throughput is quadrupled.
Let's say a client requests a file that requires IW+1 packets. Opening two connections can complete the transfer in one RTT rather than two RTTs.
HOWEVER, this comes at a price. The initial packets are sent as a burst, which can cause severe congestion and packet loss.
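A back-of-the-envelope sketch of that argument, assuming IW = 10 segments (as in RFC 6928) and a made-up response size; the combined initial window scales with the number of connections, and slow start doubles it every RTT:

    #include <cstdio>

    // Count RTTs needed to deliver `segments` packets when the transfer is
    // spread over `connections` parallel TCP connections.
    int rtts_needed(int segments, int connections, int iw = 10) {
        int rtts = 0;
        int window = iw * connections;    // combined initial window
        while (segments > 0) {
            segments -= window;           // send a full window this RTT
            window *= 2;                  // slow start: double per RTT
            ++rtts;
        }
        return rtts;
    }

    int main() {
        // e.g. a 100-segment response: 4 RTTs on one connection, 2 RTTs on four.
        std::printf("1 conn: %d RTTs, 4 conns: %d RTTs\n",
                    rtts_needed(100, 1), rtts_needed(100, 4));
    }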

How safe is UDP?

I'm considering whether to use TCP or UDP for some really simple communication I'm working on. Here are the basic details:
All messages fit in a single 1500-byte packet (so ordering is irrelevant)
The recipient of these messages will be bombarded with packets from a number of different sources. TCP would handle congestion, but would UDP packets arriving at the same port simultaneously from tens or hundreds of sources corrupt each other?
Missed/corrupted messages are not a big deal. So long as they remain a small minority, and they are correctly identified as invalid, they can just be ignored
Packets arrive in waves, a few per second for a few seconds and then tens of thousands in a fraction of a second. The network should be able to handle the bandwidth in these spikes
Do you see any problem with using UDP for this, keeping in mind that ordering doesn't matter, lost/corrupted packets can be safely ignored, and these packet spikes will have tens of thousands of packets arriving possibly simultaneously?
All messages fit in a single 1500-byte packet (so ordering is irrelevant)
1500 is the MTU usually used in local networks. It can be lower on the internet, and protocols like DNS assume that at least 512 bytes will work. But even if the MTU is lower, the packet only gets fragmented and reassembled at the receiving end, so no half-messages arrive at the application.
.. but would UDP packets arriving at the same port simultaneously from tens or hundreds of sources corrupt each other?
They will not corrupt each other. If they arrive too fast for your application to read them from the socket buffer in time, so that the socket buffer fills up, then packets will simply be lost.
Missed/corrupted messages are not a big deal. So long as they remain a small minority, and they are correctly identified as invalid, they can just be ignored
There is an optional checksum for UDP which is used in most cases. If the checksum does not match, the packet gets discarded, i.e. not delivered to the application. The checksum catches simple bit flips but will not detect every corruption. But this is the same with all checksums, including TCP's.
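If stronger corruption detection is needed than UDP's 16-bit checksum, one option (an illustration, not something the answer above prescribes) is to add an application-level checksum to each payload; a minimal sketch using zlib's CRC-32:

    #include <zlib.h>      // crc32
    #include <cstdint>
    #include <vector>

    // Sender side: append a 4-byte big-endian CRC-32 to the payload.
    std::vector<unsigned char> add_crc(const std::vector<unsigned char>& payload) {
        uint32_t crc = crc32(0L, payload.data(), static_cast<uInt>(payload.size()));
        std::vector<unsigned char> out = payload;
        for (int i = 0; i < 4; ++i)
            out.push_back(static_cast<unsigned char>(crc >> (24 - 8 * i)));
        return out;
    }

    // Receiver side: recompute the CRC and compare; drop the datagram on mismatch.
    bool crc_ok(const std::vector<unsigned char>& datagram) {
        if (datagram.size() < 4) return false;
        size_t n = datagram.size() - 4;
        uint32_t expect = 0;
        for (int i = 0; i < 4; ++i)
            expect = (expect << 8) | datagram[n + i];
        return crc32(0L, datagram.data(), static_cast<uInt>(n)) == expect;
    }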
Packets arrive in waves, a few per second for a few seconds and then tens of thousands in a fraction of a second. The network should be able to handle the bandwidth in these spikes
If the network has the bandwidth for it, then the network can handle it. But the question is whether your local machine, and especially your application, can cope with such waves, i.e. process packets fast enough that neither the network card's buffer nor the socket buffer overflows. You should probably increase the receive buffer size to better deal with such waves.
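A minimal sketch of that last suggestion, assuming a POSIX UDP socket descriptor fd; the 4 MB figure is arbitrary, and the kernel may clamp the request (on Linux, to net.core.rmem_max):

    #include <sys/socket.h>   // setsockopt, SO_RCVBUF

    void enlarge_rcvbuf(int fd) {
        int bytes = 4 * 1024 * 1024;   // requested receive buffer size
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    }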
All messages fit in a single 1500-byte packet (so ordering is irrelevant)
Non sequitur. The generally accepted payload limit for UDP datagrams is 534 bytes, and the fact that all messages fit into one datagram doesn't imply that order is irrelevant, unless the order of messages is irrelevant, which you haven't stated.
would UDP packets arriving at the same port simultaneously from tens or hundreds of sources corrupt each other?
No.
Missed/corrupted messages are not a big deal. So long as they remain a small minority, and they are correctly identified as invalid, they can just be ignored.
If you don't disable UDP checksum checking, they will be dropped, not ignored.
Packets arrive in waves, a few per second for a few seconds and then tens of thousands in a fraction of a second. The network should be able to handle the bandwidth in these spikes.
It won't. UDP packets can be dropped any time, especially under conditions like these. But as you've already stated that missed messages are not a big deal, it isn't a big deal.
Do you see any problem with using UDP for this, keeping in mind that ordering doesn't matter, lost/corrupted packets can be safely ignored, and these packet spikes will have tens of thousands of packets arriving possibly simultaneously?
Not under the conditions you have stated, assuming they are correct.

How to split datagrams that arrive merged in the buffer when using boost::asio UDP sockets

I've made my UDP server and client with boost::asio udp sockets. Everything looked good until I started sending more datagrams. They arrive correctly from client to server, but they are merged in my buffer into one message.
I use
udp::socket::async_receive with std::array<char, 1 << 18> buffer
to make the async request, and I receive the data through the callback
void on_receive(const error_code& code, size_t bytes_transferred)
If I send data too often (every 10 milliseconds), I receive several datagrams at once into my buffer via the callback above. The question is: how do I separate them? Note: my UDP datagrams have variable length. I don't want to add an extra header with the size, because that would make my code useless for third-party datagrams.
I believe this is a limitation in the way boost::asio handles stateless data streams. I noticed exactly the same behaviour when using boost::asio for a serial interface. When I was sending packets with relatively large gaps between them, I was receiving each one in a separate callback. As the packet size grew and the gap between packets therefore decreased, it reached a stage where it would execute the callback only when the buffer was full, not after receipt of a single packet.
If you know exactly the size of the expected datagrams, then your solution of limiting the input buffer size is a perfectly sensible one, as you know a priori exactly how large the buffer needs to be.
If your congestion is coming from having multiple different packet types being transmitted, so you can't pre-allocate a buffer of the correct size, then you could potentially create different sockets on different ports for each type of transaction. It's a little more "hacky", but given the virtually unlimited availability of ephemeral ports, as long as you're not using 20,000 different packet types, that would probably help you out as well.
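A rough sketch of that one-socket-per-packet-type idea with boost::asio (assuming io_context from Boost 1.66 or newer; the port numbers and "type" labels are made up). Each socket gets its own buffer and its own handler, so datagrams of different kinds never share a buffer:

    #include <boost/asio.hpp>
    #include <array>
    #include <iostream>

    using boost::asio::ip::udp;

    struct typed_receiver {
        udp::socket socket;
        udp::endpoint sender;
        std::array<char, 2048> buffer;   // large enough for one datagram

        typed_receiver(boost::asio::io_context& io, unsigned short port)
            : socket(io, udp::endpoint(udp::v4(), port)) { start(); }

        void start() {
            socket.async_receive_from(
                boost::asio::buffer(buffer), sender,
                [this](const boost::system::error_code& ec, std::size_t n) {
                    if (ec) return;   // e.g. socket closed
                    std::cout << "got " << n << " bytes on port "
                              << socket.local_endpoint().port() << "\n";
                    start();          // re-arm for the next datagram
                });
        }
    };

    int main() {
        boost::asio::io_context io;
        typed_receiver type_a(io, 40001);   // hypothetical "type A" traffic
        typed_receiver type_b(io, 40002);   // hypothetical "type B" traffic
        io.run();
    }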

How does packet interaction with TCP Selective Acknowledgement work?

Can anybody explain how the packet interaction with TCP Selective Acknowledgment works?
I found the definition on Wikipedia, but I cannot get a clear picture of what Selective Acknowledgement really does compared to Positive Acknowledgement and Negative Acknowledgement.
TCP breaks the information it sends into segments... essentially, segments are chunks of data no larger than the current value of the TCP MSS (maximum segment size) received from the other end. Those chunks have incrementing sequence numbers (based on the total count of data bytes sent in the TCP session) that allow TCP to know when something got lost in flight; the first TCP sequence number is chosen at random, and for security purposes it should be hard to predict. Most of the time, the receive window is much larger than a single MSS, so the sender can send multiple segments to you before you can ACK.
It is helpful to think of these things in the time sequence they got standardized...
First came Positive Acknowledgement, which is the mechanism of telling the sender you got the data: the sequence number you ACK with covers the highest contiguous byte sequence received so far from the TCP chunks (a.k.a. segments) he sent.
I will demonstrate below, but in my examples you will see small TCP segment numbers like 1,2,3,4,5... in reality these byte-sequence numbers will be large, incrementing, and have gaps between them (but that's normal... TCP typically sends data in chunks at least 500 bytes long).
So, let's suppose the sender xmits segment numbers 1,2,3,4,5 before you send your first ACK. If all goes well, you send an ACK for 1,2,3,4,5 and life is good. If 2 gets lost, everything is on hold till the sender realizes that 2 has never been ACK'd; he knows because you keep sending duplicate ACKs for 1. Upon the proper timeout, the sender xmits 2,3,4,5 again.
Then Selective Acknowledgement was proposed as a way to make this more efficient. In the same example, you ACK 1, and SACK segments 3 through 5 along with it (if you use a sniffer, you'll see something like "ACK:1, SACK:3-5" in the ACK packets from you). This way, the sender knows it only has to retransmit TCP segment 2... so life is better. Also, note that a SACK defines the edges of the contiguous data you have received; however, multiple non-contiguous runs of data can be SACK'd at the same time.
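For concreteness, a hypothetical trace of that example, assuming 1000-byte segments and sequence numbers starting at 1 (real numbers would be large and effectively random):

    sender:   seg 1 (bytes 1-1000), seg 2 (1001-2000, lost in flight),
              seg 3 (2001-3000), seg 4 (3001-4000), seg 5 (4001-5000)
    receiver: ACK 1001                      <- everything up to byte 1000 received
    receiver: ACK 1001, SACK 2001-3001      <- segment 3 arrived out of order
    receiver: ACK 1001, SACK 2001-5001      <- segments 4 and 5 extend the SACK block
    sender:   retransmits only segment 2 (bytes 1001-2000)
    receiver: ACK 5001                      <- hole filled, cumulative ACK jumps forward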
Negative Acknowledgement is the mechanism of telling the sender only about missing data. If you don't tell them something is missing, they keep sending the data 'till you cry uncle.