How do UDP SetWriteBuffer and SetReadBuffer affect the OS's buffers? - sockets

Description
I'm busy writing a high-frequency UDP server in Go. I'd estimate at least 1000 packets/second in both directions.
However, as the size of the data I was sending over the UDP socket grew, I eventually ran into the following error: read udp 127.0.0.1:1541->127.0.0.1:9737: wsarecv: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
I eventually just grew the size of the buffer I was reading into (and writing from) as follows:
buffer := make([]byte, 64 * 1024 * 1024) // used to just be 1024
l, err := s.socketSim.Read(buffer)
This worked fine, and I stopped getting the error... However, I then came across two functions inside the net package:
s.socketSim.SetWriteBuffer(64 * 1024 * 1024)
s.socketSim.SetReadBuffer(64 * 1024 * 1024)
I learned that these two act on the operating system's transmit and receive buffers, not on my application's buffer.
Question
Do I even need to set the operating system's buffer sizes, and why? How does the size of the application buffer relate to the size of the operating system buffers? Should they always be the same, and how big should/can they become?

First, not only is there an MTU for each interface on your device and on whatever destination you're sending to or receiving from, but there is also an MTU for each device in between. For this reason, as others have mentioned, you may want to stick to the generally accepted MTU, since you might not control every device on the data route. In the case of UDP, the MTU really just determines how big a datagram can get before it is fragmented.
Second, you almost certainly want your SND/RCV buffers to be larger than the MTU. These are kernel buffers, which hold on to data while you're not ready to receive it. A larger UDP RCV buffer means the kernel will queue more packets for you before dropping them into the abyss. Maybe you have some non-trivial work to do for each packet; depending on the bitrate, you might want a larger or smaller kernel buffer.
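For instance, here is a minimal sketch (assuming Linux and Go's standard net package; the loopback address and 4 MiB size are illustrative) that requests a receive buffer with SetReadBuffer and then reads back what the kernel actually granted:

package main

import (
	"fmt"
	"log"
	"net"
	"syscall"
)

func main() {
	// Bind an arbitrary local UDP port.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask the kernel for a 4 MiB receive buffer (SO_RCVBUF under the hood).
	if err := conn.SetReadBuffer(4 * 1024 * 1024); err != nil {
		log.Fatal(err)
	}

	// Read back what was actually granted; on Linux the request is doubled
	// and silently capped at net.core.rmem_max.
	raw, err := conn.SyscallConn()
	if err != nil {
		log.Fatal(err)
	}
	raw.Control(func(fd uintptr) {
		granted, err := syscall.GetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("effective SO_RCVBUF:", granted)
	})
}

Note that the kernel buffer is separate from the []byte you pass to Read: the former decides how many datagrams can queue up while you're busy, the latter only needs to be as large as the biggest single datagram you expect.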
Finally, you're using UDP. There is no guarantee that you'll receive packets in order or at all. Any router in between you and a peer could decide to drop the packet for any reason. Since you're using UDP, you should prepare for dropped and out-of-order packets. You also might need some sort of retransmission mechanism, which further complicates things.
Or you might consider using TCP if dropped packets are unacceptable, knowing that timing is indeterminate.
If you're on Linux, you can see the current default and maximum buffer sizes under /proc/sys/net/core (rmem_default and rmem_max for receiving, wmem_default and wmem_max for sending). Also note that when you ask for a buffer size, the kernel will usually double it to leave room for its own bookkeeping overhead.
Also, you can tune your buffer size by watching for packet drops in /proc/net/udp. If you see drops, you might want to make your RCV buffer bigger, especially if the data is bursty and the processing intensive. If your data is coming in at a consistent rate and you're still dropping packets, then you aren't processing them fast enough, and a bigger buffer will only delay the problem.
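As a rough sketch of watching that counter (assuming Linux, where the last field of each /proc/net/udp row is the per-socket drop count):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/udp")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	total := 0
	sc := bufio.NewScanner(f)
	sc.Scan() // skip the header row
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// The "drops" counter is the last field on each socket row.
		drops, err := strconv.Atoi(fields[len(fields)-1])
		if err != nil {
			continue
		}
		total += drops
	}
	fmt.Println("total UDP receive drops:", total)
}

Run it periodically (or diff two runs) to see whether the counter keeps climbing after you grow the buffer.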

Related

How to decrease the UDP backlog to one packet at a time?

Most articles are about how to increase a UDP socket's receive buffer size to handle more packets, but I need a solution to decrease the UDP receive buffer to accept only 1 packet at a time and discard/drop all other packets until that packet is read.
I'm trying to do this on Linux, and did some network stack tuning, like setting the SO_RCVBUF and SO_RCVBUFFORCE socket options, but that didn't work. I cannot reduce RCVBUF lower than 2046 bytes (maybe one memory page), even when setting udp_rmem_min to 0.
Why can't I set the UDP RCVBUF lower than 2046?
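For reference, a minimal Go sketch of the two socket options the question mentions (assuming Linux; SO_RCVBUFFORCE requires CAP_NET_ADMIN, and the kernel enforces a floor on the value either way, which is why very small requests get rounded up):

package main

import (
	"fmt"
	"log"
	"net"
	"syscall"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	raw, err := conn.SyscallConn()
	if err != nil {
		log.Fatal(err)
	}
	raw.Control(func(fd uintptr) {
		// SO_RCVBUF is clamped to net.core.rmem_max; SO_RCVBUFFORCE
		// bypasses that cap (root/CAP_NET_ADMIN only), but neither can
		// go below the kernel's built-in minimum.
		syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF, 1)
		syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUFFORCE, 1)
		got, _ := syscall.GetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF)
		fmt.Println("effective SO_RCVBUF:", got) // still a couple of KiB, not 1
	})
}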

Do NIC drivers change descriptor ring algorithms on different socket buffer sizes?

If I set a socket buffer size to a particular max,
will this affect the way the network card uses its descriptor ring?
int option = 4800;
setsockopt(socket_id, SOL_SOCKET, SO_RCVBUF, (char *)&option, sizeof(option));
Are the sectors of the ring different sizes based on my decision to set the socket's buffer size?
The reason I ask is because I'm receiving multiple messages over the same socket. A size of 4,800 bytes should be able to hold about 25 messages before dropping packets. With a size of 4,800, the smallest message is processed as frequently as it's being sent (about 17 times a second). But when I change the socket buffer size to 4,799, all of my small packets get dropped.
I believe this is due to the network card processing differently based on the buffer size.
Is this a fair assumption?

What is the maximum possible size of receive buffer of network layer?

I want to know the maximum size of the receive buffer at the network layer or TCP/IP layer. Can anyone help with this?
What is the socket type?
If the socket is TCP, then I would suggest setting the buffer size to 8K.
For UDP you can also set the buffer size to 8K. It is not actually as important for UDP, because in UDP a whole datagram is delivered in a single receive; you do not need to hold much data in the socket for a long period of time.
But in TCP, data comes as a stream. You cannot afford data loss here, because it will result in all sorts of parsing-related issues.
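To illustrate the datagram point, here is a minimal Go sketch (the 8K buffer is illustrative): each read returns at most one whole datagram, and a buffer smaller than the datagram loses data rather than deferring it.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 9737})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 8*1024) // application buffer; one datagram per read
	for {
		// If buf is smaller than the incoming datagram, the excess is
		// discarded on Linux, while Windows reports the wsarecv
		// "message ... larger than the internal message buffer" error
		// quoted at the top of this page.
		n, addr, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d bytes from %v\n", n, addr)
	}
}

Port 9737 is just borrowed from the original question's error message.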

How does UDP socket receive buffer size impact latency?

I am setting the multicast UDP socket receive buffer size to a big value to avoid packet drops. I tried using a small buffer size and did not see any latency difference. I am wondering how it impacts latency. When the app is fast enough to handle incoming packets, does a bigger socket buffer size really impact latency, and why?
UDP latency is going to depend more on the network you're passing the traffic through than on the local configuration. A small buffer size will mean you drop packets more often for high-throughput streams, but that isn't technically a latency issue. Latency contributed by your local machine comes down to how fast you can pull packets out of the buffer, which will be negligible.
It doesn't impact latency at all. It just uses extra memory, which is why it's tunable.

Receive buffer size for TCP/IP sockets

What's the maximum data size one should expect in a receive operation? The data that has to be sent is very large, but at some point there will be packet fragmentation, I guess?
You can always limit the size of the buffer recv() will fill via its length parameter.
Your application design should not be sensitive to the number of bytes recv() is willing to provide in one call.
It has little to do with the MTU. In some TCP stack designs, one call to recv() will not return more than one packet of the underlying protocol. In others, it may return as much as the socket's receive buffer holds.
There is something like a maximum network packet size: the MTU. It indicates the maximum size of the low-level buffer (ISO/OSI layer 3, IP) during data transfer over a network (not loopback), and is typically 1500 bytes on Ethernet networks (1492 with PPPoE). So it's worth optimizing data transfers around this amount. (There are also so-called jumbo frames, which break this rule, but the software and hardware along the path must accept them.)
However, a simple recv() on a socket can return more bytes than the MTU. So you should send a fixed-size header first that carries the size of the rest of the data.
size = recv(4)                  // the fixed-size length header should come in one shot
received = 0
while received < size:          // the rest of the data can come sliced, so keep receiving
    received += recv(data + received, size - received)
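A runnable Go version of the same framing idea (the 4-byte big-endian length prefix is an assumed application-level convention, not something TCP provides; net.Pipe stands in for a real TCP connection):

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"net"
)

// readMessage reads one length-prefixed message from a stream.
func readMessage(conn net.Conn) ([]byte, error) {
	var hdr [4]byte
	// The header itself may arrive sliced, so loop until all 4 bytes are in.
	if _, err := io.ReadFull(conn, hdr[:]); err != nil {
		return nil, err
	}
	data := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	// Likewise, keep reading until the whole payload has been collected.
	if _, err := io.ReadFull(conn, data); err != nil {
		return nil, err
	}
	return data, nil
}

func main() {
	client, server := net.Pipe() // in-memory stand-in for a TCP connection
	go func() {
		msg := []byte("hello, framed world")
		var hdr [4]byte
		binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
		client.Write(hdr[:])
		client.Write(msg)
		client.Close()
	}()
	msg, err := readMessage(server)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", msg)
}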