Ideal UDP packet size on a reliable (short) network for efficient data transfer - sockets

I have a question about the UDP protocol.
I want to stream data from a particle sensor (Xilinx FPGA dev board) to a Raspberry Pi (or a Windows 10 laptop) as binary data over a UDP socket. I don't care if some packets are lost, because there are many particles coming afterwards anyway... But if a packet gets lost, only whole particle records should be lost; a particle's data must never be split across two packets.
The connection is a short LAN cable over the 1 Gbit/s Ethernet port.
The minimum amount of data is 192 bits (24 bytes) per particle (16 × 12-bit values), and the maximum rate is 3300 particles per second.
So I have to transfer at most 24 × 3300 = 79,200 bytes/s, plus headers etc.
The maximum UDP payload size is 65,507 bytes.
As I understand it, the packet size has to be divisible by 24 bytes in my application.
That leaves me with a packet size range of 24 to 65,496 bytes.
But if the concentration is lower I don't want to wait minutes until a packet is filled and ready to be sent.
What would you suggest regarding repetition rate and packet size?
E.g. a 1008-byte packet (42 particles) would have to be sent about every 13 ms at maximum particle concentration.
best regards
I just tested a UDP socket with a 1024-byte buffer, which works nicely with strings or bit vectors for single test packets.
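For illustration, here is a minimal Python sketch of the batching scheme described in the question: collect whole 24-byte particle records into one buffer and flush it either when the packet is full or when a short deadline expires, so low concentrations never stall for minutes. The receiver address, the 1008-byte packet size, and the 50 ms flush interval are assumed example values, not part of the question.

    import socket
    import time

    DEST = ("192.168.1.50", 5005)  # assumed receiver address (example only)
    RECORD_SIZE = 24               # one particle = 16 x 12-bit values = 24 bytes
    RECORDS_PER_PACKET = 42        # 42 * 24 = 1008 bytes per datagram
    FLUSH_INTERVAL = 0.05          # assumed: flush at least every 50 ms

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    buffer = bytearray()
    deadline = time.monotonic() + FLUSH_INTERVAL

    def on_particle(record: bytes) -> None:
        """Queue one 24-byte record; send when the packet is full or stale."""
        global deadline
        assert len(record) == RECORD_SIZE
        buffer.extend(record)
        now = time.monotonic()
        if len(buffer) >= RECORDS_PER_PACKET * RECORD_SIZE or now >= deadline:
            sock.sendto(bytes(buffer), DEST)  # whole records only, never split
            buffer.clear()
            deadline = now + FLUSH_INTERVAL

This keeps every datagram a multiple of 24 bytes, so a lost packet only ever costs complete particle records.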

Related

OPUS packet size

I have an application that reads Opus packets from a file. The file contains Opus packets in Ogg format. My application sends one Opus packet every 20 milliseconds (this is configurable).
Per 20 ms it sends packets of sizes ranging from 200 bytes to 400 bytes, say an average size of 300 bytes.
Is sending 300 bytes every 20 ms right, or is it too much data? Can somebody help me understand how to calculate how many bytes I can send to the remote party per 20 ms?
300 bytes/packet × 8 bits/byte / 20 ms/packet = 120 kbit/s
That is enough for good quality stereo music. Depending on the quality that you need, or if you are only sending mono or voice, you could potentially reduce the bitrate of the encoder. However if you are reading from an Ogg Opus file then the packets are already encoded, so it is too late to reduce the bitrate of the encoder unless you decode the packets and re-encode them at a lower bitrate.
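As a quick sanity check, here is the same arithmetic in Python, using the figures from the question (the packet sizes and the 20 ms interval are the asker's numbers):

    def bitrate_kbps(packet_bytes: float, interval_ms: float) -> float:
        # bytes -> bits, divided by milliseconds, gives kbit/s directly
        return packet_bytes * 8 / interval_ms

    print(bitrate_kbps(300, 20))  # 120.0 kbit/s, the average case
    print(bitrate_kbps(400, 20))  # 160.0 kbit/s, the worst case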

UDP packet loss rate might increase on conditions?

Might the UDP packet loss percentage increase with packet size? For example, if I send 100,000 packets, in the first try the byte[] size is 30, but in the second it is 300. Could the packet size play a role in its drop probability, or is the packet loss percentage independent of size?
Packet loss does depend on the size of the packet, for several reasons.
IP packets can be up to approximately 64 KB, but they are fragmented down to the MTU of the Ethernet link, and if one of those fragments gets lost, the whole IP packet is dropped. Under heavy traffic, a larger packet therefore has a higher probability of being dropped. The Ethernet MTU is around 1500 bytes.
There is more to it than just that. Internally, a protocol stack is implemented using buffers that are a lot smaller than the MTU; these can vary from 300 bytes upward. The point is that these buffers are also a limited resource: if the network device runs out of buffers, packets will be dropped as well.
If you don't know the MTU of the network in question, then according to the link below a 512-byte UDP payload is considered reasonable, allowing a margin for other header information that you may not have anticipated.
What is the largest Safe UDP Packet Size on the Internet
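If the drops are caused by the buffer exhaustion described above, enlarging the socket receive buffer on the receiver can help. Here is a small sketch of that, assuming a Python receiver; the 4 MB request and the port are example values, and the effective maximum is OS-specific (on Linux it is capped by net.core.rmem_max), so read the value back to see what you actually got:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))  # example port
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print("receive buffer:", actual, "bytes")  # Linux reports a doubled value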
Because you're sending larger packets, yes, it could increase the chance that packets are dropped.
Now compare sending 100,000 packets of 30 bytes with sending 10,000 packets of 300 bytes: even though the amount of user data is the same, the total number of bytes on the wire is larger for the small packets because every packet carries its own headers, as the calculation below shows.
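Here is that header cost worked out, assuming IPv4 plus Ethernet framing (the 18-byte Ethernet figure ignores preamble and inter-frame gap):

    UDP_IP_HEADERS = 28  # 8-byte UDP header + 20-byte IPv4 header
    ETH_FRAMING = 18     # Ethernet header + FCS

    def wire_bytes(packets: int, payload: int) -> int:
        return packets * (payload + UDP_IP_HEADERS + ETH_FRAMING)

    print(wire_bytes(100_000, 30))  # 7,600,000 bytes on the wire
    print(wire_bytes(10_000, 300))  # 3,460,000 bytes for the same user data

So the 30-byte packets put more than twice as many bytes on the wire for the same 3,000,000 bytes of user data.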

What's the practical limit for the data length of UDP packet?

This is wikipedia's explanation of the length field of the UDP header:
Length
A field that specifies the length in bytes of the UDP header and UDP data. The minimum length is 8 bytes because that is the length of the header. The field size sets a theoretical limit of 65,535 bytes (8-byte header + 65,527 bytes of data) for a UDP datagram. The practical limit for the data length, which is imposed by the underlying IPv4 protocol, is 65,507 bytes (65,535 − 8-byte UDP header − 20-byte IP header).
Why does the practical limit for the data length subtract the 20-byte IP header?
Take a good look at the explanation of the IP header in RFC 791:
https://www.ietf.org/rfc/rfc791.txt
I quote:
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets, including internet header and data. This field allows the length of a datagram to be up to 65,535 octets. Such long datagrams are impractical for most hosts and networks. All hosts must be prepared to accept datagrams of up to 576 octets (whether they arrive whole or in fragments). It is recommended that hosts only send datagrams larger than 576 octets if they have assurance that the destination is prepared to accept the larger datagrams.
The number 576 is selected to allow a reasonable sized data block to be transmitted in addition to the required header information. For example, this size allows a data block of 512 octets plus 64 header octets to fit in a datagram. The maximal internet header is 60 octets, and a typical internet header is 20 octets, allowing a margin for headers of higher level protocols.
So the maximum total length is 65,535, but this includes the IP header itself.
Therefore the IP payload can be at most 65,535 − 20 = 65,515 bytes.
But the payload of IP in your case is UDP, and UDP has a header of its own, which is 8 bytes. Hence you get the theoretical limit for the payload of a UDP packet: 65,535 − 8-byte UDP header − 20-byte IP header = 65,507 bytes.
Note the use of theoretical instead of practical. The practical limit for a UDP packet takes into account the probability of fragmentation and thus considers the MTU of the network layer. The quote above also has an interesting sentence containing the value 576: 576 − 20 − 8 = 548, which is not quite 534 but getting close. This might explain that practical limit.
Because UDP packets are encapsulated in IP packets, whose headers are 20 bytes. You can't send a UDP packet without its encapsulating IP packet. Usually the actual limit is far lower, and it depends on the MTU of the routers between the two endpoints transmitting the UDP packet.
Because the IP header has to be (a) sent and (b) counted in the 16-bit length word. See RFC 791 #3.1.
However, the real practical limit is generally accepted to be 534 bytes, to avoid fragmentation at the IP layer, which increases the risk of datagram loss.
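You can see the 65,507-byte limit directly from a socket. A small demonstration, with the caveat that the exact behavior is OS-dependent (on Linux the loopback MTU is large enough to accept the maximum datagram; some systems cap the send buffer lower by default):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"x" * 65507, ("127.0.0.1", 9))  # largest legal IPv4 UDP payload
    try:
        sock.sendto(b"x" * 65508, ("127.0.0.1", 9))  # one byte too many
    except OSError as e:
        print(e)  # typically EMSGSIZE, "Message too long"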

Increased MTU but still can't send large UDP packets

First, a little info on what I'm trying to achieve: I'm using a Texas Instruments EVM6678LE board, and I am trying to increase the UDP transfer rate between the board and my PC.
I've increased the MTU on my PC through netsh > interface > ipv4 to 15,000. But when I ping the board from my PC I can only go up to "ping 192.168.2.100 -l 10194"; from 10195 bytes onwards I get a request timeout. Is this a limitation of my PC or something else?
Does anyone have any idea what the cause could be? Any advice or suggestions would be welcome. The only way I could think of to increase the transfer rate was to increase the packet size, which reduces overhead. At ~10 KB per packet I get a rate of around 9.1 MB/s, and I'm trying to attain 25 MB/s.
Thanks!
Increasing the MTU on your PC may not prevent fragmentation. I don't know exactly what controls this, but your network card or driver can fragment the packet even when the MTU is not reached. Use a sniffer like Wireshark to see how the packets are actually sent.
About the timeout: it is possible that your board rejects fragmented pings (as a ping-of-death protection). It is also possible that its packet buffer is 10 KB (10,240 bytes) long and it simply can't receive larger packets. Also make sure that the receiving endpoint has a matching MTU.
Anyway, if you are trying to increase the transfer rate this way, you are on the wrong track. The overhead is 8 bytes for UDP, 20 bytes for IP, and 18 bytes for Ethernet, 46 bytes in total (oh, coincidence: 46 + 10194 is exactly 10240). With 46 bytes of overhead, a 1024-byte packet is 95.5% efficient, a 4096-byte packet 98.9%, and a 16384-byte packet 99.7%. That means you gain about +3.5% transfer rate going from 1024 to 4096, and another +0.8% from 4096 to 16384. The gain is negligible, and you should just leave the MTU at the common default of 1500.
In any case, going from 9.1 MB/s to 25 MB/s just by changing the MTU is impossible (if it were, why wouldn't the PC default be higher?). My guess is that you are using Fast Ethernet (100BASE-T) and are already transferring at close to the full bandwidth. To get higher rates you need Gigabit Ethernet (1000BASE-T), which means both hardware endpoints must support 1000BASE-T.
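Here is the overhead arithmetic from the answer above, spelled out (46 bytes of UDP + IPv4 + Ethernet headers per packet, efficiency approximated as 1 − overhead/payload):

    OVERHEAD = 8 + 20 + 18  # UDP + IPv4 + Ethernet = 46 bytes per packet

    for size in (1024, 4096, 16384):
        eff = 1 - OVERHEAD / size
        print(f"{size:6d} bytes -> {eff:.1%} efficient")
    # 1024 -> 95.5%, 4096 -> 98.9%, 16384 -> 99.7%: diminishing returns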

iPhone 4S - BLE data transfer speed

I've been tinkering with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the ACK transfer after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to transfer data over BLE as fast as the documentation says it should go. What sort of speed are you getting, if faster than mine?
Thanks a lot
See Apple's guidelines and you will see that a connection update request is required to speed up your connection:
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms.
I hope I could help,
Roman
If you are able to use a higher MTU size (negotiated by iOS), you can increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header then only have to be transmitted once per write instead of once per packet.
If you can transmit 6 packets per connection interval, you can fit in 35 extra bytes per connection interval (the 7-byte header is still there for the first packet). The MTU-sized payload can also be split over several connection intervals, adding 7 more bytes per connection interval (it just takes longer to reassemble the data). The maximum MTU size allowed by ATT is 515 bytes (the maximum attribute value is 512 bytes, plus a 3-byte header for opcode and handle).
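For a rough upper bound, the throughput implied by these parameters can be computed directly. This sketch assumes a 20-byte usable ATT payload per packet (the default 23-byte ATT MTU minus the 3-byte notification header) and the 20-40 ms connection intervals mentioned above; real throughput also depends on how many packets per interval the iOS stack actually grants:

    def ble_throughput_bps(conn_interval_ms: float,
                           packets_per_interval: int,
                           payload_bytes: int = 20) -> float:
        # payload per connection interval, converted to bits per second
        return packets_per_interval * payload_bytes * 8 / (conn_interval_ms / 1000)

    print(ble_throughput_bps(20, 6))  # 48000.0 bit/s ~= 5.9 KB/s upper bound
    print(ble_throughput_bps(40, 6))  # 24000.0 bit/s at the 40 ms interval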