Increased MTU but still can't send large UDP packets - sockets

A little info on what I'm trying to achieve here first: I'm using a Texas Instruments EVM6678LE board, and what I'm trying to do is increase the UDP transfer rate between the board and my PC.
I've increased the MTU on my PC through netsh>interface>ipv4 to 15,000. But when I ping the board from my PC I am only able to ping up to "ping 192.168.2.100 -l 10194"; if I ping with 10195 bytes onwards I get a request timeout. Is this a limitation of my PC or something else?
Does anyone have any idea what the possible cause could be? Any advice or suggestions would be welcome. The only way I could think of to increase the transfer rate is to increase the per-packet size, which reduces overhead. At around 10k per packet I get a rate of around 9.1MB/s, and I'm trying to attain 25MB/s.
Thanks!

Increasing the MTU on your PC may not prevent fragmentation. I don't know exactly what controls this, but your network card or driver can fragment the packet even before the MTU is reached. Use a sniffer like Wireshark to see how the packets are actually sent.
About the timeout: it is possible that your board rejects fragmented pings (because of Ping of Death protection). There is also a possibility that its packet buffer is 10kB (10240 bytes) long and can't receive larger packets. Also, make sure that the receiving endpoint has a matching MTU.
Anyway, if you are trying to increase the transfer rate, you are on the wrong track. The overhead for UDP is 8 bytes, IP 20 bytes, and Ethernet 18 bytes, which makes a total of 46 bytes (oh, coincidence: 46 + 10194 is exactly 10240). With 46 bytes of overhead, a 1024-byte packet is 95.5% payload, a 4096-byte packet is 98.9%, and a 16384-byte packet is 99.7%. That means you gain about +3.5% transfer rate going from 1024 to 4096, and another +0.8% from 4096 to 16384. The gain is negligible, and you should just leave the MTU at the common default of 1500.
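As a quick illustration of those numbers, here is a small sketch of my own that reuses the 46-byte header figure above to compute the payload share of a packet of a given total size:

package main

import "fmt"

func main() {
	const overhead = 46.0 // 8 (UDP) + 20 (IP) + 18 (Ethernet) bytes per packet
	for _, size := range []float64{1024, 1500, 4096, 16384} {
		efficiency := (size - overhead) / size * 100
		fmt.Printf("packet size %5.0f bytes -> %.1f%% payload\n", size, efficiency)
	}
}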
Anyway, going from 9.1MB/s to 25MB/s just by changing the MTU is IMPOSSIBLE (if it were, why wouldn't the PC default be higher?). Here I guess you are using Fast Ethernet (100BASE-T), and at 9.1MB/s (roughly 73Mbit/s) you are already transferring near full bandwidth. To get higher rates you would need Gigabit Ethernet (1000BASE-T), which means both hardware endpoints must support 1000BASE-T.

Related

How do UDP SetWriteBuffer and SetReadBuffer affect the OS's buffers?

Description
I'm busy writing a high frequency UDP server with Go. I'd estimate at least 1000 packets/second both ways.
However, as the size of the data I'm sending over the UDP socket grew, I eventually ran into the following error: read udp 127.0.0.1:1541->127.0.0.1:9737: wsarecv: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
I eventually just grew the size of the buffers I was reading from and writing into as follows:
buffer := make([]byte, 64 * 1024 * 1024) // used to just be 1024
l, err := s.socketSim.Read(buffer)
This worked fine and I stopped getting the error... However, I then came across two functions inside the net package:
s.socketSim.SetWriteBuffer(64 * 1024 * 1024) // sets the OS send buffer (SO_SNDBUF) for the socket
s.socketSim.SetReadBuffer(64 * 1024 * 1024)  // sets the OS receive buffer (SO_RCVBUF) for the socket
I learned that these two act on the operating system's transmit and receive buffers.
Question
Do I even need to set the operating system buffer size, and why? How does the size of the application buffer impact the size of the operating system buffer? Should they always be the same, and how big should/can they become?
First, not only is there an MTU size for each interface on your device and on whatever destination you're sending to or receiving from, but there is also an MTU size for each device in between. For this reason, as others have mentioned, you might want to stick to the generally accepted MTU, since you might not control every device in the data route. In the case of UDP, the MTU really just determines how big a datagram can be before it gets fragmented.
Second, you almost certainly want your SND/RCV buffers to be larger than the MTU. These are kernel buffers which hold on to data when you're not ready to receive it. A larger UDP RCV buffer means that the kernel will buffer more packets for you before dropping them into the abyss. Maybe you have some non-trivial work to do for each packet. Depending on the bitrate, you might want a larger or smaller kernel buffer.
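As a rough illustration (not from the question itself), here is a minimal sketch of asking the kernel for larger socket buffers on a UDP listener with Go's net package; note the kernel may silently clamp the values (on Linux, to net.core.rmem_max / wmem_max). The port number is just the one from the error message above.

package main

import (
	"log"
	"net"
)

func main() {
	addr := &net.UDPAddr{IP: net.IPv4zero, Port: 9737} // illustrative port
	conn, err := net.ListenUDP("udp", addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask the OS for larger receive/send buffers (SO_RCVBUF / SO_SNDBUF).
	if err := conn.SetReadBuffer(4 * 1024 * 1024); err != nil {
		log.Printf("SetReadBuffer: %v", err)
	}
	if err := conn.SetWriteBuffer(4 * 1024 * 1024); err != nil {
		log.Printf("SetWriteBuffer: %v", err)
	}

	// The application buffer only needs to hold one datagram;
	// it is independent of the kernel buffer sizes above.
	buf := make([]byte, 64*1024)
	for {
		n, from, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Printf("read: %v", err)
			continue
		}
		log.Printf("got %d bytes from %v", n, from)
	}
}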
Finally, you're using UDP. There is no guarantee that you'll receive packets in order or at all. Any router in between you and a peer could decide to drop the packet for any reason. Since you're using UDP, you should prepare for dropped and out-of-order packets. You also might need some sort of retransmission mechanism, which further complicates things.
Or you might consider using TCP if dropped packets are unacceptable, knowing that timing is indeterminate.
If you're on Linux, you can see the current buffer sizes in /proc/sys/net. Usually the kernel will double what you ask for.
Also, you can tune your buffer size by watching for packet drops in /proc/net/udp. If you see drops, you might want to make your rcv buffer bigger, especially if the data is bursty and the processing is intensive. If your data is coming in at a consistent rate and you're still dropping packets, then you aren't processing them fast enough.
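Not part of the original answer, but here is a small sketch of watching that file from Go, under the assumption that the last column of /proc/net/udp is the per-socket drop counter (as on recent Linux kernels):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/udp")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Scan() // skip the header line
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 0 {
			continue
		}
		drops := fields[len(fields)-1] // last column: datagrams dropped for this socket
		if drops != "0" {
			fmt.Printf("socket %s (local %s) has %s drops\n", fields[0], fields[1], drops)
		}
	}
}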

Might the UDP packet loss rate increase under certain conditions?

Might the UDP packet loss percentage increase with packet size? For example, if I send 100,000 packets, in the first try the byte[] size is 30, but in the second it's 300. Could packet size play a role in a packet's likelihood of being dropped, or is the packet loss percentage not size-dependent?
Packet loss does depend on the size of the packet, for several reasons.
IP packets can be up to approximately 64k, but they are fragmented down to the MTU of Ethernet (around 1500 bytes), and if one of those fragments gets lost, the whole IP packet is dropped. For larger packets, if the traffic is high, the probability is higher that the packet will be dropped.
There is more to it than just that. Internally, a protocol stack is implemented using buffers that are a lot smaller than the MTU; these can vary from 300 bytes upward. The point is that these buffers are also a limited resource: if the network device runs out of buffers, the packet will be dropped as well.
If you don't know the MTU on the network in question, then according to the link below a 512-byte UDP payload is considered reasonable, allowing a margin for other header information that you may not have anticipated.
What is the largest Safe UDP Packet Size on the Internet
So yes, because you're sending larger packets, the chance that packets are dropped can increase.
Also, if you compare sending 100,000 packets of 30 bytes with 10,000 packets of 300 bytes, the user data is the same, but the total number of bytes on the wire is larger in the small-packet case because of the per-packet headers.
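To make the header overhead concrete, here is a small sketch of my own, reusing the 46-byte UDP + IP + Ethernet header figure from earlier:

package main

import "fmt"

func main() {
	const headers = 46 // 8 (UDP) + 20 (IP) + 18 (Ethernet) bytes per packet
	small := 100000 * (30 + headers) // 100,000 packets with 30-byte payloads
	large := 10000 * (300 + headers) // 10,000 packets with 300-byte payloads
	fmt.Println("same 3,000,000 bytes of user data:")
	fmt.Printf("  30-byte datagrams : %d bytes on the wire\n", small)
	fmt.Printf("  300-byte datagrams: %d bytes on the wire\n", large)
}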

Why is low data throughput observed when iperf is run with the UDP packet size set below 2000?

I'm experimenting on an LTE connection, checking the maximum bandwidth that can be achieved in the uplink. While creating iperf sessions I observed that I'm not able to go beyond 100Kbps in the uplink when the UDP packet size is set to 1400. When I increased the packet size to 50000, I was able to achieve 2Mbps on the same link.
Can someone explain why this performance difference is observed? When I tried this on a wired channel, I was able to achieve 10Mbps with the UDP packet size set to 1400.
What could be the reason for this?
Will trying TCP/IP instead of UDP increase the data throughput?
It probably matters where fragmentation is done: in the application or in the IP stack. Your observations show that the IP stack is more efficient.
TCP will be slower. TCP's built-in congestion control will not allow you to send more packets until some of the already-sent packets have been ACKed. That adds round-trip time to the performance considerations.
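To see why round-trip time matters, here is a back-of-the-envelope sketch of my own (assuming a fixed 64KB window and no loss): a single TCP connection can carry at most one window of data per round trip.

package main

import "fmt"

func main() {
	// Rough upper bound for a single TCP connection: window size / RTT.
	window := 64.0 * 1024 // assumed 64KB window, no window scaling
	for _, rttMs := range []float64{10, 50, 100} {
		bytesPerSec := window / (rttMs / 1000)
		fmt.Printf("RTT %3.0f ms -> at most %.1f KB/s\n", rttMs, bytesPerSec/1024)
	}
}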
UDP has no such restrictions. It can (ab)use the network to its full potential.

What does the limit option in tc netem mean and do?

I'm trying to emulate a slow network link with the tc command. I use netem to emulate delay and packet loss, and htb to emulate narrow bandwidth, but I see there is a limit option in netem. What does this option do? Will it affect the final bandwidth?
I googled it and found something at http://manpages.ubuntu.com/manpages/raring/man8/tc-netem.8.html
which says:
limits the effect of selected options to the indicated number of next packets.
But I still cannot understand what it does.
I don't know exactly what netem is doing, but I've found that if you don't set "limit" to a higher value, netem doesn't work correctly - i.e. it discards packets at higher speeds and possibly has other problems, essentially not accurately emulating a real network.
From the mailing list mentioned by CarlH, Stephen Hemminger said:
The limit value is in packets at least when using the default qdisc
inside netem (tfifo). You can also use pfifo and configure it for
packet limit, or bfifo same only bytes. The value 1000 is low, you
want about 50% more than the max packet rate * delay, unless you are
trying to emulate a router with a small queue.
So for a 1 Gbps link with 100 ms delay: 1 Gbps / (1500 bytes × 8 bits) ≈ 83,333 packets/s, × 0.1 s × 1.5 ≈ 12,500 packets.
Command:
sudo tc qdisc add dev eth1 root netem limit 12500 delay 100ms loss 1%
I've been using limit 100000, which seems to work well, but a lower value may be sufficient.
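As a small helper of my own (not from the answer or the mailing list), the same rule of thumb can be computed for other rates and delays:

package main

import (
	"fmt"
	"math"
)

// netemLimit follows the rule of thumb quoted above:
// about 1.5 * (max packet rate) * (delay), in packets.
func netemLimit(rateBitsPerSec, mtuBytes, delaySec float64) int {
	packetsPerSec := rateBitsPerSec / (mtuBytes * 8)
	return int(math.Round(packetsPerSec * delaySec * 1.5))
}

func main() {
	fmt.Println(netemLimit(1e9, 1500, 0.100)) // 1 Gbps, 1500-byte MTU, 100 ms -> 12500
	fmt.Println(netemLimit(1e8, 1500, 0.100)) // 100 Mbps -> 1250
}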
From https://lists.linuxfoundation.org/pipermail/netem/2007-March/001091.html
The "limit" parameter refers to the number of buffers allocated in the
netem module.
The limit must be adjusted to support the number of frames delayed
(500ms for e.g.) at a given data rate.
Yours sincerely,
Laurent MARIE
The updated documentation says:
limit packets
maximum number of packets the qdisc may hold queued at a time.

iPhone 4S - BLE data transfer speed

I've been tinkering around with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1KB / 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the data transfer and the ACK after each set of packets, I believe we should be able to go as fast as 1.5KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
See the Apple guidelines and you will see that a connection update request is required to speed up your connection:
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I have min = 20ms, max = 40ms for the connection interval.
I hope I could help
Roman
If you are able to use a higher MTU size (negotiated by iOS), then you would be able to increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header would then only need to be transmitted once per ATT packet rather than in every packet.
If you are able to transmit 6 packets per connection interval, then you would be able to fit in 35 extra bytes per connection interval (the 7-byte header would still be there for the first packet). The MTU could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to reassemble the packet). The max MTU size allowed by ATT is 515 bytes (the max attribute size of 512 bytes + a 3-byte header for the opcode and handle).