UDP packet drop issue in Wireshark while writing to pcap file - MATLAB

My FPGA is continuously sending UDP packets over 10/100/1000 Mbps Ethernet. I am using Wireshark to capture the packets directly to a .pcap file and then extract and display the UDP data in a MATLAB GUI. The FPGA kit is connected to a 1 Gbps switch, which is connected to the PC.
Initially I tried MATLAB's built-in UDP object instead of .pcap files, but with it I faced packet drops at high bandwidth (>1 Mbps) and could only achieve drop-free reception at a very low bandwidth of around 110 kbps. That was not acceptable in my case. A link to the problem is given below:
Incorrect UDP data reception in Matlab
Based on these problems I moved to Wireshark. I use Wireshark to create multiple .pcap files (1 MB each) of UDP data and then extract the UDP data from those files in MATLAB. A link also pointed me toward this approach of writing packets directly to file (High speed UDP receiver in MATLAB).
The problem is that some packets are still dropped at random intervals. The drops are very frequent at high Ethernet bandwidth (220 Mbps), so I reduced the bandwidth to below 50 Mbps, but I still get occasional drops. I tried several of the tips from the Wireshark performance page (http://wiki.wireshark.org/Performance), but the issue persists.
As far as I understand, this seems to be a memory issue.
Some details about the design:
UDP Data Size = 64 bytes
Ethernet Frame Size = 110 bytes
PCAP File Size = 1 MB
Wireshark Buffer Size = 1 GB
Please guide me towards a possible solution.
Regards,
Sameed

I would suggest that a larger capture file size, something like 64 MB, may provide efficiency savings. I also agree with Slava's suggestion: tcpdump is more efficient and robust than the Wireshark GUI.

Try writing to a RAM disk.
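Putting those two suggestions together, a tcpdump ring-buffer capture writing to a RAM disk might look like the sketch below. The interface name, port filter, and tmpfs mount point are assumptions; adjust them for your setup and run as root:

```shell
# Mount a RAM disk so pcap writes don't compete with disk I/O (example path)
mkdir -p /mnt/capture
mount -t tmpfs -o size=2g tmpfs /mnt/capture

# -B: kernel capture buffer in KiB, -s: snap length (frames here are only 110 bytes),
# -C: rotate every 64 MB, -W: keep 16 files in the ring, -n: no name resolution
tcpdump -i eth0 -n -B 524288 -s 128 -C 64 -W 16 \
        -w /mnt/capture/udp.pcap udp port 5000
```

The ring buffer (-C/-W) keeps the per-file size bounded, like the 1 MB Wireshark files, but without the cost of the GUI dissecting every packet.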

Related

link speed vs throughput

I am new to networking. While doing an experiment on the File Transfer Protocol (over a wired connection), I have to calculate the time taken to transfer one file from source to destination.
To calculate the file transfer time, I need the file size as well as the link speed.
Can anyone please explain what this link speed is and how to calculate it?
Is it the same as the PHY rate?
Do PHY rates exist for wired connections, or only for wireless ones?
Also, please explain the difference between PHY rate, link speed, and throughput.
Thanks in advance.
You will need to consider the whole protocol stack for the exercise:
FTP
TCP
IP
Ethernet
PHY
Each of these layers reduces the raw PHY rate.
On the Ethernet and IP layers it is quite simple. Frames in these protocols have a maximum size (the MTU), and a fixed amount of space in each frame is taken up by the header.
After subtracting the header overhead, you have the throughput available to IP.
For TCP we can ignore the data overhead for now, as the main factor is the additional round trips it adds. Let's only deal with the handshake and ignore the other details: for the SYN/SYN-ACK/ACK sequence, we have to account for twice the one-way delay before the client can start sending data.
For FTP, let's also assume the simplest case: anonymous login, active transfer, no encoding. That adds one more round trip before the actual data transfer starts.
Why did we choose to ignore the data-size overhead of the FTP and TCP protocols? Because at all modern link speeds it is completely masked by the delay.
So in total, the transfer time is roughly file size / (PHY rate × Ethernet efficiency × IP efficiency) + 4 × one-way delay, and your effective throughput is the file size divided by that time.
Choosing a different transfer encoding in FTP would add another factor to the rate term. Accounting for TCP window scaling, retransmissions, a real FTP login, etc. would add more round trips.
There could also be additional protocols in that stack, introducing further overhead. E.g. network tunnels.
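The estimate above can be sketched in a few lines of Python. All numbers are made-up examples, and the Ethernet/IP efficiencies are simplified to fixed header overheads on a standard 1500-byte MTU:

```python
def transfer_time(file_size_bytes, phy_rate_bps, one_way_delay_s, mtu=1500):
    """Estimate FTP transfer time: serialization at the effective rate
    plus the extra round trips for the TCP handshake and FTP setup."""
    eth_overhead = 38        # preamble + Ethernet header + FCS + interframe gap
    ip_tcp_overhead = 40     # IPv4 + TCP headers, no options
    payload = mtu - ip_tcp_overhead
    efficiency = payload / (mtu + eth_overhead)
    effective_rate = phy_rate_bps * efficiency
    # SYN/SYN-ACK/ACK costs ~2 one-way delays before data flows,
    # and the FTP command exchange adds roughly one more round trip.
    setup = 4 * one_way_delay_s
    return file_size_bytes * 8 / effective_rate + setup

# 10 MiB file over Gigabit Ethernet with a 10 ms one-way delay:
t = transfer_time(10 * 1024 * 1024, 1_000_000_000, 0.010)
print(f"{t:.3f} s")
```

Dividing the file size by this time gives the effective throughput; for large files the serialization term dominates, for small files the setup delay does.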

Why low data throughput is observed when iperf tried with UDP packet size set below 2000?

I'm experimenting on an LTE connection to check the maximum bandwidth that can be achieved in the uplink. While creating iperf sessions, I observed that I cannot go beyond 100 kbps in the uplink when the UDP packet size is set to 1400. However, when I increased the packet size to 50000, I was able to achieve 2 Mbps on the same link.
Can someone explain why this performance difference is observed? When I tried this on a wired channel, I was able to achieve 10 Mbps with the UDP packet size set to 1400.
What could be the reason for this?
Will trying TCP instead of UDP increase the data throughput?
It probably matters where fragmentation is done: in the application or in the IP stack. Your observations show that the IP stack is more efficient.
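To see what IP-stack fragmentation means in packet counts, here is a small sketch of how IPv4 would split one large UDP datagram (assuming IPv4 with no IP options; a single 50000-byte send() becomes dozens of wire packets):

```python
import math

def ipv4_fragments(udp_payload, mtu=1500):
    """Number of IP fragments needed for one UDP datagram.
    Non-final fragments must carry a multiple of 8 payload bytes."""
    total = udp_payload + 8            # UDP header rides in the first fragment
    per_frag = (mtu - 20) // 8 * 8     # usable IP payload, rounded down to 8
    return math.ceil(total / per_frag)

print(ipv4_fragments(1400))    # fits in a single packet
print(ipv4_fragments(50000))   # one send() call, many wire packets
```

With per-packet overheads (scheduling grants on LTE, interrupts, syscalls) roughly constant, fewer and larger send() calls let the stack amortize that cost, which is consistent with the observed speedup.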
TCP will be slower. TCP's built-in congestion control will not let you send more packets until some of those already sent have been ACKed. That adds round-trip time to the performance considerations.
UDP has no such restrictions; it can (ab)use the network to its full potential.
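The round-trip cost mentioned above puts a hard ceiling on TCP throughput: at most one receive window of data can be in flight per RTT. A quick sketch (64 KB is the classic maximum window without window scaling; the 50 ms RTT is an illustrative LTE-like value):

```python
def tcp_throughput_limit_bps(window_bytes, rtt_s):
    # At most one full window can be unacknowledged per round trip.
    return window_bytes * 8 / rtt_s

# 64 KB window over a 50 ms round trip:
print(tcp_throughput_limit_bps(65535, 0.050) / 1e6, "Mbps")  # ~10.5 Mbps
```

On high-latency links this window/RTT bound, not the PHY rate, is often what limits a single TCP connection.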

Increased MTU but still can't send large UDP packets

First, a little info on what I'm trying to achieve. I'm using a Texas Instruments EVM6678LE board, and I am trying to increase the UDP transfer rate between the board and my PC.
I've increased the MTU on my PC through netsh > interface > ipv4 to 15,000. But when I ping the board from my PC, I can only go up to "ping 192.168.2.100 -l 10194"; from 10195 bytes onwards I get a request timeout. Is this a limitation of my PC or something?
Does anyone have any idea what the cause could be? Any advice or suggestions would be welcome, as the only way I can think of to increase the transfer rate is increasing the per-packet size, which reduces overhead. At 10k I get a rate of around 9.1 MB/s, and I'm trying to attain 25 MB/s.
Thanks!
Increasing the MTU on your PC may not prevent fragmentation. I don't know exactly what controls this, but your network card or driver can fragment the packet even when the MTU is not reached. Use a sniffer like Wireshark to see how the packets are actually sent.
About the timeout: it is possible that your board rejects fragmented pings (as Ping of Death protection). There is also a possibility that its packet buffer is 10 kB (10240 bytes) long and cannot receive larger packets. Also, make sure that the receiving endpoint has a matching MTU.
Anyway, if you are trying to increase the transfer rate, you are on the wrong track. The overhead is 8 bytes for UDP, 20 bytes for IP, and 18 bytes for Ethernet, which makes a total of 46 bytes (oh, coincidence: 46 + 10194 is exactly 10240). 46 bytes of overhead at a 1024-byte MTU leaves 95.5% efficiency; at 4096 it is 98.9%, and at 16384 it is 99.7%. That means you gain +3.5% transfer rate going from 1024 to 4096, and another +0.8% from 4096 to 16384. The gain is negligible, and you should just leave the MTU at the common default of 1500.
In any case, going from 9.1 MB/s to 25 MB/s just by changing the MTU is impossible (if it were, why wouldn't the PC default be higher?). My guess is that you are using Fast Ethernet (100BASE-T) and are already transferring at near full bandwidth. To get higher rates you would need Gigabit Ethernet (1000BASE-T), which means both hardware endpoints must support 1000BASE-T.
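The framing-efficiency argument from the answer can be reproduced in a few lines, using its convention that the 46 bytes of UDP + IP + Ethernet headers come out of the MTU:

```python
def efficiency(mtu, overhead=46):
    # 8 bytes UDP + 20 bytes IPv4 + 18 bytes Ethernet = 46 bytes per packet
    return (mtu - overhead) / mtu

for mtu in (1024, 4096, 16384):
    print(mtu, f"{efficiency(mtu):.1%}")
# 1024 -> 95.5%, 4096 -> 98.9%, 16384 -> 99.7%
```

The diminishing returns are visible immediately: quadrupling the MTU past 4096 recovers less than one percent of extra payload bandwidth.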

What is the maximum possible size of receive buffer of network layer?

I want to know the maximum size of the receive buffer at the network (TCP/IP) layer. Can anyone help with this?
What is the socket type?
If the socket is TCP, then I would suggest setting the buffer size to 8 KB.
For UDP you can also set the buffer size to 8 KB. It is not actually that important for UDP, because a whole datagram is delivered at once, so you do not need to keep much data queued in the socket for a long period of time.
But in TCP, data arrives as a stream. You cannot afford data loss there, because it will result in several parsing-related issues.
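For reference, the receive buffer is set per socket with SO_RCVBUF; the OS may round the value up (Linux doubles the requested size), and the hard ceiling comes from kernel settings such as net.core.rmem_max on Linux. A minimal sketch:

```python
import socket

# Request an 8 KB receive buffer on a UDP socket, then read back
# what the kernel actually granted.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024)
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("requested 8192, kernel granted", actual)
sock.close()
```

So there is no single "maximum": it is whatever the kernel's configured cap allows, which is why the answer can only suggest a sensible value rather than a limit.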

UDP stream with little packets

I have a small network with a client and a server, and I'm testing the frame rate while changing the packet size. In particular, I have an image; by changing a threshold I extract keypoints and descriptors and then send a fixed number of packets (whose size varies with the threshold). The problem appears when the UDP packets are below the MTU size: the reception rate decreases and the frame rate tends to become constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold, with the threshold stepped by 10 from 40 to 170.
I can't post the image, so here is the link.
Thanks for the responses.
I don't think anyone will be interested in this answer anymore, but we came to the conclusion that the problem lies in the WiFi dongle's drivers.
The transmission window does not go below a certain time threshold. So below a certain amount of data, the time remains constant and the rate therefore decreases.
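That conclusion can be modelled simply: if the driver never spends less than some minimum window t_min per packet, then below a break-even size the per-packet time is constant and throughput scales linearly with packet size. The link rate and t_min below are illustrative values, not measurements from the dongle:

```python
def throughput_bps(packet_bytes, link_rate_bps, t_min_s):
    # Each packet occupies the link for at least t_min, regardless of size.
    t = max(packet_bytes * 8 / link_rate_bps, t_min_s)
    return packet_bytes * 8 / t

# Below the break-even size, time per packet is constant,
# so the achieved rate grows with packet size:
for size in (200, 800, 3200):
    print(size, throughput_bps(size, 54_000_000, 0.001))
```

This reproduces the shape of the graph: small (sub-MTU) packets hit the time floor and the rate collapses, while large packets approach the nominal link rate.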