How to measure how fast I'm sending the UDP datagrams? - sockets

I have two questions about the actual sending speed using sendto() with C socket programming.
I did a little socket programming, and I'm sending UDP datagrams back to back, with no spacing (pausing) between each sendto() call in a for loop. Is it reasonable to use clock_gettime() to get the elapsed time and calculate the actual sending rate? What actually limits the sending speed: the CPU's frequency, or the network interface I'm using? My understanding is that it should be the slower of the two. Using clock_gettime(), can I get a reasonably good estimate of this sending speed? Say we measure this sending speed and denote it by S.
Suppose I'm sending the UDP datagrams from a PC through a 100 Mbps Ethernet interface to a router. What is the actual arrival rate at the router? If S is greater than 100 Mbps, then the arrival rate will be around 100 Mbps, right? And if S is less than 100 Mbps, should the arrival rate be S, or still 100 Mbps? I'm a little confused.
The reason I'm doing this is that I want to find the maximum burst of UDP datagrams I can send in a row to the router (given a certain bandwidth limit on the outgoing link) without dropping any datagram. Any idea how to set up tests to determine this?

A million things affect the speed and the number of dropped packets. I recommend you write a C program that varies the sending rate, burst length, etc., measures the speed and the dropped packets, and outputs the results to something you can graph, such as a CSV file.
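To make the measurement concrete, here is a minimal sketch of the kind of C program that answer suggests. The destination address, port, payload size, and burst count are made-up example values, and stamping a sequence number into each datagram is just one way a receiver could count drops.

/* Minimal sketch: time a burst of back-to-back sendto() calls with
 * clock_gettime() and print the achieved send rate. Address, port,
 * payload size, and burst count are example values only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define PAYLOAD_BYTES 1400          /* keep below a typical Ethernet MTU */
#define BURST_COUNT   10000

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                        /* example port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* example receiver/router */

    char buf[PAYLOAD_BYTES] = {0};
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint32_t seq = 0; seq < BURST_COUNT; seq++) {
        memcpy(buf, &seq, sizeof seq);                 /* receiver can detect gaps */
        if (sendto(fd, buf, sizeof buf, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0) {
            perror("sendto");
            break;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mbps = (double)BURST_COUNT * PAYLOAD_BYTES * 8 / secs / 1e6;
    printf("%d datagrams of %d bytes in %.4f s -> %.2f Mbit/s\n",
           BURST_COUNT, PAYLOAD_BYTES, secs, mbps);

    close(fd);
    return 0;
}

A fuller version would wrap this in an outer loop over payload sizes and pacing intervals and emit one CSV row per run, as suggested above. Keep in mind that this mainly measures how fast sendto() hands datagrams to the kernel; over a long burst on a blocking socket that rate tends to converge toward what the interface can actually drain, but the only definitive way to know the arrival rate at the router is to count datagrams on the receiving side.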

Related

How do UDP SetWriteBuffer and SetReadBuffer affect the OS's buffers?

Description
I'm busy writing a high frequency UDP server with Go. I'd estimate at least 1000 packets/second both ways.
However, as the size of the data I'm sending over the UDP socket grew, I eventually ran into the following error: read udp 127.0.0.1:1541->127.0.0.1:9737: wsarecv: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
I eventually just grew the size of the buffers I was reading from and writing into as follows:
buffer := make([]byte, 64 * 1024 * 1024) // used to just be 1024
l, err := s.socketSim.Read(buffer)
This worked fine and I stopped getting the error... However, I then came across two functions inside the net package:
s.socketSim.SetWriteBuffer(64 * 1024 * 1024)
s.socketSim.SetReadBuffer(64 * 1024 * 1024)
I learned that these two act on the operating system's transmit and receive buffers.
Question
Do I even need to set the operating system buffer sizes, and why? How does the size of the application buffer impact the size of the operating system buffers? Should they always be the same, and how big should/can they become?
First, not only is there an MTU for each interface on your device and on whatever destination you're sending to or receiving from, but there is also an MTU for each device in between. For this reason, as others have mentioned, you might want to stick to a generally accepted MTU, since you might not control every device on the data route. In the case of UDP, the MTU really just determines how big a datagram can be before it fragments.
Second, you almost certainly want your SND/RCV buffers to be larger than the MTU. These are kernel buffers that hold on to data when you're not ready to receive it. A larger UDP receive buffer means that the kernel will buffer more packets for you before dropping them into the abyss. Maybe you have some non-trivial work to do for each packet; depending on the bitrate, you might want a larger or smaller kernel buffer.
Finally, you're using UDP. There is no guarantee that you'll receive packets in order or at all. Any router in between you and a peer could decide to drop the packet for any reason. Since you're using UDP, you should prepare for dropped and out-of-order packets. You also might need some sort of retransmission mechanism, which further complicates things.
Or you might consider using TCP if dropped packets are unacceptable, knowing that timing is indeterminate.
If you're on Linux, you can see the default and maximum buffer sizes under /proc/sys/net (e.g. net/core/rmem_default and rmem_max). Usually the kernel will double what you ask for.
Also, you can tune your buffer size by watching for packet drops in /proc/net/udp. If you see drops, you might want to make your receive buffer bigger, especially if the data is bursty and the processing is intensive. If your data is coming in at a consistent rate and you're still dropping packets, then you aren't processing them fast enough.
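To make the relationship between the Go calls and the OS buffers concrete, here is a minimal C sketch of the setsockopt()/getsockopt() calls that, as far as I know, Go's SetReadBuffer/SetWriteBuffer wrap; the 64 MiB figure simply mirrors the question.

/* Minimal sketch: request a 64 MiB kernel receive buffer and read back
 * what the kernel actually granted. */
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int requested = 64 * 1024 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof requested) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int granted = 0;
    socklen_t len = sizeof granted;
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len);

    /* On Linux the kernel reports roughly double what you asked for
     * (to account for bookkeeping overhead) and silently clamps the
     * request to net.core.rmem_max. */
    printf("requested %d bytes, kernel granted %d bytes\n", requested, granted);

    close(fd);
    return 0;
}

Running this before and after raising net.core.rmem_max shows whether a large request like 64 MiB is actually being honored or silently clamped.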

VoIP delta spikes below 20ms, causing the jitter to change

I am trying to do some measurements on VoIP. I am using OpenSIPS, RTPProxy, and SIPp for testing.
Everything works fine as expected, but I only have a question regarding the delta time.
Below is a screenshot I got from Wireshark's RTP stream analysis.
Why do I have these spikes below 20 ms?
I am using an 8kulaw audio file in a SIPp XML scenario, where 8kulaw has the following characteristics:
8kulaw.wav: RIFF (little-endian) data, WAVE audio, ITU G.711 mu-law,
mono 8000 Hz
Much appreciated!
The "RTP Stream Analysis" from wireshark is giving you hints on the quality of the stream.
Your Max Delta value is 20.15 and occurs at packet 2008.
This will indicate the time between 2 packets which in your use-case are supposed to be spaced by exactly 20ms.
So the maximum difference is very short and should definitly not affect the quality of the stream. Usually, this is used on receiver (for incoming stream): on sender, there is usually no internal latency. This probably explains why you have so short "Max Delta".
The spikes you see are pretty big, but this is mostly because the scale is very short. Not because the stream is bad.
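For reference, the expected 20 ms spacing falls straight out of the codec parameters. A quick sketch, assuming the common G.711 packetization of 160 samples per RTP packet (ptime = 20 ms), which is not stated in the question:

/* Back-of-the-envelope check of the expected inter-packet delta for
 * G.711 mu-law at 8 kHz with the usual 160 samples per RTP packet. */
#include <stdio.h>

int main(void)
{
    double sample_rate        = 8000.0;  /* 8 kHz mu-law, 1 byte per sample */
    double samples_per_packet = 160.0;   /* assumed ptime = 20 ms */

    double spacing_ms    = samples_per_packet / sample_rate * 1000.0; /* 20 ms */
    double payload_bytes = samples_per_packet;                        /* 160 B */

    printf("expected inter-packet delta: %.1f ms, payload: %.0f bytes\n",
           spacing_ms, payload_bytes);
    return 0;
}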

Sending images through sockets

I have an idea for a client-server program. The client handles only input, sending it to the server. The server handles the input and logic, and then sends the image of the program to the client. The client displays the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question: how reliable is the UDP protocol? How many pixels would arrive corrupted? Say, 10% on average?
EDIT2: For example, I have a 320x200 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server to the client if my ping is X, my upload speed is Y Mbps, and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size) / ((1 - packet overhead) * (minimum bandwidth))
Packet overhead is only a few percent, so you might be able to drop that factor. Minimum bandwidth is the smaller of the server's upload bandwidth and the client's download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow time for the input to be sent from the client to the server, which adds at least another (ping time)/2.
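As a rough illustration of that formula, here is a sketch with made-up numbers: the 320x200 32-bit frame from EDIT2, plus an assumed 50 ms ping, 2 Mbit/s minimum bandwidth, and ~3% packet overhead. None of the assumed values come from the question.

/* Rough sketch of the transfer-time formula above with example numbers. */
#include <stdio.h>

int main(void)
{
    double image_bits = 320.0 * 200.0 * 32.0; /* ~2,048,000 bits, uncompressed */
    double ping_s     = 0.050;                /* example round-trip time */
    double bandwidth  = 2e6;                  /* example min(upload, download), bit/s */
    double overhead   = 0.03;                 /* headers eat ~3% of the link */

    /* one-way latency + serialization time at the effective bandwidth */
    double transfer_s = ping_s / 2.0 + image_bits / ((1.0 - overhead) * bandwidth);

    printf("uncompressed frame: %.2f s -> at most %.1f frames/s\n",
           transfer_s, 1.0 / transfer_s);
    return 0;
}

With these example numbers an uncompressed frame takes about a second, so compression would clearly be needed to reach 5 frames per second on such a link.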

UDP stream with little packets

I have a small network with a client and a server, and I'm testing the frame rate while changing the size of the packets. Specifically, I have an image; by changing a threshold I extract keypoints and descriptors and then send a fixed number of packets (whose size varies with the threshold). Problems occur when the UDP packets are smaller than the MTU: the reception rate decreases and the frame rate tends to stay constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold, with the threshold stepping by 10 from 40 to 170.
I can't post the image, so this is the link.
Thanks for the responses.
I suppose nobody will be interested in this answer, but we came to the conclusion that the problem is in the wifi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data the transmission time stays constant and the effective rate decreases.

Bandwidth measurement by minimum data transfer

I intend to write an application where I will need to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between the arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, runs longer, and is not scalable (since the application needs to run at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be as the receiver may not always echo back the packet on time to get the correct RTT. Use of ICMP alone may not always work as many servers disable it.
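For a sense of scale, here is a quick sketch of that bound with made-up numbers (a 64 KiB receive buffer and a 50 ms RTT; neither value comes from the question).

/* Quick sketch of the bandwidth-delay-product bound quoted above. */
#include <stdio.h>

int main(void)
{
    double rcv_buffer_bits = 64.0 * 1024.0 * 8.0;  /* 64 KiB window, in bits */
    double rtt_s           = 0.050;                /* example round-trip time */

    /* A TCP sender can have at most one window of data in flight per RTT,
     * so throughput is capped at window / RTT regardless of link speed. */
    double max_bps = rcv_buffer_bits / rtt_s;
    printf("throughput ceiling: %.1f Mbit/s\n", max_bps / 1e6);  /* ~10.5 */
    return 0;
}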
My main application runs over a TCP connection, so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate it if anybody could suggest a simple technique (a reliable formula) to measure the bandwidth of a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application is using, it is much easier: e.g. keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
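A minimal sketch of that bookkeeping, with hypothetical function names (record_bytes, bytes_last_second) not taken from any real application: the real program would call record_bytes() wherever it performs its send()/write().

/* Sliding-window throughput counter: 100 buckets of 10 ms = a 1 s window.
 * Buckets are refreshed lazily each time record_bytes() is called. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUCKET_MS    10
#define NUM_BUCKETS  100

static uint64_t buckets[NUM_BUCKETS];
static int64_t  last_slot = -1;

static int64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Call this after every successful send with the number of bytes written. */
void record_bytes(size_t n)
{
    int64_t slot = now_ms() / BUCKET_MS;
    if (last_slot < 0 || slot - last_slot >= NUM_BUCKETS) {
        memset(buckets, 0, sizeof buckets);        /* whole window is stale */
    } else {
        for (int64_t s = last_slot + 1; s <= slot; s++)
            buckets[s % NUM_BUCKETS] = 0;          /* clear any skipped buckets */
    }
    last_slot = slot;
    buckets[slot % NUM_BUCKETS] += n;
}

/* Bytes transferred during the most recent one-second window. */
uint64_t bytes_last_second(void)
{
    uint64_t total = 0;
    for (int i = 0; i < NUM_BUCKETS; i++)
        total += buckets[i];
    return total;
}

int main(void)
{
    record_bytes(1500);                            /* pretend we just sent a packet */
    printf("last second: %llu bytes\n",
           (unsigned long long)bytes_last_second());
    return 0;
}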
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use those algorithms to measure bandwidth; note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is strongly influenced by latency.
The easiest way to measure bandwidth using TCP is to send TCP traffic as fast as possible and measure the transferred throughput, but that floods the network. None of the non-flooding algorithms are reliable on high-speed networks. In addition, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I'm not surprised if the result doesn't make sense.