UDP stream with small packets - sockets

I have a small network with a client and a server, and I'm testing the frame rate while changing the packet size. Specifically, I have an image; by changing a threshold, I extract keypoints and descriptors and then send a fixed number of packets (whose size varies with the threshold). The problem appears when the UDP packets are smaller than the MTU: the reception rate decreases and the frame rate tends to become constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold, with the threshold stepped by 10 from 40 to 170.
I can't post the image, so this is the link.
Thanks for the responses.

I don't think anyone will be interested in this answer, but we came to the conclusion that the problem lies in the WiFi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data the time stays constant and the rate decreases.
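For reference, a receiver-side timing loop like the following is one way to measure the per-frame reception time and frame rate on the server side. This is only a minimal sketch; the port, the FRAME_PACKETS count, and the buffer size are placeholders, not values from the setup above.

```c
/* Minimal sketch: time the reception of a fixed number of UDP packets
 * per frame on the server side. FRAME_PACKETS and the port are
 * placeholders, not values from the original setup. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define FRAME_PACKETS 64          /* packets expected per frame (assumed) */
#define BUF_SIZE      65536

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);  /* arbitrary test port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char buf[BUF_SIZE];
    struct timespec start, end;

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &start);
        long bytes = 0;
        for (int i = 0; i < FRAME_PACKETS; i++)
            bytes += recvfrom(sock, buf, BUF_SIZE, 0, NULL, NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double dt = (end.tv_sec - start.tv_sec) +
                    (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("frame: %ld bytes in %.3f ms (%.1f fps)\n",
               bytes, dt * 1e3, 1.0 / dt);
    }
    close(sock);
    return 0;
}
```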

Related

VoIP delta spikes below 20ms, causing the jitter to change

I am trying to do some measurements on VoIP. I am using OpenSIPS, RTPProxy, and SIPp for testing.
Everything works fine as expected, but I only have a question regarding the delta time.
Below is a screenshot I got from Wireshark RTP streams' analysis.
Why do I have these spikes below the 20ms?
I am using 8kulaw.wav in a SIPp XML scenario, where 8kulaw has the following characteristics:
8kulaw.wav: RIFF (little-endian) data, WAVE audio, ITU G.711 mu-law,
mono 8000 Hz
Much appreciated!
The "RTP Stream Analysis" from wireshark is giving you hints on the quality of the stream.
Your Max Delta value is 20.15 and occurs at packet 2008.
This will indicate the time between 2 packets which in your use-case are supposed to be spaced by exactly 20ms.
So the maximum difference is very short and should definitly not affect the quality of the stream. Usually, this is used on receiver (for incoming stream): on sender, there is usually no internal latency. This probably explains why you have so short "Max Delta".
The spikes you see are pretty big, but this is mostly because the scale is very short. Not because the stream is bad.
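For anyone who wants to reproduce the delta numbers outside Wireshark, a small sketch like the following computes the same inter-arrival delta and its maximum. The timestamp array here is illustrative, not taken from the capture above.

```c
/* Sketch: compute inter-arrival deltas (in ms) from capture timestamps,
 * the same quantity Wireshark reports as "Delta". Example data only. */
#include <stdio.h>

int main(void)
{
    /* arrival times in seconds for a nominally 20 ms-spaced stream */
    double arrival[] = {0.000, 0.020, 0.041, 0.060, 0.079, 0.100};
    int n = sizeof(arrival) / sizeof(arrival[0]);

    double max_delta = 0.0;
    for (int i = 1; i < n; i++) {
        double delta = (arrival[i] - arrival[i - 1]) * 1e3; /* ms */
        if (delta > max_delta)
            max_delta = delta;
        printf("packet %d: delta = %.2f ms\n", i, delta);
    }
    printf("max delta = %.2f ms\n", max_delta);
    return 0;
}
```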

Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes while the discovery phase is going on. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during the discovery, in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I read that beacons can be sent extremely fast (less than 1 ms easily), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the info about your neighbors. How much extra power is needed?
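As a rough order-of-magnitude illustration (with assumed numbers, not tied to any specific radio): 500 neighbor entries of 4 bytes each is 2000 bytes, i.e. 16,000 bits; at a 250 kbit/s 802.15.4-style data rate that is roughly 64 ms of airtime, compared with well under 1 ms for a minimal beacon, and at a fixed transmit power the radio energy scales roughly linearly with that on-air time.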
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each carrying a small amount of extra information so they can be put back together. In this way, the power used grows linearly with N.
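A minimal sketch of that fragmentation idea follows, assuming a made-up 24-byte beacon payload and a stubbed-out send_beacon() transmit function (both are assumptions, not part of any specific protocol).

```c
/* Sketch: split a payload into beacon-sized chunks, each carrying a
 * small header so the receiver can reassemble them. Sizes and the
 * send_beacon() stand-in are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BEACON_PAYLOAD 24   /* usable bytes per beacon (assumed) */

struct frag_header {
    uint8_t msg_id;     /* identifies the original message */
    uint8_t index;      /* fragment number */
    uint8_t total;      /* total fragments in this message */
};

/* stand-in for the real radio transmit call */
static void send_beacon(const uint8_t *frame, size_t len)
{
    printf("beacon: %zu bytes (fragment %d/%d)\n",
           len, frame[1] + 1, frame[2]);
}

static void send_fragmented(uint8_t msg_id, const uint8_t *data, size_t len)
{
    size_t total = (len + BEACON_PAYLOAD - 1) / BEACON_PAYLOAD;
    uint8_t frame[sizeof(struct frag_header) + BEACON_PAYLOAD];

    for (size_t i = 0; i < total; i++) {
        struct frag_header hdr = { msg_id, (uint8_t)i, (uint8_t)total };
        size_t chunk = len - i * BEACON_PAYLOAD;
        if (chunk > BEACON_PAYLOAD)
            chunk = BEACON_PAYLOAD;

        memcpy(frame, &hdr, sizeof(hdr));
        memcpy(frame + sizeof(hdr), data + i * BEACON_PAYLOAD, chunk);
        send_beacon(frame, sizeof(hdr) + chunk);
    }
}

int main(void)
{
    uint8_t neighbor_table[200];          /* e.g. 50 entries of 4 bytes */
    memset(neighbor_table, 0xAB, sizeof(neighbor_table));
    send_fragmented(1, neighbor_table, sizeof(neighbor_table));
    return 0;
}
```

The 3-byte header (message id, fragment index, fragment count) is the "small extra information" mentioned above; the number of transmissions, and hence the energy, grows linearly with N.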

Sending images through sockets

I have an idea for a client-server program. The client handles only input and sends it to the server. The server handles the input and the logic, then sends the image of the program back to the client. The client draws the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have two computers at hand to test it.
EDIT: One more question - how reliable is the UDP protocol? How many pixels would arrive corrupted? Say, 10% on average?
EDIT2: For example, I have a 320x200 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server to the client, if my ping is X, my upload speed is Y Mbps and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
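As a rough sanity check on that figure (with approximate numbers): taking about 19,500 km as the distance between points in China and Argentina and 300,000 km/s as the speed of light in vacuum, one direction takes about 65 ms, so a round trip takes about 130 ms, which works out to just under 8 round trips per second before any routing, fiber, or processing delay is added.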
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size)/((1 - packet overhead) * (minimum bandwidth))
Packet overhead is only a few percent, so you might be able to drop that term out. Minimum bandwidth would be the minimum of the server upload bandwidth and the client download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow for time for the input to be sent from the client to the server, which adds another (ping time)/2 at a minimum.
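To make the formula concrete, here is a small sketch that plugs in the 320x200 32-bit image from the question; the ping, overhead, and bandwidth values are placeholders standing in for X, Y, and Z.

```c
/* Sketch: plug the 320x200 32-bit example from the question into the
 * formula above. Ping and bandwidth values are placeholders. */
#include <stdio.h>

int main(void)
{
    double image_bits = 320.0 * 200 * 32;  /* ~2.05 Mbit, uncompressed   */
    double ping_s     = 0.050;             /* X: round-trip time (s)     */
    double upload_bps = 10e6;              /* Y: server upload (bit/s)   */
    double down_bps   = 20e6;              /* Z: client download (bit/s) */
    double overhead   = 0.03;              /* ~3% packet headers         */

    double min_bw = upload_bps < down_bps ? upload_bps : down_bps;
    double t = ping_s / 2 + image_bits / ((1.0 - overhead) * min_bw);

    printf("one image: %.1f ms -> at most %.1f images/s (ignoring input leg)\n",
           t * 1e3, 1.0 / t);
    return 0;
}
```

With these made-up numbers one image takes roughly 240 ms, so 5 images per second would not quite fit; compressing the image would change that quickly.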

How to measure how fast I'm sending the UDP datagrams?

I have two questions about the actual sending speed using sendto() with C socket programming.
I did a little socket programming, and I'm sending UDP datagrams back to back, with no spacing (pausing) between the sendto() calls in a for loop. Is it reasonable to use clock_gettime() to get the elapsed time and calculate the actual sending rate? What actually limits the sending speed: the CPU's frequency, or the network interface I'm using? My understanding is that it should be the slower of the two. And using clock_gettime(), can I get a reasonably good estimate of this sending speed? Say we get this sending speed and denote it by S.
Suppose I'm sending the UDP datagrams from a PC through a 100 Mbps Ethernet interface to a router. What's the actual arrival rate at the router? If S is greater than 100 Mbps, then the arrival rate will be around 100 Mbps, right? And if S is less than 100 Mbps, then the arrival rate should be S, right? Or should it still be 100 Mbps? I'm a little confused.
And the reason I'm doing this is that I want to find the maximum burst of UDP datagrams I can send in a row to the router (given a certain bandwidth limit on the outgoing link) without dropping any datagrams. Any idea how to run some tests to determine this?
A million things affect the speed and the number of dropped packets. I recommend you write a C program that varies the sending speed, burst length, etc., measures the speed and dropped packets, and outputs the results to something you can graph, such as a CSV file.
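Along the lines of that suggestion, here is a minimal sketch that times back-to-back sendto() calls with clock_gettime() and prints one CSV line per run; the destination address, port, payload sizes, and burst counts are all placeholders.

```c
/* Sketch: blast N datagrams of a given size back to back, time the loop
 * with clock_gettime(), and print one CSV line per run. Address, port,
 * sizes and counts are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double run_burst(int sock, struct sockaddr_in *dst, int size, int count)
{
    char buf[1500] = {0};                 /* payload <= typical MTU */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < count; i++)
        sendto(sock, buf, size, 0, (struct sockaddr *)dst, sizeof(*dst));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)size * count * 8 / dt; /* bits per second */
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                      /* test port    */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* test address */

    printf("size_bytes,count,send_rate_mbps\n");
    for (int size = 64; size <= 1472; size *= 2)
        for (int count = 100; count <= 10000; count *= 10)
            printf("%d,%d,%.2f\n", size, count,
                   run_burst(sock, &dst, size, count) / 1e6);

    close(sock);
    return 0;
}
```

Note that this measures how fast sendto() hands data to the kernel, not necessarily what leaves the NIC; comparing it against a receiver-side count of arrived datagrams is what reveals the drops the answer refers to.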

Bandwidth measurement by minimum data transfer

I intend to write an application where I will need to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since we need to run the application at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly, so the measured RTT may be off. Using ICMP alone may not always work either, as many servers disable it.
My main application runs over a TCP connection so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate if anybody could suggest a simple technique (reliable formula) to measure the bandwidth for a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% utilized.
If you instead just wish to measure the bandwidth your application is using, it is much easier. E.g. keep a record of the amount of data you have transferred in the last second divided into 10ms intervals.
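A minimal sketch of that bookkeeping, assuming 100 buckets of 10 ms each (the function names and sizes are illustrative):

```c
/* Sketch: track bytes transferred in the last second using 100 buckets
 * of 10 ms each. Call account_bytes() on every send/receive. */
#include <stdio.h>
#include <time.h>

#define BUCKETS   100
#define BUCKET_MS 10

static long bucket_bytes[BUCKETS];
static long bucket_stamp[BUCKETS];   /* which 10 ms slot each bucket holds */

static long now_slot(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (ts.tv_sec * 1000 + ts.tv_nsec / 1000000) / BUCKET_MS;
}

/* call this every time the application sends (or receives) data */
void account_bytes(long nbytes)
{
    long slot = now_slot();
    int  i = slot % BUCKETS;
    if (bucket_stamp[i] != slot) {   /* bucket is stale, reuse it */
        bucket_stamp[i] = slot;
        bucket_bytes[i] = 0;
    }
    bucket_bytes[i] += nbytes;
}

/* bytes moved during the last full second */
long bytes_last_second(void)
{
    long slot = now_slot(), total = 0;
    for (int i = 0; i < BUCKETS; i++)
        if (slot - bucket_stamp[i] < BUCKETS)
            total += bucket_bytes[i];
    return total;
}

int main(void)
{
    account_bytes(1460);             /* e.g. one full TCP segment */
    account_bytes(1460);
    printf("throughput ~ %.2f kB/s\n", bytes_last_second() / 1000.0);
    return 0;
}
```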
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth; note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth over TCP is to send TCP packets and measure the transferred throughput, but that floods the network. None of the non-flooding algorithms are reliable on high-speed networks. In addition, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I wouldn't be surprised if the result didn't make sense.