I intend to write an application in which I will need to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online, and understood from a few existing applications, is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between the arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since the application needs to run at both ends).
One of the suggestions was to calculate the RTT of a packet by having the receiver echo it back to the sender, and to calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly, so the measured RTT may be too high. Using ICMP alone may not always work either, as many servers disable it.
My main application runs over a TCP connection, so I am interested in using that TCP connection to measure the actual bandwidth achieved over a particular period of time. I would really appreciate it if anybody could suggest a simple technique (a reliable formula) to measure the bandwidth of a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application is using, it is much easier. E.g. keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
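A minimal sketch of that bookkeeping in C, assuming you call a (hypothetical) tp_record() every time the application hands data to the socket; the 10 ms buckets and one-second window follow the suggestion above, everything else is an arbitrary choice:

```c
#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* Sketch: track bytes handed to the socket in 10 ms buckets covering the
 * last second. Call tp_record() after every successful send and
 * tp_rate_bps() whenever you want the current throughput. */

#define BUCKETS   100              /* 100 buckets x 10 ms = 1 second of history */
#define BUCKET_MS 10

static uint64_t bucket_bytes[BUCKETS];
static int64_t  bucket_slot[BUCKETS];   /* which 10 ms slot each bucket currently holds */

static int64_t now_slot(void)           /* current time in units of 10 ms */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * (1000 / BUCKET_MS)
         + ts.tv_nsec / (BUCKET_MS * 1000000L);
}

void tp_record(size_t bytes)            /* call once per successful send */
{
    int64_t slot = now_slot();
    int idx = (int)(slot % BUCKETS);
    if (bucket_slot[idx] != slot) {     /* bucket holds stale data, reuse it */
        bucket_slot[idx] = slot;
        bucket_bytes[idx] = 0;
    }
    bucket_bytes[idx] += bytes;
}

double tp_rate_bps(void)                /* bits per second over the last second */
{
    int64_t slot = now_slot();
    uint64_t total = 0;
    for (int i = 0; i < BUCKETS; i++)
        if (slot - bucket_slot[i] < BUCKETS)    /* bucket falls inside the window */
            total += bucket_bytes[i];
    return (double)total * 8.0;
}
```

Calling tp_rate_bps() at any point then gives the bandwidth your application actually used over the last second.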
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth. Note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth using TCP is to send TCP traffic and measure the transferred bandwidth. This will flood the network. None of the non-flooding algorithms are reliable on high-speed networks. In addition, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I wouldn't be surprised if the result made no sense.
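A rough sketch of that flooding-style measurement, assuming sock is an already connected TCP socket whose peer simply reads and discards the data (the function name and the 10 MB transfer size are my own choices):

```c
#include <string.h>
#include <sys/socket.h>
#include <time.h>

/* Sketch: measure achieved TCP throughput by timing a bulk transfer.
 * 'sock' must be a connected TCP socket; the peer only has to read and
 * discard the data. Returns bits per second, or -1.0 on error. */
double measure_tcp_bps(int sock)
{
    static char buf[64 * 1024];
    const long long total = 10LL * 1024 * 1024;   /* 10 MB, arbitrary */
    long long sent = 0;
    struct timespec t0, t1;

    memset(buf, 'x', sizeof(buf));
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (sent < total) {
        ssize_t n = send(sock, buf, sizeof(buf), 0);
        if (n <= 0)
            return -1.0;               /* error or connection closed */
        sent += n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return sent * 8.0 / secs;
}
```

Keep in mind that send() returning only means the data reached the kernel's send buffer, so for short transfers this overestimates the rate; making the transfer large compared to the socket buffer reduces that error.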
I am new to networking. While doing an experiment on the File Transfer Protocol (over a wired connection), I have to calculate the time taken to transfer one file from source to destination.
To calculate the file transfer time, I require the file size as well as the link speed.
Can anyone please explain what this link speed is and how to calculate it?
Is it the same as the PHY rate?
Do PHY rates exist for wired connections, or only for wireless connections?
Also, please explain the difference between PHY rate, link speed and throughput.
Thanks in advance.
You will need to consider the whole protocol stack for the exercise:
FTP
TCP
IP
Ethernet
PHY
Each of these layers reduces the raw PHY rate.
On the Ethernet and IP layers, it's quite simple. Each frame in these protocols has a maximum size (the MTU), and a fixed part of it is taken up by the frame's headers.
After subtracting the header overhead, you have the throughput available at the IP layer.
For TCP, we can ignore the data overhead for now, as the main factor is the additional round trips it adds. Let's only deal with the handshake and ignore the other details. For the SYN-ACK-ACK sequence, we have to account for twice the one-way delay before the connection is established from the client's side.
For FTP, let's also assume the simplest case: anonymous login, active transfer, no encoding. That adds one more round trip before the actual data transfer starts.
Why did we choose to ignore the data overhead of the FTP and TCP protocols? Because at all modern link speeds it is completely dwarfed by the delays they add.
So in total, your transfer time is now roughly

transfer time = file size / (PHY rate * Ethernet efficiency * IP efficiency) + 4 * delay

and your effective throughput is file size / transfer time.
Choosing a different transfer encoding in FTP would add another factor to the first term. Accounting for TCP window scaling, retransmissions, login via FTP, etc. would add more round trips.
There could also be additional protocols in that stack, introducing further overhead. E.g. network tunnels.
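As a rough worked example of the formula above (all numbers are hypothetical: a 100 Mbit/s PHY, about 97% header efficiency at each of the Ethernet and IP/TCP layers, a 10 MB file and a 20 ms one-way delay):

```c
#include <stdio.h>

/* Hypothetical numbers, purely to illustrate the formula above. */
int main(void)
{
    double phy_rate  = 100e6 / 8;   /* 100 Mbit/s PHY, expressed in bytes/s   */
    double eth_eff   = 0.97;        /* assumed Ethernet header efficiency     */
    double ip_eff    = 0.97;        /* assumed IP/TCP header efficiency       */
    double file_size = 10e6;        /* 10 MB file, in bytes                   */
    double delay     = 0.020;       /* 20 ms one-way delay                    */

    double xfer_time  = file_size / (phy_rate * eth_eff * ip_eff) + 4 * delay;
    double throughput = file_size / xfer_time;

    printf("transfer time: %.3f s\n", xfer_time);
    printf("effective throughput: %.2f Mbit/s\n", throughput * 8 / 1e6);
    return 0;
}
```

With these numbers the handshake delays cost comparatively little; for a small file or a long delay, the 4 * delay term dominates instead.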
I have an idea for a client-server application. The client handles only input, sending it to the server. The server handles the input and the logic, and then sends an image of the program's output to the client. The client draws the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question: how reliable is the UDP protocol? How many pixels would arrive corrupted? Say, 10% on average?
EDIT2: For example, I have a 320x200, 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to get from the server to the client, if my ping is X, my upload speed is Y Mbps and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
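A quick sanity check of that last figure, assuming a near-antipodal great-circle distance of roughly 20,000 km and propagation at the vacuum speed of light (real fibre routes are longer and light in fibre travels about a third slower, so real numbers are worse):

```c
#include <stdio.h>

int main(void)
{
    double distance_km = 20000.0;    /* assumed near-antipodal great-circle distance */
    double c_km_per_s  = 299792.0;   /* speed of light in vacuum                     */

    double rtt = 2.0 * distance_km / c_km_per_s;   /* best-case round-trip time */
    printf("best-case RTT: %.0f ms -> at most %.1f round trips per second\n",
           rtt * 1000.0, 1.0 / rtt);
    return 0;
}
```

This gives a best-case RTT of about 133 ms, i.e. roughly 7.5 round trips per second before any queuing, routing or serialization delay is added.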
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size) / ((1 - packet overhead) * (minimum bandwidth))
Packet overhead is only a few percent, so you may be able to ignore that factor. Minimum bandwidth is the minimum of the server's upload bandwidth and the client's download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow time for the input to be sent from the client to the server, which adds at least another (ping time)/2.
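Plugging the numbers from EDIT2 into that formula, as a sketch; X, Y and Z are placeholders in the question, so the 50 ms ping, 10 Mbps upload, 20 Mbps download and 5% overhead used in main() are assumptions purely for illustration:

```c
#include <stdio.h>

/* Time for one image to get from server to client, per the formula above.
 * ping_s: round-trip time in seconds; bw_bps: min(server upload, client
 * download) in bits/s; overhead: fraction of each packet spent on headers. */
double image_time_s(double image_bits, double ping_s, double bw_bps, double overhead)
{
    return ping_s / 2.0 + image_bits / ((1.0 - overhead) * bw_bps);
}

int main(void)
{
    double image_bits = 320.0 * 200.0 * 32.0;   /* ~2 Mbit, uncompressed          */
    double ping_s     = 0.050;                  /* X = 50 ms ping (assumed)       */
    double up_bps     = 10e6;                   /* Y = 10 Mbps upload (assumed)   */
    double down_bps   = 20e6;                   /* Z = 20 Mbps download (assumed) */
    double bw         = up_bps < down_bps ? up_bps : down_bps;

    printf("one image: %.1f ms\n", 1000.0 * image_time_s(image_bits, ping_s, bw, 0.05));
    return 0;
}
```

With those example numbers an uncompressed frame takes roughly a quarter of a second, which is why compression matters if you want 5 frames per second.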
I'm experimenting on an LTE connection to check the maximum bandwidth that can be achieved in the uplink. While running iperf sessions, I observed that I'm not able to go beyond 100 Kbps in the uplink when the UDP packet size is set to 1400 bytes. However, when I increased the packet size to 50000, I was able to achieve 2 Mbps on the same link.
Can someone explain why this performance difference is observed? When I tried this on a wired channel, I was able to achieve 10 Mbps with the UDP packet size set to 1400.
What could be the reason for this?
Will using TCP/IP instead of UDP increase the data throughput?
It probably matters where the fragmentation is done: in the application or in the IP stack. Your observations show that the IP stack is more efficient.
TCP will be slower. TCP's built-in congestion control will not allow you to send new packets until some of the already sent ones have been ACKed. That adds the round-trip time to the performance considerations.
UDP has no such restrictions. It can (ab)use the network to its full potential.
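One way to see the effect yourself is to push the same amount of data either as many 1400-byte datagrams or as a few large ones (which the IP stack then fragments) and compare the achieved rates. This is only a sketch: sock is assumed to be a connected UDP socket, udp_send_mbps is a made-up helper, and it measures only how fast the sender can hand data to the stack; what actually arrives has to be counted at the receiver.

```c
#include <string.h>
#include <sys/socket.h>
#include <time.h>

/* Sketch: send 'total' bytes as datagrams of 'dgram_size' bytes and return
 * the achieved sending rate in Mbit/s. 'sock' is a connected UDP socket;
 * dgram_size must not exceed 65507 (the maximum UDP payload). */
double udp_send_mbps(int sock, size_t dgram_size, long long total)
{
    static char buf[65507];
    long long sent = 0;
    struct timespec t0, t1;

    memset(buf, 'x', dgram_size);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (sent < total) {
        if (send(sock, buf, dgram_size, 0) < 0)
            return -1.0;
        sent += (long long)dgram_size;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return sent * 8.0 / secs / 1e6;
}

/* e.g. compare udp_send_mbps(sock, 1400, 10 * 1000 * 1000)
 *      with    udp_send_mbps(sock, 50000, 10 * 1000 * 1000) */
```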
I have a small network with a client and a server, and I'm testing the frame rate while changing the size of the packets. In particular, I have an image; by changing a threshold, I extract keypoints and descriptors and then send a fixed number of packets (with different sizes for different thresholds). The problem appears when the UDP packets are smaller than the MTU: the reception rate decreases and the frame rate tends to become constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold, with the threshold stepping by 10 from 40 to 170.
I can't post the image, so this is the link.
Thanks for the responses.
I don't think anyone will be interested in this answer, but we came to the conclusion that the problem lies in the WiFi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data the time stays constant and the rate decreases.
I have two questions about the actual sending speed using sendto() with C socket programming.
I did a little socket programming and I'm sending UDP datagrams back to back, with no spacing (pausing) between each sendto() call in a for loop. Is it reasonable to use clock_gettime() to get the elapsed time and calculate the actual sending rate? What actually influences the sending speed: is it the CPU's frequency, or the network interface I'm using? My understanding is that it should be the slower of the two. And with clock_gettime(), can I get a reasonably good estimate of this sending speed? Say that we get this sending speed and denote it by S.
Suppose I'm sending the UDP datagrams from a PC through a 100 Mbps Ethernet interface to a router. What is the actual arrival rate at the router? If S is greater than 100 Mbps, then the arrival rate will be around 100 Mbps, right? And if S is less than 100 Mbps, then the arrival rate should be S, right? Or should it still be 100 Mbps? I'm a little confused.
The reason I'm doing this is that I want to find the maximum burst size of UDP datagrams I can send in a row to the router (given a certain bandwidth limit on the outgoing link) without dropping any datagrams. Any idea how to run some tests to determine this?
A million things affect the speed and the number of dropped packets. I recommend you write a C program that varies the sending speed, burst length, etc., measures the speed and dropped packets, and outputs the results to something you can graph, such as a CSV file.
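A starting point for such a program, as a sketch under some assumptions: sock is an already connected UDP socket, burst_test is a made-up name, the datagram size and burst lengths are arbitrary, and detecting drops still requires the receiver to count sequence numbers carried in the payload.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define DGRAM_SIZE 1400   /* payload bytes per datagram (arbitrary choice) */

/* Sketch: send bursts of increasing length back to back, time each burst
 * with clock_gettime(), and print CSV rows: burst_len,elapsed_us,send_mbps.
 * Dropped datagrams have to be counted on the receiving side, e.g. by
 * putting a sequence number at the start of each payload. */
void burst_test(int sock)
{
    static char buf[DGRAM_SIZE];
    memset(buf, 'x', sizeof(buf));

    printf("burst_len,elapsed_us,send_mbps\n");
    for (int burst = 10; burst <= 1000; burst += 10) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < burst; i++)
            send(sock, buf, sizeof(buf), 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us   = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        double mbps = burst * sizeof(buf) * 8.0 / us;   /* bits per microsecond == Mbit/s */
        printf("%d,%.0f,%.2f\n", burst, us, mbps);

        usleep(100000);   /* 100 ms pause so consecutive bursts don't overlap */
    }
}
```

Graphing the send rate against the burst length, alongside the receiver's loss counts, shows where datagrams start being dropped for a given link limit.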