I have an idea for a client-server application. The client handles only input and sends it to the server. The server processes the input, runs the logic, and then sends an image of the program back to the client. The client draws the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question - how reliable is the UDP protocol? How many pixels would arrive corrupted - say, 10% on average?
EDIT2: For example, I have a 320x200 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server at the client, if my ping is X, my upload speed is Y Mbps and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
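As a rough sanity check on that last figure, here is a back-of-the-envelope calculation; the ~19,500 km distance is an assumed great-circle figure, and a real fibre route would be both longer and slower than vacuum light speed:

```python
# Back-of-the-envelope round-trip limit imposed by the speed of light alone.
# The distance is an assumed great-circle figure; real cable routes are longer.
SPEED_OF_LIGHT_KM_S = 300_000          # vacuum; light in fibre is ~2/3 of this
DISTANCE_KM = 19_500                   # rough China <-> Argentina great circle

one_way_s = DISTANCE_KM / SPEED_OF_LIGHT_KM_S
round_trip_s = 2 * one_way_s
print(f"round trip: {round_trip_s * 1000:.0f} ms "
      f"-> at most {1 / round_trip_s:.1f} round trips per second")
# Prints roughly: round trip: 130 ms -> at most 7.7 round trips per second
```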
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size) / ((1 - packet overhead) × (minimum bandwidth))
Packet overhead is only a few percent, so you might be able to drop that term out. Minimum bandwidth would be the minimum of the server upload bandwidth and the client download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow for time for the input to be sent from the client to the server, which adds another (ping time)/2 at a minimum.
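To make that concrete, here is a small sketch that plugs the 320x200 32-bit image from the question into the formula above; the ping, upload, and download figures are made-up placeholders for X, Y and Z:

```python
# Rough one-way delivery time for one frame, per the formula above.
# Ping / upload / download values are placeholder assumptions for X, Y, Z.
ping_ms = 50                 # round-trip time (X)
upload_mbps = 2.0            # server upload (Y)
download_mbps = 10.0         # client download (Z)
overhead = 0.03              # ~3% packet header overhead

image_bits = 320 * 200 * 32  # 2,048,000 bits, ~2 Mbit uncompressed
min_bandwidth_bps = min(upload_mbps, download_mbps) * 1_000_000

transfer_s = (ping_ms / 1000) / 2 + image_bits / ((1 - overhead) * min_bandwidth_bps)
print(f"one frame takes ~{transfer_s * 1000:.0f} ms "
      f"-> ~{1 / transfer_s:.1f} frames per second at best")
# With these numbers: ~1081 ms per frame (~0.9 fps) -- compression, or a
# smaller image, would be needed to reach 5 updates per second.
```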
I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes when the discovery phase is going on. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during the discovery, in order to accelerate it. Depending on the time needed to listen to those messages the discovery latency and power consumption varies. Suppose that the same hardware is used to transmit them and that there aren't collisions.
I read that beacons can be sent extremely fast (less than 1 ms easily) but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the info about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each with a small amount of extra information so that they can be put back together. In this way, the power used grows linearly as N grows.
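A minimal sketch of that idea, assuming each beacon payload is a fixed size and each fragment carries a tiny (index, total) header so the receiver can reassemble the neighbour table (the payload size and header layout are assumptions, not a standard):

```python
# Minimal fragmentation/reassembly sketch for a neighbour table.
# BEACON_PAYLOAD and the 2-byte header layout are assumptions, not a standard.
import struct

BEACON_PAYLOAD = 20          # bytes of useful data per beacon-sized message
HEADER = struct.Struct("BB") # (fragment index, total fragments)

def fragment(neighbour_table: bytes) -> list[bytes]:
    chunks = [neighbour_table[i:i + BEACON_PAYLOAD]
              for i in range(0, len(neighbour_table), BEACON_PAYLOAD)]
    total = len(chunks)
    return [HEADER.pack(i, total) + chunk for i, chunk in enumerate(chunks)]

def reassemble(fragments: list[bytes]) -> bytes:
    parts, total = {}, None
    for frag in fragments:
        index, total = HEADER.unpack(frag[:HEADER.size])
        parts[index] = frag[HEADER.size:]
    assert total is not None and len(parts) == total, "missing fragments"
    return b"".join(parts[i] for i in range(total))

# N grows linearly with the table size, and so does the transmit/listen energy.
table = bytes(range(256)) * 2          # e.g. ~500 bytes of neighbour info
frags = fragment(table)
assert reassemble(frags) == table
print(f"{len(table)} bytes -> {len(frags)} beacon-sized fragments")
```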
I want to reduce a latency for my players in a multiplayer game.
Would it be beneficial to set up a server on each continent to reduce latency? E.g. the players are in the US but the server is in Europe, so I add one in the US.
How big could the difference be?
Yes, absolutely: the closer your server is to the user, the better the ping, because the travel distance/time is reduced.
Especially between Europe and America, because of the ocean ;)
The difference really depends on your setup, but I would say at least 150 ms.
Cable mileage ( raw fibre on Layer 1 [PHY] ) rules
Recent transatlantic cables ( the 2012+ generation, deploying shorter fibre meanders in a tighter inner tubing ) bring transatlantic latencies down to somewhere under 60 milliseconds, according to Hibernia Atlantic.
One also has to account for lambda-signal amplification and retiming units, which add some additional Layer 1 [PMD] latency overhead along the way.
Yes, but...
ping is a trivial tool for testing RTT ( a packet's round-trip time ), derived by timing how long a packet takes to travel through the network infrastructure there and back.
Thus sending such a ping packet across a few metres of cable typically costs less time than waiting for another such packet to reach a target on the opposite side of the globe and then successfully crawl back again. But, as the wisdom of Heraclitus of Ephesus has it, "You can't step twice into the same river": repeating the ping probes to the same target will yield many very different time delays ( latencies are non-stationary ).
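To see that non-stationarity for yourself without raw ICMP sockets, one rough trick is to time repeated TCP connection setups to the same target and look at the spread; the host and port below are purely illustrative:

```python
# Rough RTT sampling via TCP connect time (no root / raw sockets needed).
# The target host/port are illustrative; any reachable TCP service will do.
import socket, statistics, time

HOST, PORT, SAMPLES = "example.com", 443, 10

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=3):
        pass                               # handshake complete -> one RTT-ish sample
    rtts_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.2)

print(f"min {min(rtts_ms):.1f} ms  avg {statistics.mean(rtts_ms):.1f} ms  "
      f"max {max(rtts_ms):.1f} ms  jitter (stdev) {statistics.pstdev(rtts_ms):.1f} ms")
```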
But there are many additional cardinal issues, besides the geographical distance from A to B, that influence the end-to-end latency ( and the expected smoothness ) of application-layer services.
What else hurts?
Transport network congestion ( if the actual traffic overloads the underlying network capacity, buffering delays and/or packet drops start to occur )
Packet re-transmission(s) ( if a packet gets lost, the opposite side asks for a re-transmission. Packets received in the meantime are recorded, but the receiving process still has to wait: without packet 6 the full message cannot be decoded, so packets 7, 8, 9, 10, ... simply sit there until #6 is requested again by the receiver, re-transmitted by the sender and, hopefully delivered successfully this time, finally fills the gap in the puzzle. That costs a lot more time than a smooth, error-free data flow )
Selective class-of-traffic prioritisation ( if your class of traffic is not prioritised, your packets will be policed to wait in queues until higher priorities leave room for some more lower-priority traffic to fit in )
Packet deliveries are not guaranteed to take the same path over the same network vectors, and individual packets can, in general, be transported over multiple different trajectories ( add various prioritisation policies + various buffer drop-outs + various intermittent congestions and spurious flows, and the resulting latency per se + its timing variance, i.e. the uncertainty of the final delivery time of the next packet, only grows and grows ).
Last but not least, there are server-side processing bottlenecks. Fine-tuning a server to avoid any such adverse effects ( performance bottlenecks and, even worse, any blocking-state episodes ) belongs to professional controlled-latency infrastructure design & maintenance.
The Devil comes next!
You might have already noticed that, besides the static latency scale ( demonstrated by ping ), realistic gaming is even more adversely affected by latency jitter ... as the in-game context magically UFO-es forwards and back in the time domain ... which causes planes jumping unrealistically right in front of your aiming cross, "shivering" characters, deadly enemy fire that inflicts damage without the attacker ever becoming visible, and similar disturbing artefacts.
Server-colocation proximity per se will help with the former, but will leave you to fight the latter on your own.
I want to know how one can calculate bandwidth requirements based upon flows, and vice versa.
Meaning, if I had to achieve a total of 50,000 netflows, what is the bandwidth requirement to produce this number? Is there a formula for this? I'm using this to size up a flow analyzer appliance. If its license says it supports 50,000 flows, what does that mean? How much more bandwidth could I add before I would lose the license coverage?
Most applications and appliances list flow volume per second, so you are asking what the bandwidth requirement is to transport 50k netflow updates per second:
For NetFlow v5 each record is 48 bytes, with each packet carrying 20 or so records plus 24 bytes of overhead per packet. This means you'd use about 20 Mbps to carry the flow packets. If you used NetFlow v9, which uses a template-based format, it might be a bit more, or less, depending on what is included in the template.
However, if you are asking how much bandwidth you can monitor with 50k NetFlow updates per second, the answer becomes more complex. In our experience monitoring regular user networks (using Google, Facebook, etc.), an average flow update encompasses roughly 25 KB of transferred data, meaning 50,000 flow updates per second would equate to monitoring 10 Gbps of traffic. Keep in mind that these are very rough estimates.
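As a rough sketch of where those two numbers come from (using the per-record and per-flow figures quoted above, which are estimates rather than exact values for every deployment):

```python
# Back-of-the-envelope NetFlow sizing, using the rough figures quoted above.
FLOWS_PER_SECOND = 50_000

# 1) Bandwidth needed to *transport* the NetFlow v5 export packets themselves.
RECORD_BYTES, RECORDS_PER_PACKET, PACKET_OVERHEAD_BYTES = 48, 20, 24
packets_per_second = FLOWS_PER_SECOND / RECORDS_PER_PACKET
export_bps = (FLOWS_PER_SECOND * RECORD_BYTES
              + packets_per_second * PACKET_OVERHEAD_BYTES) * 8
print(f"export traffic: ~{export_bps / 1e6:.0f} Mbps")        # ~20 Mbps

# 2) Bandwidth you can *monitor* if an average flow covers ~25 KB of user data.
AVG_FLOW_BYTES = 25_000
monitored_bps = FLOWS_PER_SECOND * AVG_FLOW_BYTES * 8
print(f"monitored traffic: ~{monitored_bps / 1e9:.0f} Gbps")  # ~10 Gbps
```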
This answer may vary, however, if you are monitoring a server-centric network in a datacenter, where each flow may be much larger or smaller depending on the content served. The bigger each individual traffic session is, the more bandwidth you can monitor with 50,000 flow updates per second.
Finally, if you enable sampling in your NetFlow exporters you will be able to monitor much more bandwidth. Sampling will only use 1 in every N packets, thus skipping many flows. This lowers the total number of flow updates per second needed to monitor high-volume networks.
Is there a way to get the current network speed, or to tell whether the device is on EDGE/3G/GPRS?
I can use Reachability to distinguish WiFi from WWAN, but that's not enough for my application.
If you really need to know, you will have to test for it.
Set up a connection to a known server that has low latency. In a separate thread, mark the time (Time A) and send a packet (make sure it's one packet). Wait on that thread. Mark the time of the first packet (Time B). Read the data until finished. Mark the time of the last packet (Time C). Make sure the server response spans at least three packets.
Time B - Time A is a very rough estimate of latency.
Time C - Time B is a very rough estimate of bandwidth.
Because you want to be as accurate as possible, use the lowest level of network access you have available (I believe on the iPhone this is sockets). Use random data blocks of fixed lengths as both your request and response. A few bytes for the request should suffice, but the response needs to be well constructed. Make sure the server sends a small first packet, then a sequence of packets that is large enough to fill three or more.
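Here is a rough desktop sketch of that timing logic in plain sockets; it is conceptual rather than iOS code, and the host, port and payload sizes are hypothetical, assuming a server that answers with a small first chunk followed by a large multi-packet response:

```python
# Conceptual sketch of the Time A / B / C measurement described above.
# HOST, PORT and payload sizes are hypothetical; the server is assumed to
# reply with a small first chunk followed by a large, multi-packet response.
import socket, time

HOST, PORT = "speedtest.example.com", 9000
REQUEST = b"x" * 32                         # a few bytes suffice for the request

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    time_a = time.perf_counter()
    sock.sendall(REQUEST)                   # Time A: request goes out

    first = sock.recv(4096)                 # Time B: first response packet
    time_b = time.perf_counter()

    rest = 0
    while chunk := sock.recv(65536):        # read until the server closes
        rest += len(chunk)
    time_c = time.perf_counter()            # Time C: last packet seen

latency_s = time_b - time_a                       # very rough latency estimate
bandwidth_bps = rest * 8 / (time_c - time_b)      # very rough bandwidth estimate
print(f"latency ~{latency_s * 1000:.1f} ms, bandwidth ~{bandwidth_bps / 1e6:.2f} Mbps")
```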
At that point you will need to test. Test, test, test, and test. Test on the different network types, test with different amounts of network traffic, test with switching networks, and test anything else you can think of.
I intend to write an application where I will be needing to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to passively measure the bandwidth (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since we need to run the application at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be as the receiver may not always echo back the packet on time to get the correct RTT. Use of ICMP alone may not always work as many servers disable it.
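For a sense of scale, with an assumed 64 KB receive buffer and a 50 ms RTT that bound works out to roughly 10 Mbps:

```python
# Worked example of the Bandwidth <= (Receive Buffer size)/RTT bound.
# The 64 KB buffer and 50 ms RTT are assumed values, not measurements.
recv_buffer_bytes = 64 * 1024
rtt_s = 0.050
max_bps = recv_buffer_bytes * 8 / rtt_s
print(f"upper bound: ~{max_bps / 1e6:.1f} Mbps")   # ~10.5 Mbps
```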
My main application runs over a TCP connection so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate if anybody could suggest a simple technique (reliable formula) to measure the bandwidth for a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application is using, it is much easier. E.g. keep a record of the amount of data you have transferred in the last second divided into 10ms intervals.
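A minimal sketch of that bookkeeping, assuming you can call a small hook from wherever your application sends or receives data (the class and bucket sizes below are just one way to do it):

```python
# Sliding-window throughput meter: bytes moved in the last second,
# bucketed into 10 ms intervals. Purely passive -- it only observes
# the application's own transfers.
import time
from collections import deque

BUCKET_S = 0.010                 # 10 ms buckets
WINDOW_BUCKETS = 100             # 100 * 10 ms = 1 second window

class ThroughputMeter:
    def __init__(self):
        self.buckets = deque([0] * WINDOW_BUCKETS, maxlen=WINDOW_BUCKETS)
        self.current_bucket = int(time.monotonic() / BUCKET_S)

    def _roll(self):
        now_bucket = int(time.monotonic() / BUCKET_S)
        for _ in range(min(now_bucket - self.current_bucket, WINDOW_BUCKETS)):
            self.buckets.append(0)           # empty buckets for idle intervals
        self.current_bucket = now_bucket

    def record(self, nbytes: int):
        """Call this from your send/recv path with the number of bytes moved."""
        self._roll()
        self.buckets[-1] += nbytes

    def bps(self) -> float:
        """Throughput over the last one-second window, in bits per second."""
        self._roll()
        return sum(self.buckets) * 8 / (WINDOW_BUCKETS * BUCKET_S)

# Usage: meter = ThroughputMeter(); meter.record(len(chunk)) after each recv();
# meter.bps() then gives the bandwidth your application actually used.
```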
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth. Note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth using TCP is by sending TCP packets and measuring the transferred bandwidth. It will flood the network. None of the non-flooding algorithms are reliable on high-speed networks. Plus, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I'm not surprised if the result doesn't make sense.