Does a server's location affect players' ping?

I want to reduce latency for my players in a multiplayer game.
Would it be beneficial to run a server on each continent to reduce latency? E.g. the players are in the US but the server is in Europe, so I add one in the US.
How big could the difference be?

Yes, absolutely: the closer your server is to the user, the better the ping, because the travel distance / time is reduced.
Especially between Europe and America, because of the sea ;)
The difference really depends on your setup, but at least 150 ms I think.

Cable . . . ( raw-fiber on Layer 1 [PHY] ) mileage rulez
Recent transatlantic cables (the 2012+ generation, deploying shorter fibre meanders in a tighter inner tubing) achieve transatlantic latencies somewhere under 60 milliseconds, according to Hibernia Atlantic.
One also has to account for lambda-signal amplification and retiming units, which add some additional Layer 1 [PMD] latency overhead along the way.
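As a rough sanity check on that figure (my own back-of-the-envelope numbers, assuming roughly 6,000 km of cable and light travelling at about c/1.47 in fibre), the propagation-delay floor works out like this:

    # Back-of-the-envelope propagation-delay floor for a transatlantic path.
    # Assumed figures: ~6,000 km of cable, light at roughly c/1.47 in fibre.
    CABLE_KM = 6_000                         # assumed cable length, New York <-> London class route
    LIGHT_IN_FIBRE_KM_S = 300_000 / 1.47     # ~204,000 km/s (refractive index ~1.47)

    one_way_ms = CABLE_KM / LIGHT_IN_FIBRE_KM_S * 1000
    rtt_ms = 2 * one_way_ms
    print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{rtt_ms:.0f} ms at the physical layer alone")
    # -> one-way ~29 ms, round trip ~59 ms, consistent with the "under 60 ms" figure above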
Yes, but...
ping is a trivial tool to measure RTT (a packet's round-trip time), derived from timing how long the packet takes to travel through the network infrastructure there and back.
Thus sending such a ping packet across a few metres of cable will typically cost less time than waiting for another such packet to reach a target on the opposite side of the globe and then successfully crawl back again, but ... in the wisdom of Heraclitus of Ephesus, "You can't step twice into the same river": repeating the ping probes to the same target will yield many, principally very different, time delays (latencies are non-stationary).
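A minimal sketch of that non-stationarity, using TCP connect time as an unprivileged stand-in for an ICMP ping; the target host and port are placeholders:

    # Repeated RTT probes against one target: each sample differs (non-stationary latency).
    # Uses TCP connect time as an unprivileged stand-in for an ICMP ping.
    import socket, statistics, time

    HOST, PORT, SAMPLES = "example.com", 443, 10     # placeholder target

    rtts = []
    for _ in range(SAMPLES):
        t0 = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=2):
            pass                                     # handshake completes after roughly one round trip
        rtts.append((time.perf_counter() - t0) * 1000)
        time.sleep(0.2)

    print(f"min {min(rtts):.1f} ms  mean {statistics.mean(rtts):.1f} ms  "
          f"max {max(rtts):.1f} ms  stdev {statistics.pstdev(rtts):.1f} ms")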
But there are many additional cardinal issues, besides the geographical distance from A to B, that influence the end-to-end latency (and the expected smoothness) of application-layer services.
What else gets in the way?
Transport network congestion (if the actual traffic overloads the underlying network capacity, buffering delays and/or packet drops start to occur)
Packet re-transmission(s) (if a packet gets lost, the opposite side asks for a re-transmission; packets received in the meantime are recorded, but the missing packet arrives only after a remarkably longer time, and the receiving process has to keep waiting for it: without packet #6 one cannot decode the full message, so packets #7, #8, #9, #10, ... simply have to wait until #6 has been requested by the receiver, re-transmitted by the sender and delivered again, hopefully successfully this time, to fill the puzzle gap. That costs a lot more time than a smooth, error-free data flow; see the toy simulation after this list.)
Selective class-of-traffic prioritisation (if your class of traffic is not prioritised, your packets will be policed to wait in queues until the higher priorities allow some more lower-priority traffic to fit in)
Packet deliveries are not guaranteed to take the same path over the same network vectors, and individual packets can in general be transported over multiple different trajectories (add various prioritisation policies + various buffering drop-outs + various intermittent congestions and spurious flows ... and both the resulting latency per se and its timing variance, i.e. the uncertainty about the final delivery time of the next packet, only grow up and up).
Last but not least, the server-side processing bottlenecks. Fine-tuning a server to avoid any such adverse effects (performance bottlenecks and, even worse, any blocking-state episodes) belongs to professional controlled-latency infrastructure design & maintenance.
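The re-transmission point is easy to see in a toy simulation with made-up timings: one lost packet (#6) delays the in-order delivery of every packet queued behind it:

    # Toy illustration of head-of-line blocking: packet #6 is lost once and must be
    # re-transmitted, so #6..#10 are all delivered late even though #7..#10 arrived on time.
    # All numbers are invented for illustration.
    SEND_INTERVAL_MS = 20     # packets leave the sender every 20 ms
    ONE_WAY_MS = 40           # assumed one-way network delay
    RTT_MS = 2 * ONE_WAY_MS   # time to notice the loss and get a re-transmission back

    arrival = {}              # packet number -> time its bytes reach the receiver
    for n in range(1, 11):
        sent_at = n * SEND_INTERVAL_MS
        if n == 6:            # first copy lost; re-sent copy lands one RTT after the original would have
            arrival[n] = sent_at + ONE_WAY_MS + RTT_MS
        else:
            arrival[n] = sent_at + ONE_WAY_MS

    # In-order delivery to the application: nothing behind a gap is released until the gap fills.
    released_at, latest = {}, 0.0
    for n in range(1, 11):
        latest = max(latest, arrival[n])
        released_at[n] = latest

    for n in range(1, 11):
        print(f"packet {n:2d}: arrived {arrival[n]:5.0f} ms, delivered {released_at[n]:5.0f} ms")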
The Devil comes next!
You might have already noticed that, besides the static latency scale (demonstrated by ping), realistic gaming is even more adversely affected by latency jitter ... the in-game context magically UFO-es forwards and back in the time domain ... which causes unrealistically jumping planes right in front of your aiming cross, "shivering" characters, deadly enemy fire that causes damage without the attacker's body ever being visible yet, and similar disturbing artefacts.
Server-colocation proximity per se will help with the former, but will leave you alone to fight the latter.
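As a minimal sketch of what "latency jitter" means in numbers (loosely following the RFC 3550 smoothing idea, with invented sample values), two links with a similar average latency can feel completely different:

    # Running jitter estimate from a stream of latency samples (ms), smoothed the way
    # RFC 3550 smooths inter-arrival jitter: J += (|difference| - J) / 16.
    # The sample values below are invented purely for illustration.
    def running_jitter(latencies_ms):
        jitter, prev = 0.0, None
        for sample in latencies_ms:
            if prev is not None:
                jitter += (abs(sample - prev) - jitter) / 16.0
            prev = sample
        return jitter

    steady = [72, 74, 73, 75, 74, 73, 74, 72]        # low jitter: feels smooth
    spiky  = [72, 140, 70, 190, 75, 60, 160, 71]     # similar ballpark mean, high jitter
    print(f"steady link jitter ~{running_jitter(steady):.1f} ms")
    print(f"spiky link  jitter ~{running_jitter(spiky):.1f} ms")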

Related

Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes while the discovery phase is going on. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during the discovery, in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I read that beacons can be sent extremely fast (less than 1 ms easily), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the info about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each carrying a small amount of extra information so they can be put back together. In this way, the power used grows linearly as N grows.
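As a sketch of that idea (the 3-byte fragment header below is a made-up layout, not taken from any particular discovery protocol):

    # Split a neighbor table into beacon-sized fragments and reassemble them.
    # Header: (table id, fragment index, fragment count), one byte each - an assumed layout.
    import struct

    BEACON_PAYLOAD = 24          # assumed usable bytes per beacon after the radio's own header

    def fragment(table_id: int, table_bytes: bytes):
        chunks = [table_bytes[i:i + BEACON_PAYLOAD]
                  for i in range(0, len(table_bytes), BEACON_PAYLOAD)] or [b""]
        return [struct.pack("BBB", table_id, idx, len(chunks)) + chunk
                for idx, chunk in enumerate(chunks)]

    def reassemble(frames):
        frames = sorted(frames, key=lambda f: f[1])      # order by fragment index
        return b"".join(f[3:] for f in frames)           # strip the 3-byte header

    neighbor_table = bytes(range(100))                   # e.g. 100 one-byte neighbor ids
    frames = fragment(table_id=7, table_bytes=neighbor_table)
    assert reassemble(frames) == neighbor_table
    print(f"{len(neighbor_table)} bytes -> {len(frames)} beacon-sized frames "
          f"(airtime and TX energy grow roughly linearly with that count)")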

Low average call duration

My Mera VoIP Transit Softswitch (MVTS) shows a very low ACD (about 0.3 min) for several directions (route groups) at peak hours. Looking for factors causing low ACD, I found this topic: http://support.sippysoft.com/support/discussions/topics/3000137333, but all the parameters mentioned there seem to be normal. There is another strange thing as well: as seen in this graph, there are about 10 lines occupied for each real call. I guess these problems are somehow related, though I am not sure yet.
What can cause such behavior?
You should check the following:
SIP trunk quality
Your trunk or service providers might have quality issues towards these directions. You can easily test this by sending the same traffic at the same time to some other carriers.
Low call quality under high load
You can easily verify this by just making a call during peak time and hearing it yourself.
Some other factor causing call drops in similar circumstances.
Causes might include the maximum number of supported channels, billing cut-offs or others.
You should gather statistics about disconnect codes and compare them to off-peak times or to other directions.
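A rough sketch of that comparison, assuming CDRs exported to CSV; the column names, file name and peak window below are assumptions, not MVTS's real schema:

    # Tally disconnect codes per hour from exported CDRs so peak hours can be compared
    # against off-peak. The CSV columns are assumptions, not MVTS's real export format.
    import csv
    from collections import Counter

    PEAK_HOURS = range(18, 22)                     # assumed peak window, adjust to your traffic

    peak, off_peak = Counter(), Counter()
    with open("cdr_export.csv", newline="") as f:  # hypothetical CDR export file
        for row in csv.DictReader(f):
            hour = int(row["setup_time"][11:13])   # assumes "YYYY-MM-DD HH:MM:SS" timestamps
            bucket = peak if hour in PEAK_HOURS else off_peak
            bucket[row["disconnect_code"]] += 1

    for code in sorted(set(peak) | set(off_peak)):
        print(f"code {code}: peak {peak[code]:6d}   off-peak {off_peak[code]:6d}")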

Sending images through sockets

I have an idea for a client-server setup. The client handles only input and sends it to the server. The server handles the input and the logic, and then sends the image of the program to the client. The client draws the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question: how reliable is the UDP protocol? How many pixels would arrive corrupted? Say, 10% on average?
EDIT2: For example, I have a 320x200, 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server at the client, if my ping is X, my upload speed is Y Mbps and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size)/((1 - packet overhead) * (minimum bandwidth))
Packet overhead is only a few percent, so you might be able to drop that term. Minimum bandwidth is the minimum of the server's upload bandwidth and the client's download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow time for the input to be sent from the client to the server, which adds another (ping time)/2 at a minimum.
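Plugging the question's 320x200, 32-bit frame into that formula with some invented figures (60 ms ping, 10 Mbit/s upload, 20 Mbit/s download, ~3% packet overhead) gives a feel for the numbers:

    # Worked example of the formula above for the 320x200 32-bit frame from the question.
    # Ping, upload, download and overhead figures are invented just to plug numbers in.
    IMAGE_BITS = 320 * 200 * 32          # ~2.05 Mbit, matching the question's estimate
    PING_MS = 60                         # assumed round-trip time
    UP_MBPS, DOWN_MBPS = 10, 20          # assumed server upload / client download
    OVERHEAD = 0.03                      # assume ~3% of each packet is headers

    min_bw_bps = min(UP_MBPS, DOWN_MBPS) * 1_000_000
    transfer_ms = IMAGE_BITS / ((1 - OVERHEAD) * min_bw_bps) * 1000
    one_frame_ms = PING_MS / 2 + transfer_ms          # server -> client leg only
    print(f"transfer {transfer_ms:.0f} ms, server->client total {one_frame_ms:.0f} ms")
    print(f"round trip incl. input: {PING_MS / 2 + one_frame_ms:.0f} ms "
          f"-> about {1000 / (PING_MS + transfer_ms):.1f} frames/s, before compression")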

Soft hand off in CDMA cellular networks

Hi,
In CDMA cellular networks, when an MS (Mobile Station) needs to change BS (Base Station), i.e. exactly when a hand-off is necessary, I know that it is a soft hand-off (a connection is made with the target BS before leaving the current BS). But since the MS remains connected to more than one BS for a while, I want to know: does the MS use the same CDMA code to communicate with all the BSs, or a different code for each BS?
Thanks in advance
For the benefit of everyone, I have touched upon a few points before coming to the main point.
Soft handoff is also termed "make-before-break" handoff. This technique falls under the category of MAHO (Mobile Assisted Handover). The key theme behind it is having the MS maintain simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL direction, it is achieved by two or more BTSs using different transmission codes (transmitting the same bit stream) on different physical channels on the same frequency, with the CDMA phone simultaneously receiving the signals from these two or more BTSs. In the active set there can be more than one pilot, as there could be three carriers involved in the soft handoff. There is also a rake receiver that performs maximal combining of the received signals.
In the UL direction, the MS operates on a candidate set in which there can be more than one pilot with sufficient signal strength for use, as reported by the MS. Each BTS tags the user's data with a frame reliability indicator that gives the BSC details about the transmission quality. So even though the signals (the MS's code channel) are received by both base stations, they are routed to the BSC together with information about the quality of the received signals, and the BSC examines that quality based on the frame reliability indicator and chooses the best-quality stream, i.e. the best candidate.
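Purely as an illustration of the "maximal combining" idea mentioned above (not a real rake receiver; channel gains and noise levels are invented), weighting two noisy copies of the same bit stream by their channel gains and summing them gives fewer bit errors than either leg alone:

    # Toy maximal-ratio-combining demo: two noisy copies of the same BPSK bit stream
    # (one per BTS leg) are weighted by their channel gains and summed.
    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 10_000)
    symbols = 2 * bits - 1                              # BPSK: 0 -> -1, 1 -> +1

    h = np.array([0.9, 0.5])                            # assumed per-leg channel gains
    noise_std = 0.8
    legs = [g * symbols + rng.normal(0, noise_std, symbols.size) for g in h]

    def bit_errors(decision_metric):
        return np.count_nonzero((decision_metric > 0).astype(int) != bits)

    combined = sum(g * leg for g, leg in zip(h, legs))  # weight each leg by its gain, then sum
    for i, leg in enumerate(legs):
        print(f"leg {i} alone : {bit_errors(leg)} bit errors")
    print(f"MRC combined: {bit_errors(combined)} bit errors out of {bits.size}")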

Bandwidth measurement by minimum data transfer

I intend to write an application in which I will need to calculate the network bandwidth along with the latency and the packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between the arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, runs longer and is not scalable (since we need to run the application at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and to calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be, as the receiver may not always echo the packet back in time to get the correct RTT. Using ICMP alone may not always work, as many servers disable it.
My main application runs over a TCP connection, so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate it if anybody could suggest a simple technique (a reliable formula) to measure the bandwidth of a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application is using, that is much easier: e.g. keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
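A minimal sketch of that bookkeeping, counting the application's own bytes in 10 ms buckets and summing the last second:

    # Count the application's own bytes in 10 ms buckets and report the total over the
    # last second as the bandwidth actually in use.
    import time
    from collections import deque

    class ThroughputMeter:
        BUCKET_S, WINDOW_BUCKETS = 0.010, 100          # 10 ms buckets, 1 s window

        def __init__(self):
            self.buckets = deque([0] * self.WINDOW_BUCKETS, maxlen=self.WINDOW_BUCKETS)
            self.current_bucket = self._bucket_index()

        def _bucket_index(self):
            return int(time.monotonic() / self.BUCKET_S)

        def _roll(self):
            now = self._bucket_index()
            for _ in range(min(now - self.current_bucket, self.WINDOW_BUCKETS)):
                self.buckets.append(0)                 # push empty buckets for elapsed time
            self.current_bucket = now

        def record(self, nbytes):                      # call wherever the app sends/receives
            self._roll()
            self.buckets[-1] += nbytes

        def bits_per_second(self):
            self._roll()
            return sum(self.buckets) * 8               # bytes in the last second -> bit/s

    meter = ThroughputMeter()
    meter.record(125_000)                              # e.g. a 125 kB burst just went out
    print(f"~{meter.bits_per_second() / 1e6:.2f} Mbit/s over the last second")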
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth; note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP bandwidth is influenced by latency.
The easiest way to measure bandwidth using TCP is to send TCP packets and measure the transferred bandwidth, but this floods the network. None of the non-flooding algorithms is reliable on high-speed networks. Moreover, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I'm not surprised if the result doesn't make sense.
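For completeness, a sketch of that flooding approach over a plain TCP socket; the receiver address and the 100 MB test volume are placeholders, and something must be listening on the other machine (e.g. nc -l 5001 > /dev/null):

    # Push a fixed amount of data over a plain TCP socket and divide by the elapsed time.
    import socket, time

    HOST, PORT = "192.0.2.10", 5001        # hypothetical receiver
    TOTAL_BYTES = 100 * 1024 * 1024        # how much to send for the test
    CHUNK = b"\x00" * 65536

    sent, t0 = 0, time.perf_counter()
    with socket.create_connection((HOST, PORT)) as sock:
        while sent < TOTAL_BYTES:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    elapsed = time.perf_counter() - t0     # the last send-buffer's worth may still be in flight,
                                           # so treat the figure as approximate
    print(f"{sent * 8 / elapsed / 1e6:.1f} Mbit/s over {elapsed:.1f} s "
          f"(this deliberately floods the path, as noted above)")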