Let's do a simple thing: we have a cloud, which the client draws, and a server, which sends commands to move the cloud. Assume that Client 1 runs at 60 fps and Client 2 runs at 30 fps, and we want a reasonably smooth cloud transition.
First problem: the server runs at a different fps than the clients, and if it sends a move command every tick, it will spam commands much faster than the clients can draw.
Possible solution 1: the client sends an "I want an update" command after finishing a frame.
Possible solution 2: the server sends move-cloud commands every x ms, but then the cloud will not move smoothly. Can be combined with solution 3.
Possible solution 3: the server sends "start moving the cloud with speed x" and "change cloud direction" instead of "move cloud to x". But the problem again is that the check for changing the cloud's direction at the edge of the screen will trigger faster than the cloud is actually drawn on the client.
Also, Client 2 draws two times slower than Client 1; how do I compensate for this?
How do I sync the server logic with the clients' drawing in a basic way?
Solution 3 sounds like the best one by far, if you can do it. All of your other solutions are much too chatty: they require extremely frequent communication between the client and server, much too frequent unless servers and clients have a very good network connection between them.
If your cloud movements are all simple enough that they can be sent to the clients as vectors, such that the client can move the cloud along one vector for an extended period of time (many frames) before receiving new instructions (a new starting location and vector) from the server, then you should definitely do that. If your cloud movements are not so easily representable as simple vectors, then you can choose a more complex model (e.g. add instructions to transform the vector over time) and send the model's parameters to the clients.
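For instance, a minimal sketch of such an instruction might look like the following (the class and field names are just illustrative, assuming a simple 2D world):

```java
// Illustrative sketch of a "move along this vector" instruction from the server.
public class MoveCommand {
    public final double originX, originY;     // where the cloud was when the command was issued
    public final double velocityX, velocityY; // world units per second
    public final double startTime;            // world time (seconds) at which the origin is valid

    public MoveCommand(double originX, double originY,
                       double velocityX, double velocityY, double startTime) {
        this.originX = originX;
        this.originY = originY;
        this.velocityX = velocityX;
        this.velocityY = velocityY;
        this.startTime = startTime;
    }

    // Position at any world time, with no per-frame messages from the server.
    public double xAt(double worldTime) {
        return originX + velocityX * (worldTime - startTime);
    }

    public double yAt(double worldTime) {
        return originY + velocityY * (worldTime - startTime);
    }
}
```

The server then only needs to send a new command when something changes (for example, when the cloud reaches the edge of the screen and reverses direction).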
If the cloud is part of a larger world and the clients track time in the world, then each of the sets of instructions coming from the server should include a timestamp representing the time when the initial conditions in the model are valid.
As for your question about how to compensate for client 2 drawing two times slower than client 1, you need to make your world clock tick at a consistent rate on both clients. This rate need not have any relationship with the screen refresh rate on either client.
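Putting those two pieces together, a client-side sketch might look like this (again just illustrative; the world clock advances by real elapsed time, so it behaves the same at 30 fps and 60 fps):

```java
// Illustrative sketch: advance the world clock by real elapsed time each frame,
// so a 30 fps client and a 60 fps client compute the same cloud positions.
public class CloudClient {
    private double worldTime;                         // seconds, initially synced from the server
    private long lastFrameNanos = System.nanoTime();
    private MoveCommand current;                      // latest instruction from the server

    public void onServerCommand(MoveCommand cmd) {
        current = cmd;
    }

    // Called once per rendered frame, at whatever rate this client manages.
    public void renderFrame() {
        long now = System.nanoTime();
        worldTime += (now - lastFrameNanos) / 1_000_000_000.0;
        lastFrameNanos = now;

        if (current != null) {
            drawCloud(current.xAt(worldTime), current.yAt(worldTime));
        }
    }

    private void drawCloud(double x, double y) {
        // actual rendering omitted from the sketch
    }
}
```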
I am working on a real-time multiplayer soccer game.
Currently, my game uses an architecture like this:
Every client has a copy of the game state, and the server has one too.
Clients send their input vector (joystick data) to the server. (The local player uses the current input and moves immediately.)
The client waits for the other players' input; once that data arrives, I set the rigidbody speed and direction. Then it moves smoothly.
Things used:
UDP (lower ping)
Tick rate: 32 (increasing the tick rate fixes these issues most of the time, but not everyone's connection is strong, and sending many packets per second causes ping issues)
The problem is:
Sometimes the server and clients get de-synced, and this causes every client to see a different copy of the current game.
What I've tried:
Increasing the tick rate, but this only caused a connection with higher ping plus packet loss
Lerping between two data points; this caused players to appear to move at different speeds
Lerp + jitter buffering; this caused players to always see the game in a past state
If the client and server positions differ by more than a delta-x, use the server's position (normally this can fix the issue, but sometimes the server and client de-sync every 4 ticks, and teleporting the object every 4 ticks causes a very laggy / not smooth visual). A rough sketch of this approach is below.
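Rough sketch of approach 4 (simplified to one axis; the snap threshold and lerp factor are made up):

```java
// Simplified sketch of approach 4: snap to the server position on large errors,
// otherwise nudge toward it. Threshold and factor values are made up.
public class Reconciler {
    private static final double SNAP_THRESHOLD = 1.0; // beyond this, trust the server outright
    private static final double LERP_FACTOR = 0.2;    // otherwise correct gradually

    public double reconcile(double localPos, double serverPos) {
        double error = serverPos - localPos;
        if (Math.abs(error) > SNAP_THRESHOLD) {
            return serverPos;                   // the snap is what looks laggy every 4 ticks
        }
        return localPos + error * LERP_FACTOR;  // small error: smooth correction
    }
}
```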
What is the best method to fix or handle this de-sync?
And why is it happening?
Is it normal for this to happen almost every 4-5 ticks (~9 times a second)?
I'm building a client/server-type subsystem in a control system application using UDP Send/Receive blocks in Simulink. Data x is sent to the server via the UDP Send block and is then processed at the server, which returns output y.
Currently, both the client (a Simulink model) and the server (processing logic in Java) reside on localhost. Therefore, the packet exchanges take essentially near-zero time. I'd like to introduce network delay such that the packet exchanges take a varying amount of time (say, due to changes in bandwidth availability), effectively simulating a scenario where the server node is located in a different geographical location.
Could someone guide me on how to achieve this? Thanks.
As a general (Simulink-independent) solution in a Windows environment, you should have a look at the following tool, which "makes your network condition significantly worse, but in a managed and interactive manner."
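If you would rather keep everything in your own code instead of using an external tool, another option is to route the packets through a small relay that holds each datagram for a random amount of time before forwarding it. This is only a rough sketch; the ports, address, and delay range are made up, and replies from the server would need a similar path back:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Rough sketch of a delaying UDP relay: listen on one port and forward each
// datagram to the real server after a random delay (one direction only).
public class DelayingUdpRelay {
    public static void main(String[] args) throws Exception {
        DatagramSocket in = new DatagramSocket(9000);                // point the client here
        DatagramSocket out = new DatagramSocket();
        InetAddress serverAddr = InetAddress.getByName("127.0.0.1"); // the real server
        int serverPort = 9001;

        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        Random random = new Random();
        byte[] buffer = new byte[2048];

        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            in.receive(packet);
            byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
            long delayMs = 50 + random.nextInt(150);                 // 50-200 ms of simulated latency
            scheduler.schedule(() -> {
                try {
                    out.send(new DatagramPacket(data, data.length, serverAddr, serverPort));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, delayMs, TimeUnit.MILLISECONDS);
        }
    }
}
```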
I'm building an HTML5 / WebSockets based multiplayer canvas game for Facebook, and I've been working on the server code for a few days now. While the game's pretty simple (2D top-down, WSAD controls, and a mouse click fires a projectile toward the cursor x/y), I've never had to do real-time multiplayer before. I've read a few great documents, but I'm hoping I can overview my general understanding of the topic here and someone can validate the approach and/or point out areas for improvement.
Authoritative multiplayer server, client-side prediction and entity interpolation (and questions below)
Client connects to server
Client syncs time to server
Server has two main update loops:
Update the game physics (or game state) on the server at a frequency of 30 per second (tick rate?)
Broadcast the game state to all clients at a frequency of 10 per second (a rough sketch of these two loops follows this list)
Client stores three updates before being allowed to move; this builds up the cache for entity interpolation between update states (old to new, with one redundancy in case of packet loss)
Upon input from the user, the client sends input commands to the server at a frequency of 10 per second - these input commands are time-stamped with the client's time
Client moves the player on screen as a prediction of what the server will return as the final (authoritative) position of the client
Server applies all updates to its physics / state in the previously mentioned update loop
Server sends out time stamped world updates.
Client (if it is behind server time && has updates in the queue) linearly interpolates from the old position to the new.
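Rough sketch of the two server loops from steps 4-5 (rates and method names are just placeholders):

```java
// Placeholder sketch of the two server-side loops: fixed-timestep physics at
// 30 Hz and state broadcasts at 10 Hz, both driven from one main loop.
public class ServerLoops {
    private static final double TICK_RATE = 30.0;      // physics updates per second
    private static final double BROADCAST_RATE = 10.0; // snapshots per second

    private double tickAccumulator = 0;
    private double broadcastAccumulator = 0;
    private long lastNanos = System.nanoTime();

    // Called as often as the server's main loop spins.
    public void pump() {
        long now = System.nanoTime();
        double dt = (now - lastNanos) / 1_000_000_000.0;
        lastNanos = now;

        tickAccumulator += dt;
        broadcastAccumulator += dt;

        // Fixed-timestep physics: always advance in 1/30 s steps.
        while (tickAccumulator >= 1.0 / TICK_RATE) {
            stepPhysics(1.0 / TICK_RATE);
            tickAccumulator -= 1.0 / TICK_RATE;
        }

        // Broadcast less often than we simulate.
        if (broadcastAccumulator >= 1.0 / BROADCAST_RATE) {
            broadcastState();
            broadcastAccumulator = 0;
        }
    }

    private void stepPhysics(double dt) { /* apply queued inputs, integrate movement, etc. */ }
    private void broadcastState() { /* serialize state plus a server timestamp, send to every client */ }
}
```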
Questions
At 1: is it possible to use NTP time and sync between the two?
At 5: time stamped? Is the main purpose here just to time-stamp each packet?
At 7: The input commands that come in will be out of order due to the clients' different latencies. I'm guessing these need to be sorted before being applied? Or is this overkill?
At 9: is the lerp always a fixed amount? 0.5f for example? Should I be doing something smarter?
Lots of questions I know but any help would be appreciated!!
At 1: You're overthinking this a bit; all you really have to do is send the server time to the client and, on that side, increment it in your update loop so you're always tracking time in server time. On every sync you set your own value to the one that came from the server. Be EXTRA careful about this part: validate every speed/time server-side or you will get extremely easy-to-do but incredibly nasty hacks.
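A minimal sketch of that idea, assuming the sync message simply carries the server's current time (names are illustrative):

```java
// Illustrative sketch: keep a local copy of server time and advance it by
// real elapsed time, resetting it whenever a sync message arrives.
public class ServerClock {
    private double serverTime;                 // seconds, in server time
    private long lastNanos = System.nanoTime();

    // Call whenever a sync message with the server's current time arrives.
    public void onSync(double serverTimeFromMessage) {
        serverTime = serverTimeFromMessage;
        lastNanos = System.nanoTime();
    }

    // Call once per client update loop iteration.
    public void tick() {
        long now = System.nanoTime();
        serverTime += (now - lastNanos) / 1_000_000_000.0;
        lastNanos = now;
    }

    public double now() {
        return serverTime;
    }
}
```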
At 5: Timestamping is important when you do this communication via UDP, as the order of the packets is not ensured unless you specifically make it so. Via WebSockets it shouldn't be that big of an issue, but it's still good practice (just make sure to validate those timestamps, or speedhacks ensue).
At 7: It can be overkill; it depends on the type of game. If your clients have a lot of lag, they will by definition send fewer inputs to the server, so make sure you only process those that arrived before the point of processing and queue the rest for the next update.
At 9 : This post from gamedev stackexchange might answer this better than I would, especially the text posted by user ggambett at the bottom.
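For reference, the usual shape of that interpolation looks roughly like this (a sketch with made-up names; note that the lerp amount is derived from the snapshot timestamps, not a fixed constant like 0.5f):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Sketch of entity interpolation: render a fixed delay behind the newest
// snapshot so there are always two states to interpolate between.
public class Interpolator {
    public static class Snapshot {
        final double time; // server time of this state
        final double x, y;
        Snapshot(double time, double x, double y) { this.time = time; this.x = x; this.y = y; }
    }

    private static final double RENDER_DELAY = 0.2; // seconds; roughly two 10 Hz updates
    private final Deque<Snapshot> buffer = new ArrayDeque<>();

    public void onSnapshot(Snapshot s) {
        buffer.addLast(s);
    }

    // Interpolated position at (serverTime - RENDER_DELAY), or null if not enough data yet.
    public double[] sample(double serverTime) {
        double renderTime = serverTime - RENDER_DELAY;

        // Drop snapshots that are already older than the pair we need.
        while (buffer.size() >= 2 && secondOldest().time <= renderTime) {
            buffer.removeFirst();
        }
        if (buffer.size() < 2) {
            return null;
        }

        Snapshot a = buffer.peekFirst();
        Snapshot b = secondOldest();
        if (b.time <= a.time) {
            return new double[] { b.x, b.y };
        }
        double t = (renderTime - a.time) / (b.time - a.time); // lerp amount from timestamps
        t = Math.max(0, Math.min(1, t));
        return new double[] { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    private Snapshot secondOldest() {
        Iterator<Snapshot> it = buffer.iterator();
        it.next();
        return it.next();
    }
}
```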
I am working on an application which, through a Java program, links two different robot simulation environments. One simulation environment (let's call it A) sends the current state of the robot to the Java application, which does some calculations and then sends data about this current state, as well as some other information, on to the other simulation environment (let's call it B). Simulation B then updates the state of the robot to match Simulation A's version.
The problem is that as the program continues to run, simulation B begins to lag behind what simulation A is doing. This lag increases continuously, so that after a minute or so simulation B is several seconds behind.
I am using TCP sockets to send data between these environments and the Java program. From background reading on socket programming, I found out it is bad practice to continuously open and close sockets rapidly, so what I am doing currently is just keeping both sockets open. I have a loop running which grabs data from Sim A, does some calculations, and then sends the position data to Sim B and then I have the thread wait for 100ms and then the loop repeats. To be clear, the position data sent to B is unaltered from what is received from A.
Upon researching the lag issue, someone suggested to me that for streams of data it is actually a good idea to open and close sockets, because if you keep the socket open, if one simulation takes a longer time to process things than the other, you end up with the position data stacking up in the buffer and being read sequentially, instead of reading the most recent data. Is this true? Would rewriting my code to open and close sockets every 100ms potentially get rid of the delay? Or is this not how sockets actually work?
Edit for clarification: It is more critical that the simulations stay in sync than that all position data is sent, in other words it is acceptable to not pass along all data points for the sake of staying in sync.
Besides keeping the socket open causing problems, does anyone have any ideas of what might be causing the lag issue?
Thanks in advance for any insight/suggestions/hints!
You are correct about using a single connection. Data can indeed back up, but using multiple connections doesn't change that.
The basic question here is whether the Java program can calculate as fast as the robot can send data. If it can't, it will get behind. You can do various things to the networking to speed it up but if the computations can't keep up they are futile. So you need to investigate your timings.
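One way to investigate both at once (a sketch only; it assumes, purely for illustration, that Sim A sends one position update per line of text, and the ports are hypothetical): time each part of the loop, and drain whatever has already arrived so you only act on the newest update.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Sketch only: assumes Sim A sends one position update per line of text.
// Times each part of the loop and always acts on the newest line available.
public class RelayLoop {
    public static void main(String[] args) throws IOException, InterruptedException {
        Socket simA = new Socket("localhost", 5000);   // hypothetical ports
        Socket simB = new Socket("localhost", 5001);
        BufferedReader fromA = new BufferedReader(new InputStreamReader(simA.getInputStream()));
        PrintWriter toB = new PrintWriter(simB.getOutputStream(), true);

        while (true) {
            long t0 = System.nanoTime();

            // Drain everything Sim A has sent so far; keep only the newest update.
            String latest = fromA.readLine();          // blocks until at least one update exists
            while (fromA.ready()) {
                String next = fromA.readLine();
                if (next != null) {
                    latest = next;
                }
            }
            long t1 = System.nanoTime();

            String processed = doCalculations(latest); // your existing calculations
            long t2 = System.nanoTime();

            toB.println(processed);
            long t3 = System.nanoTime();

            System.out.printf("read %.1f ms, calc %.1f ms, send %.1f ms%n",
                    (t1 - t0) / 1e6, (t2 - t1) / 1e6, (t3 - t2) / 1e6);

            Thread.sleep(100);
        }
    }

    private static String doCalculations(String positionData) {
        return positionData; // placeholder for the real processing
    }
}
```

Since you said dropping data points is acceptable, always skipping to the newest line also keeps Sim B from drifting further behind whenever a calculation occasionally runs long.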
I am trying to implement a poker server. An HTTP server forwards data packets to the backend servers, which handle the state of all the poker hands. In any given hand, the player to act gets 10 seconds to act (bet, fold, call, raise, etc.). If there is no response within 10 seconds, the server automatically folds for them. To check that 10 seconds have passed, an event list of when actions must be received is maintained. It is a priority queue ordered by time, and each poker hand currently being played has an entry in it.
Consider the following scenario: 9.99 seconds pass after the last action before the next action arrives at the HTTP server. By the time the action is forwarded to the backend servers, extra time has passed, so a total of 10.1 seconds have now elapsed. The backend servers will have declared the hand folded, but I would like the action to be processed, since technically it arrived at the HTTP server after 9.99 seconds. One solution would be to have the backends wait some extra time before declaring a hand folded, to see if an action timestamped at 9.99 seconds arrives. But that would delay when the next person in the hand gets to act.
The goals I would like are:
Handle actions reaching the HTTP server at 9.99 seconds instead of folding the hand.
Aggressively minimize the delay that comes from idle waiting to "solve" the problem mentioned in bullet point 1.
What are the various solutions? For experts in distributed systems: is there known literature on the trade-offs of the various solutions? I would like to know which solutions are considered acceptable in the distributed systems literature, not just various ad hoc solutions.
Maybe on the server side, when the client request arrives, you could take the timestamp?
So you would take "start" and "stop" timestamps, to measure exactly 9.99 s?
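A sketch of that idea (field names are hypothetical): the HTTP frontend stamps each action the moment it arrives, and the backend judges timeliness against that stamp rather than against the time it happens to process the message.

```java
import java.time.Instant;

// Sketch only: the HTTP frontend stamps each action on arrival, and the backend
// compares that stamp to the deadline instead of its own processing time.
public class ActionTimestamps {

    // Built by the HTTP frontend as soon as the request arrives.
    public static class StampedAction {
        final String handId;
        final String action;       // "bet", "fold", "call", "raise", ...
        final Instant receivedAt;  // frontend arrival time

        StampedAction(String handId, String action, Instant receivedAt) {
            this.handId = handId;
            this.action = action;
            this.receivedAt = receivedAt;
        }
    }

    // On the backend: deadline is the instant recorded when the player was put on the clock.
    public static boolean arrivedInTime(StampedAction a, Instant deadline) {
        // A 9.99 s action that is forwarded late still counts, because receivedAt
        // is the frontend's arrival time, not the backend's processing time.
        return !a.receivedAt.isAfter(deadline);
    }
}
```

This assumes the frontend and backend clocks are reasonably in sync (or that the frontend forwards elapsed time since the previous action rather than an absolute timestamp); otherwise the comparison is only as good as the clock skew between the two machines.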