Multiplayer game synchronization - iPhone

The Situation:
I would like to ask what the best logic is for synchronizing objects in a multiplayer 1:1 game over Bluetooth (BT) or a web server. The game has two players, each of whom has multiple guns and bullets; the bullets are created dynamically and disappear after a while, and the players may move objects around simultaneously.
The Problem:
I have a real issue with synchronization, since the bullets on one device may be faster than on the other; they may also have already disappeared or hit an object on one device while on the other they are still in the air.
Possibilities?
What is the best way of handling synchronization in this case? Should all the objects be controlled by one device acting as the server, while the other just receives the values and positions and does very little thinking? Or should control be distributed, where each device creates, destroys, and moves its own objects and then tells the other device through synchronization?
What is the best way to handle transmission delay here, given that BT might be faster than playing over the web?
The best would be a working sample - thanks very much!

You seem to have started on some good ideas about synchronization, but there may be two problems you are running into that are getting conflated: the synchronization of game clocks and the synchronization of gamestate.
(1) synchronizing game clocks
you need some representation of 'game time' for your game. for a 2 player game it is very reasonable to simply declare one the authority.
so on the authoritative client:
OnUpdate()
    gameTime = GetClockTime();
    msg.gameTime = gameTime;
    SendGameTimeMessage(msg);
on the other client might be something like:
OnReceiveGameTimeMessage(msg)
    lastGameTimeFromNetwork = msg.gameTime;
    lastClockTimeOfGameTimeMessage = GetClockTime();

OnUpdate()
    gameTime = lastGameTimeFromNetwork + GetClockTime() - lastClockTimeOfGameTimeMessage;
there are complications like skipping/slipping (i.e. getting times over the network that jump forward/backward too much) that require further work, but hopefully you get the idea. follow up with another question if you need.
note: this example doesn't differentiate 'ticks' vs 'seconds', nor is it tied to your network protocol or the type of device your game is running on (save the requirement that the device has a local clock).
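for concreteness, here's a rough runnable sketch of the receiving side in python, with a simple blend factor to absorb small skips/slips instead of snapping (all names are made up and not tied to any particular networking api):

    import time

    class GameClock:
        # tracks 'game time' as reported by the authoritative client
        def __init__(self):
            self.last_game_time = 0.0                 # last gameTime received
            self.last_local_time = time.monotonic()   # local clock at arrival

        def on_game_time_message(self, game_time):
            estimated = self.now()
            # blend toward the network value rather than snapping, so small
            # forward/backward jumps don't cause visible pops
            self.last_game_time = estimated + 0.1 * (game_time - estimated)
            self.last_local_time = time.monotonic()

        def now(self):
            return self.last_game_time + (time.monotonic() - self.last_local_time)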
(2) synchronizing gamestate
after you have a consistent game clock, you still need to work out how to consistently simulate and propagate your gamestate. for synchronizing gamestate you have a few choices:
asynchronous
each unit of gamestate is 'owned' by one process. only that process is allowed to change that gamestate. those changes are propagated to all other processes.
if everything is owned by a single process, this is often called a 'client/server' game.
note, with this model each client has a different view of the game world at any time.
example games: quake, world of warcraft
to optimize bandwidth and hide latency, you can often do some local simulation for fields with a high update frequency. example:
drawPosition = lastSyncPosition + (currentTime - lastSyncTime) * lastSyncVelocity
of course you then have to reconcile new information with your simulated version in this case.
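for example, a sketch of that local simulation plus one common way to reconcile: keep extrapolating, and fold each new sync in gradually rather than snapping (plain python floats, names made up):

    def draw_position(last_sync_pos, last_sync_vel, last_sync_time, current_time):
        # dead reckoning: extrapolate the last synced state forward in time
        return last_sync_pos + (current_time - last_sync_time) * last_sync_vel

    def reconcile(displayed_pos, predicted_pos, blend=0.2):
        # when fresh data arrives, move a fraction of the error each frame
        # so the correction is invisible instead of a pop
        return displayed_pos + blend * (predicted_pos - displayed_pos)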
synchronous
each unit of gamestate is identical in all processes.
commands from each process are propagated to each other with their desired initiation time (sometime in the future).
in its simplest form, one process (often called the host) sends special messages indicating when to advance the game time. when everyone receives that message they are allowed to simulate the game up to that point.
the 'in the future' requirement leads to high latency between input command and gamestate change.
in non-real-time games like civilization, this is fine. in a game like starcraft, the sound acknowledging the input normally comes immediately, but the actual gamestate-affecting action is delayed. this style is not appropriate for games like shooters that require time-sensitive actions (on the ~100ms scale).
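a sketch of the command-delay idea (python; 'send', 'gamestate.apply' and 'gamestate.step' are placeholders for your own network/simulation code, which must be deterministic on every machine):

    INPUT_DELAY_TICKS = 3   # latency budget: must cover the round trip, in ticks

    pending = {}            # tick -> commands scheduled to run at that tick

    def on_local_input(current_tick, command, send):
        # schedule the command for the future and tell everyone else about it
        target_tick = current_tick + INPUT_DELAY_TICKS
        pending.setdefault(target_tick, []).append(command)
        send(target_tick, command)

    def advance_to(tick, gamestate):
        # only called once the host announces everyone may simulate up to `tick`
        for command in pending.pop(tick, []):
            gamestate.apply(command)
        gamestate.step()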
synchronous with resimulation
each unit of gamestate is identical in all processes.
each process sends all other processes its input with its current timestamp. additionally a 'nothing happened' message is periodically sent.
each process has 2 copies of the gamestate.
copy 1 of the gamestate is advanced up to the earliest 'last message' received from all other clients. this is equivalent to the synchronous model, but has the weakness that it represents a gamestate from 'a little bit ago'.
copy 2 of the gamestate is copy 1 plus all the remaining messages. it is a prediction of the gamestate at the current time on the client, assuming nothing new happens.
the player interacts with some combination of the two gamestates (ideally 100% copy 2, but some consideration must be taken to avoid pops as new messages come in).
example games: street fighter 4 (internet play)
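a sketch of how copy 2 can be rebuilt from copy 1 each frame (python; 'clone', 'apply' and 'step' are placeholders for your own gamestate code):

    def rebuild_prediction(confirmed_state, confirmed_tick, later_msgs, current_tick):
        # copy 2 = copy 1 plus every message already received past the
        # confirmed point, then 'nothing happened' up to the current time
        predicted = confirmed_state.clone()
        for tick in range(confirmed_tick + 1, current_tick + 1):
            for msg in later_msgs.get(tick, []):
                predicted.apply(msg)
            predicted.step()
        return predicted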
from your description, options (1) and (3) seem to fit your problem. again if you have further questions or require more detail, ask a follow up.

since the bullets on one device may be faster than on the other
This should not happen if the game has been architected properly.
Most games these days (particularly multiplayer ones) work on ticks - small timeslices. Each system should get the exact same result when it computes what happened during a tick - no "bullets moving faster on one machine than they do on another".
Then it's a much simpler matter of making sure each system gets the same inputs for each player (you'll need to broadcast each player's input to each other player, along with the tick the input was registered during), and making sure that each system calculates ticks at the same rate.
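In sketch form (Python, with hypothetical world.apply_input/world.step methods), the contract looks like this: every machine runs the same deterministic step over the same inputs, so only the inputs need to cross the wire:

    TICK_SECONDS = 1.0 / 30.0   # fixed timeslice, identical on every machine

    def run_tick(world, inputs_by_player):
        # inputs_by_player must be identical on every machine for this tick,
        # and step() must be deterministic: no local clocks, no unordered
        # iteration, and ideally no platform-dependent floating point
        for player_id in sorted(inputs_by_player):
            world.apply_input(player_id, inputs_by_player[player_id])
        world.step(TICK_SECONDS)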

Related

Latency handling methods in a multiplayer game

I am working on a real-time multiplayer soccer game.
Currently in my game I have created an architecture like this:
Every client has a copy of the game state, and so does the server.
Clients send their input vector (joystick data) to the server. (The local player uses the current input and moves immediately.)
The client waits for the other player's input; once that data arrives, I set the rigidbody speed and direction. Then it moves smoothly.
What I use:
UDP (lower ping)
Tick rate: 32 (increasing the tick rate fixes the issue most of the time, but not everyone's connection is strong, and sending many packets per second causes ping problems)
The problem:
Sometimes the server and clients get de-synced, which causes every client to see a different copy of the current game.
What I've tried:
Increasing the tick rate, but this only caused a connection with higher ping and packet loss
Lerping between two states; this made players appear to move at different speeds
Lerp + jitter buffer; this made players always see the game in a past state
If the client and server positions differ by more than some delta, use the server's position (normally this can fix the issue, but sometimes the server and client de-sync every 4 ticks, and teleporting an object every 4 ticks looks very laggy and not smooth)
What is the best method to fix or handle this de-sync?
And why is it happening?
Is it normal for this to happen almost every 4-5 ticks (~9 times a second)?
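One common middle ground between lerping and snapping, for reference: correct a fixed fraction of the client/server error every tick, and snap only when the error is too large to hide. A minimal sketch, assuming plain 2D coordinates (all names and thresholds made up):

    def correct_toward_server(client_pos, server_pos, fraction=0.1, snap_dist=2.0):
        ex = server_pos[0] - client_pos[0]
        ey = server_pos[1] - client_pos[1]
        if (ex * ex + ey * ey) ** 0.5 > snap_dist:
            return server_pos    # error too large to hide: snap
        # otherwise dissolve the error gradually so motion stays smooth
        return (client_pos[0] + fraction * ex, client_pos[1] + fraction * ey)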

real-time multiplayer server, interpolation and prediction validation

I'm building an HTML5 / WebSockets based multiplayer canvas game for Facebook, and I've been working on the server code for a few days now. While the game is pretty simple (2D top-down view, WASD controls, mouse click fires a projectile toward the cursor x/y), I've never had to do real-time multiplayer before. I've read a few great documents, but I'm hoping I can overview my general understanding of the topic here and someone can validate the approach and/or point out areas for improvement.
Authoritative multiplayer server, client-side prediction and entity interpolation (and questions below)
Client connects to the server, then:
1. Client syncs time to the server
The server has two main update loops:
2. Update the game physics (or game state) on the server at a frequency of 30 per second (the tick rate?)
3. Broadcast the game state to all clients at a frequency of 10 per second
4. Client stores three updates before being allowed to move; this builds up the cache for entity interpolation between update states (old to new, with one redundancy in case of packet loss)
5. Upon input from the user, the client sends input commands to the server at a frequency of 10 per second; these input commands are time-stamped with the client's time
6. Client moves the player on screen as a prediction of what the server will return as the final (authoritative) position of the client
7. Server applies all incoming updates to its physics/state in the previously mentioned update loop
8. Server sends out time-stamped world updates.
9. Client (if it is behind server time && has updates in the queue) linearly interpolates from the old position to the new.
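For steps 4 and 9, a minimal interpolation sketch (Python; positions are assumed to support arithmetic, e.g. floats or vectors):

    RENDER_DELAY = 0.1   # render ~100 ms in the past so two snapshots bracket us

    def interpolated_position(updates, server_now):
        # `updates` is a time-sorted list of (time, position) snapshots
        t = server_now - RENDER_DELAY
        for (t0, p0), (t1, p1) in zip(updates, updates[1:]):
            if t0 <= t <= t1 and t1 > t0:
                a = (t - t0) / (t1 - t0)    # 0..1 between the two snapshots
                return p0 + a * (p1 - p0)
        return updates[-1][1]               # ran out of buffer: hold last known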
Questions
At 1: is it possible to use NTP time and sync between the two?
At 5: time-stamped? Is the main purpose here to time-stamp each packet?
At 7: The input commands that come in will be out of order due to the clients' different latencies. I'm guessing these need to be sorted before being applied? Or is this overkill?
At 9: is the lerp always a fixed amount, 0.5f for example? Should I be doing something smarter?
Lots of questions I know but any help would be appreciated!!
At 1: You're overthinking this a bit. All you really have to do is send the server time to the client and, on that side, increment it in your update loop so that you're tracking time in server time; on every sync, you set your own value to the one that came from the server. Be EXTRA careful about this part and validate every speed/time server-side, or you will get extremely easy-to-pull-off but incredibly nasty hacks.
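To illustrate the server-side validation, a sketch of the kind of check meant here (the speed limit and names are made up):

    MAX_SPEED = 300.0   # world units per second: fastest legal movement

    def move_is_plausible(old_pos, new_pos, dt):
        dx = new_pos[0] - old_pos[0]
        dy = new_pos[1] - old_pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        # reject (or clamp) anything faster than physically possible;
        # trusting client positions or timestamps blindly enables speedhacks
        return dist <= MAX_SPEED * dt * 1.05   # small tolerance for jitter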
At 5: Time-stamping is important when you do this communication via UDP, as the order of the packets is not ensured unless you specifically make it so. Via WebSockets it shouldn't be that big of an issue, but it's still good practice (just make sure to validate those timestamps, or speedhacks will ensue).
At 7: It can be overkill, depending on the type of game. If your clients have large lag, they will by definition send fewer inputs to the server, so make sure you only process those that arrived before the point of processing and queue the remainder for the next update.
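That queueing might look like this (Python sketch; timestamps are assumed to be already converted to server time):

    import heapq, itertools

    _seq = itertools.count()   # tie-breaker for equal timestamps
    input_queue = []           # min-heap of (timestamp, seq, player_id, command)

    def on_input(timestamp, player_id, command):
        heapq.heappush(input_queue, (timestamp, next(_seq), player_id, command))

    def drain_inputs(cutoff):
        # process everything stamped at or before this update's cutoff;
        # later inputs stay queued for the next update
        ready = []
        while input_queue and input_queue[0][0] <= cutoff:
            ready.append(heapq.heappop(input_queue))
        return ready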
At 9 : This post from gamedev stackexchange might answer this better than I would, especially the text posted by user ggambett at the bottom.

When does a zero-time MIDI event trigger?

I'm reading a MIDI file and I'm having trouble determining when subsequent events trigger.
Let's say I have a midi file that has a track like this (where T=n is the delta time):
[T=0: Note On, C4] [T=128: Note Off, C4] [T=0: Note On, D4] [T=128: Note Off, D4]
Does the second Note On (D4) take place at the EXACT same time/tick as the previous Note Off (C4)? Or do you trigger it on the next tick?
In theory, the two events happen at the same time.
In practice, events need a certain time to be sent over MIDI (about one millisecond for three bytes), but the second event will be sent as soon as possible after the first one.
When no actual MIDI cable is involved, the events actually could take effect at the same time.
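Accumulating the delta times makes this concrete; a small Python sketch of the track above:

    track = [(0, "Note On C4"), (128, "Note Off C4"),
             (0, "Note On D4"), (128, "Note Off D4")]

    tick = 0
    for delta, event in track:
        tick += delta            # delta time is relative to the previous event
        print(tick, event)
    # prints ticks 0, 128, 128, 256: the D4 Note On shares tick 128
    # with the C4 Note Off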
All events happen on a tick. However, they're sent out over the MIDI cable one at a time, since MIDI is both a serial protocol and serial hardware. This became a problem with devices that sent out huge numbers of controller change messages, such as the early MIDI guitar controllers: they simply sent out more MIDI messages per second than the cable could transmit.
On an alternate transport, like USB, those events can happen closer together, but because they are serial they must still happen one after the other. That time frame may be indistinguishable (we hope), but there will always be a tiny lag.
For them to happen at the "same" time, you must either a) buffer or b) make them happen in different places, as with parallel players, which still leaves you with a delay in syncing.

Continuous stream of data via socket gets progressively more delayed

I am working on an application which, through a Java program, links two different robot simulation environments. One simulation environment (let's call it A) sends the current state of the robot to the Java application, which does some calculations and then sends data about this current state, as well as some other information, on to the other simulation environment (let's call it B). Simulation B then updates the state of the robot to match Simulation A's version.
The problem is that as the program continues to run, simulation B begins to lag behind what simulation A is doing. This lag increases continuously, so that after a minute or so simulation B is several seconds behind.
I am using TCP sockets to send data between these environments and the Java program. From background reading on socket programming, I found that it is bad practice to rapidly open and close sockets, so what I am doing currently is just keeping both sockets open. I have a loop running which grabs data from Sim A, does some calculations, and sends the position data to Sim B; the thread then waits for 100 ms and the loop repeats. To be clear, the position data sent to B is unaltered from what is received from A.
Upon researching the lag issue, someone suggested that for streams of data it is actually a good idea to open and close sockets: if you keep the socket open and one simulation takes longer to process than the other, the position data stacks up in the buffer and gets read sequentially, instead of you reading the most recent data. Is this true? Would rewriting my code to open and close sockets every 100 ms potentially get rid of the delay? Or is this not how sockets actually work?
Edit for clarification: It is more critical that the simulations stay in sync than that all position data is sent; in other words, it is acceptable to drop data points for the sake of staying in sync.
Aside from the open socket possibly causing problems, does anyone have any ideas about what might be causing the lag?
Thanks in advance for any insight/suggestions/hints!
You are correct about using a single connection. Data can indeed back up, but using multiple connections doesn't change that.
The basic question here is whether the Java program can calculate as fast as the robot can send data. If it can't, it will fall behind. You can do various things to the networking to speed it up, but if the computations can't keep up, those tweaks are futile. So you need to investigate your timings.
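If dropping stale data is acceptable (as the edit says it is), one approach is to drain everything that has piled up and keep only the newest complete message. A sketch, assuming newline-delimited messages over a TCP socket:

    import select

    class LatestReader:
        # reads newline-delimited messages, discarding all but the newest
        def __init__(self, sock):
            self.sock = sock
            self.buf = b""

        def latest(self):
            # drain whatever has accumulated, without blocking
            while select.select([self.sock], [], [], 0)[0]:
                chunk = self.sock.recv(4096)
                if not chunk:
                    break            # peer closed the connection
                self.buf += chunk
            *complete, self.buf = self.buf.split(b"\n")
            return complete[-1] if complete else None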

How to design a clock-driven multi-agent simulation

I want to create a multi-agent simulation model of a real-world manufacturing process to evaluate some dispatching rules. The simulation needs to produce event logs to evaluate the time effect of the dispatching rules compared to the real manufacturing event logs.
How can I incorporate the 'current simulation time' into this kind of multi-agent, message passing intensive simulation?
Background:
Classical discrete-event simulation (which handles time advancement nicely) cannot be applied here, as the agents in the system exhibit relatively complex behavior and routing requirements, and the dispatching rules require them to communicate frequently. This and other process complexities rule out a centralized scheduling approach as well.
In manufacturing science there are thousands of papers that use a multi-agent simulation to solve some manufacturing-related problem. However, I haven't yet found a paper that describes the inner workings or implementation details of these simulations in the required depth.
Unfortunately, using the shortest process time for discrete time-stepping might be infeasible, as process times range from 0.1 s to 24 hours. There is a possibility my simulation will later be used for what-if evaluations in a project, so it needs to run as fast as possible; overnight simulation runs are not an option.
The problem size is about 500 resources and 1,000-10,000 product agents, most of which are finished and no longer participate in any communication or resource occupation.
Consequently, as a result of the communication, new events can trigger an agent to do something before its original 'next time' event arrives. For example, an agent is currently blocked on a resource for an hour, but another, higher-priority agent needs that resource right away and asks the first agent to release it.
In some sense, I need a way to create a hybrid of classical message-passing agent simulation and discrete-event simulation.
I considered a mediator agent that is involved in every message: a message router and time enforcer which sends around the messages and the timer tick events, and which keeps a list of the next event times for the various agents. However, I feel there should be a better way to solve my problem, as this concept puts enormous pressure on the mediator agent.
Update
It took a while, but it seems I managed to create a mini-framework that combines the DES and agent concepts into one. I'm sure it's nothing new, but it is at least unique: http://code.google.com/p/tidra-framework/ if you are interested.
This problem sounds as if it should be tackled with parallel discrete-event simulation: the mediator agent you are planning to implement ('is involved in every message', 'sends around messages and timer tick events') seems to be doing the job of a discrete-event simulator right now. You can make this scale to the desired problem size by running more such simulators in parallel and then using a synchronization algorithm to maintain causality, etc. (see, e.g., this book for details). Of course, this requires considerable effort, and you might be better off really trying out the sequential algorithms first.
A nice way of augmenting the classical DES view of logical processes (= agents) that communicate with each other via events is to blend in ideas from other formalisms used to describe discrete-event systems, such as DEVS. In DEVS, each entity can specify the duration it will stay in a certain state (e.g., the agent blocking a resource) and will only be interrupted by incoming messages (upon which it changes its state accordingly, e.g., the agent freeing the resource).
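A sketch of that blend (Python; agent.handle is a placeholder for arbitrary agent behavior): a single event queue drives the clock, and a message to another agent is just an event that may arrive before that agent's own scheduled wake-up:

    import heapq, itertools

    class Simulator:
        def __init__(self):
            self.queue = []                # (time, seq, agent, payload)
            self.seq = itertools.count()   # tie-breaker for simultaneous events
            self.now = 0.0

        def schedule(self, delay, agent, payload):
            heapq.heappush(self.queue, (self.now + delay, next(self.seq), agent, payload))

        def run(self, until):
            while self.queue and self.queue[0][0] <= until:
                self.now, _, agent, payload = heapq.heappop(self.queue)
                # the agent reacts by scheduling new events: messages to other
                # agents, or its own next internal event; an incoming message
                # can thus preempt a previously planned wake-up
                agent.handle(self, payload)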
BTW, in what sense do you think the agents are too complex to be handled with discrete-event simulation? If you regard each agent as a logical process, it doesn't really matter how complex it is from a simulation point of view - or am I getting something wrong here?