How to balance start time for a multiplayer game? - iphone

I'm making a multiplayer game with GameKit. My issue is that when two devices are connected, the game starts running with a slight time difference: one of the devices starts running the game a bit later. But this is not what I want; I want it to start simultaneously on both devices. So the first thing I do is record the start time on both devices like this:
startTime = [NSDate timeIntervalSinceReferenceDate];
and this is how it looks:
361194394.193559
Then I send the startTime value to the other device, which compares the received value with its own startTime:
- (void)balanceTime:(double)partnerTime
{
    double time_diff = startTime - partnerTime;
    if (time_diff < 0)
        startTimeOut = -time_diff;
}
So if the difference between the two start times is negative, it means this device is starting earlier and therefore has to wait for exactly that difference, which is assigned to the startTimeOut variable (a double, usually something like 2.602417). I then pause my game in my update method:
- (void)update:(ccTime)dt
{
    if (startTimeOut > 0)
    {
        NSLog(@"START TIME OUT %f", startTimeOut);
        startTimeOut -= dt;
        return;
    }
}
But unfortunately it doesn't help. In fact, it even extends the difference between the devices' start times. I just can't see why; everything I'm doing seems reasonable. What am I doing wrong? How do I correct it? What would you do? Thanks a lot.

As Almo commented, it is not possible to synchronize two devices to exactly the same time. At the lowest level you will gnaw your teeth out on the physical limits of clocks and networks. Even getting two devices to synchronize to within a tenth of a second is not a trivial task. In addition, the synchronization would have to be repeated more or less frequently, since the clocks in each device drift ever so slightly apart (i.e. one runs a teeny bit faster or a weeny bit slower).
You also have to consider the lag introduced by sending data over Wi-Fi, Bluetooth or the cellular network. This lag is not constant; it can be 10 ms in one frame and 1000 ms in the next. You can't cancel out lag, nor can you predict it. But you can predict player movements.
The solution for games, or at least one of them, is client-side prediction and dead reckoning. This SO question has a few links of interest.
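If you still want to shrink (never eliminate) the clock offset before falling back on prediction, the usual trick is an NTP-style handshake: estimate the offset from a request/reply round trip instead of comparing raw timestamps, as the question does. A minimal sketch of the arithmetic (illustrative Python; `estimate_offset` is not a GameKit API):

```python
def estimate_offset(t0, t1, t2, t3):
    """NTP-style clock offset estimate from one round trip.

    t0: local time when the request was sent
    t1: remote time when the request arrived
    t2: remote time when the reply was sent
    t3: local time when the reply arrived

    Returns (offset, round_trip_delay); offset > 0 means the
    remote clock is ahead of the local clock.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: remote clock 2.5 s ahead, roughly 50 ms one-way latency.
t0 = 100.00   # local send
t1 = 102.55   # remote receive
t2 = 102.56   # remote reply
t3 = 100.11   # local receive
offset, delay = estimate_offset(t0, t1, t2, t3)
# offset ~ 2.5 s, delay ~ 0.1 s; the device whose clock is behind
# then shifts its agreed start moment by the estimated offset.
```

Averaging the offset over several round trips, and preferring the samples with the smallest round-trip delay, makes the estimate considerably more stable.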

Related

Unity - relate wall clock time to physics time (in fixed update)

I am involved in a project that is building software for a robot that uses ROS2 to support the robot's autonomy code. To streamline development, we are using a model of our robot built in Unity to simulate the physics. In Unity, we have analogues for the robot's sensors and actuators - the Unity sensors use Unity's physics state to generate readings that are published to ROS2 topics and the actuators subscribe to ROS2 topics and process messages that invoke the actuators and implement the physics outcomes of those actuators within Unity. Ultimately, we will deploy the (unmodified) autonomy software on a physical robot that has real sensors and actuators and uses the real world for the physics.
In ROS2, we are scripting with python and in Unity the scripting uses C#.
It is our understanding that, by design, the wall clock time that a Unity fixed update call executes has no direct correlation with the "physics" time associated with the fixed update. This makes sense to us - simulated physics can run out of synchronization with the real world and still give the right answer.
Some of our planning software (ROS2/python) wants to initiate an actuator at a particular time, expressed as floating point seconds since the (1970) epoch. For example, we might want to start decelerating at a particular time so that we end up stopped one meter from the target. Given the knowledge of the robot's speed and distance from the target (received from sensors), along with an understanding of the acceleration produced by the actuator, it is easy to plan the end of the maneuver and have the actuation instructions delivered to the actuator well in advance of when it needs to initiate. Note: we specifically don't want to hold back sending the actuation instructions until it is time to initiate, because of uncertainties in message latency, etc. - if we do that, we will never end up exactly where we intended.
And in a similar fashion, we expect sensor readings that are published (in a fixed update in Unity/C#) to likewise be timestamped in floating point seconds since the epoch (e.g., the range to the target object was 10m at a particular recent time). We don't want to timestamp the sensor reading with the time it was received because of unknown latency from the time the sensor value was current and the time it was received in our ROS2 node.
When our (Unity) simulated sensors publish a reading (based on the physics state during a fixed update call), we don't know what real-world/wall-clock timestamp to associate with it - we don't know which 20 ms of real time that particular fixed update corresponds to.
Likewise, when our Unity script that is associated with an actuator is holding a message saying to initiate actuation at a particular real-world time, we don't know whether that should happen in the current fixed update, because we don't know the real-world time that the fixed update corresponds to.
The Unity Time methods all seem to deal with time relative to the start of the game (basically, a dynamically determined epoch).
We have tried capturing the wall-clock time and the time since game start in a MonoBehaviour's Start, but this seems to put us off by a handful of seconds once the fixed updates are running (with the exact time shift varying between runs).
How to crosswalk between the Unity game-start-based epoch and a fixed-start epoch (e.g., 1970)?
An example: This code will publish the range to the target, along with the time of the measurement. This gets executed every 20ms by Unity.
void FixedUpdate()
{
    RangeMsg targetRange = new RangeMsg();
    targetRange.time_s = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
    targetRange.range_m = Vector3.Distance(target.transform.position, chaser.transform.position);
    ros.Publish(topicName, targetRange);
}
On the receiving end, let's say that we are calculating the speed toward the target:
def handle_range(self, msg):
    if self.last_range is not None:
        diff_s = msg.time_s - self.last_range.time_s
        if diff_s != 0.0:
            diff_range_m = self.last_range.range_m - msg.range_m
            speed = Speed()
            speed.time_s = msg.time_s
            speed.speed_mps = diff_range_m / diff_s
            self.publisher.publish(speed)
    self.last_range = msg
If the messages are really published exactly every 20ms, then this all works. But if Unity gets behind and runs several fixed updates one after another to get caught up, then the speed gets computed as much higher than it should (because each cycle, 20ms of movement is applied, but the cycles may be executed within a millisecond of each other).
If instead we use Unity's time for timestamping the messages with
targetRange.time_s = Time.fixedTimeAsDouble;
then the range and time stay in sync and the speed calculation works great, even in the face of some major hiccup in Unity processing. But then the rest of our code, which lives in the 1970 epoch, has no idea what time targetRange.time_s really is.
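One way to build such a crosswalk (a sketch of the idea, not tested against Unity; `EpochCrosswalk`, `unix_at_anchor`, and `fixed_at_anchor` are illustrative names) is to capture a single anchor pair - wall-clock time and `Time.fixedTimeAsDouble` - at a moment when the simulation is known to be caught up, then shift every fixed-time stamp by that constant offset:

```python
class EpochCrosswalk:
    """Maps a game-start-based clock onto the 1970 epoch using one
    captured anchor pair (wall-clock seconds, fixed-time seconds)."""

    def __init__(self, unix_at_anchor, fixed_at_anchor):
        # The two clocks tick at (nominally) the same rate, so a
        # single constant offset relates them.
        self.offset = unix_at_anchor - fixed_at_anchor

    def to_unix(self, fixed_time):
        return fixed_time + self.offset

    def to_fixed(self, unix_time):
        return unix_time - self.offset

# Anchor captured once: wall clock read 1,700,000,000.0 s when
# Time.fixedTimeAsDouble read 12.34 s.
cw = EpochCrosswalk(unix_at_anchor=1_700_000_000.0, fixed_at_anchor=12.34)
stamp = cw.to_unix(12.36)   # a reading one 20 ms physics step later
```

The anchor would need to be re-captured (or smoothed) if the simulation falls behind real time for long stretches, since fixed time then lags the wall clock.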

Getting displacement from accelerometer data with Core Motion

I am developing an augmented reality application that (at the moment) wants to display a simple cube on top of a surface, and be able to move in space (both rotating and displacing) to look at the cube in all the different angles. The problem of calibrating the camera doesn't apply here since I ask the user to place the iPhone on the surface he wants to place the cube on and then press a button to reset the attitude.
To find out the camera rotation is very simple with the Gyroscope and Core Motion. I do it this way:
if (referenceAttitude != nil) {
    [attitude multiplyByInverseOfAttitude:referenceAttitude];
}
CMRotationMatrix mat = attitude.rotationMatrix;
GLfloat rotMat[] = {
    mat.m11, mat.m21, mat.m31, 0,
    mat.m12, mat.m22, mat.m32, 0,
    mat.m13, mat.m23, mat.m33, 0,
    0, 0, 0, 1
};
glMultMatrixf(rotMat);
This works really well.
More problems arise, though, when I try to find the displacement in space during an acceleration.
The Apple Teapot example with Core Motion just adds the x, y and z values of the acceleration vector to the position vector. This (apart from not making much sense) has the result of returning the object to its original position after an acceleration, since the acceleration goes from positive to negative or vice versa.
They did it like this:
translation.x += userAcceleration.x;
translation.y += userAcceleration.y;
translation.z += userAcceleration.z;
What should I do to find displacement from the acceleration at some instant (with a known time difference)? Looking at some other answers, it seems I have to integrate twice to get velocity from acceleration and then position from velocity. But there is no example in code whatsoever, and I don't think that is really necessary. Also, there is the problem that when the iPhone is still on a flat surface, accelerometer values are not null (there is some noise, I think). How much should I filter those values? Am I supposed to filter them at all?
Cool, there are people out there struggling with the same problem, so it is worth spending some time on :-)
I agree with westsider's statement as I spent a few weeks of experimenting with different approaches and ended up with poor results. I am sure that there won't be an acceptable solution for either larger distances or slow motions lasting for more than 1 or 2 seconds. If you can live with some restrictions like small distances (< 10 cm) and a given minimum velocity for your motions, then I believe there might be the chance to find a solution - no guarantee at all. If so, it will take you a pretty hard time of research and a lot of frustration, but if you get it, it will be very very cool :-) Maybe you find these hints useful:
First of all to make things easy just look at one axis e.g x but consider both left (-x) and right (+x) to have a representable situation.
Yes, you are right: you have to integrate twice to get position as a function of time. For further processing you should also store the first integration's result (== velocity), because you will need it at a later stage for optimisation. Do it very carefully, because every tiny bug will lead to huge errors after a short period of time.
Always bear in mind that even a very small error (e.g. <0.1%) will grow rapidly after integrating twice. The situation becomes even worse after one second if you configure the accelerometer at, say, 50 Hz: 50 ticks are processed and the tiny, supposedly negligible error will outrun the "true" value. I would strongly recommend not relying on the trapezoidal rule but using at least Simpson or a higher-degree Newton-Cotes formula.
If you manage this, you will have to keep an eye on setting up the right low-pass filtering. I cannot give a general value, but as a rule of thumb, experimenting with filtering factors between 0.2 and 0.8 is a good starting point. The right value depends on your use case, for instance what kind of game it is, how fast you need to react to events, ...
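To make the two steps above concrete, here is a minimal sketch (illustrative Python with a made-up constant-acceleration stream, not production code): an exponential low-pass with a smoothing factor in the suggested 0.2-0.8 range, and trapezoidal double integration that keeps the velocity series. Note the answer recommends at least Simpson's rule; the trapezoid here only shows the structure.

```python
def low_pass(samples, alpha=0.5):
    """Exponential low-pass filter; alpha roughly in 0.2..0.8."""
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * s + (1.0 - alpha) * prev
        out.append(prev)
    return out

def integrate_twice(accel, dt):
    """Trapezoidal integration: acceleration -> velocity -> position.
    The velocity series is kept, since it is needed later for drift
    correction ("virtual forces")."""
    velocity, position = [0.0], [0.0]
    for i in range(1, len(accel)):
        velocity.append(velocity[-1] + 0.5 * (accel[i - 1] + accel[i]) * dt)
        position.append(position[-1] + 0.5 * (velocity[-2] + velocity[-1]) * dt)
    return velocity, position

# One second of constant 1 m/s^2 acceleration sampled at 50 Hz:
dt = 1.0 / 50.0
accel = [1.0] * 51
v, p = integrate_twice(accel, dt)
# v[-1] is ~1.0 m/s and p[-1] ~0.5 m (exact for constant acceleration);
# real, noisy samples would go through low_pass() first.
```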
Now you will have a solution which works pretty well under certain circumstances and within a short period of time. But then, after a few seconds, you will run into trouble because your object drifts away. Now you will enter the difficult part of the solution, which I eventually failed to handle within the given time scope :-(
One promising approach is to introduce something I call "synthetic forces" or "virtual forces". This is a strategy for reacting to several bad situations that cause the object to drift away even though the device remains still in your hands. The most troubling one is a velocity greater than 0 without any acceleration. This is an unavoidable result of error propagation and can be handled by slowing down artificially, that is, introducing a virtual deceleration even if there is no real counterpart. A very simplified example:
if (vX > 0 && secondsSinceLastAccelerationX > 0.3) {
    vX *= 0.9;
}
You will need a combination of such conditions to tame the beast. A lot of trial and error is required to get a feeling for the right way to go, and this will be the hard part of the problem.
If you ever managed to crack the code, pleeeease let me know, I am very curious to see if it is possible in general or not :-)
Cheers Kay
When the iPhone 4 was very new, I spent many, many hours trying to get an accurate displacement using the accelerometers and gyroscope. There shouldn't have been much concern about incremental drift, as the device needed to move only a couple of meters at most and the data collection typically ran for a few minutes at most. We tried all sorts of approaches and even had help from several Apple engineers. Ultimately, it seemed that the gyroscope wasn't up to the task. It was good for 3D orientation, but that was it ... again, according to very knowledgeable engineers.
I would love to hear someone contradict this - because the app never really turned out as we had hoped, etc.
I am also trying to get displacement on the iPhone. Instead of using integration I used the basic physics formula of d = .5a * t^2 assuming an initial velocity of 0 (doesn't sound like you can assume initial velocity of 0). So far it seems to work quite well.
My problem is that I'm using deviceMotion and the values are not correct. deviceMotion.gravity reads near 0. Any ideas? - OK, fixed: apparently deviceMotion.gravity has x, y, and z values. If you don't specify which you want, you get back x (which should be near 0).
Finding this question two years later, I just found an AR project in the iOS 6 docset named pARk. It provides an approximate displacement capture and calculation using the gyroscope, aka the CoreMotion framework.
I'm just starting to learn the code.
to be continued...

iOS - Speed Issues

Hey all, I've got a method of recording that writes the notes a user plays to an array in real time. The only problem is that there is a slight delay, and each sequence is noticeably slowed down when playing back. I upped the speed of playback by about 6 milliseconds and it sounds right, but I was wondering if the delay would vary on other devices?
I've tested on an iPod touch 2nd gen; how would that perform on 3rd and 4th gen, as well as on iPhones? Do I need to test on all of them and find the optimal delay variation?
Any Ideas?
More Info:
I use two NSThreads instead of timers, and fill an array with blank spots where no notes should play (I use integers; -1 is a blank). Every 0.03 seconds it adds a blank when recording. Every time the user hits a note, the most recent blank is replaced by a number 0-7. When playing back, the second thread is used (two threads because the second one has a shorter time interval, 0.024 seconds). The 6-millisecond difference compensates for the delay between recording and playback.
I assume that either the recording or playing of notes takes longer than the other, and thus creates the delay.
What I want to know is if the delay will be different on other devices, and how I should compensate for it.
Exact Solution
I may not have explained it fully, that's why this solution wasn't provided, but for anyone with a similar problem...
I played each beat similar to a midi file like so:
while playing:
    do stuff to play beat
    target = new date, xyz seconds from now
    now = new date
    while now is not > target:
        wait
The obvious thing that I was missing was to create the two dates BEFORE playing the beat...
D'OH!
It seems more likely to me that the additional delay is caused by the playback of the note, or other compute overhead in the second thread. Grab the wallclock time in the second thread before playing each note, and check the time difference from the last one. You will need to reduce your following delay by any excess (likely 0.006 seconds!).
The delay will be different on different generations of the iphone, but by adapting to it dynamically like this, you will be safe as long as the processing overhead is less than 0.03 seconds.
You should do the same thing in the first thread as well.
Getting high-resolution timestamps: there's a discussion on the Apple forums here, or this Stack Overflow question.
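The "Exact Solution" above - fix the deadline before playing the beat and wait on absolute time - can be sketched like this (illustrative Python with an injectable clock; `play_beats` is a made-up helper, not the poster's Objective-C):

```python
import time

def play_beats(beats, interval_s, play_fn, clock=time.monotonic, sleep=time.sleep):
    """Schedule each beat against an absolute deadline so per-beat
    processing time cannot accumulate as drift."""
    fired_at = []
    next_deadline = clock()
    for beat in beats:
        next_deadline += interval_s   # deadline fixed BEFORE the work
        play_fn(beat)                 # however long this takes...
        remaining = next_deadline - clock()
        if remaining > 0:
            sleep(remaining)          # ...we only sleep the leftover
        fired_at.append(clock())
    return fired_at

# Simulated clock: each beat costs 2 ms of work, beats every 10 ms.
t = [0.0]
def fake_clock(): return t[0]
def fake_sleep(d): t[0] += d
def busy_play(_): t[0] += 0.002

times = play_beats(range(4), 0.010, busy_play, clock=fake_clock, sleep=fake_sleep)
# beats land at 10 ms spacing despite the 2 ms of work per beat
```

Sleeping a fixed interval after each beat would instead add the 2 ms of work to every gap, which is exactly the drift the poster observed.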

Best way to code a real-time multiplayer game

I'm not sure if the term real-time is being misused here, but the idea is that many players on a server have a city producing n resources per second. There might be a thousand such cities. What's the best way to reward all of the player cities?
Is the best way a loop like such placed in an infinite loop running whenever the game is "live"? (please ignore the obvious faults with such simplistic logic)
foreach (City c in AllCities) {
    if (c.lastTouched < DateTime.Now.AddSeconds(-10)) {
        c.resources += (DateTime.Now - c.lastTouched).Seconds * c.resourcesPerSecond;
        c.lastTouched = DateTime.Now;
        c.saveChanges();
    }
}
I don't think you want an infinite loop as that would waste a lot of CPU cycles. This is basically a common simulation situation Wikipedia Simulation Software and there are a few approaches I can think of:
A discrete time approach, where you increment the clock by a fixed amount and recalculate the state of your system. This is similar to your approach above, except that you do the calculation periodically and remove the 10-second if clause.
A discrete event approach where you have a central event queue, each with a timestamp, sorted by time. You sleep until the next event is due and then dispatch it. E.g. the event could mean adding a single resource. Wikipedia Discrete Event Simulation
Whenever someone asks for the number of resources calculate it based on the rate, initial time, and current time. This can be very efficient when the number of queries is expected to be small relative to the number of cities and the elapsed time.
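The third option - deriving the resource count on demand - might look like this (a sketch; the `City` API here is illustrative, not the asker's):

```python
import time

class City:
    """Resources are derived lazily from elapsed time; no loop has
    to touch idle cities."""

    def __init__(self, resources_per_second, now=None):
        self.rate = resources_per_second
        self.base_resources = 0.0
        self.last_touched = time.time() if now is None else now

    def resources(self, now=None):
        """Accrued total = banked amount + rate * elapsed time."""
        now = time.time() if now is None else now
        return self.base_resources + (now - self.last_touched) * self.rate

    def spend(self, amount, now=None):
        """Materialize the accrued total, then deduct."""
        now = time.time() if now is None else now
        total = self.resources(now)
        if amount > total:
            raise ValueError("not enough resources")
        self.base_resources = total - amount
        self.last_touched = now

c = City(resources_per_second=2.0, now=100.0)
balance = c.resources(now=110.0)   # 20.0 accrued over 10 s
```

Persisting only `base_resources` and `last_touched` means a thousand cities cost nothing while nobody is looking at them.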
While you can store the last-ticked time per object, as in your example, it's often easier to just have a global timestep:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    foreach (whatever)
        whatever.update(dt);
    lastUpdateTime = currentTime;
}
if you have different systems that don't need as frequent updates:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    subsystem.timer += dt;
    while (subsystem.timer > subsystem.updatePeriod) { // be careful that
        subsystem.timer -= subsystem.updatePeriod;     // subsystem.update()
        subsystem.update(subsystem.updatePeriod);      // runs faster than
    }                                                  // subsystem.updatePeriod
    // ...
}
(which you'll notice is pretty much what you were doing on a per city basis)
Another gotcha is that with different subsystem clock rates you can get overlaps (i.e. ticking many subsystems in the same frame), leading to inconsistent frame times, which can sometimes be an issue.

Measuring velocity via iPhone SDK

I need to implement a native iPhone app to measure the velocity of the phone (basically a speedometer). I know that you can do so via the CoreLocation API fairly easily, but I am concerned about battery consumption since this is to be a real-time measurement that could be used for up to a couple of hours at a time. My understanding is that while you are actively monitoring for events from the LocationManager (even though I don't actually care about GPS location) it is battery-intensive.
The other obvious option to explore would be using the accelerometers to try and calculate speed, but there is nothing in the API to help you do so. Based on my research, it should be possible to do this, but seems extremely complicated and error-prone. Translating from acceleration to velocity can be tricky to begin with, plus the iPhone accelerometer data can be "noisy". I'm familiar with the SDK example that demonstrates using low/high pass filtering, etc. -- but I have not seen a good example anywhere that shows calculating velocity.
Does anyone have any real-world experience with this they can share? Code would be fantastic, but really I just want to know if anyone has successfully done this (for a long-lived app) and what approach they took.
EDIT: I've got a working prototype that uses the LocationManager API. It works OK, but the update cycle is far from ideal for a real-time measurement of velocity. Depending on circumstances, it can take up to 4-5 seconds sometimes to update. Cruising at a given speed tends to work OK, but accel/decel tend to lag very badly from a user interaction standpoint. Also, I need to feed velocity into some other calculations that I'm doing and the precision is not really what I need.
It seems possible based on (very few) other apps I've seen, notably gMeter which claims to make no use of GPS but calculates velocity accurately. I'm really surprised there are no references or any sample code that demonstrates this anywhere that I can find. I realize it's complex, but surely there's something out there.
Since the GPS and accelerometer have different weaknesses, your best option is probably a combined approach - Get a measurement from the GPS every minute or so, then add realtime changes from the accelerometer.
From a practical standpoint, you are not going get accurate velocity from forces acting on the accelerometer.
Use the GPS with readings taken at 1-minute intervals and put the GPS to sleep in between.
Here is an example:
SpeedViewController.h
CLLocationManager *locManager;
CLLocationSpeed speed;
NSTimer *timer;

@property (nonatomic, retain) NSTimer *timer;
SpeedViewController.m
#define kRequiredAccuracy 500.0 //meters
#define kMaxAge 10.0 //seconds

- (void)startReadingLocation {
    [locManager startUpdatingLocation];
}

- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
    NSTimeInterval ageInSeconds = [newLocation.timestamp timeIntervalSinceNow];

    //ensure you have an accurate and non-cached reading
    if (newLocation.horizontalAccuracy > kRequiredAccuracy || fabs(ageInSeconds) > kMaxAge)
        return;

    //get current speed
    currentSpeed = newLocation.speed;

    //this puts the GPS to sleep, saving power
    [locManager stopUpdatingLocation];

    //timer fires after 60 seconds, then stops
    self.timer = [NSTimer scheduledTimerWithTimeInterval:60.0 target:self selector:@selector(timeIntervalEnded:) userInfo:nil repeats:NO];
}

//this is a wrapper method to fit the required selector signature
- (void)timeIntervalEnded:(NSTimer *)timer {
    [self startReadingLocation];
}
The error in the acceleration will accumulate over time. Your best bet is to get an accurate velocity from the GPS, maybe once a minute or less:
distanceTravelled = sqrt( (position2.x-position1.x)^2 + (position2.y-position1.y)^2 )
velocity = distanceTravelled/timeBetweenGPSReadings
(where ^2 means squared)
Then take frequent measurements of the accelerometer:
newVelocity = oldVelocity + accelerometer*timeBetweenAccelerometerReadings
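Putting those two formulas together as a simple complementary update (a sketch only, not a full Kalman filter; one axis, made-up numbers, and `fuse_velocity` is an illustrative name):

```python
def fuse_velocity(v_est, accel, dt, gps_velocity=None, gps_weight=0.2):
    """Dead-reckon with the accelerometer every tick; when a GPS
    velocity arrives, pull the estimate toward it to bleed off the
    accumulated integration drift."""
    v_est = v_est + accel * dt
    if gps_velocity is not None:
        v_est = (1.0 - gps_weight) * v_est + gps_weight * gps_velocity
    return v_est

# Ten ticks of 0.5 m/s^2 at 10 Hz, then one GPS fix reading 0.4 m/s:
v = 0.0
for _ in range(10):
    v = fuse_velocity(v, accel=0.5, dt=0.1)
v = fuse_velocity(v, accel=0.5, dt=0.1, gps_velocity=0.4)
# the correction nudges the dead-reckoned estimate toward the GPS value
```

The weight trades responsiveness against trust in the GPS: a higher `gps_weight` kills drift faster but makes the estimate jumpier between fixes.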
I'm not sure you'll get very far trying to track velocity using the accelerometer. To do this, you'd have to make sure you captured EVERY acceleration, since any missed data points would indicate the wrong velocity (this is assuming, of course, you're able to convert the reported accelerometer values into standard units). Thus, you'd have to constantly run the accelerometer stuff, which sucks quite a bit of juice in itself (and, again, you won't be guaranteed all accelerations). I'd recommend using CoreLocation.
While I don't know the iPhone API, I do know something about GPS and inertial navigation. It might be useful.
The GPS receivers I have worked with can all provide a direct measurement of velocity from the GPS signals they receive. These measurements are more accurate than position data even. I don't know if the Apple API provides access, or even if apple has configured their receiver to provide this data. This would be the more efficient route to getting a velocity measurement.
The next path, given that you have accelerometer data and GPS data, is to combine them as mentioned earlier by other posters and comments. Using the GPS to periodically correct the accumulated inertial measurement from the accelerometer data works very well in practice. It gives you the benefit of more frequent accelerometer measurements plus the accuracy of the GPS measurements. A Kalman filter is commonly used, but given the accuracy and timing limits of your chosen platform, a Kalman filter may be overkill, and something simpler to implement and run should work fine.
Anyway, just some things to think about.