I am developing a multiplayer game with Box2D physics for iOS. The multiplayer uses the usual lock-step method, and the game updates the physics world with a fixed timestep. There is no desync among iOS devices with the same CPU.
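Roughly speaking, each tick looks like the sketch below (a simplified illustration rather than my actual code; the step size, iteration counts, the Inputs type and the applyInputs helper are placeholders):

const float kFixedDt  = 1.0f / 60.0f;   // assumed fixed timestep
const int   kVelIters = 8;              // assumed Box2D iteration counts
const int   kPosIters = 3;

void lockstepTick( b2World& world, const Inputs& agreedInputs )   // Inputs is a placeholder type
{
    applyInputs( world, agreedInputs );             // hypothetical helper: same inputs on every peer
    world.Step( kFixedDt, kVelIters, kPosIters );   // identical fixed step on every device
}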
However, when testing with new iOS devices using the Apple A6 chip, desync happened. My log file gives me the impression that the desync happens quite fast, probably because of some floating-point operation that I have not been able to pin down yet.
I can guarantee that Box2D is the only module that needs to be synchronized in the design of the game, and according to my log all multiplayer commands and inputs stay in sync.
I have tried changing all transcendental functions (sinf, cosf, pow, sqrtf, atan2f) to their double versions, but without any luck.
Is there any way, such as a compiler option, to force the Apple A6 to treat floating-point numbers the same way the Apple A5 does?
I will really appreciate any answer.
A number of math library functions use different algorithms on the A5 and A6. If they differ by more than an ulp or two, you may have found a bug; please report it. Otherwise, the variation is likely within the expected tolerances of a good-quality math library. For a glimpse into why this is so, the best reference is Ian Ollmann's email to the mac-games-dev mailing list several years ago, "the math library is not a security tool", which addressed this exact issue in the context of Mac OS X. (The tl;dr version: the goal of delivering bit-identical results across architectures, which some game developers want, is fundamentally in conflict with delivering high-accuracy answers as efficiently as possible on all architectures, which all developers [and users, since it benefits responsiveness and battery life] want; something has to give, and for a general-purpose system library the latter necessarily takes priority.) The Apple developer forums would be another good place to look for information.
It is actually Nguyen Truong Chung again.
Thank you all very much for your answers so far; they showed me the path to continue debugging! At the moment I have found the reason for the desync, but no concrete solution yet. I would like to share the information I have, and hopefully get some more insights.
1. Finding:
I have this function that uses cos, and I logged its output like this:
void rotateZ( float angle )
{
    if( angle )
    {
        const float sinTheta = sin( angle );
        const float cosTheta = cos( angle );
        // I logged here
        myLog( "Vector3D::SelfRotateZ(%x) %x, %x", *(unsigned int*)&angle, *(unsigned int*)&cosTheta, *(unsigned int*)&sinTheta );
        ....
    }
}
Desync happened like this:
On iPad4: Vector3D::SelfRotateZ(404800d2) bf7ff708, 3c8782bc
On iPhone4: Vector3D::SelfRotateZ(404800d2) bf7ff709, 3c8782bc
2. Re-testing:
And the story does not stop here because:
I tried these lines of code at the beginning of the game:
{
    unsigned int zz = 0x404800d2;
    float yy = 0;
    memcpy( &yy, &zz, 4 );
    const float temp1 = cos( yy );
    printf( "%x\n", *(unsigned int*)&temp1 );
}
I ran the code above on the same iPhone4, and guess what? I got this: bf7ff708
I put that code in the update loop of the game and the result I got was still bf7ff708 at every loop.
What is more, the value 0x404800d2 is an initialization value of the game, so every time the game starts, the two desynced lines above are always present.
3. The questioning:
So, I decided to set aside what happened above and temporarily replaced the sin and cos functions with simple Taylor-series implementations I found on dreamcode.net. The desync no longer happened.
It seems that the cos function is not even deterministic on the same iPhone 4 (OS version 5).
My question is: is there an explanation for why the cos function returns different results for the same input on the same phone? Here I have the input 0x404800d2 and two different outputs, bf7ff708 and bf7ff709, yet I cannot reproduce the bf7ff709 result with the standalone code above.
I guess I would need the source code of the OS's math functions (the floating-point versions) to understand this clearly. Is the problem I found above enough for a bug report?
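For reference, the kind of replacement I mean is sketched below (a minimal example, not the dreamcode.net code I actually used); it sticks to basic float arithmetic, so the same compiled code produces the same bits on every device:

// crude Taylor-series cosine; accuracy is poor near +/-pi, but the result is reproducible
static float det_cosf( float x )
{
    // simple range reduction into [-pi, pi]
    const float kPi    = 3.14159265f;
    const float kTwoPi = 6.28318531f;
    while( x >  kPi ) x -= kTwoPi;
    while( x < -kPi ) x += kTwoPi;

    // cos(x) ~ 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8!
    const float x2 = x * x;
    const float ratios[4] = { 2.0f, 12.0f, 30.0f, 56.0f };   // (2k)(2k-1) for k = 1..4
    float term = 1.0f;
    float sum  = 1.0f;
    float sign = -1.0f;
    for( int k = 0; k < 4; ++k )
    {
        term *= x2 / ratios[k];
        sum  += sign * term;
        sign  = -sign;
    }
    return sum;
}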
It's actually Nguyen Truong Chung again. :)
Thank you very much for your help so far.
I just want to report that the desync is fixed now that I have rewritten all transcendental functions, like cos, sin, sqrt, pow, atan2, atan, asin, acos, etc. (as many as possible).
Developing with various game physics engines over the years, I've noticed that on the same machine I observe widely different results in physics simulations between runs. Most recently, the Unity engine does this, even though physics are calculated at set intervals of time (FixedUpdate) -- as far as I can determine it should be completely independent of frame-rate.
I've asked this question on game forums before, and was told it was due to chaotic motion: see double pendulum. But, even the double pendulum is deterministic if the starting conditions are exactly controlled, right? On the same machine, shouldn't floating point math behave the same way?
I understand that there are problems with floating point math accuracy, but I understand those problems (as outlined here) to not be problems on the same hardware -- isn't floating point inaccuracy still deterministic? What am I missing?
tl;dr: If running a simulation on the same machine, using the same floating point math(?), shouldn't the simulation be deterministic?
Thank you very much for your time.
Yes, you are correct. A program executed on the same machine will give identical results each time (at least ideally; there might be cosmic rays or other external things that affect memory, but those are not our concern here). All calculations on a computer are deterministic, so any algorithm run on a computer will necessarily be deterministic too (which is the reason it's so hard to make random number generators)!
Most likely the randomness you see is implemented in the program with some random number generator, and the seed for the random numbers varies from run to run. Should you start the simulation with the same seed, you will see the same result.
Edit: I'm not familiar with Unity, but doing some more research seems to indicate that the FixedUpdate routine might be the problem.
Except for the audio thread, everything in unity is done in the main thread if you don't explicitly start a thread of your own. FixedUpdate runs on the very same thread, at the same interval as Update, except it makes up for lost time and simulates a fixed time step.
source
If this is the case, and the function itself looks somewhat like:
void physicsUpdate(double currentTime, double lastTime)
{
    double deltaT = currentTime - lastTime;
    // do physics using `deltaT`
}
Here we will naturally get different behaviour because deltaT is not the same across two runs. It depends on what other processes are running in the background, since they can delay the main thread. The function would be called at irregular intervals and you would observe different results between runs. Note that these irregularities are mostly not due to floating-point imprecision, but to inaccuracies in the integration. (E.g. velocity is often calculated by v = a*deltaT, which assumes a constant acceleration since the last update. This is in general not true.)
However, if the function looked like this:
void physicsUpdate(double deltaT)
{
    // do physics using `deltaT`
}
Every time you run the simulation this way, you will get the exact same result.
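A minimal sketch of how the fixed deltaT usually gets fed in (my own illustration, not Unity's implementation; the 1/60 s step and the frame function are assumptions):

const double fixedDeltaT = 1.0 / 60.0;   // assumed fixed step
double accumulator = 0.0;

void frame(double frameTime)             // frameTime: how long the last frame took (varies per run)
{
    accumulator += frameTime;
    while (accumulator >= fixedDeltaT)
    {
        physicsUpdate(fixedDeltaT);      // always the same deltaT, so the physics is reproducible
        accumulator -= fixedDeltaT;
    }
}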
I've not got much experience with Unity or its physics simulations, but I've found the following forum post, which links to an article that seems to indicate it is down to precision in the floating-point calculations.
As you've mentioned, a lot of people seem to keep rehashing this question!
The forum also links to this blog post which may shed some light on the issue.
I just want to know if I'm missing something. I'm from ObjC land, where NSTimeInterval is a double, which gives "sub-millisecond precision within a range of 10,000 years". Compare this to Unity, which, since it uses a float for time, starts to break down after a day (maybe even sooner). Math.Approximately(1 day, a day + 1 frame) returns true, for example (whereas 1 hour vs 1 hour + 1 frame correctly returns false). I actually experienced this when I left my game open all night and came back to it, noticing strange behavior in things that were time dependent.
Unity internally uses a double to track time. The double is converted to a float before being passed to user code.
This is largely because Unity uses floats in most other locations (floats are much better supported on a range of graphics cards/platforms).
Since Unity uses a double internally, you don't need to worry about it losing count / failing to increment time.
What you do need to worry about is that after the game has been running for a number of hours, the representable values become more and more sparse.
This can cause things that moved smoothly to start stuttering.
Personally, I tend to keep my own (float) time and reset it to zero at some sensible interval, ideally when it makes no difference (which depends on what you're using it for).
If Unity uses float for time then Unity is making a huge mistake and you should use other sources for time. The 24-bit precision of a float means that after a day you will only have about 4 ms accuracy. Bugs may start showing up long before that, and some games actually need time to be stable for much longer than a day.
Even if it doesn't seem like you need much time precision, using float means that your time precision is dropping as the game goes on, adding an extra cause for bugs.
There are many games that have had bugs because they use floats for time, and burning that bad decision into a game engine is a terrible idea. I discussed this problem a few years ago after seeing this mistake repeated many times in the games I was working on at the time:
https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
The main recommendation is to use double or int64 for time, and if you use double then start it with a value of about 5 billion so that your precision will be consistent throughout your game, instead of gradually dropping.
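A minimal sketch of that recommendation (my own illustration; the exact bias value and names are just for demonstration):

double gameTime = 5.0e9;   // start near 5 billion seconds: the ulp of a double is then about a
                           // microsecond and stays that size, so precision is consistent for the
                           // whole run instead of gradually dropping

void tick(double frameDeltaSeconds)
{
    gameTime += frameDeltaSeconds;   // accumulate in double, never in float
}

double elapsedSeconds()
{
    return gameTime - 5.0e9;         // subtract the bias when the real elapsed time is needed
}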
Unity3d uses floats for many components in the engine, so you will find that a lot of functions and values return or store floats. Once you have been programming in Unity3d for a while you will even get the inside joke in their build numbers: they usually look like 4.3.1f (everything is a float).
You should be able to use .NET to get time as a double if you use C#. For some things I also highly recommend using the .NET Math class instead of Unity's Mathf: the former works in doubles while the latter works in floats.
Float is used not only for time representation but everywhere in Unity and in most engines, simply because it's good enough for games and uses fewer resources. By "good enough" I mean that you probably won't need more precision in most situations. As for the example you gave, it's very rare that someone will run into that situation.
In this question, there's this very nice answer:
Floats have no problem doing precise integer arithmetic up to 2^24, which is what most people are (mostly irrationally) afraid of. Doubles do not solve the problem of common values being unrepresentable exactly - whether you get 0.10000001 or 0.10000000000000001, you still must make sure your code considers it "equal" to 0.09999999.
Doubles are slower on every platform you'll care about writing games, take twice the memory bandwidth, and have fewer specialized CPU instructions available.
Single-precision floats remain the standard for hardware graphics interfaces, both on the C/C++ side and inside shaders.
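A quick demonstration of that 2^24 limit (my own snippet, assuming IEEE 754 single precision):

#include <cstdio>

int main()
{
    float a = 16777216.0f;   // 2^24: exactly representable
    float b = a + 1.0f;      // 2^24 + 1: not representable, rounds back down to 2^24
    std::printf("%s\n", (a == b) ? "equal" : "different");   // prints "equal"
    return 0;
}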
So I have the following integral that I need to do numerically:
Int[ Exp( 0.5*(a*Cos(x) + b*Sin(x) + c*Cos(2x) + d*Sin(2x)) ) ], x = 0..2*Pi
The problem is that the integrand at any given value of x can be extremely large, on the order of e^2000, which is larger than I can deal with in double precision.
I haven't had much luck googling the following: how do you deal with large numbers in Fortran? I don't mean high precision; I don't care about knowing the result beyond double precision, and at the end I'll just be taking the log, but I need to be able to handle the large numbers until I can take the log.
Are there integration packages that can handle arbitrarily large numbers? Mathematica clearly can, so there must be something like this out there.
Cheers
This is probably an extended comment rather than an answer but here goes anyway ...
As you've already observed, Fortran isn't equipped, out of the box, to handle numbers as large as e^2000. I think you have 3 options.
1. Use mathematics to reduce your problem to one which does (or a number of related ones which do) fall within the numerical range that your Fortran compiler can compute.
2. Use Mathematica or one of the other computer algebra systems (e.g. Maple, SAGE, Maxima). All (I think) of these can be integrated into a Fortran program (with varying degrees of difficulty and integration).
3. Use a library for high-precision (often also called arbitrary-precision or multiple-precision) arithmetic. Your favourite search engine will turn up a number of these for you, some written in Fortran (and therefore easy to integrate), some written in C/C++ or other languages (and therefore slightly harder to integrate). You might start your search at Lawrence Berkeley or the GNU bignum library.
4. (Yes, I know I wrote that you have 3 options, but your question suggests that you aren't ready to consider this yet.) You could write your own high-/arbitrary-/multiple-precision functions. Fortran provides everything you need to construct such a library, there is a lot of work already done in the field to learn from, and it might be something of interest to you.
In practice it generally makes sense to apply as much mathematics as possible to a problem before resorting to a computer; that process can not only assist in solving the problem but also guide your selection or construction of a program to solve what's left of it.
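To make option 1 concrete, here is a rough sketch (in C++ rather than Fortran, purely to illustrate the idea) of pulling the largest exponent out of the integral so the machine only ever sees numbers of order 1: with f(x) = 0.5*(a*cos(x) + b*sin(x) + c*cos(2x) + d*sin(2x)), we have log(Int[exp(f(x))]dx) = M + log(Int[exp(f(x) - M)]dx) where M = max f(x), and exp(f(x) - M) <= 1 everywhere.

#include <cmath>
#include <algorithm>

const double kPi = 3.14159265358979323846;

// the exponent in the integrand
double f(double x, double a, double b, double c, double d)
{
    return 0.5 * (a * std::cos(x) + b * std::sin(x) + c * std::cos(2 * x) + d * std::sin(2 * x));
}

// returns the log of Int[exp(f(x))] over x = 0..2*Pi without ever forming the huge numbers
double logIntegral(double a, double b, double c, double d, int n)
{
    const double h = 2.0 * kPi / n;

    double M = f(0.0, a, b, c, d);                        // M = maximum of the exponent on the grid
    for (int i = 1; i <= n; ++i)
        M = std::max(M, f(i * h, a, b, c, d));

    double sum = 0.0;                                     // trapezoidal rule on the rescaled integrand
    for (int i = 0; i <= n; ++i)
    {
        double w = (i == 0 || i == n) ? 0.5 : 1.0;
        sum += w * std::exp(f(i * h, a, b, c, d) - M);    // always in [0, 1], no overflow
    }
    return M + std::log(h * sum);
}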
I agree with High Performance Mark that the best option here numerically is to use analysis to scale or simplify the result first.
I will mention that if you do want to brute force it, gfortran (as of 4.6, with the libquadmath library) has support for quadruple precision reals, which you can use by selecting the appropriate kind. As long as your answers (and the intermediate results!) don't get too much bigger than what you're describing, that may work, but it will generally be much slower than double precision.
This requires looking deeper at the problem you are trying to solve and the behavior of the underlying mathematics. To add to the good advice already provided by Mark and Jonathan, consider expanding the exponential and trig functions into Taylor series and truncating to the desired level of precision.
Also, take a step back and ask what you are trying to accomplish by calculating this value. As an example, I recently had to debug why I was getting outlandish results from a property correlation which was calculating vapor pressure of a fluid to see if condensation was occurring. I spent a long time trying to understand what was wrong with the temperature being fed into the correlation until I realized the case causing the error was a simulation of vapor detonation. The problem was not in the numerics but in the logic of checking for condensation during a literal explosion; physically, a condensation check made no sense. The real problem was the code was asking an unnecessary question; it already had the answer.
I highly recommend Forman Acton's Numerical Methods That (Usually) Work and Real Computing Made Real. Both focus on problems like this and suggest techniques to tame ill-mannered computations.
As a follow-up to this question:
I was in the process of implementing a calculator app using Apple's complex number support when I noticed that if one calculates using that support, one ends up with the following:
(1+i)^2=1.2246063538223773e-16 + 2i
Of course the correct identity is (1+i)^2=2i. This is a specific example of a more general phenomenon -- roundoff errors can be really annoying if they round a part that is supposed to be zero to something that is slightly nonzero.
Suggestions on how to deal with this? I could implement integer powers of complex numbers in other ways, but the general problem will remain, and my solution could itself cause other inconsistencies.
As you note, this is a standard rounding-error issue with floating point. As @Howard notes, you should likely round your double results back into the float range before displaying them.
I typically use FLT_EPSILON to help me with these kinds of things as well.
#define fequal(a,b) (fabs((a) - (b)) < FLT_EPSILON)
#define fequalzero(a) (fabs(a) < FLT_EPSILON)
With those, you might like a function like this (untested)
inline void froundzero(float *a) { if (fequalzero(*a)) *a = 0; }
The complex version is left as an exercise for the reader as they say :D
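For example, in C++ with std::complex (rather than Apple's complex support) a hedged sketch of the idea applied to the (1+i)^2 case might look like this:

#include <complex>
#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    std::complex<double> z(1.0, 1.0);
    std::complex<double> w = std::pow(z, 2.0);    // roughly 1.2246e-16 + 2i

    double re = w.real(), im = w.imag();
    if (std::fabs(re) < FLT_EPSILON) re = 0.0;    // same idea as fequalzero above
    if (std::fabs(im) < FLT_EPSILON) im = 0.0;

    std::printf("%g + %gi\n", re, im);            // prints "0 + 2i"
    return 0;
}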
I am developing an augmented reality application that (at the moment) wants to display a simple cube on top of a surface, and be able to move in space (both rotating and displacing) to look at the cube from all different angles. The problem of calibrating the camera doesn't apply here since I ask the user to place the iPhone on the surface he wants to place the cube on and then press a button to reset the attitude.
Finding the camera rotation is very simple with the gyroscope and Core Motion. I do it this way:
if (referenceAttitude != nil) {
[attitude multiplyByInverseOfAttitude:referenceAttitude];
}
CMRotationMatrix mat = attitude.rotationMatrix;
GLfloat rotMat[] = {
mat.m11, mat.m21, mat.m31, 0,
mat.m12, mat.m22, mat.m32, 0,
mat.m13, mat.m23, mat.m33, 0,
0, 0, 0, 1
};
glMultMatrixf(rotMat);
This works really well.
More problems arise, however, when I try to find the displacement in space during an acceleration.
The Apple Teapot example with Core Motion just adds the x, y and z values of the acceleration vector to the position vector. This (apart from not making much sense) has the result of returning the object to its original position after an acceleration (since the acceleration goes from positive to negative or vice versa).
They did it like this:
translation.x += userAcceleration.x;
translation.y += userAcceleration.y;
translation.z += userAcceleration.z;
What should I do to find the displacement from the acceleration at some instant (with a known time difference)? Looking at some other answers, it seems like I have to integrate twice to get velocity from acceleration and then position from velocity, but there is no example in code whatsoever, and I am not sure that is really necessary. Also, there is the problem that when the iPhone is lying still on a plane, the accelerometer values are not null (there is some noise, I think). How much should I filter those values? Am I supposed to filter them at all?
Cool, there are people out there struggling with the same problem, so it is worth spending some time on it :-)
I agree with westsider's statement, as I spent a few weeks experimenting with different approaches and ended up with poor results. I am sure that there won't be an acceptable solution for either larger distances or slow motions lasting for more than 1 or 2 seconds. If you can live with some restrictions like small distances (< 10 cm) and a given minimum velocity for your motions, then I believe there might be a chance to find a solution - no guarantee at all. If so, it will take a lot of hard research and a lot of frustration, but if you get it, it will be very very cool :-) Maybe you will find these hints useful:
First of all, to make things easy, just look at one axis, e.g. x, but consider both left (-x) and right (+x) so you have a representative situation.
Yes, you are right, you have to integrate twice to get the position as a function of time. For further processing you should also store the first integration's result (== velocity), because you will need it in a later stage for optimisation. Do it very carefully, because every tiny bug will lead to huge errors after a short period of time.
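A naive one-axis sketch of that data flow (simple Euler steps, just to show the structure; as the next point explains, you will want a better integration rule in practice):

float vX = 0.0f;   // first integral: velocity
float pX = 0.0f;   // second integral: position

// aX in m/s^2 with gravity already removed (i.e. userAcceleration), dt in seconds
void onAccelerationSample(float aX, float dt)
{
    vX += aX * dt;   // acceleration -> velocity
    pX += vX * dt;   // velocity -> position
}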
Always bear in mind that even a very small error (e.g. <0.1%) will grow rapidly after integrating twice. The situation becomes even worse after one second if you configure the accelerometer at, let's say, 50 Hz: 50 ticks are processed and the tiny, seemingly negligible error will outrun the "true" value. I would strongly recommend not relying on the trapezoidal rule but using at least Simpson's rule or a higher-degree Newton-Cotes formula.
Once you have managed this, you will have to keep an eye on setting up the right low-pass filtering. I cannot give a general value, but as a rule of thumb experimenting with filtering factors between 0.2 and 0.8 is a good starting point. The right value depends on your use case, for instance what kind of game it is, how fast it needs to react to events, ...
Now you will have a solution which works pretty well under certain circumstances and within a short period of time. But then, after a few seconds, you will run into trouble because your object starts drifting away. Now you enter the difficult part of the solution, which I eventually failed to handle within the given time scope :-(
One promising approach is to introduce something I call "synthetic forces" or "virtual forces". This is a strategy for reacting to several bad situations that cause the object to drift away even though the device remains fixed (without moving) in your hands. The most troubling one is a velocity greater than 0 without any acceleration. This is an unavoidable result of error propagation and can be handled by slowing down artificially, that is, introducing a virtual deceleration even if there is no real counterpart. A very simplified example:
if (vX > 0 && lastAccelerationXTimeStamp > 0.3sec) {
    vX *= 0.9;
}
You will need a combination of such conditions to tame the beast. A lot of trial and error is required to get a feeling for the right way to go, and this will be the hard part of the problem.
If you ever manage to crack it, pleeeease let me know; I am very curious to see whether it is possible in general or not :-)
Cheers Kay
When the iPhone 4 was very new, I spent many, many hours trying to get an accurate displacement using accelerometers and gyroscope. There shouldn't have been much concern about incremental drift, as the device needed to move only a couple of meters at most and the data collection typically ran for a few minutes at most. We tried all sorts of approaches and even had help from several Apple engineers. Ultimately, it seemed that the gyroscope wasn't up to the task. It was good for 3D orientation but that was it ... again, according to very knowledgeable engineers.
I would love to hear someone contradict this - because the app never really turned out as we had hoped, etc.
I am also trying to get displacement on the iPhone. Instead of using integration I used the basic physics formula d = .5*a*t^2, assuming an initial velocity of 0 (though it doesn't sound like you can assume an initial velocity of 0). So far it seems to work quite well.
My problem is that I'm using deviceMotion and the values are not correct. deviceMotion.gravity reads near 0. Any ideas? - OK, fixed: apparently deviceMotion.gravity has x, y, and z values. If you don't specify which one you want, you get back x (which should be near 0).
Finding this question two years later, I just found an AR project in the iOS 6 docset named pARk. It provides approximate displacement capture and calculation using the gyroscope, a.k.a. the CoreMotion framework.
I'm just starting to learn the code.
to be continued...