Just want to know if I'm missing something. I'm from ObjC land, where NSTimeInterval is a double, which gives "sub-millisecond precision within a range of 10,000 years". Compare this to Unity, which, since it uses a float for time, starts to break down after a day (maybe even sooner). Math.Approximately(1 day, a day + 1 frame) returns true, for example (whereas 1 hour vs 1 hour + 1 frame correctly returns false). I actually experienced this when I left my game open all night and came back to it, noticing strange behavior in things that were time dependent.
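For reference, here is a minimal snippet (assuming Unity's Mathf and a hypothetical 60 fps frame duration) that reproduces the comparison described above; it is only a sketch of the symptom, not anything from Unity's internals:

using UnityEngine;

public class FloatTimeDemo : MonoBehaviour
{
    void Start()
    {
        const float frame = 1f / 60f; // hypothetical frame duration
        const float hour  = 3600f;
        const float day   = 86400f;

        // At one hour a single frame is still distinguishable...
        Debug.Log(Mathf.Approximately(hour, hour + frame)); // false
        // ...but at one day the float can no longer tell them apart.
        Debug.Log(Mathf.Approximately(day, day + frame));   // true
    }
}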
Unity internally uses a double to track time. The double is converted to a float before being passed to user code.
This is largely because Unity uses floats in most other locations (floats are much better supported on a range of graphics cards/platforms).
Since Unity uses a double internally, you don't need to worry about it losing count / failing to increment time.
What you do need to worry about is that after the game has been running for a number of hours, the representable values become more and more sparse.
This can cause things that moved smoothly to start stuttering.
Personally, I tend to keep my own (float) time and reset it to zero at some sensible interval, ideally at a point where it makes no difference (which depends on what you're using it for).
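As a rough sketch of that approach (the class and method names are made up, not a Unity API), a per-level timer might look like:

using UnityEngine;

public class LocalTimer : MonoBehaviour
{
    float elapsed; // our own float clock, kept small

    void Update()
    {
        // Accumulate the per-frame delta instead of reading Time.time directly.
        elapsed += Time.deltaTime;
    }

    // Call this at a moment when nothing depends on the old value,
    // e.g. on a level load or scene transition.
    public void ResetClock()
    {
        elapsed = 0f;
    }
}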
If Unity uses float for time then Unity is making a huge mistake and you should use other sources for time. The 24-bit precision of a float means that after a day you will only have about 4 ms accuracy. Bugs may start showing up long before that, and some games actually need time to be stable for much longer than a day.
Even if it doesn't seem like you need much time precision, using float means that your time precision is dropping as the game goes on, adding an extra cause for bugs.
There are many games that have had bugs because they use floats for time, and burning that bad decision into a game engine is a terrible idea. I discussed this problem a few years ago after seeing this mistake repeated many times in the games I was working on at the time:
https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
The main recommendation is to use double or int64 for time, and if you use double then start it with a value of about 5 billion so that your precision will be consistent throughout your game, instead of gradually dropping.
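A minimal sketch of that recommendation (the class and member names here are made up; the 5-billion starting value is the one suggested in the linked article):

public static class GameClock
{
    // Starting near 2^32 keeps the double's absolute precision roughly
    // constant (about a microsecond) for the whole run, instead of being
    // needlessly fine at startup and coarser later.
    const double StartOffset = 5e9;

    static double now = StartOffset;

    public static void Advance(double deltaSeconds)
    {
        now += deltaSeconds;
    }

    // Subtract the offset only at the point of use; keep the stored value large.
    public static double SecondsSinceLaunch => now - StartOffset;
}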
Unity3d uses floats for many components in the engine, so you will find that a lot of functions return floats and a lot of values are stored as floats. Once you have been programming in Unity3d for a while you will even get the inside joke in their build numbers -- they usually look like this: 4.3.1f -- everything is a float.
You should be able to use .NET to get time as a double if you use C#. Also, for some things, I highly recommend using the .NET System.Math class instead of Unity's Mathf -- one works in doubles, the other only in floats.
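For example, a Stopwatch-based clock gives you elapsed time as a double without going through Unity's float-based Time at all (a sketch; plain .NET, nothing Unity-specific):

using System.Diagnostics;

public static class PreciseTime
{
    static readonly Stopwatch watch = Stopwatch.StartNew();

    // Elapsed seconds as a double, derived from the high-resolution timer.
    public static double Seconds =>
        (double)watch.ElapsedTicks / Stopwatch.Frequency;
}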
Float is used not only in time representation but everywhere in Unity and in most engines simply because it's good enough for games and uses less resources. By "good enough" I mean that you probably won't need more precision in most situations. Like the example you gave, it's very rare that someone will run into this situation.
In this question, there's this very nice answer:
Floats have no problem doing precise integer arithmetic up to 2^24, which is what most people are (mostly irrationally) afraid of. Doubles do not solve the problem of common values being unrepresentable exactly - whether you get 0.10000001 or 0.10000000000000001, you still must make sure your code considers it "equal" to 0.09999999.
Doubles are slower on every platform you'll care about writing games, take twice the memory bandwidth, and have fewer specialized CPU instructions available.
Single-precision floats remain the standard for hardware graphics interfaces, both on the C/C++ side and inside shaders.
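A quick way to see both points, that integers are exact up to 2^24 and that equality needs a tolerance regardless of width (a standalone C# sketch, not Unity code):

using System;

class FloatFacts
{
    static void Main()
    {
        // Integers are exact up to 2^24; one past that is no longer representable.
        float f = 16777216f;            // 2^24
        Console.WriteLine(f + 1f == f); // True: 16,777,217 rounds back to 2^24

        // Neither float nor double stores 0.1 exactly,
        // so equality checks need a tolerance either way.
        double a = 0.1 + 0.2;
        Console.WriteLine(a == 0.3);                 // False
        Console.WriteLine(Math.Abs(a - 0.3) < 1e-9); // True
    }
}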
Right now, my current method for updating very large numbers is as follows:
- Keep track of the health of a given monster (the thing being attacked) with both a double for the significant digits and an int for the exponent (the power of ten).
- If the monster is attacked, only update the small double / exponent values.
By following the above, I am only ever dealing with calculations on small doubles between 0 and 10. However, I feel as though this is extremely complicated in comparison to simply using BigIntegers.
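The question is about Swift, but the structure being described is language-agnostic; a rough sketch of the significand/exponent pair and the renormalisation step (written in C# here, names made up) might look like:

public struct BigValue
{
    public double Significand; // magnitude kept in [1, 10)
    public int    Exponent;    // power of ten

    // Bring the significand's magnitude back into [1, 10) after an operation.
    public void Normalize()
    {
        if (Significand == 0) { Exponent = 0; return; }
        while (System.Math.Abs(Significand) >= 10) { Significand /= 10; Exponent++; }
        while (System.Math.Abs(Significand) < 1)   { Significand *= 10; Exponent--; }
    }

    // Apply damage expressed the same way; assumes the two exponents are
    // close enough that the scale factor fits comfortably in a double.
    public void Subtract(BigValue damage)
    {
        double scaled = damage.Significand *
                        System.Math.Pow(10, damage.Exponent - Exponent);
        Significand -= scaled;
        Normalize();
    }
}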
I have worked in Java previously, and using BigInt resulted in a HUGE performance loss when the exponential numbers became massive (e.g., 10 ^ 20000 +), if I remember correctly.
In the interest of coding a game, and not having to re-evaluate formulas later, what is the best course of action?
Am I really saving that much performance by keeping calculations between small numbers? Or is there an implementation of BigDouble / BigInt for Swift that makes any gain in performance by another implementation negligible? I am well versed in how to use BigDouble / BigInt, so I am really only concerned with the performance difference between using either of those versus an implementation of big numbers by splitting the number into a double (to represent significant digits) and an int (to represent the exponent).
Thank you, and if there needs to be any clarification, I can provide it.
Developing with various game physics engines over the years, I've noticed that on the same machine I observe widely different results in physics simulations between runs. Most recently, the Unity engine does this, even though physics are calculated at set intervals of time (FixedUpdate) -- as far as I can determine it should be completely independent of frame-rate.
I've asked this question on game forums before, and was told it was due to chaotic motion: see double pendulum. But, even the double pendulum is deterministic if the starting conditions are exactly controlled, right? On the same machine, shouldn't floating point math behave the same way?
I understand that there are problems with floating point math accuracy, but I understand those problems (as outlined here) to not be problems on the same hardware -- isn't floating point inaccuracy still deterministic? What am I missing?
tl;dr: If running a simulation on the same machine, using the same floating point math(?), shouldn't the simulation be deterministic?
Thank you very much for your time.
Yes, you are correct. A program executed on the same machine will give identical results each time (at least ideally; there might be cosmic rays or other external things that affect memory and whatnot, but I would say these are not our concern). All calculations on a computer are deterministic, and so all algorithms on a computer will necessarily be deterministic (which is the reason it's so hard to make random number generators)!
Most likely the randomness you see is implemented in the program with some random number generator, and the seed for the random numbers varies from run to run. Should you start the simulation with the same seed, you will see the same result.
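For example, with a fixed seed the sequence is reproducible run after run (plain .NET, nothing Unity-specific):

var a = new System.Random(12345);
var b = new System.Random(12345);

// Same seed, same sequence: this prints True on every run.
System.Console.WriteLine(a.NextDouble() == b.NextDouble());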
Edit: I'm not familiar with Unity, but doing some more research seems to indicate that the FixedUpdate routine might be the problem.
Except for the audio thread, everything in unity is done in the main thread if you don't explicitly start a thread of your own. FixedUpdate runs on the very same thread, at the same interval as Update, except it makes up for lost time and simulates a fixed time step.
source
If this is the case, and the function itself looks somewhat like:
void physicsUpdate(double currentTime, double lastTime)
{
double deltaT = currentTime - lastTime;
// do physics using `deltaT`
}
Here we will naturally get different behaviour, because deltaT will not be the same between two different runs; it depends on what other processes are running in the background, since they can delay the main thread. The function would be called at irregular intervals and you would observe different results between runs. Note that these irregularities are mostly not due to floating-point imprecision, but due to inaccuracies in the integration. (E.g. velocity is often calculated as v = a*deltaT, which assumes a constant acceleration since the last update; this is in general not true.)
However, if the function would look like this:
void physicsUpdate(double deltaT)
{
// do physics using `deltaT`
}
Every time you run the simulation this way, you will get exactly the same result.
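This is essentially the classic fixed-timestep pattern. A sketch of how a loop can feed a constant deltaT into the physics no matter how irregular the frame times are (illustrative only, not Unity's actual implementation):

class FixedStepLoop
{
    const double FixedDelta = 0.02; // e.g. a 20 ms physics step
    double accumulator;

    // Called once per rendered frame with the wall-clock time that frame took.
    public void Frame(double frameTime)
    {
        accumulator += frameTime;

        // Run as many fixed steps as fit; the remainder carries over.
        while (accumulator >= FixedDelta)
        {
            PhysicsUpdate(FixedDelta); // always the same deltaT, so reproducible
            accumulator -= FixedDelta;
        }
    }

    void PhysicsUpdate(double deltaT)
    {
        // do physics using `deltaT`
    }
}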
I've not got much experience with Unity or its physics simulations, but I've found the following forum post, which links to an article suggesting it comes down to precision in the floating-point calculations.
As you've mentioned, a lot of people seem to keep rehashing this question!
The forum also links to this blog post which may shed some light on the issue.
I understand that there can be a .000000000000001 margin of error in double math, and that this is made worse by multiplication, which enlarges the margin of error. With that said, is it possible to round off every calculation to a certain number of significant digits (maybe 4 decimal places) to achieve consistency across all platforms? Would it simply be more efficient to use decimal math, or will decimal math require similar rounding?
I will be using this for my lockstep RTS game, which requires a deterministic physics engine for synchronous multiplayer. I'm using C#. Some calculations I wish to perform include Sqrt, Sin, and Pow from the System.Math library.
I've actually been thinking about the whole matter in the wrong way. Instead of trying to minimize errors with greater accuracy (and more overhead), I should just use a type that stores and operates deterministically. I used the answer here: Fixed point math in C#?, which helped me create a fixed-point type that works perfectly and efficiently.
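For illustration only, a very small sketch of what such a type can look like (a 16.16 format; a real implementation needs overflow handling, division, and deterministic replacements for Sqrt, Sin, and Pow):

public struct Fixed
{
    const int Shift = 16;     // 16.16 fixed point
    public readonly long Raw; // value * 2^16, stored as an integer

    Fixed(long raw) { Raw = raw; }

    public static Fixed FromInt(int v)     => new Fixed((long)v << Shift);
    public static Fixed FromFloat(float v) => new Fixed((long)(v * (1 << Shift)));

    public static Fixed operator +(Fixed a, Fixed b) => new Fixed(a.Raw + b.Raw);
    public static Fixed operator -(Fixed a, Fixed b) => new Fixed(a.Raw - b.Raw);
    public static Fixed operator *(Fixed a, Fixed b) => new Fixed((a.Raw * b.Raw) >> Shift);

    public float ToFloat() => (float)Raw / (1 << Shift);
}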
I'm building a library for iPhone (Speex, but I'm sure it will apply to a lot of other libs too) and the make script has an option to use fixed point instead of floating point.
As the iPhone ARM processor has the VFP extension and performs floating-point calculations very well, do you think it's a better choice to use the fixed-point option?
If someone has already benchmarked this and wants to share, I would really appreciate it.
Well, it depends on the setup of your application; here are some guidelines:
First, try setting the optimization level to -Os (Fastest, Smallest)
Turn on Relax IEEE Compliance
If your application can process floating-point numbers in contiguous memory locations independently, look at the ARM NEON intrinsics and assembly instructions; they can process up to 4 floating-point numbers in a single instruction.
If you are already heavily using floating-point math, try switching some of your logic to fixed point (but keep in mind that moving from a NEON register to an integer register causes a full pipeline stall)
If you are already heavily using integer math, try changing some of your logic to floating point math.
Remember to profile before optimization
And above all, better algorithms will always beat micro-optimizations such as the above.
If you are dealing with large blocks of sequential data, NEON is definitely the way to go.
Float or fixed, that's a good question. NEON is somewhat faster with fixed point, but I'd keep the native input format, since conversions take time and, eventually, extra memory.
Even if the lib offers different output formats as an option, it almost always means lib-internal conversions. So I guess float is the native one in this case; stick to it.
No one prevents you from micro-optimizing better algorithms. And usually, the better the algorithm, the more performance gain can be achieved through micro-optimizations, due to the pipelining on modern machines.
I'd stay away from intrinsics though. There are so many posts on the net complaining about intrinsics doing something crazy, especially when dealing with immediate values.
It can and will get very troublesome, and you can hardly optimize anything with intrinsics either.
I am developing an augmented reality application that (at the moment) wants to display a simple cube on top of a surface, and be able to move in space (both rotating and displacing) to look at the cube from all the different angles. The problem of calibrating the camera doesn't apply here, since I ask the user to place the iPhone on the surface he wants to place the cube on and then press a button to reset the attitude.
Finding the camera rotation is very simple with the gyroscope and Core Motion. I do it this way:
// Make the current attitude relative to the reference attitude captured
// when the user pressed the reset button.
if (referenceAttitude != nil) {
    [attitude multiplyByInverseOfAttitude:referenceAttitude];
}
CMRotationMatrix mat = attitude.rotationMatrix;

// Expand the 3x3 rotation into a 4x4 matrix and apply it to the
// current OpenGL matrix.
GLfloat rotMat[] = {
    mat.m11, mat.m21, mat.m31, 0,
    mat.m12, mat.m22, mat.m32, 0,
    mat.m13, mat.m23, mat.m33, 0,
    0,       0,       0,       1
};
glMultMatrixf(rotMat);
This works really well.
More problems arise, however, when I try to find the displacement in space from an acceleration.
The Apple Teapot example with Core Motion just adds the x, y and z values of the acceleration vector to the position vector. This (apart from not making much sense) has the result of returning the object to the original position after an acceleration (since the acceleration goes from positive to negative, or vice versa).
They did it like this:
translation.x += userAcceleration.x;
translation.y += userAcceleration.y;
translation.z += userAcceleration.z;
What should I do to find the displacement from the acceleration at some instant (with a known time difference)? Looking at some other answers, it seems I have to integrate twice to get velocity from acceleration and then position from velocity. But there is no example in code whatsoever, and I don't think that is really necessary. Also, there is the problem that when the iPhone is lying still on a surface, the accelerometer values are not null (there is some noise, I think). How much should I filter those values? Am I supposed to filter them at all?
Cool, there are people out there struggling with the same problem, so it is worth spending some time on it :-)
I agree with westsider's statement, as I spent a few weeks experimenting with different approaches and ended up with poor results. I am sure that there won't be an acceptable solution for either larger distances or slow motions lasting more than 1 or 2 seconds. If you can live with some restrictions like small distances (< 10 cm) and a given minimum velocity for your motions, then I believe there might be a chance to find a solution; no guarantee at all. If so, it will take a pretty hard time of research and a lot of frustration, but if you get it, it will be very, very cool :-) Maybe you will find these hints useful:
First of all, to make things easy, just look at one axis, e.g. x, but consider both left (-x) and right (+x) to have a representative situation.
Yes, you are right: you have to integrate twice to get the position as a function of time. And for further processing you should store the result of the first integration (the velocity), because you will need it in a later stage for optimisation. Do it very carefully, because every tiny bug will lead to huge errors after a short period of time.
Always bear in mind that even a very small error (e.g. < 0.1%) will grow rapidly after integrating twice. The situation becomes even worse after one second if you configure the accelerometer at, say, 50 Hz: 50 ticks are processed and the tiny, seemingly negligible error will outrun the "true" value. I would strongly recommend not relying on the trapezoidal rule but using at least Simpson's rule or a higher-degree Newton-Cotes formula.
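For concreteness, a minimal sketch of the two-stage integration on one axis; it uses the trapezoidal rule for simplicity, which, per the warning above, you would want to replace with Simpson's rule or better, and it ignores filtering (see the next paragraph). Written as a C#-style sketch with made-up names:

class OneAxisIntegrator
{
    double velocity;  // first integral of the acceleration (keep it, see above)
    double position;  // second integral
    double lastAccel; // previous sample, needed for the trapezoid

    // Call once per accelerometer sample; accel in m/s^2, dt in seconds.
    public void AddSample(double accel, double dt)
    {
        // Trapezoidal rule: average of the two samples times the interval.
        double newVelocity = velocity + 0.5 * (lastAccel + accel) * dt;
        position += 0.5 * (velocity + newVelocity) * dt;

        velocity  = newVelocity;
        lastAccel = accel;
    }
}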
If you have managed this, you will have to keep an eye on setting up the right low-pass filtering. I cannot give a general value, but as a rule of thumb, experimenting with filter factors between 0.2 and 0.8 is a good starting point. The right value depends on your use case: what kind of game it is, how fast you need to react to events, ...
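Assuming the usual exponential smoothing is what is meant here (the factor above being the alpha below), a tiny sketch:

class LowPassFilter
{
    readonly double alpha; // ~0.2 (smooth, laggy) to ~0.8 (responsive, noisy)
    double filtered;

    public LowPassFilter(double alpha) { this.alpha = alpha; }

    public double Next(double raw)
    {
        filtered = alpha * raw + (1.0 - alpha) * filtered;
        return filtered;
    }
}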
Now you will have a solution which works pretty well under certain circumstances and within a short period of time. But then, after a few seconds, you will run into trouble because your object starts drifting away. Now you will enter the difficult part of the solution, which I eventually failed to handle within the given time scope :-(
One promising approach is to introduce something I call "synthetic forces" or "virtual forces". This is a strategy for reacting to several bad situations that cause the object to drift away even though the device remains still in your hands. The most troubling one is a velocity greater than 0 without any acceleration. This is an unavoidable result of error propagation and can be handled by slowing down artificially, that is, introducing a virtual deceleration even if there is no real counterpart. A very simplified example:
// If there has been no real x acceleration for a while, damp the velocity.
if (vX > 0 && lastAccelerationXTimeStamp > 0.3 /* sec */) {
    vX *= 0.9;
}
You will need a combination of such conditions to tame the beast. A lot of trial and error is required to get a feeling for the right way to go, and this will be the hard part of the problem.
If you ever manage to crack it, pleeeease let me know; I am very curious to see whether it is possible in general or not :-)
Cheers Kay
When the iPhone 4 was very new, I spent many, many hours trying to get an accurate displacement using the accelerometers and gyroscope. There shouldn't have been much concern about incremental drift, as the device needed to move only a couple of meters at most and the data collection typically ran for a few minutes at most. We tried all sorts of approaches and even had help from several Apple engineers. Ultimately, it seemed that the gyroscope wasn't up to the task. It was good for 3D orientation, but that was it ... again, according to very knowledgeable engineers.
I would love to hear someone contradict this - because the app never really turned out as we had hoped, etc.
I am also trying to get displacement on the iPhone. Instead of using integration, I used the basic physics formula d = 0.5 * a * t^2, assuming an initial velocity of 0 (it doesn't sound like you can assume an initial velocity of 0). So far it seems to work quite well.
My problem is that I'm using deviceMotion and the values are not correct: deviceMotion.gravity reads near 0. Any ideas? - OK, fixed: apparently deviceMotion.gravity has x, y, and z values. If you don't specify which one you want, you get back x (which should be near 0).
Finding this question two years later, I just found an AR sample project in the iOS 6 docset named pARk. It provides approximate displacement capture and calculation using the gyroscope, i.e. the CoreMotion framework.
I'm just starting to learn the code.
to be continued...