Implementing a saved ghost for each level in a game - iPhone

I'm developing a video game for the iPhone. I want each level to save a "ghost" of the best run, just like in Mario Kart.
I've already implemented it, but I used a brute force approach that stores (x, y, rotation) values each frame (60 per second). Needless to say this is very expensive, especially for a mobile device.
I guess I could store the data less often and interpolate when rendering the ghost, but I'm not 100% sure how to do this and if it will look good.
Has anyone done this before? How would you implement it? Is there a standard approach?

Linear interpolation is very easy, if that's what you want. But if the objects you want to interpolate can have non-linear trajectories (from ballistic or lateral acceleration effects), things get more complicated.
The simplest approach would be to record an object's initial position and its movement vector (x, y, z speed, as offsets per time unit). You won't have to record it again for that object until it changes its speed and/or direction. Then you just record the elapsed time, the new position and the new movement vector (theoretically you don't have to record the position again, just the vectors, but I'd recommend doing so to have a check value; after the program is debugged you can discard it).
Then, to play back, you place the object at the original position and, for each time frame, add the movement offset to it until you reach the time of the next recorded sample. And so on.
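A minimal sketch of that keyframe idea in Swift (the types and names are illustrative, not from any particular engine): a keyframe is stored only when velocity or rotation changes, and playback extrapolates from the most recent keyframe.

import Foundation
import CoreGraphics

// Hypothetical ghost keyframe: recorded only when speed/direction/rotation changes,
// instead of 60 (x, y, rotation) samples per second.
struct GhostKeyframe {
    let time: TimeInterval      // elapsed time since the run started
    let position: CGPoint       // check value; could be derived from the previous keyframe
    let velocity: CGVector      // movement offset per second from this keyframe onward
    let rotation: CGFloat
}

struct GhostRecording {
    private(set) var keyframes: [GhostKeyframe] = []

    // Call only when the object's velocity or rotation actually changes.
    mutating func record(time: TimeInterval, position: CGPoint,
                         velocity: CGVector, rotation: CGFloat) {
        keyframes.append(GhostKeyframe(time: time, position: position,
                                       velocity: velocity, rotation: rotation))
    }

    // Playback: take the last keyframe at or before `time` and advance along its velocity.
    func position(at time: TimeInterval) -> CGPoint? {
        guard let frame = keyframes.last(where: { $0.time <= time }) else { return nil }
        let dt = CGFloat(time - frame.time)
        return CGPoint(x: frame.position.x + frame.velocity.dx * dt,
                       y: frame.position.y + frame.velocity.dy * dt)
    }
}

If the velocity changes every frame (physics forces, analog steering), this degenerates back to per-frame samples; in that case recording at a coarser fixed rate (say 10 Hz) and interpolating between samples is usually indistinguishable for a ghost.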

Related

Unity - In what cases should I add Time.deltaTime?

I'm new to Unity and I see many times that Time.deltaTime needs to be added. In which cases should I add it? I know it's used so that movement isn't exaggerated when the computer renders frames quickly, i.e. to keep things frame-rate independent.
For example, in the next case, do I need to add Time.deltaTime?
playerRigidbody.AddForce(Vector3.up * 100 * Time.deltaTime, ForceMode.Impulse);
Time.deltaTime is the number of seconds it took the engine to process the previous frame. Its calculation is rather simple: it uses the system's internal clock to compare the system time when the engine started processing the previous frame with the system time when it started processing the current frame. Every motherboard has a "system clock" responsible for keeping track of time. Operating systems have access to that clock and provide APIs to read it, and Unity gets the time from those APIs - that's how things stay synchronized.
Think of a game as a movie, which is essentially a sequence of images. The difference is that a movie is rendered at a fixed rate of 24 images per second, while a game doesn't have a fixed frame rate.
In a movie, if a car travels at 1 meter per second, each image will make it move by 1/24 meter, and after 24 images (1 second) the car will have traveled exactly 1 meter. It's easy because we know that each frame takes exactly 1/24 second.
In a game, we have to do the same thing, except the frame rate varies. Some frames can take 1/60 second, others can take 1/10 second, so we can't use a fixed ratio. Instead of a fixed number we use Time.deltaTime: each frame, the car moves a distance proportional to the duration of that frame. After roughly 1 second, the car will have traveled roughly 1 meter.
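To make that concrete outside any particular engine, here's a minimal sketch (in Swift rather than Unity C#; all names are hypothetical) of the same car moving in a frame-rate-independent way:

import Foundation

// Hypothetical frame-rate-independent movement: the car advances by
// speed * deltaTime each frame, so the distance covered depends only on
// elapsed time, not on how many frames that time was split into.
var carPositionMeters = 0.0
let metersPerSecond = 1.0

func update(deltaTime: TimeInterval) {
    carPositionMeters += metersPerSecond * deltaTime
}

// Simulate one second of uneven frame times.
var elapsed = 0.0
var frame = 0
while elapsed < 1.0 {
    let dt = (frame % 2 == 0) ? 1.0 / 60.0 : 1.0 / 10.0
    update(deltaTime: dt)
    elapsed += dt
    frame += 1
}
// carPositionMeters is now roughly 1.0, just like the movie car.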
Delta is the mathematical symbol for a finite difference. Its use is very common in English when talking about something that changed over time.
deltaTime is a difference of time, so it's a Delta.
Shorter Terms
You must always use Time.deltaTime when moving objects over time, either by supplying it yourself or by using functions like SmoothDamp that take Time.deltaTime into account by default (though few functions do). But you must never apply Time.deltaTime to input that's already framerate-independent, such as mouse movement, since in that case using Time.deltaTime does the opposite of what you intend.
If you're ever unsure you can always toggle vsync on and off and observe any potential speed differences. If the game's already running close to the monitor refresh rate anyway, then make use of Application.targetFrameRate instead.
In very easy words
Time.deltaTime is the time that has passed since the last frame was rendered.
By multiplying a value by it, you are effectively treating that value as Something per second rather than Something per frame, so the result no longer depends on the frame rate.
Is it needed?
Now, whether you need to use it depends entirely on your specific use case! In your case, for AddForce: no!
The force influences the velocity of a physics object. The velocity itself already is an absolute per second vector.
Usually there are two use-cases for AddForce:
It is called continuously, but from within FixedUpdate
Because FixedUpdate is not called every frame but rather at fixed real-time intervals (0.02 seconds by default), you do not need Time.deltaTime. The docs already show this in their example.
It is called only as a single event anyway (e.g. a jump)
Here there is nothing continuous, so you neither need nor want Time.deltaTime, since a single event can't be frame-rate dependent anyway.

Calculating Lean Angle with Core Motion

My application has a recording session. When the user starts a recording session, I begin collecting data from the device's CMMotionManager object and store it in Core Data to process and present later. The data I'm collecting includes GPS, accelerometer and gyro data, sampled at 10 Hz.
Currently I'm struggling to calculate the lean angle of the device from the motion data. Using the gravity data it's possible to work out which side of the device is facing the ground, but I want to calculate the left or right angle between the user and the ground regardless of travel direction.
This problem requires some linear algebra knowledge to solve. For example, at some point I have to calculate the equation of a 3D line lying on a calculated plane. I've been working on this for a day and it keeps getting more complex, and I'm not good at math at all. Some math examples related to the problem would be appreciated too.
It depends on what you want to do with the collected data and how the user will move with the recording iPhone in their pocket. The reason is that Euler angles are neither a safe nor a unique way to express a rotation. Consider a situation where the user puts the phone upright into their jeans' back pocket and then turns 90° to the left. Because CMAttitude is relative to a device lying flat on a table, you get two subsequent rotations for (pitch=x, roll=y, yaw=z) according to this picture:
pitch +90° for getting the phone upright => (90, 0, 0)
roll +90° for turning left => (90, 90, 0)
But you can get the same position by:
yaw +90° for turning the phone left (0, 0, 90)
pitch -90° for making the phone upright (-90, 0, 90)
You see two different representations, (90, 90, 0) and (-90, 0, 90), for the same rotation, and there are more of them. So you press the Start button, do some fancy rotations to put the phone into the pocket, and you are in trouble because you can't rely on Euler angles for more complex motions (see gimbal lock for more headaches on this ;-)
Now the good news: you are right, linear algebra will do the job. What you can do is force your users to always put the phone in the same position, e.g. fixed upright in the right back pocket, and calculate the angle(s) relative to the ground by taking the dot product of the gravity vector from CMDeviceMotion, g = (x, y, z), and the position vector p, which is the -Y axis (0, -1, 0) in the upright position:
g • p = x*0 + y*(-1) + z*0 = -y = ||g|| * 1 * cos(alpha)
=> alpha = arccos(-y / 9.81) as the total angle. Note that the gravitational acceleration g is a constant of about 9.81
To get the left-right lean angle and the forward-back angle we use the tangent:
alphaLR = arctan (x/y)
alphaFB = arctan (z/y)
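A sketch of these formulas in Swift (my code, not the answer's; it divides by the measured gravity magnitude rather than 9.81 so the units of the gravity vector don't matter, and uses atan2 instead of a raw arctan to avoid dividing by zero):

import CoreMotion
import Foundation

// Assumes the phone is carried upright so its -Y axis (0, -1, 0) points at the ground.
func leanAngles(from motion: CMDeviceMotion) -> (total: Double, leftRight: Double, frontBack: Double) {
    let g = motion.gravity                                   // gravity in device coordinates
    let magnitude = sqrt(g.x * g.x + g.y * g.y + g.z * g.z)

    // g • p = -y = ||g|| * cos(alpha)  =>  alpha = arccos(-y / ||g||)
    let total = acos(-g.y / magnitude)

    // alphaLR = arctan(x/y), alphaFB = arctan(z/y); atan2 keeps the result defined
    // when y ≈ 0 (the sign convention may need flipping for your definition of "left").
    let leftRight = atan2(g.x, -g.y)
    let frontBack = atan2(g.z, -g.y)

    let toDegrees = 180.0 / Double.pi
    return (total * toDegrees, leftRight * toDegrees, frontBack * toDegrees)
}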
[UPDATE:]
If you can't rely on having the phone at a predefined position like (0, -1, 0) in the equations above, you can only calculate the total angle but not the specific angles alphaLR and alphaFB. The reason is that you only have one axis of the new coordinate system where you need two of them. The new Y axis y' will then be defined as the average gravity vector, but you don't know your new X axis, because every vector perpendicular to y' would be valid.
So you would have to provide further information, e.g. let the users walk a longer distance in one direction without deviating and use GPS and magnetometer data to get the second axis z'. Sounds pretty error prone in practice.
The total angle is no problem as we can replace (0, -1, 0) with the average gravity vector (pX, pY, pZ):
g • p = x*pX + y*pY + z*pZ = ||g|| * ||p|| * cos(alpha) = ||g||^2 * cos(alpha)
alpha = arccos((x*pX + y*pY + z*pZ) / 9.81^2)
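The same dot product in Swift, against an averaged calibration vector instead of the fixed (0, -1, 0) (a sketch; `reference` stands for the average gravity vector captured while the phone sits in the pocket):

import CoreMotion
import Foundation

// Total lean angle relative to an averaged calibration vector. Normalizing both
// vectors lets the absolute units (g's vs. m/s^2) cancel out of the formula.
func totalLeanAngle(current g: CMAcceleration, reference p: CMAcceleration) -> Double {
    let dot = g.x * p.x + g.y * p.y + g.z * p.z
    let magG = sqrt(g.x * g.x + g.y * g.y + g.z * g.z)
    let magP = sqrt(p.x * p.x + p.y * p.y + p.z * p.z)
    // alpha = arccos((g • p) / (||g|| * ||p||)), clamped for floating-point safety
    let cosAlpha = max(-1.0, min(1.0, dot / (magG * magP)))
    return acos(cosAlpha) * 180.0 / Double.pi                // degrees
}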
Two more things to bear in mind:
Different people wear different trousers with different pockets. So the gravity vector will differ even for the same person wearing other clothes, and you might need some kind of normalisation
CMMotionManager does not work in the background, i.e. the users must not press the standby button
If I understand your question, I think you are interested in getting the attitude of your device. You can do this using the attitude property of the CMDeviceMotion object that you get from the deviceMotion property of the CMMotionManager object.
There are two angles in the CMAttitude class that you might be interested in: roll and pitch. If you imagine your device as an airplane with the propeller at the top (where the headphone jack is), pitch is the angle the plane/device would make with the ground if the plane were in a climb or dive. Meanwhile, roll is the angle that the "wings" would make with the ground if the plane were banking or in mid barrel roll.
(BTW, there is a third angle called yaw that I think is not relevant for your question.)
The angles will be given in radians, but it's easy enough to convert them to degrees if that's what you want (by multiplying by 180 and then dividing by pi).
Assuming I understand what you want, the good news is that you may not need to understand any linear algebra to capture and use these angles. (If I'm missing something, please clarify and I'd be happy to help further.)
UPDATE (based on comments):
The attitude values in the CMAttitude object are relative to the ground (i.e., the default reference frame has the Z-axis vertical, pointing in the opposite direction to gravity), so you don't have to worry about cancelling out gravity. So, for example, if you lay your device flat on a table top and then roll it up onto its side, the roll property of the CMAttitude object will change from 0 to plus or minus 90 degrees (±0.5π radians), depending on which side you roll it onto. Meanwhile, if you start with it lying flat and then gradually stand it up on its end, the same will happen to the pitch property.
While you can use the pitch, roll, and yaw angles directly if you want, you can also set a different reference frame (e.g., a different direction for "up"). To do this, just capture the attitude in that orientation during a "calibration" step and then use CMAttitude's multiplyByInverseOfAttitude: method to transform your attitude data to the new reference frame.
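A sketch of that calibration step in Swift (CMAttitude's own methods are real; the motion-manager setup, variable names and calibration trigger are my assumptions):

import CoreMotion

let motionManager = CMMotionManager()
var referenceAttitude: CMAttitude?                       // captured during a "calibration" step

motionManager.deviceMotionUpdateInterval = 1.0 / 10.0    // the question samples at 10 Hz
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }

    // The first reading after the user taps "calibrate" becomes the reference frame.
    if referenceAttitude == nil {
        referenceAttitude = motion.attitude.copy() as? CMAttitude
    }

    let attitude = motion.attitude
    if let reference = referenceAttitude {
        // Re-express the current attitude relative to the calibrated "up".
        attitude.multiply(byInverseOf: reference)
    }

    let toDegrees = 180.0 / Double.pi
    print("pitch: \(attitude.pitch * toDegrees)°, roll: \(attitude.roll * toDegrees)°")
}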
Even though your question only mentioned capturing the "lean angle" (with the ground), you will probably want to capture at least 2 of the 3 attitude angles (e.g., pitch and either roll or yaw, depending on what they are doing), potentially all three, if the device is going to be in a person's pocket. (The device could rotate in the pocket in various ways if the pocket is baggy, for example.) For the most part, though, I think you will probably be able to rely on just two of the three (unless you see radical shifts in yaw throughout the course of a recording session). For example, in my jeans pocket, the phone is usually nearly vertical. Thus, for me, pitch would vary a whole bunch as I, say, walk, sit or run. Roll would vary whenever I change the direction I'm facing. Meanwhile, yaw would not vary much at all (unless I do cartwheels, which I can't!). So yaw can probably be ignored in my case.
To summarize the main point: to use these attitude angles, you don't need to do any linear algebra, nor worry about gravity (although you may want to use this for other purposes, of course).
UPDATE 2 (based on Kay's new post):
Kay just replied and showed how to use gravity and linear algebra to make sure your angles are unique. (And, btw, I think you should give the bounty to that post, fwiw.)
Depending on what you want to do, you may want to use this math. You would want to use the linear algebra and gravity if you need a standardized way of "talking about" and/or comparing attitudes over the course of your recording session. If you just want to visualize them, you can probably still get away with not using the increased complexity. (For example, visualizing (pitch=90, roll=0, yaw=0) should be the same as visualizing (pitch=0, roll=90, yaw=90).) In my approach above, while you could have multiple ways of referring to the "same" attitude, none of them is actually wrong, per se. They will still give you the angles relative to the ground.
But the fact that the gyroscope can switch from one valid description of an attitude to another means that what I wrote above about getting away with only 2 of the 3 components needs to be corrected: because of this, you will need to capture all three components, no matter what. Sorry.

Calibrating code to iPhone accelerometer and gyro data

I'm designing an iPhone app that will require precise calibration against the iPhone's accelerometer and gyro data. I will have to detect specific movements that I would eventually like to use to trigger code (think shake-to-shuffle, or undo).
Is there already a good way of doing this, or something you can come up with? Perhaps some way to generate a time/value graph of the movement data as it is being captured?
Movement data being captured - see the accelerometer graph sample app, which shows the data in real time: http://developer.apple.com/library/ios/#samplecode/AccelerometerGraph/Introduction/Intro.html
The data is pretty noisy - the gyro and accelerometer aren't good enough right now to track where the phone is in local 3D space, for example. The rotation, however, is very solid, and the orientation of the device can be tracked pretty accurately. You may get the best results making gestures out of rotation data instead of movement along an axis. Or basic directional gestures like shakes along an axis will work, as Jacob Jennings said.
A good starting point for accelerometer gesture recognition is this tutorial by Kevin Bomberry at AblePear:
http://blog.ablepear.com/2010/02/iphone-sdk-shake-rattle-roll.html
He sets a blanket threshold for the absolute value of acceleration on any axis. I would generate an 'event' for the axis that had the highest acceleration when the threshold is crossed (Z POSITIVE, X NEGATIVE, etc.), and push these onto an 'event history' queue. At the end of each didAccelerate call, evaluate the queue for patterns that match a gesture, for example:
X POSITIVE, X NEGATIVE, X POSITIVE, X NEGATIVE might be considered a 'shake' along that axis. This should provide a couple of different gesture commands.
See the following for a simple queue category addition to NSMutableArray:
How do I make and use a Queue in Objective-C?
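A rough Swift adaptation of that idea (the threshold value, event names and the use of CMMotionManager in place of the old UIAccelerometer/didAccelerate callback are my own choices):

import CoreMotion

enum AxisEvent: Equatable {
    case xPositive, xNegative, yPositive, yNegative, zPositive, zNegative
}

let motionManager = CMMotionManager()
var eventHistory: [AxisEvent] = []             // acts as the 'event history' queue
let threshold = 1.5                            // blanket threshold in g's (tune experimentally)

// Pattern from the answer: X POSITIVE, X NEGATIVE, X POSITIVE, X NEGATIVE = a shake along X.
let shakeXPattern: [AxisEvent] = [.xPositive, .xNegative, .xPositive, .xNegative]

motionManager.accelerometerUpdateInterval = 1.0 / 60.0
motionManager.startAccelerometerUpdates(to: .main) { data, _ in
    guard let a = data?.acceleration else { return }

    // Emit an event for the axis with the largest magnitude above the threshold.
    let axes: [(Double, AxisEvent, AxisEvent)] = [(a.x, .xPositive, .xNegative),
                                                  (a.y, .yPositive, .yNegative),
                                                  (a.z, .zPositive, .zNegative)]
    if let strongest = axes.max(by: { abs($0.0) < abs($1.0) }), abs(strongest.0) > threshold {
        eventHistory.append(strongest.0 > 0 ? strongest.1 : strongest.2)
        eventHistory = Array(eventHistory.suffix(8))          // keep the queue short
    }

    // Evaluate the queue for patterns that match a gesture.
    if eventHistory.suffix(shakeXPattern.count).elementsEqual(shakeXPattern) {
        print("shake along X detected")
        eventHistory.removeAll()
    }
}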

Detect the iPhone rotation spin?

I want to create an application that can detect the number of spins when the user rotates the iPhone. Currently I am using the Compass API to get the angle, and I've tried many ways to detect a spin. Below is the list of solutions I've tried:
1/ Create 2 angle traps (points on the full circle) and detect whether the angle we get from the compass has passed them or not.
2/ Sum all the angular distances between compass updates (in the updateHeading function), then divide the summed angle by 360 to get the number of spins.
The problem is that when the phone is rotated too fast, the compass cannot keep up with the speed of the phone, and it only returns the latest angle (not a continuous stream as in the real rotation).
We also tried using the accelerometer to detect spins. However, this doesn't work when you rotate the phone on a flat plane.
If you have any solution or experience on this issue, please help me.
Thanks so much.
The iPhone 4 contains a MEMS gyroscope, so that's the most direct route.
As you've noticed, the magnetometer has sluggish response. This can be reduced by using an anticipatory algorithm that uses the sluggishness to make an educated guess about what the current direction really is.
First, you need to determine the actual performance of the sensor. To do this, you need to rotate it at a precise rate at each of several rotational speeds, and record the compass behavior. The rotational platform should have a way to read the instantaneous position.
At slower speeds, you will see a varying degree of fixed lag. As the speed increases, the lag will grow until it approaches 180 degrees, at which point the compass will suddenly flip. At higher speeds, all you will see is flipping, though it may appear to not flip when the flips repeat at the same value. At some of these higher speeds, the compass may appear to rotate backwards, opposite to the direction of rotation.
Getting a rotational table can be a hassle, and ensuring it doesn't affect the local magnetic field (making the compass useless) is a challenge. The ideal table will be made of aluminum, and if you need to use a steel table (most common), you will need to mount the phone on a non-magnetic platform to get it as far away from the steel as possible.
A local machine shop will be a good place to start: CNC machines are easily capable of doing what is needed.
Once you get the compass performance data, you will need to build a model of the observed readings vs. the actual orientation and rotational rate. Invert the model and apply it to the readings to obtain a guess of the actual readings.
A simple algorithm implementation will be to keep a history of the readings, and keep a list of the difference between sequential readings. Since we know there is compass lag, when a difference value is non-zero, we will know the current value has some degree of inaccuracy due to lag.
The next step is to create a list of 'corrected' readings, where the known lag of the prior actual values is used to generate an updated value; that value is added to the last value in the 'corrected' list and stored as the newest value.
When the cumulative correction (the difference between the latest values in the actual and corrected lists) exceeds 360 degrees, it means we basically don't know where the compass is pointing. Hopefully that point won't be reached, since most rotational motion should generally be of fairly short duration.
However, since your goal is only to count rotations, you will be off by less than a full rotation until the accumulated error reaches a substantially higher value. I'm not sure what this value will be, since it depends on both the actual compass lag and the actual rate of rotation. But if you care only about a small number of rotations (5 or so), you should be able to obtain usable results.
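Independently of the lag correction, the bookkeeping for approach 2 in the question is simple once each heading difference is wrapped into (-180°, 180°]. A sketch in Swift, assuming each newHeading value is a heading in degrees (e.g. from CLLocationManager heading updates):

import Foundation

// Sketch: count rotations by summing signed heading differences,
// wrapping each difference so that crossing 0/360 doesn't break the total.
final class SpinCounter {
    private var lastHeading: Double?
    private var accumulatedDegrees = 0.0

    var completedSpins: Int { Int(abs(accumulatedDegrees) / 360.0) }

    func add(newHeading: Double) {
        defer { lastHeading = newHeading }
        guard let last = lastHeading else { return }

        var delta = newHeading - last
        if delta > 180 { delta -= 360 }          // e.g. 10° -> 350° is really -20°
        if delta < -180 { delta += 360 }         // e.g. 350° -> 10° is really +20°
        accumulatedDegrees += delta
    }
}

As noted above, this only stays correct while the true change between two consecutive readings is under 180°, which is exactly what fast spins and compass lag break.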
You could use the velocity derived from the acceleration data to estimate how fast the phone is spinning and use that to fill in the blanks until the phone has stopped, at which point you could query the compass again.
If you're using an iPhone 4, the problem has been solved and you can use Core Motion to get rotational data.
For earlier devices, I think an interesting approach would be to try to detect wobbling as the device rotates, using UIAccelerometer on a very fine reporting interval. You might be able to get some reasonable patterns detected from the motion at right angles to the plane of rotation.
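For the Core Motion route on gyro-equipped devices, a sketch that integrates the rotation rate around the screen-normal axis (the update interval and the choice of the Z axis are assumptions that match spinning the phone on a flat surface):

import CoreMotion

let motionManager = CMMotionManager()
var accumulatedRadians = 0.0

motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }

    // Integrate the bias-corrected rotation rate (rad/s) around Z.
    // For better accuracy, use differences of motion.timestamp instead of the nominal interval.
    accumulatedRadians += motion.rotationRate.z * motionManager.deviceMotionUpdateInterval

    let spins = Int(abs(accumulatedRadians) / (2 * Double.pi))
    print("completed spins: \(spins)")
}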

Determining speed of shake

Is it possible to determine the speed at which someone is shaking their iPhone? This would be from the moment they start moving to the point where they begin returning to the origin. Basically it's one swipe whose speed I'd like to measure. This discussion comments on initial speed: http://discussions.apple.com/message.jspa?messageID=8297689#8297689. It seems the iPhone lacks the important component of distance needed to get a good measure of speed.
Sure, it sounds like all you'd need to do would be to numerically integrate the acceleration twice to get the distance traveled. For instance, look at
Calculate the position of an accelerating body after a certain time
Note that you'll have to subtract gravity from the measured acceleration to get the kinetic acceleration, which is what you should integrate. As for how to do that, re: GoatRider's comment: I might try storing the last measured acceleration whose magnitude was equal to gravity (I think that's 1 in iPhone units?). Then for each acceleration measurement you make whose magnitude is greater than 1, subtract the last known acceleration of gravity - this will need to be a vector subtraction - and use that as the kinetic acceleration. Of course, this assumes that the user keeps the phone in the same orientation throughout the swipe, which I think would be approximately true.
Unfortunately, there's no technique you can use to distinguish between gravitational acceleration and kinetic acceleration in general - that is, a determined user could always find a way to fool whatever algorithm you might come up with. (Trivia: that's called the equivalence principle, and it's the foundation of Einstein's theory of general relativity)
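A sketch of the integration step in Swift (one integration gives velocity; a second would give distance). Note that on devices with a gyroscope, CMDeviceMotion.userAcceleration already has gravity separated out, which sidesteps the manual subtraction described above; the sampling setup and variable names are my assumptions:

import CoreMotion
import Foundation

let motionManager = CMMotionManager()
var velocity = (x: 0.0, y: 0.0, z: 0.0)          // integrated velocity in g·s (multiply by 9.81 for m/s)
var lastTimestamp: TimeInterval?

motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    defer { lastTimestamp = motion.timestamp }
    guard let last = lastTimestamp else { return }

    let dt = motion.timestamp - last
    let a = motion.userAcceleration               // gravity already removed, in g's

    // Numerically integrate acceleration once to get velocity.
    velocity.x += a.x * dt
    velocity.y += a.y * dt
    velocity.z += a.z * dt

    let speed = sqrt(velocity.x * velocity.x + velocity.y * velocity.y + velocity.z * velocity.z) * 9.81
    print("estimated speed: \(speed) m/s")        // drifts quickly; reset between swipes
}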
You'll have to do the calculations yourself. Each acceleration event you receive will tell you the relative G-forces registering on the accelerometer and the time at which the event was recorded. You'll have to sample over several events and interpolate. Here's more info on the acceleration event itself:
UIAcceleration Class Reference