How to detect iPhone movement in space using the accelerometer?

I am trying to make an application that would detect what kind of shape you made with your iPhone, using the accelerometer.
As an example, if you draw a circle with your hand holding the iPhone, the app would be able to redraw it on the screen.
This could also work with squares, or even more complicated shapes.
The only example of an application I've seen doing such a thing is AirPaint (http://vimeo.com/2276713), but it doesn't seem to be able to do it in real time.
My first try is to apply a low-pass filter on the X and Y parameters from the accelerometer, and to make a pointer move toward these values, proportionally to the size of the screen.
But this is clearly not enough: the accuracy is very low, and shaking the device also makes the pointer move...
Any ideas?
Do you think accelerometer data is enough to do this? Or should I consider using other data, such as the compass?
Thanks in advance!

OK I have found something that seems to work, but I still have some problems.
Here is how I proceed (assuming the device is held vertically):
1 - I have my default x, y, and z values.
2 - I extract the gravity vector from this data using a low-pass filter.
3 - I subtract the normalized gravity vector from each of x, y, and z, and get the movement acceleration.
4 - Then, I integrate this acceleration value with respect to time, so I get the velocity.
5 - I integrate this velocity again with respect to time, and find a position.
All of the code below is in the accelerometer:didAccelerate: delegate method of my controller.
I am trying to make a ball move according to the position I found.
Here is my code:
NSTimeInterval interval = 0;
NSDate *now = [NSDate date];
if (previousDate != nil)
{
    interval = [now timeIntervalSinceDate:previousDate];
}
previousDate = now;
//Isolating gravity vector
gravity.x = acceleration.x * kFilteringFactor + gravity.x * (1.0 - kFilteringFactor);
gravity.y = acceleration.y * kFilteringFactor + gravity.y * (1.0 - kFilteringFactor);
gravity.z = acceleration.z * kFilteringFactor + gravity.z * (1.0 - kFilteringFactor);
float gravityNorm = sqrt(gravity.x * gravity.x + gravity.y * gravity.y + gravity.z * gravity.z);
//Removing gravity vector from initial acceleration
filteredAcceleration.x = acceleration.x - gravity.x / gravityNorm;
filteredAcceleration.y = acceleration.y - gravity.y / gravityNorm;
filteredAcceleration.z = acceleration.z - gravity.z / gravityNorm;
//Calculating velocity related to time interval
velocity.x = velocity.x + filteredAcceleration.x * interval;
velocity.y = velocity.y + filteredAcceleration.y * interval;
velocity.z = velocity.z + filteredAcceleration.z * interval;
//Finding position (160 and 230 presumably scale the motion to the 320x460 point screen)
position.x = position.x + velocity.x * interval * 160;
position.y = position.y + velocity.y * interval * 230;
If I run this, I get reasonably good values: I can see the acceleration go positive or negative according to the movements I make.
But when I try to apply that position to my ball view, I can see it moving, but with a tendency to go more in one direction than the other. For example, if I draw circles with my device, I will see the ball tracing curves towards the top-left corner of the screen.
Something like this: http://img685.imageshack.us/i/capturedcran20100422133.png/
Do you have any ideas about what is happening?
Thanks in advance!

The problem is that you can't integrate acceleration twice to get position. Not without knowing the initial position and velocity. Remember the +C term you added in school when learning about integration? Well, by the time you get to position it is a ct + k term, and it is significant. That's before you consider that the acceleration data you're getting back is quantised and averaged, so you're not actually integrating the device's true acceleration. Those errors end up being large when integrated twice.
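To get a feel for the scale, here is a toy sketch in C (the 0.01 g bias and the 60 Hz rate are illustrative guesses, not iPhone specs) showing how a tiny constant measurement error grows once integrated twice:

#include <stdio.h>

// Double-integrate a constant 0.01 g bias at 60 Hz and watch the
// "position" drift grow quadratically, as the ct + k term predicts.
int main(void) {
    double bias = 0.01 * 9.81;        /* m/s^2 */
    double v = 0.0, x = 0.0, dt = 1.0 / 60.0;
    for (int i = 1; i <= 600; i++) {  /* 10 seconds */
        v += bias * dt;
        x += v * dt;
        if (i % 120 == 0)
            printf("t = %2ds, drift = %5.2f m\n", i / 60, x);
    }
    return 0;
}

After ten seconds the bias alone has "moved" the device about five metres, which swamps any hand-drawn shape.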
Watch the AirPaint demo closely and you'll see exactly this happening: the shapes rendered are significantly different to the shapes moved.
Even devices that have some position and velocity sensing (a Wiimote, for example) have trouble doing gesture recognition. It is a tricky problem that folks pay good money (to companies like AILive, for example) to solve for them.
Having said that, you can probably quite easily distinguish between certain types of gesture if their large-scale characteristics are different. A circle can be detected if the device has received accelerations in each of six angle ranges, for example (see the sketch below). You could distinguish between swiping the iPhone through the air and shaking it.
To tell the difference between a circle and a square is going to be much more difficult.
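As a rough illustration of the angle-range idea (the names, the 0.2 g dead zone, and the sector count are all made up for this sketch): bucket the direction of each gravity-removed x/y sample into one of six 60° sectors; once every sector has been hit, the motion has swept a full turn, which is circle-like.

#include <math.h>
#include <stdbool.h>

#define NUM_SECTORS 6

typedef struct { bool hit[NUM_SECTORS]; } GestureTracker;

// Feed one gravity-removed acceleration sample (x/y, in g).
void trackerFeed(GestureTracker *t, float ax, float ay) {
    if (sqrtf(ax * ax + ay * ay) < 0.2f) return;  // skip near-still samples
    float angle = atan2f(ay, ax);                 // -pi .. pi
    int sector = (int)((angle + M_PI) / (2.0 * M_PI) * NUM_SECTORS);
    if (sector >= NUM_SECTORS) sector = NUM_SECTORS - 1;
    t->hit[sector] = true;
}

// True once accelerations have been seen in all six angle ranges.
bool trackerSawFullTurn(const GestureTracker *t) {
    for (int i = 0; i < NUM_SECTORS; i++)
        if (!t->hit[i]) return false;
    return true;
}

You would reset the tracker between gestures; as noted above, this coarse test cannot tell a circle from a square.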

You need to look up how acceleration relates to velocity and velocity to position. My mind is having a wee fart at the moment, but I am sure it's the integral... you want to integrate acceleration with respect to time. Wikipedia should help you with the maths, and I am sure there is a good library somewhere that can help you out.
Just remember, though, that the accelerometers are neither perfect nor polled fast enough, so really sudden movements may not be picked up that well. But for gently drawing in the air, it should work fine.

It seems like you are normalizing your gravity vector before subtracting it from the instantaneous acceleration. This keeps the relative orientation but removes any relative scale. The latest device I tested (admittedly not an iDevice) returned gravity at roughly -9.8, which is probably calibrated to m/s². Assuming no other acceleration, if you were to normalize this and then subtract it from the filtered pass, you would end up with a current accel of -8.8 instead of 0.0.
Two options:
- You can just subtract out the gravity vector after the filter pass (sketched below).
- Capture the initial accel vector length, normalize the accel and gravity vectors, then scale the accel vector by the dot product of the accel and gravity normals.
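For the first option, a minimal sketch reusing the question's variable names (kFilteringFactor, gravity, filteredAcceleration): keep the filtered gravity in raw accelerometer units and subtract it component-wise, with no normalization.

//Low-pass filter as before, but keep gravity in raw accelerometer units
gravity.x = acceleration.x * kFilteringFactor + gravity.x * (1.0 - kFilteringFactor);
gravity.y = acceleration.y * kFilteringFactor + gravity.y * (1.0 - kFilteringFactor);
gravity.z = acceleration.z * kFilteringFactor + gravity.z * (1.0 - kFilteringFactor);
//User acceleration is what is left once the unnormalized gravity is removed
filteredAcceleration.x = acceleration.x - gravity.x;
filteredAcceleration.y = acceleration.y - gravity.y;
filteredAcceleration.z = acceleration.z - gravity.z;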
Also worth remembering to take the orientation of the device into account.

Related

Relative sizes of pymunk forces

In Pymunk, is the magnitude of gravity the same as the magnitude of apply_force_at_local_point or apply_force_at_world_point, relatively speaking? In other words, is the magnitude of gravity=(20,40) equal to the magnitude of apply_force_at_world_point((20,40), object's position)?
I used the equation of motion, final position = initial position + initial velocity * time + 1/2 * acceleration * t^2, to test that. It turns out that these magnitudes are not equal. For instance, it took a force of (0,-7888) to equal gravity of (0,-1750).
I am trying to determine the apply_force_at_world_point force that would equal/cancel out gravity. I know I can just set the body's gravity to zero to achieve that effect, but my goal is to determine the magnetic force that would be enough to levitate a magnet of a given weight and magnetic strength.
How can I find the magnitude of force (without testing a bunch of random values) that would equal gravity?
I hope the information given is enough to understand the issue.
You can see how the velocity of a body is updated in the Chipmunk source code: https://github.com/viblo/Chipmunk2D/blob/master/src/cpBody.c#L501
body->v = cpvadd(cpvmult(body->v, damping), cpvmult(cpvadd(gravity, cpvmult(body->f, body->m_inv)), dt));
Translated to Python/Pymunk this would be something like this:
body.velocity = body.velocity * damping + (gravity + body.force / body.mass) * dt
From this you can see that gravity is applied directly as an acceleration, while an applied force is divided by the body's mass; that is why the force needed to cancel gravity scales with the body's mass, and why the force you found was so much larger than the gravity value. So I think this should work to make an opposing force matching the gravity:
body.apply_force_at_local_point(-space.gravity * body.mass)
(I tested this in a simple simulation with some gravity and a ball shape/body, and it seems to work as expected: instead of falling, the ball stayed put.)

Projectile Motion in Unity

So my friend and I are making a 2D game. We are using a custom character controller, so we are not using rigidbody2D. Now we have a sort of catapult which needs to eject the player in a projectile-motion style.
We've done it for the catapult which shoots the player straight up.
In the inspector you can set how many units you want the player to jump and how long it should take to reach the maximum height.
So here is the code for the catapult that shoots the player up.
float ejectInicialY = (jumpHeight - ( player.physics.gravity * Mathf.Pow(timeToReachMaxHeight, 2) / 2)) / timeToReachMaxHeight;
float ejectVelocityY = ejectInicialY + player.physics.gravity * Time.deltaTime;
player.physics.playerVelocity = new Vector2(ejectVelocityY, 0f);
I tried to apply the same formulas for the X coordinate, but it doesn't work well.
Any help would be greatly appreciated.
This is ultimately a physics problem.
You are calculating current velocities by determining the acceleration of the object. Acceleration of an object can be determined from the net force acting on the object (F) and the mass of the object (m) through the formula a = F / m. I highly recommend reading some explanations of projectile motion and understanding the meaning of the motion equations you are using.
Vertical Direction
For the vertical direction, the net vertical force during the jump (assuming no air drag, etc.) is player.physics.gravity. So you apply your motion formulas assuming a constant acceleration of player.physics.gravity, which you've seemed to have accomplished already.
Horizontal Direction
Because gravity does not commonly act in the horizontal direction, the net horizontal force during the jump (assuming no air drag, etc.) is 0. So again you can apply your motion formulas, but this time using 0 as your acceleration. By doing this, you should realize that velocityX does not change (in the absence of net horizontal force). Therefore the X coordinate can be determined through (in pseudo-code) newPositionX = startPositionX + Time.deltaTime * velocityX.
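A minimal, engine-agnostic sketch of the two update rules in C (the launch velocities are illustrative; Unity's Time.deltaTime becomes the dt argument):

#include <stdio.h>

typedef struct { float x, y; } Vec2;

// One integration step: gravity accelerates only the vertical component;
// the horizontal velocity never changes, per the zero-net-force argument.
void step(Vec2 *pos, Vec2 *vel, float gravity, float dt) {
    vel->y += gravity * dt;
    pos->x += vel->x * dt;  // newPositionX = startPositionX + dt * velocityX
    pos->y += vel->y * dt;
}

int main(void) {
    Vec2 pos = {0.0f, 0.0f};
    Vec2 vel = {4.0f, 10.0f};  // illustrative launch velocities; the vertical
                               // one would come from the question's formula
    for (int i = 0; i < 60; i++)
        step(&pos, &vel, -9.81f, 1.0f / 60.0f);
    printf("after 1s: x = %.2f, y = %.2f\n", pos.x, pos.y);
    return 0;
}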

How to keep a UIView pointing in a certain direction regardless of device orientation?

I have a UIView that I want to always face up. So say you have a UIImageView containing an arrow, and no matter which way the device is being held, the arrow points up. Obviously I need the accelerometer, but telling it to rotate the image based on the coordinates is only going to work the first time, I think, since rotations are relative. Is there an easier way of doing this? I feel like there would be a simple example app somewhere that does something like this.
Apple has sample code which does exactly what you want; have a look at this: http://developer.apple.com/library/ios/#samplecode/WhichWayIsUp/Introduction/Intro.html
To receive specific motion data, use the shared instance of UIAccelerometer and its delegate. http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIAccelerometerDelegate_Protocol/UIAccelerometerDelegate/UIAccelerometerDelegate.html#//apple_ref/occ/intf/UIAccelerometerDelegate
As far as the rotations, I'm not sure. Have a look at the "BubbleLevel" sample code here: http://developer.apple.com/library/ios/samplecode/BubbleLevel/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007331
The accelerometer delegate documentation references that and other relevant sample code.
WhichWayIsUp is the sample application you're looking for - in iOS Reference Library (great docs!)
Transforms applied to a view aren’t cumulative. Rather, the view starts out being aligned to the coordinate system of its parent, and the transform is applied relative to that.
As other answers have alluded to, there are enough pieces in Apple’s BubbleLevel sample code to accomplish what you’re looking for. -[LevelViewController accelerometer:didAccelerate:] already contains an example of how to extract the smoothed screen-relative rotation component of the gravity vector:
// Use a basic low-pass filter to only keep the gravity in the accelerometer values for the X and Y axes
accelerationX = acceleration.x * kFilteringFactor + accelerationX * (1.0 - kFilteringFactor);
accelerationY = acceleration.y * kFilteringFactor + accelerationY * (1.0 - kFilteringFactor);
// keep the raw reading, to use during calibrations
currentRawReading = atan2(accelerationY, accelerationX);
Given the coordinate systems of UIAcceleration and atan2, there are a few factors you’ll need to keep in mind to calculate your view’s desired rotation:
When the device is upright in portrait, atan2 will return -π/2 (-90°), since the input gravity vector is {0.0, -1.0, 0.0}.
CGAffineTransformMakeRotation creates a transform that will rotate clockwise (on iOS) by the specified number of radians, but the raw angle from atan2 specifies a counterclockwise rotation.
Accordingly, you’ll want to rotate your view by -(currentRawReading + π/2), assuming its parent view is oriented with the screen in portrait:
view.transform = CGAffineTransformMakeRotation(-(currentRawReading + M_PI/2));
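To make the bookkeeping explicit, here is a small sketch of that angle math as a plain C helper (uprightRotation is a made-up name):

#include <math.h>

// Map the low-pass-filtered gravity components to the rotation (radians,
// clockwise-positive as CGAffineTransformMakeRotation expects) that keeps
// the view pointing up while its parent is in portrait.
double uprightRotation(double accelerationX, double accelerationY) {
    double raw = atan2(accelerationY, accelerationX);  // -pi/2 when upright
    return -(raw + M_PI / 2.0);                        // 0 when upright
}

With that helper, the transform line above becomes view.transform = CGAffineTransformMakeRotation(uprightRotation(accelerationX, accelerationY));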

iPhone moving direction using accelerometer

I want some help with the UIAccelerometer class. I want to find some way to distinguish when I wave the iPhone device from right-to-left versus left-to-right. I am getting x, y, z values as well as the interval value. I am also getting velocity and distance, calculated from the normal physics rules.
Distance = (prevDist + sqrt(pow((prevx - acceleration.x), 2) + pow((prevy - acceleration.y), 2) + pow((prevz - acceleration.z), 2)))
Velocity = (Distance * timeinterval) { here Distance means newdistance - prevdistance, and timeinterval is the interval for that distance }
When I move the iPhone device right-to-left and then left-to-right, I get the total distance traveled in both directions and the velocity at regular intervals. Can you help me work out whether I am moving the device left-to-right or right-to-left?
Help would be appreciated.
Thanks.
First, you have a few physics errors. Your distance is not a 'distance' variable because you are summing acceleration values from the accelerometer. Also, your velocity is actually a 'speed' variable because you are multiplying acceleration by a time interval.
What you really want to do is monitor acceleration.x for values that exceed a certain pre-determined magnitude. This magnitude should be low enough that the user does not have to swing the iPhone violently, but high enough that simple movements will not cross your threshold. If acceleration.x is greater than 0 by your pre-determined threshold or more, then you are accelerating right. If acceleration.x is less than 0 by your pre-determined amount or more, then you are accelerating left.
It is far more difficult to determine the velocity of the phone over a period of time, because you will have to integrate the acceleration values continuously. The iPhone accelerometer is not incredibly precise and can be noisy, so after a small period of time your application will probably be convinced your phone is drifting in random directions. Also, remember that gravity will always show up in your acceleration values.
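A minimal sketch of the thresholding idea above (the threshold value is a guess to be tuned):

#define SWIPE_THRESHOLD 1.3f  // in g; low enough for a casual wave, high enough to ignore jitter

// Returns +1 when accelerating right, -1 when accelerating left, 0 otherwise.
int detectSwipe(float accelX) {
    if (accelX >  SWIPE_THRESHOLD) return  1;
    if (accelX < -SWIPE_THRESHOLD) return -1;
    return 0;
}

You would call this with acceleration.x from each delegate callback.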
It depends on which way the user will be holding the device.
You don't need the velocity or the distance to figure out which way the device is being swung.
Instead, you can keep a history of the past (say 5) accelerations.
Assuming that the device is held the normal way (portrait, right-side up), you would use the average of the previous x-accelerations to determine the direction.
So basically, here is what I did in the accelerometer:didAccelerate: event:
//prevAccels is an NSMutableArray
float VAL = .75;
int HISTORY_NUMBER = 5;
if (acceleration.x > VAL && acceleration.y > VAL) { //if there has been a sudden jerking motion
    float avgX = 0;
    for (int i = 0; i < [prevAccels count]; i++) {
        UIAcceleration *a = [prevAccels objectAtIndex:i];
        avgX += a.x;
    }
    avgX /= HISTORY_NUMBER; //get the average of the previous accelerations
    BOOL flickRight = (avgX < -VAL) ? YES : NO;
    //[self wave:flickRight]; //do stuff here
}
[prevAccels addObject:acceleration]; //add the acceleration to the history
if ([prevAccels count] > HISTORY_NUMBER) [prevAccels removeObjectAtIndex:0]; //trim the list if it is too large
Hope that helps!
One thing to keep in mind is that gravity always affects the accelerometers, so it needs to be taken into account. Unfortunately, you can't count on straightforward methods to compute distances and actual velocity, since you don't know at any instant where "down" is.
To detect when a "swing left" or "swing right" happens, also consider that the movement is likely to be in an arc, and that you'll get opposite acceleration when the swinging stops. To know that the swing is underway, look at the "out" acceleration vector, i.e. the one that points away from the center of rotation. Using that, you can distinguish between a "start moving" state and a "stop moving" state, which otherwise have equal-but-opposite acceleration values.

Smoothing out didAccelerate messages

I'm working on something similar to an inclinometer. I rotate an image based on the angle calculated in didAccelerate (from UIAccelerometerDelegate), using the passed-in UIAcceleration variable. The image is jittery or twitchy, even when the phone is lying on its back and not moving. There are some inclinometers in the App Store that are super smooth. I've tried to smooth it out by checking the last reading and, if the new one is within a range of it, doing nothing. That stops some of the twitching, but then you get something that looks like stop-motion animation. How can I get a smoother effect?
I would suggest smoothing using a 'critically damped spring'.
Essentially what you do is calculate your target rotation (which is what you have now).
Instead of applying this directly to the image rotation, you attach the image rotation to the target rotation with a spring, or piece of elastic. The spring is damped such that you get no oscillations.
You will need one constant to tune how quickly the system reacts, we'll call this the SpringConstant.
Basically we apply two forces to the rotation, then integrate this over time.
The first force is that applied by the spring, Fs = SpringConstant * DistanceToTarget.
The second is the damping force, Fd = -CurrentVelocity * 2 * sqrt(SpringConstant)
The CurrentVelocity forms part of the state of the system, and can be initialised to zero.
In each step, you multiply the sum of these two forces by the time step.
This gives you the change in the value of the CurrentVelocity.
Multiply this by the time step again and it will give you the displacement.
We add this to the actual rotation of the image.
In C++ code:
float CriticallyDampedSpring( float a_Target,
                              float a_Current,
                              float & a_Velocity,
                              float a_TimeStep )
{
    float currentToTarget = a_Target - a_Current;
    float springForce = currentToTarget * SPRING_CONSTANT;
    float dampingForce = -a_Velocity * 2 * sqrt( SPRING_CONSTANT );
    float force = springForce + dampingForce;
    a_Velocity += force * a_TimeStep;
    float displacement = a_Velocity * a_TimeStep;
    return a_Current + displacement;
}
I imagine you have two dimensions of rotation. You just need to use this once for each dimension, and keep track of the velocity of each dimension separately.
In systems I was working with, 5 was a good value for the spring constant, but you will have to experiment for your situation. Setting it too high will not get rid of the jerkiness; setting it too low will make it react too slowly.
Also, you might be best off making a class that keeps the velocity state, rather than having to pass it into the function over and over.
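For example, a minimal sketch of that stateful wrapper in plain C (names are illustrative; SPRING_CONSTANT as above):

#include <math.h>

#define SPRING_CONSTANT 5.0f  // the value suggested above; tune per application

// Keeps the spring velocity between frames so callers don't have to.
typedef struct { float velocity; } DampedSpring;

float DampedSpring_Step( DampedSpring *s, float target,
                         float current, float timeStep )
{
    float springForce  = ( target - current ) * SPRING_CONSTANT;
    float dampingForce = -s->velocity * 2.0f * sqrtf( SPRING_CONSTANT );
    s->velocity += ( springForce + dampingForce ) * timeStep;
    return current + s->velocity * timeStep;
}

You would keep one DampedSpring per rotation dimension and call DampedSpring_Step once per frame.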
I hope this is helpful :)