iPhone 3D compass

I am trying to build an app for the iPhone 4 that lets the user "point" the phone at a hardcoded destination so that a dot appears on screen where the destination is located.
First, I use the compass heading to build a horizontal compass (this covers the left/right rotation):
// Heading
nowHeading = heading.trueHeading;
// Shift image (horizontal compass)
float shift = bearing - nowHeading;
destinationImage.center = CGPointMake(shift+160, destinationImage.center.y);
I shift the dot by 160 pixels because the screen is 320 pixels wide. My question now is: how can I expand this code to handle up and down as well? That is, if I point the phone down at the table, the dot shouldn't show; I have to point the phone at the destination (as if taking a picture) for it to be drawn on the screen. I've already implemented the accelerometer, but I don't know how to combine these components to solve my problem.

The bearing shift should depend on the camera's field of view. For the iPhone 4 the horizontal angular view is about 47.5°, so 320 points / 47.5° ≈ 6.7 points per degree; use that to shift horizontally. You also have to add an adaptive filter to the accelerometer readings; you can take one from Apple's AccelerometerGraph sample project.
You have the rotation around one axis (the bearing); you should get the rotation around the other two from the accelerometer. The atan2 of two axes gives you the rotation around the third. Look at UIAcceleration and imagine an axis physically piercing the device if that helps, then do double xAngle = atan2(acceleration.y, acceleration.z);. Once you have the up/down rotation you can repeat what you did for the horizontal, using the vertical field of view instead (roughly 60° for the iPhone).
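A minimal sketch of that idea (the field-of-view figures, the filteredAcceleration variable and the centre offsets are assumptions to adapt to your own setup):
// Rough sketch: map heading and pitch differences onto screen offsets.
// Assumes ~47.5° horizontal and ~60° vertical field of view (verify for your device)
// and a low-pass-filtered acceleration vector, e.g. from AccelerometerGraph.
float pointsPerDegreeX = 320.0f / 47.5f;   // ≈ 6.7 points per degree
float pointsPerDegreeY = 480.0f / 60.0f;   // ≈ 8.0 points per degree

// Horizontal: difference between the target bearing and the compass heading.
float headingDelta = bearing - nowHeading;
if (headingDelta > 180.0f) headingDelta -= 360.0f;        // keep it in -180°..180°
else if (headingDelta < -180.0f) headingDelta += 360.0f;

// Vertical: pitch from the filtered accelerometer; with this offset it is roughly 0°
// when the device is upright in portrait and the camera points at the horizon.
double pitchDegrees = atan2(filteredAcceleration.y, filteredAcceleration.z) * 180.0 / M_PI + 90.0;

// A destination on the horizon has 0° elevation: tilting the camera up pushes the dot down.
destinationImage.center = CGPointMake(160.0f + headingDelta * pointsPerDegreeX,
                                      240.0f + (float)pitchDegrees * pointsPerDegreeY);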
That is going to be a rough implementation :) and achieving smooth movement is difficult. One thing you can do is use the gyros to get a faster response and periodically correct their signal with the accelerometers. See this talk for the troubles ahead: Sensor Fusion on Android Devices. Here is a website dedicated to the Kalman filter. If you dare to tackle quaternions, I recommend "Visualizing Quaternions" by Andrew J. Hanson.

It sounds like you are trying to do a style of augmented reality. If that is the case, there are several libraries and sample code suggested here:
Augmented Reality

Related

Maths behind iPhone AR ToolKit

I'm using the iPhone ARToolkit and I'm wondering how it works.
I want to know how, given a destination location, the user's location and a compass, this toolkit can tell whether the user is looking towards that destination.
How can I learn the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and the longitude values of the two points. These two differences let you construct a right-angled triangle and calculate the angle at which another position lies relative to your current one. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;
    // Azimuth measured clockwise from north, from the arctangent of the two differences.
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);
    if (longitudinalDifference > 0) return possibleAzimuth;              // target is to the east
    else if (longitudinalDifference < 0) return possibleAzimuth + M_PI;  // target is to the west
    else if (latitudinalDifference < 0) return M_PI;                     // due south
    return 0.0f;                                                         // due north
}
At this point you can read the compass value from the phone and determine the specific compass angle (azimuth) your device is pointing at. The reading from the compass gives the angle at the centre of the camera's view. The AR ToolKit then calculates the full range of angles currently displayed on screen, since the iPhone's field of view is known.
In particular it does this by calculating what the angle of the leftmost part of the view is showing:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
    leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then calculates the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;
if (rightAzimuth > 2 * M_PI) {
    rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of... see the problems section below).
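As a hedged illustration of that last step (this is not the toolkit's actual code, and the helper names are made up), mapping an azimuth returned by angleFromCoordinate: onto the screen could look roughly like this:
// Returns YES if targetAzimuth lies inside the visible range, taking the
// 0 / 2π wrap-around into account.
BOOL azimuthIsVisible(double targetAzimuth, double leftAzimuth, double rightAzimuth) {
    if (leftAzimuth > rightAzimuth) {
        // The visible range straddles north, e.g. 350°..10°.
        return targetAzimuth > leftAzimuth || targetAzimuth < rightAzimuth;
    }
    return targetAzimuth > leftAzimuth && targetAzimuth < rightAzimuth;
}

// Maps a visible azimuth to an x coordinate in a 320-point-wide viewport.
double xPositionForAzimuth(double targetAzimuth, double leftAzimuth) {
    double offset = targetAzimuth - leftAzimuth;
    if (offset < 0.0) offset += 2.0 * M_PI;               // unwrap across north
    return (offset / VIEWPORT_WIDTH_RADIANS) * 320.0;     // fraction of the view, in points
}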
It also does similar calculations for the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point it at your feet you should in theory see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation I've just explained assumes you're holding the device in an exact position relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional. Few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet), this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point. The toolkit, however, thinks you're still only looking at compass reading ± (width of view / 2). If you rotate on the spot you'll see your marker move to the edge of the screen, disappear and then reappear on the other side. What you would expect to see is the marker staying on screen as you rotate.
The solution
I've recently implemented an AR app for which I initially hoped AR Toolkit would do the heavy lifting. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own solution.
Thomas' approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that, it breaks down. In my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space and move the OpenGL viewport around according to readings from the gyroscope while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail than that but short of posting a lot of hacky code I can't. This approach however did address both problems described above. I hope to open source that part of my code at some point but it's very rough and coupled to my problem domain at the moment.
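For readers wanting a concrete starting point for that gyroscope-plus-compass approach, here is a minimal sketch (not the author's code; the renderer call is a placeholder) using Core Motion's device motion with a true-north reference frame:
#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

// Gyro-integrated attitude, continuously corrected against the magnetometer,
// with the x axis referenced to true north (requires location services).
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue:[NSOperationQueue mainQueue]
                                               withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (motion == nil) return;
    CMRotationMatrix r = motion.attitude.rotationMatrix;
    // Feed this 3x3 rotation to the 3D engine as the camera rotation; markers
    // placed at their real-world bearings then appear wherever the camera points.
    [renderer setCameraRotationMatrix:r];   // 'renderer' and this method are placeholders
}];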
That is all the information needed. With the iPhone's location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is knowing where the iPhone is currently pointing, which is what the compass returns (magnetic north + current location -> true north).
Edit: calculations (this is just an idea; there may be a better solution with fewer coordinate transformations):
Convert the current and destination locations to ECEF coordinates.
Transform the destination's ECEF coordinate into an ENU (east, north, up) local coordinate system, using the current location as the reference location. You can also use this.
Ignore the height value and use the ENU coordinate to get the direction: atan2(dEast, dNorth).
The compass already returns the angle the iPhone is pointing at.
Display the destination on the screen if dest_angle - 10° <= compass_angle <= dest_angle + 10°,
with respect to the cyclic angle space. The constant of 10° is just a guessed value; you should either try some values to find a useful one, or analyse some properties of the iPhone camera.
The coordinate-transformation equations become much simpler if you assume that the earth is a sphere rather than an ellipsoid. Most links I have posted assume a WGS-84 ellipsoid, because GPS does too, AFAIK.
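A minimal sketch of the first three steps above under that spherical-earth simplification (the radius value and function names are illustrative only; latitudes and longitudes are in radians):
#include <math.h>

// Spherical-earth approximation: latitude/longitude (radians) to ECEF coordinates.
static void llToECEF(double lat, double lon, double *x, double *y, double *z) {
    const double R = 6371000.0;   // mean earth radius in metres
    *x = R * cos(lat) * cos(lon);
    *y = R * cos(lat) * sin(lon);
    *z = R * sin(lat);
}

// Bearing (radians clockwise from true north) of the destination seen from the current location.
static double bearingToDestination(double lat1, double lon1, double lat2, double lon2) {
    double x1, y1, z1, x2, y2, z2;
    llToECEF(lat1, lon1, &x1, &y1, &z1);
    llToECEF(lat2, lon2, &x2, &y2, &z2);

    // Difference vector rotated into the local ENU frame at the current location;
    // the "up" component is ignored, as in the third step above.
    double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
    double dEast  = -sin(lon1) * dx + cos(lon1) * dy;
    double dNorth = -sin(lat1) * cos(lon1) * dx - sin(lat1) * sin(lon1) * dy + cos(lat1) * dz;
    return atan2(dEast, dNorth);
}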

Is there a way to figure out 3D distance/view angle from a 2D environment using the iPhone/iPad camera?

Maybe I'm asking this too soon in my research, but I'd better know if this is possible sooner rather than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it doesn't match any of the colors in the square. Is there a way for me, using a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end what I'm looking for is being able to draw a 3D square on top of this one using the camera image, but I'm not sure if I am going to be able to figure out the distance and position of the object in space using only a 2D image. Any hints are well appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand you may need to do some math tricks to work out, based on the vectors' positions, the perspective and then the camera position.
Maybe these help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
Oughta be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll be parallelograms in anything but straight-on view). Distance would depend on the camera's zoom setting and scan resolution. But basically you'd count how many pixels are visible in each of the squares, run that past the camera's specs and you should be able to determine a rough distance.
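A hedged sketch of that last idea, using the simple pinhole-camera relationship (the focal length in pixels is something you'd have to calibrate or derive from the camera specs; the numbers below are placeholders):
// Pinhole-camera estimate: distance ≈ focalLengthPixels * realSize / sizeInPixels.
double estimateDistance(double realSquareSizeMetres, double squareSizeInPixels, double focalLengthPixels) {
    return focalLengthPixels * realSquareSizeMetres / squareSizeInPixels;
}

// Example: a 0.10 m square spanning 200 px with an assumed 800 px focal length
// comes out at roughly 0.4 m from the camera.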

Getting level of rotation with UIAcceleration

Games like FroggyJump for iPhone figure out the rotation of the iPhone. I'm getting confused by the acceleration values. How do I calculate the level of rotation? I suppose I need to consider the case where the iPhone isn't perfectly upright.
Thank you.
I'm also wanting to use the new Core Motion framework with "Device Motion" on the iPhone 4 for extra precision. I guess I'll have to use that low-pass filter for the other devices.
It's the yaw.
Having given Froggy Jump a quick go, I think it's likely directly using the accelerometer's x value as the left/right acceleration on the frog. If it is stationary, you can think of an accelerometer as giving you the vector that points upward into space, relative to the local axes. For something like a ball rolling or anything else accelerating due to tilt, you want to use the values directly.
For anything that involves actually knowing angles, you're probably best off picking the axis around which you want to detect rotation and then using the C function atan2f on the accelerometer values for the other two axes. With just an accelerometer, there are some scenarios in which you can't detect rotation; for example, if the device is flat on a table then an accelerometer can't detect yaw. The general rule is that rotations around the gravity vector can't be detected with an accelerometer alone.
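For example, a minimal sketch of getting tilt angles from the raw accelerometer values (here acceleration is whatever UIAcceleration your delegate receives; only meaningful while the device is roughly stationary):
// Rotation around the device's x axis (tilting the top edge towards or away from you).
float tiltAroundX = atan2f(acceleration.y, acceleration.z);

// Rotation around the y axis (tilting the left/right edges) uses x and z instead.
float tiltAroundY = atan2f(acceleration.x, acceleration.z);

// Rotation around the gravity vector (e.g. spinning the device while it lies flat)
// cannot be recovered from the accelerometer alone; use the compass or gyro for that.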

Using iPhone/iPod Touch acceleration to rotate a 3D object

I'm trying to use the iPhone/iPod accelerometer to directly manipulate a 3D object.
For that I've been reading up on lots of things (Euler angles, quaternions, etc.).
I'm using OpenSG, where I have a 3D environment and want to manipulate a certain object (just rotating it in all possible iPhone/iPod degrees of freedom, using only the accelerometer).
So I tried to figure out a solution for this problem, but it still doesn't produce the expected result and I get some weird rotations at some angles.
Can someone tell me what I'm doing wrong? Or, is there a better way of doing this without using quaternions?
The acceleration variable is a Vec3f containing the accelerometer values from iPhone/iPod filtered with a low-pass filter.
// Normalised (low-pass-filtered) gravity direction in device coordinates.
acceleration.normalize();
// Reference direction to compare against.
Vec3f reference = OSG::Vec3f(0, 0, 1);
// Rotation axis perpendicular to both vectors, and the angle between them.
OSG::Vec3f axis = acceleration.cross( reference );
angle = acos( acceleration.dot( reference ) );
// Build the quaternion from that axis-angle pair.
OSG::Quaternion quat;
quat.setValueAsAxisRad(axis, angle);
After this code, I update my scene node using quaternion quat.
I wanted to do the exact same thing and just tried it; I hadn't played around with an accelerometer before and it seemed like it should be possible.
The problem is that if you set your iPhone on a table and then slowly spin it around while observing the output of the accelerometer, it basically doesn't change (one g pointing down). If you tilt it up/down on any of the four edges you will see the output change.
In other words, you can tell that the device is tilting front/back or left/right, but you can't tell that you are spinning it. So you can map this tilt to two rotations of a 3D object.
You could probably use the compass for the horizontal rotation; I couldn't try it because I was prototyping in the Unity game engine and it doesn't seem to support the compass yet.
The ever-wonderful Brad Larson posted an excellent description of his initial experiences with a 3D viewer while writing his Molecules app.
His method for rotation was as follows:
GLfloat currentModelViewMatrix[16];
// Read back the current modelview matrix; its rows give the eye-space (screen) axes
// expressed in the model's own coordinate space.
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
// Rotate by xRotation about the screen's vertical (y) axis.
glRotatef(xRotation, currentModelViewMatrix[1], currentModelViewMatrix[5], currentModelViewMatrix[9]);
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
// Rotate by yRotation about the screen's horizontal (x) axis.
glRotatef(yRotation, currentModelViewMatrix[0], currentModelViewMatrix[4], currentModelViewMatrix[8]);
but whether or not this is helpful, I can't recommend this blog entry enough: Brad learns a lesson or two.
Editing to add that I may have misread the question, but will keep the post here as it will likely help people searching with similar keywords.

How to determine absolute orientation

I have an XYZ accelerometer and a magnetometer. Now I want to determine the orientation of the device using both. The problem I see is that, depending on the device's orientation, I'd need to use the sensors in a different order.
Let me give an example. If I have the device facing me then changes in both the roll and pitch can be determined with the accelerometer. For yaw I use the magnetometer.
But if I put the device horizontally (i.e. turn it 90°, facing the ceiling), then any change in the up vector (now horizontal) isn't noticed, as the accelerometer doesn't detect any change. This can now be detected with the magnetometer.
So the question is, how to determine when to use one or the other. Is this enough with both sensors or do I need something else?
Thanks
The key is to use the cross product of the two vectors, gravity and the magnetometer reading. The cross product gives a new vector perpendicular to them both. That means it is horizontal (perpendicular to down) and 90 degrees away from north. Now you have three vectors which define orientation. It is a little ugly because they are not all mutually perpendicular, but that is easy to fix: if you cross this new vector back with the gravity vector, that gives a third vector perpendicular to both the gravity vector and the first cross-product vector. Now you have three perpendicular vectors which define your 3D orientation coordinate system. The original accelerometer (gravity) vector defines Z (up/down), and the two cross-product vectors define the east/west and north/south components of the orientation.
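A minimal sketch of that construction (plain C with a placeholder vector type; which horizontal axis ends up pointing east versus west depends on your sensors' sign conventions, so verify on the device):
typedef struct { double x, y, z; } Vec3;   // placeholder vector type

static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

// gravity: (low-pass-filtered) accelerometer reading; magnet: magnetometer reading.
// Produces three mutually perpendicular axes defining the device's orientation.
static void orientationAxes(Vec3 gravity, Vec3 magnet,
                            Vec3 *vertical, Vec3 *horizontalA, Vec3 *horizontalB) {
    *vertical    = gravity;                       // up/down axis
    *horizontalA = cross(gravity, magnet);        // horizontal, 90° from magnetic north
    *horizontalB = cross(*horizontalA, gravity);  // horizontal component of magnetic north
    // Normalise all three before assembling them into a rotation matrix.
}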
Here is some documentation that walks through this project. As is clear from other answers, the math can be tricky.
http://www.freescale.com/files/sensors/doc/app_note/AN4248.pdf
I think the question "how to determine when to use one or the other" is misguided. You should always use both sensors for orientation. There are cases where one of them is useless. However, these are edge cases.
If I understand you correctly, you'll need something to detect pitch (tilting) and orientation according to the cardinal points (North, East, South and West).
The pitch can be read from the accelerometer.
The orientation according to the cardinal points can be read from a compass.
Combining the output from these two sensors correctly with the right math in your software will most likely give you the absolute orientation.
I think it's doable that way.
Good luck.
In the event you still need absolute orientation, you can check out this breakout board from Adafruit: https://www.adafruit.com/products/2472. The nice thing about this board is that it has an ARM Cortex-M0 processor to do all of the calculations for you.