I want to create an augmented reality view on the iPhone. As a starting point, I took a look at Apple's pARk demo project. There, however, the deviceMotion property is used to obtain the rotation matrix for the camera transformation. But since deviceMotion relies on the gyroscope (available on the iPhone 4 and newer) and I want to support the 3GS as well (in fact, a 3GS is my only development device), I cannot use this approach. So I want to build the rotation matrix myself from the data available from the accelerometer and compass.
Unfortunately, I lack the math skills to do so myself. Searching around, this seemed to be the most relevant hands-on guide for my problem, but the implementation there doesn't seem to carry over to my case: the POI views only appear momentarily, and seemingly in response to device movement rather than to its heading. I've posted my onDisplayLink method (the only method with major changes) below. I've tried to read up on the relevant math, but at this point I simply don't know enough to find an approach on my own or to spot the error in my code. Any help, please?
Edit: I've since recognized that the sensor data is better stored in doubles than in ints, and I've added a bit of smoothing. Now I can see more clearly how POIs that should appear from the side as the device rotates instead come down from above. Maybe that helps point to what's wrong.
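For reference, the construction I am attempting, as far as I understand the math: with the normalized accelerometer vector $\vec{g}$ and the magnetometer vector $\vec{b}$,

$$\vec{e} = \vec{g} \times \vec{b}, \qquad \vec{n} = \vec{g} \times (\vec{b} \times \vec{g}) = \vec{b}\,(\vec{g} \cdot \vec{g}) - \vec{g}\,(\vec{g} \cdot \vec{b}),$$

so that the normalized east ($\vec{e}$), north ($\vec{n}$), and gravity vectors form the axes of the rotation matrix. If I have misunderstood this part, that may already be the problem.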
CMAccelerometerData* orientation = motionManager.accelerometerData;
CMAcceleration acceleration = orientation.acceleration;
vec4f_t normalizedAccelerometer;
vec4f_t normalizedMagnetometer;
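// Low-pass filter the raw accelerometer and magnetometer readings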
xG = (acceleration.x * kFilteringFactor) + (xG * (1.0 - kFilteringFactor));
yG = (acceleration.y * kFilteringFactor) + (yG * (1.0 - kFilteringFactor));
zG = (acceleration.z * kFilteringFactor) + (zG * (1.0 - kFilteringFactor));
xB = (heading.x * kFilteringFactor) + (xB * (1.0 - kFilteringFactor));
yB = (heading.y * kFilteringFactor) + (yB * (1.0 - kFilteringFactor));
zB = (heading.z * kFilteringFactor) + (zB * (1.0 - kFilteringFactor));
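// Normalize the filtered vectors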
double accelerometerMagnitude = sqrt(pow(xG, 2) + pow(yG, 2) + pow(zG, 2));
double magnetometerMagnitude = sqrt(pow(xB, 2) + pow(yB, 2) + pow(zB, 2));
normalizedAccelerometer[0] = xG/accelerometerMagnitude;
normalizedAccelerometer[1] = yG/accelerometerMagnitude;
normalizedAccelerometer[2] = zG/accelerometerMagnitude;
normalizedAccelerometer[3] = 1.0f;
normalizedMagnetometer[0] = xB/magnetometerMagnitude;
normalizedMagnetometer[1] = yB/magnetometerMagnitude;
normalizedMagnetometer[2] = zB/magnetometerMagnitude;
normalizedMagnetometer[3] = 1.0f;
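// East should be the cross product of gravity and the magnetic field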
vec4f_t eastDirection;
eastDirection[0] = normalizedAccelerometer[1] * normalizedMagnetometer[2] - normalizedAccelerometer[2] * normalizedMagnetometer[1];
eastDirection[1] = normalizedAccelerometer[0] * normalizedMagnetometer[2] - normalizedAccelerometer[2] * normalizedMagnetometer[0];
eastDirection[2] = normalizedAccelerometer[0] * normalizedMagnetometer[1] - normalizedAccelerometer[1] * normalizedMagnetometer[0];
eastDirection[3] = 1.0f;
double eastDirectionMagnitude = sqrt(pow(eastDirection[0], 2) + pow(eastDirection[1], 2) + pow(eastDirection[2], 2));
vec4f_t normalizedEastDirection;
normalizedEastDirection[0] = eastDirection[0]/eastDirectionMagnitude;
normalizedEastDirection[1] = eastDirection[1]/eastDirectionMagnitude;
normalizedEastDirection[2] = eastDirection[2]/eastDirectionMagnitude;
normalizedEastDirection[3] = 1.0f;
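// North: the field with its gravity component removed, |g|^2 * B - (g . B) * g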
vec4f_t northDirection;
northDirection[0] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * xB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[0];
northDirection[1] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * yB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[1];
northDirection[2] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * zB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[2];
northDirection[3] = 1.0f;
double northDirectionMagnitude;
northDirectionMagnitude = sqrt(pow(northDirection[0], 2) + pow(northDirection[1], 2) + pow(northDirection[2], 2));
vec4f_t normalizedNorthDirection;
normalizedNorthDirection[0] = northDirection[0]/northDirectionMagnitude;
normalizedNorthDirection[1] = northDirection[1]/northDirectionMagnitude;
normalizedNorthDirection[2] = northDirection[2]/northDirectionMagnitude;
normalizedNorthDirection[3] = 1.0f;
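// Use the east / north / gravity vectors as the axes of the rotation matrix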
CMRotationMatrix r;
r.m11 = normalizedEastDirection[0];
r.m21 = normalizedEastDirection[1];
r.m31 = normalizedEastDirection[2];
r.m12 = normalizedNorthDirection[0];
r.m22 = normalizedNorthDirection[1];
r.m32 = normalizedNorthDirection[2];
r.m13 = normalizedAccelerometer[0];
r.m23 = normalizedAccelerometer[1];
r.m33 = normalizedAccelerometer[2];
transformFromCMRotationMatrix(cameraTransform, &r);
[self setNeedsDisplay];
When the device is lying flat on a table and roughly pointing north (checked with Compass.app), I log this data:
Accelerometer: x: -0.016692, y: 0.060852, z: -0.998007
Magnetometer: x: -0.016099, y: 0.256711, z: -0.966354
North Direction x: 0.011472, y: 8.561041, z: 0.521807
Normalized North Direction x: 0.001338, y: 0.998147, z: 0.060838
East Direction x: 0.197395, y: 0.000063, z: -0.003305
Normalized East Direction x: 0.999860, y: 0.000319, z: -0.016742
Does that appear sane?
Edit 2: I have updated the assignment of r to one that apparently gets me halfway to my goal: when the device is upright, I now see the landmarks near the horizontal plane; however, they are about 90° clockwise of their expected locations. Also, here is the output after the movement suggested by Beta:
Accelerometer: x: 0.074289, y: -0.997192, z: -0.009475
Magnetometer: x: 0.031341, y: -0.986382, z: -0.161458
North Direction x: -1.428996, y: -0.057306, z: -5.172881
Normalized North Direction x: -0.266259, y: -0.010678, z: -0.963842
East Direction x: 0.151658, y: -0.011698, z: -0.042025
Normalized East Direction x: 0.961034, y: -0.074126, z: -0.266305
After getting hold of an iPhone 4, I was able to compare the data generated by the code above with the attitude data reported by CoreMotion. That comparison showed that I should assign the values to my rotation matrix as follows:
CMRotationMatrix r;
r.m11 = normalizedNorthDirection[0];
r.m21 = normalizedNorthDirection[1];
r.m31 = normalizedNorthDirection[2];
r.m12 = -normalizedEastDirection[0];
r.m22 = normalizedEastDirection[1];
r.m32 = -normalizedEastDirection[2];
r.m13 = -normalizedAccelerometer[0];
r.m23 = -normalizedAccelerometer[1];
r.m33 = -normalizedAccelerometer[2];
This gives roughly similar values, but of course the data produced by CoreMotion using the gyro is much better. Anyway, it's a starting point for supporting the 3GS reasonably well. Perhaps some additional filtering could improve the quality further, but I haven't decided yet whether that's worth the effort.
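To pick the right code path at runtime, a check along these lines should do (a sketch; locationManager is assumed to be the CLLocationManager that already delivers the heading.x/y/z values used above):

if (motionManager.deviceMotionAvailable) {
    // iPhone 4 and newer: let CoreMotion fuse gyro, accelerometer and compass
    [motionManager startDeviceMotionUpdates];
} else if (motionManager.accelerometerAvailable) {
    // 3GS: fall back to the accelerometer/compass matrix built above
    [motionManager startAccelerometerUpdates];
    [locationManager startUpdatingHeading];
}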
For some reason, the clock is not getting positioned properly, and I have no idea why. I have tried changing the numbers, but this is the best I could get it. I really hope someone can help me out with this. I am also trying to make sure the hands of the clock don't stick out too much.
In your loop where you draw the dashed outline:
var outerCircleRadius = radius;
var innerCircleRadius = radius - 14;
for (double i = 0; i < 360; i += 12) {
  var x1 = centerX + outerCircleRadius * cos(i * pi / 180);
  var y1 = centerX + outerCircleRadius * sin(i * pi / 180);
  var x2 = centerX + innerCircleRadius * cos(i * pi / 180);
  var y2 = centerX + innerCircleRadius * sin(i * pi / 180);
  canvas.drawLine(Offset(x1, y1), Offset(x2, y2), dashBrush);
}
In the assignment for y1 and y2, centerX should be centerY.
EDIT: Looking at the rest of the code, there are a lot of places where centerX should be centerY. Always double-check code that you copy-paste.
I have a video stream coming from a 180 degree fisheye camera. I want to do some image-processing to convert the fisheye view into a normal view.
After some research and a lot of reading, I found this paper.
They describe an algorithm (and some formulas) to solve this problem.
I tried to implement this method in Matlab. Unfortunately, it doesn't work, and I failed to make it work. The "corrected" image looks exactly like the original photograph, with no removal of distortion. Besides that, I am only getting the top-left part of the image, not the complete image. Changing the value of 'K' to 1.9 gives me the whole image, but it's still exactly the same image.
Input image:
Result:
When the value of K is 1.15 as mentioned in the article
When the value of K is 1.9
Here is my code:
image = imread('image2.png');
[Cx, Cy, channel] = size(image);
k = 1.5;
f = (Cx * Cy)/3;
opw = fix(f * tan(asin(sin(atan((Cx/2)/f)) * k)));
oph = fix(f * tan(asin(sin(atan((Cy/2)/f)) * k)));
image_new = zeros(opw, oph, channel);

for i = 1: opw
    for j = 1: oph
        [theta, rho] = cart2pol(i, j);
        R = f * tan(asin(sin(atan(rho/f)) * k));
        r = f * tan(asin(sin(atan(R/f))/k));
        X = ceil(r * cos(theta));
        Y = ceil(r * sin(theta));
        for k = 1: 3
            image_new(i,j,k) = image(X,Y,k);
        end
    end
end

image_new = uint8(image_new);
warning('off', 'Images:initSize:adjustingMag');
imshow(image_new);
This is what solved my problem.
input:
    strength as floating point >= 0 (0 = no change, high numbers equal stronger correction)
    zoom as floating point >= 1 (1 = no change in zoom)

algorithm:
    set halfWidth = imageWidth / 2
    set halfHeight = imageHeight / 2
    if strength = 0 then strength = 0.00001
    set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength
    for each pixel (x, y) in destinationImage
        set newX = x - halfWidth
        set newY = y - halfHeight
        set distance = squareroot(newX ^ 2 + newY ^ 2)
        set r = distance / correctionRadius
        if r = 0 then
            set theta = 1
        else
            set theta = arctangent(r) / r
        set sourceX = halfWidth + theta * newX * zoom
        set sourceY = halfHeight + theta * newY * zoom
        set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)
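For anyone who wants to try this directly, here is a rough C translation of the pseudocode (an untested sketch: the undistort name, the packed 8-bit RGB buffer layout, and the bounds check are my own additions):

#include <math.h>
#include <string.h>

void undistort(const unsigned char *src, unsigned char *dst,
               int width, int height, double strength, double zoom)
{
    if (strength == 0.0) strength = 0.00001;
    double halfWidth = width / 2.0;
    double halfHeight = height / 2.0;
    double correctionRadius = sqrt((double)width * width + (double)height * height) / strength;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double newX = x - halfWidth;
            double newY = y - halfHeight;
            double r = sqrt(newX * newX + newY * newY) / correctionRadius;
            double theta = (r == 0.0) ? 1.0 : atan(r) / r;
            int sourceX = (int)(halfWidth + theta * newX * zoom);
            int sourceY = (int)(halfHeight + theta * newY * zoom);
            // copy the source pixel if it falls inside the image
            if (sourceX >= 0 && sourceX < width && sourceY >= 0 && sourceY < height)
                memcpy(&dst[(y * width + x) * 3], &src[(sourceY * width + sourceX) * 3], 3);
        }
    }
}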
I am following the quaternion tutorial: http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl and am trying to rotate a globe to some XYZ location. I have an initial quaternion and generate a random XYZ location on the surface of the globe. I pass that XYZ location into the following function. The idea was to generate a look-at matrix with GLKMatrix4MakeLookAt and derive the end quaternion for the slerp step from that matrix.
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;

    // The eye location is defined by the look at location multiplied by this modifier
    float modifier = 1.0;

    // Create a look at vector for which we will create a GLKMatrix4 from
    float xEye = x;
    float yEye = y;
    float zEye = z;
    //NSLog(@"%f %f %f %f %f %f", xEye, yEye, zEye, x, y, z);
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
    _currentSatelliteLocation = GLKMatrix4Multiply(_currentSatelliteLocation, self.effect.transform.modelviewMatrix);

    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    //_currentSatelliteLocation = GLKMatrix4Translate(_currentSatelliteLocation, 0.0f, 0.0f, GLOBAL_EARTH_Z_LOCATION);
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);

    // Print info on the quat
    GLKVector3 vec = GLKQuaternionAxis(_slerpEnd);
    float angle = GLKQuaternionAngle(_slerpEnd);
    //NSLog(@"%f %f %f %f", vec.x, vec.y, vec.z, angle);
    NSLog(@"Quat end:");
    [self printMatrix:_currentSatelliteLocation];
    //[self printMatrix:self.effect.transform.modelviewMatrix];
}
The interpolation works and I get a smooth rotation; however, the ending location is never the XYZ I input. I know this because my globe is a sphere and I am calculating XYZ from lat/lon. I want to look directly down the lookAt vector toward the center of the earth from that lat/lon location on the surface of the globe after the rotation. I think it may have something to do with the up vector, but I've tried everything that made sense.
What am I doing wrong? How can I define a final quaternion that, when the rotation finishes, looks down a vector to the XYZ on the surface of the globe? Thanks!
Is this what you mean: your globe center is (0, 0, 0), its radius is R, the start position is (0, 0, R), and your final position is (0, R, 0), so you rotate the globe 90 degrees around the X-axis?
If so, just set the lookAt function's eye position to your final position and its look-at target to the globe center. Here is a simple C++ camera implementation that does this kind of rotation:
// Initial camera basis (target = +Z, right = +X, up = +Y)
m_target.x = 0.0f;
m_target.y = 0.0f;
m_target.z = 1.0f;

m_right.x = 1.0f;
m_right.y = 0.0f;
m_right.z = 0.0f;

m_up.x = 0.0f;
m_up.y = 1.0f;
m_up.z = 0.0f;

// Rotate the target and up vectors around the X axis by amount degrees.
// Note that cos(PI/2 - a) = sin(a) and cos(PI/2 + a) = -sin(a).
void CCamera::RotateX( float amount )
{
    Point3D target = m_target;
    Point3D up = m_up;
    amount = amount / 180 * PI;

    m_target.x = (cos(PI / 2 - amount) * up.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 - amount) * up.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 - amount) * up.z) + (cos(amount) * target.z);

    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 + amount) * target.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 + amount) * target.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 + amount) * target.z);

    Normalize(m_target);
    Normalize(m_up);
}

// Rotate the target and right vectors around the Y axis by amount degrees.
void CCamera::RotateY( float amount )
{
    Point3D target = m_target;
    Point3D right = m_right;
    amount = amount / 180 * PI;

    m_target.x = (cos(PI / 2 + amount) * right.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 + amount) * right.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 + amount) * right.z) + (cos(amount) * target.z);

    m_right.x = (cos(amount) * right.x) + (cos(PI / 2 - amount) * target.x);
    m_right.y = (cos(amount) * right.y) + (cos(PI / 2 - amount) * target.y);
    m_right.z = (cos(amount) * right.z) + (cos(PI / 2 - amount) * target.z);

    Normalize(m_target);
    Normalize(m_right);
}

// Rotate the up and right vectors around the Z axis by amount degrees.
void CCamera::RotateZ( float amount )
{
    Point3D right = m_right;
    Point3D up = m_up;
    amount = amount / 180 * PI;

    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 - amount) * right.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 - amount) * right.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 - amount) * right.z);

    m_right.x = (cos(PI / 2 + amount) * up.x) + (cos(amount) * right.x);
    m_right.y = (cos(PI / 2 + amount) * up.y) + (cos(amount) * right.y);
    m_right.z = (cos(PI / 2 + amount) * up.z) + (cos(amount) * right.z);

    Normalize(m_right);
    Normalize(m_up);
}

// Scale a vector to unit length (no-op for zero-length or already-unit vectors).
void CCamera::Normalize( Point3D &p )
{
    float length = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (1 == length || 0 == length)
    {
        return;
    }
    float scaleFactor = 1.0 / length;
    p.x *= scaleFactor;
    p.y *= scaleFactor;
    p.z *= scaleFactor;
}
The answer to this question is a combination of the following rotateTo function and a change to the code from Ray's tutorial (http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl). As one of the comments on that article says, there is an arbitrary factor of 2.0 multiplied in: GLKQuaternion Q_rot = GLKQuaternionMakeWithAngleAndVector3Axis(angle * 2.0, axis);. Remove that "2" and use the following function to create _slerpEnd; after that, the globe will rotate smoothly to the specified XYZ.
// Rotate the globe using Slerp interpolation to an XYZ coordinate
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;

    // Create a look at vector for which we will create a GLKMatrix4 from
    float xEye = x;
    float yEye = y;
    float zEye = z;
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);

    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
}
I'm trying to understand collision detection in a 2D world. I recently found this tutorial: http://www.gotoandplay.it/_articles/2003/12/bezierCollision.php. One thing has puzzled me a lot: in the Flash demo, the ball drops through without responding if I swap the start and end points.
Can someone explain to me how the simulation works?
I have modified the sample code; it works perfectly until the start and end points are swapped. Here is the same code in Objective-C. Thanks in advance.
-(void)render:(ccTime)dt {
    if(renderer)
    {
        CGPoint b = ball.position;
        float bvx = ball.vx;
        float bvy = ball.vy;
        bvx += .02;
        bvy -= .2;
        b.x += bvx;
        b.y += bvy;
        float br = ball.contentSize.width/2;

        for ( int p = 0 ; p < [map count] ; p++ ) {
            line *l = [map objectAtIndex:p];
            CGPoint p0 = l.end;
            CGPoint p1 = l.start;
            float p0x = p0.x, p0y = p0.y, p1x = p1.x, p1y = p1.y;

            // get Angle //
            float dx = p0x - p1x;
            float dy = p0y - p1y;
            float angle = atan2( dy , dx );
            float _sin = sin ( angle );
            float _cos = cos ( angle );

            // rotate p1 ( need only 'x' ) //
            float p1rx = dy * _sin + dx * _cos + p0x;

            // rotate ball //
            float px = p0x - b.x;
            float py = p0y - b.y;
            float brx = py * _sin + px * _cos + p0x;
            float bry = py * _cos - px * _sin + p0y;

            float cp = ( b.x - p0x ) * ( p1y - p0y ) - ( b.y - p0y ) * ( p1x - p0x );
            if ( bry > p0y - br && brx > p0x && brx < p1rx && cp > 0 ) {
                // calc new Vector //
                float vx = bvy * _sin + bvx * _cos;
                float vy = bvy * _cos - bvx * _sin;
                vy *= -.8;
                vx *= .98;
                float __sin = sin ( -angle );
                float __cos = cos ( -angle );
                bvx = vy * __sin + vx * __cos;
                bvy = vy * __cos - vx * __sin;

                // calc new Position //
                bry = p0y - br;
                dx = p0x - brx;
                dy = p0y - bry;
                b.x = dy * __sin + dx * __cos + p0x;
                b.y = dy * __cos - dx * __sin + p0y;
            }
        }

        ball.position = b;
        ball.vx = bvx;
        ball.vy = bvy;

        if ( b.y < 42)
        {
            ball.position = ccp(50, size.height - 42);
            ball.vx = .0f;
            ball.vy = .0f;
        }
    }
}
The order of the points defines an orientation on the curve. If the start point is on the left and the end point on the right, then the curve is oriented so that "up" points above the curve. However, if you swap the start/end points the curve is oppositely oriented, so now "up" actually points below the curve.
When your code detects a collision and then corrects the velocity, it uses the curve's orientation. That is why, when the ball drops onto the curve with the start/end points swapped, it appears to jump through the curve.
To correct this your collision resolution code should check which side of the curve the ball is on (with respect to the curve's orientation), and adjust accordingly.
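A sketch of one way to do that, reusing the names from the snippet above (this assumes the demo's working orientation runs from start on the left to end on the right; pA/pB are my local names, not from the tutorial):

// Put the endpoints into a canonical left-to-right order before the
// angle / rotation / cp computations, so "up" always means the same side.
CGPoint pA = l.start;
CGPoint pB = l.end;
if (pB.x < pA.x) {          // swapped input: restore the expected order
    CGPoint tmp = pA;
    pA = pB;
    pB = tmp;
}
CGPoint p0 = pB;            // the loop above uses p0 = end, p1 = start
CGPoint p1 = pA;
// ...then continue with the existing angle / rotation / cp logic unchanged.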
If you swap l.end and l.start, the test effectively matches the line excluding the segment (l.start, l.end). This is because all the values here are signed.
The algorithm rotates the plane so that the line is horizontal, keeping one end of the segment fixed. After that it is easy to tell whether the ball touches the line. If it does, its velocity changes: in the rotated plane the y-component is simply reversed, and then everything is rotated back so the line is no longer horizontal.
In fact, this is not a very good implementation. All of this can be done without sin and cos, using just vectors.
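A sketch of the vector version, keeping the same physics (the normal component of the velocity is reflected and damped by 0.8, the tangential component damped by 0.98; the helper and type names are mine):

#include <math.h>

// Reflect the velocity v about the segment's unit normal n:
// v' = 0.98 * v_tangential - 0.8 * (v . n) * n
typedef struct { float x, y; } Vec2;

static Vec2 bounce(Vec2 v, Vec2 p0, Vec2 p1) {
    Vec2 d = { p1.x - p0.x, p1.y - p0.y };        // segment direction
    float len = sqrtf(d.x * d.x + d.y * d.y);
    Vec2 n = { -d.y / len, d.x / len };           // unit normal to the segment
    float vn = v.x * n.x + v.y * n.y;             // normal component of v
    Vec2 vt = { v.x - vn * n.x, v.y - vn * n.y }; // tangential component
    return (Vec2){ vt.x * 0.98f - 0.8f * vn * n.x,
                   vt.y * 0.98f - 0.8f * vn * n.y };
}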
I'm making a line graph on the iphone using core graphics and instead of having a jagged chart, I want to smooth it out like in good old math class. What's the formula to pick where to put the control points for CGContextAddCurveToPoint?
CGFloat cp1x = (prevX + prevX + x) / 3;
CGFloat cp1y = (prevY + prevY + y) / 3;
CGFloat cp2x = (x + x + prevX) / 3;
CGFloat cp2y = (y + y + prevY) / 3;
CGContextAddCurveToPoint(context, cp1x, cp1y, cp2x, cp2y, x, y);
This code almost works, but it doesn't take 3 points into account.
I ended up doing something like this that worked well:
// Positions of the two previous items in the chart
CGPoint prevItemPosition2 = [self positionForItem:prevItem2 andMaxItem:maxItem inItems:items];
CGPoint prevItemPosition1 = [self positionForItem:prevItem1 andMaxItem:maxItem inItems:items];

// Direction of the previous segment, flipped to point toward the current item
CGFloat cpAngle = atan2f((prevItemPosition2.y - prevItemPosition1.y),
                         (prevItemPosition2.x - prevItemPosition1.x));
cpAngle += M_PI;

// Place the control point a third of the way to the current item, continuing that direction
CGFloat magnitude = sqrtf(powf(prevItemPosition1.x - itemPosition.x, 2) +
                          powf(prevItemPosition1.y - itemPosition.y, 2)) / 3;
CGPoint angleComponents = CGPointMake(cos(cpAngle) * magnitude,
                                      sin(cpAngle) * magnitude);
CGPoint cp = CGPointMake(prevItemPosition1.x + angleComponents.x,
                         prevItemPosition1.y + angleComponents.y);
CGContextAddQuadCurveToPoint(context, cp.x, cp.y, itemPosition.x, itemPosition.y);