Point in Tilt Direction - iPhone

In my cocos2d game, I have my player sprite and I want to have him move in the direction I tilt my iPhone. I can deal with that; the hardest bit, which I can't work out, is:
How do I make my sprite rotate to point in the direction I am tilting? This is represented very well in the 'Tilt to Live' game on the app store. I want controls just like that.
My sprite (for those unfamiliar with cocos2d) does have a rotation value, if that helps.
Thanks.

If you don't like the above, here's a simpler way to get a reasonable result!
Hold the iPad in front of you, let LR be the left / right tilt, TA the towards you / away from you tilt. So LR runs from -90 to 90, TA from -90 to 90. (TA negative is leaning towards your belly.)
Display both those numbers on your screen, and move the device around, so you are certain you have that right to begin with. You won't be able to do anything until that is working.
The solutionAngle will be like a clock hand, clockwise, with 12 pointing away from you.
Go through this decision chain:
If both LR and TA are zero, the machine is flat. Act appropriately.
If LR is flat (0), the answer is either 0 or 180, depending on the sign of TA.
If TA is flat (0), the answer is either 90 or 270, depending on the sign of LR.
Otherwise:
adjustmentAngle = arctan( sin(TA) / sin(LR) )
// (NB, that should run from -90 to +90)
if ( LR > 0 ) finalResult = 90 - adjustmentAngle
if ( LR < 0 ) finalResult = 270 + adjustmentAngle
I think that will do it! Hope it helps!
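As a rough sketch, here is that whole decision chain as one function. Treat it as a sketch only: the function name, the previous-heading fallback for a flat device, and the sign conventions for LR and TA (positive = tilted right / tilted away) are my own assumptions, and I take the magnitude of sin(LR) so the two branches meet up cleanly at LR = 0.

#include <math.h>

// LR = left/right tilt in degrees (assumed: positive = tilted right),
// TA = towards/away tilt in degrees (assumed: positive = tilted away from you).
// Returns the clock-style heading: 0 = away (12 o'clock), 90 = right, 180 = towards you, 270 = left.
float headingFromTilt(float LR, float TA, float previousHeading)
{
    if (LR == 0.0f && TA == 0.0f) return previousHeading; // device is flat: keep the old heading
    if (LR == 0.0f) return (TA > 0.0f) ? 0.0f : 180.0f;   // purely towards/away tilt
    if (TA == 0.0f) return (LR > 0.0f) ? 90.0f : 270.0f;  // purely left/right tilt

    const float d2r = (float)M_PI / 180.0f;
    // adjustmentAngle runs from -90 to +90
    float adjustment = atan2f(sinf(TA * d2r), fabsf(sinf(LR * d2r))) / d2r;

    return (LR > 0.0f) ? 90.0f - adjustment : 270.0f + adjustment;
}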
IMO... be sure to smooth the result over time, for a good feel.
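One possible smoothing sketch (the helper name and the 0.15 factor are just placeholders to tune); filtering the sine and cosine rather than the raw angle avoids a jump at the 0/360 wrap-around:

#include <math.h>

static float smoothedX = 0.0f, smoothedY = 1.0f;     // start pointing at heading 0

float smoothHeading(float newHeadingDegrees)
{
    const float k = 0.15f;                           // smoothing factor, 0..1
    float rad = newHeadingDegrees * (float)M_PI / 180.0f;
    smoothedX += k * (sinf(rad) - smoothedX);        // low-pass filter each component
    smoothedY += k * (cosf(rad) - smoothedY);
    float out = atan2f(smoothedX, smoothedY) * 180.0f / (float)M_PI;
    return (out < 0.0f) ? out + 360.0f : out;        // back to 0..360
}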
setting the angle...
"the only thing I am unsure of currently (concerning your own idea) is how do I apply it to my player? Do I merely make the player rotation value equal to the adjustmentAngle?" .. hi Josh, yes simply set the rotation to the final angle you calculate using the above! Fortunately it's that simple.
If you ever have to convert back and forth between degrees and radians, just paste in these lines of code that everyone uses:
#include <math.h>
static inline float degreestoradians(double degrees) { return degrees * M_PI / 180; }
static inline float radianstodegrees(double radians) { return radians * 180 / M_PI; }
where are the axes?...
PS, here's the incredibly handy diagram you may want to bookmark:
http://developer.apple.com/library/ios/#documentation/uikit/reference/UIAcceleration_Class/Reference/UIAcceleration.html
converting from accelerometer to angles...
"the accelerometer doesn't provide the raw data in angles. How do get from the raw data"
Quite right, I forgot to mention it sorry. This is an everyday problem...
distanceFactor = square root of (x^2 + y^2 + z^2)
angle X axis = acos( x / distanceFactor )
angle Y axis = acos( y / distanceFactor )
angle Z axis = acos( z / distanceFactor )
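Translated directly into code, that looks roughly like this (the function name is just for illustration; x, y, z are the raw UIAcceleration values):

#include <math.h>

// Convert raw accelerometer readings into the angle each axis makes
// with the gravity vector (results in degrees).
void anglesFromAccel(float x, float y, float z,
                     float *angleX, float *angleY, float *angleZ)
{
    float distanceFactor = sqrtf(x * x + y * y + z * z);   // length of the raw vector
    if (distanceFactor == 0.0f) return;                    // no reading yet
    float r2d = 180.0f / (float)M_PI;
    *angleX = acosf(x / distanceFactor) * r2d;
    *angleY = acosf(y / distanceFactor) * r2d;
    *angleZ = acosf(z / distanceFactor) * r2d;
}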
You must TEST this by writing the three angles on the screen and then moving the device around; in other words, "physically unit test" that section you write, before you proceed!
Here is one of many answers from SO: UIAccelerationValue angle
BTW, as you can probably see, you can get a rough result by simply taking the ratio of the raw x and y values, rather than the ratio of the two sines, in the 'adjustmentAngle' expression ... but anyway don't worry about that for now.
And finally!
IMPORTANT: Readers should note that the amazing new Core Motion system handles a lot of this for you, depending on your needs. Check it out!
Hope it helps!

Related

Calculating coordinates from reference points

I'm working on a game in Unity where you can walk around in a city that also exists in real life.
In the game you should be able to enter real-world coordinates, or use your phone's GPS, and you'll be transported to the in-game position of those coordinates.
For this, I'd need to somehow convert the game coordinates to latitude and longitude coordinates. I have some coordinates from specific buildings, and I figured I might be able to write a script to determine the game coordinates from those reference points.
I've been searching for a bit on Google, and though I have probably come across the right solutions occasionally, I've been unable to understand them enough to use them in my code.
If someone has experience with this, or knows how I could do this, I'd appreciate it if you could help me understand it :)
Edit: Forgot to mention that other previous programmers have already placed the world at some position and rotation they felt like using, which unfortunately I can't simply change without breaking things.
Tim Falken
This is simple linear math. The main issue you'll come across is the fact that your game coordinate system will probably be reversed along one or more axes. You'll probably need to reverse the direction along the latitude (Y) axis of your app. Aside from that it is just a simple conversion of the scales. Since you say that this is the map of a real place, you should be able to easily figure out the min/max lon/lat which your map covers. Take the absolute value of the difference between these two values and divide that by the width/height of your map in each direction. This will be the change in longitude/latitude per map unit. Store this value and it should be easy to convert both ways between the two units. Make functions that abstract the details and you should have no problems calculating this either way.
I assume that you have been able to retrieve the GPS coordinates OK.
EDIT:
By simple linear math I mean something like this (this is C++-style pseudo code and completely untested; in a real-world example the constants would all be member variables instead):

#include <cmath>

const float MAP_WIDTH  = 1000;
const float MAP_HEIGHT = 1000;
const float MIN_LON = 25.333f;
const float MIN_LAT = 20.333f;
const float MAX_LON = 27.25f;
const float MAX_LAT = 20.50f;

class CoordConversion {
public:
    // Change in longitude / latitude per map unit.
    static float XScale() { return std::fabs(MAX_LON - MIN_LON) / MAP_WIDTH; }
    static float YScale() { return std::fabs(MAX_LAT - MIN_LAT) / MAP_HEIGHT; }
    // +1 if the map axis runs in the same direction as lon / lat, -1 if reversed.
    static int LonDir() { return MIN_LON < MAX_LON ? 1 : -1; }
    static int LatDir() { return MIN_LAT < MAX_LAT ? 1 : -1; }

    static float GetXFromLon(float lon) {
        return (LonDir() > 0 ? (lon - MIN_LON) : (lon - MAX_LON)) / XScale();
    }
    static float GetYFromLat(float lat) {
        return (LatDir() > 0 ? (lat - MIN_LAT) : (lat - MAX_LAT)) / YScale();
    }
    static float GetLonFromX(float x) {
        return (LonDir() > 0 ? MIN_LON : MAX_LON) + x * XScale();
    }
    static float GetLatFromY(float y) {
        return (LatDir() > 0 ? MIN_LAT : MAX_LAT) + y * YScale();
    }
};
EDIT2: In the case that the map is rotated, you'll want to use the minimum and maximum lon/lat actually shown on the map. You'll also need to rotate each point after the conversion. I'm not even going to attempt to get this right off the top of my head, but I can give you the code you'll need:
#include <math.h>

struct POINT { float x; float y; };

POINT rotate_point(float cx, float cy, float angle, POINT p)
{
    float s = sin(angle);
    float c = cos(angle);
    // translate point back to origin:
    p.x -= cx;
    p.y -= cy;
    // rotate point
    float xnew = p.x * c - p.y * s;
    float ynew = p.x * s + p.y * c;
    // translate point back:
    p.x = xnew + cx;
    p.y = ynew + cy;
    return p;
}
This will need to be done when returning a game point, and it also needs to be done in reverse before using a game point to convert to a lat/lon point.
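For example, here is a minimal sketch of how the two pieces might be combined. The rotation angle and pivot point are placeholder values you would have to determine for your own scene, and the function names are just illustrative:

// Hypothetical values: the rotation already applied to the game world (radians)
// and the pivot it was rotated around, in game units.
const float MAP_ROTATION = 0.35f;
const float PIVOT_X = MAP_WIDTH / 2;
const float PIVOT_Y = MAP_HEIGHT / 2;

// lat/lon -> game point: convert first, then apply the world's rotation.
POINT GameFromGeo(float lon, float lat) {
    POINT p = { CoordConversion::GetXFromLon(lon), CoordConversion::GetYFromLat(lat) };
    return rotate_point(PIVOT_X, PIVOT_Y, MAP_ROTATION, p);
}

// game point -> lat/lon: undo the rotation first, then convert back.
void GeoFromGame(POINT p, float &lon, float &lat) {
    p = rotate_point(PIVOT_X, PIVOT_Y, -MAP_ROTATION, p);
    lon = CoordConversion::GetLonFromX(p.x);
    lat = CoordConversion::GetLatFromY(p.y);
}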
EDIT3: More help on getting the coordinates of your map. First find the city or whatever it is on Google Maps. Then you can right-click the highest point (furthest north) on your map and find the highest latitude. Repeat this for all four cardinal directions and you should be set.

create opencv camera matrix for iPhone 5 solvepnp

I am developing an application for the iPhone using opencv. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx   0  cx ]
[  0  fy  cy ]
[  0   0   1 ]
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another Link I found states that fx = focalMM * width / (sensorSizeMM);
fy = focalMM * length / (sensorSizeMM);
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary, as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up the sensor's spec sheet on the interwebs, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle, rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
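For reference, the conversion between the two is a one-liner each way; a small sketch, with angles in radians and the function names just for illustration:

#include <cmath>

// Focal length in pixels from the horizontal field of view, for an image W pixels wide.
double focalPxFromFov(double hfovRad, double imageWidthPx) {
    return (imageWidthPx / 2.0) / std::tan(hfovRad / 2.0);
}
// And the inverse: field of view from a focal length expressed in pixels.
double fovFromFocalPx(double fxPx, double imageWidthPx) {
    return 2.0 * std::atan(imageWidthPx / (2.0 * fxPx));
}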
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance from your camera and measuring its size in the image. You could also assume the central point is the center of the image. You should definitely not ignore the lens distortion (the dist_coef parameter in solvePnPRansac).
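As a sketch of that estimation (all numbers here are placeholders you would measure yourself):

// Pinhole model: pixels = f * L / z, so f = pixels * z / L.
// Placeholder measurements: a 0.30 m wide object, 2.0 m from the camera,
// spanning 480 pixels in the image.
double objectSizeM = 0.30;
double distanceM   = 2.0;
double measuredPx  = 480.0;
double fPx = measuredPx * distanceM / objectSizeM;   // focal length in pixels (3200 here)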
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link; there's also a video tutorial here. If you use Matlab, go for the Camera Calibration Toolbox.
Here you have a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;                              // focal length in mm
double resX = (double)(sourceImage.cols);    // image width in pixels
double resY = (double)(sourceImage.rows);    // image height in pixels
double sensorSizeX = 4.89;                   // sensor width in mm (from the table)
double sensorSizeY = 3.67;                   // sensor height in mm (from the table)
double fx = f * resX / sensorSizeX;          // focal length in pixels, x
double fy = f * resY / sensorSizeY;          // focal length in pixels, y
double cx = resX/2.;                         // principal point: image center
double cy = resY/2.;
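Once you have fx, fy, cx and cy, you can pack them into the 3x3 matrix that solvePnPRansac expects. A minimal sketch of that step; note that the all-zero distortion vector is only a placeholder, and you should calibrate the real coefficients as discussed in the other answers:

#include <opencv2/core/core.hpp>

// 3x3 intrinsic matrix in the layout shown in the question.
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    fx, 0., cx,
    0., fy, cy,
    0., 0., 1.);

// Distortion coefficients (k1, k2, p1, p2, k3); all zeros is only a placeholder.
cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);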
Try this:
func getCamMatrix()->(Float, Float, Float, Float)
{
    let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
    let fDesc:CMFormatDescriptionRef = format!.formatDescription
    let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
    // dim = dimensions of the final image
    let cx:Float = Float(dim.width) / 2.0;
    let cy:Float = Float(dim.height) / 2.0;
    let HFOV : Float = format!.videoFieldOfView
    let VFOV : Float = ((HFOV)/cx)*cy
    let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
    let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
    return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may be barely good enough to get started. The best will be to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps getting started!

How can I correctly calculate the direction for a moving object?

I'm solving the following problem: I have an object and I know its position now and its position 300 ms ago. I assume the object is moving. I have a point that I want the object to reach.
What I need is to get the angle from my current object to the destination point in such a format that I know whether to turn left or right.
The idea is to estimate the current heading from the last known position and the current position.
I'm trying to solve this in MATLAB. I've tried using several variations with atan2 but either I get the wrong angle in some situations (like when my object is going in circles) or I get the wrong angle in all situations.
Examples of code that screws up:
a = new - old;
b = dest - new;
alpha = atan2(a(2) - b(2), a(1) - b(1));
where new is the current position (eg. x = 40; y = 60; new = [x y];), old is the 300ms old position and dest is the destination point.
Edit
Here's a picture to demonstrate the problem with a few examples:
In the above image there are a few points plotted and annotated. The black line indicates our estimated current facing of the object.
If the destination point is dest1 I would expect an angle of about 88°.
If the destination point is dest2 I would expect an angle of about 110°.
If the destination point is dest3 I would expect an angle of about -80°.
Firstly, you need to note the scale on the sample graph you show above. The x-axis ticks move in steps of 1, and the y-axis ticks move in steps of 20. The picture with the two axes appropriately scaled (like with the command axis equal) would be a lot narrower than you have, so the angles you expect to get are not right. The expected angles will be close to right angles, just a few degrees off from 90 degrees.
The equation Nathan derives is valid for column vector inputs a and b:
theta = acos(a'*b/(sqrt(a'*a) * sqrt(b'*b)));
If you want to change this equation to work with row vectors, you would have to switch the transpose operator in both the calculation of the dot product as well as the norms, like so:
theta = acos(a*b'/(sqrt(a*a') * sqrt(b*b')));
As an alternative, you could just use the functions DOT and NORM:
theta = acos(dot(a,b)/(norm(a)*norm(b)));
Finally, you have to account for the direction, i.e. whether the angle should be positive (turn clockwise) or negative (turn counter-clockwise). You can do this by computing the sign of the z component for the cross product of b and a. If it's positive, the angle should be positive. If it's negative, the angle should be negative. Using the function SIGN, our new equation becomes:
theta = sign(b(1)*a(2)-b(2)*a(1)) * acos(dot(a,b)/(norm(a)*norm(b)));
For your examples, the above equation gives angles of 88.85, 92.15, and -88.57 degrees for your three points dest1, dest2, and dest3.
NOTE: One special case you will need to be aware of is if your object is moving directly away from the destination point, i.e. if the angle between a and b is 180 degrees. In such a case you will have to pick an arbitrary turn direction (left or right) and a number of degrees to turn (180 would be ideal ;) ). Here's one way you could account for this condition using the function EPS:
theta = acos(dot(a,b)/(norm(a)*norm(b)));    %# Compute theta
if abs(theta-pi) < eps                       %# Check if theta is within some tolerance of pi
  %# Pick your own turn direction and amount here
else
  theta = sign(b(1)*a(2)-b(2)*a(1))*theta;   %# Find turn direction
end
You can try using the dot-product of the vectors.
Define the vectors 'a' and 'b' as:
a = new - old;
b = dest - new;
and use the fact that the dot product is:
a dot b = norm2(a) * norm2(b) * cos(theta)
where theta is the angle between two vectors, and you get:
cos(theta) = (a dot b)/ (norm2(a) * norm2(b))
The best way to calculate a dot b, assuming they are column vectors, is like this:
a_dot_b = a'*b;
and:
norm2(a) = sqrt(a'*a);
so you get:
cos(theta) = a'*b/(sqrt((a'*a)) * sqrt((b'*b)))
The sign of the cosine tells you whether the destination is ahead of you or behind you; to decide whether to turn left or right you still need the sign of the cross product, as in the other answer.
Essentially you have a line defined by the points old and new, and wish to determine if dest is on the right or the left of that line? In which case have a look at this previous question.
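The test itself is a one-liner. A small sketch (the function name is just for illustration; the sign convention assumes standard math axes with y pointing up):

// Returns > 0 if dest is to the left of the line from old to new,
// < 0 if it is to the right, and 0 if it lies on the line.
float sideOfLine(float oldX, float oldY, float newX, float newY,
                 float destX, float destY)
{
    return (newX - oldX) * (destY - oldY) - (newY - oldY) * (destX - oldX);
}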

Is there a fast way to calculate the smallest delta between two rotation values?

There are two views:
viewA and viewB. Both are rotated.
The coordinate system for rotation is weird: it goes from 0 to 179.999999 or -179.99999 degrees. So essentially 179.99999 and -179.99999 are very close together!
I want to calculate how much degrees or radians are between these rotations.
For example:
viewA is rotated at 20 degrees
viewB is rotated at 30 degrees
I could just do: rotationB - rotationA = 10.
But the problem with this formula:
viewA is rotated at 179 degrees
viewB is rotated at -179 degrees
that would go wrong: rotationB - rotationA = -179 - 179 = -358
358 is plain wrong, because they are very close together in reality. So one thing I could do maybe is to check if the absolute result value is bigger than 180, and if so, calculate it the other way around to get the true short delta. But I feel this is plain wrong and bad, because of possible floating point errors and imprecision. So if two views are rotated essentially equally at 179.99999999999 degrees I might get a weird 180, or a 0 if I am lucky.
Maybe there's a genius-style math formula with PI, sine or other useful stuff to get around this problem?
EDIT: The original answer (using Mod) was wrong: it would have given 180 minus the right answer in certain circumstances (angles 30 and -20, for example, would give an answer of 130 instead of the correct answer of 50):
Two correct answers for all scenarios:
If A1 and A2 are two angles (between -179.99999 and 179.99999) and Abs means take the absolute value, the angular distance between them is expressed by:
Angle between = 180 - Abs(Abs(A1 - A2) - 180)
Or, using a C-style ternary operator on the absolute difference d = Abs(A1 - A2):
Angle between = d > 180 ? 360 - d : d
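In C-like code, a sketch of both the unsigned distance and a signed variant (how far, and which way, to rotate from a1 to reach a2); the function names are just for illustration:

#include <math.h>

// Smallest unsigned angle between two rotations given in degrees.
float angleBetween(float a1, float a2) {
    float d = fabsf(a1 - a2);
    return (d > 180.0f) ? 360.0f - d : d;
}

// Signed version: how far to rotate from a1 to reach a2, result in (-180, 180].
float signedDelta(float a1, float a2) {
    float d = fmodf(a2 - a1, 360.0f);     // now in (-360, 360)
    if (d > 180.0f)   d -= 360.0f;
    if (d <= -180.0f) d += 360.0f;
    return d;
}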
Judging from the recent questions you've asked, you might want to read up on the unit circle. This is a fundamental concept in trigonometry, and it is how angles are calculated when doing rotations using CGAffineTransforms or CATransform3Ds.
Basically, the unit circle goes from 0 to 360 degrees, or 0 to 2 * pi (M_PI is the constant used on the iPhone) radians. Any angle greater than 360 degrees is the same as that angle minus a multiple of 360 degrees. For example, 740 degrees is the same as 380 degrees, which is the same as 20 degrees, when it comes to the ending position of something rotated by that much.
Likewise, negative degrees are the same as if you'd added a multiple of 360 degrees to them. -20 degrees is the same as 340 degrees.
There's no magic behind any of these calculations, you just have to pay attention to when something crosses the 0 / 360 degree point on the circle. In the case you describe, you can add 360 to any negative values to express them in positive angles. When subtracting angles, if the ending angle is less than the starting angle, you may also need to add 360 to the result to account for crossing the zero point on the unit circle.
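A tiny sketch of that bookkeeping, in degrees (helper names are just for illustration):

#include <math.h>

// Map any angle into the 0..360 range.
float normalize360(float angle) {
    float a = fmodf(angle, 360.0f);
    return (a < 0.0f) ? a + 360.0f : a;
}

// Difference from 'start' to 'end' in the direction of increasing angle,
// adding 360 when the subtraction crosses the zero point of the unit circle.
float differenceCrossingZero(float start, float end) {
    float d = normalize360(end) - normalize360(start);
    return (d < 0.0f) ? d + 360.0f : d;
}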
Let's try this again:
There are two angles between A and B. One of them is
θ1 = |A - B|
The other is
θ2 = 360 - θ1
So just take the minimum of those two.
In addition to Brad Larson's excellent answer I would add that you can do:
CGFloat adjustAngle(CGFloat angle) { return fmod(angle + 180.0, 360.0); }
...
CGFloat difference = fmod(adjustAngle(angle1) - adjustAngle(angle2), 360.0);
Take the difference, add 360, and mod by 360.

Car turning circle and moving the sprite

I would like to use Cocos2d on the iPhone to draw a 2D car and make it steer from left to right in a natural way.
Here is what I tried:
Calculate the angle of the wheels and just move it to the destination point where the wheels point to. But this creates a very unnatural feel: the car drifts half the time.
After that I started some research on how to get a turning circle from a car, which meant that I needed a couple of constants like wheelbase and the width of the car.
After a lot of research, I created the following code:
float steerAngle = 30; // in degrees
float speed = 20;
float carWidth = 1.8f; // as in 1.8 meters
float wheelBase = 3.5f; // as in 3.5 meters
float x = (wheelBase / abs(tan(steerAngle)) + carWidth / 2);
float wheelBaseHalf = wheelBase / 2;
float r = (float) sqrt(x * x + wheelBaseHalf * wheelBaseHalf);
float theta = speed * 1 / r;
if (steerAngle < 0.0f)
    theta = theta * -1;
drawCircle(CGPointMake(carPosition.x - r, carPosition.y),
           r, CC_DEGREES_TO_RADIANS(180), 50, NO);
The first couple of lines are my constants. carPosition is of the type CGPoint. After that I try to draw a circle which shows the turning circle of my car, but the circle it draws is far too small. I can just make my constants bigger, to make the circle bigger, but then I would still need to know how to move my sprite on this circle.
I tried following a .NET tutorial I found on the subject, but I can't really completely convert it because it uses matrices, which aren't supported by Cocoa.
Can someone give me a couple of pointers on how to start this? I have been looking for example code, but I can't find any.
EDIT After the comments given below
I corrected my constants, my wheelBase is now 50 (the sprite is 50px high), my carWidth is 30 (the sprite is 30px in width).
But now I have the problem that when my car does its first 'tick', the rotation is correct (and also the placement), but after that the calculations seem wrong.
The middle of the turning circle is moved instead of kept at its original position. What I need (I think) is that at each angle of the car I need to recalculate the original centre of the turning circle. I would think this is easy, because I have the radius and the turning angle, but I can't seem to figure out how to keep the car moving in a nice circle.
Any more pointers?
You have the right idea. The constants are the problem in this case. You need to specify wheelBase and carWidth in units that match your view size. For example, if the image of your car on the screen has a wheel base of 30 pixels, you would use 30 for the WheelBase variable.
This explains why your on-screen circles are too small. Cocoa is trying to draw circles for a tiny little car which is only 1.8 pixels wide!
Now, for the matter of moving your car along the circle:
The theta variable you calculate in the code above is a rotational speed, which is what you would use to move the car around the center point of that circle:
Let's assume that your speed variable is in pixels per second, to make the calculations easier. With that assumption in place, you would simply execute the following code once every second:
// calculate the new position of the car
newCarPosition.x = (carPosition.x - r) + r*cos(theta);
newCarPosition.y = carPosition.y + r*sin(theta);
// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
Note: I'm not sure what the correct method is to rotate your car's image, so I just used rotateByAngle: to get the point across. I hope it helps!
update (after comments):
I hadn't thought about the center of the turning circle moving with the car. The original code doesn't take into account the angle that the car is already rotated to. I would change it as follows:
...
if (steerAngle < 0.0f)
    theta = theta * -1;
// calculate the center of the turning circle,
// taking into account the rotation of the car
circleCenter.x = carPosition.x - r*cos(carAngle);
circleCenter.y = carPosition.y + r*sin(carAngle);
// draw the turning circle
drawCircle(circleCenter, r, CC_DEGREES_TO_RADIANS(180), 50, NO);
// calculate the new position of the car
newCarPosition.x = circleCenter.x + r*cos(theta);
newCarPosition.y = circleCenter.y + r*sin(theta);
// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
carAngle = carAngle + theta;
This should keep the center of the turning circle at the appropriate point, even if the car has been rotated.
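Putting the pieces together, a per-frame version might look roughly like the sketch below. The Car struct, the function name and the dt-based stepping are my own packaging (dt is the time since the last frame, in seconds); the geometry follows the snippets above, and because the centre is recomputed from the car's current angle each step, it stays fixed while the steering angle is held constant:

#include <math.h>

// All fields in the same units as the snippets above: pixels, seconds, radians.
struct Car {
    float x, y;        // position
    float angle;       // angular position on the turning circle
    float speed;       // pixels per second
    float steerAngle;  // degrees; negative = steering the other way
    float wheelBase;   // pixels
    float width;       // pixels
};

void stepCar(Car &car, float dt)
{
    float steerRad = car.steerAngle * (float)M_PI / 180.0f;
    float xOff = car.wheelBase / fabsf(tanf(steerRad)) + car.width / 2;
    float half = car.wheelBase / 2;
    float r = sqrtf(xOff * xOff + half * half);   // turning radius in pixels

    float theta = (car.speed / r) * dt;           // angle travelled this frame
    if (car.steerAngle < 0.0f) theta = -theta;

    // centre of the turning circle for the car's current angle
    float cx = car.x - r * cosf(car.angle);
    float cy = car.y + r * sinf(car.angle);

    // advance along the circle and remember the new angle
    car.angle += theta;
    car.x = cx + r * cosf(car.angle);
    car.y = cy - r * sinf(car.angle);
    // ...then rotate the sprite by theta as well, as in the pseudo-code above.
}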