Calculate missing y between two points - coordinates

I would like to know how to calculate the missing y coordinate between two known points. The x position of the point is known; only the y position is unknown. How do I calculate it from the known points?
The known points are:
point 1:
x: -4000
y: 4000
point 2:
x: 4000
y: 0
point 3:
x: -1000
y: ?
I tried using a map function but it did not work.
Visual of the missing point

Find the equation of the line joining point 1 and point 2 using the two-point form of a line.
As point 3 lies on this line, you can substitute the x coordinate of point 3 into that equation and get the unknown y coordinate!
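Worked through with the numbers above: the slope is (0 - 4000) / (4000 - (-4000)) = -0.5, so y = 4000 - 0.5 * (x + 4000), and at x = -1000 that gives y = 2500. A minimal C# sketch of the same interpolation (names are illustrative):
static double InterpolateY(double x1, double y1, double x2, double y2, double x)
{
    // Two-point form: y = y1 + m*(x - x1), with m = (y2 - y1)/(x2 - x1).
    // Assumes x1 != x2 (the line is not vertical).
    double m = (y2 - y1) / (x2 - x1);
    return y1 + m * (x - x1);
}
// InterpolateY(-4000, 4000, 4000, 0, -1000) returns 2500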

I got some help from a coworker. We came up with the following method, which calculates the crossing point of two lines.
/// <summary>
/// This method calculates the crossing x and y point of two lines.
/// </summary>
/// <param name="newMessage">The new position</param>
/// <param name="point1">The first point of the line</param>
/// <param name="point2">The second point of the line</param>
/// <returns>A new point with the x and y position of the crossing point; if the lines do not cross, a point with the values -4001, -4001 is returned</returns>
public static Point CalculateCrossingPointOfTwoLines(LaserMessage newMessage, Point point1, Point point2)
{
    int currentXPosition = LaserConnectionLogic.PreviousLaserMessage.X;
    int currentYPosition = LaserConnectionLogic.PreviousLaserMessage.Y;
    Point lowestPoint = point1.Y < point2.Y ? point1 : point2;
    // Line 1 (previous position -> new message) in the form a1*x + b1*y = c1
    double yDifferenceCurrentAndNew = currentYPosition - newMessage.Y;
    double xDifferenceCurrentAndNew = newMessage.X - currentXPosition;
    double c1 = yDifferenceCurrentAndNew * newMessage.X + xDifferenceCurrentAndNew * newMessage.Y;
    // Line 2 (point1 -> point2) in the form a2*x + b2*y = c2
    double a2 = point2.Y - point1.Y;
    double crossedLineLengthDifference = point1.X - point2.X;
    double c2 = a2 * lowestPoint.X + crossedLineLengthDifference * lowestPoint.Y;
    double determinant = yDifferenceCurrentAndNew * crossedLineLengthDifference - a2 * xDifferenceCurrentAndNew;
    if (determinant == 0)
    { // a zero determinant means the lines are parallel and do not cross
        return new Point(-4001, -4001);
    }
    // Cramer's rule gives the intersection of the two lines
    double x = (crossedLineLengthDifference * c1 - xDifferenceCurrentAndNew * c2) / determinant;
    double y = (yDifferenceCurrentAndNew * c2 - a2 * c1) / determinant;
    return new Point(x.ToInt(), y.ToInt());
}
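A hedged usage sketch (assuming a LaserMessage named newMessage and that Point takes integer coordinates, as in the method above):
// Where does the path from the previous laser position to newMessage
// cross the line through the two known points?
Point crossing = CalculateCrossingPointOfTwoLines(newMessage, new Point(-4000, 4000), new Point(4000, 0));
if (crossing.X == -4001 && crossing.Y == -4001)
{
    // parallel lines: no crossing point exists
}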

Related

Compute coordinates position with projection

Given 2 coordinates in WGS84 (points 1 and 2, in red), I need to find the coordinates of the point (point 3) perpendicular to the line, at a given distance.
I managed to do the math to compute this perpendicular point, but when displayed on the map the point seems to be in the wrong place, probably because of the projection.
What I want on a map:
And what I have instead on the map:
How can I take into account the projection so that the point on the map appears perpendicular to the line? The algorithm below to compute the point comes from here: https://math.stackexchange.com/questions/93424/calculate-rectangle-coordinates-from-line-and-height
public static Coords ComputePerpendicularPoint(Coords first, Coords last, double distance)
{
    double slope = -(last.Lon.Value - first.Lon.Value) / (last.Lat.Value - first.Lat.Value);
    // number of km per degree = ~111km (111.32 in google maps, but range varies
    // between 110.567km at the equator and 111.699km at the poles)
    // 1km in degree = 1 / 111.32km = 0.0089
    // 1m in degree = 0.0089 / 1000 = 0.0000089
    distance = distance * 0.0000089 / 100; // 0.0000089 => represents around 1m in wgs84. /100 because distance is in cm
    double t = distance / Math.Sqrt(1 + (slope * slope));
    Coords perp_coord = new Coords();
    perp_coord.Lon = first.Lon + t;
    perp_coord.Lat = first.Lat + (t * slope);
    return perp_coord;
}
Thank you in advance!
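One thing worth checking, offered as a hedged note: in WGS84, a degree of longitude spans roughly cos(latitude) times the ground distance of a degree of latitude, so treating lon/lat as an isotropic x/y plane skews perpendiculars everywhere except near the equator. A sketch of one way to compensate (same Coords shape as above; the cos(lat) correction is an assumption about the failure, not a verified fix):
public static Coords ComputePerpendicularPointScaled(Coords first, Coords last, double distanceCm)
{
    // Scale longitudes by cos(latitude) so one unit covers the same ground
    // distance on both axes, compute the perpendicular there, then scale back.
    double cosLat = Math.Cos(first.Lat.Value * Math.PI / 180.0);
    double dLon = (last.Lon.Value - first.Lon.Value) * cosLat; // east-west, in latitude-equivalent degrees
    double dLat = last.Lat.Value - first.Lat.Value;
    double slope = -dLon / dLat;                               // perpendicular slope in the scaled plane
    double distanceDeg = distanceCm * 0.0000089 / 100.0;       // cm -> degrees, as in the original code
    double t = distanceDeg / Math.Sqrt(1 + slope * slope);
    Coords perp = new Coords();
    perp.Lon = first.Lon + t / cosLat;                         // undo the scaling on the longitude step
    perp.Lat = first.Lat + t * slope;
    return perp;
}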

How to find perpendicular line of given 2 point?

I am trying to draw a perpendicular line on a canvas in Flutter. After some review of my math, I failed to implement the formula in my code.
I have a point p1 and a point p2 to draw a straight line,
but then I need to draw a perpendicular line that passes through p3 (the expected result is like the blue line in the picture).
The first part of my code finds the line equation given the two points (p1, p2). Below is how I find m (the slope).
double x1=p1.x;
double x2=p2.x;
double y1=p1.y;
double y2=p2.y;
double m=(y2-y1)/(x2-x1);
To find the slope of the perpendicular line, I write the code below.
// this is expected to transform the previous m into its negative reciprocal
double invertedM = ( 1 / m ) * -1;
Then I have to find the new c (y-intercept) using my third point p3 to form a new line equation, and substitute y = 0 and y = screen_height to draw the perpendicular line that passes through p3.
double invertedC = p3.y / (invertedM * p3.x);
// get x given y = 0
qy1 = 0;
double findX1 = (qy1 - invertedC) / invertedM;
Point answerPoint1 = Point(findX1, qy1);
// get x given y = screenheight
qy2 = screenheight;
double findX2 = (qy2 - invertedC) / invertedM;
Point answerPoint2 = Point(findX2, qy2);
But somehow, although the resulting line is perpendicular, it doesn't pass through p3.
I think you've just over-complicated your algebra a little.
The slope m of the p1-p2 line is given by:
m = (y2-y1)/(x2-x1)
Then the equation of the line perpendicular to p1-p2 passing through p3 is:
(y-y3)/(x-x3) = -1/m
Rearranging gives:
x = (y3-y)*m + x3
Therefore:
double findX1 = (p3.y-qy1)*m + p3.x;
double findX2 = (p3.y-qy2)*m + p3.x;
where qy1 = 0, qy2 = screenHeight, as in your code.
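A self-contained sketch of that corrected computation (in C# rather than Dart; the names mirror the question's, and it assumes p1.x != p2.x so m is finite):
static (double findX1, double findX2) PerpendicularEndpoints(
    double x1, double y1, double x2, double y2,
    double x3, double y3, double screenHeight)
{
    double m = (y2 - y1) / (x2 - x1);             // slope of the p1-p2 line
    // Perpendicular through p3: (y - y3)/(x - x3) = -1/m  =>  x = (y3 - y)*m + x3
    double findX1 = (y3 - 0) * m + x3;            // x where y = 0
    double findX2 = (y3 - screenHeight) * m + x3; // x where y = screenHeight
    return (findX1, findX2);
}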
What happens when p3 is such that findX1 and/or findX2 are less than zero or greater than screenWidth? Do you then want to snap the line to the left or right?

Distance from point to ellipse

I need to somehow compute the distance between a point and an Ellipse.
I describe the Ellipse in my program as coordinates x = a cos phi and y = b sin phi (where a,b are constants and phi the changing angle).
I want to compute the shortest distance between a point P and my ellipse.
My thought was to calculate the vector from the center of my ellipse to the point P, then find the vector that starts from the center and reaches the edge of the ellipse in the direction of point P, and finally subtract both vectors to get the distance (this may not give the shortest distance, but it's still fine for what I need).
The problem is I don't know how to compute the second vector.
Does someone have a better idea, or can you tell me how I can find the second vector?
Thanks in advance!
EDIT1:
ISSUE: COMPUTED ANGLE DOESN'T SEEM TO GIVE THE RIGHT POINT ON THE ELLIPSE
Following the suggestion of MARTIN R, I get this result:
The white part shows how the program calculates the distance. I compute the angle phi using the vector from the center P of the ellipse to the center of the body. But when I use that angle in the equation of my ellipse to get the point that should lie on the ellipse AND have the same direction as the first calculated vector (if we consider that point as a vector), it actually gives the "delayed" vector shown above.
What could the problem be? I cannot really understand this behavior (could it have something to do with atan2?).
EDIT2:
I also show that in the other half of the ellipse it gives this result:
So we can see that the only cases where this works are phi = ±pi/2 and phi = ±pi.
IMPLEMENTATION FAILED
I tried using the implementation of MARTIN R, but I still get things wrong.
At first I thought it could be the center (which is not always the same), so I changed the implementation this way:
func pointOnEllipse(ellipse: Ellipse, p: CGPoint) -> CGPoint {
    let maxIterations = 10
    let eps = CGFloat(0.1/max(ellipse.a, ellipse.b))
    // Intersection of straight line from origin to p with ellipse
    // as the first approximation:
    var phi = atan2(ellipse.a*p.y, ellipse.b*p.x)
    // Newton iteration to find solution of
    // f(phi) := (a^2 − b^2) cos(phi) sin(phi) − x a sin(phi) + y b cos(phi) = 0:
    for _ in 0..<maxIterations {
        // function value and derivative at phi:
        let (c, s) = (cos(phi), sin(phi))
        let f = (ellipse.a*ellipse.a - ellipse.b*ellipse.b)*c*s - p.x*ellipse.a*s + p.y*ellipse.b*c - ellipse.center.x*ellipse.a*s + ellipse.center.y*ellipse.b*c
        // the derivative of f:
        let f1 = (ellipse.a*ellipse.a - ellipse.b*ellipse.b)*(c*c - s*s) - p.x*ellipse.a*c - p.y*ellipse.b*s - ellipse.center.x*ellipse.a*c - ellipse.center.y*ellipse.b*s
        let delta = f/f1
        phi = phi - delta
        if abs(delta) < eps { break }
    }
    return CGPoint(x: (ellipse.a * cos(phi)) + ellipse.center.x, y: (ellipse.b * sin(phi)) + ellipse.center.y)
}
We can see what happens here:
This is pretty strange: all points stay in that "quadrant". But I also noticed that when I move the green box far away from the ellipse, it seems to get the right vector for the distance.
What could it be?
END RESULT
Using the updated version from MARTIN R (with 3 iterations):
x = a cos(phi), y = b sin(phi) is an ellipse with the center at the origin, and the approach described in your question can be realized like this:
// Point on ellipse in the direction of `p`:
let phi = atan2(a*p.y, b*p.x)
let p2 = CGPoint(x: a * cos(phi), y: b * sin(phi))
// Vector from `p2` to `p`:
let v = CGVector(dx: p.x - p2.x, dy: p.y - p2.y)
// Length of `v`:
let distance = hypot(v.dx, v.dy)
You are right that this does not give the shortest distance of the point to the ellipse. That would require solving 4th-degree polynomial equations; see for example distance from given point to given ellipse or Calculating Distance of a Point from an Ellipse Border.
Here is a possible implementation of the algorithm
described in http://wwwf.imperial.ac.uk/~rn/distance2ellipse.pdf:
// From http://wwwf.imperial.ac.uk/~rn/distance2ellipse.pdf .
func pointOnEllipse(center: CGPoint, a: CGFloat, b: CGFloat, closestTo p: CGPoint) -> CGPoint {
    let maxIterations = 10
    let eps = CGFloat(0.1/max(a, b))
    let p1 = CGPoint(x: p.x - center.x, y: p.y - center.y)
    // Intersection of straight line from origin to p with ellipse
    // as the first approximation:
    var phi = atan2(a * p1.y, b * p1.x)
    // Newton iteration to find solution of
    // f(phi) := (a^2 − b^2) cos(phi) sin(phi) − x a sin(phi) + y b cos(phi) = 0:
    for i in 0..<maxIterations {
        // function value and derivative at phi:
        let (c, s) = (cos(phi), sin(phi))
        let f = (a*a - b*b)*c*s - p1.x*a*s + p1.y*b*c
        let f1 = (a*a - b*b)*(c*c - s*s) - p1.x*a*c - p1.y*b*s
        let delta = f/f1
        phi = phi - delta
        print(i)
        if abs(delta) < eps { break }
    }
    return CGPoint(x: center.x + a * cos(phi), y: center.y + b * sin(phi))
}
You may have to adjust the maximum iterations and epsilon
according to your needs, but those values worked well for me.
For points outside of the ellipse, at most 3 iterations were required
to find a good approximation of the solution.
Using that you would calculate the distance as
let p2 = pointOnEllipse(center: center, a: a, b: b, closestTo: p)
let v = CGVector(dx: p.x - p2.x, dy: p.y - p2.y)
let distance = hypot(v.dx, v.dy)
Create a new coordinate system that transforms the ellipse into a circle (https://math.stackexchange.com/questions/79842/is-an-ellipse-a-circle-transformed-by-a-simple-formula), then find the distance from the point to the circle, and convert that distance back.
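A hedged sketch of that idea, with the caveat that scaling by 1/a and 1/b does not preserve Euclidean distance, so this gives an approximation rather than the true shortest distance (it is in fact the same first approximation Martin R's code starts from):
static double ApproxDistanceToEllipse(double a, double b, double px, double py)
{
    // Map to "circle space" (divide x by a, y by b); the ellipse becomes the unit circle.
    double cx = px / a, cy = py / b;
    double len = Math.Sqrt(cx * cx + cy * cy);  // assumes p is not the exact center
    // The closest point on the unit circle is (cx, cy)/len; map it back to ellipse space.
    double ex = a * cx / len, ey = b * cy / len;
    return Math.Sqrt((px - ex) * (px - ex) + (py - ey) * (py - ey));
}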
I wrote up an explanation using LaTeX so it could be more readable and just took some screenshots. The approach I am sharing uses a Newton-step-based optimization.
Note that for an ellipse with a smaller ratio between the major and minor axis lengths, you only need a couple of iterations, at most, to get pretty good accuracy. For smaller ratios you could probably even get away with just the initial guess's result, which is essentially what Martin R shows. But if your ellipses can be any shape, you may want to add some code to improve the approximation.
You have the ellipse center (a, b) and an arbitrary point P(Px, Py). The equation of the line defined by these two points looks like this:
(Y - Py) / (b - Py) = (X - Px) / (a - Px)
The other form you have is the ellipse itself. You need to find the (X, Y) points that are both on the ellipse and on the line between the center and the point. There will be two such points; calculate the distance of each from P and choose the smaller distance.
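A minimal sketch of that recipe for an axis-aligned ellipse (to avoid reusing (a, b) for the center, the center here is (cx, cy) and the semi-axes are a and b; these names are mine):
static double DistanceAlongCenterLine(double cx, double cy, double a, double b,
                                      double px, double py)
{
    double dx = px - cx, dy = py - cy;  // direction from the center to P
    // Points on the line are C + t*(dx, dy); substituting into
    // ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1 gives t = +-1/sqrt((dx/a)^2 + (dy/b)^2).
    double t = 1.0 / Math.Sqrt((dx / a) * (dx / a) + (dy / b) * (dy / b));
    // Of the two intersections (+t and -t), the one on P's side (+t) is always nearer to P.
    double ix = cx + t * dx, iy = cy + t * dy;
    return Math.Sqrt((px - ix) * (px - ix) + (py - iy) * (py - iy));
}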

Projection of circular region of interest onto rectangle [duplicate]

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1) I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code, in the style of the converting functions I posted?
2) I also see that my undistortion is imperfect on some lenses, presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinate mapping for those lenses? Again, more code than just mathematical formulae, please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link). Input: the original image, with fisheye distortion to fix.
(2) DESTINATION (original photo link). Output: the corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
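As a small hedged sketch (C# for consistency with the other code on this page), that correction is:
// R_u = f * tan(2 * asin(R_d / (2*f))); valid while R_d < 2f (the domain
// of asin) and the recovered angle stays below 90 degrees, where tan blows up.
static double UndistortRadius(double rd, double f)
{
    double theta = 2.0 * Math.Asin(rd / (2.0 * f));
    return f * Math.Tan(theta);
}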
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python
# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) < 10:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it.)
I got to this point through trial and error: I don't fundamentally grasp why this code works; explanations and improved accuracy are appreciated!
from math import sqrt, atan

def dist(x, y):
    return sqrt(x*x + y*y)

def correct_fisheye(src_size, dest_size, dx, dy, factor):
    """ returns a tuple of source coordinates (sx, sy)
        (note: values can be out of range) """
    # convert dx,dy to relative coordinates
    rx, ry = dx - (dest_size[0]/2), dy - (dest_size[1]/2)
    # calc theta
    r = dist(rx, ry) / (dist(src_size[0], src_size[1]) / factor)
    if 0 == r:
        theta = 1.0
    else:
        theta = atan(r) / r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2) + theta*rx, (src_size[1]/2) + theta*ry
    # done
    return (int(round(sx)), int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
(Example image: dead link.)
(And this is from the blog post, for comparison:)
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
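A hedged sketch of that expansion, truncated at two terms (A and B are lens-specific placeholders to be fitted, not known values):
// Polynomial radial model: Rout = Rin * (1 + A*Rin^2 + B*Rin^4).
// Tune A first, then B, e.g. against lines known to be straight in the scene.
static double PolynomialRemap(double rin, double A, double B)
{
    double r2 = rin * rin;
    return rin * (1.0 + A * r2 + B * r2 * r2);
}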
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
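In code, a hedged sketch of that inverse (same caveats as the formulas above):
// Rin/f = (Rout/f) / sqrt( q*(q+1)/2 ), where q = sqrt((Rout/f)^2 + 1)
static double DistortRadius(double rout, double f)
{
    double s = rout / f;
    double q = Math.Sqrt(s * s + 1.0);
    return f * s / Math.Sqrt(q * (q + 1.0) / 2.0);
}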
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin

def dist(x, y):
    return sqrt(x*x + y*y)

def fisheye_to_rectilinear(src_size, dest_size, sx, sy, crop_factor, zoom):
    """ returns a tuple of dest coordinates (dx, dy)
        (note: values can be out of range)
        crop_factor is the ratio of the sphere diameter to the diagonal of the source image """
    # convert sx,sy to relative coordinates
    rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
    r = dist(rx, ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0], src_size[1]) * crop_factor / pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin(r / (2 * f)) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2) + rx/r*nr, (dest_size[1]/2) + ry/r*nr
    # done
    return (int(round(dx)), int(round(dy)))
def fisheye_auto_zoom(src_size, dest_size, crop_factor):
    """ calculate zoom such that the left edge of the source image matches the left edge of the dest image """
    # Try to see what happens with zoom = 1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
I found this PDF file and I have verified that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves over features in the images that are supposed to be straight lines, and store the contour points for each of those curves. Now we can solve for the fisheye matrix as a minimization problem: minimize the curvature of those point sets, and that gives us the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars! Here is a fisheye GUI code using OpenCV for manual calibration.

Projectile Motion in Cocos2d iphone

I want to throw a ball with projectile motion. I have a monkey at the centre of the screen; in onTouchBegin I take the starting point of the touch, and in onTouchEnded I take the ending point. From the starting and ending points I compute the angle between them, like 30, 45, or 90 degrees.
This is the code with which I calculate the angle from the start point to the end point:
float angleRadians = atan2(startTouchPoint.x - touchPoint.x, startTouchPoint.y - touchPoint.y);
float angleDegrees = CC_RADIANS_TO_DEGREES(angleRadians);
float cocosAngle = -1 * angleDegrees;
Now I am using the projectile motion formula to throw the ball at the angle calculated above.
Inside the init method:
gravity = 9.8; // metres per second square
X = 0;
Y = 0;
V0 = 50; // meters per second -- elevation
VX0 = V0 * cos(angle); // meters per second
VY0 = V0 * sin(angle); // meters per second
gameTime = 0;
And in onTouchEnded I call the fire method, which throws the ball:
-(void)fire:(ccTime)dt
{
    CCLOG(@"Angle 1: %.2f", angle);
    gameTime += dt*6;
    // x = v0 * t * cos(angle)
    X = (V0 * gameTime * cos(angle))/2 + 120;
    // y = v0 * t * sin(angle) - 0.5 * g * t^2
    Y = (V0 * gameTime * sin(angle) - 0.5 * gravity * pow(gameTime, 2))/2 + 255;
    if (Y > 50)
    {
        sprite_webfire.position = ccp(X, Y);
        flag = true;
    }
    else
    {
        //angleValue += 15;
        angleValue = angle;
        angle = [self DegreesToRadians:angleValue];
        gravity = 9.8; // metres per second square
        X = 0;
        Y = 0;
        V0 = 50; // meters per second -- elevation
        VX0 = V0 * cos(angle); // meters per second
        VY0 = V0 * sin(angle); // meters per second
        gameTime = 0;
        // [self pauseSchedulerAndActions];
    }
    if (Y < 50)
    {
        [self unschedule:@selector(fire:)];
    }
    NSLog(@"ball (%lf,%lf), dt = %lf angle value %d", X, Y, dt, angleValue);
}
This code is working: it throws the ball with projectile motion. But I can't throw the ball where I want to, i.e. with respect to the angle from the start point to the end point.
I can throw it like the red mark, but I want to throw it like the blue mark with a swipe; it doesn't throw in the direction I swipe.
I am not certain what math you are using to do this; I find your description a bit confusing.
Generally, for projectile motion this is what you need to do:
Find out what the take-off angle is relative to the horizontal. Then, with whatever initial velocity you want the object to have, use trig to resolve that initial velocity into rectangular components.
For example:
If initial velocity was 10, the initial velocity in the y direction would be 10sin(angle), and in the x direction it would be 10cos(angle).
Then, to update the position of the sprite, you should use the kinematics equations: http://www.physicsclassroom.com/class/1dkin/u1l6c.cfm
First update velocities:
Velocity in the Y direction: V = v(initial) + gravity*(Delta-time)
Velocity in the X direction is constant unless you want to factor in some sort of resistance to make things a lot more complicated.
Then position y = oldPositionY + velocity(in Y direction)*(delta-time) + 1/2(gravity)(delta-time)^2,
and position x = oldPositionX + Xvelocity*(delta-time).
I have done some projectile motion stuff, and I have found you need to make gravity a large constant, something around 500 to make it look life-like. Let me know if this is confusing or you don't know how to implement it.
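A hedged sketch of that update loop in plain C# (not cocos2d; the names are illustrative, and the position is advanced with the pre-step velocity so the 1/2*g*dt^2 term integrates gravity exactly over the step):
// One physics step: advance position with the current velocity, then update it.
// gravity should be negative if y increases upward (and large, e.g. -500, for a game feel).
static void Step(ref double x, ref double y, double vx, ref double vy,
                 double gravity, double dt)
{
    x += vx * dt;                               // constant horizontal velocity
    y += vy * dt + 0.5 * gravity * dt * dt;     // y += v*dt + (1/2)g*dt^2
    vy += gravity * dt;                         // then v = v0 + g*dt
}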
I would suggest that you take a look at the following tutorial: http://www.raywenderlich.com/4756/how-to-make-a-catapult-shooting-game-with-cocos2d-and-box2d-part-1.
It shows you how to use a physics engine, so you don't need to do much of the math. All the 'bullets' in the tutorial are also moving with projectile motion.
I'll add a bit to what was already said (which was good). Firstly, you should not be wasting time computing any angles. Stick with vectors for your velocity. In other words, get the initial velocity vector from the touch start and end location, and that will be your (v0x, v0y). For example:
CGPoint initialVelocity = ccpSub(touchPoint, startTouchPoint);
float v0x = initialVelocity.x;
float v0y = initialVelocity.y;
If you wish to assign a different magnitude to the initial velocity vector, simply normalize it and then multiply it by a new magnitude.
CGPoint unitVelocity = ccpNormalize(initialVelocity);
float magnitude = 200; // or whatever you want it to be
CGPoint velocity = ccpMult(unitVelocity, magnitude);
Anyway, with this velocity set properly you can then use it in your position calculations as before, but without the added complexity of calculating the angles.
-(void) fire:(ccTime)dt
{
    .
    .
    gameTime += dt;
    // if x(t) = x0 + v0x*t, then dx = v0x*dt
    x += v0x*dt;
    // if y(t) = y0 + v0y*t - 0.5*g*t^2, then dy = v0y*dt - g*t*dt
    y += (v0y * dt - g*gameTime*dt);
    .
    .
}
Also you should not set v0 = 50. Calculate the velocity from the vector as I suggested.
Something important to consider is that you are calculating what the movement should be in a physical world based upon units of meters. The screen is operating in points, not meters, so you will probably have to apply a scaling factor to the new position (x,y) to get the look that you are going for.
Edit: my bad, I had to revisit my math in the position calculation. My differentials were a bit rusty.