Best way to find the coordinates between two points? - coordinates

So I'm trying to find the location between point A & B. Point A will be my phone and point B will be a dot on a small object. Point A is a point in the middle of my phone with an X/Y axis. Using the front-facing camera on an iPhone, point B, attached to the small object, moves into the camera's view. Point B will never be more than a foot away from point A at any time.
Is it possible to determine the exact coordinates of point B in relation to point A's X/Y axis (within a millimeter or so)? Would a signal need to be sent between the two points, or would the camera be able to pick up point B and determine its coordinates on point A's X/Y axis?
I've attached an image below to hopefully explain what I am trying to describe a little better.

The coordinates of two points are A(-6, 0) and B(10, 4). Find the coordinates of the point C on the y-axis such that AC=BC.
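For completeness, a short worked solution, taking the unknown point as C = (0, c) on the y-axis:

```latex
\begin{align*}
AC = BC &\;\Longrightarrow\; AC^2 = BC^2 \\
(-6 - 0)^2 + (0 - c)^2 &= (10 - 0)^2 + (4 - c)^2 \\
36 + c^2 &= 116 - 8c + c^2 \\
8c = 80 &\;\Longrightarrow\; c = 10, \qquad C = (0, 10).
\end{align*}
```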

Related

Unity Rotate Sphere To Point Directly Upwards Based On Child Point

I've got a 3D sphere on which I've been able to plot a point using longitude and latitude, thanks to some work by another developer I found online. I think I understand what it's doing.
What I need to do now is rotate my planet so the point is always at the topmost point (i.e. the north pole), but I'm not sure how to do this. I'm probably missing some important fundamentals here, so I'm hoping the answer can assist in my future learning.
Here's an image showing what I have - the blue line comes from the longitude and latitude I have plotted, and I need to rotate the planet so that line is basically pointing directly upwards.
https://ibb.co/2y24FxS
If anyone is able to advise it'd be very much appreciated.
If I'm not mistaken, Unity uses a coordinate system where the y-axis points up.
If the point on your sphere were in the xy-plane, you'd just have to determine the angle between the radius vector (starts at the center of the sphere, ends at the point in question) and the y-axis, and then rotate by that amount around the z-axis, so that the radius vector becomes vertical. But your point is at an arbitrary location in 3D space - see the image below. So one way to go about it is to first bring the point into the xy-plane, then continue from there.
Calculate the radius vector, which is just r = x - sphereCenter. Make a copy of it and set its y component to zero, so that you have (x, 0, z) - which is just the projection of the vector r onto the horizontal xz-plane - let's call the copy rXZ.
Determine the signed angle between the x-axis and rXZ (use Vector3.SignedAngle(xAxis, rXZ, yAxis), see docs), and create a rotation matrix M1 that rotates the sphere in the opposite direction around the vertical (negate the angle). This should place your point in the xy-plane.
Now determine the angle between r and the y-axis (Vector3.SignedAngle(r, yAxis, zAxis)), and create a new rotation matrix M2 that rotates by that angle around the zAxis. (I think for this second one, the simpler Vector3.Angle will work as well.)
So, what you want now is to combine the two matrices (by multiplying them) into a single transform (I'm assuming this is a transformation in the local coordinate system of the sphere, where (0, 0, 0) is the sphere's center). If I'm not mistaken, Unity uses column-major matrices, so the multiplication order should be M = M2 * M1 (the rightmost matrix is applied first).
Reorient your globe using M as a local transform, and it should bring your point to the top. You can also create M3 = M1.inverse, and then do M = M3 * M2 * M1, to preserve the original angular offset from the xy-plane.
Check for edge cases, such as r already being vertical (pointing straight up, or straight down).
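To make the steps above concrete, here is a minimal numpy sketch of the same two-step rotation. It is language-agnostic math rather than actual Unity code, so in Unity you would build the equivalent with Vector3.SignedAngle and Quaternion.AngleAxis / Matrix4x4 instead:

```python
import numpy as np

def rot_y(a):  # right-handed rotation by angle a about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):  # right-handed rotation by angle a about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def matrix_bringing_point_to_top(point, sphere_center):
    r = np.asarray(point, float) - np.asarray(sphere_center, float)
    # Step 1: rotate around the vertical so the point lands in the xy-plane.
    M1 = rot_y(np.arctan2(r[2], r[0]))
    r1 = M1 @ r                              # now r1 = (sqrt(x^2 + z^2), y, 0)
    # Step 2: rotate around z so the point lines up with the +y axis.
    M2 = rot_z(np.arctan2(r1[0], r1[1]))
    return M2 @ M1                           # rightmost matrix is applied first

# Tiny check: the chosen point should end up straight above the center.
M = matrix_bringing_point_to_top([0.5, 0.5, 0.707], [0.0, 0.0, 0.0])
print(M @ np.array([0.5, 0.5, 0.707]))      # ~ (0, 1, 0) up to rounding
```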

Measuring objects in a photo taken by calibrated cameras, knowing the size of a reference object in the photo

I am writing a program that captures real time images from a scene by two calibrated cameras (so the internal parameters of the cameras are known to us). Using two view geometry, I can find the essential matrix and use OpenCV or MATLAB to find the relative position and orientation of one camera with respect to another. Having the essential matrix, it is shown in Hartley and Zisserman's Multiple View Geometry that one can reconstruct the scene using triangulation up to scale. Now I want to use a reference length to determine the scale of reconstruction and resolve ambiguity.
I know the height of the front wall and I want to use it for determining the scale of reconstruction to measure other objects and their dimensions or their distance from the center of my first camera. How can it be done in practice?
Thanks in advance.
Edit: To add more information, I have already done linear triangulation (minimizing the algebraic error), but I am not sure how useful it is because there is still a scale ambiguity that I don't know how to get rid of. My ultimate goal is to recognize an object (like a Pepsi can) and separate it into a rectangular area (which is going to be written as a separate module by someone else) and then find the distance of each pixel in this rectangular area, i.e. the region of interest, to the camera. Then the distance from the camera to the object will be the minimum of the distances from the camera to the 3D coordinates of the pixels in the region of interest.
Might be a bit late, but it may at least help someone struggling with the same stuff.
As far as I remember, it is actually a linear problem. You've got the essential matrix, which gives you the rotation matrix and a normalized translation vector specifying the relative position of the cameras. If you followed Hartley and Zisserman, you probably chose one of the cameras as the origin of the world coordinate system, meaning all your triangulated points are at a normalized distance from this origin. What is important is that the direction of every triangulated point is correct.
If you have some reference in the scene (let's say the height of the wall), then you just have to find this reference (2 points are enough - e.g. the opposite ends of the wall) and calculate a "normalization coefficient" (sorry for the terminology) as
coeff = realWorldDistanceOf2Points / distanceOfTriangulatedPoints
Once you have this coeff, just multiply all your triangulated points by it and you get real-world points.
Example:
You know that opposite corners of the wall are 5 m from each other. You find these corners in both images, triangulate them (let's call the triangulated points c1 and c2), calculate their distance in the "normalized" world as ||c1 - c2||, and get the
coeff = 5 / ||c1 - c2||
and you get the real 3D world points as triangulatedPoint * coeff.
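If it helps, here is a rough Python/OpenCV sketch of that scaling step; the projection matrices and the corner pixel coordinates are placeholders for your own data:

```python
import numpy as np
import cv2

# P1, P2: 3x4 projection matrices of the two cameras (translation only up to scale).
# corner1_cam1, corner1_cam2, ...: pixel coordinates of the two wall corners
# as seen by camera 1 and camera 2.
def scale_from_reference(P1, P2, corner1_cam1, corner1_cam2,
                         corner2_cam1, corner2_cam2, real_distance_m=5.0):
    pts1 = np.float64([corner1_cam1, corner2_cam1]).T   # 2xN points in camera 1
    pts2 = np.float64([corner1_cam2, corner2_cam2]).T   # 2xN points in camera 2
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)       # 4xN homogeneous points
    X = (X[:3] / X[3]).T                                 # Nx3, "normalized" world
    c1, c2 = X[0], X[1]
    coeff = real_distance_m / np.linalg.norm(c1 - c2)
    return coeff        # multiply every triangulated point by this coefficient
```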
Maybe an easier option is to have both cameras in a fixed relative position and calibrate them together with the stereoCalibrate OpenCV/MATLAB function (there is actually a pretty nice GUI in MATLAB for that) - it returns not just the intrinsic params but also the extrinsics. But I don't know if this is your case.
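For reference, a minimal sketch of that fixed-rig alternative with cv2.stereoCalibrate (the chessboard detections and initial intrinsics are assumed to exist already):

```python
import cv2

def stereo_extrinsics(object_points, img_points1, img_points2,
                      K1, d1, K2, d2, image_size):
    """object_points are chessboard corners in real units (e.g. meters), so the
    returned R, T are metric - no separate reference length is needed."""
    ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        object_points, img_points1, img_points2,
        K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```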

Finding the tangent point of a circle, connected to another point on the plane

If I have a circle of radius x with known coordinates for the center of the circle, and another point P on the coordinate grid, how do I find the two points on the circle that create two tangent lines when connected to P?
I don't want a code sample; rather, I just need the steps to figure it out myself :)
(I already saw the other answers on other questions, but those don't go into detail about how it works)
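In case it helps a later reader, the standard construction (Thales' circle) can be written out as equations rather than code:

```latex
% Circle: center O, radius x.  External point: P.  Tangent point: T.
% OT is perpendicular to PT, so T lies on the circle with diameter OP (Thales).
\begin{align*}
M &= \tfrac{1}{2}(O + P) && \text{midpoint of } OP \\
|T - O| &= x, \qquad |T - M| = \tfrac{1}{2}\,|O - P| && \text{intersect these two circles for the two tangent points} \\
|P - T| &= \sqrt{\,|O - P|^2 - x^2\,} && \text{length of each tangent segment}
\end{align*}
```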

coordinates and heading direction relative to one point, rotating a map (MatLab)

I painted a scheme/diagram that makes it easier to understand my question. (The angles are 0°, 90°, 180°, ... for technical reasons - MS Paint won't rotate degree-wise, but my data does.) I need the B arrows relative to A, concerning the coordinates and the relative angle. I subtract A's coordinates from both A and B, so A always sits at (0,0) and B keeps its relative distance. How can I do the same with the angles of the arrows?
The data I have is situation 1,2 and 3.
I have A's and B's coordinates and directions. I need to translate/rotate/normalize/make egocentric - whatever the right word might be - to get to situations 4, 5 and 6 respectively. In the end, all data (4, 5, 6) pooled will look like 7, with the new coordinates and directions of B, because then A would always be in the center and heading up. I would believe that something like this is often used in other contexts, and I am hoping for an inline function or a hint/topic to search for.
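In case a sketch helps, here is one reading of that translate-then-rotate step in numpy; the heading convention (degrees counter-clockwise from the +x axis) is an assumption on my part, and in MATLAB the same thing is just a 2x2 rotation matrix applied to the shifted coordinates:

```python
import numpy as np

def relative_to_A(A_xy, A_heading_deg, B_xy, B_heading_deg):
    """B's position and heading in a frame where A sits at (0,0) and heads
    straight up (+y). Headings assumed to be degrees CCW from the +x axis."""
    a = np.radians(90.0 - A_heading_deg)          # rotation that turns A to face +y
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    B_rel = R @ (np.asarray(B_xy, float) - np.asarray(A_xy, float))
    B_heading_rel = (B_heading_deg + 90.0 - A_heading_deg) % 360.0
    return B_rel, B_heading_rel
```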

eye position mapping with the screen pixel

I am currently doing a project called eye controlled cursor using MATLAB.
I have a few stages before I extract the center of the iris (which can be considered the pupil location): face detection -> eye detection -> iris detection -> and finally I have obtained the center of the iris, as shown in the figure.
Now, I am trying to map this position (X, Y) to my computer screen pixels (1366 x 768). Most of the journals I have found require a reference point such as the lips, nose or an eye corner, but I am only able to extract the center of the iris by doing certain thresholding. How can I map this position (X, Y) to my computer screen pixels (1366 x 768)?
Well you either have to fix the head to a certain position (which isn't very practical) or you will have to adapt to the face position. Depending on your image, you will have to choose points that are always on that image and are easy to detect. If you just have one point (like the nose), you can only adjust for the x/y shift of your head. If you have more points (like the 4 corners of the eye, the nose, maybe the corners of the mouth), you can also extract the 3 rotational values of the head and therefore calculate the direction of sight much better. For a first approach, I guess only the two inner corners of the eye (they are "easy" to detect) will do.
I would also recommend using a calibration sequence: present the user with a sequence of 4 red points in the corners of the screen and have them look at each one. You can then record the positions of the pupils and interpolate between them.
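As a minimal sketch of that calibration idea (numpy rather than MATLAB, and the simplest possible mapping - a bilinear fit from the recorded pupil positions to the known screen corners):

```python
import numpy as np

# Screen corners for a 1366x768 display, in the order they are shown to the user.
SCREEN = np.array([[0, 0], [1365, 0], [0, 767], [1365, 767]], float)

def fit_mapping(pupil_at_corners):
    """pupil_at_corners: 4x2 pupil (X, Y) positions recorded while the user
    looked at the four screen corners, in the same order as SCREEN.
    Returns a function mapping a pupil position to a screen pixel."""
    P = np.asarray(pupil_at_corners, float)
    # Fit screen = [x, y, x*y, 1] @ W  (bilinear in the pupil coordinates).
    A = np.column_stack([P[:, 0], P[:, 1], P[:, 0] * P[:, 1], np.ones(4)])
    W = np.linalg.solve(A, SCREEN)        # 4 points, 4 coefficients: exact solve
    def to_screen(pupil_xy):
        x, y = pupil_xy
        return np.array([x, y, x * y, 1.0]) @ W
    return to_screen
```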