I am trying to see if there is a plugin or Node library that can estimate a lat/long from an array of distances to known locations. The attached image shows three circles whose radii are the distances from the known locations. I would like to combine this information to estimate a single location. Originally I was thinking of just showing the intersection of the circles, but that does not cover cases where the circles do not intersect.
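If nothing off the shelf fits, the underlying math is a small least-squares fit (multilateration) that still gives an estimate when the circles do not intersect. A minimal sketch of that math, written in MATLAB here with made-up anchor coordinates and ranges, and using a local planar approximation of lat/long rather than great-circle geometry:

% anchors as [lat lon] in degrees, ranges in metres (illustrative values)
anchors = [51.500 -0.120; 51.510 -0.100; 51.495 -0.090];
d = [900; 650; 800];

R = 6371000;                         % mean Earth radius in metres
lat0 = mean(anchors(:,1));           % reference latitude for the local plane
x = R * deg2rad(anchors(:,2)) * cosd(lat0);   % east coordinate (m)
y = R * deg2rad(anchors(:,1));                % north coordinate (m)

% linearise by subtracting the first circle equation from the others
A = 2 * [x(1) - x(2:end), y(1) - y(2:end)];
b = d(2:end).^2 - d(1)^2 - x(2:end).^2 + x(1)^2 - y(2:end).^2 + y(1)^2;
p = A \ b;                           % least-squares estimate [x; y]

estLat = rad2deg(p(2) / R);
estLon = rad2deg(p(1) / (R * cosd(lat0)));

The same linearisation carries over directly to JavaScript if a Node solution is needed.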
Essentially, I have a list of points that I know are all connected up, down, left, right, or diagonally. Given two points, I want to find the minimum number of points you would have to travel through to get from one to the other.
Update: ended up going with an A* (A star) algorithm
Link to matlab code of A star algorithm that I used: https://www.mathworks.com/matlabcentral/fileexchange/56877-a-astar-a-star-search-algorithm-easy-to-use
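For reference, on an unweighted grid with 8-connected moves, a plain breadth-first search gives the same minimum step count as A* without a heuristic; a minimal MATLAB sketch with an illustrative grid and endpoints:

grid = true(10, 10);               % true = walkable cell (illustrative map)
start = [1 1];  goal = [10 10];

dist = inf(size(grid));
dist(start(1), start(2)) = 0;
queue = start;                     % FIFO queue of [row col]
moves = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];

while ~isempty(queue)
    cur = queue(1,:);  queue(1,:) = [];
    if isequal(cur, goal), break; end
    for k = 1:size(moves, 1)
        nb = cur + moves(k,:);
        if all(nb >= 1) && nb(1) <= size(grid,1) && nb(2) <= size(grid,2) ...
                && grid(nb(1), nb(2)) && isinf(dist(nb(1), nb(2)))
            dist(nb(1), nb(2)) = dist(cur(1), cur(2)) + 1;
            queue(end+1,:) = nb;   %#ok<AGROW>
        end
    end
end
minMoves = dist(goal(1), goal(2)); % minimum number of steps between the two points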
I am writing a program that captures real time images from a scene by two calibrated cameras (so the internal parameters of the cameras are known to us). Using two view geometry, I can find the essential matrix and use OpenCV or MATLAB to find the relative position and orientation of one camera with respect to another. Having the essential matrix, it is shown in Hartley and Zisserman's Multiple View Geometry that one can reconstruct the scene using triangulation up to scale. Now I want to use a reference length to determine the scale of reconstruction and resolve ambiguity.
I know the height of the front wall and I want to use it for determining the scale of reconstruction to measure other objects and their dimensions or their distance from the center of my first camera. How can it be done in practice?
Thanks in advance.
Edit: To add more information, I have already done linear triangulation (minimizing the algebraic error), but I am not sure how useful it is because there is still a scale ambiguity that I don't know how to get rid of. My ultimate goal is to recognize an object (like a Pepsi can) and isolate it in a rectangular area (which is going to be written as a separate module by someone else), and then find the distance of each pixel in this rectangular area, i.e. the region of interest, to the camera. The distance from the camera to the object will then be the minimum of the distances from the camera to the 3D coordinates of the pixels in the region of interest.
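(For that last step, once the ROI pixels are triangulated at metric scale in the first camera's frame, the distance reduces to a minimum over point norms; a one-line sketch, assuming a 3-by-K point array P exists:)

% P : 3-by-K metric 3D points of the ROI pixels, in camera-1 coordinates (assumed to exist)
objDist = min(sqrt(sum(P.^2, 1)));   % smallest camera-to-point distance over the ROI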
Might be a bit late, but this may still help someone struggling with the same stuff.
As far as I remember, it is actually a linear problem. You have the essential matrix, which gives you the rotation matrix and a normalized translation vector specifying the relative position of the cameras. If you followed Hartley and Zisserman, you probably chose one of the cameras as the origin of the world coordinate system, meaning all your triangulated points are at a normalized distance from this origin. What is important is that the direction of every triangulated point is correct.
If you have some reference in the scene (let's say the height of the wall), then you just have to find this reference (2 points are enough - so the opposite ends of the wall) and calculate a "normalization coefficient" (sorry for the terminology) as
coeff = realWorldDistanceOf2Points / distanceOfTriangulatedPoints
Once you have this coeff, just multiply all your triangulated points by it and you get real-world points.
Example:
You know that opposite corners of the wall are 5 m from each other. You find these corners in both images, triangulate them (let's call the triangulated points c1 and c2), calculate their distance in the "normalized" world as ||c1 - c2||, and get
coeff = 5 / ||c1 - c2||
and you get the real 3D world points as triangulatedPoint * coeff.
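In MATLAB terms, a minimal sketch (the variable names c1, c2, pts, and t are illustrative, not taken from the question):

% c1, c2 : triangulated 3D positions of the two wall corners (normalized reconstruction)
% pts    : 3-by-N array of all triangulated points in the same reconstruction
realLength = 5;                       % known corner-to-corner distance in metres
coeff = realLength / norm(c1 - c2);   % normalization coefficient
ptsMetric = coeff * pts;              % points now expressed in metres
tMetric = coeff * t;                  % the baseline t from the essential matrix scales the same way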
Maybe an easier option is to have both cameras in a fixed relative position and calibrate them together with the stereoCalibrate OpenCV/MATLAB function (there is actually a pretty nice GUI in MATLAB for that) - it returns not just the intrinsic params, but also the extrinsic ones. But I don't know if this is your case.
I need some help with matlab and detected features. I'll start with the fact that I am not so good with matlab.
I am trying to automate the morphing process by combining feature point detection with the morph. I use CascadeObjectDetector to find the features of a human face in two images, and I get their strongest features and positions with Location. What I need is to sort the x,y coordinates of the detected features so that the coordinates of points in pic1 are matched with the nearest coordinates in pic2, because the main thing in morphing is to have pairs of corresponding points.
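One simple way to pair them is a greedy nearest-neighbour match; a sketch, assuming the detected points are stored as N-by-2 [x y] arrays pts1 and pts2 (the names are illustrative):

% pairwise distance matrix between the two point sets (implicit expansion, R2016b+)
D = sqrt((pts1(:,1) - pts2(:,1)').^2 + (pts1(:,2) - pts2(:,2)').^2);
pairs = zeros(size(pts1, 1), 2);        % [index into pts1, index into pts2]
for i = 1:size(pts1, 1)
    [~, j] = min(D(i, :));              % nearest still-unused point in pic2
    pairs(i, :) = [i, j];
    D(:, j) = inf;                      % mark it as used so no point is paired twice
end
pts2matched = pts2(pairs(:, 2), :);     % pts1(k,:) now corresponds to pts2matched(k,:)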
There are a lot of similar questions but I can't get a clear answer out of them. So, I want to represent latitude and longitude in a 2D space such that I can calculate the distances if necessary.
There is the equirectangular approach which can calculate the distances but this is not exactly what I want.
There is UTM, but it seems there are many zones and letters, so the distance calculation would have to take zone changes into account, which is not trivial.
I want a representation such that I can treat x,y as numbers in Euclidean space and apply the standard distance formula to them, without multiplying by the diameter of the Earth every time I need to calculate the distance between two points.
Is there anything in Matlab that can change lat/long to x,y in Euclidean space?
I am not a MATLAB specialist, but the answer is not limited to MATLAB. Generally in GIS, when you want to perform calculations in Euclidean space, you have to apply a 'projection' to the data. There are various types of projections, one of the most popular being the Transverse Mercator.
The common feature of such projections is that you can't precisely represent the whole world with one. The projection is based on a chosen meridian and is precise enough up to some distance from it (e.g. the Gauss-Krueger projection is quite accurate within roughly ±500 km of the meridian).
You will always have to choose some kind of 'zone' or 'meridian', regardless of which projection you pick, because it is impossible to transform a sphere into a plane without some deformation (be it of distance, angle, or area).
So if you are working on a set of data located within some limited geographical area, you can simply transform (project) the data and treat it as a normal Euclidean 2D space.
But if you are thinking of processing data located all around the world, you will have to cluster it properly and project each cluster using the appropriate zone.
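If the data stay within a limited area, a minimal sketch of such a local projection in MATLAB (plain formulas, no Mapping Toolbox; the coordinates are illustrative) could look like this:

R = 6371000;                          % mean Earth radius in metres
lat = [52.10; 52.35];                 % illustrative latitudes (degrees)
lon = [20.80; 21.15];                 % illustrative longitudes (degrees)
lat0 = mean(lat);  lon0 = mean(lon);  % local origin of the plane

x = R * deg2rad(lon - lon0) * cosd(lat0);   % east coordinate in metres
y = R * deg2rad(lat - lat0);                % north coordinate in metres

% once projected, the ordinary Euclidean formula applies
d = hypot(x(2) - x(1), y(2) - y(1));

For larger extents, the Mapping Toolbox has proper projection functions.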
I need to find the distance between two points. I can find the distance between them manually with the pixel-to-cm converter in the Image Processing Toolbox, but I want code that detects the point positions in the image and calculates the distance.
More accurately speaking, the image contains only three points: one in the middle and the other two approximately equally distanced from it...
There might be a better way than this, but I hacked something similar together last night.
Use bwboundaries to find the objects in the image (the contiguous regions in a black/white image).
The second returned matrix, L, is the same image but with the regions numbered. So for the first point, you want to isolate all the pixels belonging to it:
L2 = (L==1)
Now find the center of that region (for object 1).
x1 = (1:size(L2,2))*sum(L2,1)'/sum(L2(:));
y1 = (1:size(L2,1))*sum(L2,2)/sum(L2(:));
Repeat that for all the regions in your image. You should have the center of mass of each point. I think that should do it for you, but I haven't tested it.
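An alternative sketch that skips the manual centre-of-mass step: regionprops returns centroids directly, and the distances follow from the standard formula (the binary image bw and the pixel-to-cm factor are assumed here):

% bw : logical image where the three dots are white (assumed to exist)
stats = regionprops(bw, 'Centroid');       % Image Processing Toolbox
C = vertcat(stats.Centroid);               % N-by-2 [x y] centroids in pixels

pixelsPerCm = 10;                          % assumed calibration factor from the question's converter
n = size(C, 1);
for i = 1:n
    for j = i+1:n
        dcm = hypot(C(i,1) - C(j,1), C(i,2) - C(j,2)) / pixelsPerCm;
        fprintf('distance %d-%d: %.2f cm\n', i, j, dcm);
    end
end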