How to get the distance between two points in the real world irrespective of a difference in height?

I am using a pair of DW1000 UWB sensors and am able to get an accurate distance between them.
How can I get rid of the (z1 - z2) term in the final distance? I.e. if both sensors are fixed at (x1, y1) and (x2, y2) respectively, how do I ensure that the reported distance stays constant even if I move the tags up or down?

You need to give the (z1 - z2) information to the anchor to calculate the horizontal distance, using Pythagoras' theorem. If z1 - z2 is unknown, you need more sensors.
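For example, a minimal sketch in Python, assuming the height difference dz = z1 - z2 is measured or otherwise known:
import math

def horizontal_distance(slant_range, dz):
    # slant_range: distance reported by the UWB pair; dz = z1 - z2.
    # Pythagoras: slant_range^2 = horizontal^2 + dz^2.
    return math.sqrt(max(slant_range**2 - dz**2, 0.0))

print(horizontal_distance(5.0, 3.0))  # 4.0 (a 3-4-5 triangle)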

Related

Calculating estimated location from distance to known locations

I am trying to see if there is a plugin or Node library that can estimate a lat/long based on an array of distances from known locations. The attached image shows three circles whose radii are the distances from the known locations. I would like to take this information and estimate a location from the combination. Originally I was thinking of just showing the intersection of the circles, but that does not cover cases where the circles do not intersect.
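One hedged sketch of an approach, in Python with SciPy: after projecting the lat/longs to a planar x/y (see the projection question below), fit the point whose distances to the anchors best match the measurements in a least-squares sense; this degrades gracefully when the circles don't intersect. The anchor positions and ranges here are made-up illustration values.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # known x/y locations
ranges = np.array([6.0, 7.0, 4.5])                         # measured distances

def residuals(p):
    # Mismatch between the distance from p to each anchor and the
    # measured range; least squares tolerates non-intersecting circles.
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=anchors.mean(axis=0)).x
print(estimate)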

Calculating the scores using MATLAB

I am working on calculating the scores for an air rifle paper target. I'm able to calculate the distance from the center of the image to the center of the bullet hole, in pixels.
Here's my code:
I = imread('Sample.jpg');
RGB = imresize(I,0.9);
imshow(RGB);
bw = im2bw(RGB,graythresh(getimage));
figure, imshow(bw);
bw2 = imfill(bw,'holes');
s = regionprops(bw2,'centroid');
centroids = cat(1,s.Centroid);
% Centroids are [x y] but size() returns [rows cols], so flip it to
% get the image center in the same order, then take the per-row distance.
imgCenter = fliplr(size(bw2)/2);
dist_from_center = sqrt(sum((centroids - imgCenter).^2, 2));
hold(imgca,'on');
plot(imgca,centroids(:,1),centroids(:,2),'r*');
hold(imgca,'off');
numberOfPixels = numel(I);
Number_Of_Pixel = numel(RGB);
This is the raw image with one bullet hole.
This is the result I am having.
This is the paper target I'm using to get the score.
Can anyone suggest how to calculate the score from this?
See my walkthrough of your problem in Python.
It's a very fun problem you have.
I assume you already have a way of getting the binary hole mask (since you gave us the image).
Some scores are wrong because of target-centering issues in the given image.
Given the hole mask, find the 2D shot center
I assume that the actual images would include several holes instead of one.
Shot locations are extracted by computing the local maxima of the distance transform of the binary hole image. Since the distance transform outputs, for each pixel, its distance to the nearest border, the centermost pixels of each hole show up as local maxima.
The local-maximum technique I used is to compute a maximum filter of the image with a given size (10 in my case) and keep the pixels where filtered == original.
You have to remove the 0-valued "maxima", but apart from that it's a nice trick to remember, since it works in N dimensions by using an N-dimensional maximum filter.
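A minimal SciPy sketch of that trick, with a toy mask standing in for the real hole mask:
import numpy as np
from scipy import ndimage

holes = np.zeros((60, 60), dtype=bool)   # toy binary hole mask
holes[10:20, 10:20] = True

dist = ndimage.distance_transform_edt(holes)
filtered = ndimage.maximum_filter(dist, size=10)
maxima = (dist == filtered) & (dist > 0)  # drop the 0-valued "maxima"
centers = np.argwhere(maxima)             # (row, col) shot-center candidates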
Given the 2D position of a shot center, compute the score
You need to transform your coordinate system from Cartesian (x, y) to polar (distance, angle).
Image from MathWorks to illustrate the math.
To use the center of the image as the reference point, offset each position by the image center vector.
Discarding the angle, your score is directly linked to the distance from center.
Your score is an integer that you need to compute based on the distance:
As I understand it, you score 10 if you are at distance 0, decreasing down to 0 points.
This means the scoring function is
border_space = 10  # px, distance between each circle; up to you to find it :)
score = 10 - (distance // border_space)  # integer division
with the added constraint that the score cannot be negative:
score = max(10 - (distance // border_space), 0)
Really do look through my IPython notebook; it's very visual.
Edit: regarding the distance conversion.
Your target-practice image is in pixels, but these pixel distances can be mapped to millimeters: you probably know your target's size in centimeters (it's regulation size, right?), so you can set up a conversion rate:
target_size_mm = 1000 # 1 meter = 1000 millimeters
target_size_px = 600 # to be measured for each image
px_to_mm_ratio = target_size_mm / target_size_px
object_size_px = 102 # any value you want to measure really
object_size_mm = object_size_px * px_to_mm_ratio
Every time you're thinking about a facet of your problem, ask yourself: "Is what I'm looking at in pixels or in millimeters?" Try to conceptually separate the code that uses pixels from the code that uses millimeters.
It is coding best practice to avoid these assumptions where you can, so that if you get a bunch of images from different cameras with different properties, you can convert everything to a common unit (millimeters) and treat the data uniformly afterwards.

Verify that camera calibration is still valid

How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates of 3. with the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from 4.
6. Calculate distances between the points from 2. and the points from 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I know how to do steps 1)-2) and step 6). Maybe someone can give a rough implementation of steps 2)-5). In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
Thanks!
If you have a single camera, you can simply follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For step 2, you can use the detectCheckerboardPoints function from MATLAB:
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming that you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) are the points from the first set of images, and imagePoints(:,:,:,2) are the points from the second set. The output contains M [x y] coordinates, where each coordinate represents a point at which square corners are detected on the checkerboard. The number of points the function returns depends on boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
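Since the question mentions possibly moving to OpenCV later, here is a rough Python/OpenCV equivalent of the corner detection (the file name and the 9x6 inner-corner pattern size are assumptions):
import cv2

img = cv2.imread('board.png')                 # hypothetical image file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))
if found:
    # Refine to sub-pixel accuracy, comparable to detectCheckerboardPoints.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)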
As you can see in the following image, the points are estimated relative to the first point, which covers your third step.
[The image is from this page at MATHWORKS.]
You can consider point 1 as the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between each pair of points in world coordinates, so it is just a matter of depth estimation.
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of corresponding points and perform an SVD to estimate the rigid transformation.
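A sketch of that SVD step in Python/NumPy (the Kabsch method), assuming A holds the world-CS points and B the corresponding camera-CS points as 3xN arrays:
import numpy as np

def rigid_transform(A, B):
    # Least-squares R, t with B ~ R @ A + t (Kabsch method).
    cA = A.mean(axis=1, keepdims=True)
    cB = B.mean(axis=1, keepdims=True)
    H = (A - cA) @ (B - cB).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # fix an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t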
But:
I would simply re-estimate the parameters of the camera and compare them with the initial parameters from time X. This is easier if you have saved the images that were used to calibrate the camera at time X. By repeating the calibration process with those images, you should get very similar results if the camera calibration is still valid.
Edit: why do you need the set of images used in the calibration process at time X?
You had a set of images to do the calibration the first time, right? To recalibrate the camera you would need a new set of images, but for checking the previous calibration you can reuse the previous ones. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used to evaluate the validity of the calibration, not to recalibrate the camera.
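A toy sketch of that comparison; the camera matrices and the 1% threshold below are invented for illustration:
import numpy as np

K_old = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # saved at time X
K_new = np.array([[803., 0., 322.], [0., 798., 241.], [0., 0., 1.]])  # re-estimated at time Y

# Relative change per intrinsic parameter; a large deviation suggests
# the old calibration is no longer valid.
rel_change = np.abs(K_new - K_old) / np.maximum(np.abs(K_old), 1e-9)
print("calibration still valid:", bool(rel_change.max() < 0.01))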

Project GPS coordinates to Euclidean space

There are a lot of similar questions, but I can't get a clear answer out of them. I want to represent latitude and longitude in a 2D space such that I can calculate distances when necessary.
There is the equirectangular approach, which can calculate distances, but it is not exactly what I want.
There is UTM, but it has many zones and letters, so a distance computation would have to take zone changes into account, which is not trivial.
I want a representation such that I can treat x, y as numbers in Euclidean space and apply the standard distance formula to them, without multiplying by the diameter of the Earth every time I need the distance between two points.
Is there anything in MATLAB that can convert lat/long to x, y in Euclidean space?
I am not a MATLAB specialist, but the answer is not limited to MATLAB. Generally in GIS, when you want to perform calculations in Euclidean space, you have to apply a 'projection' to the data. There are various types of projections, one of the most popular being the Transverse Mercator.
The common feature of such projections is that you can't precisely represent the whole world with them: a projection is based on a chosen meridian and is precise enough only up to some distance from it (e.g. the Gauss-Krueger projection is quite accurate within about +-500 km of its meridian).
You will always have to choose some kind of 'zone' or 'meridian', regardless of which projection you choose, because it is impossible to flatten a sphere onto a plane without some deformation (of distance, angle, or area).
So if you are working on a set of data located within one geographical area, you can simply transform (project) the data and treat it as a normal Euclidean 2D space.
But if you are processing data located around the whole world, you will have to cluster it properly and project each cluster using the proper zone.
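Since the answer is not limited to MATLAB, here is a Python sketch using pyproj; the UTM zone (33N) is an arbitrary choice, so pick the one covering your data:
from pyproj import Transformer

# WGS84 lat/long -> UTM zone 33N; always_xy makes the argument order (lon, lat).
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

x1, y1 = to_utm.transform(13.40, 52.52)  # (lon, lat) of point 1
x2, y2 = to_utm.transform(13.38, 52.50)
dist_m = ((x1 - x2)**2 + (y1 - y2)**2) ** 0.5  # plain Euclidean distance in meters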

How to find the distance between the only two points in an image produced by a grating-like substance?

I need to find the distance between the two points. I can find it manually with the pixel-to-cm converter in the Image Processing Toolbox, but I want code that detects the point positions in the image and calculates the distance.
More accurately, the image contains only three points: one in the middle and the other two approximately equally distanced from it.
There might be a better way than this, but I hacked something similar together last night.
Use bwboundaries to find the objects in the image (the contiguous regions in a black/white image).
The second returned matrix, L, is the same image but with the regions numbered. So for the first point, you want to isolate all the pixels related to it:
L2 = (L == 1);
Now find the center of mass of that region (object 1). Note that the weighted sums must be divided by the region's pixel count, not the image dimensions:
x1 = (1:size(L2,2)) * sum(L2,1)' / sum(L2(:));
y1 = (1:size(L2,1)) * sum(L2,2) / sum(L2(:));
Repeat that for all the regions in your image. You should have the center of mass of each point. I think that should do it for you, but I haven't tested it.
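For what it's worth, the same idea as a Python/SciPy sketch (not the answerer's code), with a toy image standing in for yours: label the regions, take each center of mass, then measure the distance in pixels.
import numpy as np
from scipy import ndimage

bw = np.zeros((50, 50), dtype=bool)  # toy stand-in for the thresholded image
bw[10:13, 10:13] = True
bw[30:33, 40:43] = True

labels, n = ndimage.label(bw)
centers = ndimage.center_of_mass(bw, labels, range(1, n + 1))
dist_px = np.linalg.norm(np.subtract(centers[0], centers[1]))
print(dist_px)  # pixels; multiply by your cm-per-pixel ratio for cm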