Estimating distance to a point using camera calibration - MATLAB

I want to estimate the distance from the camera to a point on the ground (i.e. Yw = 0), given the pixel coordinates of that point. For that I used camera calibration methods,
but the results are not meaningful.
I have the following calibration details:
-focal length x and y, principal point x and y, effective pixel size in meters, yaw and pitch angles, camera height, etc.
-I have entered the focal length, principal point, and translation vector in pixels for the calculation.
-I multiplied the image point by the camera matrix, and then by the rotation|translation matrix (R|t), to get the world point.
Is my procedure correct? What could be wrong?
Result:
image_point (x,y) = 400, 380 -> world_point z coordinate (distance) = 12.53
image_point (x,y) = 400, 180 -> world_point z coordinate (distance) = 5.93
Problem:
The z coordinate comes out as only a few pixels; since the effective pixel size is 10^-5 m, that means the z coordinate is << 1 m.
This is my MATLAB code:
%positive downward pitch
xR = 0.033;
yR = 0;
zR = pi;
%effective pixel size in meters = 10 ^-5 ; focal_length x & y = 0.012 m
% principal point x & y = 320 and 240
intrinsic_params =[1200,0,320;0,1200,240;0,0,1];
Rx=[1,0,0 ; 0,cos(xR),sin(xR); 0,-sin(xR),cos(xR)];
Ry=[cos(yR),0,-sin(yR) ; 0,1,0 ; sin(yR),0,cos(yR)];
Rz=[cos(zR),sin(zR),0 ; -sin(zR),cos(zR),0 ; 0,0,1];
R= Rx * Ry * Rz ;
% The camera is 1.17 m above the ground (1.17 / 1e-5 = 117000 pixels)
t=[0;117000;0];
extrinsic_params = horzcat(R,t);
% extrinsic_params is 3 *4 matrix
P = intrinsic_params * extrinsic_params; % P 3*4 matrix
% make it square ....
P_sq = [P; 0,0,0,1];
%image size is 640 x 480
%An arbitrary pixel (400,380) is entered as input
image_point = [400,380,0,1];
% world point will be in the form X Y Z 1
world_point = P_sq * image_point'

Your procedure is partly right, but it is going in the wrong direction.
See this link. Using your intrinsic and extrinsic calibration matrices you can find the pixel-space position of a real-world point, NOT the other way around. The exception is when your camera is stationary in the global frame and you know the Z position of the feature in global space.
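In other words, the pinhole model reads s*[u; v; 1] = K*[R|t]*[X; Y; Z; 1] with an unknown scale s, so a single pixel only pins down a ray through the camera center, not a 3D point; one extra constraint (here, the feature's known Z) is needed to fix the scale.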
Stationary camera, known feature Z case: (see also this link)
%% First we simulate a camera feature measurement
K = [0.5  0  320;
     0   0.5 240;
     0    0   1]; % Example intrinsics
R = rotx(0)*roty(0)*rotz(pi/4); % orientation of camera in global frame (NB: rotx/roty/rotz take degrees in the Phased Array System Toolbox)
c = [1; 1; 1]; %Pos camera in global frame
rwPt = [ 10; 10; 5]; %position of a feature in global frame
imPtH = K*R*(rwPt - c); %Homogeneous image point
imPt = imPtH(1:2)/imPtH(3) %Actual image point
%% Now we use the simulated image point imPt and the knowledge of the
% features Z coordinate to determine the features X and Y coordinates
%% First determine the scaling term lambda
imPtH2 = [imPt; 1];
z = R.' * inv(K) * imPtH2;
lambda = (rwPt(3)-c(3))/z(3);
%% Now the RW position of the feature is:
rwPt2 = c + lambda*R.' * inv(K) * imPtH2 % Reconstructed RW point
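Note that in the original question the known coordinate is the ground plane Yw = 0 rather than Z, so the same recipe applies with the scaling term taken from the second component: lambda = (0 - c(2))/z(2).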
Non-stationary camera case:
To find the real-world position or distance from the camera to a particular feature (given on the image plane) you have to employ some method of reconstructing the 3D data from the 2D image.
The two that come to mind immediately are OpenCV's solvePnP and stereo-vision depth estimation.
solvePnP requires 4 co-planar (in RW space) features to be available in the image, with their positions in RW space known. This may not sound useful since you need to know the RW positions of the features, but you can simply define the 4 features with a known offset rather than a position in the global frame - the result will then be the relative position of the camera in the frame the features are defined in. solvePnP gives very accurate pose estimation of the camera. See my example, and the MATLAB sketch below.
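A rough MATLAB equivalent of solvePnP is estimateWorldCameraPose from the Computer Vision Toolbox. A minimal sketch, assuming cameraParams was obtained beforehand (e.g. with the Camera Calibrator app); the four world points and the pixel coordinates below are made-up placeholders:
% Pose from 4 coplanar points with a known offset (hypothetical values)
worldPoints = [0 0 0; 0.2 0 0; 0.2 0.2 0; 0 0.2 0]; % a 20 cm square on the Z = 0 plane
imagePoints = [331 271; 410 274; 408 352; 328 350]; % detected pixel coordinates
[worldOrientation, worldLocation] = estimateWorldCameraPose( ...
    imagePoints, worldPoints, cameraParams);
% worldLocation is the camera position in the frame the features are
% defined in, so its norm is the camera-to-pattern distance
distToPattern = norm(worldLocation);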
Stereo-vision depth estimation requires the same feature to be found in two spatially separated images, and the transformation between the images in RW space must be known very precisely.
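Once the disparity of the matched feature between a rectified image pair is known, depth follows from the standard relation Z = f*B/d. A tiny sketch with assumed example values:
% Depth from disparity in a rectified stereo pair (assumed example values)
f = 1200;     % focal length in pixels
B = 0.12;     % baseline between the two cameras in meters
d = 35;       % disparity of the matched feature in pixels
Z = f * B / d % depth of the feature in meters (~4.1 m here)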
There may be other methods but these are the two I am familiar with.

Related

How can I make these points move each second, since the Euclidean distance between them is updated each time interval?

I deployed a number of random points with their speed and displacement. How can I make these points move each second,
so that the Euclidean distance between them is updated every time interval (second) based on the coordinates of the new positions?
NumNode=2;
ro=1000;
center=[0 0];
Initial_Direction = rand(NumNode, 1) * 2 * pi;
v = 15/3.6; % [m/s] velocity of node
theta_Node=2*pi*(rand(NumNode,1));
% let the Nodes deployed away from the center of circle network layout
g = 0.5 * ro + 0.5 * ro * rand(NumNode,1);
PosNode_x=center(1)+g.*cos(theta_Node); % Initial positions
PosNode_y=center(2)+g.*sin(theta_Node);
PosNode = [PosNode_x ,PosNode_y];
DX2 = [cos(Initial_Direction(:)) .* v,sin(Initial_Direction(:)) .* v]; % displacement of Node
PosNodeNew = PosNode + DX2; % New position of node without rotating
rotation2 = [cos(Initial_Direction), -sin(Initial_Direction); sin(Initial_Direction), cos(Initial_Direction)]; % Rotation matrix to make the node move along an arc (Initial_Direction is in radians, so cos/sin rather than cosd/sind)
% Index approach to multiply different matrix dimensions
index = 1:numel(PosNode);
xyRotated1 = rotation2;
xyRotated1(index) = rotation2(index).*PosNode(index); % element-wise product of rotation entries and node positions
index2 = 1:numel(DX2);
NewPosss = xyRotated1;
NewPosss(index2) = xyRotated1(index2) + DX2(index2);
gg = NewPosss(index2); % New position of node with rotation on arc
figure(1)
scatter(PosNode_x,PosNode_y,'r+')
figure(2)
scatter(PosNodeNew(:,1),PosNodeNew(:,2),'b*')
figure (3)
scatter (xyRotated1(:,1),xyRotated1(:,2),'r.')
axis equal
figure (4)
scatter(NewPosss(:,1),NewPosss(:,2),'b*')
axis equal
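For comparison, here is a minimal sketch of one common way to get per-second arc motion: keep a heading per node, turn the heading by a fixed angular step each interval, advance the node along the new heading, and recompute the Euclidean distance. The step dTheta and the 60-second horizon are assumed values:
% Per-second update sketch (dTheta and the time horizon are assumptions)
NumNode = 2;
v = 15/3.6;                        % speed [m/s]
dt = 1;                            % update interval [s]
dTheta = deg2rad(5);               % heading change per interval -> arc motion
heading = rand(NumNode,1)*2*pi;    % initial directions [rad]
pos = 500 + 500*rand(NumNode,2);   % initial positions away from the center
for k = 1:60                       % simulate 60 seconds
    heading = heading + dTheta;                    % rotate each heading
    pos = pos + dt*v*[cos(heading), sin(heading)]; % advance along heading
    distNodes = norm(pos(1,:) - pos(2,:));         % updated distance between the two nodes
    scatter(pos(:,1), pos(:,2), 'b*'); hold on
end
axis equal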

Finding the pixel displacement in fish eye images

I am trying to plot the displacement of a pixel from the original image to the fish eye image based on the radius from the center of the image.
I was successful in producing fish-eye images in MATLAB using maketform:
testImg = imread('ship.jpg');
optTra = maketform('custom',2,2,[],@radial,options);
newX = imtransform(testImg,optTra);
imshow(newX);
The radial function here produces the fish-eye distorted image.
I need to find the displacement of each pixel from the original image to the distorted image.
If the transformation applied (a.k.a. @radial) was angular, the inverse transformation is given by:
u = r cos(phi) + 0.5;
v = r sin(phi) + 0.5;
where
r = atan2(sqrt(x*x+y*y),p.z)/pi;
phi = atan2(y,x);
x,y are assumed to be normalized coordinates (centered and between -1 to 1).
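A minimal sketch of turning that inverse mapping into a per-pixel displacement map; the 640x480 image size and p.z = 1 are assumed placeholders:
% Per-pixel displacement implied by the inverse mapping above
% (assumptions: 640x480 image, p.z = 1)
W = 640; H = 480; pz = 1;
[uIdx, vIdx] = meshgrid(1:W, 1:H);  % pixel grid of the distorted image
x = 2*(uIdx - 0.5)/W - 1;           % normalize to [-1, 1], centered
y = 2*(vIdx - 0.5)/H - 1;
r   = atan2(sqrt(x.^2 + y.^2), pz)/pi;
phi = atan2(y, x);
u = r.*cos(phi) + 0.5;              % normalized source coordinates
v = r.*sin(phi) + 0.5;
dispX = u*W - uIdx;                 % displacement in pixels
dispY = v*H - vIdx;
dispMag = hypot(dispX, dispY);      % displacement magnitude per pixel
imagesc(dispMag); axis image; colorbar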

Transformation of camera calibration patterns

I use camera calibration in MATLAB to detect some checkerboard patterns, and then run:
figure; showExtrinsics(cameraParams, 'CameraCentric');
Now, I want to rotate the checkerboard patterns around the x-axis such that all of them have nearly the same y coordinates in the camera frame.
Method:
I get the positions of all patterns in the camera's frame. Then I run an optimization where the objective function is to minimize the variance in y, and the variable is the rotation about x, ranging from 0 to 360 degrees.
Problem:
But when I plot the transformed y-coordinates, they are not even nearly in a line.
Code:
Get the checkerboard points:
%% Get rotation and translation matrices for each image;
T_cw=cell(num_imgs,1); % stores camera to world rotation and translation for each image
pixel_coordinates=zeros(num_imgs,2); % stores the pixel coordinates of each checkerboard origin
for ii=1:num_imgs,
% Detect the checkerboard and compute extrinsics for this image
im=imread(list_imgs_path{ii});
[imagePoints, boardSize] = detectCheckerboardPoints(im);
[r_wc, t_wc] = extrinsics(imagePoints, worldPoints, cameraParams);
T_wc=[r_wc,t_wc';0 0 0 1];
% World to camera matrix
T_cw{ii} = inv(T_wc);
t_cw{ii}=T_cw{ii}(1:3,4); % x,y,z coordinates in camera's frame
end
Data(num_imgs=10):
t_cw
[-1072.01388542262;1312.20387622761;-1853.34408157349]
[-1052.07856598756;1269.03455126794;-1826.73576892251]
[-1091.85978641218;1351.08261414473;-1668.88197803184]
[-1337.56358084648;1373.78548638383;-1396.87603554914]
[-1555.19509876309;1261.60428874489;-1174.63047408086]
[-1592.39596647158;1066.82210015055;-1165.34417772659]
[-1523.84307918660;963.781819272748;-1207.27444716506]
[-1614.00792252030;893.962075837621;-1114.73528985018]
[-1781.83112607964;708.973204727939;-797.185326205240]
[-1781.83112607964;708.973204727939;-797.185326205240]
Main code (Optimization and transformation):
%% Get theta for rotation
f_obj = #(x)var_ycors(x,t_cw);
opt_theta = fminbnd(f_obj,0,360);
%% Plotting (rotate ycor and check to fix theta)
y_rotated=zeros(1,num_imgs);
for ii=1:num_imgs,
y_rotated(ii)=rotate_cor(opt_theta,t_cw{ii});
end
plot(1:numel(y_rotated),y_rotated);
function var_computed=var_ycors(theta,t_cw)
ycor=zeros(1,numel(t_cw));
for ii =1:numel(t_cw),
ycor(ii)=rotate_cor(theta,t_cw{ii});
end
var_computed=var(ycor);
end
function ycor=rotate_cor(theta,mat)
r_x=[1 0 0; 0 cosd(theta) -sind(theta); 0 sind(theta) cosd(theta)];
rotate_mat=mat'*r_x;
ycor=rotate_mat(2);
end
This is a clear eigenvector problem!
Take your centroids:
t_cw=[-1072.01388542262;1312.20387622761;-1853.34408157349
-1052.07856598756;1269.03455126794;-1826.73576892251
-1091.85978641218;1351.08261414473;-1668.88197803184
-1337.56358084648;1373.78548638383;-1396.87603554914
-1555.19509876309;1261.60428874489;-1174.63047408086
-1592.39596647158;1066.82210015055;-1165.34417772659
-1523.84307918660;963.781819272748;-1207.27444716506
-1614.00792252030;893.962075837621;-1114.73528985018
-1781.83112607964;708.973204727939;-797.185326205240
-1781.83112607964;708.973204727939;-797.185326205240];
t_cw=reshape(t_cw,[3,10])';
compute PCA on them, so we know the principal components:
[R]=pca(t_cw);
And... that's it! R is now the transformation matrix between your original points and the rotated coordinate system. As an example, I will draw the old points in red and the new ones in blue:
hold on
plot3(t_cw(:,1),t_cw(:,2),t_cw(:,3),'ro')
trans=t_cw*R;
plot3(trans(:,1),trans(:,2),trans(:,3),'bo')
You can see that now the blue ones are in a plane, with the best possible fit to the X direction. If you want them in Y direction, just rotate 90 degrees in Z (I am sure you can figure out how to do this with 2 minutes of Google ;) ).
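For completeness, a minimal sketch of that extra 90-degree rotation about Z, applied after the PCA projection:
% Rotate the PCA-aligned points 90 degrees about Z so the principal
% direction lines up with Y instead of X
Rz90 = [cosd(90) -sind(90) 0;
        sind(90)  cosd(90) 0;
        0         0        1];
trans_y = (t_cw*R) * Rz90;
plot3(trans_y(:,1), trans_y(:,2), trans_y(:,3), 'go')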
Note: this is mathematically the best possible fit. I know the points are not as "in a row" as one would like, but that is down to the data itself; the eigenvectors honestly give the best possible fit.

Find Position based on signal strength (intersection area between circles)

I'm trying to estimate a position based on the signal strength received from 4 Wi-Fi access points. I measure the signal strength from 4 access points located in the corners of a square room of 100 square meters (10x10 m). I recorded the signal strengths at a known position (x, y) = (9.5, 1.5) using an Android phone. Now I want to check how accurate a multilateration method can be under these circumstances.
Using MATLAB, I applied a formula to calculate distance using the signal strength. The following MATLAB function shows the application of the formula:
function [ d_vect ] = distance( RSS )
% Calculate distance from signal strength
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
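For reference, these constants appear to come from the free-space path-loss model, FSPL(dB) = 20*log10(d) + 20*log10(f) - 27.55 with d in meters and f in MHz; solving for d at f = 2400 MHz gives d = 10^((27.55 - 20*log10(2400) + |RSS|)/20), which is exactly what the function computes.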
The input RSS is a vector with the four signal strengths measured in the test point (x,y) = (9.5, 1.5). The RSS vector looks like this:
RSS =
-57.6000
-60.4000
-44.7000
-54.4000
and the resultant vector with all the estimated distances to each access points looks like this:
d_vect =
7.5386
10.4061
1.7072
5.2154
Now I want to estimate my position based on these distances and the access points' positions, in order to find the error between the estimated position and the known position (9.5, 1.5). I want to find the intersection area between four circles (in order to estimate a position), where each access point is the center of one circle and the estimated distance is its radius.
I want to find the grey area as shown in this image :
http://www.biologycorner.com/resources/venn4.gif
If you want an alternative way of estimating the location without estimating the intersection of circles you can use trilateration. It is a common technique in navigation (e.g. GPS) to estimate a position given a set of distance measurements.
Also, if you wanted the area because you also need an estimate of the uncertainty of the position, I would recommend solving the trilateration problem using least squares, which will easily give you an estimate of the parameters involved and an error propagation to yield an uncertainty of the location.
I found an answer that solves the question perfectly. It is explained in detail at this link:
https://gis.stackexchange.com/questions/40660/trilateration-algorithm-for-n-amount-of-points
I also developed some MATLAB code for the problem. Here it goes:
Estimate distances from the Access Points:
function [ d_vect ] = distance( RSS )
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
The trilateration function:
function [] = trilat( X, d, real1, real2 )
cla
circles(X(1), X(5), d(1), 'edgecolor', [0 0 0],'facecolor', 'none','linewidth',4); %AP1 - black (circles.m is a third-party File Exchange helper for drawing circles)
circles(X(2), X(6), d(2), 'edgecolor', [0 1 0],'facecolor', 'none','linewidth',4); %AP2 - green
circles(X(3), X(7), d(3), 'edgecolor', [0 1 1],'facecolor', 'none','linewidth',4); %AP3 - cyan
circles(X(4), X(8), d(4), 'edgecolor', [1 1 0],'facecolor', 'none','linewidth',4); %AP4 - yellow
axis([0 10 0 10])
hold on
tbl = table(X, d);
d = d.^2;
weights = d.^(-1);
weights = transpose(weights);
beta0 = [5, 5];
modelfun = @(b,X)(abs(b(1)-X(:,1)).^2+abs(b(2)-X(:,2)).^2).^(1/2);
mdl = fitnlm(tbl,modelfun,beta0, 'Weights', weights);
b = mdl.Coefficients{1:2,{'Estimate'}}
scatter(b(1), b(2), 70, [0 0 1], 'filled')
scatter(real1, real2, 70, [1 0 0], 'filled')
hold off
end
Where,
X: matrix with APs coordinates
d: distance estimation vector
real1: real position x
real2: real position y
If you have three sets of measurements with (x,y) coordinates of the location and the corresponding signal strength, such as:
m1 = (x1,y1,s1)
m2 = (x2,y2,s2)
m3 = (x3,y3,s3)
Then you can calculate distances between each of the point locations:
d12 = Sqrt((x1 - x2)^2 + (y1 - y2)^2)
d13 = Sqrt((x1 - x3)^2 + (y1 - y3)^2)
d23 = Sqrt((x2 - x3)^2 + (y2 - y3)^2)
Now consider that each signal strength measurement signifies an emitter for that signal, coming from a location somewhere at a distance. That distance would be a radius around the location where the signal strength was measured, because at this point one does not know the direction the signal came from. Also, the weaker the signal, the larger the radius: the signal strength measurement is inversely proportional to the radius. So, calculate the proportional, although not yet accurate, radii of our three points:
r1 = 1/s1
r2 = 1/s2
r3 = 1/s3
So now, for each point pair, set apart by their distance, we can calculate a constant (C) at which the radii from each location will just touch one another. For example, for the point pair 1 & 2:
Ca * r1 + Ca * r2 = d12
... solving for the constant Ca:
Ca = d12 / (r1 + r2)
... and we can do this for the other two pairs, as well.
Cb = d13 / (r1 + r3)
Cc = d23 / (r2 + r3)
All right... select the largest C constant, either Ca, Cb, or Cc. Then, use the parametric equation for a circle to find where the coordinates meet. I will explain.
The parametric equation for a circle is:
x = radius * Cos(theta)
y = radius * Sin(theta)
If Ca was the largest constant found, then you would compare points 1 & 2, such as:
Ca * r1 * Cos(theta1) == Ca * r2 * Cos(theta2) &&
Ca * r1 * Sin(theta1) == Ca * r2 * Sin(theta2)
... iterating theta1 and theta2 from 0 to 360 degrees, for both circles. In MATLAB you might write code like:
for theta1 = 0:359
    for theta2 = 0:359
        if abs(Ca*r1*cosd(theta1) - Ca*r2*cosd(theta2)) < 0.01 && ...
           abs(Ca*r1*sind(theta1) - Ca*r2*sind(theta2)) < 0.01
            fprintf('point is: (%g, %g)\n', Ca*r1*cosd(theta1), Ca*r1*sind(theta1))
        end
    end
end
Depending on what your tolerance was for a match, you wouldn't have to do too many iterations around the circumferences of each signal radius to determine an estimate for the location of the signal source.
So basically you need to intersect 4 circles. There can be many approaches to it, and there are two that will generate the exact intersection area.
The first approach is to start with one circle, intersect it with the second circle, then intersect the resulting area with the third circle, and so on; that is, at each step you know the current intersection area and you intersect it with a new circle. The intersection area will always be a region bounded by circular arcs, so to intersect it with a new circle you walk along the boundary of the area and check whether each bounding arc intersects the new circle. If it does, you keep only the part of the arc that lies inside the new circle, remember that you should continue with an arc from the new circle, and continue traversing the boundary until you find the next intersection.
Another approach, which seems to have worse time complexity (though with only 4 circles this will not matter), is to find all the pairwise intersection points of the circles and keep only those points that lie inside all the other circles. These points will be the corners of your area, and then it is rather easy to reconstruct the area; a sketch of this step follows below. After googling a bit, I have even found a live demo of this approach.
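A minimal sketch of that corner-finding step, assuming the circles are given as centers C (N-by-2) and radii r (N-by-1); the function name is made up:
% Hypothetical helper: pairwise circle intersections kept only if they
% lie inside (or on) every circle - these are the corners of the area
function pts = circleIntersectionCorners(C, r)
pts = [];
N = size(C,1);
for i = 1:N-1
    for j = i+1:N
        d = norm(C(j,:) - C(i,:));
        if d > r(i)+r(j) || d < abs(r(i)-r(j)), continue; end % no intersection
        a = (r(i)^2 - r(j)^2 + d^2)/(2*d);        % distance to the chord
        h = sqrt(max(r(i)^2 - a^2, 0));           % half-chord length
        mid  = C(i,:) + a*(C(j,:) - C(i,:))/d;    % chord midpoint
        perp = [C(i,2)-C(j,2), C(j,1)-C(i,1)]/d;  % unit perpendicular
        cand = [mid + h*perp; mid - h*perp];      % the two intersection points
        for k = 1:2
            if all(sqrt(sum((C - cand(k,:)).^2, 2)) <= r + 1e-9)
                pts = [pts; cand(k,:)]; %#ok<AGROW>
            end
        end
    end
end
end
With the four AP positions as centers and d_vect as the radii, the returned points bound the grey region shown in the linked image.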

create cylinders in 3D volumetric data

I'm trying to create a dataset of raw volumetric data consisting of geometrical shapes. The point is to use volume ray casting to project them in 2D, but first I want to create the volume manually.
The geometry consists of one cylinder in the middle of the volume, along the Z axis, and 2 smaller cylinders around the first one, derived from rotations around the axes.
Here is my function so far:
function cyl= createCylinders(a, b, c, rad1, h1, rad2, h2)
% a : data width
% b : data height
% c : data depth
% rad1: radius of the big center cylinder
% rad2: radius of the smaller cylinders
% h1: height of the big center cylinder
% h2: height of the smaller cylinders
[Y, X, Z] = meshgrid(1:a,1:b,1:c); % MATLAB stores in a different order, so X must be swapped with Y
centerX = a/2;
centerY = b/2;
centerZ = c/2;
theta = 0; %around y
fi = pi/4; %around x
% First cylinder
cyl = zeros(a,b,c);
% create for infinite height
R = sqrt((X-centerX).^2 + (Y-centerY).^2);
startZ = ceil(c/2) - floor(h1/2);
endZ = startZ + h1 - 1;
% then trim it to height = h1
temp = zeros(a,b,h1);
temp( R(:,:,startZ:endZ)<rad1 ) = 255;
cyl(:,:,startZ:endZ) = temp;
% Second cylinder
cyl2 = zeros(a,b,c);
A = (X-centerX)*cos(theta) + (Y-centerY)*sin(theta)*sin(fi) + (Z-centerZ)*cos(fi)*sin(theta);
B = (Y-centerY)*cos(fi) - (Z-centerZ)*sin(fi);
% create again for infinite height
R2 = sqrt(A.^2+B.^2);
cyl2(R2<rad2) = 255;
%then use 2 planes to trim outside of the limits
N = [ cos(fi)*sin(theta) -sin(fi) cos(fi)*cos(theta) ];
P = (rad2).*N + [ centerX centerY centerZ];
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
cyl2(T<0) = 0;
P = (rad2+h2).*N + [ centerX centerY centerZ];
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
cyl2(T>0) = 0;
% Third cylinder
% ...
cyl = cyl + cyl2;
cyl = uint8(round(cyl));
% ...
The concept is that the first cylinder is created and then "cut" according to the z-axis value to define its height. The other cylinder is created using the relation A^2 + B^2 = R^2, where A and B are rotated accordingly using the rotation matrices only around the x and y axes, using Ry(θ)Rx(φ) as described here.
Until now everything seems to be working: I have implemented (and tested) code to display the projection, and the cylinders have the correct rotation when they are not "trimmed" from infinite height.
I calculate N, which is the vector [0 0 1] (the z-axis) rotated in the same way as the cylinder. Then I find two points P at the distances where I want the cylinder's edges to be, and calculate the plane equations T from those points and the normal vector. Lastly, I trim according to those inequalities. Or at least that's what I think I'm doing, because after the trimming I usually don't get anything (every value is zero). The best I could get while experimenting was trimmed cylinders, but the top and bottom planes were not oriented correctly.
I would appreciate any help or corrections at my code, because I've been looking at the geometry equations and I can't find where the mistake is.
Edit:
This is a quick screenshot of the object I'm trying to create. NOTE that the cylinders are opaque in the volume data; all of the inside is considered homogeneous material.
I think instead of:
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
you should try the following at both places:
T = (X-P(1)) + (Y-P(2)) + (Z-P(3));
Multiplying by N was to account for the direction of the axis of the 2nd cylinder, which you have already done just above that step.