Finding the pixel displacement in fisheye images - MATLAB

I am trying to plot the displacement of a pixel from the original image to the fisheye image as a function of the radius from the center of the image.
I was successful in producing fisheye images in MATLAB using maketform:
testImg = imread('ship.jpg');
optTra = maketform('custom',2,2,[],@radial,options);
newX = imtransform(testImg,optTra);
imshow(newX);
The radial function here produces the fisheye-distorted image.
I need to find the displacement of each pixel of the original image to its position in the distorted image.

If the transformation applied (i.e. @radial) was angular, the inverse transformation is given by:
u = r*cos(phi) + 0.5;
v = r*sin(phi) + 0.5;
where
r = atan2(sqrt(x*x+y*y),p.z)/pi;
phi = atan2(y,x);
x,y are assumed to be normalized coordinates (centered, ranging from -1 to 1).
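For illustration, here is a minimal sketch of how one could turn these inverse-model equations into a per-pixel displacement map. Note that p.z = 1 and the 640x480 grid are assumptions, not values taken from your setup:
% Sketch: displacement magnitude of each pixel under the angular
% fisheye model above; pz = 1 stands in for the p.z term.
[xn, yn] = meshgrid(linspace(-1,1,640), linspace(-1,1,480));
pz  = 1;                                  % assumed value of p.z
r   = atan2(sqrt(xn.^2 + yn.^2), pz)/pi;  % fisheye radius
phi = atan2(yn, xn);                      % polar angle (unchanged)
u = r.*cos(phi) + 0.5;                    % distorted coords in [0,1]
v = r.*sin(phi) + 0.5;
% compare against the undistorted position mapped to the same range
du = u - (xn/2 + 0.5);
dv = v - (yn/2 + 0.5);
imagesc(hypot(du, dv)); axis image; colorbar
title('pixel displacement vs. radius from center')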

Related

Estimating an ellipse with a multi-variate Gaussian

In MATLAB, say I have the parameters for an ellipse:
(x,y) center
Minor axis radius
Major axis radius
Angle of rotation
Now, I want to generate random points that lie within that ellipse, approximately following a 2D Gaussian.
My attempt thus far is this:
num_samps = 100;
data = [randn(num_samps, 1)+x_center randn(num_samps, 1)+y_center];
This gives me a cluster of data that's approximately centered at the (x,y) center; however, if I draw the ellipse on top, some of the points might still fall outside it.
How do I enforce the axis constraints and the rotation?
Thanks.
My assumptions:
x_center = h
y_center = k
Minor axis radius = b
Major axis radius = a
Rotation angle = alpha
h = 0;
k = 0;
b = 5;
a = 10;
alpha = deg2rad(30);   % cos/sin expect radians
num_samps = 100;
data = [randn(num_samps, 1)+h randn(num_samps, 1)+k];
% check which points lie inside the rotated ellipse
chk = ((((data(:,1)-h).*cos(alpha) + (data(:,2)-k).*sin(alpha))./a).^2) + ...
    (((-(data(:,1)-h).*sin(alpha) + (data(:,2)-k).*cos(alpha))./b).^2) <= 1;
idx = find(chk == 0);
if ~isempty(idx)
    data(idx,:) = data(idx,:) - .5*ones(length(idx),2);
end
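For what it is worth, here is a minimal sketch of one way to enforce both axes and the rotation: draw from an axis-aligned Gaussian scaled to the ellipse, rotate by alpha, and reject any stragglers. The 2-sigma scaling is an arbitrary choice, not a rule:
% Sketch: Gaussian shaped like the ellipse, with rejection of outliers.
h = 0; k = 0; a = 10; b = 5; alpha = deg2rad(30);
num_samps = 100;
R = [cos(alpha) -sin(alpha); sin(alpha) cos(alpha)];  % rotation matrix
% scale so the ellipse is the 2-sigma contour (~86% of points inside)
pts = randn(num_samps,2) * diag([a b]/2) * R';
pts = pts + repmat([h k], num_samps, 1);
% back to the axis-aligned frame to test each point
rel = (pts - repmat([h k], num_samps, 1)) * R;
inside = (rel(:,1)/a).^2 + (rel(:,2)/b).^2 <= 1;
data = pts(inside,:);                                 % keep inner points
plot(data(:,1), data(:,2), '.'); axis equal
Rejection leaves you with fewer than num_samps points; redraw the rejected ones in a loop if you need an exact count.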

Make a circle inside a matrix in MATLAB

I want to make a circle inside a matrix. For example: make a matrix of some dimension, say ones(200,200), then select the x and y coordinates of a circle in it, change the color of those selected pixels, and then show the image using imshow(img), as shown in the picture. Is it possible?
OR
Can I change this plotting code so it draws the circle into an image instead?
radius = 5;
centerX = 20;
centerY = 30;
viscircles([centerX, centerY], radius);
axis square;
You can use meshgrid to create a grid of x and y coordinates and then use the equation of the circle to check whether each x/y pair lies within the circle. This yields a logical matrix which can be displayed as an image:
[x,y] = meshgrid(1:200, 1:200);
isinside = (x - centerX).^2 + (y - centerY).^2 <= radius^2;
imshow(isinside);
If you simply want the outline of the circle, you can apply a convolution to the resulting binary mask to shrink it, and then subtract the shrunken mask from the circle to yield only the outline
shrunk = ~conv2(double(~isinside), ones(3), 'same');
outline = isinside - shrunk;
imshow(outline)
If you have the Image Processing Toolbox, you can use bwperim to yield the binary outline
outline = bwperim(isinside);
imshow(outline);
Update
If you want to change the colors shown above, you can either invert outline and isinside before displaying
isinside = ~isinside;
outline = ~outline;
imshow(isinside)
imshow(outline)
Or you can invert the colormap
imshow(isinside)
colormap(gca, flipud(gray))

Estimating distance to a point using camera calibration

I want to estimate the distance (from the camera to a point on the ground, i.e. Yw = 0) given the pixel coordinates of that point. For that I used camera calibration methods,
but the results are not meaningful.
I have the following details for calibration:
- focal length x and y, principal point x and y, effective pixel size in meters, yaw and pitch angles, camera height, etc.
- I have entered the focal length, principal point, and translation vector in terms of pixels for the calculation.
- I have multiplied the image point by camera_matrix and then by the rotation-translation matrix [R|t] to get the world point.
Is my procedure correct? What could be wrong?
Result:
image_point(x,y) = 400,380
world_point z coordinate (distance) = 12.53
image_point(x,y) = 400,180
world_point z coordinate (distance) = 5.93
Problem:
I am getting very small values for the z coordinate;
that means the z coordinate is << 1 m (because the effective pixel size in meters = 10^-5).
This is my MATLAB code:
%positive downward pitch
xR = 0.033;
yR = 0;
zR = pi;
%effective pixel size in meters = 10 ^-5 ; focal_length x & y = 0.012 m
% principal point x & y = 320 and 240
intrinsic_params =[1200,0,320;0,1200,240;0,0,1];
Rx=[1,0,0 ; 0,cos(xR),sin(xR); 0,-sin(xR),cos(xR)];
Ry=[cos(yR),0,-sin(yR) ; 0,1,0 ; sin(yR),0,cos(yR)];
Rz=[cos(zR),sin(zR),0 ; -sin(zR),cos(zR),0 ; 0,0,1];
R= Rx * Ry * Rz ;
% The camera is 1.17m above the ground
t=[0;117000;0];
extrinsic_params = horzcat(R,t);
% extrinsic_params is 3 *4 matrix
P = intrinsic_params * extrinsic_params; % P 3*4 matrix
% make it square ....
P_sq = [P; 0,0,0,1];
%image size is 640 x 480
%An arbitrary pixel (400,380) is entered as input
image_point = [400,380,0,1];
% world point will be in the form X Y Z 1
world_point = P_sq * image_point'
Your procedure is roughly right; however, it is going in the wrong direction.
See this link. Using your intrinsic and extrinsic calibration matrix you can find the pixel-space position of a real-world vector, NOT the other way around. The exception to this is if your camera is stationary in the global frame and you have the Z position of the feature in the global space.
Stationary camera, known feature Z case: (see also this link)
%% First we simulate a camera feature measurement
K = [0.5 0 320;
0 0.5 240;
0 0 1]; % Example intrinsics
R = rotx(0)*roty(0)*rotz(pi/4); % orientation of camera in global frame (note: the toolbox rotx/roty/rotz expect degrees, so use radian-based versions or pass 45)
c = [1; 1; 1]; %Pos camera in global frame
rwPt = [ 10; 10; 5]; %position of a feature in global frame
imPtH = K*R*(rwPt - c); %Homogeneous image point
imPt = imPtH(1:2)/imPtH(3) %Actual image point
%% Now we use the simulated image point imPt and the knowledge of the
% features Z coordinate to determine the features X and Y coordinates
%% First determine the scaling term lambda
imPtH2 = [imPt; 1];
z = R.' * inv(K) * imPtH2;
lambda = (rwPt(3)-c(3))/z(3);
%% Now the RW position of the feature is:
rwPt2 = c + lambda*R.' * inv(K) * imPtH2 % Reconstructed RW point
Non-stationary camera case:
To find the real-world position or distance from the camera to a particular feature (given on the image plane) you have to employ some method of reconstructing the 3D data from the 2D image.
The two that come to mind immediately are OpenCV's solvePnP and stereo-vision depth estimation.
solvePnP requires 4 co-planar (in RW space) features to be available in the image, and the positions of the features in RW space known. This may not sound useful as you need to know the RW position of the features, but you can simply define the 4 features with a known offset rather than a position in the global frame - the result will be the relative position of the camera in the frame the features are defined in. solvePnP gives very accurate pose estimation of the camera. See my example.
Stereo-vision depth estimation requires the same feature to be found in two spatially-separated images, and the transformation between the images in RW space must be known very precisely.
There may be other methods but these are the two I am familiar with.
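For reference, a minimal sketch of the stereo relationship mentioned above, assuming a rectified pair; the focal length, baseline, and matched columns below are made-up values:
% Sketch: depth from disparity for a rectified stereo pair.
f = 1200;            % focal length in pixels (assumed)
B = 0.1;             % baseline between the cameras in meters (assumed)
xL = 400; xR = 376;  % matched feature columns in left/right images
disparity = xL - xR;
Z = f * B / disparity   % depth in meters (5 m for these numbers)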

sliding window on 3d volume matlab

I need to slide a window over a 3D volume. The sliding is only over one layer of the volume, i.e. for each x,y at one specific z.
This is what I want to do in a loop:
for each x,y,z, for example:
px =9; py =9; pz =12;
a = rand(50,50,50);
[x y z] = meshgrid(1:50,1:50,1:50);
r = 3;
%-------------loop starts here:
% creating a shaped window, for example sphere of radius r
inds = find((x-px).^2 + (y-py).^2 + (z-pz).^2 <= r.^2);
% getting the relevant indices, here, it is the sphere around px,py,pz
[i,j,k] = ind2sub(size(a),inds);
% adjust the center of the sphere to be at (0,0,0) instead of (px,py,pz)
adj_inds = bsxfun(@minus,[i,j,k],[px,py,pz]);
% Computing for each sphere some kind of median point
cx = sum(a(inds).*adj_inds(:,1))./sum(a(inds));
cy = sum(a(inds).*adj_inds(:,2))./sum(a(inds));
cz = sum(a(inds).*adj_inds(:,3))./sum(a(inds));
%Saving the result: the distance between the new point and the center of the sphere.
res(py,px) = sqrt(sum([cx,cy,cz].^2));
%-------------
Now, all of this should happen many, many times (~300,000); the loop takes ages, and convolution returns a 3D volume (for every x,y,z) while I need to perform this only for each (x,y) and a list of z's.
Help please...
Thanks
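One way to avoid the repeated find over the whole volume is to precompute the sphere's offset pattern once and shift it per center. A minimal sketch, with assumed loop bounds that keep the sphere inside the volume (indexing is in (row,col,slice) terms):
% Sketch: precompute the sphere offsets once, then index per center.
a = rand(50,50,50);
r = 3; pz = 12;
[dx,dy,dz] = ndgrid(-r:r, -r:r, -r:r);
m = dx.^2 + dy.^2 + dz.^2 <= r^2;
offs = [dx(m), dy(m), dz(m)];                   % Nx3 offsets around center
res = nan(50,50);
for px = 1+r : 50-r
    for py = 1+r : 50-r
        sub = offs + repmat([px,py,pz], size(offs,1), 1);
        w = a(sub2ind(size(a), sub(:,1), sub(:,2), sub(:,3)));
        c = sum(repmat(w,1,3).*offs, 1) ./ sum(w);  % weighted mean offset
        res(py,px) = norm(c);                   % distance from sphere center
    end
end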

How to measure the rotation of a image in MATLAB?

I have two images. One is the original, and the other is rotated.
Now, I need to discover the angle by which the image was rotated. So far, I have thought about finding the centroids of each color (as every image I will use has colored squares in it) and using them to determine how much the image was rotated, but I failed.
I'm using this to discover the centroids and the color in the higher square in the image:
i = rgb2gray(img);
bw = im2bw(i,0.01);
s = regionprops(bw,'Centroid');
centroids = cat(1, s.Centroid);
colors = impixel(img,centroids(1),centroids(2));
top = max(centroids);
topcolor = impixel(img,top(1),top(2));
You can detect the corners of one of the colored rectangles in both the image and its rotated version, and use them as control points to infer the transformation between the two images (as in image registration) using the CP2TFORM function. We can then compute the angle of rotation from the affine transformation matrix.
Here is example code:
%# read first image (indexed color image)
[I1 map1] = imread('http://i.stack.imgur.com/LwuW3.png');
%# constructed rotated image
deg = -15;
I2 = imrotate(I1, deg, 'bilinear', 'crop');
%# find blue rectangle
BW1 = (I1==2);
BW2 = imrotate(BW1, deg, 'bilinear', 'crop');
%# detect corners in both
p1 = corner(BW1, 'QualityLevel',0.5);
p2 = corner(BW2, 'QualityLevel',0.5);
%# sort corners coordinates in a consistent way (counter-clockwise)
p1 = sortrows(p1,[2 1]);
p2 = sortrows(p2,[2 1]);
idx = convhull(p1(:,1), p1(:,2)); p1 = p1(idx(1:end-1),:);
idx = convhull(p2(:,1), p2(:,2)); p2 = p2(idx(1:end-1),:);
%# make sure we have the same number of corner points
sz = min(size(p1,1),size(p2,1));
p1 = p1(1:sz,:); p2 = p2(1:sz,:);
%# infer transformation from corner points
t = cp2tform(p2,p1,'nonreflective similarity'); %# 'affine'
%# rotate image to match the other
II2 = imtransform(I2, t, 'XData',[1 size(I1,2)], 'YData',[1 size(I1,1)]);
%# recover affine transformation params (translation, rotation, scale)
ss = t.tdata.Tinv(2,1);
sc = t.tdata.Tinv(1,1);
tx = t.tdata.Tinv(3,1);
ty = t.tdata.Tinv(3,2);
translation = [tx ty];
scale = sqrt(ss*ss + sc*sc);
rotation = atan2(ss,sc)*180/pi;
%# plot the results
subplot(311), imshow(I1,map1), title('I1')
hold on, plot(p1(:,1),p1(:,2),'go')
subplot(312), imshow(I2,map1), title('I2')
hold on, plot(p2(:,1),p2(:,2),'go')
subplot(313), imshow(II2,map1)
title(sprintf('recovered angle = %g',rotation))
If you can identify a color corresponding to only one component it is easier to:
1. Calculate the centroids for each image
2. Calculate the mean of the centroids (in x and y) for each image. This is the "center" of each image
3. Get the red component color centroid (in your example) for each image
4. Subtract the mean of the centroids for each image from the red component color centroid for each image
5. Calculate the ArcTan2 for each of the vectors calculated in 4), and subtract the angles. That is your result.
If you have more than one figure of each color, you need to calculate all possible combinations for the rotation and then select the one that is compatible with the other possible rotations.
I could post the code in Mathematica, if you think it is useful.
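If it helps, here is a rough MATLAB sketch of steps 1-5, assuming an RGB image (as in the question's snippet) with a single red region; the "most red" test is a crude assumption:
% Sketch of the centroid-difference idea (single red region assumed).
function ang = centroid_angle(img)
bw = im2bw(rgb2gray(img), 0.01);
s = regionprops(bw, 'Centroid');
cents = cat(1, s.Centroid);                 % step 1: all centroids
ctr = mean(cents, 1);                       % step 2: their mean
px = impixel(img, cents(:,1), cents(:,2));  % colors at each centroid
[~, iRed] = max(px(:,1) - max(px(:,2:3), [], 2));  % step 3: "most red"
v = cents(iRed,:) - ctr;                    % step 4: red minus mean
ang = atan2(v(2), v(1));                    % step 5: its angle
% rotation = (centroid_angle(rotated) - centroid_angle(original))*180/pi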
I would take a variant of the above-mentioned approach:
% Crude binarization to knock out the background and retain foreground
% features. Note that one loses the cube in the middle
im = im > 1;
Then I would get the 2D autocorrelation:
acf = normxcorr2(im, im);
From this result one can easily detect the peaks, and since rotation carries over into the autocorrelation function (ACF) domain, one can determine the rotation by matching the peaks between the original ACF and the ACF of the rotated image, for example using the so-called Hungarian algorithm.
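As a toy illustration of this idea with two synthetic squares (the exclusion radius is an assumption, and the result is only defined up to the ACF's 180-degree ambiguity):
% Sketch: compare the strongest off-center peak angles of the two ACFs.
im1 = zeros(100); im1(20:30,20:30) = 1; im1(60:75,55:70) = 1;
im2 = imrotate(im1, 25, 'bilinear', 'crop');
acf1 = normxcorr2(im1, im1);
acf2 = normxcorr2(im2, im2);
ctr = (size(acf1) + 1)/2;                       % zero-lag position
[rr,cc] = ndgrid(1:size(acf1,1), 1:size(acf1,2));
far = (rr-ctr(1)).^2 + (cc-ctr(2)).^2 > 20^2;   % skip the central lobe
[~,i1] = max(acf1(:).*far(:)); [r1,c1] = ind2sub(size(acf1), i1);
[~,i2] = max(acf2(:).*far(:)); [r2,c2] = ind2sub(size(acf2), i2);
ang = (atan2(r2-ctr(1), c2-ctr(2)) - atan2(r1-ctr(1), c1-ctr(2)))*180/pi;
ang = mod(ang + 90, 180) - 90                   % estimate, up to 180-deg flips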