Rectify a photo in MATLAB

I'm working with some road photos and I have to transform the captured perspective. My goal is to transform the road into a square. My photos look similar to this one:
However, I need the road to take the shape of a square, not a triangle or trapezoid, so I can work on it.
I have the angles Pitch, Yaw and Roll from my photo.
I'm trying this:
roll_x = 178.517332325424462;
pitch_y = -0.829084891331837937;
pan_z = -0.668910403057581759;
%% Transform perspective
img = imread('imageLoaded.png');
R_rot = R_z(pan_z)*R_y(pitch_y)*R_x(roll_x);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
tform = affine2d(R_2d);
outputImage = imwarp(img,tform);
figure(1);
imshow(outputImage);
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
R = [cosd(psi) -sind(psi) 0;
sind(psi) cosd(psi) 0;
0 0 1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
R = [cosd(theta) 0 sind(theta);
0 1 0 ;
-sind(theta) 0 cosd(theta) ];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
R = [1 0 0;
0 cosd(phi) -sind(phi);
0 sind(phi) cosd(phi)];
end
but I'm rotating the image, not transforming it. How can I achieve my goal?

MATLAB has fitgeotrans, a function that fits a geometric transform to pairs of points you define manually. You could use it directly, or use it to figure out what is wrong with your current transform.
That said, an affine transform is not what you need. From the last page of this:
An affine transform maintains parallelism, which you can't have when transforming a trapezoid-shaped road into a square. You need a projective transform (a homography).
See this StackOverflow answer on how to do it.
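If you go the point-based route, a minimal sketch might look like the following. The corner coordinates below are placeholders; you would replace them with the four road corners picked from your own photo (e.g. with ginput), and the 500-by-500 output square is arbitrary.
img = imread('imageLoaded.png');
% Four road corners in the photo (placeholder values), ordered
% bottom-left, bottom-right, top-right, top-left:
movingPoints = [310 720; 690 720; 560 420; 440 420];
% Where those corners should land: the corners of a 500x500 square.
fixedPoints  = [1 500; 500 500; 500 1; 1 1];
tform = fitgeotrans(movingPoints, fixedPoints, 'projective');
outputImage = imwarp(img, tform, 'OutputView', imref2d([500 500]));
figure(1);
imshow(outputImage);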

Related

How to apply a function for each of the various entries from different matrices as index?

I'm writing a function in MATLAB which mimics the built-in imwarp function (applying a geometric transformation) without using any kind of loops. I'm at the final step, where I have to call my function for bilinear interpolation for every index in the final 2D image.
I have 3 arrays here: 'pts' holds homogeneous vectors (x,y,1) at which I interpolate, and 'rows' and 'cols' hold the x and y coordinates, respectively, for the resultant image where the interpolated intensity value should be placed.
finalImage (rows(1,:),cols(1,:))=bilinear(pts(:,:),im);
Kindly correct my syntax here to do it properly. Thanks in advance.
The following is a simple implementation of applying an affine transformation to an image. Some of the matrices may be reversed because I did this from memory. I don't know exactly how you are formatting your pts array, so I figure a working example is the best I can do. Note that interp2 performs bilinear interpolation, whereas the bilinear function implements the bilinear transform, which maps analog filter designs to digital ones; that is not what you want.
P.S. You have to make sure to use the inverse transform when applying image warping (that is, define the point you want to sample in the input image for each point in the output image). If you perform the forward transform (i.e. define the point in the output image that each point in the input image maps to) then you will end up with some serious aliasing effects and potentially holes in the output image.
Hope this helps. Let me know if you have questions.
img = double(imread('rice.png'))/255;
theta = 30; % rotate 30 degrees
R = [cosd(theta) -sind(theta) 0; ...
sind(theta) cosd(theta) 0; ...
0 0 1];
sx = 15; % skew by 15 degrees in x
Skx = [1 tand(sx) 0; ...
0 1 0; ...
0 0 1];
% Translate by 1/2 size of image
tx = -size(img, 2)/2;
ty = -size(img, 1)/2;
T = [1 0 tx; ...
0 1 ty; ...
0 0 1];
% Scale image down by 1/2
sx = 0.5;
sy = 0.5;
S = [sx 0 0; ...
0 sy 0; ...
0 0 1];
% translate, scale, rotate, skew, then translate back
A = inv(T)*Skx*R*S*T;
% create meshgrid points
[x, y] = meshgrid(1:size(img,2), 1:size(img,1));
% reshape so we can apply matrix op
V = [reshape(x, 1, []); reshape(y, 1, []); ones(1, numel(x))];
Vq = inv(A)*V;
% probably not necessary for these transformations but project back to the z=1 plane
Vq(1,:) = Vq(1,:) ./ Vq(3,:);
Vq(2,:) = Vq(2,:) ./ Vq(3,:);
% reshape back into a meshgrid
xq = reshape(Vq(1,:), size(img));
yq = reshape(Vq(2,:), size(img));
% use interp2 to perform bilinear interpolation
imgnew = interp2(x, y, img, xq, yq);
% show the resulting image
imshow(imgnew);

Image rotation about an arbitrary point

I was asked to perform an image rotation about an arbitrary point. The framework they provided was in MATLAB, so I had to fill in a function called MakeTransformMat that receives the angle of rotation and the point about which we want to rotate.
As I've seen in class, to do this rotation we first translate the point to the origin, then rotate, and finally translate back.
The framework asks me to return a transformation matrix. Am I right to build that matrix as the multiplication of the translate-rotate-translate matrices? Otherwise, what am I forgetting?
function TransformMat = MakeTransformMat(theta,center_y,center_x)
%Translate image to origin
trans2orig = [1 0 -center_x;
0 1 -center_y;
0 0 1];
%Rotate image theta degrees
rotation = [cos(theta) -sin(theta) 0;
sin(theta) cos(theta) 0;
0 0 1];
%Translate back to point
trans2pos = [1 0 center_x;
0 1 center_y;
0 0 1];
TransformMat = trans2orig * rotation * trans2pos;
end
This worked for me. Here, I is the input image and J is the rotated image:
[height, width] = size(I);
rot_deg = 45; % Or whatever you like (in degrees)
rot_xc = width/2; % Or whatever you like (in pixels)
rot_yc = height/2; % Or whatever you like (in pixels)
T1 = maketform('affine',[1 0 0; 0 1 0; -rot_xc -rot_yc 1]);
R1 = maketform('affine',[cosd(rot_deg) sind(rot_deg) 0; -sind(rot_deg) cosd(rot_deg) 0; 0 0 1]);
T2 = maketform('affine',[1 0 0; 0 1 0; rot_xc rot_yc 1]);
tform = maketform('composite', T2, R1, T1);
J = imtransform(I, tform, 'XData', [1 width], 'YData', [1 height]);
Cheers.
I've answered a very similar question elsewhere: Here is the link.
In the code linked to, the point about which you rotate is determined by how the meshgrid is defined.
Does that help? Have you read the Wikipedia page on rotation matrices?
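In case that link goes away, here is a minimal sketch of the same idea (not the linked code itself): the rotation center is set by how the sampling grid is shifted before rotating, and the center (cx, cy) below is arbitrary.
I = double(imread('cameraman.tif'));
theta = 45*pi/180;
cx = 100; cy = 80;                           % rotation centre (pixels), pick anything
[x, y] = meshgrid(1:size(I,2), 1:size(I,1));
% Inverse mapping: for every output pixel, sample the input at the location
% obtained by shifting the centre to the origin, rotating, and shifting back.
xs =  cos(theta)*(x - cx) + sin(theta)*(y - cy) + cx;
ys = -sin(theta)*(x - cx) + cos(theta)*(y - cy) + cy;
J = interp2(x, y, I, xs, ys, 'linear', 0);   % 0 fills samples outside the image
imshow(J, []);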

Homographic image transformation distortion issue

I am trying to transform an image using a 3D transformation matrix and assuming my camera is orthonormal.
I am defining my homography using the plane-induced homography formula H=R-t*n'/d (with d=Inf so H=R) as given in Hartley and Zisserman Chapter 13.
What I am confused about is that when I use a rather modest rotation, the image seems to be distorted much more than I expect (I'm sure I'm not mixing up radians and degrees).
What could be going wrong here?
I've attached my code and example output.
n = [0;0;-1];
d = Inf;
im = imread('cameraman.tif');
rotations = [0 0.01 0.1 1 10];
for ind = 1:length(rotations)
theta = rotations(ind)*pi/180;
R = [ 1 0 0 ;
0 cos(theta) -sin(theta);
0 sin(theta) cos(theta)];
t = [0;0;0];
H = R-t*n'/d;
tform = maketform('projective',H');
imT = imtransform(im,tform);
subplot(1,5,ind) ;
imshow(imT)
title(['Rot=' num2str(rotations(ind)) 'deg']);
axis square
end
The formula H = R - t*n'/d has one assumption which is not met in your case:
it implies that you are using a pinhole camera model with focal length = 1.
But in your case, for your camera to be more realistic and for your code to work, you should set the focal length to some positive number much greater than 1 (the focal length is the distance from your camera center to the image plane).
To do this you can define a calibration matrix K which handles the focal length. You just need to change your formula to
H = K*R*inv(K) - (1/d)*K*t*n'*inv(K)
in which K is a 3-by-3 identity matrix whose first two diagonal elements are set to the focal length (e.g. f = 300). The formula can be easily derived if you assume a projective camera.
Below is the corrected version of your code, in which the angles make sense.
n = [0;0;-1];
d = Inf;
im = imread('cameraman.tif');
rotations = [0 0.01 0.1 30 60];
for ind = 1:length(rotations)
theta = rotations(ind)*pi/180;
R = [ 1 0 0 ;
0 cos(theta) -sin(theta);
0 sin(theta) cos(theta)];
t = [0;0;0];
K=[300 0 0;
0 300 0;
0 0 1];
H=K*R/K-1/d*K*t*n'/K;
tform = maketform('projective',H');
imT = imtransform(im,tform);
subplot(1,5,ind) ;
imshow(imT)
title(['Rot=' num2str(rotations(ind)) 'deg']);
axis square
end
You can see the result in the image below:
You can also rotate the image around its center. For that to happen you should set the image-plane origin to the center of the image, which I think is not possible with maketform in MATLAB.
You can use the method below instead.
imT=imagehomog(im,H','c');
Note that if you use this method, you'll have to change some of the settings for n, d, t and R to get an appropriate result.
That method can be found at: https://github.com/covarep/covarep/blob/master/external/voicebox/imagehomog.m
The result of the program with imagehomog and some changes in n, d, t and R is shown below, and it looks more realistic.
New settings are:
n = [0 0 1]';
d = 2;
t = [1 0 0]';
R = [cos(theta), 0, sin(theta);
0, 1, 0;
-sin(theta), 0, cos(theta)];
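Putting those settings together with the same K and homography formula as above, the imagehomog call would look roughly like this. This is an untested sketch; it assumes imagehomog from the linked voicebox file is on your path, and theta is whatever rotation angle you want.
im = imread('cameraman.tif');
f = 300;                          % focal length, as above
K = [f 0 0; 0 f 0; 0 0 1];
theta = 30*pi/180;                % pick any rotation angle
n = [0 0 1]';
d = 2;
t = [1 0 0]';
R = [cos(theta), 0, sin(theta);
     0,          1, 0;
     -sin(theta), 0, cos(theta)];
H = K*R/K - 1/d*K*t*n'/K;         % same plane-induced homography as above
imT = imagehomog(im, H', 'c');    % 'c': image-plane origin at the image centre
imshow(imT);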
Hmm... I'm not 100% sure about this stuff, but it was an interesting question and relevant to my work, so I thought I'd play around and give it a shot.
EDIT: I tried this once using no built-ins. That was my original answer. Then I realized that you could do it your way pretty easily:
The easy answer to your question is to use the correct rotation matrix about the z-axis:
R = [cos(theta) -sin(theta) 0;
sin(theta) cos(theta) 0;
0 0 1];
Here's another way to do it (my original answer):
I'm going to share what I did; hopefully this is useful to you. I only did it in 2D (though that should be easy to expand to 3D). Note that if you want to rotate the image in plane, you will need to use a different rotation matrix than the one you have currently coded: you need to rotate about the z-axis.
I did not use those matlab built-ins.
I referred to http://en.wikipedia.org/wiki/Rotation_matrix for some info.
im = double(imread('cameraman.tif')); % must be double for interpn
[x y] = ndgrid(1:size(im,1), 1:size(im,2));
rotation = 10;
theta = rotation*pi/180;
% calculate rotation matrix
R = [ cos(theta) -sin(theta);
sin(theta) cos(theta)]; % just 2D case
% calculate new positions of image indicies
tmp = R*[x(:)' ; y(:)']; % 2 by numel(im)
xi = reshape(tmp(1,:),size(x)); % new x-indicies
yi = reshape(tmp(2,:),size(y)); % new y-indicies
imrot = interpn(x,y,im,xi,yi); % interpolate from old->new indicies
imagesc(imrot);
My own question now is: "How do you change the origin about which you are rotating the image?" Clearly, I'm rotating about (0,0), the top-left corner.
EDIT 2: In response to the asker's comment, I've tried again.
This time I fixed a couple of things. Now I'm using the same transformation matrix (about x) as in the original question.
I rotated about the center of the image by changing how I define the ndgrids (putting (0,0,0) at the center of the image). I also decided to show 3 planes of the image; this was not in the original question. The middle plane is the plane of interest. To get just the middle plane, you can leave out the zero-padding and redefine the 3rd ndgrid argument to be just 1 instead of -1:1.
im = double(imread('cameraman.tif')); % must be double for interpn
im = padarray(im, [0 0 1],'both');
[x y z] = ndgrid(-floor(size(im,1)/2):floor(size(im,1)/2)-1, ...
-floor(size(im,2)/2):floor(size(im,2)/2)-1,...
-1:1);
rotation = 1;
theta = rotation*pi/180;
% calculate rotation matrix
R = [ 1 0 0 ;
0 cos(theta) -sin(theta);
0 sin(theta) cos(theta)];
% calculate new positions of image indicies
tmp = R*[x(:)'; y(:)'; z(:)']; % 3 by numel(im)
xi = reshape(tmp(1,:),size(x)); % new x-indicies
yi = reshape(tmp(2,:),size(y)); % new y-indicies
zi = reshape(tmp(3,:),size(z));
imrot = interpn(x,y,z,im,xi,yi,zi); % interpolate from old->new indicies
figure;
subplot(3,1,1);imagesc(imrot(:,:,1)); axis image; axis off;
subplot(3,1,2);imagesc(imrot(:,:,2)); axis image; axis off;
subplot(3,1,3);imagesc(imrot(:,:,3)); axis image; axis off;
You are performing rotations around the x-axis: in your matrix, the first component (x) is left unchanged by the rotation. This is confirmed by the perspective deformation in your examples.
The actual amount of deformation then depends on the distance between the camera and the image plane (or, more accurately, on its value relative to the focal length of the camera). It can be significant when the image plane is located near the camera.
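To see that dependence concretely, here is a small, untested sketch in the spirit of the answer above: it applies the same x-axis rotation as in the question but varies only the focal length in K; a larger f gives a milder perspective distortion.
im = imread('cameraman.tif');
theta = 10*pi/180;                          % fixed, modest x-axis rotation
R = [1 0 0; 0 cos(theta) -sin(theta); 0 sin(theta) cos(theta)];
focals = [100 300 1000];
for ind = 1:numel(focals)
    f = focals(ind);
    K = [f 0 0; 0 f 0; 0 0 1];
    H = K*R/K;                              % d = Inf, so the t*n'/d term drops out
    tform = maketform('projective', H');
    subplot(1, numel(focals), ind);
    imshow(imtransform(im, tform));
    title(sprintf('f = %d', f));
end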

How to apply affine transform so that it works well with all transforms including translation?

For some reason the imtransform function ignores the translation part by default.
If I add additional space with XData and YData as described in the manual, I can only handle simple cases (i.e. translation only).
So, how do I apply a full-featured affine transform in MATLAB?
I = imread('cameraman.png');
imshow(I);
% does not translate
xform = [1 2 0; 2 1 0; 100 0 1];
T = maketform('affine',xform);
I2 = imtransform(I,T);
figure, imshow(I2)
% translates but cuts some portion of an image
xform = [1 2 0; 2 1 0; 100 0 1];
T = maketform('affine',xform);
I2 = imtransform(I,T,'XData',[1 size(I,2)+xform(3,1)],'YData',[1 size(I,1)+xform(3,2)]);
figure, imshow(I2)
So I found that I should apply the transform to the image range too.
After that I can decide what to do if an image corner does not end up at the origin of the coordinate system.
I = imread('cameraman.png');
XData=[1 size(I,2)];
YData=[1 size(I,1)];
imshow(I);
xform = [1 2 0; 2 1 0; 100 0 1];
T = maketform('affine',xform);
[XData1, YData1] = tformfwd(T, XData, YData);
if XData1(1)>1
XData1(1)=1;
end
if YData1(1)>1
YData1(1)=1;
end
I2 = imtransform(I,T,'XData',XData1,'YData',YData1);
figure, imshow(I2)
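As a variation on the same idea, the Image Processing Toolbox function findbounds maps the whole input rectangle through the transform, so the output range can be computed without hand-picking points. A minimal sketch, reusing the transform from above:
I = imread('cameraman.png');
xform = [1 2 0; 2 1 0; 100 0 1];
T = maketform('affine', xform);
% Map the full input rectangle through T; rows are [xmin ymin; xmax ymax].
outBounds = findbounds(T, [1 1; size(I,2) size(I,1)]);
% Keep the original origin in view so the translation stays visible
% instead of being cropped away by the tight bounding box.
XData = [min(outBounds(1,1), 1) outBounds(2,1)];
YData = [min(outBounds(1,2), 1) outBounds(2,2)];
I2 = imtransform(I, T, 'XData', XData, 'YData', YData);
figure, imshow(I2)
Unlike tformfwd on just the two diagonal corner points, findbounds accounts for all four corners, so it also behaves correctly when the transform includes rotation or skew.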

3d grayscale volume projection onto 2D plane

I have a 3-D grayscale volume corresponding to ultrasound data. In Matlab this 3-D volume is simply a 3-D matrix of MxNxP. The structure I'm interested in is not oriented along the z axis, but along a local coordinate system already known (x'y'z'). What I have up to this point is something like the figure shown below, depicting the original (xyz) and the local coordinate systems (x'y'z'):
I want to obtain the 2-D projection of this volume (i.e. an image) through a specific plane on the local coordinate system, say at z' = z0. How can I do this?
If the volume were oriented along the z axis, this projection could be readily achieved; i.e., if the volume in MATLAB is V, then:
projection = sum(V,3);
thus, the projection can be computed as the sum along the 3rd dimension of the array. However, with a change of orientation the problem becomes more complicated.
I've been looking at the Radon transform (which applies only to 2-D images and not volumes) and have also been considering orthographic projections, but at this point I'm clueless as to what to do!
Thanks for any advice!
New attempt at solution:
Following the tutorial http://blogs.mathworks.com/steve/2006/08/17/spatial-transformations-three-dimensional-rotation/ and making some small changes, I might have something which could help you. Bear in mind that I have little or no experience with volumetric data in MATLAB, so the implementation is quite hacky.
In the code below I use tformarray() to rotate the structure in space. First, the data is centered, then rotated using rotationmat3D to produce the spatial transformation, before the data is moved back to its original position.
As I have never used tformarray before, I handled data points falling outside the defined region after rotation by simply padding the data matrix (NxMxP) with zeros all around. If anyone knows a better way, please let us know :)
The code:
%Synthetic dataset, 25x50x25
blob = flow();
%Pad to allow for rotations in space. Crude solution; something
%better might be possible with a better understanding
%of tformarray()
blob = padarray(blob,size(blob));
f1 = figure(1);clf;
s1=subplot(1,2,1);
p = patch(isosurface(blob,1));
set(p, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view([1 1 1])
camlight
lighting gouraud
%Calculate center
blob_center = (size(blob) + 1) / 2;
%Translate to origin transformation
T1 = [1 0 0 0
0 1 0 0
0 0 1 0
-blob_center 1];
%Rotation about the [0 1 1] axis
rot = -pi/3;
Rot = rotationmat3D(rot,[0 1 1]);
T2 = [ 1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1];
T2(1:3,1:3) = Rot;
%Translation back
T3 = [1 0 0 0
0 1 0 0
0 0 1 0
blob_center 1];
%Total transform
T = T1 * T2 * T3;
%See http://blogs.mathworks.com/steve/2006/08/17/spatial-transformations-three-dimensional-rotation/
tform = maketform('affine', T);
R = makeresampler('linear', 'fill');
TDIMS_A = [1 2 3];
TDIMS_B = [1 2 3];
TSIZE_B = size(blob);
TMAP_B = [];
F = 0;
blob2 = ...
tformarray(blob, tform, R, TDIMS_A, TDIMS_B, TSIZE_B, TMAP_B, F);
s2=subplot(1,2,2);
p2 = patch(isosurface(blob2,1));
set(p2, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view([1 1 1])
camlight
lighting gouraud
The arbitrary visualization below is just to confirm that the data is rotated as expected, plotting a closed surface where the data passes the value 1. With blob2, you should now be able to project by using simple sums.
figure(2)
subplot(1,2,1);imagesc(sum(blob,3));
subplot(1,2,2);imagesc(sum(blob2,3));
Assuming you have access to the coordinate basis R = [x' y' z'], and that those vectors are orthonormal, you can simply extract the representation in this basis by multiplying your data by the 3x3 matrix R, where x', y', z' are column vectors.
With the data stored in D (Nx3), you get the representation by multiplying by R:
Dmarked = D*R;
and since D = Dmarked*inv(R), going back and forth is straightforward.
The following code might help you see the transformation. Here I create a synthetic dataset, rotate it, and then rotate it back. Doing sum(DR(:,3)) would then be your sum along z'.
%#Create synthetic dataset
N1 = 250;
r1 = 1;
dr1 = 0.1;
dz1 = 0;
mu1 = [0;0];
Sigma1 = eye(2);
theta1 = 0 + (2*pi).*rand(N1,1);
rRand1 = normrnd(r1,dr1,1,N1);
rZ1 = rand(N1,1)*dz1+1;
D = [([rZ1*0 rZ1*0] + repmat(rRand1',1,2)).*[sin(theta1) cos(theta1)] rZ1];
%Create rotation matrix
rot = pi/8;
R = rotationmat3D(rot,[0 1 0]);
% R = 0.9239 0 0.3827
% 0 1.0000 0
% -0.3827 0 0.9239
Rinv = inv(R);
%Rotate data
DR = D*R;
%# Visualize data
f1 = figure(1);clf
subplot(1,3,1);
plot3(DR(:,1),DR(:,2),DR(:,3),'.');title('Your data')
subplot(1,3,2);
plot3(DR*Rinv(:,1),DR*Rinv(:,2),DR*Rinv(:,3),'.r');
view([0.5 0.5 0.2]);title('Representation using your [xmarked ymarked zmarked]');
subplot(1,3,3);
plot3(D(:,1),D(:,2),D(:,3),'.');
view([0.5 0.5 0.2]);title('Original data before rotation');
If you have two normalized 3x1 vectors x2 and y2 corresponding to your local coordinate system (x' and y'), then for a position P its local coordinates will be xP = P'*x2 and yP = P'*y2.
So you can try to project your volume using accumarray:
[x y z]=ndgrid(1:M,1:N,1:P);
posP=[x(:) y(:) z(:)];
xP=round(posP*x2);
yP=round(posP*y2);
xP=xP-min(xP(:))+1;
yP=yP-min(yP(:))+1;
V2=accumarray([xP(:),yP(:)],V(:));
If you provide your data, I will test it.