How to determine the rotation angle of a rotated image in MATLAB? - matlab

I would like to:
step 1) Rotate an image by 20 degrees using this code: rotatedImage = imrotate(originalImage, 20);.
step 2) Calculate the rotation angle used in step 1, based only on the rotated image if possible, or on both the rotated and the original image.
Is there any function in MATLAB that could do step 2, or a suggested approach?

This example shows one way to perform step 2:
A = 'peppers.png'; % demo image shipped with MATLAB
img = im2double(imread(A));
img_r = imrotate(img, 20, 'nearest', 'crop'); % <-- the distorted image, rotated 20 deg
xopt = fminsearch(@(x) imr(x, img_r, img), 10); % <-- start with 10 deg as a guess
where imr is the function
function obj = imr(x, img1, img2)
% Rotate img1 by x degrees and measure the squared difference to img2
img1_r = imrotate(img1, x, 'nearest', 'crop');
obj = sum((img2(:) - img1_r(:)).^2);
end
The function wraps imrotate, turning it into an objective function that fminsearch can minimize. Note that the recovered angle should come out near -20 here, since rotating the distorted image by -20 degrees undoes the original +20-degree rotation.
This shows the original, distorted, and reversed images (with the angle determined as above).
Note the limitations: the rotated images are cropped so that a point-by-point comparison is possible while computing the objective function.
This is probably not the absolute best way to do it, as I imagine there are morphological algorithms designed to answer your specific question in a more general way. Still, it worked.

You might want to check this. I have used the Fourier-Mellin transform to retrieve rotation; it is accurate to within 1 degree. I think you will have to invest some time, and don't forget to check some papers on the Fourier-Mellin transform.
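As a rotation-only illustration of that idea (the full Fourier-Mellin transform also recovers scale), here is a hedged sketch: the FFT magnitude is translation invariant and rotates with the image, so resampling it to polar coordinates turns the rotation into a circular shift of the angular profile. It assumes img and img_r from the answer above, and the sign of the recovered angle may need flipping depending on the rotation convention.
% Hedged sketch: rotation recovery from FFT magnitudes (rotation only).
if ndims(img) == 3, img = rgb2gray(img); img_r = rgb2gray(img_r); end
F1 = abs(fftshift(fft2(img)));             % magnitude spectra are translation
F2 = abs(fftshift(fft2(img_r)));           % invariant and rotate with the image
[h, w] = size(F1);
cx = (w + 1)/2;  cy = (h + 1)/2;           % spectrum center
theta = (0:359) * pi/180;                  % 1-degree angular bins
r = (1:floor(min(h, w)/2) - 1)';           % radial samples
[T, R] = meshgrid(theta, r);
X = cx + R.*cos(T);  Y = cy + R.*sin(T);   % polar sampling grid
P1 = interp2(F1, X, Y, 'linear', 0);
P2 = interp2(F2, X, Y, 'linear', 0);
a1 = sum(P1, 1);  a2 = sum(P2, 1);         % angular energy profiles
xc = real(ifft(fft(a1) .* conj(fft(a2)))); % circular cross-correlation
[~, k] = max(xc);
angleEst = mod(k - 1, 180);                % |FFT| has 180-degree symmetry
fprintf('Estimated rotation: %d degrees (mod 180)\n', angleEst);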

Related

How can I rotate connected components so that they are upright in Matlab?

Currently I am working with a sudoku grid and I have the binary image. I am using regionprops to get the area of the connected components and then turn the rest of the image black. After this I call the OCR method to try to read the sudoku numbers. The problem is that this only works if the sudoku grid in the image is straight and upright; if it is rotated even a little bit, I am not able to pull the numbers. This is the code I have so far:
% get grid connected parts
conn_part = bwconncomp(im_binary);
% blacken area outside
stats = regionprops(conn_part,'Area');
im_out = im_binary; % Make mask
im_out(vertcat(conn_part.PixelIdxList{[stats.Area] < 825 | [stats.Area] > 2500})) = 0;
imagesc(im_out);
title("Numbers pulled");
sudokuNum = ocr(im_out,'TextLayout','Block','CharacterSet','0123456789');
sudokuNum.Text;
where im_binary is the binary image, im_out is the output image, and stats is the object returned by regionprops containing the areas of the connected components.
I know I can rotate the image before getting the OCR results by doing:
im_out = imrotate(im_out, angle)
However, I don't know what angle the grid is at, since this is part of a function that loops through multiple images. I looked into regionprops because it has an 'Orientation' attribute I could pull, but I don't understand how I would actually use it. The documentation also states that regionprops returns a value between -90 and 90 degrees, but my image could be rotated by more than 90 degrees.
Don't rotate the connected component or the binary image. First use the binary image to determine the rotation, then rotate the original grey-scale or color input image, and then binarize the rotated image. You'll be able to transform with interpolation, which will improve your results greatly. It does require doing the binarization step twice, but I don't think that step is usually too expensive.
The regionprops Orientation feature is computed by fitting an ellipse to the shape. This is meaningful only for elongated objects; for a square sudoku grid it will not yield any valuable information.
Instead, look at the angle at which the smallest Feret diameter is obtained. The Feret diameters are the lengths of the shape's projections at arbitrary angles; at one angle this projection is smallest, and by necessity that angle corresponds to one of the principal axes of the square. Here is more information about how to compute Feret diameters in MATLAB.
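For illustration, a minimal sketch of that idea, assuming im_binary holds the grid as the dominant foreground object: rotate the mask over a range of angles and measure its horizontal extent, which is the Feret diameter at that angle.
% Hedged sketch: find the rotation giving the smallest Feret diameter.
angles = -45:0.5:45;              % square symmetry: the angle only matters mod 90
widths = zeros(size(angles));
for k = 1:numel(angles)
    rot = imrotate(im_binary, angles(k), 'nearest', 'loose');
    cols = any(rot, 1);           % columns touched by the shape
    widths(k) = find(cols, 1, 'last') - find(cols, 1, 'first') + 1;
end
[~, k] = min(widths);
deskewAngle = angles(k);          % rotating by this angle aligns the grid with the axes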
A different alternative is e.g. to use the Hough transform to detect the lines of the grid.
Do note that the geometry of the puzzle will never tell you which side is up. The angle you get here should be taken modulo π/2 (i.e. constrained to the range -π/4 to π/4).
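In code, folding an angle given in degrees into that symmetric range could look like this:
% Fold an angle into the square's symmetry range [-45, 45) degrees:
angle45 = mod(angleDeg + 45, 90) - 45; % angleDeg is whatever estimate you obtained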
To find out which direction is up, you might try to read the text; if it fails, rotate by 90 degrees and try again.
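A minimal sketch of that retry loop, assuming im_out as in the question and a naive "did we read any digits" acceptance test:
% Hedged sketch: try OCR in all four quarter-turn orientations.
for quarterTurn = 0:3
    candidate = imrotate(im_out, 90*quarterTurn);
    result = ocr(candidate, 'TextLayout', 'Block', 'CharacterSet', '0123456789');
    if ~isempty(strtrim(result.Text)) % naive acceptance criterion
        break;
    end
end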

Rotating a template to match an edge

(I use MATLAB R2015a.) I have a plot of the edge points of an object (obtained using edge detection), and a plot of the template of the object. I want to rotate the template until it matches the detected edge points. (Figure link included: solid blue - template, red dots - edge-detected points; the rotation is subtle, but it's there.)
I plan to rotate the template in a loop about the centroid through different values of theta (which I know how to do), and have the code stop executing when the template matches the edge (which is what I want to know how to do) and return the corresponding theta.
The number of points making up the template and the number making up the edges are not the same, so splitting the plots into 3 lines and 1 (half) ellipse and comparing them directly does not work.
Using the regionprops 'Orientation' property does not give the expected result for each frame because of the way the edges are detected in each frame. (I can elaborate on this if required.)
I have intentionally plotted the points using plot, rather than keeping the edge as a BW image, because otherwise I would have to round off indices while creating the template, and for my application I cannot afford to lose precision like that.
I'm not lazy; I don't want somebody to just code it up for me. None of my ideas worked and I'm unable to think differently, so perhaps somebody with a fresh mind and more experience in MATLAB will have an idea.
Assuming both the image and the template are bitmaps (i.e. the template isn't given as 3 lines + half an ellipse), and you know the position and size of the template, just not the angle:
For each edge point, find the closest point on the template, and sum all the distances, or perhaps the sum of the square roots of the distances, or some other metric that penalizes many small outliers - many points will be slightly wrong if the rotation is slightly wrong, while a few points that are completely wrong are noise to ignore. So, something like:
minDist = inf;
minAngle = NaN;
for t = 1 : length(thetas)
    templatePoints = ...; % rotate the template about the centroid by thetas(t)
    currentDist = 0;
    for i = 1 : size(edges, 1)
        edge = edges(i, :);                 % assuming edges is N-by-2 (x, y) points
        % squared distances from this edge point to all template points
        % (implicit expansion, R2016b+; use bsxfun on older versions)
        d = sum((templatePoints - edge).^2, 2);
        currentDist = currentDist + min(d); % distance to the closest template point
    end
    if (currentDist < minDist)
        minDist = currentDist;
        minAngle = thetas(t);
    end
end
When you complete the full circle of template rotations, select the rotation with the lowest difference.
This procedure might be a bit problematic: the template may really be rotated by 10.5 deg while you are stepping in 1-deg increments, so you will not end up with the exactly optimal angle. Plus, you will try tons of completely wrong angles, slowing you down.
To find the optimal angle you can refine the angle step, as in the sketch below: first try every 10 deg, then every 1 deg around the minimum, then every 0.1 deg, etc. Or use an optimization method. Gradient descent should work fine if you don't have too much noise; use simulated annealing for noisy images. Use the same code as above; currentDist is a good enough objective - it should be as close to 0 as possible.
If the template size or position is unknown too, you should definitely use simulated annealing; gradient descent will almost surely get stuck in a local minimum. Use code similar to the above to calculate the difference between the edges and the template, and hand all the unknown parameters to the method.
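A minimal sketch of the coarse-to-fine refinement; templateDist is a hypothetical wrapper around the distance loop from the snippet above:
% Hedged sketch: shrink the angle step around the running best angle.
step = 10;  best = 0;                     % degrees
while step > 0.01
    candidates = best + (-5*step : step : 5*step);
    costs = arrayfun(@(th) templateDist(th), candidates); % hypothetical helper
    [~, k] = min(costs);
    best = candidates(k);
    step = step / 10;                     % zoom in around the current minimum
end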
One option is to render your result as an image for each rotation, but do not make it binary - make it grayscale. There are ways of doing this so that the "maximum interpolated edge" lies where your continuous function is (Xiaolin Wu's line algorithm, for example).
As you are matching an image, you are not losing precision, but bringing the precision of your target down to the same level as your image.
Once you have this, for each rotation use a metric to evaluate how well the two grayscale (yes, not binary) images match, e.g. the correlation coefficient, mutual information, the universal quality index (UQI), etc.
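As a sketch, scoring rotations with the correlation coefficient might look like this; renderTemplate and edgeImg are hypothetical stand-ins for the anti-aliased grayscale renderings described above, and corr2 is from the Image Processing Toolbox:
% Hedged sketch: pick the rotation whose grayscale rendering correlates
% best with the rendered edge image (the rasterization helpers are assumed).
scores = zeros(size(thetas));
for t = 1:numel(thetas)
    templImg = renderTemplate(thetas(t)); % hypothetical rasterizer
    scores(t) = corr2(edgeImg, templImg); % correlation coefficient
end
[~, best] = max(scores);
bestTheta = thetas(best);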

Matlab: Extrinsics function on two checkerboards in one image

I am using the extrinsics function for a project that aims to determine the distances from the camera of objects that have checkerboards attached to them. This is done for every frame in a video. My workflow is fairly simple:
Attach an asymmetric checkerboard to each object
Calibrate the camera using the Camera Calibrator app on ~ 20 images
For each frame
while checkerboardsFound < numCheckerboards
    % Undistort the image
    data = undistortImage(data, cameraParams);
    % Get the checkerboard points in the frame
    [imagePoints, boardSize] = detectCheckerboardPoints(data);
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    % Get the rotation and translation
    [rot, trans] = extrinsics(imagePoints, worldPoints, cameraParams);
    % <Draw a black rectangle over the area where the checkerboard points
    % are, using the imagePoints matrix. This "hides" the checkerboard so
    % detectCheckerboardPoints(data) can be run again to find a different one.>
    data = hideCheckerboard(data, [imagePoints(1,:); imagePoints(end,:)]);
    checkerboardsFound = checkerboardsFound + 1;
end
My problem is that when I find, say, two particular checkerboards that are about 5 cm apart in depth and look at the Z values of their respective translation vectors as returned by the extrinsics function, I expect to see a difference of about 50 units (5 cm). However, I see a considerably greater difference - e.g. about 400 units (40 cm). I am certain the square sizes for the checkerboards are correctly defined (5 mm each). The frames come from a Samsung smartphone capturing in video mode (which applies magnification).
Could anyone please help determine what could be causing this discrepancy in the Z values? Is it too ambitious to use the extrinsics function in such a manner?
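For reference, a minimal sketch of the comparison being made, assuming trans1 and trans2 were saved from the two extrinsics calls. With MATLAB's row-vector convention cameraPoints = worldPoints*rot + trans, each translation vector is that board's origin expressed in camera coordinates, so the Z components can be compared directly:
% Hedged sketch of the depth comparison described above.
depthDiff = trans2(3) - trans1(3); % world units are mm here (squareSize = 5 mm)
fprintf('Depth difference: %.1f mm (expected about 50)\n', depthDiff);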

Subpixel edge detection for almost vertical edges

I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution would be around 600 x 1000.
I came across a comment by Mark Ransom here which mentions edge detection algorithms for vertical edges, but I haven't come across any yet. Would such an algorithm be useful in my case (given that the edge isn't strictly a straight line)? It will always be a near-vertical edge, though. I want it to be accurate to at least 1/100th of a pixel, and I also want access to the sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino, but it does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do the following: apply Canny/Sobel in MATLAB to get the edges of this image (note that they won't form a continuous line), then somehow interpolate these edges and obtain the coordinates at subpixel level. Is that possible?
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit (fit lists coefficients alphabetically: a, b, w, x0)
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'StartPoint', [150 50 10 150])
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x_0 = 149.6 pix for your first image.
However, I doubt you will be able to achieve a subpixel accuracy of 1/100th of a pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels at 255). Because the real atan profile is clipped, the fit is biased. If you have control over the experiments, I suggest you redo them with a smaller exposure time, for instance.
There are not many points on the transition, so there is not much information on where the transition is. Typically, your resolution will be on the order of the square root of the width of the atan (or whatever shape you prefer). In your case this limits the subpixel resolution to 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose this projection method, you should look for a way to correct the tilt before projecting, as sketched below, to increase the accuracy. This won't improve your accuracy by several orders of magnitude, though.
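A hedged sketch of that tilt correction, assuming Img as loaded above: try small rotations and keep the one giving the steepest projected transition, then run the atan fit on the corrected image.
% Hedged sketch: estimate the tilt before projecting.
tilts = -2:0.1:2;                     % degrees; assumed small
sharpness = zeros(size(tilts));
for k = 1:numel(tilts)
    rot = imrotate(Img, tilts(k), 'bilinear', 'crop');
    p = mean(rot, 1);                 % projected profile
    sharpness(k) = max(abs(diff(p))); % steepness of the transition
end
[~, k] = max(sharpness);
bestTilt = tilts(k);                  % project imrotate(Img, bestTilt) instead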
There is a problem with your image: at pixel level, it looks like there are four interlaced subimages (odd and even rows and columns). Look at this zoomed area close to the edge.
To avoid this artifact, I have simply taken the even rows and columns of your image and computed the subpixel edges. Finally, I look for the best-fitting straight line, using the function clsq whose code is on this page:
% Load image (img rather than image, to avoid shadowing the image() function)
url = 'http://i.stack.imgur.com/bQsu5.png';
img = imread(url);
imgEvenEven = img(1:2:end, 1:2:end);
imshow(imgEvenEven, 'InitialMagnification', 'fit');
% Subpixel detection
threshold = 25;
edges = subpixelEdges(imgEvenEven, threshold);
visEdges(edges);
% Compute the fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c, n] = clsq(A, 2);
y = [1, 200];
x = -(n(2)*y + c) / n(1);
hold on;
plot(x, y, 'g');
When executing this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image has been scaled by 1/2 by taking only the even rows and columns, so the resulting coordinates must be scaled back; a worked example follows.
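As a worked example of that scaling, assuming 1-based pixel coordinates: subimage pixel (x, y) came from full-image pixel (2x-1, 2y-1), so substituting x = (X+1)/2 and y = (Y+1)/2 into c + n(1)*x + n(2)*y = 0 gives the full-resolution line.
% Hedged sketch: map the fitted line back to full-resolution coordinates.
cFull = 2*c + n(1) + n(2); % the offset absorbs the 1-based half-pixel shift
nFull = n;                 % the normal direction is unchanged
% Full-resolution line: cFull + nFull(1)*X + nFull(2)*Y = 0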
Besides, you can try the other three subimages (imgEvenOdd, imgOddEven and imgOddOdd) and combine the four straight lines to obtain the best solution.

Determine the orientation of a triangle in Matlab

I am working on a project to perform the landing of a quadrotor automatically, by visual recognition of a target. I have the code to detect the target through HOG features. Now the idea is to find the triangle, which is isosceles, and measure its sides so that I can determine the orientation that way. I have tried Hough, but I cannot manage to make it work.
The target is a proposed one, and it consists of an isosceles triangle inside a circle. But if you can think of a better one, please let me know.
Please ask if anything is unclear. Thank you very much.
Update 1:
@McMa's idea works well when I deal only with the target as an image. This is the code:
clc; close all;
im = imread('target.bmp');
im = rgb2gray(im);
im2 = imcrop(im, [467.51 385.51 148.98 61.98]);
im2 = imcomplement(im2);
im2 = imrotate(im2, 0);
s = regionprops(im2, 'Area', 'Centroid', 'Extrema', 'Orientation');
[imH, imW] = size(im2);
if imH - s(end).Centroid(2) < imH/2
    state = 1; % Upright
else
    state = 2; % Upside down
end
imshow(im2); hold on
plot(s(end).Centroid(1), s(end).Centroid(2), 'b*')
if s(end).Orientation > 0
    degrees = s(end).Orientation;
else
    degrees = s(end).Orientation + 180;
end
if (0 < degrees) && (degrees < 89.99) && state == 2
    degrees = degrees + 180;
elseif (90 < degrees) && (degrees < 179) && state == 1
    degrees = degrees + 180;
end
fprintf('The orientation is %g degrees\n', degrees)
Update 2:
Now I have another problem: I need to know whether the camera is seeing the whole target or only the small circle + triangle, and I need to know this before computing the orientation.
I have tried many options. For example, I wanted to count the number of circles, so that if there are 2 the camera is seeing the big target, and if there is 1, just the small one. But they are not detected reliably; even if I play with the sensitivity, it's not going to be a robust method.
Image: https://www.dropbox.com/s/7mbpna3xfquq5n7/P0016.bmp?dl=0
Classifer: https://www.dropbox.com/s/236vm3romw56983/Cascade1Matlab.xml?dl=0
im = imread('P0016.bmp');
detector = vision.CascadeObjectDetector('Cascade1Matlab.xml');
bbox = step(detector, im); % Detect the target.
detectedImg = insertObjectAnnotation(im, 'rectangle', bbox, 'target'); % Insert bounding boxes and return the marked image.
imshow(detectedImg)
BW = rgb2gray(im);
BW = imcrop(BW, bbox(1,:) + [0 0 10 10]);
[imH, imW, ~] = size(im); % third output guards against the RGB channel dimension
centers = imfindcircles(im, [1 round(imH)]);
figure; hold on;
imshow(im);
plot(centers(:,1), centers(:,2), 'r*', 'LineWidth', 4)
I also tried other approaches, such as the Euler number, but with no success; I can't find anything that works properly.
I think the easiest and fastest way would be to find your target and binarize the image. Afterwards use regionprops() and read the 'Orientation' property to get the orientation.
If you can't use that toolbox, the function is very easy to implement by calculating the covariance matrix of your region. Let me know if you need some tips on this.
Edit:
I just so happened to have some nicely vectorized functions around here ;) so if speed is a top priority, you can easily write your own regionprops() trimmed to the bare minimum, like this:
function M = ImMoment(Image, ii, jj)
% Raw image moment: sum over pixels of row^ii * col^jj * intensity
ImSize = size(Image);
K = repmat((1:ImSize(1))', 1, ImSize(2)).^ii;
J = repmat(1:ImSize(2), ImSize(1), 1).^jj;
M = K.*J.*Image;
M = sum(M(:));
end
for the image moments and
function [Matrix, Centroid, Angle] = CovMat(Image)
% Centroid and central second moments from the raw moments above
Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
            ImMoment(Image,1,0)/ImMoment(Image,0,0)];
Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
Matrix = [Miu20, Miu11   % covariance matrix, in case you need it for anything...
          Miu11, Miu02];
Angle = 1/2*atand(2*Miu11/(Miu20-Miu02)); % your orientation
end
for your orientation and covariance matrix. More about image moments here.
Image moments are very mighty; have fun!
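A hypothetical usage sketch of the two helpers above; the file name and the use of im2bw for binarization are assumptions:
% Hedged usage sketch of ImMoment/CovMat on a binary target image.
bw = im2bw(rgb2gray(imread('target.bmp'))); % file name assumed
[Cov, centroid, angleDeg] = CovMat(double(bw));
fprintf('Orientation: %.2f degrees\n', angleDeg);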
I have a very blunt solution in mind. It may work. I have not actually tried it, since there is no image to work on; if it fails, post the error.
Assumption: you have filtered the image and obtained a binary image that contains only the triangle, possibly with uniform noise.
Now take a 0-degree image (image1), filter it, and obtain the binary image (bw1).
When you are trying to land your quadrotor, take an image (image2) and convert it to binary (bw2).
Now find the correlation between these two images, corr2(bw1, bw2), and store it in a variable.
Rotate the image by a step angle; let the angle be 5 degrees: imrotate(bw2, 5).
Again find the correlation between the two images.
Do this for all angles.
The orientation would be the angle (number of rotations * 5) at which the correlation is maximum.
The term "maximum" signifies that you may not find the correlation to be exactly 1, as this highly depends on your filtering technique for obtaining a clean binary image.
I also accept that computing the correlation for all angles requires significant computation and time. This would be really difficult to achieve in real time unless you have plenty of computing power (in that case, look into the Parallel Computing Toolbox, especially parfor).
Hope this was useful to you. Post a comment if you face any error.
Finally, good luck. Nice project.
P.S. Pad with white or black pixels (depending on your binary image) when rotating the image; a minimal sketch of the whole search follows.
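Putting the steps above together, a minimal sketch (bw1 and bw2 as defined above; the 5-degree step and the zero padding done by 'crop' are assumptions):
% Hedged sketch of the correlation search described above.
bestCorr = -inf;  bestAngle = 0;
for angle = 0:5:355
    bw2r = imrotate(bw2, angle, 'nearest', 'crop'); % 'crop' pads with zeros
    c = corr2(double(bw1), double(bw2r));           % correlation coefficient
    if c > bestCorr
        bestCorr = c;
        bestAngle = angle;
    end
end
fprintf('Estimated orientation: %d degrees (corr = %.3f)\n', bestAngle, bestCorr);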