Rotating a template to match an edge - MATLAB

(I use MATLAB R2015a). I have a plot of edge points of an object (obtained using edge detection), and I have a plot of the template of the object. I want to rotate the template until it matches with the detected edge points. (Figure link included: solid blue - template, red dots - edge detected points; the rotation is subtle, but it's there.)
I plan to rotate the template in a loop about the centroid through different values of theta (which I know how to do), and have the code stop executing when the template matches the edge (which is what I want to know how to do) and return the corresponding theta.
The number of points making up the template and the number making up the edges are not the same, so splitting the plots into 3 lines and 1 (half) ellipse and comparing them directly does not work.
Using regionprops 'orientation' does not give the expected result for each frame because of the way the edges are being detected in each frame. (I can elaborate more on this if required)
I have intentionally plotted the points using plot, rather than keeping the edge as a BW image, because otherwise I would have to round off indices while creating the template, and for my application I cannot afford to lose precision like that.
I'm not lazy, I don't want somebody to just code it up for me. None of my ideas worked and I'm unable to think any differently, so perhaps somebody with a fresh mind and more experience in Matlab will have some idea.

Assuming both image and template are bitmaps (i.e., the template isn't given as 3 lines + half an ellipse), and you know the position and size of the template, just not the angle:
For each edge point, find the closest point on the template, and sum all the distances - or perhaps the sum of square roots of distances, or some other metric that penalizes many small deviations: many points will be slightly wrong if the rotation is slightly wrong, while a few points that are completely wrong are noise to ignore. So, something like:
minDist = inf;
minAngle = NaN;
for t = 1 : length(thetas)
    templatePoints = ...;   % template rotated by thetas(t) about the centroid
    currentDist = 0;
    for i = 1 : size(edges, 1)
        edge = edges(i, :);   % (x, y) of this edge point
        % squared distances from this edge point to every template point
        d2 = sum(bsxfun(@minus, templatePoints, edge).^2, 2);
        currentDist = currentDist + sqrt(min(d2));   % closest-point distance
    end
    if currentDist < minDist
        minDist = currentDist;
        minAngle = thetas(t);
    end
end
When you complete the full circle of template rotations, select the rotation with the lowest total distance.
This procedure might be a bit problematic: the template might be rotated by 10.5 deg while you are stepping in 1 deg increments, so you will not end up with the truly optimal angle. Plus you will try tons of completely wrong angles, slowing you down.
But to find the optimal angle you can vary the angle step: say, first try every 10 deg, then every 1 deg around the minimum, then every 0.1 deg, etc. Or use an optimization method. Gradient descent should work fine if you don't have too much noise; use simulated annealing for noisy images. Use the same code as above - currentDist is a good enough optimization objective, and it should be as close to 0 as possible.
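A minimal sketch of that coarse-to-fine search, assuming a hypothetical helper templateDist(theta) that returns the summed closest-point distance for the template rotated by theta (the currentDist from the loop above):
% Coarse-to-fine angle search. templateDist() is a hypothetical helper.
thetas = 0 : 10 : 350;                 % coarse pass over the full circle
dists = arrayfun(@templateDist, thetas);
[~, k] = min(dists);
bestTheta = thetas(k);
for step = [1 0.1 0.01]                % successive refinement passes
    thetas = bestTheta - 10*step : step : bestTheta + 10*step;
    dists = arrayfun(@templateDist, thetas);
    [~, k] = min(dists);
    bestTheta = thetas(k);
end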
If the template size or position is unknown too, you definitely should use simulated annealing - gradient descent will almost surely get stuck in a local minimum. Use code similar to the above to calculate the difference between the edges and the template, and pass all the unknown parameters to the method.

One option is to make an image of your result for each rotation, but do not make it binary, make it grayscale. There are ways of making this so the "maximum interpolated edge" is where your continuous function is (such as Xiaolin Wu's line algorithm, for example).
As you are matching an image, you are not losing precision, but bringing the precision of your target down to the same level as your image.
Once you get this, for each rotation, use a metric to evaluate how well the two grayscale (yes, not binary) images match, e.g. the correlation coefficient, mutual information, or the universal quality index (UQI).
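A rough sketch of that comparison loop; renderGray() and rotateTemplate() are hypothetical helpers (an anti-aliased rasterizer such as Wu's algorithm, and the template rotation), while corr2 is from the Image Processing Toolbox:
% renderGray() and rotateTemplate() are hypothetical helpers; corr2 is
% the Image Processing Toolbox correlation coefficient.
edgeImg = renderGray(edgePoints, imSize);   % grayscale render of the edges
bestScore = -inf;
bestTheta = NaN;
for theta = thetas
    tmplImg = renderGray(rotateTemplate(templatePoints, theta), imSize);
    score = corr2(edgeImg, tmplImg);
    if score > bestScore
        bestScore = score;
        bestTheta = theta;
    end
end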


Moving least squares fitting for point displacements having issues

Explanation of the problem:
I have points with (x,y,z) coordinates at two+ distinct times. For convenience, they can be imagined as irregularly spaced points along the surface of an inverted paraboloid.
There is some minimal thickness to the paraboloid. The paraboloid changes shape slightly as time proceeds (like a balloon inflating) and when it does so, all of the points move.
By subtracting the coordinates (time 2 - time 1), I can get the displacement vector at each point.
It is important to note (and I suspect this might be the source of the problem) that at the first time point, the x and y coordinates range from 0 to 2000, and the z coordinates are all within a narrower range - say 350 to 450. During the deformation, each point has an x component of displacement, y component, and z component.
The x and y components are small (~50 at most), while the z component is the largest (goes up to 400 near the center, much less near the edges).
Using weighted moving least squares at the location of each point, I am trying to fit the components of displacement to a second-degree polynomial surface in terms of the original (x, y, z) coordinates of the point, e.g.:
x component of displacement = a*x^2 + b*x*y + c*x + d*y^2 + ... + h*z^2 + i*z + j
I use the lsqr function in MATLAB, like so, looping through each point for each time interval:
Ux = displacements{k,1}(:,1);
Cx = lsqr((adjust_B_matrix'*W*adjust_B_matrix),(adjust_B_matrix'*W*Ux),1e-7,10000);
W is the weight matrix, and adjust_B_matrix is the matrix of all (x,y,z) coordinates at time 1, shifted so that they're all centered around the point at which I'm trying to fit the function.
What is going wrong?
It's just not working -- once I have the functions, they're re-centered around the actual coordinates of the points.
But when I plot the resulting points (initial x + displacement_x, initial y + displacement_y, initial z + displacement_z) by plugging the time-1 coordinates into the now-discovered functions, it just spits out a surface that looks like the surface at time 1.
What might be going wrong? Things I have tried:
It's not an issue with the code itself - I generated 'fake' data using a grid of points and it worked perfectly: the predicted locations were superimposed on the actual coordinates, and I was able to recover the function I started with. But in that trial example, I used x, y, z from 0 to 5, evenly spaced.
Global fitting works (but I need local fitting...).
I tried MATLAB's curve fitting toolbox and just tried to fit one of the displacements to only x and y coordinates, globally. It worked perfectly.
I think I shouldn't have a singular matrix issue because I use a large radius (around 75-80 points) in the calculations, somewhat dispersed in 3D space.
Suspicions:
I think it has to do with the uneven distribution of initial (x,y,z) coordinates, but I don't know why or how to fix the issue, or even what method I can use.
If you read this far, thank you so much. Any advice would be greatly appreciated.
Figure for reference:
green = predicted points at time 2. Overlapping mostly with red, the actual coordinates of the points at time 1.
blue is the correct coordinates of points at time 2 (this is where the green ones should be if things were working).
Updated link for files:
http://a.tmp.ninja/eWfkNmFZyTFk.zip
Contents - code, sample data (please load the .mat files).
I can't actually access the code you posted, so here are some general suggestions.
It does look like the Curve Fitting Toolbox has tools that do exactly what you are looking for; check out the bottom of this page: https://www.mathworks.com/help/curvefit/polynomial.html#bt9ykh.
It looks like, for whatever reason, your learned function for the displacement is just very small or zero everywhere. I suspect the issue is a minor typo/error somewhere in your pipeline; translating what you have to work with the fit function may well reveal it.
This really shouldn't be the issue here, but in the future, if you had much more unbalanced data, you could normalize it all before fitting (x_norm = (x - x_mu) / x_std).
Also, I don't think this is your problem either, but you can check whether your matrix is close to singular by computing the condition number with the cond() function, i.e. cond(adjust_B_matrix' * W * adjust_B_matrix). Second, if you check the documentation for lsqr, there is an option to get a diagnostic return flag; that is worth checking too.
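A short sketch of those two checks, reusing the variable names from the question:
% Diagnostic sketch, reusing adjust_B_matrix, W and Ux from the question.
A = adjust_B_matrix' * W * adjust_B_matrix;
fprintf('Condition number: %g\n', cond(A));   % very large => near-singular
[Cx, flag, relres] = lsqr(A, adjust_B_matrix' * W * Ux, 1e-7, 10000);
if flag ~= 0
    warning('lsqr did not converge: flag = %d, relres = %g', flag, relres);
end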

Oriented Bounding Box algorithm, Need some understanding/clarification of a few lines of existing (working) code

I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, used to compute an oriented bounding box. My goal was to understand intuitively how the algorithm works; however, I seek clarification on certain lines within the file that I am confused about.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points within the convex hull set so that the calculated centroid is at (0,0). Is there any particular reason why this is performed? My only guess would be that it allows straightforward rotational transforms later on in the code, as rotating about the real origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to prevent the access index from going out of bounds? My guess is that, to prevent out-of-bounds access, it rolls the index over upon reaching the end.
On line 125: y2 = - x * sit + y * cot;. This is the correct transformation, as the code behaves properly, but I am not sure why it is used and how it differs from the other rotational transforms done later and earlier (with the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly in my head.
Side note: The external function calls vectorAngle, rotateVector, createLine, and distancePointLine can all be found under the same repository, in files named after the function name (as per MATLAB standard). They are relatively uninteresting and do what you would expect aside from the fact that there is normalization of vector angles going on.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width of the polygon is tested (I did not check the west and east vertices). Actually, it seems the two results correspond most of the time.
Line 44 -> the goal of translating to the origin was to improve numerical accuracy. When a polygon is located far away from the origin, coordinates may be large and close together. Many computations involve products of coordinates. By translating the polygon around the origin, the coordinates are smaller, and the precision of the resulting products is expected to improve. Well, to be honest, I have not verified this effect directly; this is more a careful way of coding than a fix…
Lines 71 and 74 -> yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. The modulo rescales the index to the range 0 to N-1, and adding 1 ensures correct cyclic iteration.
Line 125 -> there are several transformations involved. Using the rotateVector() function, one simply computes the minimal width for a given edge. On line 125, one rotates the points (of the convex hull) to align them with the "best" direction (the one that minimizes the width). The last change of coordinates (lines 132-140) is due to the fact that the center of the oriented box differs from the centroid of the polygon; a shift is added, which is corrected by the rotation.
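For reference, line 125 is just the second row of the standard plane rotation by -theta (assuming, as the names suggest, cot = cos(theta) and sit = sin(theta); the x2 line is the presumed companion):
% Rotation by -theta: [x2; y2] = [cot sit; -sit cot] * [x; y]
x2 =  x * cot + y * sit;
y2 = -x * sit + y * cot;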
I did not really look at the code, this is an explanation of how the rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially
try every edge in turn;
for a given edge, viewed as horizontal and lying to the south, find the farthest vertices to the north, west, and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best area.
It is important to note that when you switch from an edge to the next, the N/W/E vertices can only move forward, and are readily found by looking for the next decrease of the relevant coordinate. This is why the total processing time is linear in the number of edges (the search for the initial N/W/E vertices takes 3(N-3) comparisons; then the updates take 3(N-1) + Nn + Nw + Ne comparisons, where Nn, Nw, Ne are the numbers of moves from one vertex to the next; obviously Nn + Nw + Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
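To make the edge-alignment property concrete, here is a brute-force O(N^2) sketch (not the linear-time caliper update described above), assuming pts is an M-by-2 array of points:
% Brute force: try every hull edge as the box direction.
k = convhull(pts(:,1), pts(:,2));   % hull indices, closed (first == last)
hull = pts(k, :);
bestArea = inf;
for i = 1 : size(hull, 1) - 1
    e = hull(i+1, :) - hull(i, :);                        % current hull edge
    theta = atan2(e(2), e(1));
    R = [cos(theta) sin(theta); -sin(theta) cos(theta)];  % rotation by -theta
    r = (R * hull')';                                     % edge i is now horizontal
    boxArea = (max(r(:,1)) - min(r(:,1))) * (max(r(:,2)) - min(r(:,2)));
    if boxArea < bestArea
        bestArea = boxArea;
        bestTheta = theta;   % orientation of the tightest box so far
    end
end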

Subpixel edge detection for almost vertical edges

I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution would be around 600 X 1000.
I came across a comment by Mark Ransom here, which mentions edge detection algorithms for vertical edges. I haven't come across any yet. Would one be useful in my case (since the edge isn't strictly a straight line)? It will always be a vertical edge, though. I want it to be accurate to at least 1/100th of a pixel, and I also want access to these sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino. But this does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do this:
Apply Canny / Sobel in MATLAB and get the edges of this image (note that it won't be a continuous line). Then, somehow interpolate these Sobel edges to get subpixel coordinates. Is it possible?
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'StartPoint', [150 50 10 150]) % start values for [a b w x0] (coefficients ordered alphabetically)
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x_0 = 149.6 pix for your first image.
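If you go this route, the fitted position and its confidence bounds can be read directly from the fit object (Curve Fitting Toolbox):
% Read the fitted edge position and its 95% confidence bounds.
x0 = f.x0
ci = confint(f);   % rows: lower/upper bounds for [a b w x0] (alphabetical order)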
However, I doubt you will be able to achieve a subpixel accuracy of 1/100th of pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels at 255). Because the real atan profile is clipped, the fit is biased. If you have control over the experiments, I suggest you redo them with a smaller exposure time, for instance.
There are not many points on the transition, so there is not much information on where the transition is. Typically, your resolution will be on the order of the square root of the width of the atan (or whatever shape you prefer). In your case this limits the subpixel resolution to about 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose to use this projection method, you should look for a way to correct this tilt before projecting, to increase the accuracy. This won't improve your accuracy by several orders of magnitude, though.
There is a problem with your image. At pixel level, it seems like there are four interlaced subimages (odd and even rows and columns). Look at this zoomed area close to the edge.
To avoid this artifact, I have just taken the even rows and columns of your image and computed subpixel edges. Finally, I look for the best-fitting straight line, using the function clsq, whose code is on this page:
% Load image (avoid naming the variable 'image', which shadows a built-in)
url = 'http://i.stack.imgur.com/bQsu5.png';
img = imread(url);
imageEvenEven = img(1:2:end, 1:2:end);
imshow(imageEvenEven, 'InitialMagnification', 'fit');
% subpixel detection
threshold = 25;
edges = subpixelEdges(imageEvenEven, threshold);
visEdges(edges);
% compute fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c n] = clsq(A,2);
y = [1,200];
x = -(n(2)*y+c) / n(1);
hold on;
plot(x,y,'g');
When executing this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image has been scaled by 1/2 by taking only even rows and columns, so the resulting coordinates must be scaled back.
Besides, you can try the other three subimages (imageEvenOdd, imageOddEven and imageOddOdd) and combine the four straight lines to obtain the best solution.

Determine the orientation of a triangle in Matlab

I am working on a project to automatically land a quadrotor by visual recognition of a target. I have the code to detect the target through HOG features. Now the idea is to find the triangle, which is isosceles, and measure its sides so that I can determine the orientation that way. I have tried the Hough transform, but without success.
The target is a proposed one, and it consists of an isosceles triangle inside a circle. But if you can think of a better one, please let me know.
Please, ask any questions if anything is unclear. Thank you very much
Update 1:
@McMa's idea works well when I deal only with the target as an image. This is the code:
clc; close all;
im = imread('target.bmp');
im = rgb2gray(im);
im2 = imcrop(im, [467.51 385.51 148.98 61.98]);
im2 = imcomplement(im2);
im2 = imrotate(im2, 0);
s = regionprops(im2, 'Area', 'Centroid', 'Extrema', 'Orientation');
[imH, imW] = size(im2);
if imH - s(end).Centroid(2) < imH/2
    state = 1;   % Upright
else
    state = 2;   % Upside down
end
imshow(im2); hold on
plot(s(end).Centroid(1), s(end).Centroid(2), 'b*')
if s(end).Orientation > 0
    degrees = s(end).Orientation;
else
    degrees = s(end).Orientation + 180;
end
if (0 < degrees) && (degrees < 89.99) && state == 2
    degrees = degrees + 180;
elseif (90 < degrees) && (degrees < 179) && state == 1
    degrees = degrees + 180;
end
fprintf('The orientation is %g degrees\n', degrees)
Update 2:
Now I have another problem: I need to know somehow whether the camera is seeing the whole target or only the small circle+triangle. I need this before computing the orientation.
I have tried many options. For example, I wanted to count the number of circles, so if there are 2, it is seeing the big target, and if there is 1, just the small one. But they are not well detected. Even if I play with the sensitivity, it's not going to be a robust method.
Image: https://www.dropbox.com/s/7mbpna3xfquq5n7/P0016.bmp?dl=0
Classifer: https://www.dropbox.com/s/236vm3romw56983/Cascade1Matlab.xml?dl=0
im = imread('P0016.bmp');
detector = vision.CascadeObjectDetector('Cascade1Matlab.xml');
bbox = step(detector, im);   % Detect the target.
detectedImg = insertObjectAnnotation(im, 'rectangle', bbox, 'target');   % Insert bounding boxes and return marked image.
imshow(detectedImg)
BW = rgb2gray(im);
BW = imcrop(BW, bbox(1,:) + [0 0 10 10]);
[imH, imW] = size(im);
centers = imfindcircles(im, [1 round(imH)]);
figure; hold on;
imshow(im);
plot(centers(:,1), centers(:,2), 'r*', 'LineWidth', 4)
I also tried other approaches, such as the Euler number, but with no success; I can't find anything that works properly.
I think the easiest and fastest way would be to find your target and binarize the image. Afterwards, use regionprops() and read the "Orientation" property.
If you can't use that toolbox, the function is easily implemented by calculating the covariance matrix of your region. Let me know if you need some tips on this.
Edit:
I just so happened to have some nicely vectorized functions around here ;) so if speed is a top priority, you can easily write your own regionprops() trimmed to the bare minimum, like this:
function M = ImMoment(Image, ii, jj)
    ImSize = size(Image);
    K = repmat((1:ImSize(1))', 1, ImSize(2)).^ii;   % row indices raised to ii
    J = repmat(1:ImSize(2), ImSize(1), 1).^jj;      % column indices raised to jj
    M = K.*J.*Image;
    M = sum(M(:));
end
for the image moments and
function [Matrix, Centroid, Angle] = CovMat(Image)
    Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
                ImMoment(Image,1,0)/ImMoment(Image,0,0)];
    Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
    Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
    Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
    Matrix = [Miu20, Miu11   % Covariance matrix, in case you need it for anything...
              Miu11, Miu02];
    Angle = 1/2*atand(2*Miu11/(Miu20 - Miu02));   % Your orientation
end
for the orientation and covariance matrix. More about it here.
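A hypothetical usage sketch (assuming BW is the binarized target image):
% Hypothetical usage, assuming BW is a logical image of the target.
[C, centroid, angleDeg] = CovMat(double(BW));
fprintf('Orientation: %.2f degrees\n', angleDeg);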
Image moments are very mighty; have fun!
I have a very blunt solution in mind. It may work. I have not actually tried it, since there is no image to work on. So if it fails, post the error.
Assumption: you have filtered the image and obtained a binary image that contains only the triangle, possibly with uniform noise.
Now take a 0-degree image (image1), filter it, and obtain its binary image (bw1). Then, when you are trying to land your quadrotor, take an image (image2) and convert it to binary (bw2).
Now find the correlation between these two images {corr2(bw1, bw2)}. Store this in a variable.
Rotate the image by a step angle; let the step be 5 degrees: {imrotate(bw2, 5)}.
Now again find correlation between these two images.
Do this for all angles.
The orientation would be the angle (number of rotations * 5) at which the correlation is maximum.
The term maximum signifies that you may not find the correlation to be exactly 1, as this depends heavily on how well your filtering produces a clean binary image.
I also accept that computing the correlation for all angles demands considerable computation and time; this would be really difficult to achieve in real time without high computing power. (In this case you can look into the Parallel Computing Toolbox, especially parfor.)
Hope this was useful to you. Post a comment if you face any error.
Finally good luck. Nice project.
P.S. Pad with white or black pixels, depending on your binary image, when rotating the image.
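A hedged sketch of that search loop, assuming bw1 and bw2 are binary images of the same size (imrotate with 'crop' keeps the size constant and pads with zeros, i.e. black):
% Rotation search, assuming bw1 and bw2 are same-sized binary images.
angles = 0 : 5 : 355;
scores = zeros(size(angles));
for k = 1 : numel(angles)
    rot = imrotate(bw2, angles(k), 'nearest', 'crop');   % 'crop' keeps size
    scores(k) = corr2(double(bw1), double(rot));
end
[~, best] = max(scores);
orientation = angles(best)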

Evaluate straightness of an arbitrary contour

I want to get a metric of the straightness of contours in my binary image (computed relatively fast). The image looks as follows:
The contours in the red box are the ones I would preferably like removed, since they are not straight. These are the things I have tried (I am implementing in MATLAB):
1. Collect the row and column coordinates of each contour and then take the derivative. For straight objects (such as a rectangle), the derivative will be mostly low, with a few spikes (at the corners of the rectangle).
Problem: the coordinates collected are not in order, i.e. not in the order in which the contour would be traversed if we imagine it as a path. Therefore the derivative sometimes gives absurdly high values. Also, the contour is not absolutely straight; it's the output of an edge detection algorithm, so you can imagine there might be some discontinuity (see the rectangle at the bottom: the human eye can tell it is a rectangle even though it is not absolutely straight).
2. I thought about polyfit, but again the ordering issue comes up. And since it's a rectangle, I don't know how to apply polyfit to that point set.
Also, I would like to remove contours which run nearly vertically/horizontally. Basically this is a lane detection algorithm, so lanes cannot be absolutely vertical/horizontal.
Any ideas?
You should look into the features of regionprops more. To be fair, I stole the script from this answer, but here it is:
BW = imread('lanes.png');
BW = im2bw(BW);
figure(1),
subplot(1,2,1);
imshow(BW);
cc = bwconncomp(BW);
l = labelmatrix(cc);
a_rp = regionprops(cc, 'Area', 'MajorAxisLength', 'MinorAxisLength', 'Orientation', 'PixelList', 'Eccentricity');
idx = ([a_rp.Eccentricity] > 0.99 & [a_rp.Area] > 100 & [a_rp.Orientation] < 70 & [a_rp.Orientation] > -90);
BW2 = ismember(l,find(idx));
subplot(1,2,2);
imshow(BW2);
You can mess around with the properties; 'Orientation', 'Eccentricity', and 'Area' are probably the parameters you want to tweak. I also played with the ratio of the major/minor axis lengths, but eccentricity basically captures this (eccentricity is a measure of how "circular" an ellipse is). Here's the output:
I actually saw a good video from MATLAB specifically on lane detection using regionprops; I'll try to find it and link it.
You can segment your image using bwlabel, then work separately on each labeled connected object, using find. This should help solve your ordering problem.
As for a metric, the only thing that comes to mind at the moment is to fit an ellipse and use the a/b (major axis / minor axis) ratio (basically eccentricity) as the parameter. For example, a straight line (even if not perfect) will be fitted by an ellipse with a very large major axis and a very small minor axis, so you could set a ratio threshold of >10, etc. Fitting an ellipse can be done using this FEX submission, for example.
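A minimal sketch of that ratio test using regionprops' equivalent-ellipse axes instead of the FEX tool (the >10 threshold just follows the example above):
% Ratio test via regionprops' equivalent ellipse.
cc = bwconncomp(BW);
rp = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratio = [rp.MajorAxisLength] ./ [rp.MinorAxisLength];
BW2 = ismember(labelmatrix(cc), find(ratio > 10));   % keep straight-ish contours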