How to detect smooth curves in MATLAB

I am trying to detect a bent conveyor in an image. I used the following code, based on the Hough transform, to detect its edges:
%# load image, and process it
I = imread('ggp\2.jpg');
g = rgb2gray(I);
bw = edge(g,'Canny');
[H,T,R] = hough(bw);
P = houghpeaks(H,500,'threshold',ceil(0.4*max(H(:))));
% I apply houghlines on the grayscale picture, otherwise it doesn't detect
% the straight lines shown in the picture
lines = houghlines(g,T,R,P,'FillGap',5,'MinLength',50);
figure, imshow(g), hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    deltaY = xy(2,2) - xy(1,2);
    deltaX = xy(2,1) - xy(1,1);
    angle = atan2(deltaY, deltaX) * 180 / pi;
    if (angle == 0)
        plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
        % Plot beginnings and ends of lines
        plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
        plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    end
end
As shown, two straight lines successfully detect the top and bottom edges of the conveyor, but I don't know how to detect whether it is bent (in the picture it is) or how to quantify the degree of bending.
The curve approximately is drawn manually in the picture below (red color):
I found no MATLAB code or function based on the Hough transform that detects such smooth curves (e.g., 2nd-degree polynomials: y = a*x^2). Any other solution is also welcome.
Here is the original image:

Looking at your straight lines detecting the conveyor belt, I assume you can focus your processing around the region of interest (rows 750 to 950 in the image).
Proceeding from that point:
oimg = imread('http://i.stack.imgur.com/xfXUS.jpg'); %// read the image
gimg = im2double( rgb2gray( oimg( 751:950, :, : ) ) ); %// convert to gray, only the relevant part
fimg = imfilter(gimg, [ones(7,50);zeros(1,50);-ones(7,50)] ); %// find horizontal edge
Select only strong horizontal edge pixels around the center of the region
[row, col] = find(abs(fimg)>50);
sel = row>50 & row < 150 & col > 750 & col < 3250;
row=row(sel);
col=col(sel);
Fit a 2nd-degree polynomial and a line to these edge points:
[P, S, mu] = polyfit(col,row,2);
[L, lS, lmu] = polyfit(col, row, 1);
Plot the estimated curves
xx=1:4000;
figure;imshow(oimg,'border','tight');
hold on;
plot(xx,polyval(P,xx,[],mu)+750,'LineWidth',1.5,'Color','r');
plot(xx,polyval(L,xx,[],lmu)+750,':g', 'LineWidth', 1.5);
The result is
You can visually appreciate how the 2nd-degree fit P fits the boundary of the conveyor belt better. Looking at the first coefficient:
>> P(1)
ans =
1.4574
You can see that the x^2 coefficient of the curve is not negligible, making the curve distinctly not a straight line.
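If you want a numerical check in addition to the visual comparison, one option (a minimal sketch, not part of the original answer) is to compare the residual norms returned by the two polyfit calls above; the 0.8 factor is an arbitrary threshold chosen only for illustration:
% compare goodness of fit of the quadratic vs. the straight line;
% S and lS come from the two polyfit calls above
fprintf('quadratic residual norm: %.2f\n', S.normr);
fprintf('linear residual norm:    %.2f\n', lS.normr);
% if the quadratic cuts the residual substantially, the belt is likely bent
isBent = S.normr < 0.8 * lS.normr;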


Convert image from Cartesian to Polar

I want to convert an image from Cartesian to polar coordinates and use it as an OpenGL texture.
So I used MATLAB, referring to the two articles below.
Link 1
Link 2
My code is exactly the same as the answer in Link 2:
% load image
img = imread('my_image.png');
% convert pixel coordinates from cartesian to polar
[h,w,~] = size(img);
[X,Y] = meshgrid((1:w)-floor(w/2), (1:h)-floor(h/2));
[theta,rho] = cart2pol(X, Y);
Z = zeros(size(theta));
% show pixel locations (subsample to get less dense points)
XX = X(1:8:end,1:4:end);
YY = Y(1:8:end,1:4:end);
tt = theta(1:8:end,1:4:end);
rr = rho(1:8:end,1:4:end);
subplot(121), scatter(XX(:),YY(:),3,'filled'), axis ij image
subplot(122), scatter(tt(:),rr(:),3,'filled'), axis ij square tight
% show images
figure
subplot(121), imshow(img), axis on
subplot(122), warp(theta, rho, Z, img), view(2), axis square
The result was exactly what I wanted, and I was very satisfied except for one thing: the area circled in red in the picture just below. Considering that the opposite side (circled in blue) is not empty, I think this part should also be filled. Because this part is empty, there is a problem when using the result as a texture.
I wonder how I can fill this part. Thank you.
(There are small differences from Link 2's answer code, such as degrees vs. radians and the axis values, but I don't think they matter.)
Those issues you show in your question happen because your algorithm is wrong.
What you did (push):
throw a grid on the source image
transform those points
try to plot these colored points and let MATLAB do some magic to make it look like a dense picture
Do it the other way around (pull):
throw a grid on the output
transform that backwards
sample the input at those points
The distinction is called "push" (into the output) vs. "pull" (from the input). Only pull gives proper results.
Very little MATLAB code is necessary. You just need pol2cart and interp2, and a meshgrid.
With interp2 you get to choose the interpolation (linear, cubic, ...). Nearest-neighbor interpolation leaves visible artefacts.
im = im2single(imread("PQFax.jpg"));
% center of polar map, manually picked
cx = 10 + 409/2;
cy = 7 + 413/2;
% output parameters
radius = 212;
dRho = 1;
dTheta = 2*pi / (2*pi * radius);
Thetas = pi/2 - (0:dTheta:2*pi);
Rhos = (0:dRho:radius);
% polar mesh
[Theta, Rho] = meshgrid(Thetas, Rhos);
% transform...
[Xq,Yq] = pol2cart(Theta, Rho);
% translate to sit on the circle's center
Xq = Xq + cx;
Yq = Yq + cy;
% sample image at those points
Ro = interp2(im(:,:,1), Xq,Yq, "cubic");
Go = interp2(im(:,:,2), Xq,Yq, "cubic");
Bo = interp2(im(:,:,3), Xq,Yq, "cubic");
Vo = cat(3, Ro, Go, Bo);
Vo = imrotate(Vo, 180);
imshow(Vo)
The other way around (getting a "torus" from a "ribbon") is quite similar: throw a meshgrid on the torus space, subtract the center, transform from Cartesian to polar, and use those coordinates to sample from the "ribbon" image into the "torus" image.
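As a rough sketch of that reverse direction (not from the original answer), assuming the "ribbon" is a floating-point image with rho along the rows (0 at the top) and theta along the columns spanning 2*pi, i.e. the layout produced by the forward code above before the final imrotate:
% minimal sketch: map an output cartesian grid back into the "ribbon"
[H, W, ~] = size(ribbon);                              % ribbon: rows = rho, cols = theta
radius = H - 1;
[Xo, Yo] = meshgrid(-radius:radius, -radius:radius);   % cartesian grid of the torus image
[Theta, Rho] = cart2pol(Xo, Yo);                       % polar coordinates of each output pixel
Cq = mod(pi/2 - Theta, 2*pi) / (2*pi) * (W - 1) + 1;   % fractional column index from the angle
Rq = Rho + 1;                                          % fractional row index from the radius
torus = zeros(size(Xo,1), size(Xo,2), 3, 'like', ribbon);
for ch = 1:3                                           % sample each colour channel
    torus(:,:,ch) = interp2(ribbon(:,:,ch), Cq, Rq, "cubic", 0);
end
imshow(torus)                                          % orientation/flips may need adjusting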
I'm more familiar with OpenCV than with MATLAB. Perhaps MATLAB has something like OpenCV's warpPolar(), or a generic remap(). In any case, the operation is trivial to do entirely "by hand" but there are enough supporting functions to take the heavy lifting off your hands (interp2, pol2cart, meshgrid).
1.- The white arcs tell you that the polar-to-Cartesian translation used introduces significant errors.
2.- Reversing the following script answers your question.
It's a script that goes from Cartesian to polar without introducing errors or ignoring input data, which is what happens when such wide white arcs show up even though the translation looks correct.
clear all;close all;clc
format long;
A=imread('shaffen dass.jpg');
[sz1 sz2 sz3]=size(A);
szx=sz2;szy=sz1;
A1=A(:,:,1);A2=A(:,:,2);A3=A(:,:,3); % working with binary maps or grey scale images this wouldn't be necessary
figure(1);imshow(A);
hold all;
Cx=floor(szx/2);Cy=floor(szy/2);
plot(Cx,Cy,'co'); % because observe image centre not centered
Rmin=80;Rmax=400; % radius search range for imfindcircles
[centers, radii]=imfindcircles(A,[Rmin Rmax],... % outer circle
'ObjectPolarity','dark','Sensitivity',0.9);
h=viscircles(centers,radii);
hold all; % inner circle
[centers2, radii2]=imfindcircles(A,[Rmin Rmax],...
'ObjectPolarity','bright');
h=viscircles(centers2,radii2);
% L=floor(.5*(radii+radii2)); % this is NOT the length X that should have the resulting XY morphed graph
L=floor(2*pi*radii); % expected length of the morphed graph
cx=floor(.5*(centers(1)+centers2(1))); % coordinates of averaged circle centres
cy=floor(.5*(centers(2)+centers2(2)));
plot(cx,cy,'r*'); % check avg centre circle is not aligned to figure centre
plot([cx 1],[cy 1],'r-.');
t=[45:360/L:404+1-360/L]; % if step=1 we only get 360 points, but we need L points
% (with an angle step of 1/L you would be waiting over a minute for the for loop to finish)
R=radii+5;x=R*sind(t)+cx;y=R*cosd(t)+cy; % build outer perimeter
hL1=plot(x,y,'m'); % axis equal;grid on;
% hold all;
% plot(hL1.XData,hL1.YData,'ro');
x_ref=hL1.XData;y_ref=hL1.YData;
% Sx=zeros(ceil(R),1);Sy=zeros(ceil(R),1);
Sx={};Sy={};
for k=1:1:numel(hL1.XData)
    Lx=floor(linspace(x_ref(k),cx,ceil(R)));
    Ly=floor(linspace(y_ref(k),cy,ceil(R)));
    % plot(Lx,Ly,'go'); % check
    % plot([cx x(k)],[cy y(k)],'r');
    % L1=unique([Lx;Ly]','rows');
    Sx=[Sx Lx'];Sy=[Sy Ly'];
end
sx=cell2mat(Sx);sy=cell2mat(Sy);
[s1 s2]=size(sx);
B1=uint8(zeros(s1,s2));
B2=uint8(zeros(s1,s2));
B3=uint8(zeros(s1,s2));
for n=1:1:s2
    for k=1:1:s1
        B1(k,n)=A1(sx(k,n),sy(k,n));
        B2(k,n)=A2(sx(k,n),sy(k,n));
        B3(k,n)=A3(sx(k,n),sy(k,n));
    end
end
C=uint8(zeros(s1,s2,3));
C(:,:,1)=B1;
C(:,:,2)=B2;
C(:,:,3)=B3;
figure(2);imshow(C);
The resulting image:
3.- Let me know if you'd like some assistance writing the polar-to-Cartesian version of this script.
Regards
John BG

How to identify letters on a license plate with varying perspectives

I am making a script in Matlab that takes in an image of the rear of a car. After some image processing I would like to output the original image of the car with a rectangle around the license plate of the car. Here is what I have written so far:
origImg = imread('CAR_IMAGE.jpg');
I = imresize(origImg, [500, NaN]); % easier viewing and edge connecting
G = rgb2gray(I);
M = imgaussfilt(G); % blur to remove some noise
E = edge(M, 'Canny', 0.4);
% I can assume all letters are somewhat upright
RP = regionprops(E, 'PixelIdxList', 'BoundingBox');
W = vertcat(RP.BoundingBox); W = W(:,3); % get the widths of the BBs
H = vertcat(RP.BoundingBox); H = H(:,4); % get the heights of the BBs
FATTIES = W > H; % find the BBs that are more wide than tall
RP = RP(FATTIES);
E(vertcat(RP.PixelIdxList)) = false; % remove more wide than tall regions
D = imdilate(E, strel('disk', 1)); % dilate for easier viewing
figure();
imshowpair(I, D, 'montage'); % display original image and processed image
Here are some examples:
From here I am unsure how to isolate the letters of the license plate, particularly in cases like the second example above, where each letter has a decreased area due to the perspective of the image. My first idea was to get the bounding box of all regions and keep only the regions where the perimeter-to-area ratio is "similar", but this resulted in removing letters of the plate that were connected after dilating the image, like the K and V in the fourth example above.
I would appreciate some suggestions on how I should go about isolating these letters. No code is necessary, and any advice is appreciated.
So I continued to work despite not receiving any answers here on SO, and managed to get a working version through trial and error. All of the following code comes after the code in my original question, and all plots below are from the first example image above. First, I computed the variance of every pixel row of the image and plotted them like so:
V = var(D, 0, 2);
X = 1:length(V);
figure();
hold on;
scatter(X, V);
I then fit a very high-order polynomial to this scatter plot and saved the x-values where the slope of the polynomial was near zero and the variance value was very low (i.e., the dark rows of pixels immediately before or after a row with some white):
P = polyfit(X', V, 25);
PV = polyval(P, X);
Z = X(find(PV < 0.03 & abs(gradient(PV)) < 0.0001));
plot(X, PV); % red curve on plot
scatter(Z, zeros(1,length(Z))); % orange circles on x-axis
I then calculate the integral of the polynomial between each pair of consecutive Z values (my dark rows) and save the two Z values between which the integral is the largest, which I mark with lines on the plot:
MAX_INTEG = -1;
MIN_ROW = -1;
MAX_ROW = -1;
Q = polyint(P);
for i = 1:(length(Z)-1)
    TEMP_MIN = Z(i);
    TEMP_MAX = Z(i+1);
    TEMP_INTEG = diff(polyval(Q, [TEMP_MIN, TEMP_MAX]));
    if (TEMP_INTEG > MAX_INTEG)
        MAX_INTEG = TEMP_INTEG;
        MIN_ROW = TEMP_MIN;
        MAX_ROW = TEMP_MAX;
    end
end
line([MIN_ROW, MIN_ROW], [-0.1, max(V)+0.1]);
line([MAX_ROW, MAX_ROW], [-0.1, max(V)+0.1]);
hold off;
Since the X-values of these lines correspond to row numbers in the original image, I can crop my image between MIN_ROW and MAX_ROW:
I repeat the above steps for the columns of pixels, crop, and remove any excess black rows or columns, which results in the identified plate:
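For reference, a minimal sketch of that column-wise repetition (reusing D, MIN_ROW and MAX_ROW from above; the thresholds are the same ones I used for the rows and may need re-tuning):
DC = double(D(MIN_ROW:MAX_ROW, :));                  % keep only the cropped rows
VC = var(DC, 0, 1);                                  % variance of each pixel column
XC = 1:length(VC);
PC = polyfit(XC', VC', 25);                          % same high-order fit as for the rows
PVC = polyval(PC, XC);
ZC = XC(PVC < 0.03 & abs(gradient(PVC)) < 0.0001);   % candidate dark columns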
I then perform 2D cross correlation between this cropped image and the edged image D using Matlab's xcorr2 to locate the plate in the original image. After finding the location I just draw a rectangle around the discovered plate like so:
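A minimal sketch of that final localization step (the name plate is illustrative and stands for the cropped plate image; D and I come from the code in the question):
C = xcorr2(double(D), double(plate));                % 2D cross-correlation
[~, idx] = max(C(:));                                % strongest correlation peak
[rPeak, cPeak] = ind2sub(size(C), idx);
rTop = rPeak - size(plate, 1) + 1;                   % top-left corner of the match
cLeft = cPeak - size(plate, 2) + 1;
figure;
imshow(I);
hold on;
rectangle('Position', [cLeft, rTop, size(plate, 2), size(plate, 1)], ...
    'EdgeColor', 'g', 'LineWidth', 2);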

Count circle objects in an image using matlab

How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[gxy, gyy] = gradient(gy); % take the second derivatives
k = 0.04; %0.04-0.15 (see wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris-style corner/blob response (strong second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % select the tops of the blobs (negative second derivatives, i.e. local intensity maxima)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try adjusting the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.

How to draw a straight line across the centroid points of a barcode using best-fit points in MATLAB

This is the processed image, and I can't increase the bwareaopen() threshold as it won't work for my other images.
Anyway, I'm trying to pick out the shortest regions (the barcode bars) from the centre points, so that I can get a straight line across the centre points of the barcode.
Example:
After running a centroid command, the points in the barcode are near to each other. Therefore, I just want to keep the shortest regions (which make up the barcode) and draw a straight line across them. All the points need not be joined; a best-fit line will do.
Step 1
Step 2
Step 3
If you don't have the x,y elements Andrey uses, you can find them by segmenting the image and using a naive threshold value on the area to avoid including the number below the bar code.
I've hacked out a solution in MATLAB doing the following:
Loading the image and making it binary
Extracting all connected components using bwlabel().
Getting useful information about each of them via regionprops() [.Centroid will be a good approximation to the middle point of the lines].
Thresholding out small regions (noise and numbers)
Extracting the x,y coordinates
Using Andrey's linear fit solution
Code:
set(0,'DefaultFigureWindowStyle','docked');
close all;clear all;clc;
Im = imread('29ekeap.jpg');
Im=rgb2gray(Im);
%%
%Make binary
temp = zeros(size(Im));
temp(Im > mean(Im(:)))=1;
Im = temp;
%Visualize
f1 = figure(1);
imagesc(Im);colormap(gray);
%Find connected components
LabelIm = bwlabel(Im);
RegionInfo = regionprops(LabelIm);
%Remove background region
RegionInfo(1) = [];
%Get average area of regions
AvgArea = mean([RegionInfo(1:end).Area]);
%Vector to keep track of likely "bar elements"
Bar = zeros(length(RegionInfo),1);
%Iterate over regions, plot centroids if area is big enough
for i=1:length(RegionInfo)
    if RegionInfo(i).Area > AvgArea
        hold on;
        plot(RegionInfo(i).Centroid(1),RegionInfo(i).Centroid(2),'r*')
        Bar(i) = 1;
    end
end
%Extract x,y points for interpolation
X = [RegionInfo(Bar==1).Centroid];
X = reshape(X,2,length(X)/2);
x = X(1,:);
y = X(2,:);
%Plot line according to Andrey
p = polyfit(x,y,1);
xMin = min(x(:));
xMax = max(x(:));
xRange = xMin:0.01:xMax;
yRange = p(1).*xRange + p(2);
plot(xRange,yRange,'LineWidth',2,'Color',[0.9 0.2 0.2]);
The result is a pretty good fitted line. You should be able to extend it to the ends by using the polynomial p and evaluating it until you no longer encounter any 1's, if needed.
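For instance, a minimal sketch of evaluating the fit over the full image width (using p and Im from the code above):
xFull = 1:size(Im, 2);                 % every column of the image
yFull = polyval(p, xFull);             % evaluate the fitted line across the full width
plot(xFull, yFull, '--', 'LineWidth', 1.5);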
Result:
If you have already found the x,y coordinates of the centers, you should use the polyfit function:
You will then get the polynomial coefficients of the best-fit line. In order to draw a segment, you can take the minimal and maximal x:
p = polyfit(x,y,1);
xMin = min(x(:));
xMax = max(x(:));
xRange = xMin:0.01:xMax;
yRange = p(1).*xRange + p(2);
plot(xRange,yRange);
If your ultimate goal is to generate a line perpendicular to the bars in the bar code and passing roughly through the centroids of the bars, then I have another option for you to consider...
A simple solution would be to perform a Hough transform to detect the primary orientation of lines in the bar code. Once you find the angle of the lines in the bar code, all you have to do is rotate that by 90 degrees to get the slope of a perpendicular line. The centroid of the entire bar code can then be used as an intercept for this line. Using the functions HOUGH and HOUGHPEAKS from the Image Processing Toolbox, here's the code starting with a cropped version of your image from step 1:
img = imread('bar_code.jpg'); %# Load the image
img = im2bw(img); %# Convert from RGB to BW
[H, theta, rho] = hough(img); %# Perform the Hough transform
peak = houghpeaks(H); %# Find the peak pt in the Hough transform
barAngle = theta(peak(2)); %# Find the angle of the bars
slope = -tan(pi*(barAngle + 90)/180); %# Compute the perpendicular line slope
[y, x] = find(img); %# Find the coordinates of all the white image points
xMean = mean(x); %# Find the x centroid of the bar code
yMean = mean(y); %# Find the y centroid of the bar code
xLine = 1:size(img,2); %# X points of perpendicular line
yLine = slope.*(xLine - xMean) + yMean; %# Y points of perpendicular line
imshow(img); %# Plot bar code image
hold on; %# Add to the plot
plot(xMean, yMean, 'r*'); %# Plot the bar code centroid
plot(xLine, yLine, 'r'); %# Plot the perpendicular line
And here's the resulting image:

Find the corners of a polygon represented by a region mask

BW = poly2mask(x, y, m, n) computes a binary region of interest (ROI) mask, BW, from an ROI polygon represented by the vectors x and y. The size of BW is m-by-n. poly2mask sets pixels in BW that are inside the polygon (X,Y) to 1 and sets pixels outside the polygon to 0.
Problem:
Given such a binary mask BW of a convex quadrilateral, what would be the most efficient way to determine the four corners?
E.g.,
Best Solution so far:
Use edge to find the bounding lines, then the Hough transform to find the 4 lines in the edge image, and then find the intersection points of those 4 lines; or use a corner detector on the edge image. This seems complicated, and I can't help feeling there's a simpler solution out there.
Btw, convhull doesn't always return 4 points (maybe someone can suggest qhull options to prevent that): it returns a few points along the edges as well.
EDIT:
Amro's answer seems quite elegant and efficient. But there could be multiple "corners" at each real corner since the peaks aren't unique. I could cluster them based on θ and average the "corners" around a real corner but the main problem is the use of order(1:10).
Is 10 enough to account for all the corners or will this exclude a "corner" at a real corner?
This is somewhat similar to what @AndyL suggested. However, I'm using the boundary signature in polar coordinates instead of the tangent.
Note that I start by extracting the edges, getting the boundary, then converting it to a signature. Finally, we find the points on the boundary that are furthest from the centroid; those points constitute the corners found. (Alternatively, we can also detect peaks in the signature for corners.)
The following is a complete implementation:
I = imread('oxyjj.png');
if ndims(I)==3
I = rgb2gray(I);
end
subplot(221), imshow(I), title('org')
%%# Process Image
%# edge detection
BW = edge(I, 'sobel');
subplot(222), imshow(BW), title('edge')
%# dilation-erosion
se = strel('disk', 2);
BW = imdilate(BW,se);
BW = imerode(BW,se);
subplot(223), imshow(BW), title('dilation-erosion')
%# fill holes
BW = imfill(BW, 'holes');
subplot(224), imshow(BW), title('fill')
%# get boundary
B = bwboundaries(BW, 8, 'noholes');
B = B{1};
%%# boundary signature
%# convert boundary from cartesian to polar coordinates
objB = bsxfun(@minus, B, mean(B));
[theta, rho] = cart2pol(objB(:,2), objB(:,1));
%# find corners
%#corners = find( diff(diff(rho)>0) < 0 ); %# find peaks
[~,order] = sort(rho, 'descend');
corners = order(1:10);
%# plot boundary signature + corners
figure, plot(theta, rho, '.'), hold on
plot(theta(corners), rho(corners), 'ro'), hold off
xlim([-pi pi]), title('Boundary Signature'), xlabel('\theta'), ylabel('\rho')
%# plot image + corners
figure, imshow(BW), hold on
plot(B(corners,2), B(corners,1), 's', 'MarkerSize',10, 'MarkerFaceColor','r')
hold off, title('Corners')
EDIT:
In response to Jacob's comment, I should explain that I first tried to find the peaks in the signature using first/second derivatives, but ended up taking the furthest N points. 10 was just an ad-hoc value, and would be difficult to generalize (I tried taking 4, the same as the number of corners, but it didn't cover all of them). I think the idea of clustering them to remove duplicates is worth looking into.
As far as I see it, the problem with the first approach was that if you plot rho without taking θ into account, you will get a different shape (not the same peaks), since the speed at which we trace the boundary varies and depends on the curvature. If we could figure out how to normalize that effect, we could get more accurate results using derivatives.
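As a rough illustration of that normalization idea (untested on this image, and it assumes the shape is convex so θ is monotonic along the boundary), one could resample ρ on a uniform θ grid before looking for peaks:
%# sketch: remove the "tracing speed" effect by resampling the signature
[thetaU, ia] = unique(theta);                        %# interp1 needs strictly increasing points
rhoU = rho(ia);
thetaGrid = linspace(-pi, pi, 720);                  %# uniform angular grid
rhoGrid = interp1(thetaU, rhoU, thetaGrid, 'linear', 'extrap');
cornerIdx = find(diff(diff(rhoGrid) > 0) < 0) + 1;   %# peaks of the resampled signature
cornerTheta = thetaGrid(cornerIdx);                  %# approximate corner angles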
If you have the Image Processing Toolbox, there is a function called cornermetric which can implement a Harris corner detector or Shi and Tomasi's minimum eigenvalue method. This function has been present since version 6.2 of the Image Processing Toolbox (MATLAB version R2008b).
Using this function, I came up with a slightly different approach from the other answers. The solution below is based on the idea that a circular area centered at each "true" corner point will overlap the polygon by a smaller amount than a circular area centered over an erroneous corner point that is actually on the edge. This solution can also handle cases where multiple points are detected at the same corner...
The first step is to load the data:
rawImage = imread('oxyjj.png');
rawImage = rgb2gray(rawImage(7:473, 9:688, :)); % Remove the gray border
subplot(2, 2, 1);
imshow(rawImage);
title('Raw image');
Next, compute the corner metric using cornermetric. Note that I am masking the corner metric by the original polygon, so that we are looking for corner points that are inside the polygon (i.e. trying to find the corner pixels of the polygon). imregionalmax is then used to find the local maxima. Since you can have clusters of greater than 1 pixel with the same corner metric, I then add noise to the maxima and recompute so that I only get 1 pixel in each maximal region. Each maximal region is then labeled using bwlabel:
cornerImage = cornermetric(rawImage).*(rawImage > 0);
maxImage = imregionalmax(cornerImage);
noise = rand(nnz(maxImage), 1);
cornerImage(maxImage) = cornerImage(maxImage)+noise;
maxImage = imregionalmax(cornerImage);
labeledImage = bwlabel(maxImage);
The labeled regions are then dilated (using imdilate) with a disk-shaped structuring element (created using strel):
diskSize = 5;
dilatedImage = imdilate(labeledImage, strel('disk', diskSize));
subplot(2, 2, 2);
imshow(dilatedImage);
title('Dilated corner points');
Now that the labeled corner regions have been dilated, they will partially overlap the original polygon. Regions on an edge of the polygon will have about 50% overlap, while regions that are on a corner will have about 25% overlap. The function regionprops can be used to find the areas of overlap for each labeled region, and the 4 regions that have the least amount of overlap can thus be considered as the true corners:
maskImage = dilatedImage.*(rawImage > 0); % Overlap with the polygon
stats = regionprops(maskImage, 'Area'); % Compute the areas
[sortedValues, index] = sort([stats.Area]); % Sort in ascending order
cornerLabels = index(1:4); % The 4 smallest region labels
maskImage = ismember(maskImage, cornerLabels); % Mask of the 4 smallest regions
subplot(2, 2, 3);
imshow(maskImage);
title('Regions of minimal overlap');
And we can now get the pixel coordinates of the corners using find and ismember:
[r, c] = find(ismember(labeledImage, cornerLabels));
subplot(2, 2, 4);
imshow(rawImage);
hold on;
plot(c, r, 'r+', 'MarkerSize', 16, 'LineWidth', 2);
title('Corner points');
And here's a test with a diamond shaped region:
I like to solve this problem by working with a boundary, because it reduces this from a 2D problem to a 1D problem.
Use bwtraceboundary() from the Image Processing Toolbox to extract a list of points on the boundary. Then convert the boundary into a series of tangent vectors (there are a number of ways to do this; one way would be to subtract the i-th point along the boundary from the (i+delta)-th point). Once you have a list of vectors, take the dot product of adjacent vectors. The four points with the smallest dot products are your corners!
If you want your algorithm to work on polygons with an arbitrary number of vertices, then simply search for dot products that are a certain number of standard deviations below the median dot product.
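For what it's worth, here is a rough MATLAB sketch of that idea (using bwboundaries instead of bwtraceboundary to avoid picking a start point; delta is a tuning knob, and near-duplicate picks around the same corner may still need suppressing, as discussed in the question):
% sketch of the tangent / dot-product idea; BW is the binary quadrilateral mask
B = bwboundaries(BW, 8, 'noholes');
B = B{1}(1:end-1, :);                               % boundary points; drop the repeated last point
delta = 10;                                         % spacing used to form tangent vectors
T = circshift(B, -delta, 1) - B;                    % tangent vectors along the boundary
T = bsxfun(@rdivide, T, sqrt(sum(T.^2, 2)));        % normalize to unit length
d = sum(T .* circshift(T, -delta, 1), 2);           % dot products of neighbouring tangents
[~, order] = sort(d);                               % smallest dot product = sharpest turn
corners = B(order(1:4), :);
imshow(BW); hold on
plot(corners(:,2), corners(:,1), 'r*', 'MarkerSize', 12)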
I decided to use a Harris corner detector (here's a more formal description) to obtain the corners. This can be implemented as follows:
%% Constants
Window = 3;
Sigma = 2;
K = 0.05;
nCorners = 4;
%% Derivative masks
dx = [-1 0 1; -1 0 1; -1 0 1];
dy = dx';
%% Find the image gradient
% Mask is the binary image of the quadrilateral
Ix = conv2(double(Mask),dx,'same');
Iy = conv2(double(Mask),dy,'same');
%% Use a gaussian windowing function and compute the rest
Gaussian = fspecial('gaussian',Window,Sigma);
Ix2 = conv2(Ix.^2, Gaussian, 'same');
Iy2 = conv2(Iy.^2, Gaussian, 'same');
Ixy = conv2(Ix.*Iy, Gaussian, 'same');
%% Find the corners
CornerStrength = (Ix2.*Iy2 - Ixy.^2) - K*(Ix2 + Iy2).^2;
[val ind] = sort(CornerStrength(:),'descend');
[Ci Cj] = ind2sub(size(CornerStrength),ind(1:nCorners));
%% Display
imshow(Mask,[]);
hold on;
plot(Cj,Ci,'r*');
Here, the problem with multiple corners is avoided thanks to the Gaussian windowing function, which smooths the intensity change. Below is a zoomed version of a corner with the hot colormap.
Here's an example using Ruby and HornetsEye. Basically the program creates a histogram of the quantised Sobel gradient orientation to find dominant orientations. If four dominant orientations are found, lines are fitted and the intersections between neighbouring lines are assumed to be the corners of the projected rectangle.
#!/usr/bin/env ruby
require 'hornetseye'
include Hornetseye
Q = 36
img = MultiArray.load_ubyte 'http://imgur.com/oxyjj.png'
dx, dy = 8, 6
box = [ dx ... 688, dy ... 473 ]
crop = img[ *box ]
crop.show
s0, s1 = crop.sobel( 0 ), crop.sobel( 1 )
mag = Math.sqrt s0 ** 2 + s1 ** 2
mag.normalise.show
arg = Math.atan2 s1, s0
msk = mag >= 500
arg_q = ( ( arg.mask( msk ) / Math::PI + 1 ) * Q / 2 ).to_int % Q
hist = arg_q.hist_weighted Q, mag.mask( msk )
segments = ( hist >= hist.max / 4 ).components
lines = arg_q.map segments
lines.unmask( msk ).normalise.show
if segments.max == 4
pos = MultiArray.scomplex *crop.shape
pos.real = MultiArray.int( *crop.shape ).indgen! % crop.shape[0]
pos.imag = MultiArray.int( *crop.shape ).indgen! / crop.shape[0]
weights = lines.hist( 5 ).major 1.0
centre = lines.hist_weighted( 5, pos.mask( msk ) ) / weights
vector = pos.mask( msk ) - lines.map( centre )
orientation = lines.hist_weighted( 5, vector ** 2 ) ** 0.5
corner = Sequence[ *( 0 ... 4 ).collect do |i|
i1, i2 = i + 1, ( i + 1 ) % 4 + 1
l1, a1, l2, a2 = centre[i1], orientation[i1], centre[i2], orientation[i2]
( l1 * a1.conj * a2 - l2 * a1 * a2.conj -
l1.conj * a1 * a2 + l2.conj * a1 * a2 ) /
( a1.conj * a2 - a1 * a2.conj )
end ]
result = MultiArray.ubytergb( *img.shape ).fill! 128
result[ *box ] = crop
corner.to_a.each do |c|
result[ c.real.to_i + dx - 1 .. c.real.to_i + dx + 1,
c.imag.to_i + dy - 1 .. c.imag.to_i + dy + 1 ] = RGB 255, 0, 0
end
result.show
end