Detect parallel lines using Hough transform in MATLAB

Suppose there is a binary image of a black background and white lines "plotted" on it, i.e., they aren't burnt onto the image. For example:
I need to retain only the lines that are parallel to at least one of the other lines in the picture. If not perfectly parallel, they should at least be close to parallel (perhaps a variable that controls the degree of parallelism would help with that). In other words, if I choose a particular line and it has one or more lines parallel to it, I retain it; otherwise I discard it. I need to do this for every line in the image.
I came across Hough transform but I'm having trouble understanding how to use the bins to check the orientations and determine parallel lines. Or is there a better way to go about this?
Also, since the lines aren't a part of the image and are just plotted on it, I don't have an image to feed into the Hough Transform function. Can I use the output of the plot function as input directly? This is the code I wrote to plot the white lines:
Location1 is a m-by-2 matrix that contains the coordinates to draw the lines.
figure; imshow(blackImage);
hold on;
for i = 1:size(Location1,1)-1
    h = plot([Location1(i,1) Location1(i+1,1)], [Location1(i,2) Location1(i+1,2)]);
    set(h, 'LineWidth', .1, 'Color', 'b');
end
Any help would be appreciated.

Given that
for i = 1:size(Location1,1)-1
    % one line goes from Location1(i,:) to Location1(i+1,:)
end
then
theta = atan2(diff(Location1(:,2)),diff(Location1(:,1)));
But since lines are parallel even if their theta values point in opposite directions, you want to map all angles onto half a circle:
theta = mod(theta,pi);
Now theta is in the range [0,π).
To find similar angles:
[s,i] = sort(theta);
k = find(diff(s)<0.01); % diff(s) is non-negative because s is sorted; 0.01 rad is the parallelism tolerance
i = i([k,k+1]);
theta(i) % <-- sets of similar angles
% Location1(i,:),Location1(i+1,:) <- corresponding lines
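Putting the pieces together, here is a minimal sketch of the filtering step; the tolerance tol is an assumption you can tune, and the pairwise subtraction relies on implicit expansion (R2016b+):
theta = atan2(diff(Location1(:,2)), diff(Location1(:,1)));
theta = mod(theta, pi);                % parallel lines now share one angle in [0,pi)
tol = 0.01;                            % allowed angular difference in radians (tune as needed)
D = abs(theta - theta.');              % pairwise angle differences
D = min(D, pi - D);                    % angles near 0 and pi are also (almost) parallel
D(1:numel(theta)+1:end) = Inf;         % ignore each line's difference with itself
keep = any(D < tol, 2);                % true for lines with at least one near-parallel partner
% line k (from Location1(k,:) to Location1(k+1,:)) is retained if keep(k) is true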

Related

Find the intersections of a series of curves of an image: Matlab

I have an image with a series of lines, like below:
I would like to know if there is some method for finding the intersections of all of the lines.
I was checking another post that offers a way to find intersections, but once the image is segmented I suppose it has noise or something similar... so I will start with a simple image to find each intersection.
My main idea was to solve the "system of equations" but I think for an image with many intersections would be too difficult, I do not know if there is any method to find all intersections.
I assume you don't have the line equations. I used skeletonization and filtering to detect small areas crossed by more than one line. I'm not sure it will be this simple for a noisy image, but it's worth trying:
im = im2double(rgb2gray(imread('lines.png')));
% binarize black lines
bw = im == 0;
% skeletonize the lines
sk = bwmorph(bw,'skel',inf);
% filter the skeleton with a 3x3 ones filter
A = imfilter(double(sk),ones(3));
% keep pixels whose filter response exceeds 4 - more than one line crossing the neighborhood
B = A > 4;
% get centroids of detected blobs
C = regionprops(B,'Centroid');
Cent = reshape([C.Centroid],2,[]).';
% plot
imshow(im)
hold on;
plot(Cent(:,1),Cent(:,2),'gx','LineWidth',2)

plot polar grey values in matrix without interpolating every for loop

I have a matrix with grey values between 0 and 1. For every entry in the matrix, there are polar coordinates that indicate the position of the grey value. I already have the Theta and Rho values (polar), each in a separate 512×960 matrix, and grayscale values (in a matrix called C) for every Theta and Rho combination. I have the same for X and Y, as I just use pol2cart for the transformation. The problem is that I cannot directly plot these values, as they do not yet fit into the 'bins' of the new matrix.
What I want: to put the grey values in a square matrix of size 1024×1024. I cannot do this directly, because the polar coordinates fall in between the grid of this matrix. Therefore, we now use interpolation, but this is extremely time consuming and has to be done separately for every dataset, although the transformation from the original matrices to this final one will always be the same. Therefore, I'd like to solve this matrix once (either analytically or numerically) and use a matrix multiplication or something similar to apply the manipulation efficiently in every cycle of the code.
Here is an example of what one of these transformations could look like:
The zeros in the first matrix are the grid, and the value 1 (in between the grid) is the grey value that falls in between four grid points, then I'd like to transform to the second matrix (don't mind the visual spacing between the points).
For every dataset, I have hundreds of these matrices, so I would like to make the code more efficient.
Background: I'm using TriScatteredInterp now for the interpolation. We tried scatteredInterpolant as well, but that is slower. I also posted a related question, but decided to split the two possible solutions, because the solution I ask for here is also applicable to non-MATLAB code and will probably be faster and makes for a smoother (no continuous popping up of figures) execution of the code.
Using the image processing toolbox
Images work a bit differently than the data you have. However, it's fairly straightforward to map one representation into the other.
There is only one problem I see: wrapping. Obviously, θ = 2π = 0, but MATLAB does not know that. AFAIK, there is no easy way to tell MATLAB that.
Why does this matter? Well, simply put, inter-pixel interpolation uses information from the nearest N neighbors to find intermediate colors, with N depending on the interpolation kernel. When doing this somewhere in the middle of the image there is no problem, but at the edges MATLAB has to know that the left edge equals the right edge. This is not standard image processing, and I'm not aware of any function that is capable of this.
Implementation
Now, when disregarding the wrapping problem, this is one way to do it:
function resize_polar()
%% ORIGINAL IMAGE
% ==========================================================================
% Some random greyscale data
C = double(rgb2gray(imread('stars.png')))/255;
% Your current size, and desired size
sz_x = size(C,2); new_sz_x = 1024;
sz_y = size(C,1); new_sz_y = 1024;
% Ranges for theta and rho;
% replace with your actual values
rho_start = 0; theta_start = 0;
rho_end = 10; theta_end = 2*pi;
% Generate regularly spaced grid;
theta = linspace(theta_start, theta_end, sz_x);
rho = linspace(rho_start, rho_end, sz_y);
[theta, rho] = meshgrid(theta,rho);
% Make plot of generated data
plot_polar(theta, rho, C, 'Original image');
% Resize data
[theta,rho,C] = resize_polar_data(theta, rho, C, [new_sz_y new_sz_x]);
% Make plot of generated data
plot_polar(theta, rho, C, 'Rescaled image');
end
function [theta,rho,data] = resize_polar_data(theta,rho,data, new_dims)
% Create fake RGB image cube
IMG = cat(3, theta,rho,data);
% Rescale as if theta and rho are RG color data in the RGB
% image cube
IMG = imresize(IMG, new_dims, 'nearest');
% Split up the data again
theta = IMG(:,:,1);
rho = IMG(:,:,2);
data = IMG(:,:,3);
end
function plot_polar(theta, rho, data, label)
[X,Y] = pol2cart(theta, rho);
figure('renderer', 'opengl')
clf, hold on
surf(X,Y,zeros(size(X)), data, ...
'edgecolor', 'none');
colormap gray
title(label);
end
The images used and plotted:
Le awesomely-drawn 512×960 PNG image
Now, the two look the same (couldn't really come up with a better-suited image), so you'll have to believe me that the 512×960 has indeed been rescaled to 1024×1024, with nearest-neighbor interpolation.
Here are some timings for the actual imresize() operation for some simple kernels:
nearest : 0.008511 seconds.
bilinear: 0.019651 seconds.
bicubic : 0.025390 seconds. <-- default kernel
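These numbers will differ from machine to machine; a hedged way to reproduce them yourself, assuming IMG is the theta/rho/data cube assembled inside resize_polar_data above, is with timeit:
t_nearest  = timeit(@() imresize(IMG, [1024 1024], 'nearest'));
t_bilinear = timeit(@() imresize(IMG, [1024 1024], 'bilinear'));
t_bicubic  = timeit(@() imresize(IMG, [1024 1024], 'bicubic'));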
But this depends strongly on your hardware; I believe imresize offloads a lot of work to the GPU, so if you have a crappy one, it'll be slower.
Wrapping
If the wrapping problem is really important to you, you can modify the function above to do the following:
First, rescale the image with imresize() like before.
Horizontally concatenate the second half of the grayscale data and the first half; that is, swap the two halves so that the left and right edges (0 and 2π) touch in the middle.
Rescale this intermediate image with imresize().
Extract the central vertical strip of the rescaled intermediate image.
Split that strip into two equal-width strips.
Replace the edge strips of the output image with the two strips you just created.
Now, this is kind of a brute force approach: you are re-scaling an image twice, and most of the pixels of the second image round will be discarded. If performance is a problem, you can of course apply the rescale to only the central strip of that intermediate image. But, well, that will be a bit more complicated.
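A minimal sketch of those wrapping steps, assuming theta runs along the columns of the original data C and spans the full circle, that both the old and new widths are even, and that w (the half-width of the patched edge strip) is an arbitrary choice:
C_big = imresize(C, [new_sz_y new_sz_x]);                % 1: plain rescale, as before
half  = size(C,2)/2;
C_swp = [C(:, half+1:end), C(:, 1:half)];                % 2: swap halves so theta = 0 and 2*pi meet in the middle
C_swp_big = imresize(C_swp, [new_sz_y new_sz_x]);        % 3: rescale the swapped image
w = 8;                                                   %    half-width of the edge strip to patch (assumption)
strip = C_swp_big(:, new_sz_x/2-w+1 : new_sz_x/2+w);     % 4: central vertical strip of the intermediate image
C_big(:, 1:w)         = strip(:, w+1:end);               % 5/6: right half of the strip -> left edge of the output
C_big(:, end-w+1:end) = strip(:, 1:w);                   %      left half of the strip  -> right edge of the output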

Extract Individual Line Segment Coordinates from Boundary Image - MATLAB

I have a MATLAB script that gives me the boundary lines of an image using bwboundaries().
Now, after plotting that image, I am getting the complete image, formed from various straight line segments.
I would like to get the coordinates or show the individual line segments that form the boundary.
I think the method is called digital straightness of lines, but I want to know how to apply it here in this case.
[B,L,N] = bwboundaries(z,'noholes');
for k = 1:length(B)
    boundary = B{k};
    if (k > N)
        figure, plot(boundary(:,2),boundary(:,1),'g','LineWidth',2);
    else
        figure, plot(boundary(:,2),boundary(:,1),'r','LineWidth',2);
    end
end
According to my understanding of your question, my idea is to use bwtraceboundary().
BW = imread('image.png');
imshow(BW,[]);
r = 165; % you can get the r and c for your image using "impixelinfo"
c = 43;
contour = bwtraceboundary(BW,[r c],'W',4,Inf,'counterclockwise');
hold on;
plot(contour(:,2),contour(:,1),'g','LineWidth',2);
x = contour(:,2)';
y = contour(:,1)';   % column 1 of contour holds the row (y) coordinates
Am I answering your question?
Use regionprops('desired feature') on the labelled image.
To generate a labelled image use
bwlabel(Img) (high memory usage)
or
bw=bwconncomp(Img,conn) (low memory use)
followed by
labelmatrix(bw)
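For instance, a minimal sketch of that route (BW is an assumed binary boundary image, and the requested features are just placeholders for whatever you need):
CC = bwconncomp(BW, 8);                              % low-memory connected components (8-connectivity assumed)
Lbl = labelmatrix(CC);                               % label image
stats = regionprops(Lbl, 'Centroid', 'BoundingBox'); % replace with the features you actually need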
The best way to proceed would probably be to use a Hough Transform to first get an idea of the lines present in the image. You can play around with Hough Transform to extract the end points of the lines.
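If the Image Processing Toolbox is available, a hedged sketch of that route could look like the following (the file name, the peak count, and the 'FillGap'/'MinLength' values are illustrative assumptions that need tuning):
BW = imread('boundary.png') > 0;                 % assumed binary boundary image
[H, T, R] = hough(BW);
P = houghpeaks(H, 10);                           % up to 10 strongest candidate lines
lines = houghlines(BW, T, R, P, 'FillGap', 5, 'MinLength', 10);
figure; imshow(BW); hold on;
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];     % end points of each detected segment
    plot(xy(:,1), xy(:,2), 'r', 'LineWidth', 2);
end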

Matlab Solid Circles

What we want is to draw several solid circles at random locations, with random gray scale colors, on a dark gray background. How can we do this? Also, if the circles overlap, we need them to change color in the overlapping part.
Since this is an assignment for school, we are not looking for ready-made answers, but for a guide which tools to use in MATLAB!
Here's a checklist of things I would investigate if you want to do this properly:
Figure out how to draw circles in MATLAB. Because you don't have the Image Processing Toolbox (see comments), you will probably have to make a function yourself. I'll give you some starter code:
function [xout, yout] = circle(x,y,r,rows,cols)
    [X,Y] = meshgrid(x-r:x+r, y-r:y+r);
    % keep points inside the circle centred at (x,y) that also lie inside the image
    ind = find((X-x).^2 + (Y-y).^2 <= r^2 & X >= 1 & X <= cols & Y >= 1 & Y <= rows);
    xout = X(ind);
    yout = Y(ind);
end
The above function takes an (x,y) co-ordinate and the radius of the circle. You also need to specify how many rows and columns the image has, because the function discards any co-ordinates that fall out of bounds and therefore can't be drawn. The final output is the set of co-ordinates of all pixels inside and along the boundary of the circle. These co-ordinates are already integers, so no rounding is needed, and they can be assigned directly to locations in your image. One caveat to note is that the co-ordinates assume an inverted Cartesian convention: the origin is at the top left corner, x values increase from left to right, and y values increase from top to bottom. Keep this convention in mind when drawing circles in your image.
Take a look at the rand class of functions. rand will generate random values for you and so you can use these to generate a random set of co-ordinates - each of these co-ordinates can thus serve as your centre. In addition, you can use this class of functions to help you figure out how big you want your circles and also what shade of gray you want your circles to be.
Take a look at set operations (logical AND, logical OR) etc. You can use a logical AND to find any circles that are intersecting with each other. When you find these areas, you can fill each of these areas with a different shade of gray. Again, the rand functions will also be of use here.
As such, here is a (possible) algorithm to help you do this:
Take a matrix of whatever size you want, and initialize all of the elements to dark gray. Perhaps an intensity of 32 may work.
Generate a random set of (x,y) co-ordinates, a random set of radii and a random set of intensity values for each circle.
For each pair of circles, check whether any of their co-ordinates intersect. If they do, generate a random shade of gray and fill those co-ordinates with that new shade. One possible way to do this is to draw the two circles' co-ordinates on separate temporary images and use the logical AND operator to find where the circles intersect (a minimal sketch of this appears after this list).
Now that you have your circles, you can plot them all. Take a look at how plot works with plotting matrices. That way you don't have to loop through all of the circles as it'll be inefficient.
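A minimal sketch of the drawing and overlap steps above, using the circle helper from the starter code (the image size, centres, radii and intensities are arbitrary illustrations, not the assignment's solution):
rows = 300; cols = 300;
img = (32/255) * ones(rows, cols);               % dark gray background
mask1 = false(rows, cols);  mask2 = false(rows, cols);
[x1, y1] = circle(120, 150, 50, rows, cols);     % first circle
[x2, y2] = circle(170, 160, 45, rows, cols);     % second, overlapping circle
mask1(sub2ind([rows cols], y1, x1)) = true;      % rows are y, columns are x
mask2(sub2ind([rows cols], y2, x2)) = true;
img(mask1) = 0.6;                                % shade of gray for circle 1
img(mask2) = 0.8;                                % shade of gray for circle 2
img(mask1 & mask2) = rand;                       % logical AND: recolor the overlap
figure; imshow(img);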
Good luck!
Let's get you home, shall we? This stays away from Image Processing Toolbox functions, so hopefully it will work for you too.
Code
%%// Parameters
numc = 5;
graph_size = [300 300];
max_r = 100;
r_arr = randperm(max_r/2,numc)+max_r/2
cpts = [randperm(graph_size(1)-max_r,numc)' randperm(graph_size(2)-max_r,numc)']
color1 = randperm(155,numc)+100
prev = zeros(graph_size(1),graph_size(2));
for k = 1:numc
r = r_arr(k);
curr = zeros(graph_size(1),graph_size(2));
curr(cpts(k,1):cpts(k,1)+r-1,cpts(k,2):cpts(k,2)+r-1)= color1(k)*imcircle(r);
common_blob = prev & curr;
curr = prev + curr;
curr(common_blob) = min(color1(1),color1(2))-50;
prev = curr;
end
figure,imagesc(curr), colormap gray
%// Please note that the code uses a MATLAB file-exchange tool called
%// imcircle, which is available at -
%// http://www.mathworks.com/matlabcentral/fileexchange/128-imcircle
Screenshot of a sample run
Since you said that your problem is an assignment for school, I will not tell you exactly how to do it, but rather what you should look at:
you should be familiar with how 2D arrays (matrices) work and how to plot them using image/imagesc/imshow;
you should look at the strel function;
you should look at the rand/randn functions;
these concepts should be enough for the assignment.

Hough transform in MATLAB without using hough function

I found an implementation of the Hough transform in MATLAB at Rosetta Code, but I'm having trouble understanding it. Also I would like to modify it to show the original image and the reconstructed lines (de-Houghing).
Any help in understanding it and de-Houghing is appreciated. Thanks
Why is the image flipped?
theImage = flipud(theImage);
I can't wrap my head around the norm function. What is its purpose, and can it be avoided?
EDIT: norm here is just the Euclidean length of the vector: sqrt(width^2 + height^2)
rhoLimit = norm([width height]);
Can someone provide an explanation of how/why rho, theta, and houghSpace are calculated?
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
How would I de-Hough the Hough space to recreate the lines?
Calling the function using a 10x10 image of a diagonal line created using the identity (eye) function
theImage = eye(10)
thetaSampleFrequency = 0.1
[rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
The actual function
function [rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
%Define the hough space
theImage = flipud(theImage);
[width,height] = size(theImage);
rhoLimit = norm([width height]);
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
%Find the "edge" pixels
[xIndicies,yIndicies] = find(theImage);
%Preallocate space for the accumulator array
numEdgePixels = numel(xIndicies);
accumulator = zeros(numEdgePixels,numThetas);
%Preallocate cosine and sine calculations to increase speed. In
%addition to precalculating sine and cosine we are also multiplying
%them by the proper pixel weights such that the rows will be indexed by
%the pixel number and the columns will be indexed by the thetas.
%Example: cosine(3,:) is 2*cosine(0 to pi)
% cosine(:,1) is (0 to width of image)*cosine(0)
cosine = (0:width-1)'*cos(theta); %Matrix Outerproduct
sine = (0:height-1)'*sin(theta); %Matrix Outerproduct
accumulator((1:numEdgePixels),:) = cosine(xIndicies,:) + sine(yIndicies,:);
%Scan over the thetas and bin the rhos
for i = (1:numThetas)
houghSpace(:,i) = hist(accumulator(:,i),rho);
end
pcolor(theta,rho,houghSpace);
shading flat;
title('Hough Transform');
xlabel('Theta (radians)');
ylabel('Rho (pixels)');
colormap('gray');
end
The Hough Transform is a "voting" approach where each image point casts a vote on the existence of a certain line (not a line segment) in an image. The voting is carried out in the parameter space for a line: the polar coordinate representation of normal vectors.
We discretize the parameter space and allow each image point to suggest parameters which would be compatible with a line through the point. Each of your questions can be addressed in terms of how the parameter space is treated in code. Wikipedia has a good article with worked examples that might clarify things (if you are having any conceptual troubles).
For your specific questions:
The image is flipped so that the origin sits at the bottom left corner, i.e. the y axis points upward as in standard Cartesian coordinates. As far as I can tell this step is not technically necessary. It does change the outcome somewhat due to discretization issues. The other implementations on Rosetta Code do not flip the image.
rhoLimit holds the maximum radius of an image point in polar coordinates (recall the norm of a vector is its magnitude).
rho and theta are discretizations of the polar coordinate plane according to a sampling rate. houghSpace creates a matrix with an element for each possible combination of the discrete rho/theta values.
The Hough Transform does not specify the lengths of putative lines; the peaks in the voting space just specify the polar coordinates of the normal vector of the line. You can "de-Hough" by selecting the peaks and drawing the corresponding lines, or perhaps by drawing every possible line and using the number of votes as a grayscale weight. It is not possible to re-create the original image from the Hough Transform, just the lines identified by the transform (and your thresholding scheme on the votes).
Following the example from the question produces the graph below. The placement of the grid lines and the datatip cursor can be a bit misleading (though the variable values shown in the datatip are correct). Since this is an image of the parameter space and not the image space, the sampling rate we chose determines the number of bins in each variable. At this sampling rate, the image points are compatible with more than one possible line; in other words, our lines have sub-pixel resolution, in the sense that they cannot be drawn without overlap in a 10x10 image.
Once we have chosen a peak, such as the one corresponding to the line with normal (rho,theta) = (6.858,0.9), we can draw that line in an image however we choose. Automated peak picking, that is thresholding to find the highly up-voted lines, is a problem of its own - you could ask another question about the topic on DSP or about a particular algorithm here.
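As a rough sketch of that idea, following the conventions of houghTransform above (where a vote at (rho,theta) means rho = (row-1)*cos(theta) + (col-1)*sin(theta) on the flipped image); the 50% threshold is an arbitrary choice, not a robust peak detector:
peaks = houghSpace > 0.5*max(houghSpace(:));     % crude peak picking by thresholding the votes
[rIdx, tIdx] = find(peaks);
flipped = flipud(theImage);
figure; imagesc(flipped); axis image; colormap gray; hold on;
colIdx = 1:size(flipped,2);
for n = 1:numel(rIdx)
    r = rho(rIdx(n));  t = theta(tIdx(n));
    if abs(cos(t)) > 1e-6                        % skip lines of (nearly) constant column, which cannot be written as row = f(col)
        rowIdx = (r - (colIdx-1)*sin(t))/cos(t) + 1;
        plot(colIdx, rowIdx, 'r');
    end
end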
For example methods see the code and documentation of MATLAB's houghpeaks and houghlines functions.