Selecting a line using the Hough transform - MATLAB

Problem: Find an unwanted line in an image using the Hough transform.
I have done the following:
Apply a directional filter to analyze 12 different directions, each rotated 15° with respect to the next.
Apply thresholding to obtain 12 binary images.
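A rough sketch of how these two steps might look (the fspecial('motion') kernels and the file name 'input.png' are placeholder assumptions, not the actual filters used):
% Hypothetical sketch of the directional filtering and thresholding steps.
gray = im2double(imread('input.png')); % placeholder input image
binary_images = cell(1, 12);
for k = 1:12
    ang = (k - 1) * 15;                          % 0, 15, ..., 165 degrees
    kernel = fspecial('motion', 15, ang);        % line-like kernel oriented at 'ang'
    response = imfilter(gray, kernel, 'replicate');
    binary_images{k} = im2bw(response, graythresh(response)); % one binary image per direction
end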
Now, I need to select either of the two images marked in yellow, because the lines in those two images are the most prominent.
I have tried the following code. It doesn't seem to be working.
MATLAB Code
% Read 12 images into workspace.
input_images = {imread('1.png'),imread('2.png'),imread('3.png'),...
    imread('4.png'),imread('5.png'),imread('6.png'),...
    imread('7.png'),imread('8.png'),imread('9.png'),...
    imread('10.png'),imread('11.png'),imread('12.png')};
longest_line = struct('point1',[0 0], 'point2',[0 0], 'theta', 0, 'rho', 0);
for n = 1:12
    % Create a binary image.
    binary_image = edge(input_images{n},'canny');
    % Create the Hough transform using the binary image.
    [H,T,R] = hough(binary_image);
    % Find peaks in the Hough transform of the image.
    P = houghpeaks(H,3,'threshold',ceil(0.3*max(H(:))));
    % Find lines.
    hough_lines = houghlines(binary_image,T,R,P,'FillGap',5,'MinLength',7);
    longest_line = FindTheLongestLine(hough_lines, longest_line);
end
% Highlight the longest line segment by coloring it cyan.
plot(longest_line.point1, longest_line.point2,'LineWidth',2,'Color','cyan');
Relevant Source Code
function longest_line = FindTheLongestLine( hough_lines , old_longest_line)
%FINDTHELONGESTLINE Return the longest line among hough_lines and old_longest_line.
    longest_line = struct('point1',[0 0] ,'point2',[0 0],'theta', 0, 'rho', 0);
    max_len = 0;
    N = length(hough_lines);
    for i = 1:N
        % Determine the endpoints of the longest line segment
        len = LenthOfLine(hough_lines(i));
        if ( len > max_len)
            max_len = len;
            longest_line = hough_lines(i);
        end
    end
    old_len = LenthOfLine(old_longest_line);
    new_len = LenthOfLine(longest_line);
    if(old_len > new_len)
        longest_line = old_longest_line;
    end
end

function length = LenthOfLine( linex )
%LENTHOFLINE Return the Euclidean length of a line segment.
    length = norm(linex.point1 - linex.point2);
end
Test Images
Here are the 12 images, drive.google.com/open?id=0B-2FDw63ZNTnRnEzYlNyS0V4YVE

The problem with your code is the FillGap property of houghlines. You should allow larger gaps in the returned lines, because the line you are searching for does not need to be continuous, e.g. 500:
hough_lines = houghlines(binary_image,T,R,P,'FillGap',500,'MinLength',7);
This finds the largest line in image 7 as desired.
Visualisation
To plot the found line on top of the image, you can use the following code:
figure
imshow(input_images{7});
hold on
% Highlight the longest line segment by coloring it cyan.
plot([longest_line.point1(1) longest_line.point2(1)], [longest_line.point1(2) longest_line.point2(2)],'LineWidth',2,'Color','cyan');
Finding maximum peak in Hough Transform
As an alternative, you may consider selecting the line corresponding to the largest Hough transform value instead of the longest line. This can be done by selecting longest_line as follows:
longest_line = ...
largest_H = 0; % init value
for n = 1:12
    binary_image = edge(input_images{n},'canny');
    [H,T,R] = hough(binary_image);
    P = houghpeaks(H,1,'threshold',ceil(0.3*max(H(:))));
    hough_lines = houghlines(binary_image,T,R,P,'FillGap',500,'MinLength',7);
    if (largest_H < H(P(1, 1), P(1, 2)))
        largest_H = H(P(1, 1), P(1, 2));
        longest_line = hough_lines(1);
        longest_line.image = n;
    end
end
This selects the following line in image 6, which is the other allowed outcome:
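To show the selected line on top of its source image, the visualisation code from above can be reused with the stored image index (a sketch, relying on the longest_line.image field set in the loop):
figure
imshow(input_images{longest_line.image});
hold on
% Overlay the selected line on the image that produced the strongest Hough peak.
plot([longest_line.point1(1) longest_line.point2(1)], ...
    [longest_line.point1(2) longest_line.point2(2)], 'LineWidth', 2, 'Color', 'cyan');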

You can try changing the Hough functions' parameters according to your specific problem. It's not a perfect solution, but it may be good enough for you:
img = im2double(rgb2gray(imread('line.jpg')));
% edge image
BW = edge(img,'canny');
% relevant angles (degrees) interval for the line you want
thetaInterval = -80:-70;
% run hough transform and take single peak
[H,T,R] = hough(BW,'Theta',thetaInterval);
npeaks = 1;
P = houghpeaks(H,npeaks);
% generate lines
minLen = 150; % you want the long line which is ~250 pixels long
% merge smaller lines (same direction) within gaps of 30 pixels
fillGap = 30;
lines = houghlines(BW,T,R,P,'FillGap',fillGap,'MinLength',minLen );
% plot
imshow(img);
hold on
xy = [lines.point1; lines.point2];
plot(xy(:,1),xy(:,2),'g','LineWidth',2);
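Note that the concatenation above assumes a single returned segment (npeaks = 1). If you later raise npeaks and houghlines returns several segments, a small loop like this sketch plots each one separately:
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'g', 'LineWidth', 2); % one plot call per detected segment
end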

Related

Select pixel position for image segmentation based on threshold value

I am writing a function to segment an image in MATLAB. I want to do a simple segmentation where I first sum up the pixel intensities along the columns, select the column positions where the sum is greater than a threshold, and display only those pixels from the original image. I have written the function below, which can sum the pixel intensities; I need help selecting the pixel positions where the intensity is greater than the threshold and displaying only that part of the image.
function S = image_segment(im)
% im is the image to segment
nrofsegments = 5; % there are 5 fragments for this image
S = cell(1,nrofsegments);
m = size(im,1);
n = size(im,2);
for kk = 1:nrofsegments
    S_sum = sum(im); % sum the intensity along each column
    S_position = ... % get pixel position where the pixel intensity is greater than 128
    S{kk} = ... % im(S_position), only the part of the image with this pixel position
end
This code should work for your image. Please find the explanations inline.
% Read your test image from the web
im = imread('https://i.stack.imgur.com/qg5gx.jpg');
% Get sum of intensities along the columns
colSum = sum(im);
% Set a proper threshold (128 was not working for me)
thresh = 300;
% Get an 1-D logical array indicating which columns have letters
hasLetter = colSum > thresh;
% Get the difference of consecutive values in hasLetter
hasLetterDiff = diff( hasLetter ) ;
% If this is greater than 0, we have the start of a letter.
% The find gets the indices where this is true.
% Then, we add one to compensate for the offset introduced by diff.
letterStartPos = find( hasLetterDiff > 0 ) + 1;
% Use a similar approach to find the letter end position
% Here, we search for differences smaller than 0.
letterEndPos = find( hasLetterDiff < 0 ) + 1;
% Cut out image from start and end positions
numSegments = 5;
S = cell(1, numSegments );
for k = 1 : numel(S)
    S{k} = im( :, letterStartPos(k) : letterEndPos(k));
end
Furthermore, you should consider adding some code to make this fail-safe for cases where there are fewer than 5 segments. I would also recommend some low-pass or median filtering on colSum.
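For instance, smoothing the column-sum profile before thresholding could be sketched like this (movmedian is available in base MATLAB from R2016a on, and the window size of 5 is just a guess):
% Median-filter the column sums to suppress isolated noisy columns.
colSumSmooth = movmedian(double(colSum), 5);
hasLetter = colSumSmooth > thresh;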
Alternative: Instead of deriving the start and stop positions, it would also be possible to use splitapply (introduced in R2015b) directly on hasLetterDiff as follows:
S = splitapply( @(X) { im(:,X) }, 1 : numel( hasLetter), [1 cumsum( abs( hasLetterDiff))] + 1);
S = S(2:2:end);

Why is my bilinear interpolation vastly different from the built-in MATLAB function?

I've been working on bilinear interpolation based on the Wikipedia example in MATLAB. I followed the example to a T, but when comparing the outputs from my function and the built-in MATLAB function, the results are vastly different and I can't figure out why or how that happens.
Using the built-in MATLAB function:
Result of my function below:
function T = bilinear(X,h,w)
    %pre-allocating the output size
    T = uint8(zeros(h,w));
    %padding the original image with 0 so i don't go out of bounds
    X = padarray(X,[2,2],'both');
    %calculating dimension ratios
    hr = h/size(X,1);
    wr = w/size(X,2);
    for row = 3:h-3
        for col = 3:w-3
            %for calculating equivalent position on the original image
            o_row = ceil(row/hr);
            o_col = ceil(col/wr);
            %getting the intensity values from horizontal neighbors
            Q12=X(o_row+1,o_col-1);
            Q22=X(o_row+1,o_col+1);
            Q11=X(o_row-1,o_col-1);
            Q21=X(o_row-1,o_col+1);
            %calculating the relative positions to the enlarged image
            y2=round((o_row-1)*hr);
            y=round(o_row*hr);
            y1=round((o_row+1)*hr);
            x1=round((o_col-1)*wr);
            x=round(o_col*wr);
            x2=round((o_col+1)*wr);
            %interpolating on 2 first axis and the result between them
            R1=((x2-x)/(x2-x1))*Q11+((x-x1)/(x2-x1))*Q21;
            R2=((x2-x)/(x2-x1))*Q12+((x-x1)/(x2-x1))*Q22;
            P=round(((y2-y)/(y2-y1))*R1+((y-y1)/(y2-y1))*R2);
            T(row,col) = P;
            T = uint8(T);
        end
    end
end
The arguments passed to the function are step4 = bilinear(Igray,1668,1836); (scale factor of 3).
You are finding the pixel nearest to the point you want to interpolate, then finding 4 of that pixel’s neighbors and interpolating between them:
o_row = ceil(row/hr);
o_col = ceil(col/wr);
Q12=X(o_row+1,o_col-1);
Q22=X(o_row+1,o_col+1);
Q11=X(o_row-1,o_col-1);
Q21=X(o_row-1,o_col+1);
Instead, find the 4 pixels nearest the point you want to interpolate:
o_row = ceil(row/hr);
o_col = ceil(col/wr);
Q12=X(o_row,o_col-1);
Q22=X(o_row,o_col);
Q11=X(o_row-1,o_col-1);
Q21=X(o_row-1,o_col);
The same pixel’s coordinates then need to be used when computing distances. The easiest way to do that is to separate out the floating-point coordinates of the output pixel ((row,col)) in the input image (o_row,o_col), and the location of the nearest pixel in the input image (fo_row,fo_col). Then, the distances are simply d_row = o_row - fo_row and 1-d_row, etc.
This is how I would write this function:
function T = bilinear(X,h,w)
    % Pre-allocating the output size
    T = zeros(h,w,'uint8'); % Create the matrix in the right type, rather than cast !!
    % Calculating dimension ratios
    hr = h/size(X,1); % Not with the padded sizes!!
    wr = w/size(X,2);
    % Padding the original image with 0 so I don't go out of bounds
    pad = 2;
    X = padarray(X,[pad,pad],'both');
    % Loop
    for col = 1:w % Looping over the row in the inner loop is faster!!
        for row = 1:h
            % For calculating equivalent position on the original image
            o_row = row/hr;
            o_col = col/wr;
            fo_row = floor(o_row); % Code is simpler when using floor here !!
            fo_col = floor(o_col);
            % Getting the intensity values from horizontal neighbors
            Q11 = double(X(fo_row +pad, fo_col +pad)); % Indexing taking padding into account !!
            Q21 = double(X(fo_row+1+pad, fo_col +pad)); % Casting to double might not be necessary, but MATLAB does weird things with integer computation !!
            Q12 = double(X(fo_row +pad, fo_col+1+pad));
            Q22 = double(X(fo_row+1+pad, fo_col+1+pad));
            % Calculating the relative positions to the enlarged image
            d_row = o_row - fo_row;
            d_col = o_col - fo_col;
            % Interpolating on 2 first axis and the result between them
            R1 = (1-d_row)*Q11 + d_row*Q21;
            R2 = (1-d_row)*Q12 + d_row*Q22;
            T(row,col) = round((1-d_col)*R1 + d_col*R2);
        end
    end
end
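A quick way to sanity-check either version is to compare it against imresize with the 'bilinear' method. This is only a sketch; peppers.png is a demo image shipped with MATLAB, and the factor of 3 mirrors the question:
Igray = rgb2gray(imread('peppers.png'));               % any grayscale test image
T = bilinear(Igray, 3*size(Igray,1), 3*size(Igray,2)); % the function above
R = imresize(Igray, 3, 'bilinear');                    % built-in reference
figure, imshowpair(T, R, 'montage')                    % visual side-by-side comparison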

Searching Across a Line in a Matrix in Octave

The attached image has a line with a break in it.
My code finds the line using a Hough transform, resulting in r=32 and theta=2.3213. The Hough transform isn't perfect: the angle (especially with a more complex image) is always off by a little bit, and in this case, because of the edge detection, the line is offset. I want to read values across the line to find the breaks in it. In order to do this, I will need to be able to sample values on either side of the line to find where the maximum density of the line is.
Further explanation (if you want it):
If you look closely at the image, you can see areas where the line crosses a pixel pretty much dead on, resulting in a value of nearly 1 (white). Other areas have two pixels side by side with values of about 0.5 (gray). I need a solution that takes into account the anti-aliasing of the line and allows me to extract the breaks in it.
%Program Preparation
clear ; close all; clc %clearing command window
pkg load image %loading image analyzation suite
pkg load optim
%Import Image
I_original = imread("C:/Users/3015799/Desktop/I.jpg");
%Process Image to make analysis quicker and more effective
I = mat2gray(I_original); %convert to black and white
I = edge(I, 'sobel');
%Perform Hough Transform
angles = pi*[-10:189]/180;
hough = houghtf(I,"line",angles);
%Detect hot spots in hough transform
detect = hough>.5*max(hough(:));
%Shrink hotspots to geometric center, and index
detect = bwmorph(detect,'shrink',inf);
[ii, jj] = find(detect);
r = ii - (size(hough,1)-1)/2;
theta = angles(jj);
%Cull duplicates. i.e outside of 0-180 degrees
dup = theta<-1e-6 | theta>=pi-1e-6;
r(dup) = [];
theta(dup) = [];
%Compute line parameters (using Octave's implicit singleton expansion)
r = r(:)'
theta = theta(:)'
x = repmat([1;1133],1,length(r)); % 2xN matrix, N==length(r)
y = (r - x.*cos(theta))./sin(theta); % solve line equation for y
%The above goes wrong when theta==0, fix that:
horizontal = theta < 1e-6;
x(:,horizontal) = r(horizontal);
y(:,horizontal) = [1; size(I,2)]; % span the full image width for these lines
%Plot
figure
imshow(I)
hold on
plot(y,x,'r-','linewidth',2)
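One way to actually read values across a detected line, as described above, is to interpolate the grayscale image along the segment with interp2. This is only a sketch reusing the x and y endpoints computed above (in this code x indexes rows and y indexes columns), with an arbitrary 500 samples:
G = mat2gray(I_original);                      % sample the grayscale image, not the edge map
t = linspace(0, 1, 500);                       % parameter along the first detected line
rows_s = x(1,1) + t*(x(2,1) - x(1,1));         % row coordinates along the segment
cols_s = y(1,1) + t*(y(2,1) - y(1,1));         % column coordinates along the segment
profile = interp2(G, cols_s, rows_s);          % interp2 expects (X = columns, Y = rows)
figure
plot(t, profile)                               % breaks in the line show up as runs of low values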
If you are only interested in the length of the gap, this would be very easy:
clear all
pkg load image
img_fn = "input.jpg";
if (! exist (img_fn, "file"))
  urlwrite ("https://i.stack.imgur.com/5UnpO.jpg", img_fn);
endif
Io = imread(img_fn);
I = im2bw (Io);
r = max(I);
c = max(I');
ri = find (diff(r));
ci = find (diff(c));
## both should have 4 elements (one break)
assert (numel (ri) == 4);
assert (numel (ci) == 4);
## the gap is in the middle
dx = diff(ri(2:3))
dy = diff(ci(2:3))
# the length is now easy
l = hypot (dy, dx)
gives
dx = 5
dy = 5
l = 7.0711
without any Hough transform. Of course, you also have to check the corner cases for horizontal and vertical lines, but this should give you an idea.
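A guard for those corner cases could replace the two asserts above, along these lines (a sketch based on the assumption that a purely horizontal or vertical line yields only two transitions in one of ri/ci and contributes no gap in that direction):
## only take a gap component from a direction that shows four transitions
dx = 0;
dy = 0;
if (numel (ri) == 4)
  dx = diff (ri(2:3));
endif
if (numel (ci) == 4)
  dy = diff (ci(2:3));
endif
l = hypot (dy, dx)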

Wrong number of objects being returned

I have this MATLAB code to count the number of objects in an image. There are two objects in the image I am choosing (a car and a cyclist). However, the program returns a wrong output saying there are 0 objects. Can someone find the error in the code? Thanks.
The logic behind the code is:
1. Take two input images, one without objects and one with objects.
2. Convert the input images from RGB to grayscale.
3. Compare the two images and find the difference.
4. Convert the resulting image to binary.
5. In the binary image, keep only the blobs whose area is greater than 4000.
6. Display the count and the density.
clc;
MV = imread('car.png'); %To read image
MV1 = imread('backgnd.png');
A = double(rgb2gray(MV)); %convert to gray
B= double(rgb2gray(MV1)); %convert 2nd image to gray
[height, width] = size(A); %image size?
h1 = figure(1);
%Foreground Detection
thresh=11;
fr_diff = abs(A-B);
for j = 1:width
    for k = 1:height
        if (fr_diff(k,j) > thresh)
            fg(k,j) = A(k,j);
        else
            fg(k,j) = 0;
        end
    end
end
subplot(2,2,1) , imagesc(MV), title ({'Orignal Frame'});
subplot(2,2,2) , imshow(mat2gray(A)), title ('converted Frame');
subplot(2,2,3) , imshow(mat2gray(B)), title ('BACKGND Frame ');
sd=imadjust(fg); % adjust the image intensity values to the color map
level=graythresh(sd);
m=imnoise(sd,'gaussian',0,0.025); % add Gaussian noise
k=wiener2(m,[5,5]); % filtering using a Wiener filter
bw=im2bw(k,level);
bw2=imfill(bw,'holes');
bw3 = bwareaopen(bw2,5000);
labeled = bwlabel(bw3,8);
cc=bwconncomp(bw3);
Densityoftraffic = cc.NumObjects/(size(bw3,1)*size(bw3,2));
blobMeasurements = regionprops(labeled,'all');
numberofcars = size(blobMeasurements, 1);
subplot(2,2,4) , imagesc(labeled), title ({'Foreground'});
hold off;
disp(numberofcars); % display number of cars
disp(Densityoftraffic); % display traffic density
An empty image (of a road) with no objects (vehicles) in it.
An image of the same road but with 2 objects (a car and a cyclist) in it.
Try this; it should help you in a more optimized manner:
clc
clear all
close all
im1 = imread('image1.png');
im2 = imread('image2.png');
gray1 = double(rgb2gray(im1));
gray2 = double(rgb2gray(im2));
absDif = mat2gray(abs(gray1 - gray2));
figure,imshow(absDif,[])
absDfbw = im2bw(absDif,0.9*graythresh(absDif));
figure,imshow(absDfbw,[])
absDfbw = bwareaopen(absDfbw,25);
absDfbw = imclose(absDfbw,strel('disk',5));
figure,imshow(absDfbw,[])
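To get an actual object count from the cleaned-up mask, bwconncomp can be applied at the end (a small addition to the sketch above; if the two blobs come out cleanly separated, this should report 2):
cc = bwconncomp(absDfbw);
numberOfObjects = cc.NumObjects            % expected to be 2 (car and cyclist) if the blobs are separated
density = cc.NumObjects / numel(absDfbw)   % objects per pixel, analogous to Densityoftraffic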
Results are:
Thank You

How to detect lines using houghlines on the actual image rather than in the hough graph

I want to detect lines in a text document. Here is the original image; this was eroded using the erode function to make the task of edge detection easier. Here is the eroded image.
Now to detect the lines I used houghlines, and used the following code in my script file.
I = imread('c:\new.jpg');
rotI = imrotate(I,33,'crop');
bw_I = rgb2gray(rotI);
BW = edge(bw_I,'canny');
[H,T,R] = hough(BW);
imshow(H,[],'XData',T,'YData',R,...
    'InitialMagnification','fit');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
plot(x,y,'s','color','white');
% Find lines and plot them
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',7);
figure, imshow(rotI), hold on
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    % Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if ( len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
% Highlight the longest line segment
plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','blue');
This produced this result. Now I know that the intersection points are the detected lines. What I want is to somehow show the detected lines on the original image, for example by highlighting or underlining them. Is this possible? Which function would I use for that?
Edit: What I wanted to say is: how do I translate the detected lines (intersection points) from the last result into a clearer result?
You want to apply imshow to the result of the edge function call.
This part of the MATLAB documentation explains what you are trying to accomplish:
Read an image into the MATLAB workspace.
I = imread('circuit.tif');
For this example, rotate and crop the image using the imrotate function.
rotI = imrotate(I,33,'crop');
fig1 = imshow(rotI);
Find the edges in the image using the edge function.
BW = edge(rotI,'canny');
figure, imshow(BW);
It is this third step you are after. You already ran the edge function.
Now all that remains is visualizing the result BW with imshow.
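Concretely, you can display BW (or the original rotI) and draw the segments returned by houghlines on top of it, reusing the lines variable from your script (a sketch):
figure, imshow(BW), hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green'); % overlay each detected segment
end
hold off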