I have attached my Hough transform code in MATLAB below:
%Hough Transform to find lines
%Load an Image and Convert to Grayscale to apply canny Filter
im = imread('lines.jpg');
im_gray = rgb2gray(im);
im_edge = edge(im_gray, 'canny');
figure, imshow(im), title('Original Image');
figure, imshow(im_gray), title('Grayscale Image');
figure, imshow(im_edge), title('Canny Filter Edge');
%Apply Hough Transform to Find the Candidate Lines
[accum theta rho] = hough(im_edge);
figure, imagesc(accum, 'xData', theta, 'ydata', rho), title('Hough Accumulator');
peaks = houghpeaks(accum, 100, 'Threshold', ceil(0.6 * max(accum(:))),'NHoodSize', [5,5]);
size(peaks);
%Finding the line segments in the image
line_segs = houghlines(edges, theta, rows, peaks, 'FillGap', 50,'MinLength', 100);
%Plotting
figure, imshow(im), title('Line Segments');
hold on;
for k=1:length(line_segs)
endpoints = [line_segs(k).point1; line_segs(k).point2];
plot(endpoints(:,1), endpoints(:,2), 'LineWidth', 2, 'Color','green');
end
hold off;
I am trying to implement the same in Octave by changing 'hough' to 'houghtf', 'houghlines' to 'hough_line', and 'houghpeaks' to 'immaximas', in the following way:
%Hough Transform to find lines
pkg load image;
%Load an Image and Convert to Grayscale to apply canny Filter
im = imread('lines.jpg');
im_gray = rgb2gray(im);
im_edge = edge(im_gray, 'canny');
figure, imshow(im), title('Original Image');
figure, imshow(im_gray), title('Grayscale Image');
figure, imshow(im_edge), title('Canny Filter Edge');
%Apply Hough Transform to Find the Candidate Lines
[accum theta rho] = houghtf(im_edge); %In Octave and 'hough' in MATLAB
figure, imagesc(accum, 'xData', theta, 'ydata', rho), title('Hough Accumulator');
peaks = immaximas(accum, 100, 'Threshold', ceil(0.6 * max(accum(:))),'NHoodSize', [5,5]);
size(peaks);
%Finding the line segments in the image
line_segs = hough_line(edges, theta, rows, peaks, 'FillGap', 50, 'MinLength', 100);
%Plotting
figure, imshow(im), title('Line Segments');
hold on;
for k=1:length(line_segs)
endpoints = [line_segs(k).point1; line_segs(k).point2];
plot(endpoints(:,1), endpoints(:,2), 'LineWidth', 2, 'Color', 'green');
end
hold off;
I get the following error while executing it:
error: element number 3 undefined in return list
error: called from
HoughTransformLines at line 14 column 18
In other words, I am getting an error stating that 'rho' is undefined.
I am completely new to MATLAB and Octave. Can anyone please help me implement the Hough-Transform in Octave?
I suggest the following updates for the original code:
%Hough Transform to find lines
pkg load image;
%Load an Image and Convert to Grayscale to apply canny Filter
im = imread('lines.jpg');
im_gray = rgb2gray(im);
im_edge = edge(im_gray, 'canny');
figure 1, imshow(im), title('Original Image');
figure 2, imshow(im_gray), title('Grayscale Image');
figure 3, imshow(im_edge), title('Canny Filter Edge');
%Apply Hough Transform to Find the Candidate Lines
accum = houghtf(im_edge);
theta = -90:90;
diag_length = (size(accum)(1) - 1) / 2;
rho = -diag_length:diag_length;
figure 4, imagesc(theta, rho, accum), title('Hough Accumulator');
peaks = houghpeaks(accum, 100, 'Threshold', ceil(0.6 * max(accum(:))), 'NHoodSize', [5,5]);
%Finding the line segments in the image
line_segs = houghlines(im_edge, theta, rho, peaks, 'FillGap', 50, 'MinLength', 100);
%Plotting
figure 5, imshow(im), title('Line Segments');
hold on;
for k=1:length(line_segs)
endpoints = [line_segs(k).point1; line_segs(k).point2];
plot(endpoints(:,1), endpoints(:,2), 'LineWidth', 2, 'Color', 'green');
end
hold off;
Let's walk through all updates and review them:
%Hough Transform to find lines
pkg load image;
%Load an Image and Convert to Grayscale to apply canny Filter
im = imread('lines.jpg');
im_gray = rgb2gray(im);
im_edge = edge(im_gray, 'canny');
figure 1, imshow(im), title('Original Image');
figure 2, imshow(im_gray), title('Grayscale Image');
figure 3, imshow(im_edge), title('Canny Filter Edge');
There are only very minor changes here: indices were added to the figure calls so that the plots open in consistent, separate windows (see Multiple Plot Windows in the Octave documentation).
After that we apply the Hough transform and restore the "lost" theta and rho values in Octave:
%Apply Hough Transform to Find the Candidate Lines
accum = houghtf(im_edge);
theta = -90:90;
diag_length = (size(accum)(1) - 1) / 2;
rho = -diag_length:diag_length;
figure 4, imagesc(theta, rho, accum), title('Hough Accumulator');
According to the houghtf function's docs, it returns only an accumulator, with rows corresponding to indices of rho values and columns to indices of theta values. How can we restore the original rho and theta values? Well, the number of rho values (rows of the accum matrix) is 2*diag_length - 1, where diag_length is the length of the diagonal of the input image. Knowing this, we reverse the formula to get the half-range back: diag_length = (size(accum)(1) - 1) / 2. Then we can restore the rho values, which run from minus that half-range to plus it: rho = -diag_length:diag_length. With the thetas everything is easier: by default they cover the range pi*(-90:90)/180, but we will use degrees instead: theta = -90:90. I've also added an index to the figure call, as before, and changed the call to imagesc according to its docs; it should be called as imagesc (x, y, img). A small worked example of the reconstruction follows.
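To make this concrete, here is a small worked example; the accumulator size is hypothetical, not taken from the question's image:
% Suppose houghtf returned an accumulator with 311 rows, i.e. 2*diag_length - 1 = 311.
n_rho = 311;                       % this would be size(accum)(1)
diag_length = (n_rho - 1) / 2;     % = 155
rho = -diag_length:diag_length;    % 311 values, one axis label per accumulator row
disp(numel(rho) == n_rho)          % sanity check, prints 1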
After that we use the houghpeaks function to get the peaks:
peaks = houghpeaks(accum, 100, 'Threshold', ceil(0.6 * max(accum(:))), 'NHoodSize', [5,5]);
And we use houghlines to get the resulting line segments (I guess there were some errata in the variable names):
line_segs = houghlines(im_edge, theta, rho, peaks, 'FillGap', 50, 'MinLength', 100);
And finally comes the plotting code, which wasn't changed at all because it already worked correctly.
The reason Octave tells you that rho is undefined is that MATLAB's hough function and Octave's houghtf function are not exact equivalents.
Here's the description of the output arguments returned by hough, from the corresponding MathWorks documentation page:
The function returns rho, the distance from the origin to the line
along a vector perpendicular to the line, and theta, the angle in
degrees between the x-axis and this vector. The function also returns
the Standard Hough Transform, H, which is a parameter space matrix
whose rows and columns correspond to rho and theta values
respectively.
Octave's houghtf, on the other hand, only returns matrix H:
The result H is an N by M matrix containing the Hough transform. Here,
N is the number different values of r that has been attempted. This is
computed as 2*diag_length - 1, where diag_length is the length of the
diagonal of the input image. M is the number of different values of
theta. These can be set through the third input argument arg. This
must be a vector of real numbers, and is by default pi*(-90:90)/180.
Now, in your script, the only place where you call rho is on line 15, when trying to display the Hough accumulator.
I suggest you instead plot the accumulator this way:
figure, imagesc(H), xlabel('xData'), ylabel('ydata'), title('Hough accumulator')
Let me know if this works for you!
[H] = houghtf(edges);
You have to call it this way because Octave's houghtf returns only a single matrix of values; you can't assign three output variables when only one value is returned.
So assign a single variable to the output and you will get the result.
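For completeness, a minimal sketch of that call, assuming im_edge is the Canny edge image from the question and the Octave image package is installed; the axis reconstruction follows the earlier answer:
pkg load image;
H = houghtf(im_edge);                  % single output: only the accumulator
theta = -90:90;                        % default angles, in degrees
half = (size(H, 1) - 1) / 2;           % half of the rho axis, per the earlier answer
rho = -half:half;                      % one value per accumulator row
figure, imagesc(theta, rho, H), title('Hough Accumulator');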
Related
I am trying to color segments of a spline curve with different RGB values. Many thanks to @Suever, I have a working version:
x = [0.16;0.15;0.25;0.48;0.67];
y = [0.77;0.55;0.39;0.22;0.21];
spcv = cscvn([x, y].'); % spline curve
N = size(x, 1);
figure;
hold on;
for idx = 1:N-2
before = get(gca, 'children'); % before plotting this segment
fnplt(spcv, spcv.breaks([idx, idx+1]), 2);
after = get(gca, 'children'); % after plotting this segment
new = setdiff(after, before);
set(new, 'Color', [idx/N, 1-idx/N, 0, idx/N]); % set new segment to a specific RGBA color
end
hold off;
Now I am looking to speed it up. Is it possible?
No explicit benchmarks per se, but you can vectorise this easily by
a. collecting the plotted points and dividing them into 'segments' (e.g. using the buffer function)
b. setting the 'color' property of the Children (thanks to @Suever for pointing out this can be done on an array of object handles directly)
%% Get spline curve
x = [0.16; 0.15; 0.25; 0.48; 0.67];
y = [0.77; 0.55; 0.39; 0.22; 0.21];
spcv = cscvn ([x, y].');
%% Split into segments
pts = fnplt (spcv); xpts = pts(1,:).'; ypts = pts(2,:).';
idx = buffer ([1 : length(xpts)]', 10, 1, 'nodelay'); % 10pt segments
lastidx=idx(:,end); lastidx(lastidx==0)=[]; idx(:,end)=[]; % correct last segment
% Plot segments
plot (xpts(idx), ypts(idx), xpts(lastidx), ypts(lastidx), 'linewidth', 10);
% Adjust colour and transparency
Children = flipud (get (gca, 'children'));
Colours = hsv (size (Children, 1)); % generate from colourmap
Alphas = linspace (0, 1, length (Children)).'; % for example
set (Children, {'color'}, num2cell([Colours, Alphas],2));
Note: As also pointed out in the comments section (thanks @Dev-iL), setting the colour to an RGBA quadruplet the way you ask (i.e. as opposed to a simple RGB triplet) is a newer (and, for now, undocumented) MATLAB feature. This code, for example, will not work in R2013b.
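For older releases, one possible fallback (my own suggestion, not part of the original answer) is to drop the alpha column and set plain RGB triplets only:
%% RGB-only fallback for releases without RGBA support
set (Children, {'color'}, num2cell (Colours, 2));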
I have an arbitrary shape, of which the exterior boundary has been traced in MATLAB using bwboundaries. Using regionprops, I can calculate the total area enclosed by this shape.
However, I want to know the area for only the parts of the shape that fall within a circle of known radius R centered at coordinates [x1, y1]. What is the best way to accomplish this?
There are a few ways to approach this. One way is to alter the mask before calling bwboundaries (or regionprops) so that it only includes pixels which are within the given circle.
This example assumes that you already have a logical matrix M that you pass to bwboundaries.
function [A, boundaries] = traceWithinCircle(M, x1, y1, R)
%// Get pixel centers (x = column index, y = row index, matching the shape of M)
[x, y] = meshgrid(1:size(M, 2), 1:size(M, 1));
%// Compute their distance from x1, y1
distances = sqrt(sum(bsxfun(@minus, [x(:), y(:)], [x1, y1]).^2, 2));
%// Determine which are inside of the circle with radius R
isInside = distances <= R;
%// Set the values outside of this circle in M to zero
%// This will ensure that they are not detected in bwboundaries
M(~isInside) = 0;
%// Now perform bwboundaries on things that are
%// inside the circle AND were 1 in M
boundaries = bwboundaries(M);
%// You can, however, get the area by simply counting the number of 1s in M
A = sum(M(:));
%// Of if you really want to use regionprops on M
%// props = regionprops(M);
%// otherArea = sum([props.Area]);
end
And as an example
%// Load some example data
data = load('mri');
M = data.D(:,:,12) > 60;
%// Trace the boundaries using the method described above
x1 = 70; y1 = 90; R = 50;          %// circle centre and radius
[A, B] = traceWithinCircle(M, x1, y1, R);
%// Display the results
figure;
hax = axes();
him = imagesc(M, 'Parent', hax);
hold(hax, 'on');
colormap gray
axis(hax, 'image');
%// Plot the reference circle
t = linspace(0, 2*pi, 100);
plot(x1 + cos(t)*R, y1 + sin(t)*R);
%// Plot the segmented boundaries returned above (only those inside the circle)
for k = 1:numel(B)
plot(B{k}(:,2), B{k}(:,1), 'r');
end
The Hough transform in MATLAB is called in the following way:
[H, theta, rho] = hough(BW)
If I want to specify the theta values, I can use
[H, theta, rho] = hough(BW, 'Theta', begin:step:end)
The Theta parameter specifies a vector of Hough transform theta values. My problem is that the acceptable range of theta values in MATLAB is between -90 and 90 degrees. I want to calculate the Hough transform with theta values between 0 and 180 degrees. Should I re-implement the Hough transform in MATLAB? Is there any other code that allows this range in the Hough transform?
Due to the definition of the Hough transform, the accumulator satisfies H(rho, theta) = H(-rho, theta + 180°). You can therefore flip the data you get for -90° to 90° along the rho axis and append it to obtain the data for the next 180 degrees.
The following code uses the example from the documentation and extends the data to a full 360 degrees of theta:
%example code
RGB = imread('gantrycrane.png');
% Convert to intensity.
I = rgb2gray(RGB);
% Extract edges.
BW = edge(I,'canny');
[H,T,R] = hough(BW,'RhoResolution',0.5,'Theta',-90:0.5:89.5);
% Display the original image.
subplot(3,1,1);
imshow(RGB);
title('Gantrycrane Image');
% Display the Hough matrix.
subplot(3,1,2);
imshow(imadjust(mat2gray(H)),'XData',T,'YData',R,...
'InitialMagnification','fit');
title('Hough Transform of Gantrycrane Image');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
colormap(hot);
%Modifications begin
subplot(3,1,3);
%append another 180 degree to the axis
T2=[T T+180];
%append flipped data
H2=[H,H(end:-1:1,:)];
%plot the same way.
imshow(imadjust(mat2gray(H2)),'XData',T2,'YData',R,...
'InitialMagnification','fit');
title('Hough Transform of Gantrycrane Image');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
colormap(hot);
Does anyone know how to use the Hough transform to detect the strongest lines in the binary image:
A = zeros(7,7);
A([6 10 18 24 36 38 41]) = 1;
using the (rho, theta) format with theta in steps of 45° from -45° to 90°? And how do I show the accumulator array in MATLAB as well?
Any help or hints please?
Thank you!
If you have access to the Image Processing Toolbox, you can use the functions HOUGH, HOUGHPEAKS, and HOUGHLINES:
%# your binary image
BW = false(7,7);
BW([6 10 18 24 36 38 41]) = true;
%# hough transform, detect peaks, then get lines segments
[H T R] = hough(BW);
P = houghpeaks(H, 4);
lines = houghlines(BW, T, R, P, 'MinLength',2);
%# show accumulator matrix and peaks
imshow(H./max(H(:)), [], 'XData',T, 'YData',R), hold on
plot(T(P(:,2)), R(P(:,1)), 'gs', 'LineWidth',2);
xlabel('\theta'), ylabel('\rho')
axis on, axis normal
colormap(hot), colorbar
%# overlay detected lines over image
figure, imshow(BW), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1), xy(:,2), 'g.-', 'LineWidth',2);
end
hold off
Each pixel (x,y) maps to a set of lines (rho,theta) that run through it.
Build an accumulator matrix indexed by (rho, theta).
For each point (x,y) that is on, generate all the quantized (rho, theta) values that correspond to (x,y) and increment the corresponding point in the accumulator.
Finding the strongest lines corresponds to finding peaks in the accumulator.
In practice, the discretization of the polar parameters is important to get right. Too fine and not enough points will overlap. Too coarse and each bin could correspond to multiple lines.
In pseudocode, with some liberties (a complete runnable sketch follows the block):
accum = zeros(360,100);
[y,x] = find(binaryImage);
y = y - size(binaryImage,1)/2; % use locations offset from the center of the image
x = x - size(binaryImage,2)/2;
npts = length(x);
for i = 1:npts
for theta = 1:360 % all possible orientations
rho = %% use trigonometry to find minimum distance between origin and theta oriented line passing through x,y here
q_rho = %% quantize rho so that it fits neatly into the accumulator %%
accum(theta, q_rho) = accum(theta, q_rho) + 1;
end
end
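To make the pseudocode concrete, here is a minimal runnable sketch specialised to the 7x7 binary image from the question, with theta in 45-degree steps from -45 to 90. The variable names are my own and, unlike the pseudocode above, rho indexes the rows and theta the columns, matching the toolbox convention:
A = zeros(7, 7);
A([6 10 18 24 36 38 41]) = 1;
thetas = -45:45:90;                        % candidate orientations, in degrees
[rows, cols] = find(A);                    % row (y) and column (x) of each "on" pixel
max_rho = ceil(norm(size(A)));             % largest possible |rho| for this image
accum = zeros(2 * max_rho + 1, numel(thetas));
for i = 1:numel(rows)
    for t = 1:numel(thetas)
        % distance from the origin to the line through (x, y) at angle theta
        rho = cols(i) * cosd(thetas(t)) + rows(i) * sind(thetas(t));
        q_rho = round(rho) + max_rho + 1;  % quantise and shift to a valid row index
        accum(q_rho, t) = accum(q_rho, t) + 1;
    end
end
figure, imagesc(thetas, -max_rho:max_rho, accum), title('Accumulator');
xlabel('\theta (degrees)'), ylabel('\rho');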
I have points in 3D space and their corresponding 2D image points. How can I make a mesh out of the 3D points, then texture the triangle faces formed by the mesh?
Note that the function trisurf that you were originally trying to use returns a handle to a patch object. If you look at the 'FaceColor' property for patch objects, you can see that there is no 'texturemap' option. That option is only valid for the 'FaceColor' property of surface objects. You will therefore have to find a way to plot your triangular surface as a surface object instead of a patch object. Here are two ways to approach this:
If your data is in a uniform grid...
If the coordinates of your surface data represent a uniform grid such that z is a rectangular set of points that span from xmin to xmax in the x-axis and ymin to ymax in the y-axis, you can plot it using surf instead of trisurf:
Z = ... % N-by-M matrix of data
x = linspace(xmin, xmax, size(Z, 2)); % x-coordinates for columns of Z
y = linspace(ymin, ymax, size(Z, 1)); % y-coordinates for rows of Z
[X, Y] = meshgrid(x, y); % Create meshes for x and y
C = imread('image1.jpg'); % Load RGB image
h = surf(X, Y, Z, flipdim(C, 1), ... % Plot surface (flips rows of C, if needed)
'FaceColor', 'texturemap', ...
'EdgeColor', 'none');
axis equal
In order to illustrate the results of the above code, I initialized the data as Z = peaks;, used the built-in sample image 'peppers.png', and set the x and y values to span from 1 to 16. This resulted in the following texture-mapped surface:
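For reference, here is a sketch of that initialisation (my reconstruction of the described setup, not the answer's exact script):
Z = peaks;                                  % 49-by-49 sample surface
x = linspace(1, 16, size(Z, 2));            % x spans 1 to 16
y = linspace(1, 16, size(Z, 1));            % y spans 1 to 16
[X, Y] = meshgrid(x, y);
C = imread('peppers.png');                  % built-in sample RGB image
surf(X, Y, Z, flipdim(C, 1), 'FaceColor', 'texturemap', 'EdgeColor', 'none');
axis equal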
If your data is non-uniformly spaced...
If your data are not regularly spaced, you can create a set of regularly-spaced X and Y coordinates (as I did above using meshgrid) and then use one of the functions griddata or TriScatteredInterp to interpolate a regular grid of Z values from your irregular set of z values. I discuss how to use these two functions in my answer to another SO question. Here's a refined version of the code you posted using TriScatteredInterp (Note: as of R2013a scatteredInterpolant is the recommended alternative):
x = ... % Scattered x data
y = ... % Scattered y data
z = ... % Scattered z data
xmin = min(x);
xmax = max(x);
ymin = min(y);
ymax = max(y);
F = TriScatteredInterp(x(:), y(:), z(:)); % Create interpolant
N = 50; % Number of y values in uniform grid
M = 50; % Number of x values in uniform grid
xu = linspace(xmin, xmax, M); % Uniform x-coordinates
yu = linspace(ymin, ymax, N); % Uniform y-coordinates
[X, Y] = meshgrid(xu, yu); % Create meshes for xu and yu
Z = F(X, Y); % Evaluate interpolant (N-by-M matrix)
C = imread('image1.jpg'); % Load RGB image
h = surf(X, Y, Z, flipdim(C, 1), ... % Plot surface
'FaceColor', 'texturemap', ...
'EdgeColor', 'none');
axis equal
In this case, you have to first choose the values of N and M for the size of your matrix Z. In order to illustrate the results of the above code, I initialized the data for x, y, and z as follows and used the built-in sample image 'peppers.png':
x = rand(1, 100)-0.5; % 100 random values in the range -0.5 to 0.5
y = rand(1, 100)-0.5; % 100 random values in the range -0.5 to 0.5
z = exp(-(x.^2+y.^2)./0.125); % Values from a 2-D Gaussian distribution
This resulted in the following texture-mapped surface:
Notice that there are jagged edges near the corners of the surface. These are places where there were too few points for TriScatteredInterp to adequately fit an interpolated surface. The Z values at these points are therefore nan, resulting in the surface point not being plotted.
If your texture is already in the proper geometry you can just use regular old texture mapping.
The link to the MathWorks documentation of texture mapping:
http://www.mathworks.com/access/helpdesk/help/techdoc/visualize/f0-18164.html#f0-9250
Re-EDIT: Updated the code a little:
Try this approach (I just got it to work).
a=imread('image.jpg');
b=double(a)/255;
[x,y,z]=peaks(30); %# This is a surface maker that you do have
%# The matrix [x,y,z] is the representation of the surface.
surf(x,y,z,b,'FaceColor','texturemap') %# Try this with any image and you
%# should see a pretty explanatory
%# result. (Just copy and paste) ;)
So [x,y,z] is the 'surface', or rather three matrices containing the coordinates of points of the form (x,y,z) that lie on the surface. Notice that the image is stretched to fit the surface.