Canny edge detector is detecting the borders of the image - MATLAB

clear_all();
image_name = 'canny_test.png';
% no of pixels discarded on border areas
discard_pixels = 10;
% read image and convert to grayscale
input_image = gray_imread(image_name);
% discard border area
input_image = discard_image_area(input_image, discard_pixels);
% create a binary image
binary_image = edge(input_image,'canny');
imshow(binary_image);
(Input, expected outcome, and actual outcome images omitted.)
Here, we see that the border lines of the image are being detected by the Canny edge detector, which is not my expected outcome.
How can I achieve the expected outcome?
Source Code
function [output_image] = discard_image_area( input_image, pixel_count)
output_image = input_image;
[height, width] = size(input_image);
% discard top
output_image(1:pixel_count, :) = 0;
% discard bottom
h = height - pixel_count;
output_image(h:height, :) = 0;
% discard left
output_image(:,1:pixel_count) = 0;
% discard right
output_image(:,(width-pixel_count):width) = 0;
end
function img = gray_imread( image_name )
I = imread(image_name);
if(is_rgb(I))
img = rgb2gray(I);
elseif (is_gray(I))
img = I;
end
end

Apply discard_image_area after applying the edge function. Otherwise the discarded area exposes its own boundary as an artificial edge. That is, do this:
image_name = 'canny_test.png';
discard_pixels = 10;
input_image = rgb2gray(imread(image_name)); % there is no built-in gray_imread
% create a binary image
binary_image = edge(input_image,'canny');
% discard border area
binary_image = discard_image_area(binary_image , discard_pixels);
imshow(binary_image);
Output:

The answer is simple: your function discard_image_area sets the image values to 0 near the borders of the image.
Hence it creates a step between the image values and 0.
This is exactly the kind of discontinuity the Canny edge detector looks for.
You can easily see it if you display the image after applying the function.
Just don't use that function.
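For example, a minimal sketch of that check (it assumes canny_test.png is RGB, as in the answer above, and reuses the discard_image_area function from the question):
input_image = rgb2gray(imread('canny_test.png'));
masked = discard_image_area(input_image, 10);
figure, imshow(masked);                % the zeroed border shows up as a black frame
figure, imshow(edge(masked, 'canny')); % Canny traces that frame as strong edges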

Related

Removing colored lines in MATLAB

I am attempting to delete colored lines (specifically a yellow and blue line) within a series of images in Matlab. An example image can be found here:
I am able to segment out the blue line segments using basic thresholding. I am also able to segment out the bright yellow circles within the yellow line segment using thresholding. Finally, I am working on removing the remaining elements of the line segment using a Hough transform with the houghlines function and a mask.
Is there a more elegant way to perform this, or am I stuck employing this combination of methods?
Thanks
Edit: I discovered that the Hough transform is only removing single pixels from my image and not the entire yellow line. I was contemplating dilating around the detected pixels and checking for similarity, but I'm worried that the yellow line is too similar to the background colors (its position could change such that it is not fully tracking the dark background it happens to be over now). Any suggestions would be greatly appreciated.
%% This block was intended to deal with another data set this function has to
% analyze, but it actually ended up removing my yellow circles as well, making
% a further threshold step unnecessary so far
% Converts to a binary image containing almost exclusively lines and crosshairs
mask = im2bw(rgb_img, 0.8);
% Invert mask
mask = ~mask;
% Remove detected lines and crosshairs by setting to 0
rgb_img(repmat(~mask, [1, 1, 3])) = 0;
%% Removes blue targeting lines if present
% Define thresholds for RGB channel 3 based on histogram settings to remove
% blue lines
channel3Min = 0.000;
channel3Max = 0.478;
% Create mask based on chosen histogram thresholds
noBlue = (rgb_img(:,:,3) >= channel3Min ) & (rgb_img(:,:,3) <= channel3Max);
% Set background pixels where noBlue is false to zero.
rgb_img(repmat(~noBlue,[1 1 3])) = 0;
%% Removes any other targeting lines if present
imageGreyed = rgb2gray(rgb_img);
% Performs canny edge detection
BW = edge(imageGreyed, 'canny');
% Computes the hough transform
[H,theta,rho] = hough(BW);
% Finds the peaks in the hough matrix
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));
% Finds any large lines present in the image
lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',100);
colEnd = [];
rowEnd = [];
for i = 1:length(lines)
% Extracts line start and end points from houghlines output
pointHold = lines(i).point1;
colEnd = [colEnd pointHold(1)];
rowEnd = [rowEnd pointHold(2)];
pointHold = lines(i).point2;
colEnd = [colEnd pointHold(1)];
rowEnd = [rowEnd pointHold(2)];
% Creates a line segment from the line endpoints using a simple linear regression
fit = polyfit(colEnd, rowEnd, 1);
% Creates index of "x" (column) values to be fed into regression
colIndex = (colEnd(1):colEnd(2));
rowIndex = [];
% Obtains "y" (row) pixel values from regression
for i = colIndex
rowHold = fit(1) * i + fit(2);
rowIndex = [rowIndex rowHold];
end
% Round regression output
rowIndex = round(rowIndex);
% Assemble coordinate matrix
lineCoordinates = [colIndex; rowIndex]';
rgbDim = size(rgb_img);
% Create mask based on input image size
yellowMask = ones(rgbDim(1), rgbDim(2));
for i = 1:length(rowIndex)
yellowMask(rowIndex(i), colIndex(i)) = 0;
end
% Remove the lines found by hough transform
rgb_img(repmat(~yellowMask,[1 1 3])) = 0;
end
end % extra 'end' from the original post, presumably closing the enclosing function (not shown)
I briefly tested the example given at
http://de.mathworks.com/help/images/examples/color-based-segmentation-using-k-means-clustering.html?prodcode=IP&language=en
using your image:
he = imread('HlQVN.jpg');
imshow(he)
cform = makecform('srgb2lab');
lab_he = applycform(he,cform);
ab = double(lab_he(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
nColors = 3;
% repeat the clustering 3 times to avoid local minima
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = he;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
imshow(segmented_images{1}), title('objects in cluster 1');
This already identifies the blue line pretty well.
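As a possible follow-up (only a sketch: k-means cluster numbering is arbitrary, so the blue line may not land in cluster 1 on every run), the chosen cluster can be turned into a logical mask and blanked out of the original image:
blueCluster = 1;                         % adjust after inspecting the clusters
blueMask = (pixel_labels == blueCluster);
cleaned = he;
cleaned(repmat(blueMask, [1 1 3])) = 0;  % zero the blue-line pixels in all channels
figure, imshow(cleaned), title('blue-line cluster removed');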
This post won't go into the image processing side of the problem; it focuses on the implementation and suggests ways to improve the existing code. The code performs a polyfit calculation at each loop iteration, which I am not sure can be vectorized. So let's try to vectorize the rest of the code inside the loop, and hopefully that will bring some speedup to the overall code. The changes I would like to propose are at two places inside the loop.
1) Replace -
rowIndex=[]
for i = colIndex
rowHold = fit(1) * i + fit(2)
rowIndex = [rowIndex rowHold];
end
with -
rowIndex = fit(1)*colIndex + fit(2)
2) Replace -
yellowMask = ones(rgbDim(1), rgbDim(2));
for i = 1:length(rowIndex)
yellowMask(rowIndex(i), colIndex(i)) = 0;
end
rgb_img(repmat(~yellowMask,[1 1 3])) = 0;
with -
idx1 = (colIndex-1)*rgbDim(1) + rowIndex
rgb_img(bsxfun(@plus, idx1(:), (0:rgbDim(3)-1)*rgbDim(1)*rgbDim(2))) = 0;
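As a quick sanity check of both replacements, here is a small standalone sketch on toy data (the fit coefficients and the image are made up for illustration, not taken from the original problem):
fit = [0.5 10];                          % pretend output of polyfit
colIndex = 20:120;
% (1) loop vs. vectorized evaluation of the fitted line
rowLoop = [];
for i = colIndex
    rowLoop = [rowLoop, fit(1)*i + fit(2)]; %#ok<AGROW>
end
rowVec = fit(1)*colIndex + fit(2);
isequal(rowLoop, rowVec)                 % returns logical 1
% (2) loop vs. linear-index zeroing of the line pixels in all three channels
rgb_img = uint8(randi(255, 200, 200, 3));
rowIndex = round(rowVec);
rgbDim = size(rgb_img);
ref = rgb_img;
for k = 1:numel(rowIndex)
    ref(rowIndex(k), colIndex(k), :) = 0;
end
idx1 = (colIndex-1)*rgbDim(1) + rowIndex;
out = rgb_img;
out(bsxfun(@plus, idx1(:), (0:rgbDim(3)-1)*rgbDim(1)*rgbDim(2))) = 0;
isequal(ref, out)                        % returns logical 1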
It turns out that the answer involved converting the image into the L*a*b* color space and performing thresholding. This segmented out the lines with minimal loss in the rest of the image. The code is below:
% Convert RGB image to L*a*b color space for thresholding
rgb_img = im2double(rgb_img);
cform = makecform('srgb2lab', 'AdaptedWhitePoint', whitepoint('D65'));
I = applycform(rgb_img,cform);
% Define thresholds for channel 2 based on histogram settings
channel2Min = -1.970;
channel2Max = 48.061;
% Create mask based on chosen histogram threshold
BW = (I(:,:,2) <= channel2Min ) | (I(:,:,2) >= channel2Max);
% Determines the eccentricity for regions of pixels; basically how line-like
% (vals close to 1) or circular (vals close to 0) the region is
rp = regionprops(BW, 'PixelIdxList', 'Eccentricity');
% Selects for regions which are not line segments (areas which
% may have been incorrectly thresholded out with the crosshairs)
rp = rp([rp.Eccentricity] < 0.99);
% Removes the non-line segment regions from the mask
BW(vertcat(rp.PixelIdxList)) = false;
% Set background pixels where BW is false to zero.
rgb_img(repmat(BW,[1 1 3])) = 0;

How to draw a filled circle on a video frame using MATLAB

I have an "inverted-pendulum" video which I try to find the mid point of moving part. I am using Computer Vision Toolbox
I change the mid point's color using detected coordinates. Assume that X is the frame's row number for the detected mid point and the Y is the col number.
while ~isDone(hVideoFileReader)
frame = step(hVideoFileReader);
...
frame(X-3:X+3, Y-3:Y+3, 1) = 1; % R = 1: make the defined region red
frame(X-3:X+3, Y-3:Y+3, 2) = 0; % G = 0
frame(X-3:X+3, Y-3:Y+3, 3) = 0; % B = 0
step(hVideoPlayer, frame);
end
That easily gives me a red square, but I want to draw a filled red circle at the detected point instead of a square. How can I do that?
You can use the insertShape function. Example:
img = imread('peppers.png');
img = insertShape(img, 'FilledCircle', [150 280 35], ...
'LineWidth',5, 'Color','blue');
imshow(img)
The position parameter is specified as [x y radius].
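For the question's video loop specifically, here is a hedged sketch of how insertShape might be used per frame (hVideoFileReader, hVideoPlayer, X, and Y are the question's variables; since X is a row and Y a column, they swap into [x y] order, and the radius of 5 is an arbitrary choice):
while ~isDone(hVideoFileReader)
    frame = step(hVideoFileReader);
    % ... detection code that produces X (row) and Y (column) ...
    frame = insertShape(frame, 'FilledCircle', [Y X 5], 'Color', 'red');
    step(hVideoPlayer, frame);
end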
EDIT:
Here is an alternative where we manually draw the circular shape (with transparency):
% some RGB image
img = imread('peppers.png');
[imgH,imgW,~] = size(img);
% circle parameters
r = 35; % radius
c = [150 280]; % center
t = linspace(0, 2*pi, 50); % approximate circle with 50 points
% create a circular mask
BW = poly2mask(r*cos(t)+c(1), r*sin(t)+c(2), imgH, imgW);
% overlay filled circular shape by using the mask
% to fill the image with the desired color (for all three channels R,G,B)
clr = [0 0 255]; % blue color
a = 0.5; % blending factor
z = false(size(BW));
mask = cat(3,BW,z,z); img(mask) = a*clr(1) + (1-a)*img(mask);
mask = cat(3,z,BW,z); img(mask) = a*clr(2) + (1-a)*img(mask);
mask = cat(3,z,z,BW); img(mask) = a*clr(3) + (1-a)*img(mask);
% show result
imshow(img)
I'm using the poly2mask function from Image Processing Toolbox to create the circle mask (idea from this post). If you don't have access to this function, here is an alternative:
[X,Y] = ndgrid((1:imgH)-c(2), (1:imgW)-c(1));
BW = (X.^2 + Y.^2) < r^2;
That way you get a solution using core MATLAB functions only (no toolboxes!)
If you have an older version of MATLAB with the Computer Vision System Toolbox installed, you can use vision.ShapeInserter system object.
Thanks @Dima, I have created a ShapeInserter object.
greenColor = uint8([0 255 0]);
hFilledCircle = vision.ShapeInserter('Shape','Circles',...
'BorderColor','Custom',...
'CustomBorderColor', greenColor ,...
'Fill', true, ...
'FillColor', 'Custom',...
'CustomFillColor', greenColor );
...
fc = int32([Y X 7]);
frame = step(hFilledCircle, frame, fc);
I then applied it to the detected point.

Converting code to take RGB image instead of grayscale

I have this code converting a fisheye image into rectangular form, but the code is only able to perform this operation on a grayscale image. Can anybody help convert the code to perform the operation on an RGB image? The code is as follows:
Edit: I have updated the code to perform the interpolation in each color channel, but this seems to distort the output image. See the pictures below.
function imP = FISHCOLOR (imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
imP =zeros(M, N); % initialize the final matrix
for k=1:3 % colors
T = imR(:,:,k);
Ichannel = interp2(T,xR,yR);
imP(:,:,k)= Ichannel; % add k channel
end
SOLVED
Input image (image link)
Grayscale output, what I would like in color (image link)
Try changing these three lines:
[Mr Nr] = size(imR); % size of rectangular image
...
imP = zeros(M, N);
...
imP = interp2(imR, xR, yR); %interpolate (imR, xR, yR);
...to these:
[Mr Nr Pr] = size(imR); % size of rectangular image
...
imP = zeros(M, N, Pr);
...
for dim = 1:Pr
imP(:,:,dim) = interp2(imR(:,:,dim), xR, yR); %interpolate (imR, xR, yR);
end

How to cut the portion and highlight it

Suppose we take any image from the Internet and then copy or move some part of that image to another area inside the same image. Using MATLAB, the image should show where that part was copied/moved from and where it was pasted.
a = imread('obama.jpg');
a = rgb2gray(a);
[x1 y1] = size(a);
b = uint8(imcrop(a, [170 110 200 150]));
[x2 y2] = size(b);
c = uint8(zeros(x1,y1));
for i = 1:x2
for j = 1:y2
c(i+169,j+109) = b(i,j);
end
end
[x3 y3] = size(c)
subplot(1,3,1),imshow(a);
subplot(1,3,2),imshow(b);
subplot(1,3,3),imshow(c);
Code
%%// Input data and params
a = imread('Lenna.png');
a = rgb2gray(a);
src_xy = [300,300]; %% Starting X-Y of the source from where the portion would be cut from
src_dims = [100 100]; %% Dimensions of the portion to be cut
tgt_xy = [200,200]; %% Starting X-Y of the target to where the portion would be put into
%%// Get masks
msrc = false(size(a));
msrc(src_xy(1):src_xy(1)+src_dims(1)-1,src_xy(2):src_xy(2)+src_dims(2)-1)=true;
mtgt = false(size(a));
mtgt(tgt_xy(1):tgt_xy(1)+src_dims(1)-1,tgt_xy(2):tgt_xy(2)+src_dims(2)-1)=true;
%%// If you would like to have a cursor based cut, explore ROIPOLY, GINPUT - e.g. - mask1 = roipoly(a)
mask1 = msrc;
a2 = double(a);
%%// Get crop-mask boundary and dilate it a bit to show it as the "frame" on the original image
a2(imdilate(edge(mask1,'sobel'),strel('disk',2))) = 0;
a2 = uint8(a2);
%%// Display original image with cropped portion being highlighted
figure,imshow(a2);title('Cropped portion highlighted')
figure,imshow(a);title('Original Image')
figure,imshow(mask1);title('Mask that was cropped')
img1 = uint8(bsxfun(@times,double(a),mask1));
figure,imshow(img1);title('Masked portion of image')
%%// Get and display original image with cropped portion being overlayed at the target coordinates
a_final = a;
a_final(mtgt) = a(msrc);
figure,imshow(uint8(a_final));title('Original image with the cut portion being overlayed')
Output
Please note that to use RGB images, you would need to tinker a bit more with the above code.
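For instance, here is a rough sketch of one way that tinkering could look (it reuses src_xy, src_dims, and tgt_xy from above and simply replicates 2-D masks across the three channels; treat it as an illustration rather than a tested drop-in):
a = imread('Lenna.png');                    % keep the RGB image, no rgb2gray
[h, w, ~] = size(a);
msrc = false(h, w);                         % 2-D masks over rows/columns only
msrc(src_xy(1):src_xy(1)+src_dims(1)-1, src_xy(2):src_xy(2)+src_dims(2)-1) = true;
mtgt = false(h, w);
mtgt(tgt_xy(1):tgt_xy(1)+src_dims(1)-1, tgt_xy(2):tgt_xy(2)+src_dims(2)-1) = true;
%%// Overlay the cut portion at the target, channel by channel via replicated masks
a_final = a;
a_final(repmat(mtgt,[1 1 3])) = a(repmat(msrc,[1 1 3]));
%%// Draw the highlighting frame on all three channels
a2 = a;
frame = imdilate(edge(msrc,'sobel'), strel('disk',2));
a2(repmat(frame,[1 1 3])) = 0;
figure, imshow(a2);      title('Cropped portion highlighted (RGB)')
figure, imshow(a_final); title('Original RGB image with the cut portion overlayed')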

MATLAB Image Processing: Bound image by a rectangle

I have an image and I plotted the boundary of the image. Can anyone please tell me how to draw a rectangle on the image by overwriting the boundary pixel values, using MATLAB?
If it is a straight rectangle, just set the values in the matrix:
function Stack1()
im = imread('peppers.png');
x = 10;
y = 20;
w = 40;
h = 50;
im(y:y+h,x,:) = 255;
im(y:y+h,x+w,:) = 255;
im(y,x:x+w,:) = 255;
im(y+h,x:x+w,:) = 255;
figure();imshow(im);
end
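As a side note (not part of either answer here), the same rectangle could also be drawn with insertShape from the Computer Vision Toolbox, as used in the filled-circle answer above:
im = imread('peppers.png');
% [x y width height], same rectangle as the hand-drawn one above
im = insertShape(im, 'Rectangle', [10 20 40 50], 'LineWidth', 1, 'Color', 'white');
figure(); imshow(im);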
Probably you can use this File Exchange submission:
Draw a border around an image