Orientational Canny Edge Detection - matlab

I want to detect edges using the Canny method. In the end I want two edge maps: one for the horizontal and one for the vertical direction.
In MATLAB this can be achieved by using the Sobel or Prewitt operators with an extra direction argument, but for Canny we do not have this option.
E = edge(I,'Sobel','horizontal')
Any idea how to extract both horizontal and vertical edges, separately, by using Canny?

There is no way to do this using the built-in edge function. However, Canny edge detection uses the angles from the Sobel operator, and it is very easy to reproduce these values.
1. Start with an image. I'll use a built-in demo image.
A = im2double(rgb2gray(imread('peppers.png')));
2. Get the Canny edges
A_canny = edge(A, 'Canny');
3. Sobel operator -- We can't use the built-in implementation (edge(A_filter, 'Sobel')) because we want the edge angles, not just the edge locations, so we implement our own operator.
a. Gaussian filter. This is a preprocessing step for Canny, so we should probably reproduce it here.
A_filter = imgaussfilt(A);
b. Convolution to find oriented gradients
%These filters measure the difference in values between vertically or horizontally adjacent pixels.
%Effectively, this finds vertical and horizontal gradients.
vertical_filter = [-1 0 1; -2 0 2; -1 0 1];
horizontal_filter = [-1 -2 -1; 0 0 0; 1 2 1];
A_vertical = conv2(A_filter, vertical_filter, 'same');
A_horizontal = conv2(A_filter, horizontal_filter, 'same');
c. Calculate the angles
A_angle = atan(A_vertical./A_horizontal); % atan, not arctan, is MATLAB's arctangent
4. Get the angle values at the edge locations
A_canny_angles = nan(size(A));
A_canny_angles(A_canny) = A_angle(A_canny);
5. Choose the angles that you are interested in
angle_tolerance = 22.5/180*pi;
target_angle = 0;
A_target_angle = A_canny_angles >= target_angle*pi/180 - angle_tolerance & ...
                 A_canny_angles <= target_angle*pi/180 + angle_tolerance;
So if I'm looking for horizontal lines, my target angle would be zero. The image below illustrates steps 1, 2, 4 and 5. The final result of the extracted horizontal lines is shown in the bottom right. You can see that they are not exactly horizontal because I used such a large angle tolerance window. This is a tunable parameter depending on how exactly you want to hit your target angle.
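For the complementary vertical map, note that atan folds all angles into [-pi/2, pi/2], so near-vertical edges end up at both +90° and -90°. A minimal sketch of that selection, reusing A_canny_angles and angle_tolerance from above:
% Sketch: vertical edges are the ones whose angle magnitude is within
% the tolerance of 90 degrees (both signs, because atan wraps there).
% NaN entries (non-edge pixels) compare as false, so they drop out.
A_vertical_map = abs(A_canny_angles) >= pi/2 - angle_tolerance;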

Related

Detect endpoints of a line

I want to detect the points shown in the image below:
I have done this so far:
[X,map] = rgb2ind(img,0.0);
img = ind2gray(X,map); % Convert indexed to grayscale
level = graythresh(img); % Compute an appropriate threshold
img_bw = im2bw(img,level);% Convert grayscale to binary
mask = zeros(size(img_bw));
mask(2:end-2,2:end-2) = 1;
img_bw(mask<1) = 1;
%invert image
img_inv = 1 - img_bw;
% find blobs
img_blobs = bwmorph(img_inv,'majority',10);
% figure, imshow(img_blobs);
[rows, columns] = size(img_blobs);
for col = 1 : columns
    thisColumn = img_blobs(:, col);
    topRow = find(thisColumn, 1, 'first');
    bottomRow = find(thisColumn, 1, 'last');
    img_blobs(topRow : bottomRow, col) = true;
end
inverted = imcomplement(img_blobs);
ed = edge(inverted,'canny');
figure, imshow(ed),title('inverted');
Now how to proceed to get the coordinates of the desired position?
The top point is obviously the white pixel with the highest ordinate, which is easily obtained.
The bottom point is not so well defined. What you can do is:
- follow the peak edges until you reach a local minimum, on the left and on the right. That gives you a line segment, which you can intersect with the vertical through the top point;
- if you know the peak width, try every pixel on the vertical through the top point, downward, and stop when it has no left or right neighbors at a distance equal to the peak width (see the sketch after this list);
- as above, but stop when the distance between the left and right neighbors exceeds a threshold.
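Example (a sketch of the second option; peak_width, top_row and top_col are assumed to be known, and bw is the binary image):
% peak_width, top_row, top_col are assumed known inputs here.
% Walk down the vertical through the top point and stop once the blob
% has no neighbors at half the nominal peak width on either side.
half_w = round(peak_width / 2);
bottom_row = top_row;
for r = top_row : size(bw, 1)
    left = bw(r, max(top_col - half_w, 1));
    right = bw(r, min(top_col + half_w, size(bw, 2)));
    if ~left && ~right
        break; % the peak has narrowed past its nominal width
    end
    bottom_row = r;
end
bottom_point = [top_col, bottom_row]; % [x y]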
In this particular case, you could consider using houghlines in MATLAB. By setting the required Theta and MinLength parameter values, you should be able to get the two vertical lines parallel to your peak. You can then use the endpoints of the vertical lines to get the point at the bottom.
Here is a sample code.
[H,theta,rho] = hough(bw,'Theta',5:1:30); % this is the angle range
P = houghpeaks(H,500,'NHoodSize',[11 11]);
lines = houghlines(bw,theta,rho,P,'FillGap',10,'MinLength',300);
Here is a complete description of how houghlines actually works.
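To turn that output into coordinates, here is a minimal sketch (assuming lines comes from the houghlines call above):
% houghlines returns a struct array whose point1/point2 fields hold
% the [x y] endpoints of each detected segment.
endpoints = zeros(2*numel(lines), 2);
for k = 1:numel(lines)
    endpoints(2*k-1, :) = lines(k).point1;
    endpoints(2*k, :)   = lines(k).point2;
end
% The bottom of the peak is near the lowest endpoint (largest y,
% since image rows grow downward).
[~, idx] = max(endpoints(:, 2));
bottom_point = endpoints(idx, :);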

Edge detection at a certain degree using canny method

I am using MATLAB and I want to use the Canny method for edge detection, but I need only the edges that are diagonal, i.e. at an angle of 40 to 50 degrees. How can I do that?
You need to write the Canny edge detector code yourself (you can find lots of implementations on the internet). In the second step you would calculate the gradient magnitudes and gradient directions; there you need to filter out the angles and the corresponding magnitudes.
Hope this helps you.
I've answered a similar question about how to use MATLAB's edge function to find oriented edges with Canny (Orientational Canny Edge Detection), but I also wanted to try out a custom implementation as suggested by Avijit.
Canny Edge Detection steps:
1. Start with an image. I'll use a built-in demo image.
A = im2double(rgb2gray(imread('peppers.png')));
2. Gaussian filter
A_filter = imgaussfilt(A);
3. Sobel edge detection -- We can't use the built-in implementation (edge(A_filter, 'Sobel')) because we want the edge angles, not just the edge locations, so we implement our own operator.
a. Convolution to find oriented gradients
%These filters measure the difference in values between vertically or horizontally adjacent pixels.
%Effectively, this finds vertical and horizontal gradients.
vertical_filter = [-1 0 1; -2 0 2; -1 0 1];
horizontal_filter = [-1 -2 -1; 0 0 0; 1 2 1];
A_vertical = conv2(A_filter, vertical_filter, 'same');
A_horizontal = conv2(A_filter, horizontal_filter, 'same');
b. Calculate the angles
A_angle = atan(A_vertical./A_horizontal); % atan, not arctan, is MATLAB's arctangent
4. At this step, we traditionally bin edges by orientation (0°, 45°, 90°, 135°), but since you only want diagonal edges between 40 and 50 degrees, we will retain those edges and discard the rest.
% I lowered the thresholds to include more pixels
% But for your original post, you would use 40 and 50
lower_angle_threshold = 22.5;
upper_angle_threshold = 67.5;
diagonal_map = false(size(A)); % zeros(size(A),'logical') is not valid MATLAB
diagonal_map(A_angle > (lower_angle_threshold*pi/180) & A_angle < (upper_angle_threshold*pi/180)) = 1;
5. Perform non-max suppression on the remaining edges -- This is the most difficult portion to adapt to different angles. To find the exact edge location, you compare two adjacent pixels: for 0° edges, compare east-west; for 45°, the south-west pixel to the north-east pixel; for 90°, compare north-south; and for 135°, the north-west pixel to the south-east pixel.
Since your desired angle is close to 45°, I just compare along one diagonal, but if you wanted 10° to 20°, for example, you'd have to put some more thought into these comparisons.
% Gradient magnitude, needed for non-max suppression; computed from
% the two oriented gradients above.
A_sobel = sqrt(A_vertical.^2 + A_horizontal.^2);
non_max = A_sobel;
[n_rows, n_col] = size(A);
%For every pixel
for row = 2:n_rows-1
    for col = 2:n_col-1
        %If we are at a diagonal edge
        if(diagonal_map(row, col))
            %Compare the two diagonal neighbours, (row-1,col-1) and (row+1,col+1)
            if(A_sobel(row, col) < A_sobel(row-1, col-1) || ...
               A_sobel(row, col) < A_sobel(row+1, col+1))
                non_max(row, col) = 0;
            end
        else
            non_max(row, col) = 0;
        end
    end
end
6. Edge tracking with hysteresis -- Decide whether weak edge pixels are close enough (I use a 3x3 window) to strong edge pixels. If they are, include them in the edge. If not, they are noise; remove them.
high_threshold = 0.5; %These thresholds are tunable parameters
low_threshold = 0.01;
weak_edge_pixels = non_max > low_threshold & non_max < high_threshold;
strong_edge_pixels = non_max > high_threshold;
final = strong_edge_pixels;
for row = 2:n_rows-1
    for col = 2:n_col-1
        window = strong_edge_pixels(row-1:row+1, col-1:col+1);
        if(weak_edge_pixels(row, col) && any(window(:)))
            final(row, col) = 1;
        end
    end
end
Here are my results.
As you can see, discarding the other edge orientations has a very negative effect on the hysteresis step, because fewer strong pixels are detected. Adjusting high_threshold would help somewhat. Another option would be to do steps 5 and 6 using all edge orientations, and then use diagonal_map to extract the diagonal edges, as sketched below.
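A sketch of that alternative, reusing diagonal_map from step 4:
% Run the built-in detector over all orientations, then keep only the
% pixels that were flagged as diagonal.
A_canny_all = edge(A, 'Canny');
A_diagonal = A_canny_all & diagonal_map;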

How to colour the edges after using sobel filter?

I am using a Sobel filter for edge detection. How can I illustrate the gradient direction with color coding, for example horizontal edges in blue and vertical edges in yellow?
Thank you.
Since you can specify whether you want horizontal or vertical edges detected (check here), you could perform two filtering operations (one horizontal and the other vertical), save each resulting image, and then concatenate them to form a final, 3-channel RGB image.
The RGB color code for yellow is [1 1 0] and that of blue is [0 0 1], so in your case the vertical edge image will occupy the first two channels, whereas the horizontal edge image will occupy the last channel.
Example:
clear
clc
close all
A = imread('circuit.tif');
[r,c,~] = size(A);
EdgeH = edge(A,'Sobel','Horizontal');
EdgeV = edge(A,'Sobel','Vertical');
%// Arrange the binary images to form a RGB color image.
FinalIm = zeros(r,c,3,'uint8');
FinalIm(:,:,1) = 255*EdgeV;
FinalIm(:,:,2) = 255*EdgeV;
FinalIm(:,:,3) = 255*EdgeH;
figure;
subplot(1,2,1)
imshow(A)
subplot(1,2,2)
imshow(FinalIm)
Output:

Transform Image using Roll-Pitch-Yaw angles (Image rectification)

I am working on an application where I need to rectify an image taken from a mobile camera platform. The platform measures roll, pitch and yaw angles, and I want to make it look like the image is taken from directly above, by some sort of transform from this information.
In other words, I want a perfect square lying flat on the ground, photographed from afar with some camera orientation, to be transformed, so that the square is perfectly symmetrical afterwards.
I have been trying to do this through OpenCV(C++) and Matlab, but I seem to be missing something fundamental about how this is done.
In Matlab, I have tried the following:
%% Transform perspective
img = imread('my_favourite_image.jpg');
R = R_z(yaw_angle)*R_y(pitch_angle)*R_x(roll_angle);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
Where R_z/y/x are the standard rotational matrices (implemented with degrees).
For some yaw-rotation, it all works just fine:
R = R_z(10)*R_y(0)*R_x(0);
Which gives the result:
If I try to rotate the image by the same amount about the X- or Y- axes, I get results like this:
R = R_z(10)*R_y(0)*R_x(10);
However, if I rotate by 10 degrees divided by some huge number, it starts to look OK. But then again, this is a result that has no research value whatsoever:
R = R_z(10)*R_y(0)*R_x(10/1000);
Can someone please help me understand why rotating about the X- or Y-axes makes the transformation go wild? Is there any way of solving this without dividing by some random number and other magic tricks? Is this maybe something that can be solved using Euler parameters of some sort? Any help will be highly appreciated!
Update: Full setup and measurements
For completeness, the full test code and initial image has been added, as well as the platforms Euler angles:
Code:
%% Transform perspective
function [] = main()
img = imread('some_image.jpg');
R = R_z(0)*R_y(0)*R_x(10);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
end
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
    R = [cosd(psi) -sind(psi) 0;
         sind(psi)  cosd(psi) 0;
         0          0         1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
    R = [cosd(theta)  0 sind(theta);
         0            1 0;
         -sind(theta) 0 cosd(theta)];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
    R = [1 0         0;
         0 cosd(phi) -sind(phi);
         0 sind(phi) cosd(phi)];
end
The initial image:
Camera platform measurements in the BODY coordinate frame:
Roll: -10
Pitch: -30
Yaw: 166 (angular deviation from north)
From what I understand, the yaw angle is not directly relevant to the transformation. I might, however, be wrong about this.
Additional info:
I would like to specify that the environment in which the setup will be used contains no lines (oceanic photo) that can reliably be used as a reference (the horizon will usually not be in the picture). Also, the square in the initial image is merely used as a measure to see if the transformation is correct, and will not be there in a real scenario.
So, this is what I ended up doing: I figured that unless you are actually dealing with 3D images, rectifying the perspective of a photo is a 2D operation. With this in mind, I replaced the z-axis values of the transformation matrix with zeros and ones, and applied a 2D Affine transformation to the image.
Rotation of the initial image (see initial post) with measured Roll = -10 and Pitch = -30 was done in the following manner:
R_rot = R_y(-60)*R_x(10);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ];
This implies a rotation of the camera platform to a virtual camera orientation where the camera is placed above the scene, pointing straight downwards. Note the values used for roll and pitch in the matrix above.
Additionally, if rotating the image so that is aligned with the platform heading, a rotation about the z-axis might be added, giving:
R_rot = R_y(-60)*R_x(10)*R_z(some_heading);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ];
Note that this does not change the actual image - it only rotates it.
As a result, the initial image rotated about the Y- and X-axes looks like:
The full code for doing this transformation, as displayed above, was:
% Load image
img = imread('initial_image.jpg');
% Full rotation matrix. Z-axis included, but not used.
R_rot = R_y(-60)*R_x(10)*R_z(0);
% Strip the values related to the Z-axis from R_rot
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ];
% Generate transformation matrix, and warp (matlab syntax)
tform = affine2d(R_2d);
outputImage = imwarp(img,tform);
% Display image
figure(1), imshow(outputImage);
%*** Rotation Matrix Functions ***%
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
    R = [cosd(psi) -sind(psi) 0;
         sind(psi)  cosd(psi) 0;
         0          0         1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
    R = [cosd(theta)  0 sind(theta);
         0            1 0;
         -sind(theta) 0 cosd(theta)];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
    R = [1 0         0;
         0 cosd(phi) -sind(phi);
         0 sind(phi) cosd(phi)];
end
Thank you for the support, I hope this helps someone!
I think you can derive the transformation this way:
1) Suppose you have four 3D points A(-1,-1,0), B(1,-1,0), C(1,1,0) and D(-1,1,0). You can take any four points, as long as no three of them are collinear. They are not related to the image.
2) You have the transformation matrix, so you can set your camera by multiplying the point coordinates by the transformation matrix. You'll get the 3D coordinates relative to the camera position/direction.
3) You need to project your points onto the screen plane. The simplest way is to use an orthographic projection (simply ignore the depth coordinate). At this stage you have the 2D projections of the transformed points.
4) Once you have two sets of 2D point coordinates (the set from step 1 without the third coordinate, and the set from step 3), you can compute the homography matrix in the standard way.
5) Apply the inverse homography transformation to your image (see the sketch below).
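A minimal MATLAB sketch of these five steps, assuming R is the 3x3 rotation matrix built from the measured angles and that the Image Processing Toolbox's fitgeotrans is available:
% R is assumed to be the rotation matrix from the measured angles.
% Step 1: four coplanar points (no three collinear), unrelated to the image.
pts3d = [-1 -1 0; 1 -1 0; 1 1 0; -1 1 0]';
% Step 2: rotate them into the camera frame.
cam = R * pts3d;
% Step 3: orthographic projection -- simply drop the depth coordinate.
proj = cam(1:2, :)';
ref = pts3d(1:2, :)'; % the step-1 points without the third coordinate
% Step 4: estimate the homography between the two 2D point sets.
tform = fitgeotrans(proj, ref, 'projective');
% Step 5: warping with this tform maps the camera view back to the
% fronto-parallel view, i.e. it applies the inverse homography.
outputImage = imwarp(img, tform);
figure, imshow(outputImage);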
You need to estimate a homography. For an off-the-shelf Matlab solution, see function vgg_H_from_x_lin.m from http://www.robots.ox.ac.uk/~vgg/hzbook/code/ .
For the theory dig into a Computer Vision textbook, such as the one available freely at http://szeliski.org/Book/ or in Chapter 3 of http://programmingcomputervision.com/downloads/ProgrammingComputerVision_CCdraft.pdf
Maybe my answer is not correct due to my misunderstanding of the camera parameters, but I was wondering whether the yaw/pitch/roll angles are relative to the position of your object. I used the general rotation formula, and my code is below (the rotation functions R_x, R_y and R_z were copied from yours; I didn't paste them here).
close all
file='http://i.stack.imgur.com/m5e01.jpg'; % original image
I=imread(file);
R_rot = R_x(-10)*R_y(-30)*R_z(166);
R_rot = inv(R_rot);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ];
T = maketform('affine',R_2d);
transformedI = imtransform(I,T);
figure, imshow(I), figure, imshow(transformedI)
The result:
This indicates that you still need some rotation operation to get the 'correct' alignment in your mind (though not necessarily the correct position in the camera's mind).
So I change R_rot = inv(R_rot); to R_rot = inv(R_rot)*R_x(-5)*R_y(25)*R_z(180);, and now it gave me:
That looks more like what you want.
Thanks.

Matlab manipulate edges

I'm working on an image processing project. I have a grayscale image and detected edges with Canny edge detection. Now I would like to manipulate the result by filtering the unnecessary edges. I would like to keep the edges which are close to horizontal and delete edges which are close to vertical.
How can I delete the close to vertical edges?
One option is to use half of a Sobel operator. The full algorithm finds horizontal and vertical edges, then combines them. You are only interested in horizontal edges, so just compute that part (which is Gy in the Wikipedia article).
You may also want to threshold the result to get a black and white image instead of shades of gray.
You could apply this technique to the original grayscale image or the result of the Canny edge detection.
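Example (a minimal sketch of the half-Sobel idea; the threshold factor is just a starting point to tune):
% Convolve with only the horizontal-edge kernel (Gy) and threshold the
% response to get a black-and-white map of near-horizontal edges.
% The 0.25 factor is an assumption to adjust for your images.
Gy = conv2(double(img), [-1 -2 -1; 0 0 0; 1 2 1], 'same');
horizontal_edges = abs(Gy) > 0.25 * max(abs(Gy(:)));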
It depends on how computationally expensive it is allowed to be. One easy way would be:
(1) Convolve your image with Sobel filters (this gives Dx, Dy).
For each canny-edge-pixel:
(2) Normalize (Dx, Dy), so that at every pixel you have the local gradient direction (which is perpendicular to the edge direction).
(3) Compute the inner product with the edge direction you want to remove (in your case (0,1), i.e. vertical).
(4) If the absolute value of the inner product is smaller than some threshold, remove the pixel; the gradient of an edge running along (0,1) is perpendicular to (0,1), so its inner product is small.
Example:
img = ...;
canny_img = ...;
removeDir = [0;1];
% convolute with sobel masks
sobelX = [1, 0, -1; 2, 0, -2; 1, 0, -1];
sobelY = sobelX';
DxImg = conv2(img,sobelX,'same');
DyImg = conv2(img,sobelY,'same');
% for each canny-edge-pixel:
for lin = 1:size(img,1) % <-> y
    for col = 1:size(img,2) % <-> x
        if canny_img(lin,col)
            % normalize direction
            normDir = [DxImg(lin,col); DyImg(lin,col)];
            normDir = normDir / norm(normDir,2);
            % inner product
            innerP = normDir' * removeDir;
            % remove edge?
            if abs(innerP) < cos(45/180*pi) % 45° threshold
                canny_img(lin,col) = 0;
            end
        end
    end
end
Given your requirements, there is a lot of room to optimize this; for example, the pixel loop can be vectorized as sketched below.
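A vectorized sketch of the same filter, without the loop:
% The inner product of the normalized gradient with removeDir = [0;1]
% is just the normalized Dy component.
mag = hypot(DxImg, DyImg);
innerP = DyImg ./ max(mag, eps); % guard against division by zero
canny_img(canny_img & abs(innerP) < cos(45/180*pi)) = 0;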