find edge in image - matlab

How can I detect the edges in an image without using the edge function, using only mathematical operations (matrices, derivatives, divergence, or similar)? In other words, how can I rewrite the edge function myself using the Canny or Sobel algorithm, or any other?
For example:
pink rectangle 256*256
black rectangle 127*127
Answer: Canny Tutorial

You state that you wish to use Canny, Sobel or another algorithm. These can all be used via edge. Try, for example:
BW = edge(I,'canny');
where I is your image matrix. If you are interested in finding out how edge works, type
edit edge
into your command window. You will then get to see MATLAB's own implementation.
You may wish to reimplement edge from scratch, to gain a good understanding of how image processing algorithms work. If so, I would direct you towards the following sources:
The Canny wikipedia page
The Sobel wikipedia page
I personally found this book an excellent reference for getting to grips with the basics of things like filters and edge detectors.
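If you do implement it yourself, a minimal Sobel-style detector needs only a pair of convolutions. Here is a rough sketch, not MATLAB's own implementation; the filename is taken from your example and the 0.5 threshold is just an illustrative choice:
I  = im2double(rgb2gray(imread('iarLe.png')));  % example image from the question
Kx = [-1 0 1; -2 0 2; -1 0 1];                  % horizontal Sobel kernel
Ky = Kx';                                       % vertical Sobel kernel
Gx = conv2(I, Kx, 'same');                      % horizontal gradient
Gy = conv2(I, Ky, 'same');                      % vertical gradient
G  = sqrt(Gx.^2 + Gy.^2);                       % gradient magnitude
BW = G > 0.5 * max(G(:));                       % crude threshold; tune as needed
imshow(BW);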
For your specific example with the rectangles, it is quite possible to use edge to find the edges. The one trick is that you have to convert the RGB image to a grayscale one using rgb2gray. Try, for example:
rgb_image = imread('iarLe.png');
gray_image = rgb2gray(rgb_image);
edge_image = edge(gray_image);
imshow(edge_image);

Related

How can I find the boundary surface in this image

I am new to image processing. I want to find the surface that separates the black and white pixels. Here is the link to the image.
The size of the image is (21,900,900).
https://drive.google.com/file/d/1zUWK0Fb_n6f1JZou5mrUJq0x3h2X8mBK/view?usp=sharing
I tried to use the boundarymask command of MATLAB on one plane of the image, but I am getting noise, and it also works only for 2D images. Please suggest how to find the 3D boundary surface here. Thank you.
This is the output image after applying boundarymask.
Your first step should be to get rid of the noise. Since you have some kind of salt-and-pepper noise, you can do that with a median filter on each 2D slice using medfilt2() in MATLAB. After that you can use an edge detector to find your edge pixels. If you want the surface, you need to loop this over the 3rd dimension of your 3D image. The code will look like this:
for ii = 1:16
    I = imread('image.tif', ii);                      % read slice ii of the stack
    I_bs = boundarymask(I);                           % rough boundary mask for this slice
    I_filt = medfilt2(I_bs, [7 7]);                   % median filter to clean up noise
    boundarysurface(:,:,ii) = edge(I_filt, 'Canny');  % edge pixels of this slice
end
The edge detector I used here is certainly overkill for this easy case, but it was the easiest thing I could think of on short notice. If performance is relevant, let me know and I will give you another approach.

How do I close off edges after Canny edge detection for filling region in Matlab using imfreehand?

I need to close the boundaries of a person that I obtained using a Canny edge detector. My aim is to be able to extract the filled (white) silhouette of the person and then save the image.
I read that imfreehand might be used for freehand drawing, but how would I implement it for this purpose?
(There might be multiple gaps in the boundaries in my datasets, so using imfreehand multiple times might be required.)
Based on the problem, you can use the morphological operator imdilate followed by imerode. imdilate will widen the boundary and make all boundaries connected, but all boundaries become thick. Then use imerode to go back to the original width.
Alternatively, you can use bwmorph(img,'thin',Inf) for the second step.
img  = imdilate(img, strel('disk', 3));   % widen the boundaries so gaps close
img2 = imerode(img, strel('disk', 2));    % erode back towards the original width
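Putting this together with imfill to get the filled silhouette, a rough sketch could look like the following (the structuring-element size and output filename are illustrative, and BW is assumed to be your Canny edge image):
closed = imdilate(BW, strel('disk', 3));   % thicken edges so small gaps close
closed = bwmorph(closed, 'thin', Inf);     % thin back to a one-pixel boundary
filled = imfill(closed, 'holes');          % fill the enclosed silhouette
imshow(filled);
imwrite(filled, 'silhouette.png');         % hypothetical output filename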
You could try using morphological operators such as imfill or bwmorph (with the 'bridge' option):
BW2 = bwmorph(BW,'bridge');

Align blurred and sharp rectangles

I am looking to align (register) a pair of rectangular borders and find the linear transformation between them. One is a blurred and transformed version of the other (unknown blur kernel and a similarity transformation: rotation, translation and scale).
This is the input pair of images:
So far, I've tried registering the pair of images using both mutual information and brightness constancy. Namely, with the imregtform function from MATLAB's image processing toolbox. This is the best result I've been able to obtain (displaying a fused image with the blurred pixels in channels R,B and the sharp in channel G):
This is not bad, but it is not perfect. Note that on the right side the blurriness is not symmetric around the sharp rectangles.
I'm wondering if there is any other, simpler way to do this. Note that I have complete control over the pattern! If anyone has an idea of a better pattern to use for alignment, it would certainly help!
You can try affine registration: resize the bigger image and use DROP (http://www.mrf-registration.net/deformable/index.html). It does more sophisticated things, like discrete optimization using patch-based matching, followed by B-spline interpolation to deform the images.
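If you want to stay within MATLAB, a typical imregtform setup for this kind of pair looks roughly like the following sketch (the filenames are assumptions; the question used a 'similarity' transform, while the suggestion above corresponds to 'affine'):
fixed  = im2double(imread('sharp.png'));      % hypothetical filenames, assumed grayscale
moving = im2double(imread('blurred.png'));
[optimizer, metric] = imregconfig('multimodal');        % mutual-information metric
tform = imregtform(moving, fixed, 'affine', optimizer, metric);
registered = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
imshowpair(fixed, registered, 'falsecolor');            % fused overlay for inspection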

Headlights detection using Difference of Gaussian (DoG)

I am developing a project for detecting vehicle headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using Difference of Gaussians (DoG): I convolve the image with Gaussian blurs using two different sigmas and then subtract the two filtered images to find edges. My result is shown below:
Now my problem is to find a method in MATLAB to circle the round edges, such as the car headlights and even street lights, and ignore other edges. If you have any suggestions, please tell me.
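(For reference, the DoG step described in the question could be sketched along these lines; the sigma values and the filename are only illustrative assumptions:)
I   = im2double(rgb2gray(imread('night_scene.png')));  % hypothetical filename
G1  = imgaussfilt(I, 1);                               % blur with a small sigma
G2  = imgaussfilt(I, 3);                               % blur with a larger sigma
DoG = G1 - G2;                                         % difference of Gaussians
imshow(DoG, []);                                       % edges show up as strong responses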
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots using a simple threshold. Then you can apply some blob detection to filter out any small blobs (e.g. streetlights), and proceed from there with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following:
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
This is a section of the source image:
And this is the thresholded overlay:
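A rough MATLAB sketch of those three steps plus a blob filter might look like this (the threshold of 200, the minimum blob size, and the filename are assumptions used to illustrate the idea):
I     = rgb2gray(imread('night_scene.png'));       % hypothetical filename
Iblur = imgaussfilt(I, 2);                         % Gaussian blur to suppress noise
BW    = Iblur > 200;                               % simple intensity threshold
BW    = bwareaopen(BW, 50);                        % drop small blobs (noise, distant lights)
stats = regionprops(BW, 'Centroid', 'EquivDiameter');
imshow(I); hold on;
for k = 1:numel(stats)                             % circle each remaining bright blob
    viscircles(stats(k).Centroid, stats(k).EquivDiameter/2, 'Color', 'r');
end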
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.

How to detect curves in a binary image?

I have a binary image and I want to detect/trace curves in it. I don't know anything about them (coordinates, angles, etc.). Can anyone guide me on how to start? Suppose I have this image:
I want to separate out the curves from other lines. I am only interested in curved lines and their parameters. I want to store the information about the curves (in an array) for later use.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
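For the circular-curve case in particular, the circular Hough transform from the links above is available in MATLAB as imfindcircles; a minimal sketch (the radius range and filename are assumptions):
BW = imread('curves.png') > 0;                   % hypothetical binary input image
[centers, radii] = imfindcircles(BW, [10 60]);   % search for circles of radius 10-60 px
imshow(BW); viscircles(centers, radii, 'Color', 'b');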
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image: when you encounter a black pixel, apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to your image to connect the shapes.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the endpoints of the curve and calculating how far the other points are from the line defined by those endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio between the length of the object and the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your images have angled shapes, which you wish to separate from curves, you might inspect the directional gradient along your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too high, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For a MATLAB implementation of skeletonization and for finding curve endpoints, see the MATLAB Image Processing Toolbox documentation.
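A sketch of the endpoint-distance test described above (the filename, the skeletonization step, and the 1.2 cut-off are assumptions):
BW = imread('curves.png') > 0;                    % hypothetical binary input
BW = bwmorph(BW, 'skel', Inf);                    % thin thick strokes to one-pixel skeletons
CC = bwconncomp(BW);                              % one component per connected shape
for k = 1:CC.NumObjects
    comp = false(size(BW));
    comp(CC.PixelIdxList{k}) = true;
    ep = bwmorph(comp, 'endpoints');              % endpoints of this stroke
    [er, ec] = find(ep);
    if numel(er) < 2, continue; end               % closed loops have no endpoints
    d     = hypot(er(end) - er(1), ec(end) - ec(1));  % straight-line endpoint distance
    ratio = nnz(comp) / d;                        % object length / endpoint distance
    if ratio < 1.2
        fprintf('component %d: roughly a straight line\n', k);
    else
        fprintf('component %d: curved\n', k);
    end
end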
1) Read a book on Image Analysis
2) Scan for a black pixel; when you find one, look for neighbouring pixels that are also black, store their locations, then make them white. This collects the points of one object and removes it from the image. Keep repeating this until there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then computing the correlation coefficient. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
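A tiny sketch of that correlation test (here comp is assumed to be the logical mask of one extracted object, and the 0.99 cut-off is illustrative):
[r, c] = find(comp);                       % pixel coordinates of one object
if numel(unique(c)) > 1                    % corrcoef is undefined for a vertical line
    R = corrcoef(c, r);
    isLine = abs(R(1,2)) > 0.99;           % correlation near +/-1 => nearly straight
else
    isLine = true;                         % perfectly vertical stroke
end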
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1-8 (or 0 to 7) to each pixel, saying at which location in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labeling and then calculates the chain codes for each component curve. By looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes!
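A rough way to compute such a chain code in MATLAB is to trace the boundary and map each step to a direction index (the filename, starting direction, and lookup table below are illustrative):
BW = imread('curves.png') > 0;                   % hypothetical binary input
[r0, c0] = find(BW, 1);                          % a pixel on the object boundary
B = bwtraceboundary(BW, [r0 c0], 'E');           % ordered boundary coordinates
d = diff(B);                                     % steps between consecutive pixels
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % Freeman directions 0..7
code = zeros(size(d, 1), 1);
for k = 1:size(d, 1)
    code(k) = find(ismember(dirs, d(k, :), 'rows')) - 1;
end
% Smoothly varying codes suggest a curve; long constant runs suggest straight segments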
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).