How to smooth a line surface to meet one pixel width? - matlab

I have a small problem smoothing a line.
The image is the result of edge detection with a Sobel filter.
The line has an uneven surface, and each bulge is one pixel wide.
The red circles mark the bulges.
http://www.mathworks.com/help/matlab/creating_plots/smooth-contour-data-with-convolution-filter.html
I tried the approach in that link, but it increased the line width too much.
I need a straight line that stays one pixel wide.
How can I clear up these bulges?
Many thanks.
7/21 update:
The Canny method can generate a detected image with one-pixel-wide edges.
The result of Canny edge detection:
The line is split into 2 parts, and the lower part is shifted by one pixel.
I would like the result to be treated as a single straight line rather than 2 lines, as long as the line width stays within 2~3 pixels.
I tried to smooth the line into a straight line with dilation and erosion.
Canny > Dilate:
Canny > Dilate > Erosion:
The results before and after are the same...
Could anyone give me an idea?
Many thanks again.

If you are working with a simple 2-D, black-and-white image stored as a 0/1 array, and you can guarantee there will always be a straight line running all the way from top to bottom, then this may help -
% recreate the scenario: a vertical line with one-pixel bulges
A = zeros(100,50);
A(:,25) = 1;              % the main line in column 25
A([30:40,80:85],24) = 1;  % bulges one pixel to the left
A([20:35,70:90],26) = 1;  % bulges one pixel to the right
% a crude way: keep only the column(s) with the most white pixels
S = sum(A,1);                  % white-pixel count per column
B = repmat(S==max(S),100,1);   % rebuild the image from that column alone
imshow(B)

Maybe an approach similar to Canny edge detection is what you are looking for. The Canny edge detector uses non-maximum suppression: an edge is kept only at pixels whose gradient magnitude is the maximum among their neighbours along the gradient direction, which in many cases yields one-pixel-wide edges.
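As a rough illustration (in Python, on a hypothetical gradient array), non-maximum suppression along the horizontal direction keeps a pixel only when its gradient magnitude beats both horizontal neighbours - a minimal sketch, not the full Canny pipeline:

```python
def nms_horizontal(grad):
    """Keep a pixel only if its gradient magnitude is a strict local
    maximum along the horizontal direction; suppressed pixels become 0.
    `grad` is a 2-D nested list of gradient magnitudes."""
    h, w = len(grad), len(grad[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            left = grad[r][c - 1] if c > 0 else 0
            right = grad[r][c + 1] if c + 1 < w else 0
            if grad[r][c] > left and grad[r][c] > right:
                out[r][c] = grad[r][c]
    return out

# A blurry vertical edge: the gradient peaks in the middle column,
# so only that column survives - a one-pixel-wide edge.
g = [[1, 3, 1],
     [2, 5, 2],
     [1, 4, 1]]
thin = nms_horizontal(g)
```

The full algorithm compares along the actual gradient direction per pixel, but the thinning effect is the same.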

Related

Remove similar lines provided by Hough transform

The Hough transform gives me several lines, but some of them are very similar for my final target.
For example, in this image I get 5 lines, but I really need just 2 lines.
How can I remove the unnecessary lines?
My code is
image = cv.Canny(image, 200);
lines = cv.HoughLinesP(image,'Threshold',80,'MinLineLength',100,'MaxLineGap',50);
A simple approach would be to look for intersecting lines, but lines can also be parallel and very close together in certain situations.
Any idea?
My crude method was:
use the Canny edge detector
take the first line from houghlines
draw a thick black line over that line in the houghlines input
repeat until you get no output from houghlines
I used it to detect the edges of a card, so I took the four best lines.
I would compute the slope and intercept of the lines and compare them to see whether both are within some tolerance you define. The intercepts should be described in the same coordinate frame, say with the origin at pixel (r, c) = (0, 0). Identical lines can then be merged. The only failure case I can think of is non-contiguous line segments that happen to have the same slope and intercept - those would be merged by this approach. But your image doesn't seem to have that issue.
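A minimal Python sketch of that idea, assuming segments in HoughLinesP's (x1, y1, x2, y2) form and non-vertical lines (the tolerance values are placeholders you would tune):

```python
def merge_similar_lines(lines, slope_tol=0.1, intercept_tol=10.0):
    """Group line segments (x1, y1, x2, y2) whose slope and intercept
    (with the origin at pixel (0, 0)) agree within the tolerances,
    keeping one representative per group. Near-vertical lines would
    need a separate parameterisation; this sketch assumes they don't
    occur."""
    kept = []  # list of (segment, slope, intercept)
    for x1, y1, x2, y2 in lines:
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        if not any(abs(m - km) < slope_tol and abs(b - kb) < intercept_tol
                   for _, km, kb in kept):
            kept.append(((x1, y1, x2, y2), m, b))
    return [seg for seg, _, _ in kept]

# Two nearly identical lines plus one distinct line -> two survive.
lines = [(0, 0, 100, 100), (1, 2, 99, 101), (0, 50, 100, 50)]
merged = merge_similar_lines(lines)
```

For card edges you could additionally sort the survivors by segment length and keep the four best.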

Autonomous seam detection in Images on matlab

I'm trying to detect seams in welding images for an autonomous welding process.
I want to find pixel positions of the detected line (the red line in the desired image) in the original image.
I used the following code and finally removed noise from the image to reach the result below.
clc,clear,clf;
im = imread('https://i.stack.imgur.com/UJcKA.png');
imshow(im);title('Original image'); pause(0.5);
sim = edge(im, 'sobel');
imshow(sim);title('after Sobel'); pause(0.5);
mask = im > 5;
se = strel('square', 5);
mask_s = imerode(mask, se);
mask(mask_s) = false;
mask = imdilate(mask, se);
sim(mask) = false;
imshow(sim);title('after mask');pause(0.5);
sim= medfilt2(sim);
imshow(sim);title('after noise removal')
Unfortunately there is nothing remaining in the image to find the seam perfectly.
Any help would be appreciated.
Download Original image.
You need to make your filter more robust to noise. This can be done by giving it a larger support:
flt = [ones(2,9); zeros(1,9); -ones(2,9)]; % wide horizontal-edge filter
msk = imerode(im > 0, ones(11)); % only object pixels, discarding BG
fim = imfilter(im, flt);
robust = bwmorph((fim > 0.75).*msk, 'skel', inf); % keep only strong pixels
The robust mask looks like:
As you can see, the seam line is well detected, we just need to pick it as the largest connected component:
st = regionprops(bwlabel(robust,8), 'Area', 'PixelList');
[~, mxi] = max([st.Area]); % select the region with the largest area
Now we can fit a second-degree polynomial to the seam:
pp=polyfit(st(mxi).PixelList(:,1), st(mxi).PixelList(:,2), 2);
And here it is over the image:
imshow(im, 'border','tight');hold on;
xx=1:size(im,2);plot(xx,polyval(pp,xx)+2,'r');
Note the +2 Y offset due to filter width.
PS,
You might find this thread relevant.
Shai gives a great answer, but I wanted to add a bit more context about why your noise filtering doesn't work.
Why median filtering doesn't work
Wikipedia suggests that median filtering removes noise while preserving edges, which is why you might have chosen to use it. However, in your case it will almost certainly not work; here's why:
Median filtering slides a window across the image and replaces each central pixel with the median value of the surrounding window; medfilt2 uses a 3x3 window by default. Let's look at a 3x3 block near your line.
A 3x3 block around [212 157] looks like this:
[0 0 0
1 1 1
0 0 0]
The median value is 0! So even though we're in the middle of a line segment, the pixel will be filtered out.
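To see this concretely, here is a small Python sketch of a zero-padded 3x3 median filter (approximating medfilt2's default behaviour) applied to a one-pixel-wide line - the line vanishes entirely:

```python
from statistics import median

def medfilt3(img):
    """3x3 median filter with zero padding, mimicking medfilt2's
    default behaviour on a binary image given as nested lists."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    in_bounds = 0 <= rr < h and 0 <= cc < w
                    vals.append(img[rr][cc] if in_bounds else 0)
            out[r][c] = median(vals)
    return out

# A one-pixel-wide horizontal line: every 3x3 window contains at most
# three 1s out of nine values, so the median is always 0.
img = [[0] * 5,
       [1] * 5,
       [0] * 5]
filtered = medfilt3(img)
```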
The alternative to median filtering
Shai's method for removing noise instead finds the largest connected group of pixels and ignores the smaller groups. If you also want to remove these small groups from your image, Matlab provides a function, bwareaopen, which removes small objects from binary images.
For example, if you replace your line
sim= medfilt2(sim);
with
sim= bwareaopen(sim, 4);
The result is much better:
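For illustration, here is a rough Python equivalent of what bwareaopen does - delete 8-connected components below a minimum pixel count (a sketch on nested lists, not a drop-in replacement):

```python
from collections import deque

def area_open(img, min_area):
    """Remove 8-connected components with fewer than `min_area` pixels,
    mimicking MATLAB's bwareaopen on a binary image (nested lists)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    seen = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if out[r][c] and not seen[r][c]:
                # Flood-fill to collect one connected component.
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            yy, xx = y + dy, x + dx
                            if (0 <= yy < h and 0 <= xx < w
                                    and out[yy][xx] and not seen[yy][xx]):
                                seen[yy][xx] = True
                                q.append((yy, xx))
                if len(comp) < min_area:
                    for y, x in comp:
                        out[y][x] = 0
    return out

# A 5-pixel line survives; two isolated noise pixels are removed.
img = [[1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0],
       [1, 0, 0, 0, 1]]
clean = area_open(img, 4)
```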
Alternative edge detectors
One last note: Shai uses a horizontal gradient filter to find horizontal edges in your image. It works great because your edge is horizontal. If your edge will not always be horizontal, you might want to use another edge detection method. In your original code you use Sobel, but Matlab provides many options, all of which perform better if you tune their thresholds. As an example, in the following image I've highlighted the pixels selected by your code (with the bwareaopen modification) using four different edge detectors.

How can i straighten a hand drawn line in matlab?

I'm working on an image processing project where I need to detect corners. But when I try to detect corners using the corner function, it detects small displacements as corners, as shown.
I've tried different thresholds from 0 to 0.24 and couldn't get good results.
imgskele = bwmorph(imgfill,'thin',Inf);
C = corner(imgspur, 'SensitivityFactor', 0.24);
figure; imshow(imgspur);
hold on;
plot(C(:,1), C(:,2),'bo','MarkerSize',10,'MarkerFaceColor','g');
hold off;
So I'm thinking of adjusting (redrawing) the line to make it a straight line connecting those points.
Edit 1:
Here are the full size original and output images:
The problem you are facing is that the corner function is a Harris corner detector, which finds the corners of filled polygons.
Now, a line can be approximated by a very thin polygon, certainly when pixelated, but that is not perfect, as you have noticed here.
A more robust method is to use something like the Hough transform to find line features in the image. These lines will have intersections, some of which are approximately the corners you want. Others are spurious intersections, because the Hough transform works with infinite lines rather than line segments. You'll need to experiment a bit with what you accept and what you reject: how rounded can a corner be before you no longer call it a corner?
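A small Python sketch of the intersection step, assuming lines in the (rho, theta) normal form that the standard Hough transform produces, where the line is x*cos(theta) + y*sin(theta) = rho:

```python
import math

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersect two lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho. Returns None for
    (near-)parallel lines, which have no useful intersection."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return x, y

# A vertical line x = 10 (theta = 0) and a horizontal line y = 20
# (theta = pi/2) intersect at (10, 20).
pt = hough_intersection(10, 0.0, 20, math.pi / 2)
```

Each pairwise intersection is a candidate corner; you would then filter out the spurious ones (e.g. intersections far from any actual edge pixels).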

Fill an outline which is incomplete

Consider that I have a colored image like this in which the outline is not complete (there are gaps between the lines). I want to be able to fill the area between the lines with one color or another. This is actually a binary image which I got after applying the Canny edge detector to the corresponding grayscale image.
I first tried dilating the image and then eroding it, but the result is not good enough. I want to preserve the thickness of the root.
Any help would be greatly appreciated
Original Image
Image after edge detection and some manual removal of pixels
Using the information in the edge image, I tried to extract pixels of a certain color from the original image. For every white pixel in the edited image, I used a search space in the original image along the same horizontal line. I used different thresholds for R, G and B, and I ended up with this:
I'm not sure what your original image looks like. It would be helpful to see.
You have gaps between the lines because a line in your original image has two edges, one on each side, and the Canny algorithm detects them both. At its heart, the Canny edge detection algorithm applies two Sobel kernels to calculate the gradient: one responding to vertical edges (the horizontal gradient) and one responding to horizontal edges (the vertical gradient).
-1 0 +1
-2 0 +2
-1 0 +1
and
+1 +2 +1
0 0 0
-1 -2 -1
These kernels produce a peak on each side of the line, one positive and one negative. You can exclude one side of the line by excluding the corresponding peak: after taking the gradient in each direction, truncate any values below zero (set them to zero) to remove the second peak, then continue with the rest of the Canny edge detection as usual. This will detect only a single edge for each line instead of the two you are seeing now.
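A minimal Python sketch of that clamping step, using a hand-rolled horizontal Sobel on a toy two-pixel-wide stripe (illustration only; a real implementation would also smooth the image and handle both gradient directions):

```python
def sobel_x(img):
    """Horizontal Sobel gradient over the valid region (no padding),
    on a 2-D nested list."""
    kx = [(-1, 0, 1),
          (-2, 0, 2),
          (-1, 0, 1)]
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for r in range(h - 2):
        for c in range(w - 2):
            out[r][c] = sum(kx[i][j] * img[r + i][c + j]
                            for i in range(3) for j in range(3))
    return out

# A 2-pixel-wide vertical stripe: the left side of the stripe gives a
# positive gradient peak, the right side a negative one.
img = [[0, 0, 1, 1, 0, 0]] * 5
g = sobel_x(img)
# Truncate values below zero -> only one response per stripe remains.
one_sided = [[max(v, 0) for v in row] for row in g]
```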
I'll add a third approach now that I have seen the image. It looks like most of the information is in the green channel.
Green channel image
This image gives you a decent result if you simply apply a threshold.
Thresholded image with a somewhat arbitrary threshold
You can then clean this image up either on its own or by using your edge image. To clean it up with the edge image you produced, remove any white pixels that are more than a certain distance from one of your detected edges (create a Euclidean distance map from your edge image, and set any white pixel farther than a certain distance from an edge to black).
If you are still collecting images you may want to try to position the camera in a way to avoid the bottom of the jar (or whatever this is).
You could attempt a line-scanning approach. Start at one side and scan horizontally. When you hit an edge, assume you have entered a root and start setting pixels to white; when you hit the next edge, assume you are leaving the root and stop. There will be some fringe cases, so you may want to add additional checks, such as limiting the allowed thickness of a root.
You could also do a flood-fill-style algorithm where you take a seed point inside a root and travel up the root, filling it in.
I am not sure how well these would work, as it depends on the image, and I did not test them.
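The line-scanning idea could be sketched in Python like this - a toy version that fills between pairs of edge pixels on each row, with a maximum-thickness guard (real images would need the additional checks mentioned above):

```python
def scanline_fill(edges, max_thickness=10):
    """Fill between pairs of edge pixels on each row: the first edge
    pixel enters a root, the next one leaves it. Spans wider than
    `max_thickness` are assumed to be background gaps and left alone."""
    h, w = len(edges), len(edges[0])
    out = [row[:] for row in edges]
    for r in range(h):
        cols = [c for c in range(w) if edges[r][c]]
        # Pair up edge crossings: (1st, 2nd), (3rd, 4th), ...
        for left, right in zip(cols[::2], cols[1::2]):
            if right - left <= max_thickness:
                for c in range(left, right + 1):
                    out[r][c] = 1
    return out

# Two edge pixels per row delimit the root outline; the gap between
# them gets filled in.
edges = [[0, 1, 0, 0, 1, 0],
         [0, 1, 0, 0, 1, 0]]
filled = scanline_fill(edges)
```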

Techniques for differentiating between circle rectangle and triangle?

What coding techniques would allow me to to differentiate between circles, rectangles, and triangles in black and white image bitmaps?
You could train an Artificial Neural Network to classify the shapes :P
If the noise is low enough to extract curves, approximations can be used: for each shape, select the parameters giving the least error (the method of least squares may help here) and then compare these errors...
If the image is noisy, I would consider the Hough transform - it can be used to detect shapes with a small number of parameters, like circles (it is harder for rectangles and triangles).
Just an idea off the top of my head: scan the (pixel) image line by line, pixel by pixel. When you encounter the first white pixel (assuming the image has a black background), keep its position as a starting point and look at the eight surrounding pixels for the next white pixel. If you find an adjacent second pixel, you can establish a direction vector between those two pixels.
Now repeat this until the direction of your vector changes (or the change exceeds a certain threshold). Keep the last point before the change as the endpoint of your first line and repeat the process for the next line.
Then calculate the angle between the two lines and store it. Now trace the third line and calculate the angle between the 2nd and 3rd lines as well.
If both angles are roughly right angles, you have probably found a rectangle; otherwise you have probably found a triangle. If you can't find any straight line at all, you could conclude that you found a circle.
I know the algorithm is a bit sketchy, but I think (with some refinement) it could work if your image's quality is not too bad (not too much noise, no gaps in the lines, etc.).
You are looking for the Hough transform. For an implementation, try the AForge.NET framework; it includes circle and line Hough transformations.