Autonomous seam detection in images in MATLAB

I'm trying to detect seams in welding images for an autonomous welding process.
I want to find pixel positions of the detected line (the red line in the desired image) in the original image.
I used the following code and finally removed noise from the image to reach the result below.
clc,clear,clf;
im = imread('https://i.stack.imgur.com/UJcKA.png');
imshow(im);title('Original image'); pause(0.5);
sim = edge(im, 'sobel');
imshow(sim);title('after Sobel'); pause(0.5);
mask = im > 5;
se = strel('square', 5);
mask_s = imerode(mask, se);
mask(mask_s) = false;
mask = imdilate(mask, se);
sim(mask) = false;
imshow(sim);title('after mask');pause(0.5);
sim= medfilt2(sim);
imshow(sim);title('after noise removal')
Unfortunately, after the noise removal there is nothing left in the image from which to find the seam.
Any help would be appreciated.
Download Original image.

You need to make your filter more robust to noise. This can be done by giving it a larger support:
filter = [ones(2,9); zeros(1,9); -ones(2,9)];
msk = imerode(im > 0, ones(11)); % keep only object pixels, discarding the background
fim = imfilter(im, filter);
robust = bwmorph((fim > 0.75) .* msk, 'skel', inf); % keep only strong responses
The robust mask looks like:
As you can see, the seam line is well detected; we just need to pick it out as the largest connected component:
st = regionprops(bwlabel(robust,8), 'Area', 'PixelList');
[ma mxi] = max([st.Area]); % select the region with the largest area
Now we can fit a polynomial (2nd degree) to the seam:
pp=polyfit(st(mxi).PixelList(:,1), st(mxi).PixelList(:,2), 2);
And here it is over the image:
imshow(im, 'border','tight');hold on;
xx=1:size(im,2);plot(xx,polyval(pp,xx)+2,'r');
Note the +2 Y offset due to filter width.
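If you need the seam's pixel positions themselves (what the question asks for), you can sample the fit at every column. A small follow-on sketch reusing pp from above (my addition; the rounding is optional):
xx = 1:size(im, 2);
yy = round(polyval(pp, xx) + 2); % +2: the same Y offset as in the plot
seamPositions = [xx(:), yy(:)]; % [column, row] pairs along the seam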
PS,
You might find this thread relevant.

Shai gives a great answer, but I wanted to add a bit more context about why your noise filtering doesn't work.
Why median filtering doesn't work
Wikipedia suggests that median filtering removes noise while preserving edges, which is why you might have chosen to use it. However, in your case it will almost certainly not work. Here's why:
Median filtering slides a window across the image. In each area, it replaces the central pixel with the median value of the surrounding window. medfilt2 uses a 3x3 window by default. Let's look at a 3x3 block near your line.
A 3x3 block around pixel [212 157] looks like this:
[0 0 0
1 1 1
0 0 0]
The median value is 0! So even though we're in the middle of a line segment, the pixel will be filtered out.
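You can reproduce this with a tiny experiment (my own illustration):
bw = false(5, 9);
bw(3, :) = true;          % a one-pixel-thick horizontal line
filtered = medfilt2(bw);  % default 3x3 window
nnz(filtered)             % returns 0: the entire line is filtered away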
The alternative to median filtering
Shai's method for removing noise instead finds the largest connected group of pixels and ignores the smaller groups. If you also want to remove these small groups from your image, MATLAB provides the function bwareaopen, which removes small objects from binary images.
For example, if you replace your line
sim= medfilt2(sim);
with
sim= bwareaopen(sim, 4);
the result is much better.
Alternative edge detectors
One last note: Shai uses a horizontal gradient filter to find horizontal edges in your image. It works great because your edge is horizontal. If your edge will not always be horizontal, you might want to use another edge detection method. In your original code you use Sobel, but MATLAB provides many options, all of which perform better if you tune their thresholds. As an example, in the following image I've highlighted the pixels selected by your code (with the bwareaopen modification) using four different edge detectors.
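For reference, such a comparison can be generated along the following lines (a sketch; each detector is left at its automatic threshold and should be tuned per image):
im = imread('https://i.stack.imgur.com/UJcKA.png');
if size(im, 3) == 3, im = rgb2gray(im); end
methods = {'sobel', 'prewitt', 'log', 'canny'};
for k = 1:numel(methods)
    bw = edge(im, methods{k}); % automatic threshold for each method
    bw = bwareaopen(bw, 4);    % the same small-object removal as above
    subplot(2, 2, k); imshow(bw); title(methods{k});
end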


Detecting strongest points on text

I need to find text areas on a natural image.
I = rgb2gray(imread('image-name.jpg'));
points = detectHarrisFeatures(I);
imshow(I); hold on;
plot(points);
The version of the code above retrieves all of the detected strongest points.
When I change the line that starts with "plot" like this:
[m,n] = size(points.Location);
plot(points.selectStrongest(int64((m*2)/3)));
I get fewer noisy points than above, but in various situations I still need to reduce the noisy points further. The output figure was:
Input image is on the left side and Output image is on the right side
As you can see, there are still noisy points outside the rectangle (red lines) area. (The rectangle lines were added by me in Photoshop; the output is the same without the red lines.)
The main issue is that I need a rectangle around the perspective-distorted text region, like this (red rectangle on the image):
Desired output with rectangle
By finding this rectangle, I can apply an affine transform to the image to correct the perspective issue and make it ready for the OCR process.
The interest point density in noisy regions looks low compared to the point-density in other regions. By density, I mean the number of interest-points per unit area. Assuming this observation holds in general, it is possible to filter out the noisy regions.
I don't have MATLAB, so the code is in OpenCV.
As I mentioned in a comment, I initially thought a median filter would work, but when I tried it, it didn't. So I tried adaptive thresholding, because in this implementation it effectively performs a density calculation and rejects less-dense regions. Please see the comments in the code for further clarification.
/* load image as grayscale */
Mat im = imread("yEEy9.jpg", 0);
/* find interest points: using FAST here */
vector<KeyPoint> keypoints;
FAST(im, keypoints, 15);
/* mark interest-point pixels with value 255 in a blank image */
Mat goodfeatures = Mat::zeros(im.rows, im.cols, CV_8U);
for (KeyPoint p: keypoints)
{
    goodfeatures.at<unsigned char>(p.pt) = 255;
}
/* density filtering using adaptive thresholding:
   compute a threshold for each pixel as the mean value of the
   blocksize x blocksize neighborhood of (x,y), minus c */
int blocksize = 15, c = 7;
Mat bw;
adaptiveThreshold(goodfeatures, bw, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, blocksize, c);
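For MATLAB users, the same density idea can be sketched without OpenCV (my rough equivalent; it uses a simple box count instead of adaptiveThreshold, and the count threshold of 5 is illustrative):
I = rgb2gray(imread('image-name.jpg'));   % file name from the question
pts = detectHarrisFeatures(I);            % any interest-point detector works
mask = false(size(I));
loc = double(round(pts.Location));        % [x y] -> column, row
mask(sub2ind(size(I), loc(:,2), loc(:,1))) = true;
density = conv2(double(mask), ones(15), 'same'); % points per 15x15 block
dense = mask & (density >= 5);            % keep points in dense neighborhoods
imshow(I); hold on;
[r, c] = find(dense);
plot(c, r, 'g.');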
We can't detect a bounding rectangle from the printed text lines, as the lines may not cover the entire page area, or the line detection itself may be improper since we have not yet done perspective corrections.
So I suggest an eased-out approach to the problem:
Detect all four page edge lines, which will give a good estimate of the page's rotation in the table's plane (or camera roll). Correct the image for rotation first.
I guess not much correction will be required for camera yaw and tilt, as one will not shoot a page from a high angle, say 45 degrees, and at 5 to 10 degrees of yaw/tilt the characters will still be recognizable. Moreover, the difference in width between the top and bottom edges, and between the left and right edges, can be used to estimate a correction factor for tilt and yaw, easing the detection algorithm's thresholds.
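A minimal sketch of the first step (mine, with illustrative parameters): estimate the dominant page-edge angle with a Hough transform and derotate. The rotation sign depends on which edges dominate, so it is worth verifying per image:
I = rgb2gray(imread('image-name.jpg'));   % file name from the question
bw = edge(I, 'canny');
[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 4);                 % ideally the four page edges
lines = houghlines(bw, theta, rho, peaks);
roll = median([lines.theta]);             % dominant edge orientation
I2 = imrotate(I, -roll, 'bilinear', 'crop');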

How to solve over-segmentation in watershed

My input image is:
The output image is:
The expected output is something like this:
Some of the ellipse-like structures are merged with the rectangle, and I'm unable to separate the individual labels to get the ellipses.
The algorithm used is watershed:
clear; close all;
I = imread('Sub.png');
I = rgb2gray(I);
figure; imshow(I)
I2 = imtophat(I, strel('square', 45));
figure; imshow(I2)
% Alpha=.047;
% h = fspecial('motion', 10, 5);
% w=gausswin(I2,Alpha) % you'll have to play with N and alpha
% I2 = imfilter(I2,h,'same','symmetric'); % something like these options
level = .047;
BW = im2bw(I2,level);
D = -bwdist(~BW,'chessboard');
D(~BW) = -Inf;
L = watershed(D);
imshow(label2rgb(L,'jet','w'))
Ultimate opening code:
ImageSource = imread('cameraman.tif');
N = 20; % maximum structuring-element size (not specified in the original)
ImTmp = ImageSource;
ImResidue = zeros(size(ImageSource));
ImIndicator = zeros(size(ImageSource));
ImValues = zeros(size(ImageSource));
for k = 1:N
    se = strel('square', k); % grow the structuring element with the scale
    ImOp = imopen(ImageSource, se);
    ImDiff = imabsdiff(ImOp, ImTmp);
    idx = ImDiff > ImResidue;     % pixels whose residue is the largest so far
    ImResidue(idx) = ImDiff(idx); % keep the maximal residue per pixel
    ImIndicator(idx) = k;         % remember the scale at which it occurred
    ImValues(idx) = ImOp(idx);    % and the opened value at that scale
    ImTmp = ImOp;
end
You have to use a watershed with markers if you want something accurate, but it's going to be trickier. By default, the basic watershed over-segments because it uses each local minimum as a marker.
Therefore, you have to preprocess your image a little in order to increase the separation between the objects you want to segment, and then use markers to guide your watershed.
[EDIT according to your EDIT] If you want just the little structures between the vertebrae, then I would recommend performing a small erosion to increase the gap between them, followed by an ultimate opening. The structures you want will disappear at small radii, while the vertebrae will need bigger ones.
Don't forget to use the markers on the gradient image.
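To make that concrete, here is a minimal marker-controlled watershed sketch in MATLAB (my own, with crude illustrative marker choices; the markers actually used are described in EDIT 2 below):
I = imread('Sub.png');
if size(I, 3) == 3, I = rgb2gray(I); end
g = imgradient(I);                                 % flood the gradient, not the image
fg = imerode(imbinarize(I), strel('disk', 3));     % crude inner (object) markers
bg = ~imdilate(imbinarize(I), strel('disk', 11));  % crude outer (background) markers
g2 = imimposemin(g, fg | bg);                      % force minima only at the markers
L = watershed(g2);
imshow(label2rgb(L, 'jet', 'w'));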
[EDIT 2, preliminary results] I was curious about your problem, so I gave it a try. Instead of going after the small regions between the vertebrae (the ones you want to segment), I tried to first segment the vertebrae themselves (what you want being between them).
Here is what I did:
1. Small opening (square of order 1; a square is faster and works here because your patient is well oriented in the image, otherwise use a disk or hexagon) in order to increase the gap between the vertebrae and their neighborhood.
2. Area opening (surface 23, but it does not really matter) in order to flatten the different zones, i.e. erase the peaks.
3. Area closing (surface 23, but it does not really matter) in order to flatten the zones again, i.e. fill the holes. See the image result at this point: everything looks smoother, but the different boundaries/rims are still intact.
4. Ultimate opening (UO).
5. Thresholding according to the ultimate opening results (residues, values and indicators). See the vertebrae approximation. I don't have the threshold values anymore (I've deleted my code), but you can take a look at the UO results and recover them. However, I still have the UO results if you want them.
6. Opening (disk of order 7) in order to erase all the artifacts and false positives (the vertebrae being big).
7. Same operations as in step 5 in order to approximate the small patterns you want to segment. See the results.
8. Small erosion of the result of step 7 in order to get the outer markers (between the vertebrae). I complete the outer markers with the boundary of a big dilation (disk of order 11) of the result of step 6. Here are the markers I get.
9. Watershed with the computed markers; here is the preliminary result.
The patterns you want to segment lie between the vertebrae, so I guess this result narrows down the region of interest a lot.
Does it work for you?
I cannot share the code, but I guess you should be able to find everything you need in MATLAB.
You can improve this result by detecting the rectangular shapes.
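One way to flag the rectangular shapes (a sketch; both cutoffs below are illustrative): a rectangle nearly fills its own bounding box, so its Extent (region area divided by bounding-box area) is close to 1.
stats = regionprops(L, 'Extent', 'Area');            % L is the watershed label matrix
isRect = [stats.Extent] > 0.85 & [stats.Area] > 100; % illustrative thresholds
rectLabels = find(isRect);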

How to find edge from dark line to grey smeared region

I am trying to detect the edge from the black horizontal line to the gray-smeared foreground.
The desired edge/result is slightly marked red.
What have I tried so far :
My approach was to use the standard Chan-Vese segmentation combined with several preprocessing methods such as Gaussian blurring, a maximum filter, or morphological operators like erosion. However, when I initialize the level-set function at the lower part of the image, the contour gets stuck right before the desired edge.
Due to noise that I can't remove without destroying important information in the image, simple methods like Sobel or Prewitt filtering might fail.
Another approach of mine was to search column-wise for the maximum/minimum intensity and mark the darkest pixel per column.
As you can guess, this fails too, because the edge I am looking for is not the only part of the image with dark pixels, which makes this method very error-prone.
Edit
Snakes does not help either.
The active contour, marked in blue, simply goes over the edge, and at the left and right the contour gets stuck. The code I tried was the function Snake2D(I,P,Options) taken from here.
Here is the original image if you would like to help me.
I think your approach of going through the columns and finding the extremum is probably the easiest.
The one problem you have is distinguishing the two main maxima. For this, you can apply a rough smoothing to find the middle between the two maxima (blue line in the image below). You can then take only the lower part, which is the one you are interested in, and find the maximum of that part.
In a final step, just add up the two indices.
Result:
Could go like this:
ib = imread('LHkm2.png'); %Read image
sz = size(ib); %get dimensions
for i = 1:sz(2)
[~, ind_mid(i)] = max(smooth(-double(ib(:, i)), 130));%First round
line_to_smooth = ib(ind_mid(i):end, i);%Get line with one maximum
[~, ind(i)] = min(smooth(double(line_to_smooth), 10));%Second round
ind(i) = ind(i) + ind_mid(i);%Add indices to get final position
end
imshow(ib,[]);
hold on;
plot(ind_mid, 'LineWidth', 3);
plot(ind, 'LineWidth', 3);
Note: you can of course smooth the final line, just like any other graph, to get rid of bumps, like this:
ind = smooth(ind, 10)
where 10 is your smoothing window (the higher, the broader; see here).

Subpixel edge detection for almost vertical edges

I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution would be around 600 x 1000.
I came across a comment by Mark Ransom here which mentions edge detection algorithms for vertical edges. I haven't come across any yet. Would such an algorithm be useful in my case, given that the edge isn't strictly a straight line? It will always be a vertical edge, though. I want it to be accurate to at least 1/100th of a pixel, and I also want access to the sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino. But this does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do this:
Apply Canny/Sobel in MATLAB and get the edges of this image (note that they won't form a continuous line). Then somehow interpolate these Sobel edges and get the coordinates at sub-pixel accuracy. Is that possible?
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'StartPoint', [150 50 10 150])
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x_0 = 149.6 pix for your first image.
However, I doubt you will be able to achieve a subpixel accuracy of 1/100th of a pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels at 255). Since this clips the real atan profile, the fit is biased. If you have control over the experiments, I suggest you redo them with a smaller exposure time, for instance.
There are not many points on the transition, so there is not much information about where the transition is. Typically, your resolution will be the square root of the width of the atan (or whatever shape you prefer). In your case this limits the subpixel resolution to 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose to use this projection method, then to increase the accuracy you should look for a way to correct this tilt before projecting. This won't improve your accuracy by several orders of magnitude, though.
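If the tilt ends up mattering, one can skip the projection and refine each row independently; here is a minimal sketch of that per-row idea (my own, not part of the projection approach above), using a three-point parabolic peak fit on the horizontal gradient:
Img = double(imread('bQsu5.png'));
if size(Img, 3) == 3, Img = mean(Img, 3); end
g = abs(conv2(Img, [1 0 -1], 'same'));       % horizontal gradient magnitude
subpix = nan(size(Img, 1), 1);
for r = 1:size(Img, 1)
    [~, c] = max(g(r, :));
    c = min(max(c, 2), size(Img, 2) - 1);    % keep the 3-point stencil in range
    y1 = g(r, c-1); y2 = g(r, c); y3 = g(r, c+1);
    den = y1 - 2*y2 + y3;                    % parabola curvature
    if den ~= 0
        subpix(r) = c + (y1 - y3) / (2*den); % vertex of the fitted parabola
    else
        subpix(r) = c;
    end
end
figure; imshow(Img, []); hold on;
plot(subpix, 1:numel(subpix), 'r');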
Best,
There is a problem with your image. At the pixel level, it seems like there are four interlaced subimages (odd and even rows and columns). Look at this zoomed area close to the edge.
In order to avoid this artifact, I have simply taken the even rows and columns of your image and computed the subpixel edges. Finally, I look for the best-fitting straight line, using the function clsq, whose code is on this page:
%load image
url='http://i.stack.imgur.com/bQsu5.png';
image = imread(url);
imageEvenEven = image(1:2:end,1:2:end);
imshow(imageEvenEven, 'InitialMagnification', 'fit');
% subpixel detection
threshold = 25;
edges = subpixelEdges(imageEvenEven, threshold);
visEdges(edges);
% compute fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c n] = clsq(A,2);
y = [1,200];
x = -(n(2)*y+c) / n(1);
hold on;
plot(x,y,'g');
When executing this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image was scaled by 1/2 when taking only the even rows and columns, so the resulting coordinates must be scaled back.
Besides, you can try the other three subimages (imageEvenOdd, imageOddEven and imageOddOdd) and combine the four straight lines to obtain the best solution.
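For completeness, the four interlaced subimages can be extracted like this (a sketch following the answer's naming, which treats the first row/column as "even"):
imageEvenEven = image(1:2:end, 1:2:end); % as used above
imageEvenOdd  = image(1:2:end, 2:2:end);
imageOddEven  = image(2:2:end, 1:2:end);
imageOddOdd   = image(2:2:end, 2:2:end);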

Segmenting a grayscale image

I am having trouble achieving the correct segmentation of a grayscale image:
The ground truth, i.e. what I would like the segmentation to look like, is this:
I am most interested in the three components within the circle. Thus, as you can see, I would like to segment the top image into three components: two semi-circles, and a rectangle between them.
I have tried various combinations of dilation, erosion, and reconstruction, as well as various clustering algorithms, including k-means, isodata, and mixture of gaussians--all with varying degrees of success.
Any suggestions would be appreciated.
Edit: here is the best result I've been able to obtain. This was obtained using an active contour to segment the circular ROI, and then applying isodata clustering:
There are two problems with this:
The white halo around the bottom-right cluster, belonging to the top-left cluster
The gray halo around both the top-right and bottom-left cluster, belonging to the center cluster.
Here's a starter...
Use a circular Hough transform to find the circular part. For that, I first threshold the image locally:
im=rgb2gray(imread('Ly7C8.png'));
imbw = thresholdLocally(im,[2 2]); % threshold locally with a 2x2 window
% preparing to find the circle
props = regionprops(imbw,'Area','PixelIdxList','MajorAxisLength','MinorAxisLength');
[~,indexOfMax] = max([props.Area]);
approximateRadius = props(indexOfMax).MajorAxisLength/2;
radius = round(approximateRadius); % could also try a range: approximateRadius-1:approximateRadius+1
%find the circle using Hough trans.
h = circle_hough(edge(imbw), radius,'same');
[~,maxIndex] = max(h(:));
[i,j,k] = ind2sub(size(h), maxIndex);
center.x = j; center.y = i;
figure;imagesc(im);imellipse(gca,[center.x-radius center.y-radius 2*radius 2*radius]);
title('Finding the circle using Hough Trans.');
Select only what's inside the circle:
[x,y] = meshgrid(1:size(im,2),1:size(im,1)); % x: column index, y: row index
z = (x-center.x).^2+(y-center.y).^2;
f = (z<=radius^2);
im = im.*uint8(f);
EDIT:
Look for a place to start thresholding the image by examining its histogram: find the first local maximum, and iterate from there until two separate segments are found, using bwlabel:
p = hist(im(im>0), 1:255);
p = smooth(p, 5);
[pks, locs] = findpeaks(p);
bw = bwlabel(im > locs(1));
i = 0;
while numel(unique(bw)) < 3
    bw = bwlabel(im > locs(1) + i);
    i = i + 1;
end
imagesc(bw);
The middle part can now be obtained by taking the two labeled parts out of the circle; what is left is the middle part (plus some of the halo):
bw2 = (bw < 1) & f; % background pixels that lie inside the circle
but after some median filtering we get something more reasonable:
bw2 = medfilt2(medfilt2(bw2));
and together we get:
imagesc(bw+3*bw2);
The last part is a real "quick and dirty" job; I'm sure that with the tools you've already used you'll get better results...
One can also obtain an approximate result using the watershed transform. This is the watershed of the inverted image, watershed(255-I). Here is an example result:
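A minimal sketch of this suggestion, assuming I is the grayscale input:
L = watershed(255 - I); % flood the inverted image
imshow(label2rgb(L, 'jet', 'w', 'shuffle'));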
Another simple method is to perform a morphological closing on the original image with a disk structuring element (one can perform multiscale closings for granulometries) and then obtain the full circle. After this, extracting the circle and the components within it is easier.
se = strel('disk', 3);
Iclo = imclose(I, se); % this closes the open circular cells
Ithresh = Iclo > 170; % this threshold can be located automatically from histogram modes (if you know your cell structure a priori)
Icircle = bwareaopen(Ithresh, 50); % remove small noise components in the background
Ithresh2 = I > 185; % this again needs a simple histogram