I am trying to detect the edge from the black horizontal line to the gray-smeared foreground.
The desired edge/result is faintly marked in red.
What I have tried so far:
My approach was to use the standard Chan-Vese segmentation combined with several preprocessing methods such as Gaussian blurring, a maximum filter, or morphological operators like erosion. However, when I initialize the level set function at the lower part of the image, the contour gets stuck right before the desired edge.
Because of noise that I cannot remove without destroying important information in the image, simple methods like Sobel or Prewitt filtering are likely to fail.
Another approach of mine was to search column-wise for the maximum/minimum intensity and mark the darkest pixel per column (a rough sketch of this idea is shown below).
As you can guess, this fails too, because the edge I am looking for is not the only part of the image with dark pixels, which makes this method very error-prone.
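For reference, this is roughly what that column-wise attempt looks like (just a sketch; the filename is a placeholder for my actual image):
I = double(imread('image.png'));      % placeholder filename
[~, darkestRow] = min(I, [], 1);      % row index of the darkest pixel in each column
imshow(I, []); hold on;
plot(1:size(I, 2), darkestRow, 'r.'); % overlay the detected positions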
Edit
Snakes do not help either.
The active contour, marked in blue, simply goes over the edge, and at the left and right the contour gets stuck. The code I tried was the function Snake2D(I,P,Options) taken from here.
Here is the original image if you would like to help me.
I think your approach of going down each column and finding the darkest pixel is probably the easiest.
The one problem you have is distinguishing the two main maxima. For this, you can apply a rough smoothing to find the middle between the two maxima (blue line in the image below). You can then take only the lower part, which is the one you are interested in, and find the maximum of that part.
In a final step, just add up the two indices.
Result:
Could go like this:
ib = imread('LHkm2.png'); % read image
sz = size(ib);            % get dimensions
for i = 1:sz(2)
    [~, ind_mid(i)] = max(smooth(-double(ib(:, i)), 130)); % first round: rough smoothing
    line_to_smooth = ib(ind_mid(i):end, i);                % keep only the lower part with one maximum
    [~, ind(i)] = min(smooth(double(line_to_smooth), 10)); % second round: fine smoothing
    ind(i) = ind(i) + ind_mid(i);                          % add indices to get the final position
end
imshow(ib, []);
hold on;
plot(ind_mid, 'LineWidth', 3);
plot(ind, 'LineWidth', 3);
Note: You can of course smooth the final line just like any other graph to get rid of bumps, like this:
ind = smooth(ind, 10)
where 10 is your smoothing window (the higher, the broader; see here).
Related
I'm trying to detect seams in welding images for an autonomous welding process.
I want to find pixel positions of the detected line (the red line in the desired image) in the original image.
I used the following code and finally removed noise from the image to reach the result below.
clc,clear,clf;
im = imread('https://i.stack.imgur.com/UJcKA.png');
imshow(im);title('Original image'); pause(0.5);
sim = edge(im, 'sobel');
imshow(sim);title('after Sobel'); pause(0.5);
mask = im > 5;
se = strel('square', 5);
mask_s = imerode(mask, se);
mask(mask_s) = false;
mask = imdilate(mask, se);
sim(mask) = false;
imshow(sim);title('after mask');pause(0.5);
sim = medfilt2(sim);
imshow(sim);title('after noise removal')
Unfortunately, there is not enough left in the image to find the seam reliably.
Any help would be appreciated.
Download Original image.
You need to make your filter more robust to noise. This can be done by giving it a larger support:
filter = [ones(2,9); zeros(1,9); -ones(2,9)];
msk = imerode(im > 0, ones(11)); % only object pixels, discarding BG
fim = imfilter(im, filter);
robust = bwmorph((fim > 0.75).*msk, 'skel', inf); % get only strong pixels
The robust mask looks like:
As you can see, the seam line is well detected; we just need to pick it out as the largest connected component:
st = regionprops(bwlabel(robust,8), 'Area', 'PixelList');
[ma mxi] = max([st.Area]); % select the region with the largest area
Now we can fit a polynomial (2nd degree) to the seam:
pp=polyfit(st(mxi).PixelList(:,1), st(mxi).PixelList(:,2), 2);
And here it is over the image:
imshow(im, 'border','tight');hold on;
xx=1:size(im,2);plot(xx,polyval(pp,xx)+2,'r');
Note the +2 Y offset due to filter width.
PS,
You might find this thread relevant.
Shai gives a great answer, but I wanted to add a bit more context about why your noise filtering doesn't work.
Why median filtering doesn't work
Wikipedia suggests that median filtering removes noise while preserving edges, which is why you might have chosen to use it. However, in your case it will almost certainly not work; here's why:
Median filtering slides a window across the image. At each position, it replaces the central pixel with the median value of the surrounding window. medfilt2 uses a 3x3 window by default. Let's look at a 3x3 block near your line.
A 3x3 block around [212 157] looks like this:
[0 0 0
1 1 1
0 0 0]
The median value is 0! So even though we're in the middle of a line segment, the pixel will be filtered out.
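To see this concretely, here is a tiny, self-contained illustration (the values are made up, not taken from your image):
A = zeros(5);    % blank patch
A(3, :) = 1;     % a 1-pixel-thick horizontal "seam"
B = medfilt2(A); % default 3x3 window
disp(B)          % all zeros: the line has been filtered out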
The alternative to median filtering
Shai's method for removing noise instead finds the largest connected group of pixels and ignores smaller groups. If you also wanted to remove these small groups from your image, MATLAB provides the function bwareaopen, which removes small objects from binary images.
For example, if you replace your line
sim= medfilt2(sim);
with
sim = bwareaopen(sim, 4);
The result is much better
Alternative edge detectors
One last note: Shai uses a horizontal gradient filter to find horizontal edges in your image. It works great because your edge is horizontal. If your edge will not always be horizontal, you might want to use another edge detection method. In your original code you use Sobel, but MATLAB provides many options, all of which perform better if you tune their thresholds. As an example, in the following image I've highlighted the pixels selected by your code (with the bwareaopen modification) using four different edge detectors.
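If you want to experiment yourself, a rough sketch of trying several of the built-in detectors could look like this (each detector also accepts an optional threshold argument worth tuning; this is only a sketch, not the exact code behind the comparison image):
sim_sobel   = edge(im, 'sobel');
sim_prewitt = edge(im, 'prewitt');
sim_log     = edge(im, 'log');
sim_canny   = edge(im, 'canny');
% each binary result can then be cleaned up the same way as before:
sim_canny   = bwareaopen(sim_canny, 4);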
I have a picture which shows the magnitude of some value in terms of color, like this one:
The higher the magnitude, the redder it looks. However, only a few points at the edge have very high values, and most of the points have much lower values. If a colorbar with equal intervals is used, the picture just looks blue throughout and the values in most areas cannot be distinguished.
Is it possible to set a colorbar that has, say, exponentially increasing intervals (or others) so that the center part can also show different colors?
Here is one way; it's not quite right, but it's close. The idea is to make your own log-spaced colour map. I do it by using linear interpolation between log-spaced break points. It could probably be improved to use logarithmic interpolation (and maybe have better end cases):
First I simulate some data (open to suggestions for simulating data that better illustrates this example):
M = exp(rand(50)*10)
Then plot it (ignore the figure(2), that's just to make this match the image later)
n = 64
figure(2)
imagesc(M)
colormap(jet(n))
colorbar
Now create a log-spaced colour map:
linMap = jet(n);
breaks = round(0:n/5:n)';
breaks(1) = 1;
cBreaks = linMap(breaks,:);
idx = 2.^(6:-1:1)';
idx(end) = 0; %// this part is hacky
logMap = interp1(((idx)/64)*max(M(:)),flipud(cBreaks),linspace(0,max(M(:)),64)); %// based on http://stackoverflow.com/questions/17230837/how-to-create-a-custom-colormap-programatically
figure(1)
imagesc(M)
colormap(logMap)
colorbar
results in
As you can see, the data remain unchanged and the data inspector still gives you back the same values, but the colour bar is pretty much on a log scale now. I'd be interested to see what this looks like on your data.
I'm trying to find a starting point, but I can't seem to find the right answer. I'd be very grateful for some guidance. I also don't know the proper terminology, hence the title.
I took an image of a bag with a black background behind it.
And I want to extract the bag, similar to this.
And if possible, find the center, like this.
Essentially, I want to be able to extract the blob of pixels and then find the center point.
I know these are two separate questions, but I figured if someone can do the latter, then they can do the first. I am using MATLAB, but would like to write my own code and not use their image processing functions, like edge(). What methods/algorithms can I use? Any papers/links would be nice (:
Well, assuming that your image only consists of a black background and a bag inside it, a very common way to perform what you're asking is to threshold the image, then find the centroid of all of the white pixels.
I did a Google search and the closest thing that I can think of that matches what you want looks like this:
http://ak.picdn.net/shutterstock/videos/3455555/preview/stock-footage-single-blank-gray-shopping-bag-loop-rotate-on-black-background.jpg
This image is stored as RGB even though its content is grayscale, so we're going to convert it to grayscale first. I'm assuming you can't use any built-in MATLAB functions, so rgb2gray is out. You can still implement it yourself though, as rgb2gray simply applies the standard ITU-R BT.601 luma weights (0.299, 0.587, 0.114).
Once we read in the image, you can threshold the image and then find the centroid of all of the white pixels. That can be done using find to determine the non-zero row and column locations and then you'd just find the mean of both of them separately. Once we do that, we can show the image and plot a red circle where the centroid is located. As such:
im = imread('http://ak.picdn.net/shutterstock/videos/3455555/preview/stock-footage-single-blank-gray-shopping-bag-loop-rotate-on-black-background.jpg');
%// Convert colour image to grayscale
im = double(im);
im = 0.299*im(:,:,1) + 0.587*im(:,:,2) + 0.114*im(:,:,3);
im = uint8(im);
thresh = 30; %// Choose threshold here
%// Threshold image
im_thresh = im > thresh;
%// Find non-zero locations
[rows,cols] = find(im_thresh);
%// Find the centroid
mean_row = mean(rows);
mean_col = mean(cols);
%// Show the image and the centroid
imshow(im); hold on;
plot(mean_col, mean_row, 'r.', 'MarkerSize', 18);
When I run the above code, this is what we get:
Not bad! Now your next concern is the case of handling multiple objects. As you have intelligently determined, this code only detects one object. For the case of multiple objects, we're going to have to do something different. What you need to do is identify all of the objects in the image by an ID. This means creating a matrix of IDs, where each pixel in this matrix denotes which object that pixel belongs to. After that, we iterate through each object ID and find each centroid. This is performed by creating a mask for each ID, finding the centroid of that mask and saving the result. This is what is known as finding the connected components.
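Just to make the ID matrix idea concrete, here is a tiny example that uses the built-in bwlabel purely for illustration (the post linked below shows how to compute the same thing yourself):
BW = logical([1 1 0 0; 1 1 0 0; 0 0 0 1; 0 0 1 1]);
L = bwlabel(BW, 8)
% L =
%      1     1     0     0
%      1     1     0     0
%      0     0     0     2
%      0     0     2     2
Every pixel of the first blob gets ID 1, every pixel of the second blob gets ID 2, and the background stays 0.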
regionprops is the most common way to do this in MATLAB, but as you want to implement this yourself, I will defer you to my post I wrote a while ago on how to find the connected components of a binary image:
How to find all connected components in a binary image in Matlab?
Mind you, that algorithm is not the most efficient one so it may take a few seconds, but I'm sure you don't mind the wait :) So let's deal with the case of multiple objects now. I also found this image on Google:
We'd threshold the image as normal, then what's going to be different is performing a connected components analysis, then we iterate through each label and find the centroid. However, an additional constraint that I'm going to enforce is that we're going to check the area of each object found in the connected components result. If it's less than some number, this means that the object is probably attributed to quantization noise and we should skip this result.
Therefore, take the code in the post referenced above and place it into a function called conncomptest.m with a function header such that:
function B = conncomptest(A)
where A is the input binary image and B is the matrix of IDs. You would then do something like this:
im = imread('http://cdn.c.photoshelter.com/img-get2/I0000dqEHPhmGs.w/fit=1000x750/84483552.jpg');
im = double(im);
im = 0.299*im(:,:,1) + 0.587*im(:,:,2) + 0.114*im(:,:,3);
im = uint8(im);
thresh = 30; %// Choose threshold here
%// Threshold image
im_thresh = im > thresh;
%// Perform connected components analysis
labels = conncomptest(im_thresh);
%// Find the total number of objects in the image
num_labels = max(labels(:));
%// Find centroids of each object and show the image
figure;
imshow(im);
hold on;
for idx = 1 : num_labels
    %// Find the idx-th object mask
    mask = labels == idx;
    %// Find the area
    arr = sum(mask(:));
    %// If the area is less than a threshold,
    %// don't process this object
    if arr < 50
        continue;
    end
    %// Else, find the centroid normally
    %// Find non-zero locations
    [rows, cols] = find(mask);
    %// Find the centroid
    mean_row = mean(rows);
    mean_col = mean(cols);
    %// Plot the centroid on the image
    plot(mean_col, mean_row, 'r.', 'MarkerSize', 18);
end
We get:
I have no intention of detracting from Ray's (@rayryeng) excellent advice and, as usual, beautifully crafted, reasoned and illustrated answer; however, I note that you are interested in solutions other than MATLAB and actually want to develop your own code, so I thought I would provide some additional options for you.
You could look to the excellent ImageMagick, which is installed in most Linux distros and available for OS X, other good operating systems and Windows. It includes a "Connected Components" method and if you apply it to this image:
like this at the command-line:
convert bags.png -threshold 20% \
-define connected-components:verbose=true \
-define connected-components:area-threshold=600 \
-connected-components 8 -auto-level output.png
The output will be:
Objects (id: bounding-box centroid area mean-color):
2: 630x473+0+0 309.0,252.9 195140 srgb(0,0,0)
1: 248x220+0+0 131.8,105.5 40559 srgb(249,249,249)
7: 299x231+328+186 507.5,304.8 36620 srgb(254,254,254)
3: 140x171+403+0 458.0,80.2 13671 srgb(253,253,253)
12: 125x150+206+323 259.8,382.4 10940 srgb(253,253,253)
8: 40x50+339+221 357.0,248.0 1060 srgb(0,0,0)
which shows 6 objects, one per line, and gives the bounding box, centroid and mean colour of each. So, the 3rd line means a box 299 pixels wide by 231 pixels tall, with its top-left corner 328 pixels across from the top-left of the image and 186 pixels down from it.
If I draw in the bounding boxes, you can see them here:
The centroids are also listed for you.
The output image from the command above looks like this, with each shape shaded in a different shade of grey. Note that the darkest one has come out black, so it is VERY hard to see - nearly impossible :-)
If, as you say, you wish to look at writing your own connected component code, you could look at my code in another answer of mine... here
Anyway, I hope this helps and you see it just as an addition to the excellent answer Ray has already provided.
I am struggling to find a good contour detection function that would count the number of contours in the bw images that I have processed using some previous tools. As you can see, my profile picture is an example of such images.
In this image, ideally, I wish to have a function which counts four closed contours.
I don't mind if it also detects the really tiny contours in between, or the entire shape itself, as extra contours. As long as it counts the medium-sized ones, I can fix the rest by applying an area threshold. My problem is that any function I have tried detects only one contour - the entire shape - as it cannot separate it into the sub-contours that are connected to one another.
Any suggestions?
Here is my shot at this, although your question might get closed because it's off-topic, too broad or a possible duplicate. Anyhow, I propose another way to count the number of contours. You could also do it using bwboundaries, as demonstrated in the link provided by @knedlsepp in the possible duplicate. Just for the sake of it, here is another way.
The idea is to apply a morphological closing to your image and actually count the number of enclosed surfaces instead of contours. That might end up being the same thing, but I think it's easier to visualize surfaces.
Since the shapes in your image look like circles (kind of...), the structuring element used to close the image is a disk. The size (here 5) is up to you, but for the image you provided it's fine. After that, use regionprops to locate the image regions (here the blobs) and count them, which comes back to counting contours, I guess. You can use the Area parameter to filter out shapes based on their area. Here I ask the function to provide centroids in order to plot them.
Whole code:
clear
clc
close all
%// Read, threshold and clean up the image
Im = im2bw(imread('ImContour.png'));
Im = imclearborder(Im);
%// Apply disk structuring element to morphologically close the image.
%// Play around with the size to alter the output.
se = strel('disk',5);
Im_closed = imclose(Im,se);
%// Find centroids of circle-ish shapes. You can also get the area to filter
%// out those you don't want.
S = regionprops(~Im_closed,'Centroid','Area');
%// Remove the outer border of the image (1st output of regionprops).
S(1) = [];
%// Make array with centroids and show them.
Centro = vertcat(S.Centroid);
imshow(Im)
hold on
scatter(Centro(:,1),Centro(:,2),40,'filled')
And the output:
So as you can see, the algorithm detected 5 regions; play around with the parameters a bit and you will see which ones to change to get the desired output of 4.
Have fun!
I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution would be around 600 X 1000.
I came across a comment by Mark Ransom here which mentions edge detection algorithms for vertical edges. I haven't come across any yet. Would this be useful in my case (since the edge isn't strictly a straight line)? It will always be a vertical edge, though. I want it to be accurate to at least 1/100th of a pixel, and I also want access to these sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino, but it does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do this:
Apply Canny/Sobel in MATLAB and get the edges of this image (note that it won't be a continuous line). Then somehow interpolate these Sobel edges and get the coordinates at sub-pixel accuracy. Is that possible? Something like the rough sketch below is what I have in mind.
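(Just a sketch of the idea, not a tested solution: it assumes the edge is roughly vertical and refines the Sobel gradient peak in each row with a parabolic fit.)
I  = double(imread('bQsu5.png'));                  % the first image posted
Gx = imfilter(I, fspecial('sobel')', 'replicate'); % horizontal gradient
xs = zeros(size(I, 1), 1);
for r = 1:size(I, 1)
    g = abs(Gx(r, :));
    [~, c] = max(g);                       % integer-pixel edge location in this row
    xs(r) = c;
    if c > 1 && c < numel(g)
        d = g(c-1) - 2*g(c) + g(c+1);      % curvature of the parabola through the peak
        if d ~= 0
            xs(r) = c + (g(c-1) - g(c+1)) / (2*d); % sub-pixel offset from the parabola vertex
        end
    end
end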
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'Startpoint', [150 50 10 150])
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x_0 = 149.6 pix for your first image.
However, I doubt you will be able to achieve a subpixel accuracy of 1/100th of pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels at 255). Since this cuts off the real atan profile, the fit is biased. If you have control over the experiments, I suggest you redo them, for instance with a smaller exposure time.
There are not that many points on the transition, so there is not that much information on where the transition is. Typically, your resolution will be the square root of the width of the atan (or whatever shape you prefer). In your case this limits the sub-pixel resolution to about 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose this projection method, you should look for a way to correct the tilt before projecting in order to increase the accuracy. This won't improve your accuracy by several orders of magnitude, though.
Best,
There is a problem with your image. At the pixel level, it looks as if there are four interlaced sub-images (odd and even rows and columns). Look at this zoomed area close to the edge.
In order to avoid this artifact, I have just taken the even rows and columns of your image and computed the sub-pixel edges. Finally, I look for the best-fitting straight line using the function clsq, whose code is on this page:
%load image
url='http://i.stack.imgur.com/bQsu5.png';
image = imread(url);
imageEvenEven = image(1:2:end,1:2:end);
imshow(imageEvenEven, 'InitialMagnification', 'fit');
% subpixel detection
threshold = 25;
edges = subpixelEdges(imageEvenEven, threshold);
visEdges(edges);
% compute fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c n] = clsq(A,2);
y = [1,200];
x = -(n(2)*y+c) / n(1);
hold on;
plot(x,y,'g');
When you run this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image has been scaled by 1/2 by taking only every second row and column, so the resulting coordinates must be scaled back to the original image (a rough sketch of the mapping follows).
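Something like this (assuming 1-based indexing and the 1:2:end sampling used above, so index i in the sub-image came from index 2*i - 1 in the original; this is only the obvious conversion, not part of the original code):
% map coordinates from the sub-image back to the full-resolution image
x_full = 2*x - 1;
y_full = 2*y - 1;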
Besides, you can try the other three sub-images (imageEvenOdd, imageOddEven and imageOddOdd) and combine the four straight lines to obtain the best solution.