In my project work, I have to detect a parasite. I have found the parasite using HSV and later converted it to a grayscale image. I have also done edge detection. Now I need some code that tells MATLAB to find the largest contour (the parasite) and set the rest of the area to black pixels.
You can select the "largest" contour by filling in the holes that each contour surrounds, figuring out which shape gives you the largest area, then using the pixel locations of that largest shape to copy it over to a final image. As Benoit_11 suggested, use regionprops - specifically the Area and PixelList flags. Something like this:
%// Read the image from StackOverflow, convert to binary and clear the white border
im = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
%// Fill in the holes that each contour surrounds
im_fill = imfill(im, 'holes');
%// Get the area and pixel locations of every filled object
s = regionprops(im_fill, 'Area', 'PixelList');
%// Find the object with the largest area
[~,ind] = max([s.Area]);
%// Convert its (x,y) pixel locations to linear indices
pix = sub2ind(size(im), s(ind).PixelList(:,2), s(ind).PixelList(:,1));
%// Copy only those pixels over to a blank output image
out = zeros(size(im));
out(pix) = im(pix);
imshow(out);
The first line of code reads in your image directly from StackOverflow. The image is also an RGB image for some reason, so I convert it to binary through im2bw. There is also a white border that surrounds the image; you most likely had this image open in a figure and saved it from the figure. I got rid of this by using imclearborder to remove the white border.
Next, we need to fill in the areas that the contours surround, so use imfill with the 'holes' flag. Then use regionprops to analyze the different filled objects in the image - specifically the Area and which pixels belong to each object in the filled image. Once we obtain these attributes, find the filled contour that gives you the biggest area, access the corresponding regionprops element, extract the pixel locations that belong to the object, and finally copy those pixels over to an output image and display the result.
We get:
Alternatively, you can use the Perimeter flag (as Benoit_11 suggested) and simply find the maximum perimeter, which will correspond to the largest contour. This should still give you what you want. As such, simply replace the Area flag with Perimeter in the third and fourth lines of code, as shown below, and you should get the same results.
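For reference, the changed lines would look like this (the rest of the code above stays the same):
s = regionprops(im_fill, 'Perimeter', 'PixelList');
[~,ind] = max([s.Perimeter]);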
Since my answer was pretty much all written out I'll give it to you anyway, but the idea is similar to @rayryeng's answer.
Basically I use the Perimeter and PixelIdxList flags during the call to regionprops and therefore get the linear indices of the pixels forming the largest contour, once the image border has been removed using imclearborder.
Here is the code:
clc
clear
%// Read the image, binarize it and clear the white border
BW = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
%// Perimeter and linear pixel indices of every object
S = regionprops(BW, 'Perimeter','PixelIdxList');
%// Index of the object with the largest perimeter
[~,idx] = max([S.Perimeter]);
Indices = S(idx).PixelIdxList;
%// Copy only that object's pixels into a blank image
NewIm = false(size(BW));
NewIm(Indices) = 1;
imshow(NewIm)
And the output:
As you see there are many ways to achieve the same result haha.
This could be one approach -
%// Read in image as binary
im = im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg'));
im = im(40:320,90:375); %// clear out the whitish border you have
figure, imshow(im), title('Original image')
%// Fill all contours to get us filled blobs and then select the biggest one
outer_blob = imfill(im,'holes');
figure, imshow(outer_blob), title('Filled Blobs')
%// Select the biggest blob that will correspond to the biggest contour
outer_blob = biggest_blob(outer_blob);
%// Get the biggest contour from the biggest filled blob
out = outer_blob & im;
figure, imshow(out), title('Final output: Biggest Contour')
The function biggest_blob, which is based on bsxfun, is an alternative to what the other answers posted here do with regionprops. In my experience, this bsxfun-based technique is faster than regionprops. One of my previous answers has a few benchmarks comparing the runtime of these two techniques.
Associated function -
function out = biggest_blob(BW)
%// Find and label blobs in the binary image BW
[L, num] = bwlabel(BW, 8);
%// Count of pixels in each blob, which basically gives the area of each blob
counts = sum(bsxfun(@eq,L(:),1:num));
%// Get the label (ind) corresponding to the blob with the maximum area,
%// which would be the biggest blob
[~,ind] = max(counts);
%// Get only the logical mask of the biggest blob by comparing all labels
%// to the label(ind) of the biggest blob
out = (L==ind);
return;
Debug images -
Related
I have this BW image:
And using the function regionprops, it shows that some objects are connected:
So I used morphological operations like imerode to separate the objects and get their centroids:
Now I have the centroid of each separated object, but in doing so I lost a lot of information when eroding the regions, as you can see in picture 3 in comparison with picture 1.
So I was wondering whether there is any way to "dilate" picture 3 until it gets closer to picture 1, but without connecting the objects again.
You might want to take a look at bwmorph(). With the 'thicken' operation and n = Inf, it will thicken the labels until they would overlap. It's a neat tool for segmentation. We can use it to create segmentation borders for the original image.
bw is the original image.
labels is the image of the eroded labels.
lines = bwmorph(labels, 'thicken', inf);
segmented_bw = bw & lines
You could also skip a few phases and achieve similar results with a marker-based watershed. Or even better results, since the morphological seesaw has destroyed some information, as seen in the poorly segmented cluster on the lower right.
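A rough sketch of that marker-based watershed idea, reusing bw (the original mask) and labels (the eroded markers) from above; this is just my reading of the suggestion, not tested code:
D = -bwdist(~bw);                    % distance transform inside the objects
D = imimposemin(D, labels > 0);      % force minima at the marker locations
L = watershed(D);                    % ridge lines (L == 0) separate touching objects
segmented_bw = bw & (L > 0);         % original mask with the ridge lines removed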
You can assign each white pixel in the mask to the closest centroid and work with the resulting label map:
[y x]= find(bw); % get coordinates of mask pixels
D = pdist2([x(:), y(:)], [cx(:), cy(:)]); % assuming cx, cy are centers' coordinates
[~, lb] = min(D, [], 2); % find index of closest center
lb_map = 0*bw;
lb_map(bw) = lb; % should give you the map.
See pdist2 for more information.
Hello, I am working with MATLAB. I am trying to generate a bounding box around a silhouette. The problem here is that the silhouette is fragmented,
as shown here.
The code I tried is
BW=bwconncomp(image);
STATS = regionprops(BW, 'FilledArea','BoundingBox');
which gives me a bounding box around only a part of the silhouette. I cannot use dilation, which would be the preferred morphological operation in this case, as it connects the silhouette with neighboring fragments.
Thanks in advance for the help.
Here is something to get you going with the image you posted. I used a line structuring element with an angle to dilate the image and amplify the signal from the small white chunks at the left of the silhouette. Then, using regionprops, it's easier to identify objects individually and select the object with the largest area (i.e. the silhouette), calculated with the FilledArea property, and report back the bounding box on the original image. It might not be perfect but it's a start, and it seems to give a pretty decent result.
Here is the code:
clear
clc
close all
BW = im2bw(imread('Silhouette.png'));
BW = imclearborder(BW);
%// Dilate with a line structuring element oriented at about 60 degrees to
%// amplify the thin, angled fragments at the left of the silhouette.
se = strel('line',5,60);
dilateddBW = imdilate(BW,se);
figure;
imshow(dilateddBW)
The dilated image looks like this:
Calling regionprops and displaying the output:
%// Get the region properties and select that with the largest area.
S = regionprops(dilateddBW,'BoundingBox','FilledArea','PixelIdxList');
boundingboxes = cat(1, S.BoundingBox);
FilledAreas = cat(1,S.FilledArea);
[~,MaxAreaIndex] = max(FilledAreas);
%// Get linear indices of the corresponding silhouette to display along
%// with its bounding box.
MaxIndices = S(MaxAreaIndex).PixelIdxList;
%// Create empty image to put the silhouette + box
NewIm = false(size(dilateddBW));
NewIm(MaxIndices) = 1;
figure;
imshow(BW)
rectangle('Position',boundingboxes(MaxAreaIndex,:),'EdgeColor','r')
Output:
Hope that helps somehow!
Since you have an array of vectors containing the indices of the pixels (bwconncomp() returns a struct which has a member named PixelIdxList), you can create a rectangle by finding the pixels with min x, min y, max x, and max y.
Here is a good example: 2D Minimal Bounding Box
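A minimal sketch of that min/max idea, taking the box around all fragments combined; it reuses bwconncomp as in the question, and silhouette is just a placeholder name for the binary image:
CC = bwconncomp(silhouette);                                % connected components of the fragments
allIdx = vertcat(CC.PixelIdxList{:});                       % linear indices of every white pixel
[r, c] = ind2sub(CC.ImageSize, allIdx);                     % convert to row/column coordinates
box = [min(c), min(r), max(c)-min(c)+1, max(r)-min(r)+1];   % [x y width height]
figure; imshow(silhouette)
rectangle('Position', box, 'EdgeColor', 'r')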
How can I calculate the average of a certain area in an image using MATLAB?
For example, if I have an intensity image with an area that is brighter than the rest and I want to know the average intensity there, how do I calculate it?
I think I can find the coordinates of the bright area by using the 'impixelinfo' command.
If there is a more efficient way to find the coordinates I would also be glad to know it.
Once I know the coordinates, how do I calculate the average of that part of the image?
You could use one of the imroi-type functions in MATLAB, such as imfreehand:
I = imread('cameraman.tif');
h = imshow(I);
e = imfreehand;
% now select area on image - do not close image
% this makes a mask from the area you just drew
BW = createMask(e);
% this takes the mean of pixel values in that area
I_mean = mean(I(BW));
Alternatively, look into using regionprops, especially if there's likely to be more than one of these features in the image. Here, I'm finding points in the image above some threshold intensity and then using imdilate to pick out a small area around each of those points (presuming the points above the threshold are well separated, which may not be the case - if they are too close then imdilate will merge them into one area).
se = strel('disk',5);
BW = imdilate(I>thresh,se);   % thresh is an intensity threshold you choose
s = regionprops(BW, I, 'MeanIntensity');
As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
Load image, crop, and convert to grayscale
Process to reduce noise
Find edge/perimeter
Attempt to join edges
Fill perimeter with white
Measure Area and Perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg') ;
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you might have room to improve the edge detection, in addition to the morphological transformations; for instance, the following resulted in what appeared to me to be a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically (see the sketch at the end of this answer), but I did not pursue this.
% interactively fill internal regions
[ny nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2=tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...')
while 1
pts=ginput(1)
tyjoin2 = bwfill(tyjoin2,pts(1,1),pts(1,2),8);
imshow(tyjoin2)
title(titl)
if (pts(1,1)<1 | pts(1,1)>nx | pts(1,2)<1 | pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to
A book:
http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo:
http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
and some Matlab code on the File Exchange:
http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
and a link to a description on how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image).
% This will probably make the snake work better, as the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more than I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit in the outline and then use the shapes' area to estimate the total area. For instance, compute a minimal circle around the edge set then a maximal circle inside the edges. Then you could use these to estimate diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
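That suggestion could be sketched roughly like this (not the answerer's exact method; it assumes tyedb is the binary edge image from the code earlier in this question, and the outer circle is only a crude centroid-based bound rather than the true minimal enclosing circle):
[r, c] = find(tyedb);                      % coordinates of the edge pixels
cx = mean(c);  cy = mean(r);               % centroid of the edge set
Rout = max(hypot(c - cx, r - cy));         % outer circle: distance to the farthest edge pixel
filled = imfill(tyedb, 'holes');           % crude estimate of the interior
Rin = max(max(bwdist(~filled)));           % inner circle: largest inscribed radius
areaEstimate = pi * mean([Rin, Rout])^2;   % e.g. average the two radii for an area estimate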
Hello, I have an image as shown above. Is it possible for me to detect the center point of the cross and output the result using Matlab? Thanks.
Here you go. I'm assuming that you have the Image Processing Toolbox because if you don't then you probably shouldn't be trying to do this sort of thing. However, all of these functions can be implemented with convolutions, I believe. I did this process on the image you presented above and obtained the point (139,286), where 139 is the row and 286 is the column.
1.Convert the image to a binary image:
bw = im2bw(img, .25);
where img is the original image. Depending on the image you might have to adjust the second parameter (which ranges from 0 to 1) so that you only get the cross. Don't worry about the cross not being fully connected because we'll remedy that in the next step.
2.Dilate the image to join the parts. I had to do this twice because I had to set the threshold so low on the binary image conversion (some parts of your image were pretty dark). Dilation essentially just adds pixels around existing white pixels (I'll also be inverting the binary image as I send it into bwmorph because the operations are made to act on white pixels which are the ones that have a value of 1).
bw2 = bwmorph(~bw, 'dilate', 2);
The last parameter says how many times to do the dilation operation.
3.Shrink the image to a point.
bw3 = bwmorph(bw2, 'shrink',Inf);
Again, the last parameter says how many times to perform the operation. In this case I put in Inf which shrinks until there is only one pixel that is white (in other words a 1).
4.Find the pixel that is still a 1.
[i,j] = find(bw3);
Here, i is the row and j is the column of the pixel in bw3 such that bw3(i,j) is equal to 1. All the other pixels should be 0 in bw3.
There might be other ways to do this with bwmorph, but I think that this way works pretty well. You might have to adjust it depending on the picture too. I can include images of each step if desired.
I just encountered the same kind of problem, and I found other solutions that I would like to share:
Assume image file name is pict1.jpg.
1.Read the input image, crop the relevant part and convert to grayscale:
origI = imread('pict1.jpg'); %Read input image
I = origI(32:304, 83:532, :); %Crop relevant part
I = im2double(rgb2gray(I)); %Convert to grayscale and to double (set pixel range [0, 1]).
2.Convert the image to a binary image using a robust approach:
%Subtract from each pixel the median of its 21x21 neighbors
%Emphasize pixels that are deviated from surrounding neighbors
medD = abs(I - medfilt2(I, [21, 21], 'symmetric'));
%Set threshold to 5 sigma of medD
thresh = std2(medD(:))*5;
%Convert image to binary image using above threshold
BW = im2bw(medD, thresh);
BW Image:
3.Now I suggest two approaches for finding the center:
Find the centroid (the center of mass of the white cluster)
Find two lines using Hough transform, and find the intersection point
Both solutions return sub-pixel result.
3.1.Find cross center using regionprops (find centroid):
%Find centroid of the cross (centroid of the cluster)
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
figure;imshow(BW);
hold on, plot(centroids(:,1), centroids(:,2), 'b*', 'MarkerSize', 15), hold off
%Display cross center in original image
figure;imshow(origI), hold on, plot(82+centroids(:,1), 31+centroids(:,2), 'b*', 'MarkerSize', 15), hold off
Centroid result (BW image):
Centroid result (original image):
3.2 Find cross center by intersection of two lines (using Hough transform):
%Create the Hough transform using the binary image.
[H,T,R] = hough(BW);
%Find peaks in the Hough transform of the image.
P = houghpeaks(H,2,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
%Find lines and plot them.
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',7);
figure, imshow(BW), hold on
L = cell(1, length(lines));
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
% Plot beginnings and ends of lines
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
%http://robotics.stanford.edu/~birch/projective/node4.html
%Find lines in homogeneous coordinates (using cross product):
L{k} = cross([xy(1,1); xy(1,2); 1], [xy(2,1); xy(2,2); 1]);
end
%https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection
%Lines intersection in homogeneous coordinates (using cross product):
p = cross(L{1}, L{2});
%Convert from homogeneous coordinate to euclidean coordinate (divide by last element).
p = p./p(end);
plot(p(1), p(2), 'x', 'LineWidth', 1, 'Color', 'white', 'MarkerSize', 15)
Hough transform result:
I think that there is a far simpler way of solving this. The lines which form the cross-hair are of equal length, so it will be symmetric in all orientations. If we do a simple line scan horizontally as well as vertically to find the extremities of the lines forming the cross-hair, the median of these values will give the x and y coordinates of the center. Simple geometry.
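A minimal sketch of that scan, assuming bw is a binary image in which the cross pixels are true:
cols = find(any(bw, 1));                 % columns that contain any cross pixel
rows = find(any(bw, 2));                 % rows that contain any cross pixel
cx = (cols(1) + cols(end)) / 2;          % midpoint of the horizontal extent
cy = (rows(1) + rows(end)) / 2;          % midpoint of the vertical extent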
I just love these discussions of how to find something without defining first what that something is! But, if I had to guess, I’d suggest the center of mass of the original gray scale image.
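If that guess is what you are after, a sketch of the intensity-weighted centre of mass might look like this (it assumes img is the grayscale image and inverts it so the dark cross carries the weight):
w = double(max(img(:))) - double(img);       % invert: dark cross -> large weights
[rr, cc] = ndgrid(1:size(w,1), 1:size(w,2)); % row and column coordinate grids
cy = sum(rr(:) .* w(:)) / sum(w(:));         % weighted row (y) coordinate
cx = sum(cc(:) .* w(:)) / sum(w(:));         % weighted column (x) coordinate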
What about this:
a) Convert to binary, just to make the algorithm faster.
b) Perform a find on the resulting array.
c) Choose the element with either the lowest or highest row/column index (you would then have four points to choose from).
d) Now keep searching neighbours. Have a global criterion for the search: if the search does not continue for more than a few iterations, the point selected is false, so choose another extreme point.
e) Going along the neighbouring points, you will end up at a point where you have three possible neighbours. That is your intersection.
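Not a literal implementation of that neighbour walk, but the same idea can be approximated with skeletonisation (assuming bw is the binary cross image):
skel = bwmorph(bw, 'skel', Inf);          % thin the cross to a one-pixel-wide skeleton
bp = bwmorph(skel, 'branchpoints');       % pixels with more than two neighbours
[cy, cx] = find(bp);                      % candidate intersection point(s)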
I would start by using the grayscale image map. The darkest points are on the cross, so discriminating on the intensity values is a starting point. After discrimination, set all the lower (darker) points to white and leave the rest as they are. This would maximize the contrast between points on the cross and points in the rest of the image. Next up is to come up with a filter for determining the position with the highest average values. I would step through the entire image with an NxM window and take the mean value at its center point. Create a new array of these means and you should have the highest mean at the intersection. I'm curious to see how someone else may try this!
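A sketch of that idea (it assumes img is the grayscale image; the 0.25 threshold and the 15x15 window size are arbitrary choices):
w = im2double(img);                      % work in double, pixel range [0, 1]
w(w < 0.25) = 1;                         % set the dark (cross) pixels to white
h = fspecial('average', [15 15]);        % 15x15 averaging window
m = imfilter(w, h, 'replicate');         % local mean at every pixel
[~, idx] = max(m(:));                    % highest local mean
[cy, cx] = ind2sub(size(m), idx);        % its row and column: the cross centre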