Extracting leaf after watershed segmentation in MATLAB

After applying watershed segmentation, I want to extract the remaining leaf from the image, with the background removed, as in image-2. Can you please help me? My code is attached below.
I'm new on Stack Overflow, so I'm not allowed to post images. I asked the same question on MathWorks; you can check the images there if you like.
Thanks a lot in advance.
http://www.mathworks.com/matlabcentral/answers/237106-extracting-leaf-from-background
image-1: after watershed segmentation (colored version):
image-2: the desired result:
my code:
% I -- intensity image
% Gmag -- gradient magnitude of I
se = strel('disk', 30);
Ie = imerode(I, se);
Iobr = imreconstruct(Ie, I);
figure
imshow(Iobr), title('Opening-by-reconstruction (Iobr)')
Iobrd = imdilate(Iobr, se);
Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
Iobrcbr = imcomplement(Iobrcbr);
figure
imshow(Iobrcbr), title('Opening-closing by reconstruction (Iobrcbr)')
fgm = imregionalmax(Iobrcbr);
figure
imshow(fgm), title('Regional maxima of opening-closing by reconstruction (fgm)')
% superimpose the foreground markers on the original image
I2 = I;
I2(fgm) = 255;
figure
imshow(I2), title('Regional maxima superimposed on original image (I2)')
% clean up the markers: close, erode, then drop blobs smaller than 100 pixels
se2 = strel(ones(10,10));
fgm2 = imclose(fgm, se2);
fgm3 = imerode(fgm2, se2);
fgm4 = bwareaopen(fgm3, 100);
I3 = I;
I3(fgm4) = 255;
figure
imshow(I3)
title('Modified regional maxima superimposed on original image (fgm4)')
% background markers
bw = im2bw(Iobrcbr, graythresh(Iobrcbr));
figure
imshow(bw), title('Thresholded opening-closing by reconstruction (bw)')
D = bwdist(bw);
DL = watershed(D);
bgm = DL == 0;
figure
imshow(bgm), title('Watershed ridge lines (bgm)')
% impose foreground and background markers as regional minima on the gradient
gradmag2 = imimposemin(Gmag, bgm | fgm4);
L = watershed(gradmag2);
I4 = I;
I4(imdilate(L == 0, ones(3, 3)) | bgm | fgm4) = 255;
figure
imshow(I4)
title('Markers and object boundaries superimposed on original image (I4)')
Lrgb = label2rgb(L, 'jet', 'w', 'shuffle');
figure
imshow(Lrgb)
title('Colored watershed label matrix (Lrgb)')
figure
imshow(I)
hold on
himage = imshow(Lrgb);
himage.AlphaData = 0.3;
title('Lrgb superimposed transparently on original image')
% keep the largest watershed region (assumed to be the leaf)
props = regionprops(L);
[~,ind] = max([props.Area]);
imshow(L == ind);

It's not possible to extract the leaf from that segmented image, because the yellow component cuts the leaf into several parts.
Moreover, from the source code I understand that you use a basic watershed, which produces an oversegmentation. Use a constrained watershed, also known as a watershed with markers.
Find a way to share the original image and the image after processing.

I can confirm that the issue is with the watershed you use. I ran your image through my own library, using: one inner marker (the biggest component, so the leaf in the corner is discarded), one outer marker (a dilation of the inner one), and the gradient image.
And here is my result: www.thibault.biz/StackOverflow/ResultLeaf.png. There is only one component, because I used only one inner marker. It's not perfect, but it's already closer and easier to post-process.
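That marker strategy can be approximated in plain MATLAB (Image Processing Toolbox) along the lines below. This is a minimal sketch, assuming a binary mask bw in which the leaf is foreground (complement it if the polarity is reversed), plus the gradient magnitude Gmag and intensity image I from the question; the disk radius of 40 is an illustrative guess.
cc = bwconncomp(bw);                          % connected components of the mask
stats = regionprops(cc, 'Area');
[~, idx] = max([stats.Area]);                 % inner marker: the biggest component
inner = false(size(bw));
inner(cc.PixelIdxList{idx}) = true;
outer = ~imdilate(inner, strel('disk', 40));  % outer marker: everything outside a dilation of it
gmag2 = imimposemin(Gmag, inner | outer);     % impose both markers as regional minima
L = watershed(gmag2);
leafMask = (L == L(find(inner, 1)));          % the basin that contains the inner marker
leaf = I;
leaf(~leafMask) = 0;                          % leaf only, background zeroed out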

Related

Make single colored regions in image smooth in MATLAB

I have written some algorithms for an image, but the output differs from my ground truth, as you can see in the image below:
I don't want to make it exactly like the second image, but since my images are fairly simple, I think there are filters that can at least remove those white curves inside the circles.
Can you suggest any?
Thanks
You can try using morphological operations like imclose.
You need to play with the parameters to get the desired result.
I used imbinarize to convert from uint8 to black/white.
I = imread('https://i.stack.imgur.com/r8XO7.png'); %Read image directly from URL.
R = I(:,:,1);G = I(:,:,2);B = I(:,:,3);
R = imbinarize(255 - R);G = imbinarize(255 - G);B = imbinarize(255 - B); %Convert to binary (use 255-R to inverse polarity because background is white).
SE = strel('disk', 15);
R = imclose(R, SE); %Close operation.
G = imclose(G, SE);
B = imclose(B, SE);
J = im2uint8(cat(3, ~R, ~G, ~B)); %Use ~R to invert to original polarity.
figure;imshow(J);
Almost...
(It's odd that the image comes out flipped vertically.)
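If the vertical flip matters, undoing it is one line; this assumes the final image J from the snippet above:
J = flipud(J); % reverse the row order to undo the up/down flip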

How to smooth the edges of an image obtained from the imagesc function

I have an RGB image obtained by saving the output of the imagesc function, as shown below. How can I refine/smooth the edges in this image?
It has sharp, staircase-like edges that I need to smooth out; I'm not able to find a solution that works for an RGB image. Please help; thanks in advance.
Maybe imresize will help you:
% here im just generating an image similar to yours
A = zeros(20);
for ii = -2:2
A = A + (ii + 3)*diag(ones(20-abs(ii),1),ii);
end
A([1:5 16:20],:) = 0;A(:,[1:5 16:20]) = 0;
subplot(121);
imagesc(A);
title('original')
% resizing image with bi-linear interpolation
B = imresize(A,100,'bilinear');
subplot(122);
imagesc(B);
title('resized')
EDIT
Here I do resize + filtering + rounding:
% generates image
A = zeros(20);
for ii = -2:2
A = A + (ii + 3)*diag(ones(20-abs(ii),1),ii);
end
A([1:5 16:20],:) = 0;A(:,[1:5 16:20]) = 0;
subplot(121);
imagesc(A);
title('original')
% resizing
B = imresize(A,20,'nearest');
% filtering & rounding
C = ceil(imgaussfilt(B,8));
subplot(122);
imagesc(C);
title('resized')
Solution
Use imfilter and fspecial to convolve your image with a Gaussian kernel.
I = imread('im.png');
H = fspecial('gaussian',5,5);
I2 = imfilter(I,H);
Change the kernel size and sigma (the second and third arguments of fspecial) to make the image smoother or sharper.
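For instance, a heavier blur might look like this (the values are just illustrative):
H2 = fspecial('gaussian', 15, 3);  % larger kernel and sigma -> stronger smoothing
I3 = imfilter(I, H2, 'replicate'); % 'replicate' padding avoids darkened borders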
Result:
If you are just looking for straighter edges, as in an elevation map, you can try contourf.
cmap = colormap();
[col,row] = meshgrid(1:size(img,2), 1:size(img,1));
v = linspace(min(img(:)),max(img(:)),size(cmap,1));
contourf(col,row,img,v,'edgecolor','none');
axis('ij');
This produces the following result using a test function that I generated.
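The snippet assumes img already holds your 2D data. Since the generated test function wasn't shared, here is a self-contained version with MATLAB's built-in peaks surface standing in:
img = peaks(200);                    % stand-in 2D test data (assumption)
cmap = colormap();
[col,row] = meshgrid(1:size(img,2), 1:size(img,1));
v = linspace(min(img(:)), max(img(:)), size(cmap,1));
contourf(col, row, img, v, 'LineColor', 'none'); % newer releases use 'LineColor' where older ones accepted 'edgecolor'
axis('ij');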

How do I smooth the edges of a multicomponent image?

I have an image whose edges I would like to smooth. Getting an accurate segmentation was a bit of a challenge, but I got a solution by adapting the suggestion from: What can I do to enhance my image quality?.
The original image is here:
and the segmented image as well:
The code I used is as follows:
%# Read in image
Img = imread('image_name.png');
%# Apply filter
h = fspecial('average');
Img = imfilter(Img, h);
%# Segment image
Img = rgb2gray(Img);
thresh = multithresh(Img, 2);
Iseg = imquantize(Img, thresh);
figure, imshow(Iseg,[]), title('Segmented Image');
%# separate channels
blackPixels = (Iseg == 1);
grayPixels = (Iseg == 2);
whitePixels = (Iseg == 3);
%# grow white channel
whitePixels_dilated = imdilate(whitePixels, strel('disk', 4, 4));
%# Add all channels
Iseg(whitePixels | whitePixels_dilated) = 3;
figure, imshow(Iseg,[]);
My challenge right now is to smooth the edges of the solid (whitePixels), or the edges of all the objects. I have no idea how to do this. I have tried filtering, but that only removes the small spots.
Any help, ideas, suggestions, or advice is greatly appreciated. Thank you.
I would suggest applying a rectangular filter multiple times. Here's one approach:
I=imread('Orl1r.png');
I_gray=rgb2gray(I);
I_filtered=I_gray; % initialize the filtered image
for ii=1:10
I_filtered=imfilter(I_filtered,1/25*ones(5)); % apply rectangular-filter multiple times
end
figure
subplot(1,2,1)
imshow(I,[0 255]);
subplot(1,2,2);
imshow(I_filtered,[0 255])
Here's what the filtered image would look like:
Hope this helps.
EDIT: Instead of the rectangular filter you could also use a Gaussian one; the general idea of applying it multiple times still holds. You can create a Gaussian filter, for example, with f = fspecial('gaussian',5,6), which creates a 5x5 filter mask with sigma = 6.
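A minimal sketch of that Gaussian variant, mirroring the code above:
I = imread('Orl1r.png');
I_gray = rgb2gray(I);
f = fspecial('gaussian', 5, 6);       % 5x5 Gaussian mask with sigma = 6
I_filtered = I_gray;                  % initialize the filtered image
for ii = 1:10
I_filtered = imfilter(I_filtered, f); % repeated Gaussian smoothing
end
figure
imshow(I_filtered, [0 255])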

Find the real-time coordinates of the four points marked in red in the image

To be exact, I need the four end points of the road in the image below.
I used find[x y], but it does not give a satisfying result in real time.
I'm assuming the images are already annotated. In that case we just find the marked points and extract their coordinates (if you need to find the red points dynamically through code, this won't work at all).
The first thing you have to do is find a good feature to use for segmentation. See my SO answer here, what-should-i-use-hsv-hsb-or-rgb-and-why, for code and details. That produces the following image:
We can see that saturation (and a few others) is a good candidate color space. So now you must convert your image to the new color space and threshold it to find your points.
The points are obtained using MATLAB's regionprops, looking specifically for the centroid. At that point you are done.
Here is complete code and results
im = imread('http://i.stack.imgur.com/eajRb.jpg');
HUE = 1;
SATURATION = 2;
BRIGHTNESS = 3;
%see https://stackoverflow.com/questions/30022377/what-should-i-use-hsv-hsb-or-rgb-and-why/30036455#30036455
ViewColoredSpaces(im)
%convert image to hsv
him = rgb2hsv(im);
%threshold, all rows, all columns,
my_threshold = 0.8; %determined empirically
thresh_sat = him(:,:,SATURATION) > my_threshold;
%remove small blobs using a 3 pixel disk
se = strel('disk',3);
cleaned_sat = imopen(thresh_sat, se);% imopen = imdilate(imerode(im,se),se)
%find the centroids of the remaining blobs
s = regionprops(cleaned_sat, 'centroid');
centroids = cat(1, s.Centroid);
%plot the results
figure();
subplot(2,2,1) ;imshow(thresh_sat) ;title('Thresholded saturation channel')
subplot(2,2,2) ;imshow(cleaned_sat);title('After morphological opening')
subplot(2,2,3:4);imshow(im) ;title('Annotated img')
hold on
for curr_centroid = 1:size(centroids,1)
%prints coordinate
x = round(centroids(curr_centroid,1));
y = round(centroids(curr_centroid,2));
text(x,y,sprintf('[%d,%d]',x,y),'Color','y');
end
%plots centroids
scatter(centroids(:,1),centroids(:,2),[],'y')
hold off
%prints out centroids
centroids
centroids =
7.4593 143.0000
383.0000 87.9911
435.3106 355.9255
494.6491 91.1491
Some sample code would make it much easier to tailor a specific solution to your problem.
One solution to this general problem is to use impoint.
Something like
h = figure();
ax = gca;
% ... draw your image here
points = {};
points = [points; {impoint(ax,initialX,initialY)}]; % keep the handles in a cell array
% ... generate more points
indx = 1; % or whatever point you care about
pos = getPosition(points{indx}); % getPosition returns an [x y] pair
currentX = pos(1);
currentY = pos(2);
should do the trick.
Edit: The first argument of impoint is an axes object, not a figure object.
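Putting it together, a minimal runnable sketch; the demo image and the initial coordinates are placeholders:
figure();
imshow(imread('pout.tif'));                 % any image; pout.tif ships with the Image Processing Toolbox
ax = gca;
points = {};
points = [points; {impoint(ax, 100, 120)}]; % draggable point at a placeholder spot
pos = getPosition(points{1});               % [x y], reflecting any dragging by the user
fprintf('point 1 is at [%g, %g]\n', pos(1), pos(2));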

How to detect edges of a colored photo?

I'm trying to implement the blind deconvolution algorithm example from the MathWorks site, and I'm having problems with the edge detection step because edge detection functions can't be used on RGB photos. So I converted the photo to YUV, but after this I don't know in what order I should do the processing, and I don't even know if I'm using the right method.
I applied the edge() function to all three channels (Y, U, V), then used a YUV-to-RGB conversion to combine them again. It didn't work; I can't obtain the final WEIGHT value.
My code is below, and the example is at http://www.mathworks.com/help/images/deblurring-with-the-blind-deconvolution-algorithm.html.
Img = imread('image.tif');
PSF = fspecial('motion',13,45);
Blurred = imfilter(Img,PSF,'circ','conv');
INITPSF = ones(size(PSF));
[J P] = deconvblind(Blurred,INITPSF,30);
% RGB to YUV
R=Img(:,:,1); G=Img(:,:,2); B=Img(:,:,3);
Y=round((R+2*G+B)/4);
U=R-G;
V=B-G;
% finding edges for Y,U,V
WEIGHT1 = edge(Y,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT1 = ~imdilate(WEIGHT1,[se1 se2]);
WEIGHT1 = padarray(WEIGHT1(2:end-1,2:end-1),[1 1]);
WEIGHT2 = edge(U,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT2 = ~imdilate(WEIGHT2,[se1 se2]);
WEIGHT2 = padarray(WEIGHT2(2:end-1,2:end-1),[1 1]);
WEIGHT3 = edge(V,'sobel',.28);
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT3 = ~imdilate(WEIGHT3,[se1 se2]);
WEIGHT3 = padarray(WEIGHT3(2:end-1,2:end-1),[1 1]);
% YUV to RGB again
G=round((WEIGHT1-(WEIGHT2+WEIGHT3)/4));
R=WEIGHT2+G;
B=WEIGHT3+G;
WEIGHT(:,:,1)=G; WEIGHT(:,:,2)=R; WEIGHT(:,:,3)=B;
P1 = P;
P1(find(P1 < 0.01))= 0;
[J2 P2] = deconvblind(Blurred,P1,50,[],double(WEIGHT));
figure, imshow(J2)
title('Newly Deblurred Image');
figure, imshow(P2,[],'InitialMagnification','fit')
title('Newly Reconstructed PSF')
I won't get into the deconvblind deblurring here, but let me show you how edge detection for color images can work.
% load an image
I = imread('peppers.png');
% note that this is an RGB image.
e = edge(I, 'sobel');
will fail, because edge wants a 2D image, and an RGB or a YUV image is 3D, in the sense that the third dimension is the color channel.
There are a few ways of fixing this. One is to convert the image to grayscale, using
gray = rgb2gray(I);
This can then be passed into edge, to return edges based on the gray level intensities in 'gray'.
e = edge(gray,'sobel'); % also try with different thresholds for sobel.
If you are really interested in the edge information in each channel, you could simply pass the individual channels into edge separately. For example,
eRed = edge(I(:,:,1), 'sobel'); % edges only in the I(:,:,1): red channel.
eGreen = edge(I(:,:,2), 'sobel');
eBlue = edge(I(:,:,3), 'sobel');
and then, based on how each of eRed, eGreen, and eBlue looks, you could combine them with a logical OR, so that the result is an edge if any of the channels independently considers it an edge.
eCombined = eRed | eGreen | eBlue;
What you did originally probably doesn't do what you intended, as the YUV colorspace can distort the sense of edges: an edge in the R plane may not be an edge in the Y, U, or V plane. Hence you need to detect edges in a colorspace that preserves them, so that you can combine the channels afterwards, as shown with the RGB colorspace above.
The final form of code is below,
Img = imread('image.tif');
PSF = fspecial('motion',13,45);
Blurred = imfilter(Img,PSF,'circ','conv');
INITPSF = ones(size(PSF));
[J P] = deconvblind(Blurred,INITPSF,30);
eRed = edge(Img(:,:,1), 'sobel');
eGreen = edge(Img(:,:,2), 'sobel');
eBlue = edge(Img(:,:,3), 'sobel');
WEIGHT = eRed | eGreen | eBlue;
se1 = strel('disk',1);
se2 = strel('line',13,45);
WEIGHT = ~imdilate(WEIGHT,[se1 se2]);
WEIGHT = padarray(WEIGHT(2:end-1,2:end-1),[1 1]);
figure
imshow(WEIGHT)
title('Weight Array')
P1 = P;
P1(P1 < 0.01) = 0;
WEIGHT2 = repmat(WEIGHT,[1 1 3]);
WEIGHT3 = im2double(WEIGHT2);
[J2 P2] = deconvblind(Blurred,P1,50,[],WEIGHT3);
figure, imshow(J2)
title('Newly Deblurred Image');
figure, imshow(P2,[],'InitialMagnification','fit')
title('Newly Reconstructed PSF')
I can't post images yet, so I'm giving a link to the outputs. The last output, the newly restored one, has the problem I indicated in the photo's title. I guess there's still a data type problem.
Output link: http://imgur.com/a/dDF2N