I have this image with white points on a dark background:
I want to group pixels that are close together into a single blob. In this image, that would mean there are two blobs: one for the pixels at the top and one for the pixels at the bottom. Any pixels that are not close enough to these two blobs must be changed to the background color (a threshold must be specified to decide which pixels fall into the blobs and which are too far away). How do I go about this? Is there a MATLAB function that can be used?
To group dots, one can simply smooth the image sufficiently to blur them together. The dots that are close together (with respect to the blur kernel size) will be merged, the dots that are further apart will not.
The best way to smooth the image is with a Gaussian filter. MATLAB has implemented this in the imgaussfilt function since R2015a. For older versions of MATLAB (or Octave, as I'm using here) you can use fspecial and imfilter instead. But you have to be careful, because fspecial makes it very easy to create a kernel that is not at all a Gaussian kernel. This is the reason that method is now deprecated and the imgaussfilt function was created.
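For example, a quick illustration of that pitfall (my own, not part of the original answer): fspecial('gaussian') defaults to a 3-by-3 kernel regardless of sigma, so with a large sigma you effectively get a flat box filter rather than a Gaussian. The kernel size should grow with sigma, e.g. to about 6*sigma+1:
sigma = 10;
gBad  = fspecial('gaussian', 3, sigma);          % default-sized 3x3 kernel: badly truncated, nearly flat
gGood = fspecial('gaussian', 6*sigma+1, sigma);  % covers roughly +/- 3 sigma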
Here is some code that does this:
% Load image
img = imread('https://i.stack.imgur.com/NIcb9.png');
img = rgb2gray(img);
% Threshold to get dots
dots = img > 127; % exact threshold doesn't matter, this case is trivial
% Group dots
% smooth = imgaussfilt(img,10); % This works for newer MATLABs
g = fspecial('gaussian',6*10+1,10);
smooth = imfilter(img,g,'replicate'); % I'm using Octave here, it doesn't yet implement imgaussfilt
% Find an appropriate threshold for dot density
regions = smooth > 80; % A smaller value makes for fewer isolated points
% Dots within regions
newDots = dots & regions;
To identify blobs that are within the same region, simply label the regions image, and multiply with the dots image:
% Label regions
regions = bwlabel(regions);
% Label dots within regions
newDots = regions .* dots;
% Display
imshow(label2rgb(newDots,'jet','k'))
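As a possible follow-up (not part of the original question), the labelled result can be passed to regionprops to get per-group statistics, for example how many dot pixels each group contains:
stats = regionprops(newDots, 'Area', 'Centroid'); % newDots is a label image
numGroups = numel(stats)   % number of dot groups found
dotCounts = [stats.Area]   % number of dot pixels in each group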
Introduction
Background: I am segmenting images using the watershed algorithm in MATLAB. For memory and time constraints, I prefer to perform this segmentation on subsampled images, let's say with a resize factor of 0.45.
The problem: I can't properly re-scale the output of the segmentation to the original image scale, both for visualization purposes and other post processing steps.
Minimal Working Example
For example, I have this image:
I run this minimal script and I get a watershed segmentation output L, which is a label image where each connected component is assigned a natural number and the borders between the connected components are zero-valued:
im_orig = imread('kitty.jpg'); % Load image [530x530]
im_res = imresize(im_orig, 0.45); % Resize image [239x239]
im_res = rgb2gray(im_res); % Convert to grayscale
im_blur = imgaussfilt(im_res, 5); % Gaussian filtering
L = watershed(im_blur); % Watershed algorithm
Now I have L, which has the same dimensions as im_res. How can I use the result stored in L to actually segment the original im_orig image?
Wrong solution
The first approach I tried was to resize L to the original scale by using imresize.
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)]); % Upsample L
Unfortunately the upsampling of L produces a series of unwanted artifacts. It especially loses some of the fundamental zeros that represent the boundaries between the image segments. Here is what I mean:
figure; imagesc(imfuse(im_res, L == 0)); axis auto equal;
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;
I know that this is due to the blurring caused by the upscaling process, but so far I haven't been able to think of anything else that might work.
The only other approach I have thought of involves using mathematical morphology to "enlarge" the boundaries of the resized image before upsampling, but this would still lead to some unwanted artifacts.
TL;DR (or recap)
Is there a way to perform watershed on a downscaled image in MATLAB and then upscale the result to the original image, keeping the crisp region boundaries outputted by the algorithm? Is what I am looking for a completely absurd thing to ask?
If you only need the watershed segment borders after upsizing the image, then just make these little changes:
L_big = ~imresize(L==0, [size(im_orig,1), size(im_orig,2)]); % Upsample L
and here are the results:
You can use nearest-neighbor interpolation when resizing:
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)],'nearest'); % Upsample L
Normally when we resize images we start from the destination, iterate over x and y, and find the best matching pixel in the source. Here you want to do the reverse: iterate over the source in x and y and write to the destination buffer, with 0 taking priority (so initialise the destination to 0xFF, then never overwrite a zero with another value).
There's unlikely to be a function that does this in the toolbox; you'll have to roll your own.
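For what it's worth, here is a rough MATLAB sketch of that idea (my own helper; upsampleKeepBorders is not a toolbox function). Each source pixel writes its label to the corresponding block of destination pixels, and a destination pixel that already holds a boundary value (0) is never overwritten:
function L_big = upsampleKeepBorders(L, outSize)
% Save as upsampleKeepBorders.m
    [nr, nc] = size(L);
    sR = outSize(1) / nr;
    sC = outSize(2) / nc;
    L_big = -ones(outSize);                        % -1 marks "not written yet"
    for r = 1:nr
        for c = 1:nc
            rr = floor((r-1)*sR)+1 : min(outSize(1), ceil(r*sR));
            cc = floor((c-1)*sC)+1 : min(outSize(2), ceil(c*sC));
            blk = L_big(rr, cc);
            blk(blk ~= 0) = double(L(r, c));       % zeros (borders) take priority
            L_big(rr, cc) = blk;
        end
    end
    L_big(L_big < 0) = 0;                          % any unwritten pixel becomes boundary
end
Usage would be something like L_big = upsampleKeepBorders(L, [size(im_orig,1) size(im_orig,2)]);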
I'm trying to remove boundary from my image in Matlab.
I've tried this
clc,clear,clf
Im=im2double(imread('Im.png'));
imshow(Im);title('Original Image')
pause(.5)
imshow(edge(Im));title('after Sobel')
pause(.5)
imshow(Im-edge(Im));title('Im-edge(Im)')
and the result is
but there are two clear problems:
The output of edge with the default Sobel method contains some of the inner part of the shape.
I am subtracting a binary image from a grayscale one (the output of edge is binary).
any help would be appreciated.
Download original image.
One way I can think of doing this is to threshold the image so that you have a solid white object, then shrink the object by a little bit. Next, use the slightly shrunken object to index into the main object mask and remove this area. Also, enlarge the intermediate result by a little bit to ensure that the outer edge of the boundary is removed. This ultimately produces a hollowed-out mask designed to remove the boundaries of your object within some tolerance while leaving the rest of the image intact. Any values that are true in this mask can be used to remove the boundaries.
For reproducibility, I've uploaded your image to Stack Imgur so that we don't have to rely on a third party website to download your image:
This "little bit" for shrinking and growing you will have to play around with. I chose 5 pixels as this seems to work. To do the shrinking and growing, use an erosion and dilation respectively with imerode and imdilate respectively and I used a structuring element of a 5 x 5 pixel square.
% Read from Stack Imgur directly
im = imread('https://i.stack.imgur.com/UJcKA.png');
% Perform Sobel Edge detection
sim = edge(im, 'sobel');
% Find the mask of the object
mask = im > 5;
% Shrink the object
se = strel('square', 5);
mask_s = imerode(mask, se);
% Remove the inner boundary of the object
mask(mask_s) = false;
% Slightly enlarge now to ensure outer boundary is removed
mask = imdilate(mask, se);
% Create new image by removing the boundaries of the
% edge image
sim(mask) = false;
% Show the result
figure; imshow(sim);
We now get this image:
You'll have to play around with the Sobel threshold because I actually don't know what you used to get the desired image you want. Suffice it to say that the default threshold does not give what your expected results show.
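For completeness, you can pass an explicit threshold as the third argument to edge; the value below is just a guess and will need tuning:
sim = edge(im, 'sobel', 0.05); % a larger threshold keeps fewer, stronger edges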
Here is the original image with better visibility: we can see a lot of noise around the main skeleton (the circle-like thing), which I want to remove without affecting the main skeleton. I'm not sure if it is called noise.
I'm doing this to deblur an image; this image is the motion blur kernel, which identifies the camera motion when the camera captures the image.
PS: this image is the kernel for one case, and what I need is a general method here. Thank you for your help.
There is a paper in CVPR 2014 named "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust. Sorry for the explanation here, as my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using Matlab?
here are some other kernel images:
As @rayryeng explained well, median filtering is the best option to clean noise in the image, which I learned when I studied image restoration. However, in your case, what you need to do does not seem to be cleaning noise in the image; what you more likely want is to eliminate the sparks in the image.
I simply applied a single threshold to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % // Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % // Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % // Threshold value
S = size(nImgray);
R = zeros(S) + 5; % // Your expected image looks bluish, so I try to match it
G = zeros(S) + 3; % // Your expected image looks bluish, so I try to match it
B = zeros(S) + 20; % // Your expected image looks bluish, so I try to match it
logInd = nImgray > T; % // Logical index of pixels, excluding the spark components
R(logInd) = nImgray(logInd); % // Get original pixels without sparks
G(logInd) = nImgray(logInd); % // Get original pixels without sparks
B(logInd) = nImgray(logInd); % // Get original pixels without sparks
rgbImage = cat(3, R, G, B); % // Concatenating Red Green Blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded with imread, and M and N are the number of rows and columns of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
In the first line I had to convert your image to grayscale because the original image was in fact RGB; I used rgb2gray to do that. The second line performs median filtering on your image with a 7 x 7 neighbourhood, and the final line shows the image in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. The poster asked how to implement the filtering mechanism without using medfilt2, and I provided an answer.
Matlab Median Filter Code
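For reference, here is a minimal sketch of the idea (my own, not the code from the linked post): pad the image, then take the median of each 7 x 7 neighbourhood.
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
M = 7; N = 7;
padded = padarray(im, [floor(M/2) floor(N/2)], 'replicate');
out = zeros(size(im), 'like', im);
for r = 1 : size(im,1)
    for c = 1 : size(im,2)
        block = padded(r : r+M-1, c : c+N-1);  % M x N neighbourhood centred on (r,c)
        out(r,c) = median(block(:));
    end
end
imshow(out);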
Hope this helps.
Good luck!
As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
Load image, crop, and convert to grayscale
Process to reduce noise
Find edge/perimeter
Attempt to join edges
Fill perimeter with white
Measure Area and Perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg') ;
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you might have room to improve the edge detection in addition to the morphological transformations; for instance, the following resulted in what appeared to me a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this (a rough sketch is given after the results below).
% interactively fill internal regions
[ny nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2=tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...')
while 1
pts=ginput(1)
tyjoin2 = bwfill(tyjoin2,pts(1,1),pts(1,2),8);
imshow(tyjoin2)
title(titl)
if (pts(1,1)<1 | pts(1,1)>nx | pts(1,2)<1 | pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to
A book:
http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo:
http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
and some Matlab code on the File Exchange:
http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
and a link to a description on how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image).
% This will probably make the snake work better, as the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more than I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
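If the returned contour O is an N-by-2 list of boundary coordinates (an assumption on my part; check the Snake2D documentation on the File Exchange), the area and perimeter the question asks for could be estimated directly from the polygon:
% Assumes O is an N x 2 array of [row, column] contour points (hypothetical)
A = polyarea(O(:,2), O(:,1));                     % enclosed area in pixels^2
P = sum(sqrt(sum(diff(O([1:end 1], :)).^2, 2)));  % perimeter length in pixels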
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit in the outline and then use the shapes' areas to estimate the total area? For instance, compute a minimal circle around the edge set, then a maximal circle inside the edges. You could then use these to estimate the diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
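A crude sketch of that idea (my own illustration, reusing the edge map tyedb from the earlier code; it assumes the centroid of the edge pixels lies inside the figure):
[yy, xx] = find(tyedb);             % coordinates of the detected edge pixels
cx = mean(xx);  cy = mean(yy);      % crude centre estimate
d  = hypot(xx - cx, yy - cy);       % distance of each edge pixel from the centre
rInner = min(d);  rOuter = max(d);  % inner and outer bounding circle radii
areaLow  = pi * rInner^2;           % lower bound on the enclosed area
areaHigh = pi * rOuter^2;           % upper bound on the enclosed area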
I am having trouble achieving the correct segmentation of a grayscale image:
The ground truth, i.e. what I would like the segmentation to look like, is this:
I am most interested in the three components within the circle. Thus, as you can see, I would like to segment the top image into three components: two semi-circles, and a rectangle between them.
I have tried various combinations of dilation, erosion, and reconstruction, as well as various clustering algorithms, including k-means, ISODATA, and mixture of Gaussians, all with varying degrees of success.
Any suggestions would be appreciated.
Edit: here is the best result I've been able to obtain. This was obtained using an active contour to segment the circular ROI, and then applying isodata clustering:
There are two problems with this:
The white halo around the bottom-right cluster, belonging to the top-left cluster
The gray halo around both the top-right and bottom-left clusters, belonging to the center cluster.
Here's a starter...
Use the circular Hough transform to find the circular part. For that, I initially threshold the image locally.
im=rgb2gray(imread('Ly7C8.png'));
imbw = thresholdLocally(im,[2 2]); % threshold locally with a 2x2 window
% preparing to find the circle
props = regionprops(imbw,'Area','PixelIdxList','MajorAxisLength','MinorAxisLength');
[~,indexOfMax] = max([props.Area]);
approximateRadius = props(indexOfMax).MajorAxisLength/2;
radius = round(approximateRadius); % could also try a small range, e.g. approximateRadius-1:approximateRadius+1
%find the circle using Hough trans.
h = circle_hough(edge(imbw), radius,'same');
[~,maxIndex] = max(h(:));
[i,j,k] = ind2sub(size(h), maxIndex);
center.x = j; center.y = i;
figure;imagesc(im);imellipse(gca,[center.x-radius center.y-radius 2*radius 2*radius]);
title('Finding the circle using Hough Trans.');
Select only what's inside the circle:
[y,x] = meshgrid(1:size(im,2),1:size(im,1));
z = (x-j).^2+(y-i).^2;
f = (z<=radius^2);
im=im.*uint8(f);
EDIT:
Look for a threshold at which to start segmenting the image by examining the histogram: find its first local maximum and iterate from there until two separate segments are found, using bwlabel:
p=hist(im(im>0),1:255);
p=smooth(p,5);
[pks,locs] = findpeaks(p);
bw=bwlabel(im>locs(1));
i=0;
while numel(unique(bw))<3
bw=bwlabel(im>locs(1)+i);
i=i+1;
end
imagesc(bw);
The middle part can now be obtained by taking out the two labeled parts from the circle, and what is left will be the middle part (+some of the halo)
bw2=(bw<1.*f);
but after some median filtering we get something more reasonable:
bw2= medfilt2(medfilt2(bw2));
and together we get:
imagesc(bw+3*bw2);
The last part is a real "quick and dirty" job; I'm sure that with the tools you have already used you'll get better results...
One can also obtain an approximate result using the watershed transformation. This is the watershed on the inverted image, i.e. watershed(255-I). Here is an example result:
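For reference, a minimal sketch of that call (mine; the smoothing step and its parameter are assumptions, added to limit over-segmentation):
Iinv = 255 - I;               % invert the uint8 grayscale image
Iinv = imgaussfilt(Iinv, 2);  % optional smoothing to reduce over-segmentation
W = watershed(Iinv);
figure; imagesc(W); axis image;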
Another simple method is to perform a morphological closing on the original image with a disk structuring element (one can perform multiscale closing for granulometries) and then obtain the full circle. After this, extracting the circle and the components within it is easier.
se = strel('disk',3);
Iclo = imclose(I, se);% This closes open circular cells.
Ithresh = Iclo>170; % one can locate this threshold automatically from the histogram modes (if you know your cell structure a priori)
Icircle = bwareaopen(Ithresh, 50); %to remove small noise components in the bg
Ithresh2 = I>185; % This again needs a simple histogram.
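A possible continuation (my own sketch, not part of the original answer): restrict the second threshold to the inside of the closed circle and label the components within it.
Idisc = imfill(Icircle, 'holes');  % solid disc covering the circle interior
Iinside = Ithresh2 & Idisc;        % bright components restricted to the circle
Lcomp = bwlabel(Iinside);          % label the components within the circle
figure; imagesc(Lcomp); axis image;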