I am interested in separating features in an image using the watershed algorithm. Following the MATLAB tutorial, I wrote a small proof-of-principle script that I can build on for my image analysis.
Im = imread('../../Pictures/testrec.png');
bw = rgb2gray(Im);
figure
imshow(bw,'InitialMagnification','fit'), title('bw')
%Compute the distance transform of the complement of the binary image.
D = bwdist(~bw);
figure
imshow(D,[],'InitialMagnification','fit')
title('Distance transform of ~bw')
%Complement the distance transform, and force pixels that don't belong to the objects to be at Inf .
D = -D;
D(~bw) = Inf;
%Compute the watershed transform and display the resulting label matrix as an RGB image.
L = watershed(D);
L(~bw) = 0;
rgb = label2rgb(L,'jet',[.5 .5 .5]);
figure
imshow(rgb,'InitialMagnification','fit')
title('Watershed transform of D')
It appears that the feature separation is somewhat random, as can be seen from the elongated feature in the middle. However, the watershed algorithm does not seem to expose any parameters that could be used to tune its performance. Can you suggest how such a parameter could be introduced, or a better algorithm for processing the data?
Bonus question: I would like to first segment my image using bwconncomp, then selectively apply the watershed algorithm to only some of the regions. Assume I know which of the cc.PixelIdxList regions I want to apply the algorithm to. How do I get a new PixelIdxList with the separated components?
Watershed transformation cannot separate convex shapes.
There is no way to change that. A convex shape always results in one object.
Blobs very close to convex will always result in poor watershed results.
The only reason why you have that "somewhat random" result instead of a single basin is that a few pixels are a bit off the perimeter.
Watershed results are improved by pre- and post-processing, but that is very specific to the problem at hand; one common pre-processing step is sketched below.
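For instance, you can suppress shallow minima in the distance transform with imhmin before calling watershed, so that only basins deeper than h survive. The sketch below is only an illustration of the idea, and the depth h is an assumed tuning parameter you would have to choose for your data.
D = -bwdist(~bw);   % negated distance transform, as in the question
h = 2;              % minimum basin depth; problem-specific, must be tuned
D = imhmin(D, h);   % suppress minima shallower than h
D(~bw) = Inf;
L = watershed(D);   % fewer spurious basins than before
L(~bw) = 0;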
I want to evaluate the grid quality in the real case, where all coordinates differ.
The signal is an ECG signal, for which the average lifetime is 75 years.
My task is to evaluate its age at the moment of measurement, which is an inverse problem.
I think a 2D approximation of the 3D case is hard (done here by Abo-Zahhad with 3 leads: 2 on the chest and one at the left leg; MIT-BIH arrhythmia database):
A = f + \epsilon,
where f is a piecewise continuous function in R^2, \epsilon is the error matrix, and A is the observed 2D data matrix.
Now, I evaluate the average grid distance along the x-axis (time) and the average grid distance along the y-axis (energy).
I think this can be done with MATLAB's Image Processing Toolbox.
However, I am not sure how complete the toolbox's approaches are.
I think a transform approach must be used in the setting of uneven and non-continuous grids. One approach is the exact linear-time Euclidean distance transform of grid-line-sampled shapes by Joakim Lindblad et al.
The method presents a distance transform (DT) which assigns to each image point its smallest distance to a selected subset of image points.
This kind of approach is often a basis of algorithms for many methods in image analysis.
I tested bwdist (distance transform of a binary image) without success: the chessboard option returns an empty square matrix, while the cityblock, euclidean, and quasi-euclidean options return full matrices. A minimal illustration of bwdist's metric option is sketched below.
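As a sanity check, here is a minimal, self-contained demonstration of bwdist's metric option on a toy image (the values are made up purely for illustration):
BW = false(8); BW(4,4) = true;     % toy binary image with a single seed pixel
D_euc = bwdist(BW);                % Euclidean distances to the seed (default)
D_che = bwdist(BW, 'chessboard');  % chessboard metric
D_cit = bwdist(BW, 'cityblock');   % cityblock metric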
Another approach (pseudocode):
% https://stackoverflow.com/a/29956008/54964
%// retrieve picture
imgRGB = imread('dummy.png');
%// detect lines
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
I think this approach will not work here, however, because the grids are not necessarily continuous and not necessarily ideal.
pdist, based on Lumbreras' answer
In the real examples all coordinates differ, so pdist's hamming and jaccard distances are always 1 with real data.
The options euclidean, cityblock, minkowski, chebychev, mahalanobis, cosine, correlation, and spearman offer some description of the data.
However, these options now make little sense to me for such full matrices.
I want to estimate how long the signal can live.
Sources
J. Mueller and S. Siltanen. Linear and Nonlinear Inverse Problems with Practical Applications. SIAM, 2012.
EIT with the D-bar method: discontinuous heart-and-lungs phantom. http://wiki.helsinki.fi/display/mathstatHenkilokunta/EIT+with+the+D-bar+method%3A+discontinuous+heart-and-lungs+phantom. Accessed 29 Feb 2016.
There is a function in MATLAB called pdist that computes the pairwise distance between all row elements of a matrix and lets you choose the type of distance to use (Euclidean, cityblock, correlation, and so on). Are you after something like this? Not sure I understood your question!
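For example, a toy sketch with made-up points:
X = [0 0; 1 0; 0 2];         % three points, one per row
d = pdist(X, 'cityblock');   % condensed vector of pairwise distances
D = squareform(d);           % 3-by-3 symmetric distance matrix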
cheers!
Simply put, do not do this in post-processing. Such artifacts can stem from the rasterized image, from the viewer, and/or from other sources. Do quality assurance in the signal generation/processing step instead.
It is much easier to evaluate the original signal than views of it.
I have a grayscale image and need to increase its resolution. How can this be done in MATLAB? Would it mainly be done by multiplying the dimensions of the image, for instance?
You need to perform interpolation. There are many ways to do this. Use imresize (e.g. imgOut = imresize(img, scale, method);), or, if you do not have the Image Processing Toolbox, consider the following code:
function imres = resizeim(I, outsize, interpalg)
if nargin < 3 || isempty(interpalg)
    interpalg = 'cubic';
end
rows = outsize(1);
cols = outsize(2);
vscale = size(I,1) / rows;   % input/output scale factors
hscale = size(I,2) / cols;
imgClass = class(I);
% Query points are offset so the output covers the same extent as I.
imres = interp2(double(I), (1:cols)*hscale + 0.5 * (1 - hscale), ...
                (1:rows)'*vscale + 0.5 * (1 - vscale), ...
                interpalg);
imres = cast(imres, imgClass);
Note: this is a rough start. You may need to perform pre-filtering or other transformations. Also, this example only supports 2D (grayscale) images; for RGB, process each color plane separately (for example, in a loop). Again, this is just an example.
Aside from edge handling, this gives the same results as imresize with anti-aliasing turned off (i.e. imresize(...,'Antialiasing',false)).
Regarding edge handling, see the interp2 documentation for the extrapval parameter. The code gets ugly, but you can either clamp the min/max interpolation points (the interp2 inputs) so they map exactly to the edges, or use NaN for extrapval and post-process imres to replace the NaNs with their neighbors. Note that simply interpolating at points such as linspace(1,size(I,1),rows) will not give the expected scale change. A sketch of the NaN approach follows.
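Here is a hedged sketch of the NaN route; xq and yq stand for the query grids built inside resizeim, and fillmissing requires R2016b or later:
imres = interp2(double(I), xq, yq, 'cubic', NaN); % NaN wherever a query falls outside the grid
imres = fillmissing(imres, 'nearest');            % fill NaNs down each column
imres = fillmissing(imres, 'nearest', 2);         % then across rows, for any fully-NaN columns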
You can also perform sinc interpolation by Fourier transforming the image, zero-padding the centered spectrum, inverse Fourier transforming, and taking the absolute value. Note that the spectrum must be shifted (fftshift/ifftshift) so the zero padding surrounds the high frequencies, and the amplitude rescaled to account for the larger transform size:
F = padarray(fftshift(fft2(im)), [row_pad, col_pad]);   % zero-pad the centered spectrum
im_rz = abs(ifft2(ifftshift(F))) * numel(F)/numel(im);  % undo the shift and rescale amplitude
You can also "hallucinate" high-resolution details. See, for example, Glasner et al., "Super-Resolution from a Single Image", ICCV 2009.
An implementation of that algorithm can be found here
I have a vein image, shown below. I use the watershed algorithm to extract the skeleton of the vein.
My code (K is the original image):
level = graythresh(K);
BW = im2bw(K,level);
D = bwdist(~BW);
DL = watershed(D);
bgm = DL == 0;
imshow(bgm);
The result is:
As you can see, a lot of information is lost. Can anybody help me out? Thanks.
It looks like the lighting is somewhat uneven. This can be corrected using certain morphological operations. The basic idea is to compute an image that represents just the uneven lighting and subtract it, or to divide by it (which also enhances contrast). Because we want to find just the lighting, it is important to use a large enough structuring element, so that the operation examines more global properties rather than local ones.
%# Load image and convert to [0,1].
A = im2double(imread('http://i.stack.imgur.com/TQp1i.png'));
%# Any large (relative to objects) structuring element will do.
%# Try sizes up to about half of the image size.
se = strel('square',32);
%# Removes uneven lighting and enhances contrast.
B = imdivide(A,imclose(A,se));
%# Otsu's method works well now.
C = B > graythresh(B);
D = bwdist(~C);
DL = watershed(D);
imshow(DL==0);
Here are C (left), plus DL==0 (center) and its overlay on the original image:
You are losing information because when you apply im2bw you are converting your uint8 image, where pixel brightness takes values from intmin('uint8') == 0 to intmax('uint8') == 255, into a binary image where only logical values are used. This is the loss of information you observed.
If you display the image BW, you will see that all the elements of K with a value greater than the threshold level turn into ones, while those below the threshold turn into zeros.
Yes, you will likely need to lower your threshold (lower than what Otsu's method is giving you). And if the edge map is noisy when you lower it, apply a 2-D Gaussian smoothing filter before thresholding. This will move the edges slightly but will also clean up noise, so it is a trade-off.
The 2-D Gaussian can be applied by doing something like
w = gausswin(N, Alpha);   % 1-D window; you'll have to play with N and Alpha
w = w * w';               % outer product gives a 2-D Gaussian kernel
w = w / sum(w(:));        % normalize so overall brightness is preserved
K = imfilter(K, w, 'same', 'symmetric');  % something like these options
before you apply the rest of your algorithm.
In MATLAB, how can I generate two clusters of random points like those in the following graph? Can you show me the script/code?
If you want to generate such data points, you need to know their probability distribution.
In your case, I do not have the real distributions, so I can only give an approximation. From your figure I see that both clusters lie approximately on a circle, with a random radius and a limited span of angles. I assume those angles and radii are uniformly distributed over certain ranges, which seems like a pretty good starting point.
Therefore it also makes sense to generate the random data in polar coordinates (i.e. angle and radius) instead of Cartesian ones (i.e. horizontal and vertical), and then transform them for plotting.
C1 = [0 0]; % center of the circle
C2 = [-5 7.5];
R1 = [8 10]; % range of radii
R2 = [8 10];
A1 = [1 3]*pi/2; % [rad] range of allowed angles
A2 = [-1 1]*pi/2;
nPoints = 500;
urand = @(nPoints,limits)(limits(1) + rand(nPoints,1)*diff(limits));
randomCircle = @(n,r,a)(pol2cart(urand(n,a),urand(n,r)));
[P1x,P1y] = randomCircle(nPoints,R1,A1);
P1x = P1x + C1(1);
P1y = P1y + C1(2);
[P2x,P2y] = randomCircle(nPoints,R2,A2);
P2x = P2x + C2(1);
P2y = P2y + C2(2);
figure
plot(P1x,P1y,'or'); hold on;
plot(P2x,P2y,'sb'); hold on;
axis square
This yields:
This method works relatively well when you deal with distributions that you can transform easily and when you can easily describe the possible locations of the points. If you cannot, there are other methods, such as inverse transform sampling, which generate the data algorithmically rather than through manual variable transformations as I did here; a toy illustration follows.
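As a toy illustration of inverse transform sampling (not tied to the figure above): to draw from an exponential distribution with rate lambda, push uniform samples through the inverse CDF.
lambda = 2;            % rate parameter, arbitrary choice
u = rand(1000,1);      % uniform samples on (0,1)
x = -log(1-u)/lambda;  % F^{-1}(u) for F(x) = 1 - exp(-lambda*x)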
K-means is not going to give you what you want.
For K-means, vectors are classified based on their nearest cluster center. I can only think of two ways you could get the non-convex assignment shown in the picture:
Your input data is actually higher-dimensional, and your sample image is just a 2-d projection.
You're using a distance metric with different scaling across the dimensions.
To achieve your aim:
Use a non-linear clustering algorithm.
Apply a non-linear transform to your input data. (Probably not feasible).
You can find a list of non-linear clustering algorithms here. Specifically, look at the reference on the MST clustering page: your exact shape appears on the fourth page of the PDF, together with a comparison of what happens with k-means.
For existing MATLAB code, you could try this kernel k-means implementation. Also, check out the Clustering Toolbox. A sketch using a built-in alternative follows.
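As one concrete non-linear option, here is a minimal sketch using the built-in spectralcluster function (Statistics and Machine Learning Toolbox, R2019b or later); X is assumed to be an n-by-2 matrix of points:
idx = spectralcluster(X, 2);    % 2 clusters found via a graph Laplacian
gscatter(X(:,1), X(:,2), idx);  % visualize the non-convex assignment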
I am assuming that you really want to perform the clustering operation on existing data, as opposed to generating the data itself. Since you have a plot of some data, it seems logical that you already know how to generate it! If I am wrong in this assumption, then you should word your questions more carefully in the future.
The human brain is quite good at seeing patterns in things like this; writing code for a computer to do the same will often take some serious effort.
As has been said already, traditional clustering tools such as k-means will fail. Luckily, the Image Processing Toolbox has good tools for these purposes already written. I might suggest converting the plot into an image, using filled-in dots to plot the points. Make sure the dots are large enough that they touch each other within a cluster, with some overlap. Then use dilation/erosion tools if necessary to make sure that any small cracks are filled in, but do not go so far as to cause the clusters to merge. Finally, use region segmentation tools to pick out the clusters. Once done, transform back from pixel units in the image into your spatial units, and you have accomplished your task.
For the image processing approach to work, you will need sufficient separation between the clusters compared to the coarseness within a cluster, but that seems to be a requirement for any method to succeed. A rough sketch of the pipeline follows.
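Here is that sketch, with made-up parameter values that would need tuning for real data; xy is assumed to be an n-by-2 matrix of point coordinates, and the implicit expansion requires R2016b or later:
res = 200;                                          % raster resolution
mn = min(xy); mx = max(xy);
ij = round((xy - mn)./(mx - mn)*(res-1)) + 1;       % map points to pixel coordinates
BW = false(res);
BW(sub2ind([res res], ij(:,2), ij(:,1))) = true;    % one pixel per point
BW = imdilate(BW, strel('disk',3));                 % fatten dots so cluster members touch
BW = imclose(BW, strel('disk',2));                  % fill small cracks without merging clusters
L = bwlabel(BW);                                    % region segmentation
cluster = L(sub2ind([res res], ij(:,2), ij(:,1)));  % cluster label for each original point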
I have an image and my aim is to binarize the image. I have filtered the image with a low pass Gaussian filter and have computed the intensity histogram of the image.
I now want to perform smoothing of the histogram so that I can obtain the threshold for binarization. I used a low pass filter but it did not work. This is the filter I used.
h = fspecial('gaussian', [8 8],2);
Can anyone help me with this? What is the correct process for smoothing a histogram? The histogram itself is computed with
imhist(Ig);
Thanks a lot for all your help.
I've been working on a very similar problem recently, trying to compute a threshold in order to exclude noisy background pixels from MRI data prior to performing other computations on the images. What I did was fit a spline to the histogram to smooth it while maintaining an accurate fit of the shape. I used the splinefit package from the file exchange to perform the fitting. I computed a histogram for a stack of images treated together, but it should work similarly for an individual image. I also happened to use a logarithmic transformation of my histogram data, but that may or may not be a useful step for your application.
[my_histogram, xvals] = hist(reshape(image_volume, 1, []), number_of_bins);
my_log_hist = log(my_histogram);
my_log_hist(~isfinite(my_log_hist)) = 0; % Remove -Inf values that arise from empty bins (log of zero is -Inf)
figure(1), plot(xvals, my_log_hist, 'b');
hold on
breaks = linspace(0, max_pixel_intensity, numberofbreaks);
xx = linspace(0, max_pixel_intensity, max_pixel_intensity+1);
pp = splinefit(xvals, my_log_hist, breaks);
plot(xx, ppval(pp, xx), 'r');
Note that the spline is differentiable, and you can use ppdiff to get the derivative, which is useful for finding maxima and minima to help pick an appropriate threshold; a sketch follows this paragraph. The numberofbreaks is set to a relatively low number so that the spline smooths the histogram. I used linspace in the example to pick the breaks, but if you know that some portion of the histogram exhibits much greater curvature than elsewhere, you will want more breaks in that region and fewer elsewhere in order to accurately capture the shape of the histogram.
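A hedged sketch of using the derivative to locate a valley (ppdiff ships with the same splinefit package; taking the first minimum as the threshold is an assumption):
dpp = ppdiff(pp);                                      % derivative of the fitted spline
slope = ppval(dpp, xx);
sign_change = slope(1:end-1) < 0 & slope(2:end) >= 0;  % slope crosses from - to +
valleys = xx(sign_change);                             % candidate local minima
threshold = valleys(1);                                % e.g. take the first valley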
To smooth the histogram you need to use a 1-D filter. This is easily done using the filter function. Here is an example:
I = imread('pout.tif');
h = imhist(I);
smooth_h = filter(normpdf(-4:4, 0,1),1,h);
Of course you can use any smoothing kernel you choose; a moving-average kernel, for example, would be ones(1,8)/8.
Since your goal here is just to find a threshold to binarize the image, you could simply use the graythresh function, which implements Otsu's method.
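For completeness, a minimal sketch of that shortcut (im2bw matches the earlier answers; newer releases prefer imbinarize):
level = graythresh(I);  % Otsu's method on the (unsmoothed) histogram
BW = im2bw(I, level);   % binarize at that threshold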