I am trying to find a discrete approximation to the Gaussian smoothing operation, as shown in the link:
http://bit.ly/1cSgkwt
G is some discrete smoothing kernel, a Gaussian in this case, and * is the convolution operation. Basically, I want to apply a smoothing kernel to each pixel in the image. I am doing this in MATLAB and using the following code to create the matrix G, which is naive and hence painfully slow:
z = rgb2gray(imread('train_02463_1.bmp'));
im_sz = size(z);
ksize = 5;
% Gaussian kernel of size (2*ksize+1)-by-(2*ksize+1)
gw_mat = g_sigma(1,2*ksize+1)'*g_sigma(1,2*ksize+1);
G = sparse(prod(im_sz),prod(im_sz));   % one row per output pixel, one column per input pixel
for i = ksize+1:im_sz(1)-ksize
    for j = ksize+1:im_sz(2)-ksize
        [x,y] = meshgrid(i-ksize:i+ksize, j-ksize:j+ksize);
        row_num = sub2ind(im_sz,i,j);
        colnums = sub2ind(im_sz,x,y);
        G(row_num,colnums(:)) = gw_mat(:)';
    end
end
Is there a more efficient way to do this ?
EDIT: I should apologize for not completely specifying the problem. Most of the answers below are valid, but the issue here is that the above approximation is part of an optimization objective where the variable is z. The whole problem looks something like this:
http://goo.gl/rEE02y
Thus, I have to pre-generate a matrix G which approximates a smoothing function in order to feed this objective to a solver. I am using cvx, if that helps.
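One way to pre-generate such a G without the double loop (a sketch only: it assumes zero padding at the image borders rather than the interior-only window above, and uses fspecial in place of the g_sigma helper with sigma = 1) is to build two sparse 1-D convolution matrices and take their Kronecker product:
g1 = fspecial('gaussian', [2*ksize+1 1], 1);                               % 1-D Gaussian, sigma = 1 assumed
Dr = spdiags(repmat(g1', im_sz(1), 1), -ksize:ksize, im_sz(1), im_sz(1));  % filters along the first dimension
Dc = spdiags(repmat(g1', im_sz(2), 1), -ksize:ksize, im_sz(2), im_sz(2));  % filters along the second dimension
G  = kron(Dc, Dr);   % G*z(:) equals vec(Dr*Z*Dc'), using MATLAB's column-major ordering
Since cvx only needs a constant matrix multiplying the variable z(:), a G built this way can be passed straight into the objective.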
Typically people do this by applying two 1-D Gaussian filters sequentially instead of one 2-D filter (the idea of separable filters). You apply a 1-D horizontal Gaussian to each pixel, save the result in a temporary image, and then apply a vertical Gaussian to that. The expected speedup is significant: O(ksize*ksize) -> O(ksize+ksize) per pixel.
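For illustration, a minimal sketch of this (assuming the Image Processing Toolbox and sigma = 1):
sigma = 1;                                       % assumed value
g = fspecial('gaussian', [1 2*ksize+1], sigma);  % 1-D horizontal Gaussian
tmp = imfilter(z,   g,  'replicate');            % horizontal pass
zs  = imfilter(tmp, g', 'replicate');            % vertical pass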
You can approximate Gaussian smoothing via box filters; e.g., see integgaussfilt.m:
This function approximates Gaussian filtering by repeatedly applying
averaging filters. The averaging is performed via integral images which
results in a fixed and very low computational cost that is independent of
the Gaussian size.
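If you don't have integgaussfilt.m to hand, a rough sketch of the same idea with the stock imboxfilt (R2015b or newer) is repeated box filtering, with the box width chosen so that three passes have roughly the variance of the target Gaussian:
sigma = 2;                                  % target Gaussian sigma (assumed)
w = 2*floor(sqrt(12*sigma^2/3 + 1)/2) + 1;  % odd box width: three boxes ~ one Gaussian
B = z;
for k = 1:3
    B = imboxfilt(B, w);                    % box filter computed via integral images
end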
The fastest way to apply a spatially invariant filter to an image in MATLAB is to use imfilter.
imfilter will automatically notice a separable kernel and apply it in two steps (vertically / horizontally).
For creating some known and useful kernels you may look at fspecial.
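A minimal sketch of that route (sigma = 1 is assumed here):
Gk = fspecial('gaussian', [2*ksize+1 2*ksize+1], 1);  % 2-D Gaussian kernel
zs = imfilter(z, Gk, 'replicate');                    % imfilter exploits separability internally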
Related
I applied the k-means algorithm for segmenting images, using the built-in kmeans function. It works properly, but I want to know the threshold value that converts the image to binary in the k-means method. For example, we can get a threshold value by using a built-in function in MATLAB:
threshold=graythresh(grayscaledImage);
bw=im2bw(grayscaledImage,threshold);
%Applying k-means....
imdata=reshape(grayscaledImage,[],1);
imdata=double(imdata);
[imdx mn]=kmeans(imdata,2);
imIdx=reshape(imdx,size(grayscaledImage));
imshow(imIdx,[]);
Actually, k-means and the well-known Otsu method for binarizing intensity images with a global threshold have an interesting relationship:
http://www-cs.engr.ccny.cuny.edu/~wolberg/cs470/doc/Otsu-KMeansHIS09.pdf
It can be shown that k-means is a locally optimal, iterative solution to the same objective function as Otsu, where Otsu is a globally optimal, non-iterative solution.
Given grayscale intensity data, one can compute a threshold based on Otsu's method, which is exposed in MATLAB through graythresh or otsuthresh, depending on which interface you prefer.
A = imread('cameraman.tif');
A = im2double(A);
totsu = otsuthresh(histcounts(A,10000))
[~,c] = kmeans(A(:),2,'Replicates',10);
tkmeans = mean(c)
You can obtain a grayscale threshold from k-means by just finding the midpoint of the two centroids. This makes sense geometrically: on either side of that midpoint you are closer to one centroid or the other, and therefore fall into that respective cluster.
totsu =
0.3308
tkmeans =
0.3472
You can't get the threshold because there is no threshold in the kmeans algorithm.
K-means is a clustering algorithm; it returns clusters, which in many cases cannot be obtained with simple thresholding.
See this link to learn further on how k-means works.
I need to create a diagonal matrix containing the Fourier coefficients of the Gaussian wavelet function, but I'm unsure of what to do.
Currently I'm using this function to generate the Haar Wavelet matrix
http://www.mathworks.co.uk/matlabcentral/fileexchange/33625-haar-wavelet-transformation-matrix-implementation/content/ConstructHaarWaveletTransformationMatrix.m
and taking the rows at dyadic scales (2,4,8,16) as the transform:
M = 256;
H = ConstructHaarWaveletTransformationMatrix(M);
fi = conj(dftmtx(M))/M;
H = fi*H;
H = H(4,:);
H = diag(H);
etc
How do I repeat this for Gaussian wavelets? Is there a built-in MATLAB function that will do this for me?
For reference I'm implementing the algorithm in section 4 of this paper:
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04218361
I may not be answering the question exactly, but I will try to help you move forward.
As far as I know, the MATLAB Wavelet Toolbox only deals with wavelet operations and coefficients (increasing or decreasing resolution levels, and similar operations), but it does not expose the internal matrices that perform the transformations between signals and coefficients.
Hence I fear the answer to this question is no. Some time ago I did this for some of the Haar-class wavelets: I actually built the matrix from scratch, and then compared the coefficients obtained with those from the built-in MATLAB Wavelet Toolbox, thereby making sure the matrices were good enough for my algorithm (in my case, recursive parameter estimation for time-varying models).
For the function ConstructHaarWaveletTransformationMatrix it is really simple to create the matrix, because the Haar class can be expressed very simply as Kronecker products.
The Gaussian wavelet case, I fear, has to be done from scratch too.
The steps I suggest would be:
Although MATLAB doesn't expose these matrices explicitly, you can use the built-in functions to recover the Gaussian wavelet shapes, and from them compose the matrix for your algorithm.
Build every column of the matrix from a Gaussian wavelet, for every resolution level you require (the dyadic scales). Use the Wavelet Toolbox to recover the shapes; see the sketch right after these steps.
After this, compare the coefficients you obtain with the coefficients from the toolbox. This way you can correct the ordering of the matrix rows.
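As a starting point, a minimal sketch (assuming the Wavelet Toolbox) that recovers the Gaussian wavelet shape, which you can then scale and shift into the columns of your matrix:
M = 256;                            % signal length, as in the question
[psi, x] = gauswavf(-5, 5, M, 1);   % 1st-derivative ('gaus1') Gaussian wavelet sampled on [-5,5]
plot(x, psi), title('gaus1 wavelet shape')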
Numerically, with f_j the projection of the signal onto V_j (the space of scaling functions, PHI) at resolution level j, and g_j the projection onto W_j (the space of mother wavelets, PSI) at resolution level j, we can write:
f = f_j0 + sum_{j=j0}^{j1-1} g_j
Hence both f_j0 and the g_j induce matrices; let's call them the PHI_j and PSI_j matrices:
f = PHI_j0*c_j0 + sum_{j=j0}^{j1-1} PSI_j*d_j
The columns of PHI_j0 contain the scaled and shifted scaling signals (one block, for j0 only) for the approximation projection (onto the V_j0 space), and the columns of the PSI_j contain the scaled and shifted mother wavelet signals (several blocks, from j0 to j1-1) for the detail projections (onto the W_j0 to W_j1-1 spaces).
Hence, the matrix you need is:
PHI = [PHI_j0 PSI_j0 ... PSI_j1-1]
Thus you can express your original signal as:
f = PHI*C
where C is the vector of approximation and detail coefficients for those levels:
C = [c_j0' d_j0' ... d_j1-1']'
The first part, building PHI, can be achieved by writing:
function PHI = MakePhi(l,str,Jmin,Jmax)
% [PHI] = MakePhi(l,str,Jmin,Jmax)
%
% Build the full PHI wavelet matrix for obtaining wavelet coefficients
% (extract)

% FILTER
[LO_R,HI_R] = wfilters(str,'r');
lf = length(LO_R);

% PHI BUILD
PHI = [];
laux = l([end-Jmax end-Jmax:end]);
PHI = [PHI MakeWMatrix('a',str,laux)];      % approximation (scaling function) block
for j = Jmax:-1:Jmin
    laux = l([end-j end-j:end]);
    PHI = [PHI MakeWMatrix('d',str,laux)];  % detail (wavelet) block for level j
end
wfilters is a MATLAB built-in function; it returns the reconstruction filters needed to build the approximation and/or detail wavelet signals.
The MakeWMatrix function is:
function M = MakeWMatrix(typestr,str,laux)
% M = MakeWMatrix(typestr,str,laux)
%
% Build the wavelet matrix for obtaining wavelet coefficients
% for a single level vector.
% (extract)

[LO_R,HI_R] = wfilters(str,'r');
if typestr == 'a'
    F_R = LO_R';   % approximation: low-pass reconstruction filter
else
    F_R = HI_R';   % detail: high-pass reconstruction filter
end
la = length(laux);
lin = laux(2); lout = laux(3);
M = MakeCMatrix(F_R,lin,lout);
for i = 3:la-1
    lin = laux(i); lout = laux(i+1);
    Mi = MakeCMatrix(LO_R',lin,lout);
    M = Mi*M;      % cascade the reconstruction stages
end
and finally the MakeCMatrix is:
function [M] = MakeCMatrix(F_R,lin,lout)
% Convolution matrix
% (extract)

lf = length(F_R);
M = [];
for i = 1:lin
    M(:,i) = [zeros(2*(i-1),1); F_R; zeros(2*(lin-i),1)];   % shifted copy of the filter in each column
end
M = [zeros(1,lin); M; zeros(1,lin)];
[ltot,lin] = size(M);
lmin = floor((ltot-lout)/2)+1;
lmax = floor((ltot-lout)/2)+lout;
M = M(lmin:lmax,:);   % crop centrally to the requested output length
This last matrix should include some interpolation routine to get better results in the general case.
I expect this solves part of your problem.
I have two images, one is degraded and one is part of the original image. I need to enhance the first image by using the second one, and I need to do this in the frequency domain. I cut the same area from the degraded image, took its FFT, and tried to calculate the transfer function, but when I applied that function to the image the result was terrible.
So I tried h=fspecial('motion',9,45); to be my transfer function and then reconstructed the image with the code given below.
im = imread('home_degraded.png');
im = rgb2gray(im);
h = fspecial('motion',9,45);
H = zeros(519,311);
H(1:7,1:7) = h;
Hf = fft2(H);
d = 0.02;
Hf(abs(Hf) < d) = 1;        % avoid dividing by near-zero frequency components
I = ifft2(fft2(im)./Hf);    % inverse filtering
imshow(mat2gray(abs(I)))
I have two questions now:
How can I generate a transfer function by using the small rectangles (I mean by not using h=fspecial('motion',9,45);)?
What methods can I use to remove noise from an enhanced image?
I can recommend a few ways to do that:
Arithmetic mean filter:
f = imfilter(g, fspecial('average', [m n]))
Geometric mean filter
f = exp(imfilter(log(g), ones(m, n), 'replicate')) .^ (1/(m*n))
Harmonic mean filter
f = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
where m and n are the size of the mask (for instance, you can set m = 3, n = 3).
Basically, what you want to do has (at least) two steps:
Estimate the PSF (blur kernel) by using the patch of the image with the squares in it.
Use the estimated kernel to do deconvolution to your blurry image
If you want to "guess" the PSF for step 1, that's fine but it's better to calculate it.
For step 2, you MUST first use edgetaper, which will suppress the ringing effects in your image (what you call noise).
Then you use non-blind deconvolution (step 2) with the function deconvlucy, following this syntax:
J = deconvlucy(I,PSF)
This deconvolution procedure adds some noise, especially if your PSF is not 100% accurate, but you can make the result smoother if you allow more iterations (trading off detail; no free lunch).
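A minimal sketch of step 2, assuming you already have a PSF estimate (the fspecial('motion',9,45) guess from the question is used here only as a placeholder):
im  = im2double(rgb2gray(imread('home_degraded.png')));   % filename from the question
psf = fspecial('motion', 9, 45);   % placeholder PSF; replace with your estimate
imT = edgetaper(im, psf);          % suppress boundary ringing before deconvolution
J   = deconvlucy(imT, psf, 30);    % 30 Richardson-Lucy iterations (more = more detail, more noise)
imshow(J)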
For the first step, if you don't care about the fact that you have the "sharp" square, you can just use blind deconvolution (deconvblind) and get some estimate of the PSF.
If you want to do it correctly and use the sharp patch then you can use it as your data term target in any optimization scheme involving the estimation of the PSF.
I have an image and my aim is to binarize the image. I have filtered the image with a low pass Gaussian filter and have computed the intensity histogram of the image.
I now want to perform smoothing of the histogram so that I can obtain the threshold for binarization. I used a low pass filter but it did not work. This is the filter I used.
h = fspecial('gaussian', [8 8],2);
Can anyone help me with this? What is the process with respect to smoothing of a histogram?
imhist(Ig);
Thanks a lot for all your help.
I've been working on a very similar problem recently, trying to compute a threshold in order to exclude noisy background pixels from MRI data prior to performing other computations on the images. What I did was fit a spline to the histogram to smooth it while maintaining an accurate fit of the shape. I used the splinefit package from the file exchange to perform the fitting. I computed a histogram for a stack of images treated together, but it should work similarly for an individual image. I also happened to use a logarithmic transformation of my histogram data, but that may or may not be a useful step for your application.
[my_histogram, xvals] = hist(reshape(image_volume, 1, []), number_of_bins);
my_log_hist = log(my_histogram);
my_log_hist(~isfinite(my_log_hist)) = 0; % Get rid of -Inf values that arise from empty bins (log of zero = -Inf)
figure(1), plot(xvals, my_log_hist, 'b');
hold on
breaks = linspace(0, max_pixel_intensity, numberofbreaks);
xx = linspace(0, max_pixel_intensity, max_pixel_intensity+1);
pp = splinefit(xvals, my_log_hist, breaks, 'r');
plot(xx, ppval(pp, xx), 'r');
Note that the spline is differentiable, and you can use ppdiff to get the derivative, which is useful for finding maxima and minima to help pick an appropriate threshold. The numberofbreaks is set to a relatively low number so that the spline will smooth the histogram. I used linspace in the example to pick the breaks, but if you know that some portion of the histogram exhibits much greater curvature than elsewhere, you'd want more breaks in that region and fewer elsewhere in order to accurately capture the shape of the histogram.
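For example, a minimal sketch of the threshold-picking step (it assumes the ppdiff helper from the same splinefit package, and that the histogram is bimodal so the threshold sits at a local minimum between the two modes):
dpp = ppdiff(pp);                                     % derivative of the fitted spline
d   = ppval(dpp, xx);
minima = xx([false, d(1:end-1) < 0 & d(2:end) >= 0]); % sign change from - to + marks a local minimum
threshold = minima(1);                                % illustrative choice: the first minimum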
To smooth the histogram you need to use a 1-D filter. This is easily done using the filter function. Here is an example:
I = imread('pout.tif');
h = imhist(I);
smooth_h = filter(normpdf(-4:4, 0,1),1,h);
Of course you can use any smoothing kernel you choose; a moving average of length 8 would simply be ones(1,8)/8.
Since your goal here is just to find the threshold to binarize an image you could just use the graythresh function which uses Otsu's method.
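For completeness, a minimal sketch of that shortcut:
I  = imread('pout.tif');
bw = im2bw(I, graythresh(I));   % Otsu threshold; use imbinarize(I) in newer MATLAB releases
imshow(bw)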
Does the 'gaussian' filter in MATLAB convolve the image with the Gaussian kernel? Also, how do you choose the parameters hsize (size of filter) and sigma? What do you base it on?
You first create the filter with fspecial and then convolve the image with the filter using imfilter (which works on multidimensional images as in the example).
You specify sigma and hsize in fspecial.
Code:
% Read an image
I = imread('peppers.png');
% Create the Gaussian filter with hsize = [5 5] and sigma = 2
G = fspecial('gaussian',[5 5],2);
% Filter it
Ig = imfilter(I,G,'same');
% Display
imshow(Ig)
@Jacob already showed you how to use the Gaussian filter in MATLAB, so I won't repeat that.
I would choose filter size to be about 3*sigma in each direction (round to odd integer). Thus, the filter decays to nearly zero at the edges, and you won't get discontinuities in the filtered image.
The choice of sigma depends a lot on what you want to do. Gaussian smoothing is low-pass filtering, which means that it suppresses high-frequency detail (noise, but also edges), while preserving the low-frequency parts of the image (i.e. those that don't vary so much). In other words, the filter blurs everything that is smaller than the filter.
If you're looking to suppress noise in an image in order to enhance the detection of small features, for example, I suggest to choose a sigma that makes the Gaussian just slightly smaller than the feature.
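A quick sketch of the size rule of thumb above (the sigma value is just an example):
sigma = 2;                       % example value
hsize = 2*ceil(3*sigma) + 1;     % roughly 3*sigma per side, rounded to an odd integer (13 here)
G = fspecial('gaussian', [hsize hsize], sigma);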
In MATLAB R2015a or newer, it is no longer necessary (or advisable from a performance standpoint) to use fspecial followed by imfilter since there is a new function called imgaussfilt that performs this operation in one step and more efficiently.
The basic syntax:
B = imgaussfilt(A,sigma) filters image A with a 2-D Gaussian smoothing kernel with standard deviation specified by sigma.
The size of the filter for a given Gaussian standard deviation (sigma) is chosen automatically, but can also be specified manually:
B = imgaussfilt(A,sigma,'FilterSize',[3 3]);
The default is 2*ceil(2*sigma)+1.
Additional features of imgaussfilt are the ability to operate on gpuArrays, filtering in the frequency or spatial domain, and advanced image padding options. It looks a lot like IPP... hmmm. Plus, there's a 3-D version called imgaussfilt3.
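A minimal usage sketch (R2015a or newer; 'peppers.png' ships with MATLAB):
A = imread('peppers.png');
B = imgaussfilt(A, 2);              % sigma = 2; default filter size is 2*ceil(2*2)+1 = 9
imshowpair(A, B, 'montage')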