Would Richardson–Lucy deconvolution work for recovering the latent kernel? - matlab

I am aware that Richardson–Lucy deconvolution is for recovering the latent image, but suppose we have a noisy image and the original image. Can we find the kernel that caused the transformation?
Below is MATLAB code for Richardson–Lucy deconvolution, and I am wondering if it is easy to modify so that it recovers the kernel instead of the latent image. My thought is that we could change the convolution option to 'valid' so the output would represent the kernel. What do you think?
function latent_est = RL_deconvolution(observed, psf, iterations)
    % to use conv2 we must make sure the inputs are double
    observed = double(observed);
    psf = double(psf);
    % initial estimate is arbitrary - uniform 50% grey works fine
    latent_est = 0.5*ones(size(observed));
    % create a flipped (mirrored) psf - the adjoint of the blur, not its inverse
    psf_hat = psf(end:-1:1, end:-1:1);
    % iterate towards the ML estimate of the latent image
    for i = 1:iterations
        est_conv = conv2(latent_est, psf, 'same');
        relative_blur = observed ./ est_conv;
        error_est = conv2(relative_blur, psf_hat, 'same');
        latent_est = latent_est .* error_est;
    end
Thanks in advance.

This is a very simple problem. Convolution is commutative: observed = latent * psf = psf * latent, so the roles of the latent image and the PSF are interchangeable. Hence, you don't need to change the implementation of RL deconvolution to obtain the PSF; you can simply call it as follows:
psf = RL_deconvolution(observed, latent_est, iterations)
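As a quick sanity check, one could blur a known image with a known PSF and verify that the swapped call concentrates the recovered kernel near the centre. This is only a sketch under assumed test data (a crop of cameraman.tif, a Gaussian PSF, an arbitrary iteration count); note that the estimate comes back at the size of observed, because the function builds its initial guess from the first argument, and the recovered kernel is only defined up to scale since the "PSF" passed in (the sharp image) is not normalised.
% sanity-check sketch; test image, kernel and iteration count are assumptions
true_img = im2double(imread('cameraman.tif'));
true_img = true_img(1:64, 1:64);               % small crop keeps conv2 cheap
true_psf = fspecial('gaussian', 9, 2);         % the "unknown" kernel to recover
observed = conv2(true_img, true_psf, 'same');  % simulated blurry observation
% swap the roles: the known sharp image plays the part of the PSF
psf_est = RL_deconvolution(observed, true_img, 30);
% the estimate is image-sized; the kernel shows up concentrated near the centre
imagesc(psf_est); axis image; colorbar;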

Related

Max pool operation in Matlab?

Is there a function that computes a 2-D max pooling with a predefined kernel size and window stride in Matlab?
I was looking around but couldn't find anything so far...
Say I have a 3D cube of data [HxWxC], I'd like to run a 2d max-pooling on every channel separately (similar to the max-pool operation known from neural networks).
A similar opencv function would also help me out...
Here's another solution that doesn't require the neural network function.
You could do a convolution with your kernel on each channel and then select the slices of the resulting matrix that you want to keep (which corresponds to the stride). Here's a code sample for a generic case of linear average pooling.
% example parameters: k = kernel size, p = padding, stride = window stride
% pool (linear average pooling; run this per channel for an HxWxC cube)
kernel = ones(k)/k^2;
temp = conv2(padarray(data, [p p]), kernel, 'valid');
% downsample according to the stride
pooled_data = temp(1:stride:end, 1:stride:end);
You could, of course, use an order statistic filter instead of the linear averaging in this example, or any other pooling function. You can also play around with the padarray value parameter to get the padding behavior you desire.
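For max pooling specifically, here is a minimal sketch using ordfilt2 (asking for the last rank in a k-by-k window returns the neighbourhood maximum); data is assumed to be the HxWxC cube from the question, and k and stride are example values, not anything from the post.
k = 3; stride = 2;                                    % example window size and stride
pooled = zeros(ceil(size(data,1)/stride), ceil(size(data,2)/stride), size(data,3));
for c = 1:size(data, 3)
    m = ordfilt2(data(:,:,c), k*k, true(k));          % sliding-window maximum
    pooled(:,:,c) = m(1:stride:end, 1:stride:end);    % keep every stride-th sample
end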
I ended up using matconvnet
k = 3; % kernel_size
p = 1; % padding size
data_pooled = vl_nnpool(single(data),[k,k],'Pad',[p,p,p,p]);

Image deblurring using the Wiener deconvolution

I am trying to deblur the following image by Wiener deconvolution. I was not given the original PSF, and I came up with an arbitrary value for the noise figure. I tried to optimize the code by playing around with the sigma values but I cannot get it to work.
my code...
img = im2double(imread('C:\Users\adhil\Desktop\matlab pics\test.JPG'));
LEN = 2;
THETA = 5;
PSF = fspecial('gaussian', LEN, THETA);
wnr1 = deconvwnr(img, PSF, 0.0000001);
imshow(wnr1);
title('Restored Image');
subplot(1,2,1);imshow(img);title('Before');
subplot(1,2,2);imshow(wnr1);title('After');
here is the result...
please advise
If you have no idea what the point spread function is, or you try an approximation that is far from the actual point spread function, Wiener deconvolution won't work very well, because it relies on knowing the point spread function.
You might have better results trying blind deconvolution (the MATLAB function deconvblind), using an array of ones the same size as your Gaussian PSF guess as the initial PSF (the array of ones is what the MATLAB documentation suggests).
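A minimal sketch of that suggestion, reusing the image from the question and assuming a 5x5 initial PSF and 20 iterations (both arbitrary):
img = im2double(imread('C:\Users\adhil\Desktop\matlab pics\test.JPG'));
init_psf = ones(5);                                    % flat initial guess, as the docs suggest
[deblurred, psf_est] = deconvblind(img, init_psf, 20); % 20 iterations is an assumption
subplot(1,2,1); imshow(img);       title('Before');
subplot(1,2,2); imshow(deblurred); title('After blind deconvolution');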

Discrete Approximation to Gaussian smoothing

I am trying to find a discrete approximation to the Gaussian smoothing operation, as shown in this link:
http://bit.ly/1cSgkwt
G is some discrete smoothing kernel, a Gaussian in this case, and * is the convolution operation. Basically, I want to apply a smoothing kernel to each pixel in the image. I am doing this in MATLAB and using the following code to create the matrix G, which is naive and hence painfully slow:
z = rgb2gray(imread('train_02463_1.bmp'));
im_sz = size(z);
ksize = 5;
% Gaussian kernel of size (2*ksize+1) x (2*ksize+1)
gw_mat = g_sigma(1,2*ksize+1)' * g_sigma(1,2*ksize+1);
G = sparse(length(ksize+1:im_sz(1)-ksize), prod(im_sz));
for i = ksize+1:im_sz(1)-ksize
    for j = ksize+1:im_sz(2)-ksize
        [x,y] = meshgrid(i-ksize:i+ksize, j-ksize:j+ksize);
        row_num = sub2ind(im_sz, i, j);
        colnums = sub2ind(im_sz, x, y);
        G(row_num, colnums(:)) = gw_mat(:)';
    end
end
Is there a more efficient way to do this ?
EDIT: I should apologize for not completely specifying the problem. Most of the answers below are valid, but the issue here is that the above approximation is part of an optimization objective where the variable is z. The whole problem looks something like this:
http://goo.gl/rEE02y
Thus, I have to pre-generate a matrix G which approximates a smoothing function in order to feed this objective to a solver. I am using cvx, if that helps.
Typically, people do this by applying two 1-D Gaussian filters sequentially instead of one 2-D filter (the idea of separable filters). You can build the 1-D horizontal Gaussian quickly, apply it to each pixel, save the result in temp, and then apply a vertical Gaussian to temp. The expected speed-up is significant: O(ksize*ksize) -> O(ksize+ksize) per pixel.
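A rough sketch of the separable approach on the question's image z, using fspecial instead of the question's g_sigma helper and assuming sigma = 1:
g = fspecial('gaussian', [1, 2*ksize+1], 1);   % 1-D horizontal Gaussian, sigma = 1 assumed
temp = conv2(double(z), g, 'same');            % horizontal pass
smoothed = conv2(temp, g', 'same');            % vertical pass on the intermediate result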
You can approximate Gaussian smoothing via box filters; see, for example, integgaussfilt.m:
This function approximates Gaussian filtering by repeatedly applying
averaging filters. The averaging is performed via integral images which
results in a fixed and very low computational cost that is independent of
the Gaussian size.
The fastest way to apply a spatially invariant filter to an image in MATLAB is imfilter.
imfilter automatically detects a separable kernel and applies it in two passes (vertical / horizontal).
For creating some known and useful kernels, have a look at fspecial.
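For example, a sketch assuming sigma = 1 and the question's ksize (imfilter will notice that the Gaussian kernel is separable):
h = fspecial('gaussian', 2*ksize+1, 1);                  % sigma = 1 is an assumption
smoothed = imfilter(double(z), h, 'replicate', 'same');  % replicate border, same-size output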

Image deblurring using MATLAB

I have two images, one is degraded and one is part of the original image. I need to enhance the first image by using the second one, and I need to do this in the frequency domain. I cut the same area from the degraded image, took its FFT, and tried to calculate the transfer function, but when I applied that function to the image the result was terrible.
So I tried h=fspecial('motion',9,45); to be my transfer function and then reconstructed the image with the code given below.
im = imread('home_degraded.png');
im = rgb2gray(im);
h = fspecial('motion',9,45);
H = zeros(519,311);
H(1:7,1:7) = h;
Hf = fft2(H);
d = 0.02;
Hf(find(abs(Hf)<d))=1;
I = ifft2(fft2(im)./Hf);
imshow(mat2gray(abs(I)))
I have two questions now:
How can I generate a transfer function by using the small rectangles (I mean by not using h=fspecial('motion',9,45);)?
What methods can I use to remove noise from an enhanced image?
I can recommend a few ways to do that:
Arithmetic mean filter:
f = imfilter(g, fspecial('average', [m n]))
Geometric mean filter
f = exp(imfilter(log(g), ones(m, n), 'replicate')) .^ (1/(m*n))
Harmonic mean filter
f = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
where m and n are the size of the mask (for instance, you can set m = 3, n = 3)
Basically what you want to do has two steps (at least) to it:
Estimate the PSF (blur kernel) by using the patch of the image with the squares in it.
Use the estimated kernel to do deconvolution to your blurry image
If you want to "guess" the PSF for step 1, that's fine but it's better to calculate it.
For step 2, you MUST first use edgetaper, which suppresses the ringing artifacts in your image (the ones you call noise).
Then do the non-blind deconvolution (step 2) with the function deconvlucy, following this syntax:
J = deconvlucy(I,PSF)
This deconvolution step adds some noise of its own, especially if your PSF is not 100% accurate, but you can get a smoother result by running fewer iterations (trading away detail; no free lunch).
For the first step, if you don't care about the fact that you have the "sharp" square, you can just use blind deconvolution deconvblind and get some estimate for the PSF.
If you want to do it correctly and use the sharp patch then you can use it as your data term target in any optimization scheme involving the estimation of the PSF.
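A rough sketch of that two-step pipeline, with the PSF coming from blind deconvolution (the sharp-patch approach would replace that first call); the PSF size and iteration counts below are assumptions:
im = im2double(rgb2gray(imread('home_degraded.png')));
% step 1: rough PSF estimate via blind deconvolution (or compute it from the sharp patch instead)
[~, psf_est] = deconvblind(im, ones(9), 20);
% step 2: taper the edges to tame ringing, then run non-blind Richardson-Lucy deconvolution
im_tapered = edgetaper(im, psf_est);
J = deconvlucy(im_tapered, psf_est, 10);
imshow(J)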

Smoothing of histogram with a low-pass filter in MATLAB

I have an image and my aim is to binarize the image. I have filtered the image with a low pass Gaussian filter and have computed the intensity histogram of the image.
I now want to perform smoothing of the histogram so that I can obtain the threshold for binarization. I used a low-pass filter but it did not work. This is the filter I used:
h = fspecial('gaussian', [8 8], 2);
and this is how I compute the histogram:
imhist(Ig);
Can anyone help me with this? What is the process for smoothing a histogram?
Thanks a lot for all your help.
I've been working on a very similar problem recently, trying to compute a threshold in order to exclude noisy background pixels from MRI data prior to performing other computations on the images. What I did was fit a spline to the histogram to smooth it while maintaining an accurate fit of the shape. I used the splinefit package from the File Exchange to perform the fitting.
I computed a histogram for a stack of images treated together, but it should work similarly for an individual image. I also happened to use a logarithmic transformation of my histogram data, but that may or may not be a useful step for your application.
[my_histogram, xvals] = hist(reshape(image_volume, 1, []), number_of_bins);
my_log_hist = log(my_histogram);
my_log_hist(~isfinite(my_log_hist)) = 0; % Get rid of NaN values that arise from empty bins (log of zero = NaN)
figure(1), plot(xvals, my_log_hist, 'b');
hold on
breaks = linspace(0, max_pixel_intensity, numberofbreaks);
xx = linspace(0, max_pixel_intensity, max_pixel_intensity+1);
pp = splinefit(xvals, my_log_hist, breaks, 'r');
plot(xx, ppval(pp, xx), 'r');
Note that the spline is differentiable and you can use ppdiff to get the derivative, which is useful for finding maxima and minima to help pick an appropriate threshold. The numberofbreaks is set to a relatively low number so that the spline will smooth the histogram. I used linspace in the example to pick the breaks, but if you know that some portion of the histogram exhibits much greater curvature than elsewhere, you'd want more breaks in that region and fewer elsewhere in order to accurately capture the shape of the histogram.
To smooth the histogram you need to use a 1-D filter. This is easily done using the filter function. Here is an example:
I = imread('pout.tif');
h = imhist(I);
smooth_h = filter(normpdf(-4:4, 0,1),1,h);
Of course you can use any smoothing kernel you choose; a simple moving average, for example, would be ones(1,8)/8.
Since your goal here is just to find the threshold to binarize an image you could just use the graythresh function which uses Otsu's method.
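A minimal sketch of that shortcut, reusing the example image from the answer above (im2bw is shown; newer releases would use imbinarize):
I = imread('pout.tif');
level = graythresh(I);     % Otsu's threshold, returned in [0, 1]
bw = im2bw(I, level);      % or imbinarize(I, level) in newer MATLAB releases
imshow(bw)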