I'm trying to deblur an image that I blurred with a Gaussian filter, using cepstrum analysis in MATLAB. So far I have tried "cceps", but I get this error:
Error using -
Matrix dimensions must agree.
Error in cceps>rcunwrap (line 149)
y(:) = y(:)' - pi*nd*(0:(n-1))/nh;
Error in cceps (line 80)
[ah,nd] = rcunwrap(angle(h));
My image is a 512x512 double. I have only seen cepstrum analysis applied to 1-D arrays. I'm trying to compute "cceps(y)-cceps(h)" (y: blurred image, h: Gaussian filter) and then apply "icceps" to recover the deblurred (original) image.
Is there any way to use cceps and icceps for images?
I also tried implementing the cepstrum computation myself:
Y=fftshift(fft2(y)); % Compute Fourier transform
CY=fftshift(ifft2(log(abs(Y)))); % Compute blurred image cepstrum
H=fftshift(fft2(h));
CH=fftshift(ifft2(log(abs(H)))); % Compute filter cepstrum
CX=CY-CH;
After this line I don't know how to take the inverse cepstrum. There are also problems with phase unwrapping, and I can't think of a way around them.
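For reference, the code above computes a real cepstrum: log(abs(.)) discards the phase, so the subtraction can only recover the magnitude spectrum of the original image. A minimal sketch of one possible inverse step (not verified), assuming h has been zero-padded to the same size as y (otherwise CY-CH will not match in size) and assuming the phase of the blurred spectrum Y is reused, which is only an approximation valid for a zero-phase blur such as a centered Gaussian:
LX = fft2(ifftshift(CX));            % undo the fftshift/ifft2: estimated log-magnitude spectrum
Xmag = exp(real(LX));                % |X| = exp(log|X|)
X = Xmag .* exp(1i*angle(Y));        % borrow the phase of the blurred spectrum (approximation)
x = real(ifft2(ifftshift(X)));       % back to the spatial domain
imshow(x, []);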
Related
I'm studying a weed optimization algorithm for brain MRI images using MATLAB R2017a. I obtained sample code for the Weed Optimization Algorithm from the MATLAB File Exchange. When I ran the code with a binary image, I got this error message:
Undefined function 'colon' for input arguments of type 'uint8' and attributes 'full 3d real'.
What should I do to solve this problem?
This problem doesn't occur for some images; sometimes I get a matrix dimension error instead. Despite these errors, for some images the sample code works perfectly.
I=imread('brainxx.jpeg');
x=graythresh(I);
w=im2bw(I,x);
for.....
%min pixel value
%max pixel value
%image area[pixels]
.....
%within-class variance (sum of weighted bg and fg variance)
% lowest within-class variance = optimal threshold (t)
%find and set the optimal threshold and generate binary image
[dat,indx]=min(var_tot);
opt_td = im2double(opt_t)...
I'm currently writing a program in MATLAB related to image hashing. Loading the image and performing a simple downsampling was not a problem. Here is my code:
clear all;
close all;
clc;
%%load Image
I = im2single(imread('cameraman.tif'));
%%Perform filtering and downsampling
gaussPyramid = vision.Pyramid('PyramidLevel', 2);
J = step(gaussPyramid, I); %%Preprocessed Image
%%Get 2D Fourier Transform of Input Image
Y = fft2(I); %%fft of input image
The algorithm next assumes that the 2D Fourier transform (Y in my case) is in the form Y(f_x, f_y), where f_x and f_y are the normalized spatial frequencies in the range [0, 1]. I'm not able to transform the output of MATLAB's fft2 function into the form the algorithm requires.
I looked up the paper in question (Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints by Ashwin Swaminathan) and found
We take a Fourier transform on the preprocessed image
to obtain I(fx, fy). The Fourier transform output is converted into polar co-ordinates to arrive at I′(ρ, θ)
By this, they seem to mean that I(fx, fy) is the Fourier transformation of the image. But they're trying to use the Fourier-Mellin transformation, which is different from a simple fft2 on the image. Based on the information found in this set of slides,
If an input image is expressed in terms of coordinates that are natural logarithms of the original coordinates, then the magnitude of its FT is insensitive to any change in scale of the original image (since it is Mellin Transform in the original coordinates)
and in files on the MATLAB Central File Exchange, you have to do additional work to get the Fourier-Mellin transform: in particular, taking the magnitude (which appears to be your missing step), converting the coordinates to log-polar, and taking a second fft2. It's unclear to me why the log of the log-polar coordinates was omitted from the steps listed in the paper. See an implementation for image registration here for an example. Note that this code is old, and the transformImage method, which does the log-polar transform, appears to no longer exist.
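A minimal sketch of those extra steps (magnitude, log-polar resampling, second FFT), assuming J is the preprocessed image from your code; the grid sizes nRho and nTheta are arbitrary choices, not values from the paper:
F = fftshift(abs(fft2(J)));                 % centered magnitude spectrum
[nr, nc] = size(F);
cx = nc/2 + 1;  cy = nr/2 + 1;              % center of the spectrum
nRho = 128;  nTheta = 180;                  % log-polar grid resolution (assumed)
rhoMax = log(min(nr, nc)/2);                % largest radius that stays inside the spectrum
[theta, rho] = meshgrid(linspace(0, 2*pi, nTheta), linspace(0, rhoMax, nRho));
xq = cx + exp(rho).*cos(theta);             % log-polar sample positions in (x, y)
yq = cy + exp(rho).*sin(theta);
Flp = interp2(F, xq, yq, 'linear', 0);      % resample the spectrum on the log-polar grid
Fm = abs(fft2(Flp));                        % second FFT: Fourier-Mellin magnitude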
I am trying to deblur the following image by Wiener deconvolution. I was not given the original PSF, and I came up with an arbitrary value for the noise figure. I tried to optimize the code by playing around with the sigma values but I cannot get it to work.
my code...
img = im2double(imread('C:\Users\adhil\Desktop\matlab pics\test.JPG'));
LEN = 2;
THETA = 5;
PSF = fspecial('gaussian', LEN, THETA);
wnr1 = deconvwnr(img, PSF, 0.0000001);
imshow(wnr1);
title('Restored Image');
subplot(1,2,1);imshow(img);title('Before');
subplot(1,2,2);imshow(wnr1);title('After');
Here is the result...
Please advise.
Wiener deconvolution relies on knowing the point spread function (PSF); if you have no idea what it is, or your approximation is far from the actual PSF, it won't work very well.
You might have better results trying blind deconvolution with MATLAB's deconvblind function, using an array of ones the same size as your Gaussian PSF guess as the initial PSF (the array of ones is suggested by the MATLAB documentation).
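A minimal sketch of that suggestion, assuming img is loaded as in your script; the 5x5 initial PSF size and the iteration count are arbitrary guesses:
img = im2double(imread('test.JPG'));                  % or reuse img from your script
initPSF = ones(5, 5);                                 % flat initial guess, as the documentation suggests
[restored, estPSF] = deconvblind(img, initPSF, 30);   % 30 iterations (arbitrary)
subplot(1,2,1); imshow(img);      title('Before');
subplot(1,2,2); imshow(restored); title('After blind deconvolution');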
I have written the following code to convolve an image first:
% Convolution and deconvolution
% First, convolution
a = imread('duas.jpg');
subplot(231);
imshow(a)
b = rgb2gray(a);
subplot(232);
imshow(b);
% convolving with h, a smoothing (low-pass) kernel
h = [1 2 1; 1 2 1];
c = conv2(double(b),h);  % conv2 requires single/double input
subplot(233);
imshow(c,[]);            % scale for display, since the result exceeds [0,1]
Now I need to deconvolve it. What should I do? I think I should be able to get the original image back from it?
You can use MATLAB's Wiener filter with a noise standard deviation of zero.
Deconvolution is usually done in the frequency domain.
I'll illustrate the steps for direct deconvolution (which coincides with the Wiener filter for zero noise).
I assume deconvolution (as opposed to blind deconvolution), where the applied filter is given; a short sketch follows the steps:
Apply FFT to the filtered image.
Zero-pad the LPF filter in the spatial domain so it has the same size as the image.
Apply FFT to this filter matrix.
Divide the image by the filter, point by point, in the frequency domain.
Where the filter has zero values, set the output to zero.
Apply IFFT to the output image.
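A minimal sketch of those steps, assuming c is the conv2 result from your code (in double precision) and h is the known filter:
H = fft2(h, size(c,1), size(c,2));   % zero-pad the filter to the image size, then FFT
C = fft2(c);                         % FFT of the filtered image
X = C ./ H;                          % point-by-point division in the frequency domain
X(abs(H) < 1e-6) = 0;                % where the filter is (near) zero, set the output to zero
xr = real(ifft2(X));                 % back to the spatial domain
imshow(xr, []);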
Good Luck.
The other day I asked about something similar and finally solved that part, but I am stuck again.
I would like to create a noise filter to remove noise from an image while avoiding edges and boundaries. My input is an image file, and the filter is a smoothing linear FIR.
But I want the result written to the output mixed with the original content, following this equation:
result(x,y) = original(x,y)*mask(x,y) + filter_output(x,y)*(1-mask(x,y))
where original(x,y) is the input, i.e. the image with noise (for example, Gaussian noise),
mask(x,y) is a matrix of coefficients based on the edges of the image (already done),
and filter_output(x,y) is the image after the linear FIR filter.
My problem is: I have tried many filters and types of noise (Gaussian, salt & pepper...), and I don't get a good result. The result(x,y) I get is the same as the image with noise, without any change. Very strange.
Which filter would be correct? I don't know whether my error is in the filter or in the code, but something is implemented wrong. Here is the code:
filter = ones(5,5) / 25;
a2 = imfilter(a,filter); % a is the image with noise, a2 is the filtered image (output)
%The equation. G is the mask.
result=uint8(a).*uint8(G) + uint8(a2).*uint8(1-G);
imshow(result);
PS: Original image without noise
Any idea? Thank you so much!
a2 is smooth after applying the averaging filter to a. I'm trying to understand what you expect to see in the result image. Your G, obtained from the Sobel operator, is also a uint8 image ranging from 0 to 255. So I guess your
result=uint8(a).*uint8(G) + uint8(a2).*uint8(1-G);
should be result = a.*uint8(G1) + a2.*uint8(1-G1); where G1 = im2bw(G,thresh) with your preset thresh value.
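For example, a minimal sketch of that fix, assuming a is your noisy image, G the Sobel edge response, and thresh your preset threshold in [0, 1]:
a2 = imfilter(a, ones(5,5)/25);           % smoothed image, as in your code
G1 = im2bw(G, thresh);                    % binary edge mask: 1 on edges, 0 elsewhere
result = a.*uint8(G1) + a2.*uint8(1-G1);  % keep original pixels on edges, smoothed pixels elsewhere
imshow(result);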
EDIT
Response to your suggestion: how about using
result=a2+(255-G);