I am new to image processing with MATLAB. I am trying to re-implement the paper "Single Image Haze Removal Using Dark Channel Prior" (DCP) by He et al.
Although the haze-free images look quite good visually, the PSNR and SSIM values are poor. Other benchmark implementations report PSNR and SSIM of 16.62 and 0.8179, respectively, while mine reaches only 7.07 and 0.002.
I have no idea how to overcome this and am looking for suggestions.
[Update] Solved by using im2uint8() instead of uint8()
image = imread(imagePath);   % hazy input (uint8)
% dehaze
patch_size = 15;             % dark-channel patch size
omega = 0.95;                % fraction of haze to remove (keeps a little for depth perception)
r = 60;                      % guided-filter radius
eps = 0.001;                 % guided-filter regularization
tx = 0.1;                    % lower bound on the transmission
haze_free = dcp_guidedfilter(image, patch_size, omega, r, eps, tx, true);
% measure quality
haze_free = im2uint8(haze_free);
[peaksnr, snr] = psnr(haze_free, image);
[ssimval,ssimmap] = ssim(haze_free, image);
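To illustrate why that one change mattered: if haze_free is returned as a double image in [0,1], uint8() merely rounds every value to 0 or 1, whereas im2uint8() rescales to [0,255] first. A small example:
d = [0 0.25 0.5 1];    % a double image is assumed to lie in [0,1]
uint8(d)               % returns 0  0  1  1    -- nearly all information lost
im2uint8(d)            % returns 0 64 128 255  -- rescaled to the uint8 range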
I am trying to write MATLAB code for denoising a noisy image using Tikhonov regularization. For this purpose, I have written the following code:
refimg = im2double(mat2gray((imread('image1.png'))))*255; % original image
rng(0);
Noisyimg = refimg + randn(size(refimg))*20; % noisy image
%% Accelerated Gradient Method
x = cell(1,Nbiter);                 % Nbiter (number of iterations) must be set beforehand
x{1} = Noisyimg; x{2} = Noisyimg;
y = zeros(size(refimg));
for iter = 3:Nbiter
    y = x{iter-1} + ((iter-2)/(iter+1))*(x{iter-1}-x{iter-2});   % Nesterov extrapolation
    [ux,uy] = grad(y,options);                                   % grad/div come from an external toolbox
    GradFy = mu.*(y-Noisyimg)-K.*div(ux,uy,options);             % gradient of the energy
    xn = y - t*GradFy;
    x{iter} = min(max(xn,0),255);                                % project back onto [0,255]
end
in which K, mu, and t are positive parameters that must be assigned. I have run the code with many different values for these parameters, but I cannot get a denoised image; the output is still very noisy. For this reason, I guess there is something wrong with the code. I would really appreciate help finding the problem. Below I have inserted the image I am trying to denoise, the energy functional I am trying to minimize, and the algorithm I am using. Here, f is the noisy image and u is the image to be reconstructed. Many thanks.
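For reference, here is a minimal self-contained sketch of the same scheme, assuming the energy is mu/2*||u-f||^2 + K/2*||grad u||^2, using MATLAB's built-in gradient and divergence in place of the toolbox grad/div, and with parameter values chosen by me rather than taken from the question; a conservative step size t below 1/(mu + 8*K) keeps the iteration stable.
refimg = im2double(mat2gray(imread('image1.png')))*255;   % original image
rng(0);
Noisyimg = refimg + randn(size(refimg))*20;                % noisy image
Nbiter = 200;                 % number of iterations (assumed)
mu = 1;                       % data-fidelity weight (assumed)
K  = 10;                      % regularization weight (assumed)
t  = 1/(mu + 8*K);            % conservative step size, below 1/Lipschitz
x = cell(1,Nbiter);
x{1} = Noisyimg; x{2} = Noisyimg;
for iter = 3:Nbiter
    y = x{iter-1} + ((iter-2)/(iter+1))*(x{iter-1}-x{iter-2});   % Nesterov extrapolation
    [ux,uy] = gradient(y);                                       % discrete gradient
    GradFy = mu.*(y-Noisyimg) - K.*divergence(ux,uy);            % gradient of the energy
    x{iter} = min(max(y - t*GradFy,0),255);                      % gradient step + projection onto [0,255]
end
figure, imshow(x{Nbiter},[0 255]);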
I was trying to binarize some images. In some images I get the pattern as it is, but in others I lose some of the pattern. I am using graythresh for binarizing. Is there any other method to improve the output?
I = imread('image.jpg');
I = rgb2gray(I);
I = uint8(255*mat2gray(I));        % stretch to the full [0,255] range
figure,imshow(I);
I = imresize(I,[128 128]);
figure,imshow(I);
I = medfilt2(I,[5 5]);             % median filtering (applied twice)
I1 = medfilt2(I,[5 5]);
I = adapthisteq(I1,'clipLimit',0.4,'Distribution','rayleigh');   % local contrast enhancement
figure,imshow(I);
level = graythresh(I);             % single global Otsu threshold
BW = im2bw(I, level);
figure,imshow(BW);
Input and output images attached.
Using a single global threshold for the whole image seems to be a bad fit in your case. You should try adaptive local thresholding instead, so the threshold can follow smooth intensity variations across the image.
You can find a MATLAB example here.
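For example, with the Image Processing Toolbox (R2016a or later), a locally adaptive threshold could look like the sketch below; the sensitivity and neighborhood size are values you would need to tune for your images.
I = rgb2gray(imread('image.jpg'));
I = medfilt2(I,[5 5]);                            % same denoising idea as in the question
T = adaptthresh(I, 0.5, 'NeighborhoodSize', 25);  % local threshold surface
BW = imbinarize(I, T);
figure, imshow(BW);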
I want to add multiplicative Gamma noise to an image using the "randg" function in MATLAB and then remove that noise. Keep in mind that the noise should have mean 1 and level 4, and it should follow the Gamma law (Gamma probability distribution function).
The noisy image is formed multiplicatively:
f = u .* v;
where f is the noisy image, u is the original image, and v is the noise.
The gamma law is:
g(v) = (L^L / Γ(L)) * v^(L-1) * exp(-L*v),   v ≥ 0,
where L is the noise level and v is the noise.
Here is the code that I've tried:
img = imread('lena.png');
img1 = img ./ 255;                                   % note: integer division on a uint8 image
imgdob = double(img1);
noisyimg = imgdob + randg(1,size(imgdob)) .* 0.4;    % additive noise, not the multiplicative model above
noisyimg(noisyimg< 0) = 0;
noisyimg(noisyimg> 1) = 1;
figure,imshow(img);
figure,imshow(noisyimg);
imwrite(img, 'lenaOriginal.jpg', 'Quality', 100);
imwrite(noisyimg, 'lenaNoisy.jpg', 'Quality', 100);
But I could not get the expected result. Please suggest a way.
0.4 is VERY destructive: so destructive that it forces many values to be clipped to 0 or 1. You should try 0.2 instead. Also, if you are looking for normally distributed noise, you should use randn instead of randg. The modified code follows.
Note that I don't have sexylena.png on my computer, so I use bag.png instead.
imgdob = im2double(imread('bag.png'));
noisyimg = imgdob + randg(1,size(imgdob)) .* 0.15;
noisyimg(noisyimg< 0) = 0;
noisyimg(noisyimg> 1) = 1;
figure,imshow(imgdob);
figure,imshow(noisyimg);
imwrite(imgdob, 'lenaOriginal.jpg', 'Quality', 100);
imwrite(noisyimg, 'lenaNoisy.jpg', 'Quality', 100);
These are the results. Normal image.
Noisy image using randg.
If you wish to use randn instead, you can use this line of code instead.
noisyimg = imgdob + randn(size(imgdob)) .* 0.2;
Noisy image using randn.
As for noise reduction, please refer to Matlab's tutorial for noise removal.
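Note that the model stated in the question is multiplicative rather than additive. A minimal sketch of that model, where randg(L)./L is used to draw unit-mean Gamma noise of level L = 4 (this matches the density given in the question):
u = im2double(imread('lena.png'));     % original image in [0,1]
L = 4;                                 % noise level from the question
v = randg(L, size(u)) ./ L;            % Gamma(L,L): mean 1, variance 1/L
f = u .* v;                            % multiplicative noise, f = u*v
f = min(max(f,0),1);                   % clip for display
figure, imshow(f);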
I am looking at some code which performs blurring of images. However, I am having trouble understanding it, and I was wondering if someone could help me understand roughly what the code is doing.
Here the variable "Iref" is an image.
Imin = min(Iref(:));
Iref_fft = Iref-Imin;                % remove the minimum value (offset)
Iref_fft = fftshift(Iref_fft,1);     % pre-shift along each dimension ...
Iref_fft = fftshift(Iref_fft,2);
Iref_fft = fft(Iref_fft,[],1);       % 1-D FFT along each dimension = 2-D FFT
Iref_fft = fft(Iref_fft,[],2);
Iref_fft = fftshift(Iref_fft,1);     % post-shift: move the DC component to the center of the spectrum
Iref_fft = fftshift(Iref_fft,2);
Here I am already confused: what does it mean to apply fftshift to an image which is not yet in the Fourier domain? I can tell the code is doing a Fourier transform along each of the axes, but why does it apply fftshift both before and after?
The code continues:
Nx_r = 32;                                        % size of the frequency-domain window
Ny_r = 32;
sigma = 1.5;
wx = reshape(gausswin(Nx_r,sigma), [1 Nx_r]);     % 1-D Gaussian windows as row/column vectors ...
wy = reshape(gausswin(Ny_r,sigma), [Ny_r 1]);
wx_rep = repmat(wx, [Ny_r 1]);
wy_rep = repmat(wy, [1 Nx_r]);
Window = wx_rep .* wy_rep;                        % ... combined into a separable 2-D window
xIndices = floor((Nx-Nx_r)/2)+1 : floor((Nx-Nx_r)/2)+Nx_r;   % central 32x32 block (Nx, Ny are the image dimensions)
yIndices = floor((Ny-Ny_r)/2)+1 : floor((Ny-Ny_r)/2)+Ny_r;
Iref_blurred = zeros(Ny,Nx);
Iref_blurred(yIndices,xIndices,:) = Iref_fft(yIndices,xIndices) .* Window;   % keep only the windowed low frequencies
Iref_blurred = fftshift( ifft2( fftshift(Iref_blurred) ) );                  % back to the spatial domain
Iref_blurred = abs(Iref_blurred)+Imin;                                       % restore the offset
In the subsequent steps, I think we are doing Gaussian blurring. However, I thought the kernel also had to be generated in the Fourier domain before the two could be multiplied, as in the line:
Iref_blurred(yIndices,xIndices,:) = Iref_fft(yIndices,xIndices) .* Window;
I am not sure whether Window is the Fourier transform of the Gaussian convolution kernel; at least, I cannot tell from the code.
So, I am a bit confused as to how this is achieving the Gaussian blurring. Any help in understanding this code would be appreciated.
You are correct that there is no FFT of the Gaussian going on in this code, but the thing to remember (or learn) is that the Fourier space representation of a Gaussian is also a Gaussian, just with the reciprocal standard deviation. Whoever wrote this code probably knew this, or they just forgot and got lucky.
See the section of the gausswin docs called "Gaussian Window and the Fourier Transform". Here is a condensed version of the gausswin example from the documentation:
N = 64; n = -(N-1)/2:(N-1)/2; alpha = 8;
w = gausswin(N,alpha);
nfft = 4*N; freq = -pi:2*pi/nfft:pi-pi/nfft;
wdft = fftshift(fft(w,nfft));
plot(n,w)
hold on
plot(freq/pi,abs(wdft) / 10,'r')
title('Gaussian Window and FFT')
legend({'win = gausswin(64,8)','0.1 * abs(FFT(win))'})
So, interpreting the output of gausswin directly as the Fourier-space filter, without performing an FFT, still corresponds to a Gaussian window in the spatial domain, just with a much larger sigma.
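For comparison, a more conventional way to get this kind of low-pass effect is to build the Gaussian mask over the full image size and multiply the centered spectrum by it. The sketch below does that, with a test image and sigma of my own choosing rather than anything from the code above.
Iref = im2double(imread('cameraman.tif'));
[Ny, Nx] = size(Iref);
sigma_f = 16;                                    % width of the frequency-domain Gaussian (assumed)
[fx, fy] = meshgrid(-floor(Nx/2):ceil(Nx/2)-1, -floor(Ny/2):ceil(Ny/2)-1);
H = exp(-(fx.^2 + fy.^2)/(2*sigma_f^2));         % centered Gaussian low-pass mask
F = fftshift(fft2(Iref));                        % spectrum with DC in the center
Iblur = real(ifft2(ifftshift(F .* H)));          % filter and transform back
figure, imshow([Iref Iblur]);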
What I want to do is plot a histogram of the quantized DCT coefficients of an image, in order to detect the double quantization (DQ) effect. When I use hist(x) it groups the values into 10 bins, and if I change it to hist(x,20) or hist(x,30) it still does not really show the DQ effect. Is there a better way to do this?
Here is the MATLAB code:
im = jpeg_read('image');
% Pull image information - Lum, Cb, Cr
lum = im.coef_arrays{im.comp_info(1).component_id};
cb = im.coef_arrays{im.comp_info(2).component_id};
cr = im.coef_arrays{im.comp_info(3).component_id};
% Pull quantization arrays
lqtable = im.quant_tables{im.comp_info(1).quant_tbl_no};
cqtable = im.quant_tables{im.comp_info(2).quant_tbl_no};
% Quantize above two sets of information
qcof = quantize(lum,lqtable);
bqcof = quantize(cb,cqtable);
rqcof = quantize(cr,cqtable);
hist(qcof,30); %lum quantized dct coefficient histogram
First, there is no need to quantize the coefficients. Second, the effect can be observed by plotting histograms of individual frequencies: go through the various positions within the 8x8 blocks and look for the pattern. Plotting the FFT of the histogram also helps.
Here is the matlab code:
imJPG2 = jpeg_read('foto2.jpg');
lum = imJPG2.coef_arrays{imJPG2.comp_info(1).component_id};
for i = 1:8
    for j = 1:8
        r = lum( i:8:end, j:8:end );              % all coefficients at block position (i,j)
        histogram(r(:), 'binmethod','integers');
        pause();
    end
end
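If you also want the FFT of the histogram mentioned above, a sketch for a single block position could look like this (the coefficient range of [-50, 50] is an assumption):
r = lum(1:8:end, 1:8:end);                          % coefficients at one block position, e.g. (1,1)
edges = -50.5:50.5;                                 % assumed coefficient range
counts = histcounts(r(:), edges);
figure;
subplot(2,1,1), bar(-50:50, counts), title('Histogram of DCT coefficients');
subplot(2,1,2), plot(abs(fft(counts))), title('|FFT| of the histogram');     % periodic DQ artifacts show up as peaks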
More details and background can be found in this paper: http://www.sciencedirect.com/science/article/pii/S0031320309001198