Applying MATLAB's idwt2 several times - matlab

I am using MATLAB to apply the Discrete Wavelet Transform to an image. I am applying it several times (3) in order to get a 3-level transform. I am using MATLAB's dwt2 function to compress and idwt2 to decompress. The problem is that I do not know how to decompress several times, that is, how to apply idwt2 repeatedly to the previously returned output, since it returns a matrix. Take for example:
x = idwt2(scaled3, vertical3, horizontal3, diagonal3, Lo_R, Ho_R);
How should idwt2 be applied to x?

Looking at the documentation for dwt2 and idwt2, it appears that you have 2 general options for reconstructing your multiply-decomposed images:
1. Store all of the horizontal, vertical, and diagonal detail coefficient matrices from each decomposition step and use them in the reconstruction.
2. Enter an empty matrix ([]) for any detail coefficient matrices that you didn't save from previous decomposition steps.
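In other words, the x returned by your call is the level-2 approximation, so you feed it straight back into idwt2 together with the level-2 detail matrices (or empty matrices if you didn't keep them). A minimal sketch using the variable names from your question; the level-2 and level-1 names (vertical2, horizontal2, etc.) are assumed, not taken from your code:
x = idwt2(scaled3, vertical3, horizontal3, diagonal3, Lo_R, Ho_R); % level 3 -> level 2
x = idwt2(x, vertical2, horizontal2, diagonal2, Lo_R, Ho_R);       % level 2 -> level 1
x = idwt2(x, [], [], [], Lo_R, Ho_R);                              % level 1 -> image (details not kept)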
Since it was a slow day, here's some code showing how to do this and what the results look like for each case...
First, load a sample image and initialize some variables:
load woman; % Load image data
nLevel = 3; % Number of decompositions
nColors = size(map, 1); % Number of colors in colormap
cA = cell(1, nLevel); % Approximation coefficients
cH = cell(1, nLevel); % Horizontal detail coefficients
cV = cell(1, nLevel); % Vertical detail coefficients
cD = cell(1, nLevel); % Diagonal detail coefficients
Now, apply the decompositions (in this case 3) and store the detail coefficient matrices from each step in a cell array:
startImage = X;
for iLevel = 1:nLevel
    [cA{iLevel}, cH{iLevel}, cV{iLevel}, cD{iLevel}] = dwt2(startImage, 'db1');
    startImage = cA{iLevel};
end
To see what the final decomposed image looks like, along with all the detail coefficient matrices along the way, run the following code (which makes use of wcodemat):
tiledImage = wcodemat(cA{nLevel}, nColors);
for iLevel = nLevel:-1:1
    tiledImage = [tiledImage wcodemat(cH{iLevel}, nColors); ...
                  wcodemat(cV{iLevel}, nColors) wcodemat(cD{iLevel}, nColors)];
end
figure;
imshow(tiledImage, map);
You should see something like this:
Now it's time to reconstruct! The following code performs a "full" reconstruction (using all of the stored detail coefficient matrices) and a "partial" reconstruction (using none of them), then it plots the images:
fullRecon = cA{nLevel};
for iLevel = nLevel:-1:1
    fullRecon = idwt2(fullRecon, cH{iLevel}, cV{iLevel}, cD{iLevel}, 'db1');
end
partialRecon = cA{nLevel};
for iLevel = nLevel:-1:1
    partialRecon = idwt2(partialRecon, [], [], [], 'db1');
end
figure;
imshow([X fullRecon; partialRecon zeros(size(X))], map, ...
'InitialMagnification', 50);
Notice that the original (top left) and the "full" reconstruction (top right) look indistinguishable, but the "partial" reconstruction (lower left) is very pixelated. The difference wouldn't be as severe if you applied fewer decomposition steps, like just 1 or 2.
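If you want to quantify that, here is a quick sketch of the reconstruction error as a function of how many levels' worth of details are discarded (it reuses X, cA, and nLevel from the code above):
% Sketch: RMS error of reconstructions that discard the details of the first nDrop levels
for nDrop = 1:nLevel
    recon = cA{nDrop};                            % approximation at level nDrop
    for iLevel = nDrop:-1:1
        recon = idwt2(recon, [], [], [], 'db1');  % reconstruct without any details
    end
    fprintf('Details discarded for %d level(s): RMS error = %.2f\n', ...
            nDrop, sqrt(mean((recon(:) - X(:)).^2)));
end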

% Multi-level reconstruction from DWT coefficients
% The variable "coefs" is the long row vector of coefficients produced by a
% multi-level forward decomposition (e.g. wavedec2) of the image you're
% reconstructing. It holds cA - approximation details, cH - horizontal details,
% cV - vertical details, cD - diagonal details
L = 3;                    % e.g. a 3-level 'db1' reconstruction
k = size(image,1)/2^L;    % assuming a square image where both dimensions are equal
for level = 0:(L-1)
    s = k*2^level;
    if level == 0
        cA = reshape(coefs(1,1:s^2),s,s);
        figure; imshow(cA,[])
    end
    cH = reshape(coefs(1,(s^2+1):2*s^2),s,s);
    figure; imshow(cH,[])
    cV = reshape(coefs(1,(2*s^2+1):3*s^2),s,s);
    figure; imshow(cV,[])
    cD = reshape(coefs(1,(3*s^2+1):4*s^2),s,s);
    figure; imshow(cD,[])
    I_rec = idwt2(cA,cH,cV,cD,'db1');
    figure; imshow(I_rec,[])
    cA = I_rec; % the reconstructed image is the approximation detail cA
                % for the next level of reconstruction
end
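As a side note, the Wavelet Toolbox already does this bookkeeping for you: wavedec2 produces essentially this kind of long coefficient vector (plus a bookkeeping size matrix), and appcoef2, detcoef2, and waverec2 unpack and invert it. A sketch of the same multi-level decomposition and reconstruction using those helpers (not a drop-in replacement for the loop above):
load woman;                                   % sample image X
L = 3;
[coefs, sizes] = wavedec2(X, L, 'db1');       % multi-level decomposition: coefficient vector + size matrix
cA_L = appcoef2(coefs, sizes, 'db1', L);      % level-L approximation coefficients
[cH_L, cV_L, cD_L] = detcoef2('all', coefs, sizes, L);  % level-L detail coefficients
X_rec = waverec2(coefs, sizes, 'db1');        % full multi-level reconstruction in one call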

Related

Implementing a filtered backprojection algorithm using the central slice theorem in Matlab

I'm working on a filtered back projection algorithm using the central slice theorem for a homework assignment and while I understand the theory on paper, I've run into an issue implementing it in Matlab. I was provided with a skeleton to follow to do it but there is a step that I think I'm maybe misunderstanding. Here is what I have:
function img = sampleFBP(sino,angs)
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
% Each contribution to the image's 2DFT will also begin as all zero.
imContrib = zeros(diagDim);
% Get the current row of the sinogram - use rowIndex.
curRow = sino(rowIndex,:);
% Take the 1D Fourier transform the current row - be careful, as it's
% necessary to perform ifftshift and fftshift as Matlab tends to
% place zero-frequency components of a spectrum at the edges.
fourierCurRow = fftshift(fft(ifftshift(curRow)));
% Place the Fourier-transformed sino row and place it at the center of
% the next image contribution. Add the ramp filter in Fourier domain.
imContrib(floor(diagDim/2), :) = fourierCurRow;
imContrib = imContrib * fft(rampFilter_1d);
% Rotate the current image contribution to be at the correct angle on
% the 2D Fourier-space image.
imContrib = imrotate(imContrib, nn, 'crop');
% Add the current image contribution to the running representation of
% the image in Fourier space!
fimg = fimg + imContrib;
rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
The sinogram I'm inputting is just the output of the radon function on a Shepp-Logan phantom from 0 to 179 degrees. Running the code as it is now gives me a black image. I think I'm missing something in the loop where I add the FTs of rows to the image. From my understanding of the central slice theorem, what I think should be happening is this:
1. Initialize an array the same size as what the 2DFT will be (i.e., diagDim x diagDim). This is the Fourier space.
2. Take a row of the sinogram, which corresponds to the line integral information from a single angle, and apply a 1D FT to it.
3. According to the central slice theorem, the FT of this line integral is a line through the Fourier domain that passes through the origin at an angle corresponding to the angle at which the projection was taken. So to emulate that, I take the FT of that line integral and place it in the center row of the diagDim x diagDim matrix I created.
4. Next I take the FT of the 1D ramp filter I created and multiply it with the FT of the line integral. Multiplication in the Fourier domain is equivalent to a convolution in the spatial domain, so this convolves the line integral with the filter.
5. Now I rotate the entire matrix by the angle the projection was taken at. This should give me a diagDim x diagDim matrix with a single line of information passing through the center at an angle. Matlab increases the size of the matrix when it is rotated, but since the sinogram was padded at the beginning, no information is lost and the matrices can still be added.
6. If all of these mostly empty matrices with a single line through the center are added together, it should give me the complete 2D FT of the image. All that needs to be done is to take the inverse 2D FT, and the original image should be the result.
If the problem I'm running into is something conceptual, I'd be grateful if someone could point out where I messed up. If instead this is a Matlab thing (I'm still kind of new to Matlab), I'd appreciate learning what it is I missed.
The code that you have posted is a pretty good example of filtered backprojection (FBP), and I believe it could be useful to people who want to learn the basics of FBP. One can use the function iradon(...) in MATLAB to perform FBP using a variety of filters. In your case, of course, the point is to learn the basis of the central slice theorem, so finding a shortcut is not the point. I have also learned a lot and refreshed my knowledge by answering your question!
Your code is well commented and describes the steps that need to be taken, but there are a couple of subtle programming issues that need to be fixed so that it works correctly.
First, your image representation in the Fourier domain may end up missing a row due to floor(diagDim/2), depending on the size of the sinogram. I would change this to round(diagDim/2) to have a complete dataset in fimg. Be aware that floor may lead to an error for certain sinogram sizes if not handled correctly. I would encourage you to visualize fimg to understand what that missing row is and why it matters.
The second issue is that your sinogram needs to be transposed to be consistent with your algorithm, hence the addition of sino = sino';. Again, I do encourage you to try the code without this to see what happens! Note that the zero padding must happen along each view (the radial direction) to avoid aliasing artifacts. I will demonstrate an example of this later in this answer.
Third, and most importantly, imContrib is a temporary holder for a single row that is added into fimg. Therefore, it must keep the same size as fimg, so
imContrib = imContrib * fft(rampFilter_1d);
should be replaced with
imContrib(floor(diagDim/2), :) = imContrib(floor(diagDim/2), :)' .* rampFilter_1d;
Note that the ramp filter is already defined in the frequency domain, where its response is linear in frequency (thanks to @Cris Luengo for correcting this error). Therefore, you should drop the fft in fft(rampFilter_1d), as this filter is applied directly in the frequency domain (remember that fft(x) transforms x from its original domain, such as time or space, to its frequency content).
Now a complete example to show how it works using the modified Shepp-Logan phantom:
angs = 0:359; % angles of rotation 0, 1, 2... 359
init_img = phantom('Modified Shepp-Logan', 100); % Initial image 2D [100 x 100]
sino = radon(init_img, angs); % Create a sinogram using radon transform
% Here is your function ....
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% Transpose the sinogram to be compatible with your code's definition of view and radial position
% dim 1 -> view
% dim 2 -> radial position
sino = sino';
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
    % fprintf('rowIndex = %g => nn = %g\n', rowIndex, nn);
    % Each contribution to the image's 2DFT will also begin as all zeros.
    imContrib = zeros(diagDim);
    % Get the current row of the sinogram - use rowIndex.
    curRow = sino(rowIndex,:);
    % Take the 1D Fourier transform of the current row - be careful, as it's
    % necessary to perform ifftshift and fftshift as Matlab tends to
    % place zero-frequency components of a spectrum at the edges.
    fourierCurRow = fftshift(fft(ifftshift(curRow)));
    % Take the Fourier-transformed sino row and place it at the center of
    % the next image contribution. Apply the ramp filter in the Fourier domain.
    imContrib(round(diagDim/2), :) = fourierCurRow;
    imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :)' .* rampFilter_1d; % <-- NOT fft(rampFilter_1d)
    % Rotate the current image contribution to be at the correct angle on
    % the 2D Fourier-space image.
    imContrib = imrotate(imContrib, nn, 'crop');
    % Add the current image contribution to the running representation of
    % the image in Fourier space!
    fimg = fimg + imContrib;
    rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
Note that the reconstructed image has complex values, so I use imshow(abs(rcon),[]) to show it. A couple of helpful images (food for thought) with the final reconstructed image rcon:
And here is the same image if you comment out the zero padding step (i.e. comment out sino = padarray(sino, floor(size(sino,1)/2), 'both');):
Note the different object size in the reconstructed images with and without zero padding: the object appears smaller in the zero-padded case because the padding enlarges the reconstructed field of view while the object itself stays the same size.
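As a quick sanity check of rcon, you can also compare against MATLAB's built-in FBP. A sketch reusing init_img and angs from above (the two results may differ in size and scaling, for the field-of-view reason just mentioned):
rcon_ref = iradon(radon(init_img, angs), angs, 'linear', 'Ram-Lak'); % reference FBP reconstruction
figure;
subplot(1,2,1); imshow(abs(rcon), []);  title('Central slice theorem');
subplot(1,2,2); imshow(rcon_ref, []);   title('iradon (Ram-Lak)');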

Bandpass Filter for 4D image in Matlab

I have implemented in Matlab a bandpass filter for a 4D image (4D matrix). The first three dimensions are spatial dimensions, the last dimension is a temporal one. Here is the code:
function bandpass_img = bandpass_filter(img)
% Does bandpass filtering on input image
%
% Input:
% img: 4D image
%
% Output:
% bandpass_img: Bandpass filtered image
TR = 1; % Repetition time
n_vols = size(img,3);
X = [];
% Create matrix (voxels x time points)
for z = 1:size(img,3)
for y = 1:size(img,2)
for x = 1:size(img,1)
X = [X; squeeze(img(x,y,z,:))']; %#ok<AGROW>
end
end
end
Fs = 1/TR;
nyquist = 0.5*Fs;
% Pass bands
F = [0.01/nyquist, 0.1/nyquist];
type = 'bandpass';
% Filter order
n = floor(n_vols/3.5);
% Ensure the filter order is even
if (mod(n,2) ~= 0), n=n+1; end
fltr = fir1(n, F, type);
% Looking at frequency response
% freqz(fltr)
% Store plot to file
% set(gcf, 'Color', 'w');
% export_fig('freq_response', '-png', '-r100');
% Apply to image
X = filter(fltr, 1, X);
% Reconstructing image
i = 1;
bandpass_img = zeros(size(img));
for z = 1:size(img,3)
for y = 1:size(img,2)
for x = 1:size(img,1)
bandpass_img(x,y,z,:) = X(i,:)';
i = i + 1;
end
end
end
end
I'm not sure if the implementation is correct. Could somebody verify it or spot a mistake?
Edit: Thanks to SleuthEye it now kind of works when I'm using bandpass_img = filter(fltr, 1, img, [], 4);. But there is still a minor problem. My images are of size 80x35x12x350, i.e. there are 350 time points. I have plotted the average time series before and after applying the bandpass filter.
Before bandpass filtering:
After bandpass filtering:
Why is this peak at the very beginning of the filtered image?
Edit 2: There is now a peak at the beginning and at the end. See:
I have made a second plot where I marked each point with a *. See:
So the first and last two time points seem to be lower.
It seems that I have to remove 2 time points at the beginning and also 2 time points at the end, so in total I will lose 4 time points.
What do you think?
Filtering a concatenation of all the elements using the 1-D filter function as you are doing is likely going to result in a distorted image as the smoothing makes the end of each row blend into the beginning of the next one. Unless you are trying to obtain a psychedelic rendition of your 4D data, this is unlikely to do what you are expecting it to.
Now, according to Matlab's filter documentation:
y = filter(b,a,x,zi,dim) acts along dimension dim. For example, if x is a matrix, then filter(b,a,x,zi,2) returns the filtered data for each row.
So, to smooth out the 3D images over time (which you indicated is the fourth dimension of your matrix img), you should use
bandpass_img = filter(fltr, 1, img, [], 4);
Used as such, the output starts at 0 because that's the default initial state of the filter, and the filter takes a few samples to ramp up. If you know the value of the initial state you can specify it with the zi argument (the 4th argument):
bandpass_img = filter(fltr, 1, img, zi, 4);
Otherwise, a typical order-N linear-phase FIR filter has a delay of N/2 samples. To get rid of the initial ramp-up you can thus just discard those first N/2 output samples:
bandpass_img = bandpass_img(:,:,:,(N/2+1):end);
Similarly, if you want to see the output corresponding to the last N/2 input values, you'd have to pad your input with N/2 extra samples (zeros will do):
img = padarray(img, [0 0 0 N/2], 0, 'post');
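Putting those pieces together, a minimal sketch of the whole time-dimension filtering, assuming img is your 80x35x12x350 volume and fltr is the FIR filter returned by fir1 with an even order N:
N = numel(fltr) - 1;                                % filter order (even here, so N/2 is an integer)
img_pad = padarray(img, [0 0 0 N/2], 0, 'post');    % pad in time so the last samples get filtered too
filtered = filter(fltr, 1, img_pad, [], 4);         % filter along the 4th (time) dimension
bandpass_img = filtered(:,:,:,(N/2+1):end);         % drop the N/2-sample group delay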

Symbolic gradient differing wildly from analytic gradient

I am trying to simulate a network of mobile robots that uses artificial potential fields for movement planning to a shared destination xd. This is done by generating a series of m-files (one for each robot) from a symbolic expression, as this seems to be the best way in terms of computational time and accuracy. However, I can't figure out what is going wrong with my gradient computation: the analytical gradient that is being computed seems to be faulty, while the numerical gradient is calculated correctly (see the image posted below). I have written a MWE listed below, which also exhibits this problem. I have checked the file generating part of the code, and it does return a correct function file with a correct gradient. But I can't figure out why the analytic and numerical gradient are so different.
% create symbolic variables
xd = sym('xd',[1 2]);
x = sym('x',[2 2]);
% create a potential function and a gradient function for both (x,y) pairs
% in x
for i=1:size(x,1)
phi = norm(x(i,:)-xd)/norm(x(1,:)-x(2,:)); % potential field function
xvector = reshape(x.',1,size(x,1)*size(x,2)); % reshape x to allow for gradient computation
grad = gradient(phi,xvector(2*i-1:2*i)); % compute the gradient
gradx = grad(1);grady=grad(2); % split the gradient in two components
% create function file names
gradfun = strcat('GradTester',int2str(i),'.m');
phifun = strcat('PotTester',int2str(i),'.m');
% generate two output files
matlabFunction(gradx, grady,'file',gradfun,'outputs',{'gradx','grady'},'vars',{xvector, xd});
matlabFunction(phi,'file',phifun,'vars',{xvector, xd});
end
clear all % make sure the workspace is empty: the functions are in the files
pause(0.1) % ensure the function file has been generated before it is called
% these are later overwritten by a specific case, but they can be used for
% debugging
x = 0.5*rand(2);
xd = 0.5*rand(1,2);
% values for the Stackoverflow case
x = [0.0533 0.0023;
0.4809 0.3875];
xd = [0.4087 0.4343];
xp = x; % dummy variable to keep x intact
% compute potential field and gradient for both (x,y) pairs
for i=1:size(x,1)
% create a grid centered on the selected (x,y) pair
xGrid = (x(i,1)-0.1):0.005:(x(i,1)+0.1);
yGrid = (x(i,2)-0.1):0.005:(x(i,2)+0.1);
% preallocate the gradient and potential matrices
gradx = zeros(length(xGrid),length(yGrid));
grady = zeros(length(xGrid),length(yGrid));
phi = zeros(length(xGrid),length(yGrid));
% generate appropriate function handles
fun = str2func(strcat('GradTester',int2str(i)));
fun2 = str2func(strcat('PotTester',int2str(i)));
% compute analytic gradient and potential for each position in the xGrid and
% yGrid vectors
for ii = 1:length(yGrid)
for jj = 1:length(xGrid)
xp(i,:) = [xGrid(ii) yGrid(jj)]; % select the position
Xvec = reshape(xp.',1,size(x,1)*size(x,2)); % turn the input into a vector
[gradx(ii,jj),grady(ii,jj)] = fun(Xvec,xd); % compute gradients
phi(jj,ii) = fun2(Xvec,xd); % compute potential value
end
end
[FX,FY] = gradient(phi); % compute the NUMERICAL gradient for comparison
%scale the numerical gradient
FX = FX/0.005;
FY = FY/0.005;
% plot analytic result
subplot(2,2,2*i-1)
hold all
xlim([xGrid(1) xGrid(end)]);
ylim([yGrid(1) yGrid(end)]);
quiver(xGrid,yGrid,-gradx,-grady)
contour(xGrid,yGrid,phi)
title(strcat('Analytic result for position ',int2str(i)));
xlabel('x');
ylabel('y');
subplot(2,2,2*i)
hold all
xlim([xGrid(1) xGrid(end)]);
ylim([yGrid(1) yGrid(end)]);
quiver(xGrid,yGrid,-FX,-FY)
contour(xGrid,yGrid,phi)
title(strcat('Numerical result for position ',int2str(i)));
xlabel('x');
ylabel('y');
end
The potential field I am trying to generate is defined by an (x,y) position, in my code called xd. x is the position matrix of dimension N x 2, where the first column represents x1, x2, and so on, and the second column represents y1, y2, and so on. Xvec is simply a reshaping of this vector to x1,y1,x2,y2,x3,y3 and so on, as the matlabfunction I am generating only accepts vector inputs.
The gradient for robot i is being calculated by taking the derivative w.r.t. x_i and y_i, these two components together yield a single derivative 'vector' shown in the quiver plots. The derivative should look like this, and I checked that the symbolic expression for [gradx,grady] indeed looks like that before an m-file is generated.
To fix the particular problem given in the question: you were actually calculating phi in such a way that gradient(phi) did not give results comparable to the symbolic gradient. I'll try and explain. Here is how you created xGrid and yGrid:
% create a grid centered on the selected (x,y) pair
xGrid = (x(i,1)-0.1):0.005:(x(i,1)+0.1);
yGrid = (x(i,2)-0.1):0.005:(x(i,2)+0.1);
But then in the for loop, ii and jj were used inconsistently, as phi(jj,ii) but gradx(ii,jj), even though they correspond to the same physical position. This is why your results were different. Another problem is that you used gradient incorrectly: Matlab assumes that [FX,FY] = gradient(phi) means phi was calculated as phi = f(x,y) where x and y are matrices created using meshgrid. You effectively had the elements of phi arranged differently from that, and so gradient(phi) gave the wrong answer. Between reversing jj and ii and the incorrect use of gradient, the errors cancelled out to some extent (I suspect you tried phi(jj,ii) after trying phi(ii,jj) first and finding it didn't work).
Anyway, to sort it all out, on the line after you create xGrid and yGrid, put this in:
[X,Y]=meshgrid(xGrid,yGrid);
Then change the code after you load fun and fun2 to:
for ii = 1:length(xGrid) %// x loop
    for jj = 1:length(yGrid) %// y loop
        xp(i,:) = [X(ii,jj);Y(ii,jj)]; %// using X and Y not xGrid and yGrid
        Xvec = reshape(xp.',1,size(x,1)*size(x,2));
        [gradx(ii,jj),grady(ii,jj)] = fun(Xvec,xd);
        phi(ii,jj) = fun2(Xvec,xd);
    end
end
[FX,FY] = gradient(phi,0.005); %// use the second argument of gradient to set spacing
subplot(2,2,2*i-1)
hold all
axis([min(X(:)) max(X(:)) min(Y(:)) max(Y(:))]) %// use axis rather than xlim/ylim
quiver(X,Y,gradx,grady)
contour(X,Y,phi)
title(strcat('Analytic result for position ',int2str(i)));
xlabel('x');
ylabel('y');
subplot(2,2,2*i)
hold all
axis([min(X(:)) max(X(:)) min(Y(:)) max(Y(:))])
quiver(X,Y,FX,FY)
contour(X,Y,phi)
title(strcat('Numerical result for position ',int2str(i)));
xlabel('x');
ylabel('y');
I have some other comments about your code. I think your potential function is ill-defined, which is causing all sorts of problems. You say in the question that x is an N x 2 matrix, but your potential function is defined as
norm(x(i,:)-xd)/norm(x(1,:)-x(2,:));
which means if N was three, you'd have the following three potentials:
norm(x(1,:)-xd)/norm(x(1,:)-x(2,:));
norm(x(2,:)-xd)/norm(x(1,:)-x(2,:));
norm(x(3,:)-xd)/norm(x(1,:)-x(2,:));
and I don't think the third one makes sense. I think this could be causing some confusion with the gradients.
Also, I'm not sure if there is a reason to create the .m file functions in your real code, but they are not necessary for the code you posted.
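For instance, matlabFunction can return function handles directly instead of writing m-files. A sketch (I have not checked whether this is fast enough for your full simulation):
gradHandle = matlabFunction(gradx, grady, 'vars', {xvector, xd}); % call as [gx, gy] = gradHandle(Xvec, xd)
phiHandle  = matlabFunction(phi, 'vars', {xvector, xd});          % call as p = phiHandle(Xvec, xd)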

Matlab inverse FFT from phase/magnitude only

So I have this image 'I'. I take F = fft2(I) to get the 2D Fourier transform. To reconstruct it, I could go ifft2(F).
The problem is, I need to reconstruct this image from only a) the magnitude and b) the phase components of F. How can I separate these two components of the Fourier transform, and then reconstruct the image from each?
I tried the abs() and angle() functions to get magnitude and phase, but the phase one won't reconstruct properly.
Help?
You need one matrix with the same magnitude as F and 0 phase, and another with the same phase as F and uniform magnitude. As you noted abs gives you the magnitude. To get the uniform magnitude same phase matrix, you need to use angle to get the phase, and then separate the phase back into real and imaginary parts.
F_Mag = abs(F); %# has same magnitude as F, 0 phase
F_Phase = cos(angle(F)) + j*sin(angle(F)); %# has magnitude 1, same phase as F
I_Mag = ifft2(F_Mag);
I_Phase = ifft2(F_Phase);
It's a bit late to add another answer to this post, but anyway...
@zhilevan, you can use the code I have written, based on mtrw's answer:
image = rgb2gray(imread('pillsetc.png'));
subplot(131),imshow(image),title('original image');
set(gcf, 'Position', get(0, 'ScreenSize')); % maximize the figure window
%:::::::::::::::::::::
F = fft2(double(image));
F_Mag = abs(F); % has the same magnitude as image, 0 phase
F_Phase = exp(1i*angle(F)); % has magnitude 1, same phase as image
% OR: F_Phase = cos(angle(F)) + 1i*(sin(angle(F)));
%:::::::::::::::::::::
% reconstruction
I_Mag = log(abs(ifft2(F_Mag*exp(i*0)))+1);
I_Phase = ifft2(F_Phase);
%:::::::::::::::::::::
% Calculate limits for plotting
% To display the images properly using imshow, the color range
% of the plot must span the minimum and maximum values in the data.
I_Mag_min = min(min(abs(I_Mag)));
I_Mag_max = max(max(abs(I_Mag)));
I_Phase_min = min(min(abs(I_Phase)));
I_Phase_max = max(max(abs(I_Phase)));
%:::::::::::::::::::::
% Display reconstructed images
% Because the magnitude and phase were handled separately, the reconstructions
% are complex. This means that the magnitude of each result must be taken in
% order to produce a viewable 2-D image.
subplot(132),imshow(abs(I_Mag),[I_Mag_min I_Mag_max]), colormap gray
title('reconstructed image only by Magnitude');
subplot(133),imshow(abs(I_Phase),[I_Phase_min I_Phase_max]), colormap gray
title('reconstructed image only by Phase');

Wavelet Transform for N dimensions

I came across this amazing response, Applying MATLAB's idwt2 several times, which I executed to understand it myself. However, I cannot work out how to use the same approach with an RGB image. So, I have 3 questions.
1. How would the code be applied to an RGB image, with the transformed image displayed in the output along with the high- and low-frequency components along rows and columns? Is it possible to view the fusion of all the components as a single image? I am aware that I have to use the cat operator, but I can't understand how to go about it.
2. Secondly, I am also getting a mazed (scrambled) image! I am perplexed since I cannot follow the reason. I have attached the code, with a comment marking the statement that generates this image.
3. What does the term 'db1' in the function signature of dwt2 imply?
CODE:
load woman; % Load image data
%startImage=imread('pic_rgb.jpg'); % IF I WANT TO WORK WITH RGB IMAGE
nLevel = 3; % Number of decompositions
nColors = size(map,1); % Number of colors in colormap
cA = cell(1,nLevel); % Approximation coefficients
cH = cell(1,nLevel); % Horizontal detail coefficients
cV = cell(1,nLevel); % Vertical detail coefficients
cD = cell(1,nLevel); % Diagonal detail coefficients
startImage = X;
for iLevel = 1:nLevel,
[cA{iLevel},cH{iLevel},cV{iLevel},cD{iLevel}] = dwt2(startImage,'db1');
startImage = cA{iLevel};
end
figure;colormap(map);
imagesc(dwt2(startImage,'db1')); %THIS GIVES THE MAZED IMAGE INSTEAD OF THE TRANSFORMED IMAGE
figure;
tiledImage = wcodemat(cA{nLevel},nColors);
for iLevel = nLevel:-1:1,
tiledImage = [tiledImage wcodemat(cH{iLevel},nColors); ...
wcodemat(cV{iLevel},nColors) wcodemat(cD{iLevel},nColors)];
end
figure;
imshow(tiledImage,map);
%reconstruct
fullRecon = cA{nLevel};
for iLevel = nLevel:-1:1,
fullRecon = idwt2(fullRecon,cH{iLevel},cV{iLevel},cD{iLevel},'db1');
end
partialRecon = cA{nLevel};
for iLevel = nLevel:-1:1,
partialRecon = idwt2(partialRecon,[],[],[],'db1');
end
figure;
imshow([X fullRecon; partialRecon zeros(size(X))],map,...
'InitialMagnification',50);
The sample image used in my answer to that other question was an indexed image, so there are a few changes that need to be made to get that code working for an RGB image.
I'll first address your question about the 'db1' argument passed to DWT2. This specifies the type of wavelet to use for the decomposition (in this case, a Daubechies wavelet). More information about available wavelets can be found in the documentation for the functions WFILTERS and WAVEINFO.
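If you want to look at the filters behind that name, wfilters and waveinfo are the relevant functions. A small sketch ('db1' is the Haar wavelet):
[Lo_D, Hi_D, Lo_R, Hi_R] = wfilters('db1'); % decomposition and reconstruction filter pairs
waveinfo('db');                             % background information on the Daubechies family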
I'll address your first two questions by showing you how to modify the code from my other answer to work for an RGB image. I'll use the sample 'peppers.png' image. You'll first want to load your image and define the number of values each color component has. Since the sample image is an unsigned 8-bit integer type (the most common situation), nColors will be 256:
X = imread('peppers.png'); %# Load sample image
nColors = 256; %# Number of values per color component
If your images are larger unsigned integer types (e.g. 'uint16'), a general way to find the number of color values is to use the function INTMAX like so:
nColors = double(intmax(class(X)))+1;
For the ensuing code, an image type of 'uint8' is assumed.
Applying the decompositions is no different than in the indexed image case. The coefficient matrices will simply be M-by-N-by-3 matrices instead of M-by-N matrices:
nLevel = 3; %# Number of decompositions
cA = cell(1,nLevel); %# Approximation coefficient storage
cH = cell(1,nLevel); %# Horizontal detail coefficient storage
cV = cell(1,nLevel); %# Vertical detail coefficient storage
cD = cell(1,nLevel); %# Diagonal detail coefficient storage
startImage = X;
for iLevel = 1:nLevel %# Apply nLevel decompositions
    [cA{iLevel},cH{iLevel},cV{iLevel},cD{iLevel}] = dwt2(startImage,'db1');
    startImage = cA{iLevel};
end
The code to create the tiled image to show the horizontal, vertical, and diagonal components for each decomposition will change due to the fact that we are now working with 3-D matrices and must use the CAT function instead of the concatenation operator []:
tiledImage = wcodemat(cA{nLevel},nColors);
for iLevel = nLevel:-1:1
    tiledImage = cat(1,cat(2,tiledImage,...
                           wcodemat(cH{iLevel},nColors)),...
                       cat(2,wcodemat(cV{iLevel},nColors),...
                             wcodemat(cD{iLevel},nColors)));
end
figure;
imshow(uint8(tiledImage-1)); %# Convert to unsigned 8-bit integer to display
This will give the following image showing the horizontal (top right), vertical (bottom left), and diagonal (bottom right) components for each decomposition step, along with the reduced image (top left):
The reconstruction steps are unchanged from the other answer. Only the code for displaying the final images needs to be modified:
fullRecon = cA{nLevel};
for iLevel = nLevel:-1:1
    fullRecon = idwt2(fullRecon,cH{iLevel},cV{iLevel},cD{iLevel},'db1');
end
partialRecon = cA{nLevel};
for iLevel = nLevel:-1:1
    partialRecon = idwt2(partialRecon,[],[],[],'db1');
end
figure;
tiledImage = cat(1,cat(2,X,uint8(fullRecon)),...
cat(2,uint8(partialRecon),zeros(size(X),'uint8')));
imshow(tiledImage,'InitialMagnification',50);
And you will get an image showing the original RGB image (top left), the fully-reconstructed image using all of the stored detail coefficient matrices (top right), and the partially-reconstructed image using none of the stored detail coefficient matrices (bottom left):