How can I restore my image from the Fourier spectrum? - MATLAB

I've taken the following image:
PandaNoise.bmp and tried to remove the periodic noise by focusing on its Fourier spectrum. Commented lines are the ones I'm not sure about. I can't get it back to the image plane. What am I doing wrong here?
panda = imread('PandaNoise.bmp');
fpanda = fft2(panda); % 2d fast fourier transform
fpanda = fftshift(fpanda); % center FFT
fpanda = abs(fpanda); % get magnitude
fpanda = log(1 + fpanda); % use log to expand range of dark pixels into bright region
fpanda = mat2gray(fpanda); % scale image from 0 to 1
figure; imshow(fpanda,[]); % show the picture
zpanda = fpanda;
zpanda(fpanda<0.5)=0;
zpanda(fpanda>0.5)=1;
%img = ifft2(zpanda);
%img = ifftshift(img);
%img = exp(1-img);
%img = abs(img);

Here is an example of how to work with the complex Fourier transform. We can take the log modulus for display, but we don't change the original Fourier transform matrix, since the phase information that abs throws away is essential for reconstruction.
% Load data
panda = imread('https://i.stack.imgur.com/9SlW5.png');
panda = im2double(panda);
% Forward transform
fpanda = fft2(panda);
% Prepare FT for display -- don't change fpanda!
fd = fftshift(fpanda);
fd = log(1 + abs(fd));
figure; imshow(fd,[]); % show the picture
% From here we learn that we should keep the central 1/5th along both axes
% Low-pass filter
sz = size(fpanda);
center = floor(sz/2)+1;
half_width = ceil(sz/10)-1;
mask = zeros(sz); % call it "mask" to avoid shadowing MATLAB's built-in filter function
mask(center(1)+(-half_width(1):half_width(1)),...
     center(2)+(-half_width(2):half_width(2))) = 1;
mask = ifftshift(mask); % The origin should be on the top-left, like that of fpanda.
fpanda = fpanda .* mask;
% Inverse transform
newpanda = ifft2(fpanda);
figure; imshow(newpanda);
After computing the ifft2, newpanda is supposed to be purely real-valued if we designed the filter correctly (i.e. perfectly symmetric around the origin). Any imaginary component still present should be pure numerical inaccuracy. MATLAB will detect that the input to ifft2 is conjugate symmetric and return a purely real result. Octave will not, and you would have to do newpanda = real(newpanda) to avoid warnings from imshow.
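If you want to verify that the leftover imaginary part really is negligible, a small check (reusing the newpanda variable from above) could look like this:
max(abs(imag(newpanda(:)))) % on the order of machine precision for a symmetric filter
newpanda = real(newpanda); % required in Octave before calling imshow
figure; imshow(newpanda);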

Related

How can I combine HF and LF components of an image?

I separated the high-frequency (HF) component and low-frequency (LF) component from an image. After this step, I applied some denoising technique to the HF and LF components. Afterwards I want to combine them back together. How can I do that?
I used the code below for the decomposition:
%// Load an image
Orig = double(rgb2gray(imread('lena.jpg')));
O=ROFdenoise(Orig, 12);
O=uint8(O);
figure, imshow(O)
%// Transform
Orig_T = dct2(Orig);
%// Split between high- and low-frequency components in the spectrum
cutoff = round(0.5 * 226);
High_T = fliplr(tril(fliplr(Orig_T), cutoff));
Low_T = Orig_T - High_T;
%// Transform back
High = idct2(High_T);
Low = idct2(Low_T);
I've commented out ROFdenoise because I don't know what it does. If you've split your image in the frequency domain, you want to combine it back together in the frequency domain too. Also, I've added some plotting to make it easier to see what's happening.
%// Load an image
Orig = double(rgb2gray(imread('Lenna.png')));
%O=ROFdenoise(Orig, 12);
O=Orig; % No denoising before DCT
O=uint8(O);
figure(1), subplot(2,2,1), imshow(O), title('Before')
%// Discrete Cosine Transform
T = dct2(Orig);
%// Split between high- and low-frequency components in the spectrum
cutoff = round(0.5 * 226);
highT = fliplr(tril(fliplr(T), cutoff));
lowT = T - highT;
%// Do some "denoising": here we simply zero out the high-frequency band
highT = 0*highT;
subplot(2,2,2), imshow(highT), title('High T')
subplot(2,2,4), imshow(lowT), title('Low T')
%// Combine back
denoiseT = highT + lowT;
%// Transform back
denoiseO = uint8(idct2(denoiseT));
subplot(2,2,3), imshow(denoiseO), title('After')
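As a sanity check: because idct2 is linear, combining in the frequency domain and combining the two back-transformed images in the spatial domain should agree up to rounding. A small sketch reusing highT and lowT from above:
spatialCombo = uint8(idct2(highT) + idct2(lowT));
max(abs(double(denoiseO(:)) - double(spatialCombo(:)))) % any difference is floating-point rounding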
Also, here is Lenna.

Count circle objects in an image using matlab

How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[~, gyy] = gradient(gy); % the mixed derivative gxy was already computed above
k = 0.04; % 0.04-0.15 (see Wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris corner detector (high second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % keep the tops of bright blobs (i.e. negative second derivatives)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try adjusting the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
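As a starting point, a quick sweep over k and the threshold (reusing the derivative images gxx, gyy and gxy computed above; the candidate values are just guesses) shows how sensitive the count is:
for k = [0.04 0.08 0.15]
    blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2) .* (gxx < 0 & gyy < 0);
    for blobThreshold = [0.5 1 2]
        nCircles = nnz(imregionalmax(blob) & blob > blobThreshold);
        fprintf('k = %.2f, threshold = %.1f -> %d circles\n', k, blobThreshold, nCircles);
    end
end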

Fourier transform for fiber alignment

I'm working on an application to determine from an image the degree of alignment of a fiber network. I've read several papers on this issue and they basically do this:
Find the 2D discrete Fourier transform (DFT = F(u,v)) of the image (gray, range 0-255)
Find the Fourier Spectrum (FS = abs(F(u,v))) and the Power Spectrum (PS = FS^2)
Convert spectrum to polar coordinates and divide it into 1º intervals.
Calculate number-averaged line intensities (FI) for each interval (theta), that is, the average of all the intensities (pixels) forming "theta" degrees with respect to the horizontal axis.
Transform FI(theta) to cartesian coordinates
Cxy(theta) = [FI*cos(theta), FI*sin(theta)]
Find eigenvalues (lambda1 and lambda2) of the matrix Cxy'*Cxy
Find alignment index as alpha = 1 - lambda2/lambda1
I've implemented this in MATLAB (code below), but I'm not sure whether it is OK, since points 3 and 4 are not really clear to me (I'm getting similar results to those of the papers, but not in all cases). For instance, in point 3, does "spectrum" refer to FS or to PS? And in point 4, how should this average be done? Are all the pixels considered (even though there are more pixels along the diagonal)?
rgb = imread('network.tif');%513x513 pixels
im = rgb2gray(rgb);
im = imrotate(im,-90);%since FFT space is rotated 90º
FT = fft2(im) ;
FS = abs(FT); %Fourier spectrum
PS = FS.^2; % Power spectrum
FS = fftshift(FS);
PS = fftshift(PS);
xoffset = (513-1)/2;
yoffset = (513-1)/2;
% Avoid low frequency points
x1 = 5;
y1 = 0;
% Maximum high frequency pixels
x2 = 255;
y2 = 0;
i = 0; % profile counter (must be initialized before the loop)
for theta = 0:pi/180:pi
% Transposed rotation matrix
Rt = [cos(theta) sin(theta);
-sin(theta) cos(theta)];
% Find radial lines necessary for improfile
xy1_rot = Rt * [x1; y1] + [xoffset; yoffset];
xy2_rot = Rt * [x2; y2] + [xoffset; yoffset];
plot([xy1_rot(1) xy2_rot(1)], ...
[xy1_rot(2) xy2_rot(2)], ...
'linestyle','none', ...
'marker','o', ...
'color','k');
prof = improfile(FS,[xy1_rot(1) xy2_rot(1)],[xy1_rot(2) xy2_rot(2)]); % sample FS here (or PS -- exactly the ambiguity in point 3)
i = i + 1;
FI(i) = sum(prof(:))/length(prof);
Cxy(i,:) = [FI(i)*cos(theta), FI(i)*sin(theta)];
end
C = Cxy'*Cxy;
[V,D] = eig(C)
lambda2 = D(1,1);
lambda1 = D(2,2);
alpha = 1 - lambda2/lambda1
Figure: A) original image, B) plot of log(P+1), C) polar plot of FI.
My main concern is that when I choose an artificial image perfectly aligned (attached figure), I get alpha = 0.91, and it should be exactly 1.
Any help will be greatly appreciated.
PS: those black dots in the middle plot are just the points used by improfile.
I believe that there are a couple of sources of potential error here that are leading to you not getting a perfect alpha value.
Discrete Fourier Transform
You have discrete imaging data which forces you to take a discrete Fourier transform, which inevitably (depending on the resolution of the input data) has some accuracy issues.
Binning vs. Sampling Along a Line
The way that you have done the binning is that you literally drew a line (rotated by a particular angle) and sampled the image along that line using improfile. improfile performs interpolation of your data along that line, introducing yet another potential source of error. The default is nearest-neighbor interpolation, which in the example shown below can cause multiple "profiles" to all pick up the same points.
This was with a rotation of 1 degree off-vertical, when technically you'd want those peaks to appear only for a perfectly vertical line. It is easy to see how this sort of interpolation of the Fourier spectrum can lead to a spread around the "correct" answer.
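One way to reduce (though not eliminate) this effect is to request bilinear interpolation and an explicit number of samples from improfile, instead of relying on the nearest-neighbor default. A sketch using the rotated endpoints from the question's code (the sample count is an assumption):
n = 250; % assumed number of samples along the radial line
prof = improfile(FS, [xy1_rot(1) xy2_rot(1)], [xy1_rot(2) xy2_rot(2)], n, 'bilinear');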
Data Undersampling
Similar to Nyquist sampling in the Fourier domain, sampling in the spatial domain has some requirements as well.
Imagine for a second that you wanted to use 45-degree bin widths instead of 1-degree ones. Your approach would still sample along a thin line and use that sample to represent 45 degrees' worth of data. Clearly, this is a gross under-sampling of the data, and you can imagine that the result wouldn't be very accurate.
It becomes more and more of an issue the further you get from the center of the image since the data in this "bin" is really pie wedge shaped and you're approximating it with a line.
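You can make this concrete by counting how many pixels actually fall inside one 1-degree wedge, versus the number of samples a single line profile provides (a sketch assuming the 513x513 image from the question):
sz = 513;
[xx, yy] = meshgrid((1:sz) - (sz+1)/2);
[theta, R] = cart2pol(xx, yy);
deg = mod(round(rad2deg(theta)), 360);
nnz(deg == 45 & R > 5 & R < 255) % pixels in the 45-degree wedge: many hundreds
% versus roughly 250 samples on one radial line used to represent the same wedge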
A Potential Solution
A different approach to binning would be to determine the polar coordinates (r, theta) of all pixel centers in the image, bin the theta components into 1-degree bins, and then sum all of the values that fall into each bin.
This has several advantages:
It removes the undersampling that we talked about and draws samples from the entire "pie wedge" regardless of the sampling angle.
It ensures that each pixel belongs to one and only one angular bin.
I have implemented this alternate approach in the code below with some synthetic horizontal-line data, and am able to achieve an alpha value of 0.988, which I'd say is pretty good given the discrete nature of the data.
% Draw a bunch of horizontal lines
data = zeros(101);
data([5:5:end],:) = 1;
fourier = fftshift(fft2(data));
FS = abs(fourier);
PS = FS.^2;
center = fliplr(size(FS)) / 2;
[xx,yy] = meshgrid(1:size(FS,2), 1:size(FS, 1));
coords = [xx(:), yy(:)];
% De-mean coordinates to center at the middle of the image
coords = bsxfun(@minus, coords, center);
[theta, R] = cart2pol(coords(:,1), coords(:,2));
% Convert to degrees and round them to the nearest degree
degrees = mod(round(rad2deg(theta)), 360);
degreeRange = 0:359;
% Band pass to ignore high and low frequency components;
lowfreq = 5;
highfreq = size(FS,1)/2;
% Now average everything with the same degrees (sum over PS and divide by the number of pixels)
ps_integral = zeros(size(degreeRange)); % preallocate the per-degree averages
fs_integral = zeros(size(degreeRange));
for k = degreeRange
ps_integral(k+1) = mean(PS(degrees == k & R > lowfreq & R < highfreq));
fs_integral(k+1) = mean(FS(degrees == k & R > lowfreq & R < highfreq));
end
thetas = deg2rad(degreeRange);
Cxy = [ps_integral.*cos(thetas);
ps_integral.*sin(thetas)]';
C = Cxy' * Cxy;
[V,D] = eig(C);
lambda2 = D(1,1);
lambda1 = D(2,2);
alpha = 1 - lambda2/lambda1;

MATLAB 3D sparse reconstruction issues. Somehow I can't get the final scatter plot of the points to work

I'm trying to reconstruct the shape of a sail using the 3D sparse reconstruction method. I took two pictures with two cameras, and I managed to calibrate the cameras as well. The checkerboard is visible in the pictures, and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use automatic feature detection properly. Problems arise when I try to use matchFeatures. To overcome this, I decided to use the cpselect tool instead, so I can manually click on the features. The matching between points from the two views now seems to be correct. But when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place; it looks deformed. The plot clearly does not represent the sail, and I don't know why.
The code follows.
Thank you in advance
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); % R = rotation, t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save them as 'matchedPoints1'and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem you have is that you are cropping the images. Once you do that, all your coordinates are off. You do not need to do that here, because the images do not need to be the same size.
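If you did keep the cropping for some reason, you would at least have to translate the selected points back into the uncropped image coordinates before triangulating. A hedged sketch, with the rectangles copied from your imcrop calls:
rect1 = [80 10 1040 1300]; % crop rectangle used for I1: [xmin ymin width height]
rect2 = [0 10 1067 1300]; % crop rectangle used for I2
matchedPoints1 = bsxfun(@plus, matchedPoints1, rect1(1:2) - 1);
matchedPoints2 = bsxfun(@plus, matchedPoints2, rect2(1:2) - 1);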
The second question is how precisely did you select the matching points? From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
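A sketch of the regionprops idea (the threshold and the minimum blob area are assumptions you would tune for your images):
gray1 = rgb2gray(I1);
bw1 = im2bw(gray1, graythresh(gray1)); % or imbinarize on newer MATLAB releases
bw1 = bwareaopen(bw1, 20); % drop tiny noise blobs (assumed minimum area)
stats = regionprops(bw1, 'Centroid');
spotCenters1 = vertcat(stats.Centroid); % N-by-2 [x y] spot centroids, to match against I2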
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
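For reference, the dense stereo route would look roughly like this (a sketch; stereoParams would come from calibrating the pair as a stereo rig, and the function names are from the Computer Vision System Toolbox):
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams); % here the images must match in size
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
points3D = reconstructScene(disparityMap, stereoParams); % dense 3-D point cloud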

Matlab inverse FFT from phase/magnitude only

So I have this image 'I'. I take F = fft2(I) to get the 2D fourier transform. To reconstruct it, I could go ifft2(F).
The problem is, I need to reconstruct this image from only (a) the magnitude and (b) the phase components of F. How can I separate these two components of the Fourier transform, and then reconstruct the image from each?
I tried the abs() and angle() functions to get magnitude and phase, but the phase one won't reconstruct properly.
Help?
You need one matrix with the same magnitude as F and 0 phase, and another with the same phase as F and uniform magnitude. As you noted, abs gives you the magnitude. For the uniform-magnitude, same-phase matrix, use angle to get the phase, then rebuild a unit-magnitude complex matrix from it (cosine for the real part, sine for the imaginary part).
F_Mag = abs(F); %# has same magnitude as F, 0 phase
F_Phase = cos(angle(F)) + 1i*sin(angle(F)); %# has magnitude 1, same phase as F
I_Mag = ifft2(F_Mag);
I_Phase = ifft2(F_Phase);
It's too late to put another answer on this post, but... anyway.
@zhilevan, you can use the code I have written based on mtrw's answer:
image = rgb2gray(imread('pillsetc.png'));
subplot(131),imshow(image),title('original image');
set(gcf, 'Position', get(0, 'ScreenSize')); % maximize the figure window
%:::::::::::::::::::::
F = fft2(double(image));
F_Mag = abs(F); % has the same magnitude as image, 0 phase
F_Phase = exp(1i*angle(F)); % has magnitude 1, same phase as image
% OR: F_Phase = cos(angle(F)) + 1i*(sin(angle(F)));
%:::::::::::::::::::::
% reconstruction
I_Mag = log(abs(ifft2(F_Mag)) + 1); % exp(1i*0) is just 1, so no phase factor is needed
I_Phase = ifft2(F_Phase);
%:::::::::::::::::::::
% Calculate limits for plotting
% To display the images properly using imshow, the color range
% of the plot must span the minimum and maximum values in the data.
I_Mag_min = min(min(abs(I_Mag)));
I_Mag_max = max(max(abs(I_Mag)));
I_Phase_min = min(min(abs(I_Phase)));
I_Phase_max = max(max(abs(I_Phase)));
%:::::::::::::::::::::
% Display reconstructed images
% Because only part of the complex information was kept, the reconstructions
% are complex-valued. This means that the magnitude of the result must be
% taken in order to produce a viewable 2-D image.
subplot(132),imshow(abs(I_Mag),[I_Mag_min I_Mag_max]), colormap gray
title('reconstructed image only by Magnitude');
subplot(133),imshow(abs(I_Phase),[I_Phase_min I_Phase_max]), colormap gray
title('reconstructed image only by Phase');