How to plot a 2D FFT in Matlab?

I am using fft2 to compute the Fourier Transform of a grayscale image in MATLAB.
What is the common way to plot the magnitude of the result?

Assuming that I is your input image and F is its Fourier transform (i.e. F = fft2(I)), you can use this code:
F = fftshift(F); % Center FFT
F = abs(F); % Get the magnitude
F = log(F+1); % Use log, for perceptual scaling, and +1 since log(0) is undefined
F = mat2gray(F); % Use mat2gray to scale the image between 0 and 1
imshow(F,[]); % Display the result

Here is an example from my HOW TO Matlab page:
close all; clear all;
img = imread('lena.tif','tif');
imagesc(img)
img = fftshift(img(:,:,2)); % take the green channel; the pre-shift changes only the phase of the spectrum, not its magnitude
F = fft2(img);
figure;
imagesc(100*log(1+abs(fftshift(F)))); colormap(gray);
title('magnitude spectrum');
figure;
imagesc(angle(F)); colormap(gray);
title('phase spectrum');
This gives the magnitude spectrum and phase spectrum of the image. I used a color image, but you can easily adjust it to use a grayscale image as well (e.g. convert with rgb2gray first).
P.S. I just noticed that on MATLAB 2012a the above image is no longer included, so just replace the first line above with, say,
img = imread('ngc6543a.jpg');
and it will work. I used an older version of MATLAB to make the above example and just copied it here.
On the scaling factor
When we plot the 2D Fourier transform magnitude, we need to scale the pixel values using a log transform to expand the range of the dark pixels into the bright region so we can better see the transform. We use a constant c in the equation
s = c*log(1 + r)
There is no way I know of to predetermine this constant; you just need to try different values until you find one you like. I used 100 in the above example.
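A minimal sketch of that trial-and-error, reusing the ngc6543a.jpg image from above. Note that imagesc autoscales to the data range, so to see the effect of c, display with a fixed range, e.g. as uint8 in [0, 255]:
img = rgb2gray(imread('ngc6543a.jpg'));
F = fftshift(fft2(double(img)));
cs = [5 20 100 500]; % candidate scaling constants to compare
figure;
for k = 1:numel(cs)
    subplot(2, 2, k);
    imshow(uint8(cs(k) * log(1 + abs(F)))); % clipped to [0, 255]
    title(sprintf('c = %d', cs(k)));
end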

Related

convert image from Cartesian to Polar

I want to convert an image from Cartesian to polar coordinates and use it as an OpenGL texture.
So I used MATLAB, referring to the two articles below.
Link 1
Link 2
My code is exactly the same as the answer in Link 2:
% load image
img = imread('my_image.png');
% convert pixel coordinates from cartesian to polar
[h,w,~] = size(img);
[X,Y] = meshgrid((1:w)-floor(w/2), (1:h)-floor(h/2));
[theta,rho] = cart2pol(X, Y);
Z = zeros(size(theta));
% show pixel locations (subsample to get less dense points)
XX = X(1:8:end,1:4:end);
YY = Y(1:8:end,1:4:end);
tt = theta(1:8:end,1:4:end);
rr = rho(1:8:end,1:4:end);
subplot(121), scatter(XX(:),YY(:),3,'filled'), axis ij image
subplot(122), scatter(tt(:),rr(:),3,'filled'), axis ij square tight
% show images
figure
subplot(121), imshow(img), axis on
subplot(122), warp(theta, rho, Z, img), view(2), axis square
The result was exactly what I wanted, and I was very satisfied except for one thing: the red-circled area in the picture just below. Considering that the opposite side (the blue-circled area) is not empty, I think this part should also be filled. Because this part is empty, there is a problem when using the result as a texture.
I wonder how I can fill this part. Thank you.
(There are small differences from Link 2's answer code, like degrees vs. radians and axis values, but I don't think they matter.)
Those issues you show in your question happen because your algorithm is wrong.
What you did (push):
throw a grid on the source image
transform those points
try to plot these colored points and let MATLAB do some magic to make it look like a dense picture
Do it the other way around (pull):
throw a grid on the output
transform that backwards
sample the input at those points
The distinction is called "push" (into the output) vs. "pull" (from the input). Only pull gives proper results.
Very little MATLAB code is necessary. You just need pol2cart and interp2, and a meshgrid.
With interp2 you get to choose the interpolation (linear, cubic, ...). Nearest-neighbor interpolation leaves visible artefacts.
im = im2single(imread("PQFax.jpg"));
% center of polar map, manually picked
cx = 10 + 409/2;
cy = 7 + 413/2;
% output parameters
radius = 212;
dRho = 1; % radial step: one pixel
dTheta = 2*pi / (2*pi * radius); % angular step: a one-pixel arc at the outer radius
Thetas = pi/2 - (0:dTheta:2*pi);
Rhos = (0:dRho:radius);
% polar mesh
[Theta, Rho] = meshgrid(Thetas, Rhos);
% transform...
[Xq,Yq] = pol2cart(Theta, Rho);
% translate to sit on the circle's center
Xq = Xq + cx;
Yq = Yq + cy;
% sample image at those points
Ro = interp2(im(:,:,1), Xq,Yq, "cubic");
Go = interp2(im(:,:,2), Xq,Yq, "cubic");
Bo = interp2(im(:,:,3), Xq,Yq, "cubic");
Vo = cat(3, Ro, Go, Bo);
Vo = imrotate(Vo, 180);
imshow(Vo)
The other way around (get a "torus" from a "ribbon") is quite similar. Throw a meshgrid on the torus space, subtract center, transform from cartesian to polar, and use those to sample from the "ribbon" image into the "torus" image.
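A minimal sketch of that reverse direction, under the assumption that Vo is the ribbon as sampled above (before the final imrotate), with rows indexing rho and columns indexing theta on the Rhos/Thetas grids:
side = 2*radius + 1; % output square holding the full disc
[Xg, Yg] = meshgrid((1:side) - radius - 1); % cartesian grid, centered on the disc
[Tq, Rq] = cart2pol(Xg, Yg);
Cq = 1 + mod(pi/2 - Tq, 2*pi) / dTheta; % fractional column (theta) index, undoing the Thetas convention
Rw = 1 + Rq / dRho; % fractional row (rho) index
To = zeros(side, side, 3, 'single');
for ch = 1:3
    To(:,:,ch) = interp2(Vo(:,:,ch), Cq, Rw, 'linear', 0); % 0 outside the disc (and at the thin seam)
end
imshow(To)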
I'm more familiar with OpenCV than with MATLAB. Perhaps MATLAB has something like OpenCV's warpPolar(), or a generic remap(). In any case, the operation is trivial to do entirely "by hand" but there are enough supporting functions to take the heavy lifting off your hands (interp2, pol2cart, meshgrid).
1.- The white arcs tell you that the polar-to-Cartesian translation used introduces significant errors.
2.- Reversing the following script solves your question.
It's a script that goes from Cartesian to polar without introducing errors or ignoring input data, which is what happens when such wide white arcs show up in an apparently correct translation.
clear all; close all; clc
format long
A=imread('shaffen dass.jpg');
[sz1 sz2 sz3]=size(A);
szx=sz2;szy=sz1;
A1=A(:,:,1);A2=A(:,:,2);A3=A(:,:,3); % working with binary maps or grey scale images this wouldn't be necessary
figure(1);imshow(A);
hold all;
Cx=floor(szx/2);Cy=floor(szy/2);
plot(Cx,Cy,'co'); % observe that the image centre is not the circles' centre
Rmin=80;Rmax=400; % radius search range for imfindcircles
[centers, radii]=imfindcircles(A,[Rmin Rmax],... % outer circle
'ObjectPolarity','dark','Sensitivity',0.9);
h=viscircles(centers,radii);
hold all; % inner circle
[centers2, radii2]=imfindcircles(A,[Rmin Rmax],...
'ObjectPolarity','bright');
h=viscircles(centers2,radii2);
% L=floor(.5*(radii+radii2)); % this is NOT the length X that should have the resulting XY morphed graph
L=floor(2*pi*radii); % expected length of the morphed graph
cx=floor(.5*(centers(1)+centers2(1))); % coordinates of averaged circle centres
cy=floor(.5*(centers(2)+centers2(2)));
plot(cx,cy,'r*'); % check avg centre circle is not aligned to figure centre
plot([cx 1],[cy 1],'r-.');
t=[45:360/L:404+1-360/L]; % a step of 1 would give only 360 points, but we need L points
% (a finer step of 1/L would leave the for loop running for over a minute)
R=radii+5;x=R*sind(t)+cx;y=R*cosd(t)+cy; % build outer perimeter
hL1=plot(x,y,'m'); % axis equal;grid on;
% hold all;
% plot(hL1.XData,hL1.YData,'ro');
x_ref=hL1.XData;y_ref=hL1.YData;
% Sx=zeros(ceil(R),1);Sy=zeros(ceil(R),1);
Sx={};Sy={};
for k=1:1:numel(hL1.XData)
Lx=floor(linspace(x_ref(k),cx,ceil(R)));
Ly=floor(linspace(y_ref(k),cy,ceil(R)));
% plot(Lx,Ly,'go'); % check
% plot([cx x(k)],[cy y(k)],'r');
% L1=unique([Lx;Ly]','rows');
Sx=[Sx Lx'];Sy=[Sy Ly'];
end
sx=cell2mat(Sx);sy=cell2mat(Sy);
[s1 s2]=size(sx);
B1=uint8(zeros(s1,s2));
B2=uint8(zeros(s1,s2));
B3=uint8(zeros(s1,s2));
for n=1:1:s2
for k=1:1:s1
B1(k,n)=A1(sx(k,n),sy(k,n));
B2(k,n)=A2(sx(k,n),sy(k,n));
B3(k,n)=A3(sx(k,n),sy(k,n));
end
end
C=uint8(zeros(s1,s2,3));
C(:,:,1)=B1;
C(:,:,2)=B2;
C(:,:,3)=B3;
figure(2);imshow(C);
The resulting image:
3.- Let me know if you'd like some assistance writing the pol-to-cart version from this script.
Regards
John BG

Expanding a 2D Matrix in Matlab with Interpolation

Let's say I have a 4 pixel by 4 pixel image with values between 0 and 255 and I want to expand it to an 8-by-8 image, interpolating where necessary. I know how to interpolate a vector this way with interp1:
interp1(linspace(0,1,numel(vector)), vector, linspace(0,1,newSize))
But I'm unclear how to use interp2 to do the same thing for a matrix.
EDIT: Would it be the same if I were to create a meshgrid after using linspace for each dimension?
EDIT2: Yep, that worked. It's the same, but with a meshgrid.
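For reference, a minimal sketch of that meshgrid approach, mirroring the interp1 call above (the 4x4 matrix and target size are just example values):
M = magic(4); % example 4x4 "image"
newSize = 8;
[X, Y] = meshgrid(linspace(0, 1, size(M, 2)), linspace(0, 1, size(M, 1)));
[Xq, Yq] = meshgrid(linspace(0, 1, newSize), linspace(0, 1, newSize));
Mq = interp2(X, Y, M, Xq, Yq, 'linear'); % 8x8 interpolated result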
For data structured in the form of a regular grid, you can use interp2 as you correctly stated, but since you are dealing with an image, I suggest you use the imresize function, which is fine-tuned for rescaling images using appropriate interpolation algorithms:
% Load an image and display it...
img = imread('peppers.png');
figure(),imshow(img);
% Create a second image that is
% twice the size of the original
% one and display it...
img2 = imresize(img,2);
figure(),imshow(img2);
Anyway, if you really want to use the aforementioned function, here is how I would perform the interpolation:
% Load an image, convert it into double format
% and retrieve its basic properties...
img = imread('rice.png');
img = im2double(img);
[h,w] = size(img);
% Perform the interpolation...
x = linspace(1,w,w*2);
y = linspace(1,h,h*2);
[Xq,Yq] = meshgrid(x,y);
img2 = interp2(img,Xq,Yq,'cubic'); % im2double already maps to [0,1]; dividing by 255 again would darken the image
% Display the original image...
figure(),imshow(img);
%Display the rescaled image...
figure(),imshow(img2,[]);

Lowpass filter not working

I have a noisy image that I am trying to clean using a lowpass filter (code below, modified from here). The image I get as a result is essentially identical to the one I gave as an input.
I'm not an expert, but my conclusion would be that the input image is so noisy that no patterns are found. Do you agree? Do you have any suggestion on how to interpret the result?
Result from the code:
Input image:
Code:
clear; close all;
frame = 20;
size_y = 512; % image width (number of columns)
size_x = 256; % image height (number of rows)
roi=5;thresh=100000;
AA = imread('image.png');
A = AA(1:size_x, 1:size_y); % crop to the region of interest
A(isnan(A)) = 0 ;
B = fftshift(fft2(A));
fabs = abs(B);
figure; imshow(log(1 + fabs), []); % show the log-magnitude spectrum (B itself is complex)
local_extr = ordfilt2(fabs, roi^2, ones(roi)); % find local maxima within a roi-by-roi (5x5) neighbourhood
result = (fabs == local_extr) & (fabs > thresh);
[r, c] = find(result);
for i=1:length(r)
if (r(i)-128)^2+(c(i)-128)^2>thresh % assume periodic noise lies outside this radius around the spectrum centre (thresh is reused here as a squared radius)
B(r(i)-2:r(i)+2,c(i)-2:c(i)+2)=0; % zero the frequency components
end
end
Inew=ifft2(fftshift(B));
figure;
subplot(2,1,1); imagesc(A), colormap(gray); title('Original image');
subplot(2,1,2);imagesc(real(Inew)),colormap(gray); title('Filtered image');
For filtering this kind of signal, you can try a median filter. It might be more appropriate than a mean or Gaussian filter: the median filter is very effective on "salt and pepper" noise, where a mean filter just blurs the noise.
As the signal seems very noisy, you need to experiment to find a good kernel size for the filter. You can also try increasing the contrast of the image (after filtering) in order to better see the differences between the gray levels.
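A minimal sketch of that suggestion (the kernel sizes and the contrast step are assumptions to experiment with):
A = imread('image.png'); % the noisy input from the question
if size(A,3) == 3, A = rgb2gray(A); end
for k = [3 5 9] % try a few kernel sizes
    M = medfilt2(A, [k k]); % median filter (Image Processing Toolbox)
    M = imadjust(M); % stretch the contrast to separate the gray levels
    figure; imshow(M); title(sprintf('median %dx%d + contrast stretch', k, k));
end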

Sparse 3D reconstruction MATLAB example

I have a stereo camera system and I am trying this MATLAB's Computer Vision toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I tried each camera separately, based on the first example, found the intrinsic camera calibration matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, and therefore commented out the corresponding code in the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses from bumblebee XB3 cameras.
Question: aren't they supposed to be the same?
Then I did stereo calibration based on the fifth example. I saved the rotation matrix (R) and translation vector (T) from that, and therefore commented out the corresponding code in the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices to the MATLAB example and tried to find the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1- My epipolar constraint values are far too large (on the order of 10^16)!
2- I am not sure about the camera matrices and how I calculated them from the R and T of the Caltech toolbox.
P.S. As far as feature extraction goes, that is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
files{i}=fullfile('...\sparse_matlab', files{i});
images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
%detect SURF feature points
images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
'MetricThreshold',600);
%extract SURF descriptors
[images(i).featureVectors,images(i).points] = ...
extractFeatures(rgb2gray(images(i).image),images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero, It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
% Compute the fundamental matrix using the example helper function
% Evaluate the epipolar constraint
epipolarConstraint =[matchedPoints1.Location(k,:),1]...
*helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
*[matchedPoints2.Location(k,:),1]';
%%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
% Only consider feature matches where the absolute value of the
% constraint expression is less than the threshold.
valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
% For all image locations from a list of correspondences build an A
pointInImage1 = validpts1(i).Location;
pointInImage2 = validpts2(i).Location;
P1 = images(1).CameraMatrix'; % Transpose to match the convention in
P2 = images(2).CameraMatrix'; % in [1]
A = [
pointInImage1(1)*P1(3,:) - P1(1,:);...
pointInImage1(2)*P1(3,:) - P1(2,:);...
pointInImage2(1)*P2(3,:) - P2(1,:);...
pointInImage2(2)*P2(3,:) - P2(2,:)];
% Compute the 3-D location using the smallest singular value from the
% singular value decomposition of the matrix A
[~,~,V]=svd(A);
X = V(:,end);
X = X/X(end);
% Store location
points3D(i,:) = X';
% Store pixel color for visualization
y = round(pointInImage1(1));
x = round(pointInImage1(2));
color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you do use the camera parameters from Caltech, you would have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
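A minimal sketch of that conversion (this reads the conventions as described above; treat it as an assumption to verify against your own calibration data rather than a drop-in fix):
% Caltech post-multiply model: p = K * (R*X + T)   (column vectors)
% Example's pre-multiply model: [x y w] = [X Y Z 1] * camMatrix
% Transposing the first form gives camMatrix = [R'; T'] * K', so:
camMatrix = [R'; T'] * KK'; % R (3x3), T (3x1), KK (3x3) from the Caltech .mat files
In the question's code that would mean transposing Rotation before stacking and transposing K1/K2 in the products.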
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
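For that route, a rough sketch using the toolbox's stereo calibration output (stereoParams is assumed to come from the Stereo Camera Calibrator App or from programmatic stereo calibration; disparityBM is in newer releases, older ones used disparity):
I1 = imread('Left1.tif');
I2 = imread('Right1.tif');
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams); % row-align the epipolar lines
imshow(stereoAnaglyph(J1, J2)); % quick visual check of the rectification
disparityMap = disparityBM(rgb2gray(J1), rgb2gray(J2)); % dense block-matching disparity
figure; imshow(disparityMap, [0 64]); colormap(gca, jet); colorbar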

Matlab inverse FFT from phase/magnitude only

So I have this image 'I'. I take F = fft2(I) to get the 2D Fourier transform. To reconstruct it, I could go ifft2(F).
The problem is, I need to reconstruct this image from only a) the magnitude and b) the phase components of F. How can I separate these two components of the Fourier transform, and then reconstruct the image from each?
I tried the abs() and angle() functions to get magnitude and phase, but the phase one won't reconstruct properly.
Help?
You need one matrix with the same magnitude as F and zero phase, and another with the same phase as F and uniform magnitude. As you noted, abs gives you the magnitude. To get the uniform-magnitude, same-phase matrix, use angle to get the phase, and then recombine the phase into real and imaginary parts:
F_Mag = abs(F); % has same magnitude as F, zero phase
F_Phase = cos(angle(F)) + 1j*sin(angle(F)); % has magnitude 1, same phase as F
I_Mag = ifft2(F_Mag);
I_Phase = ifft2(F_Phase);
It's too late to add another answer to this post, but anyway:
@zhilevan, you can use the code I have written based on mtrw's answer:
img = rgb2gray(imread('pillsetc.png')); % 'img' rather than 'image', which would shadow the built-in function
subplot(131),imshow(img),title('original image');
set(gcf, 'Position', get(0, 'ScreenSize')); % maximize the figure window
%:::::::::::::::::::::
F = fft2(double(img));
F_Mag = abs(F); % has the same magnitude as F, 0 phase
F_Phase = exp(1i*angle(F)); % has magnitude 1, same phase as F
% OR: F_Phase = cos(angle(F)) + 1i*(sin(angle(F)));
%:::::::::::::::::::::
% reconstruction
I_Mag = log(1 + abs(ifft2(F_Mag))); % log-scale: the magnitude-only reconstruction is dominated by its DC spike
I_Phase = ifft2(F_Phase);
%:::::::::::::::::::::
% Calculate limits for plotting
% To display the images properly using imshow, the color range
% of the plot must span the minimum and maximum values in the data.
I_Mag_min = min(min(abs(I_Mag)));
I_Mag_max = max(max(abs(I_Mag)));
I_Phase_min = min(min(abs(I_Phase)));
I_Phase_max = max(max(abs(I_Phase)));
%:::::::::::::::::::::
% Display the reconstructed images
% Because each reconstruction uses only one component, the result is
% complex. This means the magnitude of the image must be taken in order
% to produce a viewable 2-D image.
subplot(132),imshow(abs(I_Mag),[I_Mag_min I_Mag_max]), colormap gray
title('reconstructed image only by Magnitude');
subplot(133),imshow(abs(I_Phase),[I_Phase_min I_Phase_max]), colormap gray
title('reconstructed image only by Phase');