Fourier transformation - matlab

I'm currently writing a program in Matlab related to image hashing. Loading the image and performing a simple down-sampling was not a problem. Here is my code:
clear all;
close all;
clc;
%%load Image
I = im2single(imread('cameraman.tif'));
%%Perform filtering and downsampling
gaussPyramid = vision.Pyramid('PyramidLevel', 2);
J = step(gaussPyramid, I); %%Preprocessed Image
%%Get 2D Fourier Transform of the preprocessed image
Y = fft2(J); %%the paper transforms the preprocessed image J, not the original I
The algorithm next assumes that the 2D Fourier transform (Y in my case) must be in the form Y(f_x, f_y), where f_x, f_y are the normalized spatial frequencies in the range [0, 1]. I'm not able to transform the output of Matlab's fft2 function into the form required by the algorithm.
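For reference, each bin of the fft2 output already corresponds to a pair of normalized spatial frequencies; a minimal sketch of the mapping (note that the DFT bins cover [0, 1) rather than a closed [0, 1]):
[M, N] = size(Y);
fx = (0:N-1) / N;   % columns of Y: normalized spatial frequency in [0, 1)
fy = (0:M-1) / M;   % rows of Y: normalized spatial frequency in [0, 1)
% Y(m, n) is the component at (fx(n), fy(m))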

I looked up the paper in question (Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints by Ashwin Swaminathan) and found:
We take a Fourier transform on the preprocessed image
to obtain I(fx, fy). The Fourier transform output is converted into polar co-ordinates to arrive at I′(ρ, θ)
By this, they seem to mean that I(fx, fy) is the Fourier transform of the image. But they're trying to use the Fourier-Mellin transform, which is different from a simple fft2 of the image. Based on the information found in this set of slides,
If an input image is expressed in terms of coordinates that are natural logarithms of the original coordinates, then the magnitude of its FT is insensitive to any change in scale of the original image (since it is Mellin Transform in the original coordinates)
and in files on the MATLAB Central File Exchange, you have to do additional work to get the Fourier-Mellin transform; in particular, taking the magnitude (which appears to be your missing step), converting the coordinates to log-polar, and taking a second fft2. It's unclear to me why the log part of the log-polar conversion was omitted from the steps listed in the paper. See an implementation for image registration here for an example. Note that this code is old, and the transformImage method appears not to exist anymore; it does the log-polar transform.
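Putting those steps together, here is a minimal sketch of the extra work (the grid sizes nRho and nTheta are arbitrary choices, and the log-polar resampling below is illustrative rather than the File Exchange code):
F = fftshift(abs(fft2(I)));             % magnitude spectrum with DC at the center
[h, w] = size(F);
cx = floor(w/2) + 1;                    % center (DC) pixel after fftshift
cy = floor(h/2) + 1;
nRho = 64; nTheta = 64;                 % arbitrary log-polar grid resolution
rhoMax = log(min(cx, cy) - 1);          % largest log-radius that stays inside the image
[theta, rho] = meshgrid(linspace(0, 2*pi, nTheta), linspace(0, rhoMax, nRho));
xq = cx + exp(rho) .* cos(theta);       % log-polar sample locations
yq = cy + exp(rho) .* sin(theta);
Flp = interp2(F, xq, yq, 'linear', 0);  % resample the magnitude on the log-polar grid
FM = abs(fft2(Flp));                    % second FFT gives the Fourier-Mellin magnitude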

Related

Matlab watershed algorithm - control separation width

I am interested in separating features in an image using the watershed algorithm. Following the Matlab tutorial, I tried to write a small proof-of-principle script that I can build on for my image analysis.
Im = imread('../../Pictures/testrec.png');
bw = imbinarize(rgb2gray(Im)); % the steps below expect a binary image
figure
imshow(bw,'InitialMagnification','fit'), title('bw')
%Compute the distance transform of the complement of the binary image.
D = bwdist(~bw);
figure
imshow(D,[],'InitialMagnification','fit')
title('Distance transform of ~bw')
%Complement the distance transform, and force pixels that don't belong to the objects to Inf.
D = -D;
D(~bw) = Inf;
%Compute the watershed transform and display the resulting label matrix as an RGB image.
L = watershed(D);
L(~bw) = 0;
rgb = label2rgb(L,'jet',[.5 .5 .5]);
figure
imshow(rgb,'InitialMagnification','fit')
title('Watershed transform of D')
It appears that the feature separation is somewhat random, as can be seen from the elongated feature in the middle. However, there do not seem to be any parameters for the watershed algorithm that could be used to tune its behavior. Can you suggest how such a parameter could be introduced, or a better algorithm to process the data?
Bonus question: I am interested in first separating my image using bwconncomp, then selectively applying the watershed algorithm to only some of the regions. Assume I know which of the cc.PixelIdxList regions I want to apply the algorithm to; how do I get a new PixelIdxList with the separated components?
Watershed transformation cannot separate convex shapes.
There is no way to change that. A convex shape always results in one object.
Blobs very close to convex will always result in poor watershed results.
The only reason why you have that "somewhat random" result instead of a single basin is that a few pixels are a bit off the perimeter.
Results of watershed are improved by pre- and post-processing, but the right choices are very specific to a given problem; one common generic step is sketched below.
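For example, you can suppress shallow minima with the extended-minima transform before running watershed (a sketch following the marker-controlled approach in the Image Processing Toolbox docs; the depth threshold 2 is an arbitrary value to tune per image, and D here is the negated distance transform before the Inf assignment):
mask = imextendedmin(D, 2);   % keep only minima deeper than the threshold
D2 = imimposemin(D, mask);    % impose them as the only regional minima
L = watershed(D2);
L(~bw) = 0;
For the bonus question, a sketch that runs watershed only inside selected components (keep is a hypothetical vector of indices into cc.PixelIdxList):
cc = bwconncomp(bw);
sel = false(size(bw));
sel(vertcat(cc.PixelIdxList{keep})) = true; % mask of the chosen regions
Dsel = -bwdist(~sel);
Dsel(~sel) = Inf;
Lsel = watershed(Dsel);
Lsel(~sel) = 0;
ccNew = bwconncomp(Lsel > 0); % new PixelIdxList with the separated components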

Perfect reconstruction of wavelet transform using CWT

I expected that performing a standard wavelet transform and then the inverse would give me back the original signal:
% dummy series:
Fs = 1e3;
t = 0:1/Fs:1;
x = exp(cos(2*pi*32*t).*(t>=0.1 & t<0.3) + sin(2*pi*64*t).*(t>0.7));
% perform default transform and inverse
wt = cwt(x);
rx = icwt(wt);
% plot
plot(t,x,t,rx)
Apart from the offset, the signal in the flat periods is distorted. It seems to be possible to perform a transform/inverse pair and get something close to the identity, as in Wavelet reconstruction of time series, but reading the tutorials/help for cwt I do not see how to achieve this.
The Matlab documentation explains that the CWT is not the best choice for perfect reconstruction. However, if you want to compare different bands as signals with the same size as the original, you can use the MODWT (or the shift-invariant DWT by cycle spinning, sometimes called à trous).
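A minimal sketch using the MODWT instead (assuming the Wavelet Toolbox is available), which reconstructs perfectly up to numerical precision:
wt = modwt(x);        % one row per scale, each the same length as x
xrec = imodwt(wt);    % inverse MODWT
max(abs(x - xrec))    % reconstruction error near machine precision
plot(t, x, t, xrec)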

How to calculate image histogram without normalizing data in Matlab

I have an image I whose pixel intensities fall within the range 0-1. I can calculate the image histogram by normalizing it, but I found the curve is not exactly the same as the histogram of the raw data. This causes some issues for the later peak-finding process (see the two attached images).
My question is: in Matlab, is there any way I can plot the image histogram without normalizing the data, so that the shape of the curve stays unchanged? This would benefit raw images whose pixel intensities are not within the 0-1 range; currently, I cannot calculate their histogram if I don't normalize the data.
The Matlab code for normalization and histogram calculation is attached. Any suggestion will be appreciated!
h = imhist(mat2gray(I));
The documentation of imhist tells us that the function checks the data type of the input and scales the values accordingly. Therefore, without testing with your attached data, this may work:
h = imhist(uint8(I));
Alternatively, you may map the integer representation to a floating-point one, either by using the range argument of mat2gray
h = imhist(mat2gray(I, [0,255]));
or by simply dividing it:
h = imhist(I/255);
The imhist answer in this thread describing normalizing or casting is completely correct. Alternatively, you could use the histogram function in MATLAB, which works with unnormalized floating-point data:
A = 255*rand(500,500);
histogram(A);
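If you want bins comparable to imhist's default 256 bins, you can pass explicit bin edges (a sketch, assuming the data lie in [0, 255]):
histogram(A, linspace(0, 255, 257)); % 256 equal-width bins spanning [0, 255]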

How to make the Fourier Descriptor result to be Insensitive?

I am trying to distinguish shapes in images in Matlab using Fourier descriptors. What I want to do is: 1. generate the Fourier descriptors for each image; 2. calculate the Euclidean distance between these Fourier descriptors to compare the shapes.
My problem is that I cannot make the computed Fourier descriptors insensitive to geometric transformations (e.g. rotation and scaling).
The code I use now is the Gonzalez Matlab version, the one in this link. I have tried to normalize the result by doing this:
% Normalization
DC = f(1);
f = f(2:11);      % drop the DC component (translation invariance), keep coefficients 2-11
f = abs(f);       % use magnitudes to be invariant to rotation & starting point
f = f / abs(DC);  % divide by the DC magnitude to be invariant to scale (abs keeps the result real)
But I don't think it worked as I expected: the result changes if I rotate or scale the same image.
I have been stuck on this question for a couple of days. I will appreciate any suggestion, thank you all in advance!
I recommend that you read "Feature Extraction and Image Processing for Computer Vision" by Nixon and Aguado. You will find there what you are looking for.
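In the meantime, one common normalization scheme along the lines of your code (a sketch, not taken from that book; f is assumed to hold the complex Fourier descriptors of a closed boundary):
f = f(2:end);    % drop the DC term -> translation invariance
f = abs(f);      % magnitudes -> rotation / starting-point invariance
f = f / f(1);    % normalize by the first harmonic's magnitude -> scale invariance
f = f(1:10);     % keep a few low-order coefficients for matching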

fftshift before calculating fourier transform: Matlab

I am looking at some FFT code in a Matlab project, where the FFT and inverse FFT are computed this way:
% Here image is a 2D image.
image_fft = fftshift(image,1);
image_fft = fftshift(image_fft,2);
image_fft = fft(image_fft,[],1);
image_fft = fft(image_fft,[],2);
image_fft = fftshift(image_fft,1);
image_fft = fftshift(image_fft,2);
% Some processing and then same sequence of fftshift, ifft and fftshift to move to
% time domain
I tried to find some information online, but I'm having trouble understanding why the fftshift needs to be done before computing the FFT.
Another question I have is whether this is something really Matlab-specific. For example, I am planning to port this code to C++ and use KISS FFT. Do I need to be wary of this?
The reason people like to swap prior to the DFT is that it makes the center pixel of the image the one with zero phase shift. It often makes algorithms that depend on phase easier to think about and implement. It is not Matlab-specific, and if you want to port an exact version of the code to another language, you'll need to perform the quadrant swap beforehand too.
EDIT:
Let me give an example that I hope will clear things up. Let's say that our image is the sum of a bunch of sinc functions with varying locations throughout the image. In the frequency domain, each of these sinc functions is a rect function with the same amplitude but with a different linear phase component that determines the location of the sinc in the image domain. By swapping the image prior to taking the DFT, we make the linear phase component of the frequency-domain representation of the center pixel zero. Moreover, the linear phase components of the other sinc functions will now be a function of their distance from the center pixel. If we didn't swap the image beforehand, then the linear phase components of the rect functions would be a function of their distance from the pixel in the top-left of the image. This would be non-intuitive and would involve the same kind of phase-wrapping considerations that one sees when equating frequencies in the range (pi, 2pi) rad/sample with the more intuitive (-pi, 0) rad/sample.
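A small numerical check of the zero-phase claim (a sketch; the impulse is placed at the pixel that fftshift maps to the origin for an even-sized grid):
N = 8;
x = zeros(N);
x(N/2+1, N/2+1) = 1;               % impulse at the center pixel
F1 = fftshift(fft2(x));            % no pre-shift
F2 = fftshift(fft2(fftshift(x)));  % pre-shift, as in the question
max(abs(angle(F1(:))))             % pi: alternating-sign phase ramp
max(abs(angle(F2(:))))             % ~0: the center pixel has zero phase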
For images, it's better to use fft2. It's Matlab's convention to arrange 2D FFTs with the DC component in the corners, presumably because of the row/column indexing conventions. fftshift allows for a more intuitive display of the FFT with the DC component in the center.
I don't fully understand what the piece of code you copied is about; here is an example of the fft and inverse fft of an image using Matlab, and a more detailed tutorial here.