I have a weird problem with the discrete FFT. I know that the Fourier transform of the Gaussian exp(-x^2/2) is again the same Gaussian exp(-k^2/2). I tried to test that with some simple code in Matlab and FFTW, but I get strange results.
First, the imaginary part of the result is not zero (in Matlab) as it should be.
Second, the absolute value of the real part is a Gaussian curve, but without taking the absolute value, half of the modes have a negative coefficient. More precisely, every second mode has a coefficient that is the negative of what it should be.
Third, the peak of the resulting Gaussian curve (after taking the absolute value of the real part) is not at one but much higher. Its height is proportional to the number of points on the x-axis. However, the proportionality factor is not 1 but nearly 1/20.
Could anyone explain to me what I am doing wrong?
Here is the MatLab code that I used:
function [nooutput,M] = fourier_test
Nx = 512; % number of points in x direction
Lx = 50; % width of the window containing the Gauss curve
x = linspace(-Lx/2,Lx/2,Nx); % creating an equidistant grid on the x-axis
input_1d = exp(-x.^2/2); % Gauss function as an input
input_1d_hat = fft(input_1d); % computing the discrete FFT
input_1d_hat = fftshift(input_1d_hat); % ordering the modes such that the peak is centred
plot(real(input_1d_hat), '-')
hold on
plot(imag(input_1d_hat), 'r-')
The answer is basically what Paul R suggests in his second comment: you introduce a phase shift (linearly dependent on the frequency) because the center of the Gaussian described by input_1d sits in the middle of the array rather than at the first sample, and a shift in the sample domain becomes a linear phase in the frequency domain. If you instead center your data (such that input_1d(1) corresponds to the peak of the Gaussian), as follows, you get a phase-corrected Gaussian in the frequency domain:
Nx = 512; % number of points in x direction
Lx = 50; % width of the window containing the Gauss curve
x = linspace(-Lx/2,Lx/2,Nx); % creating an equidistant grid on the x-axis
%%%%%%%%%%%%%%%%
x=fftshift(x); % <-- center
%%%%%%%%%%%%%%%%
input_1d = exp(-x.^2/2); % Gauss function as an input
input_1d_hat = fft(input_1d); % computing the discrete FFT
input_1d_hat = fftshift(input_1d_hat); % ordering the modes such that the peak is centered
plot(real(input_1d_hat), '-')
hold on
plot(imag(input_1d_hat), 'r-')
From the definition of the DFT, if the Gaussian is not centered so that its maximum occurs at the first sample (index k=0), you will see a phase twist. The effect of fftshift is to perform a circular shift, swapping the left and right halves of the dataset, which is equivalent to shifting the center of the peak to k=0.
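As a small illustration (my addition, not part of the original code): for an even-length vector, fftshift is just a circular shift by half the length.
v = 0:7;
fftshift(v)                  % [4 5 6 7 0 1 2 3] -- left and right halves swapped
circshift(v, numel(v)/2)     % same result: a circular shift by N/2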
As for the amplitude scaling, that is an issue with the definition of the DFT implemented in Matlab. From the documentation for the FFT:
For a length N input vector x, the DFT is a length N vector X, with elements

    X(k) = sum_{n=1}^{N} x(n) * exp(-j*2*pi*(k-1)*(n-1)/N),   1 <= k <= N.

The inverse DFT (computed by IFFT) is given by

    x(n) = (1/N) * sum_{k=1}^{N} X(k) * exp( j*2*pi*(k-1)*(n-1)/N),   1 <= n <= N.
Note that in the forward step the summation is not normalized by N. Therefore if you increase the number of points Nx in the summation while keeping the width Lx of the Gaussian function constant you will increase X(k) proportionately.
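As a rough check of this (my own addition, using the variables from the code above): multiplying the DFT by the grid spacing turns the sum into an approximation of the continuous Fourier integral, so the peak drops back to about sqrt(2*pi) instead of growing with Nx.
dx = Lx/(Nx-1);                 % grid spacing used by linspace
max(abs(dx*input_1d_hat))       % roughly sqrt(2*pi) = 2.5066
% Without the dx factor the peak is about sqrt(2*pi)*Nx/Lx, i.e. close to Nx/20
% for Lx = 50, which matches the ~1/20 proportionality factor in the question.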
As for signal leaking into the imaginary frequency dimension, that is due to the discrete form of the DFT, which results in truncation and other effects, as noted again by Paul R. If you reduce Lx while keeping Nx constant, you should see a reduction in the amount of signal in the imaginary dimension relative to the real dimension (compare the spectra while keeping peak intensities in the real dimension equal).
I have adapted the code from Comparing FFT of Function to Analytical FT Solution in Matlab for this question. I am trying to do FFTs and compare the results with the analytical expressions in the Wikipedia tables.
My code is:
a = 1.223;
fs = 1e5; %sampling frequency
dt = 1/fs;
t = 0:dt:30-dt; %time vector
L = length(t); % no. sample points
t = t - 0.5*max(t); %center around t=0
y = ; % original function in time
Y = dt*fftshift(abs(fft(y))); %numerical soln
freq = (-L/2:L/2-1)*fs/L; %freq vector
w = 2*pi*freq; % angular freq
F = ; %analytical solution
figure; subplot(1,2,1); hold on
plot(w,real(Y),'.')
plot(w,real(F),'-')
xlabel('Frequency, w')
title('real')
legend('numerical','analytic')
xlim([-5,5])
subplot(1,2,2); hold on;
plot(w,imag(Y),'.')
plot(w,imag(F),'-')
xlabel('Frequency, w')
title('imag')
legend('numerical','analytic')
xlim([-5,5])
If I study the Gaussian function and let
y = exp(-a*t.^2); % original function in time
F = exp(-w.^2/(4*a))*sqrt(pi/a); %analytical solution
in the above code, it looks like there is good agreement when the real and imaginary parts of the function are plotted:
But if I study a decaying exponential multiplied with a Heaviside function:
H = @(x)1*(x>0); % Heaviside function
y = exp(-a*t).*H(t);
F = 1./(a+1j*w); %analytical solution
then there is a clear discrepancy:
Why is there a discrepancy? I suspect it's related to the line Y = dt*fftshift(abs(fft(y))); but I'm not sure why or how.
Edit: I changed the ifftshift to fftshift in Y = dt*fftshift(abs(fft(y)));. Then I also removed the abs. The second graph now looks like:
What is the mathematical reason behind the 'mirrored' graph and how can I remove it?
The plots at the bottom of the question are not mirrored. If you plot them using lines instead of dots you'll see that the numerical result oscillates very rapidly. The magnitude matches, but the phase doesn't. When this happens, it's almost certainly a case of a shift in the time domain.
And indeed, you define the time domain function with the origin in the middle. The FFT expects the origin to be at the first (leftmost) sample. This is what ifftshift is for:
Y = dt*fftshift(fft(ifftshift(y)));
ifftshift moves the origin to the first sample, in preparation for the fft call, and fftshift moves the origin of the result to the middle, for display.
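For even-length signals the two functions give the same result; a tiny example (my own, not from the original answer) shows how they differ for odd lengths:
v = 1:5;
fftshift(v)      % [4 5 1 2 3] -- moves the first sample (the origin) to the middle
ifftshift(v)     % [3 4 5 1 2] -- moves the middle sample (the origin) to the front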
Edit
Your t does not have a 0:
>> t(L/2+(-1:2))
ans =
-1.5000e-05 -5.0000e-06 5.0000e-06 1.5000e-05
The sample at t(floor(L/2)+1) needs to be 0. That is the sample that ifftshift moves to the leftmost position. (I use floor there in case L is odd, which is not the case here.)
To generate a correct t do as follows:
fs = 1e5; % sampling frequency
L = 30 * fs;
t = -floor(L/2):floor((L-1)/2);
t = t / fs;
I first generate an integer t axis of the right length, with 0 at the correct location (t(floor(L/2)+1)==0). Then I convert that to seconds by dividing by the sampling frequency.
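A quick sanity check of the constructed axis (just illustrative; it assumes L is even, as it is here):
t(floor(L/2)+1)    % exactly 0 -- the sample that ifftshift moves to the front
numel(t) == L      % true -- the axis has the expected number of samples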
With this t, the Y as I suggest above, and the rest of your code as-is, I see this for the Gaussian example:
>> max(abs(F-Y))
ans = 4.5254e-16
For the other function I see larger differences, on the order of 6e-6. This is due to the difficulty of sampling the Heaviside function: you need t=0 in your sampled function, but the ideal step is discontinuous there, so its value at 0 is not well defined. I noticed that the real component has an offset of similar magnitude, which is caused by the sample at t=0.
Typically, the sampled Heaviside function is set to 0.5 for t=0. If I do that, the offset is removed completely, and max difference for the real component is reduced by 3 orders of magnitude (largest errors happen for values very close to 0, where I see a zig-zag pattern). For the imaginary component, the max error is reduced to 3e-6, still quite large, and is maximal at high frequencies. I attribute these errors to the difference between the ideal and sampled Heaviside functions.
You should probably limit yourself to band-limited functions (or nearly band-limited ones such as the Gaussian). You might want to try replacing the Heaviside function with an error function (the integral of a Gaussian) with a small sigma (sigma = 0.8/fs, i.e. 0.8 samples, is the smallest sigma I would consider for proper sampling). Its Fourier transform is known.
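A minimal sketch of both suggestions above (my own code; the sigma value of 0.8/fs, i.e. 0.8 samples, is my reading of the minimum for proper sampling):
H = @(x) 1*(x > 0) + 0.5*(x == 0);                 % sampled Heaviside with H(0) = 0.5
sigma = 0.8/fs;                                     % assumed minimum width, in seconds
Hsmooth = @(x) 0.5*(1 + erf(x/(sigma*sqrt(2))));    % error-function step, nearly band-limited
y = exp(-a*t).*Hsmooth(t);                          % the analytic reference F would need the transform of this smoothed function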
I would like to create 500 ms of band-limited (100-640 Hz) white Gaussian noise with a (relatively) flat frequency spectrum. The noise should be normally distributed with mean = ~0 and 99.7% of values between ± 2 (i.e. standard deviation = 2/3). My sample rate is 1280 Hz; thus, a new amplitude is generated for each frame.
duration = 500e-3;
rate = 1280;
amplitude = 2;
npoints = duration * rate;
noise = (amplitude/3)* randn( 1, npoints );
% Gaussian distributed white noise; mean = ~0; 99.7% of amplitudes between ± 2.
time = (0:npoints-1) / rate;
Could somebody please show me how to filter the signal for the desired result (i.e. 100-640 Hz)? In addition, I was hoping somebody could also show me how to generate a graph to illustrate that the frequency spectrum is indeed flat.
I intend on importing the waveform to Signal (CED) to output as a form of transcranial electrical stimulation.
The following is a Matlab implementation of the method alluded to by "Some Guy" in a comment to your question.
% In frequency domain, white noise has constant amplitude but uniformly
% distributed random phase. We generate this here. Only half of the
% samples are generated here, the rest are computed later using the complex
% conjugate symmetry property of the FFT (of real signals).
X = [1; exp(i*2*pi*rand(npoints/2-1,1)); 1]; % X(1) (DC) and X(npoints/2+1) (Nyquist) must be real
% Identify the locations of frequency bins. These will be used to zero out
% the elements of X that are not in the desired band
freqbins = (0:npoints/2)'/npoints*rate;
% Zero out the frequency components outside the desired band
X(find((freqbins < 100) | (freqbins > 640))) = 0;
% Use the complex conjugate symmetry property of the FFT (for real signals) to
% generate the other half of the frequency-domain signal
X = [X; conj(flipud(X(2:end-1)))];
% IFFT to convert to time-domain
noise = real(ifft(X));
% Normalize such that 99.7% of the times signal lies between ±2
noise = 2*noise/prctile(noise, 99.7);
Statistical analysis of around a million samples generated using this method results in the following spectrum and distribution:
Firstly, the spectrum (using the Welch method) is, as expected, flat in the band of interest:
Also, the distribution, estimated using a histogram of the signal, matches the Gaussian PDF quite well.
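For reference, a sketch of how one could reproduce such checks (my own code; pwelch needs the Signal Processing Toolbox, normpdf the Statistics Toolbox, and the figures above were based on around a million samples rather than the 640 generated here):
[pxx, fw] = pwelch(noise, [], [], [], rate);        % Welch estimate of the power spectrum
figure; plot(fw, 10*log10(pxx))
xlabel('Frequency (Hz)'); ylabel('PSD (dB/Hz)')     % should be flat between 100 and 640 Hz
figure; histogram(noise, 'Normalization', 'pdf'); hold on
xg = linspace(-3, 3, 200);
plot(xg, normpdf(xg, 0, 2/3))                       % compare with the target Gaussian pdf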
I'm trying to understand how the FFT in Matlab works, particularly how to define the frequency range for plotting it. I have read the Matlab help pages and other discussions here, and I think (or rather guess) that I'm still confused about it.
In the Matlab link:
http://es.mathworks.com/help/matlab/math/fast-fourier-transform-fft.html
they define such a frequency range as:
f = (0:n-1)*(fs/n)
with n and fs as:
n = 2^nextpow2(L); % Next power of 2 from length of signal x
fs = N/T; % N number of samples in x and T the total time of the recorded signal
But, on the other hand, in the previous post Understanding Matlab FFT example
(based on a previous version of Matlab), the resulting frequency range is defined as:
f = fs/2*linspace(0,1,NFFT/2+1);
with NFFT as the aforementioned n (Next power of 2 from length of signal x).
So, based on that, how can these different vectors (equation 1 and the final equation) be the same?
As you can see, the vectors are different, since the former has n points and the latter has NFFT/2+1 points! In fact, the factor (fs/n) is different from fs/2.
So, based on that, how can these different vectors (equation 1 and the final equation) be the same?
The example in the documentation from Mathworks plots the entire n-point output of the FFT. This covers the frequencies from 0 to nearly fs (exactly (n-1)/n * fs). They then make the following observation (valid for real inputs to the FFT):
The first half of the frequency range (from 0 to the Nyquist frequency fs/2) is sufficient to identify the component frequencies in the data, since the second half is just a reflection of the first half.
The other post you refer to just chooses not to show that redundant second half. It then uses about half the number of points, which cover half the frequency range.
In fact, the factor (fs/n) is different from fs/2.
Perhaps the easiest way to make sense of it is to compare the output of the two expressions for some small value of n, say n = 8, setting fs = 1 (since fs multiplies both expressions). On the one hand, the output of the first expression (0:n-1)*(fs/n) would be:
0.000 0.125 0.250 0.375 0.500 0.625 0.750 0.875
whereas the output of fs/2*linspace(0,1,n/2+1) would be:
0.000 0.125 0.250 0.375 0.500
As you can see, the two sets of frequencies are exactly the same up to the Nyquist frequency fs/2.
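A quick numerical check of this (my own few lines, using the same n = 8 and fs = 1):
n = 8;  fs = 1;
f_full = (0:n-1)*(fs/n);                 % n points, 0 to (n-1)/n*fs
f_half = fs/2*linspace(0, 1, n/2+1);     % n/2+1 points, 0 to fs/2
isequal(f_full(1:n/2+1), f_half)         % true: identical up to the Nyquist frequency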
The confusion perhaps arises from the fact that the two examples you have referenced plot the results of the fft differently. Please refer to the code below for the references made in this explanation.
In the first example, the plot is of the power spectrum (periodogram) over the frequency range. Note, in the first plot, that the periodogram is not centered at 0, which makes the frequency range appear to be twice the Nyquist frequency. As mentioned in the Mathworks link, it is common practice to center the periodogram at 0 to avoid this confusion (figure 2).
For the second example, taking the same parameters, the original plot is of the amplitude of the Fourier spectrum, with a different normalization than in the first example (figure 3). Using Matlab's full frequency ordering (as commented in the code), it is trivial to convert this seemingly different fft result to that of example 1; the identical result of the 0-centered periodogram is replicated in figure 4.
Thus, to answer your question specifically, the frequency ranges in both cases are the same, with the maximum frequency equal to the Nyquist frequency (fs/2), as in:
f = fs/2*linspace(0,1,NFFT/2+1);
The key to understanding how the DFT works (in Matlab as elsewhere) is to realize that you are simply projecting your discrete data set onto a Fourier basis: what the fft() function in Matlab returns are the coefficients of the expansion for each frequency component, and the order of those coefficients is given (in Matlab, as in example 2) by:
f = [f(1:end-1) -fliplr(f(1,2:end))];
See the Wikipedia page on the DFT for additional details:
https://en.wikipedia.org/wiki/Discrete_Fourier_transform
It might also be helpful to take the FFT while omitting the power-of-two length parameter, as in
y = fft(x).
In this case, you would see only a few non-zero components in y, corresponding to the exact coefficients of your input signal. The Mathworks page claims the following as a motivation for using or not using this length:
"Using a power of two for the transform length optimizes the FFT algorithm, though in practice there is usually little difference in execution time from using n = m."
%% First example:
% http://www.mathworks.com/help/matlab/math/fast-fourier-transform-fft.html
fs = 100; % Sample frequency (Hz)
t = 0:1/fs:10-1/fs; % 10 sec sample
x = (1.3)*sin(2*pi*15*t) ... % 15 Hz component
+ (1.7)*sin(2*pi*40*(t-2)); % 40 Hz component
% Removed the noise
m = length(x); % Window length
n = pow2(nextpow2(m)); % Transform length
y = fft(x,n); % DFT
f = (0:n-1)*(fs/n); % Frequency range
power = y.*conj(y)/n; % Power of the DFT
subplot(2,2,1)
plot(f,power,'-o')
xlabel('Frequency (Hz)')
ylabel('Power')
title('{\bf Periodogram}')
y0 = fftshift(y); % Rearrange y values
f0 = (-n/2:n/2-1)*(fs/n); % 0-centered frequency range
power0 = y0.*conj(y0)/n; % 0-centered power
subplot(2,2,2)
plot(f0,power0,'-o')
% plot(f0,sqrt_power0,'-o')
xlabel('Frequency (Hz)')
ylabel('Power')
title('{\bf 0-Centered Periodogram} Ex. 1')
%% Second example:
% http://stackoverflow.com/questions/10758315/understanding-matlab-fft-example
% Let's redefine the parameters for consistency between the two examples
Fs = fs; % Sampling frequency
% T = 1/Fs; % Sample time (not required)
L = m; % Length of signal
% t = (0:L-1)*T; % Time vector (as above)
% % Sum of a 3 Hz sinusoid and a 2 Hz sinusoid
% x = 0.7*sin(2*pi*3*t) + sin(2*pi*2*t); %(as above)
NFFT = 2^nextpow2(L); % Next power of 2 from length of y
% NFFT == n (from above)
Y = fft(x,NFFT)/L;
f = Fs/2*linspace(0,1,NFFT/2+1);
% Plot single-sided amplitude spectrum.
subplot(2,2,3)
plot(f,2*abs(Y(1:NFFT/2+1)),'-o')
title('Single-Sided Amplitude Spectrum of y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|')
% Get the 0-Centered Periodogram using the parameters of the second example
f = [f(1:end-1) -fliplr(f(1,2:end))]; % This is the frequency ordering used
% by the full fft in Matlab
power = (Y*L).*conj(Y*L)/NFFT;
% Rearrange for nicer plot
ToPlot = [f; power]; [~,ind] = sort(f);
ToPlot = ToPlot(:,ind);
subplot(2,2,4)
plot(ToPlot(1,:),ToPlot(2,:),'-o')
xlabel('Frequency (Hz)')
ylabel('Power')
title('{\bf 0-Centered Periodogram} Ex. 2')
I am trying to use Matlab's nlinfit function to estimate the best fitting Gaussian for x,y paired data. In this case, x is a range of 2D orientations and y is the probability of a "yes" response.
I have copied norm_func from relevant posts and I'd like to return a smoothed, normal distribution that best approximates the observed data in y, and returns the magnitude, mean and SD of the best-fitting pdf. At the moment, the fitted function appears to be incorrectly scaled and less than smooth - any help much appreciated!
x = -30:5:30;
y = [0,0.20,0.05,0.15,0.65,0.85,0.88,0.80,0.55,0.20,0.05,0,0;];
% plot raw data
figure(1)
plot(x, y, ':rs');
axis([-35 35 0 1]);
% initial parameter guesses (based on plot)
initGuess(1) = max(y); % amplitude
initGuess(2) = 0; % mean centred on 0 degrees
initGuess(3) = 10; % SD in degrees
% equation for Gaussian distribution
norm_func = @(p,x) p(1) .* exp(-((x - p(2))/p(3)).^2);
% use nlinfit to fit Gaussian using Least Squares
[bestfit,resid]=nlinfit(y, x, norm_func, initGuess);
% plot function
xFine = linspace(-30,30,100);
figure(2)
plot(x, y, 'ro', x, norm_func(xFine, y), '-b');
Many thanks
If your data actually represent probability estimates which you expect to come from normally distributed data, then fitting a curve is not the right way to estimate the parameters of that normal distribution. There are different methods of varying sophistication; one of the simplest is the method of moments, which means you choose the parameters such that the moments of the theoretical distribution match those of your sample. In the case of the normal distribution, these moments are simply the mean and the variance (or standard deviation). Here's the code:
% normalize y to be a probability (sum = 1)
p = y / sum(y);
% compute weighted mean and standard deviation
m = sum(x .* p);
s = sqrt(sum((x - m) .^ 2 .* p));
% compute theoretical probabilities
xs = -30:0.5:30;
pth = normpdf(xs, m, s);
% plot data and theoretical distribution
plot(x, p, 'o', xs, pth * 5)
The result shows a decent fit:
You'll notice the factor 5 in the last line. This is due to the fact that you don't have probability (density) estimates for the full range of values, but only at points spaced 5 apart. In my treatment I assumed that they correspond to something like an integral over the probability density, e.g. over an interval [x - 2.5, x + 2.5], which can be roughly approximated by multiplying the density in the middle by the width of the interval. I don't know if this interpretation is correct for your data.
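A rough way to check that interpretation (my own addition; it assumes each y value summarizes an interval of width 5 around the corresponding x):
p_interval = normcdf(x + 2.5, m, s) - normcdf(x - 2.5, m, s);  % exact probability of each 5-wide bin
plot(x, p, 'o', x, p_interval, 's')                            % should roughly overlay if the assumption holds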
Your data follow a Gaussian curve and you describe them as probabilities. Are these numbers (y) your raw data – or did you generate them from e.g. a histogram over a larger data set? If the latter, the estimate of the distribution parameters could be improved by using the original full data.
(Disclaimer: I thought about posting this on math.statsexchange, but found similar questions there that were moved to SO, so here I am)
The context:
I'm using fft/ifft to determine probability distributions for sums of random variables.
So, for example, I have two uniform probability distributions - in the simplest case, two uniform distributions on the interval [0,1].
To get the probability distribution for the sum of two random variables sampled from these two distributions, one can calculate the product of the Fourier transforms of the individual probability densities.
Doing the inverse FFT on this product, you get back the probability density of the sum.
An example:
function usumdist_example()
x = linspace(-1, 2, 1e5);
dx = diff(x(1:2));
NFFT = 2^nextpow2(numel(x));
% take two uniform distributions on [0,0.5]
intervals = [0, 0.5;
0, 0.5];
figure();
hold all;
for i=1:size(intervals,1)
% construct the prob. dens. function
P_x = x >= intervals(i,1) & x <= intervals(i,2);
plot(x, P_x);
% for each pdf, get the characteristic function fft(pdf,NFFT)
% and form the product of all char. functions in Y
if i==1
Y = fft(P_x,NFFT) / NFFT;
else
Y = Y .* fft(P_x,NFFT) / NFFT;
end
end
y = ifft(Y, NFFT);
x_plot = x(1) + (0:dx:(NFFT-1)*dx);
plot(x_plot, y / max(y), '.');
end
My issue is that the shape of the resulting prob. density function is perfect.
However, the x-axis does not match the x I create at the beginning, but is shifted.
In the example, the peak is at 1.5, while it should be at 0.5.
The shift changes if I e.g. add a third random variable or if I modify the range of x.
But I can't figure out how.
I'm afraid it might have to do with the fact that I have negative x values, while Fourier transforms usually work in a time/frequency domain, where frequencies < 0 don't make sense.
I'm aware I could e.g. find the peak and shift it to its proper place, but that seems nasty and error-prone...
I'd be glad about any ideas!
The problem is that your x origin is -1, not 0. You expect the center of the triangular pdf to be at .5, because that's twice the value of the center of the uniform pdf. However, the correct reasoning is: the center of the uniform pdf is 1.25 above your minimum x, and you get the center of the triangle at 2*1.25 = 2.5 above the minimum x (that is, at 1.5).
In other words: although your original x axis is (-1, 2), the convolution (or the FFT) behaves as if it were (0, 3). In fact, the FFT knows nothing about your x axis; it only uses the y samples. Since your uniform pdf is zero for the first samples, that zero interval of width 1 is amplified to twice its width when you do the convolution (or the FFT). I suggest drawing the convolution on paper to see this (draw the original signal, the signal reflected about the y axis, then displace the latter and see when the two begin to overlap). So you need a correction in the x_plot line to compensate for this increased width of the zero interval: use
x_plot = 2*x(1) + (0:dx:(NFFT-1)*dx);
and then plot(x_plot, y / max(y), '.') will give the correct graph:
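A hedged generalization of this correction (my own note, which also explains why the shift changed when a third variable was added): if k densities, each sampled on an axis starting at x(1), are convolved, the result starts at k*x(1).
k = size(intervals, 1);                    % number of convolved distributions (2 here)
x_plot = k*x(1) + (0:dx:(NFFT-1)*dx);      % reduces to 2*x(1) + ... for the example above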