I have a 2D Gaussian beam at its waist (where the phase is zero throughout the transverse plane). When I use fft2 to find the 2D spatial Fourier transform and plot the phase, I observe a pi phase shift between any two adjacent data points. However, I don't observe this when I use for loops to calculate the Fourier transform instead of using fft2.
Is this due to phase wrapping? How do I overcome this?
Thanks.
Edit: I'm posting the code for the FFT of a circular aperture instead, since it shows the same behaviour and is much simpler.
Nx = 200; Ny = Nx;
%creating coordinate grids
x = -Nx/2:Nx/2 - 1; y = -Ny/2:Ny/2 - 1;
[X,Y] = meshgrid(x,y);
r = 15; %radius of aperture
Eip = ((X.^2 + Y.^2 ) <= r^2); %aperture
figure;pcolor(abs(Eip));axis square; shading flat; colorbar;
figure;pcolor(angle(Eip));axis square; shading flat; colorbar;
Cip = fftshift(fft2(Eip)); %FFT
figure;pcolor(abs(Cip));axis square; shading flat; colorbar;
figure;pcolor(angle(Cip));axis square; shading flat; colorbar;
Short answer
Cip turns out to be real. You're just seeing sign changes between contiguous points.
Long answer
Eip is obviously real. Moreover, it has the following symmetry along the x and y axes. Take the x axis first: for any fixed y, consider Eip as a finite signal defined at x values 0, 1, ..., Nx-1. If that finite signal is extended into a periodic sequence, the periodic sequence turns out to be even. This is true for your specific definition of Eip. Together with the fact that the values are real, this implies¹ that the DFT along x is also real and has the same type of symmetry. The same holds along the y axis. The final result is that the 2D DFT Cip is real and has the mentioned x and y symmetries.
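A minimal 1D check of this property (an illustrative snippet, not part of the original code): a real sequence whose periodic extension is even has a purely real DFT.
N = 8;
n = 0:N-1;
x = double(min(n, N-n) <= 2); % x = [1 1 1 0 0 0 1 1], even under periodic extension
X = fft(x);
max(abs(imag(X))) % of order eps: the DFT is real up to roundoff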
(As a side note, Cip as obtained in your code is not real but complex. However, the imaginary part is of the order of eps, and thus negligible. You can check:
>> max(real(Cip(:)))
ans =
709
>> max(imag(Cip(:)))
ans =
4.6262e-014
So the imaginary part is only an artifact of finite numerical precision. We can do
Cip = real(Cip);
to remove that spurious imaginary part.)
Now that we know that Cip is real (up to numerical precision), it's natural that all its values have phase either 0 or pi (or equivalently -pi). That is, what you are seeing is just changes of sign between consecutive values in the DFT. To illustrate, see the following graph, which shows the variation of Cip along its 120th row, i.e., a line of constant y (just as an example):
stem(1:Nx, Cip(120,:))
¹ See Discrete-Time Signal Processing, Oppenheim et al., 2nd ed., pp. 568–570.
To avoid this, you need an additional ifftshift. fft2 assumes the origin of its input is at the first array element, whereas your aperture is centred in the middle of the array; that half-period circular shift multiplies the spectrum by alternating signs, which is exactly the pi phase jumps you see. To avoid them, use this instead:
fftshift(fft2(ifftshift(Eip)));
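As a quick check (a sketch reusing Eip from the code above): with the extra ifftshift the phase no longer alternates between adjacent samples; only the genuine sign changes of the (real) transform remain, at the rings where it crosses zero.
Cip2 = fftshift(fft2(ifftshift(Eip))); %FFT with the origin moved to element (1,1) first
figure;pcolor(angle(Cip2));axis square; shading flat; colorbar;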
Suppose I have the following data and commands:
clc;clear;
t = [0:0.1:1];
t_new = [0:0.01:1];
y = [1,2,1,3,2,2,4,5,6,1,0];
p = interp1(t,y,t_new,'spline');
plot(t,y,'o',t_new,p)
You can see this works quite well, in the sense that the interpolating function matches the data points at the nodes. But my problem is that I need to compute the exact derivative of y (i.e., of the function p) with respect to time and plot it against the t vector. How can this be done? I cannot simply use the diff command, because I need the derivative to have the same length as the t vector. Thanks a lot.
Method A: Using the derivative
This method calculates the actual derivative of the polynomial. If you have the Curve Fitting Toolbox, you can use:
% calculate the polynominal
pp = interp1(t,y,'spline','pp')
% take the first order derivative of it
pp_der=fnder(pp,1);
% evaluate the derivative at points t (or any other points you wish)
slopes=ppval(pp_der,t);
If you don't have the Curve Fitting Toolbox, you can replace the fnder line with:
% piece-wise polynomial
[breaks,coefs,l,k,d] = unmkpp(pp);
% get its derivative
pp_der = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);
Source: this MathWorks question. Thanks to m7913d for linking it.
Appendix:
Note that
p = interp1(t,y,t_new,'spline');
is a shortcut for
% get the polynomial
pp = interp1(t,y,'spline','pp');
% get the height of the polynomial at query points t_new
p=ppval(pp,t_new);
To get the derivative we obviously need the polynomial itself and can't just work with the new interpolated points. To avoid interpolating the points twice, which can take quite long for a lot of data, you should replace the shortcut with the longer version. So a fully working example that includes your code would be:
t = [0:0.1:1];
t_new = [0:0.01:1];
y = [1,2,1,3,2,2,4,5,6,1,0];
% fit a polynomial
pp = interp1(t,y,'spline','pp');
% get the height of the polynomial at query points t_new
p=ppval(pp,t_new);
% plot the new interpolated curve
plot(t,y,'o',t_new,p)
% piece-wise polynomial
[breaks,coefs,l,k,d] = unmkpp(pp);
% get its derivative
pp_der = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);
% evaluate the derivative at points t (or any other points you wish)
slopes=ppval(pp_der,t);
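To plot the derivative against the original t vector, as the question asks:
% plot the derivative at the original nodes
figure;
plot(t, slopes, 'o-')
xlabel('t'); ylabel('dp/dt');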
Method B: Using finite differences
The derivative of a continuous function is, at its core, just the difference between f(x) and f(x + h), divided by h, for an infinitesimally small h.
In MATLAB, eps is the smallest possible difference between double-precision numbers near 1. Therefore, after each point of t_new we add a second point that is eps larger, and interpolate y at the new points. The difference between each point and its +eps partner, divided by eps, then gives the derivative.
The problem is that with such small differences the precision of the computed derivatives is severely limited: the difference of two doubles near 1 is an integer multiple of eps, so the quotient can only take integer values. Therefore we use steps slightly larger than eps to allow for higher precision.
% scale factor for the step: larger steps allow more precision in the derivatives
precision = 10;
% add after each t_new a second point with +eps difference
t_eps=[t_new; t_new+eps*precision];
t_eps=t_eps(:).';
% interpolate with those points and get the differences between them
differences = diff(interp1(t,y,t_eps,'spline'));
% delete all differences which are not between t_new and t_new + eps
differences(2:2:end)=[];
% get the derivatives of each point
slopes = differences./(eps*precision);
You can of course replace t_new with t (or any other time vector you want the derivative at) if you want the derivatives at the old points.
This method is slightly inferior to Method A in your case, as it is slower and a bit less precise. But maybe it's useful to somebody else who is in a different situation.
I have a discrete function y(n), n = 1..8000, with two areas that can be approximated by almost horizontal straight lines, as shown in the image.
I'd like to find the coordinates x1, x2 of the points where these areas meet the rapidly growing part of the function. In MATLAB, y(n) is a one-dimensional vector.
If the data set is very smooth, you can use a simple derivative-magnitude detector.
If noise peaks and small false fronts are possible, you'd better use a more robust method, for example (see the sketch below):
Choose some initial value for x1 (for example, 600).
Calculate the linear regression parameters y = a*x + b for the range 1..x1.
If the slope parameter a is small enough, increase x1 and repeat the calculation.
If the slope is larger than some reasonable threshold, decrease x1 and repeat.
A binary search is a rather quick way to reach the needed x1 value.
Do the same for the x2..N range.
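A minimal sketch of that binary search (the slope threshold and the starting range are assumptions you would tune for your data):
slopeThreshold = 1e-3; % largest slope still considered "flat"
lo = 2; hi = numel(y); % candidate range for x1
while lo < hi
    mid = floor((lo + hi + 1)/2);
    c = polyfit(1:mid, y(1:mid), 1); % linear regression over 1..mid
    if abs(c(1)) <= slopeThreshold
        lo = mid; % still flat enough: push x1 to the right
    else
        hi = mid - 1; % too steep: pull x1 back
    end
end
x1 = lo;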
As mentioned by @PatronBernard, you could look at the derivative of your trace, knowing that the derivative deviates from zero once you are no longer in the flat part. This involves manually picking a threshold, which might have to be done for each new trace, and it is sensitive to noise in the trace. Here is an implementation, also showing the issues you might get with a noisy trace:
threshold = 0.2;
y1 = diff(y)/(x(2) - x(1)); % first derivative
y2 = diff(y1)/(x(2) - x(1)); % second derivative (not used below)
figure;
hold on
plot(x,y, 'b')
% first sample where the derivative exceeds the threshold
xi = find(y1>threshold,1);
line([x(xi) x(xi)], ylim, 'Color', 'r')
% the same, searching from the other end of the trace
xi = find(flip(y1)>threshold,1); % flip (rather than flipud) works for row or column vectors
line([x(length(x) - xi + 1) x(length(x) - xi + 1)], ylim, 'Color', 'r')
Alternatively, if you have genuinely noisy data I would recommend looking at IEC standard 60469:2013 "Transitions, Pulses and Related Waveforms - Terms, Definitions and Algorithms".
I'm using MATLAB's fit function:
fourier_series = fit(x,y,'fourier8');
to fit an 8th-order Fourier series to a set of discrete data (x,y). I need the period of the Fourier series to be 2*pi. However, I can't work out how to constrain this so that when I call the function it fits the series with my required period. Any suggestions would be greatly appreciated. Thanks.
Background to problem:
I am analysing video capture data of a cyclist's pedalling, which outputs a cloud of joint positions in 3D space over time. The joint positions change slightly every pedal stroke. I therefore wish to fit Fourier series to these joint positions and joint angles, as functions of crank arm angle, to find the cyclist's "average" position. The period of the Fourier series therefore needs to be constrained to 2*pi, because the "average" positions must return to the same location when the crank arm angle is zero (i.e., top dead centre, TDC) and when the crank arm angle is 2*pi (i.e., TDC after one crank arm rotation).
Currently MATLAB selects the period to be slightly greater than 2*pi, which means that when I use the Fourier series to calculate the cyclist's position, the position changes for the same crank arm angle on consecutive pedal strokes.
The best way to force the fit function to a certain period is to resort to a custom equation model, via fittype. Another option (one that will throw a warning) is to fix the lower and upper bounds of the parameter w to the same value, as sketched below.
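A sketch of that bound-fixing option (assuming the fourier8 library model orders its coefficients as a0, a1, b1, ..., a8, b8, w, with the angular frequency w last; for a period of 2*pi, fix w = 2*pi/T = 1):
fourier_series = fit(x, y, 'fourier8', ...
    'Lower', [-Inf(1,17) 1], ...
    'Upper', [ Inf(1,17) 1]);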
A cleaner solution is obtained by observing that, since you already know the period, the fitting problem is linear in the parameters, so you can resort to the linear least-squares method. I'll show an example of this approach hereafter.
%// Build a simple time series with period 2*pi.
t = (0:0.01:4*2*pi)';
y = sawtooth(t);
T = 2*pi;
%// Compute the angular speed and the azimuth.
Omega = 2*pi/T;
alpha = Omega*t;
%// Build the regressor matrix, with n harmonics.
n = 8;
A = ones(length(t), 2*n+1);
for i = 1:n
A(:,[2*i 2*i+1]) = [cos(i*alpha) sin(i*alpha)];
end
%// Solve the linear system.
%// The parameters are sorted as:
%// p = [y0 a1 b1 a2 b2 ...]'
%// where y0 is the average of y, a_i the coefficients multiplying the
%// cosines and b_i the coefficients multiplying the sines.
p = A\y;
%// Evaluate the Fourier series.
yApprox = A*p;
%// Compare the solution with the data.
figure();
hold on;
plot(t, y, 'b');
plot(t, yApprox, 'r');
I have a matrix z(x,y).
It is an NxN arbitrary pdf constructed from a kernel density estimation (i.e., not a usual pdf with a known functional form); it is multivariate, non-separable, and given as discrete data.
I want to construct an NxN matrix F(x,y) that is the cumulative distribution function in two dimensions of this pdf, so that I can then randomly sample from it, with F(x,y) = P(X <= x, Y <= y).
Analytically, I think the CDF of a multivariate distribution is the double integral of the pdf.
What I have tried is using the cumsum function to calculate this integral, and I tested it with a multivariate normal against the analytical solution; there seems to be some discrepancy between the two:
% multivariate parameters
delta = 100;
mu = [1 1];
Sigma = [0.25 .3; .3 1];
x1 = linspace(-2,4,delta); x2 = linspace(-2,4,delta);
[X1,X2] = meshgrid(x1,x2);
% Calculate Normal multivariate pdf
F = mvnpdf([X1(:) X2(:)],mu,Sigma);
F = reshape(F,length(x2),length(x1));
% My attempt at a numerical surface integral
FN = cumsum(cumsum(F,1),2);
% Normalise the CDF
FN = FN./max(max(FN));
X = [X1(:) X2(:)];
% Analytic solution to a multivariate normal pdf
p = mvncdf(X,mu,Sigma);
p = reshape(p,delta,delta);
% Highlight the difference
dif = p - FN;
err = max(abs(dif(:))); % maximum absolute difference (avoids shadowing error())
%% Plot
figure(1)
surf(x1,x2,F);
caxis([min(F(:))-.5*range(F(:)),max(F(:))]);
xlabel('x1'); ylabel('x2'); zlabel('Probability Density');
figure(2)
surf(X1,X2,FN);
xlabel('x1'); ylabel('x2');
figure(3);
surf(X1,X2,p);
xlabel('x1'); ylabel('x2');
figure(5)
surf(X1,X2,dif)
xlabel('x1'); ylabel('x2');
In particular, the error seems to be in the transition region, which is the most important part.
Does anyone have a better solution to this problem, or see what I'm doing wrong?
Any help would be much appreciated!
EDIT: This is the desired outcome of the cumulative integration. The reason this function is of value to me is that when you randomly generate samples from it on the closed interval [0,1], the more heavily weighted (i.e., more likely) values appear more often; in this way the samples converge on the expected value(s) (or values, in the case of multiple peaks). This is the desired behaviour for algorithms such as particle filters, neural networks, etc.
Think of the 1-dimensional case first. You have a function represented by a vector F and want to numerically integrate. cumsum(F) will do that, but it uses a poor form of numerical integration. Namely, it treats F as a step function. You could instead do a more accurate numerical integration using the Trapezoidal rule or Simpson's rule.
The 2-dimensional case is no different. Your use of cumsum(cumsum(F,1),2) is again treating F as a step function, and the numerical errors resulting from that assumption only get worse as the number of dimensions of integration increases. There exist 2-dimensional analogues of the Trapezoidal rule and Simpson's rule. Since there's a bit too much math to repeat here, take a look here:
http://onestopgate.com/gate-study-material/mathematics/numerical-analysis/numerical-integration/2d-trapezoidal.asp.
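As a sketch, on the grid from your code (reusing x1, x2 and F from the question), the trapezoidal version of the cumulative integral can be assembled with MATLAB's cumtrapz:
% trapezoidal cumulative integral: along x1 (columns), then x2 (rows)
FN = cumtrapz(x2, cumtrapz(x1, F, 2), 1);
% normalise so the CDF reaches 1 at the upper corner of the grid
FN = FN./FN(end,end);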
You DO NOT need to compute the 2-dimensional integral of the probability density function in order to sample from the distribution. If you are computing the 2-d integral, you are going about the problem incorrectly.
Here are two ways to approach the sampling problem.
(1) You write that you have a kernel density estimate. A kernel density estimate is a special case of a mixture density. Any mixture density can be sampled by first selecting one kernel (perhaps differently or equally weighted, same procedure applies), and then sampling from that kernel. (That applies in any number of dimensions.) Typically the kernels are some relatively simple distribution such as a Gaussian distribution so that it is easy to sample from it.
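A sketch of idea (1) for a Gaussian kernel estimate (the names centres, h and nSamples are assumptions: centres is a K-by-2 matrix of kernel centres, h the common bandwidth):
nSamples = 1000;
K = size(centres, 1);
idx = randi(K, nSamples, 1); % pick one (equally weighted) kernel per draw
samples = centres(idx, :) + h*randn(nSamples, 2); % then sample from that Gaussian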
(2) Any joint density P(X, Y) is equal to P(X | Y) P(Y) (and equivalently P(Y | X) P(X)). Therefore you can sample from P(Y) (or P(X)) and then from P(X | Y). In order to sample from P(X | Y), you will need to integrate P(X, Y) along a line Y = y (where y is the sampled value of Y), but (this is crucial) you only need to integrate along that line; you don't need to integrate over all values of X and Y.
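A sketch of idea (2) on a gridded density, reusing F, x1 and x2 from the question's code (rows indexed by x2, columns by x1):
pY = sum(F, 2); % unnormalised marginal of y (sum over x)
cY = cumsum(pY)/sum(pY); % its 1-D CDF
iy = find(cY >= rand, 1); % inverse-CDF draw of the y index
pXgY = F(iy, :); % the conditional P(x | y): a single row
cX = cumsum(pXgY)/sum(pXgY); % CDF along that one line only
ix = find(cX >= rand, 1);
sample = [x1(ix), x2(iy)]; % the drawn (x, y) pair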
If you tell us more about your problem, I can help with the details.
I know the fundamental frequency of my signal, and therefore I also know the frequencies of the harmonics. I have used the fft command to compute the first 5 harmonics (for which I know the frequencies). Is it possible to find their phases from this available information?
Please note I can't be sure my signal spans exactly one period, and therefore I need to calculate the phase via the known frequency values.
Code seems to be working:
L = length(te(1,:)); % Length of signal
x = te(1,:);
NFFT = 2^nextpow2(L); % Next power of 2 from length of x
Y = fft(x,NFFT)/L;
f = linspace(1,5,5);
Y(1) = []; % Remove the DC component (Y(1) is the sum of all samples, not a harmonic)
figure(1);
bar(f,2*abs(Y(1:5)), 'red')
title('Transmission Error Harmonics')
xlabel('Harmonic')
ylabel('|Y(f)|')
figure(2);
bar(f,(angle(Y(1:5))))
title('Transmission Error Phase')
xlabel('Harmonic')
ylabel('Angle (radians)')
Note that if your fundamental frequency is not exactly integer-periodic in the FFT length, the resulting phase (atan2(xi, xr)) will flip sign between adjacent bins, due to the discontinuity between the ends of the FFT window (or, equivalently, due to the convolution with the rectangular window's transform), which makes phase interpolation awkward. So you may want to re-reference the FFT phase estimate to the center of the data window by doing an fftshift (pre-FFT, by shifting/rotating the elements, or post-FFT, by flipping the signs of alternate bins in the result), which makes the interpolated phase look more reasonable.
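To see the pre/post equivalence (a small sketch; for even N, rotating the data by N/2 before the FFT equals flipping the sign of every other bin afterwards):
N = 16;
x = randn(1, N);
Ypre = fft(ifftshift(x)); % pre: rotate the window centre to index 1
Ypost = fft(x) .* (-1).^(0:N-1); % post: alternate sign flips
max(abs(Ypre - Ypost)) % of order eps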
In general your Fourier transform is complex-valued. So if you want to know the phase at a certain frequency, you calculate atan2(ImaginaryPart(sample), RealPart(sample)) (a plain tangent loses the quadrant). This is done by using angle().
In your case you:
1. calculate fft()
2. calculate angle() for all samples of the FFT, or only for the samples you are interested in (i.e., the bin at your fundamental frequency/harmonics)
EDIT: an example would be
t = [0 0 0 1 0 0 0];
f = fft(t);
phase = angle(f);
phase = angle(f(3)); % If you are interested in the phase of only one frequency
EDIT2: You should not mix up a real-valued spectrum (which is basically abs(fft())) with the complex Fourier transform (which is fft() itself). But as you wrote that you calculated the FFT yourself, I guess you have the original FFT with the complex numbers.