My Fourier series doesn't fit the graph - MATLAB

I'm trying to plot a Fourier series that should fit the original graph (which is right), but I don't know what's wrong. I also double-checked the Fourier approximation.
The original graph is generated with:
t=-pi:0.01:0;
x=ones(size(t));
plot(t,x)
axis([-3*pi 3*pi -1 4])
hold on
t=0:0.01:pi;
y=cos(t);
plot(t,y)
whereas the Fourier series is generated with:
t=-pi:0.01:pi;
f=1/2;
for n=1:5
costerm=0;
if n/2== round(n/2)
sinterm=((-2*n)/(pi*(1-n^2)))*sin(2*n*t);
else
sinterm= (-2/(pi*n))*sin(2*n*t);
end
f=f+sinterm+costerm;
end
plot(t,f)
The graph looks like this:
Can someone tell me why this isn't working?

The first thing to notice is that the series in your plot runs for two periods over the support interval [-pi, pi]. This points to an incorrect constant in your sin(2*n*t) argument, which should instead be sin(n*t).
Also, as a general rule:
odd functions have only sin terms;
even functions have only cos terms;
otherwise, the Fourier series contains a mixture of sin and cos terms.
In your case the function is neither even nor odd, so you should expect both sin and cos terms to be present. However, you are only computing sinterm and leaving costerm=0. More specifically, while the cosine-series coefficients evaluate to 0 for all n > 1, you are missing the term for n = 1, which is 0.5*cos(t).
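If you want to double-check where these coefficients come from, here is a quick numerical sanity check (a sketch only, assuming the target function is the one you plot: 1 on (-pi, 0) and cos(t) on (0, pi)):
f  = @(t) (t<0) + (t>=0).*cos(t);                   % the piecewise target function
a0 = (1/pi)*integral(f, -pi, pi)                    % -> 1, so the constant term is a0/2 = 1/2
a1 = (1/pi)*integral(@(t) f(t).*cos(t), -pi, pi)    % -> 0.5, the missing 0.5*cos(t) term
b2 = (1/pi)*integral(@(t) f(t).*sin(2*t), -pi, pi)  % -> -2*2/(pi*(1-2^2)) = 0.4244, an even-n sin coefficient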
With these corrections you should get
f = 1/2 + 0.5*cos(t);                              % a0/2 plus the single non-zero cos term (n = 1)
for n = 1:5
    if mod(n,2) == 0                               % even n
        sinterm = ((-2*n)/(pi*(1-n^2)))*sin(n*t);
    else                                           % odd n
        sinterm = (-2/(pi*n))*sin(n*t);
    end
    f = f + sinterm;
end
which should give you the following plot (blue line being the original function, and the red line being the Fourier series expansion):
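For completeness, here is a single self-contained sketch that just combines your two snippets with the corrected series, so both curves end up in one figure:
t = -pi:0.01:pi;
x = ones(size(t));                 % original function: 1 on [-pi, 0) ...
x(t>=0) = cos(t(t>=0));            % ... and cos(t) on [0, pi]
f = 1/2 + 0.5*cos(t);              % constant term plus the n = 1 cos term
for n = 1:5
    if mod(n,2) == 0
        f = f + ((-2*n)/(pi*(1-n^2)))*sin(n*t);
    else
        f = f + (-2/(pi*n))*sin(n*t);
    end
end
plot(t, x, 'b', t, f, 'r');
axis([-3*pi 3*pi -1 4])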

Related

MATLAB: polyval function for N greater than 1

I am trying to graph the polynomial fit of a 2D dataset in MATLAB.
This is what I tried:
rawTable = readtable('Test_data.xlsx','Sheet','Sheet1');
x = rawTable.A;
y = rawTable.B;
figure(1)
scatter(x,y)
c = polyfit(x,y,2);
y_fitted = polyval(c,x);
hold on
plot(x,y_fitted,'r','LineWidth',2)
rawTable.A and rawTable.B are randomly generated numbers (i.e. the x dataset cannot be represented in the form x=0:0.1:100).
The result:
(figure: second-order polynomial fit)
But the result I expect looks like this (generated in Excel):
How can I graph the second-order polynomial fit in MATLAB?
I sense some confusion regarding what the output of each of those MATLAB functions means, so I'll clarify. I think some details are needed as well, so expect some verbosity. A quick answer, however, is available at the end.
c = polyfit(x,y,2) gives the coefficient vector of the polynomial fit. You can get fit information such as error estimates by following the documentation.
Name this polynomial P. In MATLAB, P is actually the function P=@(x)c(1)*x.^2+c(2)*x+c(3).
Suppose you have a single point X, then polyval(c,X) outputs the value of P(X). And if x is a vector, polyval(c,x) is a vector corresponding to [P(x(1)), P(x(2)),...].
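As a minimal sketch of this with made-up data (the quadratic and the noise below are purely illustrative):
x = 10*rand(50,1);                        % irregular, unsorted sample points
y = 3*x.^2 - 2*x + 1 + randn(50,1);       % noisy quadratic, just for the example
c = polyfit(x, y, 2);                     % c(1), c(2), c(3) should come out near 3, -2, 1
polyval(c, 2.5)                           % P evaluated at a single point
polyval(c, x(1:3))                        % P evaluated elementwise on a vector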
Now that alone does not show you what the fit looks like. Just as a quick hack to see something visually, you can try plot(sort(x),polyval(c,sort(x)),'r','LineWidth',2), i.e. first sort your data and plot on those x-values.
However, it is only a hack, because a) your data set may be so irregularly spaced that the connected line doesn't represent the function well, or b) evaluating on the whole of your data set is unnecessary and inefficient.
The robust and 'standard' way to plot a 2D function of known analytical form in Matlab is as follows:
Define some evenly-spaced x-values over the interval on which you want to plot the function. For example, x=1:0.1:10 or x=linspace(0,1,100).
Evaluate the function on these x-values
Pass the above two components to plot(). plot() can either show the function as sampled points, or connect the points with straight line segments, which is the default (a short sketch of these steps follows after this list).
(If you want a single word for step 1, 'sampling' or 'discretizing' the interval is the usual way to describe it.)
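As a minimal illustration of those three steps on a known function (sin over [0, 2*pi], chosen purely for the example):
xs = linspace(0, 2*pi, 200);                           % step 1: evenly spaced x-values
ys = sin(xs);                                          % step 2: evaluate the function there
plot(xs, ys, 'b-', xs(1:20:end), ys(1:20:end), 'ko');  % step 3: line through the samples, plus a few marked points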
So, instead of using the x in your original data set, you should do something like:
t=linspace(min(x),max(x),100);
plot(t,polyval(c,t),'r','LineWidth',2)

matlab: cdfplot of relative error

The figure shown above is the cumulative distribution function (CDF) plot of the relative error (the code used to generate the plot is attached below). The relative error is defined as abs(measured-predicted)/(measured). Can you tell me the possible error, or how to interpret this, as the plot is supposed to be a smooth curve?
X = load('measured.txt');
Xhat = load('predicted.txt');
idx = find(X>0);
x = X(idx);
xhat = Xhat(idx);
relativeError = abs(x-xhat)./(x);
cdfplot(relativeError);
The input data file is a 4x4 matrix with zeros on the diagonal and some unmeasured entries (represented with 0). I appreciate your kind help. Thanks!
The plot should be a discontinuous one because you are using discrete data. You are not plotting an analytic function that has an explicit (or implicit) formula mapping, say, x to y. Instead, all you have is at most 16 points relating x and y.
The CDF only "grows" where a new sample is counted; in between, its value stays flat, simply because there is no observation there that could increase the cumulative frequency.
You can check the example in MathWorks' cdfplot documentation to understand the concept of an "empirical CDF". Again, only where you observe a sample does the CDF increase.
If you really want to "get" a smooth curve, either 1) add more points so that the discontinuous line looks smoother, or 2) find any statistical model of whatever you are working on, and plot the analytic function instead.
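To see the effect of sample size, here is a small sketch with made-up uniform data (the numbers are only illustrative):
small = rand(1, 12);                  % roughly the number of usable off-diagonal entries in a 4x4 matrix
large = rand(1, 1e4);                 % many samples from the same distribution
subplot(2,1,1); cdfplot(small); title('12 samples: clearly a staircase');
subplot(2,1,2); cdfplot(large); title('10000 samples: visually smooth');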

Fit sine wave with a distorted time-base

I want to know the best way to fit a sine-wave with a distorted time base, in Matlab.
The distortion in time is given by an n-th order polynomial (n ~ 10), of the form t_distort = P(t).
For example, consider the distortion t_distort = 8 + 12t + 6t^2 + t^3 (which is just the expansion of (t+2)^3).
This will distort a sine-wave as follows:
I want to be able to find the distortion given this distorted sine-wave. (i.e. I want to find the function t = G(t_distort), but t_distort = P(t) is unknown.)
If your resolution is high enough, then this is basically an angle-demodulation problem. The standard way to demodulate an angle-modulated signal is to take the derivative, followed by an envelope detector, followed by an integrator.
Since I don't know the exact numbers you're using, I'll show an example with my own numbers. Suppose my original timebase has 10 million points from 0 to 100:
t = 0:0.00001:100;
I then get the distorted timebase and calculate the distorted sine wave:
td = 0.02*(t+2).^3;
yd = sin(td);
Now I can demodulate it. Take the "derivative" using approximate differences divided by the step size from before:
ydot = diff(yd)/0.00001;
The envelope can be easily detected:
envelope = abs(hilbert(ydot));
This gives an approximation for the derivative of P(t). The last step is an integrator, which I can approximate using a cumulative sum (we have to scale it again by the step size):
tdguess = cumsum(envelope)*0.00001;
This gives a curve that's almost identical to the original distorted timebase (so, it gives a good approximation of P(t)):
You won't be able to get the constant term of the polynomial since we made our approximation from its derivative, which of course eliminates the constant term. You wouldn't even be able to find a unique constant term mathematically from just yd, since infinitely many values will yield the same distorted sine wave. You can get the other three coefficients of P(t) using polyfit if you know the degree of P(t) (ignore the last number, it's the constant term):
>> polyfit(t(1:10000000), tdguess, 3)
ans =
0.0200 0.1201 0.2358 0.4915
This is pretty close to the original, aside from the constant term: 0.02*(t+2)^3 = 0.02t^3 + 0.12t^2 + 0.24t + 0.16.
You wanted the inverse mapping t = G(t_distort). Can you do that knowing a close approximation for P(t), as found so far?
Here's an analytically driven route that takes the asin of the signal with proper unwrapping of the angle. You can then fit a polynomial to the angle using polyfit (or other fit methods; search for fit and see). Last, take the sin of the fitted function and compare the signal to the fitted one... see this pedagogical example:
% generate data
t=linspace(0,10,1e2);
x=0.02*(t+2).^3;
y=sin(x);
% square the asin to locate the points of "discontinuity", i.e. where y hits +-1 and asin(y) reaches +-pi/2
da=(asin(y).^2);
[val locs]=findpeaks(da); % this can be done in many other ways too...
% construct the asin according to the proper phase unwrapping
an=NaN(size(y));
an(1:locs(1)-1)=asin(y(1:locs(1)-1));
for n=2:numel(locs)
an(locs(n-1)+1:locs(n)-1)=(n-1)*pi+(-1)^(n-1)*asin(y(locs(n-1)+1:locs(n)-1));
end
an(locs(n)+1:end)=n*pi+(-1)^(n)*asin(y(locs(n)+1:end));
r=~isnan(an);
p=polyfit(t(r),an(r),3);
figure;
subplot(2,1,1); plot(t,y,'.',t,sin(polyval(p,t)),'r-');
subplot(2,1,2); plot(t,x,'.',t,(polyval(p,t)),'r-');
title(['mean error ' num2str(mean(abs(x-polyval(p,t))))]);
p =
0.0200 0.1200 0.2400 0.1600
The reason I preallocate with NaN and avoid taking the asin at the points of discontinuity (locs) is to reduce the error of the fit later. As you can see, for 100 points between 0 and 10 the average error is of the order of floating-point accuracy, and the polynomial coefficients are as exact as you can get them.
The fact that you are not taking a derivative (as in the very elegant Hilbert-transform answer) allows the result to be numerically exact. Under the same conditions the Hilbert-transform solution has a much bigger average error (of order unity vs 1e-15).
The only limitation of this method is that you need enough points in the regime where the asin flips direction, and that the function inside the sin is well behaved. If there's a sampling issue you can truncate the data and keep only a smaller range closer to zero, enough to characterize the function inside the sin. After all, you don't need millions of points to fit a 3-parameter function.

Matlab plot function defined on a complex coordinate

I would like to plot some figures like this one:
-axes being the real and imaginary parts of some complex-valued vector (usually either purely real or purely imaginary)
-have some 3D visualization like in the given case
First, define your complex function as a function of (Re(x), Im(x)). In complex analysis, you can decompose any complex function into its real parts and imaginary parts. In other words:
F(x) = Re(x) + i*Im(x)
In the case of a two-dimensional grid, you can obviously extend to defining the function in terms of (x,y). In other words:
F(x,y) = Re(x,y) + i*Im(x,y)
In your case, I'm assuming you'd want the 2D approach. As such, let's use I and J to represent the real parts and imaginary parts separately. Also, let's start off with a simple example, like cos(x) + i*sin(y), which is loosely based on the very popular Euler exponential formula. It isn't exact; I modified it slightly so that the plot looks nice.
Here are the steps you would do in MATLAB:
Define your function in terms of I and J
Make a set of points in both domains - something like meshgrid will work
Use a 3D visualization plot - You can plot the individual points, or plot it on a surface (like surf, or mesh).
NB: Because this is a complex-valued function, we have to decide what quantity to show. You were pretty ambiguous with your details, so let's assume we are plotting the magnitude of the output.
Let's do this in code line by line:
% // Step #1
F = @(I,J) cos(I) + i*sin(J);
% // Step #2
[I,J] = meshgrid(-4:0.01:4, -4:0.01:4);
% // Step #3
K = F(I,J);
% // Let's make it look nice!
mesh(I,J,abs(K));
xlabel('Real');
ylabel('Imaginary');
zlabel('Magnitude');
colorbar;
This is the resultant plot that you get:
Let's step through this code slowly. Step #1 is an anonymous function that is defined in terms of I and J. Step #2 defines I and J as matrices where each location in I and J gives you the real and imaginary co-ordinates at their matching spatial locations to be evaluated in the complex function. I have defined both of the domains to be between [-4,4]. The first parameter spans the real axis while the second parameter spans the imaginary axis. Obviously change the limits as you see fit. Make sure the step size is small enough so that the plot is smooth. Step #3 will take each complex value and evaluate what the resultant is. After, you create a 3D mesh plot that will plot the real and imaginary axis in the first two dimensions and the magnitude of the complex number in the third dimension. abs() takes the absolute value in MATLAB. If the contents within the matrix are real, then it simply returns the positive of the number. If the contents within the matrix are complex, then it returns the magnitude / length of the complex value.
I have labeled the axes and placed a colorbar on the side to visualize the heights of the surface plot as colours. It also gives you an idea of how high and how low the values are in a more pleasing, visual way.
As a gentle push in your direction, let's take a slice out of this complex function. Let's make the real component equal to 0, while the imaginary components span between [-4,4]. Instead of using mesh or surf, you can use plot3 to plot your points. As such, try something like this:
F = @(I,J) cos(I) + i*sin(J);
J = -4:0.01:4;
I = zeros(1,length(J));
K = F(I,J);
plot3(I, J, abs(K));
xlabel('Real');
ylabel('Imaginary');
zlabel('Magnitude');
grid;
plot3 does not provide a grid by default, which is why the grid command is there. This is what I get:
As expected, if the function is purely imaginary, there should only be a sinusoidal contribution (i*sin(y)).
You can play around with this and add more traces if you need to.
Hope this helps!

Time Series from spectrum

I am having a small problem while converting a spectrum to a time series. I have read many articles and I think I am applying the right procedure, but I do not get the right results. Could you help me find the error?
I have a time series like:
When I compute the spectrum I do:
%number of points
nPoints=length(timeSeries);
%time interval
dt=time(2)-time(1);
%Fast Fourier transform
p=abs(fft(timeSeries))./(nPoints/2);
%power of positive frequencies
spectrum=p(1:(nPoints/2)).^2;
%frequency resolution (tDur is the total duration of the record, defined elsewhere)
dfFFT=1/tDur;
frequency=(1:nPoints)*dfFFT;
frequency=frequency(1:(nPoints)/2);
%plot spectrum
semilogy(frequency,spectrum); grid on;
xlabel('Frequency [Hz]');
ylabel('Power Spectrum [N*m]^2/[Hz]');
title('SPD load signal');
And I obtain:
I think the spectrum is well computed. However now I need to go back and obtain a time series from this spectrum and I do:
df=frequency(2)-frequency(1);
ap = sqrt(2.*spectrum*df)';
%random number form -pi to pi
epsilon=-pi + 2*pi*rand(1,length(ap));
%transform to time series
randomSeries=length(time).*real(ifft(pad(ap.*exp(epsilon.*i.*2.*pi),length(time))));
%Add the mean value
randomSeries=randomSeries+mean(timeSeries);
However, the plot looks like:
It is one order of magnitude lower than the original series.
Any recommendation?
There are (at least) two things going on here. The first is that you are throwing away information, and then substituting random numbers for that information.
The FFT of a real sequence is a sequence of complex numbers, each with a real and an imaginary part. Converting those numbers to polar form gives you magnitude and phase angle. You are capturing the magnitude part with p=abs(fft(...)), but you are not capturing the phase angle (which would involve atan2(...)). You are then making up random numbers (epsilon=...) and using those in place of the original phases when you reconstruct your time series. Also, the FFT of a real sequence has a particular symmetry, and substituting random numbers for the phase angle destroys that symmetry, which means that the IFFT will in general no longer be a real sequence but a sequence of complex numbers - and since you're only looking at the real portion of the IFFT, you're throwing away information again. If this is an audio signal, the result may sound somewhat like the original (or it may be completely different), but the waveform definitely won't match...
The second issue is that in many implementations, ifft(fft(...)) will scale the result by the number of points in the signal. There are several ways to handle that, with differing results that may be more or less attractive depending on what you are trying to do. You can either scale the fft() result before you do the ifft(), or scale the ifft() result at the end; in some cases I've even seen both being scaled by a factor of sqrt(N) - splitting it that way still scales the final result by N overall, but it is a bit less efficient since you do the scaling twice...
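As a minimal sketch of a lossless round trip (illustrative only; note that MATLAB's ifft already applies the 1/N factor, so fft followed by ifft needs no extra scaling):
x     = randn(1, 64);                   % some example real signal
X     = fft(x);
mag   = abs(X);                         % magnitude (what your code keeps)
ph    = angle(X);                       % phase angle (what your code replaces with random numbers)
x_rec = real(ifft(mag .* exp(1i*ph)));  % magnitude + phase together reconstruct the signal
max(abs(x - x_rec))                     % ~1e-15: exact up to round-off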