Why does MATLAB plot the cosine in an abnormal manner?

I wrote the code below in MATLAB to plot the cosine function as the numerical derivative of the sine function, but the output plot is not what I expected:
clear;
clc;
close all;
delta = 1e-15;
t = linspace(0, 20, 1000);
y_derived = (sin(t + delta) - sin(t)) / delta;
y_expected = cos(t);
hold on
plot(y_derived)
plot(y_expected)
legend('y_{derived}', 'y_{expected}')
grid on
The output plot looks like a staircase instead of a smooth cosine. Can anyone help me understand what is happening?

MATLAB plots exactly what you tell it to plot. The issue is in the way you compute the derivative: your finite-difference quotient uses delta = 1e-15, which is very close to the machine precision eps = 2.2e-16, so the numerator is dominated by rounding errors. In fact, the staircase shape nicely shows the discrete nature of the floating-point numbers you're using. Set, e.g., delta = 1e-6 and it will probably look a lot better.
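As a minimal sketch of that fix (the value delta = 1e-6 is the answer's suggestion; plotting against t rather than the sample index is my own choice so both curves share the same axis):
delta = 1e-6;                                     % large enough to avoid round-off, small enough for accuracy
t = linspace(0, 20, 1000);
y_derived = (sin(t + delta) - sin(t)) / delta;    % forward-difference approximation of d/dt sin(t)
y_expected = cos(t);
hold on
plot(t, y_derived)
plot(t, y_expected)
legend('y_{derived}', 'y_{expected}')
grid on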

Related

Struggling with plotting the solution of ODE having step function using MATLAB

I am trying to use MATLAB to solve an ODE involving a step function: a damped oscillator y'' + 0.1 y' + y = f(t), where f(t) is a square wave built from Heaviside functions.
Here is my code to solve the ODE:
syms t s Y y(t) Dy(t) u(t) f(t) k
Dy = diff(y,t)
D2y = diff(Dy,t)
f = heaviside(t) + 2*symsum((-1)^k*heaviside(t - k*pi), k, 1, Inf)
eqn = D2y + 0.1*Dy + y == f
leqn = laplace(eqn, t, s)
LT_Y = subs(leqn, laplace(y,t,s), Y);
LT_Y = subs(LT_Y, y(0), 0);
LT_Y = subs(LT_Y, subs(diff(y(t), t), t, 0), 0);
Y = solve(LT_Y, Y);
y = ilaplace(Y, s, t)
However, the result looks strange and I cannot plot the solution. Could anyone tell me what I should do to my code so that I can plot the solution? Thank you so much.
The decay rate of the friction term is 0.1/2 = 0.05, so you need about t = 40 for the transient to decay to roughly 0.1 of its initial size. In a standard plot, features down to about 1% of the plot size are visible, which would require a time span of t = 80, or longer if you want visual confirmation that the amplitude has stabilized at that level.
The right side is approximately in resonance with the left one, so the steady-state solution will look roughly like -10*cos(t) plus higher-frequency components.
Just for plotting, a numerical solution is sufficient:
opts = odeset("AbsTol",1e-6,"RelTol",1e-9);
% y(1) = y, y(2) = y'; the square wave f(t) matches the Heaviside series on the right-hand side
[T,Y] = ode45(@(t,y)[y(2); f(t) - 0.1*y(2) - y(1)], [0, 100], [0;0], opts);
plot(T, Y(:,1), T, -13*cos(T));
function z = f(t)
    z = sign(sin(t));
end
This gives the plot below: blue for the solution and green for the reference (guessed amplitude).
Computing the Fourier series for the right side, the amplitude of the cosine component of the steady-state solution is 40/pi = 12.732.
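To spell out that estimate (a worked step I am adding, using the standard Fourier series of the unit square wave): the fundamental component of sign(sin t) has amplitude 4/pi, and at resonance (omega = 1) the damping term 0.1*y' limits the steady-state response to that amplitude divided by 0.1:
$$
f(t) = \operatorname{sign}(\sin t) = \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big((2k+1)t\big)}{2k+1},
\qquad
A_{\text{steady}} \approx \frac{4/\pi}{0.1} = \frac{40}{\pi} \approx 12.73 .
$$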

Why does the convolution obtained from MATLAB differ from what is obtained theoretically?

The theoretical result for the convolution of an exponential function with a sinusoidal function is shown below.
When I plot that function directly using MATLAB, I get this.
However, MATLAB's conv command yields this.
The two plots look similar, but they are not the same; note the scales. The MATLAB result is ten times the theoretical result. Why?
Here is the MATLAB code:
clc;
clear all;
close all;
t = 0:0.1:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1,x2);
x_theory = 0.5.*(exp(-t) + sin(t) - cos(t));
figure(1)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x(1:length(t)))
figure(2)
subplot(311), plot(t, x1(1:length(t)))
subplot(312), plot(t, x2(1:length(t)))
subplot(313), plot(t, x_theory)
conv performs discrete-time convolution; it does not evaluate the continuous convolution integral. Numerically, this basically means multiplying and summing the two signals many times, once for each output point, with a small shift of one of the signals.
If you think about this, you will realize that the sampling of the signals has an effect: if you have a point every 0.1 units versus every 0.001 units, the number of points you multiply and sum is different, and thus the result differs in value (though not in shape).
Therefore, every time you approximate a continuous convolution numerically, you need to multiply by the sampling interval (the time step) to "normalize" the operation.
Just change your code to:
dt = 0.1;             % sampling interval (the step of t)
t = 0:dt:50;
x = conv(x1, x2)*dt;  % scale the discrete convolution by the time step
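For completeness, a minimal self-contained sketch (my own, not from the original answer) that overlays the scaled conv output on the closed-form result x_theory from the question:
dt = 0.1;
t = 0:dt:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1, x2)*dt;                        % discrete convolution scaled by the time step
x_theory = 0.5*(exp(-t) + sin(t) - cos(t)); % theoretical convolution
plot(t, x(1:length(t)), t, x_theory, '--')
legend('conv result (scaled)', 'theoretical')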

Plotting of complex exponential function using Matlab

I am using MATLAB to plot a complex exponential function, but I am not getting the required waveform as output.
My signal is exp(j2πmf), where m takes various positive values.
My code is shown below.
close all;
clc;
f= -0.5:0.5;
Rez = cos(2*pi*1*f);
Imz = sin(2*pi*1*f)*j;
z = Rez + Imz;
z_n = exp(z);
plot(f,z_n);
xlabel('Frequency ->');
ylabel('Amplitude->');
grid on
axis tight
My Output Signal
But I want the signal shown below as my output
First of all, you try to make a plot of the complex numbers z_n.
This does not make sense.
You can plot the real part (real(z_n)), the imaginary part (imag(z_n)), or the absolute value (abs(z_n)).
However, your expectations in the second diagram are wrong too.
Your function exp(j2πmf) is a rotating vector with absolute amplitude 1.
This results in:
real(exp(j2πmf)) = cos(2πmf)
imag(exp(j2πmf)) = sin(2πmf); by the way, imag does not include j, it is the value that has j as a factor.
abs(exp(j2πmf)) = 1
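As an illustration (my own sketch, not part of the original answer), the following plots the real part, imaginary part, and magnitude of exp(j2πmf) over a densely sampled frequency axis; the step of 0.001 and m = 1 are arbitrary choices:
m = 1;
f = -0.5:0.001:0.5;              % dense frequency grid
z_n = exp(1j*2*pi*m*f);          % complex exponential
plot(f, real(z_n), f, imag(z_n), f, abs(z_n))
legend('real(z_n)', 'imag(z_n)', 'abs(z_n)')
xlabel('Frequency ->'); ylabel('Amplitude ->')
grid on; axis tight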

Analytical Fourier transform vs FFT of functions in Matlab

I have adapted the code in Comparing FFT of Function to Analytical FT Solution in Matlab for this question. I am trying to compute FFTs and compare the results with the analytical expressions in the Wikipedia tables.
My code is:
a = 1.223;
fs = 1e5; %sampling frequency
dt = 1/fs;
t = 0:dt:30-dt; %time vector
L = length(t); % no. sample points
t = t - 0.5*max(t); %center around t=0
y = ; % original function in time
Y = dt*fftshift(abs(fft(y))); %numerical soln
freq = (-L/2:L/2-1)*fs/L; %freq vector
w = 2*pi*freq; % angular freq
F = ; %analytical solution
figure; subplot(1,2,1); hold on
plot(w,real(Y),'.')
plot(w,real(F),'-')
xlabel('Frequency, w')
title('real')
legend('numerical','analytic')
xlim([-5,5])
subplot(1,2,2); hold on;
plot(w,imag(Y),'.')
plot(w,imag(F),'-')
xlabel('Frequency, w')
title('imag')
legend('numerical','analytic')
xlim([-5,5])
If I study the Gaussian function and let
y = exp(-a*t.^2); % original function in time
F = exp(-w.^2/(4*a))*sqrt(pi/a); %analytical solution
in the above code, it looks like there is good agreement when the real and imaginary parts of the function are plotted:
But if I study a decaying exponential multiplied with a Heaviside function:
H = #(x)1*(x>0); % Heaviside function
y = exp(-a*t).*H(t);
F = 1./(a+1j*w); %analytical solution
then the numerical and analytical results clearly disagree.
Why is there a discrepancy? I suspect it's related to the line defining Y, but I'm not sure why or how.
Edit: I changed the ifftshift to fftshift in the line Y = dt*fftshift(abs(fft(y))); and then also removed the abs. The second graph now looks like this:
What is the mathematical reason behind the 'mirrored' graph and how can I remove it?
The plots at the bottom of the question are not mirrored. If you plot them using lines instead of dots, you'll see that the numerical result oscillates at very high frequency. The magnitude matches, but the phase doesn't. When this happens, it's almost certainly a case of a shift in the time domain.
And indeed, you define the time domain function with the origin in the middle. The FFT expects the origin to be at the first (leftmost) sample. This is what ifftshift is for:
Y = dt*fftshift(fft(ifftshift(y)));
ifftshift moves the origin to the first sample, in preparation for the fft call, and fftshift moves the origin of the result to the middle, for display.
Edit
Your t does not have a 0:
>> t(L/2+(-1:2))
ans =
-1.5000e-05 -5.0000e-06 5.0000e-06 1.5000e-05
The sample at t(floor(L/2)+1) needs to be 0. That is the sample that ifftshift moves to the leftmost sample. (I use floor there in case L is odd in size, not the case here.)
To generate a correct t do as follows:
fs = 1e5; % sampling frequency
L = 30 * fs;
t = -floor(L/2):floor((L-1)/2);
t = t / fs;
I first generate an integer t axis of the right length, with 0 at the correct location (t(floor(L/2)+1)==0). Then I convert that to seconds by dividing by the sampling frequency.
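A quick sanity check (my own addition) that this t puts the origin where ifftshift expects it:
assert(t(floor(L/2)+1) == 0)   % this is the sample that ifftshift moves to the first position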
With this t, the Y as I suggest above, and the rest of your code as-is, I see this for the Gaussian example:
>> max(abs(F-Y))
ans = 4.5254e-16
For the other function I see larger differences, on the order of 6e-6. This is due to the inability to properly sample the Heaviside function: your sampled function needs a value at t=0, but the ideal step function doesn't have a well-defined value there. I noticed that the real component has an offset of similar magnitude, which is caused by the sample at t=0.
Typically, the sampled Heaviside function is set to 0.5 for t=0. If I do that, the offset is removed completely, and max difference for the real component is reduced by 3 orders of magnitude (largest errors happen for values very close to 0, where I see a zig-zag pattern). For the imaginary component, the max error is reduced to 3e-6, still quite large, and is maximal at high frequencies. I attribute these errors to the difference between the ideal and sampled Heaviside functions.
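For example, a sketch of that H(0) = 0.5 adjustment (my own illustration of the suggestion, assuming the same a and t as in the question):
H = @(x) 1*(x > 0) + 0.5*(x == 0);  % sampled Heaviside with H(0) = 0.5
y = exp(-a*t).*H(t);                % decaying exponential times the adjusted step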
You should probably limit yourself to band-limited functions (or nearly band-limited ones such as the Gaussian). You might want to try to replace the Heaviside function with an error function (the integral of a Gaussian) with a small sigma (about 0.8 samples, i.e. sigma = 0.8/fs, is the smallest sigma I would consider for proper sampling). Its Fourier transform is known.
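A sketch of that replacement (my own, taking sigma = 0.8/fs as the assumed smallest sigma for proper sampling):
sigma = 0.8/fs;                                     % about 0.8 sample intervals
H_smooth = @(x) 0.5*(1 + erf(x/(sqrt(2)*sigma)));   % error-function step, the integral of a Gaussian
y = exp(-a*t).*H_smooth(t);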

Estimate Poisson PDF Parameters Using Curve Fitting in MATLAB

I am trying to fit histogram data that seem to follow a Poisson distribution. I declare the function as follows and try to fit it using the least-squares method.
xdata; ydata; % Arrays in which I have stored the data.
% ydata tells us how many times each xdata value is repeated in the set.
fun = @(x,xdata) (exp(-x(1))*(x(1).^xdata))./factorial(xdata); % Poisson PMF I want to use in the fit
x0 = 60; % Approximate value of the parameter lambda to help the fit
p = lsqcurvefit(fun, x0, xdata, ydata); % Fit in the least-squares sense
However, I encounter the following error:
Error using snls (line 48)
Objective function is returning undefined values at initial point.
lsqcurvefit cannot continue.
I have seen online that this sometimes has to do with a division by zero, for example, which can be solved by adding a small amount to the denominator so that the indeterminate form never occurs. However, that is not my case. What is the problem then?
I implemented both methods (Maximum Likelihood and PDF Curve Fitting).
You can see the code in my Stack Overflow Q45118312 Github Repository.
Results:
As you can see, the Maximum Likelihood estimator is simpler and better (MSE-wise).
So you have no reason to use the PDF Curve Fitting method.
Part of the code which does the heavy lifting is:
%% Simulation Parameters
numTests = 50;
numSamples = 1000;
paramLambdaBound = 10;
epsVal = 1e-6;
hPoissonPmf = #(paramLambda, vParamK) ((paramLambda .^ vParamK) * exp(-paramLambda)) ./ factorial(vParamK);
for ii = 1:ceil(1000 * paramLambdaBound)
if(hPoissonPmf(paramLambdaBound, ii) <= epsVal)
break;
end
end
vValGrid = [0:ii];
vValGrid = vValGrid(:);
vParamLambda = zeros([numTests, 1]);
vParamLambdaMl = zeros([numTests, 1]); %<! Maximum Likelihood
vParamLambdaCf = zeros([numTests, 1]); %<! Curve Fitting
%% Generate Data and Samples
for ii = 1:numTests
paramLambda = paramLambdaBound * rand([1, 1]);
vDataSamples = poissrnd(paramLambda, [numSamples, 1]);
vDataHist = histcounts(vDataSamples, [vValGrid - 0.5; vValGrid(end) + 0.5]) / numSamples;
vDataHist = vDataHist(:);
vParamLambda(ii) = paramLambda;
vParamLambdaMl(ii) = mean(vDataSamples); %<! Maximum Likelihood
vParamLambdaCf(ii) = lsqcurvefit(hPoissonPmf, 2, vValGrid, vDataHist, 0, inf); %<! Curve Fitting
end
Fitting the PDF by least-squares curve fitting, as in the question, is the wrong way to do this.
You have data you believe behaves according to a Poisson distribution.
Since the Poisson distribution is parameterized by a single parameter (lambda), what you need to do is apply parameter estimation.
The classic way to do so is Maximum Likelihood Estimation.
For this case, the Poisson distribution, you need to follow the MLE of the Poisson distribution.
Namely, just calculate the sample mean of the data, as is done in poissfit().
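As a minimal sketch of that estimator (my own illustration; the variable names and lambdaTrue = 60 are arbitrary), the MLE is just the sample mean, and it should agree with poissfit from the Statistics and Machine Learning Toolbox:
lambdaTrue = 60;
vSamples = poissrnd(lambdaTrue, [1000, 1]);  % simulated Poisson data
lambdaMl = mean(vSamples);                   % Maximum Likelihood estimate of lambda
lambdaFit = poissfit(vSamples);              % same estimate via the toolbox function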