Struggling to plot the solution of an ODE with a step-function forcing term in MATLAB

I am trying to use MATLAB to solve an ODE involving a step function: y'' + 0.1*y' + y = f(t), where f(t) is a square wave built from Heaviside step functions.
Here is my code to solve this ODE:
syms t s Y y(t) Dy(t) u(t) f(t) k
Dy = diff(y,t)
D2y = diff(Dy,t)
f = heaviside(t) + 2*symsum((-1)^k*heaviside(t-k*pi()),k,1,Inf)
eqn = D2y +0.1*Dy + y == f
leqn = laplace(eqn,t,s)
LT_Y = subs(leqn, laplace(y,t,s), Y);
LT_Y = subs(LT_Y, y(0), 0);
LT_Y = subs(LT_Y, subs(diff(y(t),t), t, 0), 0);
Y = solve(LT_Y, Y);
y = ilaplace(Y, s, t)
However, the result looks strange and I cannot plot the solution. Could anyone tell me what I should change in my code so that I can plot the solution? Thank you so much.

The decay rate of the friction term is 0.1/2 = 0.05, so you need about t = 40 for the transient to decay to roughly a tenth of its size. In a standard plot, features down to about 1% of the plot size are visible, which would require a time span of about t = 80, or longer if you want visual confirmation that the amplitude stabilizes at that level.
The right-hand side is approximately in resonance with the left-hand side, so the steady-state solution will look roughly like -10*cos(t) plus higher-frequency components.
Just for plotting, a numerical solution is sufficient:
opts = odeset("AbsTol",1e-6,"RelTol",1e-9);
[T,Y] = ode45(@(t,y)[y(2); f(t)-0.1*y(2)-y(1)], [0, 100], [0;0], opts);
plot(T,Y(:,1),T,-13*cos(T));
function z = f(t)
    % square wave, equal to the Heaviside series on the right-hand side
    z = sign(sin(t));
end
This gives the plot below, blue for the solution and green for the reference (guessed amplitude)
Computing the Fourier series of the right-hand side, its fundamental component is (4/pi)*sin(t); at resonance this is amplified by the factor 1/0.1, so the amplitude of the cosine component of the steady-state solution is 40/pi = 12.732.
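As a quick check (not part of the original answer), the predicted amplitude can be compared with the late-time numerical solution; T and Y are the ode45 outputs from the snippet above, and the cutoff T > 80 is an illustrative choice:
A_pred = (4/pi)/0.1;               % fundamental of the square wave divided by the damping
A_num  = max(abs(Y(T > 80, 1)));   % amplitude after the transient has decayed
fprintf('predicted %.3f, observed %.3f\n', A_pred, A_num);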

Related

Why does MATLAB plot the cosine in an abnormal manner?

I wrote the code below in MATLAB to plot the cosine function as the numerical derivative of the sine function, but the output plot was not what I expected!
clear;
clc;
close all;
delta = 1e-15;
t = linspace(0, 20, 1000);
y_derived = (sin(t + delta) - sin(t)) / delta;
y_expected = cos(t);
hold on
plot(y_derived)
plot(y_expected)
legend('y_{derived}', 'y_{expected}')
grid on
The output plot looks like this:
Can anyone help me understand what's happening?
MATLAB plots exactly what you tell it to plot. The issue is in the way you compute the derivative: your finite-difference quotient uses delta = 1e-15, which is very close to the machine precision eps = 2.2e-16, so the result is dominated by rounding errors. The staircase shape actually shows the discrete nature of the floating-point numbers you're using quite nicely. Set e.g. delta = 1e-6 and it will probably look a lot better.
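A minimal sketch of that change, reusing t from the question's code:
% A larger step keeps the numerator well above eps, so cancellation
% no longer dominates the difference quotient.
delta = 1e-6;
y_derived = (sin(t + delta) - sin(t)) / delta;
plot(t, y_derived, t, cos(t))
legend('y_{derived}', 'cos(t)')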

Why does the convolution obtained from MATLAB differ from the theoretical result?

The theoretical result for the convolution of an exponential function with a sinusoidal function is shown below.
When I plot that function directly in MATLAB, I get this.
However, MATLAB's conv command yields this.
The two plots look similar but they are not the same; see the scales. The MATLAB result is ten times the theoretical one. Why?
The MATLAB code is here:
clc;
clear all;
close all;
t = 0:0.1:50;
x1 = exp(-t);
x2 = sin(t);
x = conv(x1,x2);
x_theory = 0.5.*(exp(-t) + sin(t) - cos(t));
figure(1)
subplot(313), plot(t, x(1:length(t))); subplot(311), plot(t, x1(1:length(t))); subplot(312), plot(t, x2(1:length(t)))
figure(2)
subplot(313), plot(t, x_theory); subplot(311), plot(t, x1(1:length(t))); subplot(312), plot(t, x2(1:length(t)))
conv performs discrete-time convolution; it does not compute the continuous-time convolution integral. Numerically, this basically means multiplying and adding samples of the two signals many times, once for each output point, with a small shift of one of the signals.
If you think about this, you will realize that the sampling of the signals has an effect: if you have a point every 0.1 units or every 0.001 units, the number of points you multiply and add differs, and thus the result differs in value (not in shape).
Therefore, every time you approximate a continuous convolution numerically, you need to multiply by the sampling interval (the time step between samples) to "normalize" the operation.
Just change your code to:
dt = 0.1;             % sampling interval (time step)
t = 0:dt:50;
x = conv(x1,x2)*dt;   % scale the discrete convolution by the time step
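As a quick check (not in the original answer), the scaled result can be compared against the closed-form expression x_theory from the question's code:
% The scaled discrete convolution should now track x_theory closely;
% the remaining gap comes from the dt = 0.1 discretization.
max(abs(x(1:length(t)) - x_theory))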

Analytical Fourier transform vs FFT of functions in Matlab

I have adapted the code from Comparing FFT of Function to Analytical FT Solution in Matlab for this question. I am trying to compute FFTs and compare the results with the analytical expressions in the Wikipedia tables.
My code is:
a = 1.223;
fs = 1e5; %sampling frequency
dt = 1/fs;
t = 0:dt:30-dt; %time vector
L = length(t); % no. sample points
t = t - 0.5*max(t); %center around t=0
y = ; % original function in time
Y = dt*fftshift(abs(fft(y))); %numerical soln
freq = (-L/2:L/2-1)*fs/L; %freq vector
w = 2*pi*freq; % angular freq
F = ; %analytical solution
figure; subplot(1,2,1); hold on
plot(w,real(Y),'.')
plot(w,real(F),'-')
xlabel('Frequency, w')
title('real')
legend('numerical','analytic')
xlim([-5,5])
subplot(1,2,2); hold on;
plot(w,imag(Y),'.')
plot(w,imag(F),'-')
xlabel('Frequency, w')
title('imag')
legend('numerical','analytic')
xlim([-5,5])
If I study the Gaussian function and let
y = exp(-a*t.^2); % original function in time
F = exp(-w.^2/(4*a))*sqrt(pi/a); %analytical solution
in the above code, it looks like there is good agreement when the real and imaginary parts of the function are plotted:
But if I study a decaying exponential multiplied with a Heaviside function:
H = @(x)1*(x>0); % Heaviside function
y = exp(-a*t).*H(t);
F = 1./(a+1j*w); %analytical solution
then the numerical result no longer matches the analytic one:
Why is there a discrepancy? I suspect it's related to the line Y = dt*fftshift(abs(fft(y))); but I'm not sure why or how.
Edit: I changed the ifftshift to fftshift in Y = dt*fftshift(abs(fft(y))); then I also removed the abs. The second graph now looks like this:
What is the mathematical reason behind the 'mirrored' graph and how can I remove it?
The plots at the bottom of the question are not mirrored. If you plot them using lines instead of dots you'll see that the numerical result contains very high frequencies. The magnitude matches, but the phase doesn't. When this happens, it's almost certainly a case of a shift in the time domain.
And indeed, you define the time domain function with the origin in the middle. The FFT expects the origin to be at the first (leftmost) sample. This is what ifftshift is for:
Y = dt*fftshift(fft(ifftshift(y)));
ifftshift moves the origin to the first sample, in preparation for the fft call, and fftshift moves the origin of the result to the middle, for display.
Edit
Your t does not have a 0:
>> t(L/2+(-1:2))
ans =
-1.5000e-05 -5.0000e-06 5.0000e-06 1.5000e-05
The sample at t(floor(L/2)+1) needs to be 0. That is the sample that ifftshift moves to the leftmost position. (I use floor there in case L is odd, which is not the case here.)
To generate a correct t do as follows:
fs = 1e5; % sampling frequency
L = 30 * fs;
t = -floor(L/2):floor((L-1)/2);
t = t / fs;
I first generate an integer t axis of the right length, with 0 at the correct location (t(floor(L/2)+1)==0). Then I convert that to seconds by dividing by the sampling frequency.
With this t, the Y as I suggest above, and the rest of your code as-is, I see this for the Gaussian example:
>> max(abs(F-Y))
ans = 4.5254e-16
For the other function I see larger differences, on the order of 6e-6. This is due to the inability to properly sample the Heaviside function: you need the sample at t = 0 in your sampled function, but the ideal Heaviside function does not have a well-defined value there. I noticed that the real component has an offset of similar magnitude, which is caused by the sample at t = 0.
Typically, the sampled Heaviside function is set to 0.5 for t=0. If I do that, the offset is removed completely, and max difference for the real component is reduced by 3 orders of magnitude (largest errors happen for values very close to 0, where I see a zig-zag pattern). For the imaginary component, the max error is reduced to 3e-6, still quite large, and is maximal at high frequencies. I attribute these errors to the difference between the ideal and sampled Heaviside functions.
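A minimal sketch of that change, reusing a and t from the question's code:
% Sampled Heaviside step with the conventional value 0.5 at t = 0.
H = @(x) double(x > 0) + 0.5*(x == 0);
y = exp(-a*t).*H(t);   % then recompute Y as before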
You should probably limit yourself to band-limited functions (or nearly band-limited ones such as the Gaussian). You might want to try replacing the Heaviside function with an error function (the integral of a Gaussian) with a small sigma (a sigma of about 0.8 samples, i.e. 0.8/fs, is the smallest I would consider for proper sampling). Its Fourier transform is known.
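A minimal sketch of that replacement (sigma and the helper Herf are illustrative names; a, t and fs as in the question's code):
% Band-limited approximation of the Heaviside step using the error function.
sigma = 0.8/fs;                                   % about 0.8 samples wide
Herf  = @(x) 0.5*(1 + erf(x/(sqrt(2)*sigma)));    % Gaussian-smoothed step
y = exp(-a*t).*Herf(t);                           % then recompute Y as before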

Determining time-dependent frequency using a sliding-window FFT

I have an instrument which produces roughly sinusoidal data, but with frequency varying slightly in time. I am using MATLAB to prototype some code to characterize the time dependence, but I'm running into some issues.
I am generating an idealized approximation of my data, I(t) = sin(2 pi f(t) t), with f(t) variable but currently tested as linear or quadratic. I then implement a sliding Hamming window (of width w) to generate a set of Fourier transforms F[I(t), t'] corresponding to the data points in I(t), and each F[I(t), t'] is fit with a Gaussian to more precisely determine the peak location.
My current MATLAB code is:
fs = 1000; %Sample frequency (Hz)
tlim = [0,1];
t = (tlim(1)/fs:1/fs:tlim(2)-1/fs)'; %Sample domain (t)
N = numel(t);
f = @(t) 100-30*(t-0.5).^2; %Frequency function (Hz)
I = sin(2*pi*f(t).*t); %Sample function
w = 201; %window width
ww=floor(w/2); %window half-width
for i=0:2:N-w
%Take the FFT of a portion of I, multiplied by a Hamming window
II = 1/(fs*N)*abs(fft(I((1:w)+i).*hamming(w))).^2;
II = II(1:floor(numel(II)/2));
p = (0:fs/w:(fs/2-fs/w))';
%Find approximate FFT maximum
[~,maxIx] = max(II);
maxLoc = p(maxIx);
%Fit the resulting FFT with a Gaussian function
gauss = @(c,x) c(1)*exp(-(x-c(2)).^2/(2*c(3)^2));
op = optimset('Display','off');
mdl = lsqcurvefit(gauss,[max(II),maxLoc,10],p,II,[],[],op);
%Generate diagnostic plots
subplot(3,1,1);plot(p,II,p,gauss(mdl,p))
line(f(t(i+ww))*[1,1],ylim,'color','r');
subplot(3,1,2);plot(t,I);
line(t(1+i)*[1,1],ylim,'color','r');line(t(w+i)*[1,1],ylim,'color','r')
subplot(3,1,3);plot(t(i+ww),f(t(i+ww)),'b.',t(i+ww),mdl(2),'r.');
hold on
xlim([0,max(t)])
drawnow
end
hold off
My thought process is that the peak location in each F[I(t), t'] should be a close approximation of the frequency at the center of the window which was used to produce it. However, this does not seem to be the case, experimentally.
I have had some success using discrete Fourier analysis for engineering problems in the past, but I've only done coursework on continuous Fourier transforms--so there may be something obvious that I'm missing. Also, this is my first question on StackExchange, so constructive criticism is welcome.
So it turns out that my problem was a poor understanding of the mathematics of the sine function. I had assumed that the frequency of the wave was equal to whatever was multiplied by the time variable (e.g. the f in sin(ft)). However, it turns out that the frequency is actually defined by the derivative of the entire argument of the sine function--the rate of change of the phase.
For constant f the two definitions are equal, since d(ft)/dt = f. But for, say, f(t) = sin(t):
d(f(t)t)/dt = d(sin(t) t)/dt = t cos(t) + sin(t)
The frequency varies as a function very different from f(t). Changing the function definition to the following fixed my problem:
f = @(t) 100-30*(t-0.5).^2; %Frequency function (Hz)
G = cumsum(f(t))/fs;        %Phase function (cycles): numerical integral of f
I = sin(2*pi*G);            %Sampling function
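As a quick sanity check (not part of the original answer), G can be compared against the closed-form integral of f, written out below as G_exact:
% G should closely track the exact phase integral of f from 0 to t.
G_exact = 100*t - 10*(t-0.5).^3 - 1.25;   % integral of 100-30*(t-0.5).^2, in cycles
max(abs(G - G_exact))                     % of order f*dt (~0.1 cycles), from the rectangle rule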

Simulate a 2D point mass moving along y=x^2 in MATLAB

I have a mixed set of differential and algebraic equations for which I have found the analytical solution in MATLAB. It concerns a point mass moving along a 2D curve constrained by y=x^2.
How would I go about using an ODE solver in MATLAB (or something else if it's easier) to simulate the ball rolling over the curve? The animation I can do myself; I'm more concerned with finding the velocities, xd and yd, at each consecutive step. That's where I kind of get lost.
These are the equations of motion, which I've derived using Lagrange multipliers; hence the lambda, which is the constraint (reaction) force. I can calculate the accelerations, xdd and ydd, but I assume I also need the velocities in the state if I want to simulate this properly.
% Symbolic functions
syms y x xd yd xdd ydd
syms m g lambda
% Parameters
A = [m 0 -2*x; 0 m 1; -2*x 1 0];
X = [xdd ydd lambda].';
b = [0 -m*g -2*xd^2].';
sol = A\b % these are the states stored in X
If you work out your problem using the Lagrangian, you get the formula for the acceleration xdd given in https://physics.stackexchange.com/questions/47154/ball-rolling-in-a-parabolic-bowl. The k-value comes from y = k*x^2 (it can be 1 for your example).
Rewrite that formula so the acceleration is isolated on the left-hand side.
Now you just iterate:
ddx = formula from the link above
x = x + dx*dt
dx = dx + ddx*dt
t = t + dt
y = k*x*x
You make a loop with a sufficiently small dt and update your x-position, velocity and acceleration each step.
Now you need to specify the starting values x0, dx0, ddx0 and dt; a minimal sketch of such a loop is shown below.
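A minimal MATLAB sketch of that loop, assuming the formula from the linked post reduces to ddx = -(4*k^2*x*dx^2 + 2*g*k*x)/(1 + 4*k^2*x^2) for a point mass on y = k*x^2; all parameter values and starting values below are illustrative only:
% Explicit Euler integration of a point mass sliding on y = k*x^2.
k  = 1;  g = 9.81;
dt = 1e-4;                 % sufficiently small time step
x  = 1;  dx = 0;           % starting values x0 and dx0
T  = 0:dt:5;
X  = zeros(size(T));       % store x(t) for plotting or animation
for n = 1:numel(T)
    X(n) = x;
    ddx = -(4*k^2*x*dx^2 + 2*g*k*x) / (1 + 4*k^2*x^2);  % assumed acceleration formula
    x   = x  + dx*dt;      % update position with the current velocity
    dx  = dx + ddx*dt;     % update velocity with the acceleration
end
plot(T, X, T, k*X.^2);     % x(t) and the corresponding y(t) = k*x(t)^2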
I hope this helped
Cheers:)