Solving DDE in Matlab

I am trying to learn how to solve DDEs (delay differential equations) in Matlab, and I am using a very helpful YouTube tutorial where the presenter solves examples. In the case of a 3-dimensional system, the code goes as follows:
tau = [1 0.5];
tf = 10;
sol = dde23(@dde,tau,@history,[0 tf]);
t = linspace(0,tf,200);
y = deval(sol,t);
figure(1)
plot(t,y)
function y = history(t)
y = [1;0;-1];
end
The dde function according to the tutorial is:
function dydt = dde(t,y,tau)
y1tau1 = tau(:,1);
y2tau2 = tau(:,2);
dydt = [y1tau1(1)
y(1) - y1tau1(1) + y2tau2(2)
y(2) - y(3)];
end
, where, if I understand it correctly, we have a 3-by-2 matrix of the state variables evaluated at their respective delays: the first column holds the values at the first delay tau_1 for each of the three state variables, and the second column holds the values at tau_2.
Nevertheless, only two of these entries are actually used in the equations: the tau_1-delayed value of y_1 and the tau_2-delayed value of y_2.
Given that, I thought it would be the same (at least in this case where we only have one delay for y_1 and one for y_2) as if the function were:
function dydt = dde(t,y,tau)
dydt = [tau(1)
y(1) - tau(1) + tau(2)
y(2) - y(3)];
end
I ran the script for both of them and the results are totally different, both qualitatively and quantitatively, and I cannot figure out why. Could someone explain the difference?

y1tau1(1) and y2tau2(2) are tau(1,1) and tau(2,2)
while
tau(1) and tau(2) are the same as tau(1,1) and tau(2,1).
So the second version encodes a different system, and there is no reason why two different, if only slightly different, DDE systems should have the same solutions.
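The reason is MATLAB's column-major linear indexing: a single subscript counts down the first column before moving to the second. A quick illustration with a made-up 3-by-2 matrix:
Z = [11 12
     21 22
     31 32];
Z(1)     % 11, i.e. Z(1,1)
Z(2)     % 21, i.e. Z(2,1), not Z(1,2)
Z(4)     % 12, i.e. Z(1,2)
Z(2,2)   % 22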
Since you also have a general interpretation question: mathematically, the DDE system is
dy1(t)/dt = y1(t-1)
dy2(t)/dt = y1(t) - y1(t-1) + y2(t-0.5)
dy3(t)/dt = y2(t) - y3(t)
It is unfortunate that the argument holding the delayed values is also named tau; it would be better to use something else, like yd:
function dydt = dde(t,y,yd)
dydt = [ yd(1,1)
y(1) - yd(1,1) + yd(2,2)
y(2) - y(3) ];
end
This yd has two columns: the first holds the values y(t-tau(1)) = y(t-1), the second holds y(t-tau(2)) = y(t-0.5). The DDE function does not know the delays themselves; it only computes with the values at those delays. These are obtained from a piecewise interpolation that continues the history function with the solution computed so far. The history function takes the place of the initial condition.
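Putting the pieces together, a complete version of the tutorial script with the clearer yd naming could look like this (a sketch assuming everything lives in one file, as in the code above, and a MATLAB release that accepts local functions there):
tau = [1 0.5];
tf  = 10;
sol = dde23(@dde, tau, @history, [0 tf]);
t   = linspace(0, tf, 200);
y   = deval(sol, t);
figure(1)
plot(t, y)

function dydt = dde(t, y, yd)
    % yd(:,1) = y(t-1), yd(:,2) = y(t-0.5)
    dydt = [ yd(1,1)
             y(1) - yd(1,1) + yd(2,2)
             y(2) - y(3) ];
end

function y = history(t)
    y = [1; 0; -1];
end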


Input equations into Matlab for Simulink Function

I am currently working on an assignment where I need to create two different controllers in Matlab/Simulink for a robotic exoskeleton leg. The idea behind this is to compare them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a Matlab Function block to then run in Simulink and get results for an AFO (adaptive frequency oscillator). The link below shows the equations I'm trying to implement, and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
t = 0;
% syms j
% M = 6;
% j = sym('j', [1 M]);
if t == 0
omega = 3*pi/2;
theta = 0;
phi = pi/2;
ampl = 0;
else
omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
theta = theta*(t-1) + dt*(nu*offset);
phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
end
pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi* (t-1))))*AFO_on; %symsum needs input argument for index M and range
vel_AFO = diff(pos_AFO)*AFO_on;
acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to do the subscripts, sigma, or the (t+1) function. Any help is appreciated as this is due next week
You are looking to find the result of an adaptive process, so your algorithm needs to consider time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you that you need to reuse an old value to calculate a new value.
omega_old = 0;
theta_old = 0;
% initialize the rest of your variables
for t = 1:N
    omega(t) = omega_old + % here goes the rest of your omega calculation
    theta(t) = theta_old + % ...
    % more code .....
    % remember your old values for the next iteration
    omega_old = omega(t);
    theta_old = theta(t);
end
I think you forgot to apply the modulo operation to phi, judging by the original formula you linked. As a general rule, design your code in small pieces, make sure the output of each piece makes sense, and only then combine the pieces and check that the overall result is correct.
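For reference, a phase wrap is a one-liner inside the loop; the modulus used here (2*pi) is an assumption, since the linked formula is not reproduced above:
phi(t) = mod(phi(t), 2*pi);   % assumed wrap to [0, 2*pi); use whatever modulus the formula specifies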

Taylor series for (exp(x) - exp(-x))/(2*x)

I've been asked to write a function that calculates the Taylor series for (exp(x) - exp(-x))/(2*x) until the absolute error is smaller than the eps of the machine.
function k = tayser(xo)
f = @(x) (exp(x) - exp(-x))/(2*x);
abserror = 1;
sum = 1;
n=2;
while abserror > eps
sum = sum + (xo^n)/(factorial(n+1));
n=n+2;
abserror = abs(sum-f(xo));
disp(abserror);
end
k=sum;
My issue is that the abserror never goes below the eps of the machine which results to an infinite loop.
The problem is the expression you're using. For small x, exp(x) and exp(-x) are approximately equal, so the difference exp(x) - exp(-x) is dominated by cancellation: most significant digits cancel, and for very small x the computed difference (and hence the computed function value) is badly inaccurate or even exactly zero. Since your partial sum starts at 1 and only ever adds positive terms, the absolute error against that inaccurate reference value never drops below eps.
Rewriting the expression as
f = @(x) sinh(x)/x;
will work, because it's more stable for these small values.
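For reference, the series the loop accumulates is exactly the Taylor series of this function,
(exp(x) - exp(-x))/(2*x) = sinh(x)/x = 1 + x^2/3! + x^4/5! + x^6/7! + ...
so with the stable reference value the partial sums have a chance of matching it to within eps for small arguments.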
You can also see this by plotting both functions:
x = -1e-14:1e-18:1e-14;
plot(x,(exp(x) - exp(-x))./(2*x),x,sinh(x)./x)
legend('(exp(x) - exp(-x))/(2*x)','sinh(x)/x')
gives a plot (not reproduced here) in which the naive expression is visibly noisy around 1 due to cancellation, while sinh(x)/x stays smooth.

Runge-kutta for coupled ODEs

I'm building a function in Octave that can solve N coupled ordinary differential equations of the type:
dx/dt = F(x,y,…,z,t)
dy/dt = G(x,y,…,z,t)
dz/dt = H(x,y,…,z,t)
With any of these three methods (Euler, Heun and Runge-Kutta-4).
The following code correspond to the function:
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
  range = b-a;
  h=range/steps;
  rows = (range/h)+1;
  columns = size(dfuns)(2)+1;
  sol= zeros(abs(rows),columns);
  heun=zeros(1,columns-1);
  for i=1:abs(rows)
    if i==1
      sol(i,1)=a;
    else
      sol(i,1)=sol(i-1,1)+h;
    end
    for j=2:columns
      if i==1
        sol(i,j)=ini(j-1);
      else
        if strcmp("euler",method)
          sol(i,j)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
        elseif strcmp("heun",method)
          heun(j-1)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
        elseif strcmp("rk4",method)
          k1=h*dfuns{j-1}(E, [sol(i-1,1), sol(i-1,2:end)]);
          k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
          k3=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k2)]);
          k4=h*dfuns{j-1}(E, [sol(i-1,1)+h, sol(i-1,2:end)+(h*k3)]);
          sol(i,j)=sol(i-1,j)+((1/6)*(k1+(2*k2)+(2*k3)+k4));
        end
      end
    end
    if strcmp("heun",method)
      if i~=1
        for k=2:columns
          sol(i,k)=sol(i-1,k)+(h/2)*((dfuns{k-1}(E, sol(i-1,1:end)))+(dfuns{k-1}(E, [sol(i,1),heun])));
        end
      end
    end
  end
end
When I use the function for a single ordinary differential equation, the RK4 method is the best, as expected, but when I run the code for a coupled system of differential equations, RK4 is the worst. I've been checking and checking and I don't know what I am doing wrong.
The following code is an example of how to call the function
F{1} = @(e, y) 0.6*y(3);
F{2} = @(e, y) -0.6*y(3)+0.001407*y(4)*y(3);
F{3} = @(e, y) -0.001407*y(4)*y(3);
steps = 24;
sol1 = coupled_ode(0,F,steps,0,24,[0 5 995],"euler");
sol2 = coupled_ode(0,F,steps,0,24,[0 5 995],"heun");
sol3 = coupled_ode(0,F,steps,0,24,[0 5 995],"rk4");
plot(sol1(:,1),sol1(:,4),sol2(:,1),sol2(:,4),sol3(:,1),sol3(:,4));
legend("Euler", "Heun", "RK4");
Careful: there are a few too many h's in the RK4 formulæ:
k2 = h*dfuns{ [...] +(0.5*h*k1)]);
k3 = h*dfuns{ [...] +(0.5*h*k2)]);
should be
k2 = h*dfuns{ [...] +(0.5*k1)]);
k3 = h*dfuns{ [...] +(0.5*k2)]);
(last h's removed).
However, this makes no difference for the example that you provided, since h=1 there.
But other than that little bug, I don't think you're actually doing anything wrong.
If I plot the solution generated by the more advanced, adaptive 4ᵗʰ/5ᵗʰ order RK implemented in ode45:
F{1} = @(e,y) +0.6*y(3);
F{2} = @(e,y) -0.6*y(3) + 0.001407*y(4)*y(3);
F{3} = @(e,y) -0.001407*y(4)*y(3);
tend = 24;
steps = 24;
y0 = [0 5 995];
plotN = 2;
sol1 = coupled_ode(0,F, steps, 0,tend, y0, 'euler');
sol2 = coupled_ode(0,F, steps, 0,tend, y0, 'heun');
sol3 = coupled_ode(0,F, steps, 0,tend, y0, 'rk4');
figure(1), clf, hold on
plot(sol1(:,1), sol1(:,plotN+1),...
sol2(:,1), sol2(:,plotN+1),...
sol3(:,1), sol3(:,plotN+1));
% New solution, generated by ODE45
opts = odeset('AbsTol', 1e-12, 'RelTol', 1e-12);
fcn = @(t,y) [F{1}(0,[0; y])
F{2}(0,[0; y])
F{3}(0,[0; y])];
[t,solN] = ode45(fcn, [0 tend], y0, opts);
plot(t, solN(:,plotN))
legend('Euler', 'Heun', 'RK4', 'ODE45');
xlabel('t');
Then we have something more believable to compare to.
Now, plain-and-simple RK4 indeed performs terribly for this isolated case (comparison plot not reproduced here).
However, if I simply flip the signs of the last term in the last two functions:
% ±
F{2} = @(e,y) +0.6*y(3) - 0.001407*y(4)*y(3);
F{3} = @(e,y) +0.001407*y(4)*y(3);
Then we get a rather different picture (plot not reproduced here).
The main reason RK4 performs badly for your case is because of the step size. The adaptive RK4/5 (with a tolerance set to 1 instead of 1e-12 as above) produces an average δt = 0.15. This means that basic error analysis has indicated that for this particular problem, h = 0.15 is the largest step you can take without introducing unacceptable error.
But you were taking h = 1, which then indeed gives a large accumulated error.
The fact that Heun and Euler perform so well for your case is, well, just plain luck, as demonstrated by the sign inversion example above.
Welcome to the world of numerical mathematics - there never is 1 method that's best for all problems under all circumstances :)
Apart from the error described in the older answer, there is indeed a fundamental methodological error in the implementation. The implementation is correct for scalar first-order differential equations. But the moment you try to use it on a coupled system, the decoupled treatment of the stages in the Runge-Kutta method (note that the Heun predictor is just a copy of the Euler step) reduces them to first-order methods.
Specifically, starting in
k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
the addition of 0.5*k1 to sol(i-1,2:end) should add the vector of first-stage slopes, not the same scalar slope value to every component of the state vector, which is what happens when k1 is computed one component at a time.
Taking this into account results in the following changed implementation:
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
  range = b-a;
  h = range/steps;
  rows = steps+1;
  columns = size(dfuns)(2)+1;
  sol = zeros(rows,columns);
  k = h*ones(4,columns);   % column 1 carries the time increment used in the stage evaluations
  sol(1,1) = a;
  sol(1,2:end) = ini(1:end);
  for i=2:rows
    sol(i,1) = sol(i-1,1)+h;
    if strcmp("euler",method)
      for j=2:columns
        sol(i,j) = sol(i-1,j) + h*dfuns{j-1}(E, sol(i-1,1:end));
      end
    elseif strcmp("heun",method)
      for j=2:columns
        k(1,j) = h*dfuns{j-1}(E, sol(i-1,1:end));
      end
      for j=2:columns
        % Heun: average the slope at the start and at the Euler predictor
        sol(i,j) = sol(i-1,j) + 0.5*(k(1,j) + h*dfuns{j-1}(E, sol(i-1,1:end)+k(1,1:end)));
      end
    elseif strcmp("rk4",method)
      for j=2:columns
        k(1,j) = h*dfuns{j-1}(E, sol(i-1,:));
      end
      for j=2:columns
        k(2,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(1,:));
      end
      for j=2:columns
        k(3,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(2,:));
      end
      for j=2:columns
        k(4,j) = h*dfuns{j-1}(E, sol(i-1,:)+k(3,:));
      end
      sol(i,2:end) = sol(i-1,2:end) + (1/6)*(k(1,2:end)+2*k(2,2:end)+2*k(3,2:end)+k(4,2:end));
    end
  end
end
As can be seen, the loop over the vector components recurs frequently. One could hide this entirely by fully vectorizing the code, using a single vector-valued function for the right-hand side of the coupled ODE system, as sketched below.
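A minimal sketch of that vectorized form for the example system (assuming the three right-hand sides are merged into one function of a pure state vector, so the indices shift by one relative to the F{...} handles above, which receive the time column as their first entry):
% Vector-valued right-hand side; state y = [y1; y2; y3]
f = @(t,y) [ 0.6*y(2)
            -0.6*y(2) + 0.001407*y(3)*y(2)
            -0.001407*y(3)*y(2) ];
h = 0.2;                 % step size (120 steps over [0, 24])
t = 0;
y = [0; 5; 995];         % initial state as a column vector
Y = zeros(121, 3);       % stored trajectory, one row per time point
Y(1,:) = y.';
for n = 1:120
    k1 = f(t,       y);
    k2 = f(t + h/2, y + h/2*k1);
    k3 = f(t + h/2, y + h/2*k2);
    k4 = f(t + h,   y + h*k3);
    y  = y + h/6*(k1 + 2*k2 + 2*k3 + k4);
    t  = t + h;
    Y(n+1,:) = y.';
end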
With the changed implementation, the plot of the second solution component (not reproduced here) is much more reasonable for step size 1, and with a subdivision into 120 intervals (step size 0.2) the graph for RK4 barely changes while the other two move towards it from below and above.

MATLAB: Saving parameters inside ode45 using 'assignin'

I'm running a set of ODEs with ode45 in MATLAB and I need to save one of the variables (that's not the derivative) for later use. I'm using the function 'assignin' to assign a temporary variable in the base workspace and updating it at each step. This seems to work, however, the size of the array does not match the size of the solution vector acquired from ode45. For example, I have the following nested function:
function [Z,Y] = droplet_momentum(theta,K,G,P,zspan,Y0)
options = odeset('RelTol',1e-7,'AbsTol',1e-7);
[Z,Y] = ode45(@momentum,zspan,Y0,options);
function DY = momentum(z,y)
DY = zeros(4,1);
%Entrained Total Velocity
Ve = sin(theta)*(y(4));
%Total Relative Velocity
Urs = sqrt((y(1) - y(4))^2 + (y(2) - Ve*cos(theta))^2 + (y(3))^2);
%Coefficients
PSI = K*Urs/y(1);
PHI = P*Urs/y(1);
%Liquid Axial Velocity
DY(1) = PSI*sign(y(1) - y(4))*(1 + (1/6)*(abs(y(1) - y(4))*G)^(2/3));
%Liquid Radial Velocity
DY(2) = PSI*sign(y(2) - Ve*cos(theta))*(1 + (1/6)*(abs(y(2) - ...
Ve*cos(theta))*G)^(2/3));
%Liquid Tangential Velocity
DY(3) = PSI*sign(y(3))*(1 + (1/6)*(abs(y(3))*G)^(2/3));
%Gaseous Axial Velocity
DY(4) = (1/z/y(4))*((PHI/z)*sign(y(1) - y(4))*(1 + ...
(1/6)*(abs(y(1) - y(4))*G)^(2/3)) + Ve*Ve - y(4)*y(4));
assignin('base','Ve_step',Ve);
evalin('base','Ve_out(end+1) = Ve_step');
end
end
In the above code, theta (radians), K (negative value), P, & G are constants and for the sake of this example can be taken as any value. Zspan is just the integration time step for the ODE solver and Y0 is the initial conditions vector (4x1). Again, for the sake of this example these can take any reasonable value. Now in the main file, the function is called with the following:
Ve_out = 0;
[Z,Y] = droplet_momentum(theta,K,G,P,zspan,Y0);
Ve_out = Ve_out(2:end);
This method works without complaint from MATLAB, but the problem is that the size of Ve_out does not match the size of Z or Y. The reason for this is that MATLAB calls the ODE function multiple times within its algorithm, so the solution ends up slightly smaller than Ve_out. As am304 suggested, I could simply calculate DY by giving the ode function the Z and Y vectors, as in DY = momentum(Z,Y); however, I need to get this working with 'assignin' (or a similar method), because another version of this problem has an implicit dependence between DY and Ve, and it would be too computationally expensive to calculate DY at every iteration (I will be running this problem for many iterations).
Ok, so let's start off with a quick example of an SSCCE (Short, Self-Contained, Correct Example):
function [Z,Y] = khan
options = odeset('RelTol',1e-7,'AbsTol',1e-7);
[Z,Y] = ode45(@momentum,[0 12],[0 0],options);
end
function Dy = momentum(z,y)
Dy = [0 0]';
Dy(1) = 3*y(1) + 2* y(2) - 2;
Dy(2) = y(1) - y(2);
Ve = Dy(1)+ y(2);
assignin('base','Ve_step',Ve);
evalin('base','Ve_out(end+1) = Ve_step;');
assignin('base','T_step',z);
evalin('base','T_out(end+1) = T_step;');
end
By running [Z,Y] = khan at the command line, we get a complete, functional piece of code that demonstrates your problem without all the associated headaches. My patience for this has been exhausted: live and learn.
"This seems to work, however, the size of the array does not match the size of the solution vector acquired from ode45"
Note that I added two lines to your code which extract the time variable. From the command prompt, one simply has to run the following to understand what's going on:
Ve_out = [];
T_out = [];
[Z,Y] = khan;
size (Z)
size (T_out)
size (Ve_out)
plot (diff(T_out))
ans =
109 1
ans =
1 163
ans =
1 163
Basically, ode45 is an iterative, adaptive algorithm, which means it will regularly course-correct (that's why you regularly see diff(T_out) = 0). You can't force the algorithm to do what you want; you have to live with it.
So your options are:
1. Use a fixed-step algorithm.
2. Have a function call that reproduces what you want after the ode45 algorithm has done its dirty work (am304's solution).
3. Collect the data together with the time variable, then parse through everything afterwards to remove the extra data.
Can you not do something like this? Obviously, check that the sizes of the matrices/vectors are correct and amend the code accordingly.
[Z,Y] = droplet_momentum2(theta,K,G,P,zspan,Y0);
DY = momentum(Z,Y);
Ve = sin(theta)*(0.5*z*DY(4) + y(4));
i.e. once the ODE is solved, compute the derivative DY as a function of Z and Y (which have just been returned by the ODE solver), and finally Ve.
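In the simplest case, where Ve depends only on the solved states (as in the code in the question, where Ve = sin(theta)*y(4)), that post-processing step is just a vectorized one-liner over the accepted output points:
Ve_out = sin(theta) * Y(:,4);   % one value per row of Z/Y, so the sizes match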

Implementing iterative solution of integral equation in Matlab

We have an equation similar to the Fredholm integral equation of the second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem is implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
    delt = @(x) delta_a(x,a,P,H,E,c0,w);
    for i=1:500
        delt = @(x) delt(x) - 1/E.*integral(@(xi)((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)),0,a-0.001);
    end
    delta = delt;
end
delta_a is a function of x and represents the initial value of the iteration. ms is a function of x and xi.
As you can see, we want delt to depend on both x (outside the integral) and xi (inside the integral) in the iteration. Unfortunately, this way of writing the code (with the function handle) does not give us a numerical value, as we wish. Nor can we write delt as two different functions, one of x and one of xi, since xi is not defined until integral defines it. So, how can we make sure that delt depends on xi inside the integral and still get a numerical value out of the iteration?
Do any of you have any suggestions for how we might solve this, for instance using numerical integration?
Explanation of the input parameters: x is a vector of numerical values; all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can nest anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
@(x)2*x
>> ff = @(x) f(f(x))
ff =
@(x)f(f(x))
>> ff(2)
ans =
8
>> f = ff;
>> f(2)
ans =
8
Also, it is possible to rebind the handles to the functions.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i=1:500
    delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a-0.001);
    delta_old = delta_new;
end
plus the inclusion of your parameters...
You may want to consider solving a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated with the interpolation nodes (x_i) = x_1, x_2, ..., x_n.
Then, solving your Picard iterations is as simple as doing
phi_(n+1) = f + K*phi_n
i.e.
for i = 1:N
    phi = f + K*phi;
end
where phi_n and f are the nodal values of phi and f on the (x_i).
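A rough sketch of this approach, using simple trapezoidal quadrature weights on a uniform grid instead of Lagrange interpolants; the kernel kfun, the right-hand side ffun, and the interval [0,1] are made-up placeholders:
n = 200;                            % number of nodes
a = 0; b = 1;                       % placeholder interval
x = linspace(a, b, n).';            % quadrature / interpolation nodes
w = (b - a)/(n - 1) * ones(n, 1);   % trapezoidal weights
w([1 end]) = w([1 end])/2;
kfun = @(t, s) exp(-abs(t - s));    % placeholder kernel k(t,s)
ffun = @(t) t.^2;                   % placeholder right-hand side f(t)
K = kfun(x, x.') .* w.';            % K(i,j) ~ k(x_i, x_j)*w_j (needs implicit expansion, R2016b+ or Octave)
f = ffun(x);
phi = f;                            % initial guess
for it = 1:500                      % Picard iteration: phi <- f + K*phi
    phi = f + K*phi;
end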