Solving system of second order ODEs in MATLAB - matlab

I have the system of second order ODEs:
A*u(t) + B*u''(t) = q(t) + b_A + b_B.
Here, A and B are known matrices, b_A is a known vector, b_B is a known vector, and q(t) is a time dependent vector which I can compute for a given t-value.
The goal of my problem is to numerically approximate the functions u_1 , ... , u_n, which are entries in u(t). Also, u''(t) denotes the second time derivative of u(t). I also have the initial condition vector:
u0 = zeros(n,1).
How would I solve this problem using MATLAB's built-in ODE solvers (e.g., ode45)?
All of the examples I have seen thus far involve converting the system of second order ODEs into a system of first order ODEs, but they have all been very small examples. Thanks for the help.
To convert this into a system of first order ODEs, I would do
y_1 = u
y_2 = u', so :
y_1' = y_2, and
y_2' = B^(-1)*(q(t) + b_A + b_B - A*y_1).
How should I implement this in MATLAB?
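A minimal sketch of how that conversion could be coded (a sketch under assumptions, not a definitive implementation: A, B, b_A, b_B are assumed to be in the workspace, q_fun is a hypothetical function handle returning q(t) as an n-by-1 vector, and the initial velocity is taken as zero since only u0 was given):
n  = size(A,1);
u0 = zeros(n,1);                 % given initial condition for u
v0 = zeros(n,1);                 % assumed initial condition for u'
y0 = [u0; v0];                   % stacked state y = [y_1; y_2] = [u; u']
% y_1' = y_2,  y_2' = B\(q(t) + b_A + b_B - A*y_1)
odefun = @(t,y) [ y(n+1:2*n); ...
                  B \ (q_fun(t) + b_A + b_B - A*y(1:n)) ];
tspan  = [0 10];                 % example integration interval
[T, Y] = ode45(odefun, tspan, y0);
% Y(:,1:n)     approximates u_1(t), ..., u_n(t) at the times in T
% Y(:,n+1:2*n) approximates their time derivatives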

Related

Solving system of equations on MATLAB, when a constant exists in variable matrix?

How do I solve the following system of equations on MATLAB when one of the elements of the variable vector is a constant? Please do give the code if possible.
More generally, if the solution is to use symbolic math, how will I go about generating large number of variables, say 12 (rather than just two) even before solving them?
For example, create a number of symbolic variables using syms, and then make the system of equations like below.
syms a1 a2
A = [matrix]
x = [1;a1;a2];
y = [1;0;0];
eqs = A*x == y
sol = solve(eqs,[a1, a2])
sol.a1
sol.a2
In case you have a system with many variables, you could define all the symbols using syms, and solve it like above.
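For instance, a small sketch of generating 12 symbolic variables in one call (the system matrix and right-hand side below are arbitrary placeholders):
a = sym('a', [12 1]);      % creates a1, a2, ..., a12 as a symbolic vector
A = triu(ones(12));        % placeholder nonsingular system matrix
y = ones(12,1);            % placeholder right-hand side
eqs = A*a == y;
sol = solve(eqs, a);       % sol.a1, sol.a2, ..., sol.a12 hold the solution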
You could also perform a parameter optimization with fminsearch. First you have to define a cost function, in a separate function file, in this example called cost_fcn.m.
function J = cost_fcn(p)
% make sure p is a vector
p = reshape(p, [length(p) 1]);
% system of equations, can be linear or nonlinear
A = magic(12); % your system, I took some arbitrary matrix
sol = A*p;
% the goal of the system of equations to reach, can be zero, or some other
% vector
goal = zeros(12,1);
% calculate the residual
err = goal - sol;   % named err to avoid shadowing MATLAB's built-in error function
% Use a cost criterion, e.g. sum of squares
J = sum(err.^2);
end
This cost function will contain your system of equations, and goal solution. This can be any kind of system. The vector p will contain the parameters that are being estimated, which will be optimized, starting from some initial guess. To do the optimization, you will have to create a script:
% initial guess, can be zeros, or some other starting point
p0 = zeros(12,1);
% do the parameter optimization
p = fminsearch(@cost_fcn, p0);
In this case p0 is the initial guess, which you provide to fminsearch. The values of this initial guess are then adjusted until a minimum of the cost function is found. When the parameter optimization is finished, p will contain the parameters that give the lowest error for your system of equations. It is, however, possible that this is only a local minimum, especially if there is no exact solution to the problem.
Your system is over-constrained, meaning you have more equations than unknowns, so you can't solve it exactly. What you can do is find a least-squares solution using mldivide. First re-arrange your equations so that all the constant terms are on the right side of the equal sign, then use mldivide:
>> A = [0.0297 -1.7796; 2.2749 0.0297; 0.0297 2.2749]
A =
0.029700 -1.779600
2.274900 0.029700
0.029700 2.274900
>> b = [1-2.2749; -0.0297; 1.7796]
b =
-1.274900
-0.029700
1.779600
>> A\b
ans =
-0.022191
0.757299

Solving 2nd order ODE, Matlab- the acceleration in the equation needs its own value in order to include another different term

I have this 2nd order ODE to solve in Matlab:
(a + f(t))·(dx/dt)·(d²x/dt²) + g(t) + ((h(t) + i(t)·(d²x/dt² > b·(c-x)))·(dx/dt) + j(t))·(dx/dt)² + k(t)·(t > d) = 0
where
a,b,c,d are known constants
f(t),g(t),h(t),i(t),j(t),k(t) are known functions dependent on t
x is the position
dx/dt is the velocity
d²x/dt² is the acceleration
and notice the two conditions that
i(t) is introduced in the equation if (d²x/dt² > b·(c-x))
k(t) is introduced in the equation if (t > d)
So, the problem could be solved with a similar structure in Matlab as this example:
[T,Y] = ode45(@(t,y) [y(2); 'the expression of the acceleration'], tspan, [x0 v0]);
where
T is the time vector; Y contains the position in column 1 (y(1)) and the velocity in column 2 (y(2)).
ode45 is the ODE solver, but another one could be used.
tspan,x0,v0 are known.
the expression of the acceleration means an expression for d²x/dt², but here comes the problem: it appears inside the condition for i(t) and 'outside' at the same time, multiplying (a + f(t))·(dx/dt). So the acceleration cannot simply be written in MATLAB as d²x/dt² = something
Some issues that could help:
once the condition (d²x/dt² > b·(c-x)) and/or (t > d) is satisfied, the respective term i(t) and/or k(t) will be introduced until the end of the determined time in tspan.
for the condition (d²x/dt² > b·(c-x)), the term d²x/dt² could be written as the difference of velocities, like y(2) - y(2)', if y(2)' is the velocity of the previous instant, divided by the step size defined in tspan. But I do not know how to access the previous value of the velocity while the ODE is being solved.
Thank you in advance!
First of all, you should reduce your problem to a first-order differential equation, by substituting dx/dt with a dynamical variable for the velocity.
This is something you have to do anyway for solving the ODE and this way you do not need to access the previous values of the velocity.
As for realising your conditions, just modify the function you pass to ode45 to account for this.
For this purpose you can use that d²x/dt² is in the right-hand side of your ODE.
Keep in mind though that ODE solvers do not like discontinuities, so you may want to smoothen the step or just restart the solver with a different function, once the condition is met (credit to Steve).
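For instance, a minimal sketch of a smoothed step that could replace the hard switch k(t)*(t > d) (the width w is an assumed tuning parameter, not something prescribed by the problem):
w = 0.01;                                               % transition width (assumption)
smooth_step = @(t, t_on) 1./(1 + exp(-(t - t_on)/w));   % ~0 before t_on, ~1 after
% e.g. use k(t)*smooth_step(t, d) instead of k(t)*(t > d)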
The second conditional term k(t)*(t>d) should be simple enough to implement, so I'll pass over that.
I would split up your equation into two parts:
F1(t,x,x',x'') := (a+f(t))x'x'' + g(t) + (h(t)x'+j(t))x'' + k(t)(t>d),
F2(t,x,x',x'') := F1(t,x,x',x'') + i(t)x'x'',
where primes denote time derivatives. As suggested in this other answer
[...] or just restart the solver with a different function
you could solve the ODE F1 for t \in [t0, t1] =: tspan. Next, you'd find the first time tstar where x''> b(c-x) and the values x(tstar) and x'(tstar), and solve F2 for t \in [tstar,t1] with x(tstar), x'(tstar) as starting conditions.
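A rough sketch of that restart idea (everything here is a placeholder: the constants a, b, c, d, t0, t1, x0, v0 and the handles f_, g_, h_, i_, j_, k_ stand in for your own data; the acceleration is obtained by solving the regime-specific equation for x''):
% x'' for a fixed regime: i_on = 0 before the switch, 1 after it
acc = @(t,y,i_on) -( g_(t) + ((h_(t) + i_on*i_(t))*y(2) + j_(t))*y(2)^2 ...
                     + k_(t)*(t > d) ) / ((a + f_(t))*y(2));
% Regime 1 (i inactive); stop when x'' crosses b*(c - x) from below
ev1   = @(t,y) deal(acc(t,y,0) - b*(c - y(1)), 1, 1);   % [value,isterminal,direction]
opts1 = odeset('Events', ev1);
[T1,Y1] = ode45(@(t,y) [y(2); acc(t,y,0)], [t0 t1], [x0 v0], opts1);
% Regime 2 (i active until t1), restarted from where the event fired
if T1(end) < t1
    [T2,Y2] = ode45(@(t,y) [y(2); acc(t,y,1)], [T1(end) t1], Y1(end,:));
    T = [T1; T2(2:end)];   Y = [Y1; Y2(2:end,:)];
else
    T = T1;   Y = Y1;
end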
Having said all that, the proper implementation of this should be using events, as suggested in LutzL's comment.
So, I should use something that looks like this:
function [value,isterminal,direction] = ODE_events(t,y,b,c)
value = d²x/dt² - b*(c-y(1)); % detect (d²x/dt² > b·(c-x)), HOW DO I WRITE d²x/dt² HERE?
isterminal = 0; % continue integration
direction = 0; % zero can be approached in either direction
and then include in the file (.m), where my ode is, this:
refine = 4; % I do not get exactly how this number would affect the results
options = odeset('Events',@ODE_events,'OutputFcn',@odeplot,'OutputSel',1, 'Refine',refine);
[T,Y] = ode45(@(t,y) [y(2); (1/(a + f(t))*(y(2)))*(-g(t) - ((h(t) + i(t))*(y(2)) - j(t)*(y(2))^2 - k(t)*(t > d)))], tspan, [x0 v0], options);
How do I handle i(t)? Because i(t)*(d²x/dt² > b*(c-y(1)))*(y(2)) has to be included somehow.
Thank you again

Solving an ODE when the function is given as discrete values -matlab-

I have the following ODE:
x_dot = 3*x.^0.5-2*x.^1.5 % (Equation 1)
I am using ode45 to solve it. My solution is given as a vector of dimension k x 1 (usually k = 41, which is determined by tspan).
On the other hand, I have made a model that approximates the model from (1), but in order to compare how accurate this second model is, I want to solve it (solve the second ODE) by means of ode45 as well. My problem is that this second ODE is given as discrete values:
x_dot = f(x) % (Equation 2)
f is discrete and not a continuous function like in (1). The values I have for f are:
0.5644
0.6473
0.7258
0.7999
0.8697
0.9353
0.9967
1.0540
1.1072
1.1564
1.2016
1.2429
1.2803
1.3138
1.3435
1.3695
1.3917
1.4102
1.4250
1.4362
1.4438
1.4477
1.4482
1.4450
1.4384
1.4283
1.4147
1.3977
1.3773
1.3535
1.3263
1.2957
1.2618
1.2246
1.1841
1.1403
1.0932
1.0429
0.9893
0.9325
0.8725
What I want now is to solve this second ODE using ode45. Hopefully I will get a solution very similar to the one from (1). How can I solve a discrete ODE with ode45? Is it possible to use ode45? Otherwise I can use Runge-Kutta, but I want to be fair when comparing the two methods, which means I have to solve them in the same way.
You can use interp1 to create an interpolated lookup table function:
fx = [0.5644 0.6473 0.7258 0.7999 0.8697 0.9353 0.9967 1.0540 1.1072 1.1564 ...
1.2016 1.2429 1.2803 1.3138 1.3435 1.3695 1.3917 1.4102 1.4250 1.4362 ...
1.4438 1.4477 1.4482 1.4450 1.4384 1.4283 1.4147 1.3977 1.3773 1.3535 ...
1.3263 1.2957 1.2618 1.2246 1.1841 1.1403 1.0932 1.0429 0.9893 0.9325 0.8725];
x = 0:0.25:10;
f = @(xq)interp1(x,fx,xq);
Then you should be able to use ode45 as normal:
tspan = [0 1];
x0 = 2;
xout = ode45(@(t,x)f(x),tspan,x0);
Note that you did not specify what values of x your function (fx here) is evaluated over, so I chose zero to ten. You also won't want to use the copy-and-pasted values from the command window, of course, because they only have four decimal places of accuracy. Also, note that because ode45 requires the inputs t and then x, I created a separate anonymous function using f, but f could be created with an unused t input if desired.
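For completeness, a small sketch of that variant with an unused t input (same x, fx, tspan and x0 as before; the name f2 is just to avoid clobbering the f defined above):
f2 = @(t, xq) interp1(x, fx, xq);     % t is accepted but ignored
[tout, xout] = ode45(f2, tspan, x0);  % no extra wrapper needed now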

Solving matrix Riccati differential equation in Matlab with terminal boundary condition

In the optimal control tracking problem, there is a Riccati equation of the gain matrix K(t) which is:
\dot{K}(t) = -K(t) A - A^{T} K(t) - Q + K(t) B R^{-1} B^{T} K(t)
At the final time of Tf, the terminal boundary condition K(Tf) is given.
Edit: After consideration, I think the question is how to numerically backward-integrate the gain matrix with the given terminal boundary condition and save the results in a lookup table, so that the solution over the interval [t0,Tf] is available for further computations in Simulink?
The numerical solution to this equation is given in the book Optimal Control Systems.
For example, the following is an excerpt of the technique:
E=B*inv(R)*B'; % the matrix E = BR^{-1}B'
%
% solve matrix difference Riccati equation backwards
% starting from kf to k0
% use the form P(k) = A'P(k+1)[I + EP(k+1)]^{-1}A + Q
% first fix the final condition P(k_f) = F
Pkplus1=F;
p11(N)=F(1);
p12(N)=F(2);
p21(N)=F(3);
p22(N)=F(4);
for k=N-1:-1:1,
Pk = A'*Pkplus1*inv(I+E*Pkplus1)*A+Q;
p11(k) = Pk(1);
p12(k) = Pk(2);
p21(k) = Pk(3);
p22(k) = Pk(4);
Pkplus1 = Pk;
end
For further information, you may check this book. It's great and informative.
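If you would rather keep the continuous-time Riccati equation from the question, one possible sketch (not from the book; A, B, Q, R, the terminal condition KTf and the interval [t0, Tf] are assumed to be defined already) is to integrate it backwards in time with ode45, vectorising K(t) for the solver. In a separate file riccati_rhs.m:
function dk = riccati_rhs(t, kvec, A, B, Q, R, n)
% right-hand side of Kdot = -K*A - A'*K - Q + K*B*inv(R)*B'*K, vectorised
K  = reshape(kvec, n, n);
dK = -K*A - A'*K - Q + K*B*(R\(B'*K));
dk = dK(:);
end
and then in a script:
n = size(A,1);
% ode45 accepts a decreasing tspan, so integrate from Tf back to t0
[Ts, Ks] = ode45(@(t,k) riccati_rhs(t,k,A,B,Q,R,n), [Tf t0], KTf(:));
Ts = flipud(Ts);  Ks = flipud(Ks);   % reorder so that time increases
% K at time Ts(i) is reshape(Ks(i,:), n, n); Ts and Ks can then feed a
% lookup table (or interp1) for use in Simulink.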

difference equations in MATLAB - why the need to switch signs?

Perhaps this is more of a math question than a MATLAB one, not really sure. I'm using MATLAB to compute an economic model - the New Hybrid ISLM model - and there's a confusing step where the author switches the sign of the solution.
First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t":
%% --------------------------[2] MODEL proc-----------------------------%%
% Define endogenous vars ('a' denotes t+1 values)
syms y2a pi2a ya pia va y2t pi2t yt pit vt ;
% Monetary policy rule
ia = q1*ya+q2*pia;
% ia = q1*(ya-yt)+q2*pia; %%option speed limit policy
% Model equations
IS = rho*y2a+(1-rho)*yt-sigma*(ia-pi2a)-ya;
AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va;
dum1 = ya-y2t;
dum2 = pia-pi2t;
MPs = phi*vt-va;
optcon = [IS ; AS ; dum1 ; dum2; MPs];
Edit: The equations that are going into the matrix, as they would appear in a textbook are as follows (curly braces indicate time period values, greek letters are parameters):
First equation:
y{t+1} = rho*y{t+2} + (1-rho)*y{t} - sigma*(i{t+1}-pi{t+2})
Second equation:
pi{t+1} = beta*pi{t+2} + (1-beta)*pi{t} + alpha*y{t+1} + v{t+1}
Third and fourth are dummies:
y{t+1} = y{t+1}
pi{t+1} = pi{t+1}
Fifth is simple:
v{t+1} = phi*v{t}
Moving on, the author computes the matrix A:
%% ------------------ [3] Linearization proc ------------------------%%
% Differentiation
xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ; % define vars
jopt = jacobian(optcon,xx);
% Define Linear Coefficients
coef = eval(jopt);
B = [ -coef(:,1:5) ] ;
C = [ coef(:,6:10) ] ;
% B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
A = inv(C)*B ; %(Linearized reduced form )
As far as I understand, this A is the solution to the system. It's the matrix that turns time t+1 and t+2 variables into t and t+1 variables (it's a forward-looking model). My question is essentially why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step:
B = [ -coef(:,1:5) ] ;
Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
I think the key is that the model is forward-looking, so the slopes (the partial derivatives) need to be reversed to go backward in time. One way to think of it is to say that the jacobian() function always calculates derivatives in the forward-time direction.
You've got an output vector of states called optcon = [IS;AS;dum1;dum2;MPs], and two vectors of input states [y2 pi2 y pi v]. The input vector at time t+1 is [y2a pi2a ya pia va], and the input vector at time t is [y2t pi2t yt pit vt]. These two are concatenated into a single vector for the call to jacobian(), then separated after. The same thing could have been done in two calls. The first 5 columns of the output of jacobian() are the partial derivatives of optcon with respect to the input vector at time t+1, and the second 5 columns are with respect to the input vector at time t.
In order to get the reduced form, you need to come up with two equations for optcon at time t+1. The second half of coef is just what is needed. But the first half of coef is the equation for optcon at time t+2. The trick is to reverse the signs of the partial derivatives to get linearized coefficients that take the input vector at t+1 to the output optcon at t+1.
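One way to make that concrete (a short sketch of the linearized algebra, assuming optcon = 0 holds along the solution path): writing X_a = [y2a; pi2a; ya; pia; va] and X_t = [y2t; pi2t; yt; pit; vt], the linearization of optcon = 0 is
coef(:,1:5)*X_a + coef(:,6:10)*X_t = 0.
Moving the first block to the other side of the equal sign gives
(-coef(:,1:5))*X_a = coef(:,6:10)*X_t, i.e. B*X_a = C*X_t,
which is exactly the form in the comment B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]. The minus sign is just the block changing sides, and A = inv(C)*B then maps the (t+1, t+2) variables into the (t, t+1) variables, as described in the question.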