Equation system with derivatives - MATLAB

I have to solve the following system:
x'(t) = -D(t)·x(t) + μ(s(t), p(t))·x(t)
s'(t) = D(t)·(s_in - s(t)) - Yxs·μ(s(t), p(t))·x(t)
p'(t) = -D(t)·p(t) + (a·μ(s(t), p(t)) + b)·x(t)
where
μ(s(t), p(t)) = μmax·(1 - p(t)/pm)·s(t) / (km + s(t) + s(t)^2/ki)
and Yxs, a, b, μmax, pm, km, ki are constants. Then I have to linearize the system and find the balance (equilibrium) points of this system. Any suggestion how to do it with MATLAB or Mathematica?

MATLAB can help with some steps, but there may be a few where you have to write down some equations yourself.
To start with a simple side note: MATLAB's ode45 function can simulate any system of the form dx/dt = f(x,u), regardless of how nonlinear or time-variant it might be.
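For example, here is a minimal simulation sketch of the system from the question; all constant values (and treating D as constant) are placeholder assumptions, not values from the question:
mu_max = 1; pm = 2; km = 0.5; ki = 4;       % growth-law constants (assumed values)
Yxs = 0.5; a = 1; b = 0.1;                  % yield and production constants (assumed)
D = 0.2; s_in = 5;                          % dilution rate and feed, taken constant here
mu = @(s, p) mu_max*(1 - p/pm)*s/(km + s + s^2/ki);
f = @(t, y) [ -D*y(1) + mu(y(2), y(3))*y(1);               % y(1) = x
               D*(s_in - y(2)) - Yxs*mu(y(2), y(3))*y(1);  % y(2) = s
              -D*y(3) + (a*mu(y(2), y(3)) + b)*y(1) ];     % y(3) = p
[t, y] = ode45(f, [0 50], [0.1; 5; 0]);     % initial state is also an assumption
plot(t, y)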
To linearize such a system, you derive the Jacobian matrix and substitute the linearization point into it. In principle the linearization point can be any operating point, but it is usually chosen to be a balance (equilibrium) point, i.e. a point where all state derivatives equal zero, so that the linear model describes deviations from a steady state. Note that an equilibrium is not automatically stable; the eigenvalues of the linearized system tell you about local stability.
So in MATLAB:
create symbolic variables for all states and inputs (so x(t), s(t) and p(t))
create symbolic state equations dx/dt = f(x,u) and an output equation y = g(x,u)
derive the symbolic state-space matrices A, B, C, D using the "jacobian" function
substitute the linearization point into these symbolic state-space matrices using "subs"
use double (or eval) on the symbolic matrix to retrieve a numeric matrix
Depending on the nonlinear complexity and the chosen linearization point, the linearized system might only stay within acceptable bounds of the actual system over a very tight region, so be aware of that.
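A minimal symbolic sketch of the steps above (constants as in the simulation sketch; D treated as constant so that equilibria exist):
syms x s p
mu_max = 1; pm = 2; km = 0.5; ki = 4; Yxs = 0.5; a = 1; b = 0.1; D = 0.2; s_in = 5;
mu = mu_max*(1 - p/pm)*s/(km + s + s^2/ki);
f = [ -D*x + mu*x;
       D*(s_in - s) - Yxs*mu*x;
      -D*p + (a*mu + b)*x ];
J = jacobian(f, [x; s; p]);        % symbolic A matrix
eqs = solve(f == 0, [x, s, p]);    % balance (equilibrium) points
A = double(subs(J, [x, s, p], [eqs.x(1), eqs.s(1), eqs.p(1)]));  % numeric A at one equilibrium
eig(A)                             % eigenvalues indicate local stability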

Related

Matlab ode15s: positive dx/dt, decreasing x(t)

In my script, I call the ODE solver ode15s, which solves a system of 9 ODEs. A simplified structure of the code:
[t, x] = ode15s(@odefun, tini:tend, x0, options)
...
function dx = odefun(t,x)
dx = zeros(9,1); % the solver expects a column vector
r1=... %rate equation 1, dependent on x(1) and x(3) for example
r2=... %rate equation 2
...
dx(1) = r1+r2-...
dx(2) = ...
...
dx(9) = ...
end
When reviewing the results, I was curious why the profile of one state variable was increasing over a certain range. To investigate this, I used conditional debugging within the ODE function so I could check all the rates and all the dx(i)/dt equations.
To my big surprise, I found that the differential equation of the decreasing state variable was positive. So I stepped through multiple rounds with the F5 debug function and noticed that the state variable indeed consistently decreased, while dx(i)/dt always remained positive.
Can anyone explain to me how this is possible?
It is not advisable to pause the integration in the middle like that and examine the states and derivatives. ode15s does not simply step through the solution like a naive ODE solver. It makes a number of calls to the ODE function with semi-random trial states in order to compute higher-order derivative estimates. These states are not solutions of the system; they are used internally by ode15s to get a more accurate solution later.
If you want to get the derivative of your system at particular times, first compute the entire solution and then call your ODE function with slices of that solution at the times you are interested in.
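A minimal sketch of that, using the names from the question (x0 and options as in your setup; i is the index of the state you are inspecting):
[t, x] = ode15s(@odefun, tini:tend, x0, options);
dxdt = zeros(size(x));
for k = 1:numel(t)
    dxdt(k, :) = odefun(t(k), x(k, :).');  % derivative along the actual solution
end
plot(t, dxdt(:, i))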

Simple script that computes a solution of linear ODEs giving wrong result

This is a question that involves both programming and mathematics. I'm trying to write code that computes the general solution of a system of linear ODEs x'(t) = A·x(t) + g(t) by the variation-of-constants formula (homogeneous solution plus a particular integral from t0 to t), where the Greek symbol Φ that appears in the formula is expm(A*t).
clear all
A=[-2]; %system matrix
t0=1; %initial time of simulation
tf=2; %final time of simulation
syms t x_0
x0=x_0;
hom=expm(A*t); %hom means "homogeneous solution"
hom_initialcond=hom*x0;%this is the homogeneous solution multiplied by the initial conditon
invhom=inv(hom); %this is the inverse of the greek letter at which, multiplied by the input of the system, composes the integrand of the integral
g=5*cos(2*t); %system input
integrand=invhom*g; %computation of the integrand
integral=int(integrand,t0,t); %computation of the definite integral from t0 to t, as shown by the math formula
partsol=hom*integral; %this is the particular solution
gen_sol=partsol+hom_initialcond %this is the general solution
x_0=1; %this is the initial condition
t=linspace(t0,tf); %vector of time from t0 to tf
y=double(subs(gen_sol)); %here I am evaluating my symbolic expression
plot(t,y)
The problem is that my plot of the ODE solution doesn't look right, as you can see:
The solution is wrong because the curve shown in the graph doesn't start at the initial value 1. But the shape is very similar to the plot given by the MATLAB ODE solver:
However, if I set t0=0, then the plot produced by my code and the one from the MATLAB solver are exactly equal to each other. So my code is fine for t0=0, but with any other value it goes wrong.
The general solution in terms of fundamental matrices is
x(t) = Φ(t)·Φ(t0)^(-1)·x(t0) + Φ(t)·∫[t0,t] Φ(τ)^(-1)·g(τ) dτ
or, more often seen, as
x(t) = Φ(t)·c + Φ(t)·∫[t0,t] Φ(τ)^(-1)·g(τ) dτ
But since the initial time is often taken to be zero, the inverse of the fundamental matrix is often omitted, since it is the identity for linear, constant-coefficient problems at zero (i.e., expm(zeros(n)) == eye(n)), and the c vector is then equivalent to the initial condition vector.
Swapping some of the lines around near your symbolic declaration to this
syms t x_0 c_0
hom = expm(A*t) ;
invhom = inv(hom) ;
invhom_0 = subs(invhom,t,sym(t0)) ;
c_0 = invhom_0 * x_0 ;
hom_initialcond = hom * c_0 ;
should provide the correct solution for non-zero initial time.
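As a quick sanity check, the symbolic result can be compared with a numeric solver (a sketch reusing A, t0, tf, and the initial condition from the question):
[tn, xn] = ode45(@(t, x) A*x + 5*cos(2*t), [t0 tf], 1);
hold on
plot(tn, xn, '--')   % should now lie on top of the symbolic solution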

MATLAB differential equation

I have the following differential equation, which I'm not able to solve:
r·u''(r) + (1 + r·D'(r)/D(r))·u'(r) = 0
We know the following about the equation:
D(r) is a third-degree polynomial
D'(1) = D'(2) = 0
D(2) = 2·D(1)
u(1) = 450
u'(2) = -K·(u(2) - Te)
where K and Te are constants.
I want to approximate the problem using a matrix, and I managed to solve the similar equation r·u''(r) + u'(r) = 0, with the same boundary conditions for u(1) and u'(2).
For this equation I approximated u' and u'' with central differences and used a finite-difference method between r=1 and r=2. I then placed the results in a matrix A in MATLAB, put the boundary conditions in the vector Y, and ran u=A\Y to get how the u value changes. Here's my MATLAB code for the equation I managed to solve:
clear
a=1;
b=2;
N=100;
h = (b-a)/N;
K=3.20;
Ti=450;
Te=20;
A = zeros(N+2);
A(1,1)=1;
A(end,end)=1/(2*h*K);
A(end,end-1)=1;
A(end,end-2)=-1/(2*h*K);
r=a+h:h:b;
%y(i)
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
A(2:end-1,2:end-1)=A(2:end-1,2:end-1)+diag(yi);
%y(i-1)
for i=1:1:length(r)-1
ymin(i)=r(i+1)*(1/(h^2))-1/(2*h);
end
A(3:end-1,2:end-2) = A(3:end-1,2:end-2)+diag(ymin);
%y(i+1)
for i=1:1:length(r)
ymax(i)=r(i)*(1/(h^2))+1/(2*h);
end
A(2:end-1,3:end)=A(2:end-1,3:end)+diag(ymax);
Y=zeros(N+2,1);
Y(1) =Ti;
Y(2)=-(Ti*(r(1)/(h^2)-(1/(2*h))));
Y(end) = Te;
r=[1,r];
u=A\Y;
plot(r,u(1:end-1));
My question is, how do I solve the first differential equation?
As TroyHaskin pointed out in comments, one can determine D up to a constant factor, and that constant factor cancels out in D'/D anyway. Put another way: we can assume that D(1)=1 (a convenient number), since D can be multiplied by any constant. Now it's easy to find the coefficients (done with Wolfram Alpha), and the polynomial turns out to be
D(r) = -2r^3+9r^2-12r+6
with derivative D'(r) = -6r^2+18r-12. (There is also a smarter way to find the polynomial by starting with D', which is quadratic with known roots.)
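A quick symbolic check of the stated conditions (a sketch):
syms r
D  = -2*r^3 + 9*r^2 - 12*r + 6;
Dp = diff(D, r);                   % -6*r^2 + 18*r - 12
[subs(Dp, r, 1), subs(Dp, r, 2), subs(D, r, 2) - 2*subs(D, r, 1)]
% returns [0, 0, 0]: D'(1) = D'(2) = 0 and D(2) = 2*D(1)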
I would probably use this information right away, computing the coefficient k of the first derivative:
r = a+h:h:b;
k = 1+r.*(-6*r.^2+18*r-12)./(-2*r.^3+9*r.^2-12*r+6);
It seems that k is always positive on the interval [1,2], so if you want to minimize the changes to the existing code, just replace r(i) by r(i)/k(i) in it (a vectorized sketch follows below).
By the way, instead of loops like
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
one usually does simply
yi=-r*(2/(h^2));
This vectorization makes the code more compact and can benefit the performance too (not so much in your example, where solving the linear system is the bottleneck). Another benefit is that yi is properly initialized, while with your loop construction, if yi happened to already exist and have length greater than length(r), the resulting array would have extraneous entries. (This is a potential source of hard-to-track bugs.)
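Putting the substitution and the vectorization together, the three coefficient arrays could be built like this (a sketch; rk is a name introduced here):
rk   = r./k;                           % replaces r(i) by r(i)/k(i)
yi   = -rk*(2/h^2);                    % main diagonal
ymin =  rk(2:end)*(1/h^2) - 1/(2*h);   % sub-diagonal
ymax =  rk*(1/h^2) + 1/(2*h);          % super-diagonal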

ODE System, IVP with differential initial condition

I am trying to model a system of three differential equations. This is a droplet model, parametrized by the arc length, s.
The equations are:
dx/ds = cos(theta)
dz/ds = sin(theta)
d(theta)/ds = 2*b + c*z - sin(theta)/x
The initial conditions are that x, z, and theta are all 0 at s=0. To avoid the singularity in d(theta)/ds, I also have the condition that, at s=0, d(theta)/ds=b. I have already written this code:
[s,x]=ode23(@(s,x)drpivp(s,x,p),sspan,x0);
%where p contains the two parameters and x0 contains the initial theta, x, z values.
%droplet ODE function:
function drpivp = drpivp(s,x,p)
%x(1)=theta
%x(2)=x
%x(3)=z
%b is curvature at apex
%c is capillarity constant
b=p(1);
c=p(2);
drpivp=[2*b+c*x(3)-sin(x(1))/x(2); cos(x(1)); sin(x(1))];
This yields a solution that spirals out: instead of creating one droplet profile, it creates many. Of course, here I have not initialized the equation properly, because I am not certain how to use a different equation for theta at s=0.
So the question is: how do I include the initial condition that d(theta)/ds=b instead of its usual expression at s=0? Is this possible using the built-in solvers in MATLAB?
Thanks.
There are several ways of doing this, the easiest is to simply add an if statement into your equation:
function drpivp = drpivp(s,x,p)
%x(1)=theta
%x(2)=x
%x(3)=z
%b is curvature at apex
%c is capillarity constant
b=p(1);
c=p(2);
if (s == 0)
    drpivp=[b; cos(x(1)); sin(x(1))];
else
    drpivp=[2*b+c*x(3)-sin(x(1))/x(2); cos(x(1)); sin(x(1))];
end
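A hypothetical usage sketch (the parameter values, span, and initial state below are assumptions, not values from the question):
p = [0.5, 1.0];                          % p(1) = b, p(2) = c (assumed values)
sspan = [0 5];
x0 = [0; 0; 0];                          % theta, x, z all zero at s = 0
[s, x] = ode23(@(s, x) drpivp(s, x, p), sspan, x0);
plot(x(:, 2), x(:, 3))                   % droplet profile: z against x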

How to find minimum of nonlinear, multivariate function using Newton's method (code not linear algebra)

I'm trying to do some parameter estimation and want to choose parameter estimates that minimize the square error in a predicted equation over about 30 variables. If the equation were linear, I would just compute the 30 partial derivatives, set them all to zero, and use a linear-equation solver. But unfortunately the equation is nonlinear and so are its derivatives.
If the equation were over a single variable, I would just use Newton's method (also known as Newton-Raphson). The Web is rich in examples and code to implement Newton's method for functions of a single variable.
Given that I have about 30 variables, how can I program a numeric solution to this problem using Newton's method? I have the equation in closed form and can compute the first and second derivatives, but I don't know quite how to proceed from there. I have found a large number of treatments on the web, but they quickly get into heavy matrix notation. I've found something moderately helpful on Wikipedia, but I'm having trouble translating it into code.
Where I'm worried about breaking down is in the matrix algebra and matrix inversions. I can invert a matrix with a linear-equation solver but I'm worried about getting the right rows and columns, avoiding transposition errors, and so on.
To be quite concrete:
I want to work with tables mapping variables to their values. I can write a function of such a table that returns the square error given such a table as argument. I can also create functions that return a partial derivative with respect to any given variable.
I have a reasonable starting estimate for the values in the table, so I'm not worried about convergence.
I'm not sure how to write the loop that uses an estimate (table of value for each variable), the function, and a table of partial-derivative functions to produce a new estimate.
That last is what I'd like help with. Any direct help or pointers to good sources will be warmly appreciated.
Edit: Since I have the first and second derivatives in closed form, I would like to take advantage of them and avoid more slowly converging methods like simplex searches.
The Numerical Recipes link was most helpful. I wound up symbolically differentiating my error estimate to produce 30 partial derivatives, then used Newton's method to set them all to zero. Here are the highlights of the code:
__doc.findzero = [[function(functions, partials, point, [epsilon, steps]) returns table, boolean
Where
point is a table mapping variable names to real numbers
(a point in N-dimensional space)
functions is a list of functions, each of which takes a table like
point as an argument
partials is a list of tables; partials[i].x is the partial derivative
of functions[i] with respect to 'x'
epsilon is a number that says how close to zero we're trying to get
steps is max number of steps to take (defaults to infinity)
result is a table like 'point', boolean that says 'converged'
]]
-- See Numerical Recipes in C, Section 9.6 [http://www.nrbook.com/a/bookcpdf.php]
function findzero(functions, partials, point, epsilon, steps)
  epsilon = epsilon or 1.0e-6
  steps = steps or 1/0
  assert(#functions > 0)
  assert(table.numpairs(partials[1]) == #functions,
         'number of functions not equal to number of variables')
  local equations = { }
  repeat
    if Linf(functions, point) <= epsilon then
      return point, true
    end
    for i = 1, #functions do
      local F = functions[i](point)
      local zero = F
      for x, partial in pairs(partials[i]) do
        zero = zero + lineq.var(x) * partial(point)
      end
      equations[i] = lineq.eqn(zero, 0)
    end
    local delta = table.map(lineq.tonumber, lineq.solve(equations, {}).answers)
    point = table.map(function(v, x) return v + delta[x] end, point)
    steps = steps - 1
  until steps <= 0
  return point, false
end
function Linf(functions, point)
  -- distance using L-infinity norm
  assert(#functions > 0)
  local max = 0
  for i = 1, #functions do
    local z = functions[i](point)
    max = math.max(max, math.abs(z))
  end
  return max
end
You might be able to find what you need at the Numerical Recipes in C web page. There is a free version available online. Here (PDF) is the chapter containing the Newton-Raphson method implemented in C. You may also want to look at what is available at Netlib (LINPACK et al.).
As an alternative to using Newton's method, the Simplex Method of Nelder-Mead is ideally suited to this problem and is referenced in Numerical Recipes in C.
Rob
You are asking for a function minimization algorithm. There are two main classes: local and global. Your problem is least squares so both local and global minimization algorithms should converge to the same unique solution. Local minimization is far more efficient than global so select that.
There are many local minimization algorithms but one particularly well suited to least squares problems is Levenberg-Marquardt. If you don't have such a solver to hand (e.g. from MINPACK) then you can probably get away with Newton's method:
x <- x - (hessian x)^-1 * grad x
where you compute the inverse matrix multiplied by a vector using a linear solver.
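A minimal MATLAB sketch of that update, assuming user-supplied gradfun and hessfun (both names, and the tolerance scheme, are placeholders):
x = x0;                           % initial estimate
for iter = 1:maxit
    g = gradfun(x);               % gradient vector at x
    if norm(g, inf) <= tol, break; end
    H = hessfun(x);               % Hessian matrix at x
    x = x - H \ g;                % linear solve; no explicit matrix inverse
end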
Since you already have the partial derivatives, how about a general gradient-descent approach?
Maybe you think you have a good-enough solution, but for me, the easiest way to think about this is to understand it in the 1-variable case first, and then extend it to the matrix case.
In the 1-variable case, if you divide the first derivative by the second derivative, you get the (negative) step size to your next trial point, e.g. -V/A.
In the N-variable case, the first derivative is a vector and the second derivative is a matrix (the Hessian). You multiply the derivative vector by the inverse of the second derivative, and the result is the negative step-vector to your next trial point, e.g. -V*(1/A)
I assume you can get the 2nd-derivative Hessian matrix. You will need a routine to invert it. There are plenty of these around in various linear algebra packages, and they are quite fast.
(For readers who are not familiar with this idea, suppose the two variables are x and y, and the surface is v(x,y). Then the first derivative is the vector:
V = [ dv/dx, dv/dy ]
and the second derivative is the matrix:
A = [dV/dx]
[dV/dy]
or:
A = [ d(dv/dx)/dx, d(dv/dy)/dx]
[ d(dv/dx)/dy, d(dv/dy)/dy]
or:
A = [d^2v/dx^2, d^2v/dydx]
[d^2v/dxdy, d^2v/dy^2]
which is symmetric.)
If the surface is parabolic (constant 2nd derivative), it will get to the answer in one step. On the other hand, if the 2nd derivative is far from constant, you could encounter oscillation. Cutting each step in half (or by some fraction) should keep it stable.
If N == 1, you'll see that it does the same thing as in the 1-variable case.
Good luck.
Added: You wanted code:
double X[N];
// Set X to initial estimate
while(!done){
    double V[N];    // 1st derivative "velocity" vector
    double A[N*N];  // 2nd derivative "acceleration" matrix
    double A1[N*N]; // inverse of A
    double S[N];    // step vector
    CalculateFirstDerivative(V, X);
    CalculateSecondDerivative(A, X);
    // A1 = 1/A
    GetMatrixInverse(A, A1);
    // S = V*(1/A)
    VectorTimesMatrix(V, A1, S);
    // if S is small enough, stop
    // X -= S
    VectorMinusVector(X, S, X);
}
My opinion is to use a stochastic optimizer, e.g., a Particle Swarm method.