Simulations hang when running ode45 - matlab

I'm supposed to make a model for an algae population. Here's the code I have so far (all written from examples online). When I run Solve_algaepop, it just hangs for a long time.
Any ideas why? Is there any obvious thing I'm doing wrong? The equations are from a research paper.
This is Solve_algaepop.m. In the equations for r1 and r2, P10 and P20 are supposed to be the values
P1 = x(1) and P2 = x(2) defined in algaepop_model.m. I don't know how to access the values when I'm in Solve_algaepop.m
% Initial conditions
P10 = 560000; %from Chattopadhyay; estimated from graph
P20 = 250000; %same as above
Z0 = 280000; %
N0 = 0.6; %from Edwards
%some variables that the expressions of the parameters use
lambda = .6;
mu = .035;
k = 0.05;
%define parameters (start with estimates from Edwards paper):
r1 = (N0/(.03+N0))*((.2*P10)/(.2 + .4*P10));
r2 = (N0/(.03+N0))*((.2*P20)/(.2 + .4*P20));
a = Z0*((lambda*P10^2)/(mu^2 + P10^2));%G1: zooplankton growth function from Edwards paper
% m1 = .15; %r in Edwards paper
m1 = .075; % q in Edwards
m2 = .15;% r in Edwards paper
m3 = .15; % r in Edwards paper
d = 0.5;
cN = k;%*(N-N0);
par = [r1 r2 a m1 m2 m3 d cN]; % Creates vector of parameter values to pass to the ode solver
tspan = 0:1:300; %(Note: can also use the function linspace)
x0 = [P10 P20 Z0 N0]; % Creates vector of initial conditions
[t,x] = ode45(@algaepop_model,tspan,x0,[],par);
plot(t,x)
And here is algaepop_model.m
function dxdt = algaepop_model(t,x,par)
P1 = x(1);
P2 = x(2);
Z = x(3);
N = x(4);
r1 = par(1);
r2 = par(2);
a = par(3);
m1 = par(4);
m2 = par(5);
m3 = par(6);
d = par(7);
cN = par(8);
dxdt = zeros(4,1);
dxdt(1) = r1*N*P1 - m3*P1 - a*P1*Z;
dxdt(2) = r2*N*P2 - a*P2*Z - m2*P2;
dxdt(3) = a*P2*Z + a*P1*Z - m1*Z;
dxdt(4) = d*m2*P2 + d*m1*Z + d*m3*P1 + cN - r2*N*P2 - r1*N*P1;
end
Thanks for the help.

Let's debug. One of the simplest things that you can do is print out t and x inside of your integration function, algaepop_model. As soon as you do this, you'll probably notice what's happening: ode45 is taking extremely small steps. They're on the order of 1.9e-9. With steps that small, it will take forever to simulate to t = 300 (and even longer if you print stuff out on each step).
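As a minimal sketch (a throwaway debugging line, not part of the model), you could put something like this at the top of the body of algaepop_model and watch how slowly t advances; remove it once you're done:
fprintf('t = %g\n', t);  % print the current integration time at every function evaluation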
This might be caused by a poor choice of initial conditions, poor scaling or dimensionalization, a typo resulting in the wrong equations, or simply that you're using an inappropriate solver (and/or tolerances) for the particular problem. I can't really address the first two situations and must assume that you don't have any errors. Thus, in this case you have what is effectively a stiff system and ode45 is not a particularly good choice in such cases. Simply changing the solver to ode15s results in the following plot almost immediately:
As you can see, there are very large changes over a short period of time in the initial portion of the plot. If you zoom in, you'll see that the huge spike happens within the first unit of time (you might output more time points or just let tspan = [0 300]). Some state variables are changing rapidly while others are varying more gradually. Such high frequencies and differences in time scales are the hallmarks of stiff systems. I'd suggest that, in addition to confirming that your code is correct, you also try adjusting the integration tolerances via odeset. Make sure that tighter tolerances produce qualitatively similar results. You can also try the other stiff solvers in the ODE suite if you like.
Lastly, it's more efficient and up-to-date to pass your parameters via the function handle itself rather than how you're doing it. Here's how:
[t,x] = ode15s(@(t,x)algaepop_model(t,x,par),tspan,x0);
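And a minimal sketch of tightening the tolerances via odeset, as suggested above (the values here are illustrative, not tuned for your model; the MATLAB defaults are RelTol = 1e-3 and AbsTol = 1e-6):
opts = odeset('RelTol',1e-6,'AbsTol',1e-8);                 % tighter than the defaults
[t,x] = ode15s(@(t,x)algaepop_model(t,x,par),tspan,x0,opts);
plot(t,x)
If tightening the tolerances further does not change the plot qualitatively, you can be more confident the result is a property of the model rather than the solver.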

Related

Applying Runge-Kutta for coupled equations

So I have two second-order nonlinear ODEs, and after applying the state-space theorem I have four first-order ODEs.
I'm trying to apply RK4, but I think I'm doing it wrong because the graphs diverge.
I'm having a hard time applying it because the equations are coupled.
These are the main equations. L and Fa also have state-space variables in them, but it doesn't make a difference for my question.
Equations image
After applying the state-space theorem, these are my equations:
f1 = @(x2) x2; % = x1'
f2 = @(x1, x2, x3) K/m*(l_0/sqrt((X_d(t, t1, t2, a_x0, X_d0, X_d0_tag)-x1).^2+(Z_d(t, t0, a_z0, Z_d0, Z_d0_tag)-x3).^2)-1)*(x1-X_d(t, t1, t2, a_x0, X_d0, X_d0_tag)) ...
- 0.5*Rho*A*C_d*(x2-Interpolation(Z_d(t, t0, a_z0, Z_d0, Z_d0_tag), data)).^2*sgn(x2, Z_d(t, t0, a_z0, Z_d0, Z_d0_tag), data)/m;
% = x2'
f3 = @(x1) x1; % = x3'
f4 = @(x1, x3) K/m*(l_0/sqrt((X_d(t, t1, t2, a_x0, X_d0, X_d0_tag)-x1).^2+(Z_d(t, t0, a_z0, Z_d0, Z_d0_tag)-x3).^2)-1)*(x3-Z_d(t, t0, a_z0, Z_d0, Z_d0_tag))-g;
% = x4'
Then I tried to apply RK4. Heads up, it might be complete nonsense. I also applied initial conditions, but I don't want to make it messy.
h=0.2; % step size
t_array = 0:h:10;
w = zeros(1,length(t_array));
x = zeros(1,length(t_array));
y = zeros(1,length(t_array));
z = zeros(1,length(t_array));
for i=1:(length(t_array)-1) % calculation loop
t = 0 +h*i; % A parameter needed for the interpolation in f2
k_1 = f1(x(i));
k_2 = f1(x(i)+0.5*h*k_1);
k_3 = f1(x(i)+0.5*h*k_2);
k_4 = f1(x(i)+k_3*h);
x(i+1) = x(i) + (1/6)*(k_1+2*k_2+2*k_3+k_4)*h;
disp(x(i+1));
m_1 = f3(z(i));
m_2 = f3(z(i)+0.5*h*k_1);
m_3 = f3(z(i)+0.5*h*k_2);
m_4 = f3(z(i)+k_3*h);
z(i+1) = z(i) + (1/6)*(m_1+2*m_2+2*m_3+m_4)*h;
n_1 = f2(x(i), z(i), w(i));
n_2 = f2(x(i), z(i) ,w(i)+0.5*h*k_1);
n_3 = f2(x(i), z(i) ,w(i)+0.5*h*k_2);
n_4 = f2(x(i), z(i) ,w(i)+k_3*h);
w(i+1) = w(i) + (1/6)*(k_1+2*k_2+2*k_3+k_4)*h;
l_1 = f4(x(i), z(i));
l_2 = f4(x(i), z(i));
l_3 = f4(x(i), z(i));
l_4 = f4(x(i), z(i));
y(i+1) = y(i) + (1/6)*(k_1+2*k_2+2*k_3+k_4)*h;
end
As I said, my graphs are diverging (they shouldn't be), so I suspect my code is wrong.
Please help me fix the algorithm.
Thank you very much!
What are X_d and Z_d? Why don't they have differential equations associated with them? Can you give more details about this?
Also, it would help if you used vectors for the state variable, and had one single function handle that produced a vector output. That cuts down on the code you write, and allows you to compare your results with ode45( ) since it can take the same function handle as input.
It appears that the fundamental flaw in your code is that although you state the equations are coupled, you are attempting to integrate them piecemeal. E.g., you do the RK4 scheme for one variable, using k_1, k_2, k_3, k_4 to propagate x(i) to x(i+1). But during this process all the other coupled variables z(i), w(i), and y(i) remain static in your code. This is a flaw. All of the coupled variables need to propagate at the same time through those intermediate calculations. I.e., you need to generate all of the k_1, m_1, n_1, and l_1 first, then using those results calculate all of the k_2, m_2, n_2, and l_2. Then using those results calculate all of the k_3, m_3, n_3, and l_3. Then using all of those results calculate k_4, m_4, n_4, and l_4. And finally use all of this to propagate all your variables forward one step.
This is where a vector function handle can greatly help you. By making one function handle that takes a vector input (each element of the vector representing one of your variables) and returns a vector output, you can boil your code down to writing only one set of RK4 equations that automatically propagates all the variables forward at the same time, because they are all part of the same vector. This will also make your code easier to debug.
Finally, you are mixing variables and derivatives. x's should go with k's, z's should go with m's, w's should go with n's, and y's should go with l's. In particular, the l's don't even have the RK4 scheme implemented.
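To make that concrete, here is a minimal sketch of a vector-valued RK4 loop. The names f and y0 are placeholders: f is assumed to be a single handle @(t,y) that returns a column vector of all four derivatives (built from your f1...f4), and y0 a column vector of initial conditions.
% Generic RK4 for a coupled system; f = @(t,y) ... returning a column vector
% and y0 (column vector of initial conditions) are assumed to be defined.
h = 0.2;                                  % step size
t_array = 0:h:10;
Y = zeros(numel(y0), numel(t_array));     % each column is the full state at one time
Y(:,1) = y0;
for i = 1:numel(t_array)-1
    ti = t_array(i);
    yi = Y(:,i);
    k1 = f(ti,       yi);
    k2 = f(ti + h/2, yi + h/2*k1);        % all coupled variables advance together
    k3 = f(ti + h/2, yi + h/2*k2);
    k4 = f(ti + h,   yi + h*k3);
    Y(:,i+1) = yi + h/6*(k1 + 2*k2 + 2*k3 + k4);
end
The same handle f can then be passed to ode45 as a cross-check on the hand-written stepper, as mentioned above.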

SIR model using fsolve and Euler 3BDF

Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB and a three-point backward Euler (3BDF) scheme. I'm really confused about how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme, but I'm not sure how to proceed with fsolve to solve the system of nonlinear ODEs. The SIR model and the 3BDF scheme were given as images in the assignment.
clc
clear all
gamma=1/7;
beta=1/3;
ode1= @(R,S,I) -(beta*I*S)/(S+I+R);
ode2= @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3= @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f,x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs=x0:h:xmax;
y=zeros(1,length(xs));
y(1)=y0;
yb(1)=y0+f(x0,y0)*h;
for i=1:length(xs)-1
R =R0;
y1(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - R, y1(i-1,:)+2*h*F(i,:))
S = S0;
y2(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - S, y2(i-1,:)+2*h*F(i,:))
I= I0;
y3(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - I, y3(i-1,:)+2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as the initial guess, with F(i,:) = f(t(i),y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index.
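For illustration, a minimal MATLAB sketch of assembling that loop (assuming f is a handle @(t,y) returning the SIR right-hand side as a 1-by-3 row vector, t a uniform time grid with step h, and y0 = [S0, I0, R0] a row vector):
% 3-point BDF with fsolve; the second starting value comes from an explicit Euler step.
N = numel(t);
y = zeros(N, numel(y0));
y(1,:) = y0;
y(2,:) = y0 + h*f(t(1), y0);                  % Euler starter
opts = optimoptions('fsolve','Display','off');
for i = 2:N-1
    G = (4*y(i,:) - y(i-1,:))/3;              % constant part of the implicit equation
    guess = y(i-1,:) + 2*h*f(t(i), y(i,:));   % midpoint-type predictor as initial guess
    y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - G, guess, opts);
end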
Using all that and a reference solution from a higher-order standard solver produces the following error graphs for the components,
where one can see that the first-order error of the constant first step results in a first-order global error, while a second-order error in the first step, using the Euler method, results in a clear second-order global error.
Implement the method in general terms
from scipy.optimize import fsolve
import numpy as np

def BDF2(f,t,y0,y1):
    N, h = len(t)-1, t[1]-t[0];
    y = (N+1)*[np.asarray(y0)];
    y[1] = y1;
    for i in range(1,N):
        t1, G = t[i+1], (4*y[i]-y[i-1])/3
        y[i+1] = fsolve(lambda u: u-2*h/3*f(t1,u)-G, y[i-1]+2*h*f(t[i],y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
Set up the model to be solved
gamma=1/7;
beta=1/3;
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P
def f(t,y): S,I,R = y; trns = beta*S*I/(S+I+R); recv=gamma*I; return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is the same as above, but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt

fig,ax = plt.subplots(3,2,figsize=(12,6))
for M in [16, 8, 4]:
    t = tg[::M];
    h = t[1]-t[0];
    y = BDF2(f,t,y0,y0)
    e = (y-yg[::M])
    for k in range(3): ax[k,0].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
    y = BDF2(f,t,y0,y0+h*f(0,y0))
    e = (y-yg[::M])
    for k in range(3): ax[k,1].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
for k in range(3):
    for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant");
ax[0,1].set_title("Errors: first step Euler")

Maximization under non-linear constraints in MATLAB using fmincon

I am trying to maximize a function subject to 14 constraints, with 15 variables.
Some of them are linear, some are not. They are all equations (no inequalities).
I have tried using fsolve and solve with a Lagrangian approach (ended up with 29 equations and 30 variables) - that didn't go that well...
I moved on to fmincon. I've set up a script with an objective function in a file named objectfun.m:
function f = objectfun(x,I,rho)
% SWAPPING VARIABLE NAMES FOR READABILITY:
w = x(1);
t = x(2);
beta = x(3);
r = x(4);
% VALUE FUNCTION TO BE MINIMIZED:
f = -(rho*w*(1-t)+beta+I*(1+(1-t)*r));
I have set up another file with the constraints:
function [c, ceq] = confun(x,...
Bs,Bu,sigma_s,sigma_u,rho,c_bar,I,A,alpha)
% SWAPPING VARIABLE NAMES FOR READABILITY:
w = x(1);
t = x(2);
beta = x(3);
r = x(4);
vum = x(5);
vsm = x(6);
ms = x(7);
cstar = x(8);
zs = x(9);
zu = x(10);
H = x(11);
K = x(12);
Y = x(13);
L = x(14);
mu = x(15);
% INEQUALITY CONSTRAINTS:
c = [];
% EQUALITY CONSTRAINTS:
ceq = [ms-Bs*vsm^sigma_s; % 1
mu-Bu*vum^sigma_u; % 2
(1+r*(1-t))*cstar-(1-rho)*w*(1-t); % 3
zs-cstar/c_bar; % 4
zu-1+zs; % 5
H-(cstar^2)/(2*c_bar); % 6
I-K-H; % 7
A*(K^alpha)*(L^(1-alpha))-Y; % 8
L-(zs+rho*zu+ms+rho*mu); % 9
w-(1-alpha)*A*(K/L)^alpha; % 10
r-alpha*A*(K/L)^(alpha-1); % 11
t*Y-beta*(1+ms+mu); % 12
vum-rho*w*(1-t)-beta; % 13
vsm-w*(1-t)-beta]; % 14
And a main script:
%% Parameters:
Bs =0.0;
Bu =0.0;
sigma_s = 1.5;
sigma_u = 1.5;
rho = 0.33;
c_bar = 6;
I = 3;
A = 1;
alpha = 0.33;
%% Numeric Solution:
x0 = 0.5*ones(length(var_names),1);
objective = @(x)objectfun(x,I,rho);
constraints = @(x)confun(x,...
Bs,Bu,sigma_s,sigma_u,rho,c_bar,I,A,alpha);
options = optimoptions(@fmincon);
[s,fval] = fmincon(objective,x0,[],[],[],[],[],[],constraints,options);
The question:
The solution is nonsense. I went over the equations many times - let's assume they're good! (please... [= )
Did I choose the right application for my problem (fmincon)?
Is there a problem with the structure or in the code specifically?
Do you have any suggestions on how to make my life a bit easier?
I plan on iterating on the solution using different parameter values. Is there any way of verifying the solution, for a given set, just to see if the solution is correct?
Thanks in advance!!!
Here are some things you can do:
Provide reasonable bounds on all variables. This prevents the solver from going to areas where things do not make sense and where the functions (and gradients) cannot be evaluated (see the sketch after this list).
Provide a better starting point.
Provide gradients. Unless you have a system that can do automatic differentiation it is often important to provide correct and exact gradients. (Some solvers may also require second derivatives).
If you have a known solution for a given data set, pass it in as the initial point and see what happens. Further, you can fix this solution by setting the lower and upper bounds equal to the known solution and then see what happens.
Try a more powerful modeling system/solver. E.g., GAMS with CONOPT is often used in economic modeling; it provides automatic differentiation and often gives better feedback if something goes wrong. This is a small problem, so you should be able to run it with the free student/demo version (http://gams.com/download/). The other suggestions can be applied there as well.
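As a sketch of the first suggestion, reusing the objective, x0, and constraints already defined in the main script: the bound values below are purely illustrative placeholders, not values derived from the model; choose bounds that make economic sense (e.g. a tax rate t in [0,1]).
% Illustrative only: finite bounds and diagnostic options for the existing fmincon call.
lb = zeros(15,1);                       % hypothetical lower bounds, one per variable
ub = 10*ones(15,1);                     % hypothetical upper bounds
options = optimoptions(@fmincon, ...
    'Algorithm','interior-point', ...
    'Display','iter');                  % print iterations to see where the solver struggles
[s,fval,exitflag] = fmincon(objective,x0,[],[],[],[],lb,ub,constraints,options);
% exitflag <= 0 means fmincon did not reach a feasible, locally optimal point.
Checking exitflag (and the constraint violation at the returned point) is also a quick way to verify a solution for a given parameter set.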

How to implement a guess-correcting algorithm when solving a BVP with the shooting method?

I have a boundary value problem (specified in the picture below) that is supposed to be solved with the shooting method. Note that I am working in MATLAB on this question. I'm pretty sure that I have correctly rewritten the 2nd-order differential equation as a system of 1st-order differential equations, and that I have correctly approximated the missing value of the derivative at x=0 using the secant method, but please verify this to be sure.
I have attempted to solve this BVP with the shooting method, and my current code for the problem is as follows:
clear, clf;
global I;
I = 0.1; %Strength of the electricity on the wire
L = 0.400; %The length of the wire
xStart = 0; %Start point
xSlut = L/2; %End point
yStart = 10; %Function value when x=0
err = 5e-10; %Error tolerance in calculations
g1 = 128; %First guess on y'(x) when x=0
g2 = 89; %Second guess on y'(x) when x=0
state = 0;
X = [];
Y = [];
[X,Y] = ode45(@calcWithSec,[xStart xSlut],[yStart g1]');
F1 = Y(end,2);
iter = 0;
h = 1;
currentY = Y;
while abs(h)>err && iter<100
[X,Y] = ode45(@calcWithSec,[xStart xSlut],[yStart g2]');
currentY = Y;
F2 = Y(end,2);
Fp = (g2-g1)/(F2-F1);
h = -F2*Fp;
g1 = g2;
g2 = g2 + h;
F1 = F2;
iter = iter + 1;
end
if iter == 100
disp('No convergence')
else
plot(X,Y(:,1))
end
calcWithSec:
function fp = calcWithSec(x,y)
alpha = 0.01; %Constant
beta = 10^13; %Constant
global I;
fp = [y(2) alpha*(y(1)^4)-beta*(I^2)*10^(-8)*(1+y(1)/32.5)]';
end
My problem with this program is that for different values of I in the differential equation, I get strange curves that make no physical sense. For instance, the only "good" graph I get is for I=0.1. The graph for that case looks like this:
But when I set I=0.2, I get a graph that looks like this:
Again, physically and according to the given assignment, this should not happen, since the wire gets hotter the closer you get to its middle. I want to be able to compute solutions for all I between 0.1 and 20, where I is the strength of the electricity.
I have a theory that it has something to do with my guess values, and so my question is whether it is possible to implement an algorithm that forces the program to adjust the guess values so that I get a graph that is "correct" in a physical sense. Or is it impossible to achieve this? If so, please explain why.
I have struggled with this assignment for many days in a row now, so any help I can get with it is worth gold to me.
Thank you all in advance for helping me out of this!

Regression in Matlab assuming Student's t Distributed Error Terms

I see that it is possible to use regress/regstats for OLS, and I found an online implementation of L1 regression (Laplace), but I can't quite figure out how to implement t-distributed error terms. I have tried maximizing the log-likelihood of the residuals, but I don't seem to be coming up with the right answer.
classdef student < handle
methods (Static)
% Find the sigma that maximizes the log-likelihood function given a B
function s = findLonS(r,df)
n = length(r);
% if x ~ t location, scale distribution with df
% degrees of freedom, then (x-u)/sigma ~ t(df)
f = @(s) -sum(log(tpdf(r ./ s, df)));
s = fminunc(f, (r'*r)/n);
end
function B = regress(X,Y,df)
[n,m] = size(X);
bInit = ones(m, 1);
r = (Y - X*bInit);
s = student.findLonS(r, df);
% if x ~ t location, scale distribution with df
% degrees of freedom, then (x-u)/sigma ~ t(df)
f = @(b) -sum(log(tpdf((Y - X*b) ./ s, df)));
options = optimset('MaxFunEvals', 10000, 'TolX', 1e-16, 'TolFun', 1e-16);
[B, fval] = fminunc(f, bInit, options);
end
end
end
Compared to an R implementation (which I know has been tested and is accurate), the solutions I am getting are wrong.
Any suggestions for fixing this, or ideas where I could find a solution already available?
My guess would be that you have to adjust the scale s for the given b. This would either mean doing something like alternately optimizing b, then adjusting s, and optimizing b again, or possibly rewriting your objective as
f = @(b)(-sum(log(tpdf((Y-X*b) ./ student.findLonS(Y-X*b,df),df))));
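For example, a rough sketch of the alternating variant, reusing findLonS from the question (it assumes X, Y, and df are in the workspace and the student class above is on the path):
% Alternate between fitting B for a fixed scale s and refitting s for the new residuals.
B = ones(size(X,2), 1);
opts = optimset('MaxFunEvals', 10000, 'TolX', 1e-10, 'TolFun', 1e-10);
for iter = 1:50
    s = student.findLonS(Y - X*B, df);            % scale for the current coefficients
    f = @(b) -sum(log(tpdf((Y - X*b) ./ s, df)));
    Bnew = fminunc(f, B, opts);
    if norm(Bnew - B) < 1e-8                      % stop once the coefficients settle
        B = Bnew;
        break
    end
    B = Bnew;
end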