In my script, I call the ODE solver ode15s, which solves a system of 9 ODEs. A simplified structure of the code:
[t, x] = ode15s(@odefun, tini:tend, x0, options)
...
function dx = odefun(t,x)
r1=... %rate equation 1, dependent on x(1) and x(3) for example
r2=... %rate equation 2
...
dx(1) = r1+r2-...
dx(2) = ...
...
dx(9) = ...
end
When reviewing the results, I was curious why the profile of one state variable was decreasing over a certain range. To investigate this, I used conditional breakpoints within the ODE function so I could check all the rates and all the dx(i)/dt equations.
To my big surprise, I found that the derivative of the decreasing state variable was positive. So I stepped through multiple rounds with F5 in the debugger and noticed that the state variable indeed consistently decreased, while its dx(i)/dt always remained positive.
Can anyone explain to me how this is possible?
It is not advisable to pause the integration in the middle like that and examine the states and derivatives. ode15s does not simply step through the solution like a naive ODE solver: it makes a bunch of calls to the ODE function with semi-random trial states in order to compute higher-order derivatives. These states are not solutions to the system; they are used internally by ode15s to construct a more accurate solution later.
If you want to get the derivative of your system at particular times, first compute the entire solution and then call your ODE function with slices of that solution at the times you are interested in.
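For example, a minimal sketch of that post-processing (reusing odefun and the solution [t, x] from above):

dx = zeros(size(x));
for k = 1:numel(t)
    dx(k,:) = odefun(t(k), x(k,:).');   % derivative evaluated on the actual solution
end

These dx values are the derivatives along the computed solution, and they will be consistent with the trends you see in x.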
I'm solving a pair of non-linear equations for each voxel in a dataset of a ~billion voxels using fsolve() in MATLAB 2016b.
I have done all the 'easy' optimizations that I'm aware of. Memory localization is OK, I'm using parfor, the equations are in fairly numerically simple form. All discontinuities of the integral are fed to integral(). I'm using the Levenberg-Marquardt algorithm with good starting values and a suitable starting damping constant, it converges on average with 6 iterations.
I'm now at ~6ms per voxel, which is good, but not good enough. I'd need an order of magnitude reduction to make the technique viable. There are only a few things I can think of improving before starting to hammer away at accuracy:
The splines in the equations are there for quick sampling of complicated expressions. There are two per equation; one sits inside the 'complicated nonlinear equation'. They represent two functions: one that has a large number of terms but is smooth and has no discontinuities, and one that approximates a histogram drawn from a spectrum. I'm using griddedInterpolant() as the editor suggested.
Is there a faster way to sample points from pre-calculated distributions?
parfor i = 1:numel(I1)
    sols = fsolve(@(x) equationPair(x, input1, input2, ...
                  6 static inputs), fsolve options);
    output1(i) = sols(1);
    output2(i) = sols(2);
end
When calling fsolve, I'm using the 'parametrization' suggested by Mathworks to pass in the variables. I have a nagging feeling that defining an anonymous function for each voxel is taking a large slice of the time at this point. Is this true: is there a relatively large overhead in defining the anonymous function again and again? Do I have any way to vectorize the call to fsolve?
There are two input variables which keep changing; all of the other input variables stay static. I need to solve one equation pair for each input pair, so I can't make it one huge system and solve it all at once. Do I have any options other than fsolve for solving pairs of nonlinear equations?
If not, some of the static inputs are fairly large. Is there a way to keep the inputs as persistent variables using MATLAB's persistent, and would that improve performance? I have only seen examples of how to load persistent variables; how could I make it so that they are input only once and future function calls are spared the (assumedly large) overhead of passing the large inputs?
EDIT:
The original equations in full form are (written out in text form):

I_high = integral from 0 to E_high of S_high(E) * exp( x_1/E^3 + x_2*f_KN(E) ) dE
I_low  = integral from 0 to E_low  of S_low(E)  * exp( x_1/E^3 + x_2*f_KN(E) ) dE

Everything else is known; I am solving for x_1 and x_2. f_KN was approximated by a spline. S_low(E) and S_high(E) are splines approximating histograms drawn from a spectrum (the histogram figures are omitted here).
So, there's a few things I thought of:
Lookup table
Because the integrals in your function do not depend on any of the parameters other than x, you could make a simple 2D-lookup table from them:
% assuming a simple (square) range here; adjust as needed
[x1, x2] = meshgrid( linspace(0, xmax, N) );
LUT_high = zeros(size(x1));
LUT_low  = zeros(size(x1));
for ii = 1:N
    LUT_high(:,ii) = integral(@(E) Fhi(E, x1(1,ii), x2(:,ii)), ...
                              0, E_high, ...
                              'ArrayValued', true);
    LUT_low(:,ii)  = integral(@(E) Flo(E, x1(1,ii), x2(:,ii)), ...
                              0, E_low, ...
                              'ArrayValued', true);
end
where Fhi and Flo are helper functions that evaluate the integrands, vectorized here over the vector x2 for a scalar x1. Set N as high as memory will allow.
Those lookup tables you then pass as parameters to equationPair() (which allows parfor to distribute the data). Then just use interp2 in equationPair():
F(1) = I_high - interp2(x1,x2,LUT_high, x(1), x(2));
F(2) = I_low - interp2(x1,x2,LUT_low , x(1), x(2));
So, instead of recomputing the whole integral every time, you evaluate it once for the expected range of x, and reuse the outcomes.
You can specify the interpolation method used, which is linear by default. Specify cubic if you're really concerned about accuracy.
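For example, with the same grid variables as above:

F(1) = I_high - interp2(x1, x2, LUT_high, x(1), x(2), 'cubic');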
Coarse/Fine
Should the lookup table method not be possible for some reason (memory limitations, in case the possible range of x is too big), here's another thing you could do: split the whole procedure into 2 parts, which I'll call coarse and fine.
The intent of the coarse method is to improve your initial estimates really quickly, but perhaps not so accurately. The quickest way to approximate that integral by far is via the rectangle method:
do not approximate S with a spline; just use the original tabulated data, so S_high/low = [S_high/low(E_0), S_high/low(E_1), ..., S_high/low(E_high/low)]
At the same values for E as used by the S data (E0, E1, ...), evaluate the exponential at x:
Elo = linspace(0, E_low, numel(S_low)).';
integrand_exp_low = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));
Ehi = linspace(0, E_high, numel(S_high)).';
integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
then use the rectangle method:
F(1) = I_low  - (S_low  * integrand_exp_low ) * (Elo(2) - Elo(1));
F(2) = I_high - (S_high * integrand_exp_high) * (Ehi(2) - Ehi(1));
Running fsolve like this for all I_low and I_high will then have improved your initial estimates x0 probably to a point close to "actual" convergence.
Alternatively, instead of the rectangle method, you could use trapz (the trapezoidal method). A tad slower, but possibly a bit more accurate.
Note that if (Elo(2) - Elo(1)) == (Ehi(2) - Ehi(1)) (step sizes are equal), you can further reduce the number of computations. In that case, the first N_low elements of the two integrands are identical, so the values of the exponentials will only differ in the N_low + 1 : N_high elements. So then just compute integrand_exp_high, and set integrand_exp_low equal to the first N_low elements of integrand_exp_high.
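A sketch of that shortcut, assuming equal step sizes so that Elo equals the first N_low entries of Ehi:

integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));   % compute the longer integrand once
integrand_exp_low  = integrand_exp_high(1:N_low);          % reuse its first N_low elements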
The fine method then uses your original implementation (with the actual integral()s), but then starting at the updated initial estimates from the coarse step.
The whole objective here is to try and bring the total number of iterations needed down from about 6 to less than 2. Perhaps you'll even find that the trapz method already provides enough accuracy, rendering the whole fine step unnecessary.
Vectorization
The rectangle method in the coarse step outlined above is easy to vectorize:
% (uses R2016b implicit expansion rules)
Elo = linspace(0, E_low, numel(S_low));      % row vector
integrand_exp_low  = exp(x(:,1)./Elo.^3 + x(:,2).*fKN(Elo));    % N-by-numel(S_low)
Ehi = linspace(0, E_high, numel(S_high));    % row vector
integrand_exp_high = exp(x(:,1)./Ehi.^3 + x(:,2).*fKN(Ehi));    % N-by-numel(S_high)
F = [I_high_vector - (integrand_exp_high * S_high.') * (Ehi(2) - Ehi(1))
     I_low_vector  - (integrand_exp_low  * S_low.' ) * (Elo(2) - Elo(1))];
trapz also works on matrices; it will integrate over each column in the matrix.
You'd then call equationPair() with x0 = [x01; x02; ...; x0N], and fsolve will converge to [x1; x2; ...; xN], where N is the number of voxels and each x0 is 1×2 ([x(1) x(2)]), so x0 is N×2.
parfor should be able to slice all of this fairly easily over all the workers in your pool.
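A rough sketch of such a vectorized call (equationPairVec, staticInputs, and x0_all are placeholder names for your own):

% x0_all is N-by-2: one row of initial estimates [x(1) x(2)] per voxel
opts  = optimoptions('fsolve', 'Algorithm', 'levenberg-marquardt');
x_all = fsolve(@(x) equationPairVec(x, I_high_vector, I_low_vector, staticInputs), ...
               x0_all, opts);
% x_all is again N-by-2: column 1 holds x(1) and column 2 holds x(2) for every voxel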
Similarly, vectorization of the fine method should also be possible; just use the 'ArrayValued' option to integral() as shown above:
F = [I_high_vector - integral(@(E) S_high(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)), ...
                              0, E_high, ...
                              'ArrayValued', true)
     I_low_vector  - integral(@(E) S_low(E)  .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)), ...
                              0, E_low, ...
                              'ArrayValued', true)
    ];
Jacobian
Taking derivatives of your function is quite easy: differentiating the integrand w.r.t. x_1 just brings down an extra factor 1/E^3, and differentiating w.r.t. x_2 brings down an extra factor f_KN(E). Your Jacobian will then have to be a 2×2 matrix
J = [dF(1)/dx(1) dF(1)/dx(2)
dF(2)/dx(1) dF(2)/dx(2)];
Don't forget the leading minus sign (F = I_hi/lo - g(x) → dF/dx = -dg/dx)
Using one or both of the methods outlined above, you can implement a function to compute the Jacobian matrix and pass this on to fsolve via the 'SpecifyObjectiveGradient' option (via optimoptions). The 'CheckGradients' option will come in handy there.
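As a rough sketch of what that could look like for the rectangle-method objective from the coarse step (per voxel; the function and argument names are placeholders, and fKN stands for your spline of f_KN):

function [F, J] = equationPairRect(x, I_high, I_low, S_high, S_low, Ehi, Elo, fKN)
% x: [x(1) x(2)] for one voxel; S_high/S_low: row vectors of tabulated spectrum values
% Ehi/Elo: column vectors of the corresponding energies
ehi = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));   % integrand exponential, high-energy grid
elo = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));   % integrand exponential, low-energy grid
dEh = Ehi(2) - Ehi(1);
dEl = Elo(2) - Elo(1);
F = [I_high - (S_high * ehi) * dEh
     I_low  - (S_low  * elo) * dEl];
if nargout > 1    % Jacobian, including the leading minus signs
    J = [-(S_high * (ehi./Ehi.^3)) * dEh,  -(S_high * (ehi.*fKN(Ehi))) * dEh
         -(S_low  * (elo./Elo.^3)) * dEl,  -(S_low  * (elo.*fKN(Elo))) * dEl];
end
end

You would then enable it with optimoptions('fsolve', 'SpecifyObjectiveGradient', true, 'CheckGradients', true), and switch 'CheckGradients' off again once the hand-coded Jacobian has been verified.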
Because fsolve usually spends the vast majority of its time computing the Jacobian via finite differences, supplying it manually will normally speed the algorithm up tremendously.
It will be faster, because
fsolve doesn't have to do extra function evaluations to do the finite differences
the convergence rate will increase due to the improved precision of the Jacobian
Especially if you use the rectangle method or trapz like above, you can reuse many of the computations you've already done for the function values themselves, meaning, even more speed-up.
Rody's answer was the correct one. Supplying the Jacobian was the single largest factor. Especially with the vectorized version, there were 3 orders of magnitude of difference in speed with the Jacobian supplied and not.
I had trouble finding information about this subject online, so I'll spell it out here for future reference: it is possible to vectorize independent parallel equations with fsolve() with great gains.
I also did some work on inlining fsolve(). After supplying the Jacobian and being smarter about the equations, the serial version of my code was mostly overhead at ~1*10^-3 s per voxel. At that point, most of the time inside the function was spent passing around an options struct and creating error messages that are never sent, plus lots of unused machinery, presumably for the other algorithms inside the optimization function (Levenberg-Marquardt in my case). I successfully butchered fsolve and some of the functions it calls, dropping the time to ~1*10^-4 s per voxel on my machine. So if you are stuck with a serial implementation, e.g. because you have to rely on previous results, it is quite possible to inline fsolve() with good results.
The vectorized version provided the best results in my case, with ~5*10^-5 s per voxel.
The main ODE function is given as follows.
function dxdt = state( t,x,vgth,vgval1,vgval2)
vgval=vgval1+vgval2;
p=1;
k=10^0.7;
window1=1-((2*x)-1).^(2*p);
dxdt=k*(vgval-vgth+1.2)*window1;
end
The Script is given below.
step=0.01;
t = 0:step:10;
f=4*0.157;
vgate1 = @(t)  abs(5*sin(2*f*t)).*heaviside(5-t);
vgate2 = @(t) -abs(5*sin(2*f*t)).*heaviside(t-5);
The Function call part is as follows.
x0=0.01;
vgth=1.9;
[t,x] = ode45(@(t,x) state(t, x, vgth, vgate1(t), vgate2(t)), t, x0);
plot(t,x)
Issue
It gives me an error when I use the negative sign with vgate2.
It works fine if I remove the negative sign from vgate2.
Desired Result
I want my plot with the negative sign in vgate2. Actually, I want to use two positive sine pulses and two negative sine pulses; that is why I have used a negative value for vgate2.
ODE integrators of order p expect the differential equation to be p+2 times continuously differentiable with decent sized derivatives.
Any deviation from that is seen as "stiffness" by step-size adaptation strategies, and reacted to by increasingly violent reductions in step-size. Any kink or jump in the lower derivatives is seen as similar to wild oscillations in the higher derivative, throwing the step-size adaptor for a loop.
However, if the integration is stopped directly at an event and then restarted using the values there as initial values, the integrator does not "see" the event, and thus has no need to react to it.
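In your problem, the only switching event is the known jump at t = 5 (where the two heaviside factors change over), so a simple way to apply this idea, as a sketch, is to integrate in two pieces and restart at the switch:

odefun = @(t,x) state(t, x, vgth, vgate1(t), vgate2(t));
[t1, x1] = ode45(odefun, [0 5],  x0);          % integrate up to the switching time
[t2, x2] = ode45(odefun, [5 10], x1(end,:));   % restart from the final state
t = [t1; t2];
x = [x1; x2];
plot(t, x)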
I want to numerically solve a stochastic differential equation (SDE) in MATLAB. The code I have written simply does not recognize the sde function!
The question is as below:
dz = v*dt + sqrt(2*Ds)*dw_t
where v = 1/(N*delta) * sum of f_i over i = 1..N,
N = 100,
delta = e6,
and f_i is calculated from this equation:
for z >= z0, f_i = -kappa*(z0_i - z), with kappa = 0.17
for z <  z0, f_i = -kappaT*(z0_i - z), with kappaT = 60
Note that the initial values z0_i are randomly distributed over a 60 nm range.
Ds = 4e4
and dw_t is an increment of the Wiener process.
Firstly, I don't know how to set the conditions on z when I don't have a value for it yet!
Secondly, my Euler implementation matches the equation exactly, but I don't know why the code using the sde function is not working!
In order to numerically solve an SDE, you need an initial condition (IC) for the function you want to compute; in your case, I guess that's z. If you don't want to hard-code the IC, write your simulation as a function that takes the IC as an input; then, to test, you can pass in random ICs.
Also, it is not clear to me whether your z0 is itself stochastic and changes with time, is randomly generated every time step, or is a constant that was randomly generated only once. Even simpler, if z0 is just the IC of z, then your f_i just inspects whether z has increased or decreased over the time step to decide how z should change in the next time step. Please clarify this and your problem will become much clearer.
It is not too hard to simulate your SDE without using the solver. You can try it out and may well get better results, since making a black-box solver work sometimes requires learning its behavior in detail anyway. If you do write your own stepping loop, I would suggest running Monte Carlo repetitions to check the accuracy; a sketch follows.
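Here is a minimal Euler-Maruyama sketch of the equation as I read it; the time step dt, the number of steps, the initial z, and reading "delta = e6" as 1e6 are all assumptions you should adapt:

% Euler-Maruyama for dz = v*dt + sqrt(2*Ds)*dw, with v = (1/(N*delta)) * sum_i f_i
N      = 100;
delta  = 1e6;                  % assumed (the question writes "e6")
Ds     = 4e4;
kappa  = 0.17;
kappaT = 60;
z0     = 60e-9 * rand(N, 1);   % z0_i randomly distributed over a 60 nm range
dt     = 1e-6;                 % assumed time step
nSteps = 1e4;                  % assumed number of steps
z      = zeros(nSteps + 1, 1);
z(1)   = mean(z0);             % assumed initial condition for z
for k = 1:nSteps
    f         = zeros(N, 1);
    above     = z(k) >= z0;                       % elementwise comparison against z0_i
    f(above)  = -kappa  * (z0(above)  - z(k));
    f(~above) = -kappaT * (z0(~above) - z(k));
    v         = sum(f) / (N * delta);
    z(k+1)    = z(k) + v*dt + sqrt(2*Ds*dt)*randn;   % dw ~ sqrt(dt)*randn
end
plot((0:nSteps)*dt, z)

Running several independent realisations of this loop (Monte Carlo) and comparing their statistics is a good way to check both the step size and any solver-based implementation against it.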
I hope my answer helps. If you still have any questions, feel free to ask.
I need to numerically integrate the following system of ODEs:
dA/dR = f(R,A,B)
dB/dR = g(R,A,B)
I'm solving the ODEs for an initial-value stability problem. In this problem, the system is initially stable but goes unstable at some radius. However, while it is stable, I don't want the amplitude to decay away from the starting value (to O(10^-5), for example), as this is non-physical: the system's stability is limited by the background noise amplitude. The amplitude should remain at the starting value of 1 until the system destabilises. Hence, I want to overwrite the derivative estimate with zero whenever it is negative.
I've written some 4th-order Runge-Kutta code that achieves this, but I'd much prefer to simply pass ode45 (or any of the built-in solvers) an option that makes it overwrite the derivative whenever it is negative. Is this possible?
A simple, fast, efficient way to implement this is via the max function. For example, if you want to make sure all of your derivatives remain non-negative, in your integration function:
function ydot = f(x,y)
ydot(1) = ...
ydot(2) = ...
...
ydot = max(ydot,0);
Note that this is not the same thing as the output states returned by ode45 remaining non-negative. The above should ensure that your state variables never decay.
Note, however, that this effectively makes your integration function stiff. You might consider using a solver like ode15s instead, or at least confirm that the results are consistent with those from ode45. Alternatively, you could use a continuous sigmoid function instead of the discontinuous, step-like max; this is partly a modeling decision.
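One possible smooth variant (purely illustrative; the steepness k is an assumed tuning parameter) is to gate the derivative with a logistic factor instead of clipping it:

k    = 100;                         % steepness of the smooth gate (assumed value)
ydot = ydot ./ (1 + exp(-k*ydot));  % close to ydot when clearly positive, close to 0 when negative

This behaves like max(ydot, 0) for large |ydot| but varies smoothly through zero, which is friendlier to the step-size control.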
Here is the MATLAB/FreeMat code I got to solve an ODE numerically using the backward Euler method. However, the results are inconsistent with my textbook results, and sometimes even ridiculously inconsistent. What is wrong with the code?
function [x,y] = backEuler(f,xinit,yinit,xfinal,h)
%f - this is your y prime
%xinit - initial X
%yinit - initial Y
%xfinal - final X
%h - step size
n = (xfinal-xinit)/h; %Calculate steps
%Initialize arrays...
%The first elements take xinit and yinit correspondingly; the rest are filled with 0s.
x = [xinit zeros(1,n)];
y = [yinit zeros(1,n)];
%Numeric routine
for i = 1:n
x(i+1) = x(i)+h;
ynew = y(i)+h*(f(x(i),y(i)));
y(i+1) = y(i)+h*f(x(i+1),ynew);
end
end
Your method is a method of a new kind. It is neither backward nor forward Euler. :-)
Forward Euler: y1 = y0 + h*f(x0,y0)
Backward Euler solve in y1: y1 - h*f(x1,y1) = y0
Your method: y1 = y0 + h*f(x1, y0 + h*f(x0,y0))
Your method is not backward Euler.
You don't solve in y1, you just estimate y1 with the forward Euler method. I don't want to pursue the analysis of your method, but I believe it will behave poorly indeed, even compared with forward Euler, since you evaluate the function f at the wrong point.
Here is the closest method to yours that I can think of, explicit as well, which should give much better results. It's Heun's method:
y1 = y0 + h/2*(f(x0,y0) + f(x1, y0 + h*f(x0,y0)))
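As a sketch, plugged into the same loop structure as your backEuler function, Heun's method would read:

for i = 1:n
    x(i+1) = x(i) + h;
    ypred  = y(i) + h*f(x(i), y(i));                          % forward Euler predictor
    y(i+1) = y(i) + h/2*(f(x(i), y(i)) + f(x(i+1), ypred));   % trapezoidal corrector
end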
The only issue I can spot is that the line:
n=(xfinal-xinit)/h
Should be:
n = abs((xfinal-xinit)/h)
To avoid a negative step count. If you are moving in the negative-x direction, make sure to give the function a negative step size.
Your answers probably deviate because of how coarsely you are approximating the solution. To get a semi-accurate result, your step size h has to be very, very small.
PS. This isn't the "backward Euler method," it is just regular old Euler's method.
If this is homework please tag it so.
Have a look at Numerical Recipes, specifically Chapter 16, "Integration of Ordinary Differential Equations". Euler's method is known to have problems:
There are several reasons that Euler’s method is not recommended for practical use, among them, (i) the method is not very accurate when compared to other, fancier, methods run at the equivalent stepsize, and (ii) neither is it very stable
So unless you know your textbook is using Euler's method, I wouldn't expect the results to match. Even if it is, you probably have to use an identical step size to get an identical result.
Unless you really want to solve an ODE via Euler's method that you've written by yourself you should have a look at built-in ODE solvers.
On a side note: you don't need to create x(i) inside the loop like this: x(i+1) = x(i)+h;. Instead, you can simply write x = xinit:h:xfinal;. Also, you may want to write n = round((xfinal-xinit)/h); to avoid warnings.
Here are the solvers implemented by MATLAB.
ode45 is based on an explicit Runge-Kutta (4,5) formula, the Dormand-Prince pair. It is a one-step solver – in computing y(tn), it needs only the solution at the immediately preceding time point, y(tn-1). In general, ode45 is the best function to apply as a first try for most problems.

ode23 is an implementation of an explicit Runge-Kutta (2,3) pair of Bogacki and Shampine. It may be more efficient than ode45 at crude tolerances and in the presence of moderate stiffness. Like ode45, ode23 is a one-step solver.

ode113 is a variable order Adams-Bashforth-Moulton PECE solver. It may be more efficient than ode45 at stringent tolerances and when the ODE file function is particularly expensive to evaluate. ode113 is a multistep solver — it normally needs the solutions at several preceding time points to compute the current solution.

The above algorithms are intended to solve nonstiff systems. If they appear to be unduly slow, try using one of the stiff solvers below.

ode15s is a variable order solver based on the numerical differentiation formulas (NDFs). Optionally, it uses the backward differentiation formulas (BDFs, also known as Gear's method) that are usually less efficient. Like ode113, ode15s is a multistep solver. Try ode15s when ode45 fails, or is very inefficient, and you suspect that the problem is stiff, or when solving a differential-algebraic problem.

ode23s is based on a modified Rosenbrock formula of order 2. Because it is a one-step solver, it may be more efficient than ode15s at crude tolerances. It can solve some kinds of stiff problems for which ode15s is not effective.

ode23t is an implementation of the trapezoidal rule using a "free" interpolant. Use this solver if the problem is only moderately stiff and you need a solution without numerical damping. ode23t can solve DAEs.

ode23tb is an implementation of TR-BDF2, an implicit Runge-Kutta formula with a first stage that is a trapezoidal rule step and a second stage that is a backward differentiation formula of order two. By construction, the same iteration matrix is used in evaluating both stages. Like ode23s, this solver may be more efficient than ode15s at crude tolerances.
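As a sketch, using a built-in solver with the same inputs as your backEuler function would look like:

% f is the same function handle f(x, y) that you pass to backEuler
[x, y] = ode45(f, [xinit xfinal], yinit);
plot(x, y)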
I think this code could work. Try this.
for i = 1:n
    t(i+1) = t(i) + dt;
    % solve the implicit backward Euler equation for y(i+1) (requires the Symbolic Math Toolbox)
    y(i+1) = solve('y(i+1) = y(i) + dt*f(t(i+1), y(i+1))');
end
The code is almost fine; you just have to add another loop within the for loop, to iterate the estimate until it is consistent:
while abs((y(i+1) - ynew)/ynew) > 1e-10
    ynew   = y(i+1);
    y(i+1) = y(i) + h*f(x(i+1), ynew);
end
I checked it with a dummy function and the results were promising.