First off, I'm not sure if this is the best place to post this, but since there isn't a dedicated Matlab community I'm posting this here.
To give a little background, I'm currently prototyping a plasma physics simulation which involves a triple integration. The innermost integral can be done analytically, but for the outer two this is not possible. I always thought it best to work with values close to unity, so I normalized my innermost integral such that it is unit-less and usually takes values close to unity. However, compared to an earlier version of the code, where this innermost integral evaluated to values of the order of 1e-50, the numerical double integration, which uses the native Matlab function integral2 with a target relative tolerance of 1e-6, now requires around 1000 times more function evaluations to converge. As a consequence my simulation now takes roughly 12 h instead of the previous 20 minutes.
Question
So my questions are:
Is it possible that the faster convergence in the older version is simply due to the additional evaluations vanishing as roundoff errors, and that the results thus aren't trustworthy even though they pass the 1e-6 relative tolerance? In the few tests I ran, the results seemed to be the same in both versions, though.
What is the best practice concerning the normalization of the integrand for numerical integration?
Is there some way to improve the convergence of numerical integrals, especially if the integrand might have singularities?
I'm thankful for any help or insight, especially since I don't fully understand the inner workings of Matlab's integral2 function and what should be paid attention to when using it.
If I didn't know any better I would actually conclude that an integrand of the order of 1e-50 works way better than one of, say, the order of 1e+0, but that doesn't seem to make sense. Is there some numerical reason why this could actually be the case?
TL;DR: when the function to be numerically integrated by Matlab's integral2 is multiplied by a factor of 1e-50, and the result is in turn multiplied by 1e+50, the integral gives the same result but converges way faster, and I don't understand why.
edit:
I prepared a short script to illustrate the problem. Here the relative difference between the two results was of the order of 1e-4, which is actually above the requested relative tolerance of integral2. In my original problem, however, the difference was even smaller.
fun = @(x,y,l) l./(sqrt(1-x.*cos(y)).^5).*((1-x).*sin(y));
x = linspace(0,1,101);
y = linspace(0,pi,101).';
figure
surf(x,y,fun(x,y,1));
l = linspace(0,1,101); l=l(2:end);
v1 = zeros(1,100); v2 = v1;
tval = tic;
for i=1:100
fun1 = @(x,y) fun(x,y,l(i));
v1(i) = integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t1 = toc(tval)
tval = tic;
for i=1:100
fun1 = @(x,y) 1e-50*fun(x,y,l(i));
v2(i) = 1e+50*integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t2 = toc(tval)
figure
hold all;
plot(l,v1);
plot(l,v2);
plot(l,abs((v2-v1)./v1));
Related
I'm solving a pair of non-linear equations for each voxel in a dataset of a ~billion voxels using fsolve() in MATLAB 2016b.
I have done all the 'easy' optimizations that I'm aware of. Memory localization is OK, I'm using parfor, the equations are in fairly numerically simple form. All discontinuities of the integral are fed to integral(). I'm using the Levenberg-Marquardt algorithm with good starting values and a suitable starting damping constant, it converges on average with 6 iterations.
I'm now at ~6 ms per voxel, which is good, but not good enough. I'd need an order of magnitude reduction to make the technique viable. There are only a few things I can think of improving before starting to hammer away at accuracy:
The splines in the equation are for quick sampling of complex equations. There are two for each equation; one is inside the 'complicated nonlinear equation'. They represent two equations: one has a large number of terms but is smooth and has no discontinuities, and the other approximates a histogram drawn from a spectrum. I'm using griddedInterpolant() as the editor suggested.
Is there a faster way to sample points from pre-calculated distributions?
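Roughly, the pre-tabulated sampling looks like this (Egrid and Svals are placeholder names, not my real data):
% A rough sketch of the kind of pre-tabulated sampling described above;
% Egrid and Svals stand in for the tabulated spectrum data.
Egrid = linspace(0, 150, 512);            % energy grid (made up for illustration)
Svals = exp(-((Egrid - 60)/20).^2);       % made-up histogram-like values
S_low_interp = griddedInterpolant(Egrid, Svals, 'spline');
S_at_E = S_low_interp(75.3);              % cheap repeated evaluation inside the equations
The per-voxel solve itself currently looks like this: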
parfor i = 1:numel(I1)
    % <6 static inputs> and <fsolve options> are placeholders in this sketch
    sols = fsolve(@(x) equationPair(x, input1, input2, <6 static inputs>), ...
                  x0, <fsolve options>);
    output1(i) = sols(1); output2(i) = sols(2);
end
When calling fsolve, I'm using the 'parametrization' suggested by Mathworks to input the variables. I have a nagging feeling that defining an anonymous function for each voxel is taking a large slice of the time at this point. Is this true, i.e. is there a relatively large overhead for defining the anonymous function again and again? Do I have any way to vectorize the call to fsolve?
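One quick way I could check this is to time only the handle construction in isolation (with stand-in values; the handle is never actually called):
% Measure just the cost of repeatedly creating the anonymous function.
input1 = 1.0; input2 = 2.0;                      % stand-ins for the per-voxel inputs
tic
for k = 1:1e5
    g = @(x) equationPair(x, input1, input2);    % handle creation only, never evaluated
end
toc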
There are two input variables which keep changing, all of the other input variables stay static. I need to solve one equation pair for each input pair so I can't make it a huge system and solve it at once. Do I have any other options than fsolve for solving pairs of nonlinear equations?
If not, some of the static inputs are fairly large. Is there a way to keep the inputs as persistent variables using MATLAB's persistent, and would that improve performance? I only saw examples of how to load persistent variables; how could I make it so that they are passed in only once, and future function calls are spared the presumably largish overhead of the large inputs?
EDIT:
The original equations in full form are:

I_high = integral from 0 to E_high of  S_high(E) * exp( x_1/E^3 + x_2*f_KN(E) )  dE
I_low  = integral from 0 to E_low  of  S_low(E)  * exp( x_1/E^3 + x_2*f_KN(E) )  dE

Everything else is known; I'm solving for x_1 and x_2. f_KN was approximated by a spline. S_low(E) and S_high(E) are splines; the histograms they are built from are not reproduced here.
So, there are a few things I thought of:
Lookup table
Because the integrals in your function do not depend on any of the parameters other than x, you could make a simple 2D-lookup table from them:
% assuming simple (square) range here, adjust as needed
[x1,x2] = meshgrid( linspace(0, xmax, N) );
LUT_high = zeros(size(x1));
LUT_low = zeros(size(x1));
for ii = 1:N
    LUT_high(:,ii) = integral(@(E) Fhi(E, x1(1,ii), x2(:,ii)), ...
                              0, E_high, ...
                              'ArrayValued', true);
    LUT_low(:,ii)  = integral(@(E) Flo(E, x1(1,ii), x2(:,ii)), ...
                              0, E_low, ...
                              'ArrayValued', true);
end
where Fhi and Flo are helper functions to compute those integrals, vectorized with scalar x1 and vector x2 in this example. Set N as high as memory will allow.
Those lookup tables you then pass as parameters to equationPair() (which allows parfor to distribute the data). Then just use interp2 in equationPair():
F(1) = I_high - interp2(x1,x2,LUT_high, x(1), x(2));
F(2) = I_low - interp2(x1,x2,LUT_low , x(1), x(2));
So, instead of recomputing the whole integral every time, you evaluate it once for the expected range of x, and reuse the outcomes.
You can specify the interpolation method used, which is linear by default. Specify cubic if you're really concerned about accuracy.
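For example, with the cubic method that would read:
% Same lookup as above, but with cubic interpolation for a bit more accuracy.
F(1) = I_high - interp2(x1, x2, LUT_high, x(1), x(2), 'cubic');
F(2) = I_low  - interp2(x1, x2, LUT_low , x(1), x(2), 'cubic');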
Coarse/Fine
Should the lookup table method not be possible for some reason (memory limitations, in case the possible range of x is too big), here's another thing you could do: split up the whole procedure in 2 parts, which I'll call coarse and fine.
The intent of the coarse method is to improve your initial estimates really quickly, but perhaps not so accurately. The quickest way to approximate that integral by far is via the rectangle method:
do not approximate S with a spline, just use the original tabulated data (so S_high/low = [S_high/low(E_0), S_high/low(E_1), ..., S_high/low(E_high/low)])
At the same values for E as used by the S data (E0, E1, ...), evaluate the exponential at x:
Elo = linspace(0, E_low, numel(S_low)).';
integrand_exp_low = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));
Ehi = linspace(0, E_high, numel(S_high)).';
integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
then use the rectangle method:
F(1) = I_low  - (S_low  * integrand_exp_low ) * (Elo(2) - Elo(1));
F(2) = I_high - (S_high * integrand_exp_high) * (Ehi(2) - Ehi(1));
Running fsolve like this for all I_low and I_high will then have improved your initial estimates x0 probably to a point close to "actual" convergence.
Alternatively, instead of the rectangle method, you use trapz (trapezoidal method). A tad slower, but possibly a bit more accurate.
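Assuming S_low and S_high are the tabulated row vectors and the integrand columns from above, the trapz variant could look roughly like this:
% Trapezoidal version of the same residuals.
F(1) = I_low  - trapz(Elo, S_low(:)  .* integrand_exp_low );
F(2) = I_high - trapz(Ehi, S_high(:) .* integrand_exp_high);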
Note that if (Elo(2) - Elo(1)) == (Ehi(2) - Ehi(1)) (step sizes are equal), you can further reduce the number of computations. In that case, the first N_low elements of the two integrands are identical, so the values of the exponentials will only differ in the N_low + 1 : N_high elements. So then just compute integrand_exp_high, and set integrand_exp_low equal to the first N_low elements of integrand_exp_high.
The fine method then uses your original implementation (with the actual integral()s), but then starting at the updated initial estimates from the coarse step.
The whole objective here is to try and bring the total number of iterations needed down from about 6 to less than 2. Perhaps you'll even find that the trapz method already provides enough accuracy, rendering the whole fine step unnecessary.
Vectorization
The rectangle method in the coarse step outlined above is easy to vectorize:
% (uses R2016b implicit expansion rules)
Elo = linspace(0, E_low, numel(S_low));      % 1-by-N_low row vector
integrand_exp_low  = exp(x(:,1)./Elo.^3 + x(:,2).*fKN(Elo));    % N-by-N_low
Ehi = linspace(0, E_high, numel(S_high));    % 1-by-N_high row vector
integrand_exp_high = exp(x(:,1)./Ehi.^3 + x(:,2).*fKN(Ehi));    % N-by-N_high
F = [I_high_vector - integrand_exp_high * S_high(:) * (Ehi(2) - Ehi(1))
     I_low_vector  - integrand_exp_low  * S_low(:)  * (Elo(2) - Elo(1))];
trapz also works on matrices; it will integrate over each column in the matrix.
You'd then call equationPair() using x0 = [x01; x02; ...; x0N]; fsolve will converge to [x1; x2; ...; xN], where N is the number of voxels and each x0i is 1×2 ([x(1) x(2)]), so x0 is N×2.
parfor should be able to slice all of this fairly easily over all the workers in your pool.
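A rough sketch of that stacked call (x0_all, opts and staticInputs are placeholder names, and equationPair is assumed to be vectorized as described):
% All voxels solved in one fsolve call; the objective returns the stacked residuals.
N      = numel(I_high_vector);
x0_all = repmat([0.5 0.5], N, 1);     % N-by-2 matrix of (made-up) initial guesses
opts   = optimoptions('fsolve', 'Algorithm', 'levenberg-marquardt');
sols   = fsolve(@(x) equationPair(x, I_high_vector, I_low_vector, staticInputs), ...
                x0_all, opts);        % sols comes back N-by-2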
Similarly, vectorization of the fine method should also be possible; just use the 'ArrayValued' option to integral() as shown above:
F = [I_high_vector - integral(@(E) S_high(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)), ...
                              0, E_high, ...
                              'ArrayValued', true);
     I_low_vector  - integral(@(E) S_low(E)  .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)), ...
                              0, E_low, ...
                              'ArrayValued', true);
    ];
Jacobian
Taking derivatives of your function is quite easy; differentiating under the integral sign gives

dg/dx_1 = integral of  S(E) * (1/E^3)  * exp( x_1/E^3 + x_2*f_KN(E) )  dE
dg/dx_2 = integral of  S(E) * f_KN(E)  * exp( x_1/E^3 + x_2*f_KN(E) )  dE

(with S and the upper limit taken as E_high or E_low as appropriate). Your Jacobian will then have to be a 2×2 matrix
J = [dF(1)/dx(1) dF(1)/dx(2)
dF(2)/dx(1) dF(2)/dx(2)];
Don't forget the leading minus sign (F = I_hi/lo - g(x) → dF/dx = -dg/dx)
Using one or both of the methods outlined above, you can implement a function to compute the Jacobian matrix and pass this on to fsolve via the 'SpecifyObjectiveGradient' option (via optimoptions). The 'CheckGradients' option will come in handy there.
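A minimal sketch of that hook-up, where equationPairWithJac is a hypothetical variant of your objective that returns the Jacobian as a second output:
% The objective would have the signature  function [F, J] = equationPairWithJac(x, ...)
opts = optimoptions('fsolve', ...
                    'Algorithm', 'levenberg-marquardt', ...
                    'SpecifyObjectiveGradient', true, ...
                    'CheckGradients', true);   % verify J against finite differences once
sol  = fsolve(@(x) equationPairWithJac(x, input1, input2, staticInputs), x0, opts);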
Because fsolve usually spends the vast majority of its time computing the Jacobian via finite differences, supplying it manually will normally speed the algorithm up tremendously.
It will be faster, because
fsolve doesn't have to do extra function evaluations to do the finite differences
the convergence rate will increase due to the improved precision of the Jacobian
Especially if you use the rectangle method or trapz like above, you can reuse many of the computations you've already done for the function values themselves, meaning, even more speed-up.
Rody's answer was the correct one. Supplying the Jacobian was the single largest factor. Especially with the vectorized version, there were 3 orders of magnitude of difference in speed with the Jacobian supplied and not.
I had trouble finding information about this subject online, so I'll spell it out here for future reference: it is possible to vectorize independent parallel equations with fsolve() with great gains.
I also did some work on inlining fsolve(). After supplying the Jacobian and being smarter about the equations, the serial version of my code was mostly overhead at ~1*10^-3 s per voxel. At that point most of the time inside the function was spent passing around an options struct and creating error messages which are never sent, plus lots of unused stuff, presumably for the other optimization algorithms inside the optimization function (Levenberg-Marquardt for me). I successfully butchered the function fsolve and some of the functions it calls, dropping the time to ~1*10^-4 s per voxel on my machine. So if you are stuck with a serial implementation, e.g. because you have to rely on the previous results, it is quite possible to inline fsolve() with good results.
The vectorized version provided the best results in my case, with ~5*10^-5 s per voxel.
I'm trying to code a MATLAB program and I have arrived at a point where I need to do the following. I have this equation (Cm_cp in the code below):

Cm_cp = (-1/c^2) * integral from x_le to x_te of  cl(x)*(x - Xcp)  dx

I must find the value of the constant Xcp (greater than zero) that makes this integral equal to zero.
In order to do so, I have coded a loop in which the value of Xcp advances in small increments on each iteration; the integral is evaluated and checked against zero, and once it reaches zero the loop finishes and that value of Xcp is stored.
However, I think this is not an efficient way to do this task. The running time increases a lot, because this loop is long and has to perform the integral and the substitution of the integration limits every time.
Is there a smarter way to do this in Matlab to obtain a better code efficiency?
P.S.: I have used conv() to multiply the two polynomials, since cl(x) and (x-Xcp) are both polynomials.
EDIT: Piece of code.
Xcp = 0.001;
found = false;
while (Xcp <= x_te && ~found)        % Xcp is upper bounded by x_te
    p = [1 -Xcp];                    % polynomial (x - Xcp), rebuilt each iteration
    int_cl_p = polyint(conv(cl, p));
    Cm_cp = (-1/c^2)*diff(polyval(int_cl_p, [x_le, x_te]));
    if abs(Cm_cp) < 1e-9             % an exact zero is never hit numerically
        found = true;
    else
        Xcp = Xcp + 0.001;
    end
end
This is the code I used for this section. Another problem is that I have to do this for different cases (different cl functions); for this reason the code is even slower.
As far as I understood, you need to solve the equation for X_CP.
I suggest using the symbolic solver for this. This is not the most efficient way for large polynomials, but for polynomials of degree 20 it takes less than 1 second. I do not claim that this solution is the fastest, but it provides a generic solution to the problem. If your polynomial does not change every iteration, then you can reuse this generic solution many times and not spend time calculating the integral.
So, generic symbolic solution in terms of xLE and xTE is obtained using this:
syms xLE xTE c x xCP
a = 1:20;
%//arbitrary polynomial of degree 20
cl = sum(x.^a.*randi([-100,100],1,20));
tic
eqn = -1/c^2 * int(cl * (x-xCP), x, xLE, xTE) == 0;
xCP = solve(eqn,xCP);
pretty(xCP)
toc
Elapsed time is 0.550371 seconds.
You can further use matlabFunction for finding the numerical solutions:
xCP_numerical = matlabFunction(xCP);
%// we then just plug xLE = 10 and xTE = 20 values into function
answer = xCP_numerical(10,20)
answer =
19.8038
A slight modification of the code allows you to use this for generic coefficients.
Hope that helps
If you multiply by -1/c^2, then you can rearrange the condition as

X_CP = ( integral from x_le to x_te of x*c_l(x) dx ) / ( integral from x_le to x_te of c_l(x) dx )

and integrate however you fancy. Since c_l is a polynomial of order N, if it's defined in MATLAB using the usual notation for polyval, where the coefficients are stored in a vector a such that

c_l(x) = a(1)*x^N + a(2)*x^(N-1) + ... + a(N+1),

then the integration is straightforward: polyint gives the antiderivative coefficients of c_l(x) and x*c_l(x) directly, and evaluating each antiderivative at the two limits gives the definite integrals.

MATLAB code might look something like this:
int_cl_p = polyint(cl);
int_cl_x_p = polyint([cl 0]);
X_CP = diff(polyval(int_cl_x_p,[x_le,x_te]))/diff(polyval(int_cl_p,[x_le,x_te]));
I am performing logistic regression in MATLAB with L2 regularization on text data. My program works well for small datasets. For larger sets, it keeps running infinitely.
I have seen the potentially duplicate question (matlab fminunc not quitting (running indefinitely)). In that question, the cost for initial theta was NaN and there was an error printed in the console. For my implementation, I am getting a real valued cost and there is no error even with verbose parameters being passed to fminunc(). Hence I believe this question might not be a duplicate.
I need help in scaling it to larger sets. The size of the training data I am currently working on is roughly 10k*12k (10k text files cumulatively containing 12k words). Thus, I have m=10k training examples and n=12k features.
My cost function is defined as follows:
function [J, gradient] = costFunction(X, y, lambda, theta)
    [m, n] = size(X);
    g = @(z) 1.0 ./ (1.0 + exp(-z));   % sigmoid
    h = g(X*theta);
    J = (1/m)*sum(-y.*log(h) - (1-y).*log(1-h)) + (lambda/(2*m))*norm(theta(2:end))^2;
    gradient = zeros(n, 1);
    gradient(1) = (1/m)*sum((h-y) .* X(:,1));
    for i = 2:n
        % note: the regularization term enters the gradient with a plus sign
        gradient(i) = (1/m)*sum((h-y) .* X(:,i)) + (lambda/m)*theta(i);
    end
end
I am performing optimization using MATLAB's fminunc() function. The parameters I pass to fminunc() are:
options = optimset('LargeScale', 'on', 'GradObj', 'on', 'MaxIter', MAX_ITR);
theta0 = zeros(n, 1);
[optTheta, functionVal, exitFlag] = fminunc(@(t) costFunction(X, y, lambda, t), theta0, options);
I am running this code on a machine with these specifications:
Macbook Pro i7 2.8GHz / 8GB RAM / MATLAB R2011b
The cost function seems to behave correctly. For initial theta, I get acceptable values of J and gradient.
K>> theta0 = zeros(n, 1);
K>> [j g] = costFunction(X, y, lambda, theta0);
K>> j
j =
0.6931
K>> max(g)
ans =
0.4082
K>> min(g)
ans =
-2.7021e-05
The program takes incredibly long to run. I started profiling, keeping MAX_ITR = 1 for fminunc(). With a single iteration, the program did not complete execution even after a couple of hours had elapsed. My questions are:
Am I doing something wrong mathematically?
Should I use any other optimizer instead of fminunc()? With LargeScale=on, fminunc() uses trust-region algorithms.
Is this problem cluster-scale and should not be run on a single machine?
Any other general tips will be appreciated. Thanks!
I was able to get this working by setting the LargeScale flag to 'off' in fminunc(). From what I gather, LargeScale = 'on' uses trust region algorithms, while keeping it 'off' uses quasi-Newton methods. Using quasi-Newton methods and passing the gradient worked a lot faster for this particular problem and gave very nice results.
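In code, the only change relative to the options in the question is roughly this (a sketch of the setting described, not my exact script):
% Quasi-Newton path of fminunc: LargeScale off, gradient still supplied.
options = optimset('LargeScale', 'off', 'GradObj', 'on', 'MaxIter', MAX_ITR);
[optTheta, functionVal, exitFlag] = fminunc(@(t) costFunction(X, y, lambda, t), theta0, options);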
Here is my advice:
- Set the Matlab option to show debug output during the run, or simply print the cost inside your cost function; that will allow you to monitor the iteration count and the error.
And second, which is very important:
Your problem is ill-posed, or rather underdetermined. You have a 12k-dimensional feature space and provide only 10k examples, which means that for an unconstrained optimization the answer is -Inf. To give a quick example of why this is, your problem is like:
Minimize x+y+z given that x+y-z = 2: the feature space has dimension 3, but the constraint spans only a 1d subspace. I suggest using PCA or CCA to reduce the dimensionality of the text files, retaining their variation up to 99%. This will probably give you a feature space of ~100-200 dimensions.
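A rough sketch of that reduction step (princomp is the R2011b-era function, newer releases use pca; the 99% threshold is the one suggested above):
% X is the m-by-n design matrix (examples by features); princomp needs a full matrix.
[coeff, score, latent] = princomp(full(X), 'econ');
k = find(cumsum(latent)/sum(latent) >= 0.99, 1);   % smallest k explaining 99% of variance
Xreduced = score(:, 1:k);                          % m-by-k reduced feature matrix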
PS: Just to point out that the problem is very far from cluster-scale requirements, which usually start at 1M+ data points, and that fminunc is not at all overkill. LIBSVM has nothing to do with it, because fminunc is just a general-purpose optimizer while LIBSVM is a classifier; to be clear, LIBSVM internally uses something similar to fminunc, just with a different objective function.
Here's what I suspect to be the issue, based on my experience with this type of problem. You're using a dense representation for X instead of a sparse one. You're also seeing the typical effect in text classification that the number of terms increases roughly linearly with the number of samples. Effectively, the cost of the matrix multiplication X*theta goes up quadratically with the number of samples.
By contrast, a good sparse matrix representation only iterates over the non-zero elements to do a matrix multiplication, which tends to be roughly constant per document if they're of appropriately constant length, causing linear instead of quadratic slowdown in the number of samples.
I'm not a Matlab guru, but I know it has a sparse matrix package, so try to use that.
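Concretely, a minimal sketch of what I mean, assuming X is the m-by-n design matrix from the question:
% Convert once; all subsequent products only touch the nonzero entries.
Xs = sparse(X);
h  = 1.0 ./ (1.0 + exp(-(Xs*theta)));   % X*theta now costs O(nnz(X)) instead of O(m*n)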
I have a simple question. I'm trying to evaluate an improper integral of the 0th-order Bessel function using Matlab R2012a:
v = integral(@(x)besselj(0, x), 0, Inf)
which gives me v = 3.7573e+09. However, this should be v = 1 in theory. When I try
v = integral(@(l)besselj(0,l), 0, 1000)
it results in v = 1.0047. Could you briefly explain to me what is going wrong with the integration, and how to properly integrate Bessel-type functions?
From the docs, to do an improper integral of an oscillatory function:
q = integral(fun,0,Inf,'RelTol',1e-8,'AbsTol',1e-13)
in the docs the example is
fun = #(x)x.^5.*exp(-x).*sin(x);
but I guess in your case try:
q = integral(@(x)besselj(0, x), 0, Inf, 'RelTol', 1e-8, 'AbsTol', 1e-13)
At first I was sceptical that taking an integral over a Bessel function would produce finite results. Mathematica/Wolfram Alpha, however, showed that the result is finite, but it is not for the faint of heart.
However, then I was pointed to this site where it is explained how to do it properly, and that the value of the integral should be 1.
I experimented a bit to verify the correctness of their statements:
F = @(z) arrayfun(@(y) quadgk(@(x)besselj(0,x), 0, y), z);
z = 10:100:1e4;
plot(z, F(z))
which gave a plot of the running value of the integral; so clearly, the integral indeed seems to converge to 1. Shame on Wolfram Alpha!
(Note that this is kind of a misleading plot; try to do it with z = 10:1e4; and you'll see why. But oh well, the principle is the same anyway).
This figure also shows precisely the problem you're experiencing in Matlab: the value of the integral behaves like a damped oscillation around 1 for increasing x. The problem is that the damping is very weak; as you can see, my z needed to go all the way to 10,000 just to produce this plot, and the oscillation amplitude only decreased by about 0.5.
When you try to do the improper integral by messing around with the 'MaxIntervalCount' setting, you get this:
>> quadgk(@(x)besselj(0,x), 0, inf, 'maxintervalcount', 1e4)
Warning: Reached the limit on the maximum number of intervals in use.
Approximate bound on error is 1.2e+009. The integral may not exist, or
it may be difficult to approximate numerically.
Increase MaxIntervalCount to 10396 to enable QUADGK to continue for
another iteration.
> In quadgk>vadapt at 317
In quadgk at 216
It doesn't matter how high you set MaxIntervalCount; you'll keep running into this error. Similar things also happen when using quad, quadl, or similar functions (these underlie the R2012 integral function).
As this warning and plot show, the integral is just not suited to approximate accurately by any quadrature method implemented in standard MATLAB (at least, that I know of).
I believe the proper analytical derivation, as done on the physics forum, is really the only way to get to the result without having to resort to specialized quadrature methods.
Here is the MATLAB/FreeMat code I got to solve an ODE numerically using the backward Euler method. However, the results are inconsistent with my textbook results, and sometimes even ridiculously inconsistent. What is wrong with the code?
function [x,y] = backEuler(f,xinit,yinit,xfinal,h)
%f - this is your y prime
%xinit - initial X
%yinit - initial Y
%xfinal - final X
%h - step size
n = (xfinal-xinit)/h; %Calculate steps
%Initialize arrays...
%The first elements take xinit and yinit correspondingly; the rest are filled with 0s.
x = [xinit zeros(1,n)];
y = [yinit zeros(1,n)];
%Numeric routine
for i = 1:n
x(i+1) = x(i)+h;
ynew = y(i)+h*(f(x(i),y(i)));
y(i+1) = y(i)+h*f(x(i+1),ynew);
end
end
Your method is a method of a new kind. It is neither backward nor forward Euler. :-)
Forward Euler: y1 = y0 + h*f(x0,y0)
Backward Euler: solve for y1 in y1 - h*f(x1,y1) = y0
Your method: y1 = y0 + h*f(x1, y0 + h*f(x0,y0))
Your method is not backward Euler.
You don't solve for y1, you just estimate y1 with the forward Euler method. I don't want to pursue the analysis of your method, but I believe it will behave poorly indeed, even compared with forward Euler, since you evaluate the function f at the wrong point.
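For reference, a minimal sketch of what one genuine backward Euler step could look like for a scalar ODE, solving the implicit equation numerically with fzero (variable names follow the question's code, and the forward Euler value is used as the initial guess):
% Solve y1 - h*f(x1, y1) = y0 for y1 at each step (scalar y assumed).
x(i+1) = x(i) + h;
yguess = y(i) + h*f(x(i), y(i));                          % forward Euler predictor
y(i+1) = fzero(@(y1) y1 - h*f(x(i+1), y1) - y(i), yguess);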
Here is the closest method to your method that I can think of, explicit as well, which should give much better results. It's Heun's Method:
y1 = y0 + h/2*(f(x0,y0) + f(x1, y0 + h*f(x0,y0)))
The only issue I can spot is that the line:
n=(xfinal-xinit)/h
Should be:
n = abs((xfinal-xinit)/h)
To avoid a negative step count. If you are moving in the negative-x direction, make sure to give the function a negative step size.
Your answers probably deviate because of how coarsely you are approximating the solution. To get a semi-accurate result, your step size h (the deltaX between points) has to be very, very small.
PS. This isn't the "backward Euler method," it is just regular old Euler's method.
If this is homework please tag it so.
Have a look at Numerical Recipes, specifically Chapter 16, Integration of Ordinary Differential Equations. Euler's method is known to have problems:
There are several reasons that Euler’s method is not recommended for practical use, among them, (i) the method is not very accurate when compared to other, fancier, methods run at the equivalent stepsize, and (ii) neither is it very stable
So unless you know your textbook is using Euler's method, I wouldn't expect the results to match. Even if it is, you probably have to use an identical step size to get an identical result.
Unless you really want to solve an ODE via Euler's method that you've written by yourself you should have a look at built-in ODE solvers.
On a sidenote: you don't need to create x(i) inside the loop like this: x(i+1) = x(i)+h;. Instead, you can simply write x = xinit:h:xfinal;. Also, you may want to write n = round((xfinal-xinit)/h); to avoid warnings.
Here are the solvers implemented by MATLAB.
ode45 is based on an explicit Runge-Kutta (4,5) formula, the Dormand-Prince pair. It is a one-step solver – in computing y(tn), it needs only the solution at the immediately preceding time point, y(tn-1). In general, ode45 is the best function to apply as a first try for most problems.

ode23 is an implementation of an explicit Runge-Kutta (2,3) pair of Bogacki and Shampine. It may be more efficient than ode45 at crude tolerances and in the presence of moderate stiffness. Like ode45, ode23 is a one-step solver.

ode113 is a variable order Adams-Bashforth-Moulton PECE solver. It may be more efficient than ode45 at stringent tolerances and when the ODE file function is particularly expensive to evaluate. ode113 is a multistep solver — it normally needs the solutions at several preceding time points to compute the current solution.

The above algorithms are intended to solve nonstiff systems. If they appear to be unduly slow, try using one of the stiff solvers below.

ode15s is a variable order solver based on the numerical differentiation formulas (NDFs). Optionally, it uses the backward differentiation formulas (BDFs, also known as Gear's method) that are usually less efficient. Like ode113, ode15s is a multistep solver. Try ode15s when ode45 fails, or is very inefficient, and you suspect that the problem is stiff, or when solving a differential-algebraic problem.

ode23s is based on a modified Rosenbrock formula of order 2. Because it is a one-step solver, it may be more efficient than ode15s at crude tolerances. It can solve some kinds of stiff problems for which ode15s is not effective.

ode23t is an implementation of the trapezoidal rule using a "free" interpolant. Use this solver if the problem is only moderately stiff and you need a solution without numerical damping. ode23t can solve DAEs.

ode23tb is an implementation of TR-BDF2, an implicit Runge-Kutta formula with a first stage that is a trapezoidal rule step and a second stage that is a backward differentiation formula of order two. By construction, the same iteration matrix is used in evaluating both stages. Like ode23s, this solver may be more efficient than ode15s at crude tolerances.
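As an illustration, a minimal call to one of these solvers for a problem shaped like the question's backEuler(f, xinit, yinit, xfinal, h) might look like this (the right-hand side f is a made-up example):
% Solve y' = f(x, y) on [xinit, xfinal] with a built-in adaptive solver.
f = @(x, y) -2*y + x;                 % example right-hand side (placeholder)
xinit = 0; xfinal = 5; yinit = 1;
[x, y] = ode45(f, [xinit, xfinal], yinit);
plot(x, y)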
I think this code could work. Try this.
syms yn                                   % symbolic unknown standing in for y(i+1)
for i = 1:n
    t(i+1) = t(i) + dt;
    y(i+1) = double(solve(yn == y(i) + dt*f(t(i+1), yn), yn));
end
The code is fine; you just have to add another loop inside the for loop, to iterate until the estimate is consistent:
while abs((y(i+1) - ynew)/ynew) > 0.0000000001
    ynew = y(i+1);
    y(i+1) = y(i) + h*f(x(i+1), ynew);
end
I checked for a dummy function and the results were promising.