Matlab: understanding the ODE solver

I have a system of coupled differential equations that I am solving with the ode23 solver. When a certain threshold is reached, one of the parameters changes, which reverses the slope of my function.
I followed the behavior of the ODE with the debugger and noticed that it starts to jump back in "time" around this point. Basically, it generates more data points. However, these are not all represented in the final solution vector.
Can somebody explain this behavior, especially why not all calculated values find their way into the solution vector?
//Edit: To clarify, the behavior starts when v changes from 0 to any other value. (When I write every value of v to a vector, it has more than 1000 components, while the ode solver solution only has ~300.)
Find the code of my equations below:
%chemostat model, based on:
%DCc=-v0*Cc/V + umax*Cs*Cc/(Ks+Cs)-rd
%Dcs=(v0/V)*(Cs0-Cs) - Cc*(Ys*umax*Cs/(Ks+Cs)-m)
function dydt = systemEquationsRibose(t,y,funV0Ribose,V,umax,Ks,rd,Cs0,Ys,m)
v = funV0Ribose(t,y); % funV0Ribose determines v dependent on y(1)
if y(2) < 0
    y(2) = 0; % clip negative substrate concentrations
end
dydt = [-(v/V)*y(1) + (umax*y(1)*y(2))/(Ks+y(2)) - rd;           % dCc/dt
        (v/V)*(Cs0-y(2)) - ((1/Ys)*(umax*y(2)*y(1))/(Ks+y(2)))]; % dCs/dt
Thanks in advance!
Cheers,
dahlai

The first conditional can also be expressed as
y(2) = max(0, y(2)).
As one can see, this is still a continuous function, but with a kink, i.e., a discontinuity in the first derivative. One can also interpret this as a point with curvature radius 0, i.e., infinite curvature.
ode23 uses an order 2 method to integrate, an order 3 method to estimate the error and probably the order 1 Euler step to estimate stiffness.
An integration step over the kink renders all discretization errors order 1 (or 2, depending on the convention), confounding the logic of the step-size control. This forces a rather radical step-size reduction; but since that small step then most probably falls short of the kink, the correct orders are found again, resulting in a step-size increase in the next step, which can again step over the kink, and so on.
The return array only contains successful integration steps, not the failed attempts of the step-size control.
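One standard remedy, sketched below with toy right-hand sides (not the chemostat model from the question), is to let an event stop the solver exactly at the switch and then restart with the changed dynamics, instead of hiding the kink inside dydt:
rhs1 = @(t,y) [-0.5*y(1); -1.0];   % assumed dynamics before the switch
rhs2 = @(t,y) [-0.5*y(1);  0.0];   % assumed dynamics after y(2) has hit zero
opts = odeset('Events', @(t,y) deal(y(2), 1, -1)); % stop when y(2) falls to 0
[t1, y1, te, ye] = ode23(rhs1, [0 10], [1; 0.3], opts);
if ~isempty(te)                    % event fired: restart on the other side
    [t2, y2] = ode23(rhs2, [te(end) 10], [ye(end,1); 0]);
    t1 = [t1; t2]; y1 = [y1; y2];
end
plot(t1, y1)
Every accepted step is then taken over a smooth right-hand side, so the step-size controller never has to fight the kink.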

I have some problems with the derivative in Matlab

In MATLAB:
Using the X and Y values below, write a MATLAB function SECOND_DERIV. The output of the function should be the approximate value of the second derivative of the data at x, the input variable of the function.
Use the forward difference method and interpolate to get your final answer;
X=[1,1.2,1.44,1.73,2.07,2.49,2.99,3.58,4.3,5.16,6.19,7.43,8.92,10.7,12.84,15.41,18.49];
Y=[18.89,19.25,19.83,20.71,21.96,23.6,25.56,27.52,28.67,27.2,19.38,-2.05,-50.9,-152.82,-354.73,-741.48,-1465.11];
This is my coding:
function output = SECOND_DERIV(x)
X=[1,1.2,1.44,1.73,2.07,2.49,2.99,3.58,4.3,5.16,6.19,7.43,8.92,10.7,12.84,15.41,18.49];
Y=[18.89,19.25,19.83,20.71,21.96,23.6,25.56,27.52,28.67,27.2,19.38,-2.05,-50.9,-152.82,-354.73,-741.48,-1465.11];
%forward difference method, first application.
XX=X(1:end-1);
%first derivative.
dydx=diff(Y)./diff(X);
%second derivative (divided by the spacing again, since X is not equidistant).
dydx2=diff(dydx)./diff(XX);
%forward difference method, second application.
XXX=XX(1:end-1);
%get the second derivative at the input x.
output = interp1(XXX,dydx2,x,'linear','extrap');
end
I do not know what is wrong with it.
This is the result I got from my course's web page.
First, there is no "the" approximate value, only "an" approximate value among an infinite set of approximation schemes. In that sense your exercise is ill-defined (but, to be fair, something from your lessons probably completes the specification).
Using forward differences twice is almost as bad an approximation as it can get. With each forward difference you displace the abscissa of the preferred (central difference) approximation by half a sample distance towards the "past".
For the first difference this can be justified by the desire to stick with the original X samples. But the second step introduces another displacement by half a sample distance. To keep the approximation error reasonably low, the least you can do is to correct the displacement afterwards by one full sample distance towards the "future". This doesn't bring you exactly back to central differences, because the samples are not equidistant, but it's the minimal correction that should be done for the sake of accuracy.
Hence I would replace
XXX=XX(1:end-1)
by
XXX=XX(2:end)
But again, like so many school exercises, the problem is ill-defined, and it is difficult to tell from a distance whether this is what is expected from you.
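To see the displacement numerically, here is a small illustration on assumed data (y = x^3, whose exact second derivative is 6*x); only the choice of abscissae differs between the two interp1 calls:
X = [0 1 2 3 4];
Y = X.^3;                          % exact second derivative is 6*x
d1 = diff(Y)./diff(X);             % lives at the midpoints X(1:end-1) + 1/2
d2 = diff(d1)./diff(X(1:end-1));   % displaced by a full sample in total
interp1(X(1:end-2), d2, 2, 'linear', 'extrap')  % uncorrected abscissae: returns 18
interp1(X(2:end-1), d2, 2, 'linear', 'extrap')  % shifted as suggested: returns 12, the exact value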

Matlab ode15s: positive dx/dt, decreasing x(t)

In my script, I call the ODE solver ode15s which solves a system of 9 ODE's. A simplified structure of the code:
[t, x] = ode15s(@odefun, tini:tend, x0, options)
...
function dx = odefun(t,x)
dx = zeros(9,1); % preallocate as the column vector the solver expects
r1=... %rate equation 1, dependent on x(1) and x(3) for example
r2=... %rate equation 2
...
dx(1) = r1+r2-...
dx(2) = ...
...
dx(9) = ...
end
When reviewing the results, I was curious why the profile of one state variable was increasing over a certain range. To investigate this, I used conditional breakpoints within the ODE function so I could check all the rates and all the dx(i)/dt equations.
To my big surprise, I found that the differential equation of the decreasing state variable was positive. So I stepped through multiple rounds with F5 in the debugger, and noticed that the state variable indeed consistently decreased, while dx(i)/dt always remained positive.
Can anyone explain to me how this is possible?
It is not advisable to pause the integration in the middle like that and examine the states and derivatives. ode15s does not simply step through the solution like a naive ODE solver. It makes a bunch of calls to the ODE function with semi-random states in order to compute higher-order derivatives. These states are not solutions to the system; they are used internally by ode15s to arrive at a more accurate solution later.
If you want to get the derivative of your system at particular times, first compute the entire solution and then call your ODE function with slices of that solution at the times you are interested in.
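A minimal sketch of that workflow (odefun and tini:tend are the question's names; x0 stands for the initial state, which the question elides):
[t, x] = ode15s(@odefun, tini:tend, x0, options);
dx = zeros(size(x));
for k = 1:numel(t)
    dx(k,:) = odefun(t(k), x(k,:).').';  % RHS evaluated at an accepted solution point
end
Plotted against t, dx now reflects the true slope of the computed solution.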

Forcing matlab ODE solvers to use dy/dx = 0 IF dy/dx is negative

I need to numerically integrate the following system of ODEs:
dA/dR = f(R,A,B)
dB/dR = g(R,A,B)
I'm solving the ODEs for an initial-value stability problem. In this problem, the system is initially stable but goes unstable at some radius. However, while it is stable, I don't want the amplitude to decay away from the starting value (to O(10^-5), for example), as this is non-physical: the system's stability is limited by the background noise amplitude. The amplitude should remain at the starting value of 1 until the system destabilises. Hence, I want to overwrite the derivative estimate with zero whenever it is negative.
I've written some 4th order Runge-Kutta code that achieves this, but I'd much prefer to simply pass ODE45 (or any of the built in solvers) a parameter to make it overwrite the derivative whenever it is negative. Is this possible?
A simple, fast, efficient way to implement this is via the max function. For example, if you want to make sure all of your derivatives remain non-negative, in your integration function:
function ydot = f(x,y)
ydot(1) = ...
ydot(2) = ...
...
ydot = max(ydot,0);
Note that this is not the same thing as the output states returned by ode45 remaining non-negative. The above should ensure that your state variables never decay.
Note, however, that this effectively makes your integration function stiff. You might consider using a solver like ode15s instead, or at least confirming that the results are consistent with those from ode45. Alternatively, you could use a continuous sigmoid function instead of the discontinuous, step-like max. This is partly a modeling decision.
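For instance, here is a minimal sketch of the smooth variant, using a numerically stable softplus as the sigmoid-style clamp (the sharpness k = 50 is an assumed modeling choice, not from the answer):
k = 50;                                              % larger k approaches the hard max
softplus = @(z) max(z, 0) + log1p(exp(-k*abs(z)))/k; % smooth version of max(z, 0)
z = linspace(-0.2, 0.2, 9);
disp([z; softplus(z)])   % near 0 for negative z, close to z for positive z
Inside the integration function you would then write ydot = softplus(ydot); instead of ydot = max(ydot,0);.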

Solving multiple phase angles for multiple equations

I have several equations, each with its own frequency and amplitude. I would like to sum the equations together and adjust the individual phases phase1, phase2, and phase3 to keep the total amplitude of eq_total under a specific value, like 0.8. I know I can normalize the signal or change the vertical offset, but for my purposes the amplitude must be controlled only by changing/finding the values of phase1, phase2, and phase3 that limit the maximum amplitude when the equations are summed.
Note: I'm using constructive and destructive phase interference to adjust the maximum amplitude of the summed equations.
Example:
eq1 = 0.2*cos(2*pi*t*3 + phase1) + verticalOffset1
eq2 = 0.7*cos(2*pi*t*9 + phase2) + verticalOffset2
eq3 = 0.8*cos(2*pi*t*5 + phase3) + verticalOffset3
eq_total = eq1 + eq2 + eq3
Is there a way to solve for phase1,phase2, and phase3 so that the amplitude of the summed signals in eq_total never goes over 0.8 by just adjusting/finding the values of phase1,phase2,and phase3?
Here's a picture of a geogebra applet I tested this idea with.
Here's the geogebra ggb file I used to edit/test the idea with (I used this to see if my idea would work). Java is required if you want to interact with the applet dynamically:
http://dl.dropbox.com/u/6576402/questions/ggb/sin_find_phases_example.ggb
I'm using matlab/octave
Thanks
Your example
eq1 = 0.2*cos(2*pi*t*3 + phase1) + verticalOffset1
eq2 = 0.7*cos(2*pi*t*9 + phase2) + verticalOffset2
eq3 = 0.8*cos(2*pi*t*5 + phase3) + verticalOffset3
eq_total = eq1 + eq2 + eq3
where the maximum amplitude should be less than 0.8, has infinitely many solutions. Unless you have some additional objective you'd like to achieve, I suggest modifying the problem so that you find the combination of phase angles with a maximum amplitude of exactly 0.8 (or 0.79, so that you're guaranteed to stay below).
Furthermore, only two of the three phase angles are independent: shifting time by some tau changes the three phases by 2*pi*3*tau, 2*pi*9*tau and 2*pi*5*tau without affecting the maximum amplitude over a full period, so one of them (say phase3) can be fixed to zero. Thus, you have only two unknowns in eq_total.
You can solve the nonlinear optimization problem using e.g. FMINSEARCH. You formulate the problem such that max(abs(eq_total(phase1,phase2))) should equal 0.79.
Thus:
%# define the vector t, verticalOffset here
%# objectiveFunction is (max amplitude - 0.79)^2, so phase shifts 1 and 2 that
%# satisfy this (approximately) should keep the signal from exceeding 0.8
objectiveFunction = @(phase) (max(abs(0.2*cos(2*pi*t+phase(1)) + 0.7*cos(2*pi*t*9+phase(2)) + 0.8*cos(2*pi*t*5) + verticalOffset)) - 0.79)^2;
%# search for optimal phase shift, starting at no shift
solution = fminsearch(objectiveFunction,[0;0]);
EDIT
Unfortunately, when I try this code and plot the results, the maximum amplitude is not 0.79, it's over 1. Am I doing something wrong? See the code below:
t = linspace(0,1,8000);
verticalOffset = 0;
objectiveFunction = @(phase) (max(abs(0.2*cos(2*pi*t+phase(1)) + 0.7*cos(2*pi*t*9+phase(2)) + 0.8*cos(2*pi*t*5) + verticalOffset)) - 0.79)^2;
s1 = fminsearch(objectiveFunction, [0;0])
eqt = 0.2*cos(2*pi*t+s1(1)) + 0.7*cos(2*pi*t*9+s1(2)) + 0.8*cos(2*pi*t*5) + verticalOffset;
plot(eqt)
fminsearch will find a minimum of the objective function. Whether this solution satisfies all your conditions is something you have to test. In this case, the solution given by fminsearch with the starting value [0;0] gives a maximum of ~1.3, which is obviously not good enough. However, when you plot the maximum for a range of phase angles from 0 to 2*pi, you'll see that fminsearch didn't get stuck in a bad local minimum. Rather, there is no good solution at all (the z-axis of the plot is the maximum).
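A sketch of that kind of scan (grid resolution and time sampling are arbitrary choices here):
t = linspace(0, 1, 2000);
[p1, p2] = meshgrid(linspace(0, 2*pi, 60));
peak = arrayfun(@(a,b) max(abs(0.2*cos(2*pi*t+a) + 0.7*cos(2*pi*t*9+b) + ...
                             0.8*cos(2*pi*t*5))), p1, p2);
min(peak(:))   % smallest achievable maximum amplitude over the grid
surf(p1, p2, peak), xlabel('phase1'), ylabel('phase2')
If min(peak(:)) is well above 0.8, no choice of phases can meet the constraint.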
If I understand you correctly, you are trying to vary the amplitude of a signal by adjusting its phase. To my knowledge, this is not possible.
For a signal
s = A * cos (w*t + phi)
only A allows you to change the amplitude. With w you change the frequency of the signal and phi regulates the "horizontal shift".
Furthermore, I think you are missing a "moving variable" like the time t in the equation above.
Maybe this article clarifies things a little.
If you set all the vertical offsets to be equal to -1, then it solves your problem because each eq# will never be > 0, so the sum can never be >0.8.
I know that this isn't that helpful, but I'm hoping that this will help you understand your problem better.

How to overcome singularities in numerical integration (in Matlab or Mathematica)

I want to numerically integrate the following:
1/(4*Pi^2) * Integrate[ Cos[(x+y)/2]^2 * (n[x]-n[y]) / (eps[x]-eps[y])^2, {x,-Pi,Pi}, {y,-Pi,Pi} ]
where
eps[x] = 2*(a - b*Cos[x]) and n[x] = 1/(1 + Exp[beta*eps[x]]),
and a, b and β are constants which, for simplicity, can all be set to 1.
Neither Matlab using dblquad, nor Mathematica using NIntegrate, can deal with the singularity created by the denominator. Since it's a double integral, I can't specify where the singularity is in Mathematica.
I'm sure that it is not infinite, since this integral is based on perturbation theory, and a variant of it (differing by a factor shown in an image that is missing here) has been found before (just not by me, so I don't know how it's done).
Any ideas?
Any ideas?
(1) It would be helpful if you provide the explicit code you use. That way others (read: me) need not code it up separately.
(2) If the integral exists, it has to be zero. This is because swapping x and y negates the n[x]-n[y] factor while keeping the rest the same; yet the symmetry of the integration range means the swap amounts to just renaming your variables, so the integral must also stay the same.
(3) Here is some code that shows it will be zero, at least if we zero out the singular part and a small band around it.
a = 1;
b = 1;
beta = 1;
eps[x_] := 2*(a-b*Cos[x])
n[x_] := 1/(1+Exp[beta*eps[x]])
delta = .001;
pw[x_,y_] := Piecewise[{{1,Abs[Abs[x]-Abs[y]]>delta}}, 0]
We add 1 to the integrand just to avoid accuracy issues with results that are near zero.
NIntegrate[1+Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2*pw[Cos[x],Cos[y]],
{x,-Pi,Pi}, {y,-Pi,Pi}] / (4*Pi^2)
I get the result below.
NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following:
singularity, value of the integration is 0, highly oscillatory integrand,
or WorkingPrecision too small.
NIntegrate::eincr:
The global error of the strategy GlobalAdaptive has increased more than
2000 times. The global error is expected to decrease monotonically after a
number of integrand evaluations. Suspect one of the following: the
working precision is insufficient for the specified precision goal; the
integrand is highly oscillatory or it is not a (piecewise) smooth
function; or the true value of the integral is 0. Increasing the value of
the GlobalAdaptive option MaxErrorIncreases might lead to a convergent
numerical integration. NIntegrate obtained 39.4791 and 0.459541
for the integral and error estimates.
Out[24]= 1.00002
This is a good indication that the unadulterated result will be zero.
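Since the question also mentions Matlab, here is a rough analogue of the same masked test (a sketch using integral2 in place of the older dblquad; the max() in the denominator only guards against division by zero inside the masked band):
a = 1; b = 1; beta = 1; delta = 1e-3;
epsf = @(x) 2*(a - b*cos(x));
nf   = @(x) 1./(1 + exp(beta*epsf(x)));
f = @(x,y) (abs(cos(x) - cos(y)) > delta) ...   % zero out a band at the singular set
    .* cos((x+y)/2).^2 .* (nf(x) - nf(y)) ...
    ./ max((epsf(x) - epsf(y)).^2, (2*b*delta)^2);
integral2(f, -pi, pi, -pi, pi) / (4*pi^2)       % expected ~0, as in the Mathematica test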
(4) Substituting cx for cos(x) and cy for cos(y), and removing extraneous factors for purposes of convergence assessment, gives the expression below.
((1 + E^(2*(1 - cx)))^(-1) - (1 + E^(2*(1 - cy)))^(-1))/
(2*(1 - cx) - 2*(1 - cy))^2
A series expansion in cy, centered at cx, indicates a pole of order 1. So it does appear to be a singular integral.
Daniel Lichtblau
The integral looks like a Cauchy Principal Value type integral (i.e. it has a strong singularity). That's why you can't apply standard quadrature techniques.
Have you tried PrincipalValue->True in Mathematica's Integrate?
In addition to Daniel's observation about integrating an odd integrand over a symmetric range (so that symmetry indicates the result should be zero), you can also do the following to understand its convergence better (I'll use LaTeX notation; writing it out with pen and paper should make it easier to read; it took a lot longer to write than to do, it's not that complicated):
First, \epsilon(x) - \epsilon(y) \propto \cos(y) - \cos(x) = 2\sin(\xi_+)\sin(\xi_-), where I have defined \xi_\pm = (x \pm y)/2 (so I've rotated the axes by \pi/4). The region of integration is then \xi_+ between -\pi/\sqrt{2} and \pi/\sqrt{2}, and \xi_- between \pm(\pi/\sqrt{2} - |\xi_+|). The integrand then takes the form \frac{1}{\sin^2(\xi_-)\sin^2(\xi_+)} times terms with no divergences. So, evidently, there are second-order poles, and this isn't convergent as presented.
Perhaps you could email the people who obtained an answer with the cos term and ask what precisely they did. Perhaps there's a physical regularisation procedure being employed. Or you could have given more information on the physical origin of the problem (some sort of second-order perturbation theory for some sort of bosonic system?), had that not been off-topic here...
Maybe I am missing something here, but the integrand
f[x,y] = Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2 with n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2*(a-b*Cos[x])
is antisymmetric under exchanging x and y: f[y,x] = -f[x,y], since only the factor n[x]-n[y] changes sign.
Therefore its integral over any square domain [-u,u]x[-u,u] is zero. No numerical integration seems to be needed here. The result is just zero.
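A two-line numeric check of that antisymmetry in Matlab (the test points 0.3 and 1.2 are arbitrary):
a = 1; b = 1; beta = 1;
epsf = @(x) 2*(a - b*cos(x));
nf   = @(x) 1./(1 + exp(beta*epsf(x)));
f0   = @(x,y) cos((x+y)/2).^2 .* (nf(x) - nf(y)) ./ (epsf(x) - epsf(y)).^2;
f0(0.3, 1.2) + f0(1.2, 0.3)   % returns 0: the two terms cancel exactly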