Finding an unknown ordinary differential equation - MATLAB

Given
d²x/dt² + a·dx/dt + 7.9·x³ = 3.2·sin(x·t)
with initial conditions
x(0) = +1.2
dx/dt(0) = −3.3
x(2.3) = −0.6
Find numerically all the possible values of a, each accurate to at least 3 significant digits.
Is there any method other than brute force for solving this?

As far as I can see, it is not possible to solve this problem as stated.
Here is what I did. I implemented your problem in a reasonably general way:
%{
Find all 'a' for which
d²x/dt² + a·dx/dt + 7.9·x³ - 3.2·sin(x·t) = 0
with initial conditions
x(0) = +1.2
dx/dt(0) = −3.3
x(2.3) = −0.6
%}
function odetest
    % See how the function search_a(a) behaves around a = 0:
    test_as = 0 : 0.1 : 10;
    da = zeros(size(test_as));
    for ii = 1:numel(test_as)
        da(ii) = search_a(test_as(ii));
    end
    figure(100), clf, hold on
    plot(test_as, da)
    axis tight
    xlabel('a')
    ylabel('|x(2.3) + 0.6|')
    % Roughly cherry-pick some positive values, improve the estimate, and
    % plot the solutions
    opt = optimset('tolfun',1e-14, 'tolx',1e-12);
    plot_x(fminsearch(@search_a, 0.0, opt), 1)
    plot_x(fminsearch(@search_a, 1.4, opt), 2)
    plot_x(fminsearch(@search_a, 3.2, opt), 3)
    % Plot a single solution
    function plot_x(a,N)
        [xt, t] = solve_ode(a);
        figure(N), clf, hold on
        plot(t,xt)
        plot(2.3, -0.6, 'rx', 'markersize', 20)
        title(['x(t) for a = ' num2str(a)])
        xlabel('t')
        ylabel('x(t)')
    end
end
% Solve the problem for a given value of a, and return the absolute
% difference between the computed x(2.3) and the desired value (-0.6)
function da = search_a(a)
    x_desired = -0.6;
    xt = solve_ode(a);
    da = abs(xt(end) - x_desired);
end
% Solve the problem for any given value of a
function [xt, t] = solve_ode(a)
    y0 = [1.2 -3.3];
    tfinal = 2.3;
    opt = odeset('AbsTol',1e-12, 'RelTol',1e-6);
    [t,yt] = ode45(@(t,y) odefun(t,y,a), [0 tfinal], y0, opt);
    xt = yt(:,1); % transform back to x(t)
end
% Most ODE solvers solve first-order systems. This is not a problem for a
% second-order equation, because if we make the transformation
%
%   y(t) = [ x (t)
%            x'(t) ]
%
% then we can instead solve the first-order system
%
%   y'(t) = [ x' (t)
%             x''(t) ]   <- the original second-order DE, solved for x''
%
function dydt = odefun(t,y,a)
    dydt = [y(2)
            -a*y(2) - 7.9*y(1)^3 + 3.2*sin(y(1)*t)];
end
The first part gave me this figure:
Some further investigation suggests that this only grows for larger a.
This figure gave rise to the initial estimates a = [0, 1.4, 3.2], which I then refined via fminsearch() and for which I plotted the corresponding solutions:
So, that probably enables you to hand in your homework :)
However, the reason I say it's impossible to answer the question as stated is what the first plot looks like for negative a:
The oscillatory behavior seems to continue indefinitely, and the spacing in between the zeros seems to decrease in a non-predictable way.
Now, my university days are long behind me, and I'm not so well-versed in ODE theory anymore. Perhaps there is a pattern to it, that just doesn't show because of numerical problems. Or perhaps the oscillation stops after some value, never to return again. Or perhaps another zero turns up at a = +1053462664212.25.
I can't prove any of these things, I just know how to brute-force it; the rest is up to you.
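As an aside: since search_a returns an absolute value, another option for the refinement step is fzero on the signed residual x(2.3) + 0.6, bracketed using the coarse scan. A minimal sketch, reusing solve_ode from the listing above (the bracket [1 2] is purely illustrative and must contain a sign change):
residual = @(a) last_x(a) + 0.6;      % signed version of search_a: x(2.3) - (-0.6)
a_refined = fzero(residual, [1 2]);   % refine one candidate from the coarse scan
function xend = last_x(a)
    xt = solve_ode(a);                % solve_ode as defined above
    xend = xt(end);
end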

Related

How to miss out matrix elements in pcolor plot?

I'm solving a set of nonlinear simultaneous equations using Matlab's fsolve to find unknown parameters x1 and x2. The simultaneous equations have two independent parameters a and b, as defined in the root2d function:
function F = root2d(x,a,b)
F(1) = exp(-exp(-(x(1)+x(2)))) - x(2)*(a+x(1)^2);
F(2) = x(1)*cos(x(2)) + x(2)*sin(x(1)) - b;
end
I use the following code to solve the simultaneous equations, and plot the results as a 2d figure using pcolor.
alist = linspace(0.6,1.2,10);
blist = linspace(0.4,0.8,5);
% results
x1list = zeros(length(blist),length(alist));
x2list = zeros(length(blist),length(alist));
% solver options
options = optimoptions('fsolve','Display','None');
for ii = 1:length(blist)
    b = blist(ii);
    for jj = 1:length(alist)
        a = alist(jj);
        x0 = [0 0]; % initial guess
        [xopt,fval,exitflag] = fsolve(@(x) root2d(x,a,b), x0, options);
        % optimised values
        x1list(ii,jj) = xopt(1);
        x2list(ii,jj) = xopt(2);
        success(ii,jj) = exitflag; % did the solver succeed?
    end
end
% plotting
figure
s = pcolor(alist(success>0),blist(success>0),x1list(success>0));
xlabel('a')
ylabel('b')
title('my data x_1')
figure
s = pcolor(alist(success>0),blist(success>0),x2list(success>0));
xlabel('a')
ylabel('b')
title('my data x_2')
However I only want to plot the x1 and x2 where the solver has successfully converged to a solution. This is where the success matrix element (or exitflag) has a value greater than 0. Usually you just write x1list(success>0) when using the plot function and Matlab omits any solutions where (success<=0), but pcolor doesn't have that functionality.
Is there a way around this? For example, displaying all (success<=0) solutions as a black area.
Yes there is!
The easiest way is to set those entries to NaN, so they are simply not drawn. Just do
x1list(success <= 0) = NaN;
pcolor(alist, blist, x1list)
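If you also want those failed cells to show up as black rather than just blank, one simple trick (a sketch, not the only way) is to exploit the fact that NaN cells are not drawn at all, so they show the axes background through:
pcolor(alist, blist, x1list)   % x1list already has NaN at the failed points
set(gca, 'Color', 'k')         % undrawn NaN cells now appear black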

ode45 converges to correct curve shape, but with wrong solution

Thanks in advance for your help. I'm not looking for an explicit solution to my problem, but rather to have my probably obvious errors pointed out.
I have been plugging away at solving a system of non-linear, first order ODEs in MATLAB. The system was solved numerically in this study: http://web.math.ku.dk/~moller/e04/bio/ludwig78.pdf
I have been following the documentation for ode45, and have code that runs.
I have done all of the work to understand and recreate the model from scratch. I presented the qualitative part for a class project. What I am doing now is taking that project a step farther by solving the system in MATLAB with runge-kutta (or any method that works). Finally, I want to dive into the theory behind the numerical analysis to find out why the chosen method converges.
Here is a plot of the numerically solved system, which I am trying to re-create:
I have found that I can create a plot with roughly the same shape, but there are several problems:
1) The time-scale over which the change occurs is three times that of the above plot.
2) The range of function values is vastly wrong.
3) The desired shapes only occur if I tweak the initial conditions to be significantly different from what is shown near t=0 above.
So what I'm looking for is a reason for these discrepancies. I've checked my system of ODEs and parameter values so many times my eyes are blurry. Perhaps I am missing something conceptually?
Code:
% System Parameters:
r_b = 1.52;
k_b = 355;
alph = 1.11;
bet = 43200;
r_e = 0.92;
k_e = 1;
p = 0.00195;
r_s = 0.095;
k_s = 25440;
tspan = [0 200];
init = [1 1 1];
[t, Y] = ode45(@(t,y) odefcn(t, y, r_b, k_b, alph, bet, r_e, k_e, p, r_s, k_s), tspan, init);
subplot(3,1,1);
plot(t,Y(:,1),'b');
title('Budworm Density');
subplot(3,1,2)
plot(t,Y(:,2),'g');
title('Branch Density');
subplot(3,1,3);
plot(t,Y(:,3),'r');
title('Foliage Condition');
function dydt = odefcn(t, y, r_b, k_b, alph, bet, r_e, k_e, p, r_s, k_s)
    dydt = [ r_b*y(1)*(1 - y(1)/(k_b*y(2))) - bet*(y(1)^2/((alph*y(2))^2 + y(1)^2));
             r_s*y(2)*(1 - (y(2)*k_e)/(k_s*y(3)));
             r_e*y(3)*(1 - (y(3)/k_e)) - p*y(1)/y(2)
           ];
end
I don't see anything wrong with your code as such. But I think there are some subtleties involved in producing the figure which are not well explained in the paper.
1) The S axis is scaled (it says 'relative' in the label). I believe they've scaled S by k_s. I think you also need to scale the parameter p (set p = p*k_s) else the final term in the equation for E will be tiny and the E population won't decrease over the required timescales.
2) I think they must have enforced some lower limit on E, to avoid dividing by 0. You can see in the figure that E->0 first, but in your equation for S, if this happened then you would be dividing by 0 and the solver wouldn't converge.
Putting these together, the following slight modification of your code produces a result more similar to that in the paper:
% System Parameters:
r_b = 1.52;
k_b = 355;
alph = 1.11;
bet = 43200;
r_e = 0.92;
k_e = 1;
p = 0.00195;
r_s = 0.095;
k_s = 25440;
% Scale p with k_s
p = p*k_s;
tspan = [0 50]; % [0 200];
init = [1e-16 0.075*k_s 1]; % [1 1 1];
[t, Y] = ode45(@(t,y) odefcn(t, y, r_b, k_b, alph, bet, r_e, k_e, p, r_s, k_s), tspan, init);
% To scale before plotting, so everything fits on a 0->1 y axis.
maxB = 500;
S_scale = k_s;
figure('Position', [200 200 1000 600]);
hold on;
plot(t,Y(:,1)/maxB,'b');
plot(t,Y(:,2)/(S_scale),'g');
plot(t,Y(:,3),'r');
ylim([0, 1]);
hold off;
box on;
legend({['Budworm Density, B / ', num2str(maxB)], ...
        ['Branch Density, S / ', num2str(S_scale)], ...
        'Foliage Condition, E'}, ...
       'Location', 'eastoutside')
function dydt = odefcn(t, y, r_b, k_b, alph, bet, r_e, k_e, p, r_s, k_s)
    % Place lower limit on E
    E = max(y(3), 1e-5);
    dydt = [ r_b*y(1)*(1 - y(1)/(k_b*y(2))) - bet*(y(1)^2/((alph*y(2))^2 + y(1)^2));
             r_s*y(2)*(1 - (y(2)*k_e)/(k_s*E));
             r_e*E*(1 - (E/k_e)) - p*y(1)/y(2)
           ];
end
There is a lot of sensitivity to the initial conditions.
A further tweak gets you closer still to the original figure, but I'm not sure if this is just a hack: in the first equation, replace k_b*y(2) with just k_b. Without this, the Budworm density becomes too big before decreasing. The new plot is below.
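For reference, a minimal sketch of that tweak: only the first component of dydt inside odefcn changes (everything else, including the lower limit on E, stays as above):
dydt = [ r_b*y(1)*(1 - y(1)/k_b) - bet*(y(1)^2/((alph*y(2))^2 + y(1)^2));   % k_b*y(2) -> k_b
         r_s*y(2)*(1 - (y(2)*k_e)/(k_s*E));
         r_e*E*(1 - (E/k_e)) - p*y(1)/y(2)
       ];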

Least squares assuming functional form of the solution

I am trying to solve the following least squares problem:
b(alpha)=A(alpha,beta)x(beta)
I am trying to use an alternative approach, which is to assume the functional form of x(beta) through the use of tunable parameters, say x(beta, a, c). How can I solve this problem in MATLAB for a least squares solution for those parameters?
I second the comments - this would be much easier if you gave a slightly more verbose description of your problem and, most importantly, added a minimal working example.
As far as I understand, though, you want to solve a linear system of equations with some additional assumptions about the fitted parameters. This can be done by expressing it as an optimisation problem.
Here, for example, I've fitted a quadratic where the coefficients of x^0 and x^1 are both dependent on some other arbitrary parameter a (for this example a = 6 - that's what we're trying to recover from the data).
There are 2 different approaches plotted here - unconstrained and constrained optimisation. You can see that all of them approximate our data well, but only the constrained optimisation recovers a value of a close to 6 (5.728). Anyway, have a look at the code and I hope it helps with your problem somewhat. If you can, try to use the reduced-number-of-parameters approach. It is always better to reduce your fitting problem to a lower-dimensional space if possible - much less risk of local minima and much faster solutions.
Here is the code:
close all; clear; clc;
%% Generate test data
x = 1:100;
rng(0); % Seed rng
% Polynomial where we know something about the parameters - we know that if
% the coefficient of x^0 is 'a', then the coefficient of x^1 is (1-a).
a = 6;
y = a + (1-a).*x + 0.1*x.^2;
y = y + 30*randn(size(x)); % Add some noise
%% Fit with mrdivide and Vandermonde matrix
A = vander(x); A = A(:,end-2:end)';
b = y;
k1 = b/A;
%% Fit with an unconstrained optimiser
f = @(k) optimfun1(x,y,k);
k0 = [1 1 1]; % Starting point
k2 = fminsearch(f,k0);
%% Fit with a constrained optimiser
f = @(k) optimfun1(x,y,k);
k0 = [1 1 1];
Aeq = [0 1 1]; beq = 1; % Constrain k2 = 1 - k3 which is equivalent to k2+k3 = 1
k3 = fmincon(f,k0,[],[],Aeq,beq);
%% Fit with a reduced number of parameters
f = @(k) optimfun2(x,y,k);
k0 = [1 1];
k4 = fminsearch(f,k0);
k4 = [k4 1-k4(2)]; % Infer the last coeff.
%% Plot
plot(x,y,'ko');
hold on
plot(x,polyval(k1,x));
plot(x,polyval(k2,x));
plot(x,polyval(k3,x));
plot(x,polyval(k4,x));
legend('k^{dat} = [6.000 -5.000 0.100];',...
sprintf('k^{unc}_1 = [%.3f %.3f %.3f]',flipud(k1(:))),...
sprintf('k^{unc}_2 = [%.3f %.3f %.3f]',flipud(k2(:))),...
sprintf('k^{cns}_1 = [%.3f %.3f %.3f]',flipud(k3(:))),...
sprintf('k^{cns}_2 = [%.3f %.3f %.3f]',flipud(k4(:))),...
'location','northwest');
grid on;
xlabel('x');
ylabel('f(x)');
title(sprintf('f(x) = a + (1-a)x + 0.1x^2; a = %d',a));
function ssr = optimfun1(x,y,k)
    yfit = polyval(k,x);
    dy = yfit - y;
    ssr = sum(dy.^2); % Sum of squared residuals
end
function ssr = optimfun2(x,y,k)
    k = [k 1-k(2)]; % Infer the last coeff.
    yfit = polyval(k,x);
    dy = yfit - y;
    ssr = sum(dy.^2); % Sum of squared residuals
end
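If you have the Optimization Toolbox, lsqcurvefit is another way to do the reduced-parameter fit; a rough sketch, assuming the same [k(1) k(2) 1-k(2)] parameterisation used in optimfun2:
model = @(k, xdata) polyval([k(1), k(2), 1 - k(2)], xdata);   % enforces coeff(x^1) + coeff(x^0) = 1
k0 = [1 1];
k5 = lsqcurvefit(model, k0, x, y);   % x, y as generated above
k5 = [k5, 1 - k5(2)];                % recover the full coefficient vector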
Without knowing exactly how the parameter works, it is difficult to figure out what to do. For example, if the parameter is
x(beta, a, c) = a * x(beta) + c
Then your equation becomes
b(alpha)= A(alpha,beta) * (a * x(beta) + c)
b(alpha) - c*A(alpha,beta) = A(alpha,beta) * a * x(beta)
which you can then perhaps solve in the standard way (I'm treating b and A as numbers and x as the only variable here, disregarding alpha and beta). For a more non-linear relation, it gets more complex.
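If, as in the question, a and c are the unknowns and the shape of x(beta) is assumed known (call it x0), the same rearrangement turns into an ordinary linear least-squares problem for [a; c]. A small sketch with made-up data:
A  = randn(50, 10);                        % hypothetical system matrix
x0 = linspace(0, 1, 10)';                  % assumed functional form of x(beta)
b  = A*(2.5*x0 + 0.3) + 0.01*randn(50, 1); % synthetic right-hand side
M  = [A*x0, A*ones(size(x0))];             % columns multiply a and c respectively
p  = M \ b;                                % least-squares estimate of [a; c]
a_est = p(1); c_est = p(2);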

Replicating a figure of an article in MATLAB

I want to replicate a figure from this article. More specifically, I want to replicate Figure number 4, which I believe is the representation of Equation 9.
So far I have come up with this code:
% implementing equation 9 and figure 4
step = 0.01; t = 1:step:3600;
d = 3; % dimension
N = 8000; % number of molecules
H = 0.01; % H = [0.01,0.1,1] is in mol/micrometer^3
H = H*6.02214078^5; % hence I scaled the Avogadro's number (right or wrong?)
D = 10; % diffusion coefficient in micrometer^2/sec
u(1) = 1./(1.^(d/2)); % inner function in equation 9; first pulse
for i = 2:numel(t)/1000
    u(i) = u(i-1)+(1./(i.^(d/2))); % u -> the pulse number
    lmda(i) = (1/(4*pi*D))*((N/(H)).*sum(u)).^(2/d);
end
figure;plot(lmda)
But I am not able to replicate it.
Equation 9
For details on the parameters, refer to the above code. The authors did mention that the summation in equation 9 is a Riemann zeta series. Wonder if that has anything to do with the result?
Figure 4, which I am trying to replicate:
Could someone kindly tell me the mistake I am making?
P.s: This is not a homework.
Problem 1: You think you are scaling by Avogadro's number on this line
H = H*6.02214078^5;
In fact, you're scaling by approximately 7920=6.022^5. If you wanted to scale by the Avogadro number then you should do:
H = H * 6.02214078e23 % = 6.02214078 * 10^23 : the Avogadro number
Problem 2: You aren't plotting against t, you're plotting against the sample number which doesn't really make sense (unless your t happened to be in integer seconds). Remove the /1000 from your loop
for i = 2:numel(t)
% ...
end
% Then plot
plot(t, lmda)
At this stage we can see something is really wrong. Now that we're scaling by the correct Avogadro number, the orders of magnitude are way out. I suggest you trust that the H in figure 4 and the H in equation 9 are the same H; it would be very confusing if the author intended anything different!
On that basis, I would suggest you are using the wrong D, N, or time between pulses. I've set up the pulse timing a bit clearer in my code below. I've also streamlined your loop somewhat using vectorisation, and removed the H scaling.
If you tweak it so dtPulses=100 as well as D=100, then the plots are almost identical. You perhaps need to consider how these two numbers affect the result...
% implementing equation 9 and figure 4
d = 3; % dimension
N = 8000; % number of molecules
D = 100; % diffusion coefficient in micrometer^2/sec
dtPulses = 10; % Seconds between pulses
tPulses = 1:dtPulses:3600; % Time array to plot against
nt = numel(tPulses);
i = 1:nt; % pulse numbers
u = 1 ./ (i.^(d/2)); % inner function in equation 9: individual pulse
for k = 2:nt % Running sum
    u(k) = u(k-1) + u(k);
end
% Now plot for different H (mol/micrometer^3)
H = [0.01, 0.1, 1];
figure; hold on; linestyles = {':k', '--k', '-k'};
for nH = 1:3
    lmda = ((1/(4*pi*D))*(N/H(nH)).*u).^(2/d);
    plot(tPulses, lmda, linestyles{nH}, 'linewidth', 2)
end
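Incidentally, the running-sum loop above is just a cumulative sum, so the same u can be built without the loop:
i = 1:nt;
u = cumsum(1 ./ (i.^(d/2)));  % identical to the loop above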

Matlab solution for two graphs

I have a function f(t) and want to get all the points where it intersects y=-1 and y=1 in the range 0 to 6*pi.
The only way I could do it is plotting them and trying to locate the x-axis point where f(t) meets the y=1 graph. But this doesn't give me the exact point; instead it gives me a nearby value.
clear;
clc;
f = @(t) (9*(sin(t))/t) + cos(t);
fplot(f, [0 6*pi]);
hold on;
tt = 0:0.01:6*pi;
plot(tt, ones(size(tt)), 'r-');
plot(tt, -ones(size(tt)), 'r-');
x = 0:0.2:6*pi; h = cos(x); plot(x, h, ':')
You are essentially trying to solve a system of two equations, at least in general. For the simple case where one of the equations is a constant, thus y = 1, we can solve it using fzero. Of course, it is always a good idea to use graphical means to find a good starting point.
f = @(t) (9*(sin(t))./t) + cos(t);
y0 = 1;
The idea, if you want to find where the two curves intersect, is to subtract them and then look for a root of the resulting difference.
(By the way, note that I used ./ for the divide, so that MATLAB won't have problem for vector or array input in f. This is a good habit to develop.)
Note that f(t) is not strictly defined in MATLAB at zero, since it results in 0/0. (A limit exists of course for the function, and can be evaluated using my limest tool.)
limest(f,0)
ans =
10
Since I know the solution is not at 0, I'll just use [eps, 6*pi] as the search interval for fzero.
format long g
fzero(@(t) f(t) - y0,[eps,6*pi])
ans =
2.58268206208857
But is this the only root? What if we have two or more solutions? Finding all the roots of a completely general function can be a nasty problem, as some roots may be infinitely close together, or there may be infinitely many roots.
One idea is to use a tool that knows how to look for multiple solutions to a problem. Again, found on the File Exchange, we can use rmsearch.
y0 = 1;
rmsearch(@(t) f(t) - y0,'fzero',1,eps,6*pi)
ans =
2.58268206208857
6.28318530717959
7.97464518075547
12.5663706143592
13.7270312712311
y0 = -1;
rmsearch(@(t) f(t) - y0,'fzero',1,eps,6*pi)
ans =
3.14159265358979
5.23030501095915
9.42477796076938
10.8130654321854
15.707963267949
16.6967239156574
Try this:
y = fplot(f,[0 6*pi]);
Now you can analyse y for the value you are looking for.
[x,y] = fplot(f,[0 6*pi]);
[~,i] = min(abs(y-1));
point = x(i);
This will find one crossing point, the nearest one. Otherwise you have to go through the vector with a for loop.
Here is the variant with a for loop that I often use:
clear;
clc;
f = @(t) (9*sin(t)./t) + cos(t);
fplot(f,[0 6*pi]);
[fx,fy] = fplot(f,[0 6*pi]);
hold on;
tt = 0:0.01:6*pi;
plot(tt, ones(size(tt)), 'r-');
plot(tt, -ones(size(tt)), 'r-');
x = 0:0.2:6*pi; h = cos(x); plot(x, h, ':')
k = 1;  % rising
kt = 1; % rising
pn = 0; % number of crossings
fy = abs(fy - 1);
for n = 2:length(fx)
    if fy(n-1) > fy(n)
        k = 0; % falling
    else
        k = 1; % rising
    end
    if k == 1 && kt == 0 % change from falling to rising: local minimum of |f(t) - 1|
        pn = pn + 1;
        p(pn) = fx(n);
    end
    kt = k;
end
You can make this faster if you turn it into a MEX-file...
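A vectorised alternative along the same lines (a sketch, assuming the older two-output fplot syntax used above): detect the samples where f(t) - 1 changes sign, then refine each bracket with fzero:
f = @(t) 9*sin(t)./t + cos(t);
[fx, fy] = fplot(f, [0 6*pi]);           % sampled curve
idx = find(diff(sign(fy - 1)) ~= 0);     % sample intervals containing a crossing of y = 1
crossings = zeros(size(idx));
for m = 1:numel(idx)
    crossings(m) = fzero(@(t) f(t) - 1, [fx(idx(m)), fx(idx(m)+1)]);
end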