How to fix never-ending runtimes in MATLAB

So I have three separate functions in MATLAB, each with its designated purpose.
The first one calculates the partial derivative.
The second finds the roots for a system of two equations in two variables.
The third is supposed to find the critical point.
I have tried doing everything separately and it worked. However, when I do this through my CriticalPoint function it just keeps running (it says "busy" in the lower left corner).
I have tried solving this manually, meaning that I've used each function as intended, saved the values in my workspace and used them for the next function.
The only thing different is how I treated my partials.
I have a function which evaluates the partial derivative at a point [x,y].
derivative = NPderiv(f, a, b, i)
%The i denotes if the partial derivative is taken with respect to x or y
%where if i == 1, then it means that partial derivative is with respect to
%x and if i == 2 then it is with respect to y
The thing I did differently was that I found the partial derivative for f(x,y) manually using pen and paper.
I.e.
df/dx and df/dy (these are partials, sorry for not using the proper symbol)
And then I inserted it into my function:
[x0,y0] = MyNewton(f, g, a, b)
Where df/dx is the argument for "f" and df/dy is the argument for "g".
This gives me the correct x & y values when df/dx=df/dy=0.
What I want it to do, for a given function f(x,y), is to find the partial derivative while the input values are still represented by x and y.
I was told by my teacher that the following expression:
g=@(x,y)NPderiv(f,x,y,1);
would be sufficient. Is it though? Since it works when I do it manually, but the program runs forever when I define my function g as above.
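For what it's worth, this is a minimal sketch of what that expression does (the example f here is purely illustrative, not from the question):
f = @(x,y) x.^2 .* y + sin(x);    % illustrative test function
g = @(x,y) NPderiv(f, x, y, 1);   % handle that numerically evaluates df/dx at (x,y)
g(1, 2)                           % same as calling NPderiv(f, 1, 2, 1)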
I'm not sure if it is relevant but here is my code for the partial derivative:
function derivative = NPderiv(f, a, b, i)
h = 0.0000001;
fn=zeros(1,2);
if i == 1
fn(i) = (f(a+h,b)-f(a,b))/h;
elseif i==2
fn(i) = (f(a,b+h)-f(a,b))/h;
end
derivative = fn(i);
end
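A quick sanity check of NPderiv against a pen-and-paper derivative, using the same illustrative f as above:
NPderiv(f, 1, 2, 1)    % forward-difference approximation of df/dx at (1,2)
2*1*2 + cos(1)         % exact df/dx = 2*x*y + cos(x), roughly 4.5403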
The code for my critical point function:
function [x,y] = CriticalPoint(f, a, b)
g=@(x,y)NPderiv(f,x,y,1); %partial derivative for f(x,y) with respect to x
z=@(x,y)NPderiv(f,x,y,2); %partial derivative for f(x,y) with respect to y
[x,y]=MyNewton(g,z,a,b);
end
MyNewton function:
function [x0,y0] = MyNewton(f, g, a, b)
y0 = b;
x0 = a;
tol = 1e-18;
resf = tol + 1;
resg = tol + 1;
while resf > tol && resg > tol
dfdx = NPderiv(f, x0, y0, 1);
dfdy = NPderiv(f, x0, y0, 2);
dgdx = NPderiv(g, x0, y0, 1);
dgdy = NPderiv(g, x0, y0, 2);
jf1 = [f(x0,y0) dfdy; ...
g(x0,y0) dgdy];
jg1 = [dfdx f(x0,y0); ...
dgdx g(x0,y0)];
j2 = [dfdx dfdy; ...
dgdx dgdy];
jx = det(jf1)./det(j2);
jy = det(jg1)./det(j2);
x0 = x0 - jx;
y0 = y0 - jy;
resf = abs(f(x0,y0));
resg = abs(g(x0,y0));
end
end
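For reference, the kind of call that (per the question) ends up running forever, with a hypothetical example function:
f = @(x,y) x.^2 + x.*y + y.^2 - 4*x;   % illustrative function, not from the question
[xc, yc] = CriticalPoint(f, 1, 1)      % starting guess (1,1); this is the call that stays "busy"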

Related

How to describe the derivative of matlab symbolic variable?

For example, I declared the variables x(t) and xdot(t) as functions of time: syms t x(t) xdot(t);, and the derivative of x with respect to time is xdot, xdot=diff(x,t).
But when I calculate diff(x, t), the result is not displayed as xdot. How can I make diff(x, t) display as xdot in the calculation result?
In addition, the time variable in the calculation result is displayed in the form variable(t). How can I set the result to display only variable?
syms t x(t) xdot(t);
xdot=diff(x,t);
y=diff(x,t);
y
the result of y is displayed as diff(x(t), t), however, I want it to be xdot
Consider the following code:
a = 3;
b = 3;
b
It appears to me, you would like this to display b = a, instead of b = 3. This is not how MATLAB works. Both a and b are equal to 3, but that doesn't mean a is b.
The same is true for the code:
a = 3;
b = a;
b
This gives b = 3, not b = a. I won't go into details about why this is the case (and why it has to be the case in MATLAB). It's enough to know that this is how it works.
You have assigned diff(x,t) to two different variables, xdot and y. This means that, while xdot and y have the same value, xdot is not y.
If you do xdot = 3 after your code, then y would still be dx/dt.
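A short illustration of that point, using the same symbolic setup:
syms t x(t)
xdot = diff(x, t);
y = diff(x, t);
isequal(xdot, y)   % true: they currently hold the same value
xdot = sym(3);     % reassigning xdot ...
y                  % ... leaves y unchanged: it still displays as diff(x(t), t)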
Side note: Why do you want this behaviour? Are you sure it's necessary? You can always print the result however you like:
fprintf('xdot = %s', y(t))
xdot = diff(x(t), t)
An ugly workaround for something I don't understand why you would want to do:
Create a function:
function res = xdot(t)
syms x(t) res;
res = diff(x,t);
end
And then define y as a function handle:
y = @xdot
y = function_handle with value:
@xdot
Now y(t) will still give dx/dt, but y will give xdot
I strongly recommend you don't do this.

How to use MATLAB to quickly judge whether a function is convex?

For example, f(x) = x^2 + sin(x)
Just out of curiosity, I don't want to use the CVX toolbox to do this.
You can check this within some interval [a,b] by checking if the second derivative is nonnegative. For this you have to define a vector of x-values, find the numerical second derivative and check whether it is not too negative:
a = 0;
b = 1;
margin = 1e-5;
point_count = 100;
f=@(x) x.^2 + sin(x);
x = linspace(a, b, point_count);
d2 = diff(f(x), 2) / (x(2)-x(1))^2; % numerical second derivative on the grid
is_convex = all(d2 > -margin);
Since this is a numerical test, you need to adjust the parameters to the properties of the function; if the function does wild things on a small scale we might not be able to pick it up. E.g. a rapidly oscillating function such as f=@(x)sin(99.5*2*pi*x-3) can be mis-classified with too few sample points, because 100 points cannot resolve roughly 100 oscillations per unit interval (the samples alias to a much slower wave).
clear
syms x real
syms f(x) d(x) d1(x)
f = x^2 + sin(x)
d = diff(f,x,2)==0
d1 = diff(f,x,2)
expSolution = solve(d, x)
if size(expSolution,1) == 0
if double(subs(d1,x,0))>0
disp("condition 1- the graph is concave upward");
else
disp("condition 2 - the graph is concave download");
end
else
disp("condition 3 -- not certain")
end
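A more compact symbolic variant of the same idea might look like this (a sketch; isAlways may warn and return false for inequalities it cannot decide, so treat a false result as "not proven" rather than "not convex"):
syms x real
f = x^2 + sin(x);
d2 = diff(f, x, 2);                               % 2 - sin(x)
is_convex = isAlways(d2 >= 0, 'Unknown', 'false')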

SIR model using fsolve and Euler 3BDF

Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB, and Euler 3-point backward (3BDF). I'm really confused about how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme but I'm not sure how to proceed with fsolve to solve the system of nonlinear ODEs. The SIR model is
S' = -(beta*I*S)/(S+I+R), I' = (beta*I*S)/(S+I+R) - gamma*I, R' = gamma*I
and the 3BDF scheme is formulated as
y(i+1) = 4/3*y(i) - 1/3*y(i-1) + 2*h/3*f(t(i+1), y(i+1))
clc
clear all
gamma=1/7;
beta=1/3;
ode1= @(R,S,I) -(beta*I*S)/(S+I+R);
ode2= @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3= @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f,x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs=x0:h:xmax;
y=zeros(1,length(xs));
y(1)=y0;
yb(1)=y0+f(x0,y0)*h;
for i=1:length(xs)-1
R =R0;
y1(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - R, y1(i-1,:)+2*h*F(i,:))
S = S0;
y2(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - S, y2(i-1,:)+2*h*F(i,:))
I= I0;
y3(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - I, y3(i-1,:)+2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order 2 approximation as initial guess, F(i,:)=f(t(i),y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence of positions in the short array to the time index.
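Put together in MATLAB, a minimal sketch of that loop could look like this (step size and the Euler initialization are illustrative; the Python version below does the same thing in general terms):
gamma = 1/7;  beta = 1/3;
f = @(t,y) [ -beta*y(1)*y(2)/sum(y), ...
              beta*y(1)*y(2)/sum(y) - gamma*y(2), ...
              gamma*y(2) ];                  % y = [S I R] as a row vector
h = 0.5;  t = 0:h:120;  N = numel(t);
y = zeros(N, 3);
y(1,:) = [8e6, 10, 0];
y(2,:) = y(1,:) + h*f(t(1), y(1,:));         % Euler step for the second starting value
opts = optimoptions('fsolve', 'Display', 'off');
for i = 2:N-1
    G = (4*y(i,:) - y(i-1,:))/3;
    F = f(t(i), y(i,:));                     % for the order-2 initial guess
    y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - G, y(i-1,:) + 2*h*F, opts);
end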
Using all that and a reference solution from a higher order standard solver produces error graphs for the components (plots not reproduced here), where one can see that the first order error of the constant first step results in a first order global error, while a second order error in the first step, using the Euler method, results in a clear second order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve
def BDF2(f,t,y0,y1):
N, h = len(t)-1, t[1]-t[0];
y = (N+1)*[np.asarray(y0)];
y[1] = y1;
for i in range(1,N):
t1, G = t[i+1], (4*y[i]-y[i-1])/3
y[i+1] = fsolve(lambda u: u-2*h/3*f(t1,u)-G, y[i-1]+2*h*f(t[i],y[i]), xtol=1e-3*h**3)
return np.vstack(y)
Set up the model to be solved
gamma=1/7;
beta=1/3;
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P
def f(t,y): S,I,R = y; trns = beta*S*I/(S+I+R); recv=gamma*I; return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is as above, but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt
fig,ax = plt.subplots(3,2,figsize=(12,6))
for M in [16, 8, 4]:
t = tg[::M];
h = t[1]-t[0];
y = BDF2(f,t,y0,y0)
e = (y-yg[::M])
for k in range(3): ax[k,0].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
y = BDF2(f,t,y0,y0+h*f(0,y0))
e = (y-yg[::M])
for k in range(3): ax[k,1].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
for k in range(3):
for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant");
ax[0,1].set_title("Errors: first step Euler")
plt.show()

How to solve for the upper limit of an integral using Newton's method?

I want to write a program that makes use of Newton's method to estimate the upper limit x of this integral:
T = integral from 0 to x of 1/v(s) ds
where x is the total distance.
I have functions to calculate the time it takes to arrive at a certain distance, using the trapezoid method for numerical integration (without using trapz).
function T = time_to_destination(x, route, n)
h=(x-0)/n;
dx = 0:h:x;
y = (1./(velocity(dx,route)));
Xk = dx(2:end)-dx(1:end-1);
Yk = y(2:end)+y(1:end-1);
T = 0.5*sum(Xk.*Yk);
end
It fetches its values for velocity through ppval of a cubic spline interpolation between a set of data points, where extrapolated values should not be obtainable.
function [v] = velocity(x, route)
load(route);
if all(x >= distance_km(1)) && all(x <= distance_km(end))
estimation = spline(distance_km, speed_kmph);
v = ppval(estimation, x);
else
error('Bad input, please choose a new value')
end
end
A plot of the velocity spline, if that's interesting to you, evaluated at:
dx = 1:0.1:65
Now I want to write a function that can solve for distance travelled after a certain given time, using Newton's method without fzero/fsolve. But I have no idea how to solve for the upper bound of an integral.
According to the fundamental theorem of calculus I suppose the derivative of the integral is the function inside the integral, which is what I've tried to recreate as Time_to_destination / (1/velocity).
I added the constant I want to solve for to time_to_destination, so it's
(Time_to_destination - (input time)) / (1/velocity)
Not sure if I'm doing that right.
EDIT: Rewrote my code; it works better now but my stop condition for Newton-Raphson doesn't seem to converge to zero. I also tried to implement the error from the trapezoid integration (ET) but I'm not sure if I should bother implementing that yet. Also find the route file at the bottom.
Stop condition and error calculation of Newton's method: condition = |dGuess2 - dGuess1| + |ET|
Error estimation of the trapezoid rule: ET = (T with n/2 intervals - T with n intervals) / 3
function x = distance(T, route)
n=180
route='test.mat'
dGuess1 = 50;
dDistance = T;
i = 1;
condition = inf;
while condition >= 1e-4 && 300 >= i
i = i + 1 ;
dGuess2 = dGuess1 - (((time_to_destination(dGuess1, route,n))-dDistance)/(1/(velocity(dGuess1, route))))
if i >= 2
ET =(time_to_destination(dGuess1, route, n/2) - time_to_destination(dGuess1, route, n))/3;
condition = abs(dGuess2 - dGuess1)+ abs(ET);
end
dGuess1 = dGuess2;
end
x = dGuess2
Route file: https://drive.google.com/open?id=18GBhlkh5ZND1Ejh0Muyt1aMyK4E2XL3C
Observe that the Newton-Raphson method determines the roots of the function. I.e. you need to have a function f(x) such that f(x)=0 at the desired solution.
In this case you can define f as
f(x) = Time(x) - t
where t is the desired time. Then by the second fundamental theorem of calculus
f'(x) = 1/Velocity(x)
With these functions defined the implementation becomes quite straightforward!
First, we define a simple Newton-Raphson function which takes anonymous functions as arguments (f and f') as well as an initial guess x0.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
EPSILON = 1e-5;
x = x0;
fx = f(x);
iter = 0;
while abs(fx) > EPSILON && iter <= MAX_ITER
x = x - fx / df(x);
fx = f(x);
iter = iter + 1;
end
end
Then we can invoke our function as follows
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
n = 180;
route = 'test.mat';
f = #(x) time_to_destination(x, route, n) - t_given;
df = #(x) 1/velocity(x, route);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Result
>> distance
distance = 25.5877
Also, I rewrote your time_to_destination and velocity functions as follows. This version of time_to_destination uses all the available data to make a more accurate estimate of the integral. Using these functions the method seems to converge faster.
function t = time_to_destination(x, d, v)
% x is scalar value of destination distance
% d and v are arrays containing measured distance and velocity
% Assumes d is strictly increasing and d(1) <= x <= d(end)
idx = d < x;
if ~any(idx)
t = 0;
return;
end
v1 = interp1(d, v, x);
t = trapz([d(idx); x], 1./[v(idx); v1]);
end
function v = velocity(x, d, v)
v = interp1(d, v, x);
end
Using these new functions requires that the definitions of the anonymous functions are changed slightly.
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
load('test.mat');
f = #(x) time_to_destination(x, distance_km, speed_kmph) - t_given;
df = #(x) 1/velocity(x, distance_km, speed_kmph);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Because the integral is estimated more accurately the solution is slightly different
>> distance
distance = 25.7771
Edit
The updated stopping condition can be implemented as a slight modification to the newton_method function. We shouldn't expect the trapezoid rule error to go to zero so I omit that.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
TOL = 1e-5;
x = x0;
iter = 0;
dx = inf;
while dx > TOL && iter <= MAX_ITER
x_prev = x;
x = x - f(x) / df(x);
dx = abs(x - x_prev);
iter = iter + 1;
end
end
To check our answer we can plot the time vs. distance and make sure our estimate falls on the curve.
...
distance = newton_method(f, df, distance_guess);
load('test.mat');
t = zeros(size(distance_km));
for idx = 1:numel(distance_km)
t(idx) = time_to_destination(distance_km(idx), distance_km, speed_kmph);
end
plot(t, distance_km); hold on;
plot([t(1) t(end)], [distance distance], 'r');
plot([t_given t_given], [distance_km(1) distance_km(end)], 'r');
xlabel('time');
ylabel('distance');
axis tight;
One of the main issues with my code was that n was too low; the error of the trapezoidal sum (my estimate of the integral) was too high for the Newton-Raphson method to converge to a very small number.
Here was my final code for this problem:
function x = distance(T, route)
load(route)
n=10e6;
x = mean(distance_km);
i = 1;
maxiter=100;
tol= 5e-4;
condition = inf;
fx = @(x) time_to_destination(x, route,n);
dfx = @(x) 1./velocity(x, route);
while condition > tol && i <= maxiter
i = i + 1 ;
Guess2 = x - ((fx(x) - T)/(dfx(x)))
condition = abs(Guess2 - x)
x = Guess2;
end
end
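For example, with the route file from the question, the call looks like this (0.3 hours is just the sample time used earlier in the answer):
x = distance(0.3, 'test.mat')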

Pass extra variable parameters to ode15s function (MATLAB)

I'm trying to solve a system of ordinary differential equations in MATLAB.
I have a simple equation:
dy = -k/M *x - c/M *y+ F/M.
This is defined in my ODE function test2.m, dependent on the values X and t. I want to trigger 'F' with a signal generated by my custom function squaresignal.m. Its output is the variable u, spanning from 0 to 1, as it is a smooth Heaviside function - think square wave. The inputs to squaresignal.m are t and f.
u=squaresignal(t,f)
These values are to be used inside my function test2 in order to enable or disable the variable 'F': u==1 enables it, any other value disables it.
My ode function test2.m reads:
function dX = test2(t ,X, u)
x = X (1) ;
y = X (2) ;
M = 10;
k = 50;
c = 10;
F = 300;
if u == 1
F = F;
else
F = 0,
end
dx = y ;
dy = -k/M *x - c/M *y+ F/M ;
dX = [ dx dy ]';
end
And my runscript reads:
clc
clear all
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
for ii = 1:length(u)
options=odeset('maxstep',tsteps,'outputfcn',@odeplot);
[t,X]=ode15s(@(t,X)test2(t,X,u(ii)),[tstart tend],[0 0],u);
end
figure (1);
plot(t,X(:,1))
figure (2);
plot(t,X(:,2));
However, the for-loop does not seem to do its magic. I still only get F=0, instead of F=F, at times when u==1. And I know that u is equal to one at some times, because the output of squaresignal.m is visible to me.
So the real question is this: How do I properly pass my variable u to my function test2.m, and use it there to trigger F? Is it possible that squaresignal.m should be inside the ODE function test2.m instead?
Here's an example where I pass a variable coeff to the differential equation:
function [T,Q] = main()
t_span = [0 10];
q0 = [0.1; 0.2]; % initial state
ode_options = odeset(); % currently no options... You could add some here
coeff = 0.3; % The parameter we wish to pass to the differential eq.
[T,Q] = ode15s(@(t,q)diffeq(t,q,coeff),t_span,q0, ode_options);
end
function dq = diffeq(t,q,coeff)
% Preallocate vector dq
dq = zeros(length(q),1);
% Update dq:
dq(1) = q(2);
dq(2) = -coeff*sin(q(1));
end
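A hypothetical call of that example would be:
[T, Q] = main();
plot(T, Q(:,1));   % first state component vs. time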
EDIT:
Could this be the problem?
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
Here you create a time vector t which has nothing to do with the time vector returned by the ODE solver! This means that at first we have t(2)=0.01, but once you run your ODE solver, t(2) can be anything. So yes, if you want to load an external signal source depending on time, then you need to call your squaresignal.m from within the differential equation and pass the solver's current time t! Your code should look like this (note that I'm now passing f as an additional argument to the diffeq):
function dX = test2(t ,X, f)
x = X (1) ;
y = X (2) ;
M = 10;
k = 50;
c = 10;
F = 300;
u = squaresignal(t,f);
if u == 1
F = F;
else
F = 0,
end
dx = y ;
dy = -k/M *x - c/M *y+ F/M ;
dX = [ dx dy ]';
end
Note however that MATLAB's ODE solvers do not like at all what you're doing here. You are drastically (i.e. non-smoothly) changing the dynamics of your system. What you should do is use one of the following:
a) events if you want to trigger some behaviour (like termination) depending on the integrated variable x or
b) If you want to trigger the behaviour based on the time t, you should segment your integration into different parts during which the differential equation does not vary. You can then resume your integration by using the current state and time as x0 and t0 for the next run of ode15s. Of course this only works if your external signal source u is something simple like a step function or square wave. In the case of the square wave you would only integrate over a timespan during which the wave does not jump, and then exactly at the time of the jump you start another integration with the altered differential equation. A sketch of this segmented approach follows below.
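Here is a rough sketch of approach (b), assuming for illustration that the f = 2 square wave toggles every 0.25 s (the exact switch times and on/off pattern depend on your squaresignal.m):
M = 10; k = 50; c = 10; F_on = 300;
rhs = @(t, X, F) [X(2); -k/M*X(1) - c/M*X(2) + F/M];
t_switch = 0:0.25:10;             % assumed switching times of the square wave
X0 = [0; 0];  T = [];  X = [];
for seg = 1:numel(t_switch)-1
    F = F_on * mod(seg, 2);       % alternating on/off; replace with your squaresignal logic
    [ts, Xs] = ode15s(@(t,X) rhs(t,X,F), [t_switch(seg) t_switch(seg+1)], X0);
    T = [T; ts];  X = [X; Xs];
    X0 = Xs(end, :).';            % resume from the current state
end
plot(T, X(:,1))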