Pass extra variable parameters to ode15s function (MATLAB)

I'm trying to solve a system of ordinary differential equations in MATLAB.
I have a simple equation:
dy = -k/M*x - c/M*y + F/M
This is defined in my ODE function test2.m, dependent on the values X and t. I want to trigger 'F' with a signal generated by my custom function squaresignal.m. Its output is the variable u, spanning from 0 to 1, as it is a smoothed Heaviside function (think square wave). The inputs to squaresignal.m are t and f.
u=squaresignal(t,f)
These values are to be used inside my function test2 in order to enable variable 'F' when u == 1 and disable it for all other values.
My ode function test2.m reads:
function dX = test2(t, X, u)
x = X(1);
y = X(2);
M = 10;
k = 50;
c = 10;
F = 300;
if u == 1
    F = F;
else
    F = 0;
end
dx = y;
dy = -k/M*x - c/M*y + F/M;
dX = [dx dy]';
end
And my runscript reads:
clc
clear all
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
for ii = 1:length(u)
    options = odeset('maxstep', tsteps, 'outputfcn', @odeplot);
    [t,X] = ode15s(@(t,X) test2(t,X,u(ii)), [tstart tend], [0 0], u);
end
figure (1);
plot(t,X(:,1))
figure (2);
plot(t,X(:,2));
However, the for-loop does not seem to do its magic. I still only get F = 0, instead of F = F, at times when u == 1. And I know that u is equal to one at some times, because the output of squaresignal.m is visible to me.
So the real question is this: how do I properly pass my variable u to my function test2.m and use it there to trigger F? Is it possible that squaresignal.m should be inside the ODE function test2.m instead?

Here's an example where I pass a variable coeff to the differential equation:
function [T,Q] = main()
t_span = [0 10];
q0 = [0.1; 0.2]; % initial state
ode_options = odeset(); % currently no options... You could add some here
coeff = 0.3; % The parameter we wish to pass to the differential eq.
[T,Q] = ode15s(@(t,q) diffeq(t,q,coeff), t_span, q0, ode_options);
end
function dq = diffeq(t,q,coeff)
% Preallocate vector dq
dq = zeros(length(q),1);
% Update dq:
dq(1) = q(2);
dq(2) = -coeff*sin(q(1));
end
EDIT:
Could this be the problem?
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
Here you create a time vector t which has nothing to do with the time vector returned by the ODE solver! This means that at first we have t(2) = 0.01, but once you run your ODE solver, t(2) can be anything. So yes, if you want to load an external signal source depending on time, then you need to call your squaresignal.m from within the differential equation and pass the solver's current time t! Your code should look like this (note that I'm now passing f as an additional argument to the diffeq):
function dX = test2(t, X, f)
x = X(1);
y = X(2);
M = 10;
k = 50;
c = 10;
F = 300;
u = squaresignal(t, f);
if u == 1
    F = F;
else
    F = 0;
end
dx = y;
dy = -k/M*x - c/M*y + F/M;
dX = [dx dy]';
end
Note however that MATLAB's ODE solvers do not like at all what you're doing here. You are drastically (i.e. non-smoothly) changing the dynamics of your system. What you should do is use one of the following:
a) events, if you want to trigger some behaviour (like termination) depending on the integrated variable x, or
b) if you want to trigger the behaviour based on the time t, you should segment your integration into different parts where the differential equation does not vary during one segment. You can then resume your integration by using the current state and time as x0 and t0 for the next run of ode15s. Of course this only works if your external signal source u is something simple like a step function or square wave. In the case of the square wave you would only integrate for a timespan during which the wave does not jump. Then, exactly at the time of the jump, you start another integration with the altered differential equation. A sketch of this segmented approach is shown below.
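For example, a minimal sketch of approach b) could look like this (assuming the square wave has frequency f = 2 and alternates 1, 0, 1, ... between jumps; the jump times and the starting wave value are illustrative assumptions, and test2 is the function from the question):
f = 2;
tstart = 0;
tend = 10;
tjump = tstart : 1/(2*f) : tend;   % times at which the square wave switches value
X0 = [0 0];
T = []; X = [];
for k = 1:numel(tjump)-1
    u = mod(k, 2);                 % assumed wave value on this segment (1, 0, 1, ...)
    [tk, Xk] = ode15s(@(t,X) test2(t,X,u), [tjump(k) tjump(k+1)], X0);
    T = [T; tk]; X = [X; Xk];      % collect the pieces of the solution
    X0 = Xk(end, :);               % resume the next segment from the final state
end
plot(T, X(:,1))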

Related

Trouble using Runge-Kutta 2nd-order shooting method in Matlab

I'm having some issues getting my RK2 algorithm to work for a certain second-order linear differential equation. I have posted my current code (with the provided parameters) below. For some reason, the value of y1 deviates from the true value by a wider margin each iteration. Any input would be greatly appreciated. Thanks!
Code:
f = @(x,y1,y2) [y2; (1+y2)/x];
a = 1;
b = 2;
alpha = 0;
beta = 1;
n = 21;
h = (b-a)/(n-1);
yexact = #(x) 2*log(x)/log(2) - x +1;
ye = yexact((a:h:b)');
s = (beta - alpha)/(b - a);
y0 = [alpha;s];
[y1, y2] = RungeKuttaTwo2D(f, a, b, h, y0);
error = abs(ye - y1);
function [y1, y2] = RungeKuttaTwo2D(f, a, b, h, y0)
n = floor((b-a)/h);
y1 = zeros(n+1,1); y2 = y1;
y1(1) = y0(1); y2(1) = y0(2);
for i=1:n-1
ti = a+(i-1)*h;
fvalue1 = f(ti,y1(i),y2(i));
k1 = h*fvalue1;
fvalue2 = f(ti+h/2,y1(i)+k1(1)/2,y2(i)+k1(2)/2);
k2 = h*fvalue2;
y1(i+1) = y1(i) + k2(1);
y2(i+1) = y2(i) + k2(2);
end
end
Your exact solution is wrong. It is possible that your differential equation is missing a minus sign.
y2'=(1+y2)/x has as its solution y2(x)=C*x-1 and as y1'=y2 then y1(x)=0.5*C*x^2-x+D.
If the sign in the y2 equation were flipped, y2'=-(1+y2)/x, one would get y2(x)=C/x-1 with integral y1(x)=C*log(x)-x+D, which contains the given exact solution.
0=y1(1) = -1+D ==> D=1
1 = y1(2) = C*log(2)-1 ==> C = 2/log(2)
Additionally, the arrays in the integration loop have length n+1, so the loop has to run from i=1 to n. Otherwise the last element remains zero, which gives wrong residuals for the second boundary condition.
Correcting that and extending the computation by one secant step finds the correct solution for the discretization, since the ODE is linear. The error relative to the exact solution is bounded by 0.000285, which is reasonable for a second-order method with step size 0.05. A sketch of the corrected setup with one secant step is shown below.
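A minimal sketch of that shooting setup, assuming RungeKuttaTwo2D has been corrected so its loop runs for i = 1:n, and with illustrative names for the slope guesses and residuals:
f = @(x,y1,y2) [y2; -(1+y2)/x];      % flipped sign in the second equation
a = 1; b = 2; alpha = 0; beta = 1;
n = 21; h = (b-a)/(n-1);
yexact = @(x) 2*log(x)/log(2) - x + 1;

s0 = 0;                              % first guess for the initial slope y1'(a)
s1 = (beta - alpha)/(b - a);         % second guess
[y1, ~] = RungeKuttaTwo2D(f, a, b, h, [alpha; s0]); r0 = y1(end) - beta;
[y1, ~] = RungeKuttaTwo2D(f, a, b, h, [alpha; s1]); r1 = y1(end) - beta;

% One secant step is exact here because the residual is linear in the slope s
s2 = s1 - r1*(s1 - s0)/(r1 - r0);
[y1, y2] = RungeKuttaTwo2D(f, a, b, h, [alpha; s2]);
max(abs(y1 - yexact((a:h:b)')))      % about 2.85e-4, as stated above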

How to solve for the upper limit of an integral using Newton's method?

I want to write a program that uses Newton's method to estimate the upper limit x of the integral T(x) = integral from 0 to x of 1/velocity(s) ds, where x is the total distance travelled.
I have a function that calculates the time it takes to arrive at a certain distance, using the trapezoid method for numerical integration (without using trapz).
function T = time_to_destination(x, route, n)
h=(x-0)/n;
dx = 0:h:x;
y = (1./(velocity(dx,route)));
Xk = dx(2:end)-dx(1:end-1);
Yk = y(2:end)+y(1:end-1);
T = 0.5*sum(Xk.*Yk);
end
It fetches its values for velocity through ppval of a cubic spline interpolation between a set of data points; extrapolated values should not be obtainable.
function [v] = velocity(x, route)
load(route);
if all(x >= distance_km(1)) && all(x <= distance_km(end))
estimation = spline(distance_km, speed_kmph);
v = ppval(estimation, x);
else
error('Bad input, please choose a new value')
end
end
A plot of the velocity spline (evaluated at dx = 1:0.1:65) is available if that is of interest.
Now I want to write a function that can solve for distance travelled after a certain given time, using Newton's method without fzero / fsolve. But I have no idea how to solve for the upper bound of an integral.
According to the fundamental theorem of calculus, I suppose the derivative of the integral is the function inside the integral, which is what I've tried to recreate as Time_to_destination / (1/velocity).
I added the constant I want to solve for to time_to_destination, so it's
(Time_to_destination - (input time)) / (1/velocity)
Not sure if I'm doing that right.
EDIT: I rewrote my code; it works better now, but my stop condition for Newton-Raphson doesn't seem to converge to zero. I also tried to implement the error from the trapezoid integration (ET), but I'm not sure if I should bother with that yet. The route file can be found at the bottom.
Stop condition and error calculation of Newton's method: |dGuess2 - dGuess1| + |ET| < 1e-4.
Error estimation of the trapezoid rule: ET = (T_(n/2) - T_n)/3.
function x = distance(T, route)
n = 180;
route = 'test.mat';
dGuess1 = 50;
dDistance = T;
i = 1;
condition = inf;
while condition >= 1e-4 && 300 >= i
i = i + 1 ;
dGuess2 = dGuess1 - (((time_to_destination(dGuess1, route,n))-dDistance)/(1/(velocity(dGuess1, route))))
if i >= 2
ET =(time_to_destination(dGuess1, route, n/2) - time_to_destination(dGuess1, route, n))/3;
condition = abs(dGuess2 - dGuess1)+ abs(ET);
end
dGuess1 = dGuess2;
end
x = dGuess2
Route file: https://drive.google.com/open?id=18GBhlkh5ZND1Ejh0Muyt1aMyK4E2XL3C
Observe that the Newton-Raphson method determines the roots of the function. I.e. you need to have a function f(x) such that f(x)=0 at the desired solution.
In this case you can define f as
f(x) = Time(x) - t
where t is the desired time. Then by the second fundamental theorem of calculus
f'(x) = 1/Velocity(x)
With these functions defined the implementation becomes quite straightforward!
First, we define a simple Newton-Raphson function which takes anonymous functions as arguments (f and f') as well as an initial guess x0.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
EPSILON = 1e-5;
x = x0;
fx = f(x);
iter = 0;
while abs(fx) > EPSILON && iter <= MAX_ITER
x = x - fx / df(x);
fx = f(x);
iter = iter + 1;
end
end
Then we can invoke our function as follows
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
n = 180;
route = 'test.mat';
f = @(x) time_to_destination(x, route, n) - t_given;
df = @(x) 1/velocity(x, route);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Result
>> distance
distance = 25.5877
Also, I rewrote your time_to_destination and velocity functions as follows. This version of time_to_destination uses all the available data to make a more accurate estimate of the integral. Using these functions the method seems to converge faster.
function t = time_to_destination(x, d, v)
% x is scalar value of destination distance
% d and v are arrays containing measured distance and velocity
% Assumes d is strictly increasing and d(1) <= x <= d(end)
idx = d < x;
if ~any(idx)
t = 0;
return;
end
v1 = interp1(d, v, x);
t = trapz([d(idx); x], 1./[v(idx); v1]);
end
function v = velocity(x, d, v)
v = interp1(d, v, x);
end
Using these new functions requires that the definitions of the anonymous functions are changed slightly.
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
load('test.mat');
f = @(x) time_to_destination(x, distance_km, speed_kmph) - t_given;
df = @(x) 1/velocity(x, distance_km, speed_kmph);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Because the integral is estimated more accurately the solution is slightly different
>> distance
distance = 25.7771
Edit
The updated stopping condition can be implemented as a slight modification to the newton_method function. We shouldn't expect the trapezoid rule error to go to zero so I omit that.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
TOL = 1e-5;
x = x0;
iter = 0;
dx = inf;
while dx > TOL && iter <= MAX_ITER
x_prev = x;
x = x - f(x) / df(x);
dx = abs(x - x_prev);
iter = iter + 1;
end
end
To check our answer we can plot the time vs. distance and make sure our estimate falls on the curve.
...
distance = newton_method(f, df, distance_guess);
load('test.mat');
t = zeros(size(distance_km));
for idx = 1:numel(distance_km)
t(idx) = time_to_destination(distance_km(idx), distance_km, speed_kmph);
end
plot(t, distance_km); hold on;
plot([t(1) t(end)], [distance distance], 'r');
plot([t_given t_given], [distance_km(1) distance_km(end)], 'r');
xlabel('time');
ylabel('distance');
axis tight;
One of the main issues with my code was that n was too low; the error of the trapezoidal sum (my estimate of the integral) was too high for the Newton-Raphson method to converge to a very small number.
Here was my final code for this problem:
function x = distance(T, route)
load(route)
n=10e6;
x = mean(distance_km);
i = 1;
maxiter=100;
tol= 5e-4;
condition = inf;
fx = @(x) time_to_destination(x, route, n);
dfx = @(x) 1./velocity(x, route);
while condition > tol && i <= maxiter
i = i + 1 ;
Guess2 = x - ((fx(x) - T)/(dfx(x)))
condition = abs(Guess2 - x)
x = Guess2;
end
end

Solving coupled nonlinear differential equations

I have a differential equation that is as follows:
%d/dt [x;y] = [m11 m12; m21 m22][x;y]
mat = @(t) sin(cos(w*t));
m11 = mat(t) + 5 ;
m12 = 5;
m21 = -m12 ;
m22 = -m11 ;
So my matrix is explicitly dependent on t. For some reason, I am having a super difficult time solving this with ode45. My thoughts were to do as follows (I want to solve for x, y at a time T that was defined):
t = linspace(0,T,100) ; % Arbitrary 100
x0 = [1 0]; % Init cond
[tf,xf] = ode45(@ddt, t, x0)
function xprime = ddt(t,x)
ddt = [m11*x(1)+m12*x(2) ; m12*x(1)+m12*x(2) ]
end
The first error I get is that
Undefined function or variable 'M11'.
Is there a cleaner way I could be doing this ?
I'm assuming you're running this within a script, which means that your function ddt is a local function instead of a nested function. That means it doesn't have access to your matrix variables m11, etc. Another issue is that you will want to be evaluating your matrix variables at the specific value of t within ddt, which your current code doesn't do.
Here's an alternative way to set things up that should work for you:
% Define constants:
w = 1;
T = 10;
t = linspace(0, T, 100);
x0 = [1 0];
% Define anonymous functions:
fcn = @(t) sin(cos(w*t));
M = {@(t) fcn(t)+5, 5; -5, @(t) -fcn(t)-5};
ddt = @(t, x) [M{1, 1}(t)*x(1) + M{1, 2}*x(2); M{2, 1}*x(1) + M{2, 2}(t)*x(2)];
% Solve equations:
[tf, xf] = ode45(ddt, t, x0);
One glaring error is that the return value of function ddt is xprime, not ddt. Then, as mentioned in the previous answer, m11 at the time of definition should give an error as t is not defined. But even if there were a t value available at definition, it is not the same t that the procedure ddt is called with.
mat = @(t) sin(cos(w*t));
function xprime = ddt(t,x)
    a = mat(t) + 5;
    b = 5;
    xprime = [a, b; -b, -a]*x;
end
This should also work as a nested (inner) function, as in the sketch below.
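For completeness, a minimal self-contained sketch of that nested-function variant, assuming w = 1, T = 10 and the initial state [1 0] from the question (the wrapper name solve_coupled is illustrative):
function [tf, xf] = solve_coupled()
    w = 1;
    T = 10;
    t = linspace(0, T, 100);
    x0 = [1 0];
    mat = @(t) sin(cos(w*t));
    [tf, xf] = ode45(@ddt, t, x0);

    function xprime = ddt(t, x)
        a = mat(t) + 5;            % m11 evaluated at the solver's current time
        b = 5;                     % m12
        xprime = [a, b; -b, -a]*x; % m21 = -m12, m22 = -m11
    end
end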

Trying to solve Simultaneous equations in matlab, cannot work out how to format the functions

I was recently given a piece of MATLAB code by a lecturer as a way to solve simultaneous equations using the Newton-Raphson method with a Jacobian matrix (I've also left in his comments). However, although he's provided me with the basic code, I cannot seem to get it working no matter how hard I try. I've spent many hours trying to introduce the 'func' function but to no avail, frequently getting the message that there aren't enough inputs. Any help would be greatly appreciated, especially with how to write the 'func' function.
function root = newtonRaphson2(func,x,tol)
% Newton-Raphson method of finding a root of simultaneous
% equations fi(x1,x2,...,xn) = 0, i = 1,2,...,n.
% USAGE: root = newtonRaphson2(func,x,tol)
% INPUT:
% func = handle of function that returns[f1,f2,...,fn].
% x = starting solution vector [x1,x2,...,xn].
% tol = error tolerance (default is 1.0e4*eps).
% OUTPUT:
% root = solution vector.
if size(x,1) == 1; x = x'; end % x must be column vector
for i = 1:30
[jac,f0] = jacobian(func,x);
if sqrt(dot(f0,f0)/length(x)) < tol
root = x; return
end
dx = jac\(-f0);
x = x + dx;
if sqrt(dot(dx,dx)/length(x)) < tol
root = x; return
end
end
error('Too many iterations')
function [jac,f0] = jacobian(func,x)
% Returns the Jacobian matrix and f(x).
h = 1.0e-4;
n = length(x);
jac = zeros(n);
f0 = feval(func,x);
for i =1:n
temp = x(i);
x(i) = temp + h;
f1 = feval(func,x);
x(i) = temp;
jac(:,i) = (f1 - f0)/h;
end
The simultaneous equations to be solved are:
sin(x)+y^2+ln(z)-7=0
3x+2^y-z^3+1=0
x+y+z=0
with the starting point (1,1,1).
However, these are arbitrary and can be replaced with anything, I mainly just need to know the general format.
Many thanks, I know this may be a very simple task but I've only recently started teaching myself Matlab.
You need to create a new file called myfunc.m (or whatever name you like) which takes a single input parameter - a column vector x - and returns a single output vector - a column vector y such that y = f(x).
For example,
function y = myfunc(x)
y = zeros(3, 1);
y(1) = sin(x(1)) + x(2)^2 + log(x(3)) - 7;
y(2) = 3*x(1) + 2^x(2) - x(3)^3 + 1;
y(3) = x(1) + x(2) + x(3);
end
You can then refer to this function as @myfunc, as in
>> newtonRaphson2(@myfunc, [1;1;1], 1e-6);
The reason for the special notation is that MATLAB allows you to call a function with no parameters by omitting the parentheses () that follow it. So, for example, MATLAB interprets myfunc as you calling the function with no arguments (so it tries to replace it with its result), whereas @myfunc refers to the function itself, rather than its result.
Alternatively you can write a function directly using the @ notation, as in
>> newtonRaphson2(@(x) exp(x) - 3*x, 2, 1e-2)
ans =
1.5315
>> newtonRaphson2(@(x) exp(x) - 3*x, 1, 1e-2)
ans =
0.6190
which are the two roots of the equation exp(x) - 3 * x = 0.
Edit - as an aside, your professor has terrible coding style (if the code in your question is a direct copy-paste of what he gave you, and you haven't mangled it along the way). It would be better to write the code like this, with indentation making it clear what the structure of the code is.
function root = newtonRaphson2(func, x, tol)
% Newton-Raphson method of finding a root of simultaneous
% equations fi(x1,x2,...,xn) = 0, i = 1,2,...,n.
%
% USAGE: root = newtonRaphson2(func,x,tol)
%
% INPUT:
% func = handle of function that returns [f1,f2,...,fn].
% x = starting solution vector [x1,x2,...,xn].
% tol = error tolerance (default is 1.0e4*eps).
%
% OUTPUT:
% root = solution vector.
    if size(x, 1) == 1  % x must be column vector
        x = x';
    end
    for i = 1:30
        [jac, f0] = jacobian(func, x);
        if sqrt(dot(f0, f0) / length(x)) < tol
            root = x; return
        end
        dx = jac \ (-f0);
        x = x + dx;
        if sqrt(dot(dx, dx) / length(x)) < tol
            root = x; return
        end
    end
    error('Too many iterations')
end

function [jac, f0] = jacobian(func, x)
% Returns the Jacobian matrix and f(x).
    h = 1.0e-4;
    n = length(x);
    jac = zeros(n);
    f0 = feval(func, x);
    for i = 1:n
        temp = x(i);
        x(i) = temp + h;
        f1 = feval(func, x);
        x(i) = temp;
        jac(:, i) = (f1 - f0) / h;
    end
end

Matlab solving ODE applied to State Space System, inputs time dependent

I've got a state-space system with "forced" inputs at the bounds. My SS equation is: zp = A*z + B (A is a square matrix, and B a column vector).
If B is a step (constant over the time of the experiment), there is no problem, because I can use
tevent = 2;
tmax= 5*tevent;
n =100;
dT = n/tmax;
t = linspace(0,tmax,n);
u0 = 1 * ones(size(z'));
B = zeros(nz,n);
B(1,1)= utop(1)';
A = eye(nz,nz);
[tt,u]=ode23('SS',t,u0);
and SS is:
function zp = SS(t,z)
global A B
zp = A*z + B;
end
My problem is when I apply a slope, so B will be time-dependent.
utop_init= 20;
utop_final = 50;
utop(1)=utop_init;
utop(tevent * dT)=utop_final;
for k = 2: tevent*dT -1
utop(k) = utop(k-1) +(( utop(tevent * dT) - utop(1))/(tevent * dT));
end
for k = (tevent * dT) +1 :(tmax*dT)
utop(k) = utop(k-1);
end
global A B
B = zeros(nz,1);
B(1,1:n) = utop(:)';
A = eye(nz,nz);
I wrote a new function to try to solve the problem, but I can't adjust the "time step", and I don't get a u of size 22x100 (which is the objective).
for k = 2 : n
u=solveSS(t,k,u0);
end
solveSS has the code:
function [ u ] = solveSS( t,k,u0)
tspan = [t(k-1) t(k)];
[t,u] = ode15s(@SS, tspan, u0);
function zp = SS(t,z)
global A B
zp = A*z + B(:,k-1);
end
end
I hope that you can help!
You should define a function B that varies continuously with t and pass it as a function handle. This way you allow the ODE solver to adjust its time steps efficiently (your use of ode15s, a stiff ODE solver, suggests that variable time stepping is even more crucial).
The form of your code will be something like this:
function [ u ] = solveSS(t, k, u0)
    tspan = [t(k-1) t(k)];
    [t, u] = ode15s(@(t, z) SS(t, z, @B), tspan, u0);

    function y = B(x)
        %% insert B calculation
    end

    function zp = SS(t, z, B)
        global A
        zp = A*z + B(t);
    end
end
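To illustrate the missing B calculation, a possible ramp input matching the question could look like the sketch below; nz = 22, utop_init, utop_final and tevent are taken from the question, while the linear-ramp form is an assumption:
function y = B(t)
    % Hypothetical ramp input: channel 1 rises linearly from utop_init to
    % utop_final over [0, tevent] and then holds the final value.
    nz = 22;
    utop_init = 20;
    utop_final = 50;
    tevent = 2;
    y = zeros(nz, 1);
    if t < tevent
        y(1) = utop_init + (utop_final - utop_init)*t/tevent;
    else
        y(1) = utop_final;
    end
end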