Assume we have three equations:
eq1 = x1 + (x1 - x2) * t - X == 0;
eq2 = z1 + (z1 - z2) * t - Z == 0;
eq3 = ((X-x1)/a)^2 + ((Z-z1)/b)^2 - 1 == 0;
while the six known variables are:
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
So we are looking for the three unknowns: X, Z, and t.
I wrote two methods to solve it. But since I need to run this code for 5.7 million data points, it becomes really slow.
Method one (using "solve"):
tic
S = solve( eq1 , eq2 , eq3 , X , Z , t ,...
'ReturnConditions', true, 'Real', true);
toc
X = double(S.X(1))
Z = double(S.Z(1))
t = double(S.t(1))
results of method one:
X = 316190;
Z = 234060;
t = -2.9280;
Elapsed time is 0.770429 seconds.
Method two (using "fsolve"):
coeffs = [a,b,x1,x2,z1,z2]; % Known parameters
x0 = [ x2 ; z2 ; 1 ].'; % Initial values for iterations
f_d = @(x0) myfunc(x0,coeffs); % f_d considers x0 as variables
options = optimoptions('fsolve','Display','none');
tic
M = fsolve(f_d,x0,options);
toc
results of method two:
X = 316190; % X = M(1)
Z = 234060; % Z = M(2)
t = -2.9280; % t = M(3)
Elapsed time is 0.014 seconds.
Although the second method is faster, it still needs to be improved. Please let me know if you have a better solution. Thanks!
* Extra information: if you are interested in what those three equations are, the first two are the equations of a line in 2D and the third is an ellipse equation. I need to find the intersection of the line with the ellipse. Obviously, we get two points as a result, but let's forget about the second one for simplicity.
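For context, substituting eq1 and eq2 into eq3 eliminates X and Z, so t has a closed-form solution and no iterative solver is needed at all; a minimal vectorized sketch under that assumption (dx and dz are hypothetical helper names, not from the code above):
dx = x1 - x2;                               % line direction, x component
dz = z1 - z2;                               % line direction, z component
t  = 1 ./ sqrt((dx./a).^2 + (dz./b).^2);    % closed form; -t gives the second intersection
X  = x1 + dx .* t;
Z  = z1 + dz .* t;
With the element-wise operators this works unchanged when a, b, x1, z1, x2, z2 are vectors holding all 5.7 million cases; the degenerate case dx = dz = 0 (no line direction) yields Inf and has to be handled separately.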
My suggestion is to use the second approach, which is the one recommended by MATLAB for nonlinear equation systems.
Declare an M-function:
function Y=mysistem(X)
%X(1) = X
%X(2) = t
%X(3) = Z
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
Y(1,1) = x1 + (x1 - x2) * X(2) - X(1);
Y(2,1) = z1 + (z1 - z2) * X(2) - X(3);
Y(3,1) = ((X(1)-x1)/a)^2 + ((X(3)-z1)/b)^2 - 1;
end
Then, to solve, use:
x0 = [ x2 , 1 , z2 ];   % initial guess ordered as [X, t, Z] to match the comments above
M = fsolve(@mysistem,x0,options);
If needed, you can reduce the default precision by changing StepTolerance (default 1e-6).
Also, for a further speedup, you may want to supply the Jacobian matrix for greater efficiency.
For more details, take a look at the official documentation: fsolve — Nonlinear Equations with Analytic Jacobian.
Basically, by giving the solver the Jacobian matrix of the system (and the corresponding options) you can increase the method's efficiency.
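As a rough illustration (a sketch only; mysistemJac is a hypothetical name, and the derivatives below were written out by hand from the three equations above):
function [Y, J] = mysistemJac(X)
% Same system as mysistem above, but also returning the analytic Jacobian
a = 42; b = 12;
x1 = 316190; z1 = 234070;
x2 = 316190; z2 = 234070;
Y = [x1 + (x1 - x2)*X(2) - X(1);
     z1 + (z1 - z2)*X(2) - X(3);
     ((X(1)-x1)/a)^2 + ((X(3)-z1)/b)^2 - 1];
% rows follow Y(1..3), columns follow the unknowns X = [X, t, Z]
J = [-1,               x1 - x2,  0;
      0,               z1 - z2, -1;
      2*(X(1)-x1)/a^2, 0,        2*(X(3)-z1)/b^2];
end
options = optimoptions('fsolve','SpecifyObjectiveGradient',true,'Display','none');
% (on older MATLAB releases use optimset('Jacobian','on') instead)
M = fsolve(@mysistemJac, [x2, 1, z2], options);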
I'm having some issues getting my RK2 algorithm to work for a certain second-order linear differential equation. I have posted my current code (with the provided parameters) below. For some reason, the value of y1 deviates from the true value by a wider margin each iteration. Any input would be greatly appreciated. Thanks!
Code:
f = @(x,y1,y2) [y2; (1+y2)/x];
a = 1;
b = 2;
alpha = 0;
beta = 1;
n = 21;
h = (b-a)/(n-1);
yexact = @(x) 2*log(x)/log(2) - x + 1;
ye = yexact((a:h:b)');
s = (beta - alpha)/(b - a);
y0 = [alpha;s];
[y1, y2] = RungeKuttaTwo2D(f, a, b, h, y0);
error = abs(ye - y1);
function [y1, y2] = RungeKuttaTwo2D(f, a, b, h, y0)
n = floor((b-a)/h);
y1 = zeros(n+1,1); y2 = y1;
y1(1) = y0(1); y2(1) = y0(2);
for i=1:n-1
ti = a+(i-1)*h;
fvalue1 = f(ti,y1(i),y2(i));
k1 = h*fvalue1;
fvalue2 = f(ti+h/2,y1(i)+k1(1)/2,y2(i)+k1(2)/2);
k2 = h*fvalue2;
y1(i+1) = y1(i) + k2(1);
y2(i+1) = y2(i) + k2(2);
end
end
Your exact solution is wrong. It is possible that your differential equation is missing a minus sign.
y2'=(1+y2)/x has as its solution y2(x)=C*x-1 and as y1'=y2 then y1(x)=0.5*C*x^2-x+D.
If the sign in the y2 equation were flipped, y2'=-(1+y2)/x, one would get y2(x)=C/x-1 with integral y1(x)=C*log(x)-x+D, which contains the given exact solution.
0 = y1(1) = -1 + D  ==>  D = 1
1 = y1(2) = C*log(2) - 1  ==>  C = 2/log(2)
Additionally, the arrays in the integration loop have length n+1, so the loop has to run from i = 1 to n. Otherwise the last element remains zero, which gives wrong residuals for the second boundary condition.
Correcting that and extending the computation by one secant step (to adjust the initial slope) finds the correct solution for this discretization, since the ODE is linear. The error relative to the exact solution is bounded by 0.000285, which is reasonable for a second-order method with step size 0.05.
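A minimal sketch of those two fixes (assuming the intended ODE really is y2' = -(1+y2)/x; the f line goes in the script, the loop replaces the one inside RungeKuttaTwo2D):
f = @(x,y1,y2) [y2; -(1+y2)/x];   % sign flipped so the given yexact solves the ODE
for i = 1:n                       % run through the last subinterval as well
ti = a+(i-1)*h;
k1 = h*f(ti, y1(i), y2(i));
k2 = h*f(ti+h/2, y1(i)+k1(1)/2, y2(i)+k1(2)/2);
y1(i+1) = y1(i) + k2(1);
y2(i+1) = y2(i) + k2(2);
end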
I wrote a simple MATLAB script to evaluate forward, backward and central difference approximations of the first and second derivatives of a specific function (y = x^3 - 5x) at two different x values (x = 0.5 and x = 1.5) and for 7 different step sizes h, and to compare the relative errors of the approximations to the analytical derivatives.
However, I need to enter the x and h values manually every time. The question is: how do I create a loop for 7 different h values and 2 different x values and get all the results as a matrix?
clc
clear all
close all
h = 0.00001;
x1 = 0.5;
y = @(x) x.^3 - 5*x;
dy = @(x) 3*x.^2 - 5;
ddy = @(x) 6*x;
d1 = dy(x1);
d2 = ddy(x1);
%Forward Differencing
f1 = (y(x1+h) - y(x1))/h;
f2 = (y(x1+2*h) - 2*y(x1+h) + y(x1))/(h.^2);
%Central Differencing
c1 = (y(x1+h)-y(x1-h))/(2*h);
c2 = (y(x1+h)-2*y(x1)+y(x1-h))/(h.^2);
% Backward Differencing
b1 = (y(x1) - y(x1-h))/h;
b2 = (y(x1)-2*y(x1-h)+y(x1-2*h))/(h.^2);
% Relative Errors
ForwardError1 = (f1 - dy(x1))/dy(x1);
ForwardError2 = (f2 - ddy(x1))/ddy(x1);
CentralError1 = (c1 - dy(x1))/dy(x1);
CentralError2 = (c2 - ddy(x1))/ddy(x1);
BackwardError1 = (b1 - dy(x1))/dy(x1);
BackwardError2 = (b2 - ddy(x1))/ddy(x1);
You don't need a loop. You can use meshgrid to create all combinations of your arguments (x and h in your case) and use these as inputs to your functions.
To get combinations of x = [0.5, 1.5] and h=0.00001:0.00001:0.00007 (I assume since you did not specify h values in the question), you would do:
[x, h] = meshgrid([0.5, 1.5], 0.00001:0.00001:0.00007);
y = @(x) x.^3 - 5*x;
f1 = (y(x+h) - y(x))./h;
Here x,h are matrices of size 7x2, and so is the result f1. Note that / was changed to ./ as h is a matrix and we want per-element operations.
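The remaining differences and the relative errors follow the same element-wise pattern, for example (reusing dy from the question):
dy = @(x) 3*x.^2 - 5;                   % analytical first derivative
c1 = (y(x+h) - y(x-h))./(2*h);          % central difference, 7x2
b1 = (y(x) - y(x-h))./h;                % backward difference, 7x2
ForwardError1 = (f1 - dy(x))./dy(x);    % relative error, also a 7x2 matrix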
Introduction
I am using Matlab to simulate some dynamic systems through numerically solving systems of Second Order Ordinary Differential Equations using ODE45. I found a great tutorial from Mathworks (link for tutorial at end) on how to do this.
In the tutorial the system of equations is explicit in x and y as shown below:
x''=-D(y) * x' * sqrt(x'^2 + y'^2)
y''=-D(y) * y' * sqrt(x'^2 + y'^2) + g(y)
Both equations above have form y'' = f(x, x', y, y')
Question
However, I am coming across systems of equations where the variables cannot be solved for explicitly as shown in the example. For example, one of the systems has the following set of 3 second-order ordinary differential equations:
y double prime equation
y'' - .5*L*(x''*sin(x) + x'^2*cos(x)) + (k/m)*y - g = 0
x double prime equation
.33*L^2*x'' - .5*L*y''*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
A single prime is first derivative
A double prime is second derivative
L, g, m, k, and C are given parameters.
How can Matlab be used to numerically solve a set of second-order ordinary differential equations where the second derivatives cannot be explicitly solved for?
Thanks!
Your second system has the form
a11*x'' + a12*y'' = f1(x,y,x',y')
a21*x'' + a22*y'' = f2(x,y,x',y')
which you can solve as a linear system
[x''; y''] = A \ f
or in this case explicitly using Cramer's rule
x'' = ( a22*f1 - a12*f2 ) / (a11*a22 - a12*a21)
y'' accordingly.
I would strongly recommend keeping intermediate variables in the code, to reduce the chance of typing errors and to avoid computing the same expressions multiple times.
Code could look like this (untested):
function dz = odefunc(t,z)
% L, g, m, k, C are assumed to be defined in the enclosing scope
x = z(1); dx = z(2); y = z(3); dy = z(4);
A = [ -.5*L*sin(x),  1;
       .33*L^2,     -.5*L*sin(x) ];
b = [ .5*L*dx^2*cos(x) - (k/m)*y + g;
      .33*L^2*C*cos(x) - .5*g*L*sin(x) ];
d2 = A\b;                        % d2 = [x''; y'']
dz = [ dx; d2(1); dy; d2(2) ];   % column vector, as ode45 expects
end
Yes your method is correct!
I post the following code below:
%Rotating Pendulum Sym Main
clc
clear all;
%Define parameters
global M K L g C;
M = 1;
K = 25.6;
L = 1;
C = 1;
g = 9.8;
% define initial values for theta, thetad, del, deld
e_0 = 1;
ed_0 = 0;
theta_0 = 0;
thetad_0 = .5;
initialValues = [e_0, ed_0, theta_0, thetad_0];
% Set a timespan
t_initial = 0;
t_final = 36;
dt = .01;
N = (t_final - t_initial)/dt;
timeSpan = linspace(t_initial, t_final, N);
% Run ode45 to get z = [e, ed, theta, thetad]
[t, z] = ode45(@RotSpngHndl, timeSpan, initialValues);
%initialize variables
e = zeros(N,1);
ed = zeros(N,1);
theta = zeros(N,1);
thetad = zeros(N,1);
T = zeros(N,1);
V = zeros(N,1);
E = zeros(N,1);
x = zeros(N,1);
y = zeros(N,1);
for i = 1:N
e(i) = z(i, 1);
ed(i) = z(i, 2);
theta(i) = z(i, 3);
thetad(i) = z(i, 4);
T(i) = .5*M*(ed(i)^2 + (1/3)*L^2*C*sin(theta(i)) + (1/3)*L^2*thetad(i)^2 - L*ed(i)*thetad(i)*sin(theta(i)));
V(i) = -M*g*(e(i) + .5*L*cos(theta(i)));
E(i) = T(i) + V(i);
end
figure(1)
plot(t, T,'r');
hold on;
plot(t, V,'b');
plot(t,E,'y');
title('Energy');
xlabel('time(sec)');
legend('Kinetic Energy', 'Potential Energy', 'Total Energy');
Here is function handle file for ode45:
function dz = RotSpngHndl(~, z)
% Define Global Parameters
global M K L g C
A = [1, -.5*L*sin(z(3));
-.5*L*sin(z(3)), (1/3)*L^2];
b = [.5*L*z(4)^2*cos(z(3)) - (K/M)*z(1) + g;
(1/3)*L^2*C*cos(z(3)) + .5*g*L*sin(z(3))];
X = A\b;
% return column vector [ed; edd; thetad; thetadd]
dz = [z(2);
X(1);
z(4);
X(2)];
I would like to apply a loop over a function. I have the following "mother" code:
v = 1;
fun = @root;
x0 = [0,0]
options = optimset('MaxFunEvals',100000,'MaxIter', 10000 );
x = fsolve(fun,x0, options)
In addition, I have the following function in a separate file:
function D = root(x)
v = 1;
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end
Now, I would like to find the roots when v = sort(rand(1,1000)). In other words, I would like to iterate over the function for each value of v.
You will need to modify root to accept an additional variable (v) and then change the function handle to root into an anonymous function which feeds in the v that you want:
function D = root(x, v)
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end
% Creates a function handle to root using a specific value of v
fun = @(x)root(x, v(k))
Just in case that equation is your actual equation (and not a dummy example): that equation is linear, meaning you can solve it for all v with a simple mldivide:
v = sort(rand(1,1000));
x = [1 1; 1 -1] \ bsxfun(@plus, -v, [2; 1.8])
And, in case those are not your actual equations, you don't need to loop, you can vectorize the whole thing:
function x = solver()
options = optimset('Display' , 'off',...
'MaxFunEvals', 1e5,...
'MaxIter' , 1e4);
v = sort(rand(1, 1000));
x0 = repmat([0 0], numel(v), 1);
x = fsolve(@(x)root(x,v'), x0, options);
end
function D = root(x,v)
D = [x(:,1) + x(:,2) + v - 2
x(:,1) - x(:,2) + v - 1.8];
end
This may or may not be faster than looping, it depends on your actual equations.
It may be slower because fsolve will need to compute a Jacobian of 2000×2000 (4M elements), instead of 2×2, 1000 times (4k elements).
But, it may be faster because the startup cost of fsolve can be large, meaning, the overhead of many calls may in fact outweigh the cost of computing the larger Jacobian.
In any case, providing the Jacobian as a second output will speed everything up rather enormously:
function solver()
options = optimset('Display' , 'off',...
'MaxFunEvals', 1e5,...
'MaxIter' , 1e4,...
'Jacobian' , 'on');
v = sort(rand(1, 1000));
x0 = repmat([1 1], numel(v), 1);
x = fsolve(@(x)root(x,v'), x0, options);
end
function [D, J] = root(x,v)
% Jacobian is constant:
persistent J_out
if isempty(J_out)
one = speye(numel(v));
J_out = [+one +one
+one -one];
end
% Function values at x
D = [x(:,1) + x(:,2) + v - 2
x(:,1) - x(:,2) + v - 1.8];
% Jacobian at x:
J = J_out;
end
vvec = sort(rand(1,2));
x0 = [0,0];
for v = vvec,
fun = @(x) root(v, x);
options = optimset('MaxFunEvals',100000,'MaxIter', 10000 );
x = fsolve(fun, x0, options);
end
with function definition:
function D = root(v, x)
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end
Got one more problem with matrix multiplication in Matlab. I have to plot Taylor polynomials for the given function. This question is similar to my previous one (but this time, the function is f: R^2 -> R^3) and I can't figure out how to make the matrices in order to make it work...
function example
clf;
M = 40;
N = 20;
% domain of f(x)
x1 = linspace(0,2*pi,M).'*ones(1,N);
x2 = ones(M,1)*linspace(0,2*pi,N);
[y1,y2,y3] = F(x1,x2);
mesh(y1,y2,y3,...
'facecolor','w',...
'edgecolor','k');
axis equal;
axis vis3d;
axis manual;
hold on
% point for our Taylor polynom
xx1 = 3;
xx2 = 0.5;
[yy1,yy2,yy3] = F(xx1,xx2);
% plots one discrete point
plot3(yy1,yy2,yy3,'ro');
[y1,y2,y3] = T1(xx1,xx2,x1,x2);
mesh(y1,y2,y3,...
'facecolor','w',...
'edgecolor','g');
% given function
function [y1,y2,y3] = F(x1,x2)
% constants
R=2; r=1;
y1 = (R+r*cos(x2)).*cos(x1);
y2 = (R+r*cos(x2)).*sin(x1);
y3 = r*sin(x2);
function [y1,y2,y3] = T1(xx1,xx2,x1,x2)
dy = [
-(R + r*cos(xx2))*sin(xx1) -r*cos(xx1)*sin(xx2)
(R + r*cos(xx2))*cos(xx1) -r*sin(xx1)*sin(xx2)
0 r*cos(xx2) ];
y = F(xx1, xx2) + dy.*[x1-xx1; x2-xx2];
function [y1,y2,y3] = T2(xx1,xx2,x1,x2)
% ?
I know that my code is full of mistakes (I just need to fix my T1 function). dy represents the Jacobian matrix (the total derivative of f(x) - I hope I got that right...). I am not sure how the Hessian matrix in T2 would look, but I hope I will figure it out; I'm just lost in Matlab...
edit: I tried to improve my formatting - here's my Jacobian matrix:
[-(R + r*cos(xx2))*sin(xx1), -r*cos(xx1)*sin(xx2);
(R + r*cos(xx2))*cos(xx1), -r*sin(xx1)*sin(xx2);
0, r*cos(xx2)];
function [y1,y2,y3]=T1(xx1,xx2,x1,x2)
R=2; r=1;
%derivatives
y1dx1 = -(R + r * cos(xx2)) * sin(xx1);
y1dx2 = -r * cos(xx1) * sin(xx2);
y2dx1 = (R + r * cos(xx2)) * cos(xx1);
y2dx2 = -r * sin(xx1) * sin(xx2);
y3dx1 = 0;
y3dx2 = r * cos(xx2);
%T1
[f1, f2, f3] = F(xx1, xx2);
y1 = f1 + y1dx1*(x1-xx1) + y1dx2*(x2-xx2);
y2 = f2 + y2dx1*(x1-xx1) + y2dx2*(x2-xx2);
y3 = f3 + y3dx1*(x1-xx1) + y3dx2*(x2-xx2);
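As for T2, a possible sketch of the quadratic correction for the first component only (the second derivatives below were worked out by hand from F and should be double-checked; y2 and y3 get the analogous terms from their own Hessians):
function [y1,y2,y3]=T2(xx1,xx2,x1,x2)
R=2; r=1;
d1 = x1 - xx1;
d2 = x2 - xx2;
%start from the first-order polynomial
[y1, y2, y3] = T1(xx1,xx2,x1,x2);
%second derivatives of y1 = (R + r*cos(x2))*cos(x1) at (xx1,xx2)
y1dx1dx1 = -(R + r*cos(xx2))*cos(xx1);
y1dx1dx2 = r*sin(xx1)*sin(xx2);
y1dx2dx2 = -r*cos(xx1)*cos(xx2);
y1 = y1 + 0.5*(y1dx1dx1*d1.^2 + 2*y1dx1dx2*d1.*d2 + y1dx2dx2*d2.^2);
%repeat the same pattern with the Hessians of y2 and y3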