Solving ODEs in MATLAB using the Runge-Kutta Method - matlab

I am required to solve these particular ODEs using numerical methods in MATLAB. The ODEs essentially model the fall of a body of mass m, connected to a piece of elastic with spring constant k. The solutions to these ODEs are to represent the body's position and velocity at discrete points in time.
The parameters for the ODEs are,
H = 74
D = 31
c = 0.9
m = 80
L = 25
k = 90
g = 9.8
C = c/m
K = k/m
T = 60
n = 10000
I've implemented the following two methods, Euler and fourth-order Runge-Kutta, to approximate the solutions over the interval [0, 60].
Here is my Runge-Kutta function,
function [t, y, v, h] = rk4_approx(T, n, g, C, K, L)
%% calculates interval width h for each iteration
h = (T / n);
%% creates time array t
t = 0:h:T;
%% initialises arrays y and v to hold solutions
y = zeros(1,n+1);
v = zeros(1,n+1);
%% functions
z = @(v) v;
q = @(v, y) (g - C*abs(v)*v - max(0, K*(y - L)));
%% initial values
v(1) = 0;
y(1) = 0;
%% performs iterations
for j = 1:n
%% jumper's position at each time-step
r1 = h*z(v(j));
r2 = h*z(v(j) + 0.5*h);
r3 = h*z(v(j) + 0.5*h);
r4 = h*z(v(j) + h);
y(j+1) = y(j) + (1/6)*(r1 + 2*r2 + 2*r3 + r4); %position solution
%% jumper's velocity at each time-step
k1 = h*q(v(j), y(j));
k2 = h*q(v(j) + 0.5*h, y(j) + 0.5*k1);
k3 = h*q(v(j) + 0.5*h, y(j) + 0.5*k2);
k4 = h*q(v(j) + h, y(j) + k3);
v(j+1) = v(j) + (1/6)*(k1 + 2*k2 + 2*k3 + k4); %velocity solution
end
end
Here is my Euler function,
function [t, y, v, h] = euler_approx(T, n, g, C, K, L)
% calculates interval width h
h = T / n;
% creates time array t
t = 0:h:T;
% initialise solution arrays y and v
y = zeros(1,n+1);
v = zeros(1,n+1);
% perform iterations
for j = 1:n
y(j+1) = y(j) + h*v(j);
v(j+1) = v(j) + h*(g - C*abs(v(j))*v(j) - max(0, K*(y(j) - L)));
end
end
However, after varying the parameter 'n' (where n is the number of steps in the iteration) it appears the Euler solution for the position of the body converges to the maximum value of approximately y = 50 faster than the Runge-Kutta solution does. Since this ODE does not have a closed-form solution I have nothing to compare my answer to. I suspect the answer to be y = 50 though.
Therefore, I'm doubting my answer.
Is my code for the Runge-Kutta solution incorrect? Should it not converge faster than the Euler solution?
Sorry for my poor formatting.

The Runge-Kutta integration is incorrect.
It is vital to appreciate the difference between independent and dependent (also called state, among a host of other names) variables. Time is the independent variable in this problem. Given a time, you can provide a height and a velocity; the reverse is not uniquely true. When you read a Runge-Kutta formula, such as the one provided by Wikipedia, t is the independent variable and y is the vector of dependent variables. Also, when performing time integration of systems of equations (here we have a system of two equations), it is very important to keep track of which right-hand side belongs to which equation if you are going to perform the march element-wise, which I will do for simplicity.
All that said, the problem with the current RK integrator is two-fold
v is being stepped as if it were t; this is incorrect. Both v and y are stepped similarly.
y should be stepped with the r variables since the r variables come from y's right-hand side equation z. Similarly, v is stepped with the k variables.
The updated core of the integrator is thus:
r1 = h*z(v(j));
k1 = h*q(v(j), y(j));
r2 = h*z(v(j) + 0.5*k1);
k2 = h*q(v(j) + 0.5*k1, y(j) + 0.5*r1);
r3 = h*z(v(j) + 0.5*k2);
k3 = h*q(v(j) + 0.5*k2, y(j) + 0.5*r2);
r4 = h*z(v(j) + k3);
k4 = h*q(v(j) + k3, y(j) + r3);
y(j+1) = y(j) + (1/6)*(r1 + 2*r2 + 2*r3 + r4); %position solution
v(j+1) = v(j) + (1/6)*(k1 + 2*k2 + 2*k3 + k4); %velocity solution
Notice how both v and y are updated in a similar fashion and, therefore, are required to be updated in lock-step with one another. This form of the integrator will give far better performance than Euler.
Finally, if you are ever in doubt about a solution you cannot verify analytically, remember that you have the MATLAB ODE suite at your disposal, and a quick call to the extensively vetted and very robust ode45 can relieve a lot of concerns. I actually used this call
[t45,w45] = ode45(@(t,w) [z(w(2));q(w(2),w(1))],linspace(0,T,200).',[0;0]);
to check my work.
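For completeness, a side-by-side check along those lines might look like the following sketch (assuming the corrected rk4_approx above and the parameter values from the question):
% Parameters from the question
T = 60; n = 10000; g = 9.8; C = 0.9/80; K = 90/80; L = 25;
% Right-hand sides y' = z(v), v' = q(v, y), as in the question
z = @(v) v;
q = @(v, y) g - C*abs(v)*v - max(0, K*(y - L));
% Corrected fourth-order Runge-Kutta solution
[t, y, v, h] = rk4_approx(T, n, g, C, K, L);
% Reference solution from ode45 with state w = [y; v]
[t45, w45] = ode45(@(t,w) [z(w(2)); q(w(2), w(1))], linspace(0, T, 200).', [0; 0]);
% Overlay the two position solutions
plot(t, y, 'b-', t45, w45(:,1), 'ro')
legend('RK4', 'ode45'), xlabel('t'), ylabel('y(t)')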

Related

Gauss-Legendre Matlab: How to use Newton method to approximate k_1 and k_2?

Recently as a project I have been working on a program in MATLAB designed to implement the Gauss-Legendre order 4 method of solving an IVP. Numerical analysis, and coding in particular, is somewhat of a weakness of mine, and thus it has been rather tough going. I used the explicit Euler method to initially seed the method, but I've been stuck for a very long time on how to use Newton's method to get closer values of k1 and k2.
Thus far, my code looks as follows:
%Gauss-Butcher Order 4
function[y] = GBOF(f,fprime,y_0,maxit,ertol)
A = [1/4,1/4-sqrt(3)/6;1/4+sqrt(3)/6,1/4];
h = [1,0,0,0,0,0,0,0,0,0,0,0,0];
t = [0,0,0,0,0,0,0,0,0,0,0,0,0];
for n = 1:12
h(n+1) = 1/(2^n);
t(n+1) = t(n)+h(n);
end
y = zeros(size(t));
y(1) = y_0;
niter = 1;
%Declaration of Variables
for i = 1:12
k = f(y(i));
y1approx = y(i) + ((1/2-sqrt(3))*h(i)*k);
y2approx = y(i) + ((1/2+sqrt(3))*h(i)*k);
k1 = f(y1approx);
k2 = f(y2approx);
%Initial guess for newton seeding
errorFunc = @(k1,k2) [k1-f(y(i) +A(1,1)*k1+A(1,2)*k2*h(i)); k2-f(y(i)+A(2,1)*k1+A(2,2)*k2*h(i))];
error = errorFunc(k1,k2);
%Function for error and creation of error variable
while norm(error) > ertol && niter < maxit
niter = niter + 1;
** k1 = k1-f(k1)/fprime(k1);
k2 = k2-f(k2)/fprime(k2);
** error = errorFunc(k1,k2);
%Newton-Raphson for estimating k1 and k2
end
y(i+1) = y(i) +(h(i)*(k1+k2)/2);
%Creation of next
end
disp(t);
The part of the code I believe is causing this to fail is highlighted. When I enter a basic IVP (i.e. y' = y, y(0) = 1), I get the output
Any input on how I could go about fixing this would be much appreciated.
Thank you.
I have tried replacing the k1s and k2s in the problem with the values used in the formula extrapolated from the Butcher tableau, but nothing changed. I can't think of any other ways to tackle this issue.
The implicit system you have to solve is
k1 = f(y + h*(a11*k1 + a12*k2) )
k2 = f(y + h*(a21*k1 + a22*k2) )
This is essentially what your residual function errorFunc encodes; just note that h(i) should multiply both stage terms, i.e. f(y(i) + h(i)*(A(1,1)*k1 + A(1,2)*k2)), and likewise for the second component.
The naive way is just to iterate this system, like any other fixed-point iteration.
This system has a linearization with respect to h at the base point y
k1 = f(y) + h*f'(y)*(a11*k1 + a12*k2) + O(h^2)
k2 = f(y) + h*f'(y)*(a21*k1 + a22*k2) + O(h^2)
Seen as simple iteration, the contraction factor is O(h), so if h is small enough, the factor is smaller than 1 and thus convergence is sure, increasing the order of h in the residual by 1 in each step. So with 6 iterations the error in the implicit system is O(h^6), which is one order smaller than the local truncation error.
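As a rough sketch, replacing the Newton loop in the question with this naive fixed-point iteration could look like the following (keeping the question's names f, A, h(i), y(i), ertol, maxit; the convergence test here is just the change between sweeps):
k1 = f(y(i));  k2 = k1;                 % seed both stages with the explicit slope
for it = 1:maxit
    k1new = f(y(i) + h(i)*(A(1,1)*k1 + A(1,2)*k2));
    k2new = f(y(i) + h(i)*(A(2,1)*k1 + A(2,2)*k2));
    done  = max(abs(k1new - k1), abs(k2new - k2)) < ertol;
    k1 = k1new;  k2 = k2new;
    if done, break, end
end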
One can reduce the number of iterations if k1,k2 start with some higher-order estimates, not just with k1=k2=f(y).
One can reduce the right side residual by removing the terms that are linear in h (on both sides of course).
k1 - h*f'(y)*(a11*k1 + a12*k2) = f(y + h*(a11*k1 + a12*k2) ) - h*f'(y)*(a11*k1 + a12*k2)
k2 - h*f'(y)*(a21*k1 + a22*k2) = f(y + h*(a21*k1 + a22*k2) ) - h*f'(y)*(a21*k1 + a22*k2)
The right side is evaluated at the current values, the left side is a linear system for the new values. So
K = K - solve(M, rhs)
with
K = [ k1; k2]
M = [ 1 - h*f'(y)*a11, -h*f'(y)*a12 ; -h*f'(y)*a21, 1 - h*f'(y)*a22 ]
= I - h*f'(y)*A
rhs = [ k1 - f(y + h*(a11*k1 + a12*k2) ); k2 - f(y + h*(a21*k1 + a22*k2) ) ]
= K - f(Y)
where
Y = y+h*A*K
This should probably work as written for scalar equations; for systems it involves Kronecker products of matrices and vectors.
As the linear part is taken out of the residual, the contraction factor in this new fixed-point iteration is O(h^2), possibly with smaller constants, so it converges faster and, it has been argued, for larger step sizes.
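For the scalar case, one sweep of this simplified Newton iteration could be written roughly as follows (a sketch; f, fprime, A, h, y, k1, k2 as above):
J   = fprime(y);                 % f'(y) at the base point
M   = eye(2) - h*J*A;            % M = I - h*f'(y)*A
K   = [k1; k2];
Y   = y + h*(A*K);               % stage arguments, Y = y + h*A*K
rhs = K - [f(Y(1)); f(Y(2))];    % residual K - f(Y)
K   = K - M \ rhs;               % simplified Newton update
k1  = K(1);  k2 = K(2);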
What you have in the code regarding the implicit method step shows the steps of the algorithm in the right order, but with wrong arguments in the function calls.
What you are doing with h is hard to follow. One could guess that the task is to explore the results of the method for a collection of step sizes. In that case the h for each integration run is constant, with the step size halved and the step count increased for the next integration run, roughly as in the sketch below.
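If that is the intent, the usual layout is one full integration per step size (gauss4 here is a hypothetical stand-in for your integrator, assumed to take a constant step size h and step count N and return the array of approximations):
T  = 1;                           % hypothetical fixed final time
y0 = 1;                           % initial value
for p = 0:11
    h = 1/2^p;                    % constant step size for this run
    N = round(T/h);               % number of steps so that N*h = T
    y = gauss4(f, fprime, y0, h, N);
    fprintf('h = %-12g y(T) = %.12f\n', h, y(end));
end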

Solving System of Second Order Ordinary Differential Equation in Matlab

Introduction
I am using Matlab to simulate some dynamic systems through numerically solving systems of Second Order Ordinary Differential Equations using ODE45. I found a great tutorial from Mathworks (link for tutorial at end) on how to do this.
In the tutorial the system of equations is explicit in x and y as shown below:
x''=-D(y) * x' * sqrt(x'^2 + y'^2)
y''=-D(y) * y' * sqrt(x'^2 + y'^2) + g(y)
Both equations above have form y'' = f(x, x', y, y')
Question
However, I am coming across systems of equations where the variables cannot be solved for explicitly as shown in the example. For example, one of the systems has the following set of 3 second-order ordinary differential equations:
y double prime equation
y'' - .5*L*(x''*sin(x) + x'^2*cos(x)) + (k/m)*y - g = 0
x double prime equation
.33*L^2*x'' - .5*L*y''*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
A single prime is first derivative
A double prime is second derivative
L, g, m, k, and C are given parameters.
How can Matlab be used to numerically solve a set of second-order ordinary differential equations where the second derivatives cannot be explicitly solved for?
Thanks!
Your second system has the form
a11*x'' + a12*y'' = f1(x,y,x',y')
a21*x'' + a22*y'' = f2(x,y,x',y')
which you can solve as a linear system
[x''; y''] = A\f
or in this case explicitly using Cramer's rule
x'' = ( a22*f1 - a12*f2 ) / (a11*a22 - a12*a21)
y'' accordingly.
I would strongly recommend keeping named intermediate variables in the code, to reduce the chance of typing errors and to avoid computing the same expressions multiple times.
The code could look like this (untested):
function dz = odefunc(t,z)
% L, g, m, k, C must be available here, e.g. declared global or captured via a wrapper
x = z(1); dx = z(2); y = z(3); dy = z(4);
A = [ -.5*L*sin(x), 1 ; .33*L^2, -0.5*L*sin(x) ];
b = [ dx^2*cos(x) + (k/m)*y - g ; -.33*L^2*C*cos(x) + .5*g*L*sin(x) ];
d2 = A\b;                        % d2 = [x''; y'']
dz = [dx; d2(1); dy; d2(2)];     % column vector for ode45
end
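The handle could then be passed to ode45 in the usual way, for example (assuming L, g, m, k, C are visible inside odefunc, and with a hypothetical initial state and time span):
z0 = [0; 0; 0.1; 0];                 % hypothetical initial [x; x'; y; y']
[t, z] = ode45(@odefunc, [0 10], z0);
plot(t, z(:,1), t, z(:,3))           % x(t) and y(t)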
Yes your method is correct!
I post the following code below:
%Rotating Pendulum Sym Main
clc
clear all;
%Define parameters
global M K L g C;
M = 1;
K = 25.6;
L = 1;
C = 1;
g = 9.8;
% define initial values for theta, thetad, del, deld
e_0 = 1;
ed_0 = 0;
theta_0 = 0;
thetad_0 = .5;
initialValues = [e_0, ed_0, theta_0, thetad_0];
% Set a timespan
t_initial = 0;
t_final = 36;
dt = .01;
N = (t_final - t_initial)/dt;
timeSpan = linspace(t_initial, t_final, N);
% Run ode45 to get z (theta, thetad, del, deld)
[t, z] = ode45(@RotSpngHndl, timeSpan, initialValues);
%initialize variables
e = zeros(N,1);
ed = zeros(N,1);
theta = zeros(N,1);
thetad = zeros(N,1);
T = zeros(N,1);
V = zeros(N,1);
x = zeros(N,1);
y = zeros(N,1);
for i = 1:N
e(i) = z(i, 1);
ed(i) = z(i, 2);
theta(i) = z(i, 3);
thetad(i) = z(i, 4);
T(i) = .5*M*(ed(i)^2 + (1/3)*L^2*C*sin(theta(i)) + (1/3)*L^2*thetad(i)^2 - L*ed(i)*thetad(i)*sin(theta(i)));
V(i) = -M*g*(e(i) + .5*L*cos(theta(i)));
E(i) = T(i) + V(i);
end
figure(1)
plot(t, T,'r');
hold on;
plot(t, V,'b');
plot(t,E,'y');
title('Energy');
xlabel('time(sec)');
legend('Kinetic Energy', 'Potential Energy', 'Total Energy');
Here is function handle file for ode45:
function dz = RotSpngHndl(~, z)
% Define Global Parameters
global M K L g C
A = [1, -.5*L*sin(z(3));
-.5*L*sin(z(3)), (1/3)*L^2];
b = [.5*L*z(4)^2*cos(z(3)) - (K/M)*z(1) + g;
(1/3)*L^2*C*cos(z(3)) + .5*g*L*sin(z(3))];
X = A\b;
% return column vector [ed; edd; thetad; thetadd]
dz = [z(2);
X(1);
z(4);
X(2)];

Solving Set of Second Order ODEs with Matlab ODE45 function

Introduction
NOTE IN CODE AND DISCUSSION:
A single d is first derivative; a double d is second derivative.
I am using Matlab to simulate some dynamic systems through numerically solving the governing Lagrange equations, basically a set of second-order ordinary differential equations. I am using ODE45. I found a great tutorial from Mathworks (link for tutorial below) on how to solve a basic set of second order ordinary differential equations.
https://www.mathworks.com/academia/student_center/tutorials/source/computational-math/solving-ordinary-diff-equations/player.html
Based on the tutorial I simulated the motion for an elastic spring pendulum by obtaining two second order ordinary differential equations (one for angle theta and the other for spring elongation) shown below:
theta double prime equation:
M*thetadd*(L + del)^2 + M*g*sin(theta)*(L + del) + M*deld*thetad*(2*L + 2*del) = 0
del (spring elongation) double prime equation:
K*del + M*deldd - (M*thetad^2*(2*L + 2*del))/2 - M*g*cos(theta) = 0
Both equations above have form ydd = f(x, xd, y, yd)
I solved the set of equations by a common reduction-of-order method, setting the column vector z to [theta, thetad, del, deld] and therefore zd = [thetad, thetadd, deld, deldd]. Next I used two MATLAB files: a simulation file and a function-handle file for ode45. See the code of the simulation file and the function-handle file below:
Simulation File
%ElasticPdlmSymMainSim
clc
clear all;
%Define parameters
global M K L g;
M = 1;
K = 25.6;
L = 1;
g = 9.8;
% define initial values for theta, thetad, del, deld
theta_0 = 0;
thetad_0 = .5;
del_0 = 1;
deld_0 = 0;
initialValues = [theta_0, thetad_0, del_0, deld_0];
% Set a timespan
t_initial = 0;
t_final = 36;
dt = .01;
N = (t_final - t_initial)/dt;
timeSpan = linspace(t_initial, t_final, N);
% Run ode45 to get z (theta, thetad, del, deld)
[t, z] = ode45(@OdeFunHndlSpngPdlmSym, timeSpan, initialValues);
Here is the function handle file:
function dz = OdeFunHndlSpngPdlmSym(~, z)
% Define Global Parameters
global M K L g
% Take output from SymDevFElSpringPdlm.m file for fy1 and fy2 and
% substitute into z2 and z4 respectively
% z1 and z3 are simply z2 and z4
% fy1=thetadd=z(2)= -(M*g*sin(z1)*(L + z3) + M*z2*z4*(2*L + 2*z3))/(M*(L + z3)^2)
% fy2=deldd=z(4)=((M*(2*L + 2*z3)*z2^2)/2 - K*z3 + M*g*cos(z1))/M
% return column vector [thetad; thetadd; deld; deldd]
dz = [z(2);
-(M*g*sin(z(1))*(L + z(3)) + M*z(2)*z(4)*(2*L + 2*z(3)))/(M*(L + z(3))^2);
z(4);
((M*(2*L + 2*z(3))*z(2)^2)/2 - K*z(3) + M*g*cos(z(1)))/M];
Question
However, I am coming across systems of equations where the variables cannot be solved for explicitly, as is the case with the spring pendulum example. For one case I have the following set of ordinary differential equations:
y double prime equation
ydd - .5*L*(xdd*sin(x) + xd^2*cos(x)) + (k/m)*y - g = 0
x double prime equation
.33*L^2*xdd - .5*L*ydd*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
L, g, m, k, and C are given parameters.
Note that the x'' term appears in the y'' equation and the y'' term appears in the x'' equation, so I am not able to use the reduction-of-order method directly. Can I use Matlab ODE45 to solve the set of ordinary differential equations in the second example in a manner similar to the first example?
Thanks!
This problem can be solved by working out some of the math by hand. The equations are linear in xdd and ydd so it should be straightforward to solve.
ydd - .5*L*(xdd*sin(x) + xd^2*cos(x)) + (k/m)*y - g = 0
.33*L^2*xdd - .5*L*ydd*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
can be rewritten as
-.5*L*sin(x)*xdd + ydd = .5*L*xd^2*cos(x) - (k/m)*y + g
.33*L^2*xdd - .5*L*sin(x)*ydd = .33*L^2*C*cos(x) - .5*g*L*sin(x)
which is of the form A*x = b.
For more complex systems, you can look into the fsolve function.
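For the system above, a minimal ode45 right-hand side built around that A*x = b solve could look like the following sketch (the function name rhs, the time span, and the initial state z0 are hypothetical; the state is ordered z = [x; xd; y; yd] and the rows follow the two rewritten equations):
function dz = rhs(~, z, L, g, m, k, C)
x = z(1); xd = z(2); y = z(3); yd = z(4);
% A*[xdd; ydd] = b, rows in the order of the rewritten equations above
A = [ -.5*L*sin(x),  1
       .33*L^2,     -.5*L*sin(x) ];
b = [ .5*L*xd^2*cos(x) - (k/m)*y + g
      .33*L^2*C*cos(x) - .5*g*L*sin(x) ];
acc = A\b;                           % acc = [xdd; ydd]
dz = [xd; acc(1); yd; acc(2)];
end
A call would then look like [t, z] = ode45(@(t,z) rhs(t, z, L, g, m, k, C), [0 10], z0);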

Code won't produce the value of a definite integral in MATLAB

I've had problems with my code as I've tried to make an integral compute, but it will not evaluate for the power, P2.
I've tried using anonymous function handles with the integral() function in MATLAB as well as just using int(), but it still will not compute. Are the values too small for MATLAB to integrate, or am I just missing something small?
Any help or advice would be appreciated to push me in the right direction. Thanks!
The problem in the code is at the bottom of the section labelled "Power Calculations". My integral also gets quite messy, if that makes a difference.
%%%%%%%%%%% Parameters %%%%%%%%%%%%
n0 = 1; %air
n1 = 1.4; %layer 1
n2 = 2.62; %layer 2
n3 = 3.5; %silicon
L0 = 650*10^(-9); %centre wavelength
L1 = 200*10^(-9): 10*10^(-9): 2200*10^(-9); %lambda from 200nm to 2200nm
x = ((pi./2).*(L0./L1)); %layer phase thickness
r01 = ((n0 - n1)./(n0 + n1)); %reflection coefficient 01
r12 = ((n1 - n2)./(n1 + n2)); %reflection coefficient 12
r23 = ((n2 - n3)./(n2 + n3)); %reflection coefficient 23
t01 = ((2.*n0)./(n0 + n1)); %transmission coefficient 01
t12 = ((2.*n1)./(n1 + n2)); %transmission coefficient 12
t23 = ((2.*n2)./(n2 + n3)); %transmission coefficient 23
Q1 = [1 r01; r01 1]; %Matrix Q1
Q2 = [1 r12; r12 1]; %Matrix Q2
Q3 = [1 r23; r23 1]; %Matrix Q3
%%%%%%%%%%%% Graph of L vs R %%%%%%%%%%%
R = zeros(size(x));
for i = 1:length(x)
P = [exp(j.*x(i)) 0; 0 exp(-j.*x(i))]; %General Matrix P
T = ((1./(t01.*t12.*t23)).*(Q1*P*Q2*P*Q3)); %Transmission
T11 = T(1,1); %T11 value
T21 = T(2,1); %T21 value
R(i) = ((abs(T21./T11))^2).*100; %Percent reflectivity
end
plot(L1,R)
title('Percent Reflectance vs. wavelength for 2 Layers')
xlabel('Wavelength (m)')
ylabel('Reflectance (%)')
%%%%%%%%%%% Power Calculation %%%%%%%%%%
syms L; %General lambda
y = ((pi./2).*(L0./L)); %Layer phase thickness with variable lambda
P1 = [exp(j.*y) 0; 0 exp(-j.*y)]; %Matrix P with variable Lambda
T1 = ((1./(t01.*t12.*t23)).*(Q1*P1*Q2*P1*Q3)); %Transmittivity matrix T1
I = ((6.16^(15))./((L.^(5)).*exp(2484./L) - 1)); %Blackbody Irradiance
Tf11 = T1(1,1); %New T11 section of matrix with variable Lambda
Tf2 = (((abs(1./Tf11))^2).*(n3./n0)); %final transmittivity
P1 = Tf2.*I; %Power before integration
L_initial = 200*10^(-9); %Initial wavelength
L_final = 2200*10^(-9); %Final wavelength
P2 = int(P1, L, L_initial, L_final) %Power production
I've refactored your code to make it easier to read and understand, to improve code reuse, and to improve performance.
Why do you use so many unnecessary parentheses?!
Anyway, there's a few problems I saw in your code.
You used i as a loop variable, and j as the imaginary unit. It was OK for this one instance, but just barely so. In the future it's better to use 1i or 1j for the imaginary unit, and/or m or ii or something other than i or j as the loop index variable. You're helping yourself and your colleagues; it's just less confusing that way.
Towards the end, you used the variable name P1 twice in a row, and in two different ways. Although it works here, it's confusing! Took me a while to unravel why a matrix-producing function was producing scalars instead...
But by far the biggest problem in your code is the numerical problem with the blackbody irradiance computation. The term
L⁵ · exp(2484/L) - 1
for λ₀ = 200 · 10⁻⁹ m will require computing the quantity
exp(1.242 · 10¹⁰)
which, needless to say, is rather difficult for a computer :) Actually, the problem with your computation is two-fold. First, the exponentiation is definitely out of range of 64 bit IEEE-754 double precision, and will therefore result in ∞. Second, the parentheses are wrong; Planck's law should read
C/L⁵ · 1/(exp(D/L) - 1)
with C and D constants (involving Planck's constant, the speed of light, and the Boltzmann constant), which you've presumably precomputed (I didn't check the values; I do know the choice of units can mess these up, so better check).
So, aside from the silly parentheses error, I suspect the main problem is that you simply forgot to rescale λ to nm. Changing everything in the blackbody equation to nm and correcting those parentheses gives the code
I = 6.16^(15) / ( (L*1e+9)^5 * (exp(2484/(L*1e+9)) - 1) );
With this, I got a finite value for the integral of
P2 = 1.052916498836486e-010
But, again, you'd better double-check everything.
Note that I used quadgk(), because it's one of the better ones available on R2010a (which I'm stuck with), but you can just as easily replace this with integral() available on anything newer than R2012a.
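For example, on R2012a and newer the quadgk call inside the code below could be swapped one-for-one:
pwr = integral(@power_production, L_initial, L_final);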
Here's the code I ended up with:
function pwr = my_fcn()
% Parameters
n0 = 1; % air
n1 = 1.4; % layer 1
n2 = 2.62; % layer 2
n3 = 3.5; % silicon
L0 = 650e-9; % centre wavelength
% Reflection coefficients
r01 = (n0 - n1)/(n0 + n1);
r12 = (n1 - n2)/(n1 + n2);
r23 = (n2 - n3)/(n2 + n3);
% Transmission coefficients
t01 = (2*n0) / (n0 + n1);
t12 = (2*n1) / (n1 + n2);
t23 = (2*n2) / (n2 + n3);
% Quality factors
Q1 = [1 r01; r01 1];
Q2 = [1 r12; r12 1];
Q3 = [1 r23; r23 1];
% Initial & Final wavelengths
L_initial = 200e-9;
L_final = 2200e-9;
% plot reflectivity for selected lambda range
plot_reflectivity(L_initial, L_final, 1000);
% Compute power production
pwr = quadgk(@power_production, L_initial, L_final);
% Helper functions
% ========================================
% Graph of lambda vs reflectivity
function plot_reflectivity(L_initial, L_final, N)
L = linspace(L_initial, L_final, N);
R = zeros(size(L));
for ii = 1:numel(L)
% Transmission
T = transmittivity(L(ii));
% Percent reflectivity
R(ii) = 100 * abs(T(2,1)/T(1,1))^2 ;
end
plot(L, R)
title('Percent Reflectance vs. wavelength for 2 Layers')
xlabel('Wavelength (m)')
ylabel('Reflectance (%)')
end
% Compute transmittivity matrix for a single wavelength
function T = transmittivity(L)
% Layer phase thickness with variable lambda
y = pi/2 * L0/L;
% Matrix P with variable Lambda
P1 = [exp(+1j*y) 0
0 exp(-1j*y)];
% Transmittivity matrix T1
T = 1/(t01*t12*t23) * Q1*P1*Q2*P1*Q3;
end
% Power for a specific wavelength. Note that this function
% accepts vector-valued wavelengths; needed for quadgk()
function pwr = power_production(L)
pwr = zeros(size(L));
for ii = 1:numel(L)
% Transmittivity matrix
T1 = transmittivity(L(ii));
% Blackbody Irradiance
I = 6.16^(15) / ( (L(ii)*1e+9)^5 * (exp(2484/(L(ii)*1e+9)) - 1) );
% final transmittivity
Tf2 = abs(1/T1(1))^2 * n3/n0;
% Power before integration
pwr(ii) = Tf2 * I;
end
end
end

The Fastest Method of Solving System of Non-linear Equations in MATLAB

Assume we have three equations:
eq1 = x1 + (x1 - x2) * t - X == 0;
eq2 = z1 + (z1 - z2) * t - Z == 0;
eq3 = ((X-x1)/a)^2 + ((Z-z1)/b)^2 - 1 == 0;
while six of known variables are:
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
So we are looking for three unknown variables that are:
X , Z and t
I wrote two methods to solve it. But since I need to run this code for 5.7 million data points, it becomes really slow.
Method one (using "solve"):
tic
S = solve( eq1 , eq2 , eq3 , X , Z , t ,...
'ReturnConditions', true, 'Real', true);
toc
X = double(S.X(1))
Z = double(S.Z(1))
t = double(S.t(1))
results of method one:
X = 316190;
Z = 234060;
t = -2.9280;
Elapsed time is 0.770429 seconds.
Method two (using "fsolve"):
coeffs = [a,b,x1,x2,z1,z2]; % Known parameters
x0 = [ x2 ; z2 ; 1 ].'; % Initial values for iterations
f_d = @(x0) myfunc(x0,coeffs); % f_d considers x0 as variables
options = optimoptions('fsolve','Display','none');
tic
M = fsolve(f_d,x0,options);
toc
results of method two:
X = 316190; % X = M(1)
Z = 234060; % Z = M(2)
t = -2.9280; % t = M(3)
Elapsed time is 0.014 seconds.
Although the second method is faster, it still needs to be improved. Please let me know if you have a better solution. Thanks
* Extra information:
If you are interested in what those 3 equations are, the first two are equations of a line in 2D and the third is the equation of an ellipse. I need to find the intersection of the line with the ellipse. Obviously, we get two points as a result, but let's forget about the second answer for simplicity.
My suggestion is to use the second approach, which is the one recommended by MATLAB for nonlinear equation systems.
Declare an M-function:
function Y=mysistem(X)
%X(1) = X
%X(2) = t
%X(3) = Z
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
Y(1,1) = x1 + (x1 - x2) * X(2) - X(1);
Y(2,1) = z1 + (z1 - z2) * X(2) - X(3);
Y(3,1) = ((X(1)-x1)/a)^2 + ((X(3)-z1)/b)^2 - 1;
end
Then, to solve, use:
x0 = [ x2 , z2 , 1 ];
M = fsolve(@mysistem,x0,options);
You may want to reduce the default precision by changing StepTolerance (default 1e-6).
Also, for a further speed-up, you may want to supply the Jacobian matrix for greater efficiency.
For more reference, take a look at the official documentation:
fsolve Nonlinear Equations with Analytic Jacobian
Basically, by giving the solver the Jacobian matrix of the system (and the corresponding options) you can increase the method's efficiency.
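A sketch of how that could look for this system (mysistemJac is a hypothetical name; the Jacobian is taken with respect to [X(1), X(2), X(3)] = [X, t, Z]; on older releases use optimset('Jacobian','on') instead of SpecifyObjectiveGradient):
function [Y, J] = mysistemJac(X)
a = 42; b = 12;
x1 = 316190; z1 = 234070;
x2 = 316190; z2 = 234070;
Y = [ x1 + (x1 - x2)*X(2) - X(1)
      z1 + (z1 - z2)*X(2) - X(3)
      ((X(1)-x1)/a)^2 + ((X(3)-z1)/b)^2 - 1 ];
% Analytic Jacobian dY/d[X(1), X(2), X(3)]
J = [ -1,                x1 - x2,   0
       0,                z1 - z2,  -1
       2*(X(1)-x1)/a^2,  0,         2*(X(3)-z1)/b^2 ];
end
And the corresponding call:
opts = optimoptions('fsolve', 'SpecifyObjectiveGradient', true, 'Display', 'none');
M = fsolve(@mysistemJac, [x2, z2, 1], opts);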