I need help with a problem. I have written a program to calculate the value of a function using the Newton-Raphson method. However, the function also has a variable I would like to iterate over, V. The program runs fine until the second iteration of the outer for loop; then the inner for loop will not run further once it reaches the Newton-Raphson step. If anyone has any ideas about what is wrong, I would greatly appreciate it. The error I get is: Warning: Solution does not exist because the system is inconsistent.
The code is below.
for V = 1:50
    syms x;
    f(V) = Il-x-Is.*(exp((q.*(V+x.*Rs))./(1000.*y.*K.*T))-1)-((V+x.*Rs)./Rsh);
    g(V) = diff(f(V));
    x0 = 0;
    for i = 1:10
        f0 = vpa(subs(f,x,x0));
        f0_der = vpa(subs(g,x,x0));
        y = x0-f0/f0_der; % Newton-Raphson
        x0 = y;
    end
end
Assuming you have a function defined like
func = @(x,V) V+x+exp(x);
There are plenty of options that avoid expensive symbolic calculations.
Firstly, making a vector of values of x0 using fzero and a for loop:
for V = 1:50
    x0(V) = fzero(@(x) func(x,V),0);
end
Secondly, the same thing again but written as an anonymous function, so you can call x0(1.5) or x0(1:50):
x0 = @(V) arrayfun(@(s) fzero(@(x) func(x,s),0),V);
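For example, with func as defined above, you can then evaluate a single value or a whole vector:
x0(1.5)  % root of func(x,1.5) = 0 for a single V
x0(1:50) % vector of roots, one per value of V, same as the loop above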
Finally, if you want to use ten steps of Newton's method and calculate the derivative symbolically (although this is not a great method),
syms y Vsym
g = matlabFunction(diff(func(y,Vsym),y),'Vars',[y Vsym]);
for V = 1:50
    x0(V) = 0;
    for i = 1:10
        x0(V) = x0(V)-func(x0(V),V)/g(x0(V),V); % Newton-Raphson
    end
end
This at least will be more efficient in the loops, because it only calls anonymous (and generated) functions rather than doing symbolic substitution at every step.
I have 100 equations with 5 variables. Is there a function in MATLAB that I can use to find the optimal solution of these equations?
My problem is to find the a, b, c, d, e that minimize ||(a-i*c)^2 + (b-j*d)^2 + e - h(i,j)|| over all i, j from -10 to 10, i.e.
%% Note: not MATLAB code. Just showing the math.
for i = -10:10
    for j = -10:10
        (a-i*c)^2 + (b-j*d)^2 + e = h(i,j)
known: h(i,j) is a 21*21 matrix (i and j each run over -10:10), and i, j are indices
expected: the optimal result for a, b, c, d, e
You can try using lsqnonlin as follows.
%% define a helper function in your .m file
function f = fun(x)
a=x(1); b=x(2); c=x(3); d=x(4); e=x(5); % Using variable names from your question. In other situations, be careful when overwriting e.
f=zeros(21*21,1); % i and j each take 21 values (-10:10), so there are 21*21 residuals. You should make this a variable for good practice.
for i = -10:10
    for j = -10:10
        f(21*(i+10)+(j+10+1)) = (a-i*c)^2 + (b-j*d)^2 + e - h(i,j); % linear index runs from 1 to 21*21
    end
end
end
(Aside, why is your h(i,j) taking negative indices??)
In your main function you can simply write
function out=myproblem(x0)
out=lsqnonlin(@fun,x0);
end
At the command line, you can call it with a specific initial guess, such as
myproblem([0,0,0,0,0])
I used a helper function rather than an anonymous function because, in my experience, helpers get sped up by the JIT while anonymous functions do not. I also opted to compute linear indices inside the loops, as opposed to filling a matrix and calling reshape afterwards, because I expect the extra reshape to cost significant time. Remember that O(1) in fun is not O(1) in lsqnonlin, since fun is evaluated many times.
(As always, a solution to a nonlinear problem is not guaranteed.)
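If the loops in fun ever become the bottleneck, the residual can also be built without them. A minimal vectorized sketch, assuming h has been re-stored as an ordinary 21-by-21 matrix hmat (so the value for grid point (i,j) lives at hmat(i+11,j+11), avoiding the negative indices mentioned above):
function f = fun_vec(x, hmat)
% Hypothetical vectorized residual; x = [a b c d e], hmat is 21x21
[I, J] = ndgrid(-10:10, -10:10); % all (i,j) pairs at once
res = (x(1) - I*x(3)).^2 + (x(2) - J*x(4)).^2 + x(5) - hmat;
f = res(:); % flatten to the column vector lsqnonlin expects
end
You would then call lsqnonlin(@(x) fun_vec(x,hmat), x0) instead of lsqnonlin(@fun, x0).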
This post builds on my post about quickly evaluating analytic Jacobian in Matlab:
fast evaluation of analytical jacobian in MATLAB
The key difference is that now I am working with the Hessian, and I have to evaluate close to 700 matlabFunctions (instead of one matlabFunction, as I did for the Jacobian) each time the Hessian is evaluated. So there is an opportunity to do things a little differently.
I have tried to do this two ways so far and I am thinking about implementing a third and was wondering if anyone has any other suggestions. I will go through each method with a toy example, but first some preprocessing to generate these matlabFunctions:
Preprocessing:
% This part of the code is calculated once; it is not the issue
dvs = 5;
X = sym('X',[dvs,1]);
num = dvs - 1; % number of constraints
% multiple functions
for k = 1:num
    f1(X(k+1),X(k)) = (X(k+1)^3 - X(k)^2*k^2);
    c(k) = f1;
end
gradc = jacobian(c,X).'; % .' performs transpose
parfor k = 1:num
    hessc{k} = jacobian(gradc(:,k),X);
end
parfor k = 1:num
    hess_name = strcat('hessian_',num2str(k));
    matlabFunction(hessc{k},'file',hess_name,'vars',X);
end
METHOD #1: Evaluate the functions in series
%% Now we use the functions to run an "optimization." Just for an example, the "optimization" is just a for loop
fprintf('This is test A, where the functions are evaluated in series!\n');
tic
for q = 1:10
    x_dv = rand(dvs,1); % these are the design variables
    lambda = rand(num,1); % these are the Lagrange multipliers
    x_dv_cell = num2cell(x_dv); % for passing large design variables
    for k = 1:num
        hess_name = strcat('hessian_',num2str(k));
        function_handle = str2func(hess_name);
        H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
    end
    H = sum(H_temp,3);
end
fprintf('The time for test A was:\n')
toc
METHOD #2: Evaluate the functions in parallel
%% Try to run a parfor loop
fprintf('This is test B, where the functions are evaluated in parallel!\n');
tic
for q = 1:10
    x_dv = rand(dvs,1); % these are the design variables
    lambda = rand(num,1); % these are the Lagrange multipliers
    x_dv_cell = num2cell(x_dv); % for passing large design variables
    parfor k = 1:num
        hess_name = strcat('hessian_',num2str(k));
        function_handle = str2func(hess_name);
        H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
    end
    H = sum(H_temp,3);
end
fprintf('The time for test B was:\n')
toc
RESULTS:
METHOD #1 = 0.008691 seconds
METHOD #2 = 0.464786 seconds
DISCUSSION of RESULTS
This result makes sense because the functions evaluate very quickly, so running them in parallel wastes a lot of time setting up the jobs, sending them out to the different MATLAB workers, and then getting the data back from them. I see the same result on my actual problem.
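If you want to quantify that transfer overhead directly, the Parallel Computing Toolbox can report how many bytes go to and from the workers. A minimal sketch, assuming a MATLAB version that has ticBytes/tocBytes:
p = gcp; % get (or start) the current parallel pool
ticBytes(p);
parfor k = 1:num
    function_handle = str2func(strcat('hessian_',num2str(k)));
    H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
end
tocBytes(p) % prints bytes sent to and received from each worker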
METHOD #3: Evaluating the functions using the GPU
I have not tried this yet, but I am interested to see what the performance difference is. I am not yet familiar with doing this in Matlab and will add it once I am done.
Any other thoughts? Comments? Thanks!
I would like to solve a system with two equations and two variables.
Tau(i) and Roh(i) are input arrays.
Tau=[0.91411 0.91433 0.91389 0.91399 0.91511 0.915]
Roh=[0.07941 0.07942 0.07952 0.07946 0.07951 0.07947]
I would like to calculate R(i) and t(i) for each step i (in a for loop). I would be glad of help solving the equations.
Tau(i)-((1-R(i))^2*t(i))/(1-R(i)^2*t(i)^2)==0
Roh(i)-R(i)-((1-R(i))^2*R(i)*t(i)^2)/(1-R(i)^2*t(i)^2)==0
I have tried the following script, but I have difficulty writing the proper code to export the data. I get only a "sym", which is not a numeric value.
function [R,t] = glassair(Tau, Roh)
for i=1:6
    syms R(i) t(i)
    eq1(i)=sym('Tau(i)-((1-R(i))^2*t(i))/(1-R(i)^2*t(i)^2)');
    eq2(i)=sym('Roh(i)-R(i)-((1-R(i))^2*R(i)*t(i)^2)/(1-R(i)^2*t(i)^2)');
    sol(i)=solve(eq1(i),R(i), eq2(i),t(i));
end
end
There were multiple issues with your code:
By using the R(i) syntax, you created symbolic functions when I think you only need variables. The same goes for eq1(i) and eq2(i): that creates a symbolic function, not a list of your equations (as you probably intended).
You called solve with the wrong order of arguments.
Because you passed strings to sym, your known constants Tau and Roh were not substituted, so you ended up with four unknowns in your equations.
Tau=[0.91411 0.91433 0.91389 0.91399 0.91511 0.915]
Roh=[0.07941 0.07942 0.07952 0.07946 0.07951 0.07947]
syms R t
for i=1:6
    eq1=Tau(i)-((1-R)^2*t)/(1-R^2*t^2);
    eq2=Roh(i)-R-((1-R)^2*R*t^2)/(1-R^2*t^2);
    sol=solve([eq1,eq2]);
    allsol(i).R=double(sol.R);
    allsol(i).t=double(sol.t);
end
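Afterwards, assuming solve returns a single solution for each data point, you can collect the results into plain vectors:
Rvals = [allsol.R]; % 1x6 vector of R values
tvals = [allsol.t]; % 1x6 vector of t values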
You just need to define the functions once, then use a for loop to get the values.
function [R_out,t_out] = glassair(Tau_in, Roh_in)
syms R t Tau Roh
eq1 = Tau-((1-R)^2*t)/(1-R^2*t^2);
eq2 = Roh-R-((1-R)^2*R*t^2)/(1-R^2*t^2);
R_out = zeros(1,6); % Given it will always be 6
t_out = zeros(1,6);
for i=1:6
    Tau = Tau_in(i);
    Roh = Roh_in(i);
    sol = solve( subs( [eq1;eq2] ) );
    R_out(i) = double(sol.R);
    t_out(i) = double(sol.t);
end
end
MATLAB is quite smart in that it determines the types for you: when you solve the equations, it detects which variables are needed. The zero preallocation is for speed.
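Usage, with the data from your question:
Tau = [0.91411 0.91433 0.91389 0.91399 0.91511 0.915];
Roh = [0.07941 0.07942 0.07952 0.07946 0.07951 0.07947];
[R, t] = glassair(Tau, Roh);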
I've been trying to use MATLAB to solve equations like this:
B = alpha*Y0*sqrt(epsilon)/(pi*ln(b/a)*sqrt(epsilon_t)) * integral from 0 to pi, with respect to theta, of
(2*sinint(k0*sqrt(epsilon*(a^2+b^2-2abcos(theta))-sinint(2*k0*sqrt(epsilon)*a*sin(theta/2))-sinint(2*k0*sqrt(epsilon)*b*sin(theta/2)))
Where epsilon is the unknown.
I know how to symbolically solve equations with an unknown embedded in an integral by using int() and solve(), but the symbolic integrator int() takes too long for equations this complicated. When I try to use quad(), quadl(), or quadgk(), I have trouble dealing with how the unknown is embedded in the integral.
This sort of thing gets complicated really fast. Although it is possible to do it all in a single inline equation, I would advise you to split it up into multiple nested functions, if only for readability.
The best example of why readability is important: you have a bracketing problem in the equation you posted; there are not enough closing brackets, so I can't be entirely sure what the equation looks like in mathematical notation :)
Anyway, here's one way to do it with the version I *think* you meant:
function test

% some random values for testing
Y0 = rand;
b = rand;
a = rand;
k0 = rand;
alpha = rand;
epsilon_t = rand;

% D is your B
D = -0.015;

% define SIMPLE anonymous function
Bb = @(ep) F(ep).*main_integral(ep) - D;

% aaaand...solve it!
sol = fsolve(Bb, 1)

% The anonymous function above is only simple because of these:

    % the main integral
    function val = main_integral(epsilon)
        % we need to loop through epsilon, due to how quadgk evaluates things
        val = zeros(size(epsilon));
        for ii = 1:numel(epsilon)
            ep = epsilon(ii);
            % NOTE how the sinint's all have a different function as argument:
            val(ii) = quadgk(@(th)...
                2*sinint(A(ep,th)) - sinint(B(ep,th)) - sinint(C(ep,th)), ...
                0, pi);
        end
    end

    % factor in front of the integral
    function f = F(epsilon)
        f = alpha*Y0*sqrt(epsilon)./(pi*log(b/a)*sqrt(epsilon_t)); end

    % first sinint argument
    function val = A(epsilon, theta)
        val = k0*sqrt(epsilon*(a^2+b^2-2*a*b*cos(theta))); end

    % second sinint argument
    function val = B(epsilon, theta)
        val = 2*k0*sqrt(epsilon)*a*sin(theta/2); end

    % third sinint argument
    function val = C(epsilon, theta)
        val = 2*k0*sqrt(epsilon)*b*sin(theta/2); end

end
The solution above will still be quite slow, but I think that's pretty normal for integrals this complicated.
I don't think implementing your own sinint will help much, as most of the speed loss is due to the for loops over non-builtin functions. If it's speed you want, I'd go for a MEX implementation with your own Gauss-Kronrod adaptive quadrature routine.
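That said, before writing a MEX file, one cheaper option is to replace the symbolic sinint with a purely numeric stand-in for Si(x) (a sketch; sinint_num is a name introduced here, and its accuracy and speed should be checked against sinint before relying on it):
% Si(x) = integral from 0 to x of sin(t)/t dt
si_integrand = @(t) sin(t)./t; % removable singularity at t = 0; quadgk does not sample the endpoints
sinint_num = @(x) arrayfun(@(s) quadgk(si_integrand, 0, s), x);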
The code in question is here:
function k = whileloop(odefun,args)
...
while (sign(costheta) == originalsign)
    y = y(:) + odefun(0,y(:),vars,param)*(dt); % Line 4
    costheta = dot(y-normpt,normvec);
    k = k + 1;
end
...
end
and to clarify, odefun is F1.m, an m-file of mine. I pass it into the function that contains this while-loop, with something like whileloop(@F1,args). Line 4 in the code block above is the Euler method.
The reason I'm using a while-loop is because I want to trigger upon the vector "y" crossing a plane defined by a point, "normpt", and the vector normal to the plane, "normvec".
Is there an easy change to this code that will speed it up dramatically? Should I attempt learning how to make mex files instead (for a speed increase)?
Edit:
Here is a rushed attempt at an example of what one could try to test with. I have not debugged this. It is to give you an idea:
%Save the following 3 lines in an m-file named "F1.m"
function ydot = F1(placeholder1,y,placeholder2,placeholder3)
ydot = y/10;
end

%Run the following:
dt = 1.5e-12; %I do not know about this. You will have to experiment.
y0 = [.1,.1,.1];
normpt = [3,3,3];
normvec = [1,1,1];
originalsign = sign(dot(y0-normpt,normvec));
costheta = originalsign;
y = y0;
k = 0;
while (sign(costheta) == originalsign)
    y = y(:) + F1(0,y(:),0,0)*(dt); % Line 4
    costheta = dot(y-normpt,normvec);
    k = k + 1;
end
disp(k);
dt should be sufficiently small that it takes hundreds of thousands of iterations to trigger.
Assume I must use the Euler method; I have a stochastic differential equation with state-dependent noise, if you are curious why I ask you to make that assumption.
I would focus on your actual ODE integration. The fewer steps you have to take, the faster the loop will run. I would only worry about the speed of the sign check after you've optimized the actual integration method.
It looks like you're using the first-order explicit Euler method. Have you tried a higher-order integrator or an implicit method? Often you can increase the time step significantly.
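For example, if the explicit-Euler assumption can be relaxed, a classical fourth-order Runge-Kutta step is a drop-in replacement for Line 4 in the example above (a sketch reusing F1, dt, and the loop variables; note that a stochastic equation would call for a stochastic scheme such as Euler-Maruyama instead):
while (sign(costheta) == originalsign)
    k1 = F1(0, y(:), 0, 0);
    k2 = F1(0, y(:)+dt/2*k1, 0, 0);
    k3 = F1(0, y(:)+dt/2*k2, 0, 0);
    k4 = F1(0, y(:)+dt*k3, 0, 0);
    y = y(:) + dt/6*(k1 + 2*k2 + 2*k3 + k4); % replaces the Euler update on Line 4
    costheta = dot(y-normpt,normvec);
    k = k + 1;
end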