Suppose I'm solving a system of nonlinear equations. A simple example would be:
function example
x0 = [15; -2];
options = optimoptions('fsolve','Display','iter','TolFun',eps,'TolX',eps);
[x,fval,exitflag,output] = fsolve(@P1a,x0,options);
end
function f1 = P1a(x)
f1 = [x(1)+x(2)*(x(2)*(5-x(2))-2)- 13; x(1)+x(2)*(x(2)*(1+x(2))-14)-29];
end
How do I determine the rate of convergence? 'Display','iter' shows me the norm at each step, but I can't find a way to extract these values. (For this particular example, I believe fsolve does not converge to the right solution, but rather to a local minimum. That is not the issue, however. I just want to find a way to estimate the convergence rate.)
You can get plenty out of fsolve; however, you'll need to do some work. Read up on the 'OutputFcn' option and on writing output functions for Matlab's Optimization Toolbox methods. This is very similar to the option of the same name used by Matlab's ODE solvers. Here's an example that replicates the 'Display','iter' option-value pair for fsolve (for the default 'trust-region-dogleg' algorithm specifically):
function stop = outfun(x,optimValues,state)
% See private/trustnleqn
stop = false;
switch state
case 'init'
header = sprintf(['\n Norm of First-order Trust-region\n',...
' Iteration Func-count f(x) step optimality radius']);
disp(header);
case 'iter'
iter = optimValues.iteration; % Iteration
numFevals = optimValues.funccount; % Func-count
F = optimValues.fval; % f(x)
normd = optimValues.stepsize; % Norm of step
normgradinf = optimValues.firstorderopt; % First-order optimality
Delta = optimValues.trustregionradius; % Trust-region radius
if iter > 0
formatstr = ' %5.0f %5.0f %13.6g %13.6g %12.3g %12.3g';
iterOutput = sprintf(formatstr,iter,numFevals,F'*F,normd,normgradinf,Delta);
else
formatstr0 = ' %5.0f %5.0f %13.6g %12.3g %12.3g';
iterOutput = sprintf(formatstr0,iter,numFevals,F'*F,normgradinf,Delta);
end
disp(iterOutput);
case 'done'
otherwise
end
end
You can then call this via:
function example
P1a=#(x)[x(1)+x(2)*(x(2)*(5-x(2))-2)- 13; x(1)+x(2)*(x(2)*(1+x(2))-14)-29];
x0 = [15; -2];
opts = optimoptions('fsolve','Display','off','OutputFcn',@outfun,'TolFun',eps,'TolX',eps);
[x,fval,exitflag,output] = fsolve(P1a,x0,opts);
end
This still just prints to the Command Window. From here it's a matter of creating an output function that can write data to an array, file, or other data structure. Here's how you might do that with a global variable (in general, not a good idea):
function stop = outfun2(x,optimValues,state)
stop = false;
global out; % Global variable, define in main function too
switch state
case 'init'
out = [];
case 'iter'
iter = optimValues.iteration; % Iteration
numFevals = optimValues.funccount; % Func-count
F = optimValues.fval; % f(x)
normd = optimValues.stepsize; % Norm of step
normgradinf = optimValues.firstorderopt; % First-order optimality
Delta = optimValues.trustregionradius; % Trust-region radius
if isempty(normd), normd = NaN; end % guard: stepsize may be empty at iteration 0
out = [out;iter numFevals F'*F normd normgradinf Delta];
case 'done'
otherwise
end
end
Then just declare global out; in your main function before calling fsolve. You could also accomplish this by making your output function a nested function, in which case the out array would be shared with the outer main function.
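For example, here is a minimal sketch of the nested-function variant (the names example_nested and outfun_nested are just placeholders, not part of the original code):
function example_nested
out = []; % shared with the nested output function below
P1a = @(x)[x(1)+x(2)*(x(2)*(5-x(2))-2)-13; x(1)+x(2)*(x(2)*(1+x(2))-14)-29];
x0 = [15; -2];
opts = optimoptions('fsolve','Display','off','OutputFcn',@outfun_nested,'TolFun',eps,'TolX',eps);
[x,fval,exitflag,output] = fsolve(P1a,x0,opts);
disp(out); % iteration history collected by the nested function
    function stop = outfun_nested(x,optimValues,state)
        stop = false;
        if strcmp(state,'iter') && optimValues.iteration > 0
            % append: iteration, f(x), norm of step, first-order optimality
            out = [out; optimValues.iteration optimValues.fval'*optimValues.fval ...
                   optimValues.stepsize optimValues.firstorderopt];
        end
    end
end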
The second output function example grows the out array dynamically instead of preallocating it. There's no way around this, because neither we nor the algorithm knows in advance how many iterations it will take to converge. However, for a few hundred iterations, dynamic allocation will still be plenty fast.
I'll leave "determining the rate of convergence" to you now that you have the tools in hand...
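(For reference, a standard way to estimate the observed order of convergence p from successive error measures e_k is p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1}). A minimal sketch using the out array logged by outfun2, assuming you treat the first-order optimality in column 5 as the error measure; substitute norm(x_k - x*) if you log the iterates instead:)
e = out(:,5); % first-order optimality at each iteration (assumed error proxy)
p = log(e(3:end)./e(2:end-1)) ./ log(e(2:end-1)./e(1:end-2));
disp(p) % values near 1 suggest linear convergence, near 2 quadratic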
I want to fit two parameters using lsqnonlin. I have set up my system of ODEs and want to solve it with the new parameters that are found after performing the lsqnonlin fit.
function [dcdt] = CSTRSinSeries(t,c,par)
for i=1:par.n
if i==1
dcdt(i) = 1/par.tau_per_tank * (par.c0-c(i))+par.kf*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
else
dcdt(i) = 1/par.tau_per_tank * (c(i-1)-c(i))+par.k*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
end
end
dcdt = dcdt';
end
% fitcrit function
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par),time_exp,init,[]);
error = (conc_exp-conc_model_fit);
end
I think the problem has to do with the fact that my parameters are in the parameter struct par and that I don't want to fit all of these parameters, but only two of them.
This is my main script for performing the curve fitting:
% initial guesses for model parameters, no. of indices is number of fitted
% parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(@(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
The code now runs without an error; however, the output curve_fit_parameters is the same as my initial values (this also happens when I change the initial values).
The solver output is:
>> PackedBed
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the default value of the optimality tolerance.
<stopping criteria details>
The stopping criteria details give a relative first-order optimality of 0.00e+00, so I think the parameters that lsqnonlin changes have no influence on my error.
I think the mistake is in the lsqnonlin call, where I refer to 'k' instead of 'par.kb' and 'par.kf'. However, I don't know how to refer to them, as they are used inside nested functions. Replacing 'k' by 'par.kb, par.kf' gives me the error: Unexpected MATLAB operator.
Could anyone help me with my problem?
As suggested by Adriaan in the comments, you can easily make a copy of the par struct in your fitcrit function, and assign the parameters to optimize to this par_tmp struct. The fitcrit function then becomes:
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
par_tmp = par;
par_tmp.kf = curve_fit_parameters(1); % change to parameters you need to fit
par_tmp.kb = curve_fit_parameters(2);
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par_tmp),time_exp,init,[]); % pass par_tmp to the ODE function
error = (conc_exp-conc_model_fit);
end
In the script calling lsqnonlin, you will have to change the par struct to include the converged solution of the parameter subset to optimize:
% initial guesses for model parameters, no. of indices is number of fitted
% parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(@(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
% make par_final
par_final = par;
par_final.kf = curve_fit_parameters(1);
par_final.kb = curve_fit_parameters(2);
This post builds on my post about quickly evaluating analytic Jacobian in Matlab:
fast evaluation of analytical jacobian in MATLAB
The key difference is that now I am working with the Hessian, and I have to evaluate close to 700 matlabFunctions (instead of 1 matlabFunction, as I did for the Jacobian) each time the Hessian is evaluated. So there is an opportunity to do things a little differently.
I have tried to do this two ways so far and I am thinking about implementing a third and was wondering if anyone has any other suggestions. I will go through each method with a toy example, but first some preprocessing to generate these matlabFunctions:
PreProcessing:
% This part of the code is calculated once, it is not the issue
dvs = 5;
X=sym('X',[dvs,1]);
num = dvs - 1; % number of constraints
% multiple functions
for k = 1:num
f1(X(k+1),X(k)) = (X(k+1)^3 - X(k)^2*k^2);
c(k) = f1;
end
gradc = jacobian(c,X).'; % .' performs transpose
parfor k = 1:num
hessc{k} = jacobian(gradc(:,k),X);
end
parfor k = 1:num
hess_name = strcat('hessian_',num2str(k));
matlabFunction(hessc{k},'file',hess_name,'vars',X);
end
METHOD #1 : Evaluate functions in series
%% Now we use the functions to run an "optimization." Just for an example the "optimization" is just a for loop
fprintf('This is test A, where the functions are evaluated in series!\n');
tic
for q = 1:10
x_dv = rand(dvs,1); % these are the design variables
lambda = rand(num,1); % these are the lagrange multipliers
x_dv_cell = num2cell(x_dv); % for passing large design variables
for k = 1:num
hess_name = strcat('hessian_',num2str(k));
function_handle = str2func(hess_name);
H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
end
H = sum(H_temp,3);
end
fprintf('The time for test A was:\n')
toc
METHOD #2: Evaluate functions in parallel
%% Try to run a parfor loop
fprintf('This is test B, where the functions are evaluated in parallel!\n');
tic
for q = 1:10
x_dv = rand(dvs,1); % these are the design variables
lambda = rand(num,1); % these are the lagrange multipliers
x_dv_cell = num2cell(x_dv); % for passing large design variables
parfor k = 1:num
hess_name = strcat('hessian_',num2str(k));
function_handle = str2func(hess_name);
H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
end
H = sum(H_temp,3);
end
fprintf('The time for test B was:\n')
toc
RESULTS:
METHOD #1 = 0.008691 seconds
METHOD #2 = 0.464786 seconds
DISCUSSION of RESULTS
This result makes sense because the functions evaluate very quickly, and running them in parallel wastes a lot of time setting up and sending out the jobs to the different MATLAB workers (and then getting the data back from them). I see the same result on my actual problem.
METHOD #3: Evaluating the functions using the GPU
I have not tried this yet, but I am interested to see what the performance difference is. I am not yet familiar with doing this in Matlab and will add it once I am done.
Any other thoughts? Comments? Thanks!
My main script contains the following code:
%# Grid and model parameters
nModel=50;
nModel_want=1;
nI_grid1=5;
Nth=1;
nRow.Scale1=5;
nCol.Scale1=5;
nRow.Scale2=5^2;
nCol.Scale2=5^2;
theta = 90; % degrees
a_minor = 2; % range along minor direction
a_major = 5; % range along major direction
sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1
%# Covariance computation
% Scale 1
for ihRow = 1:nRow.Scale1
for ihCol = 1:nCol.Scale1
[cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp');
end
end
% Scale 2
for ihRow = 1:nRow.Scale2
for ihCol = 1:nCol.Scale2
[cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp');
end
end
%# Scale-up of fine scale values by averaging
[covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1);
I am using two functions, general_CovModel() and general_AverageProperty(), in my main script, which are given as follows:
function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type)
% mod_type should be in strings
angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians
R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)];
h = [hx; hy];
lambda = a_minor/a_major;
D_lambda = [lambda 0; 0 1];
h_2prime = D_lambda*R_theta*h;
h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2));
if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1
if h_eff<=a
cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3));
else
cov = sill;
end
elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1
cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor)));
elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1
cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2)))));
end
and
function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=...
general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,...
nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega)
% This function computes average of a property and variance of that averaged
% property using power averaging
PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT);
%# Average of property
for k=1:nUpscaledT,
for j=1:nUpscaledCol,
for i=1:nUpscaledRow,
sum=0;
for a=1:blocksize_row,
for b=1:blocksize_col,
for c=1:blocksize_t,
sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable
end
end
end
PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property
end
end
end
%# Variance of averaged property
variance_PropertyAvg=var(reshape(PropertyAvg,...
nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1);
%# Normalized variance of averaged property
NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(...
PropertyArray,numel(PropertyArray),1),1,1));
Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables
1) a_minor
2) a_major
3) theta
I am aware that I can use fminsearch; however, I am not able to perturb only the variables I want while using fminsearch.
I won't pretend to understand everything that you are doing. But it sounds like a typical minimization problem. What you want to do is to come up with a single function that takes a_minor, a_major and theta as arguments, and returns a scalar measure of the mismatch between covAvg.Scale2 and cov.Scale1, such as the sum of squared differences. Something like this:
function diff = minimize_me(a_minor, a_major, theta)
... your script goes here
diff = sum((covAvg.Scale2(:) - cov.Scale1(:)).^2); % scalar: sum of squared element-wise differences
end
Then you need matlab to minimize this function. There's more than one option here. Since you only have three variables to minimize over, fminsearch is a good place to start. You would call it something like this:
opts = optimset('display', 'iter');
x = fminsearch( @(x) minimize_me(x(1), x(2), x(3)), [a_minor_start a_major_start theta_start], opts)
The first argument to fminsearch is the function you want to optimize. It must take a single argument: a vector of the variables that will be perturbed in order to find the minimum value. Here I use an anonymous function to extract the values from this vector and pass them into minimize_me. The second argument to fminsearch is a vector containing the values to start searching at. The third argument is a set of options that affect the search; it's a good idea to set display to iter when you first start optimizing, so that you can get an idea of how well the optimizer is converging.
If your parameters have restricted domains (e.g. they must all be positive) take a look at fminsearchbnd on the file exchange.
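For instance, a minimal sketch of such a bounded call, assuming fminsearchbnd is on your path and using purely illustrative bounds (non-negative ranges, theta between 0 and 180 degrees):
lb = [0 0 0]; % lower bounds for a_minor, a_major, theta
ub = [Inf Inf 180]; % upper bounds (theta in degrees)
x = fminsearchbnd(@(x) minimize_me(x(1), x(2), x(3)), ...
    [a_minor_start a_major_start theta_start], lb, ub, opts);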
If I have misunderstood your problem, and this doesn't help at all, try posting code that we can run to reproduce the problem ourselves.
Is it possible to use an ODE solver, such as ode45, and still be able to 'change' values for the parameters within the called function?
For example, if I were to use the following function:
function y = thisode(t, Ic)
% example derivative function
% parameters
a = .05;
b = .005;
c = .0005;
d = .00005;
% state variables
R = Ic(1);
T = Ic(2);
y = [(R*a)-(T/b)+d; (d*R)-(c*T)];
with this script:
clear all
% Initial conditions
It = [0.06 0.010];
% time steps
time = 0:.5:10;
% ODE solver
[t,Ic1] = ode45(@thisode, time, It);
everything works as I would expect. However, I would like an easy way to change the parameter values while still running multiple iterations of the ODE solver with just one function and one script. It does not seem like I can just add more arguments to the ODE solver; for example:
function y = thisode(t, Ic, P)
% parameters
a = P(1);
b = P(2);
c = P(3);
d = P(4);
% state variables
R = Ic(1);
T = Ic(2);
y = [(R*a)-(T/b)+d; (d*R)-(c*T)];
with this script:
clear all
% Initial conditions
It = [0.06 0.010];
P1 = [.05 .005 .0005 .00005]
% time steps
time = 0:.5:10;
% ODE solver
[t,Ic1] = ode45(@thisode, time, It, [], P1);
does not work. I guess I knew this shouldn't work, but I have been unable to come up with a solution. I also considered an if statement within the function and hard coding several sets of parameters to be used (e.g. use set 1 when P == 1, set 2 when P == 2, etc.), but this also did not work, as I don't know where to specify which set to use when calling the ODE solver. Any tips or suggestions on how to use one function and one script with an ODE solver while being able to change parameter values would be much appreciated.
Thank you,
Mike
You'll have to call the function differently:
[t,Ic1] = ode45(@(t,y) thisode(t,y,P1), time, It);
The function ode45 assumes the function passed to it accepts only a t and a y. The call above is a standard trick to get Matlab to pass P1 into thisode, while ode45 passes in t and y on every call.
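If you need to run the solver for several parameter sets, one minimal sketch (the values in P below are just placeholders) is to loop over rows of a parameter matrix and rebuild the anonymous function each time:
P = [.05 .005 .0005 .00005; % parameter set 1
     .10 .005 .0005 .00005]; % parameter set 2 (placeholder values)
results = cell(size(P,1),1);
for i = 1:size(P,1)
    [t,Ic1] = ode45(@(t,y) thisode(t,y,P(i,:)), time, It); % each call sees its own row of P
    results{i} = Ic1;
end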
The code in question is here:
function k = whileloop(odefun,args)
...
while (sign(costheta) == originalsign)
y=y(:) + odefun(0,y(:),vars,param)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
...
end
and to clarify, odefun is F1.m, an m-file of mine. I pass it into the function that contains this while-loop. It's something like whileloop(@F1,args). Line 4 in the code block above is the Euler method.
The reason I'm using a while-loop is because I want to trigger upon the vector "y" crossing a plane defined by a point, "normpt", and the vector normal to the plane, "normvec".
Is there an easy change to this code that will speed it up dramatically? Should I attempt learning how to make mex files instead (for a speed increase)?
Edit:
Here is a rushed attempt at an example of what one could try to test with. I have not debugged this. It is to give you an idea:
%Save the following 3 lines in an m-file named "F1.m"
function ydot = F1(placeholder1,y,placeholder2,placeholder3)
ydot = y/10;
end
%Run the following:
dt = 1.5e-12; % I do not know about this. You will have to experiment.
y0 = [.1; .1; .1]; % column vectors so the dot products below are well defined
normpt = [3; 3; 3];
normvec = [1; 1; 1];
originalsign = sign(dot(y0-normpt,normvec));
costheta = originalsign;
y = y0;
k = 0;
while (sign(costheta) == originalsign)
y=y(:) + F1(0,y(:),0,0)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
disp(k);
dt should be sufficiently small that it takes hundreds of thousands of iterations to trigger.
Assume I must use the Euler method. I have a stochastic differential equation with state-dependent noise, if you are curious why I ask you to make that assumption.
I would focus on your actual ODE integration. The fewer steps you have to take, the faster the loop will run. I would only worry about the speed of the sign check after you've optimized the actual integration method.
It looks like you're using the first-order explicit Euler method. Have you tried a higher-order integrator or an implicit method? Often you can increase the time step significantly.
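As a purely illustrative sketch of that suggestion, a classical fourth-order Runge-Kutta update could replace the Euler step (Line 4) in the example loop; whether this is appropriate for your stochastic equation with state-dependent noise is a separate question, so treat this only as an outline:
while (sign(costheta) == originalsign)
    k1 = F1(0, y(:), 0, 0);
    k2 = F1(0, y(:) + dt/2*k1, 0, 0);
    k3 = F1(0, y(:) + dt/2*k2, 0, 0);
    k4 = F1(0, y(:) + dt*k3, 0, 0);
    y = y(:) + dt/6*(k1 + 2*k2 + 2*k3 + k4); % RK4 step in place of the Euler step
    costheta = dot(y - normpt, normvec);
    k = k + 1;
end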