Non-linear function parameter estimation - matlab, lsqnonlin, fzero

I'm having difficulty with a fitting problem. From the errors that I get I imagine that the boundaries are not defined correctly and I haven't managed to find a solution. Any help would be very much appreciated.
Alternative methods for solving the same problem are also welcome.
Description
I have to estimate the parameters of a non-linear function of the type:
A*y(x) + B*EXP(C*y(x)) + g(x,D) = 0
subjected to the parameters PAR = [A,B,C,D] being within the range
LB < PAR < UB
Code
To solve the problem I'm using the Matlab functions lsqnonlin and fzero. The simplified code used is reported below.
The problem is divided in four functions:
parameterEstimation - (a wrapper for the lsqnonlin function)
objectiveFunction_lsq - (the objective function for the param estimation)
yFun - (the function returning the value of the variable y)
objectiveFunction_zero - (the objective function of the non-linear equation used to calculate y)
Errors
Running the code on the data I get this warning
Warning: Length of lower bounds is > length(x); ignoring extra bounds
and this error
Failure in initial user-supplied objective function evaluation.
LSQNONLIN cannot continue
This makes me think that the bounds are not being used or passed correctly, but maybe the problem is elsewhere.
function Done = parameterEstimation()
%read inputs
Xmeas = xlsread('filepath','worksheet','range');
Ymeas = xlsread('filepath','worksheet','range');
%inital values and boundary conditions
initialGuess = [1,1,1,1]; %model parameters initial guess
LB = [0,0,0,0]; %model parameters lower boundaries
UB = [2,2,2,2]; %model parameters upper boundaries
%parameter estimation
calcParam = lsqnonlin(@objectiveFunction_lsq_2,initialGuess,LB,UB,[],Xmeas,Ymeas);
Done = calcParam;
function diff = objectiveFunction_lsq_2(PAR,Xmeas,Ymeas)
y_calculated = yFun(PAR,Xmeas);
diff = y_calculated-Ymeas;
function result = yFun(PAR,X)
y_0 = 2;
val = fzero(@(y)objfun_y(y,PAR,X),y_0);
result = val;
function result = objfun_y(y,PAR,X)
A = PAR(1);
B = PAR(2);
C = PAR(3);
D = PAR(4);
val = A*y+B*exp(y*C)+g(D,X);
result = val;

I don't have the optimization toolbox, but are you sure you can pass the constants like that?
I would do this instead:
calcParam = lsqnonlin(@(PAR) objectiveFunction_lsq_2(PAR,Xmeas,Ymeas),initialGuess,LB,UB);
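For context, a minimal sketch of how the wrapper might look with that change (it reuses the same four-parameter setup, placeholder file paths and function names from the question; g(D,X) is assumed to be defined elsewhere):
function Done = parameterEstimation()
%read inputs (placeholder paths as above)
Xmeas = xlsread('filepath','worksheet','range');
Ymeas = xlsread('filepath','worksheet','range');
%initial values and bounds for PAR = [A,B,C,D]
initialGuess = [1,1,1,1];
LB = [0,0,0,0];
UB = [2,2,2,2];
%bind the measured data in an anonymous function, so lsqnonlin only sees
%PAR as its free variable and the bounds match its length
calcParam = lsqnonlin(@(PAR) objectiveFunction_lsq_2(PAR,Xmeas,Ymeas), ...
    initialGuess,LB,UB);
Done = calcParam;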

Related

Error lsqnonlin reaching values using nested functions

I want to fit two parameters using lsqnonlin. I have set up my system of ODEs and want to solve it with the new parameters found after running lsqnonlin.
function [dcdt] = CSTRSinSeries(t,c,par)
for i=1:par.n
if i==1
dcdt(i) = 1/par.tau_per_tank * (par.c0-c(i))+par.kf*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
else
dcdt(i) = 1/par.tau_per_tank * (c(i-1)-c(i))+par.k*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
end
end
dcdt = dcdt';
end
% fitcrit function
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par),time_exp,init,[]);
error = (conc_exp-conc_model_fit);
end
I think the problem has to do with the fact that my parameters are in parameter struct par and that I don't want to fit on all of these parameters, but just on basis of two of those.
This is my main script for performing the curve fitting:
% initial guesses for model parameters; the number of indices is the number of fitted parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(#(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
The code now runs without errors; however, the output curve_fit_parameters is the same as my initial values (this also happens when I change the initial values).
The message is:
>> PackedBed
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the default value of the optimality tolerance.
<stopping criteria details>
The stopping criteria give a relative first-order optimality of 0.0e+00, so I think the parameters that lsqnonlin changes have no influence on my error.
I think the mistake is in the lsqnonlin call, where I refer to 'k' instead of 'par.kb' and 'par.kf'. However, I don't know how to refer to these, as these are nested functions. Replacing 'k' with 'par.kb, par.kf' gives me the error: Unexpected MATLAB operator.
Could anyone help me with my problem?
As suggested by Adriaan in the comments, you can easily make a copy of the par struct in your fitcrit function, and assign the parameters to optimize to this par_tmp struct. The fitcrit function then becomes:
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
par_tmp = par;
par_tmp.kf = curve_fit_parameters(1); % change to parameters you need to fit
par_tmp.kb = curve_fit_parameters(2);
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par_tmp),time_exp,init,[]); % pass par_tmp to ODE solver
error = (conc_exp-conc_model_fit);
end
In the script calling lsqnonlin, you will have to change the par struct to include the converged solution of the parameter subset to optimize:
% initial guesses for model parameters; the number of indices is the number of fitted parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(@(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
% make par_final
par_final = par;
par_final.kf = curve_fit_parameters(1);
par_final.kb = curve_fit_parameters(2);
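If you want to inspect the fit afterwards, a possible follow-up (just a sketch, reusing the time_exp, conc_exp, init and CSTRSinSeries names from above) is to simulate with par_final and overlay the result on the measurements:
% simulate the fitted model over the experimental time points
[t_fit,c_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par_final),time_exp,init);
% overlay measured and fitted concentrations
plot(time_exp,conc_exp,'o',t_fit,c_fit,'-');
xlabel('time'); ylabel('concentration');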

matlab GPU computation

I am learning MATLAB GPU functions. My function myfun takes two input parameters, delta and p. Eventually, I will apply myfun to many combinations of delta and p. For each combination of delta and p, myfun counts how many elements of V satisfy the condition delta*V-p>0, where V = [0:0.001:1]. Ideally, I would like V to be a global variable, but it seems that MATLAB's GPU support has some restrictions on global variables. So I use another way to do this. The code is as follows:
function result = gpueg2()
dd = 0.1;
DELTA = [dd:dd:1];
dp = 0.001;
P = [0:dp:1];
[p,delta]=meshgrid(P,DELTA);
p = gpuArray(p(:));
delta = gpuArray(delta(:));
V = [0:0.001:1];
function [O] = myfun(delta,p)
O = sum((delta*V-p)>0);
end
result = arrayfun(@myfun,delta,p);
end
However, it throws an error message:
Function passed as first input argument contains unsupported or unknown function 'sum'.
But I believe sum is supported on the GPU.
Any advice and suggestions are highly appreciated.
The problem with sum is not with the GPU, it's with using arrayfun on a GPU. The list of functions that arrayfun on a GPU accepts is given here: https://www.mathworks.com/help/distcomp/run-element-wise-matlab-code-on-a-gpu.html. sum is not on the list on that page of documentation.
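To be clear, sum itself is fine on a gpuArray; it is only inside the body of a GPU arrayfun that it is unsupported. A tiny illustration (just a sketch):
g = gpuArray(rand(1,1000));
s = sum(g);   % fine: sum applied directly to a gpuArray works
% result = arrayfun(@myfun,delta,p);   % fails, because myfun calls sum inside the arrayfun body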
Your vectors are not so big (though I accept this may be a toy example of your real problem). I suggest the following alternative implementation:
function result = gpueg2()
dd = 0.1;
DELTA = dd:dd:1;
dp = 0.001;
P = 0:dp:1;
V = 0:0.001:1;
[p,delta,v] = meshgrid(P,DELTA,V);
p = gpuArray(p);
delta = gpuArray(delta);
v = gpuArray(v);
result = sum(delta.*v-p>0, 3);
end
Note the following differences:
I make 3D arrays of p, delta, v, rather than 2D. These three come to roughly 240MB in total.
I do the calculation delta.*v-p>0 on the whole 3D array: this will be well split on the GPU.
I do the sum on the 3rd index, i.e. over V.
I have checked that your routine on the CPU and mine on the GPU give the same results.
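One way such a check might look (just a sketch; gpueg2_cpu here is a hypothetical copy of the original routine with the gpuArray calls removed, and its 10010-by-1 output is reshaped to the 10-by-1001 grid for comparison):
r_gpu = gather(gpueg2());                % 3D-meshgrid GPU version above
r_cpu = reshape(gpueg2_cpu(),10,1001);   % original arrayfun logic on the CPU (hypothetical copy)
assert(isequal(r_gpu,r_cpu));            % the counts are integers, so exact equality is expected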

Two functions in Matlab to approximate integral - not enough input arguments?

I want to write a function that approximates integrals with the trapezoidal rule.
I first defined a function in one file:
function [y] = integrand(x)
y = x*exp(-x^2); % This is the integrand I want to approximate
end
Then I wrote my function that approximates definite integrals with lower bound a and upper bound b (also in another file):
function [result] = trapez(integrand,a,b,k)
sum = 0;
h = (b-a)/k; %split up the interval in equidistant spaces
for j = 1:k
x_j = a + j*h; %this are the points in the interval
sum = sum + ((x_j - x_(j-1))/2) * (integrand(x_(j-1)) + integrand(x_j));
end
result = sum
end
But when I want to call this function from the command window, using result = trapez(integrand,0,1,10) for example, I always get the error 'not enough input arguments'. What am I doing wrong?
There are numerous issues with your code:
x_(j-1) is not defined, and is not valid MATLAB syntax (assuming you want that to be a variable).
By calling trapez(integrand,0,1,10) you're actually calling integrand function with no input arguments. If you want to pass a handle, use #integrand instead. But in this case there's no need to pass it at all.
You should avoid variable names that coincide with Matlab functions, such as sum. This can easily lead to issues which are difficult to debug, if you also try to use sum as a function.
Here's a working version (note also a better code style):
function res = trapez(a, b, k)
res = 0;
h = (b-a)/k; % split up the interval in equidistant spaces
for j = 1:k
x_j1 = a + (j-1)*h;
x_j = a + j*h; % these are the points in the interval
res = res + ((x_j - x_j1)/2) * (integrand(x_j1) + integrand(x_j));
end
end
function y = integrand(x)
y = x*exp(-x^2); % This is the integrand I want to approximate
end
And the way to call it is: result = trapez(0, 1, 10);
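As a quick sanity check (a sketch; the antiderivative of x*exp(-x^2) is -exp(-x^2)/2, so the exact value on [0,1] is (1 - exp(-1))/2 ≈ 0.3161), you can watch the approximation converge as k grows:
exact = (1 - exp(-1))/2;
for k = [10 100 1000]
    approx = trapez(0, 1, k);
    fprintf('k = %4d: approx = %.6f, error = %.2e\n', k, approx, abs(approx - exact));
end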
Your integrand function requires an input argument x, which you are not supplying in your command-line function call.

How to write a very long sum in a fitness function for global optimisation in Matlab

I am struggling to write a fully parametrised fitness function for the Global Optimization Toolbox in MATLAB.
Approach:
[x,fval,exitflag,output] = ga(fitnessfcn,nvars,A,b,Aeq,beq,lb,ub)
I have a fitness function that I call with
fitnessfcn = @fitnessTest;
hence, the function is stated in a separate file.
Problem:
My issue is now that my optimisation is a simple but super long sum like
cost = f1*x1 + f2*x2 + ... + fn*xn
n should be parameterised (384 at the moment). In all matlab help files, the objective function is always short and neat like
y = 100 * (x(1)^2 - x(2)) ^2 + (1 - x(1))^2;
I have tried several approaches to "writing" the objective function programmatically, but then I cannot call the function correctly:
If I write the fitness function manually (for fi = 1),
function y = simple_fitness(x)
y = x(1)+ x(2)+ x(3)+ x(4)+ x(5)+ x(6)+ x(7)+ x(8);
the global optimisation works
But if I use my automated approach:
n = 8; %# number of function handles
parameters = 1:1:n;
store = cell(2,3);
for i=1:n
store{1,i} = sprintf('x(%i)',parameters(i));
store{2,i} = '+'; %# operator
end
%# combine such that we get
%# x(1)+x(2)+...+x(8)
funStr = [store{1:end-1}];%# ignore last operator
endFunction=';';
%functionHandle = str2func(funStr)
y=strcat(funStr,endFunction)
MATLAB does not recognise the function properly; the error is:
Subscripted assignment dimension mismatch.
Error in fcnvectorizer (line 14)
y(i,:) = feval(fun,(pop(i,:)));
Thanks! I cannot write the objective function by hand, as I will have several hundred variables.
You can use sum(x) directly with an anonymous function handle; instead of writing out all the indices, do: fitnessfcn = @(x) sum(x).
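If the coefficients fi are not all equal to 1, a sketch of the fully general weighted sum might look like this (the coefficient vector f and the empty constraint placeholders are assumptions, to be replaced with your actual values):
n = 384;                      % number of decision variables
f = ones(1,n);                % placeholder: replace with your actual coefficients f1..fn
fitnessfcn = @(x) f*x(:);     % cost = f1*x1 + f2*x2 + ... + fn*xn, same as sum(f.*x)
A = []; b = []; Aeq = []; beq = []; lb = []; ub = [];  % placeholders for your constraints
[x,fval,exitflag,output] = ga(fitnessfcn,n,A,b,Aeq,beq,lb,ub);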

Changing parameters within function for use with ODE solver

Is it possible to use an ODE solver, such as ode45, and still be able to 'change' the values of the parameters within the called function?
For example, if I were to use the following function:
function y = thisode(t, Ic)
% example derivative function
% parameters
a = .05;
b = .005;
c = .0005;
d = .00005;
% state variables
R = Ic(1);
T = Ic(2);
y = [(R*a)-(T/b)+d; (d*R)-(c*T)];
with this script:
clear all
% Initial conditions
It = [0.06 0.010];
% time steps
time = 0:.5:10;
% ODE solver
[t,Ic1] = ode45(@thisode, time, It);
everything works as I would expect. I would like a way to easily change the parameter values while still running multiple iterations of the ODE solver with just one function and one script. However, it does not seem like I can just add more arguments to the ODE solver call; for example:
function y = thisode(t, Ic, P)
% parameters
a = P(1);
b = P(2);
c = P(3);
d = P(4);
% state variables
R = Ic(1);
T = Ic(2);
y = [(R*a)-(T/b)+d; (d*R)-(c*T)];
with this script:
clear all
% Initial conditions
It = [0.06 0.010];
P1 = [.05 .005 .0005 .00005]
% time steps
time = 0:.5:10;
% ODE solver
[t,Ic1] = ode45(@thisode, time, It, [], P1);
does not work. I guess I knew this wouldn't work, but I have been unable to come up with a solution. I also considered an if statement within the function and hard-coding several parameter sets (e.g. use set 1 when P == 1, set 2 when P == 2, etc.), but this did not work either, as I don't know where to specify which set should be used when calling the ODE solver. Any tips or suggestions on how to use one function and one script with an ODE solver while being able to change parameter values would be much appreciated.
Thank you,
Mike
You'll have to call the function differently:
[t,Ic1] = ode45(@(t,y) thisode(t,y,P1), time, It);
The function ode45 assumes that the function passed to it accepts only a t and a y. The above call is a standard trick to get MATLAB to pass in P1, while ode45 supplies t and y on every call.
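As a usage sketch (the second parameter set below is made up purely for illustration; time and It are the variables defined in the script above), you can then sweep several parameter sets with the same function and script:
% each row is one parameter set [a b c d]; the second row is hypothetical
paramSets = [.05 .005 .0005 .00005;
             .10 .010 .0010 .00010];
results = cell(size(paramSets,1),1);
for i = 1:size(paramSets,1)
    P = paramSets(i,:);
    [t,Ic] = ode45(@(t,y) thisode(t,y,P), time, It);
    results{i} = Ic;   % store the solution for this parameter set
end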