Curve fitting for power law function y = a*(x^b) + c - MATLAB

I am new to MATLAB, and I am trying to fit a power law through a dataset. I have been trying to use the lsqcurvefit function, but I am unsure how to proceed, as the instructions I found through Google are too convoluted for a beginner. I would like to derive the values of b and c in the equation y = a*(x^b) + c; any suggestions would be greatly appreciated. Thanks.

You can use lsqcurvefit to fit a nonlinear curve through measured data points in the least-squares sense, as follows:
% constant parameters
a = 1; % set the value of a
% initial guesses for fitted parameters
b_guess = 1; % provide an initial guess for b
c_guess = 0; % provide an initial guess for c
% Definition of the fitted function
f = @(x, xdata) a*(xdata.^x(1)) + x(2);
% generate example data for the x and y data to fit (this should be replaced with your real measured data)
xdata = 1:10;
ydata = f([2 3], xdata); % create data with b=2 and c=3
% fit the data with the desired function
x = lsqcurvefit(f,[b_guess c_guess],xdata,ydata);
%result of the fit, i.e. the fitted parameters
b = x(1)
c = x(2)
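If a is not known in advance either, the same call can fit all three parameters at once. Here is a minimal sketch under the same setup; the starting guesses are illustrative:
% Sketch: fit a, b and c together; here x(1)=a, x(2)=b, x(3)=c
f3 = @(x, xdata) x(1)*(xdata.^x(2)) + x(3);
x0 = [1 1 0]; % initial guesses for [a b c]
x = lsqcurvefit(f3, x0, xdata, ydata);
a = x(1), b = x(2), c = x(3)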


Constrained linear least squares not fitting data

I am trying to fit a 3D surface polynomial of n degrees to some data points in 3D space. My system requires the surface to be monotonically increasing in the area of interest, that is, the partial derivatives must be non-negative. This can be achieved using MATLAB's built-in lsqlin function.
So here's what I've done to try and achieve this:
I have a function that takes in four parameters: x1 and x2 are my explanatory variables, y is my dependent variable, and finally I can specify the order of the polynomial fit. First I build the design matrix A using data from x1 and x2 and the degree of fit I want. Next I build the matrix D that holds the partial derivatives at my data points. NOTE: the matrix D is double the length of matrix A, since every data point must be differentiated with respect to both x1 and x2. I specify that D*x >= 0 by setting b to zeros.
Finally, I call lsqlin. I pass "-D" since MATLAB defines the constraint as D*x <= b.
function w_mono = monotone_surface_fit(x1, x2, y, order_fit)
% Initialize design matrix
A = zeros(length(x1), 2*order_fit+2);
% Adjusting for bias term
A(:,1) = ones(length(x1),1);
% Building design matrix
for i = 2:order_fit+1
A(:,(i-1)*2:(i-1)*2+1) = [x1.^(i-1), x2.^(i-1)];
end
% Initialize matrix containing derivative constraint.
% NOTE: Partial derivatives must be non-negative
D = zeros(2*length(y), 2*order_fit+1);
% Filling matrix that holds constraints for partial derivatives
% NOTE: Matrix D will be double length of A since all data points will have a partial derivative constraint in both x1 and x2 directions.
for i = 2:order_fit+1
D(:,(i-1)*2:(i-1)*2+1) = [(i-1)*x1.^(i-2), zeros(length(x2),1); ...
zeros(length(x1),1), (i-1)*x2.^(i-2)];
end
% Limit of derivatives
b = zeros(2*length(y), 1);
% Constrained LSQ fit
options = optimoptions('lsqlin','Algorithm','interior-point');
% Final weights of polynomial
w_mono = lsqlin(A,y,-D,b,[],[],[],[],[], options);
end
So now I get some weights out, but unfortunately they do not at all capture the structure of the data. I've attached an image so you can see just how bad it looks.
I'll give you my plotting script with some dummy data, so you can try it.
%% Plot different order polynomials to data with constraints
x1 = [-5;12;4;9;18;-1;-8;13;0;7;-5;-8;-6;14;-1;1;9;14;12;1;-5;9;-10;-2;9;7;-1;19;-7;12;-6;3;14;0;-8;6;-2;-7;10;4;-5;-7;-4;-6;-1;18;5;-3;3;10];
x2 = [81.25;61;73;61.75;54.5;72.25;80;56.75;78;64.25;85.25;86;80.5;61.5;79.25;76.75;60.75;54.5;62;75.75;80.25;67.75;86.5;81.5;62.75;66.25;78.25;49.25;82.75;56;84.5;71.25;58.5;77;82;70.5;81.5;80.75;64.5;68;78.25;79.75;81;82.5;79.25;49.5;64.75;77.75;70.25;64.5];
y = [-6.52857142857143;-1.04736842105263;-5.18750000000000;-3.33157894736842;-0.117894736842105;-3.58571428571429;-5.61428571428572;0;-4.47142857142857;-1.75438596491228;-7.30555555555556;-8.82222222222222;-5.50000000000000;-2.95438596491228;-5.78571428571429;-5.15714285714286;-1.22631578947368;-0.340350877192983;-0.142105263157895;-2.98571428571429;-4.35714285714286;-0.963157894736842;-9.06666666666667;-4.27142857142857;-3.43684210526316;-3.97894736842105;-6.61428571428572;0;-4.98571428571429;-0.573684210526316;-8.22500000000000;-3.01428571428571;-0.691228070175439;-6.30000000000000;-6.95714285714286;-2.57232142857143;-5.27142857142857;-7.64285714285714;-2.54035087719298;-3.45438596491228;-5.01428571428571;-7.47142857142857;-5.38571428571429;-4.84285714285714;-6.78571428571429;0;-0.973684210526316;-4.72857142857143;-2.84285714285714;-2.54035087719298];
% Used to plot the surface in all points in the grid
X1 = meshgrid(-10:1:20);
X2 = flipud(meshgrid(30:2:90).');
figure;
for i = 1:4
w_mono = monotone_surface_fit(x1, x2, y, i);
y_nr = w_mono(1)*ones(size(X1)) + w_mono(2)*ones(size(X2));
for j = 1:i
y_nr = w_mono(j*2)*X1.^j + w_mono(j*2+1)*X2.^j;
end
subplot(2,2,i);
scatter3(x1, x2, y); hold on;
axis tight;
mesh(X1, X2, y_nr);
set(gca, 'ZDir','reverse');
xlabel('x1'); ylabel('x2');
zlabel('y');
% zlim([-10 0])
end
I think it may have something to do with the fact that I haven't specified anything about the region of interest, but really I don't know. Thanks in advance for any help.
Alright I figured it out.
The main problem was simply an error in the plotting script. The value of y_nr should be updated and not overwritten in the loop.
Also, I figured out that the fit should be monotonically decreasing in x2, i.e. the partial derivative with respect to x2 must be non-positive (note the sign flip in the updated function). Here's the updated code if anybody is interested.
%% Plot different order polynomials to data with constraints
x1 = [-5;12;4;9;18;-1;-8;13;0;7;-5;-8;-6;14;-1;1;9;14;12;1;-5;9;-10;-2;9;7;-1;19;-7;12;-6;3;14;0;-8;6;-2;-7;10;4;-5;-7;-4;-6;-1;18;5;-3;3;10];
x2 = [81.25;61;73;61.75;54.5;72.25;80;56.75;78;64.25;85.25;86;80.5;61.5;79.25;76.75;60.75;54.5;62;75.75;80.25;67.75;86.5;81.5;62.75;66.25;78.25;49.25;82.75;56;84.5;71.25;58.5;77;82;70.5;81.5;80.75;64.5;68;78.25;79.75;81;82.5;79.25;49.5;64.75;77.75;70.25;64.5];
y = [-6.52857142857143;-1.04736842105263;-5.18750000000000;-3.33157894736842;-0.117894736842105;-3.58571428571429;-5.61428571428572;0;-4.47142857142857;-1.75438596491228;-7.30555555555556;-8.82222222222222;-5.50000000000000;-2.95438596491228;-5.78571428571429;-5.15714285714286;-1.22631578947368;-0.340350877192983;-0.142105263157895;-2.98571428571429;-4.35714285714286;-0.963157894736842;-9.06666666666667;-4.27142857142857;-3.43684210526316;-3.97894736842105;-6.61428571428572;0;-4.98571428571429;-0.573684210526316;-8.22500000000000;-3.01428571428571;-0.691228070175439;-6.30000000000000;-6.95714285714286;-2.57232142857143;-5.27142857142857;-7.64285714285714;-2.54035087719298;-3.45438596491228;-5.01428571428571;-7.47142857142857;-5.38571428571429;-4.84285714285714;-6.78571428571429;0;-0.973684210526316;-4.72857142857143;-2.84285714285714;-2.54035087719298];
% Used to plot the surface in all points in the grid
X1 = meshgrid(-10:1:20);
X2 = flipud(meshgrid(30:2:90).');
figure;
for i = 1:4
w_mono = monotone_surface_fit(x1, x2, y, i);
% NOTE: Should only have 1 bias term
y_nr = w_mono(1)*ones(size(X1));
for j = 1:i
y_nr = y_nr + w_mono(j*2)*X1.^j + w_mono(j*2+1)*X2.^j;
end
subplot(2,2,i);
scatter3(x1, x2, y); hold on;
axis tight;
mesh(X1, X2, y_nr);
set(gca, 'ZDir','reverse');
xlabel('x1'); ylabel('x2');
zlabel('y');
% zlim([-10 0])
end
And here's the updated function
function [w_mono, w] = monotone_surface_fit(x1, x2, y, order_fit)
% Initialize design matrix
A = zeros(length(x1), 2*order_fit+1);
% Adjusting for bias term
A(:,1) = ones(length(x1),1);
% Building design matrix
for i = 2:order_fit+1
A(:,(i-1)*2:(i-1)*2+1) = [x1.^(i-1), x2.^(i-1)];
end
% Initialize matrix containing derivative constraint.
% NOTE: derivative w.r.t. x1 must be non-negative, w.r.t. x2 non-positive
D = zeros(2*length(y), 2*order_fit+1);
for i = 2:order_fit+1
D(:,(i-1)*2:(i-1)*2+1) = [(i-1)*x1.^(i-2), zeros(length(x2),1); ...
zeros(length(x1),1), -(i-1)*x2.^(i-2)];
end
% Limit of derivatives
b = zeros(2*length(y), 1);
% Constrained LSQ fit
options = optimoptions('lsqlin','Algorithm','active-set');
w_mono = lsqlin(A,y,-D,b,[],[],[],[],[], options);
w = lsqlin(A,y);
end
Finally, a plot of the fit (this uses a new simulation, but the fit also works on the dummy data given above).

Obtaining theoretical semivariogram from parametrized covariance function (STK)

I have been using the STK toolbox for a few days, for kriging of environmental parameter fields, i.e. in a geostatistical context.
I find the toolbox very well implemented and useful (big thanks to the authors!), and the kriging predictions I am getting through STK actually seem fine; however, I am finding myself unable to visualize a semivariogram model based on the STK output (i.e. estimated parameters for Gaussian process / covariance functions).
I am attaching an example figure, showing the empirical semivariogram for a simple 1D test case and a Gaussian semivariogram model (as typically used in geostatistics, see also figure) fitted directly to that data. The figure further shows a semivariogram model based on STK output, i.e. using previously estimated model parameters (model.param from stk_param_estim) to get covariance K on a target grid of lag distances and then converting K to semivariance (according to the well-known relation semivar = K0-K where K0 is the covariance at zero lag). I am attaching a simple script to reproduce the figure and detailing the attempted conversion.
As you can see in the figure, this doesn’t do the trick. I have tried several other simple examples and STK datasets, but models obtained through STK vs direct fitting never agree, and in fact usually look much more different than in the example (i.e. the range often seems very different, in addition to the sill/sigma2; uncomment line 12 in the script to see another example). I have also attempted to input the converted STK parameters into the geostatistical model (also in the script), however, the output is identical to the result based on converting K above.
I’d be very thankful for your help!
Figure illustrating the lack of agreement between semivariograms based on direct fit vs conversion of STK output
% Code to reproduce the figure illustrating my problem of getting
% variograms from STK output. The only external functions needed are those
% included with STK.
% TEST DATA - This is simply a monotonic part of the normal pdf
nugget = 0;
X = [0:20]'; % coordinates
% X = [0:50]'; % uncomment this line to see how strongly the models can deviate for different test cases
V = normpdf(X./10+nugget,0,1); % observed values
covmodel = 'stk_gausscov_iso'; % covar model, part of STK toolbox
variomodel = 'stk_gausscov_iso_vario'; % variogram model, nested function
% GET STRUCTURE FOR THE SELECTED KRIGING (GAUSSIAN PROCESS) MODEL
nDim = size(X,2);
model = stk_model (covmodel, nDim);
model.lognoisevariance = NaN; % This makes STK fit nugget
% ESTIMATE THE PARAMETERS OF THE COVARIANCE FUNCTION
[param0, model.lognoisevariance] = stk_param_init (model, X, V); % Compute an initial guess for the parameters of the covariance function (param0)
model.param = stk_param_estim (model, X, V, param0); % Now model the covariance function
% EMPIRICAL SEMIVARIOGRAM (raw, binning removed for simplicity)
D = pdist(X)';
semivar_emp = 0.5.*(pdist(V)').^2;
% THEORETICAL SEMIVARIOGRAM FROM STK
% Target grid of lag distances
DT = [0:1:100]';
DT_zero = zeros(size(DT));
% Get covariance matrix on target grid using STK estimated pars
pairwise = true;
K = feval(model.covariance_type, model.param, DT, DT_zero, -1, pairwise);
% convert covariance to semivariance, i.e. G = C(0) - C(h)
sill = exp(model.param(1));
nugget = exp(model.lognoisevariance);
semivar_stk = sill - K + nugget; % --> this variable is then plotted
% TEST: FIT A GAUSSIAN VARIOGRAM MODEL DIRECTLY TO THE EMPIRICAL SEMIVARIOGRAM
f = @(par) mseval(par, D, semivar_emp, variomodel);
par0 = [10 10 0.1]; % initial guess for pars
[par,mse] = fminsearch(f, par0); % optimize
semivar_directfit = feval(variomodel, par, DT); % evaluate
% TEST 2: USE PARS FROM STK AS INPUT TO GAUSSIAN VARIOGRAM MODEL
par(1) = exp(model.param(1)); % sill, PARAM(1) = log(SIGMA^2), where SIGMA is the standard deviation
par(2) = sqrt(3)./exp(model.param(2)); % range, PARAM(2) = -log(RHO), where RHO is the range parameter --> RHO = exp(-PARAM(2))
par(3) = exp(model.lognoisevariance); % nugget
semivar_stkparswithvariomodel = feval(variomodel, par, DT);
% PLOT SEMIVARIOGRAM
figure(); hold on;
plot(D(:), semivar_emp(:),'.k'); % Observed variogram, raw
plot(DT, semivar_stk,'-b','LineWidth',2); % Theoretical variogram, on a grid
plot(DT, semivar_directfit,'--r','LineWidth',2); % Test direct fit variogram
plot(DT,semivar_stkparswithvariomodel,'--g','LineWidth',2); % Test direct fit variogram using pars from stk
legend('raw empirical semivariance (no binned data here for simplicity) ',...
'Gaussian cov model from STK, i.e. exp(Sigma2) - K + exp(lognoisevar)',...
'Gaussian semivariogram model (fitted directly to semivariance)',...
'Gaussian semivariogram model (using transformed params from STK)');
xlabel('Lag distance','Fontweight','b');
ylabel('Semivariance','Fontweight','b');
% NESTED FUNCTIONS
% Objective function for direct fit
function [mse] = mseval(par,D,Graw,variomodel)
Gmod = feval(variomodel, par, D);
mse = mean((Gmod-Graw).^2);
end
% Gaussian semivariogram model.
function [semivar] = stk_gausscov_iso_vario(par, D) %#ok<DEFNU>
% D : lag distance, c : sill, a : range, n : nugget
c = par(1); % sill
a = par(2); % range
if length(par) > 2, n = par(3); % nugget optional
else, n = 0; end
semivar = n + c .* (1 - exp( -3.*D.^2./a.^2 )); % Model
end
There is nothing wrong with the way you compute the semivariogram.
To understand the figure that you obtain, consider that:
The parameters of the model are estimated in STK using the (restricted) maximum likelihood method, not by least-squares fitting on the semi-variogram.
For very smooth stationary random fields observed over short intervals, you should not expect that the theoretical semivariogram will agree with the empirical semivariogram, with or without binning. The reason for this is that the observations, and thus the squared differences, are very correlated in this case.
To convince yourself of the second point, you can run the following script repeatedly:
% a smooth GP
model = stk_model (@stk_gausscov_iso, 1);
model.param = log ([1.0, 0.2]); % unit variance
x_max = 20; x_obs = x_max * rand (50, 1);
% Simulate data
z_obs = stk_generate_samplepaths (model, x_obs);
% Empirical semivariogram (raw, no binning)
h = (pdist (double (x_obs)))';
semivar_emp = 0.5 * (pdist (z_obs)') .^ 2;
% Model-based semivariogram
x1 = (0:0.01:x_max)';
x0 = zeros (size (x1));
K = feval (model.covariance_type, model.param, x0, x1, -1, true);
semivar_th = 1 - K;
% Figure
figure; subplot (1, 2, 1); plot (x_obs, z_obs, '.');
subplot (1, 2, 2); plot (h(:), semivar_emp(:),'.k'); hold on;
plot (x1, semivar_th,'-b','LineWidth',2);
legend ('empirical', 'model'); xlabel ('lag'); ylabel ('semivar');
Further questions on parameter estimation for Gaussian process models should probably be asked on Cross-Validated rather than Stack Overflow.

Confidence intervals for linear curve fit under constraints in MATLAB

I have fitted a straight line to a dataset with 68 samples, under the constraint that the line passes through (x0,y0) using the function lsqlin in MATLAB. How can I find the confidence intervals for this?
My code (Source):
I import the dataset containing x and y vectors from a mat file, which also contains the values of constraints x0 and y0.
n = 1; % Degree of polynomial to fit
V(:,n+1) = ones(length(x),1,class(x)); %V=Vandermonde matrix for 'x'
for j = n:-1:1
V(:,j) = x.*V(:,j+1);
end
d = y; % 'd' is the vector of target values, 'y'.
% There are no inequality constraints in this case, i.e.,
A = [];b = [];
% We use linear equality constraints to force the curve to hit the required point. In
% this case, 'Aeq' is the Vandermonde matrix for 'x0'
Aeq = x0.^(n:-1:0);
% and 'beq' is the value the curve should take at that point
beq = y0;
%%
[p, resnorm, residual, exitflag, output, lambda] = lsqlin(V, d, A, b, Aeq, beq);
%%
% We can then use POLYVAL to evaluate the fitted curve
yhat = polyval( p, x );
The function bootci can be used to find confidence intervals when using lsqlin. Here's how it can be used:
ci = bootci(68, {@(x,y) func(x,y), x, y}, 'type', 'student');
The first argument is the number of bootstrap samples to draw (68 here; note that it does not have to equal the number of data points).
The function in the second argument is supposed to compute whatever statistic you need the confidence intervals for. In this case, that statistic is the set of coefficients of our fitted line. Hence, the function func(x,y) here should return the regression coefficients returned by lsqlin. The inputs to this function are the dataset vectors x and y.
The third and fourth argument lets you specify the distribution of your dataset. You can get an idea of this by plotting a histogram of the residuals like this:
histogram(residual);
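For concreteness, here is a minimal sketch of what func could look like for this constrained line fit; the anonymous-function form, the choice of 1000 bootstrap samples, and the assumption that x0 and y0 are in the workspace are all illustrative:
% Hypothetical statistic function: refit the constrained line to a
% bootstrap resample (x, y) and return the coefficients as a row vector.
% Assumes the constraint point (x0, y0) is available in the workspace.
func = @(x, y) lsqlin([x(:), ones(numel(x),1)], y(:), [], [], [x0, 1], y0).';
ci = bootci(1000, {func, x, y}, 'type', 'student');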

Finding the best monotonic curve fit

Edit: Some time after I asked this question, an R package called MonoPoly (available here) came out that does exactly what I want. I highly recommend it.
I have a set of points I want to fit a curve to. The curve must be monotonic (never decreasing in value) i.e. the curve can only go upward or stay flat.
I originally had been polyfitting my results and this had been working great until I found a particular dataset. The polyfit for data in this dataset was non-monotonic.
I did some research and found a possible solution in this post:
Use lsqlin. Constrain the first derivative to be non-negative at both
ends of the domain of interest.
I'm coming from a programming rather than math background so this is a little beyond me. I don't know how to constrain the first derivative to be non-negative as he said. Also, I think in my case I need a curve so I should use lsqcurvefit but I don't know how to constrain it to produce monotonic curves.
Further research turned up this post recommending lsqcurvefit but I can't figure out how to use the important part:
Try this non-linear function F(x) also. You use it together with
lsqcurvefit but it require a start guess on the parameters. But it is
a nice analytic expression to give as a semi-empirical formula in a
paper or a report.
%Monotone function F(x), with c0, c1, c2, c3 variational constants
F(x) = c3 + exp(c0 - c1^2/(4*c2)) * sqrt(pi) * Erfi((c1 + 2*c2*x)/(2*sqrt(c2))) / (2*sqrt(c2))
%Erfi(x) = erf(i*x) (see Mathematica), but the function looks much like x^3
%derivative f(x) is a probability density, f(x) >= 0:
f(x) = dF/dx = exp(c0 + c1*x + c2*x^2)
I must have a monotonic curve but I'm not sure how to do it, even with all of this information. Would a random number be enough for a "start guess"? Is lsqcurvefit best? How can I use it to produce a best-fitting monotonic curve?
Thanks
Here is a simple solution using lsqlin. The derivative constraint is enforced at each data point; this could easily be modified if needed.
Two coefficient matrices are needed: one (C) for the least-squares error calculation and one (A) for the derivatives at the data points.
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
n_data = 10;
% dummy data
x = rand(n_data,1);
d = rand(n_data,1) + linspace(0,1,n_data).';
% limit on derivative - in each data point
b = zeros(n_data,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_data, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*x.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*d
p1 = fliplr((C\d).')
p2 = polyfit(x,d,degree)
% Constrained
p3 = fliplr(lsqlin(C,d,-A,b).')
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(0,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
For the specified input the fitted curves:
Actually this code is more general than what you requested, since the degree of polynomial can be changed as well.
EDIT: enforce the derivative constraint at additional points
The issue pointed out in the comments is that the derivative constraints are enforced only at the data points; between those, no checks are performed. Below is a solution to alleviate this problem. The idea: convert the problem to an unconstrained optimization by using a penalty term.
Note that a term pen penalizes violations of the derivative constraint, so the result is not a true least-squares solution. Additionally, the result depends on the penalty function.
function lsqfit_constr
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
% data from comment
x = [0.2096 -3.5761 -0.6252 -3.7951 -3.3525 -3.7001 -3.7086 -3.5907].';
d = [95.7750 94.9917 90.8417 62.6917 95.4250 89.2417 89.4333 82.0250].';
n_data = length(d);
% number of equally spaced points to enforce the derivative
n_deriv = 20;
xd = linspace(min(x), max(x), n_deriv);
% limit on derivative - in each data point
b = zeros(n_deriv,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_deriv, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*xd.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*y
p1 = (C\d);
lsqe = sum((C*p1 - d).^2);
p2 = polyfit(x,d,degree);
% Constrained
[p3, fval] = fminunc(@error_fun, p1);
% correct format for polyval
p1 = fliplr(p1.')
p2
p3 = fliplr(p3.')
fval
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(-4,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
% legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
% penalty term - it is crucial to fine tune it
pen = -sum(der(der<0))*10*lsqe;
e = sqe + pen;
end
end
Gradient-free methods might be used to solve the problem while enforcing the derivative constraint exactly, for example:
[p3, fval] = fminsearch(@error_fun, p_ini);
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
if any(der<0)
pen = Inf;
else
pen = 0;
end
e = sqe + pen;
end
fmincon might be a better choice; the derivative constraint here is actually linear in the polynomial coefficients, so it can be passed to fmincon directly as a linear inequality instead of through a penalty, as sketched below.
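Here is a minimal sketch of that variant, reusing C, A, d, b and the unconstrained solution p1 from the function above (this fragment is not standalone, so those names are assumptions):
% Sketch: minimize the plain squared error while enforcing A*p >= 0
% exactly, passed to fmincon as the linear inequality -A*p <= b (= 0).
obj = @(p) sum((C*p - d).^2);
p3 = fmincon(obj, p1, -A, b);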
I'll let you work out the details and tune the algorithms. I hope this is sufficient.

Matlab find the best constants for a fitting model

Please find the data in the link below, or if you can send me your private email, I can send you the data
https://dl.dropboxusercontent.com/u/5353938/test_matlab_lefou.xlsx
In the excel sheet, the first column is y, the second is x and the third is t, I hope this will make things much more clear, and many thanks for the help.
I need to use the following model because it is the one that best fits my data, but what I don't know is how to find the values of a and b that give the best fit (I can attach a file if you need the values). I already have the values of y, x and t:
y = a*sqrt(x)*exp(b*t)
Thanks
Without depending on the Curve Fitting Toolbox, this problem can also be solved by using fminsearch. I first generate some data, which you already have but didn't share with us. An initial guess for the parameters a and b must be made (p0). Then I do the optimization by minimizing the squared errors between data and fit, resulting in the vector p_fit, which contains the optimized parameters a and b. In the end, the result is visualized.
% ----- Generating some data for x, y and t (which you already got)
N = 10; % num of data points
x = linspace(0,5,N);
t = linspace(0,10,N);
% random parameters
a = rand()*5; % a between 0 and 5
b = (rand()-1); % b between -1 and 0
y = a*sqrt(x).*exp(b*t) + rand(size(x))*0.1; % noisy data
% ----- YOU START HERE WITH YOUR PROBLEM -----
% put x and t into a 2 row matrix for simplicity
D(1,:) = x;
D(2,:) = t;
% create model function with parameters p(1) = a and p(2) = b
model = @(p, D) p(1)*sqrt(D(1,:)).*exp(p(2)*D(2,:));
e = @(p) sum((y - model(p,D)).^2); % minimize squared errors
p0 = [1,-1]; % an initial guess (positive a and probably negative b for a decay)
[p_fit, r1] = fminsearch(e, p0); % Optimize
% ----- VISUALIZATION ----
figure
plot(x,y,'ko')
hold on
X = linspace(min(x), max(x), 100);
T = linspace(min(t), max(t), 100);
plot(X, model(p_fit, [X; T]), 'r--')
legend('data', sprintf('fit: y(t,x) = %.2f*sqrt(x)*exp(%.2f*t)', p_fit))
The result can look like
UPDATE AFTER MANY MANY COMMENTS
Your data are column vectors, but my solution used row vectors. The error occurred when the error function tried to compute the difference of a column vector (y) and a row vector (the result of the model function). Easy hack: make them all row vectors and use my approach. The result is: a = 0.5296 and b = 0.0013.
However, the Optimization depends on the initial guess p0, you might want to play around with it a little bit.
clear variables
load matlab.mat
% put x and t into a 2 row matrix for simplicity
D(1,:) = x;
D(2,:) = t;
y = reshape(y, 1, length(y)); % <-- now y is a row vector, too
% create model function with parameters p(1) = a and p(2) = b
model = @(p, D) p(1)*sqrt(D(1,:)).*exp(p(2)*D(2,:));
e = @(p) sum((y - model(p,D)).^2); % minimize squared errors
p0 = [1,0]; % an initial guess (positive a and probably negative b for a decay)
[p_fit, r1] = fminsearch(e, p0); % Optimize
% p_fit = nlinfit(D, y, model, p0) % as a working alternative with dependency on the statistics toolbox
% ----- VISUALIZATION ----
figure
plot(x,y,'ko', 'markerfacecolor', 'black', 'markersize',5)
hold on
X = linspace(min(x), max(x), 100);
T = linspace(min(t), max(t), 100);
plot(X, model(p_fit, [X; T]), 'r-', 'linewidth', 2)
legend('data', sprintf('fit: y(t,x) = %.2f*sqrt(x)*exp(%.2f*t)', p_fit))
The result doesn't look too satisfying though. But that mainly is because of your data. Have a look here:
With the cftool command (Curve Fitting Toolbox) you can fit your own functions, returning the variables that you need (a, b). Make sure your x-data and y-data are in separate variables. You can also specify weights for your measurements.
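A programmatic sketch of that approach, assuming the model y = a*sqrt(x)*exp(b*t) from above and that x, t and y are vectors in the workspace (the start point is just a guess):
% Sketch: Curve Fitting Toolbox fit with two independent variables x and t
ft = fittype('a*sqrt(x)*exp(b*t)', 'independent', {'x','t'}, 'coefficients', {'a','b'});
mdl = fit([x(:), t(:)], y(:), ft, 'StartPoint', [1, -0.1]);
coeffvalues(mdl) % fitted [a, b]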