Curve fitting with self-defined equation - matlab

I am trying to fit experimental data with a self-defined equation in MATLAB. The parameters I am interested in are y0, tau, D and C, but I keep getting an error saying matrix dimensions must agree with the following code:
t = [0.00468
0.01277
0.05987
0.10316
0.18595
0.29252
0.39529
0.52136
0.68313
0.88818
1.08182
1.28688
1.45625
1.57471
1.73267
1.84685]
y = [9.02766
7.53879
5.3679
4.28093
3.09349
2.11005
1.63202
1.10224
0.77341
0.54506
0.42022
0.24363
0.21623
0.11575
0.11575
0.06704]
a1 = 10.86;
a2 = 15.5;
b1 = 8.74;
E = 1e15;
F = @(x,xdata)y0.*exp(-xdata./tau-(4.*pi./3).*E.*(1.7725).*(C.*xdata).^(1/2).*((1.+a1.*(D.*C.^(-1/3).*xdata.^(2/3)...
)+a2.*(D.*C.^(-1/3).*xdata.^(2/3)).^2)./(1.+b1.*(D.*C.^(-1/3).*xdata.^(2/3))).^(3/4)));
X0 = [1 1e-3 1e-13 1e-30];
[x,resnorm,~,exitflag,output] = lsqcurvefit(F,X0,t,y)
The equation is described in the inserted graph and an example of using the function to fit the data is shown in another graph. Thank you for your help.
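For reference, here is a minimal sketch of how the model could be parameterised so that lsqcurvefit can actually vary y0, tau, D and C. It assumes the error comes from y0, tau, D and C not being elements of the parameter vector (only x and xdata are visible inside the anonymous function); whether the fit converges will still depend on the scaling of the starting values.
a1 = 10.86; a2 = 15.5; b1 = 8.74; E = 1e15;
% x = [y0 tau D C]
F = @(x,xdata) x(1).*exp(-xdata./x(2) - (4.*pi./3).*E.*(1.7725).*(x(4).*xdata).^(1/2).* ...
    ((1 + a1.*(x(3).*x(4).^(-1/3).*xdata.^(2/3)) + a2.*(x(3).*x(4).^(-1/3).*xdata.^(2/3)).^2) ./ ...
     (1 + b1.*(x(3).*x(4).^(-1/3).*xdata.^(2/3))).^(3/4)));
X0 = [1 1e-3 1e-13 1e-30];   % initial guesses for [y0 tau D C]
[x,resnorm,~,exitflag,output] = lsqcurvefit(F,X0,t,y)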

Related

Nonlinear Regression & Optimisation

I need some help... I have got a model function, which is:
function y = Surf(param,x);
global af1 af2 tData % A2 mER2
A1 = param(1); m1 = param(2); A2 = param(3); m2 = param(4);
m = param(5); n = param(6);
k1 = @(T) A1*exp(mER1/T);
k2 = @(T) A2*exp(mER2/T);
af = @(T) sech(af1*T+af2);
y = zeros(length(x),1);
for i = 1:length(x)
a = x(i,1); T = temperature(i,1);
y(i) = (k2(T)+k1(T)*(a.^m))*((af(T)-a).^n);
end
end
And I have got a set of data giving Cure, Cure_rate and Temperature, each stored as a single-column vector.
Basically, I tried to use :
[output,R1] = lsqcurvefit(@Surf, initial_guess, Cure, Cure_rate)
[output2,R2] = nlinfit(Cure,Cure_rate,@Surf,initial_guess)
And they work pretty well (my initial_guess holds the initial guesses for the parameters of the above model: [1.1e+07 -7.8e+03 1.2e+06 -7.1e+03 2.2 0.72]).
My main problem is that when I look into other methods that can do nonlinear regression, such as fminsearch, fmincon, fsolve, fminunc, etc., they just don't work, and I am quite confused about the inputs they expect. They don't take the data (Cure, Cure_rate) the way nlinfit and lsqcurvefit do; most of them take only the model function and the initial guess. This is how I called them:
output3 = fminsearch(@Surf,initial_guess)
output4 = fsolve(@Surf,initial_guess)
output5 = fmincon(@Surf,x0,A,b,Aeq,beq)
(I am not sure what to put for the linear inequality constraints A, b and the linear equality constraints Aeq, beq.)
output6 = fminunc(@Surf,initial_guess)
The problem is that MATLAB keeps saying I have either not enough or too many input arguments, which I don't understand. How should I include my data set (Cure, Cure_rate) in these functions, the way nlinfit and lsqcurvefit accept it?
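As a general sketch (assuming Surf, Cure, Cure_rate and initial_guess are set up as in the calls above): fminsearch, fminunc and fmincon minimise a scalar function of the parameters only, so the data has to be baked into a wrapper that returns, say, the sum of squared residuals; fsolve is a root finder rather than a minimiser, so it is not the right tool here.
% Scalar objective: sum of squared residuals between the model and the data
sse = @(param) sum((Surf(param, Cure) - Cure_rate).^2);
output3 = fminsearch(sse, initial_guess);                 % derivative-free simplex search
output6 = fminunc(sse, initial_guess);                    % quasi-Newton, unconstrained
output5 = fmincon(sse, initial_guess, [], [], [], []);    % pass [] for constraints you do not have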

MATLAB: How to plot a cubic expression for certain range of input pressure

I have a cubic expression here
I am trying to determine and plot δ in the expression for P values of 0.0 to 5000. I'm really struggling to get the expression for δ in terms of the pressure P.
clear all;
close all;
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e+9;
E = 1e+12;
v = 0.17;
P = 0:100:5000
P = (4*delta*t)*w/r^2 + (2*E*t)*w^3/((1-v)*r^4);
I would appreciate if anyone could provide pointers.
I suggest two simple methods.
You evaluate P as a function of delta and then plot(P,delta). This is quick and dirty, but if all you need is a plot it will do. The inconvenience is that you may have to do some trial and error to find the right range of delta values, but you can also take a large enough delta_max and then restrict the x-axis limits of the plot.
Your function is a simple cubic, which you can solve analytically (see here if you are lost) to invert P(delta) into delta(P).
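A minimal sketch of the first (quick-and-dirty) approach, using the parameter names and values from the question's code; the sweep range for w is a guess and may need adjusting:
t = 0.335e-9; r = 62e-6; delta = 1.2e9; E = 1e12; v = 0.17;
w = linspace(0, 5e-6, 1000);                            % sweep the unknown (delta in the equation, w in the code)
P = (4*delta*t).*w./r^2 + (2*E*t).*w.^3./((1-v)*r^4);   % evaluate the forward expression
plot(P, w); xlim([0 5000]); xlabel('P'); ylabel('w');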
What you want is the functional inverse of your expression, i.e., δ as a function of P. Since it's a cubic polynomial, you can expect up to three solutions (roots) for a given value of P. However, I'm guessing that you're only interested in real-valued solutions and nonnegative values of P. In that case there's just one real root for each value of P.
Given the values of your parameters, it makes most sense to solve this numerically using fzero. Using the parameter names from your code (which differ from those in the equations):
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e9;
E = 1e12;
v = 0.17;
f = @(w,p)2*E*t*w.^3/((1-v)*r^4)+4*delta*t*w/r^2-p;
P = 0:100:5000;
w0 = [0 1]; % Bounded initial guess, valid up to very large values of P
w_sol = zeros(length(P),1);
for i = 1:length(P)
w_sol(i) = fzero(@(w)f(w,P(i)),w0); % Find solution for each P
end
figure;
plot(P,w_sol);
You could also solve this using symbolic math:
syms w p
t = 0.335*sym(1e-9);
r = 62*sym(1e-6);
delta = 1.2*sym(1e9);
E = sym(1e12);
v = sym(0.17);
w_sol = solve(p==2*E*t*w^3/((1-v)*r^4)+4*delta*t*w/r^2,w);
P = 0:100:5000;
w_sol = double(subs(w_sol(1),p,P)); % Plug in P values and convert to floating point
figure;
plot(P,w_sol);
Because of your numeric parameter values, solve returns an answer in terms of three RootOf objects, the first of which is the real one you want.

Finding eigenvalues and eigenvectors using MATLAB

I'm having trouble finding eigenvalues and eigenvectors using the null command in MATLAB. My lecturer didn't want us to use eig; instead she asked us to find them via the null space, but when I run my code, the output is an empty matrix.
This is the code:
A = 'C:\Documents and Settings\User\Desktop\testproject.jpg';
B = imread(A,'jpg');
C = rgb2gray(B);
imshow(C);
D = im2double(C);
F = transpose(D);
G = F*D;
H = poly(G);
R = roots(H);
M = null(R)
The objective of this code is to find the SVD, hence find the singular values and the matrices U and V.
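For reference, here is a sketch of how the intended computation could be structured, assuming G = D'*D from the code above: the eigenvalues come from the roots of the characteristic polynomial, and each eigenvector from the null space of (G - lambda*I). Note that null(R), applied to the vector of roots itself, is always empty, which would explain the empty output; even the version below is numerically fragile for a matrix as large as one built from an image.
H = poly(G);                          % coefficients of the characteristic polynomial of G
lambda = roots(H);                    % eigenvalues of G (the squared singular values of D)
n = size(G,1);
V = zeros(n, numel(lambda));
for k = 1:numel(lambda)
    vk = null(G - lambda(k)*eye(n));  % eigenvector(s) for this eigenvalue; may be empty if the root is inexact
    if ~isempty(vk)
        V(:,k) = vk(:,1);
    end
end
sigma = sqrt(abs(lambda));            % singular values of D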

Matlab: finding coefficients of ODE system

I have all the data and an ODE system of three equations which has 9 unknown coefficients (a1, a2,..., a9).
dS/dt = a1*S+a2*D+a3*F
dD/dt = a4*S+a5*D+a6*F
dF/dt = a7*S+a8*D+a9*F
t = [1 2 3 4 5]
S = [17710 18445 20298 22369 24221]
D = [1357.33 1431.92 1448.94 1388.33 1468.95]
F = [104188 104792 112097 123492 140051]
How to find these coefficients (a1,..., a9) of an ODE using Matlab?
I can't spend too much time on this, but basically you need to use math to reduce the equation to something more meaningful:
your equation is of the form
dx/dt = A*x
ergo the solution is
x(t-t0) = exp(A*(t-t0)) * x(t0)
Thus
exp(A*(t-t0)) = x(t-t0) * Pseudo(x(t0))
Pseudo is the Moore-Penrose Pseudo-Inverse.
EDIT: Had a second look at my solution, and I didn't calculate the pseudo-inverse properly.
Basically, Pseudo(x(t0)) = x(t0)'*inv(x(t0)*x(t0)'), as x(t0) * Pseudo(x(t0)) equals the identity matrix
Now what you need to do is assume each time step (1 to 2, 2 to 3, 3 to 4, 4 to 5) is an experiment (therefore t-t0 = 1), so the solution would be to:
1- Build your pseudo inverse:
xt = [S;D;F];
xt0 = xt(:,1:4);
xInv = xt0'*inv(xt0*xt0');
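(Side note, not part of the original answer: MATLAB's built-in pinv computes the Moore-Penrose pseudo-inverse directly, so for a full-row-rank xt0 the line above is equivalent to the following.)
xInv = pinv(xt0);   % same result as xt0'*inv(xt0*xt0') when xt0 has full row rank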
2- Get exponential result
xt1 = xt(:,2:5);
expA = xt1 * xInv;
3- Get the logarithm of the matrix:
A = logm(expA);
And since t-t0= 1, A is our solution.
And a simple proof to check
[t, y] = ode45(@(t,x) A*x, [1 5], xt(1:3,1));
plot(t, y, 1:5, xt, 'x')
You have a linear, coupled system of ordinary differential equations,
y' = Ay with y = [S(t); D(t); F(t)]
and you're trying to solve the inverse problem,
A = unknown
Interesting!
First line of attack
For given A, it is possible to solve such systems analytically (read the wiki for example).
The general solution for a 3x3 design matrix A takes the form
[S(t) D(t) F(t)].' = c1*V1*exp(r1*t) + c2*V2*exp(r2*t) + c3*V3*exp(r3*t)
with V and r the eigenvectors and eigenvalues of A, respectively, and c scalars that are usually determined by the problem's initial values.
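To illustrate that formula (with a purely hypothetical A and the initial values from the question, just to show the structure):
A  = [-0.1 0.02 0; 0.01 -0.2 0.03; 0 0.05 -0.3];   % hypothetical 3x3 matrix, for illustration only
y0 = [17710; 1357.33; 104188];                     % [S; D; F] at t0 = 1
[V, R] = eig(A);                                   % columns of V: eigenvectors; diag(R): r1, r2, r3
c  = V \ y0;                                       % constants c fixed by the initial condition
y  = @(t) real(V * (c .* exp(diag(R)*(t - 1))));   % y(t) = c1*V1*exp(r1*(t-1)) + c2*V2*exp(r2*(t-1)) + c3*V3*exp(r3*(t-1))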
Therefore, there would seem to be two steps to solve this problem:
Find vectors c*V and scalars r that best-fit your data
reconstruct A from the eigenvalues and eigenvectors.
However, going down this road is treacherous. You'd have to solve the non-linear least-squares problem for the sum-of-exponentials equation you have (using lsqcurvefit, for example). That would give you the vectors c*V and the scalars r. You'd then have to unravel the constants c somehow and reconstruct the matrix A from V and r.
So, you'd have to solve for c (3 values), V (9 values), and r (3 values) to build the 3x3 matrix A (9 values) -- that seems too complicated to me.
Simpler method
There is a simpler way; use brute-force:
function test
% find
[A, fval] = fminsearch(@objFcn, 10*randn(3))
end
function objVal = objFcn(A)
% time span to be integrated over
tspan = [1 2 3 4 5];
% your desired data
S = [17710 18445 20298 22369 24221 ];
D = [1357.33 1431.92 1448.94 1388.33 1468.95 ];
F = [104188 104792 112097 123492 140051 ];
y_desired = [S; D; F].';
% solve the ODE
y0 = y_desired(1,:);
[~,y_real] = ode45(@(~,y) A*y, tspan, y0);
% objective function value: sum of squared quotients
objVal = sum((1 - y_real(:)./y_desired(:)).^2);
end
So far so good.
However, I tried both the complicated way and the brute-force approach above, and I found it very difficult to get the squared error anywhere near satisfactorily small.
The best solution I could find, after numerous attempts:
A =
1.216731997197118e+000 2.298119167536851e-001 -2.050312097914556e-001
-1.357306715497143e-001 -1.395572220988427e-001 2.607184719979916e-002
5.837808840775175e+000 -2.885686207763313e+001 -6.048741083713445e-001
fval =
3.868360951628554e-004
Which isn't bad at all :) But I would've liked a solution that was less difficult to find...

Histogram matching of 3D datasets using L2 norm minimisation (Matlab)

I need to perform some elementary histogram matching on 2 sets of 3D data. This is part of a larger algorithm.
My goal is to perform this by minimising the following cost function:
|| cumpdf(f(A)) - cumpdf(B) || .^2
where:
cumpdf is the cumulative histogram
f() is the linear transformation a*A + b, where a and b are the affine coefficients to be determined
A is the image to be transformed and B is the image to be matched
I am using lsqcurvefit; however, I have run into some trouble and really need some help.
A(maskA==0)=0;
B(maskB==0)=0;
[na,~] = hist(A(maskA~=0),500);
na = na ./ numel(A(maskA~=0));
x_data = cumsum(na);
[nb,~] = hist(B(maskB~=0),500);
nb = nb ./ numel(B(maskB~=0));
y_data = cumsum(nb);
xo = [1.5 -200];
[coeff,~] = lsqcurvefit(@cost,xo,x_data,y_data);
function F = cost(x,xc)
F = x(1).*A + x(2);
[nc,~] = hist(C(maskA~=0),500);
nc = nc / numel(C(maskA~=0));
xc = cumsum(nc);
maskA and maskB just represent some indexing I need to do.
My question is: I know that the above is wrong, but I think it best represents what I want to do regarding the cost function and the goal. Some help would be much appreciated!
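One way the cost function could be set up, as a sketch of the stated intent rather than a drop-in fix: compute B's cumulative histogram once on fixed bin centres, and let the function handed to lsqcurvefit rebuild A's cumulative histogram on those same centres after the affine transform. Using an anonymous handle that captures A and maskA avoids the scoping problem in the cost function above.
vA = A(maskA ~= 0);                                  % voxels of interest in A
vB = B(maskB ~= 0);
centres = linspace(min(vB(:)), max(vB(:)), 500);     % fixed, shared bin centres
cumB = cumsum(hist(vB, centres)) / numel(vB);        % target cumulative histogram of B
% x = [a b]; xc (the bin centres) plays the role of xdata. The cumulative
% histogram of a*A + b is rebuilt on the SAME centres so the two curves are
% comparable bin by bin.
model = @(x, xc) cumsum(hist(x(1).*vA + x(2), xc)) / numel(vA);
xo = [1.5 -200];
[coeff, resnorm] = lsqcurvefit(model, xo, centres, cumB);
Be aware that histogram counts change in discrete steps as a and b vary, so a gradient-based solver like lsqcurvefit can stall; a derivative-free search (for example fminsearch over the same squared residual) may be more robust.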