Curve fitting for an equation with two parameters - MATLAB

I have two arrays:
E= [6656400;
13322500;
19980900;
26625600;
33292900;
39942400;
46648900;
53290000]
and
J=[0.0000000021;
0.0000000047;
0.0000000128;
0.0000000201;
0.0000000659;
0.0000000748;
0.0000001143;
0.0000001397]
I want to fit the above data with this equation:
J=A0.*(298).^2.*exp(-(W-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298))
I want the starting value of W to be 1e-19.
I have tried the Curve Fitting app, but it did not help me solve this.
Then I tried some guessed values, A0 = 1.2e9 and W = 2.243e-19, which gave better results. But I want to find the right values in code (not with the Curve Fitting app).
Can you help me please?

A quick (and potentially easy) solution method would be to pose the curve fit as a minimization problem.
Define a correlation function that takes the fit parameters as an argument:
% x(1) == A0; x(2) == W
Jfunc = @(x) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
Then define an objective function to minimize. Since you have data J, we'll minimize the sum of squares of the difference between the data and the correlation:
Objective = @(x) sum((Jfunc(x) - J).^2);
And then attempt to minimize the objective using fminsearch:
x0 = [1.2E9;2.243E-19];
sol = fminsearch(Objective,x0);
I used the guesses you gave. For nonlinear solutions, a good first guess is often important for convergence.
If you have the Optimization Toolbox, you can also try lsqcurvefit or lsqnonlin (fminsearch is vanilla MATLAB).
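If you go the lsqcurvefit route, a minimal sketch (assuming the Optimization Toolbox is available and reusing the same starting guess) could look like this:
% Sketch: model J as a function of the parameters x = [A0; W] and the independent data E
Jmodel = @(x,E) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
x0 = [1.2e9; 2.243e-19];                 % starting guess from the question
xfit = lsqcurvefit(Jmodel, x0, E, J);    % least-squares fit of the model to the data
Here xfit(1) is the fitted A0 and xfit(2) the fitted W.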

Related

How to fit a decaying exponential function in MATLAB

I have to fit the dots (results of measurements) with an exponential function in MATLAB. My professor asked me to use only
fminsearch
polyval
polyfit
One of them or both. I have to find the parameters a and b that fit it.
Here are the lines I wrote:
x=[1:10:70]
y=[0:10:70]
x=[12.5,11.8,10.8,10.9,6.5,6.2,6.1,5.423,4.625]
y=[0,0.61,1.3,1.4,14.9,18.5,20.1,29.7,58.2]
xlabel('Conductivité')
ylabel('Inductance')
The function has the form a*e^(-b*x) +c
Well, polyfit and polyval are only useful for working with polynomials, so you would have to write a minimization problem of the form min(f(x)).
functionToMinimize = @(pars, x, y)(norm(pars(1).*exp(-pars(2).*x) - y));
targetFunctionForFminsearch = @(pars)(functionToMinimize(pars, x, y));
minPars = fminsearch(targetFunctionForFminsearch, [0, 1])
Read up on anonymous functions and the use of vector norms if you have questions how to construct such a minimization problem.
Your code also has some flaws. Why are you defining x and y twice when you only want to use the actual measured data?
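To make this concrete, here is a minimal sketch of the full three-parameter fit a*exp(-b*x) + c with fminsearch, minimizing the squared residuals (which has the same minimizer as the norm above). The initial guess [100, 0.5, 0] is only a rough assumption and may need adjusting for your data:
x = [12.5, 11.8, 10.8, 10.9, 6.5, 6.2, 6.1, 5.423, 4.625];  % measured data
y = [0, 0.61, 1.3, 1.4, 14.9, 18.5, 20.1, 29.7, 58.2];
model = @(p, x) p(1).*exp(-p(2).*x) + p(3);                 % p = [a, b, c]
ssq   = @(p) sum((model(p, x) - y).^2);                     % sum of squared residuals
pFit  = fminsearch(ssq, [100, 0.5, 0]);                     % rough initial guess (assumption)
xs = linspace(min(x), max(x));
plot(x, y, 'o', xs, model(pFit, xs), '-')
xlabel('Conductivité'); ylabel('Inductance')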

Minimizing a function with vector-valued input in MATLAB

I want to minimize a function like below:
Here, n can be 5, 10, 50, etc. I want to use MATLAB, applying gradient descent and a quasi-Newton method with BFGS updates together with a backtracking line search, to solve this problem. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html
But, I really don't know how to create a vector-valued function in Matlab (in my case input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through some basic tutorials first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum needed to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by indexing, and that
the .^ operator squares its operand elementwise.
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
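If you later want the solver to exploit an analytic gradient (often helpful as n grows), one possible extension is to return the gradient as a second output and tell fminunc about it. This is only a sketch, assuming the objective is the sum shown in objfun; the hypothetical file name objfungrad.m and the option 'SpecifyObjectiveGradient' (available in reasonably recent releases) are my additions:
function [f, g] = objfungrad(x)
d = x(2:end) - x(1:end-1).^2;               % differences x(i+1) - x(i)^2
f = sum(100*d.^2 + (1 - x(1:end-1)).^2);    % same objective value as objfun
g = zeros(size(x));                         % assemble the gradient term by term
g(1:end-1) = -400*x(1:end-1).*d - 2*(1 - x(1:end-1));
g(2:end)   = g(2:end) + 200*d;
end
Save this to objfungrad.m and call, for example,
options = optimoptions(@fminunc,'Algorithm','quasi-newton','SpecifyObjectiveGradient',true);
[x,fval] = fminunc(@objfungrad, rand(n,1), options);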

Curve fitting in MATLAB for a sinusoidal function with more than 8 terms?

I'm trying to fit some data to a sum-of-sines function in MATLAB; however, the number of sine terms in MATLAB's library fit types is limited to 1 ≤ n ≤ 8. I want more terms in my fit function, i.e. over 50 terms. Is there any way to make MATLAB fit my data to a sum of sines with more than 8 sinusoidal terms? Why is there such a constraint in MATLAB (is it technical or arbitrary)? Is there any toolbox for fitting sinusoidal functions (especially one that supports weighted data)?
f = fit(X,Y, 'sin10')
Error using fittype>iCreateFromLibrary (line 412)
Library function sin10 not found.
It is OK up to the 'sin8' or 'sin9' parameters.
I appreciate any answer.
I've found a solution to my question accidentally while browsing the MATLAB help. I post this answer in the hope of helping people who have the same problem.
As a first attempt, I tried the fit function. For some reason, a customized fit-based fitting code like the one below didn't work out:
FitOptions = fitoptions('Method','NonlinearLeastSquares', 'Algorithm', 'Trust-Region', 'MaxIter');
FitType = fittype('a*sin(1*f) + b*sin(2*f) + c*sin(3*f) + d*sin(4*f) + e*sin(5*f) + g*sin(6*f) + h*sin(7*f) + k*sin(8*f) + l*sin(9*f) + m*sin(10*f) + n*sin(11*f)', 'independent', 'f');
[FittedModel, GOF] = fit(freq, data, FitType)
% In the above code, phase parameters are not included; they could be added.
What I found is that with lsqcurvefit from the Optimization Toolbox, custom function fitting is more feasible and easier than with fit. I tested it by fitting my data to a sum of 12 (>8) sines in the code below:
clear;clc
xdata=1:0.1:10; % X, or independent data
ydata=sin(xdata+0.2)+0.5*sin(0.3*xdata+0.3)+ 2*sin( 0.2*xdata+23 )+...
0.7*sin( 0.34*xdata+12 )+.76*sin( .23*xdata+.3 )+.98*sin(.76 *xdata+.56 )+...
.34*sin( .87*xdata+.123 )+.234*sin(.234 *xdata+23 ); % Y, or dependent data
x0 = randn(36,1); % Initial Guess
fun = @(x,xdata)x(1)*sin(x(2)*xdata+x(3))+...
x(4)*sin(x(5)*xdata+x(6))+...
x(7)*sin(x(8)*xdata+x(9))+...
x(10)*sin(x(11)*xdata+x(12))+...
x(13)*sin(x(14)*xdata+x(15))+...
x(16)*sin(x(17)*xdata+x(18))+...
x(19)*sin(x(20)*xdata+x(21))+...
x(22)*sin(x(23)*xdata+x(24))+...
x(25)*sin(x(26)*xdata+x(27))+...
x(28)*sin(x(29)*xdata+x(30))+...
x(31)*sin(x(32)*xdata+x(33))+...
x(34)*sin(x(35)*xdata+x(36)); % Goal function which is Sum of 12 sines
options = optimoptions('lsqcurvefit','Algorithm','trust-region-reflective');% Options for fitting
x=lsqcurvefit(fun,x0,xdata,ydata) % the main instruction
times = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ko',times,fun(x,times),'r-')
legend('Data','Fitted Sum of 12 Sines')
title('Data and Fitted Curve')
The results are satisfactory (so far), as shown below:
The problem mentioned above is that when I use the MATLAB fit function with a predefined sum-of-sines argument (e.g. fit(xdata,ydata,'sin6')), it easily converges to an optimal solution and the fitting results are acceptable, as shown below:
but when I tried to fit the same data using a custom-defined function, the results are not satisfactory at all, as you can see in the figure below:
fun=@(x,xdata)a1*sin(b1*xdata+c1)+...+a6*sin(b6*xdata+c6); % sum of six sines
f=fit(xdata,ydata,fun);
At first I suspected the fit function itself, so I tried other functions such as lsqcurvefit; it worked well for some data, but as soon as other data were used it started to misbehave.
From the MATLAB documentation, I figured out that sum-of-sines fitting and Fourier fitting are extremely sensitive to the starting points, i.e. the values the fitting algorithm assumes for the fitting parameters (amplitudes, frequencies and phases) in its first iteration. Through inspection of the MATLAB fitting toolbox .m files, I noticed MATLAB does some clever tricks to obtain starting points when you use a predefined fit type (e.g. fit(x,y,'sin1'), fit(x,y,'sin2'), ...), but when you choose to enter your own custom function, the initial points are generated randomly! This is why the MATLAB built-in fit types work and my custom function fitting does not (even though I enter the same function).
By the way, MATLAB computes the FFT of ydata and, through some (seemingly greedy) method, extracts initial points for the amplitudes, frequencies and phases (a function called startpt.m does this).
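As a rough sketch of the same idea (my own approximation, not MATLAB's actual startpt.m), you could seed x0 from the FFT of ydata instead of using random numbers, e.g. for the 12-sine model above (maxk requires R2017b or newer):
N = numel(ydata); dt = xdata(2) - xdata(1);
Y = fft(ydata - mean(ydata));                      % remove DC, then transform
w = 2*pi*(0:N-1)/(N*dt);                           % angular frequency of each bin
[~, idx] = maxk(abs(Y(2:floor(N/2))), 12);         % 12 dominant positive-frequency bins
idx = idx + 1;                                     % map back to indices of Y
x0 = zeros(1,36);
x0(1:3:end) = 2*abs(Y(idx))/N;                     % amplitude guesses
x0(2:3:end) = w(idx);                              % frequency guesses
x0(3:3:end) = angle(Y(idx));                       % crude phase guesses
x = lsqcurvefit(fun, x0, xdata, ydata);            % same call as before, better start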

How to solve equations with complex coefficients using ode45 in MATLAB?

I am trying to solve two equations with complex coefficients using ode45, but I am getting the error message "Inputs must be floats, namely single or double."
X = sym(['[',sprintf('X(%d) ',1:2),']']);
Eqns=[-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
f=@(t,X)[Eqns];
[t,Xabc]=ode45(f,[0 300*10^-6],[0 1])
How can I fix this? Can somebody help me?
Per the MathWorks Support Team, the "ODE solvers in MATLAB 5 (R12) and later releases properly handle complex valued systems." So the complex numbers are not the issue.
The error "Inputs must be floats, namely single or double." stems from your definition of f using symbolic variables that are, unlike complex numbers, not floats. The easiest way to get around this is to not use the Symbolic Toolbox at all; just make Eqns an anonymous function:
Eqns= @(t,X) [-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
[t,Xabc]=ode45(Eqns,[0 300*10^-6],[0 1]);
That being said, I'd like to point out that numerically time-integrating this system over 300 microseconds (I assume, since no units are given) will take a long time, since your coefficient matrix has imaginary eigenvalues on the order of 10E+10. The extremely short period of those oscillations will more than likely be resolved by MATLAB's adaptive methods, and that will take a while for a time span a few orders of magnitude greater than that period.
I'd therefore suggest an analytical approach to this problem, unless it is a stepping stone to another problem that cannot be solved analytically.
Systems of ordinary differential equations of the form
dX/dt = A*X, X(0) = X0,
which are linear, homogeneous systems with a constant coefficient matrix, have the general solution
X(t) = e^(A*t) * X0,
where e^(A*t) is the matrix exponential of A*t.
Therefore, the analytical solution to the system can be calculated exactly assuming the matrix exponential can be calculated.
In MATLAB, the matrix exponential is calculated via the expm function.
The following code computes the analytical solution and compares it to the numerical one for a short time span:
% Set-up
A = [-23788605396486326904946699391889i/38685626227668133590597632,23788605396486326904946699391889i/38685626227668133590597632;...
-2500000+5223289665997855453060886952725538686654593059791i/324518553658426726783156020576256,23788605396486326904946699391889i/38685626227668133590597632];
Eqns = #(t,X) A*X;
X0 = [0;1];
% Numerical
options = odeset('RelTol',1E-8,'AbsTol',1E-8);
[t,Xabc]=ode45(Eqns,[0 1E-9],X0,options);
% Analytical
Xana = cell2mat(arrayfun(@(tk) expm(A*tk)*X0,t,'UniformOutput',false)')';
k = 1; % component of the solution to plot
% Plots
figure(1);
subplot(3,1,1)
plot(t,abs(Xana(:,k)),t,abs(Xabc(:,k)),'--');
title('Magnitude');
subplot(3,1,2)
plot(t,real(Xana(:,k)),t,real(Xabc(:,k)),'--');
title('Real');
ylabel('Values');
subplot(3,1,3)
plot(t,imag(Xana(:,k)),t,imag(Xabc(:,k)),'--');
title('Imaginary');
xlabel('Time');
The comparison plot is:
The output of ode45 matches the magnitude and real parts of the solution very well, but the imaginary portion is out-of-phase by exactly π.
However, since ode45's error estimator only looks at norms, the phase difference is not noticed which may lead to problems depending on the application.
Note that while the matrix exponential solution is far more costly than ode45 for the same number of time-vector elements, the analytical solution is exact for any time vector of any density given to it. So for long-time solutions, the matrix exponential can be viewed as an improvement in some sense.

MATLAB differential equation

I have the following differential equation which I'm not able to solve.
We know the following about the equation:
D(r) is a third-degree polynomial
D'(1)=D'(2)=0
D(2)=2D(1)
u(1)=450
u'(2)=-K * (u(2)-Te)
where K and Te are constants.
I want to approximate the problem using a matrix, and I managed to solve a similar equation with the same boundary conditions for u(1) and u'(2).
For that equation I approximated u' and u'' with central differences and used a finite-difference method from r=1 to r=2. I then placed the coefficients in a matrix A in MATLAB and the boundary conditions in a vector Y, and ran u=A\Y to see how u changes. Here's my MATLAB code for the equation I managed to solve:
clear
a=1;
b=2;
N=100;
h = (b-a)/N;
K=3.20;
Ti=450;
Te=20;
A = zeros(N+2);
A(1,1)=1;
A(end,end)=1/(2*h*K);
A(end,end-1)=1;
A(end,end-2)=-1/(2*h*K);
r=a+h:h:b;
%y(i)
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
A(2:end-1,2:end-1)=A(2:end-1,2:end-1)+diag(yi);
%y(i-1)
for i=1:1:length(r)-1
ymin(i)=r(i+1)*(1/(h^2))-1/(2*h);
end
A(3:end-1,2:end-2) = A(3:end-1,2:end-2)+diag(ymin);
%y(i+1)
for i=1:1:length(r)
ymax(i)=r(i)*(1/(h^2))+1/(2*h);
end
A(2:end-1,3:end)=A(2:end-1,3:end)+diag(ymax);
Y=zeros(N+2,1);
Y(1) =Ti;
Y(2)=-(Ti*(r(1)/(h^2)-(1/(2*h))));
Y(end) = Te;
r=[1,r];
u=A\Y;
plot(r,u(1:end-1));
My question is, how do I solve the first differential equation?
As TroyHaskin pointed out in comments, one can determine D up to a constant factor, and that constant factor cancels out in D'/D anyway. Put another way: we can assume that D(1)=1 (a convenient number), since D can be multiplied by any constant. Now it's easy to find the coefficients (done with Wolfram Alpha), and the polynomial turns out to be
D(r) = -2r^3+9r^2-12r+6
with derivative D'(r) = -6r^2+18r-12. (There is also a smarter way to find the polynomial by starting with D', which is quadratic with known roots.)
I would probably use this information right away, computing the coefficient k of the first derivative:
r = a+h:h:b;
k = 1+r.*(-6*r.^2+18*r-12)./(-2*r.^3+9*r.^2-12*r+6);
It seems that k is always positive on the interval [1,2], so if you want to minimize the changes to the existing code, just replace r(i) by r(i)/k(i) in it; a short sketch of this change is given at the end of this answer.
By the way, instead of loops like
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
one usually does simply
yi=-r*(2/(h^2));
This vectorization makes the code more compact and can benefit the performance too (not so much in your example, where solving the linear system is the bottleneck). Another benefit is that yi is properly initialized, while with your loop construction, if yi happened to already exist and have length greater than length(r), the resulting array would have extraneous entries. (This is a potential source of hard-to-track bugs.)
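Putting the r(i)/k(i) substitution and the vectorization together, a minimal sketch of how the relevant lines of your existing script might change (my reading of the substitution; the rest of the matrix assembly and boundary handling stays as in your original code, with the r(1) in Y(2) replaced the same way):
r  = a+h:h:b;
k  = 1 + r.*(-6*r.^2+18*r-12)./(-2*r.^3+9*r.^2-12*r+6);
rk = r./k;                                  % r(i) replaced by r(i)/k(i)
yi   = -rk*(2/(h^2));                       % main-diagonal coefficients
ymin = rk(2:end)*(1/(h^2)) - 1/(2*h);       % sub-diagonal coefficients
ymax = rk*(1/(h^2)) + 1/(2*h);              % super-diagonal coefficients
Y(2) = -(Ti*(rk(1)/(h^2) - 1/(2*h)));       % boundary contribution, same substitution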