I wonder if there is an easy way to get the Jacobian out of fminsearch in MATLAB, like in
[OptimizedParameters,residualsNorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(@fun, initialParametersGuess, lb, ub, options);
I've tried
options = optimset('MaxFunEvals',100,'Jacobian','on');
[x,fval,exitflag,output] = fminsearch(fun,x0,options)
but there is no Jacobian in the output.
Any ideas?
fminsearch performs gradient-free optimization, i.e., it never computes a Jacobian, so it cannot return one.
To get a Jacobian you could try numerical or symbolic differentiation.
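For example, here is a minimal sketch of numerical differentiation by central finite differences. The function name numjac, the default step size, and the assumption that resfun returns a column vector of residuals are all my own choices, not something fminsearch provides:

% Sketch: central-difference Jacobian of a residual function resfun at point x.
% Save as numjac.m; resfun is assumed to return a column vector of residuals.
function J = numjac(resfun, x, h)
    if nargin < 3, h = 1e-6; end
    r0 = resfun(x);
    J = zeros(numel(r0), numel(x));
    for k = 1:numel(x)
        dx = zeros(size(x));
        dx(k) = h;
        J(:,k) = (resfun(x + dx) - resfun(x - dx)) / (2*h);   % central difference in x(k)
    end
end

You would call this at the solution returned by fminsearch, e.g. J = numjac(resfun, x).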
I have two arrays:
E= [6656400;
13322500;
19980900;
26625600;
33292900;
39942400;
46648900;
53290000]
and
J=[0.0000000021;
0.0000000047;
0.0000000128;
0.0000000201;
0.0000000659;
0.0000000748;
0.0000001143;
0.0000001397]
I want to find the appropriate curve fitting for the above data by applying this equation:
J=A0.*(298).^2.*exp(-(W-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298))
I want to start W from an initial value of 1e-19.
I have tried the curve fitting tools, but they are not helping me solve this!
Then I selected some random values, A0 = 1.2e9 and W = 2.243e-19, which gave better results. But I want to find the right values using code (not the curve fitting apps).
Can you help me please?
A quick (and potentially easy) solution method would be to pose the curve fit as a minimization problem.
Define a correlation function that takes the fit parameters as an argument:
% x(1) == A0; x(2) == W
Jfunc = @(x) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
Then an objective function to minimize. Since you have data J, we'll minimize the sum of squares of the difference between the data and the correlation:
Objective = @(x) sum((Jfunc(x) - J).^2);
And then attempt to minimize the objective using fminsearch:
x0 = [1.2E9;2.243E-19];
sol = fminsearch(Objective,x0);
I used the guesses you gave. For nonlinear solutions, a good first guess is often important for convergence.
If you have the Optimization Toolbox, you can also try lsqcurvefit or lsqnonlin (fminsearch is vanilla MATLAB).
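As a rough sketch of the lsqcurvefit route, reusing the model from Jfunc and the E and J vectors defined in the question (the names model, x0, and sol are mine):

% Sketch: same fit with lsqcurvefit (Optimization Toolbox), with E as xdata and J as ydata.
% x(1) == A0; x(2) == W
model = @(x, E) x(1).*(298).^2.*exp(-(x(2)-((((1.6e-19)^3)/(4*pi*2.3*8.854e-12))^0.5).*E.^0.5)./((1.38e-23).*298));
x0  = [1.2e9; 2.243e-19];
sol = lsqcurvefit(model, x0, E, J);   % sol(1) ~ A0, sol(2) ~ W

lsqcurvefit also returns residuals and a Jacobian if you ask for the extra outputs, which can help judge the fit.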
I need an elegant, simple way to find the highest value returned by a deterministic function of one or more parameters.
I know that there is a nice implementation of genetic algorithms in MATLAB, but in my case that is overkill. I need something simpler.
Any idea?
You cannot find a maximum with fminsearch directly, but you can find a minimum. Multiplying your function by -1 transforms your "find the maximum" problem into a "find the minimum" problem, which fminsearch can handle:
f = #(x) 2*x - 3*x.^2; % a simple function to find the maximum from
minusf = #(x) -1*f(x); % minus f, find minimum from this function
x = linspace(-2,2,100);
plot(x, f(x));
xmax = fminsearch(minusf, -1);
hold on
plot(xmax,f(xmax),'ro') % plot the minimum of minusf (maximum of f)
The resulting plot shows f(x) with its maximum marked by a red circle.
A really simple idea is to use a grid-search approach, maybe with mesh refinement. A better idea is to use a more advanced derivative-free optimizer, such as the Nelder-Mead algorithm, which is what fminsearch implements.
You could also try algorithms from the global optimization toolbox: for example patternsearch or the infamous simulannealbnd.
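As a small sketch of the pattern-search route (it requires the Global Optimization Toolbox), maximizing the same simple f from the answer above by negating it:

% Sketch: maximizing f with patternsearch by minimizing -f.
f = @(x) 2*x - 3*x.^2;
xmax = patternsearch(@(x) -f(x), -1)   % start from x0 = -1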
J(x) = (1/π) ∫ cos(x sin θ) dθ, with the limits from 0 to π.
Plot J(2πd/λ) as a function of d/λ in MATLAB for d/λ ranging between
0 and 2. At what distance of separation (in wavelengths) is the
correlation between the antennas 0.7, and at what distance is it 0?
I do not understand how to integrate it in MATLAB. When I define syms theta and use
J_ = integral(J, 0, pi); an error appears. Secondly, when I integrate it manually, the answer comes out as 0. Kindly help me with it.
Unless you really need to calculate this manually, you should use Matlab's built-in besselj function to calculate the zeroth order Bessel function of the first kind:
dlam = 0:0.01:2;
x = 2*pi*dlam;
y = besselj(0,x)
figure;
plot(x,y)
This will be faster and more accurate than performing quadrature.
If you wish to determine, to a high degree of accuracy, the points at which y is 0.7 or 0, as opposed to reading them from a plot, you can use symbolic math in conjunction with solve and sym/besselj. Assuming that this is what that part of the question is about (I know nothing about antennas), you can use something like:
syms x;
double(solve(besselj(0,x) == 0.7,x))
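To answer the question in terms of separation, a sketch that builds on the solve call above and converts back via the relation x = 2*pi*d/lambda from the problem statement (the zero is found with vpasolve and a guess near the first zero of J0, roughly 2.4; variable names are mine):

% Sketch: separations d/lambda where the correlation is 0.7 and 0.
syms x;
x07 = double(solve(besselj(0,x) == 0.7, x));        % correlation equal to 0.7
x00 = double(vpasolve(besselj(0,x) == 0, x, 2.4));  % first zero of J0
dlam07 = x07/(2*pi)
dlam00 = x00/(2*pi)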
The integral command does not work on syms; it works on functions. For symbolic integration, the command is int.
I don’t have MATLAB at hand right now to check for typos etc., but something like this should work:
x = 0.1;
integral(#(theta) cos(x.*sin(theta)), 0, pi)/pi
Or even
bessel = #(x) integral(#(theta) cos(x.*sin(theta)), 0, pi)/pi;
bessel(0.1)
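Note that this handle only accepts a scalar x, because integral integrates over theta for one fixed x at a time. A quick sketch of evaluating it over the d/λ range from the question (using arrayfun to vectorize):

% Sketch: quadrature-based J0 over the d/lambda range 0..2.
bessel = @(x) integral(@(theta) cos(x.*sin(theta)), 0, pi)/pi;
dlam = 0:0.01:2;
y = arrayfun(bessel, 2*pi*dlam);    % evaluate one x at a time
plot(dlam, y), xlabel('d/\lambda'), ylabel('J_0(2\pi d/\lambda)')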
I would like to know if anybody knows how I can plot an integral calculated using quad/quadl, or if this is possible.
I read that I can set the trace parameter to be non-zero, and this results in the information of each iteration being provided, but I'm not sure how and if I can use the information to plot an integral.
Thanks.
quad and quadl do not compute an integral function anyway, i.e., an integral as a function of the parameter. And since tools like this work iteratively, refining their estimate until it satisfies a tolerance on the global value, they are not easily made to produce the plot you desire.
You can do what you desire by using a differential equation solver to generate the solution, ode45 for example.
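For illustration, a minimal sketch of that idea: treat the running integral y(x) = ∫₀ˣ f(t) dt as the ODE dy/dx = f(x) with y(0) = 0 and let ode45 produce points you can plot. The integrand here is just an assumed example:

% Sketch: plot the running integral of f by solving dy/dx = f(x), y(0) = 0.
f = @(x) exp(-x.^2);                        % example integrand (assumption)
[xs, ys] = ode45(@(x, y) f(x), [0, 3], 0);  % integrate from 0 to 3
plot(xs, ys)
xlabel('upper limit x'), ylabel('integral of f from 0 to x')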
When using MATLAB's lsqnonlin function, I am trying to give a user-defined Jacobian matrix, as described in the documentation.
The output of the objective function used in lsqnonlin should be a vector of unsquared values which, when squared and summed, give the energy. However, should the Jacobian be the partial derivatives of the squared or unsquared values?
The unsquared values are correct: the Jacobian should contain the partial derivatives of the unsquared residual values, not of their squares.
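For illustration, a minimal sketch of such a residual function. The model y = a*exp(b*t), the parameter names, and the data vectors t and ydata (both columns) are assumptions, not something from the question:

% Sketch: residual function for lsqnonlin fitting y = a*exp(b*t) to data (t, ydata).
% F holds the unsquared residuals; J holds the partial derivatives of F (not of F.^2).
function [F, J] = residuals(p, t, ydata)
    a = p(1); b = p(2);
    F = a*exp(b*t) - ydata;                 % unsquared residuals, one row per data point
    if nargout > 1
        J = [exp(b*t), a*t.*exp(b*t)];      % columns: dF/da, dF/db
    end
end

You would then tell lsqnonlin to use the supplied Jacobian, e.g. with optimoptions('lsqnonlin','SpecifyObjectiveGradient',true) on recent releases (or the older optimset 'Jacobian','on' flag), and call lsqnonlin(@(p) residuals(p, t, ydata), p0, [], [], options).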