Function to find roots using Newton's method with Jacobian and transpose - MATLAB

Given f = [f1, f2]^T
and its Jacobian matrix,
how can I write a function that uses Newton's method and takes an initial guess (x1, x2), a tolerance E, and a maximum number of iterations k to find the roots?

The roots are the points where f1 and f2 are both zero, so you can use a cost function of the form f1^2 + f2^2 and minimize it with fmincon/fminunc/fminsearch to find them.
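If you do want to implement Newton's method directly, a minimal sketch could look like the following (saved as, say, newton_roots.m; the example system, its Jacobian, and the tolerance/iteration values further down are my own illustrative assumptions, not part of the question):
function x = newton_roots(f, J, x0, E, k)
% f  - handle returning the 2x1 residual [f1; f2] at x
% J  - handle returning the 2x2 Jacobian of f at x
% x0 - initial guess [x1; x2], E - tolerance, k - maximum number of iterations
x = x0(:);
for iter = 1:k
    fx = f(x);
    if norm(fx) < E, return; end          % both components close enough to zero
    x = x - J(x) \ fx;                    % Newton step: solve J(x)*dx = f(x), then update x
end
warning('No convergence within %d iterations.', k);
end
Used, for example, as
f = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];   % example system f = [f1; f2]
J = @(x) [2*x(1), 2*x(2); 1, -1];              % its Jacobian
x = newton_roots(f, J, [1; 0], 1e-10, 50);
and the cost-function route from the answer would then be, e.g., xmin = fminsearch(@(x) sum(f(x).^2), [1; 0]).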

Related

Numerical gradient of a Matlab function evaluated at a certain point

I have a MATLAB function G(x,y,z). At each given (x,y,z), G(x,y,z) is a scalar. x = (x1,x2,...,xK) is a Kx1 vector.
Let us fix y and z at some given values. I would like to understand how to compute the derivative of G with respect to xk, evaluated at a certain x.
For example, suppose K=3
function f= G(x1,x2,x3,y,z)
f=3*x1*sin(z)*cos(y)+3*x2*sin(z)*cos(y)+3*x3*sin(z)*cos(y);
end
How do I compute the derivative of G(x1,x2,x3,4,3) with respect to x2 and then evaluate it at x = (1,2,6)?
You're looking for the partial derivative dG/dx2.
So the first thing to do is fix the variables that are held constant:
G2 = @(x2) G(1,x2,6,4,3);
Numerical derivatives are finite differences: you need to choose a step h for your finite difference and an appropriate scheme.
The simplest one is the forward difference
(G2(x2+h)-G2(x2))/h
You can make h as small as your numerical precision allows. In the limit h -> 0 the finite difference approaches the partial derivative.
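Putting this together for the example above (the step size below is only an illustrative choice):
G2 = @(x2) G(1, x2, 6, 4, 3);          % all variables fixed except x2
h  = 1e-6;                             % illustrative step size
dG_dx2 = (G2(2 + h) - G2(2)) / h       % forward difference, evaluated at x2 = 2
A central difference, (G2(2+h) - G2(2-h)) / (2*h), is usually more accurate for the same h.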

Minimizing Function with vector-valued input in MATLAB

I want to minimize a function like the one below:
f(x) = sum over i = 1..n-1 of [ 100*(x(i+1) - x(i)^2)^2 + (1 - x(i))^2 ]
Here, n can be 5, 10, 50, etc. I want to use MATLAB, and I want to solve this problem with gradient descent and a quasi-Newton method with BFGS updates, along with a backtracking line search. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But I really don't know how to create a vector-valued function in MATLAB (in my case the input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through a basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum needed to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by indexing, and that
the .^ notation applies the power operation elementwise.
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
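The question also asks for gradient descent with a backtracking line search written by hand. A minimal, untuned sketch reusing objfun.m from above (the step-size constants and the finite-difference gradient are my own illustrative choices, and plain gradient descent converges slowly on this particular function):
n   = 10;
x   = rand(n,1);              % starting point
c   = 1e-4;                   % Armijo sufficient-decrease constant
rho = 0.5;                    % backtracking shrink factor
h   = 1e-6;                   % finite-difference step for the gradient
for iter = 1:5000
    g = zeros(n,1);           % central-difference gradient of objfun at x
    for i = 1:n
        e = zeros(n,1); e(i) = h;
        g(i) = (objfun(x+e) - objfun(x-e)) / (2*h);
    end
    if norm(g) < 1e-6, break; end
    t = 1;                    % try a full step first
    while objfun(x - t*g) > objfun(x) - c*t*(g'*g)
        t = rho*t;            % shrink until the Armijo condition holds
    end
    x = x - t*g;
end
fprintf('objfun(x) = %.3e after %d iterations\n', objfun(x), iter);
For the BFGS variant you can either maintain the inverse Hessian approximation yourself or, as shown above, simply let fminunc's 'quasi-newton' algorithm do it for you.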

getting a matrix into a sym variable

I'm trying to fit a function to my experimental data. The function is actually a summation of terms. I tried the symsum method to build my function.
z is a 155x1 vector, Q is the value returned by lsqcurvefit, zo is a user-defined input variable, and k is the index of summation, running from 0 to 11.
syms z Q k r
Function=symsum( ((-Q)^k)*(1/(k+1)^1.5)/(1+z^2/r^2)^k,k)
I can't find a way to use a matrix inside sym. Can you suggest an alternative method?
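One possible workaround (a sketch only: the parameter packing p = [Q, r] and the starting guess are my assumptions, and zo is left out because it does not appear in the formula shown) is to evaluate the truncated sum numerically, so that z can be the full 155x1 data vector and the function can be handed to lsqcurvefit directly:
function F = model(p, z)
% Truncated sum for k = 0..11, evaluated elementwise for a vector z
Q = p(1); r = p(2);
F = zeros(size(z));
for k = 0:11
    F = F + ((-Q)^k) * (1/(k+1)^1.5) ./ (1 + z.^2/r^2).^k;
end
end
Used, for example, as p = lsqcurvefit(@model, [0.5, 1], z, ydata), where ydata is the 155x1 vector of measurements.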

Matlab Matrix Minimization

I have the following matrix
R=(A-C)*inv(A+B-C-C')*(A-C');
where A and B are n-by-n matrices. I want to find the n-by-n matrix C such that the determinant of R is minimized, i.e.
C = arg min det(R);
Is there any function in MATLAB that can handle this problem?
It seems like you are trying to find the minimum of an unconstrained multivariable function. This can probably be achieved with fminunc:
fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;
x0 = [1,2];
[x,fval] = fminunc(fun,x0)
Note that there are no examples in the documentation where a matrix is used; this is probably because of the horrendous performance to be expected when trying to solve this problem for a matrix of anything but the smallest sizes. (This is not because of MATLAB, but because of the nature of the problem.)
It is also good to realize that this method does not (and cannot) guarantee a global optimum, only a local one.
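As a rough sketch of how this could look for the matrix problem (everything below is illustrative: A and B are placeholder data, the zero starting guess is arbitrary, and only a local optimum can be expected), you can let fminunc work on the n*n entries of C packed into a vector and reshape inside the objective:
n   = 3;
A   = rand(n); B = rand(n);                  % placeholder data for A and B
obj = @(c) det( (A - reshape(c,n,n)) ...
              / (A + B - reshape(c,n,n) - reshape(c,n,n)') ...
              * (A - reshape(c,n,n)') );     % det(R) as a function of the entries of C
c0  = zeros(n*n, 1);                         % starting guess for the entries of C
[c, fval] = fminunc(obj, c0);
C = reshape(c, n, n);                        % recover the matrix argument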

Matlab: Error using ==> mpower

I'm trying to use fsolve in MATLAB to solve a system of nonlinear equations numerically. Here is a test sample of my program; k1 and R are parameters and x0 is the starting point.
function y=f(k1, R, x0)
pair=fsolve(@system,x0);
y=pair(1);
function r=system(v)
int1=@(x) exp(k1*x);
int2=@(x) exp(k1*x^2)/(x^4);
r(1)=exp(v(1))*quadl(int1,0,v(1));
r(2)=exp(k1*v(2))*quadl(int2,v(1),20)*k1*R;
end
end
The strange thing is that when I run this program, MATLAB keeps telling me that I should use .^ instead of ^ in int2=@(x) exp(k1*x^2)/(x^4). I am confused, because the x in that function handle is supposed to be a scalar when it is used by quadl. Why do I have to use .^ in this case?
Also, I see that a lot of the examples provided in the online documentation use .^ even though they are clearly taking the power of a scalar, as in here. Can anybody help explain why?
Thanks in advance.
In the function int2 you have used matrix power (^) where you should use element-wise power (.^). Also, you have used matrix right division (/) where you should use element-wise division (./). This is needed because quadl (and friends) evaluate the integrand int2 for a whole array of x values at a time, for reasons of efficiency.
So, use this:
function y = f(k1, R, x0)
pair = fsolve(@system,x0);
y = pair(1);
function r = system(v)
int1 = @(x) exp(k1*x);
int2 = @(x) exp(k1*x.^2)./(x.^4);
r(1) = exp( v(1)) * quadl(int1,0,v(1));
r(2) = exp(k1*v(2)) * k1*R*quadl(int2,v(1),20);
end
end
Also, have a look at quadgk or integral (if you're on a newer MATLAB release).
By the way, I assume your real functions int1 and int2 are different? Because these ones are of course trivial to integrate analytically...
Internally, MATLAB will evaluate the function fun at whatever values of x it needs, and it does so for many values at once, so x is passed in as a vector. a and b are only used as the limits of integration. From the documentation:
fun is a function handle. It accepts a vector x and returns a vector y, the function fun evaluated at each element of x. Limits a and b must be finite.
Hence, you must use .^ to operate on individual elements of vector x.
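A tiny demonstration of the difference (my own illustration, using integral in place of quadl):
f_bad  = @(x) 1/(x^2 + 1);    % matrix power and division: errors when x is a vector
f_good = @(x) 1./(x.^2 + 1);  % element-wise operators: work for scalar and vector x
x = [1 2 3];
% f_bad(x)                    % raises the mpower error, since [1 2 3]^2 is not defined
f_good(x)                     % returns [0.5 0.2 0.1]
integral(f_good, 0, 1)        % the integrator passes in a vector of points; result is pi/4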