The fminsearch() function description is as follows:
Nonlinear programming solver. Searches for the minimum of a problem
specified by
min_x f(x)
f(x) is a function that returns a scalar, and x is a vector or a
matrix.
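A minimal usage sketch of fminsearch (the Rosenbrock function here is just an illustrative objective, not from the original question):

```matlab
% Minimize the Rosenbrock function starting from an initial guess.
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0  = [-1.2, 1];
[xmin, fmin] = fminsearch(fun, x0);   % xmin approaches [1, 1], fmin approaches 0
```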
What is its Big O time complexity?
Good evening,
I am trying to compute a matrix in Matlab whose elements are given by:
K(n,m) = int_{-inf}^{+inf} dx int_{-inf}^{+inf} dy f(n,m,x,y),
with f being the elements of an (x,y)-dependent matrix whose form I already know.
I have tried different methods for this:
Computing directly the matrix F with elements f and then two univariate integrals (function integral) with the corresponding 'ArrayValued' options.
Computing directly the matrix F with elements f and then the double integral (function integral2).
Trying to compute the elements of K with two consecutive univariate integrals (again, with integral).
Computing the elements of K with a double integral (again, with integral2).
None of these approaches seems to work, so I was wondering if anyone could suggest a different one.
Cheers
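For comparison, the element-by-element approach with integral2 can be sketched as follows. The integrand f here is a made-up placeholder (a Gaussian-weighted polynomial) chosen only so the example runs; substitute your own matrix elements:

```matlab
% Hypothetical integrand f(n,m,x,y) -- replace with the actual matrix elements.
f = @(n,m,x,y) exp(-(x.^2 + y.^2)) .* x.^(2*n) .* y.^(2*m);

N = 3; M = 3;
K = zeros(N, M);
for n = 1:N
    for m = 1:M
        % integral2 supports infinite limits directly.
        K(n,m) = integral2(@(x,y) f(n,m,x,y), -Inf, Inf, -Inf, Inf);
    end
end
```

If this still fails, the usual culprits are an integrand that is not vectorized (it must accept array-valued x and y) or one that decays too slowly for the infinite limits to converge.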
I have a Matlab function G(x,y,z). At each given (x,y,z), G(x,y,z) is a scalar. x=(x1,x2,...,xK) is a Kx1 vector.
Let us fix y,z at some given values. I would like your help to understand how to compute the derivative of G with respect to xk evaluated at a certain x.
For example, suppose K=3
function f= G(x1,x2,x3,y,z)
f=3*x1*sin(z)*cos(y)+3*x2*sin(z)*cos(y)+3*x3*sin(z)*cos(y);
end
How do I compute the derivative of G(x1,x2,x3,4,3) with respect to x2 and then evaluate it at x=(1,2,6)?
You're looking for the partial derivative dG/dx2.
So the first thing would be getting rid of your fixed variables:
G2 = @(x2) G(1,x2,6,4,3);
Numerical derivatives are computed with finite differences: you need to choose a step h and an appropriate method.
The simplest one is
(G2(x2+h)-G2(x2))/h
You can make h as small as your numerical precision allows. In the limit h -> 0 the finite difference tends to the partial derivative.
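Putting the pieces together for the example G above (the step h = 1e-6 and the evaluation point x = (1,2,6) follow the question; the step size is otherwise an arbitrary choice):

```matlab
% The G from the question, with y and z as trailing arguments.
G  = @(x1,x2,x3,y,z) 3*x1*sin(z)*cos(y) + 3*x2*sin(z)*cos(y) + 3*x3*sin(z)*cos(y);
G2 = @(x2) G(1, x2, 6, 4, 3);        % fix x1=1, x3=6, y=4, z=3

h = 1e-6;                            % finite-difference step
dG_dx2 = (G2(2 + h) - G2(2)) / h;    % forward difference at x2 = 2
% Since G is linear in x2, this matches the analytic value 3*sin(3)*cos(4)
% up to rounding error.
```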
Is it possible to solve the equation below in Matlab?
A*X+B*exp(X)=C
A and B are constant square matrices, and C is a constant column vector.
X is the column vector to be found (exp() acts element-wise on X).
If you are looking for a numeric method, you might want to try fsolve
X = fsolve( @(x) A*x + B*exp(x) - C, x0 );
Due to the non-linear nature of the problem you need to provide an initial guess x0 - the quality of which can affect the performance of the solver.
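A small worked example with assumed 2x2 data (the matrices here are invented for illustration; fsolve requires the Optimization Toolbox):

```matlab
% Invented example data -- replace with the actual A, B, C.
A  = [2 0; 0 3];
B  = [1 0; 0 1];
C  = [4; 5];
x0 = zeros(2,1);                        % initial guess

X = fsolve(@(x) A*x + B*exp(x) - C, x0);
residual = norm(A*X + B*exp(X) - C);    % should be near zero at a solution
```

If fsolve stalls, trying several different x0 values is the usual remedy, since a poor starting point can leave it in a region where the residual is flat.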
I want to solve a multiple measurement vector (MMV) sparse representation problem with the CVX toolbox. I have an N*L matrix X which has only a few nonzero rows, and a system of equations Y=A*X, where Y is an M*L matrix of measurements (M < N).
min Relax(X)
subject to Y=A*X
Relax(.) is a function that applies the 1-norm to the vector t, where the (N*1) vector t consists of the 2-norms of the rows of matrix X, i.e. Relax(X) = norm_1(t) with t(i) = norm_2(X(i,:)).
I can't transform my objective function into a form that CVX can understand and solve.
Please tell me how I should change the objective and constraints so that CVX can solve the problem.
'norms' is the cvx command you're looking for. Suppose sigma is some known parameter that allows Y to be only approximately equal to A*X (e.g. I tried it with sigma=10e-6). Then you could use this code:
cvx_begin
    variable X(N,L)
    minimize( sum(norms(X, 2, 2)) )   % sum of row 2-norms (dim 2 = rows)
    subject to
        norm(Y - A*X, 2) <= 10*sigma
cvx_end
Given f = [f1, f2]^T
and its Jacobian matrix,
how can I write a function using Newton's method that takes an initial guess (x1, x2), a tolerance E, and a maximum of k iterations to find the roots?
Roots are points where f1 and f2 are both zero, so you can use a cost function of the form f1^2 + f2^2 and use fmincon/fminunc/fminsearch to find answers.
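Since the question asks for Newton's method itself, here is a sketch that iterates x <- x - J(x)\f(x) until the tolerance or iteration cap is hit. The system f and its Jacobian J are invented placeholders; substitute your own:

```matlab
% Example system (a placeholder): circle of radius 2 intersected with x1 = x2.
f = @(x) [x(1)^2 + x(2)^2 - 4;  x(1) - x(2)];   % f = [f1; f2]
J = @(x) [2*x(1), 2*x(2);       1,      -1];    % its Jacobian

x = [1; 0];      % initial guess (x1, x2)
E = 1e-10;       % tolerance
k = 50;          % max iterations
for iter = 1:k
    dx = -J(x) \ f(x);          % Newton step: solve J(x)*dx = -f(x)
    x  = x + dx;
    if norm(f(x)) < E
        break
    end
end
x   % approximate root; for this example it approaches (sqrt(2), sqrt(2))
```

The backslash solve avoids forming the inverse of J explicitly, which is both cheaper and numerically safer.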