Minimizing a function with vector-valued input in MATLAB

I want to minimize the function below:
f(x) = sum_{i=1}^{n-1} [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
Here, n can be 5, 10, 50, etc. I want to use MATLAB to solve this problem with gradient descent and with a quasi-Newton method using the BFGS update, both combined with a backtracking line search. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But I really don't know how to create a function of a vector-valued input in MATLAB (in my case, the input x is an n-dimensional vector).

You will have to make quite a leap to get where you want to be -- may I suggest going through a basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum needed to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by vector indexing, and that
the .^ operator raises each element of its operand to the given power (elementwise squaring here).
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
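Since you also asked for plain gradient descent with a backtracking line search, here is a minimal sketch to get you started. The gradient is derived by hand from the objective above, and the backtracking parameters (rho = 0.5, c = 1e-4) are conventional textbook choices, not anything prescribed by the toolbox; be aware that gradient descent crawls on this objective, so expect far more iterations than fminunc needs. Save the gradient next to objfun.m as objgrad.m:
function g = objgrad(x)
% Hand-derived gradient of objfun
g = zeros(numel(x),1);
g(1:end-1) = -400*x(1:end-1).*(x(2:end) - x(1:end-1).^2) - 2*(1 - x(1:end-1));
g(2:end) = g(2:end) + 200*(x(2:end) - x(1:end-1).^2);
end
Then, from the command window:
n = 10;
x = rand(n,1); % random starting guess
rho = 0.5; c = 1e-4; % backtracking (Armijo) parameters
for iter = 1:50000
    g = objgrad(x);
    if norm(g) < 1e-6, break; end % stop when the gradient is small
    a = 1; % try a full step first
    while objfun(x - a*g) > objfun(x) - c*a*(g'*g)
        a = rho*a; % shrink until the sufficient-decrease condition holds
    end
    x = x - a*g;
end
fprintf('objval=%.2e after %d iterations\n', objfun(x), iter);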

Related

Newton-Raphson method does not converge when finding roots of nonlinear equations

I tried non-linear polynomial functions and this code works well. But for this one, I tried several methods to solve the linear equation df0*X=f0, using backslash, bicg, and lsqr, and also tried several initial values, but the result never converges.
% Define the given function
syms x1 x2 x3
x = [x1, x2, x3];
f(x) = [3*x1 - cos(x2*x3) - 1/2; ...
        x1^2 + 81*(x2 + 0.1)^2 - sin(x3) + 1.06; ...
        exp(-x1*x2) + 20*x3 + 1/3*(10*pi - 3)];
% Define the stopping criteria based on Niter or the relative error
tol = 10^-5;
Niter = 100;
df = jacobian(f, x);
x0 = [0.1; 0.1; -0.1];
% Set the starting values
error = 1;
i = 0;
% Start the Newton-Raphson iteration
while (abs(error) > tol)
    f0 = eval(f(x0(1), x0(2), x0(3)));
    df0 = eval(df(x0(1), x0(2), x0(3)));
    xnew = x0 - df0\f0; % also tried lsqr(df0,f0) and bicg(df0,f0)
    error = norm(xnew - x0);
    x0 = xnew;
    i = i + 1 % no semicolon, so the iteration count prints each pass
    if i >= Niter
        fprintf('Iteration count exceeded Niter\n');
        return;
    end
end
You'll need anonymous functions here to do the job better (we mentioned them in passing today!).
First, let's get the function definition down. Anonymous functions are a nice way to call things in a manner similar to mathematical functions. For example,
f = @(x) x^2;
is a squaring function. To evaluate it, just write f(2), say, as you would on paper. Since you have a multivariate function, you'll need to vectorize the definition as follows:
f = @(x) [3*x(1) - cos(x(2)*x(3)) - 1/2; ...
          x(1)^2 + 81*(x(2) + 0.1)^2 - sin(x(3)) + 1.06; ...
          exp(-x(1)*x(2)) + 20*x(3) + 1/3*(10*pi - 3)];
For your Jacobian, you'll need to use another anonymous function (maybe call it grad_f): compute it on paper, then code it in. Jacobians computed by finite differences can accumulate errors where the Jacobian is ill-conditioned in some regions, so an analytical Jacobian is preferable here.
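To make this concrete, here is a sketch of what the hand-computed Jacobian could look like for the system above, written as an anonymous function, together with a single Newton step (the entries follow from differentiating each row of f; verify them before relying on this):
grad_f = @(x) [3, x(3)*sin(x(2)*x(3)), x(2)*sin(x(2)*x(3)); ...
               2*x(1), 162*(x(2) + 0.1), -cos(x(3)); ...
               -x(2)*exp(-x(1)*x(2)), -x(1)*exp(-x(1)*x(2)), 20];
x0 = [0.1; 0.1; -0.1];
xnew = x0 - grad_f(x0)\f(x0); % one Newton-Raphson step, no symbolic math involved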
The key is to just be careful and use some good coding practices. See this document for more info on anonymous functions and other good MATLAB practices.

Find maximum of a function that is a maximum of functions

I would like to solve the following problem:
x* = argmin f( max_{j=1,...,J} g_j(x)),
where f and g_j are known smooth, convex functions, and x is a scalar. I also have analytical gradients of f and g_j. Is it possible to use fminunc (using Newton method) or another solver in MatLab to find the solution? Obviously max_{j=1,...,J} g_j(x) is not differentiable (unless there is a k such that g_k(x) >= g_j(x) for all j), but it is sub-differentiable. Therefore, I cannot use fminunc directly, as it requests the gradient of the objective function. If I can use fminunc, how would I proceed?
As an example, consider f(x) = x, J=2, g_1(x) = (x-1)^2, g_2(x) = (x+1)^2. Then max(g_1, g_2) is the upper envelope of g_1 and g_2. The minimum is located at x=0.
Note: The function fminimax is close to what I want. fminimax solves
x* = argmin max_{j=1,...,J} g_j(x).
I have one potential solution to my problem. We can replace the maximum function with a smoothed or soft maximum, see this article by John Cook. In particular, we can approximate max(x,y) with the function g(x,y;k) = log(exp(kx)+exp(ky))/k, where k is a tuning parameter. Now the objective function is differentiable, so Newton-type methods (i.e., fminunc) can be applied.
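For the toy example above, a sketch of this approach could look as follows; the log-sum-exp is written in shifted form to avoid overflow, and k = 50 is an arbitrary choice (larger k tracks the true maximum more closely but worsens conditioning):
k = 50;
softmax2 = @(a,b) max(a,b) + log(exp(k*(a - max(a,b))) + exp(k*(b - max(a,b))))/k; % smooth max of two values
obj = @(x) softmax2((x - 1)^2, (x + 1)^2); % f is the identity here
xstar = fminunc(obj, 0.5) % should land near the true minimizer x = 0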

For loop equation into Octave / Matlab code

I'm using Octave 3.8.1, which works like MATLAB.
I have an array of thousands of values; I've only included three groupings as an example below (amp1=0.2; freq1=3; phase1=1; is an example of one grouping):
t=0;
amp1=0.2; freq1=3; phase1=1; %1st grouping
amp2=1.4; freq2=2; phase2=1.7; %2nd grouping
amp3=0.8; freq3=5; phase3=1.5; %3rd grouping
The Octave / Matlab code below solves for Y so that I can plug it back into the equation to check values, and to calculate values not located in the array.
clear all
t = 0;
Y = 0;
a1 = [0.2, 3, 1; 1.4, 2, 1.7; 0.8, 5, 1.5]
for kk = 1:length(a1)
    Y = Y + a1(kk,1)*cos(a1(kk,2)*t + a1(kk,3))
    kk
end
Y
PS: I'm not trying to solve for Y, since that is already solved for; I'm trying to solve for the phase.
The formulas located below are used to calculate a phase, but I'm not sure how to put them into a for loop that will work on an array of n groupings:
How would I write the equation / for loop for finding the phase if, say, freq=2.5 and amp=.23 and the phase is unknown? From what I've found online, this may require setting up nonlinear equations, and I'm not sure how to convert what I'm trying to do into that form.
phase1_test=acos(Y/amp1-amp3*cos(2*freq3*pi*t+phase3)/amp1-amp2*cos(2*freq2*pi*t+phase2)/amp1)-2*freq1*pi*t
phase2_test=acos(Y/amp2-amp3*cos(2*freq3*pi*t+phase3)/amp2-amp1*cos(2*freq1*pi*t+phase1)/amp2)-2*freq2*pi*t
phase3_test=acos(Y/amp3-amp2*cos(2*freq2*pi*t+phase2)/amp3-amp1*cos(2*freq1*pi*t+phase1)/amp3)-2*freq3*pi*t
I would like to check / calculate phases when given freq and amp values.
I know I need a for loop, but how do I convert the phase equation into a for loop that works on n groupings in an array and calculates values not found in the array?
Basically, I would be given an array of n groupings together with freq=2.5 and amp=.23, and I want to use the formula to calculate the phase. Note: the freq will not always be in the array, which is why I'm trying to calculate the phase from a formula.
OK, I think I finally understand your question: you are trying to find a set of phase1, phase2, ..., phaseN such that equations like the ones you describe are satisfied, where you know how to find Y and you supply values for freq and amp.
In Matlab, such a problem would be solved using, for example, fsolve, but let's look at your problem step by step.
For simplicity, let me re-write your equations for phase1, phase2, and phase3. For example, your first equation, the one for phase1, would read
amp1*cos(2*freq1*pi*t + phase1) + amp2*cos(2*freq2*pi*t + phase2) + amp3*cos(2*freq3*pi*t + phase3) - Y = 0
Note that the ampX (X is a placeholder for 1, 2, 3) are given, pi is a constant, t is given via Y (I think), and the freqX are given.
Hence, you are, in fact, dealing with a non-linear vector equation of the form
F(phase) = 0
where F is a multi-dimensional (vector) function taking a multi-dimensional (vector) input variable phase (comprised of phase1, phase2,..., phaseN). And you are looking for the set of phaseX, where all of the components of your vector function F are zero. N.B. F is a shorthand for your functions. Therefore, the first component of F, called f1, for example, is
f1 = amp1*cos(2*freq1*pi*t + phase1) + amp2*cos(2*freq2*pi*t + phase2) + amp3*cos(2*freq3*pi*t + phase3) - Y = 0.
Hence, f1 is a one-dimensional function of phase1, phase2, and phase3.
The technical term for what you are trying to do is find a zero of a non-linear vector function, or find a solution of a non-linear vector function. In Matlab, there are different approaches.
For a one-dimensional function, you can use fzero, which is explained at http://www.mathworks.com/help/matlab/ref/fzero.html?refresh=true
For a multi-dimensional (vector) function as yours, I would look into using fsolve, which is part of Matlab's optimization toolbox (which means I don't know how to do this in Octave). The function fsolve is explained at http://www.mathworks.com/help/optim/ug/fsolve.html
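For illustration, here is a sketch of how that could look with fsolve, assuming you collect one (t, Y) sample per unknown phase, that amp and freq are known, and following the cos(freq*t + phase) convention from your Y loop (the concrete numbers are just your three example groupings):
amp = [0.2; 1.4; 0.8];
freq = [3; 2; 5];
truephase = [1; 1.7; 1.5]; % only used here to manufacture test data
tsamp = [0; 0.1; 0.2]; % one sample time per unknown phase
Ysamp = arrayfun(@(t) sum(amp.*cos(freq*t + truephase)), tsamp);
F = @(p) arrayfun(@(i) sum(amp.*cos(freq*tsamp(i) + p)), (1:numel(tsamp))') - Ysamp;
phase = fsolve(F, zeros(3,1)) % note: phases are only determined up to the symmetries of cos()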
If you know an approximate solution for your phases, you may also look into iterative, local methods.
In particular, I would recommend looking into Newton's method, which lets you find a solution to your system of equations F(phase) = 0. Wikipedia has a good explanation of Newton's method at https://en.wikipedia.org/wiki/Newton%27s_method . Newton iterations are very simple to implement, and you should find plenty of resources online. You will have to compute the derivative of your function F with respect to each of your variables phaseX, which is easy here since you're only dealing with cos() functions. For starters, have a look at the one-dimensional Newton iteration in Matlab at http://www.math.colostate.edu/~gerhard/classes/331/lab/newton.html .
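As a sketch of how short such a Newton iteration can be for this particular F (reusing amp, freq, tsamp, and Ysamp from the fsolve sketch above; the Jacobian entries are simply J(i,k) = -amp(k)*sin(freq(k)*tsamp(i) + p(k))):
p = zeros(3,1); % starting guess
for it = 1:50
    Fp = arrayfun(@(i) sum(amp.*cos(freq*tsamp(i) + p)), (1:numel(tsamp))') - Ysamp;
    if norm(Fp) < 1e-10, break; end % converged
    J = -bsxfun(@times, amp', sin(bsxfun(@plus, tsamp*freq', p'))); % J(i,k) as given above
    p = p - J\Fp; % Newton update
end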
Finally, if you want to dig deeper, I found a textbook on this topic from the society for industrial and applied math: https://www.siam.org/books/textbooks/fr16_book.pdf .
As you can see, this is a very large field; Newton's method should be able to help you out, though.
Good luck!

How to solve an equation with piecewise defined function in Matlab?

I have been working on solving some equation in a more complicated context. However, I want to illustrate my question through the following simple example.
Consider the following two functions:
function y=f1(x)
y=1-x;
end
function y=f2(x)
if x<0
    y=0;
else
    y=x;
end
end
I want to solve the following equation: f1(x)=f2(x). The code I used is:
syms x;
x=solve(f1(x)-f2(x));
And I got the following error:
??? Error using ==> sym.sym>notimplemented at 2621
Function 'lt' is not implemented for MuPAD symbolic objects.
Error in ==> sym.sym>sym.lt at 812
notimplemented('lt');
Error in ==> f2 at 3
if x<0
I know the error occurs because x is a symbolic variable, so I cannot compare x with 0 in the piecewise function f2(x).
Is there a way to fix this and solve the equation?
First, make sure symbolic math is even the appropriate solution method for your problem; in many cases it isn't. Look at fzero and fsolve, among many others. A symbolic method is only needed if, for example, you want a closed-form formula or need to guarantee precision.
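For this particular equation, a numeric solution is a few lines; here is a sketch using fzero, with max(x,0) standing in for your piecewise f2:
f1 = @(x) 1 - x;
f2 = @(x) max(x, 0); % the same piecewise function, as an anonymous function
x0 = fzero(@(x) f1(x) - f2(x), 0) % returns 0.5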
In such an old version of Matlab, you may want to break up your piecewise function into separate continuous functions and solve them separately:
syms x;
s1 = solve(1-2*x,x) % For x >= 0, where f1 - f2 = 1 - 2*x
s2 = solve(1-x,x) % For x < 0, where f1 - f2 = 1 - x
Then you can either manually examine or numerically compare the outputs to determine if any or all of the solutions are valid for the chosen regime – something like this:
s = [s1(double(s1) >= 0);s2(double(s2) < 0)]
You can also take advantage of the heaviside function, which is available in much older versions.
syms x;
f1 = 1-x;
f2 = x*heaviside(x);
s = solve(f1-f2,x)
Yes, the Heaviside function is 0.5 at zero – this gives it the appropriate mathematical properties. You can shift it to compare values other than zero. This is a standard technique.
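For example, to place the step at a threshold a other than zero (a = 2 here, purely for illustration):
syms x
a = 2;
step = heaviside(x - a); % 0 for x < a, 1 for x > a, 0.5 at x = a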
In Matlab R2012a+, you can take advantage of assumptions in addition to the normal relational operators. To add to @AlexB's comment, you should convert the output of any logical comparison to symbolic before using isAlways:
isAlways(sym(x<0))
In your case, x is obviously not "always" on one side or the other of zero, but you may still find this useful in other cases.
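As a sketch, if the variable is created with a sign assumption up front, the comparison does resolve:
x = sym('x', 'positive'); % declare the assumption when creating the variable
isAlways(x > 0) % logical 1: always true under the assumption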
If you want to get deep into Matlab's symbolic math, you can create piecewise functions using MuPAD, which are accessible from Matlab – e.g., see my example here.

parameter optimization of black-box function in MATLAB

I need an elegant, simple way to find the highest value returned by a deterministic function given one or more parameters.
I know that there is a nice implementation of genetic algorithms in MATLAB, but in my case that is overkill. I need something simpler.
Any idea?
You cannot find a maximum with Matlab directly, but you can minimize something. Multiplying your function by -1 transforms your "find the maximum" problem into a "find the minimum" problem, which can be solved with fminsearch:
f = @(x) 2*x - 3*x.^2; % a simple function to find the maximum of
minusf = @(x) -1*f(x); % minus f; find the minimum of this function
x = linspace(-2,2,100);
plot(x, f(x));
xmax = fminsearch(minusf, -1);
hold on
plot(xmax,f(xmax),'ro') % plot the minimum of minusf (maximum of f)
(The resulting plot shows f over [-2, 2] with the recovered maximum marked at x = 1/3.)
A really simple idea is a grid search, maybe with mesh refinement. A better idea is a more advanced derivative-free optimizer, such as the Nelder-Mead algorithm; this is what fminsearch implements.
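For completeness, a minimal grid-search sketch for the same f (no toolboxes required):
f = @(x) 2*x - 3*x.^2;
xg = linspace(-2, 2, 10001); % dense grid; refine around the best point if needed
[fmax, idx] = max(f(xg));
xmax = xg(idx) % close to the true maximizer 1/3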
You could also try algorithms from the Global Optimization Toolbox: for example, patternsearch or the infamous simulannealbnd.