Find maximum of a function that is a maximum of functions - matlab

I would like to solve the following problem:
x* = argmin f( max_{j=1,...,J} g_j(x)),
where f and g_j are known smooth, convex functions, and x is a scalar. I also have analytical gradients of f and g_j. Is it possible to use fminunc (using a Newton method) or another solver in MATLAB to find the solution? Obviously max_{j=1,...,J} g_j(x) is not differentiable everywhere (it fails to be differentiable at points where two or more g_j attain the maximum), but it is sub-differentiable. Therefore, I cannot use fminunc directly, as it requires the gradient of the objective function. If I can use fminunc, how would I proceed?
As an example, consider f(x) = x, J = 2, g_1(x) = (x-1)^2, g_2(x) = (x+1)^2. Then max(g_1, g_2) is the upper envelope of g_1 and g_2. The minimum is located at x = 0.
Note: The function fminimax is close to what I want. fminimax solves
x* = argmin max_{j=1,...,J} g_j(x).

I have one potential solution to my problem. We can replace the maximum function with a smoothed or soft maximum; see this article by John Cook. In particular, we can approximate max(x,y) with the function g(x,y;k) = log(exp(kx)+exp(ky))/k, where k is a tuning parameter. Now the objective function is differentiable, so the Newton method can be applied (i.e., fminunc).
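As a minimal sketch of this idea on the toy example above (the stabilized log-sum-exp form below avoids overflow for large k; the value k = 50 is an arbitrary choice):
k = 50;                                                  % smoothing parameter; larger k tracks the true max more closely
softmax = @(v) max(v) + log(sum(exp(k*(v - max(v)))))/k; % stabilized soft maximum of the entries of v
obj = @(x) softmax([(x-1).^2; (x+1).^2]);                % f is the identity in this example
xstar = fminunc(obj, 0.5)                                % converges near the true minimizer x = 0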


Minimizing Function with vector valued input in MATLAB

I want to minimize a function like below:
f(x) = sum_{i=1}^{n-1} [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
Here, n can be 5, 10, 50, etc. I want to use MATLAB, and I want to use gradient descent and a quasi-Newton method with the BFGS update to solve this problem, along with a backtracking line search. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But I really don't know how to create a function of a vector-valued input in MATLAB (in my case the input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through a basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
% Generalized Rosenbrock function of an n-dimensional vector x
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by indexing, and that
the .^ operator performs the squaring elementwise.
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
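Since the question mentions gradient-based methods: fminunc can also use a user-supplied analytical gradient if the objective returns it as a second output. A hedged sketch (the gradient follows from differentiating the sum term by term; on releases older than R2016a the option is spelled 'GradObj','on' instead):
function [f, g] = objfungrad(x)
% Objective and analytical gradient of the generalized Rosenbrock function
d = x(2:end) - x(1:end-1).^2;
f = sum(100 * d.^2 + (1 - x(1:end-1)).^2);
g = zeros(size(x));
g(1:end-1) = -400 * x(1:end-1) .* d - 2 * (1 - x(1:end-1)); % terms where x_i appears inside the square
g(2:end) = g(2:end) + 200 * d;                              % terms where x_{i+1} appears
end
and invoke it with
options = optimoptions(@fminunc,'Algorithm','quasi-newton','SpecifyObjectiveGradient',true);
[x,fval,exitflag] = fminunc(@objfungrad, rand(10,1), options);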

Minimize function defined by script

Suppose I have a function calc(x,a,b) in MATLAB which outputs a scalar. a and b are constants, and x is treated as multivariate. How do I minimize calc(x,a,b) with respect to x in MATLAB?
edit: The content of the function creates a vector v(x) and a matrix A(x) and then computes v(x)'*A(x)^(-1)*v(x).
This is a fairly general question with a million possible responses depending on what calc is. (For instance: can you provide gradients for calc? Does x need to take on values in a specific range?)
But, as a start, go for fminunc. It performs unconstrained minimization, and if you have no gradient information available it estimates the gradient by finite differences.
Sample Code:
Suppose you want to minimize dot(x,x).
calc = @(x,a,b) dot(x,x)
calc_to_pass_to_fminunc = @(x) calc(x,1,2)
X = fminunc(calc_to_pass_to_fminunc,ones(3,1))
Gives:
Warning: Gradient must be provided for trust-region algorithm;
using line-search algorithm instead.
> In fminunc at 383
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
<stopping criteria details>
X =
0
0
0
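For the v(x)'*A(x)^(-1)*v(x) form mentioned in the edit, the same wrapping pattern applies. A sketch with hypothetical placeholder definitions of v and A (substitute your own; note that the backslash operator solves the linear system rather than forming the inverse explicitly):
v_of_x = @(x) [x(1); x(2)^2];                          % hypothetical v(x)
A_of_x = @(x) [2 + x(1)^2, 0; 0, 1 + x(2)^2];          % hypothetical A(x), positive definite here
calc = @(x,a,b) v_of_x(x)' * (A_of_x(x) \ v_of_x(x)); % A\v avoids forming inv(A)
X = fminunc(@(x) calc(x,1,2), ones(2,1))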
The easy answer is: if a and b are constants, and x is a one-dimensional variable, it's a 1-D optimization problem.
The previous answer suggests using fminunc, which is part of the MATLAB Optimization Toolbox. If you don't have it, you can use fminbnd instead, which works just as well for 1-D optimization in a given interval.
As an example, let's say your calc function is:
function [y] = calc(x,a,b)
y = x.^3-2*x-5+a-b;
end
This is what you should do to find the minimum in the interval x1 < x < x2:
% constants
a = 1;
b = 2;
% boundaries of search interval
x1 = 0;
x2 = 2;
x = fminbnd(@(x)calc(x,a,b), x1, x2);
% value of function at the minimum
y = calc(x,a,b);
In the case of the x variable not being a scalar, you could use the analogue of fminbnd for a multidimensional variable: fminsearch, which performs an unconstrained search for the minimum of a multivariate function (see the sketch below).
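A short sketch of that multivariate case, with a hypothetical stand-in for calc:
calc = @(x,a,b) sum(x.^2) + a - b;       % hypothetical multivariate calc
x = fminsearch(@(x) calc(x,1,2), [1; 1]) % converges to [0; 0]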
Addendum
fminbnd is a nice tool, but sometimes it's hard to make it behave as you expect. Of course you can specify the desired accuracy and a maximum number of iterations in the options, but in my experience fminbnd can have problems with highly non-linear functions.
In these situations it's desirable to have finer control over the optimization procedure, and especially over how the search interval is defined. Given the search interval, arrayfun provides an elegant way to iterate over an array to find a minimum of the function. Sample code:
% constants
a = 1;
b = 2;
% search interval
xi = linspace(0,2,1000);
yi = arrayfun(@(x)calc(x,a,b), xi);
% value of function at the minimum
[y, idx_m] = min(yi);
% location of minimum
x = xi(idx_m);
The drawback of this approach is that, in order to achieve high accuracy, you might need a very long array xi. The good news is that there are several ways to mitigate this issue: for instance, one could use a vector of log-spaced sampling points, or perform a multi-step minimization, narrowing the interval and increasing the sampling frequency at each step until the desired accuracy is achieved (see the sketch below).
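A sketch of that multi-step idea, reusing the calc function from above: each pass samples the current interval on a grid and then shrinks the interval around the best grid point.
a = 1; b = 2;
lo = 0; hi = 2;                         % initial search interval
for step = 1:5
    xi = linspace(lo, hi, 100);
    yi = arrayfun(@(x)calc(x,a,b), xi);
    [~, idx] = min(yi);
    w = (hi - lo)/10;                   % half-width of the refined interval
    lo = max(lo, xi(idx) - w);
    hi = min(hi, xi(idx) + w);
end
x = xi(idx);                            % refined location of the minimum
y = calc(x,a,b);                        % value of function at the minimum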

MATLAB complicated integration

I have a function defined through an integral for which no indefinite-integral expression exists.
Specifically, the function is f(y)=h(y)+integral(@(x) exp(-x-1/x),0,y), where h(y) is a simple function.
Matlab numerically computes f(y) well, but I want to compute the following function.
g(w)=w*integral(1-f(y).^(1/w),0,inf) where w is a real number in [0,1].
The problem for computing g(w) is handling f(y).^(1/w) numerically.
How can I calculate g(w) with MATLAB? Is it impossible?
Expressions containing e^(-1/x) are generally difficult to compute near x = 0. Actually, I am surprised that MATLAB computes f(y) well in the first place. I'd suggest trying to compute g(w)=w*integral(1-f(y).^(1/w),epsilon,inf) for some epsilon greater than zero, then gradually decreasing epsilon toward 0 to check whether you get numerical convergence at all. Convergence is certainly not guaranteed!
You can calculate g(w) using the functions you have, but you need to add the 'ArrayValued',true name-value pair to the integral calls.
The option makes integral pass scalar values to its integrand one at a time (the nested integral call cannot accept a vector of y values as its upper limit), and it allows the integrand's output to be array-valued, so w can even be a vector.
f = @(y) h(y) + integral(@(x) exp(-x-1/x), 0, y, 'ArrayValued', true);
g = @(w) w .* integral(@(y) 1 - f(y).^(1./w), 0, Inf, 'ArrayValued', true);
At least, that works on my R2014b installation.
Note: While h(y) may be simple, if its integral over the positive real line does not converge, g(w) will more than likely not converge (I don't think I need to qualify that, but I'll hedge my bets).
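To make this concrete on something runnable, here is a sketch with a hypothetical h chosen so that f(y) -> 1 as y -> Inf, which is what the outer integrand 1 - f(y).^(1/w) needs in order to decay (with your real h, verify this property first):
C = integral(@(x) exp(-x - 1./x), 0, Inf);    % total mass of the inner integrand
h = @(y) (1 - C)*(1 - exp(-y));               % hypothetical h, chosen so that f(Inf) = 1
f = @(y) h(y) + integral(@(x) exp(-x - 1./x), 0, y);
g = @(w) w .* integral(@(y) 1 - f(y).^(1./w), 0, Inf, 'ArrayValued', true);
g(0.5)                                        % example evaluation for a scalar w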

parameter optimization of black-box function in MATLAB

I need an elegant, simple system to find the highest value returned by a deterministic function, given one or more parameters.
I know that there is a nice implementation of genetic algorithms in MATLAB, but in my case that is overkill. I need something simpler.
Any idea?
You cannot find a maximum with MATLAB directly, but you can minimize something. Multiplying your function by -1 transforms your "find the maximum" problem into a "find the minimum" problem, which can be solved with fminsearch:
f = @(x) 2*x - 3*x.^2; % a simple function to find the maximum of
minusf = @(x) -1*f(x); % minus f; find the minimum of this function
x = linspace(-2,2,100);
plot(x, f(x));
xmax = fminsearch(minusf, -1);
hold on
plot(xmax,f(xmax),'ro') % plot the minimum of minusf (maximum of f)
The resulting plot shows f with its maximum (at x = 1/3) marked by the red circle.
A really simple idea is to use a grid-search approach, maybe with mesh refinements (a sketch follows below). A better idea would be to use a more advanced derivative-free optimizer, such as the Nelder-Mead algorithm, which is available in fminsearch.
You could also try algorithms from the global optimization toolbox: for example patternsearch or the infamous simulannealbnd.
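As a baseline, the grid-search idea mentioned above takes only a few lines and needs no toolbox:
f = @(x) 2*x - 3*x.^2;        % same toy function as above
xgrid = linspace(-2, 2, 1e5); % dense sampling grid
[fmax, idx] = max(f(xgrid));  % maximize directly; no sign flip needed
xmax = xgrid(idx)             % analytic maximum is at x = 1/3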

Is my equation too complex for matlab to integrate?

I have a code that needs to evaluate the arc length equation below:
syms x
a = 10; b = 10; c = 10; d = 10;
fun = 4*a*x^3+3*b*x^2+2*c*x+d
int((1+(fun)^2)^.5)
but all it returns is:
ans = int(((40*x^3 + 30*x^2 + 20*x + 10)^2 + 1)^(1/2), x)
Why won't MATLAB evaluate this integral? I added a line below to check whether it would evaluate int(x), and it returned the desired result.
Problems involving square roots of functions can be tricky to integrate. I am not sure whether a closed form exists or not, but if you look up the integral of the square root of a second-order polynomial you will see that even that one is already quite a mess. What you would have here, if you expanded the function inside the square root, would be a sixth-order polynomial. If a closed form actually exists, it may be too complex to be of any use.
Anyway, if you think about it, would anyone really become any wiser by finding the analytical solution of this? If not, a numerical solution should be sufficient.
EDIT
As thewaywewalk said in the comments, a general rule for calculating these kinds of integrals would be valuable, but knowing the primitive function of this particular integral would probably be overkill (if a closed form could be found at all).
Instead define the function as an anonymous function
fun = @(x) sqrt((4*a*x.^3+3*b*x.^2+2*c*x+d).^2+1);
and use integral to evaluate the function over some range, e.g.,
integral(fun,0,100);
to evaluate the integral numerically over the closed interval [0,100].
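Putting it together with the constants from the question, a runnable version reads:
a = 10; b = 10; c = 10; d = 10;
fun = @(x) sqrt((4*a*x.^3 + 3*b*x.^2 + 2*c*x + d).^2 + 1);
L = integral(fun, 0, 100) % numeric arc length over [0, 100]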