fmincon optimization with defined Matlab function - matlab

Is it possible to use the optimization function fmincon with a Matlab defined function?
I wrote a function where I give a few constant parameters (real or complex), and every time I change these parameters, the result changes (you don't say).
[output1, output2] = my_function(input1,input2,input3,input4)
I saw that the fmincon function allows one to find the optimal result subject to given constraints. Let's say I want to find the optimal output acting only on input1 and keeping all the other inputs constant. Is it possible to define something like
fmincon(@(input1)my_function,[1,2],[],mean)
where input1 goes from 1 to 2, for the best value mean, where mean is the mean value of some other results.
I know this is quite a vague question, but I'm not able to give a minimal example, since the function does a lot of things.
My first attempt with multiple outputs gave me the error Only functions can return multiple values.
Then I tried with only one output and if I use
output1 = @(input1)function(input2,input3);
fmincon(@output1,[1,2],[],mean)
I get the error
Error: "output1" was previously used as a variable, conflicting with its use here as the name of a function or command.
See "How MATLAB Recognizes Command Syntax" in the MATLAB documentation for details.
With fmincon(@my_function,[1,2],[],mean) I get Not enough input arguments.

The input should appear in your function definition; read up on how anonymous functions are written. You don't have to use an anonymous function to define the actual objective function (myFunction below); you can use a function in its own file. The key is that the objective function should return a scalar to be minimised.
Here is a very simple example, using fmincon to find the minimum of myFunction, based on the initial guess [1.5,1.5].
% myFunction is minimised when x=1, y=2
myFunction = @(x,y) (x-1).^2 + (y-2).^2;
% Define the optimisation function.
% This should take one input (can be an array)
% and output a scalar to be minimised
optimFunc = @(P) myFunction( P(1), P(2) );
% Use fmincon to find the optimum solution, based on some initial guess.
% The empty [] inputs stand in for the (unused) linear constraints A and b.
optimSoln = fmincon( optimFunc, [1.5, 1.5], [], [] );
% >> optimSoln
% optimSoln =
%     0.999999990065893   1.999999988824129
% Optimal x = optimSoln(1), optimal y = optimSoln(2)
You can see the calculated optimum isn't exactly [1,2], but it's within the default optimality tolerance. You can change the options for the fmincon solver; read the documentation.
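For example, a minimal sketch of tightening the tolerance (note this uses the full fmincon signature, with empty [] placeholders for the unused constraint inputs; on older MATLAB releases the option is called TolFun rather than OptimalityTolerance):
opts = optimoptions('fmincon', 'OptimalityTolerance', 1e-12);
optimSoln = fmincon(optimFunc, [1.5, 1.5], [], [], [], [], [], [], [], opts);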
If you wanted to keep y=1 as a constant, you just need to update the function definition:
% We only want solutions with y=1
optimFunc_y1 = @(P) myFunction( P(1), 1 ); % y=1 always
% Find the new optimal solution
optimSoln_y1 = fmincon( optimFunc_y1, 1.5, [], [] );
% >> optimSoln_y1
% optimSoln_y1 =
%     0.999999990065893
% Optimal x when y=1 is optimSoln_y1
You can add inequality constraints using the A, b, Aeq and beq inputs to fmincon, but that's too broad to go into here; please refer to the docs.
Note that you're using the keyword function in a way which is invalid syntax. I've instead used valid variable names for the functions in my demo.
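To connect this back to your original question, here is a minimal sketch, under the assumptions that my_function's first output is the scalar you want to minimise and that input2, input3 and input4 already exist in the workspace. It restricts input1 to the range [1, 2] using fmincon's lower/upper bound inputs rather than the A/b matrices:
obj = @(input1) my_function(input1, input2, input3, input4); % only the first output is used
best_input1 = fmincon(obj, 1.5, [], [], [], [], 1, 2); % lb = 1, ub = 2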

Related

Matlab set range of inputs and step

I have this signal:
x(t) = t*sin(m*pi*t)*heaviside(-m*pi-t)+t*cos(k*pi*t)*heaviside(t-k*pi)+sin(k*pi*t)*cos(m*pi*t)*(heaviside(t+m*pi)-heaviside(t-k*pi));
and I want to calculate the values only from -5pi to 5pi with a step of pi/100 using Matlab. How could I do it?
Provided you have defined m and k somewhere, and you have the MATLAB Symbolic Math Toolbox, which provides the heaviside function, this is how it is done:
% First we define the function (note the elementwise .* operators, so it accepts a vector)
x = @(t) t.*sin(m*pi*t).*heaviside(-m*pi-t) + t.*cos(k*pi*t).*heaviside(t-k*pi) + sin(k*pi*t).*cos(m*pi*t).*(heaviside(t+m*pi)-heaviside(t-k*pi));
% Then we define the values for which we want to compute the function
t_values = -5*pi:pi/100:5*pi;
% Finally we evaluate the function
x_values = x(t_values)
Details
In the first line we define your function as an anonymous function, which is a handy tool in MATLAB. The elementwise .* operators let it operate on a whole vector of t values at once.
Then we create a vector of values from -5*pi to 5*pi with steps of pi/100. For this we use the MATLAB colon syntax, which makes it short and efficient.
Finally we evaluate the function at each of the t_values by passing the vector to the anonymous function.
Note: If you don't have the symbolic toolbox, you could easily implement heaviside yourself.
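For instance, a one-line replacement that matches the toolbox convention heaviside(0) = 0.5:
heaviside = @(t) 0.5*(sign(t) + 1); % 0 for t<0, 1 for t>0, 0.5 at t=0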

Minimizing Function with vector valued input in MATLAB

I want to minimize a function like below (the generalised Rosenbrock function, reconstructed here from the code in the answer since the original image is missing):
f(x) = sum over i = 1,...,n-1 of 100*(x(i+1) - x(i)^2)^2 + (1 - x(i))^2
Here, n can be 5, 10, 50 etc. I want to use Matlab, and I want to use Gradient Descent and the Quasi-Newton Method with BFGS update to solve this problem, along with backtracking line search. I am a novice in Matlab. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But, I really don't know how to create a vector-valued function in Matlab (in my case input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through some basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum needed to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by vector indexing, and that
the .^ operator squares its operand elementwise (see the quick illustration after this list).
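A quick illustration of both points with a hypothetical 3-element vector:
x = [1; 2; 3];
x(2:end) - x(1:end-1).^2 % gives [2 - 1^2; 3 - 2^2] = [1; -1]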
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
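Since you also asked about implementing gradient descent with backtracking line search yourself, here is a minimal from-scratch sketch for the same objective. This is not fminunc's algorithm; the gradient formula was derived by hand from the objective above, and the constants (step halving, Armijo constant 1e-4, iteration cap) are tunable assumptions you should verify:
objfun  = @(x) sum(100*(x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
objgrad = @(x) [-400*x(1:end-1).*(x(2:end) - x(1:end-1).^2) - 2*(1 - x(1:end-1)); 0] ...
    + [0; 200*(x(2:end) - x(1:end-1).^2)];
x = rand(10,1); % random starting guess, n = 10
for iter = 1:10000
    g = objgrad(x);
    if norm(g) < 1e-6, break; end % gradient small enough: stop
    t = 1; % backtracking (Armijo) line search
    while objfun(x - t*g) > objfun(x) - 1e-4*t*(g'*g)
        t = t/2; % shrink the step until sufficient decrease holds
    end
    x = x - t*g; % take the damped gradient step
end
Expect this to need far more iterations than the quasi-Newton run above; plain gradient descent is notoriously slow on Rosenbrock-type valleys.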

Matlab. Poisson fit. Factorial

I have a histogram that seems to fit a poisson distribution.
In order to fit it, I declare the function myself as follows
xdata; ydata; % Arrays in which I have stored the data.
%Ydata tell us how many times the xdata is repeated in the set.
fun = @(x,xdata) (exp(-x(1))*(x(1).^(xdata)) )/(factorial(xdata)) % Function I
% want to use in the fit. It is a Poisson distribution.
x0=[1]; %Approximated value of the parameter lambda to help the fit
p=lsqcurvefit(fun,x0,xdata,ydata); % Fit in the least square sense
I find an error. It probably has to do with the "factorial". Any ideas?
factorial outputs a vector when given the vector xdata, so plain / division will not work on it.
For example:
data = [1 2 3];
factorial(data) is then [1! 2! 3!].
Try ./factorial(xdata) (the dot is necessary here: without it, / attempts matrix right-division on the vector instead of elementwise division).
You can use the gamma(xdata+1) function instead of factorial(xdata). The gamma function is a generalised form of the factorial function which can be used for real and complex numbers (factorial itself only accepts non-negative integers). Thus, your code would be:
fun = @(x,xdata) exp(-x(1))*x(1).^xdata./gamma(xdata+1);
x = lsqcurvefit(fun,1,xdata,ydata);
Alternatively, you can use the MATLAB fitdist function, which is already optimized, and you might get better results:
pd = fitdist(xdata,'Poisson','Frequency',ydata);
pd.lambda
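A quick way to convince yourself that gamma(xdata+1) stands in for factorial(xdata) on integer inputs (for small integers the two agree exactly):
n = 0:5;
isequal(gamma(n + 1), factorial(n)) % true: both give [1 1 2 6 24 120]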

NOT CONVERGE: use Newton Raphson-Method to find root of nonlinear equations

I tried non-linear polynomial functions and this code works well. But for this one I have tried several methods to solve the linear system df0*X = f0, using backslash, bicg, and lsqr, and also tried several initial values, but the result never converges.
% Define the given function
syms x1 x2 x3
x = [x1, x2, x3];
f(x) = [3*x1 - cos(x2*x3) - 1/2; x1^2 + 81*(x2+0.1)^2 - sin(x3) + 1.06; ...
    exp(-x1*x2) + 20*x3 + 1/3*(10*pi-3)];
% Define the stopping criteria based on Niter or relative errors
tol = 10^-5;
Niter = 100;
df = jacobian(f, x);
x0 = [0.1; 0.1; -0.1];
% Setting starting values
error = 1;
i = 0;
% Start the Newton-Raphson iteration
while (abs(error) > tol)
    f0 = eval(f(x0(1), x0(2), x0(3)));
    df0 = eval(df(x0(1), x0(2), x0(3)));
    xnew = x0 - df0\f0; % also tried lsqr(df0,f0), bicg(df0,f0)
    error = norm(xnew - x0);
    x0 = xnew;
    i = i + 1
    if i >= Niter
        fprintf('Iteration count exceeded Niter\n');
        return;
    end
end
You'll need anonymous functions here to do the job better (we mentioned them in passing today!).
First, let's get the function definition down. Anonymous functions are nice ways for you to call things in a manner similar to mathematical functions. For example,
f = @(x) x^2;
is a squaring function. To evaluate it, just write f(2), say, like you would on paper. Since you have a multivariate function, you'll need to vectorise the definition as follows:
f = @(x) [3*x(1) - cos(x(2) * x(3)) - 1/2; ...
For your Jacobian, you'll need to use another anonymous function (maybe call it grad_f), compute it on paper, then code it in. A numerically approximated Jacobian (e.g. via finite differences) lets errors pile up when the Jacobian is not stable in some regions, which is why computing it by hand is safer here.
The key is to just be careful and use some good coding practices. See this document for more info on anonymous functions and other good MATLAB practices.
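A minimal numeric sketch of that advice (the Jacobian grad_f below was differentiated by hand from the system in the question; double-check it against your own derivation before relying on it):
f = @(x) [3*x(1) - cos(x(2)*x(3)) - 1/2; ...
    x(1)^2 + 81*(x(2) + 0.1)^2 - sin(x(3)) + 1.06; ...
    exp(-x(1)*x(2)) + 20*x(3) + (10*pi - 3)/3];
grad_f = @(x) [3, x(3)*sin(x(2)*x(3)), x(2)*sin(x(2)*x(3)); ...
    2*x(1), 162*(x(2) + 0.1), -cos(x(3)); ...
    -x(2)*exp(-x(1)*x(2)), -x(1)*exp(-x(1)*x(2)), 20];
x0 = [0.1; 0.1; -0.1];
for i = 1:100
    dx = -grad_f(x0) \ f(x0); % Newton step: solve J*dx = -f
    x0 = x0 + dx;
    if norm(dx) < 1e-5, break; end
end
x0 % should approach roughly [0.5; 0; -pi/6] for this classic system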

Matlab optimization: what types of objective functions are 'allowed' with fminsearch.m and Co.?

Examples of optimization with functions like fmincon.m and fminsearchbnd.m usually minimize objective functions that are relatively simple. By simple I mean that the objective function consists only of some algebraic expression, e.g. the Rosenbrock formula.
In my problem, on the other hand, the objective function consists of several steps, including
computing an L2-norm misfit between an observed data point and a set of n training data points (n~5e4)
selecting those data points from the training data set that give the lowest misfit
then using the row indices of this selected subset to compute the final distance that I intend to minimize.
i.e. I perform operations that cannot be formulated as a single mathematical expression. Can I use such an objective function with tools like fminsearchbnd.m or fmincon.m at all? My results so far are not very promising...
There is an easy and obvious solution for that. You can use fminsearch() to find a minimum of some self-defined function. In my example it is fitting a polynomial, which of course is easy, but the trick is that this could be anything. You can access the data if you make your objective function a nested function, so that they share the same variable scope.
You can start from the following code and fill in everything you want to do part by part and maybe ask followup questions, if any come up.
function main
    verbose = 1; % some output
    % optimize something, maybe a distorted polynomial
    x = sort(rand(20,1));
    p_original = [1.5, 3, 2, 1];
    y = polyval(p_original,x) + 0.5*(rand(size(x))-0.5);
    % optimize polynomial of order order. This is an example of how to pass
    % a parameter to the fit function.
    order = 3;
    % obvious solution is this, but we want to do something else
    p_polyfit = polyfit(x,y,order)
    % we want to do it a bit more complex
    pfit = optimize_something(x, y, order, verbose)
    % what is happening?
    figure
    plot(x,polyval(p_original,x),'k-')
    hold on
    plot(x,y,'ko')
    plot(x,polyval(p_polyfit,x),'rs-')
    plot(x,fit_function(x,pfit),'gx-')
    legend('original','noisy','polyfit','optimization')
end

function pfit = optimize_something(x, y, order, verbose)
    % for a polynomial of order order we need order+1 coefficients
    p0 = ones(1,order+1); % initial guess: all coefficients are 1
    if verbose
        fprintf('optimize_something calling fminsearch(@objFun)\n');
    end
    % hand over only p0 to our objective function
    pfit = fminsearch(@objFun, p0);
    % ------------------------- NESTED objFun --------------------------------%
    function e = objFun(p)
        % This function accepts only p as parameter and returns a value e,
        % which will be minimized by some metric (maybe least squares).
        % Since this function is nested, it can also use the predefined
        % variables x, y (and also p0 and verbose).
        % The magic is, we calculate a value yfitted out of x and p by a
        % fit_function. This function can really be anything!
        yfitted = fit_function(x, p);
        e = sum((yfitted-y).^2);
        % e = sum(abs(yfitted-y)); % another possibility
    end
    % ------------------------- NESTED objFun --------------------------------%
    if verbose
        disp('pfit found')
    end
end

function yfitted = fit_function(x, p)
    % In our example we want to fit a polynomial, so we do so. We evaluate the
    % polynomial p at x.
    yfitted = polyval(p,x);
    % But it could be anything, really... each value in p could be something
    % else, maybe the sum of an exponential function and a straight line:
    % yfitted = p(1)*exp(p(2)*x) + p(3)*x + p(4);
end
You can try to use CVX. It is an add-on for MATLAB that lets you describe your optimisation problem with normal MATLAB code.
Alternatively, write down your objective function including any constraints. Your description is not clear to me, and it would help you too if you wrote this down in actual formulae.
I read your steps as this:
"Computing an L2-norm between an observed data point and a set of n training data points." It seems that there is a total of one (1) observed data points. Let's call the observed point x. Let's call the training data points y_i for i=1..n.
The L2-Norm is: |x-y_i|.
"Selecting those data points [multiple?] that give the lowest misfit". You haven't said how many data points you want, and how you'd combine multiple points to give a single L2-Norm. Let's assume you want exactly one such point (the closest to the observed data point x). Thus you get: argmin (over i) |x-y_i|. If you have multiple, you could greedily take the k closest points.
"Then using the row indices of this selected subset to compute the final distance that I intend to minimize." And what is the final distance that you intend to minimize?