How to pass variable constraints while minimizing a function with multiple variables and return the variables? - MATLAB

Assume I have a function with 5 variables, and each one has a range constraint. I want to find the minimum of the function, as well as the values of those 5 variables which are needed to obtain that minimum. I am using fminsearch.
func = @(x, y, z, k, m) (--some-function-which-depends-on-those-5-variables);
Assume I have above function that I want to minimize.
range_x = [12, 24];
range_y = [13.3, 30.2];
range_z = [1.4, 4.7];
range_k = [1.2, 1.4];
range_m = [4.12, 12.2];
and the above ranges.
??? = fminsearch(@(x) func(x(1), x(2), x(3), x(4), x(5)), ???)
I am currently using the fminsearch function. However, I am stuck on how to apply the ranges, and on how to extract the minimum value along with the 5 variable values that produce it.
Thanks in advance.

fminsearch, as per the documentation, is for unconstrained minimization, i.e. you don't put limits on the variables.
fmincon, instead, is for constrained minimization.
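A minimal sketch of the fmincon pattern for this question; the objective below is a made-up placeholder (replace it with your own function), and the point is the bound arguments lb and ub:

```matlab
% Placeholder objective -- substitute your own 5-variable function here.
func = @(x, y, z, k, m) (x - 18).^2 + (y - 20).^2 + z.^2 + k.*m;

lb = [12, 13.3, 1.4, 1.2, 4.12];  % lower bounds for x, y, z, k, m
ub = [24, 30.2, 4.7, 1.4, 12.2];  % upper bounds
x0 = (lb + ub)/2;                 % start somewhere inside the box

% The empty [] arguments mean "no linear or nonlinear constraints";
% only the bounds lb <= v <= ub are enforced.
[vmin, fmin] = fmincon(@(v) func(v(1), v(2), v(3), v(4), v(5)), ...
                       x0, [], [], [], [], lb, ub);
% vmin(1..5) are the minimizing values of x, y, z, k, m; fmin is the minimum.
```

fmincon requires the Optimization Toolbox; the two outputs give you both the minimizing variables and the minimum value the question asks for.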

Related

Solving a non linear equation multiple times with different parameter values in MATLAB

I have a non-linear equation with a constant 'X'.
I want to obtain the solution of the equation for 10 different values of 'X'.
What is the best way of doing it in MATLAB?
Employing fsolve, I thought of doing this with a loop which runs 10 times. But the problem is that it's not possible to send the value of 'X' as a parameter to the function which is called by fsolve (according to its syntax, fsolve can send only the initial guess value) and which contains the non-linear equation.
This is my MATLAB code:
function f = crd(m)
X=0.1; %Parameter whose different values I want to solve the NLE for, using a loop
t=(1/(0.8/3600))*log(1/(1-X));
U=350; P=0.1134; T=165; L=21.415; %Constants
a=0.00102*820*2200/(U*P); %Constant
Q=(0.8/3600)*900*exp(-(0.8/3600)*t)*0.9964*(347.3*1000);
Tmi=60; %Constant
b=m*2200;
q=(a/t)+(b/L);
f = ( b - (Q/(T-Tmi)) ) * (b/(L*L*q*q)) - exp(-1/q);
end
Changing the value of the parameter 'X' each time, I can use "fsolve(@crd,10)" from the Command Window multiple times. But I want to do this using a loop.
I want to get solution for X=0.1,0.2,...,0.9,1.0
One possible way to do this is to change your function to accept two parameters: m and X:
function f = crd(m,X)
....
end
Then, when you want to call fsolve, you can create a temporary function that accepts a single parameter m, and pass an actual value for X to it. That is:
for X = [0.1, 0.2, 0.3, 0.4]
    f = @(m) crd(m,X); % a function that accepts m as input
    sol = fsolve(f, 10);
    ...
end
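For completeness, a sketch of the full loop over the requested values that stores each solution, assuming crd(m,X) is defined as above. Note that X = 1 makes t infinite inside crd (log(1/(1-X)) diverges), so the final iteration may fail:

```matlab
Xvals = 0.1:0.1:1.0;
sols = zeros(size(Xvals));        % preallocate one solution per X
for i = 1:numel(Xvals)
    f = @(m) crd(m, Xvals(i));    % freeze the current X into the function
    sols(i) = fsolve(f, 10);      % same initial guess each time
end
```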

Matlab: Meaning of @(t)(costFunction(t, X, y)) from Andrew Ng's Machine Learning class

I have the following code in MATLAB:
% Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
My instructor has explained the minimising function like so:
To specify the actual function we are minimizing, we use a "short-hand"
for specifying functions, like @(t)(costFunction(t, X, y)). This
creates a function, with argument t, which calls your costFunction. This
allows us to wrap the costFunction for use with fminunc.
I really cannot understand what @(t)(costFunction(t, X, y)) means. What are both ts doing? What kind of expression is that?
In Matlab, this is called an anonymous function.
Take the following line:
f = @(t)( 10*t );
Here, we are defining a function f, which takes one argument t, and returns 10*t. It can be used by
f(5) % returns 50
In your case, you are using fminunc which takes a function as its first argument, with one parameter to minimise over. This could be called using
X = 1; y = 1; % Defining variables which aren't passed into the costFunction
% but which must exist for the next line to pass them as anything!
f = @(t)(costFunction(t, X, y)); % Explicitly define costFunction as a function of t alone
[theta, cost] = fminunc(f, 0, options);
This can be shortened by not defining f first, and just calling
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), 0, options);
Further reading
As mentioned in the comments, here is a link to generally parameterising functions.
Specifically, here is a documentation link about anonymous functions.
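A tiny sketch of the capture behavior that makes this wrapping work: an anonymous function snapshots the workspace variables (here X, standing in for X and y above) at the moment it is defined:

```matlab
X = 2;
f = @(t) X * t;  % the current value X = 2 is captured inside f
X = 100;         % later changes to X do not affect f
f(5)             % returns 10, not 500
```

This is why X and y must exist before the `f = @(t)(costFunction(t, X, y));` line runs, and why fminunc can then vary t alone.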
Just adding to Wolfie's response. I was confused as well and asked a similar question here:
Understanding fminunc arguments and anonymous functions, function handlers
The approach here is one of the three described in the link below. The problem the anonymous function solves is that the solver, fminunc, only optimizes one argument of the function passed to it. The anonymous function @(t)(costFunction(t, X, y)) is a new function that takes in only one argument, t, and later passes this value to costFunction. You will notice that in the video lecture what was entered was just @costFunction, and this worked because costFunction only took one argument, theta.
https://www.mathworks.com/help/optim/ug/passing-extra-parameters.html
I also had the same question. Thanks to the link provided by Wolfie on parameterized and anonymous functions, I was able to clarify my doubts. Perhaps you have already found your answer, but I am explaining once again for people who might have this question in the future.
Let's say we want to derive a polynomial, and find its minimum/maximum value. Our code is:
m = 5;
fun = @(x) x^2 + m; % function that takes one input: x, accepts 'm' as constant
x = derive(fun, 0); % fun passed as an argument
As per the above code, 'fun' is a handle that points to our anonymous function, f(x) = x^2 + m. It accepts only one input, i.e. x. The advantage of an anonymous function is that one doesn't need to create a separate program file for it. As for the constant 'm', the anonymous function captures whatever value resides in the current workspace at the time of definition.
The above code can be shortened by:
m = 5;
x = derive(@(x) x^2 + m, 0); % passed the anonymous function directly as argument
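Since derive here is just an illustrative name, the same pattern can be tried with a built-in solver; a sketch with fminsearch, which minimizes x^2 + m at x = 0 with m captured from the workspace:

```matlab
m = 5;
[xmin, fval] = fminsearch(@(x) x^2 + m, 1);  % m is captured at definition time
% xmin is approximately 0, fval approximately m = 5
```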
Our target is to find the global optimum, so I think the idea here is to get a bunch of local optima by changing the step size alpha and comparing them with each other to see which one is best.
To achieve this you initialize fminunc with the value initial_theta.
fminunc sets t = initial_theta, then computes costFunction(t, X, y), which is equal to costFunction(initial_theta, X, y). You get the cost and also the gradient.
fminunc then computes a new_theta from the gradient and a step size alpha, sets t = new_theta, and computes the cost and gradient again.
It loops like this until it finds a local optimum.
Then it changes the step size alpha and repeats the above to get another optimum. At the end it compares the optima and returns the best one.

How to resolve MATLAB trapz function error?

I am working on an assignment that requires me to use the trapz function in MATLAB in order to evaluate an integral. I believe I have written the code correctly, but the program returns answers that are wildly incorrect. I am attempting to find the integral of e^(-x^2) from 0 to 1.
x = linspace(0,1,2000);
y = zeros(1,2000);
for iCnt = 1:2000
y(iCnt) = e.^(-(x(iCnt)^2));
end
a = trapz(y);
disp(a);
This code currently returns
1.4929e+03
What am I doing incorrectly?
You just need to also specify the x values:
x = linspace(0,1,2000);
y = exp(-x.^2);
a = trapz(x,y)
a =
0.7468
More details:
First of all, in MATLAB you can use vectors to avoid for-loops when performing operations on arrays (vectors). So these four lines of code
y = zeros(1,2000);
for iCnt = 1:2000
y(iCnt) = exp(-(x(iCnt)^2));
end
will be translated to one line:
y = exp(-x.^2)
You defined x = linspace(0,1,2000), which means you want to calculate the integral of the given function over the range [0 1]. But calling trapz(y) without x assumes unit spacing between samples, so the integral is effectively computed over the index range [1 2000], and that is why you got such a big number as the result.
In addition, in MATLAB you should use exp; there is no function named e in MATLAB.
Also, if you plot the function over the range, you will see that the result makes sense, because the whole region has an area of only 1x1.
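The same fix can be written two equivalent ways, since trapz(y) assumes unit spacing between samples; rescaling by the actual grid spacing gives the same result as passing x directly:

```matlab
x = linspace(0, 1, 2000);
y = exp(-x.^2);
a1 = trapz(x, y);              % pass the sample points explicitly
a2 = trapz(y) * (x(2) - x(1)); % or rescale the unit-spaced result
% both give approximately 0.7468
```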

How to use fminsearch to find local maximum?

I would like to use fminsearch in order to find the local maximum of a function.
Is it possible to find a local maximum using fminsearch by "just" minimizing the negative return value of the function?
for example:
function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;
a = 1.5;
x = fminsearch(@(x) -1 * myfun(x,a),[0,1]);
Is it possible?
Update 1: In order to elaborate on my question and make it clearer (following some comments below), I'm adding this update:
By asking if it's possible to do so, I meant: is it a proper use of the fminsearch function - is it intended to be used for finding a maximum?
Update 2: for whoever is concerned with the same question - in addition to the correct answer below, here is the documentation from https://www.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-10
Maximizing Functions
The fminbnd and fminsearch solvers attempt to minimize an objective function. If you have a maximization problem, that is, a problem of the form
max x f(x), then define g(x) = –f(x), and minimize g.
For example, to find the maximum of tan(cos(x)) near x = 5, evaluate:
[x fval] = fminbnd(@(x)-tan(cos(x)),3,8)
x = 6.2832
fval = -1.5574
The maximum is 1.5574 (the negative of the reported
fval), and occurs at x = 6.2832. This answer is correct since, to five
digits, the maximum is tan(1) = 1.5574, which occurs at x = 2π =
6.2832.
Yes you can, that's also why there is no fmaxsearch function:
For example:
func = @(x) sin(x);
sol = fminsearch(@(x) func(x),0)
% sol = -pi/2 (the minimum of sin)
sol = fminsearch(@(x) func(x)*-1,0)
% sol = pi/2 (the maximum of sin)
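To also recover the maximum value itself (not just its location), negate the returned function value; a short sketch:

```matlab
[xmax, negval] = fminsearch(@(x) -sin(x), 0);
maxval = -negval;   % xmax is approximately pi/2, maxval approximately 1
```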

Solve Lognormal equation for mu and sigma given fixed y and x in Matlab

I have the equation of the lognormal:
y = 1/(3.14*x*sig)*exp(-(log(x)-mu)^2/(2*sig^2))
and for fixed
y = a
x = b
I need to find the values of mu and sig. I can set mu in Matlab like:
mu = [0 1 1.1 1.2...]
and find all the corresponding sig values, but I can't make it work with solve or subs. Any ideas?
Thanks!
Here's a proof of concept to use fzero to numerically search for a sigma(x,y,mu) function.
Assuming you have x,y fixed, you can set
mu = 1; %or whatever
myfun = @(sig) y-1./(3.14*x*sig).*exp(-(log(x)-mu)^2./(2*sig.^2)); %x,y,mu from workspace
sigma = fzero(myfun,1);
This will solve the equation
y-1/(3.14*x*sig)*exp(-(log(x)-mu)^2/(2*sig^2))==0
for sig starting from sig==1 and return it into sigma.
You can generalize it to get a function of mu:
myfun2 = @(mu,sig) y-1./(3.14*x*sig).*exp(-(log(x)-mu).^2./(2*sig.^2));
sigmafun = @(mu) fzero(@(sig)myfun2(mu,sig),1);
then sigmafun will give you a sigma for each value of mu you put into it. The parameters x and y are assumed to be set before the first anonymous function declaration.
Or you could get reaaally general, and define
myfun3 = @(x,y,mu,sig) y-1./(3.14*x*sig).*exp(-(log(x)-mu).^2./(2*sig.^2));
sigmafun2 = @(x,y,mu) fzero(@(sig)myfun3(x,y,mu,sig),1);
The main difference here is that x and y are fed into sigmafun2 on each call, so they can change. In the earlier cases the values of x and y were fixed in the anonymous functions at the time of their definition, i.e. when we issued myfun = @(sig).... Depending on your needs, choose whichever fits.
As a proof of concept, I didn't check how well it behaved for the actual problem. You should definitely have an initial idea of what kind of parameters you expect, since there will be many cases where there's no solution, and fzero will return a NaN.
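Putting the pieces together, a sketch that tabulates sigma over a grid of mu values, using the x = 100, y = 0.001 pair mentioned in the update below; note fzero may fail or return NaN for mu values where no root is bracketed:

```matlab
x = 100; y = 0.001;  % fixed values from the update below
myfun2 = @(mu,sig) y - 1./(3.14*x*sig).*exp(-(log(x)-mu).^2./(2*sig.^2));
sigmafun = @(mu) fzero(@(sig) myfun2(mu,sig), 1);
mus = 0:0.5:3;
sigs = arrayfun(sigmafun, mus);  % one sigma per mu value
```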
Update by Oliver Amundsen: the resulting sig(mu) function with x=100, y=0.001 looks like this: (plot not reproduced here)