Understanding fminunc arguments and anonymous functions, function handles - matlab

Please bear with me. The question is at the end. I am trying to figure out the difference in how fminunc is called.
This question stems from Andrew Ng's Week 3 material in his Coursera Machine Learning course.
I am bouncing off of this question: Matlab: Meaning of @(t)(costFunction(t, X, y)) from Andrew Ng's Machine Learning class
I am trying to understand the meaning of the argument
@(t) ( costFunction(t, X, y) )
User Wolfie showed it as being a shortened version. Could anyone explain why the expression itself has to be like that? In the video lecture, he ran the function like this:
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options)
where costFunction inputs and outputs from the program file are given as:
function [jVal, gradient] = costFunction(theta)
The exercise provided code has this version of the function:
costFunction(theta, X, y).
Why wasn't fminunc called as in the second case, without the anonymous function? That is, why was it called as:
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
instead of as:
[theta, cost] = fminunc(@costFunction, initial_theta, options); ?

The cost function should take the parameter(s) to be optimized as input and return the function value to be minimized. When the cost function needs inputs other than the to-be-optimized parameter(s), the anonymous-function form does the trick:
funHandle = @(t) ( costFunction(t, X, y) );
This lets you pass the extra inputs X and y alongside the to-be-optimized t. You can check this link from MathWorks for more information.
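To see the difference concretely, here is a minimal sketch with made-up toy cost functions (f1 and f2 are hypothetical, not from the course):
% Toy one-argument cost: its bare handle can be passed straight to fminunc.
f1 = @(t) (t - 3).^2;
tBest = fminunc(f1, 0);              % finds t close to 3
% Toy cost with an extra input a: fminunc needs a one-argument wrapper.
a = 3;
f2 = @(t, a) (t - a).^2;
tBest = fminunc(@(t) f2(t, a), 0);   % the wrapper fixes a = 3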

Related

fzero() Matlab function for complicated functions

I have the following function:
y = 1.2*sin(x) + 2*log(x+2) - 5;
I am looking for zeros of that function using the fzero() function (just for testing; I am aware of other methods).
I received an error and am looking for a solution. fzero() is for nonlinear functions, but what about complex-valued ones? Do you know a method similar to fzero()?
The function in the example becomes complex-valued (log(x+2) is singular at x = -2 and complex for x < -2), but you can treat this case by looking at its real part, finding the zero, and then checking that the imaginary part there is zero:
yr = @(x) real(1.2*sin(x) + 2*log(x+2) - 5);  % real part only
fr = fzero(yr, 0)
fr =
    6.8458
y = @(x) 1.2*sin(x) + 2*log(x+2) - 5;
y(fr)   % the residual at the root; the imaginary part is zero here
ans =
  -8.8818e-16
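For reference, starting fzero where the function value is complex reproduces the kind of error the question mentions (a sketch; the exact message depends on the MATLAB version):
y = @(x) 1.2*sin(x) + 2*log(x+2) - 5;
% fzero(y, -10)  % errors: y(-10) is complex, and fzero requires finite,
%                % real function values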

Matlab: Meaning of @(t)(costFunction(t, X, y)) from Andrew Ng's Machine Learning class

I have the following code in MATLAB:
% Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
My instructor has explained the minimising function like so:
To specify the actual function we are minimizing, we use a "short-hand"
for specifying functions, like @(t)(costFunction(t, X, y)). This
creates a function, with argument t, which calls your costFunction. This
allows us to wrap the costFunction for use with fminunc.
I really cannot understand what @(t)(costFunction(t, X, y)) means. What are both of the ts doing? What kind of expression is that?
In Matlab, this is called an anonymous function.
Take the following line:
f = @(t)( 10*t );
Here, we are defining a function f, which takes one argument t, and returns 10*t. It can be used by
f(5) % returns 50
In your case, you are using fminunc which takes a function as its first argument, with one parameter to minimise over. This could be called using
X = 1; y = 1; % Define the extra variables; fminunc will not pass these,
              % but they must exist in the workspace to be captured below.
f = @(t)(costFunction(t, X, y)); % Explicitly define costFunction as a function of t alone
[theta, cost] = fminunc(f, 0, options);
This can be shortened by not defining f first, and just calling
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), 0, options);
Further reading
As mentioned in the comments, here is a link about parameterising functions in general.
Specifically, here is a documentation link about anonymous functions.
Just adding to Wolfie's response. I was confused as well and asked a similar question here:
Understanding fminunc arguments and anonymous functions, function handles
The approach here is one of the three described in the link below. The problem that the anonymous function (approach 1 of the 3 in the link) solves is that the solver fminunc only optimizes over one argument of the function passed to it. The anonymous function @(t)(costFunction(t, X, y)) is a new function that takes in only one argument, t, and then passes this value to costFunction. You will notice that in the video lecture what was entered was just @costFunction, and this worked because that costFunction only took one argument, theta.
https://www.mathworks.com/help/optim/ug/passing-extra-parameters.html
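For reference, this is why the bare-handle call from the lecture works (a sketch; costFunction here is the lecture's single-argument version):
options = optimset('GradObj', 'on', 'MaxIter', 400);
% costFunction takes only theta, so no wrapper is needed:
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options);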
I also had the same question. Thanks to the link provided by Wolfie about parameterized and anonymous functions, I was able to clarify my doubts. Perhaps you have already found your answer, but I am explaining it once again for people who might have this query in the future.
Let's say we want to differentiate a polynomial and find its minimum/maximum value. Our code is:
m = 5;
fun = @(x) x^2 + m; % function that takes one input x; 'm' is captured as a constant
x = derive(fun, 0); % fun passed as an argument (derive is the user's own function, not a built-in)
As per the above code, 'fun' is a handle that points to our anonymous function, f(x) = x^2 + m. It accepts only one input, x. The advantage of an anonymous function is that one doesn't need to create a separate program file for it. The constant 'm' is captured from the current workspace at the moment the anonymous function is created.
The above code can be shortened by:
m = 5;
x = derive(@(x) x^2 + m, 0); % passed the anonymous function directly as argument
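A small sketch of that capture behaviour (the numbers here are made up):
m = 5;
fun = @(x) x.^2 + m;   % m is captured by value, right now
m = 100;               % changing m afterwards does not affect fun
fun(2)                 % still returns 9, not 104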
Our target is to find the global optimum, so I think the idea here is to get a bunch of local minima by changing the step size alpha and comparing them with each other to see which one is the best.
To achieve this, you initialize fminunc with the value initial_theta.
fminunc sets t = initial_theta, then computes costFunction(t, X, y), which is equal to costFunction(initial_theta, X, y); you get the cost and also the gradient.
fminunc then computes a new_theta from the gradient and a step size alpha, sets t = new_theta, and computes the cost and gradient again.
It loops like this until it finds a local optimum.
Then it changes the length of alpha and repeats the above to get another optimum. At the end it compares the optima and returns the best one.

Numerical integration of symbolic differentiation - MATLAB

The following is a MATLAB problem.
Suppose I define a function f(x, y).
I want to calculate the partial derivative of f with respect to y, evaluated at a specific value of y, e.g., y = 2. Finally, I want to integrate this new function (which is then a function of x only) over a range of x.
As an example, this is what I have tried
syms x y;
f = @(x, y) x.*y.^2;
Df = subs(diff(f,y),y,2);
Int = integral(Df, 0, 1);
but I get the following error.
Error using integral (line 82)
First input argument must be a function
handle.
Can anyone help me in writing this code?
To solve the problem, matlabFunction was required. The solution looks like this:
syms x y
f = @(x, y) x.*y.^2;
Df = matlabFunction(subs(diff(f,y), y, 2));
Int = integral(Df, 0, 1);
Keeping it all symbolic, using sym/int:
syms x y;
f = @(x, y) x.*y.^2;
Df = diff(f,y);
s = int(Df,x,0,1)
which returns y. You can substitute 2 for y here, or earlier as you did in your question. Note that this gives you an exact answer in this case, with no floating-point error, as opposed to integral, which computes the integral numerically.
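Continuing the symbolic snippet above, the substitution step could look like this:
val = double(subs(s, y, 2))   % returns 2, matching the numeric result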
When Googling for functions in MATLAB, make sure to pay attention to which toolbox they are in and which classes (data types) they support for their arguments. In some cases there are overloaded versions with the same name, but in others you may need to look around for a different method (or devise your own).
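For example, you can list every overload of a name to see which toolboxes and classes define it:
which -all diff   % lists the built-in diff along with sym/diff, etc.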

Call functions with @() in MATLAB [duplicate]

This question already has answers here:
matlab to R: function calling and @
(2 answers)
Closed 8 years ago.
I'm trying to figure out what is the purpose of @(t) in the following code snippet:
[theta] = ...
fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
initial_theta, options);
lrCostFunction:
function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with
%regularization
% J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
and options:
options = optimset('GradObj', 'on', 'MaxIter', 50);
I'd appreciate some explanation. Thanks in advance.
Let me answer your question focusing on the anonymous function itself.
The following function, defined in a separate .m file
function y = foo(x, a, b)
y = x^(a-b);
end
is equivalent to defining an anonymous function in the main script
bar = @(x, a, b) x^(a-b);
When your main script calls foo(5, 1, 2), MATLAB searches the working directory (and the search path), then reads and executes the code in the file foo.m. By contrast, when you run the line bar(5, 1, 2), MATLAB calls a "function handle" and treats it as a function (though its power is limited to a single expression; you can't easily perform things like switch or for).
Sometimes we need to wrap a function into an easier-to-use one. Consider a case where we want to evaluate foo 1000 times, but only the input x changes, while a and b remain the same. It's of course OK to write foo(x, 1, 2) in the for loop, but you can also wrap the function before going into the loop.
a = 1;
b = 2;
foobar = #(x) foo(x, a, b);
When you call foobar(5), MATLAB first invokes the function handle foobar, taking 5 as its only input. That function handle has one instruction: call another function (or function handle, if you define it so) named foo. The arguments of foo are: x, which is supplied when the user calls foobar(x); and a and b, which were defined BEFORE the function-handle definition was executed, as in the loop sketched below.
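A sketch of the 1000-evaluation loop from above (assuming foo.m exists as defined earlier):
a = 1;  b = 2;
foobar = @(x) foo(x, a, b);
vals = zeros(1, 1000);
for k = 1:1000
    vals(k) = foobar(k);   % same as calling foo(k, a, b)
end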
In your case, fmincg only accepts, as its first argument, a function that has a single input argument. But lrCostFunction takes four. fmincg doesn't know how to treat X, y, or lambda (I don't either). So it's your job to wrap the cost function into a form that a general optimizer can understand. That also requires you to assign X, y, c and lambda in advance, as sketched below.
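Applied to the code in the question, the wrapper could be written out explicitly like this:
% X, y, c, lambda and initial_theta must already exist in the workspace.
wrapped = @(t) lrCostFunction(t, X, (y == c), lambda);  % one input: t
theta = fmincg(wrapped, initial_theta, options);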
What is it?
@(t) creates a function with argument t that calls your costFunction(t, X, y), so if you write
fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
initial_theta, options);
it will call your function lrCostFunction and pass it those values.
Why we need it
It allows us to use the optimization function fmincg provided with the course materials (MATLAB itself doesn't have an fmincg function, AFAIK). It takes your costFunction and optimises it using the settings that you provide.
Optimization Settings
optimset('GradObj', 'on', 'MaxIter', 50) lets you set the optimization options required for the minimization problem mentioned above: 'GradObj','on' tells the solver that your function also returns the gradient, and 'MaxIter',50 caps the number of iterations.
All information is from Andrew Ng's classes. I hope it helps.
Correct me if I am wrong.

Matlab minimization with fminsearch and parametrized function

I am writing a program in Matlab and I have a function defined this way.
sum (i=1...100) (a*x(i) + b*y(i) + c)
x and y are known, while a, b and c are not: I need to find values for them such that the total value of the function is minimized. There is no additional constraint for the problem.
I thought of using fminsearch to solve this minimization problem, but from the MathWorks documentation I gather that functions suitable as inputs for fminsearch are defined like this (an example):
square = @(x) x.^2
So in my case I could use a vector p = [a, b, c] as the variable to minimize over, but then I don't know how to define the remaining part of the function. As you can see, the number of values the index i takes is large, so I cannot simply write the sum out explicitly; I need to represent the summation in some way. If I write the function somewhere else, I am forced to use symbolic calculus for a, b and c (declaring them with syms), and I'm not sure fminsearch would accept that.
What can I do? Of course if fminsearch turns out to be unfeasible for my situation I accept links to use something else.
The most general solution is to use x and y in the definition of the objective function:
>> objfun = @(p) sum( p(1).*x + p(2).*y + p(3) );
>> optp = fminsearch( objfun, p0, ... );
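A runnable sketch with made-up data follows (note that this particular objective is linear in p, so it is unbounded below; fminsearch will stop at its evaluation limit rather than at a true minimum):
x = rand(100, 1);                               % made-up data
y = rand(100, 1);
objfun = @(p) sum( p(1).*x + p(2).*y + p(3) );  % p = [a; b; c]
p0 = [0; 0; 0];                                 % initial guess
optp = fminsearch(objfun, p0);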