Function handle formats in Octave - matlab

A function handle in Octave is defined as in the example below.
f = @sin;
From now on, calling f(x) has the same effect as calling sin(x). So far so good. My problem starts with the function below, from one of my programming assignments.
function sim = gaussianKernel(x1, x2, sigma)
The line above represents the header of the function gaussianKernel. This takes three variables as input. However, the call below messes up my mind because it only passes two variables and then three while referring to gaussianKernel.
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
Shouldn't that simply be model = svmTrain(X, y, C, @gaussianKernel(x1, x2, sigma));? What is the difference?

You didn't provide the surrounding code, but my guess is that the variable sigma is defined before the call model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));. This is an example of a parametrized anonymous function that captures the values of variables in the current scope, also known as a closure. MATLAB has good documentation for this very useful programming pattern.
The expression @gaussianKernel(x1, x2, sigma) would, at best, be equivalent to the bare handle @gaussianKernel. Using model = svmTrain(X, y, C, @gaussianKernel(x1, x2, sigma)); might not work in this case if the fourth argument of svmTrain is required to be a function with two input arguments.

The sigma variable is already defined somewhere else in the code, and the anonymous function pulls that value out of the existing scope.
The purpose of creating the anonymous function @(x1, x2) gaussianKernel(x1, x2, sigma) is to make a function that takes two arguments instead of three. If you look at the code in svmTrain, it takes a parameter kernelFunction and only ever calls it with two arguments. svmTrain itself is not concerned with the value of sigma; in fact, it only assumes that the kernelFunction it is passed takes two arguments.
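To make the capture concrete, here is a minimal Octave sketch using a hypothetical stand-in for gaussianKernel (the real one comes from the assignment):

```octave
% Hypothetical stand-in for the assignment's gaussianKernel
gaussianKernel = @(x1, x2, sigma) exp(-sum((x1 - x2).^2) / (2 * sigma^2));

sigma = 2;
kernel = @(x1, x2) gaussianKernel(x1, x2, sigma);  % sigma is captured here

kernel([1 0], [0 1])  % same as gaussianKernel([1 0], [0 1], 2)
```

Note that the captured value of sigma is frozen when the handle is created; reassigning sigma afterwards does not change what kernel computes.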
An alternate approach would have been to define a new function:
function sim = gKwithoutSigma(x1, x2)
  sim = gaussianKernel(x1, x2, sigma);
endfunction
Note that this would have to be defined somewhere within the script calling svmTrain in the first place. Then, you could call svmTrain as:
model = svmTrain(X, y, C, @gKwithoutSigma)
Using the anonymous parametrized function prevents you from having to write the extra code for gKwithoutSigma.

Related

What is the difference between creating functions using function handles and declaring syms?

It seems that, to create a function f(x,y)=x+y, I can take two approaches.
syms x y; f(x,y) = x+y
f = @(x,y) x+y
They seem very similar, and I do not know whether there are some subtle differences.
Typically, if I need to evaluate the function for many inputs or samples, I opt for the second method (function handles/anonymous functions).
Method 1: Symbolic Functions
This method allows the function to be evaluated at a specific point/value using the substitution function subs(). Both functions can be plotted using fsurf().
clear;
syms x y
f(x,y) = x+y;
fsurf(f);
subs(f,[x y],[5 5])
Variants and offsets of symbolic functions can be created similarly to anonymous functions/function handles, with one caveat: you do not include the input parameters in an @().
g = f(x,y) + f(x-5,y-5)
fsurf(g);
Method 2: Anonymous Functions/Function Handles
This method allows you to directly input values into the function f(x,y). I prefer anonymous functions because they seem more flexible.
clear;
f = @(x,y) x+y;
fsurf(f);
f(5,5)
Some cool things you can do are offsetting and easily adding variants of anonymous functions. Inputs can also be arrays.
x = 10; y = 2;
f(x-5,y-5) + f(x,y)
g = @(x,y) f(x,y) + f(x-5,y-20);
fsurf(g);
Ran using MATLAB R2019b

Matlab: Meaning of @(t)(costFunction(t, X, y)) from Andrew Ng's Machine Learning class

I have the following code in MATLAB:
% Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
My instructor has explained the minimising function like so:
To specify the actual function we are minimizing, we use a "short-hand"
for specifying functions, like @(t)(costFunction(t, X, y)). This
creates a function, with argument t, which calls your costFunction. This
allows us to wrap the costFunction for use with fminunc.
I really cannot understand what @(t)(costFunction(t, X, y)) means. What are both of the ts doing? What kind of expression is that?
In Matlab, this is called an anonymous function.
Take the following line:
f = @(t)( 10*t );
Here, we are defining a function f, which takes one argument t, and returns 10*t. It can be used by
f(5) % returns 50
In your case, you are using fminunc which takes a function as its first argument, with one parameter to minimise over. This could be called using
X = 1; y = 1; % Defining variables which aren't passed into the costFunction
% but which must exist for the next line to pass them as anything!
f = @(t)(costFunction(t, X, y)); % Explicitly define costFunction as a function of t alone
[theta, cost] = fminunc(f, 0, options);
This can be shortened by not defining f first, and just calling
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), 0, options);
Further reading
As mentioned in the comments, here is a link to generally parameterising functions.
Specifically, here is a documentation link about anonymous functions.
Just adding to Wolfie's response. I was confused as well and asked a similar question here:
Understanding fminunc arguments and anonymous functions, function handlers
The anonymous-function approach is one of the three approaches described in the link below. The problem it solves is that the solver, fminunc, only optimizes over the single argument of the function passed to it. The anonymous function @(t)(costFunction(t, X, y)) is a new function that takes in only one argument, t, and passes this value on to costFunction. You will notice that in the video lecture what was entered was just @costFunction, and this worked because costFunction only took one argument, theta.
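As a minimal sketch of that wrapping (with a made-up quadratic cost, not the course's actual costFunction):

```octave
% Made-up cost for illustration only
costFunction = @(theta, X, y) sum((X * theta - y).^2);

X = [1; 2; 3]; y = [2; 4; 6];
f = @(t) costFunction(t, X, y);  % the solver only ever sees a function of t

f(2)  % returns 0, because X * 2 equals y exactly
```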
https://www.mathworks.com/help/optim/ug/passing-extra-parameters.html
I also had the same question. All thanks to the link provided by Wolfie explaining parameterized and anonymous functions, I was able to clarify my doubts. Perhaps you have already found your answer, but I am explaining it once again for people who might have this query in the future.
Let's say we want to derive a polynomial, and find its minimum/maximum value. Our code is:
m = 5;
fun = @(x) x^2 + m; % function that takes one input: x, accepts 'm' as constant
x = derive(fun, 0); % fun passed as an argument
As per the above code, 'fun' is a handle that points to our anonymous function f(x) = x^2 + m. It accepts only one input, i.e. x. The advantage of an anonymous function is that one doesn't need to create a separate program file for it. For the constant 'm', it can capture any value residing in the current workspace.
The above code can be shortened by:
m = 5;
x = derive(@(x) x^2 + m, 0); % passed the anonymous function directly as argument
Our target is to find the global optimum, so I think the idea here is to collect a bunch of local minima by changing the step length alpha and then compare them to see which one is the best.
To achieve this, you initialize fminunc with the value initial_theta.
fminunc sets t = initial_theta, then computes costFunction(t, X, y), which equals costFunction(initial_theta, X, y). You get the cost and also the gradient.
fminunc then computes a new_theta from the gradient and a step length alpha, sets t = new_theta, and computes the cost and gradient again.
It loops like this until it finds a local optimum.
Then it changes the step length alpha and repeats the above to get another optimum. At the end it compares the optima and returns the best one.
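The iteration described above can be sketched as plain gradient descent (an illustration of the idea only, not fminunc's actual algorithm, using a toy one-variable objective):

```octave
% Toy objective: cost (t - 3)^2 with gradient 2*(t - 3)
costFun = @(t) deal((t - 3)^2, 2 * (t - 3));

t = 0;         % plays the role of initial_theta
alpha = 0.1;   % step length
for iter = 1:100
  [cost, grad] = costFun(t);
  t = t - alpha * grad;   % step against the gradient
end
t  % has converged very close to the minimum at t = 3
```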

Matlab: Why, when passing additional arguments in ode45, do I need to pass `(t,y)` as well?

In MATLAB: How do I pass a parameter to a function?,
it is said that if i want to pass the parameter u, i need to use anonymous function:
u = 1.2;
[t y] = ode45(@(t, y) ypdiff(t, y, u), [to tf], yo);
Originally, without passing the parameter u, the ode line reads:
[t y] = ode45(@ypdiff, [to tf], yo);, where @ypdiff just creates a function handle.
Why, if we want to pass only u, do we also need to include t and y when creating the anonymous function @(t, y) ypdiff(t, y, u), rather than writing something like @ypdiff(u)?
Simply prepending @ to the front of a function name creates a function handle, not an anonymous function. This function handle implicitly forwards all input arguments to the function.
What you need is a handle to an anonymous function (since it accepts inputs and performs an action or calls another function). An anonymous function does not implicitly pass inputs along, so you need to explicitly declare its input arguments and then use them (or not) within its body.
@(t, y) ypdiff(t, y, u)
The only exception to this rule is that some graphics objects will accept a cell array in place of a callback function, with a function handle as the first element and any additional parameters as the following elements, but this is not the case for ode45.
{@ypdiff, u}
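A self-contained sketch of the pattern, using a made-up right-hand side ypdiff(t, y, u) = -u*y (not the asker's actual ypdiff):

```octave
% Made-up ODE right-hand side with an extra parameter u: y' = -u * y
ypdiff = @(t, y, u) -u * y;

u = 1.2;
[t, y] = ode45(@(t, y) ypdiff(t, y, u), [0 5], 1);  % ode45 itself only passes (t, y)

y(end)  % close to exp(-1.2 * 5), the exact solution at t = 5
```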

Call functions with @() in MATLAB [duplicate]

This question already has answers here:
matlab to R: function calling and @
(2 answers)
Closed 8 years ago.
I'm trying to figure out the purpose of @(t) in the following code snippet:
[theta] = ...
fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
initial_theta, options);
lrCostFunction:
function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with
%regularization
% J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
and options:
options = optimset('GradObj', 'on', 'MaxIter', 50);
I'd appreciate some explanation. Thanks in advance
Let me answer your question focusing on the anonymous function itself.
The following function, defined in a separate .m file
function y = foo(x, a, b)
y = x^(a-b);
end
is equivalent to defining an anonymous function in the main script
bar = @(x, a, b) x^(a-b);
When your main script calls foo(5, 1, 2), Matlab searches the working directory, then reads and executes the code in the file foo.m. By contrast, when you run the line bar(5, 1, 2), Matlab calls a "function handle" and treats it as a function (though its power is limited to a single expression - you can't easily perform things like switch or for).
Sometimes we need to wrap a function into an easier-to-use one. Consider a case where we want to evaluate foo 1000 times, but only the input x changes, while a and b remain the same. It's of course OK to write foo(x, 1, 2) in the for loop, but you can also wrap the function before entering the loop.
a = 1;
b = 2;
foobar = @(x) foo(x, a, b);
When you call foobar(5), Matlab first invokes the function handle foobar, taking 5 as its only input. That function handle has one instruction: call another function (or function handle, if you defined it as one) named foo. The arguments of foo are: x, which is supplied when the user calls foobar(x), and a and b, which were defined BEFORE the function-handle definition was executed.
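Here is the same wrapping as a runnable sketch (foo is defined inline as an anonymous function here instead of in foo.m, which changes nothing about the capture behavior):

```octave
foo = @(x, a, b) x^(a - b);   % inline stand-in for the foo defined in foo.m

a = 1; b = 2;
foobar = @(x) foo(x, a, b);   % a and b are baked in at this point

foobar(5)  % same as foo(5, 1, 2) = 5^(-1) = 0.2
a = 99;    % changing a afterwards does NOT affect foobar
foobar(5)  % still 0.2
```

This also demonstrates why the pattern is called a closure: the handle closes over the workspace values that existed when it was created.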
In your case, fmincg accepts, as its first argument, a function that has only one input argument. But lrCostFunction takes four. fmincg doesn't know how to treat X, y, or lambda (I don't either). So it's your job to wrap the cost function into a form that a general optimizer can understand. That also requires you to assign X, y, c and lambda in advance.
What is it?
@(t) creates a function with argument t that calls your costFunction(t, X, y), so if you write
fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
initial_theta, options);
it will call your function lrCostFunction and pass in the values.
Why we need it
It allows us to use a ready-made optimization routine to minimize your costFunction with the settings that you provide (fmincg is not built into Octave or MATLAB; it is supplied with the course materials, AFAIK).
Optimization Settings
optimset('GradObj', 'on', 'MaxIter', 50); lets you set the optimization settings required for the minimization problem mentioned above.
All information is from Andrew Ng's classes. I hope it helps.
Correct me if I am wrong.

matlab ode45: how to change a parameter inside the function continuously

I am trying to solve a differential equation using ode45. I have a function in which one of the parameters has to vary by a specific step; here is my function:
function f=RSJ(y,t,M1,P,M2,E,current)
f=(current/P)-(M1/P)*sin(y)+(M2/P)*sin(y+E);
P, M1, M2 & E are numerical constants; current is the parameter for which I should solve this differential equation over several cases, for example current = 0:1:10.
How can I do such a thing?
Use a closure (a.k.a. anonymous or lambda function):
% declare tspan, y0, and the constants P, M1, M2, E
% [...]
current = 6e-7 : 1e-8 : 8.5e-7;
for k = 1:length(current)
  % ode45 always calls the handle as f(t, y); the constants are captured
  f = @(t, y) (current(k)/P) - (M1/P)*sin(y) + (M2/P)*sin(y + E);
  [t{k}, y{k}] = ode45(f, tspan, y0);
end
A quick and dirty solution. Define current as a global variable (you need to declare it global both in the base workspace and in the function) and use a for loop, e.g.
current_vector=1e-7*(6:0.1:8.5);
global current
for k = 1:length(current_vector)
  current = current_vector(k);
  [t{k}, y{k}] = ode45(@RSJ, <tspan>, <y0>);
end
Replace <tspan> and <y0> by the appropriate values/variables. I assume the other constants are defined in the body of the function, in which case your function should look something like this:
function f = RSJ(t, y)
  global current
  M1 = ... % etc...
  f = (current/P) - (M1/P)*sin(y) + (M2/P)*sin(y+E);
end
BTW, I don't see any explicit dependence on time t in your function...