I am interested in passing extra arguments to the nlinfit function in MATLAB
beta = nlinfit(X,Y,modelfun,beta0)
where modelfun is
function y = modelfun(beta, c, X)
y = beta(1)*X.^beta(2) + c;
I want to estimate beta and also provide c externally. X and Y have their obvious meanings.
Can it be done?
If c is a value generated before you call nlinfit (i.e. its value is fixed while nlinfit is running), then you can use an anonymous function wrapper to pass the extra parameter like so:
beta = nlinfit(X, Y, @(beta, X) modelfun(beta, c, X), beta0);
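A minimal end-to-end sketch of this pattern, with made-up data (nlinfit requires the Statistics Toolbox; the model and values below are purely for illustration):

```matlab
% Made-up data roughly following Y = 2*X^1.5 + 3
X = (1:10)';
Y = 2*X.^1.5 + 3;

c = 3;              % fixed externally before calling nlinfit
beta0 = [1, 1];     % initial guesses for beta(1) and beta(2)

% The wrapper takes only (beta, X); c is captured from the workspace
beta = nlinfit(X, Y, @(beta, X) beta(1)*X.^beta(2) + c, beta0)
```

Because the anonymous function captures c at the moment it is created, nlinfit only ever sees a two-argument model function, which is exactly what it expects.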
It seems that, to create a function f(x,y)=x+y, I can have two approaches.
syms x y; f(x,y) = x+y
f = @(x,y) x+y
They seem very similar, and I do not know whether there are some subtle differences.
Typically, if I need to evaluate the function for many inputs or samples, I would opt for the second method (function handles/anonymous functions).
Method 1: Symbolic Functions
This method allows the function to be evaluated at a specific point/value using the subs() substitution function. Both forms can be plotted using fsurf().
clear;
syms x y
f(x,y) = x+y;
fsurf(f);
subs(f,[x y],[5 5])
Variants and offsets of symbolic functions can be created much as with anonymous functions/function handles, with the one caveat that you do not need to list the input parameters in @().
g = f(x,y) + f(x-5,y-5)
fsurf(g);
Method 2: Anonymous Functions/Function Handles
This method allows you to directly input values into the function f(x,y). I prefer anonymous functions because they seem more flexible.
clear;
f = @(x,y) x+y;
fsurf(f);
f(5,5)
Some cool things you can do are offsetting and easily adding variants of anonymous functions. Inputs can also be arrays.
x = 10; y = 2;
f(x-5,y-5) + f(x,y)
g = @(x,y) f(x,y) + f(x-5,y-20);
fsurf(g);
Ran using MATLAB R2019b
A function handle in Octave is defined as in the example below.
f = @sin;
From now on, calling f(x) has the same effect as calling sin(x). So far so good. My problem starts with the function below, from one of my programming assignments.
function sim = gaussianKernel(x1, x2, sigma)
The line above is the header of the function gaussianKernel, which takes three input variables. However, the call below confuses me, because it passes only two variables and then three while referring to gaussianKernel.
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
Shouldn't that simply be model = svmTrain(X, y, C, @gaussianKernel(x1, x2, sigma));? What is the difference?
You didn't provide the surrounding code, but my guess is that the variable sigma is defined in the code before calling model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));. It is an example of a parametrized anonymous function that captures the values of variables in the current scope. This is also known as a closure. It looks like Matlab has better documentation for this very useful programming pattern.
The function handle @gaussianKernel(x1, x2, sigma) would be equivalent to @gaussianKernel. Using model = svmTrain(X, y, C, @gaussianKernel(x1, x2, sigma)); might not work in this case if the fourth argument of svmTrain is required to be a function with two input arguments.
The sigma variable is already defined somewhere else in the code. Therefore, svmTrain pulls that value out of the existing scope.
The purpose of creating the anonymous function @(x1, x2) gaussianKernel(x1, x2, sigma) is to make a function that takes in two arguments instead of three. If you look at the code in svmTrain, it takes in a parameter kernelFunction and only calls it with two arguments. svmTrain itself is not concerned with the value of sigma and in fact only knows that the kernelFunction it is passed should only have two arguments.
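To see the capture-at-creation behaviour in isolation, here is a small sketch (the kernel formula is a stand-in for illustration, not the course's actual gaussianKernel):

```matlab
sigma = 2;
% sigma is captured at this moment, when the handle is created
k = @(x1, x2) exp(-sum((x1 - x2).^2) / (2*sigma^2));

sigma = 100;      % reassigning sigma afterwards does NOT affect k
k([1 0], [0 0])   % still evaluated with sigma = 2
```

This is why it is safe for svmTrain to call the two-argument kernel long after the original sigma variable has gone out of scope.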
An alternate approach would have been to define a new function:
function sim = gKwithoutSigma(x1, x2)
sim = gaussianKernel(x1, x2, sigma);
endfunction
Note that, for gKwithoutSigma to see sigma, it would have to be defined as a nested function inside the function calling svmTrain in the first place. Then, you could call svmTrain as:
model = svmTrain(X, y, C, @gKwithoutSigma)
Using the anonymous parametrized function prevents you from having to write the extra code for gKwithoutSigma.
I have data like this:
y = [0.001
0.0042222222
0.0074444444
0.0106666667
0.0138888889
0.0171111111
0.0203333333
0.0235555556
0.0267777778
0.03]
and
x = [3.52E-06
9.72E-05
0.0002822918
0.0004929136
0.0006759156
0.0008199029
0.0009092797
0.0009458332
0.0009749509
0.0009892005]
and I want y to be a function of x with y = a*(0.01 − b*n^(−c*x)).
What is the best and easiest computational approach to find the best combination of the coefficients a, b and c that fit to the data?
Can I use Octave?
Your function
y = a*(0.01 − b*n^(−c*x))
is in quite a specific form with 4 unknowns (a, b, c and n). In order to estimate your parameters from your list of observations, I would recommend that you simplify it to
y = β1 + β2^(β3*x)
This becomes our model, and we can use least squares to solve for a good set of betas.
In default Matlab you could use fminsearch to find these β parameters (let's call them our parameter vector β), and then you can use simple algebra to get back to your a, b, c and n (assuming you know either b or n upfront). In Octave I'm sure you can find an equivalent function; I would start by looking here: http://octave.sourceforge.net/optim/index.html.
We're going to call fminsearch, but we need to somehow pass in your observations (i.e. x and y), and we will do that with an anonymous function that captures them, like example 2 from the docs:
beta = fminsearch(@(beta) objfun(x, y, beta), beta0) %// beta0 are your initial guesses for beta, e.g. [0,0,0] or [1,1,1]. You need to pick these to be somewhat close to the correct values.
And we define our objective function like this:
function sse = objfun(x, y, beta)
f = beta(1) + beta(2).^(beta(3).*x);
sse = sum((y-f).^2); %// the sum of squared errors, often called SSE; it is what we are trying to minimise!
end
So putting it all together:
y= [0.001; 0.0042222222; 0.0074444444; 0.0106666667; 0.0138888889; 0.0171111111; 0.0203333333; 0.0235555556; 0.0267777778; 0.03];
x= [3.52E-06; 9.72E-05; 0.0002822918; 0.0004929136; 0.0006759156; 0.0008199029; 0.0009092797; 0.0009458332; 0.0009749509; 0.0009892005];
beta0 = [0,0,0];
beta = fminsearch(@(beta) objfun(x, y, beta), beta0)
Now it's your job to solve for a, b and c in terms of beta(1), beta(2) and beta(3) which you can do on paper.
The following is a MATLAB problem.
Suppose I define a function f(x,y).
I want to calculate the partial derivative of f with respect to y, evaluated at a specific value of y, e.g., y=6. Finally, I want to integrate this new function (which is only a function of x) over a range of x.
As an example, this is what I have tried
syms x y;
f = @(x, y) x.*y.^2;
Df = subs(diff(f,y),y,2);
Int = integral(Df , 0 , 1),
but I get the following error.
Error using integral (line 82)
First input argument must be a function
handle.
Can anyone help me in writing this code?
To solve the problem, matlabFunction was required. The solution looks like this:
syms x y
f = @(x, y) x.*y.^2;
Df = matlabFunction(subs(diff(f,y),y,2));
Int = integral(Df , 0 , 1);
Keeping it all symbolic, using sym/int:
syms x y;
f = @(x, y) x.*y.^2;
Df = diff(f,y);
s = int(Df,x,0,1)
which returns y. You can substitute 2 for y here, or earlier as you did in your question. Note that this gives you an exact answer with no floating-point error, as opposed to integral, which computes the integral numerically.
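For example, substituting y = 2 into the symbolic result and converting it to a numeric value (a small sketch of the same computation):

```matlab
syms x y
f = @(x, y) x.*y.^2;
Df = diff(f(x, y), y);        % symbolic: 2*x*y
s = int(Df, x, 0, 1);         % symbolic result: y
double(subs(s, y, 2))         % exact value 2, no quadrature involved
```

Evaluating f(x, y) with the symbolic x and y first makes the conversion from function handle to symbolic expression explicit.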
When Googling for functions in Matlab, make sure to pay attention what toolbox they are in and what classes (datatypes) they support for their arguments. In some cases there are overloaded versions with the same name, but in others, you may need to look around for a different method (or devise your own).
I need to find a set of optimal parameters P of the model y = P(1)*exp(-P(2)*x) - P(3)*x, where x and y are experimental values. I defined my function
f = @(P) P(1)*exp(-P(2)*x) - P(3)*x
and
guess = [1, 1, 1]
and tried
P = fminsearch(f,guess)
according to Help. I get an error
Subscripted assignment dimension mismatch.
Error in fminsearch (line 191)
fv(:,1) = funfcn(x,varargin{:});
I don't quite understand where my y values would fall in, as well as where the function takes P from. I unfortunately have no access to nlinfit or optimization toolboxes.
You should try the MATLAB function lsqnonlin(@testfun, [1;1;1]).
But first write a function and save it in an m-file that includes all the data points; let's say your y is A and your x is x, as below:
function F = testfun(P)
A = [1;2;3;7;30;100];
x = [1;2;3;4;5;6];
F = A-P(1)*exp(-P(2)*x) - P(3)*x;
This minimises the 2-norm and gives you the best parameters.
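Since you mentioned you have no optimization toolbox, note that lsqnonlin lives in the Optimization Toolbox (or Octave's optim package). A sketch of the same fit using only base MATLAB's fminsearch, minimising the sum of squared residuals directly (data values copied from testfun above):

```matlab
% Same example data as in testfun
A = [1; 2; 3; 7; 30; 100];
x = [1; 2; 3; 4; 5; 6];

% Objective: sum of squared residuals as a function of the parameter vector P;
% A and x are captured by the anonymous function
sse = @(P) sum((A - P(1)*exp(-P(2)*x) - P(3)*x).^2);

P = fminsearch(sse, [1; 1; 1])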