Converting and applying MATLAB lsqnonlin function in Julia with multiple function arguments - matlab

I am trying to convert some MATLAB code into Julia. For the most part this went pretty smoothly, but I am now stuck converting the MATLAB [lsqnonlin][1] function.
After scanning available options, I think that the [LsqFit.jl][2] package with its curve_fit function is the best way to go. Please correct me if I am wrong.
The original MATLAB code looks like this, where a function calc_z with given lower and upper boundaries (lb resp. ub) and initial guess z should yield optimal values of z. **kwargs contains all other function arguments required for calc_z.
z_opt = lsqnonlin(@calc_z, z, lb, ub, lsq_options, **kwargs)
Internally, the function calc_z computes the error between initial z and observed values of z (z_obs which are part of **kwargs) plus two additional penalty terms which should be minimized to obtain the optimal values of z, z_opt.
The question now is how to correctly implement this in Julia. Thus far, I understand that I need something like
using LsqFit
fit = curve_fit(calc_z, xdata, ydata, p0, lower=lb, upper=ub)
# calc_z rewritten in Julia
z_opt = fit.param
What is not clear to me is how to deal with the (kw)args required for calc_z. Do all **kwargs go into p0? And is xdata equivalent to z and ydata equivalent to z_obs?
It is also unclear to me whether Julia's curve_fit implicitly computes the sum of squares of the residual components, as lsqnonlin does in MATLAB.
Many thanks for your help!
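As a point of comparison, the lsqnonlin semantics being asked about (the solver receives a residual vector and implicitly minimizes its sum of squares, subject to box bounds) can be sketched outside MATLAB and Julia with SciPy's least_squares; the z_obs values below are made up purely for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical observed values -- a stand-in for the real z_obs from **kwargs
z_obs = np.array([1.0, 2.0, 3.0])

def calc_z(z):
    # Return the residual VECTOR; the solver minimizes sum(residuals**2)
    # implicitly, just like MATLAB's lsqnonlin.
    return z - z_obs

z0 = np.zeros(3)            # initial guess
lb = -10 * np.ones(3)       # lower bounds
ub = 10 * np.ones(3)        # upper bounds
res = least_squares(calc_z, z0, bounds=(lb, ub))
print(res.x)                # converges to z_obs
```

Note that extra fixed arguments (the **kwargs of the MATLAB version) are typically captured by a closure around the residual function rather than packed into the parameter vector.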

Related

Levenberg-Marquardt algorithm for 3D data

I have a point cloud in 3D whose coordinates are stored in a 3D vector, and I would like to fit a nonlinear function to the point cloud.
Do you know if the lsqcurvefit algorithm implemented in MATLAB works for 3D data as well?
Do you have any example that uses 'levenberg-marquardt' for 3D data in MATLAB?
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
Yes, you can still use lsqcurvefit in 3D, but if you want to keep your code as simple as possible (see edit) I suggest the lsqnonlin function for multivariate nonlinear data fitting. The linked documentation page shows several examples, one of which uses the Levenberg-Marquardt algorithm in 2D. In 3D or higher, the usage is similar.
For instance, suppose you have a cloud of points in 3D whose coordinates are stored in the arrays x, y and z. Suppose you are looking for a fitting surface (no longer a curve because you are in 3D) of the form z = exp(r1*x + r2*y), where r1 and r2 are coefficients to be found. You start by defining the following inline function
fun = @(r) exp(r(1)*x + r(2)*y) - z;
where r is an unknown 1x2 array whose entries will be your unknown coefficients (r1 and r2). We are ready to solve the problem:
r0 = [1,1]; % Initial guess for r
options = optimoptions(@lsqnonlin, 'Algorithm', 'levenberg-marquardt');
lsqnonlin(fun, r0, [], [], options)
You will get the output on the command window.
Tested on MATLAB 2018a.
Hope that helps.
EDIT: lsqcurvefit vs lsqnonlin
lsqcurvefit is basically a special case of lsqnonlin. The discussion of which is better in terms of speed and accuracy is broad and beyond the scope of this post. There are two reasons why I have suggested lsqnonlin:
You are free to take x, y, z as matrices instead of column vectors, just make sure that the dimensions match. In fact, if you use lsqcurvefit, your fun must have an additional argument xdata defined as [x, y] where x and y are taken in column form.
You are free to choose your fitting function fun to be implicit, that is of the form f(x,y,z)=0.
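The same surface fit can be cross-checked outside MATLAB. Here is a sketch with SciPy's least_squares using its 'lm' (Levenberg-Marquardt) method on synthetic, noiseless data; the point cloud and coefficients are made up for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic noiseless point cloud on the surface z = exp(r1*x + r2*y)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
r_true = [0.5, -0.3]
z = np.exp(r_true[0] * x + r_true[1] * y)

# Same implicit residual as the MATLAB fun: exp(r(1)*x + r(2)*y) - z
fun = lambda r: np.exp(r[0] * x + r[1] * y) - z

res = least_squares(fun, x0=[1.0, 1.0], method='lm')  # Levenberg-Marquardt
print(res.x)   # recovers r_true
```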

How do I construct a piecewise polynomial (cubic spline) for 4D data in MATLAB?

I have a problem in which I have to interpolate 4D data d = f(a, b, c) often, because the interpolation happens within an optimisation routine. Now, at first I programmed this using Matlab's interpn function. However, the program obviously became very slow, because the cubic splines had to be constructed upon each iteration within the optimisation.
I have read about 2D spline interpolation and I am basically looking for its 4D equivalent: pp = spline(a,b,c,d). Also, I found the scatteredInterpolant function (I have a non-uniform grid), but this function only gives me options for 'linear', 'nearest', or 'natural' and not the 'spline' option I'm looking for.
I could imagine that Matlab has the function that underlies interpn available, but I can't seem to find it. Does anyone know of such a function that returns the piecewise polynomial, or some other form of a spline function, for a 4D interpolant, preferably Matlab-original?
P.S. I have also looked into a workaround: typing edit interpn, I tried copying the Matlab function interpn, naming it differently, and editing it such that it returns F, the interpolating function, instead of Vq. However, doing this it says it doesn't recognise the methodandextrapval function, the first nested Matlab built-in it encounters.
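For comparison, the build-the-interpolant-once pattern the question is after looks like this with SciPy's RegularGridInterpolator. This is a sketch assuming a rectilinear (possibly non-uniformly spaced) grid; method='cubic' requires SciPy 1.9 or newer, so the sketch uses the default linear method, with sample data chosen to be linear so the result is exact:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Rectilinear grid axes (non-uniform spacing is fine)
a = np.array([0.0, 0.5, 1.2, 2.0])
b = np.linspace(0.0, 1.0, 5)
c = np.linspace(0.0, 3.0, 7)
A, B, C = np.meshgrid(a, b, c, indexing='ij')
d = 2 * A + 3 * B + C        # sample data d = f(a, b, c)

# Build the interpolant ONCE, outside the optimisation loop...
# (pass method='cubic' for a spline, SciPy >= 1.9)
interp = RegularGridInterpolator((a, b, c), d)

# ...then evaluate it cheaply on every iteration
val = interp([[1.0, 0.4, 1.5]])[0]
print(val)   # 2*1.0 + 3*0.4 + 1.5 = 4.7
```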

Minimizing Function with vector valued input in MATLAB

I want to minimize a function like the one below (the generalized Rosenbrock function):
f(x) = sum over i = 1..n-1 of [ 100*(x(i+1) - x(i)^2)^2 + (1 - x(i))^2 ]
Here, n can be 5, 10, 50, etc. I want to use Matlab, applying Gradient Descent and the Quasi-Newton method with BFGS update, along with a backtracking line search, to solve this problem. I am a novice in Matlab. Can anyone help, please? I found a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html
But, I really don't know how to create a vector-valued function in Matlab (in my case input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through some basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by matrix indexing, and that
the .^ operator squares the operand elementwise.
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is to call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
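For comparison, the same objective and a BFGS quasi-Newton solve can be sketched in Python with SciPy; the starting guess here is fixed rather than random, for reproducibility:

```python
import numpy as np
from scipy.optimize import minimize

def objfun(x):
    # Mirrors the MATLAB objfun: generalized Rosenbrock function
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

n = 10
x0 = np.full(n, 0.5)                       # fixed starting guess
res = minimize(objfun, x0, method='BFGS')  # quasi-Newton with BFGS update
print(res.fun)                             # close to 0 at x = [1, 1, ..., 1]
```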

Deriving dirac delta function using Matlab symbolic toolbox

I'm new to MATLAB. I don't understand how to derive a Dirac delta function and then shift it using the Symbolic Math Toolbox.
syms t
x = dirac(t)
Why can't I see the Dirac delta function using, for example, ezplot(x,[-10,10])?
As others have noted, the Dirac delta function is not a true function, but a generalized function. The help for dirac indicates this:
dirac(X) is not a function in the strict sense, but rather a
distribution with int(dirac(x-a)*f(x),-inf,inf) = f(a) and
diff(heaviside(x),x) = dirac(x).
Strictly speaking, it's impossible for Matlab to plot the Dirac delta function in the normal way because part of it extends to infinity. However, there are numerous workarounds if you want a visualization. A simple one is to use the stem plot function and the > operator to convert the one Inf value to something finite. This produces a unit impulse function (or Kronecker delta):
t = -10:10;
x = dirac(t) > 0;
stem(t,x)
If t and x already exist as symbolic variables/expressions rather than numeric ones you can use subs:
syms t
x = dirac(t);
t2 = -10:10;
x2 = subs(x,t,t2)>0;
stem(t2, x2)
You can write your own plot routine if you want something that looks different. Using ezplot is not likely to work as it doesn't offer as much control.
First, I've not met ezplot before; I had to read up on it. For things that are functionals like your x, it's handy, but you still have to realize it's giving you exactly what it promises: a plot.
If you had the job of plotting the Dirac delta function, how would you go about doing it correctly? You can't. You must find a convention for annotating your plot with the info that there is a single, isolated, infinite point in your plot.
A line plot is hence unsuitable for anything but smooth functions (a well-defined term), and the Dirac delta definitely isn't among the class of smooth functions. You would typically use a vertical line or something similar to denote the point where your functional is not 0.
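The distributional identities quoted from the MATLAB help (the sifting property and diff(heaviside(x)) = dirac(x)) can also be checked symbolically, for instance with SymPy in Python:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Sifting property: integral of dirac(x - a)*f(x) over the real line is f(a);
# here a = 2 and f(x) = x^2, so the integral is f(2) = 4
val = sp.integrate(sp.DiracDelta(x - 2) * x**2, (x, -sp.oo, sp.oo))
print(val)                           # 4

# diff(heaviside(x), x) = dirac(x), exactly as the MATLAB help states
print(sp.diff(sp.Heaviside(x), x))   # DiracDelta(x)
```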

How to find the intersections of two functions in MATLAB?

Let's say I have a function 'x' and a function '2*sin(x)'.
How do I output the intersections, i.e. the roots, in MATLAB? I can easily plot the two functions and find them that way, but surely there must exist a direct way of doing this.
If you have two analytical (by which I mean symbolic) functions, you can define their difference and use fzero to find a zero, i.e. the root:
f = @(x) x; %defines a function f(x)
g = @(x) 2*sin(x); %defines a function g(x)
%solve f==g
xroot = fzero(@(x)f(x)-g(x),0.5); %starts search from x==0.5
For tricky functions you might have to set a good starting point, and it will only find one solution even if there are multiple ones.
The constructs seen above, @(x) something-with-x, are called anonymous functions, and they can be extended to multivariate cases as well, like @(x,y) 3*x.*y+c, assuming that c is a variable that has been assigned a value earlier.
When writing the comments, I thought that
syms x; solve(x==2*sin(x))
would return the expected result. At least in Matlab 2013b, solve fails to find an analytic solution for this problem, falling back to a numeric solver that returns only one solution, 0.
An alternative is
s = feval(symengine,'numeric::solve',2*sin(x)==x,x,'AllRealRoots')
which is taken from this answer to a similar question. Besides using AllRealRoots, you could use a numeric solver, manually setting starting points which roughly match the values you have read from the graph. This way you get precise results:
[fzero(@(x)f(x)-g(x),-2),fzero(@(x)f(x)-g(x),0),fzero(@(x)f(x)-g(x),2)]
For a higher precision you could switch from fzero to vpasolve, but fzero is probably sufficient and faster.
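The fzero bracketing approach translates directly to other environments. As a sketch, SciPy's brentq applied to x - 2*sin(x), with brackets read off a quick plot of the two curves:

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: x               # f(x) = x
g = lambda x: 2 * np.sin(x)   # g(x) = 2*sin(x)
h = lambda x: f(x) - g(x)     # roots of h are the intersections

# Bracketing intervals chosen where h changes sign (read off a plot)
roots = [brentq(h, -2.5, -1.0), brentq(h, -0.5, 0.5), brentq(h, 1.0, 2.5)]
print(roots)   # approximately [-1.8955, 0.0, 1.8955]
```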