Matlab optimization using genetic algorithm

I want to fit a curve from a theoretical model to experimental data points. The model consists of 5 parameters. I can easily get the closest fit but I want something different. I need the closest fit possible but it should never go below the experimental curve. In other words, every y-value of the fit should be greater than or equal to the corresponding y-value from the experiment.
I would highly appreciate any ideas on how this could be implemented. Thanks!

Have you tried adding nonlinear constraints to your genetic algorithm? More details are given here:
https://www.mathworks.com/help/gads/examples/constrained-minimization-using-the-genetic-algorithm.html
In your case, all you would need to do is set the 'c' inequality vector in your nonlinear constraint function to the difference between the experimental y-values and the fitted y-values (ga enforces c <= 0, so this keeps the fit at or above the data), and the genetic algorithm should do the rest.
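A minimal sketch of what that constraint function could look like, assuming a model function model(p, x) with five parameters and data vectors xdata/ydata (all names are illustrative, not from the question):

    % Objective: ordinary sum-of-squares misfit between model and data
    fitfun  = @(p) sum((model(p, xdata) - ydata).^2);

    % Nonlinear constraint: require model(p, xdata) >= ydata at every point,
    % i.e. c = ydata - model(p, xdata) <= 0 (ga enforces c <= 0, ceq = []).
    nonlcon = @(p) deal(ydata - model(p, xdata), []);

    nvars = 5;                                   % five model parameters
    opts  = optimoptions('ga', 'Display', 'iter');
    [pBest, fval] = ga(fitfun, nvars, [], [], [], [], [], [], nonlcon, opts);

With this setup any candidate whose curve dips below even one experimental point is treated as infeasible, so the closest feasible fit sits on or above the data everywhere.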

Related

Difference between scipy.optimize.curve_fit and linear least squares

I am struggling to find information on what exactly the scipy.optimize.curve_fit function does to fit (for example) exponential data, and how this method differs from just linearizing the data and directly computing the linear fit using the general formulas for a weighted linear least-squares fit.
It's Levenberg-Marquardt nonlinear fitting for unbounded problems, and a trust-region variant when bounds are given. See the references in the docstring of least_squares.
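To see the difference concretely, here is a small sketch of the same idea, written in MATLAB to match the rest of this page (lsqcurvefit plays the role of curve_fit here; the exponential model and noise level are made up): linearizing minimizes squared error in log(y), while the nonlinear solver minimizes squared error in y itself, so the two fits weight the points differently and generally return different parameters.

    % Synthetic exponential data: y = a*exp(b*x) + additive noise
    x = linspace(0, 4, 50)';
    y = 2*exp(0.8*x) + 0.2*randn(size(x));

    % (1) Linearized fit: log(y) = log(a) + b*x, solved by ordinary least squares,
    %     which minimizes squared error in log-space (down-weights large y).
    plin  = polyfit(x, log(y), 1);
    a_lin = exp(plin(2));  b_lin = plin(1);

    % (2) Nonlinear least squares, minimizing squared error in y directly
    %     (this is what curve_fit / lsqcurvefit do).
    modelfun = @(p, x) p(1)*exp(p(2)*x);
    pnl = lsqcurvefit(modelfun, [1 1], x, y);

    fprintf('linearized: a=%.3f b=%.3f | nonlinear: a=%.3f b=%.3f\n', ...
            a_lin, b_lin, pnl(1), pnl(2));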

Initial guess and resnorm Issue in Matlab curve fitting

I am fitting data to a system of nonlinear ODEs to estimate model parameters using Matlab's lsqcurvefit.
The fit depends heavily on the initial guesses that I pass to lsqcurvefit.
For example, if I use x0=5 as an initial guess I get a residual norm of 30, whereas if I choose x0=5.2 I get a residual norm of 1.5.
1) What does the residual norm (resnorm) in Matlab represent? Is it the sum of the squared errors? Is there a way to decide what range of resnorm values is acceptable?
2) When the fit depends so much on the initial guess, is there a way to deal with these problems? How would I know whether a better fit can be obtained from a different initial guess?
3) When using lsqcurvefit, is it required to check whether the residuals are normally distributed?
lsqcurvefit fits your data in the least-squares sense. Thus it all comes down to the minimisation, and as your model is nonlinear, you have no guarantee that the minimum found is the global minimum, nor that it is unique.
For example, consider the function sin(x): which x-value minimises it? All x = 2*pi*n + 3/2*pi for n = 0, 1, 2, ..., but a numerical method can only return one solution, and which one will depend on your initial guess.
To elaborate further: the simplest (in my opinion) minimisation algorithm is known as steepest descent. It uses the idea, known from calculus, that the steepest descent is in the direction of the negative gradient. So it evaluates the gradient at the current point, takes a step in the negative gradient direction (scaled by some step size), and keeps doing this until the step/derivative is sufficiently small.
However, even for the function cos(3*pi*x)/x on the interval from 0.5 to infinity, which does have a unique global minimum near x = 1, you only find it if your guess lies between roughly 0.7 and 1.3. All other guesses end up in their respective local minima.
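Here is a minimal steepest-descent sketch on exactly that function (the step size, iteration cap and tolerance are arbitrary illustration values), showing how the reported minimum depends entirely on the starting point:

    f  = @(x) cos(3*pi*x) ./ x;
    df = @(x) (-3*pi*sin(3*pi*x).*x - cos(3*pi*x)) ./ x.^2;   % quotient rule

    gamma = 1e-3;                         % fixed step size
    for x0 = [0.8 1.2 1.8 2.4]            % different initial guesses
        x = x0;
        for k = 1:1e5
            step = gamma * df(x);
            x = x - step;
            if abs(step) < 1e-10, break; end
        end
        fprintf('start %.1f  ->  minimum at x = %.4f, f(x) = %.4f\n', x0, x, f(x));
    end

Starting at 0.8 or 1.2 lands in the global minimum near x = 1; starting at 1.8 or 2.4 gets stuck in the nearest local minimum, which is exactly the behaviour you are seeing with lsqcurvefit.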
With this we can answer your questions:
1) resnorm is the squared 2-norm of the residual, i.e. the sum of squared errors. There is no general range of acceptable values: the algorithm is looking for a minimum, and once you are at one, what would it mean to continue the search?
2) Not in a (pseudo-)exact sense. What is typically done is either to use your knowledge of the problem to come up with a sensible initial guess or, if that is not possible, to repeatedly fit from random initial guesses and keep the best result (see the sketch below).
3) That depends on what you want to do. If you want to run statistical tests that assume normally distributed residuals, then YES. If you are solely interested in fitting the function with the lowest residual norm, then NO.
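Regarding 2), a minimal multi-start sketch around lsqcurvefit (model, tdata, ydata and the bounds are illustrative placeholders for your own problem):

    nStarts = 50;
    lb = [0 0 0];  ub = [10 10 10];       % assumed sampling box for the parameters
    bestRes = inf;
    opts = optimoptions('lsqcurvefit', 'Display', 'off');
    for k = 1:nStarts
        p0 = lb + rand(size(lb)).*(ub - lb);              % random initial guess
        [p, resnorm] = lsqcurvefit(@model, p0, tdata, ydata, lb, ub, opts);
        if resnorm < bestRes
            bestRes = resnorm;  bestP = p;                % keep the best fit so far
        end
    end
    fprintf('best resnorm = %.4g at p = [%s]\n', bestRes, num2str(bestP));

The Global Optimization Toolbox's MultiStart class does essentially the same thing with more bookkeeping, if you have it available.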

Self-Organizing Maps

I have a question on self-organizing maps:
But first, here is my approach on implementing one:
The SOM neurons are stored in a basic array. Each neuron consists of a vector (another array, the size of the input vector) of double values which are initialized to a random value.
As far as I understand the algorithm, this is actually all I need to implement it.
So, for the training I choose a sample of the training data at random and calculate the BMU using the Euclidean distance between the sample's values and the neuron weights.
Afterwards I update its weights and those of all other neurons in its range, depending on the neighborhood function and the learning rate.
Then, I decrease the neighborhood function and the learning rate.
This is done for a fixed number of iterations.
My question is now: how do I determine the clusters after the training? My approach so far is to present a new input vector and assign it to the neuron with the minimum Euclidean distance to it (its BMU). But this seems a little naive to me. I'm sure that I've missed something.
There is no single correct way of doing that. As you noted, finding the BMU is one option, and the only one that makes sense if you just want to find the most similar cluster.
If you want to reconstruct your input vector, returning the BMU prototype works too, but may not be very precise (it is equivalent to the nearest-neighbor rule, or 1NN). You then need to interpolate between neurons to find a better reconstruction. This could be done by weighting each neuron inversely proportionally to its distance from the input vector and then computing the weighted average (equivalent to weighted KNN). You can also restrict this interpolation to the BMU's neighbors only, which is faster and may give better results (this would be weighted 5NN). This technique was used here: The Continuous Interpolating Self-organizing Map.
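A minimal sketch of both options, written in MATLAB for concreteness and assuming the trained weights sit in an nNeurons-by-nDims matrix W and x is one input vector (illustrative names; uses implicit expansion, R2016b+):

    % Hard cluster assignment: index of the best matching unit (nearest prototype)
    d        = sqrt(sum((W - x(:)').^2, 2));   % Euclidean distance to every neuron
    [~, bmu] = min(d);                         % cluster label = BMU index

    % Soft reconstruction: inverse-distance weighted average of the k nearest prototypes
    k         = 5;
    [ds, idx] = sort(d);
    w         = 1 ./ (ds(1:k) + eps);          % weights ~ 1/distance
    xhat      = (w' * W(idx(1:k), :)) / sum(w);% interpolated reconstruction of x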
You can see and experiment with those different options here: http://www.inf.ufrgs.br/~rcpinto/itm/ (not a SOM, but a close cousin). Click "Apply" to do regression on a curve using the reconstructed vectors, then check "Draw Regression" and try the different options.
BTW, the description of your implementation is correct.
A pretty common approach nowadays is soft subspace clustering, where feature weights are added to find the most relevant features. You can use these weights to increase performance and improve the Euclidean-distance BMU calculation.

How calculating hessian works for Neural Network learning

Can anyone explain to me, in an easy and not very mathematical way, what a Hessian is and how it works in practice when optimizing the learning process for a neural network?
To understand the Hessian you first need to understand the Jacobian, and to understand the Jacobian you need to understand the derivative.
The derivative is a measure of how fast the function value changes with a change of the argument. So if you have the function f(x)=x^2 you can compute its derivative and know how fast f(x+t) changes for small enough t. This gives you knowledge about the basic dynamics of the function.
The gradient shows you, for multidimensional functions, the direction of the biggest value change (which is based on the directional derivatives). So given a function, e.g. g(x,y)=-x+y^2, you know that it is better to minimize the value of x while strongly maximizing the value of y. This is the basis of gradient-based methods, like the steepest descent technique (used in traditional backpropagation).
The Jacobian is yet another generalization, for functions with many output values, like g(x,y)=(x+1, x*y, x-y); you then have 2*3 partial derivatives, one gradient per output value (three of them), which together form a matrix of 2*3=6 values.
Now, the derivative shows you the dynamics of the function itself. But you can go one step further: if you can use these dynamics to find the optimum of the function, maybe you can do even better by finding out the dynamics of these dynamics, that is, computing derivatives of second order? This is exactly what the Hessian is: a matrix of second-order derivatives of your function. It captures the dynamics of the derivatives, so how fast (and in which direction) the change itself changes. It may seem a bit complex at first sight, but if you think about it for a while it becomes quite clear. You want to go in the direction of the gradient, but you do not know "how far" (what the correct step size is). So you define a new, smaller optimization problem, asking "ok, I have this gradient, how can I tell how far to go?", and solve it analogously, using derivatives (and the derivatives of the derivatives form the Hessian).
You may also look at this in a geometrical way: gradient-based optimization approximates your function with a line. You simply try to find the line which is closest to your function at the current point, and it defines the direction of change. Now, lines are quite primitive; maybe we could use some more complex shape, like a parabola? Second-derivative (Hessian) methods try to fit a parabola (a quadratic function, f(x)=ax^2+bx+c) to your current position, and based on this approximation, choose a valid step.
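A tiny numerical illustration of that last paragraph (a hand-made quadratic "loss" so the gradient and Hessian are known exactly): the Newton step H\g chooses both the direction and the step length, while plain gradient descent only has a direction and needs a hand-tuned learning rate.

    f    = @(w) 0.5*w(1)^2 + 5*w(2)^2;       % badly scaled toy loss surface
    grad = @(w) [w(1); 10*w(2)];             % first derivatives (gradient)
    H    = [1 0; 0 10];                      % second derivatives (Hessian, constant here)

    w_gd = [4; 4];  w_nt = [4; 4];
    eta  = 0.05;                             % learning rate for gradient descent
    for k = 1:20
        w_gd = w_gd - eta * grad(w_gd);      % first-order step: direction only
        w_nt = w_nt - H \ grad(w_nt);        % Newton step: direction and size from the Hessian
    end
    fprintf('gradient descent: [%.4f %.4f]   Newton: [%g %g]\n', w_gd, w_nt);

For this quadratic the Newton iterate reaches the minimum in a single step, while gradient descent is still crawling along the flat (low-curvature) direction after 20 iterations.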

Linear least-squares fit with constraint - any ideas?

I have a problem where I am fitting a high-order polynomial to (not very) noisy data using linear least squares. Currently I'm using polynomial orders around 15 - 25, which work surprisingly well: the dependence is very nearly linear, but the accuracy of modelling the 'very nearly' is critical. I'm using Matlab's polyfit() function, and (obviously) normalising the x-data. This generally works fine, but I have come across an issue with some recent datasets. The fitted polynomial has extrema within the x-data interval. For the application I'm working on this is a no-no. The polynomial model must have no stationary points over the x-interval.
So I need to add a constraint to the least-squares problem: the derivative of the fitted polynomial must be strictly positive over a known x-range (or strictly negative - this depends on the data but a simple linear fit will quickly tell me which it is.) I have had a quick look at the available optimisation toolbox functions, but I admit I'm at a loss to know how to go about this. Does anyone have any suggestions?
[I appreciate there are probably better models than polynomials for this data, but in the short term it isn't feasible to change the form of the model]
[A closing note: I have finally got the go-ahead to replace this awful polynomial model! I am going to adopt a nonparametric approach, spline smoothing, using the excellent SPLINEFIT code by Jonas Lundgren. This has the advantage that I'm already using a spline model in the end-user application, so I already have C# code available to evaluate a spline model]
You could use cftool with its option to exclude data points.
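Another route for the constraint itself (a sketch, not something the original poster adopted): because a polynomial is linear in its coefficients, you can recast this as a constrained linear least-squares problem for lsqlin in the Optimization Toolbox, enforcing a positive derivative on a dense grid of x-values. Variable names below are illustrative, x is assumed already normalised, and implicit expansion (R2016b+) is used:

    n   = 15;                                 % polynomial order
    deg = n:-1:0;                             % exponents in polyval ordering
    V   = x(:) .^ deg;                        % Vandermonde design matrix for the fit
    xg  = linspace(min(x), max(x), 500)';     % grid on which to enforce monotonicity
    Vd  = deg .* (xg .^ max(deg - 1, 0));     % derivative of the polynomial at xg

    % lsqlin solves min ||V*p - y||^2 subject to A*p <= b; requiring the
    % derivative to be >= a small positive floor means -Vd*p <= -floor.
    slopeFloor = 1e-6;
    p = lsqlin(V, y(:), -Vd, -slopeFloor*ones(size(xg)));

    yfit = polyval(p, x(:));                  % evaluate exactly as with polyfit output

If the data trend is decreasing, flip the constraint (Vd*p <= -slopeFloor instead). The high-order Vandermonde matrix is badly conditioned, so this only makes sense with the x-data normalised, as in the question.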