fit function of Matlab is really slow - matlab

Why is the fit function from MATLAB so slow? I'm trying to fit a gauss4 model so I can get the means of the Gaussians.
Here's my plot;
I want to get the means from the blue data and the red data.
I'm fitting Gaussians there, but this function is really slow.
Is there an alternative?
fa = fit(fn', facm', 'gauss4');
acm = [fa.b1 fa.b2 fa.b3 fa.b4];
a_cm = sort(acm, 'ascend');

I would apply some of the options available with fit. These include smoothing by setting SmoothingParam (your data is quite noisy; the alternative of applying a time-domain filter may also help) and setting the values of your initial parameter estimates with StartPoint. Your fits may also not be converging because your tolerances (TolFun, TolX) are set too low, although from inspection of your fits that does not appear to be the case; in fact the opposite is likely, and you probably want to increase MaxIter and/or MaxFunEvals.
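A rough sketch of how those options could be passed (the StartPoint values below are placeholders, not estimates derived from your data; fn and facm are the variables from your snippet):
opts = fitoptions('gauss4');                          % default options for the gauss4 library model
opts.StartPoint  = [1 10 2  1 20 2  1 30 2  1 40 2];  % [a1 b1 c1 ... a4 b4 c4], placeholders only
opts.MaxIter     = 2000;
opts.MaxFunEvals = 4000;
fa = fit(fn', facm', 'gauss4', opts);
Good starting points for the b values (the means you are after) can usually be read off the peak positions in your plot.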
To get going you can also try the Spectr-O-Matic toolbox. It requires MATLAB 7.12. It includes a script called GaussFit.m to fit gauss4 to data; it also uses the fit routine, and it provides examples of how to set and get parameters.
Note that smoothing will of course broaden your peaks, but you can subtract that contribution after the fact. The effect on the means should not be deleterious; on the contrary, since you are presumably removing noise, the result should be more accurate.

In general a function will be faster if you apply it to a shorter series. Hence, if speedup is really important, you could downsample.
For example, if you have a vector that you want to downsample by a factor of 2 (you may need to make sure its length is even first):
n = 2;
x = sin(0.01:0.01:pi);
x_downsampled = (x(1:n:end) + x(2:n:end)) / 2;  % average each pair of samples
You will now see that x_downsampled is half the size (and should thus be easier to process) but still has the same shape. In your case I think this is sufficient.
To see what you got, try:
plot(x)
Now you can simply process x_downsampled and map your solution back, for example:
[~, f] = max(x_downsampled);            % index of the maximum in the downsampled vector
location_of_maximum = (f - 1) * n + 1;  % corresponding index in the original vector
Needless to say, this should be done in combination with the most efficient options that the fit function has to offer.
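For completeness, a hypothetical combination with the variable names from your question (the factor n = 4 is an arbitrary choice):
n = 4;                               % downsampling factor (arbitrary)
fn_d   = fn(1:n:end);                % keep every n-th x value
facm_d = facm(1:n:end);              % and the matching y value
fa  = fit(fn_d', facm_d', 'gauss4');
acm = [fa.b1 fa.b2 fa.b3 fa.b4];     % the fitted means are already in fn units,
                                     % so no index mapping is needed for them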

Related

Fitting data with Voigt profile in lmfit in python - huge errors

I am trying to fit some RIXS data with Voigt profiles (lmfit in Python), and I have defined the Voigt profile in the following way:
import numpy as np

def gfunction_norm(x, pos, gwid):
    # Gaussian centered at pos with width gwid, rescaled to the range [0, 1]
    gauss = (1/(gwid*np.sqrt(2*np.pi))) * np.exp((-1.0/2)*((x - pos)/gwid)**2)
    return (gauss - gauss.min()) / (gauss.max() - gauss.min())

def lfunction_norm(x, pos, lwid):
    # Lorentzian centered at pos with width lwid, rescaled to the range [0, 1]
    lorentz = (0.15915*lwid) / ((x - pos)**2 + 0.25*lwid**2)
    return (lorentz - lorentz.min()) / (lorentz.max() - lorentz.min())

def voigt(x, pos, gwid, lwid, int):
    # numerical Voigt: convolve the normalized Gaussian and Lorentzian on a
    # fine grid around pos, rescale, and interpolate back onto x
    step = 0.005
    x2 = np.arange(pos - 7, pos + 7 + step, step)
    voigt3 = np.convolve(gfunction_norm(x2, pos, gwid),
                         lfunction_norm(x2, pos, lwid), mode='same')
    norm = (voigt3 - voigt3.min()) / (voigt3.max() - voigt3.min())
    y = np.interp(x, x2, norm)    # interpolate back onto the x grid passed in
    return y * int
I have used this definition instead of the popular Voigt profile definition in Python:
from scipy.special import wofz

def voigt(x, alpha, cen, gamma):
    # Voigt profile via the Faddeeva function wofz
    sigma = alpha / np.sqrt(2*np.log(2))
    return np.real(wofz((x - cen + 1j*gamma)/sigma/np.sqrt(2))) / (sigma*2.51)
because it gives me more clarity on the intensity of the peaks etc.
Now, I have a couple of spectra with 9-10 peaks and I am trying to fit all of them with Voigt profiles (precisely in the way I defined it).
Now, I have a couple of questions:
Do you think my Voigt definition is OK? What (dis)advantages do I have by using the convolution instead of the approximative second definition?
As a result of my fit, sometimes I get crazy large standard deviations. For example, these are best-fit parameters for one of the peaks:
int8: 0.00986265 +/- 0.00113104 (11.47%) (init = 0.05)
pos8: -2.57960013 +/- 0.00790640 (0.31%) (init = -2.6)
gwid8: 0.06613237 +/- 0.02558441 (38.69%) (init = 0.1)
lwid8: 1.0909e-04 +/- 1.48706395 (1363160.91%) (init = 0.001)
(intensity, position, gaussian and lorentzian width respectively).
Does this output mean that this peak should be purely gaussian?
I have noticed that large errors usually happen when the best-fit parameter is very small. Does this have something to do with the Levenberg-Marquardt algorithm that is used by default in lmfit? I should note that I sometimes have the same problem even when I use the other definition of the Voigt profile (and not just for Lorentzian widths).
Here is a part of the code (it is part of a bigger program and sits inside a for loop, meaning I use the same initial values for multiple similar spectra):
model = Model(final)
result = model.fit(spectra[:,nb_spectra], params, x=energy)
print(result.fit_report())
"final" is the sum of many voigt profiles that I previously defined.
Thank you!
This seems to be a duplicate of or follow-up to Lmfit fit produces huge uncertainties - please use one SO question per topic.
Do you think my Voigt definition is OK? What (dis)advantages do I have by using the convolution instead of the approximative second definition?
What makes you say that the second definition is approximate? In some sense, all floating-point calculations are approximate, but the Faddeeva function from scipy.special.wofz is the analytic solution for the Voigt profile. Doing the convolution yourself is likely to be a bit slower and is also an approximation (at the machine-precision level).
Since you are using Lmfit, I would recommend using its VoigtModel which will make your life easier: it uses scipy.special.wofz and parameter names that make it easy to switch to other profiles (say, GaussianModel).
You did not give a very complete example of code (for reference, a minimal, working version of actual code is more or less expected on SO and highly recommended), but that might look something like
from lmfit.models import VoigtModel
model = VoigtModel(prefix='p1_') + VoigtModel(prefix='p2_') + ...
As a result of my fit, sometimes I get crazy large standard deviations. For example, these are best-fit parameters for one of the peaks:
int8: 0.00986265 +/- 0.00113104 (11.47%) (init = 0.05)
pos8: -2.57960013 +/- 0.00790640 (0.31%) (init = -2.6)
gwid8: 0.06613237 +/- 0.02558441 (38.69%) (init = 0.1)
lwid8: 1.0909e-04 +/- 1.48706395 (1363160.91%) (init = 0.001)
(intensity, position, gaussian and lorentzian width respectively).
Does this output mean that this peak should be purely gaussian?
First, that may not be a "crazy large" standard deviation - it sort of depends on the data and the rest of the fit. Perhaps the value for int8 is really, really small and the peak heavily overlaps with other peaks -- it might be highly correlated with other variables. But it may very well mean that the peak is more Gaussian-like.
Since you are analyzing X-ray scattering data, the use of a Voigt function is probably partially justified with the idea (assertion, assumption, expectation?) that the material response would give a Gaussian profile, while the instrumentation (including X-ray source) would give a Lorentzian broadening. That suggests that the Lorentzian width might be the same for the various peaks, or perhaps parameterized as a simple function of the incident and scattering wavelengths or q values. That is, you might be able to (and may be better off) constrain the values of the Lorentzian width (your lwid, I think, or gamma in lmfit.models.VoigtModel) to all be the same.

Dimensionality reduction using PCA - MATLAB

I am trying to reduce dimensionality of a training set using PCA.
I have come across two approaches.
[V, U, eigen] = pca(train_x);
eigen_sum = 0;
for lamda = 1:length(eigen)
    eigen_sum = eigen_sum + eigen(lamda, 1);
    if (eigen_sum / sum(eigen) >= 0.90)
        break;
    end
end
train_x = train_x * V(:, 1:lamda);
Here, I simply use the eigenvalues to reduce the training set to the lower number of features determined by the principal components that describe 90% of the original set.
The alternate method that I found is almost exactly the same, save the last line, which changes to:
train_x=U(:,1:lamda);
In other words, we take the training set as the principal component representation of the original training set up to some feature lamda.
Both of these methods seem to yield similar results (out-of-sample test error), but there is a difference, however minuscule it may be.
My question is, which one is the right method?
The answer depends on your data, and what you want to do.
Using your variable names: generally speaking, it is easy to expect that the outputs of pca satisfy
U = train_x * V
But this is only true if your data is normalized, specifically if you have already removed the mean from each column. If not, then what you can expect is
U = train_x * V - mean(train_x * V)
And in that regard, whether you want to remove or keep the mean of your data before processing it depends on your application.
It is also worth noting that even if you remove the mean before processing, there might be some small difference, but it will be on the order of the floating-point precision error:
((train_x * V) - U) ./ U ~~ 1.0e-15
and this error can be safely ignored.
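A quick way to check this in MATLAB (a sketch using your variable names; pca centers the data internally, so the comparison is against the mean-removed training set):
[V, U, eigen] = pca(train_x);                    % coefficients, scores, eigenvalues
Xc = bsxfun(@minus, train_x, mean(train_x, 1));  % remove the column means ourselves
norm(Xc*V - U, 'fro') / norm(U, 'fro')           % ~1e-15, i.e. floating point noise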

Matlab, economy QR decomposition, control precision?

There is a [Q,R] = qr(A,0) function in MATLAB, which, according to the documentation, returns an "economy" version of the QR decomposition of A. norm(A-Q*R) returns ~1e-12 for my data set. Also, Q'*Q should theoretically return I. In practice there are small nonzero elements above and below the diagonal (of the order of 1e-6 or so), as well as diagonal elements that are slightly greater than 1 (again, by 1e-6 or so). Is anyone aware of a way to control the precision of qr(.,0), or the quality (orthogonality) of the resulting Q, either by specifying an epsilon or via the number of iterations? The size of the data set makes qr(A) run out of memory, so I have to use qr(A,0).
When I try the non-economy setting, I actually get comparable results for A-Q*R, even for a tiny matrix containing small numbers, as shown here:
A = magic(20);
[Q, R] = qr(A); %Result does not change when using qr(A,0)
norm(A-Q*R)
As such, I don't believe the 'economy' setting is the problem (as confirmed by @horchler in the comments); rather, you have just run into the limits of how accurately calculations can be done with data of type double.
Even if you change the accuracy somehow, you will always be dealing with an approximation, so perhaps the first thing to consider here is whether you really need greater accuracy than you already have. If you need more accuracy there may always be a way, but I doubt whether it will be a straightforward one.
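As a sketch of how to quantify what you are seeing (rather than fix it), you can measure both errors directly; both should sit at the level of double-precision roundoff scaled by the size of the matrix:
A = magic(20);                                        % stand-in for your data set
[Q, R] = qr(A, 0);
orthogonality_error  = norm(Q'*Q - eye(size(Q, 2)))   % deviation of Q'*Q from the identity
reconstruction_error = norm(A - Q*R) / norm(A)        % relative factorization error
No option to qr will push these below what double precision allows.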

function parameters in matlab wander off after curve fitting

First, a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with two different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'gold standard'; the program I'm trying to do this in is MATLAB.
I was trying to be smart and wrote some code (a lot at least for me) and the goal of that code was the following:
1. Taking an individual longitudinal datafile
2. curve fitting this data on a non-parametric model using lsqcurvefit
3. obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values; MATLAB wanders off and sometimes the parameters get larger by a factor of 1000. The graphs, however, stay more or less the same in both situations, and both fit the data well. Still, it would be lovely to know how to make the MATLAB curve fitting more 'conservative' and keep it located near its original starting values.
validFitPersons = true(nbValidPersons, 1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i), 3};
    personalData = personalData(personalData(:,1) >= minAge, :);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams, personalRes, personalResidual] = lsqcurvefit(heightModel, initialValues, personalData(:,1), personalData(:,2), [], [], opts);
    catch
        x = 1;
    end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model, 'jpa2')
    % y = a .* (1 - 1/(1 + (b_1(t+e))^c_1 + (b_2(t+e))^c_2 + (b_3(t+e))^c_3))
    heightModel = @(params, ages) abs(params(1) .* (1 - 1./(1 + (params(2).*(ages + params(8))).^params(5) + (params(3).*(ages + params(8))).^params(6) + (params(4).*(ages + params(8))).^params(7))));
    modelStrings = {'a', 'b1', 'b2', 'b3', 'c1', 'c2', 'c3', 'e'};
    % Define initial values
    if strcmpi('male', gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve-fitting process in KaleidaGraph as closely as possible. There I found that they use the Levenberg-Marquardt algorithm, which I've selected. However, the results still vary and I don't have any more clues about how I can change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is I have 5 models with different parameters and different starting values (the second part of my code), and next I have the general curve fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my parameters can wander off from their starting values.
Does anyone have any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would try to check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
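A sketch of what that could look like with the variable names from your code (the 0.5 and 2 factors are arbitrary placeholders; note that lsqcurvefit's 'levenberg-marquardt' algorithm does not accept bounds, so the default trust-region-reflective algorithm is used here):
lb = 0.5 * initialValues;            % assumes all initial values are positive
ub = 2.0 * initialValues;
opts = optimoptions(@lsqcurvefit, 'Algorithm', 'trust-region-reflective');
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub, opts);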
Cheers
You state:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is I have 5 models with different parameters and different starting values (the second part of my code) and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem really lies with the model you have shown, which results in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
edit
To drive the point home regarding the problem with the choice of model, here are (1) the results of a trial fit using simulated data and (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.
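If you want to reproduce that kind of correlation check yourself, here is a rough sketch using the variable names from your code (the covariance estimate is the standard linearized approximation built from the Jacobian that lsqcurvefit can return):
[personalParams, resnorm, residual, ~, ~, ~, J] = lsqcurvefit(heightModel, ...
    initialValues, personalData(:,1), personalData(:,2), [], [], opts);
dof   = numel(residual) - numel(personalParams);     % degrees of freedom
covp  = inv(full(J'*J)) * resnorm / dof;             % approximate parameter covariance
corrp = covp ./ (sqrt(diag(covp)) * sqrt(diag(covp))');
imagesc(abs(corrp)); colorbar                        % values near 1 off the diagonal
                                                     % flag strongly correlated parameters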

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that does appear to be monotone in the region, a fairly small range of interest, and not a very high requirement on precision, I think all criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred stepsize, and evaluate your function at
xmin:stepsize:xmax
If required (and monotonicity really applies), this gives you a new upper and lower bound, and you can repeat the process for better accuracy.
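A minimal sketch of that idea, reusing Lik and L0 from your question (the bracket and stepsize are arbitrary choices):
xmin = 165.95;  xmax = 200;  stepsize = 0.05;
ks   = xmin:stepsize:xmax;
vals = arrayfun(@(k) Lik(k) - L0, ks);   % evaluate the residual on the grid
idx  = find(vals > 0, 1);                % first grid point past the sign change
k_bracket = ks([idx-1, idx])             % the root lies inside this interval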
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function within an optimization loop (over multiple variables). Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop with a huge starting upper bound (1e200), checking the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor * 1e200;
    [x, fval, exitflag] = fminbnd(@(x) function(x,...), LB, UB, options);
    factor = factor * 0.001;
end
The loop exits when a good solution is found. You can of course also play with the LB by introducing another factor and/or adjusting the factor step.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of a certain interval and then keep the right or left half so that you always have one bound giving a negative value and the other bound giving a positive value. Since you halve the interval each time, you can reach arbitrary precision very quickly.
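A minimal bisection sketch for g(k) = Lik(k) - L0, assuming (as your evaluations suggest) that g is negative at 165.95 and positive at 200:
g   = @(k) Lik(k) - L0;
klo = 165.95;  khi = 200;         % bracket taken from your listed values
while khi - klo > 1e-6            % desired precision in k (arbitrary choice)
    kmid = (klo + khi) / 2;
    if g(kmid) > 0
        khi = kmid;               % the root is in the lower half
    else
        klo = kmid;
    end
end
k_root = (klo + khi) / 2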
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?