I'm trying to do a fit in cftool for a basic oscillator. The problem is that MATLAB won't make a fit; it keeps drawing a straight line. I've been experimenting with the starting points and limits, but to no avail.
The problem is probably something trivial, but I can't figure out what it is.
Current fit:
You are using the custom equation y = f(x) = a * exp(-b*x) * sin(dx+e) + c.
MATLAB interprets dx inside the sin above as a single constant coefficient, so you have the sine of a constant, which is itself just a constant.
cftool is then left trying to approximate a sinusoidal motion with f(x), which at this point is an exponential function of the form const * exp(-const*x) + const, so the best it can do is yield the mean value, about 0.176.
To correct this, just replace dx with d*x inside the sin in your custom equation.
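At the command line, the corrected model would look something like the sketch below (assuming x and y are your data vectors; the start-point values are placeholders, not tuned values):
% Corrected custom equation: d*x instead of dx inside the sin
ft = fittype('a*exp(-b*x)*sin(d*x + e) + c', 'independent', 'x');
fobj = fit(x(:), y(:), ft, 'StartPoint', [0.2, 0.1, 0.18, 10, 0]); % [a b c d e], placeholder guesses
plot(fobj, x, y)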
In addition to the pertinent answer from Lingo:
In the practical use of non-linear regression software, a frequent cause of failure or poor convergence is the initial setting of the parameter values. The parameter values given below would be a very good starting point for a non-linear regression calculation.
Those values are probably more or less biased because the data was not available in numerical form, only as the graph provided by the OP. Substitute data was obtained by scanning the graph, which is not an accurate method.
NOTE : The linearised regression method used to compute the above approximate values is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
I have discrete data of a 2D function defined as
theta = linspace(0,pi,nTheta);
phi = linspace(0,2*pi,nPhi);
p = zeros(nPhi,nTheta);        % only to show the dimension of my matrix
[np,nt] = ndgrid(phi,theta);
f1 = griddedInterpolant(np,nt,p,'spline');
f2 = @(np,nt) f1(np,nt);       % wrapper so integral2 can evaluate the interpolant
integral2(f2,0,2*pi,0,pi)
Note that p is calculated from a complex physical problem, but I showed above how it is initialized.
Also, I can increase nTheta and nPhi, which leads to more accurate calculation of p.
My calculated function (with nPhi=400,nTheta=200) is something like:
I tried 3 ways:
using the trapz function (see the sketch after this list)
using the code above but with linear interpolation for griddedInterpolant
using the code above with spline interpolation
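For reference, the trapz variant in the first item might look roughly like this (a sketch, assuming p is the nPhi-by-nTheta matrix from the code above):
% Integrate p over theta (2nd dimension), then over phi (1st dimension)
I_trapz = trapz(phi, trapz(theta, p, 2), 1);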
Although the spline is better than the others, I still need to increase nPhi and nTheta, which makes the simulation impossible for me due to its cost.
Is there any suggestion besides these 3 methods, or any general suggestion on how I can do this computation more efficiently? (I also took advantage of the symmetry in both directions.)
Note that the shape of my function varies at each time step, so local mesh refinement might be challenging because I don't know the details of my function in advance.
For a university project I am working with several "Quality Assessment" metrics on finger-vein images.
Now I am trying to implement a metric that uses the Radon transform, and I got stuck at some point doing this in MATLAB.
My problem is as follows:
I have the following formula for the Radon transform. In the first steps I used the built-in one in MATLAB, but to implement the metric further I need its derivative for the curvature of the curve.
The delta is the Dirac delta function.
Derivation:
So my intention is to calculate the Radon transform on my own with the formula, but my problem is that F(x,y) is the gray value of the pixel located at (x,y). So I need a function F(x,y) that gives me the gray value of a pixel, which I can plug in to calculate the derivatives and the double integral.
How can I get such a function? Or do I have to do some kind of "curve fitting" on my pixel values to get a function?
Thanks in advance.
As I understand your question, there are two things that you could do:
Compute the derivatives of the Radon transform numerically (as suggested by Ander Biguri in a comment above). If you compute the Radon transform carefully, it will be a band-limited function, making the computation of derivatives possible. See this paper for some ideas on how to enforce a band-limited transform:
"The generalized Radon transform: sampling, accuracy and memory considerations" (PDF).
Compute the derivatives of the image numerically, then sample those derivatives to compute your C function. That is, you compute dF/dx, dF/dy, d^2F/dx^2, and whichever derivatives you need as images. You can interpolate into these derivatives if you need more precision.
IMO the best way to compute derivatives of a discrete image is through Gaussian derivatives. Note that this applies to both solutions above. For example dF/dx (Fx) can be computed by (see here for more details):
% cutoff is the kernel half-width and sigma the Gaussian scale (both must be set beforehand)
h = fspecial('gaussian',[1,2*cutoff+1],sigma);   % 1-D Gaussian kernel
dh = h .* (-cutoff:cutoff) / (-sigma^2);         % first derivative of that Gaussian
Fx = conv2(dh,h,F,'same');                       % derivative along one axis, smoothing along the other
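If sub-pixel values of the derivative are needed (second point above), the sampling could look like this sketch, where xq and yq are hypothetical query coordinates:
% Sample dF/dx at non-integer locations (xq, yq are illustrative query points)
Fx_q = interp2(Fx, xq, yq, 'cubic');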
PS: sorry for all the self-references, but I have worked on these topics quite a bit in the past. :)
I have a discrete curve y=f(x). I know the locations and amplitudes of its peaks. I want to approximate the curve by fitting a Gaussian at each peak. How should I go about finding the optimized Gaussian parameters? I would like to know if there is any built-in function which will make my task simpler.
Edit
I have fixed the means of the Gaussians and tried to optimize over sigma using lsqcurvefit() in MATLAB. The MSE is small. However, I have an additional hard constraint that the value of the approximate curve should equal the original function at the peaks. This constraint is not satisfied by my model. I am pasting my current working code here. I would like a solution which obeys the hard constraint at the peaks and approximately fits the curve at the other points. The basic idea is that the approximate curve has fewer parameters but still closely resembles the original curve.
fun = @(x,xdata) myFun(x,xdata,pks,locs); % pks,locs are the peak amplitudes and locations already available
x0 = w(1:6)*0.25; % my initial guess based on domain knowledge
[sigma,resnorm] = lsqcurvefit(fun,x0,xdata,ydata); % xdata and ydata are the original curve data points
recons = myFun(sigma,xdata,pks,locs);
figure; plot(ydata,'r'); hold on; plot(recons);

function f = myFun(sigma,xdata,a,c)
% a is the amplitude, c is the mean of each individual Gaussian
f = zeros(size(xdata));
for i = 1:6 % use 6 Gaussians to approximate the function
    f = f + a(i) * exp(-(xdata-c(i)).^2 ./ (2*sigma(i)^2));
end
end
If you know your peak locations and amplitudes, then all you have left to do is find the width of each Gaussian. You can think of this as an optimization problem.
Say you have x and y, which are samples from the curve you want to approximate.
First, define a function g() that will construct the approximation for given values of the widths. g() takes a parameter vector sigma containing the width of each Gaussian. The locations and amplitudes of the Gaussians will be constrained to the values you already know. g() outputs the value of the sum-of-gaussians approximation at each point in x.
Now, define a loss function L(), which takes sigma as input. L(sigma) returns a scalar that measures how badly the given approximation (using sigma) differs from the curve you're trying to approximate. The squared error is a common loss function for curve fitting:
L(sigma) = sum((y - g(sigma)) .^ 2)
The task now is to search over possible values of sigma, and find the choice that minimizes the error. This can be done using a variety of optimization routines.
If you have the MathWorks Optimization Toolbox, you can use the function lsqnonlin() (in this case you won't have to define L() yourself). The Curve Fitting Toolbox is probably an alternative. Otherwise, you can use an open-source optimization routine (check out cvxopt).
A couple things to note. You need to impose the constraint that all values in sigma are greater than zero. You can tell the optimization algorithm about this constraint. Also, you'll need to specify an initial guess for the parameters (i.e. sigma). In this case, you could probably choose something reasonable by looking at the curve in the vicinity of each peak. It may be the case (when the loss function is nonconvex) that the final solution is different, depending on the initial guess (i.e. you converge to a local minimum). There are many fancy techniques for dealing with this kind of situation, but a simple thing to do is to just try with multiple different initial guesses, and pick the best result.
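A minimal MATLAB sketch of this setup might look as follows (assuming x, y, pks, locs are column vectors as in the question; the initial widths sigma0 are placeholders):
% g(sigma): sum of Gaussians with known amplitudes (pks) and locations (locs)
g = @(sigma) sum(pks(:)' .* exp(-(x(:) - locs(:)').^2 ./ (2*sigma(:)'.^2)), 2);
res = @(sigma) y(:) - g(sigma);          % residual vector for lsqnonlin
sigma0 = ones(size(pks(:)));             % placeholder initial widths
lb = eps*ones(size(sigma0));             % enforce sigma > 0
sigmaHat = lsqnonlin(res, sigma0, lb, []);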
Edited to add:
In Python, you can use optimization routines in the scipy.optimize module, e.g. curve_fit().
Edit 2 (response to edited question):
If your Gaussians have much overlap with each other, then taking their sum may cause the height of the peaks to differ from your known values. In this case, you could take a weighted sum, and treat the weights as another parameter to optimize.
If you want the peak heights to be exactly equal to some specified values, you can enforce this constraint in the optimization problem. lsqcurvefit() won't be able to do it because it only handles bound constraints on the parameters. Take a look at fmincon().
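One way to set that up, combining the weighted sum from the previous paragraph with an equality constraint at the peaks, is sketched below (p stacks the weights and widths; all names and start values are illustrative, not part of the original answer):
% p = [w; sigma]: per-Gaussian weights followed by widths
K = numel(pks);
model = @(p,xq) sum((p(1:K).' .* pks(:).') .* exp(-(xq(:) - locs(:).').^2 ./ (2*(p(K+1:end).').^2)), 2);
obj = @(p) sum((y(:) - model(p, x)).^2);           % squared-error objective
nonlcon = @(p) deal([], model(p, locs) - pks(:));  % equality constraint: hit the peaks exactly
p0 = ones(2*K, 1);                                 % placeholder start: unit weights and widths
lb = [zeros(K,1); eps*ones(K,1)];                  % keep weights >= 0 and widths > 0
pHat = fmincon(obj, p0, [], [], [], [], lb, [], nonlcon);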
You can use the Expectation-Maximization algorithm to fit a mixture of Gaussians to your data. It doesn't care about the data dimension.
In the MATLAB documentation you can look up gmdistribution.fit or fitgmdist.
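A minimal usage sketch might look like this (assuming X is a vector of samples drawn from the distribution, not the tabulated curve values; the component count 6 mirrors the question):
gm = fitgmdist(X, 6);   % fit a 6-component Gaussian mixture to the samples in X
disp(gm.mu)             % component means
disp(gm.Sigma)          % component variances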
How do I calculate the time it takes for a curve to reach a specific x coordinate (in MATLAB)? Let's say we have:
dx/dt = x^2 + y^2 and dy/dt = 5*x*y, and the curve starts at the point (a,b). With help from ode45 I was able to get the figure of the curve. I need to calculate the time it takes for the curve to reach x = c (c > a). I've been told that this can be done by interpolation, but I have no idea how to write the code.
Depending on the behavior of your system around c, using data interpolation methods such as interp1 on the output may or may not work. The more rigorous way to solve this is either with events (see my answers here or here) or by using the single structure output argument form of ode45 in conjunction with deval and regular data interpolation methods. Both of these use polynomial interpolation methods designed to work with the underlying ODEs. Though more complicated, events are probably the best way to accurately determine crossing times in a case like yours.
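A rough sketch of the events approach is below (a, b, c, and tEnd are placeholders for the starting point, the target x value, and the time span):
f = @(t,z) [z(1)^2 + z(2)^2; 5*z(1)*z(2)];      % z = [x; y]
opts = odeset('Events', @(t,z) crossX(t,z,c));
[t, z, te] = ode45(f, [0, tEnd], [a; b], opts); % te is the time at which x first equals c

function [value, isterminal, direction] = crossX(~, z, c)
value = z(1) - c;   % zero when x == c
isterminal = 1;     % stop the integration at the event
direction = 1;      % only detect increasing crossings (c > a)
end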
I have an I-V curve. I also have an equation that I want to fit to this I-V curve, so I can adjust its constants. It is given by:
I = I01*(exp((V-R*I)/(n1*vth))-1) + I02*(exp((V-R*I)/(n2*vth))-1)
vth and R are constants that are already known, so I only want to obtain I01, I02, n1, n2. The problem is, as you can see, that I depends on itself. I was trying to use the Curve Fitting Toolbox, but it doesn't seem to work on such recursively defined equations.
Is there a way to make the curve fitting toolbox work on this? And if there isn't, what can I do?
Assuming that I01 and I02 are variables and not functions, you should set the problem up like this:
a0 = [I01 I02 n1 n2];  % initial guesses for the four unknowns
MinFun = @(a) sum(abs(a(1)*(exp((V-R*I)/(a(3)*vth))-1) + a(2)*(exp((V-R*I)/(a(4)*vth))-1) - I)); % scalar objective for fminsearch
aout = fminsearch(MinFun,a0);
By subtracting I, taking the absolute value, and summing over the data points, the parameter set for which both sides agree everywhere is the one where MinFun is zero (minimized).
No, the CFTB cannot fit such recursively defined functions. And since the true value of I is unknown at any point, errors in I will create a kind of errors-in-variables problem. All you have are the "measured" values of I.
The problem of errors in I MAY be serious, since any errors in I, or lack of fit, noise, model problems, etc., will be used in the expression itself. Then you exponentiate these inaccurate values, potentially causing a mess.
You may be able to use an iterative approach, something like this:
% 0. Initialize I_pred
I_pred = I;
% 1. Estimate the values of your coefficients for this model
%    (the Curve Fitting Toolbox CAN solve this problem, given I_pred):
I = I01*(exp((V-R*I_pred)/(n1*vth))-1) + I02*(exp((V-R*I_pred)/(n2*vth))-1)
% 2. Generate new predictions for I_pred:
I_pred = I01*(exp((V-R*I_pred)/(n1*vth))-1) + I02*(exp((V-R*I_pred)/(n2*vth))-1)
% Repeat steps 1 and 2 until the parameters from the CFTB stabilize.
The above pseudo-code will work only if your starting values are good, and there are not large errors/noise in the model/data. Even on a good day, the above approach may not converge well. But I see little hope otherwise.
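As a rough illustration of one pass of steps 1 and 2 at the command line, a two-input fittype could be used to feed I_pred into the model (a sketch under assumptions: V, I are column vectors, R and vth are defined in the workspace, and the start-point values are placeholders):
% One iteration of the scheme above
ft = fittype(@(I01,I02,n1,n2,V,Ip) I01*(exp((V - R*Ip)/(n1*vth)) - 1) + I02*(exp((V - R*Ip)/(n2*vth)) - 1), ...
    'independent', {'V','Ip'}, 'dependent', 'I');
fobj = fit([V, I_pred], I, ft, 'StartPoint', [1e-9, 1e-9, 1, 2]); % step 1: fit with current I_pred
I_pred = fobj(V, I_pred);                                         % step 2: update the predictions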