MATLAB Fitting Function

I am trying to fit a line to some data without using polyfit and polyval. I got some good help already on how to implement this and I have gotten it to work with a simple sin function. However, when applied to the function I am trying to fit, it does not work. Here is my code:
clear all
clc
lb=0.001; %lowerbound of data
ub=10; %upperbound of data
step=.1; %step-size through data
a=.03;
la=1482/120000; %wavelength: 1482 m/s (speed of sound in water) divided by 120 kHz
ep1=.02;
ep2=.1;
x=lb:step:ub;
r_sq_des=0.90; %desired value of r^2 for the fit of data without noise present
i=1;
for x=lb:step:ub
G(i,1)= abs(sin((a/la)*pi*x*(sqrt(1+(1/x)^2)-1)));
N(i,1)=2*rand()-1;
Ghat(i,1)=(1+ep1*N(i,1))*G(i,1)+ep2*N(i,1);
r(i,1)=x;
i=i+1;
end
x=r;
y=G;
V=[x.^0];
Vfit=[x.^0];
for i=1:1:1000
V = [x.^i V];
c = V \ y;
Vfit = [x.^i Vfit];
yFit=Vfit*c;
plot(x,y,'o',x,yFit,'--')
drawnow
pause
end
The first two sections just define the variables and the function; the second for loop is where I make the fit. As you can see, I have it pause after each order so I can watch how the fit evolves.

I changed your fit formula a bit; I got the same answers, but quickly got a warning that the matrix was singular. There is no sense in continuing past the point where the inversion becomes singular.
Depending on what you are doing, you can usually change variables or change domains. That doesn't do a lot better here, but it seemed to help a little.
I increased the number of samples by a factor of 10, since the initial part of the curve didn't look sampled finely enough.
I also added a weighting variable, although it is set to equal weights in the code below; attempts to de-weight the tail didn't help as much as I hoped.
Probably not really a solution, but perhaps it gives you a few more knobs/variables to play with.
...
step=.01; %step-size through data
...
x=r;
y=G;
t=x.*sqrt(1+x.^(-2));
t=log(t);
V=[ t.^0];
w=ones(size(t));
for i=1:1:1000
% Solve for the coefficient vector c such that yhat = V*c approximates y:
%   y    = V*c
%   V'*y = V'*V*c          (normal equations)
%   c    = (V'*V) \ (V'*y)
V = [t.^i V];
c = (V'*diag(w.^2)*V ) \ (V'*diag(w.^2)*y) ;
yFit=V*c;
subplot(211)
plot(t,y,'o',t,yFit,'--')
subplot(212)
plot(x,y,'o',x,yFit,'--')
drawnow
pause
end
This looks more like a frequency-estimation problem, and trying to fit an unknown frequency with polynomials tends to be touch and go. Replacing the polynomial basis with a quick sin/cos basis didn't seem to do too badly:
V = [sin(t*i) cos(t*i) V];
Unless you specifically need a polynomial basis, you can apply your knowledge of the problem domain to find other candidate basis functions for the fit, or to transform the domain so that the fit behaves more linearly.
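For concreteness, here is roughly what the loop above looks like with the sin/cos basis swapped in; it reuses t, y and w from the listing above, and the cutoff of 50 terms is arbitrary:
V = [t.^0];
for i=1:1:50
% same weighted least-squares solve as before; only the basis columns change
V = [sin(t*i) cos(t*i) V];
c = (V'*diag(w.^2)*V) \ (V'*diag(w.^2)*y);
yFit = V*c;
plot(t,y,'o',t,yFit,'--')
drawnow
pause
end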

As dennis mentioned, a different set of basis functions might do better. However, you can improve the polynomial fit by using a QR factorisation rather than just \ to solve the matrix equation. It is a badly conditioned problem no matter what you do, though, and smooth basis functions won't let you accurately reproduce the sharp corners in the actual function.
clear all
close all
clc
lb=0.001; %lowerbound of data
ub=10; %upperbound of data
step=.1; %step-size through data
a=.03;
la=1482/120000; %wavelength: 1482 m/s (speed of sound in water) divided by 120 kHz
ep1=.02;
ep2=.1;
x=logspace(log10(lb),log10(ub),100)';
r_sq_des=0.90; %desired value of r^2 for the fit of data without noise present
y=abs(sin(a/la*pi*x.*(sqrt(1+(1./x).^2)-1)));
N=2*rand(size(x))-1;
Ghat=(1+ep1*N).*y+ep2*N;
V=[x.^0];
xs=(lb:.01:ub)';
Vfit=[xs.^0];
for i=1:1:20%length(x)-1
V = [x.^i V];
Vfit = [xs.^i Vfit];
[Q,R]=qr(V,0);
c = R\(Q'*y);
yFit=Vfit*c;
plot(x,y,'o',xs,yFit)
axis([0 10 0 1])
drawnow
pause
end

Find zero crossing points in a signal and plot them in MATLAB

I have a signal 's' of voice of which you can see an extract here:
I would like to plot the zero crossing points in the same graph. I have tried with the following code:
zci = @(v) find(v(:).*circshift(v(:), [-1 0]) <= 0); % Returns Zero-Crossing Indices Of Argument Vector
zx = zci(s);
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(t(zx),s(zx),'o')
But it does not interpolate the points where the sign changes, so the result is:
However, I'd like the highlighted points to be as near as possible to zero.
I hope someone can help me. Thank you in advance for your responses.
Try this?
w = 1;
crossPts=[];
for k=1:(length(s)-1)
if (s(k)*s(k+1)<0)
crossPts(w) = (t(k)+t(k+1))/2;
w = w + 1;
end
end
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(crossPts, zeros(size(crossPts)), 'o')
Important questions: what is the highest frequency component of the signal you are measuring? Can you re-measure this signal? What is your sampling rate? What is this analysis for (schoolwork or scholarly research)? You may have quite a bit of trouble measuring the zeros of this function with any significance or accuracy, because it looks like your waveform has a frequency greater than half of your sampling rate (greater than your Nyquist frequency). Upsampling/interpolating your entire waveform will let you find the zeros much more precisely (but with no greater degree of accuracy), though that is a huge no-no in the scientific community. While my method may not look super pretty, it's the most accurate method that doesn't make unsafe assumptions. If you just want it to look pretty, I would recommend interp1 with the 'spline' method. You can interpolate the whole waveform and then use the above answer to find more accurate zeros.
Also, you could calculate the zeros on the interpolated waveform and then display them on the raw data.
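A minimal sketch of that idea, assuming s and t are the signal and time vectors from your question; the factor of 10 for the finer grid is arbitrary:
ti = linspace(t(1), t(end), 10*numel(t)); % denser time grid
si = interp1(t, s, ti, 'spline');         % spline-interpolated waveform
zxi = find(si(1:end-1).*si(2:end) <= 0);  % sign changes on the fine grid
figure
plot(t, s)                                % raw data
hold on
plot(ti(zxi), si(zxi), 'o')               % zeros found on the interpolated waveform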
A remotely possible way to improve your data: if you're measuring a human voice, why not try filtering to the range of human speech? This should be fine mathematically and could possibly clean up your waveform.
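A rough sketch of that, assuming the Signal Processing Toolbox is available, fs is your (known) sampling rate, and the 80-3400 Hz band is just a common speech range rather than a recommendation for your particular data:
fs = 8000;                                        % hypothetical sampling rate in Hz
[b, a] = butter(4, [80 3400]/(fs/2), 'bandpass'); % 4th-order Butterworth band-pass
s_filt = filtfilt(b, a, s);                       % zero-phase filtering of the raw signal s, so zero crossings are not shifted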

Speed up calculation in Physics simulation in Matlab

I am working on an MR physics simulation written in MATLAB which simulates the Bloch equations on a defined object. The magnetisation in the object is updated every time step with the following function.
function Mt = evolveMtrans(gamma, delta_B, G, T2, Mt0, delta_t)
% this function calculates precession and relaxation of the
% transversal component, Mt, of M
delta_phi = gamma*(delta_B + G)*delta_t;
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
end
This function is a very small part of the entire code, but it is called up to 250,000 times and thus slows down the performance of the entire simulation. I have thought about how to speed up the calculation but haven't come up with a good solution. One line is VERY time-consuming and accounts for approximately 50%-60% of the overall simulation time. This is the line,
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
where
Mt0 = 512x512 matrix
delta_t = a scalar
T2 = 512x512 matrix
delta_phi = 512x512 matrix
I would be very grateful for any suggestion to speed up this calculation.
More info below,
The function evolveMtrans is called every time step during the simulation.
The parameters that are used for calling the function are,
gamma = a constant (the gyromagnetic constant)
delta_B = the magnetic field value
G = gradient strength
T2 = a 512x512 matrix with T2-values for the object
Mstart.r = a 512x512 matrix with the values M.r had the last timestep
delta_t = a scalar with the difference in time since the last calculated M.r
The only parameters of these that changed during the simulation are,
G, Mstart.r and delta_t. The rest do not change their values during the simulation.
The part below is the part in the main code that calls the function.
% update phase and relaxation to calcTime
delta_t = calcTime - Mstart_t;
delta_B = (d-d0)*B0;
G = Sq.Gx*Sq.xGxref + Sq.Gz*Sq.zGzref;
% Precession around B0 (z-axis) and B1 (+-x-axis or +-y-axis)
% is defined clock-wise in a right hand system x, y, z and
% x', y', z (see the Bloch equation, Bloch 1946 and Levitt
% 1997). The x-axis has angle zero and the y-axis has angle 90.
% For flipping/precession around B1 in the xy-plane, z-axis has
% angle zero.
% For testing of precession direction:
% delta_phi = gamma*((ones(size(d)))*1e-6*B0)*delta_t;
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
M.l = evolveMlong(T1, M0.l, Mstart.l, delta_t);
This is not a surprise.
That "single line" is a matrix expression: with 512x512 operands it is really 262,144 element-wise scalar updates per call.
Per Jannick, that first term means element-wise division, i.e. delta_t/T2(i,j). Multiplying a matrix by a scalar is O(N^2), matrix addition is O(N^2), and evaluating the exponential element by element is O(N^2).
I'm not sure if I saw a complex argument in there as well. Does that mean complex matrices with real and imaginary entries? Does your equation split into real and imaginary parts? That roughly doubles the number of computations.
Your best hope is to exploit symmetry as much as possible. If all your matrices are symmetric, you cut your calculations roughly in half.
Use parallelization if you can.
Algorithm choice can make a big difference, too. If you're using explicit Euler integration, you may have time-step limitations due to stability concerns. Is that why you have 250,000 steps? Maybe a larger time step is possible with a more stable integration scheme. Think about a higher-order adaptive scheme with error control, like 5th-order Runge-Kutta.
There are several possibilities to improve the speed of the code, but all of the ones I see come with a caveat.
Numerical ODE integration
The first possibility would be to replace your analytical solution with a numerical differential-equation solver. This has several advantages:
The analytical solution involves the complex exponential function, which is costly to evaluate, while the differential equation contains only multiplications and additions (d/dt u = -a u  =>  u = exp(-a t)).
There are plenty of built-in solvers available in MATLAB and they are typically pretty fast (e.g. ode45). The built-ins all use a variable step size, though; this improves speed and accuracy but would be a problem if you really need a fixed, equally spaced grid of time points. Unofficial fixed-step solvers are also available.
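As a tiny illustration of the built-in-solver route (a sketch for a single matrix element, with placeholder values; the full 512x512 case would reshape the matrices into one state vector):
gamma = 2.675e8; delta_B = 0; G = 1e-3; T2_ij = 0.1;    % hypothetical scalars
rate = 1/T2_ij + 1i*gamma*(delta_B + G);                % decay plus precession rate for one element
Mt0_ij = complex(1, 0);                                 % complex initial value so the solver keeps the imaginary part
[tout, Mt] = ode45(@(t, u) -rate*u, [0 0.01], Mt0_ij);  % integrates d/dt u = -rate*u over 10 ms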
As a start you could also try just an Euler step, by replacing
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
by
delta_phi = gamma*(delta_B + G)*t_step;
M.r = Mstart.r .* (1 - t_step*1./T2 - 1i*delta_phi);
You can then further improve that by precalculating all constant values (e.g. one_over_T2 = 1./T2) and moving the constant parts of delta_phi out of the loop.
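A sketch of that precalculation using your variable names; the sizes, values and the loop itself are placeholders:
T2 = 0.1*ones(512); Mstart_r = ones(512);              % hypothetical data
gamma = 2.675e8; delta_B = 0; G = 1e-3; t_step = 1e-5; % hypothetical constants
inv_T2 = 1./T2;                                        % element-wise inverse, computed once outside the loop
for n = 1:1000                                         % stand-in for the real time loop
delta_phi = gamma*(delta_B + G)*t_step;                % G and t_step may still change per step
Mstart_r = Mstart_r .* (1 - t_step*inv_T2 - 1i*delta_phi); % Euler update from above
end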
Caveat:
You are bound to a maximum step size or the accuracy suffers, so this is only a good idea if your time spacing is quite fine.
Less points in time
You should carefully analyze whether you really need so many points in time; it seems somewhat puzzling to me that you do. Since you know the full analytical solution, you can freely choose how to sample in time and maybe use that to your advantage.
Going Fortran
This might seem like a drastic step, but in my experience basic MATLAB code (simple loops, matrix operations, etc.) can be translated to Fortran relatively easily, line by line. This would be especially helpful in combination with my first point. If you still want to use the full analytical solution there is probably not much to gain here, because exp is already pretty fast in MATLAB.

How do I make the actual plot look like the correct plot?

The prompt is to develop a program to evaluate convolutions without using MATLAB's built-in conv function: take the Fourier transform of each function, multiply the transforms together, and then inverse-Fourier-transform the product. The transforms are to be computed by direct integration, and the trapz function is the recommended way to do that integration.
I would appreciate ANY feedback on how to improve my code; please explain the improvements thoroughly and link a reference on how they work.
Given the code:
t = -5:.1:5;
w = pi;
X = zeros(101,1);
H = zeros(101,1);
Y = zeros(101,1);
y = zeros(101,1);
if t >= 0
x = 0;
h = 0;
else
x = exp((-3.*t)+(-1i*w.*t));
h = exp((-2*t)+(-1i*w.*t));
end
for k=2:101
X(k)=trapz(t(1:k),x(1:k));
H(k)=trapz(t(1:k),h(1:k));
Y = (X.*H)*exp(1i*w.*t);
y(k) = (1/(2*pi))*trapz(t(1:k),Y(1:k));
end
disp (length(x))
disp (length(X))
disp (length(Y))
disp (length(y))
disp (y)
figure(1);
subplot(1,2,1),plot(t,real(y));grid on;
As I do not have enough reputation to directly post images, the actual output and desired output are as follows:
The actual plot is this.
The desired plot is this.
My primary question is this: why is my plot not working?
Secondarily: What is unneeded in this code? What could make this code more efficient?
I won't do your entire homework, but I'll give you a few clues:
Skip the if/else on t; it doesn't work with a vector t. For your exam, try to understand why. If you can't figure it out, come back with your best guess and you might get an explanation =)
Try the following instead (no loops or ifs needed):
x = exp((-3.*t)+(-1i*pi.*t)).*(t>0);
And the same for h. Try to understand what .*(t>0) does in this context.
This one: Y = (X.*H)*exp(1i*w.*t); should be outside the loop. Why? Make a guess and you might get guidance if you're wrong.
Also, Y is a 101x101 matrix. I guess you want it to be 101x1? You probably need to transpose one of the vectors in the expression used to create Y. Before you do, you should figure out the difference between ' and .' (an important difference in this case; see the short example after these hints).
You are using subplot but only plotting one graph. If you want two graphs in the same plot, use hold on. If you want two plots beside each other, remember to plot the second one too.
And why use w = pi; when you can just as well use pi directly in the equations?
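As a small illustration of the ' versus .' point above (nothing specific to your assignment):
v = [1+2i 3-1i];
v'   % complex conjugate transpose: [1-2i; 3+1i]
v.'  % plain transpose, no conjugation: [1+2i; 3-1i]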

Matlab recursive curve fitting with custom equations

I have an I-V curve (measured current versus voltage). I also have an equation that I want to fit to this curve so I can adjust its constants. It is given by:
I = I01(exp((V-R*I)/(n1*vth))-1)+I02(exp((V-R*I)/(n2*vth))-1)
vth and R are constants that are already known, so I only need to find I01, I02, n1 and n2. The problem is that, as you can see, I depends on itself. I was trying to use the Curve Fitting Toolbox, but it doesn't seem to work on recursive equations.
Is there a way to make the curve fitting toolbox work on this? And if there isn't, what can I do?
Assuming that I01 and I02 are variables and not functions, you should set the problem up like this:
a0 = [I01 I02 n1 n2]; % initial guesses for the four unknowns
MinFun = @(a) sum(abs(a(1)*(exp((V-R*I)/(a(3)*vth))-1) + a(2)*(exp((V-R*I)/(a(4)*vth))-1) - I));
aout = fminsearch(MinFun, a0);
By subtracting I and summing the absolute value of the residual over the data points (fminsearch needs a scalar objective), the parameter values where both sides agree are the ones that drive MinFun to its minimum of zero.
No, the CFTB cannot fit such recursively defined functions. And errors in I, since the true value of I is unknown at any point, create a kind of errors-in-variables problem; all you have are the "measured" values for I.
The errors-in-variables issue MAY be serious, since any error in I, lack of fit, noise, model problems, etc., is fed back into the expression itself. You then exponentiate these inaccurate values, potentially causing a mess.
You may be able to use an iterative approach, something like this:
% 0. Initialize I_pred
I_pred = I;
% 1. Estimate the values of your coefficients, for this model:
% (The curve fitting toolbox CAN solve this problem, given I_pred)
I = I01*(exp((V-R*I_pred)/(n1*vth))-1)+I02*(exp((V-R*I_pred)/(n2*vth))-1)
% 2. Generate new predictions for I_pred
I_pred = I01*(exp((V-R*I_pred)/(n1*vth))-1)+I02*(exp((V-R*I_pred)/(n2*vth))-1)
% Repeat steps 1 and 2 until the parameters from the CFTB stabilize.
The above pseudo-code will work only if your starting values are good and there are no large errors/noise in the model/data. Even on a good day, this approach may not converge well, but I see little hope otherwise.
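A minimal sketch of that iteration, assuming V and I are column vectors of measured data and R and vth are the known constants. The model handle, the starting values, and the use of fminsearch (in place of the Curve Fitting Toolbox, purely to keep the sketch self-contained) are placeholders:
p = [1e-9 1e-12 1 2];                 % hypothetical starting guesses for [I01 I02 n1 n2]
I_pred = I;                           % step 0: initialise the predictions with the measured data
model = @(a, Ip) a(1)*(exp((V - R*Ip)/(a(3)*vth)) - 1) + a(2)*(exp((V - R*Ip)/(a(4)*vth)) - 1);
for iter = 1:20
% step 1: fit the coefficients with I_pred held fixed
p = fminsearch(@(a) sum((model(a, I_pred) - I).^2), p);
% step 2: regenerate the predictions with the new coefficients
I_pred = model(p, I_pred);
end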

Any idea to find the local minimum?

I have many different data sets of discrete data. The local minimum is not necessarily the smallest value in the data; it is the valley around the first peak. I am trying to find the indices of that first valley. My idea is to look at the difference between two neighbouring points: when the difference is less than some critical value and the forward point is larger than the backward point, that's the point I want, e.g.
for k=PEAK_POS:END_POS
if ( (abs(y(k)-y(k-1))<=0.01) && (y(k-1)>y(k)) )
expected_pos = k;
break;
end
end
This works for some data sets but not all, since the sample step can differ between sets, which would mean changing the critical value; I have so many data sets to analyse that I can't tune each one manually. I am looking for a better way to find that minimum. Thanks.
As mentioned by @JakobS, optimization theory is a large field in mathematics, with its own journals and conferences and everything.
Your problem does not sound complicated enough to justify the Optimization Toolbox (correct me if I'm wrong). It sounds like fminsearch is sufficient for your needs. Here's a brief tutorial for it; type help fminsearch or doc fminsearch for more info.
% example cost function to minimize
z = @(x) sin(x(:,1)).*cos(x(:,2));
% Make a plot
x = -pi:0.01:pi;
y = x;
[x,y] = meshgrid(x,y);
figure(1), surf(x,y, reshape(z([x(:) y(:)]), size(x)), 'edgecolor', 'none')
% find local minimum
[best, fval] = fminsearch(z, [pi,pi])
The result is
best =
1.570819831365890e+00 3.141628097071647e+00
fval =
-9.999999990956473e-01
Which is obviously a very reasonable approximation to the expected local optimum.
Optimization problems are a very broad topic and there has been done a lot already, it's not necessarily a good idea to start coding your own algorithms. For matlab there is the optimization toolbox, which might help: http://www.mathworks.de/products/optimization/
I would use the condition that to the left of a minimum the derivative is < 0 and to the right it is > 0, as in this example:
x = cumsum(rand(1,100)); % nonuniform distance
y = 5*sin(x/10)+randn(size(x)); % example data
dd = diff(y);
ig = [false (dd(1:end-1)<0 & dd(2:end)>0) false]; % patch to correct length
plot(x,y)
line(x(ig),y(ig),'Marker','+','LineStyle','none','Color','r')
And since you state you'd like the first one after a peak:
x_peak = 15;
candidates = x(ig);
i_min=find(candidates>x_peak,1,'first');
candidates(i_min)