How to solve equations with complex coefficients using ode45 in MATLAB?

I am trying to solve two equations with complex coefficients using ode45,
but I am getting the error message "Inputs must be floats, namely single or
double."
X = sym(['[',sprintf('X(%d) ',1:2),']']);
Eqns=[-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
f=@(t,X)[Eqns];
[t,Xabc]=ode45(f,[0 300*10^-6],[0 1])
How can I fix this? Can somebody help me?

Per the MathWorks Support Team, the "ODE solvers in MATLAB 5 (R12) and later releases properly handle complex valued systems." So the complex numbers are not the issue.
The error "Inputs must be floats, namely single or double." stems from your definition of f using Symbolic Variables that are, unlike complex numbers, not floats. The easiest way to get around this is to not use the Symbolic Toolbox at all; just makes Eqns an anonymous function:
Eqns= @(t,X) [-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
[t,Xabc]=ode45(Eqns,[0 300*10^-6],[0 1]);
That being said, I'd like to point out that numerically time integrating this system over 300 microseconds (I assume, since no units are given) will take a long time, since your coefficient matrix has imaginary eigenvalues on the order of 1E+11. The extremely short period of those oscillations will more than likely need to be resolved by Matlab's adaptive method, and that will take a while for a time span a few orders of magnitude greater than that period.
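A quick way to sanity-check those time scales (a sketch, using the coefficient matrix A assembled in the set-up code below):
lambda = eig(A);                        % eigenvalues of the coefficient matrix
Tosc   = 2*pi/max(abs(imag(lambda)));   % fastest oscillation period, roughly 6E-11 s
nsteps = 300E-6/Tosc                    % ode45 must resolve on the order of 5E+6 oscillations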
I'd therefore suggest an analytical approach to this problem, unless it is a stepping stone to another problem that is not analytically solvable.
Systems of ordinary differential equations of the form

$$\dot{\mathbf{x}}(t) = \mathbf{A}\,\mathbf{x}(t), \qquad \mathbf{x}(0) = \mathbf{x}_0,$$

which are linear, homogeneous systems with a constant coefficient matrix $\mathbf{A}$, have the general solution

$$\mathbf{x}(t) = \exp_m(\mathbf{A}\,t)\,\mathbf{x}_0,$$

where the m-subscripted exponential function is the matrix exponential.
Therefore, the analytical solution to the system can be calculated exactly assuming the matrix exponential can be calculated.
In Matlab, the matrix exponential is calculated via the expm function.
The following code computes the analytical solution and compares it to the numerical one for a short time span:
% Set-up
A = [-23788605396486326904946699391889i/38685626227668133590597632,23788605396486326904946699391889i/38685626227668133590597632;...
-2500000+5223289665997855453060886952725538686654593059791i/324518553658426726783156020576256,23788605396486326904946699391889i/38685626227668133590597632];
Eqns = @(t,X) A*X;
X0 = [0;1];
% Numerical
options = odeset('RelTol',1E-8,'AbsTol',1E-8);
[t,Xabc]=ode45(Eqns,[0 1E-9],X0,options);
% Analytical
Xana = cell2mat(arrayfun(@(tk) expm(A*tk)*X0,t,'UniformOutput',false)')';
k = 1;
% Plots
figure(1);
subplot(3,1,1)
plot(t,abs(Xana(:,k)),t,abs(Xabc(:,k)),'--');
title('Magnitude');
subplot(3,1,2)
plot(t,real(Xana(:,k)),t,real(Xabc(:,k)),'--');
title('Real');
ylabel('Values');
subplot(3,1,3)
plot(t,imag(Xana(:,k)),t,imag(Xabc(:,k)),'--');
title('Imaginary');
xlabel('Time');
The comparison plot (not reproduced here) overlays the two solutions' magnitude, real part, and imaginary part.
The output of ode45 matches the magnitude and real parts of the solution very well, but the imaginary portion is out-of-phase by exactly π.
However, since ode45's error estimator only looks at norms, the phase difference is not noticed which may lead to problems depending on the application.
Note that while the matrix exponential solution is far more costly than ode45 for the same number of time vector elements, the analytical solution is exact for any time vector of any density given to it. So for long time spans, the matrix exponential can be viewed as an improvement in some sense.

Related

Speed up calculation in Physics simulation in Matlab

I am working on an MR physics simulation written in Matlab which simulates Bloch's equations on a defined object. The magnetisation in the object is updated every time step with the following functions.
function Mt = evolveMtrans(gamma, delta_B, G, T2, Mt0, delta_t)
% this function calculates precession and relaxation of the
% transversal component, Mt, of M
delta_phi = gamma*(delta_B + G)*delta_t;
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
end
This function is a very small part of the entire code, but it is called up to 250,000 times and thus slows down the performance of the entire simulation. I have thought about how I can speed up the calculation but haven't come up with a good solution. There is one line that is VERY time consuming and accounts for approximately 50%-60% of the overall simulation time. This is the line,
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
where
Mt0 = 512x512 matrix
delta_t = a scalar
T2 = 512x512 matrix
delta_phi = 512x512 matrix
I would be very grateful for any suggestion to speed up this calculation.
More info below,
The function evolveMtrans is called every timestep during the simulation.
The parameters that are used for calling the function are,
gamma = a constant (gyromagnetic constant)
delta_B = the magnetic field value
G = gradientstrength
T2 = a 512x512 matrix with T2-values for the object
Mstart.r = a 512x512 matrix with the values M.r had the last timestep
delta_t = a scalar with the difference in time since the last calculated M.r
The only parameters of these that change during the simulation are
G, Mstart.r and delta_t. The rest do not change their values during the simulation.
Below is the part of the main code that calls the function.
% update phase and relaxation to calcTime
delta_t = calcTime - Mstart_t;
delta_B = (d-d0)*B0;
G = Sq.Gx*Sq.xGxref + Sq.Gz*Sq.zGzref;
% Precession around B0 (z-axis) and B1 (+-x-axis or +-y-axis)
% is defined clock-wise in a right hand system x, y, z and
% x', y', z (see the Bloch equation, Bloch 1946 and Levitt
% 1997). The x-axis has angle zero and the y-axis has angle 90.
% For flipping/precession around B1 in the xy-plane, z-axis has
% angle zero.
% For testing of precession direction:
% delta_phi = gamma*((ones(size(d)))*1e-6*B0)*delta_t;
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
M.l = evolveMlong(T1, M0.l, Mstart.l, delta_t);
This is not a surprise.
That "single line" is a matrix equation. It's really 1,024 simultaneous equations.
Per Jannick, that first term means element-wise division, so "delta_t/T[i,j]". Multiplying a matrix by a scalar is O(N^2). Matrix addition is O(N^2). Evaluating the element-wise exponential of a matrix is also O(N^2).
I'm not sure if I saw a complex argument in there as well. Does that mean complex matrices with real and imaginary entries? Does your equation simplify to real and imaginary parts? That means twice the number of computations.
Your best hope is to exploit symmetry as much as possible. If all your matrices are symmetric, you cut your calculations roughly in half.
Use parallelization if you can.
Algorithm choice can make a big difference, too. If you're using explicit Euler integration, you may have time-step limitations due to stability concerns. Is that why you have 250,000 steps? Maybe a larger time step is possible with a more stable integration scheme. Think about a higher-order adaptive scheme with error correction, like 5th-order Runge-Kutta.
There are several possibilities to improve the speed of the code but all that I see come with a caveat.
Numerical ode integration
The first possibility would be to replace your analytical solution with a numerical differential-equation solver. This has several advantages:
The analytical solution includes the complex exponential function, which is costly to calculate, while the differential equation contains only multiplications and additions. (d/dt u = -a u  =>  u = exp(-a t))
There are plenty of built-in solvers available in Matlab, and they are typically pretty fast (e.g. ode45). The built-ins, however, all use a variable step size. This improves speed and accuracy, but would be a problem if you really need a fixed, equally spaced grid of time points. Unofficial fixed-step solvers are available (e.g. on the MATLAB File Exchange).
As a start you could also try to use just an Euler step by replacing
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
by
delta_phi = gamma*(delta_B + G)*t_step;
M.r = M.r .* (1 - t_step./T2 - 1i*delta_phi);
You can then further improve that by precalculating all constant values, e.g. one_over_T2 = 1./T2, and by moving the constant part of delta_phi out of the loop.
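A minimal sketch of that hoisting (the names inv_T2 and phi_B are hypothetical):
% before the time loop (gamma, delta_B and T2 never change):
inv_T2 = 1./T2;                  % element-wise reciprocal, computed once
phi_B  = gamma*delta_B;          % constant part of the phase increment
% inside the time loop (only G and delta_t vary per step):
delta_phi = (phi_B + gamma*G)*delta_t;
M.r = M.r .* (1 - delta_t*inv_T2 - 1i*delta_phi);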
Caveat:
You are bound to a minimum step size or the accuracy suffers. Therefore this is only a good idea if your time spacing is quite fine.
Fewer points in time
You should carefully analyze whether you really need so many points in time. It seems somewhat puzzling to me that you need so many points. As you know the full analytical solution, you can freely choose how to sample the time and maybe use this to your advantage.
Going Fortran
This might seem like a grand step, but in my experience basic Matlab code (simple loops, matrix operations etc.) can be translated to Fortran line-by-line relatively easily. This would be especially helpful in addition to my first point. If you still want to use the full analytical solution, there is probably not much to gain here because exp is already pretty fast in Matlab.

Optimizing huge amounts of calls of fsolve in Matlab

I'm solving a pair of non-linear equations for each voxel in a dataset of a ~billion voxels using fsolve() in MATLAB 2016b.
I have done all the 'easy' optimizations that I'm aware of. Memory localization is OK, I'm using parfor, the equations are in fairly numerically simple form. All discontinuities of the integral are fed to integral(). I'm using the Levenberg-Marquardt algorithm with good starting values and a suitable starting damping constant, it converges on average with 6 iterations.
I'm now at ~6 ms per voxel, which is good but not good enough. I'd need an order of magnitude reduction to make the technique viable. There are only a few things that I can think of improving before starting to hammer away at accuracy:
The splines in the equation are for quick sampling of complex equations. There are two for each equation; one is inside the 'complicated nonlinear equation'. They represent two equations: one which has a large number of terms but is smooth and has no discontinuities, and one which approximates a histogram drawn from a spectrum. I'm using griddedInterpolant() as the editor suggested.
Is there a faster way to sample points from pre-calculated distributions?
parfor i=1:numel(I1)
    sols = fsolve(@(x) equationPair(x, input1, input2, <6 static inputs>), ...
        x0, <fsolve options>);
    output1(i) = sols(1); output2(i) = sols(2);
end
When calling fsolve, I'm using the 'parametrization' suggested by Mathworks to pass in the variables. I have a nagging feeling that defining an anonymous function for each voxel is taking a large slice of the time at this point. Is this true, and is there a relatively large overhead for defining the anonymous function again and again? Do I have any way to vectorize the call to fsolve?
There are two input variables which keep changing; all of the other input variables stay static. I need to solve one equation pair for each input pair, so I can't make it one huge system and solve it all at once. Do I have any other options than fsolve for solving pairs of nonlinear equations?
If not, some of the static inputs are fairly large. Is there a way to keep the inputs as persistent variables using MATLAB's persistent, and would that improve performance? I only saw examples of how to load persistent variables; how could I make it so that they would be input only once, and future function calls would be spared from the assumedly largish overhead of the large inputs?
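Regarding the persistent question, a minimal sketch of the usual caching pattern (the wrapper name and signature here are hypothetical, not from the original code):
function F = equationPairCached(x, in1, in2, statics)
% caches the large static inputs across calls; pass statics on the first call only
persistent S
if nargin > 3 || isempty(S)
    S = statics;                 % stored once, reused on later calls
end
F = equationPair(x, in1, in2, S);
end
Note that with parfor each worker holds its own copy of a persistent variable, so the statics still have to reach every worker at least once.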
EDIT:
The original equations in full form look like:

$$I_{\mathrm{high}} = \int_0^{E_{\mathrm{high}}} S_{\mathrm{high}}(E)\, e^{\,x_1/E^3 + x_2 f_{KN}(E)}\, dE, \qquad I_{\mathrm{low}} = \int_0^{E_{\mathrm{low}}} S_{\mathrm{low}}(E)\, e^{\,x_1/E^3 + x_2 f_{KN}(E)}\, dE.$$

Everything else is known; I'm solving for x_1 and x_2. f_KN was approximated by a spline. S_low(E) and S_high(E) are splines; the histograms they are drawn from are not reproduced here.
So, there are a few things I thought of:
Lookup table
Because the integrals in your function do not depend on any of the parameters other than x, you could make a simple 2D-lookup table from them:
% assuming simple (square) range here, adjust as needed
[x1,x2] = meshgrid( linspace(0, xmax, N) );
LUT_high = zeros(size(x1));
LUT_low = zeros(size(x1));
for ii = 1:N
LUT_high(:,ii) = integral(@(E) Fhi(E, x1(1,ii), x2(:,ii)), ...
0, E_high, ...
'ArrayValued', true);
LUT_low(:,ii) = integral(@(E) Flo(E, x1(1,ii), x2(:,ii)), ...
0, E_low, ...
'ArrayValued', true);
end
where Fhi and Flo are helper functions to compute those integrals, vectorized with scalar x1 and vector x2 in this example. Set N as high as memory will allow.
Those lookup tables you then pass as parameters to equationPair() (which allows parfor to distribute the data). Then just use interp2 in equationPair():
F(1) = I_high - interp2(x1,x2,LUT_high, x(1), x(2));
F(2) = I_low - interp2(x1,x2,LUT_low , x(1), x(2));
So, instead of recomputing the whole integral every time, you evaluate it once for the expected range of x, and reuse the outcomes.
You can specify the interpolation method used, which is linear by default. Specify cubic if you're really concerned about accuracy.
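For example, a cubic lookup would be (a sketch, reusing the names above):
F(1) = I_high - interp2(x1, x2, LUT_high, x(1), x(2), 'cubic');
F(2) = I_low  - interp2(x1, x2, LUT_low , x(1), x(2), 'cubic');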
Coarse/Fine
Should the lookup table method not be possible for some reason (memory limitations, in case the possible range of x is too big), here's another thing you could do: split up the whole procedure in 2 parts, which I'll call coarse and fine.
The intent of the coarse method is to improve your initial estimates really quickly, but perhaps not so accurately. The quickest way to approximate that integral by far is via the rectangle method:
do not approximate S with a spline; just use the original tabulated data (so S_high/low = [S_high/low@E0, S_high/low@E1, ..., S_high/low@E_high/low])
At the same values for E as used by the S data (E0, E1, ...), evaluate the exponential at x:
Elo = linspace(0, E_low, numel(S_low)).';
integrand_exp_low = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));
Ehi = linspace(0, E_high, numel(S_high)).';
integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
then use the rectangle method:
F(1) = I_low - (S_low * integrand_exp_low ) * (Elo(2) - Elo(1));
F(2) = I_high - (S_high * integrand_exp_high) * (Ehi(2) - Ehi(1));
Running fsolve like this for all I_low and I_high will then have improved your initial estimates x0, probably to a point close to "actual" convergence.
Alternatively, instead of the rectangle method, you can use trapz (trapezoidal method). A tad slower, but possibly a bit more accurate.
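A sketch of the trapz variant, reusing the vectors defined above:
% trapezoidal rule instead of the rectangle rule
F(1) = I_low  - trapz(Elo, S_low(:)  .* integrand_exp_low);
F(2) = I_high - trapz(Ehi, S_high(:) .* integrand_exp_high);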
Note that if (Elo(2) - Elo(1)) == (Ehi(2) - Ehi(1)) (step sizes are equal), you can further reduce the number of computations. In that case, the first N_low elements of the two integrands are identical, so the values of the exponentials will only differ in the N_low + 1 : N_high elements. So then just compute integrand_exp_high, and set integrand_exp_low equal to the first N_low elements of integrand_exp_high.
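In code, that reuse might look like (sketch):
% when both E-grids start at 0 and share a step size, the low-energy
% integrand is a prefix of the high-energy one
integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
integrand_exp_low  = integrand_exp_high(1:numel(S_low));   % no second exp() needed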
The fine method then uses your original implementation (with the actual integral()s), but then starting at the updated initial estimates from the coarse step.
The whole objective here is to try and bring the total number of iterations needed down from about 6 to less than 2. Perhaps you'll even find that the trapz method already provides enough accuracy, rendering the whole fine step unnecessary.
Vectorization
The rectangle method in the coarse step outlined above is easy to vectorize:
% (uses R2016b implicit expansion rules)
Elo = linspace(0, E_low, numel(S_low));
integrand_exp_low = exp(x(:,1)./Elo.^3 + x(:,2).*fKN(Elo));
Ehi = linspace(0, E_high, numel(S_high));
integrand_exp_high = exp(x(:,1)./Ehi.^3 + x(:,2).*fKN(Ehi));
F = [I_high_vector - (S_high * integrand_exp_high) * (Ehi(2) - Ehi(1))
I_low_vector - (S_low * integrand_exp_low ) * (Elo(2) - Elo(1))];
trapz also works on matrices; it will integrate over each column in the matrix.
You'd call equationPair() then using x0 = [x01; x02; ...; x0N], and fsolve will then converge to [x1; x2; ...; xN], where N is the number of voxels, and each x0 is 1×2 ([x(1) x(2)]), so x0 is N×2.
parfor should be able to slice all of this fairly easily over all the workers in your pool.
Similarly, vectorization of the fine method should also be possible; just use the 'ArrayValued' option to integral() as shown above:
F = [I_high_vector - integral(@(E) S_high(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)),...
0, E_high,...
'ArrayValued', true);
I_low_vector - integral(@(E) S_low(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)),...
0, E_low,...
'ArrayValued', true);
];
Jacobian
Taking derivatives of your function is quite easy. Writing g(x) for either integral, differentiating under the integral sign gives

$$\frac{\partial g}{\partial x_1} = \int_0^{E_{\mathrm{hi/lo}}} \frac{S(E)}{E^3}\, e^{\,x_1/E^3 + x_2 f_{KN}(E)}\, dE, \qquad \frac{\partial g}{\partial x_2} = \int_0^{E_{\mathrm{hi/lo}}} S(E)\, f_{KN}(E)\, e^{\,x_1/E^3 + x_2 f_{KN}(E)}\, dE.$$

Your Jacobian will then have to be a 2×2 matrix
J = [dF(1)/dx(1) dF(1)/dx(2)
dF(2)/dx(1) dF(2)/dx(2)];
Don't forget the leading minus sign (F = I_hi/lo - g(x), so dF/dx = -dg/dx).
Using one or both of the methods outlined above, you can implement a function to compute the Jacobian matrix and pass this on to fsolve via the 'SpecifyObjectiveGradient' option (via optimoptions). The 'CheckGradients' option will come in handy there.
Because fsolve usually spends the vast majority of its time computing the Jacobian via finite differences, manually supplying a value for it will normally speed the algorithm up tremendously.
It will be faster, because
fsolve doesn't have to do extra function evaluations to do the finite differences
the convergence rate will increase due to the improved precision of the Jacobian
Especially if you use the rectangle method or trapz like above, you can reuse many of the computations you've already done for the function values themselves, meaning, even more speed-up.
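As an illustration, here is a sketch of such a combined residual/Jacobian function built on the rectangle rule from the coarse step (names reused from above; this is an assumption-laden sketch, not the original implementation):
function [F, J] = equationPairJac(x, S_low, S_high, Elo, Ehi, fKN, I_low, I_high)
% residuals and analytical Jacobian via the rectangle rule (sketch)
dlo = Elo(2) - Elo(1);   dhi = Ehi(2) - Ehi(1);
glo = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));    % integrand exponentials (columns)
ghi = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
F = [I_high - (S_high * ghi) * dhi
     I_low  - (S_low  * glo) * dlo];
% dF/dx = -dg/dx: differentiate under the (discretized) integral sign
J = -[(S_high * (ghi./Ehi.^3)) * dhi,  (S_high * (ghi.*fKN(Ehi))) * dhi
      (S_low  * (glo./Elo.^3)) * dlo,  (S_low  * (glo.*fKN(Elo))) * dlo];
end
You would then enable it with optimoptions('fsolve','SpecifyObjectiveGradient',true) and verify it once with 'CheckGradients'.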
Rody's answer was the correct one. Supplying the Jacobian was the single largest factor. Especially with the vectorized version, there were 3 orders of magnitude of difference in speed between having the Jacobian supplied and not.
I had trouble finding information about this subject online, so I'll spell it out here for future reference: it is possible to vectorize independent parallel equations with fsolve() with great gains.
I also did some work on inlining fsolve(). After supplying the Jacobian and being smarter about the equations, the serial version of my code was mostly overhead at ~1*10^-3 s per voxel. At that point, most of the time inside the function was spent passing around an options struct and creating error messages which are never sent, plus lots of unused stuff assumedly there for the other optimization algorithms inside the optimization function (Levenberg-Marquardt for me). I successfully butchered the function fsolve and some of the functions it calls, dropping the time to ~1*10^-4 s per voxel on my machine. So if you are stuck with a serial implementation, e.g. because of having to rely on the previous results, it's quite possible to inline fsolve() with good results.
The vectorized version provided the best results in my case, with ~5*10^-5 s per voxel.

Simple script that computes a solution of linear ODEs giving wrong result

This is a question that involves both programming and mathematics. I'm trying to write a code that computes the general solution of a system of linear ODEs $\dot{x}(t) = A\,x(t) + g(t)$. The mathematical formula is shown below:

$$x(t) = \Phi(t)\,x_0 + \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)\,g(s)\,ds,$$

where the Greek symbol $\Phi$ that appears in the equation is expm(A*t).
clear all
A=[-2]; %system matrix
t0=1; %initial time of simulation
tf=2; %final time of simulation
syms t x_0
x0=x_0;
hom=expm(A*t); %hom means "homogeneous solution"
hom_initialcond=hom*x0;%this is the homogeneous solution multiplied by the initial conditon
invhom=inv(hom); %this is the inverse of the Greek letter Phi which, multiplied by the input of the system, composes the integrand of the integral
g=5*cos(2*t); %system input
integrand=invhom*g; %computation of the integrand
integral=int(integrand,t0,t); %computation of the definite integral from t0 to t, as shown by the math formula
partsol=hom*integral; %this is the particular solution
gen_sol=partsol+hom_initialcond %this is the general solution
x_0=1; %this is the initial condition
t=linspace(t0,tf); %vector of time from t0 to tf
y=double(subs(gen_sol)); %here I am evaluating my symbolic expression
plot(t,y)
The problem is that my plot of the ODE's solution doesn't look right (plot omitted): the solution is wrong because the curve doesn't start at the initial value of 1, even though its shape is very similar to the plot given by the MATLAB ODE solver.
However, if I set t0=0, then the plots given by my code and by the MATLAB solver are exactly equal to each other. So my code is fine for t0=0, but with any other value it goes wrong.
The general solution in terms of fundamental matrices is

$$x(t) = \Phi(t)\,\Phi^{-1}(t_0)\,x_0 + \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)\,g(s)\,ds,$$

or more often seen as

$$x(t) = \Phi(t)\,c + \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)\,g(s)\,ds.$$

But since the initial time is often taken to be zero, the inverse of the fundamental matrix is often omitted, since it is the identity for linear, constant-coefficient problems at zero (i.e., expm(zeros(n)) == eye(n)), and the c vector is then equivalent to the initial condition vector.
Swapping some of the lines around near your symbolic declaration to this
syms t x_0 c_0
hom = expm(A*t) ;
invhom = inv(hom) ;
invhom_0 = subs(invhom,t,sym(t0)) ;
c_0 = invhom_0 * x_0 ;
hom_initialcond = hom * c_0 ;
should provide the correct solution for non-zero initial time.
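As a quick sanity check, the corrected symbolic result can be compared against a purely numerical solution (a sketch, reusing A, t0, tf and the input from the question):
% numerical cross-check: x' = A*x + 5*cos(2*t), x(t0) = 1
fode = @(t,x) A*x + 5*cos(2*t);
[tn, xn] = ode45(fode, [t0 tf], 1);
plot(tn, xn)          % should now coincide with the symbolic gen_sol curve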

matlab differential equation

I have the following differential equation, which I'm not able to solve:

$$r\,u''(r) + \left(1 + r\,\frac{D'(r)}{D(r)}\right)u'(r) = 0, \qquad 1 \le r \le 2.$$
We know the following about the equation:
D(r) is a third-degree polynomial
D'(1)=D'(2)=0
D(2)=2D(1)
u(1)=450
u'(2)=-K * (u(2)-Te)
Where K and Te are constants.
I want to approximate the problem using a matrix, and I managed to solve
the similar equation $r\,u'' + u' = 0$ with the same boundary conditions for u(1) and u'(2).
On this equation I approximated u' and u'' with central differences and used a finite difference method between r=1 and r=2. I then placed the results in a matrix A in Matlab and the boundary conditions in the vector Y, and ran u=A\Y to get how the u value changes. Here's my Matlab code for the equation I managed to solve:
clear
a=1;
b=2;
N=100;
h = (b-a)/N;
K=3.20;
Ti=450;
Te=20;
A = zeros(N+2);
A(1,1)=1;
A(end,end)=1/(2*h*K);
A(end,end-1)=1;
A(end,end-2)=-1/(2*h*K);
r=a+h:h:b;
%y(i)
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
A(2:end-1,2:end-1)=A(2:end-1,2:end-1)+diag(yi);
%y(i-1)
for i=1:1:length(r)-1
ymin(i)=r(i+1)*(1/(h^2))-1/(2*h);
end
A(3:end-1,2:end-2) = A(3:end-1,2:end-2)+diag(ymin);
%y(i+1)
for i=1:1:length(r)
ymax(i)=r(i)*(1/(h^2))+1/(2*h);
end
A(2:end-1,3:end)=A(2:end-1,3:end)+diag(ymax);
Y=zeros(N+2,1);
Y(1) =Ti;
Y(2)=-(Ti*(r(1)/(h^2)-(1/(2*h))));
Y(end) = Te;
r=[1,r];
u=A\Y;
plot(r,u(1:end-1));
My question is, how do I solve the first differential equation?
As TroyHaskin pointed out in comments, one can determine D up to a constant factor, and that constant factor cancels out in D'/D anyway. Put another way: we can assume that D(1)=1 (a convenient number), since D can be multiplied by any constant. Now it's easy to find the coefficients (done with Wolfram Alpha), and the polynomial turns out to be
D(r) = -2r^3+9r^2-12r+6
with derivative D'(r) = -6r^2+18r-12. (There is also a smarter way to find the polynomial by starting with D', which is quadratic with known roots.)
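The stated conditions are easy to verify numerically:
% check the polynomial against the given conditions
D  = [-2 9 -12 6];                 % D(r) = -2r^3 + 9r^2 - 12r + 6
dD = polyder(D);                   % D'(r) = -6r^2 + 18r - 12
polyval(dD, [1 2])                 % D'(1) and D'(2): both zero
polyval(D, 2) - 2*polyval(D, 1)    % D(2) - 2*D(1): zero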
I would probably use this information right away, computing the coefficient k of the first derivative:
r = a+h:h:b;
k = 1+r.*(-6*r.^2+18*r-12)./(-2*r.^3+9*r.^2-12*r+6);
It seems that k is always positive on the interval [1,2], so if you want to minimize the changes to existing code, just replace r(i) by r(i)/k(i) in it.
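In vectorized form, that substitution could look like (a sketch based on the code above):
rk   = r./k;                       % effective coefficient r/k
yi   = -rk*(2/h^2);                % main diagonal
ymin = rk(2:end)/h^2 - 1/(2*h);    % sub-diagonal (replaces r(i+1))
ymax = rk/h^2 + 1/(2*h);           % super-diagonal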
By the way, instead of loops like
for i=1:1:length(r)
yi(i)=-r(i)*(2/(h^2));
end
one usually does simply
yi=-r*(2/(h^2));
This vectorization makes the code more compact and can benefit the performance too (not so much in your example, where solving the linear system is the bottleneck). Another benefit is that yi is properly initialized, while with your loop construction, if yi happened to already exist and have length greater than length(r), the resulting array would have extraneous entries. (This is a potential source of hard-to-track bugs.)

Find approximation of sine using least squares

I am doing a project where I find an approximation of the sine function using the least squares method. I can also use 12 values of my own choice. Since I couldn't figure out how to solve it directly, I thought of using the Taylor series for sine and then solving it as a polynomial of order 5. Here is my code:
%% Find the sine of the 12 known values
x=[0,pi/8,pi/4,7*pi/2,3*pi/4,pi,4*pi/11,3*pi/2,2*pi,5*pi/4,3*pi/8,12*pi/20];
y=zeros(12,1);
for i=1:12
y=sin(x);
end
n=12;
j=5;
%% Find the sums to populate the matrix A and matrix B
s1=sum(x);s2=sum(x.^2);
s3=sum(x.^3);s4=sum(x.^4);
s5=sum(x.^5);s6=sum(x.^6);
s7=sum(x.^7);s8=sum(x.^8);
s9=sum(x.^9);s10=sum(x.^10);
sy=sum(y);
sxy=sum(x.*y);
sxy2=sum( (x.^2).*y);
sxy3=sum( (x.^3).*y);
sxy4=sum( (x.^4).*y);
sxy5=sum( (x.^5).*y);
A=[n,s1,s2,s3,s4,s5;s1,s2,s3,s4,s5,s6;s2,s3,s4,s5,s6,s7;
s3,s4,s5,s6,s7,s8;s4,s5,s6,s7,s8,s9;s5,s6,s7,s8,s9,s10];
B=[sy;sxy;sxy2;sxy3;sxy4;sxy5];
Then in Matlab I get this result:
>> a=A^-1*B
a =
-0.0248
1.2203
-0.2351
-0.1408
0.0364
-0.0021
However, when I try to replace the values of a in the Taylor series and evaluate, e.g. at t=pi/2, I get wrong results
>> t=pi/2;
fun=t-t^3*a(4)+a(6)*t^5
fun =
2.0967
Am I doing something wrong when I replace the values of the a matrix in the Taylor series, or is my initial thought flawed?
Note: I can't use any built-in function.
If you need a least-squares approximation, simply decide on a fixed interval that you want to approximate on and generate some x abscissae on that interval (possibly equally spaced abscissae using linspace - or non-uniformly spaced as you have in your example). Then evaluate your sine function at each point such that you have
y = sin(x)
Then simply use the polyfit function to obtain the least-squares parameters
b = polyfit(x,y,n)
where n is the degree of the polynomial you want to fit. You can then use polyval to obtain the values of your approximation at other values of x.
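For example (sketch):
x = linspace(0, 2*pi, 12);     % 12 sample points of your own choice
y = sin(x);
b = polyfit(x, y, 5);          % degree-5 least-squares coefficients
polyval(b, pi/2)               % approximation of sin(pi/2) = 1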
EDIT: As you can't use polyfit, you can generate the Vandermonde matrix for the least-squares approximation directly (the below assumes x is a row vector).
A = ones(length(x),1);
x = x';
for i=1:n
A = [A x.^i];
end
then simply obtain the least squares parameters using
b = A\y;
You can clearly optimise the clumsy Vandermonde generation loop above; I have just written it that way to illustrate the concept. For better numerical stability you would also do better to use a nice orthogonal polynomial system like Chebyshev polynomials of the first kind. If you are not even allowed to use the matrix divide \ function, then you will need to code up your own implementation of a QR factorisation and solve the system that way (or use some other numerically stable method).
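If the built-in qr function and a triangular solve are still allowed, the QR route looks like this (a sketch; y must be a column vector):
[Q, R] = qr(A, 0);    % economy-size QR of the Vandermonde matrix
b = R \ (Q' * y);     % here \ only performs back-substitution on the triangular factor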