Runge-Kutta for coupled ODEs - MATLAB

I'm building a function in Octave that can solve N coupled ordinary differential equations of the type:
dx/dt = F(x,y,…,z,t)
dy/dt = G(x,y,…,z,t)
dz/dt = H(x,y,…,z,t)
with any of these three methods (Euler, Heun, and Runge-Kutta 4).
The following code corresponds to the function:
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
  range = b-a;
  h = range/steps;
  rows = (range/h)+1;
  columns = size(dfuns)(2)+1;
  sol = zeros(abs(rows), columns);
  heun = zeros(1, columns-1);
  for i = 1:abs(rows)
    if i==1
      sol(i,1) = a;
    else
      sol(i,1) = sol(i-1,1)+h;
    end
    for j = 2:columns
      if i==1
        sol(i,j) = ini(j-1);
      else
        if strcmp("euler",method)
          sol(i,j) = sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
        elseif strcmp("heun",method)
          heun(j-1) = sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
        elseif strcmp("rk4",method)
          k1 = h*dfuns{j-1}(E, [sol(i-1,1), sol(i-1,2:end)]);
          k2 = h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
          k3 = h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k2)]);
          k4 = h*dfuns{j-1}(E, [sol(i-1,1)+h, sol(i-1,2:end)+(h*k3)]);
          sol(i,j) = sol(i-1,j)+((1/6)*(k1+(2*k2)+(2*k3)+k4));
        end
      end
    end
    if strcmp("heun",method)
      if i~=1
        for k = 2:columns
          sol(i,k) = sol(i-1,k)+(h/2)*((dfuns{k-1}(E, sol(i-1,1:end)))+(dfuns{k-1}(E, [sol(i,1), heun])));
        end
      end
    end
  end
end
When I use the function for a single ordinary differential equation, the RK4 method is the best, as expected. But when I run the code for a coupled system of differential equations, RK4 is the worst. I've been checking and checking, and I can't find what I am doing wrong.
The following code is an example of how to call the function:
F{1} = @(e, y) 0.6*y(3);
F{2} = @(e, y) -0.6*y(3)+0.001407*y(4)*y(3);
F{3} = @(e, y) -0.001407*y(4)*y(3);
steps = 24;
sol1 = coupled_ode(0,F,steps,0,24,[0 5 995],"euler");
sol2 = coupled_ode(0,F,steps,0,24,[0 5 995],"heun");
sol3 = coupled_ode(0,F,steps,0,24,[0 5 995],"rk4");
plot(sol1(:,1),sol1(:,4),sol2(:,1),sol2(:,4),sol3(:,1),sol3(:,4));
legend("Euler", "Heun", "RK4");

Careful: there are a few too many h's in the RK4 formulæ:
k2 = h*dfuns{ [...] +(0.5*h*k1)]);
k3 = h*dfuns{ [...] +(0.5*h*k2)]);
should be
k2 = h*dfuns{ [...] +(0.5*k1)]);
k3 = h*dfuns{ [...] +(0.5*k2)]);
(last h's removed).
However, this makes no difference for the example that you provided, since h=1 there.
But other than that little bug, I don't think you're actually doing anything wrong.
If I plot the solution generated by the more advanced, adaptive 4ᵗʰ/5ᵗʰ order RK implemented in ode45:
F{1} = @(e,y) +0.6*y(3);
F{2} = @(e,y) -0.6*y(3) + 0.001407*y(4)*y(3);
F{3} = @(e,y) -0.001407*y(4)*y(3);
tend = 24;
steps = 24;
y0 = [0 5 995];
plotN = 2;
sol1 = coupled_ode(0,F, steps, 0,tend, y0, 'euler');
sol2 = coupled_ode(0,F, steps, 0,tend, y0, 'heun');
sol3 = coupled_ode(0,F, steps, 0,tend, y0, 'rk4');
figure(1), clf, hold on
plot(sol1(:,1), sol1(:,plotN+1),...
sol2(:,1), sol2(:,plotN+1),...
sol3(:,1), sol3(:,plotN+1));
% New solution, generated by ODE45
opts = odeset('AbsTol', 1e-12, 'RelTol', 1e-12);
fcn = @(t,y) [F{1}(0,[0; y])
              F{2}(0,[0; y])
              F{3}(0,[0; y])];
[t,solN] = ode45(fcn, [0 tend], y0, opts);
plot(t, solN(:,plotN))
legend('Euler', 'Heun', 'RK4', 'ODE45');
xlabel('t');
Then we have something more believable to compare to.
Now, plain-and-simple RK4 indeed performs terribly for this particular case (plot omitted).
However, if I simply flip the signs of the last two functions:
% ±
F{2} = @(e,y) +0.6*y(3) - 0.001407*y(4)*y(3);
F{3} = @(e,y) +0.001407*y(4)*y(3);
Then the picture is reversed (plot omitted).
The main reason RK4 performs badly in your case is the step size. The adaptive RK4/5 (with the tolerance set to 1 instead of the 1e-12 used above) produces an average δt = 0.15. In other words, the solver's error estimation indicates that, for this particular problem, h ≈ 0.15 is the largest step you can take without introducing unacceptable error.
But you were taking h = 1, which then indeed gives a large accumulated error.
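A quick sanity check is to re-run your own function with a step below that threshold (a sketch reusing your call from above; 240 steps over [0, 24] gives h = 0.1, and it assumes the h-fix described earlier has been applied):
steps = 240; % h = 24/240 = 0.1 < 0.15
sol3b = coupled_ode(0, F, steps, 0, 24, [0 5 995], "rk4");
plot(sol3b(:,1), sol3b(:,4));
With that step size, RK4 should track the ode45 reference much more closely.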
The fact that Heun and Euler perform so well in your case is, well, just plain luck, as demonstrated by the sign-inversion example above.
Welcome to the world of numerical mathematics - there is never one method that's best for all problems under all circumstances :)

Apart from the error described in the older answer, there is indeed a fundamental methodological error in the implementation. The implementation is correct for scalar first-order differential equations, but the moment you use it on a coupled system, the de-coupled treatment of the stages in the Runge-Kutta method (note that the Heun step is just a copy of the Euler step) reduces it to a first-order method.
Specifically, starting in
k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
the addition of 0.5*k1 to sol(i-1,2:end) should add the vector of first-stage slopes, one slope per component; computing k1 one component at a time instead adds the same scalar slope to every component of the state vector.
Taking this into account results in the following changed implementation (with the Heun update also corrected to the trapezoidal average):
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
  range = b-a;
  h = range/steps;
  rows = steps+1;
  columns = size(dfuns)(2)+1;
  sol = zeros(rows, columns);
  k = h*ones(4, columns); % column 1 keeps the time increment h; columns 2:end are overwritten with the stage slopes
  sol(1,1) = a;
  sol(1,2:end) = ini(1:end);
  for i = 2:rows
    sol(i,1) = sol(i-1,1)+h;
    if strcmp("euler",method)
      for j = 2:columns
        sol(i,j) = sol(i-1,j) + h*dfuns{j-1}(E, sol(i-1,1:end));
      end
    elseif strcmp("heun",method)
      for j = 2:columns
        k(1,j) = h*dfuns{j-1}(E, sol(i-1,1:end));
      end
      for j = 2:columns
        % trapezoidal average of the slope at the start and at the Euler predictor
        sol(i,j) = sol(i-1,j) + 0.5*(k(1,j) + h*dfuns{j-1}(E, sol(i-1,1:end)+k(1,1:end)));
      end
    elseif strcmp("rk4",method)
      for j = 2:columns
        k(1,j) = h*dfuns{j-1}(E, sol(i-1,:));
      end
      for j = 2:columns
        k(2,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(1,:));
      end
      for j = 2:columns
        k(3,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(2,:));
      end
      for j = 2:columns
        k(4,j) = h*dfuns{j-1}(E, sol(i-1,:)+k(3,:));
      end
      sol(i,2:end) = sol(i-1,2:end) + (1/6)*(k(1,2:end) + 2*k(2,2:end) + 2*k(3,2:end) + k(4,2:end));
    end
  end
end
As can be seen, the loop over the vector components recurs frequently. One can hide this with full vectorization, using a single vector-valued function for the right-hand side of the coupled ODE system, as in the sketch below.
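For illustration, a minimal vectorized RK4 stepper under the assumption that the right side is one function f(t,u) returning a column vector (the name rk4_vec and this interface are illustrative choices, not part of the original code):
function sol = rk4_vec(f, steps, a, b, u0)
  % f  : @(t,u) -> column vector of slopes, one entry per component
  % u0 : row or column vector of initial values
  h = (b-a)/steps;
  t = a + (0:steps)'*h;
  sol = zeros(steps+1, numel(u0)+1);
  sol(:,1) = t;            % first column holds the time grid
  sol(1,2:end) = u0(:)';
  for i = 1:steps
    u  = sol(i,2:end)';
    k1 = h*f(t(i),       u);
    k2 = h*f(t(i)+0.5*h, u+0.5*k1); % whole stage vector at once, no component loop
    k3 = h*f(t(i)+0.5*h, u+0.5*k2);
    k4 = h*f(t(i)+h,     u+k3);
    sol(i+1,2:end) = (u + (k1+2*k2+2*k3+k4)/6)';
  end
end
For the system from the question this would be called as
f = @(t,u) [0.6*u(2); -0.6*u(2)+0.001407*u(3)*u(2); -0.001407*u(3)*u(2)];
sol = rk4_vec(f, 24, 0, 24, [0 5 995]);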
The plot of the second component of the solution with these changes gives a much more reasonable picture for step size 1, and with a subdivision into 120 intervals, i.e. step size 0.2, the graph for RK4 changes very little while the other two move towards it from below and above (plots omitted).

Related

Solving DDE in Matlab

I am trying to learn how to solve DDEs (delay differential equations) in Matlab, using a very helpful YouTube tutorial where the presenter solves examples. In the case of a 3-dimensional system, the code goes as follows:
tau = [1 0.5];
tf = 10;
sol = dde23(@dde, tau, @history, [0 tf]);
t = linspace(0,tf,200);
y = deval(sol,t);
figure(1)
plot(t,y)
function y = history(t)
  y = [1;0;-1];
end
The dde function according to the tutorial is:
function dydt = dde(t,y,tau)
  y1tau1 = tau(:,1);
  y2tau2 = tau(:,2);
  dydt = [y1tau1(1)
          y(1) - y1tau1(1) + y2tau2(2)
          y(2) - y(3)];
end
If I understand it correctly, tau here is a 3-by-2 matrix holding the state variables at their respective delays: the first column contains the values at the first delay tau_1 for each of the three state variables, and the second column the values at tau_2.
Nevertheless, only two entries of this matrix are actually used: the tau_1-delayed value of y_1 and the tau_2-delayed value of y_2.
Given that, I thought it would be the same (at least in this case, where we only have one delay for y_1 and one for y_2) as if the function were:
function dydt = dde(t,y,tau)
  dydt = [tau(1)
          y(1) - tau(1) + tau(2)
          y(2) - y(3)];
end
I ran the script for both of them and the results are totally different, both qualitatively and quantitatively, and I cannot figure out why. Could someone explain the difference?
y1tau1(1) and y2tau2(2) are tau(1,1) and tau(2,2), while tau(1) and tau(2) are the same as tau(1,1) and tau(2,1). So the second version encodes a different system, and there is no reason why two different, even if only slightly different, DDE systems should have the same solutions.
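The underlying reason is MATLAB's column-major linear indexing (a minimal illustration; Z stands in for the 3-by-2 matrix of delayed values):
Z = reshape(1:6, 3, 2); % a 3-by-2 matrix
Z(1)   % same as Z(1,1)
Z(2)   % same as Z(2,1), NOT Z(2,2)
Z(5)   % this is the linear index of Z(2,2)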
As you also ask a general interpretation question: mathematically, the DDE system is
dy1(t)/dt = y1(t-1)
dy2(t)/dt = y1(t) - y1(t-1) + y2(t-0.5)
dy3(t)/dt = y2(t) - y3(t)
It is unfortunate that the argument holding the delayed values is also named tau; it would be better to use something else, like yd:
function dydt = dde(t,y,yd)
  dydt = [ yd(1,1)
           y(1) - yd(1,1) + yd(2,2)
           y(2) - y(3) ];
end
This yd has two columns: the first holds the values y(t-tau(1)) = y(t-1), the second the values y(t-tau(2)) = y(t-0.5). The DDE function does not know the delays themselves; it only computes with the values at these delays, which are obtained from a piecewise interpolation function that continues the history function with the solution so far. The history function takes the place of the initial condition.
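Putting it together, a minimal self-contained version of the tutorial script with the clearer naming (same system as above; note that local functions in script files require R2016b or newer):
tau = [1 0.5];
tf = 10;
sol = dde23(@dde, tau, @history, [0 tf]);
t = linspace(0,tf,200);
plot(t, deval(sol,t));
function dydt = dde(t,y,yd)
  % yd(:,1) = y(t-1), yd(:,2) = y(t-0.5)
  dydt = [ yd(1,1)
           y(1) - yd(1,1) + yd(2,2)
           y(2) - y(3) ];
end
function y = history(t)
  y = [1;0;-1];
end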

Strange wrong result for (un)coupled PDEs using MATLAB's pdepe, time is doubled

I am trying to solve two coupled reaction-diffusion equations in 1D using pdepe, namely
$\partial_t u_1 = \nabla^2 u_1 + 2k(-u_1^2+u_2)$
$\partial_t u_2 = \nabla^2 u_2 + k(u_1^2-u_2)$
The solution is in the domain $x\in[0,1]$, with initial conditions being two identical Gaussian profiles centered at $x=1/2$. The boundary conditions are absorbing for both components, i.e. $u_1(0)=u_2(0)=u_1(1)=u_2(1)=0$.
pdepe gives me a solution without prompting any errors. However, I think the solutions must be wrong: when I set the coupling to zero, i.e. $k=0$ (and also when I set it very small, say $k=0.001$), the solutions do not coincide with the solution of the simple diffusion equation
$\partial_t u = \nabla^2 u$
as obtained from pdepe itself.
Strangely enough, the solutions $u_1(t)=u_2(t)$ from the "coupled" case with coupling set to zero, and the solution for the case uncoupled by construction $u(t')$ coincide if we set $t'=2t$, that is, the solution of the "coupled" case evolves twice as fast as the solution of the uncoupled case.
Here's a minimal working example:
Coupled case
function [xmesh,tspan,sol] = coupled(k) % argument is the coupling k
  std = 0.001;  % width of initial gaussian
  center = 1/2; % center of gaussian
  xmesh = linspace(0,1,10000);
  tspan = linspace(0,1,1000);
  sol = pdepe(0, @pdefun, @icfun, @bcfun, xmesh, tspan);
  function [c,f,s] = pdefun(x,t,u,dudx)
    c = ones(2,1);
    f = zeros(2,1);
    f(1) = dudx(1);
    f(2) = dudx(2);
    s = zeros(2,1);
    s(1) = 2*k*(u(2)-u(1)^2);
    s(2) = k*(u(1)^2-u(2));
  end
  function u0 = icfun(x)
    u0 = ones(2,1);
    u0(1) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
    u0(2) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
  end
  function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
    pL = zeros(2,1);
    pL(1) = uL(1);
    pL(2) = uL(2);
    pR = zeros(2,1);
    pR(1) = uR(1);
    pR(2) = uR(2);
    qL = [0 0;0 0];
    qR = [0 0;0 0];
  end
end
Uncoupled case
function [xmesh,tspan,sol] = uncoupled()
  std = 0.001;  % width of initial gaussian
  center = 1/2; % center of gaussian
  xmesh = linspace(0,1,10000);
  tspan = linspace(0,1,1000);
  sol = pdepe(0, @pdefun, @icfun, @bcfun, xmesh, tspan);
  function [c,f,s] = pdefun(x,t,u,dudx)
    c = 1;
    f = dudx;
    s = 0;
  end
  function u0 = icfun(x)
    u0 = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
  end
  function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
    pL = uL;
    pR = uR;
    qL = 0;
    qR = 0;
  end
end
Now, suppose we run
[xmesh,tspan,soluncoupled] = uncoupled();
[xmesh,tspan,solcoupled] = coupled(0); %coupling k=0, i.e. uncoupled solutions
One can directly check by plotting the solutions for any time index $it$ that, even though they should be identical, the solutions given by the two functions are not, e.g.
hold all
plot(xmesh,soluncoupled(it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
On the other hand, if we double the time index of the uncoupled solution, the solutions are identical:
hold all
plot(xmesh,soluncoupled(2*it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
The case $k=0$ is not singular; one can set $k$ small but finite, and the deviations from the $k=0$ case are minimal, i.e. the solution still evolves twice as fast as the uncoupled solution.
I really don't understand what is going on. I need to work on the coupled case, but obviously I can't trust the results if the code does not reproduce the right limit as $k\to 0$. I don't see where I could be making a mistake. Could it be a bug?
I found the source of the error. The problem lies in the qL and qR variables of bcfun in the coupled() function. The MATLAB documentation (see here and here) is slightly ambiguous on whether the q's should be matrices or column vectors. I had used matrices
qL = [0 0;0 0];
qR = [0 0;0 0];
but in reality I should have used column vectors
qL = [0;0];
qR = [0;0];
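For completeness, the corrected bcfun for the coupled case then reads (only the q's change):
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
  % absorbing boundaries: p + q.*f = 0 with q = 0 enforces u = 0 at both ends
  pL = uL;
  qL = [0;0];
  pR = uR;
  qR = [0;0];
end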
Amazingly, pdepe didn't throw an error and simply gave wrong results. This should perhaps be fixed by the developers.

Input equations into Matlab for Simulink Function

I am currently working on an assignment where I need to create two different controllers in Matlab/Simulink for a robotic exoskeleton leg, the idea being to compare them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a Matlab Function block to then run in Simulink and get results for an AFO (adaptive frequency oscillator). The link below shows the equations I'm trying to put in, and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
  t = 0;
  % syms j
  % M = 6;
  % j = sym('j', [1 M]);
  if t == 0
    omega = 3*pi/2;
    theta = 0;
    phi = pi/2;
    ampl = 0;
  else
    omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
    theta = theta*(t-1) + dt*(nu*offset);
    phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
    phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
    ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
    offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
  end
  pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi*(t-1))))*AFO_on; % symsum needs input arguments for the index M and range
  vel_AFO = diff(pos_AFO)*AFO_on;
  acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to implement the subscripts, the sigma (summation), or the (t+1) indexing. Any help is appreciated, as this is due next week.
You are looking to find the result of an adaptive process, so your algorithm needs to consider time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you that you need to reuse an old value to calculate a new value.
omega_old = 0;
theta_old = 0;
% initialize the rest of your variables
for t = 1:N
  omega(t) = omega_old + dt*omega_rhs; % omega_rhs stands for the rest of your omega calculation
  theta(t) = theta_old + dt*theta_rhs; % likewise for theta
  % more code .....
  % remember your old values for the next iteration
  omega_old = omega(t);
  theta_old = theta(t);
end
I think you forgot to apply the modulo operation to phi, judging by the original formula you linked. As a general rule, design your code in small pieces, make sure the output of each piece makes sense, and then combine all the pieces and verify that the overall result is correct.
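To make the pattern concrete, here is a minimal sketch of such a discrete AFO update loop as a plain Matlab function. The update terms are lifted from the question's code; the exact right-hand sides, the mod on phi, and the names afo_sketch, theta_meas and est are assumptions to be checked against the linked equations:
function pos_AFO = afo_sketch(theta_meas, eps, nu, dt)
  % theta_meas: vector of measured joint angles, one sample per time step
  N = numel(theta_meas);
  omega = 3*pi/2; phi = pi/2; ampl = 0; est = 0; % initial values from the t == 0 branch
  pos_AFO = zeros(1,N);
  for t = 2:N
    off   = theta_meas(t-1) - pos_AFO(t-1);                 % tracking error drives the adaptation
    omega = omega + dt*eps*off*cos(phi);                    % frequency adaptation
    phi   = mod(phi + dt*(omega + eps*off*cos(phi)), 2*pi); % phase, wrapped with mod as noted above
    ampl  = ampl + dt*nu*off*sin(phi);                      % amplitude adaptation
    est   = est + dt*nu*off;                                % offset adaptation
    pos_AFO(t) = est + ampl*sin(phi);                       % oscillator output
  end
end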

Quickly Evaluating MANY matlabFunctions

This post builds on my post about quickly evaluating an analytic Jacobian in Matlab:
fast evaluation of analytical jacobian in MATLAB
The key difference is that now I am working with the Hessian, and I have to evaluate close to 700 matlabFunctions (instead of 1 matlabFunction, as I did for the Jacobian) each time the Hessian is evaluated. So there is an opportunity to do things a little differently.
I have tried two approaches so far, am thinking about implementing a third, and was wondering if anyone has any other suggestions. I will go through each method with a toy example, but first some preprocessing to generate these matlabFunctions:
PreProcessing:
% This part of the code is calculated once, it is not the issue
dvs = 5;
X = sym('X',[dvs,1]);
num = dvs - 1; % number of constraints
% multiple functions
for k = 1:num
  f1(X(k+1),X(k)) = (X(k+1)^3 - X(k)^2*k^2);
  c(k) = f1;
end
gradc = jacobian(c,X).'; % .' performs transpose
parfor k = 1:num
  hessc{k} = jacobian(gradc(:,k),X);
end
parfor k = 1:num
  hess_name = strcat('hessian_',num2str(k));
  matlabFunction(hessc{k},'file',hess_name,'vars',X);
end
METHOD #1 : Evaluate functions in series
%% Now we use the functions to run an "optimization." Just for an example the "optimization" is just a for loop
fprintf('This is test A, where the functions are evaluated in series!\n');
tic
for q = 1:10
  x_dv = rand(dvs,1);   % these are the design variables
  lambda = rand(num,1); % these are the lagrange multipliers
  x_dv_cell = num2cell(x_dv); % for passing large design variables
  for k = 1:num
    hess_name = strcat('hessian_',num2str(k));
    function_handle = str2func(hess_name);
    H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
  end
  H = sum(H_temp,3);
end
fprintf('The time for test A was:\n')
toc
METHOD # 2: Evaluate functions in parallel
%% Try to run a parfor loop
fprintf('This is test B, where the functions are evaluated in parallel!\n');
tic
for q = 1:10
  x_dv = rand(dvs,1);   % these are the design variables
  lambda = rand(num,1); % these are the lagrange multipliers
  x_dv_cell = num2cell(x_dv); % for passing large design variables
  parfor k = 1:num
    hess_name = strcat('hessian_',num2str(k));
    function_handle = str2func(hess_name);
    H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
  end
  H = sum(H_temp,3);
end
fprintf('The time for test B was:\n')
toc
RESULTS:
METHOD #1 = 0.008691 seconds
METHOD #2 = 0.464786 seconds
DISCUSSION of RESULTS
This result makes sense: the functions evaluate very quickly, and running them in parallel wastes a lot of time setting up and sending the jobs out to the different Matlab workers (and then getting the data back from them). I see the same result on my actual problem.
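One small saving, independent of serial vs. parallel (a sketch; the handle array fh is an addition, not part of the original code): str2func is called num times on every Hessian evaluation even though the handles never change, so they can be built once up front and reused inside the loop:
% build the function handles once, before the optimization loop
fh = cell(num,1);
for k = 1:num
  fh{k} = str2func(strcat('hessian_',num2str(k)));
end
% then, inside the evaluation loop:
% H_temp(:,:,k) = lambda(k)*fh{k}(x_dv_cell{:});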
METHOD # 3: Evaluating the functions using the GPU
I have not tried this yet, but I am interested to see what the performance difference is. I am not yet familiar with doing this in Matlab and will add it once I am done.
Any other thoughts? Comments? Thanks!

Tensile tests in Matlab

The problem says:
Three tensile tests were carried out on an aluminum bar. In each test the strain was measured at the same values of stress. The results were

Stress (MPa)      34.5   69.0   103.5   138.0
Strain, test 1    0.46   0.95   1.48    1.93
Strain, test 2    0.34   1.02   1.51    2.09
Strain, test 3    0.73   1.10   1.62    2.12

where the units of strain are mm/m. Use linear regression to estimate the modulus of elasticity of the bar (modulus of elasticity = stress/strain).
I used this program for this problem:
function coeff = polynFit(xData,yData,m)
% Returns the coefficients of the polynomial
% a(1)*x^(m-1) + a(2)*x^(m-2) + ... + a(m)
% that fits the data points in the least squares sense.
% USAGE: coeff = polynFit(xData,yData,m)
% xData = x-coordinates of data points.
% yData = y-coordinates of data points.
A = zeros(m); b = zeros(m,1); s = zeros(2*m-1,1);
for i = 1:length(xData)
  temp = yData(i);
  for j = 1:m
    b(j) = b(j) + temp;
    temp = temp*xData(i);
  end
  temp = 1;
  for j = 1:2*m-1
    s(j) = s(j) + temp;
    temp = temp*xData(i);
  end
end
for i = 1:m
  for j = 1:m
    A(i,j) = s(i+j-1);
  end
end
% Rearrange coefficients so that the coefficient
% of x^(m-1) is first
coeff = flipdim(gaussPiv(A,b),1);
The problem is solved without a program as follows (worked solution omitted).
MY ATTEMPT
T=[34.5,69,103.5,138];
D1=[.46,.95,1.48,1.93];
D2=[.34,1.02,1.51,2.09];
D3=[.73,1.1,1.62,2.12];
Mod1=T./D1;
Mod2=T./D2;
Mod3=T./D3;
xData=T;
yData1=Mod1;
yData2=Mod2;
yData3=Mod3;
coeff1 = polynFit(xData,yData1,2);
coeff2 = polynFit(xData,yData2,2);
coeff3 = polynFit(xData,yData3,2);
x1=(0:.5:190);
y1=coeff1(2)+coeff1(1)*x1;
subplot(1,3,1);
plot(x1,y1,xData,yData1,'o');
y2=coeff2(2)+coeff2(1)*x1;
subplot(1,3,2);
plot(x1,y2,xData,yData2,'o');
y3=coeff3(2)+coeff3(1)*x1;
subplot(1,3,3);
plot(x1,y3,xData,yData3,'o');
What do I have to do to get this result?
As general advice:
avoid for loops wherever possible;
avoid using i and j as variable names, as they are Matlab's built-in names for the imaginary unit (I really hope that disappears in a future release...).
Because Matlab is an interpreted language, for-loops can be very slow compared to their compiled alternatives. Matlab is named MATrix LABoratory, meaning it is highly optimized for matrix/array operations. Usually, when there is an operation that cannot be done without a loop, Matlab has a built-in function for it that runs way, way faster than a for-loop in Matlab ever will. For example: computing the mean of the elements in an array: mean(x). The sum of all elements in an array: sum(x). The standard deviation of the elements in an array: std(x). Etc. Matlab's power comes from these built-in functions.
So, to your problem: you have a linear regression problem, and the easiest way to solve it in Matlab is this:
%# your data
stress = [ %# in Pa
34.5 69 103.5 138] * 1e6;
strain = [ %# in m/m
0.46 0.95 1.48 1.93
0.34 1.02 1.51 2.09
0.73 1.10 1.62 2.12]' * 1e-3;
%# make linear array for the data
yy = strain(:);
xx = repmat(stress(:), size(strain,2),1);
%# re-formulate the problem into linear system Ax = b
A = [xx ones(size(xx))];
b = yy;
%# solve the linear system
x = A\b;
%# modulus of elasticity is coefficient
%# NOTE: y-offset is relatively small and can be ignored)
E = 1/x(1)
What you did in the function polynFit is done by A\b, but the \-operator does it way faster, way more robustly, and way more flexibly than what you tried to do yourself. I'm not saying you shouldn't try to build these things yourself (please keep doing that, you learn a lot from it!), I'm saying that for the "real" results you should always use the \-operator (and check your own results against it as well).
The backslash operator (type help \ at the command prompt) is extremely useful in many situations, and I advise you to learn it and learn it well.
I leave you with this: here's how I would write your polynFit function:
function coeff = polynFit(X,Y,m)
  if numel(X) ~= numel(Y)
    error('polynFit:size_mismatch',...
          'number of elements in matrices X and Y must be equal.');
  end
  %# bad condition number, rank errors, etc. taken care of by \
  coeff = bsxfun(@power, X(:), m:-1:0) \ Y(:);
end
I leave it up to you to figure out how this works.
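As a usage sketch (assuming the stress/strain arrays defined above; with m = 1 the columns are [X, 1], so the slope comes first):
%# fit a straight line to the pooled data and recover E
coeff = polynFit(xx, yy, 1); %# coeff(1) = slope d(strain)/d(stress)
E = 1/coeff(1)               %# modulus of elasticity in Pa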