Strange wrong result for (un)coupled PDEs using MATLAB's pdepe, time is doubled

I am trying to solve two coupled reaction-diffusion equations in 1D using pdepe, namely
$\partial_t u_1 = \nabla^2 u_1 + 2k(-u_1^2+u_2)$
$\partial_t u_2 = \nabla^2 u_2 + k(u_1^2-u_2)$
The solution is in the domain $x\in[0,1]$, with initial conditions being two identical Gaussian profiles centered at $x=1/2$. The boundary conditions are absorbing for both components, i.e. $u_1(0)=u_2(0)=u_1(1)=u_2(1)=0$.
pdepe gives me a solution without raising any errors. However, I think the solutions must be wrong, because when I set the coupling to zero, i.e. $k=0$ (and also when I set it very small, say $k=0.001$), the solutions do not coincide with the solution of the simple diffusion equation
$\partial_t u = \nabla^2 u$
as obtained from pdepe itself.
Strangely enough, the solutions $u_1(t)=u_2(t)$ from the "coupled" case with coupling set to zero, and the solution for the case uncoupled by construction $u(t')$ coincide if we set $t'=2t$, that is, the solution of the "coupled" case evolves twice as fast as the solution of the uncoupled case.
Here's a minimal working example:
Coupled case
function [xmesh,tspan,sol] = coupled(k) %argument is the coupling k
std=0.001; %width of initial gaussian
center=1/2; %center of gaussian
xmesh=linspace(0,1,10000);
tspan=linspace(0,1,1000);
sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);
function [c,f,s] = pdefun(x,t,u,dudx)
c=ones(2,1);
f=zeros(2,1);
f(1) = dudx(1);
f(2) = dudx(2);
s=zeros(2,1);
s(1) = 2*k*(u(2)-u(1)^2);
s(2) = k*(u(1)^2-u(2));
end
function u0 = icfun(x)
u0=ones(2,1);
u0(1) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
u0(2) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
end
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
pL=zeros(2,1);
pL(1) = uL(1);
pL(2) = uL(2);
pR=zeros(2,1);
pR(1) = uR(1);
pR(2) = uR(2);
qL = [0 0;0 0];
qR = [0 0;0 0];
end
end
Uncoupled case
function [xmesh,tspan,sol] = uncoupled()
std=0.001; %width of initial gaussian
center=1/2; %center of gaussian
xmesh=linspace(0,1,10000);
tspan=linspace(0,1,1000);
sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);
function [c,f,s] = pdefun(x,t,u,dudx)
c=1;
f = dudx;
s=0;
end
function u0 = icfun(x)
u0=exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
end
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
pL=uL;
pR=uR;
qL = 0;
qR = 0;
end
end
Now, suppose we run
[xmesh,tspan,soluncoupled] = uncoupled();
[xmesh,tspan,solcoupled] = coupled(0); %coupling k=0, i.e. uncoupled solutions
One can check directly, by plotting the solutions at any time index $it$, that even though they should be identical, the solutions given by the two functions are not, e.g.
hold all
plot(xmesh,soluncoupled(it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
On the other hand, if we double the time of the uncoupled solution, the solutions are identical
hold all
plot(xmesh,soluncoupled(2*it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
The case $k=0$ is not singular: one can set $k$ small but finite, and the deviations from the $k=0$ case are minimal, i.e. the solution still evolves twice as fast as the uncoupled one.
I really don't understand what is going on. I need to work on the coupled case, but obviously I don't trust the results if the solver does not reproduce the right limit as $k\to 0$. I don't see where I could be making a mistake. Could it be a bug?

I found the source of the error. The problem lies in the qL and qR variables of bcfun for the coupled() function. The MATLAB documentation, see here and here, is slightly ambiguous on whether the q's should be matrices or column vectors. I had used matrices
qL = [0 0;0 0];
qR = [0 0;0 0];
but in reality I should have used column vectors
qL = [0;0];
qR = [0;0];
Amazingly, pdepe didn't throw an error, and simply gave wrong results. This should perhaps be fixed by the developers.
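For completeness, a corrected bcfun for the coupled case would then read (identical to the one above, except that the q's are 2-by-1 column vectors, one entry per PDE component):
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
pL = [uL(1); uL(2)]; % absorbing boundary at x=0 for both components
pR = [uR(1); uR(2)]; % absorbing boundary at x=1 for both components
qL = [0; 0]; % column vector, NOT a 2x2 matrix
qR = [0; 0];
end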

Related

Input equations into Matlab for Simulink Function

I am currently working on an assignment where I need to create two different controllers in Matlab/Simulink for a robotic exoskeleton leg, the idea being to compare them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a MATLAB Function block to then run in Simulink and get results for an AFO (adaptive frequency oscillator). The link has the equations I'm trying to put in, and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
t = 0;
% syms j
% M = 6;
% j = sym('j', [1 M]);
if t == 0
omega = 3*pi/2;
theta = 0;
phi = pi/2;
ampl = 0;
else
omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
theta = theta*(t-1) + dt*(nu*offset);
phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
end
pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi* (t-1))))*AFO_on; %symsum needs input argument for index M and range
vel_AFO = diff(pos_AFO)*AFO_on;
acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to do the subscripts, sigma, or the (t+1) function. Any help is appreciated, as this is due next week.
You are looking to find the result of an adaptive process, so your algorithm needs to step through time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you to reuse an old value when calculating a new one. In MATLAB syntax, the skeleton looks like this:
omega_old = 0;
theta_old = 0;
% initialize the rest of your variables
for t = 1:N
omega(t) = omega_old + 0; % the rest of your omega update goes here
theta(t) = theta_old + 0; % the rest of your theta update goes here
% more code ...
% remember your old values for the next iteration
omega_old = omega(t);
theta_old = theta(t);
end
I think you also forgot to apply the modulo operation to phi, judging by the original formula you linked. As a general rule, design your code in small pieces, make sure the output of each piece makes sense, and only then combine the pieces and check that the overall result is correct.
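For the phase wrap, a one-liner inside the loop would do (a sketch, assuming phi is meant to stay in [0, 2*pi)):
phi(t) = mod(phi(t), 2*pi); % wrap the phase back into [0, 2*pi)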

Runge-Kutta for coupled ODEs

I'm building a function in Octave that can solve N coupled ordinary differential equations of the type:
dx/dt = F(x,y,…,z,t)
dy/dt = G(x,y,…,z,t)
dz/dt = H(x,y,…,z,t)
with any of these three methods (Euler, Heun and fourth-order Runge-Kutta).
The following code corresponds to the function:
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
range = b-a;
h=range/steps;
rows = (range/h)+1;
columns = size(dfuns)(2)+1;
sol= zeros(abs(rows),columns);
heun=zeros(1,columns-1);
for i=1:abs(rows)
if i==1
sol(i,1)=a;
else
sol(i,1)=sol(i-1,1)+h;
end
for j=2:columns
if i==1
sol(i,j)=ini(j-1);
else
if strcmp("euler",method)
sol(i,j)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
elseif strcmp("heun",method)
heun(j-1)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
elseif strcmp("rk4",method)
k1=h*dfuns{j-1}(E, [sol(i-1,1), sol(i-1,2:end)]);
k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
k3=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k2)]);
k4=h*dfuns{j-1}(E, [sol(i-1,1)+h, sol(i-1,2:end)+(h*k3)]);
sol(i,j)=sol(i-1,j)+((1/6)*(k1+(2*k2)+(2*k3)+k4));
end
end
end
if strcmp("heun",method)
if i~=1
for k=2:columns
sol(i,k)=sol(i-1,k)+(h/2)*((dfuns{k-1}(E, sol(i-1,1:end)))+(dfuns{k-1}(E, [sol(i,1),heun])));
end
end
end
end
end
When I use the function for a single ordinary differential equation, the RK4 method is the best, as expected, but when I run the code for a coupled system of differential equations, RK4 is the worst. I've been checking and checking and I don't know what I am doing wrong.
The following code is an example of how to call the function
F{1} = @(e, y) 0.6*y(3);
F{2} = @(e, y) -0.6*y(3)+0.001407*y(4)*y(3);
F{3} = @(e, y) -0.001407*y(4)*y(3);
steps = 24;
sol1 = coupled_ode(0,F,steps,0,24,[0 5 995],"euler");
sol2 = coupled_ode(0,F,steps,0,24,[0 5 995],"heun");
sol3 = coupled_ode(0,F,steps,0,24,[0 5 995],"rk4");
plot(sol1(:,1),sol1(:,4),sol2(:,1),sol2(:,4),sol3(:,1),sol3(:,4));
legend("Euler", "Heun", "RK4");
Careful: there are a few too many h's in the RK4 formulae:
k2 = h*dfuns{ [...] +(0.5*h*k1)]);
k3 = h*dfuns{ [...] +(0.5*h*k2)]);
should be
k2 = h*dfuns{ [...] +(0.5*k1)]);
k3 = h*dfuns{ [...] +(0.5*k2)]);
(the last h's removed, since k1 and k2 already contain a factor of h).
However, this makes no difference for the example that you provided, since h=1 there.
But other than that little bug, I don't think you're actually doing anything wrong.
If I plot the solution generated by the more advanced, adaptive 4ᵗʰ/5ᵗʰ order RK implemented in ode45:
F{1} = @(e,y) +0.6*y(3);
F{2} = @(e,y) -0.6*y(3) + 0.001407*y(4)*y(3);
F{3} = @(e,y) -0.001407*y(4)*y(3);
tend = 24;
steps = 24;
y0 = [0 5 995];
plotN = 2;
sol1 = coupled_ode(0,F, steps, 0,tend, y0, 'euler');
sol2 = coupled_ode(0,F, steps, 0,tend, y0, 'heun');
sol3 = coupled_ode(0,F, steps, 0,tend, y0, 'rk4');
figure(1), clf, hold on
plot(sol1(:,1), sol1(:,plotN+1),...
sol2(:,1), sol2(:,plotN+1),...
sol3(:,1), sol3(:,plotN+1));
% New solution, generated by ODE45
opts = odeset('AbsTol', 1e-12, 'RelTol', 1e-12);
fcn = @(t,y) [F{1}(0,[0; y])
F{2}(0,[0; y])
F{3}(0,[0; y])];
[t,solN] = ode45(fcn, [0 tend], y0, opts);
plot(t, solN(:,plotN))
legend('Euler', 'Heun', 'RK4', 'ODE45');
xlabel('t');
Then we have something more believable to compare to.
Now, plain-and-simple RK4 indeed performs terribly for this isolated case:
However, if I simply flip the signs in the last two functions:
% note the flipped signs
F{2} = @(e,y) +0.6*y(3) - 0.001407*y(4)*y(3);
F{3} = @(e,y) +0.001407*y(4)*y(3);
Then we get this:
The main reason RK4 performs badly in your case is the step size. The adaptive RK4/5 (with the tolerance set to 1 instead of 1e-12 as above) produces an average δt ≈ 0.15. In other words, basic error analysis indicates that for this particular problem, h ≈ 0.15 is the largest step you can take without introducing unacceptable error.
But you were taking h = 1, which then indeed gives a large accumulated error.
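If you want to reproduce that estimate, one way (a sketch, reusing F, tend and y0 from the snippet above) is to run ode45 at the loose tolerance and inspect the steps it accepts:
% let ode45's error control pick the step size at tolerance 1
opts = odeset('AbsTol', 1, 'RelTol', 1);
fcn = @(t,y) [F{1}(0,[0; y]); F{2}(0,[0; y]); F{3}(0,[0; y])];
[t, ~] = ode45(fcn, [0 tend], y0, opts);
mean(diff(t)) % average accepted step size, about 0.15 here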
The fact that Heun and Euler perform so well for your case is, well, just plain luck, as demonstrated by the sign inversion example above.
Welcome to the world of numerical mathematics - there never is 1 method that's best for all problems under all circumstances :)
Apart from the error described in the older answer, there is indeed a fundamental methodological error in the implementation. The implementation is correct for scalar first-order differential equations, but the moment you use it on a coupled system, the decoupled treatment of the stages in the Runge-Kutta method (note that Heun is there just a copy of the Euler step) reduces it to a first-order method.
Specifically, starting in
k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
the addition of 0.5*k1 to sol(i-1,2:end) should add the vector of first-stage slopes, one slope per component; inside the loop over j, however, k1 is a scalar, so the same slope value gets added to all components of the position vector.
Taking this into account results in the change to the implementation
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
range = b-a;
h=range/steps;
rows = steps+1;
columns = size(dfuns)(2)+1;
sol= zeros(rows,columns);
k = h*ones(4,columns); % column 1 stays h, so the stage times t+0.5*h and t+h come out right
sol(1,1)=a;
sol(1,2:end)=ini(1:end);
for i=2:rows
sol(i,1)=sol(i-1,1)+h;
if strcmp("euler",method)
for j=2:columns
sol(i,j)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
end
elseif strcmp("heun",method)
for j=2:columns
k(1,j) = h*dfuns{j-1}(E, sol(i-1,1:end));
end
for j=2:columns
k(2,j) = h*dfuns{j-1}(E, sol(i-1,1:end)+k(1,1:end)); % slope at the full Euler predictor
end
sol(i,2:end)=sol(i-1,2:end)+0.5*(k(1,2:end)+k(2,2:end)); % trapezoidal (Heun) average
elseif strcmp("rk4",method)
for j=2:columns
k(1,j)=h*dfuns{j-1}(E, sol(i-1,:));
end
for j=2:columns
k(2,j)=h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(1,:));
end
for j=2:columns
k(3,j)=h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(2,:));
end
for j=2:columns
k(4,j)=h*dfuns{j-1}(E, sol(i-1,:)+k(3,:));
end
sol(i,2:end)=sol(i-1,2:end)+(1/6)*(k(1,2:end)+(2*k(2,2:end))+(2*k(3,2:end))+k(4,2:end));
end
end
end
As can be seen, the loop over the vector components recurs frequently. One can hide it entirely by fully vectorizing the code, using a single vector-valued function for the right-hand side of the coupled ODE system.
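A minimal sketch of what such a vectorized version could look like (the function rk4_vec and its interface are illustrative, not part of the code above; it assumes f(t, y) returns a column vector of slopes):
function sol = rk4_vec(f, a, b, y0, steps)
h = (b - a)/steps;
sol = zeros(steps+1, numel(y0)+1); % first column holds the time
sol(1,:) = [a, y0(:).'];
for i = 2:steps+1
t = sol(i-1,1);
y = sol(i-1,2:end).';
k1 = f(t, y); % all components advance together,
k2 = f(t + h/2, y + h/2*k1); % so each stage sees the full slope vector
k3 = f(t + h/2, y + h/2*k2);
k4 = f(t + h, y + h*k3);
sol(i,:) = [t + h, (y + h/6*(k1 + 2*k2 + 2*k3 + k4)).'];
end
end
For the example above this would be called with the time argument removed from the state, e.g. rk4_vec(@(t,y) [0.6*y(2); -0.6*y(2)+0.001407*y(3)*y(2); -0.001407*y(3)*y(2)], 0, 24, [0 5 995], 120).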
With these changes, the plot of the second solution component becomes much more reasonable for step size 1,
and with a subdivision into 120 intervals, i.e. step size 0.2,
the graph for RK4 barely changes, while the other two move towards it from below and above.

Vectorizing the solution of a linear equation system in MATLAB

Summary: This question deals with improving an algorithm for the computation of linear regression.
I have a 3D array (dlMAT) representing monochrome photographs of the same scene taken at different exposure times (the vector IT). Mathematically, every vector along the 3rd dimension of dlMAT represents a separate linear regression problem that needs to be solved. The equation whose coefficients need to be estimated is of the form:
DL = R*IT^P, where DL and IT are obtained experimentally and R and P must be estimated.
The above equation can be transformed into a simple linear model after applying a logarithm:
log(DL) = log(R) + P*log(IT) => y = a + b*x
Presented below is the most "naive" way to solve this system of equations, which essentially involves iterating over all "3rd-dimension vectors" and fitting a first-order polynomial to (IT, DL(ind1,ind2,:)):
%// Define some nominal values:
R = 0.3;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(3)+P;
rMAT = 0.1*randn(3)+R;
%// Generate "fake" observation data:
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%// Regression:
sol = cell(size(rMAT)); %// preallocation
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedP = cellfun(@(x)x(1),sol); %// Estimate of pMAT
fittedR = cellfun(@(x)exp(x(2)),sol); %// Estimate of rMAT
The above approach seems like a good candidate for vectorization, since it does not utilize MATLAB's main strength, MATrix operations. For this reason it does not scale well and takes much longer to execute than I think it should.
There exist alternative ways to perform this computation based on matrix division, as demonstrated here and here, which involve something like this:
sol = [ones(size(x)),log(x)]\log(y);
That is, appending a vector of 1s to the observations, followed by mldivide to solve the equation system.
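For a single vector this is equivalent, up to the order of the coefficients, to a degree-1 polyfit; a tiny sanity check with made-up numbers:
x = (1:5).';
y = 0.3*x.^0.9; % synthetic data following DL = R*IT^P
ab = [ones(size(x)), log(x)]\log(y); % ab = [log(R); P]
p = polyfit(log(x), log(y), 1); % p = [P, log(R)], same numbers flipped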
The main challenge I'm facing is how to adapt my data to the algorithm (or vice versa).
Question #1: How can the matrix-division-based solution be extended to solve the problem presented above (and potentially replace the loops I am using)?
Question #2 (bonus): What is the principle behind this matrix-division-based solution?
The secret ingredient behind the solution that includes matrix division is the Vandermonde matrix. The question discusses a linear problem (linear regression), and such problems can always be formulated as a matrix problem, which \ (mldivide) can solve in a mean-square-error sense. Such an algorithm, solving a similar problem, is demonstrated and explained in this answer.
Below is benchmarking code that compares the original solution with two alternatives suggested in chat:
function regressionBenchmark(numEl)
clc
if nargin<1, numEl=10; end
%// Define some nominal values:
R = 5;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(numEl)+P;
rMAT = 0.1*randn(numEl)+R;
%// Generate "fake" measurement data using the relation "DL = R*IT.^P"
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%% // Method1: loops + polyval
disp('-------------------------------Method 1: loops + polyval')
tic; [fR,fP] = method1(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method2: loops + Vandermonde
disp('-------------------------------Method 2: loops + Vandermonde')
tic; [fR,fP] = method2(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method3: vectorized Vandermonde
disp('-------------------------------Method 3: vectorized Vandermonde')
tic; [fR,fP] = method3(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
function [fittedR,fittedP] = method1(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method2(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = flipud([ones(numel(IT),1) log(IT(:))]\log(squeeze(dlMAT(ind1,ind2,:)))).'; %'
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method3(IT,dlMAT)
N = 1; %// Degree of polynomial
VM = bsxfun(@power, log(IT(:)), 0:N); %// Vandermonde matrix
result = fliplr((VM\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
%// Compressed version:
%// result = fliplr(([ones(numel(IT),1) log(IT(:))]\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
fittedR = exp(real(reshape(result(:,2),size(dlMAT,1),size(dlMAT,2))));
fittedP = real(reshape(result(:,1),size(dlMAT,1),size(dlMAT,2)));
The reason why method 2 can be vectorized into method 3 is essentially that matrix multiplication can be separated by the columns of the second matrix: if A*B produces matrix X, then by definition A*B(:,n) gives X(:,n) for any n. Moving A to the other side with mldivide, this means that the per-column divisions A\X(:,n) can be done in one go for all n with A\X. The same holds for an overdetermined system (the linear regression problem), in which there is in general no exact solution, and mldivide finds the solution that minimizes the mean-square error. In this case too, the operations A\X(:,n) (method 2) can be done in one go for all n with A\X (method 3).
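A tiny illustration of that column-separability argument (made-up numbers):
A = [ones(5,1), log((600:600:3000).')]; % 5x2 design matrix
X = rand(5, 4); % 4 right-hand sides
B1 = zeros(2, 4);
for n = 1:4, B1(:,n) = A\X(:,n); end % column by column (method 2 style)
B2 = A\X; % all columns at once (method 3 style)
norm(B1 - B2) % zero up to round-off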
The implications of improving the algorithm when increasing the size of dlMAT can be seen below:
For the case of 500*500 (or 2.5E5) elements, the speedup from Method 1 to Method 3 is about x3500!
It is also interesting to observe the output of profile (here, for the case of 500*500):
(profiler output screenshots for Method 1, Method 2 and Method 3 omitted)
From the above it can be seen that rearranging the elements via squeeze and flipud takes up about half (!) of the runtime of Method 2. It is also seen that some time is lost on converting the solution from cells to matrices.
Since the 3rd solution avoids all of these pitfalls, as well as the loops altogether (which in interpreted MATLAB mostly means re-evaluating the loop body on every iteration), it unsurprisingly results in a considerable speedup.
Notes:
There was very little difference between the "compressed" and the "explicit" versions of Method 3, slightly in favor of the "explicit" version. For this reason the "compressed" version was not included in the comparison.
A solution was attempted where the inputs to Method 3 were gpuArray-ed. This did not improve performance (and even somewhat degraded it), possibly due to a wrong implementation, or to the overhead associated with copying matrices back and forth between RAM and VRAM.

separate 'entangled' vectors in Matlab

I have a set of three vectors (stored in a 3xN matrix) which are 'entangled' (e.g. some value in the second row should be in the third row and vice versa). This 'entanglement' is based on looking at the figure in which alpha2 is plotted. To separate the vectors I use a difference-based approach where I calculate the difference of one value with respect to the three next values (e.g. comparing (1,i) with (:,i+1)). Then I take the minimum and store it. The method manages to separate two of the three vectors, but not the last one.
I was wondering if you guys can share your ideas on how to solve this problem (if possible). I have added my code below.
Thanks in advance!
Problem in figures:
clear all; close all; clc;
%%
alpha2 = [-23.32 -23.05 -22.24 -20.91 -19.06 -16.70 -13.83 -10.49 -6.70;
-0.46 -0.33 0.19 2.38 5.44 9.36 14.15 19.80 26.32;
-1.58 -1.13 0.06 0.70 1.61 2.78 4.23 5.99 8.09];
%%% Original
figure()
hold on
plot(alpha2(1,:))
plot(alpha2(2,:))
plot(alpha2(3,:))
%%% Store start values
store1(1,1) = alpha2(1,1);
store2(1,1) = alpha2(2,1);
store3(1,1) = alpha2(3,1);
for i=1:size(alpha2,2)-1
for j=1:size(alpha2,1)
Alpha1(j,i) = abs(store1(1,i)-alpha2(j,i+1));
Alpha2(j,i) = abs(store2(1,i)-alpha2(j,i+1));
Alpha3(j,i) = abs(store3(1,i)-alpha2(j,i+1));
[~, I] = min(Alpha1(:,i));
store1(1,i+1) = alpha2(I,i+1);
[~, I] = min(Alpha2(:,i));
store2(1,i+1) = alpha2(I,i+1);
[~, I] = min(Alpha3(:,i));
store3(1,i+1) = alpha2(I,i+1);
end
end
%%% Plot to see if separation worked
figure()
hold on
plot(store1)
plot(store2)
plot(store3)
Solution using extrapolation via polyfit:
The idea is pretty simple: iterate over all positions i and use polyfit to fit, for each row, a polynomial through the preceding values F(:,i-(d+1)) up to F(:,i). Use those polynomials to extrapolate the function values F(:,i+1). Then compute the permutation of the real values F(:,i+1) that fits those extrapolations best. This should work quite well if there are only a few functions involved. There is certainly some room for improvement, but for your simple setting it should suffice.
function F = untangle(F, maxExtrapolationDegree)
%// UNTANGLE(F) untangles the functions F(i,:) via extrapolation.
if nargin<2
maxExtrapolationDegree = 4;
end
extrapolate = @(f) polyval(polyfit(1:length(f),f,length(f)-1),length(f)+1);
extrapolateAll = @(F) cellfun(extrapolate, num2cell(F,2));
fitCriterion = @(X,Y) norm(X(:)-Y(:),1);
nFuncs = size(F,1);
nPoints = size(F,2);
swaps = perms(1:nFuncs);
errorOfFit = zeros(1,size(swaps,1));
for i = 1:nPoints-1
nextValues = extrapolateAll(F(:,max(1,i-(maxExtrapolationDegree+1)):i));
for j = 1:size(swaps,1)
errorOfFit(j) = fitCriterion(nextValues, F(swaps(j,:),i+1));
end
[~,j_bestSwap] = min(errorOfFit);
F(:,i+1) = F(swaps(j_bestSwap,:),i+1);
end
Initial solution: (not that pretty - Skip this part)
This is a similar solution that tries to minimize the sum of the derivatives up to some degree of the vector valued function F = #(j) alpha2(:,j). It does so by stepping through the positions i and checks all possible permutations of the coordinates of i to get a minimal seminorm of the function F(1:i).
(I'm actually wondering right now if there is any canonical mathematical way to define the seminorm so we get our expected results... I initially was going for the H^1 and H^2 seminorms, but they didn't quite work...)
function F = untangle(F)
nFuncs = size(F,1);
nPoints = size(F,2);
seminorm = @(x,i) sum(sum(abs(diff(x(:,1:i),1,2)))) + ...
sum(sum(abs(diff(x(:,1:i),2,2)))) + ...
sum(sum(abs(diff(x(:,1:i),3,2)))) + ...
sum(sum(abs(diff(x(:,1:i),4,2))));
doSwap = @(x,swap,i) [x(:,1:i-1), x(swap,i:end)];
swaps = perms(1:nFuncs);
normOfSwap = zeros(1,size(swaps,1));
for i = 2:nPoints
for j = 1:size(swaps,1)
normOfSwap(j) = seminorm(doSwap(F,swaps(j,:),i),i);
end
[~,j_bestSwap] = min(normOfSwap);
F = doSwap(F,swaps(j_bestSwap,:),i);
end
Usage:
The command alpha2 = untangle(alpha2); will untangle your functions:
It should even work for more complicated data, like these shuffled sine-waves:
nPoints = 100;
nFuncs = 5;
t = linspace(0, 2*pi, nPoints);
F = bsxfun(@(a,b) sin(a*b), (1:nFuncs).', t);
for i = 1:nPoints
F(:,i) = F(randperm(nFuncs),i);
end
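Running the same command on this shuffled data, i.e. F = untangle(F);, should again recover the smooth sine-waves.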
Remark: I guess that if you already know your functions will be quadratic or of some other special form, RANSAC would be a better idea for a larger number of functions. It could also be useful if the functions are not given with the same x-value spacing.

Solving for the Fisher-Kolmogorov Partial Differential Equation

I have been trying to solve the non-dimensional Fisher-Kolmogorov equation in MATLAB. I am getting a graph which doesn't look at all like it should. Also, I'm getting a solution that is independent of the value of s (the source term in the pdepe solver); no matter what value of s I put in, the graph remains the same.
function FK
m = 0;
x = linspace(0,1,100);
t = linspace(0,1,100);
u = pdepe(m,@FKpde,@FKic,@FKbc,x,t);
[X,T] = meshgrid(x,t);
%ANALYTICAL SOLUTION
% a=(sqrt(2))-1;
% q=2;
% s=2/q;
% b= q /((2*(q+2))^0.5);
% c= (q+4)/((2*(q+2))^0.5);
% zeta= X-c*T;
%y = 1/((1+(a*(exp(b*zeta))))^s);
%r=(y(:,:)-u(:,:))./y(:,:); % relative error in numerical and analytical value
figure;
plot(x,u(10,:),'o',x,u(40,:),'o',x,u(60,:),'o',x,u(end,:),'o')
title('Numerical Solutions at different times');
legend('tn=1','tn=5','tn=30','tn=10','ta=20','ta=600','ta=800','ta=1000',0);
xlabel('Distance x');
ylabel('u(x,t)');
% --------------------------------------------------------------------------
function [c,f,s] = FKpde(x,t,u,DuDx)
c = 1;
f = DuDx;
s =u*(1-u);
% --------------------------------------------------------------------------
function u0 = FKic(x)
u0 = 10^(-4);
% --------------------------------------------------------------------------
function [pl,ql,pr,qr] = FKbc(xl,ul,xr,ur,t)
pl = ul-1;
ql = 0;
pr = ur;
qr = 0;
This should maybe be a comment, but I'm putting it as an answer for better formatting. Your analytic solution, which I assume you're using to compare with the numerical answer when you say the graph does not look as it should, does not appear to respect the initial or boundary conditions you are feeding pdepe, so I'd start there when trying to figure out why u does not look like y.
The initial and boundary conditions you are setting are:
u(0, t) = 1
u(1, t) = 0
u(x, 0) = 1e-4
Setting aside that the boundary and initial conditions are in conflict with each other, the analytic solution suggested in the code (with zeta = x - c*t) has
u(0, t) = 1/(1 + a*exp(-b*c*t))^s
u(1, t) = 1/(1 + a*exp(b*(1 - c*t)))^s
u(x, 0) = 1/(1 + a*exp(b*x))^s
So it seems to me that the numerical and analytic solutions should be expected to differ, and the differences you observe are probably due to the IC/BC setup. I suspect that pdepe is solving the equation you give it just fine.
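A quick numerical check, using the constants from the commented-out block in the question's code, confirms this (a sketch; values rounded):
a = sqrt(2)-1; q = 2; s = 2/q;
b = q/sqrt(2*(q+2)); c = (q+4)/sqrt(2*(q+2));
y = @(x,t) 1./(1 + a*exp(b*(x - c*t))).^s;
y(0, 0) % about 0.71, not the imposed BC u(0,t) = 1
y(0.5, 0) % about 0.63, far from the IC u(x,0) = 1e-4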
On increasing the length scale and time scale, I get the answers I want. The problem was to solve for different times and see the wave propagate. For small lengths, I could only see part of that wave.