MATLAB, integration of conservation laws

I was wondering if there are established, robust libraries or FEX-like packages to deal with scalar conservation laws (say 1D) in MATLAB.
I am currently dealing with 1D non-linear, non-local conservation laws, and the diffusive error of first-order schemes is killing me; moreover, a lot of the physics is missed. Thus, I am wondering whether there is some robust tool already out there, so as to avoid cooking up some code myself (ideally, something like boost::odeint for scheme-agnostic, high-order ODE integration in C++).
Any help appreciated.
EDIT: Apologies for the lack of clarity. Here, by conservation laws I mean general hyperbolic partial differential equations of the form
u_t(t,x) + F_x(t,x) = 0
where u = u(t,x) is an intensive conserved variable (say scalar, 1D; e.g. mass density, energy density, ...) and F = F(t,x) is its flux. Therefore, I am not interested in the kind of conservation properties Hamiltonian systems feature (energy, currents, ...) (thanks to @headmyshoulder for his comment).
I cited boost::odeint as a conceptual reference for a robust, generic library addressing a mathematical issue (integration of ODEs). So I am looking for some package implementing Godunov-type methods and the like.

I am currently working on new methods for shock-turbulence simulations and doing lots of code testing/validation in MATLAB. Unfortunately, I haven't found a general library that does what you're hoping, but a basic Godunov or MUSCL code is relatively straightforward to implement. This paper has a good overview of some useful methods:
[1] Kurganov, Alexander and Eitan Tadmor (2000), New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection-Diffusion Equations, J. Comput. Phys., 160, 214-282.
Here are a few examples from that paper for a 1D equally spaced grid on a periodic domain, applied to the inviscid Burgers equation. The methods generalize easily to systems of equations, dissipative (viscous) systems, and higher dimensions, as outlined in [1]. These methods rely on the following functions:
Flux term:
function f = flux(u)
%flux term for Burgers equation: F(u) = u^2/2;
f = u.^2/2;
Minmod function:
function m = minmod(a,b)
%minmod function:
m = (sign(a)+sign(b))/2.*min(abs(a),abs(b));
Methods
Nessyahu-Tadmor scheme:
A 2nd order scheme
function unew = step_u(dx,dt,u)
%%% Nessyahu-Tadmor scheme
% minmod-limited slopes of u and of the flux
ux = minmod((u-circshift(u,[0 1]))/dx,(circshift(u,[0 -1])-u)/dx);
f = flux(u);
fx = minmod((f-circshift(f,[0 1]))/dx,(circshift(f,[0 -1])-f)/dx);
% predictor: midpoint-in-time value
umid = u-dt/2*fx;
fmid = flux(umid);
% corrector on the staggered grid
unew = (u + circshift(u,[0 -1]))/2 + (dx)/8*(ux-circshift(ux,[0 -1])) ...
    -dt/dx*( circshift(fmid,[0 -1])-fmid );
This method computes a new u value at the x_{j+1/2} grid points, so it also requires a grid shift at each step. The main function should be something like:
clear all
% Set up grid
nx = 256;
xmin=0; xmax=2*pi;
x=linspace(xmin,xmax,nx);
dx = x(2)-x(1);
%initialize
u = exp(-4*(x-pi*1/2).^2)-exp(-4*(x-pi*3/2).^2);
%CFL number:
CFL = 0.25;
t = 0;
dt = CFL*dx/max(abs(u(:)));
while (t<2)
    u = step_u(dx,dt,u);
    x = x+dx/2;
    % handle grid shifts
    if x(end)>=xmax+dx
        x(end)=0;
        x=circshift(x,[0 1]);
        u=circshift(u,[0 1]);
    end
    t = t+dt;
    %plot
    figure(1)
    clf
    plot(x,u,'k')
    title(sprintf('time, t = %1.2f',t))
    if ~exist('YY','var')
        YY=ylim;
    end
    axis([xmin xmax YY])
    drawnow
end
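One caveat, not from the original answer: dt here is fixed from the initial data. For the inviscid Burgers equation max|u| does not increase in time, so this is safe; for other problems you may want to recompute the step inside the loop, e.g.
dt = CFL*dx/max(abs(u(:))); % adaptive CFL step, recomputed each iteration (sketch)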
Kurganov-Tadmor scheme
The Kurganov-Tadmor scheme of [1] has several advantages over the NT scheme, including lower numerical dissipation and a semi-discrete form that allows the use of any time integration method you choose. Using the same spatial discretization as above, it can be written as a system of ODEs, du/dt = RHS(u).
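Schematically (up to notation; see [1] for the precise statement), the semi-discrete scheme is
$\frac{du_j}{dt} = -\frac{H_{j+1/2}-H_{j-1/2}}{\Delta x}, \qquad H_{j+1/2} = \frac{F(u^+_{j+1/2})+F(u^-_{j+1/2})}{2} - \frac{a_{j+1/2}}{2}\left(u^+_{j+1/2}-u^-_{j+1/2}\right),$
where $u^\pm_{j+1/2}$ are the reconstructed right/left interface values and $a_{j+1/2}$ is the local maximal wave speed (the spectral radius rhodF below). The right-hand side of this ODE system can be computed by the function: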
function RHS = KTrhs(dx,u)
%%% Kurganov-Tadmor scheme
% minmod-limited slopes and left/right interface reconstructions
ux = minmod((u-circshift(u,[0 1]))/dx,(circshift(u,[0 -1])-u)/dx);
uplus = u-dx/2*ux;
uminus = circshift(u+dx/2*ux,[0 1]);
% local maximal wave speed at each cell interface
a = max(abs(rhodF(uminus)),abs(rhodF(uplus)));
RHS = -( flux(circshift(uplus,[0 -1]))+flux(circshift(uminus,[0 -1])) ...
    -flux(uplus)-flux(uminus) )/(2*dx) ...
    +( circshift(a,[0 -1]).*(circshift(uplus,[0 -1])-circshift(uminus,[0 -1])) ...
    -a.*(uplus-uminus) )/(2*dx);
This function also relies on knowing the spectral radius of the Jacobian of F(u) (rhodF in the code above). For inviscid Burgers this is just
function rho = rhodF(u)
%spectral radius of dF/du for Burgers equation: |u|
rho = abs(u);
The main program of the KT scheme could be something like:
clear all
nx = 256;
xmin=0; xmax=2*pi;
x=linspace(xmin,xmax,nx);
dx = x(2)-x(1);
%initialize
u = exp(-4*(x-pi*1/2).^2)-exp(-4*(x-pi*3/2).^2);
%CFL number:
CFL = 0.25;
t = 0;
dt = CFL*dx/max(abs(u(:)));
while (t<3)
    % 4th order Runge-Kutta time stepping
    k1 = KTrhs(dx,u);
    k2 = KTrhs(dx,u+dt/2*k1);
    k3 = KTrhs(dx,u+dt/2*k2);
    k4 = KTrhs(dx,u+dt*k3);
    u = u+dt/6*(k1+2*k2+2*k3+k4);
    t = t+dt;
    %plot
    figure(1)
    clf
    plot(x,u,'k')
    title(sprintf('time, t = %1.2f',t))
    if ~exist('YY','var')
        YY=ylim;
    end
    axis([xmin xmax YY])
    drawnow
end
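Since the semi-discrete form is an ordinary ODE system, you are also free to hand KTrhs to one of MATLAB's built-in integrators instead of the hand-rolled RK4 above. A minimal sketch (my own, not from the original; KTrhs as written operates on row vectors, hence the transposes, and u is the initial profile):
rhs = @(t,v) KTrhs(dx, v.').';       % adapt KTrhs to ode45's column-vector convention
[tout, uout] = ode45(rhs, [0 3], u(:));
figure(2), plot(x, uout(end,:), 'k') % solution at the final time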

Related

Strange wrong result for (un)coupled PDEs using MATLAB's pdepe, time is doubled

I am trying to solve two coupled reaction-diffusion equations in 1D, using pdepe, namely
$\partial_t u_1 = \nabla^2 u_1 + 2k(-u_1^2+u_2)$
$\partial_t u_2 = \nabla^2 u_2 + k(u_1^2-u_2)$
The solution is in the domain $x\in[0,1]$, with initial conditions being two identical Gaussian profiles centered at $x=1/2$. The boundary conditions are absorbing for both components, i.e. $u_1(0)=u_2(0)=u_1(1)=u_2(1)=0$.
Pdepe gives me a solution without prompting any errors. However, I think the solutions must be wrong, because when I set the coupling to zero, i.e. $k=0$ (and also if I set it to be very small, say $k=0.001$), the solutions do not coincide with the solution of the simple diffusion equation
$\partial_t u = \nabla^2 u$
as obtained from pdepe itself.
Strangely enough, the solutions $u_1(t)=u_2(t)$ from the "coupled" case with coupling set to zero, and the solution for the case uncoupled by construction $u(t')$ coincide if we set $t'=2t$, that is, the solution of the "coupled" case evolves twice as fast as the solution of the uncoupled case.
Here's a minimal working example:
Coupled case
function [xmesh,tspan,sol] = coupled(k) %argument is the coupling k
    std=0.001; %width of initial gaussian
    center=1/2; %center of gaussian
    xmesh=linspace(0,1,10000);
    tspan=linspace(0,1,1000);
    sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);

    function [c,f,s] = pdefun(x,t,u,dudx)
        c=ones(2,1);
        f=zeros(2,1);
        f(1) = dudx(1);
        f(2) = dudx(2);
        s=zeros(2,1);
        s(1) = 2*k*(u(2)-u(1)^2);
        s(2) = k*(u(1)^2-u(2));
    end

    function u0 = icfun(x)
        u0=ones(2,1);
        u0(1) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
        u0(2) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
    end

    function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
        pL=zeros(2,1);
        pL(1) = uL(1);
        pL(2) = uL(2);
        pR=zeros(2,1);
        pR(1) = uR(1);
        pR(2) = uR(2);
        qL = [0 0;0 0];
        qR = [0 0;0 0];
    end
end
Uncoupled case
function [xmesh,tspan,sol] = uncoupled()
    std=0.001; %width of initial gaussian
    center=1/2; %center of gaussian
    xmesh=linspace(0,1,10000);
    tspan=linspace(0,1,1000);
    sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);

    function [c,f,s] = pdefun(x,t,u,dudx)
        c=1;
        f = dudx;
        s=0;
    end

    function u0 = icfun(x)
        u0=exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
    end

    function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
        pL=uL;
        pR=uR;
        qL = 0;
        qR = 0;
    end
end
Now, suppose we run
[xmesh,tspan,soluncoupled] = uncoupled();
[xmesh,tspan,solcoupled] = coupled(0); %coupling k=0, i.e. uncoupled solutions
One can check directly by plotting the solutions at any time index $it$ that, although they should be identical, the solutions given by the two functions are not, e.g.
hold all
plot(xmesh,soluncoupled(it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
On the other hand, if we double the time of the uncoupled solution, the solutions are identical
hold all
plot(xmesh,soluncoupled(2*it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
The case $k=0$ is not singular: one can set $k$ to be small but finite, and the deviations from the case $k=0$ are minimal, i.e. the solution still evolves twice as fast as the uncoupled solution.
I really don't understand what is going on. I need to work on the coupled case, but obviously I don't trust the results if it does not give the right limit when $k\to 0$. I don't see where I could be making a mistake. Could it be a bug?
I found the source of the error. The problem lies in the qL and qR variables of bcfun for the coupled() function. The MATLAB documentation, see here and here, is slightly ambiguous on whether the q's should be matrices or column vectors. I had used matrices
qL = [0 0;0 0];
qR = [0 0;0 0];
but in reality I should have used column vectors
qL = [0;0];
qR = [0;0];
Amazingly, pdepe didn't throw an error, and simply gave wrong results. This should perhaps be fixed by the developers.
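For reference, here is bcfun with just that fix applied (everything else as in the question):
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
    % absorbing (Dirichlet) conditions for both components
    pL = [uL(1); uL(2)];
    pR = [uR(1); uR(2)];
    qL = [0; 0]; % column vectors: one entry per equation
    qR = [0; 0];
end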

Input equations into Matlab for Simulink Function

I am currently working on an assignment where I need to create two different controllers in MATLAB/Simulink for a robotic exoskeleton leg. The idea behind this is to compare both of them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a MATLAB Function block to then run in Simulink and get results for an AFO (adaptive frequency oscillator). The link below shows the equations I'm trying to put in, and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
t = 0;
% syms j
% M = 6;
% j = sym('j', [1 M]);
if t == 0
    omega = 3*pi/2;
    theta = 0;
    phi = pi/2;
    ampl = 0;
else
    omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
    theta = theta*(t-1) + dt*(nu*offset);
    phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
    phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
    ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
    offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
end
pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi* (t-1))))*AFO_on; %symsum needs input argument for index M and range
vel_AFO = diff(pos_AFO)*AFO_on;
acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to do the subscripts, sigma, or the (t+1) function. Any help is appreciated, as this is due next week.
You are looking to find the result of an adaptive process, therefore your algorithm needs to consider time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you that you need to reuse an old value to calculate a new one.
omega_old = 0;
theta_old = 0;
% initialize the rest of your variables
for t = 1:N
    omega(t) = omega_old + % here is the rest of your omega calculation
    theta(t) = theta_old + % ...
    % more code .....
    % remember your old values for the next iteration
    omega_old = omega(t);
    theta_old = theta(t);
end
I think you forgot to apply the modulo operation to phi, judging by the original formula you linked; a one-line sketch is shown below. As a general rule, design your code in small pieces, make sure the output of each piece makes sense, and only then combine the pieces and check that the overall result is correct.
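For the modulo, assuming the new phase for step t has just been computed inside the loop, something like:
phi(t) = mod(phi(t), 2*pi); % wrap the phase back into [0, 2*pi)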

Runge-kutta for coupled ODEs

I’m building a function in Octave that can solve N coupled ordinary differential equations of the type:
dx/dt = F(x,y,…,z,t)
dy/dt = G(x,y,…,z,t)
dz/dt = H(x,y,…,z,t)
With any of these three methods (Euler, Heun and Runge-Kutta-4).
The following code corresponds to the function:
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
    range = b-a;
    h=range/steps;
    rows = (range/h)+1;
    columns = size(dfuns)(2)+1;
    sol= zeros(abs(rows),columns);
    heun=zeros(1,columns-1);
    for i=1:abs(rows)
        if i==1
            sol(i,1)=a;
        else
            sol(i,1)=sol(i-1,1)+h;
        end
        for j=2:columns
            if i==1
                sol(i,j)=ini(j-1);
            else
                if strcmp("euler",method)
                    sol(i,j)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
                elseif strcmp("heun",method)
                    heun(j-1)=sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
                elseif strcmp("rk4",method)
                    k1=h*dfuns{j-1}(E, [sol(i-1,1), sol(i-1,2:end)]);
                    k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
                    k3=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k2)]);
                    k4=h*dfuns{j-1}(E, [sol(i-1,1)+h, sol(i-1,2:end)+(h*k3)]);
                    sol(i,j)=sol(i-1,j)+((1/6)*(k1+(2*k2)+(2*k3)+k4));
                end
            end
        end
        if strcmp("heun",method)
            if i~=1
                for k=2:columns
                    sol(i,k)=sol(i-1,k)+(h/2)*((dfuns{k-1}(E, sol(i-1,1:end)))+(dfuns{k-1}(E, [sol(i,1),heun])));
                end
            end
        end
    end
end
When I use the function for a single ordinary differential equation, the RK4 method is the best, as expected; but when I run the code for a coupled system of differential equations, RK4 is the worst. I've been checking and checking and I don't know what I am doing wrong.
The following code is an example of how to call the function
F{1} = @(e, y) 0.6*y(3);
F{2} = @(e, y) -0.6*y(3)+0.001407*y(4)*y(3);
F{3} = @(e, y) -0.001407*y(4)*y(3);
steps = 24;
sol1 = coupled_ode(0,F,steps,0,24,[0 5 995],"euler");
sol2 = coupled_ode(0,F,steps,0,24,[0 5 995],"heun");
sol3 = coupled_ode(0,F,steps,0,24,[0 5 995],"rk4");
plot(sol1(:,1),sol1(:,4),sol2(:,1),sol2(:,4),sol3(:,1),sol3(:,4));
legend("Euler", "Heun", "RK4");
Careful: there are a few too many h's in the RK4 formulæ:
k2 = h*dfuns{ [...] +(0.5*h*k1)]);
k3 = h*dfuns{ [...] +(0.5*h*k2)]);
should be
k2 = h*dfuns{ [...] +(0.5*k1)]);
k3 = h*dfuns{ [...] +(0.5*k2)]);
(last h's removed).
However, this makes no difference for the example that you provided, since h=1 there.
But other than that little bug, I don't think you're actually doing anything wrong.
If I plot the solution generated by the more advanced, adaptive 4ᵗʰ/5ᵗʰ order RK implemented in ode45:
F{1} = @(e,y) +0.6*y(3);
F{2} = @(e,y) -0.6*y(3) + 0.001407*y(4)*y(3);
F{3} = @(e,y) -0.001407*y(4)*y(3);
tend = 24;
steps = 24;
y0 = [0 5 995];
plotN = 2;
sol1 = coupled_ode(0,F, steps, 0,tend, y0, 'euler');
sol2 = coupled_ode(0,F, steps, 0,tend, y0, 'heun');
sol3 = coupled_ode(0,F, steps, 0,tend, y0, 'rk4');
figure(1), clf, hold on
plot(sol1(:,1), sol1(:,plotN+1),...
sol2(:,1), sol2(:,plotN+1),...
sol3(:,1), sol3(:,plotN+1));
% New solution, generated by ODE45
opts = odeset('AbsTol', 1e-12, 'RelTol', 1e-12);
fcn = @(t,y) [F{1}(0,[0; y])
              F{2}(0,[0; y])
              F{3}(0,[0; y])];
[t,solN] = ode45(fcn, [0 tend], y0, opts);
plot(t, solN(:,plotN))
legend('Euler', 'Heun', 'RK4', 'ODE45');
xlabel('t');
Then we have something more believable to compare to.
Now, plain-and-simple RK4 indeed performs terribly for this isolated case (plot not reproduced here).
However, if I simply flip the signs of the last term in the last two functions:
% ±
F{2} = @(e,y) +0.6*y(3) - 0.001407*y(4)*y(3);
F{3} = @(e,y) +0.001407*y(4)*y(3);
Then we get this (plot not reproduced here).
The main reason RK4 performs badly for your case is because of the step size. The adaptive RK4/5 (with a tolerance set to 1 instead of 1e-12 as above) produces an average δt = 0.15. This means that basic error analysis has indicated that for this particular problem, h = 0.15 is the largest step you can take without introducing unacceptable error.
But you were taking h = 1, which then indeed gives a large accumulated error.
The fact that Heun and Euler perform so well for your case is, well, just plain luck, as demonstrated by the sign inversion example above.
Welcome to the world of numerical mathematics - there never is 1 method that's best for all problems under all circumstances :)
Apart from the error described in the older answer, there is indeed a fundamental methodological error in the implementation. The implementation is correct for scalar first-order differential equations. But the moment you try to use it on a coupled system, the decoupled treatment of the stages in the Runge-Kutta method (note that Heun is just a copy of the Euler step) reduces it to a first-order method.
Specifically, starting in
k2=h*dfuns{j-1}(E, [sol(i-1,1)+(0.5*h), sol(i-1,2:end)+(0.5*h*k1)]);
the addition of 0.5*k1 to sol(i-1,2:end) is supposed to add the vector of first-stage slopes, one per component, whereas the code adds the same scalar slope value to all components of the position vector.
Taking this into account results in the change to the implementation
function sol = coupled_ode(E, dfuns, steps, a, b, ini, method)
    range = b-a;
    h = range/steps;
    rows = steps+1;
    columns = size(dfuns)(2)+1;
    sol = zeros(rows,columns);
    % stage slopes; the first column is kept at h so that expressions like
    % sol(i-1,:)+0.5*k(s,:) also advance the time entry by the right amount
    k = h*ones(4,columns);
    sol(1,1) = a;
    sol(1,2:end) = ini(1:end);
    for i=2:rows
        sol(i,1) = sol(i-1,1)+h;
        if strcmp("euler",method)
            for j=2:columns
                sol(i,j) = sol(i-1,j)+h*dfuns{j-1}(E, sol(i-1,1:end));
            end
        elseif strcmp("heun",method)
            for j=2:columns
                k(1,j) = h*dfuns{j-1}(E, sol(i-1,1:end));
            end
            for j=2:columns
                % trapezoidal average of the slope at the start and at the Euler predictor
                sol(i,j) = sol(i-1,j)+0.5*(k(1,j)+h*dfuns{j-1}(E, sol(i-1,1:end)+k(1,1:end)));
            end
        elseif strcmp("rk4",method)
            for j=2:columns
                k(1,j) = h*dfuns{j-1}(E, sol(i-1,:));
            end
            for j=2:columns
                k(2,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(1,:));
            end
            for j=2:columns
                k(3,j) = h*dfuns{j-1}(E, sol(i-1,:)+0.5*k(2,:));
            end
            for j=2:columns
                k(4,j) = h*dfuns{j-1}(E, sol(i-1,:)+k(3,:));
            end
            sol(i,2:end) = sol(i-1,2:end)+(1/6)*(k(1,2:end)+(2*k(2,2:end))+(2*k(3,2:end))+k(4,2:end));
        end
    end
end
As can be seen, the loop over the vector components recurs frequently. One can avoid it entirely by fully vectorizing the method, using a single vector-valued function for the right-hand side of the coupled ODE system, as sketched below.
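A minimal sketch of that vectorization (my own, under the assumption that the right-hand side is a single handle f(t,y) returning the column vector of all derivatives, which replaces the cell array dfuns; note the component indices shift down by one, since y no longer carries the time in its first entry):
function sol = coupled_ode_vec(f, steps, a, b, ini)
    % classical RK4 with a vector-valued right-hand side f(t,y)
    h = (b-a)/steps;
    t = a + (0:steps)'*h;
    y = zeros(steps+1, numel(ini));
    y(1,:) = ini;
    for i = 1:steps
        yi = y(i,:).';
        k1 = h*f(t(i),     yi);
        k2 = h*f(t(i)+h/2, yi + k1/2);
        k3 = h*f(t(i)+h/2, yi + k2/2);
        k4 = h*f(t(i)+h,   yi + k3);
        y(i+1,:) = (yi + (k1 + 2*k2 + 2*k3 + k4)/6).';
    end
    sol = [t, y]; % same layout as before: time in the first column
end
For the example above this would be called as
f = @(t,y) [0.6*y(2); -0.6*y(2)+0.001407*y(3)*y(2); -0.001407*y(3)*y(2)];
sol = coupled_ode_vec(f, 120, 0, 24, [0 5 995]);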
With these changes, the plot of the second component of the solution is much more reasonable for step size 1, and with a subdivision into 120 intervals (step size 0.2) the graph for RK4 changes little, while the other two move towards it from below and above (plots not reproduced here).

How does one compute a single finite difference in MATLAB efficiently?

I want to compute a finite difference with respect to a change in the function's argument in MATLAB. In other words,
f(x+e_i) - f(x)
is what I want to compute. Note that it's very similar to first-order numerical partial differentiation (forward differentiation in this case):
(f(x+e_i) - f(x)) / (e_i)
Currently I am using for loops to compute it, but it seems that MATLAB is much slower than I thought. I am doing it as follows:
function [ dU ] = numerical_gradient(W,f,eps)
%compute gradient or finite difference update numerically
[D1, D2] = size(W);
dU = zeros(D1, D2);
for d1=1:D1
    for d2=1:D2
        e = zeros([D1,D2]);
        e(d1,d2) = eps;
        f_e1 = f(W+e);
        f_e2 = f(W-e);
        %numerical_derivative = (f_e1 - f_e2)/(2*eps);
        %dU(d1,d2) = numerical_derivative
        numerical_difference = f_e1 - f_e2;
        dU(d1,d2) = numerical_difference;
    end
end
It seems really difficult to vectorize the above code, because numerical differences follow the definition of the gradient and partial derivatives:
df_dW = [ ..., df_dWi, ...]
where df_dWi assumes the other coordinates are fixed and only concerns the change of the variable Wi. Thus, I can't just change all the coordinates at once.
Is there a better way to do this? My intuition tells me that the best way is to implement this not in MATLAB but in some other language, say C, and then have MATLAB call that library. Is that true? Does it mean that the best solution is some MATLAB library that does this for me?
I did see:
https://www.mathworks.com/matlabcentral/answers/332414-what-is-the-quickest-way-to-find-a-gradient-or-finite-difference-in-matlab-of-a-real-function-in-hig
but unfortunately, it computes exact derivatives, which isn't what I am looking for. I am explicitly looking for differences or "bad approximation" to the gradient.
Since it seems this code is not easy to vectorize (in fact my intuition tells me it's not possible to do so), my only other idea is to implement this finite difference function in C and then have MATLAB call it. Is this a good idea? Anyone know how to do this?
I did try reading the following:
https://www.mathworks.com/help/matlab/matlab_external/standalone-example.html
but it was too difficult for me to understand, because I have no idea what a MEX file is, whether I need an arrayProduct.c file as well as a mex.h file, whether I also need a MATLAB file, etc. If there just existed a way to simply download a working example with all the functions they suggest there, plus some instructions to compile it, that would be super helpful. But just reading the html article like that, it's impossible for me to infer what they want me to do.
For the sake of completeness, it seems reddit has some comments in its discussion of this:
https://www.reddit.com/r/matlab/comments/623m7i/how_does_one_compute_a_single_finite_differences/
Here is a more efficient way of doing so:
function [ vNumericalGrad ] = CalcNumericalGradient( hInputFunc, vInputPoint, epsVal )
    numElmnts = size(vInputPoint, 1);
    vNumericalGrad = zeros([numElmnts, 1]);
    refVal = hInputFunc(vInputPoint);
    for ii = 1:numElmnts
        % Set the perturbation vector
        refInVal = vInputPoint(ii);
        vInputPoint(ii) = refInVal + epsVal;
        % Compute Numerical Gradient
        vNumericalGrad(ii) = (hInputFunc(vInputPoint) - refVal) / epsVal;
        % Reset the perturbation vector
        vInputPoint(ii) = refInVal;
    end
end
This code allocates less memory.
The performance of the above code is totally controlled by the speed of hInputFunc.
The small tricks compared to the original code are:
No reallocation of the perturbation array e on each iteration.
Instead of the vector addition W + e, there are two scalar assignments to the array.
Halving the number of calls to hInputFunc() by computing the reference value outside the loop (this only works for forward/backward differences).
Probably this will be very close to C code, unless you can code the function which computes the value (hInputFunc) more efficiently in C.
A full implementation can be found in the StackOverflow Q44984132 repository (it was posted in StackOverflow Q44984132).
See CalcFunGrad( vX, hObjFun, difMode, epsVal ).
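For completeness, a quick usage sketch of CalcNumericalGradient above (the quadratic test function is a made-up example, not from the repository):
f = @(v) sum(v.^2); % gradient is 2*v
x0 = [1; 2; 3];
g = CalcNumericalGradient(f, x0, 1e-6);
max(abs(g - 2*x0)) % forward-difference error, on the order of epsVal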
A much better approach (numerically more stable, no issue of choosing the perturbation hyperparameter, accurate up to machine precision) is to use algorithmic/automatic differentiation. For this you need the MATLAB Deep Learning Toolbox; then you can use dlgradient to compute the gradient. Below you find the source code corresponding to your example.
Most importantly, you can examine the error and observe that the deviation of the automatic approach from the analytical solution is indeed of machine precision, while for the finite difference approach (I chose second-order central differences) the error is orders of magnitude higher. For 100 points and a range of $[-10, 10]$ these errors are somewhat tolerable, but if you play a bit with Rand_Max and n_points you will observe that the errors become larger and larger.
Error of algorithmic / automatic diff. is: 1.4755528111219851e-14
Error of finite difference diff. is: 1.9999999999348703e-01 for perturbation 1.0000000000000001e-01
Error of finite difference diff. is: 1.9999999632850161e-03 for perturbation 1.0000000000000000e-02
Error of finite difference diff. is: 1.9999905867860374e-05 for perturbation 1.0000000000000000e-03
Error of finite difference diff. is: 1.9664569947425062e-07 for perturbation 1.0000000000000000e-04
Error of finite difference diff. is: 1.0537897883625319e-07 for perturbation 1.0000000000000001e-05
Error of finite difference diff. is: 1.5469326944467290e-06 for perturbation 9.9999999999999995e-07
Error of finite difference diff. is: 1.3322061696937969e-05 for perturbation 9.9999999999999995e-08
Error of finite difference diff. is: 1.7059535957436630e-04 for perturbation 1.0000000000000000e-08
Error of finite difference diff. is: 4.9702408787320664e-04 for perturbation 1.0000000000000001e-09
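These numbers show the classic trade-off for central differences: the truncation error decreases like $h^2$, while floating-point cancellation contributes an error of roughly $\epsilon_{mach}/h$, so the total error is minimized around $h \approx \epsilon_{mach}^{1/3} \approx 10^{-5}$, which is exactly where the table above bottoms out.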
Source Code:
f2.m
function y = f2(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
f2_grad_analytic.m:
function grad = f2_grad_analytic(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
grad(:, 1) = 2*x1 + 2*x2;
grad(:, 2) = 4*x2 + 2*x1 + 2 * x3;
grad(:, 3) = 6*x3.^2 + 2*x2;
f2_grad_AD.m:
function grad = f2_grad_AD(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
grad = dlgradient(y, x);
CalcNumericalGradient.m:
function NumericalGrad = CalcNumericalGradient(InputPoints, eps)
% (Central, second order accurate FD)
NumericalGrad = zeros(size(InputPoints));
for i = 1:size(InputPoints, 2)
    perturb = zeros(size(InputPoints));
    perturb(:, i) = eps;
    NumericalGrad(:, i) = (f2(InputPoints + perturb) - f2(InputPoints - perturb)) / (2 * eps);
end
main.m:
clear;
close all;
clc;
n_points = 100;
Rand_Max = 20;
x_test_FD = rand(n_points, 3) * Rand_Max - Rand_Max/2;
% Calculate analytical solution
grad_analytic = f2_grad_analytic(x_test_FD);
grad_AD = zeros(n_points, 3);
for i = 1:n_points
    x_test_dl = dlarray(x_test_FD(i,:));
    grad_AD(i,:) = dlfeval(@f2_grad_AD, x_test_dl);
end
Err_AD = norm(grad_AD - grad_analytic);
fprintf("Error of algorithmic / automatic diff. is: %.16e\n", Err_AD);
eps_range = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9];
for i = 1:length(eps_range)
    eps = eps_range(i);
    grad_FD = CalcNumericalGradient(x_test_FD, eps);
    Err_FD = norm(grad_FD - grad_analytic);
    fprintf("Error of finite difference diff. is: %.16e for perturbation %.16e\n", Err_FD, eps);
end

Solving the Fisher-Kolmogorov Partial Differential Equation

I have been trying to solve the non-dimensional Fisher-Kolmogorov equation in MATLAB. I am getting a graph which doesn't look at all like it should. Also, the solution appears to be independent of the value of s (the source term in the pdepe solver): no matter what value of s I put in, the graph remains the same.
function FK
m = 0;
x = linspace(0,1,100);
t = linspace(0,1,100);
u = pdepe(m,#FKpde,#FKic,#FKbc,x,t);
[X,T] = meshgrid(x,t);
%ANALYTICAL SOLUTION
% a=(sqrt(2))-1;
% q=2;
% s=2/q;
% b= q /((2*(q+2))^0.5);
% c= (q+4)/((2*(q+2))^0.5);
% zeta= X-c*T;
%y = 1/((1+(a*(exp(b*zeta))))^s);
%r=(y(:,:)-u(:,:))./y(:,:); % relative error in numerical and analytical value
figure;
plot(x,u(10,:),'o',x,u(40,:),'o',x,u(60,:),'o',x,u(end,:),'o')
title('Numerical Solutions at different times');
legend('tn=1','tn=5','tn=30','tn=10','ta=20','ta=600','ta=800','ta=1000',0);
xlabel('Distance x');
ylabel('u(x,t)');
% --------------------------------------------------------------------------
function [c,f,s] = FKpde(x,t,u,DuDx)
c = 1;
f = DuDx;
s =u*(1-u);
% --------------------------------------------------------------------------
function u0 = FKic(x)
u0 = 10^(-4);
% --------------------------------------------------------------------------
function [pl,ql,pr,qr] = FKbc(xl,ul,xr,ur,t)
pl = ul-1;
ql = 0;
pr = ur;
qr = 0;
This should maybe be a comment, but I am putting it as an answer for better formatting. Your analytic solution, which I assume you're using to compare with the numerical answer when you say the graph does not look as it should, does not appear to respect the initial or boundary conditions you are feeding pdepe, so I'd start there when trying to figure out why u does not look like y.
The initial and boundary conditions you are setting are:
u(0, t) = 1
u(1, t) = 0
u(x, 0) = 1e-4
Setting aside that the boundary and initial conditions are in conflict with each other, the analytic solution you suggest in the code has
u(0, t) = 1/(1+a*exp(-b*c*t))^s
u(1, t) = 1/(1+a*exp(b*(1-c*t)))^s
u(x, 0) = 1/(1+a*exp(b*x))^s
So it seems to me the numerical and analytic solutions should be expected to be different, and the differences you observe are probably due to the IC/BC setup. I suspect that pdepe is probably solving the equation you are giving it.
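If the goal is to compare against that travelling wave, one option (a sketch of mine, reusing the commented-out constants from the question; the boundary conditions would need to be made time-dependent in the same way) is to initialize with the wave profile itself:
function u0 = FKic(x)
% travelling-wave profile at t = 0 (constants a, q, s, b from the commented block)
a = sqrt(2) - 1;
q = 2;
s = 2/q;
b = q/sqrt(2*(q+2));
u0 = 1/(1 + a*exp(b*x))^s;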
On increasing the length scale and time scale, I get the answers I want. The problem was to solve for different times and see the wave propagate. For small lengths, I could only see part of that wave.