Basic 2-body interaction using Matlab's ode45

I'm trying to model basic gravitation for an object of negligible mass around a massive body. I've followed the examples provided in the ODE suite documentation, but the results I'm getting are plainly ridiculous.
Here's the function I'm calling with ode45:
function dy = rigid(t,y)
dy = zeros(4,1);
%Position
xx=y(1);
yy=y(2);
%Radius
r=(xx.^2+yy.^2).^0.5;
%Constants
M=10^30;
G=6.67*10^-11;
%dX/dt
dy(1)=y(3); %vx
dy(3)=-M.*G.*xx.*r.^-3; %ax
%dY/dt
dy(2)=y(4); %vy
dy(4)=-M.*G.*yy.*r.^-3; %ay
And here are the solver lines:
%Init
x_0=1;
y_0=1;
vx_0=0;
vy_0=5;
%ODEs
[T,Y] = ode45(@rigid,[0 1000],[x_0 y_0 vx_0 vy_0]);
%Vectors
x=Y(:,1);
y=Y(:,2);
%Plot
plot(x,y)
xlabel('x');
ylabel('y');
title('y=f(x)');
I get a linear plot at the end. Even with a nonzero initial speed, the position doesn't change over several steps. The only thing I can think of is that I've misunderstood the way to set up my system of ODEs.
I've been pondering this for a while now, and I'm really short on ideas, having done a few searches on the web.

Summary: There are problems in integrating Hamiltonian systems with ordinary numerical integrators, and your special initial conditions aggravate this to the point where the numerical solution bears no resemblance to the correct one.
There's nothing wrong with your implementation per se. However, the initial conditions you use are not the best. The constants G and M you use are in SI units, which means the coordinates are in m, the speeds are in m/s, and time is in s. These lines
x_0=1;
y_0=1;
vx_0=0;
vy_0=5;
[T,Y] = ode45(@rigid,[0 1000],[x_0 y_0 vx_0 vy_0]);
therefore mean that you are asking for an orbit with a radius of about 1.4 meters and an orbital speed of 5 m/s, and you want this orbit over a period of 17 minutes. Imagine there actually was an object just meters away from a mass of 10^30 kilograms! A circular orbit at that radius would require a speed of sqrt(G*M/r) ≈ 7e9 m/s, more than twenty times the speed of light, so at 5 m/s the object simply plummets.
So let's try to ask for something more realistic, similar to Earth's orbit, and look at it over one year:
x_0=149.513e9;
y_0=0;
vx_0=0;
vy_0=29.78e3;
[T,Y] = ode45(@rigid,[0 31.536e6],[x_0 y_0 vx_0 vy_0]);
And the resulting plot of y over x looks as expected.
But there is a second problem here. Let's look at this orbit over a period of 10 years ([0 315.36e6]):
Now we no longer get a closed orbit, but a spiral! That's because the numerical integration proceeds with limited precision, and for this set of equations this leads (physically speaking) to a loss of energy. The precision can be increased using parameters to ode45, but ultimately the problem will persist.
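For reference, this is how such parameters are passed (the tolerance values here are only illustrative; the defaults are RelTol=1e-3 and AbsTol=1e-6):
%Tighter tolerances delay the spiraling but do not eliminate it
options = odeset('RelTol',1e-10,'AbsTol',1e-10);
[T,Y] = ode45(@rigid,[0 315.36e6],[x_0 y_0 vx_0 vy_0],options);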
Now let's go back to your original parameters and have a look at the result: this "orbit" is a straight line towards the origin (the sun). That could in principle be fine, since a straight-line oscillation is a degenerate special case of an elliptic orbit. But plotting the coordinates over time shows that there is no oscillation: the "planet" falls into the sun and stays there. What's happening here is the same effect as with the larger orbit: imprecise integration leads to a loss of energy. Moreover, the numerical integration gets stuck: we asked for a period of 1000 s, but the integration does not proceed beyond 1.6e-10 seconds.
As far as I can tell, neither ode45 nor any of the other standard Matlab integrators are adequate for this problem. There are special numerical integration algorithms designed for Hamiltonian systems, called symplectic integrators. There is a file exchange entry that provides different implementations. Also see this question and its answers for more pointers.
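For illustration, here is a minimal fixed-step velocity Verlet (leapfrog) sketch for the same problem; it is symplectic, so the energy error stays bounded and the orbit does not spiral inwards over the 10 years. This is hand-rolled code, not taken from the file exchange entry, and the step size h is only illustrative:
%Velocity Verlet (symplectic) for the Kepler problem
G=6.67e-11; M=1e30; mu=G*M;
acc=@(p) -mu*p/norm(p)^3; %gravitational acceleration
h=3600; %step size: 1 hour, illustrative
n=round(315.36e6/h); %about 10 years
p=[149.513e9;0]; v=[0;29.78e3]; %initial position and velocity
P=zeros(2,n);
a=acc(p);
for i=1:n
v=v+0.5*h*a; %half kick
p=p+h*v; %drift
a=acc(p);
v=v+0.5*h*a; %half kick
P(:,i)=p;
end
plot(P(1,:),P(2,:)); axis equal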

Normalization of integrand for numerical integration in Matlab

First off, I'm not sure if this is the best place to post this, but since there isn't a dedicated Matlab community I'm posting it here.
To give a little background, I'm currently prototyping a plasma physics simulation which involves a triple integration. The innermost integral can be done analytically, but for the outer two this is just impossible. I always thought it best to work with values close to unity and thus normalized my innermost integral such that it is unitless and usually takes values close to unity. However, compared to an earlier version of the code, where this innermost integral evaluated to values of the order of 1e-50, the numerical double integration, which uses the native Matlab function integral2 with a target relative tolerance of 1e-6, now requires around 1000 times more function evaluations to converge. As a consequence my simulation now takes roughly 12 h instead of the previous 20 minutes.
Question
So my questions are:
Is it possible that the faster convergence in the older version is simply due to the additional evaluations vanishing as roundoff errors, and that the results thus aren't trustworthy even though they pass the 1e-6 relative tolerance? In the few tests I ran, the results seemed to be the same in both versions, though.
What is the best practice concerning the normalization of the integrand for numerical integration?
Is there some way to improve the convergence of numerical integrals, especially if the integrand might have singularities?
I'm thankful for any help or insight, especially since I don't fully understand the inner workings of Matlab's integral2 function and what should be paid attention to when using it.
If I didn't know any better, I would actually conclude that an integrand of the order of 1e-50 works way better than one of, say, the order of 1e+0, but that doesn't seem to make sense. Is there some numerical reason why this could actually be the case?
TL;DR: when multiplying the function to be numerically integrated by Matlab's integral2 with a factor of 1e-50, and then multiplying the result in turn by 1e+50, the integral gives the same value but converges way faster, and I don't understand why.
edit:
I prepared a short script to illustrate the problem. Here the relative difference between the two results was of the order of 1e-4 and thus below the actual relative tolerance of integral2. In my original problem however the difference was even smaller.
fun = @(x,y,l) l./(sqrt(1-x.*cos(y)).^5).*((1-x).*sin(y));
x = linspace(0,1,101);
y = linspace(0,pi,101).';
figure
surf(x,y,fun(x,y,1));
l = linspace(0,1,101); l=l(2:end);
v1 = zeros(1,100); v2 = v1;
tval = tic;
for i=1:100
fun1 = @(x,y) fun(x,y,l(i));
v1(i) = integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t1 = toc(tval)
tval = tic;
for i=1:100
fun1 = @(x,y) 1e-50*fun(x,y,l(i));
v2(i) = 1e+50*integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t2 = toc(tval)
figure
hold all;
plot(l,v1);
plot(l,v2);
plot(l,abs((v2-v1)./v1));

Speed up calculation in Physics simulation in Matlab

I am working on an MR physics simulation written in Matlab which simulates the Bloch equations on a defined object. The magnetisation in the object is updated every time step with the following function.
function Mt = evolveMtrans(gamma, delta_B, G, T2, Mt0, delta_t)
% this function calculates precession and relaxation of the
% transversal component, Mt, of M
delta_phi = gamma*(delta_B + G)*delta_t;
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
end
This function is a very small part of the entire code but is called up to 250,000 times, and thus slows down the entire simulation. I have thought about how I can speed up the calculation but haven't come up with a good solution. There is one line that is VERY time consuming and accounts for approximately 50%-60% of the overall simulation time. This is the line,
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
where
Mt0 = 512x512 matrix
delta_t = a scalar
T2 = 512x512 matrix
delta_phi = 512x512 matrix
I would be very grateful for any suggestion to speed up this calculation.
More info below,
The function evolveMtrans is called every timestep during the simulation.
The parameters that are used for calling the function are,
gamma = a constant (the gyromagnetic ratio)
delta_B = the magnetic field value
G = gradient strength
T2 = a 512x512 matrix with T2-values for the object
Mstart.r = a 512x512 matrix with the values M.r had at the last timestep
delta_t = a scalar with the difference in time since the last calculated M.r
The only parameters among these that change during the simulation are G, Mstart.r and delta_t. The rest keep their values throughout the simulation.
The part below is the part in the main code that calls the function.
% update phase and relaxation to calcTime
delta_t = calcTime - Mstart_t;
delta_B = (d-d0)*B0;
G = Sq.Gx*Sq.xGxref + Sq.Gz*Sq.zGzref;
% Precession around B0 (z-axis) and B1 (+-x-axis or +-y-axis)
% is defined clock-wise in a right hand system x, y, z and
% x', y', z (see the Bloch equation, Bloch 1946 and Levitt
% 1997). The x-axis has angle zero and the y-axis has angle 90.
% For flipping/precession around B1 in the xy-plane, z-axis has
% angle zero.
% For testing of precession direction:
% delta_phi = gamma*((ones(size(d)))*1e-6*B0)*delta_t;
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
M.l = evolveMlong(T1, M0.l, Mstart.l, delta_t);
This is not a surprise.
That "single line" is a matrix equation. It's really 1,024 simultaneous equations.
Per Jannick, that first term means element-wise division, so "delta_t/T[i,j]". Multiplying a matrix by a scalar is O(N^2). Matrix addition is O(N^2). Evaluating exponential of a matrix will be O(N^2).
I'm not sure if I saw a complex argument in there as well. Does that mean complex matricies with real and imaginary entries? Does your equation simplify to real and imaginary parts? That means twice the number of computations.
Your best hope is to exploit symmetry as much as possible. If all your matricies are symmetric, you cut your calculations roughly in half.
Use parallelization if you can.
Algorithm choice can make a big difference, too. If you're using explicit Euler integration, you may have time-step limitations due to stability concerns. Is that why you have 250,000 steps? Maybe a larger time step is possible with a more stable integration scheme. Think about a higher-order adaptive scheme with error correction, like 5th-order Runge-Kutta.
There are several possibilities to improve the speed of the code but all that I see come with a caveat.
Numerical ode integration
The first possibility would be to replace your analytical solution with a numerical differential equation solver. This has several advantages:
The analytical solution includes the complex exponential function, which is costly to evaluate, while the differential equation contains only multiplications and additions. (d/dt u = -a*u => u = exp(-a*t))
There are plenty of built-in solvers available in Matlab, and they are typically pretty fast (e.g. ode45). The built-ins, however, all use a variable step size. This improves speed and accuracy, but would be a problem if you really need a fixed, equally spaced grid of time points. There are unofficial fixed-step solvers on the File Exchange.
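For a quick illustration of the variable-step built-ins on the scalar model equation d/dt u = -a*u from above (the decay constant a and the time span are made up for this sketch; note that the full 512x512 state would have to be reshaped into a column vector, since the built-in solvers operate on vectors):
a = 2; %illustrative decay constant
[t,u] = ode45(@(t,u) -a*u, [0 5], 1); %variable-step solution
plot(t, u, t, exp(-a*t), '--') %numerical vs. analytical solution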
As a start, you could also try a plain Euler step, replacing
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
by
delta_phi = gamma*(delta_B + G)*t_step;
M.r = M.r .* (1 - t_step./T2 - 1i*delta_phi); %one explicit Euler step
You can then further improve this by precalculating all constant values, e.g. one_over_T2 = 1./T2, and by moving the constant part of delta_phi out of the time loop.
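A sketch of that idea (the loop structure and nSteps are made up here; per the question, only G, Mstart.r and delta_t change between steps):
inv_T2 = 1./T2; %T2 never changes: compute the reciprocal once
gdB = gamma*delta_B; %delta_B is constant as well
for step = 1:nSteps
delta_phi = (gdB + gamma*G)*delta_t; %only G and delta_t vary
Mt = Mt .* exp(-delta_t*inv_T2 - 1i*delta_phi);
end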
Caveat:
You are bound to a minimum step size, or the accuracy suffers. Therefore this is only a good idea if your time spacing is quite fine.
Fewer points in time
You should carefully analyze whether you really need so many points in time. It seems somewhat puzzling to me that you need so many. As you know the full analytical solution, you can freely choose how to sample the time and maybe use this to your advantage.
Going Fortran
This might seem like a grand step, but in my experience basic Matlab code (simple loops, matrix operations, etc.) can be translated to Fortran line by line relatively easily. This would be especially helpful in addition to my first point. If you still want to use the full analytical solution, there is probably not much to gain here, because exp is already pretty fast in Matlab.

Matlab: understanding the ODE solver

I have a system of linked differential equations that I am solving with the ode23 solver. When a certain threshold is reached one of the parameters changes which reverses the slope of my function.
I followed the behavior of the ODE with the debugging function and noticed that it starts to jump back in "time" around this point; basically, it generates more data points. However, these are not all represented in the final solution vector.
Can somebody explain this behavior, especially why not all calculated values find their way into the solution vector?
//Edit: To clarify, the behavior starts when v changes from 0 to any other value. (When I write every value of v to a vector, it has more than 1000 components, while the ODE solver solution only has ~300.)
Find the code of my equations below:
%chemostat model, based on:
%DCc=-v0*Cc/V + umax*Cs*Cc/(Ks+Cs)-rd
%Dcs=(v0/V)*(Cs0-Cs) - Cc*(Ys*umax*Cs/(Ks+Cs)-m)
function dydt=systemEquationsRibose(t,y,funV0Ribose,V,umax,Ks,rd,Cs0,Ys,m)
v=funV0Ribose(t,y); %funV0Ribose determines v dependent on y(1)
if y(2)<0
y(2)=0;
end
dydt=[-(v/V)*y(1)+(umax*y(1)*y(2))/(Ks+y(2))-rd;
(v/V)*(Cs0-y(2))-((1/Ys)*(umax*y(2)*y(1))/(Ks+y(2)))];
Thanks in advance!
Cheers,
dahlai
The first conditional can also be expressed as
y(2) = max(0, y(2)).
As one can see, this is still a continuous function, but with a kink, i.e., a discontinuity in the first derivative. One can also interpret this as a point with curvature radius 0, i.e., infinite curvature.
ode23 uses a Runge-Kutta pair of orders 3 and 2 (Bogacki-Shampine): an order 3 method to advance the solution, an order 2 method to estimate the error, and probably an order 1 Euler step to estimate stiffness.
An integration step over the kink renders all discretization errors order 1 (or 2, depending on the convention), confounding the logic of the step-size control. This forces a rather radical step-size reduction; but since that small step then most probably falls short of the kink, the correct orders are found again, resulting in a step-size increase in the next step, which may again step over the kink, and so on.
The return array only contains successful integration steps, not the failed attempts of the step-size control.
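One common remedy is to make the solver aware of the kink: use an event function so that the integration stops exactly where y(2) crosses zero, and restart from there with whichever branch of the right-hand side applies. A sketch of the mechanism (tEnd and y0 are placeholders; the restart logic depends on the model and is omitted):
opts = odeset('Events', @kinkEvent);
[ts,ys,te,ye] = ode23(@(t,y) systemEquationsRibose(t,y,funV0Ribose,V,umax,Ks,rd,Cs0,Ys,m), [0 tEnd], y0, opts);

function [value,isterminal,direction] = kinkEvent(t,y)
%fires where y(2) decreases through zero, the location of the kink
value = y(2);
isterminal = 1; %stop the integration at the event
direction = -1; %trigger only on a decreasing crossing
end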

MATLAB using FFT to find steady state response to a periodic input force (mass spring damper system)

Let's say I have a mass-spring-damper system...
here is my code (matlab)...
% system parameters
m=4; k=256; c=1; wn=sqrt(k/m); z=c/2/sqrt(m*k); wd=wn*sqrt(1-z^2);
% initial conditions
x0=0; v0=0;
%% time
dt=.001; tMax=2*pi; t=0:dt:tMax;
% input
F=cos(t); Fw=fft(F);
% impulse response function
h=1/m/wd*exp(-z*wn*t).*sin(wd*t); H=fft(h);
% convolution
convolution=Fw.*H; sol=ifft(convolution);
% plot
plot(t,sol)
So I can successfully retrieve a plot; however, I am getting strange responses. I also programmed an RK4 method that solves the system of differential equations, so I know how the plot SHOULD look, and the plot I am getting from the FFT approach has an amplitude of about 2 when it should have an amplitude of about 0.05.
So, how can I solve for the steady-state response of this system using the FFT? I want to use the FFT because it is about 3 orders of magnitude faster than numerical integration methods.
Keep in mind I am defining my periodic input as cos(t), which has a period of 2*pi; that is why I only used the FFT over the time vector spanning 0 to 2*pi (one period). I also noticed that if I changed tMax to another multiple of 2*pi, like 10*pi, I got a similar-looking plot, but the amplitude was 4 rather than 2; either way, still not 0.05! Maybe there is some kind of factor I need to multiply by?
I also plotted plot(t,Fw), expecting to see one peak at 1 since the forcing function is cos(t), yet I did not see any peaks (maybe I shouldn't be plotting Fw against t).
I know it is possible to solve for the steady-state response using the Fourier transform/FFT; I am just missing something! I need help and understanding!!
The original results
Running the code you provided and comparing the result with the RK4 code posted in your other question yields two very different responses, one from the FFT-based implementation and one from your alternate RK4 implementation. As you have pointed out, the curves are quite different.
Getting the correct response
The most obvious problem is of course the amplitude, and the main sources of the amplitude discrepancies in the code posted in this question are the same as the ones I indicated in my answer to your other question:
The RK4 implementation performs a numeric integration which correctly scales the summed values by the integration step dt. This scaling is lacking from the FFT based implementation.
The impulse response used in the FFT based implementation is consistent with the driving force being scaled by the mass m, a factor which was missing from the RK4 implementation.
Fixing those two issues results in responses which are a little closer, but still not identical. As you probably found out given the changes in the posted code of your other question, another thing that was lacking was zero padding of the input and of the impulse response, without which you were getting a circular convolution rather than a linear convolution:
f=[cos(t),zeros(1,length(t)-1)]; %force f
h=[1/m/wd*exp(-z*wn*t).*sin(wd*t),zeros(1,length(t)-1)]; %impulse response
Finally, the last element needed to ensure the convolution yields a good result is a good approximation of the infinite-length impulse response. How long is long enough depends on the rate of decay of the impulse response. With the parameters you provided, the impulse response decays to 1% of its original amplitude after approximately 11*pi seconds, so extending the time span to tMax=14*pi (to include a full 2*pi cycle after the impulse response has died down) should be enough.
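Putting the FFT-side fixes together (scaling by dt, zero padding, and the longer time span), here is a consolidated sketch of the corrected computation; this merges the question's code with the corrections described above, and is not the verbatim final code:
m=4; k=256; c=1; wn=sqrt(k/m); z=c/2/sqrt(m*k); wd=wn*sqrt(1-z^2);
dt=.001; tMax=14*pi; t=0:dt:tMax; %long enough for the transient to die down
f=[cos(t),zeros(1,length(t)-1)]; %zero-padded force
h=[1/m/wd*exp(-z*wn*t).*sin(wd*t),zeros(1,length(t)-1)]; %zero-padded impulse response
sol=ifft(fft(f).*fft(h))*dt; %linear convolution, scaled by the integration step
sol=sol(1:length(t)); %keep the part corresponding to [0, tMax]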
Obtaining the steady-state response
The simplest way to obtain the steady-state response is then to discard the initial transient. In this process we discard an integer number of cycles of the reference driving force (this of course requires knowledge of the driving force's fundamental frequency):
T0 = tMax-2*pi; %keep only the last full driving-force cycle
delay = find(t>T0,1); %index of the first retained sample
sol = sol(delay:end);
plot([0:length(sol)-1]*dt, sol, 'b');
xlim([0 2*pi]);
The resulting responses then agree: the FFT-based implementation (blue) now matches the alternate RK4 implementation (red). Much better!
An alternate method
Computing the response for many cycles, waiting for the transient response to die down, and then extracting the remaining samples corresponding to the steady state might appear a little wasteful, despite the fact that the computation is still fairly fast thanks to the FFT.
So, let's go back a little and look at the problem domain. As you are probably aware, the mass-spring-damper system is governed by the differential equation
m*x''(t) + c*x'(t) + k*x(t) = f(t),
where f(t) is the driving force in this case.
Note that the general solution to the homogeneous equation has the form
x_h(t) = exp(-z*wn*t) * (A*cos(wd*t) + B*sin(wd*t))
(for the underdamped case 0 < z < 1).
The key is then to realize that the general solution in the case where c>0 and m>0 vanishes in the steady state (t going to infinity).
The steady-state solution is thus only dependent on the particular solution to the non-homogenous equation.
This particular solution can be found by the method of undetermined coefficients: for a driving force of the form
f(t) = F0*cos(w*t)
one correspondingly assumes that the solution has the form
x_p(t) = E*cos(w*t) + F*sin(w*t).
Substituting into the differential equation yields the system
(wn^2 - w^2)*E + 2*z*wn*w*F = F0/m
-2*z*wn*w*E + (wn^2 - w^2)*F = 0
thus, the solution can be implemented as:
EF0 = [wn*wn-w*w 2*z*wn*w; -2*z*wn*w wn*wn-w*w]\[1/m; 0];
sol = EF0(1)*cos(w*t)+EF0(2)*sin(w*t);
plot(t, sol);
where w=1 in your case, since the driving force cos(t) = cos(1*t) has an angular frequency of 1 rad/s.
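With the parameters from the question filled in, this becomes a self-contained sketch:
m=4; k=256; c=1; wn=sqrt(k/m); z=c/2/sqrt(m*k);
w=1; t=0:.001:2*pi; %driving frequency of cos(t)
EF0 = [wn*wn-w*w 2*z*wn*w; -2*z*wn*w wn*wn-w*w]\[1/m; 0];
sol = EF0(1)*cos(w*t)+EF0(2)*sin(w*t);
plot(t, sol);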
Generalization
The above approach can be generalized to more arbitrary periodic driving forces by expressing the driving force as a Fourier series (assuming the driving force function satisfies the Dirichlet conditions):
f(t) = sum_k F_k * exp(1i*k*w0*t), with fundamental frequency w0 = 2*pi/T.
The particular solution can correspondingly be assumed to have the form
x_p(t) = sum_k X_k * exp(1i*k*w0*t).
Solving for the particular solution can be done in a way very similar to the earlier case. This results in the following implementation:
% this snippet assumes F holds exactly one period of the driving force,
% with N samples in total (N even) over the fundamental period T
% normalize the driving force by the mass
y = F/m;
% compute coefficients proportional to the Fourier series coefficients
Yw = fft(y);
% setup the equations to solve the particular solution of the differential equation
% by the method of undetermined coefficients
k = [0:N/2];
w = 2*pi*k/T;
A = wn*wn-w.*w;
B = 2*z*wn*w;
% solve [A B; -B A]*[real(Xw); imag(Xw)] = [real(Yw); imag(Yw)]
% Note that the solution can be obtained by writing [A B; -B A] as a
% scaling + rotation of a 2D vector, which we solve using complex-number algebra
C = sqrt(A.*A+B.*B);
theta = acos(A./C);
Ywp = exp(j*theta)./C.*Yw([1:N/2+1]);
% build a hermitian-symmetric spectrum
Xw = [Ywp conj(fliplr(Ywp(2:end-1)))];
% bring back to time-domain (function synthesis from Fourier Series coefficients)
x = ifft(Xw);
A final note
I purposely avoided the undamped c=0 case in the above derivation. In this case the oscillations never die down, and the general solution to the homogeneous equation does not have to be the trivial one.
The final "steady state" in this case may or may not have the same period as the driving force. In fact, it may not be periodic at all if the period of the oscillations from the general solution is not related to the period of the driving force by a rational number (a ratio of integers).

Double integration over a polygon in Matlab

I am given a function @f(x,y) and I want to evaluate the integral of this function over a certain convex polygon in MATLAB. The polygon is not necessarily a rectangle, and that's why I can't use MATLAB's function "dblquad". The polygon I have is given by a set of vertices represented by the vectors X and Y, i.e. the vertices are (X(1),Y(1)),...,(X(n),Y(n)). Is there any function or method that I can use?
The trick is to use tools that integrate inside the region of interest. I've written a few tools for integration over a triangulated domain.
% Define a function to integrate.
% This function takes an nx2 array, where each row
% contains a single point to evaluate the kernel at.
% This computes x^2 + y^2 at each point.
fun = @(xy) sum(xy.^2,2);
% define the domain as a triangulated polygon
% this tool uses ear clipping to do so.
sc = poly2tri([1 4 3 1],[1 3 5 4]);
% Gauss-Legendre integration over the 2-d domain
[integ,fev]= quadgsc(fun,sc,2)
integ =
113.166666666667
fev =
8
% the triangulated polygon...
plotsc(sc,'facecolor','none','markerfacecolor','r')
axis equal
grid on
We can visualize the function itself as a mapping z(x,y) over that polygonal domain. When a range field is supplied, the simplicial complex turns into a 2-1 mapping from the 2-d (x,y) domain.
sc2 = refinesc(sc,'max',.5);
sc2.range = fun(sc2.domain);
plotsc(sc2,'markerfacecolor','r')
grid on
view(17,12)
This is a simple polynomial function over the domain of interest, so the default low-order Gaussian integration was adequate. The scheme used is a Gauss-Legendre one in tensor-product form over a triangle, not truly optimal, but viable. The problem with Gaussian quadrature is that it is not adaptive. It computes an estimate, based on implicit approximation by polynomials over a finite set of points.
The above estimate used 8 function evaluations. Since the kernel is a low-order polynomial, it should do perfectly. The problem is, you need to know whether it is a correct solution. This is the problem with Gaussian quadrature: there is no simple way to know if the answer is correct, except by re-solving the problem with a higher-order scheme until it seems to converge.
See that with 1 point per triangle (at the barycenter) we get the wrong answer, but the higher-order estimates all agree.
[integ,fev]= quadgsc(fun,sc,1)
integ =
107.777777777778
fev =
2
[integ,fev]= quadgsc(fun,sc,3)
integ =
113.166666666667
fev =
18
[integ,fev]= quadgsc(fun,sc,4)
integ =
113.166666666667
fev =
32
After writing quadgsc, I had to try an adaptive solver that works in the same way as the other quad tools in MATLAB do. This does an adaptive refinement of the triangulation, looking for triangles where the solution is not stable. The problem is, I never did finish writing these tools to my satisfaction. There are many different methods one can employ for the cubature problem over a triangulated domain. quadrsc computes a low-order solution, then refines it, uses a Richardson extrapolation, and compares the results. For any triangles where the difference is too large, it refines them further until convergence.
For example,
[integ,fev]= quadrsc(fun,sc)
integ =
113.166666666667
fev =
16
So this works. The problem shows up with more complex kernels, where the issue becomes knowing when to stop the refinement, and doing so before one has used up too many function evaluations. I never did get that fully working to my satisfaction, so I never posted these tools. I can send the toolbox to those who send me direct mail; the zip file is about 2.4 MB. One day I'll get around to finishing those tools, I hope...
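If you prefer to stay with built-in tools, the same fan decomposition of a convex polygon can be driven through integral2, by mapping the unit square onto each triangle of the fan and summing the pieces. A sketch (the kernel and polygon are taken from the example above; the mapping and its Jacobian follow from the standard degenerate bilinear map of the square onto a triangle):
f = @(x,y) x.^2 + y.^2; %example kernel
X = [1 4 3 1]; Y = [1 3 5 4]; %convex polygon, vertices in order
total = 0;
for i = 2:numel(X)-1
%fan triangle (P1, Pi, Pi+1)
ax=X(1); ay=Y(1); bx=X(i); by=Y(i); cx=X(i+1); cy=Y(i+1);
twoA = (bx-ax)*(cy-ay) - (cx-ax)*(by-ay); %twice the signed triangle area
%map (u,v) in [0,1]^2 onto the triangle; the Jacobian is u*twoA
xt = @(u,v) ax + u.*((1-v)*(bx-ax) + v*(cx-ax));
yt = @(u,v) ay + u.*((1-v)*(by-ay) + v*(cy-ay));
total = total + integral2(@(u,v) f(xt(u,v),yt(u,v)).*u*twoA, 0,1, 0,1);
end
total %agrees with the 113.1666... found above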