When I try to run this MATLAB code it goes into an infinite loop. I am trying to perform integration inside ode45:
clear
clc
options = odeset('RelTol',1e-4,'AbsTol',[1e-4 1e-4 1e-5]);
[T,Y] = ode45(@rigid,[0 12],[0 1 1],options);
plot(T,Y(:,1),'+',T,Y(:,2),'*',T,Y(:,3),'.')
function dy = rigid(t,y)
dy = zeros(3,1); % a column vector
dy(1) = y(2) ;
dy(2) = -y(1) * y(3);
fun = @(t) exp(-t.^2).*log(t).^2+y(1);
q = integral(fun,0,Inf);
dy(3) = y(2) * y(3) + q;
end
There is no "infinite loop." Your function just takes a very long time to integrate. Try setting tspan to [0 1e-7]. It appears to be a high-frequency oscillation, but I don't know if your equations are correct (that's a math question rather than a programming one). Such systems are hard to integrate accurately (ode15s might be a better choice), let alone quickly.
You also didn't bother to mention the important fact that the call to integral is generating a warning message:
Warning: Minimum step size reached near x = 1.75484e+22. There may be a
singularity, or the tolerances may be too tight for this problem.
> In funfun/private/integralCalc>checkSpacing at 457
In funfun/private/integralCalc>iterateScalarValued at 320
In funfun/private/integralCalc>vadapt at 133
In funfun/private/integralCalc at 84
In integral at 88
In rtest1>rigid at 17
In ode15s at 580
In rtest1 at 5
Printing out warning messages on each iteration greatly slows down integration. There's a good reason for this warning. You do realize that the function that you're evaluating with integral from 0 to Inf is equivalent to the following, right?
sqrt(pi)*((eulergamma + log(4))^2/8 + pi^2/16) + Inf*y(1)
where eulergamma is -psi(1) or double(sym('eulergamma')). Your integral doesn't converge.
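A quick numerical check makes the divergence concrete. The sketch below (which assumes an arbitrary example value of 0.5 for y(1)) shows that the exp(-t.^2).*log(t).^2 part alone integrates to the finite closed-form value above (about 1.95), while the added constant contributes y(1) times the length of the integration interval, so the value keeps growing as the upper limit increases:
fun0 = @(t) exp(-t.^2).*log(t).^2;         % convergent part, about 1.95
q0 = integral(fun0,0,Inf)
funC = @(t) exp(-t.^2).*log(t).^2 + 0.5;   % pretend y(1) = 0.5
q1 = integral(funC,0,1e3)                  % roughly 0.5*1e3 + 1.95
q2 = integral(funC,0,1e6)                  % roughly 0.5*1e6 + 1.95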
If you like, you can try to avoid the warning message in one of two ways.
1. Turn off the warning (being sure to re-enable it afterwards). You can do that with the following code:
...
warning('OFF','MATLAB:integral:MinStepSize');
[T,Y] = ode45(@rigid,[0 12],[0 1 1],options);
warning('ON','MATLAB:integral:MinStepSize');
...
You can obtain the ID for a warning via the lastwarn function (see the short example after option 2 below).
2. The other option might be to change your integration bounds and avoid the warning altogether, e.g.:
...
q = integral(fun,0,1e20);
...
This may or may not be acceptable, but integral is not returning the correct solution here either, because the integral doesn't converge.
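As mentioned under option 1, here is a small sketch of querying the warning ID with lastwarn after the warning has fired at least once:
[msg,warnID] = lastwarn   % warnID should be 'MATLAB:integral:MinStepSize'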
How would I numerically solve for the following simple system of differential equations using Octave?
Note: I use the qualifier "simple" as, from my understanding, the system is first order and is not coupled.
I have tried every method and script online to try to solve this, including here, here and here. In all cases, I either get a hanging, non-responsive Octave, a prompt stating "repeated convergence failures", an error recommending that I manually adjust the initial and maximum step size (which I did try, to no avail), or something that initially looks like a solution because no errors are raised, but plotting the solution shows a blank graph.
Where Octave provides equivalents of the MATLAB routines, I tried ode45, ode23, ode113, ode15s, ode23s, ode23t, ode23tb, ode15i and, of course, Octave's own lsode command, all giving the same errors described above.
Let's first replicate the vanilla solution
% z = [x,y]
f = @(t,z) [ z(1).^2+t; z(1).*z(2)-2 ];
z0 = [ 2; 1];
[ T, Z ] = ode45(f, [0, 10], z0);
plot(T,Z); legend(["x";"y"]);
The integrator fails as reported with the warning
warning: Solving was not successful. The iterative integration loop exited at time t = 0.494898 before the endpoint at tend = 10.000000 was reached. This may happen if the stepsize becomes too small. Try to reduce the value of 'InitialStep' and/or 'MaxStep' with the command 'odeset'.
Repeating the integration up to shortly before the critical time
opt = odeset('MaxStep',0.01);
[ T, Z ] = ode45(f, [0, 0.49], z0, opt);
clf; plot(T,Z); legend(["x";"y"]);
results in the graph
where one can see that the quadratic term in the first equation leads to runaway growth. For some reason the solver only reports the ever-decreasing step size, not the runaway values of the solution.
Indeed, the first equation is a Riccati equation, which is known to have poles at finite times. The typical parametrization x(t) = -u'(t)/u(t) has, by the quotient rule, the derivative
x' = -u''(t)/u(t) - u'(t)* (-u'(t)/u(t)^2) = -u''(t)/u(t) + x(t)^2
which then results in the ODE for u
u''(t)+t*u(t)=0, u(0)=-1, u'(0)=x(0)=2,
which is an Airy equation with the oscillating branch for t > 0. The first root of u is a pole of x; there is no way to extend the solution beyond that point.
g = @(t,u) [u(2); -t.*u(1)];
u0 = [ 1; -2];
function [val,term, dir] = event(t,u)
val = u(1);
term = 0;
dir = 0;
end
opt = odeset('MaxStep',0.1, 'Events', @(t,u) event(t,u));
[T,U,Te,Ue,Ie] = ode45(g,[0,4],u0,opt);
disp(Te)
clf; plot(T,U); legend(["u";"u'"]);
which lists the zeros of u as 0.4949319379979706, 2.886092605590324, again confirming the reason for the warning, and gives the plot
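If you want the original variable back, a minimal sketch (assuming U holds [u, u'] as set up above) is to form x = -u'/u from the transformed solution and plot it up to just before the first zero of u:
x_rec = -U(:,2)./U(:,1);        % x(t) = -u'(t)/u(t)
mask = T < 0.49;                % stay clear of the pole near t = 0.4949
plot(T(mask), x_rec(mask)); legend("x");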
I have a 3D mesh grid X, Y, Z. I want to create a new 3D array that is a function of X, Y, and Z. That function is the sum of several 3D Gaussians located at different points. Currently, I have a for loop that runs over the different points where I have my Gaussians, and an array of center locations r0(nGauss, 1:3):
[X,Y,Z]=meshgrid(-10:.1:10);
Psi=0*X;
for index = 1:nGauss
Psi = Psi + Gauss3D(X,Y,Z,[r0(index,1),r0(index,2),r0(index,3)]);
end
where my 3D gaussian function is
function output=Gauss3D(X,Y,Z,r0)
output = exp(-((X-r0(1)).^2 + (Y-r0(2)).^2 + (Z-r0(3)).^2));
end
I'm happy to redesign the function, which is the slowest part of my code and has to happen many, many times, but I can't figure out how to vectorize it so that it runs faster. Any suggestions would be appreciated.
NB: the original function had a square root in it, and has been modified to make it an actual Gaussian.
NOTE! I've modified your code to create a Gaussian. Your original function was:
output=exp(-sqrt((X-r0(1)).^2 + (Y-r0(2)).^2 + (Z-r0(3)).^2));
That does not make a Gaussian. I changed this to:
output = exp(-((X-r0(1)).^2 + (Y-r0(2)).^2 + (Z-r0(3)).^2));
(note no sqrt). This is a Gaussian with sigma = sqrt(1/2).
If this is not what you want, then this answer might not be very useful to you, because your function does not go to 0 as fast as a Gaussian, is therefore harder to truncate, and is not separable.
Vectorizing this code is pointless, as the other answers attest. MATLAB's JIT is perfectly capable of running this as fast as it'll go. But you can reduce the amount of computation significantly by noting that the Gaussian goes to almost zero very quickly, and is separable:
Most of the exp evaluations you're doing here yield a very tiny number. You don't need to compute those, just fill in 0.
exp(-x.^2-y.^2) is the same as exp(-x.^2).*exp(-y.^2), which is much cheaper to compute.
Let's put these two things to the test. Here is the test code:
function gaussian_test
N = 100;
r0 = rand(N,3)*20 - 10;
% Original
tic
[X,Y,Z] = meshgrid(-10:.1:10);
Psi1 = zeros(size(X));
for index = 1:N
Psi1 = Psi1 + Gauss3D(X,Y,Z,r0(index,:));
end
t = toc;
fprintf('original, time = %f\n',t)
% Fast, large truncation
tic
[X,Y,Z] = deal(-10:.1:10);
Psi2 = zeros(numel(X),numel(Y),numel(Z));
for index = 1:N
Psi2 = Gauss3D_fast(Psi2,X,Y,Z,r0(index,:),5);
end
t = toc;
fprintf('truncation = 5, time = %f\n',t)
fprintf('mean abs error = %f\n',mean(reshape(abs(Psi2-Psi1),[],1)))
fprintf('mean square error = %f\n',mean(reshape((Psi2-Psi1).^2,[],1)))
fprintf('max abs error = %f\n',max(reshape(abs(Psi2-Psi1),[],1)))
% Fast, smaller truncation
tic
[X,Y,Z] = deal(-10:.1:10);
Psi3 = zeros(numel(X),numel(Y),numel(Z));
for index = 1:N
Psi3 = Gauss3D_fast(Psi3,X,Y,Z,r0(index,:),3);
end
t = toc;
fprintf('truncation = 3, time = %f\n',t)
fprintf('mean abs error = %f\n',mean(reshape(abs(Psi3-Psi1),[],1)))
fprintf('mean square error = %f\n',mean(reshape((Psi3-Psi1).^2,[],1)))
fprintf('max abs error = %f\n',max(reshape(abs(Psi3-Psi1),[],1)))
% DIPimage, same smaller truncation
tic
Psi4 = newim(201,201,201);
coords = (r0+10) * 10;
Psi4 = gaussianblob(Psi4,coords,10*sqrt(1/2),(pi*100).^(3/2));
t = toc;
fprintf('DIPimage, time = %f\n',t)
fprintf('mean abs error = %f\n',mean(reshape(abs(Psi4-Psi1),[],1)))
fprintf('mean square error = %f\n',mean(reshape((Psi4-Psi1).^2,[],1)))
fprintf('max abs error = %f\n',max(reshape(abs(Psi4-Psi1),[],1)))
end % of function gaussian_test
function output = Gauss3D(X,Y,Z,r0)
output = exp(-((X-r0(1)).^2 + (Y-r0(2)).^2 + (Z-r0(3)).^2));
end
function Psi = Gauss3D_fast(Psi,X,Y,Z,r0,trunc)
% sigma = sqrt(1/2)
x = X-r0(1);
y = Y-r0(2);
z = Z-r0(3);
mx = abs(x) < trunc*sqrt(1/2);
my = abs(y) < trunc*sqrt(1/2);
mz = abs(z) < trunc*sqrt(1/2);
Psi(my,mx,mz) = Psi(my,mx,mz) + exp(-x(mx).^2) .* reshape(exp(-y(my).^2),[],1) .* reshape(exp(-z(mz).^2),1,1,[]);
% Note! the line above uses implicit singleton expansion. For older MATLABs use bsxfun
end
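For reference, here is a minimal sketch of the same update written with bsxfun (identical arithmetic, just without implicit expansion), which is what the comment above refers to for MATLAB versions before R2016b; row, col, pag and blk are just intermediate names introduced here:
row = exp(-x(mx).^2);                  % 1 x nx
col = reshape(exp(-y(my).^2),[],1);    % ny x 1
pag = reshape(exp(-z(mz).^2),1,1,[]);  % 1 x 1 x nz
blk = bsxfun(@times, bsxfun(@times, col, row), pag);
Psi(my,mx,mz) = Psi(my,mx,mz) + blk;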
This is the output on my machine, reordered for readability (I'm still on MATLAB R2017a):
                | time(s)  | mean abs | mean sq. | max abs
----------------+----------+----------+----------+----------
original        | 5.035762 |          |          |
truncation = 5  | 0.169807 | 0.000000 | 0.000000 | 0.000005
truncation = 3  | 0.054737 | 0.000452 | 0.000002 | 0.024378
DIPimage        | 0.044099 | 0.000452 | 0.000002 | 0.024378
As you can see, using these two properties of the Gaussian we can reduce the time from 5.0 s to 0.17 s, a 30x speedup, with hardly noticeable differences (truncating at 5*sigma). A further 3x speedup can be gained by allowing a small error. The smaller the truncation value, the faster this will go, but the larger the error will be.
I added that last method, the gaussianblob function from DIPimage (I'm an author), just to show that option in case you need to squeeze a bit of extra time from your code. That function is implemented in C++. The version I used you will need to compile yourself; our current official release still implements this function in M-file code and is not as fast.
A further chance of improvement exists if the fractional part of the coordinates is always the same (w.r.t. the pixel grid). In that case, you can draw the Gaussian once and shift it over to each of the centroids.
Another alternative involves computing the Gaussian once, at a somewhat larger scale, and interpolating into it to generate each of the 1D Gaussians needed for the output. I did not implement this; I have no idea if it will be faster or if the time difference will be significant. In the old days exp was expensive; I'm not sure that is still the case.
So, I am building off of the answer above by @Durkee. I enjoy these kinds of problems, so I thought a little about how to make each of the expansions implicit, and I have the one-liner below. Using this expression I shaved 0.11 seconds off of the call, which is completely negligible. It looks like yours is pretty decent. The only advantage of mine might be how the code scales on a finer mesh.
xLin = [-10:.1:10]';
tic
psi2 = sum(exp(-sqrt((permute(xLin-r0(:,1)',[3 1 4 2])).^2 ...
+ (permute(xLin-r0(:,2)',[1 3 4 2])).^2 ...
+ (permute(xLin-r0(:,3)',[3 4 1 2])).^2)),4);
toc
The relative run times on my computer were (all things kept the same):
Original - 1.234085
Other - 2.445375
Mine - 1.120701
So this is a bit of an unusual problem: on my computer, the unvectorized code actually works better than the vectorized code. Here is my script:
clear
[X,Y,Z]=meshgrid(-10:.1:10);
Psi=0*X;
nGauss = 20; %Sample nGauss as you didn't specify
r0 = rand(nGauss,3); % Just make this up as it doesn't really matter in this case
% Your original code
tic
for index = 1:nGauss
Psi = Psi + Gauss3D(X,Y,Z,[r0(index,1),r0(index,2),r0(index,3)]);
end
toc
% Vectorize these functions so we can use implicit broadcasting
X1 = X(:);
Y1 = Y(:);
Z1 = Z(:);
tic
val = [X1 Y1 Z1];
% Change the dimensions so that r0 operates on the right elements
r0_temp = permute(r0,[3 2 1]);
% Perform the gaussian combination
out = sum(exp(-sqrt(sum((val-r0_temp).^2,2))),3);
toc
% Check to make sure both functions match
sum(abs(vec(Psi)-vec(out)))
function output=Gauss3D(X,Y,Z,r0)
output=exp(-sqrt((X-r0(1)).^2 + (Y-r0(2)).^2 + (Z-r0(3)).^2));
end
function out = vec(in)
out = in(:);
end
As you can see, this is probably about as vectorized as you can get. The whole function is done using broadcasting and vectorized operations, which normally improve performance ten- to one-hundredfold. However, in this case, that is not what we see:
Elapsed time is 1.876460 seconds.
Elapsed time is 2.909152 seconds.
This actually shows the unvectorized version as being faster.
There could be a few reasons for this (and I am by no means an expert):
MATLAB uses a JIT compiler now, which means that for loops are no longer inefficient.
Your code is already reasonably vectorized; you are operating on 8 million elements at once.
Unless nGauss is 1000 or something, you're not looping that many times, and at that point vectorization means you will run out of memory.
I could be hitting some memory threshold where using too much memory makes my code inefficient; I noticed that when I lowered the resolution of the meshgrid, the vectorized version worked better.
As an aside, I tested this on my GTX 1060 GPU with single precision (single precision is 10x faster than double precision on most GPUs):
Elapsed time is 0.087405 seconds.
Elapsed time is 0.241456 seconds.
Once again the unvectorized version is faster. Sorry I couldn't help you out, but it seems that your code is about as good as it is going to get unless you lower the resolution of your meshgrid.
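For completeness, a minimal sketch of how such a single-precision GPU comparison could be set up (this assumes the Parallel Computing Toolbox; the loop body from the script above is unchanged):
[X,Y,Z] = meshgrid(single(-10:.1:10));
Xg = gpuArray(X); Yg = gpuArray(Y); Zg = gpuArray(Z);
r0g = gpuArray(single(r0));
Psig = zeros(size(Xg),'like',Xg);   % single-precision gpuArray accumulator
tic
for index = 1:nGauss
    Psig = Psig + exp(-sqrt((Xg-r0g(index,1)).^2 + (Yg-r0g(index,2)).^2 + (Zg-r0g(index,3)).^2));
end
wait(gpuDevice); toc                % wait for the GPU before stopping the timer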
I ran through the algebra which I had previously done for the Verlet method without the force; this led to the same code as you see below, but with the "+(2*F/D)" term missing when I ignored the external force. The algorithm worked accurately, as expected. However, for the following parameters:
m = 7 ; k = 8 ; b = 0.1 ;
params = [m,k,b];
(and step size h = 0.001)
a force far above something like 0.00001 is much too big. I suspect I've missed a trick with the algebra.
My question is whether someone can spot the flaw in my addition of a force term to my Verlet method:
% verlet.m
% uses the verlet step algorithm to integrate the simple harmonic
% oscillator.
% stepsize h, for a second-order ODE
function vout = verlet(vinverletx,h,params,F)
% vinverletx holds the two previous positions (x_(n-1), x_n)
x0 = vinverletx(1);
x1 = vinverletx(2);
% find the verlet coefficients
D = (2*params(1))+(params(3)*h);
A = (2/D)*((2*params(1))-(params(2)*h^2));
B=(1/D)*((params(3)*h)-(2*params(1)));
x2 = (A*x1)+(B*x0)+(2*F/D);
vout = x2;
% vout is the new position x_(n+1)
end
As written in the answer to the previous question, the moment friction enters the equation the system is no longer conservative, and the name "Verlet" no longer applies. It is still a valid discretization of
m*x''+b*x'+k*x = F
(with some slight error with large consequences).
The discretization employs the central difference quotients of first and second order
x'[n] = (x[n+1]-x[n-1])/(2*h) + O(h^2)
x''[n] = (x[n+1]-2*x[n]+x[n-1])/(h^2) + O(h^2)
resulting (with n as the step index, to avoid clashing with the spring constant k) in
(2*m+b*h)*x[n+1] - 2*(2*m - h^2*k)*x[n] + (2*m-b*h)*x[n-1] = 2*h^2*F[n] + O(h^4)
Error: As you can see, you are missing a factor h^2 in the term with F.
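Concretely, a sketch of the corrected update in verlet.m (keeping A, B and D exactly as you compute them) is:
x2 = (A*x1) + (B*x0) + (2*h^2*F/D);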
I'm running a set of ODEs with ode45 in MATLAB and I need to save one of the variables (that's not the derivative) for later use. I'm using the function assignin to assign a temporary variable in the base workspace and update it at each step. This seems to work; however, the size of the array does not match the size of the solution vector acquired from ode45. For example, I have the following nested function:
function [Z,Y] = droplet_momentum(theta,K,G,P,zspan,Y0)
options = odeset('RelTol',1e-7,'AbsTol',1e-7);
[Z,Y] = ode45(#momentum,zspan,Y0,options);
function DY = momentum(z,y)
DY = zeros(4,1);
%Entrained Total Velocity
Ve = sin(theta)*(y(4));
%Total Relative Velocity
Urs = sqrt((y(1) - y(4))^2 + (y(2) - Ve*cos(theta))^2 + (y(3))^2);
%Coefficients
PSI = K*Urs/y(1);
PHI = P*Urs/y(1);
%Liquid Axial Velocity
DY(1) = PSI*sign(y(1) - y(4))*(1 + (1/6)*(abs(y(1) - y(4))*G)^(2/3));
%Liquid Radial Velocity
DY(2) = PSI*sign(y(2) - Ve*cos(theta))*(1 + (1/6)*(abs(y(2) - ...
Ve*cos(theta))*G)^(2/3));
%Liquid Tangential Velocity
DY(3) = PSI*sign(y(3))*(1 + (1/6)*(abs(y(3))*G)^(2/3));
%Gaseous Axial Velocity
DY(4) = (1/z/y(4))*((PHI/z)*sign(y(1) - y(4))*(1 + ...
(1/6)*(abs(y(1) - y(4))*G)^(2/3)) + Ve*Ve - y(4)*y(4));
assignin('base','Ve_step',Ve);
evalin('base','Ve_out(end+1) = Ve_step');
end
end
In the above code, theta (radians), K (a negative value), P, and G are constants and, for the sake of this example, can be taken as any value. zspan is just the integration span for the ODE solver and Y0 is the initial conditions vector (4x1). Again, for the sake of this example these can take any reasonable value. Now in the main file, the function is called with the following:
Ve_out = 0;
[Z,Y] = droplet_momentum(theta,K,G,P,zspan,Y0);
Ve_out = Ve_out(2:end);
This method works without complaint from MATLAB, but the problem is that the size of Ve_out is not the same as the size of Z or Y. The reason is that MATLAB calls the ODE function multiple times for its algorithm, so the solution is going to be smaller than Ve_out. As am304 suggested, I could simply calculate DY by giving the ODE function a Z and Y vector, such as DY = momentum(Z,Y); however, I need to get this working with assignin (or a similar method) because another version of this problem has an implicit dependence between DY and Ve, and it would be too computationally expensive to calculate DY at every iteration (I will be running this problem for many iterations).
Ok, so let's start off with a quick example of an SSCCE:
function [Z,Y] = khan
options = odeset('RelTol',1e-7,'AbsTol',1e-7);
[Z,Y] = ode45(#momentum,[0 12],[0 0],options);
end
function Dy = momentum(z,y)
Dy = [0 0]';
Dy(1) = 3*y(1) + 2* y(2) - 2;
Dy(2) = y(1) - y(2);
Ve = Dy(1)+ y(2);
assignin('base','Ve_step',Ve);
evalin('base','Ve_out(end+1) = Ve_step;');
assignin('base','T_step',z);
evalin('base','T_out(end+1) = T_step;');
end
By running [Z,Y] = khan at the command line, I get complete, functional code that demonstrates your problem without all the associated headaches. My patience for this has been exhausted: live and learn.
"This seems to work, however, the size of the array does not match the size of the solution vector acquired from ode45"
Note that I added two lines to your code which extract the time variable. From the command prompt, one simply has to run the following to understand what's going on:
Ve_out = [];
T_out = [];
[Z,Y] = khan;
size (Z)
size (T_out)
size (Ve_out)
plot (diff(T_out))
ans =
109 1
ans =
1 163
ans =
1 163
Basically ode45 is an iterative algorithm, which means it will regularly course-correct (that's why you regularly see diff(T_out) = 0). You can't force the algorithm to do what you want; you have to live with it.
So your options are:
1. Use a fixed-step algorithm.
2. Have a function call that reproduces what you want after the ode45 algorithm has done its dirty work (am304's solution).
3. Collect the data together with the time variable, then have an algorithm parse through everything to remove the extra data.
Can you not do something like this? Obviously, check that the sizes of the matrices/vectors are correct and amend the code accordingly.
[Z,Y] = droplet_momentum2(theta,K,G,P,zspan,Y0);
DY = momentum(Z,Y);
Ve = sin(theta)*(0.5*z*DY(4) + y(4));
i.e. once the ODE is solved, compute the derivative DY as a function of Z and Y (which have just been solved by the ODE) and finally Ve.
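Since momentum as written expects a scalar z and a 4-element y, a minimal sketch of that idea (recomputing Ve only at the accepted output points, placed inside droplet_momentum so that theta is in scope) could simply loop over the rows of the solution:
Ve_out = zeros(size(Z));
for i = 1:numel(Z)
    Ve_out(i) = sin(theta)*Y(i,4);   % same expression used for Ve inside momentum
end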
The code in question is here:
function k = whileloop(odefun,args)
...
while (sign(costheta) == originalsign)
y=y(:) + odefun(0,y(:),vars,param)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
...
end
and to clarify, odefun is F1.m, an m-file of mine. I pass it into the function that contains this while-loop; it's something like whileloop(@F1,args). Line 4 in the code block above is the Euler method.
The reason I'm using a while-loop is because I want to trigger upon the vector "y" crossing a plane defined by a point, "normpt", and the vector normal to the plane, "normvec".
Is there an easy change to this code that will speed it up dramatically? Should I attempt to learn how to make MEX files instead (for a speed increase)?
Edit:
Here is a rushed attempt at an example of what one could try to test with. I have not debugged this. It is to give you an idea:
%Save the following 3 lines in an m-file named "F1.m"
function ydot = F1(placeholder1,y,placeholder2,placeholder3)
ydot = y/10;
end
%Run the following:
dt = 1.5e-12; % I do not know about this. You will have to experiment.
y0 = [.1,.1,.1];
normpt = [3,3,3];
normvec = [1,1,1];
originalsign = sign(dot(y0-normpt,normvec));
costheta = originalsign;
y = y0;
k = 0;
while (sign(costheta) == originalsign)
y=y(:) + F1(0,y(:),0,0)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
disp(k);
dt should be sufficiently small that it takes hundreds of thousands of iterations to trigger.
Assume I must use the Euler method. I have a stochastic differential equation with state-dependent noise, if you are curious as to why I ask you to make that assumption.
I would focus on your actual ODE integration. The fewer steps you have to take, the faster the loop will run. I would only worry about the speed of the sign check after you've optimized the actual integration method.
It looks like you're using the first-order explicit Euler method. Have you tried a higher-order integrator or an implicit method? Often you can increase the time step significantly.
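As an illustration only (the stochastic, state-dependent noise mentioned above may rule it out), here is a minimal sketch of a second-order explicit alternative, Heun's method, using the same odefun(t,y,vars,param) interface as the loop in the question:
while (sign(costheta) == originalsign)
    k1 = odefun(0, y(:), vars, param);           % slope at the current point
    k2 = odefun(0, y(:) + dt*k1, vars, param);   % slope at the Euler predictor
    y = y(:) + (dt/2)*(k1 + k2);                 % trapezoidal (Heun) update
    costheta = dot(y - normpt, normvec);
    k = k + 1;
end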