Use VectorContinuousCallback to take values of array to zero in Julia - callback

I'm having trouble getting VectorContinuousCallback to work as desired and I'm not sure what I'm doing wrong. I have a large system of equations and, essentially, any time any of the values crosses some threshold (in my real system it's 10e-30, but in this reprex 0.05), I want that value to go to zero.
That is, if at any point a value of u goes below 0.05, I want the callback to set it to zero, but right now the solver seems to almost ignore the callback: none of the threshold crossings are recognized.
A reprex:
using DifferentialEquations, Plots
function biomass_sim!(du, u, p, t)
    # change = growth + gain from eating - loss from eating - loss
    du[1] = 0.2*u[1] + (0.1*u[2] + 0.15*u[3]) - (0.2*u[4]) - 0.9*u[1]
    du[2] = 0.2*u[2] + (0.1*u[1] + 0.05*u[3]) - (0.1*u[1] + 0.4*u[4]) - 0.5*u[2]
    du[3] = 1.2*u[3] + 0 - (0.15*u[1] + 0.005*u[2]) - 1.3*u[3]
    du[4] = 0.2*u[4] + (0.2*u[1] + 0.4*u[2]) - 1.9*u[1]
end
# set up extinction callback
function extinction_threshold(out, u, t, integrator)
    # loop through all species to make the condition check all of them
    for i in 1:4
        out[i] = 0.05 - u[i]
    end
end
function extinction_affect!(integrator, event_idx)
    # loop again through all species
    for i in 1:4
        if event_idx == i
            integrator.u[i] = 0
        end
    end
end
extinction_callback =
    VectorContinuousCallback(extinction_threshold,
                             extinction_affect!,
                             4,
                             save_positions = (true, true),
                             interp_points = 1000)
tspan = (0.0, 10.0)
u0 = [10, 10, 10, 10]
prob = ODEProblem(biomass_sim!,
                  u0,
                  tspan)
sol = solve(prob,
            Tsit5(),
            abstol = 1e-15,
            reltol = 1e-10,
            callback = extinction_callback,
            progress = true,
            progress_steps = 1)
plot(sol)
What I want to see here: for the two values that DO cross the threshold (u3 and u4 clearly go below zero, so certainly below 0.05), I want those values to become zero.
The application of this is the extinction of a species, so if a species goes below some threshold I want to consider it extinct and therefore no longer able to be consumed by other species.
I've tried changing the tolerances and using different solvers (I'm not married to Tsit5()), but I have yet to find a way to do this.
Any help much appreciated!!
The full output:
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 165-element Vector{Float64}:
0.0
0.004242012928189618
0.01597661154828718
0.03189297583643294
0.050808376324350105
⋮
9.758563212772982
9.850431863368996
9.94240515017787
10.0
u: 165-element Vector{Vector{Float64}}:
[10.0, 10.0, 10.0, 10.0]
[9.972478301795496, 9.97248284719235, 9.98919421326857, 9.953394015118882]
[9.89687871881005, 9.896943262844019, 9.95942008072302, 9.825050883392004]
[9.795579619066798, 9.795837189358107, 9.919309541156514, 9.652323759097303]
[9.677029343447844, 9.67768414040866, 9.872046236050455, 9.449045554530718]
⋮
[43.86029800419986, 110.54225328286441, -12.173991695732434, -186.40702483057268]
[45.33660997599057, 114.35164541304869, -12.725800474246844, -192.72257104623995]
[46.86398454218351, 118.2922142830212, -13.295579652115606, -199.25572621838901]
[47.84633546050675, 120.82634905853745, -13.661479860003494, -203.45720035095707]

Answered in https://github.com/SciML/DifferentialEquations.jl/issues/843. This was a "user error". When you check the callback:
function extinction_affect!(integrator, event_idx)
    # loop again through all species
    @show integrator.u, event_idx
    for i in 1:4
        if event_idx == i
            integrator.u[i] = 0
        end
    end
    biomass_sim!(get_tmp_cache(integrator)[1], integrator.u, integrator.p, integrator.t)
    @show get_tmp_cache(integrator)[1]
end
It is definitely called and does exactly as intended.
(integrator.u, event_idx) = ([5.347462662161639, 5.731062469090074, 7.64667777801325, 0.05000000000000008], 4)
(get_tmp_cache(integrator))[1] = [-2.0231159499021523, -1.3369848518263594, -1.5954424894710222, -6.798261538038756]
(integrator.u, event_idx) = ([12.968499097445866, 30.506371944743357, 0.050000000000001314, -53.521085634736835], 3)
(get_tmp_cache(integrator))[1] = [4.676904953209599, 12.256522670471728, -2.0978067243405967, -20.548116814707996]
But what this also shows is that, even when u[4] = 0, du[4] < 0, so it's clear why it goes negative: that's how the ODE is defined. You should flip a parameter or something similar to make the derivative 0 if you want to keep the value at zero past the callback point.
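For example, a minimal sketch of that idea (my code, not from the linked issue), assuming a Boolean extinct mask carried in the parameters that the affect! flips:
# Sketch: carry an "extinct" mask in the parameters and zero the derivative
# of any species that has already been driven to zero.
function biomass_sim!(du, u, p, t)
    du[1] = 0.2*u[1] + (0.1*u[2] + 0.15*u[3]) - (0.2*u[4]) - 0.9*u[1]
    du[2] = 0.2*u[2] + (0.1*u[1] + 0.05*u[3]) - (0.1*u[1] + 0.4*u[4]) - 0.5*u[2]
    du[3] = 1.2*u[3] + 0 - (0.15*u[1] + 0.005*u[2]) - 1.3*u[3]
    du[4] = 0.2*u[4] + (0.2*u[1] + 0.4*u[2]) - 1.9*u[1]
    for i in 1:4
        if p.extinct[i]
            du[i] = 0.0     # an extinct species no longer changes
        end
    end
end

function extinction_affect!(integrator, event_idx)
    integrator.u[event_idx] = 0.0
    integrator.p.extinct[event_idx] = true   # freeze its derivative from here on
end

p = (extinct = falses(4),)   # the BitVector inside the NamedTuple stays mutable
prob = ODEProblem(biomass_sim!, [10.0, 10.0, 10.0, 10.0], (0.0, 10.0), p)
To fully model extinction, so that an extinct species is also no longer eaten by the others, the contributions of extinct species to the remaining equations would need the same masking.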

Related

Metropolis-Hastings in matlab

I am trying to use the Metropolis-Hastings algorithm with a random-walk sampler to simulate samples from a target density f in MATLAB, but something is wrong with my code. The proposal density is the uniform PDF on the ellipse 2s^2 + 3t^2 ≤ 1/4. Can I use the acceptance-rejection method to sample from the proposal density?
N = 5000;
alpha = @(x1,x2,y1,y2) (min(1, f(y1,y2)/f(x1,x2)));
X = zeros(2,N);
accept = false;
n = 0;
while n < 5000
    accept = false;
    while ~accept
        s = 1 - rand*(2);
        t = 1 - rand*(2);
        val = 2*s^2 + 3*t^2;
        % check acceptance
        accept = val <= 1/4;
    end
    % and then draw uniformly distributed points checking that u < alpha?
    u = rand();
    c = u < alpha(X(1,i-1), X(2,i-1), X(1,i-1)+s, X(2,i-1)+t);
    X(1,i) = c*s + X(1,i-1);
    X(2,i) = c*t + X(2,i-1);
    n = n + 1;
end
figure;
plot(X(1,:), X(2,:), 'r+');
You may just want to use MATLAB's native implementation, mhsample.
Regarding your code, there are a few things missing:
- the function f that alpha calls (it is never defined),
- the loop variable i (it might be just n, but n is not suited for indexing since it starts at zero).
And you should always preallocate memory in MATLAB if you want to fill an array dynamically, i.e. X in your case.
To expand on the suggestions by @max, the code appears to work if you change the i indices to n and replace
n = 0;
with
n = 2;
X(:,1) = [.1,.1];
It would probably be better to assign X(:,1) to random values within your accept region (using the same code you use later), and/or include a burn-in period.
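For the first suggestion, a minimal sketch (my addition, reusing the question's rejection step) would be:
% draw X(:,1) uniformly from the ellipse 2*s^2 + 3*t^2 <= 1/4 by rejection
accept = false;
while ~accept
    s = 1 - rand*2;
    t = 1 - rand*2;
    accept = (2*s^2 + 3*t^2) <= 1/4;
end
X(:,1) = [s; t];   % start the chain inside the accept region instead of at a fixed point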
Depending upon what you are going to do with this, it may also make things cleaner to wrap the argument of sin in the f function so that it stays within 0 to 2*pi (likely by shifting the value by 2*pi whenever it leaves those bounds).

Approximation of cosh and sinh functions that give large values in MATLAB

My calculation involves cosh(x) and sinh(x) where x is around 700 - 1000, which reaches the limit of double precision, and the result is NaN. The problem in the code is that the argument 2*k_B*T./elastic_restor_coeff blows up when radius is small (below 5e-9 in the code). My goal is to do another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck at this problem.
My workaround right now is to approximate the real part of chi_para with a step function once threshold2 reaches a value of about 300. The number 300 is obtained by using the lowest possible radius and reading the cut-off value from the plot. I think this approach is not good enough for the actual calculation, since this value changes with radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate since it looks like a pulse instead of a step.
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;
T = 296;
radius = [5e-9, 10e-9, 20e-9, 30e-9, 100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff/elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        fric_coeff = 8*pi*1e-3.*radius(i).^3;
        elastic_restor_coeff = 8*pi*1.*radius(i).^3;
        time_const = fric_coeff/elastic_restor_coeff;
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
        chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar, real(chi_para), omega_ar, imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar, real(chi_perpen), omega_ar, imag(chi_perpen));
    hold on;
end
Here is the simplified function that I would like to approximate (it was given as an image in the original post); x is iterated in a loop and the maximum value of x is about 700.
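One overflow-free rewrite to consider (a sketch of mine, not an answer from the thread): multiply the numerator and denominator of each ratio by 2*exp(-b), where b = 2*k_B*T/elastic_restor_coeff and u = exp(-t/time_const). The expressions are algebraically identical to the originals, but every exponent becomes non-positive, so nothing overflows; it would replace the two anonymous functions inside the inner loop:
b = 2*k_B*T./elastic_restor_coeff;   % scalar inside the loop over radius
% (cosh(b*u)-1)/(cosh(b)-1) rescaled by 2*exp(-b), with u = exp(-t/time_const)
G_para_func = @(t) ((exp(b.*(exp(-t./time_const) - 1)) + exp(-b.*(exp(-t./time_const) + 1)) - 2*exp(-b)) ...
    ./ (1 + exp(-2*b) - 2*exp(-b))) .* exp(1i.*omega.*t);
% sinh(b*u)/sinh(b) rescaled the same way
G_perpen_func = @(t) ((exp(b.*(exp(-t./time_const) - 1)) - exp(-b.*(exp(-t./time_const) + 1))) ...
    ./ (1 - exp(-2*b))) .* exp(1i.*omega.*t);
With the radii in the code, b runs from roughly 0.3 (100 nm) up to a few thousand (5 nm), and larger still for the 1 nm case mentioned in the question; the rescaled form behaves the same at both extremes.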

Matlab : Help in entropy estimation of a disretized time series

This Question is in continuation to a previous one asked Matlab : Plot of entropy vs digitized code length
I want to calculate the entropy of a random variable that is a discretized version (0/1) of a continuous random variable x. The random variable denotes the state of a nonlinear dynamical system called the Tent Map. Iterating the Tent Map yields a time series of length N.
The code should exit as soon as the entropy of the discretized time series becomes equal to the entropy of the dynamical system. It is known theoretically that the entropy of the system is log_2(2). The code exits, but the first 3 values of the entropy array are erroneous: entropy(1) = 1, entropy(2) = NaN and entropy(3) = NaN. I am scratching my head as to why this is happening and how I can get rid of it. Please help me correct the code. Thank you.
clear all
H = log(2)
threshold = 0.5;
x(1) = rand;
lambda(1) = 1;
entropy(1,1) = 1;
j = 2;
tol = 0.01;
while(~(abs(lambda-H)<tol))
    if x(j - 1) < 0.5
        x(j) = 2 * x(j - 1);
    else
        x(j) = 2 * (1 - x(j - 1));
    end
    s = (x >= threshold);
    p_1 = sum(s==1)/length(s);
    p_0 = sum(s==0)/length(s);
    entropy(:,j) = -p_1*log2(p_1)-(1-p_1)*log2(1-p_1);
    lambda = entropy(:,j);
    j = j+1;
end
plot( entropy )
plot( entropy )
It looks like one of your probabilities is zero. In that case, you'd be trying to calculate 0*log(0) = 0*-Inf = NaN. The entropy should be zero in this case, so you can just check for this condition explicitly.
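For instance, a minimal guard (my sketch, using the question's variable names) for the entropy line inside the loop:
p_1 = sum(s == 1)/length(s);
if p_1 == 0 || p_1 == 1
    entropy(:,j) = 0;   % a constant sequence carries no information
else
    entropy(:,j) = -p_1*log2(p_1) - (1 - p_1)*log2(1 - p_1);
end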
A couple of side notes: it looks like you're declaring H = log(2), but your post says the entropy is log_2(2). p_0 is always 1 - p_1, so you don't have to count everything up again. Growing the arrays dynamically is inefficient because MATLAB has to re-copy the entire contents at each step; you can speed things up by pre-allocating them (only worth it if you're going to be running for many timesteps).

MatLab using Fixed Point method to find a root

I want to find a root of the following function with an error less than 0.05%:
f: 3*x*tan(x) = 1
In MATLAB I've written this code to do so:
clc, close all
syms x;
x0 = 3.5
f = 3*x*tan(x)-1;
df = diff(f,x);
while (1)
    x1 = 1 / 3*tan(x0)
    % DIRV.. z = tan(x0)^2/3 + 1/3
    er = (abs((x1 - x0)/x1))*100
    if ( er <= 0.05)
        break;
    end
    x0 = x1;
    pause(1)
end
But it keeps running in an infinite loop with the error stuck at 200.00, and I don't know why.
Don't use while true, as that's usually uncalled for and prone to getting stuck in infinite loops, like here. Simply set a limit on the while instead:
while er > 0.05
%//your code
end
Additionally, to prevent getting stuck in an infinite loop you can use an iteration counter and set a maximum number of iterations:
ItCount = 0;
MaxIt = 1e5; %// maximum 100,000 iterations
while er > 0.05 & ItCount < MaxIt
    %// your code
    ItCount = ItCount + 1;
end
I see four points of discussion that I'll address separately:
1. Why does the error seemingly saturate at 200.0 and the loop continue infinitely?
The fixed-point iterator, as written in your code, is finding the root of f(x) = x - tan(x)/3; in other words, find a value of x at which the graphs of x and tan(x)/3 cross. The only point where this is true is 0. And, if you look at the value of the iterants, the value of x1 is approaching 0. Good.
The bad news is that you are also dividing by that value converging toward 0. While the value of x1 remains finite, in a floating point arithmetic sense, the division works but may become inaccurate, and er actually goes NaN after enough iterations because x1 underflowed below the smallest denormalized number in the IEEE-754 standard.
Why is er 200 before then? It is approximately 200 because the value of x1 is approximately 1/3 of the value of x0 since tan(x)/3 locally behaves as x/3 a la its Taylor Expansion about 0. And abs(1 - 3)*100 == 200.
Divisions-by-zero and relative orders-of-magnitude are why it is sometimes best to look at the absolute and relative error measures for both the values of the independent variable and function value. If need be, even putting an extremely (relatively) small finite, constant value in the denominator of the relative calculation isn't entirely a bad thing in my mind (I remember seeing it in some numerical recipe books), but that's just a band-aid for robustness's sake that typically hides a more serious error.
2. This convergence is far different compared to the Newton-Raphson iterations because it has absolutely no knowledge of slope and the fixed-point iteration will converge to wherever the fixed-point is (forgive the minor tautology), assuming it does converge. Unfortunately, if I remember correctly, fixed-point convergence is only guaranteed if the function is continuous in some measure, and tan(x) is not; therefore, convergence is not guaranteed since those pesky poles get in the way.
3. The function it appears you want to find the root of is f(x) = 3*x*tan(x)-1. A fixed-point iterator of that function would be x = 1/(3*tan(x)) or x = 1/3*cot(x), which is looking for the intersection of 3*tan(x) and 1/x. However, due to point number (2), those iterators still behave badly since they are discontinuous.
A slightly different iterator, x = atan(1/(3*x)), should behave a lot better, since small values of x will still produce a finite value because atan(x) is continuous along the whole real line. The only drawback is that the iterates are then limited to the interval (-pi/2, pi/2), but if it converges, I think the restriction is worth it (a short sketch of this iterator appears after these points).
4. Lastly, for any similar future coding endeavors, I do highly recommend @Adriaan's advice. If you would like a sort of compromise between the styles, most of my iterative functions are written with a semantic variable notDone, like this:
iter = 0;
iterMax = 1E4;
tol = 0.05;
notDone = 0.05 < er & iter < iterMax;
while notDone
    %// your code
    iter = iter + 1;
    notDone = 0.05 < er & iter < iterMax;
end
You can add flags and all that jazz, but that format is what I frequently use.
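As a concrete illustration of the atan-based iterator from point 3, here is a minimal sketch of mine, reusing the question's starting guess and stopping rule:
x0 = 3.5;
er = Inf;
while er > 0.05
    x1 = atan(1/(3*x0));          % fixed-point step x = atan(1/(3*x))
    er = abs((x1 - x0)/x1)*100;   % relative change in percent
    x0 = x1;
end
% x0 now approximates a root of 3*x*tan(x) = 1 (roughly 0.55)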
I believe that the code below achieves what you are after using Newton's method for the convergence. Please leave a comment if I have missed something.
% find x: 3*x*tan(x) = 1
f = @(x) 3*x*tan(x)-1;
dfdx = @(x) 3*tan(x)+3*x*sec(x)^2;
tolerance = 0.05; % your value?
perturbation = 1e-2;
converged = 1;
x = 3.5;
f_x = f(x);
% Use Newton's method to find the root
count = 0;
err = 10*tolerance; % something bigger than tolerance to start
while (err >= tolerance)
    count = count + 1;
    if (count > 1e3)
        converged = 0;
        disp('Did not converge.');
        break;
    end
    x0 = x;
    dfdx_x = dfdx(x);
    if (dfdx_x ~= 0)
        % Avoid division by zero
        f_x = f(x);
        x = x - f_x/dfdx_x;
    else
        % Perturb x and go back to top of while loop
        x = x + perturbation;
        continue;
    end
    err = (abs((x - x0)/x))*100;
end
if (converged)
    disp(['Converged to ' num2str(x,'%10.8e') ' in ' num2str(count) ...
        ' iterations.']);
end

Removing duplicate entries in a vector, when entries are complex and rounding errors are causing problems

I want to remove duplicate entries from a vector on Matlab. The problem I'm having is that rounding errors are stopping the inbuilt Matlab function 'unique' from working properly. Ideally I'd like a way to set some sort of tolerance on the 'unique' function, or a small procedure that will remove the duplicates otherwise. If both the real and imaginary parts of two entries differ by less than 0.0001, then I'm happy to consider them equal. How can I do this?
Any help will be greatly appreciated. Thanks
A simple approximation would be to round the numbers and then use the indices returned by unique:
X = ... (input vector)
[b, i] = unique(round(X / (tolerance * (1 + i))));
output = X(i);
(you can probably replace b with ~ depending on your Matlab version).
It won't quite have the behaviour you want, since it is possible that two numbers are very close but will be rounded differently. I think you could mitigate this by doing:
X = ... (input vector)
[b, ind] = unique(round(X / (tolerance * (1 + i))));
X = X(ind);
[b, ind] = unique(round(X / (tolerance * (1 + i)) + 0.5 * (1 + i)));
X = X(ind);
This will round them twice, so any numbers that are exactly on a rounding boundary will be caught by the second unique.
There is still some messiness in this - some numbers will be affected as though the tolerance was doubled. But it might be sufficient for your needs.
The alternative is probably a for loop:
X = sort(X);
last = X(1);
indices = ones(numel(X), 1);
for j = 2:numel(X)
    if X(j) > last + tolerance * (1 + i)
        last = X(j) + tolerance * (1 + i) / 2;
    else
        indices(j) = 0;
    end
end
X = X(logical(indices));
I think this has the best behaviour you can expect: you want to represent the vector by as few unique values as possible, and when there are lots of numbers that differ by less than the tolerance level, there may be multiple ways of splitting them. This algorithm does so greedily, starting with the smallest.
I'm almost certain the code below will always assume any values closer than 1e-8 are equal. Simply replace 1e-8 with whatever value you want.
% unique function that assumes 1e-8 is equal
function [out, I] = unique(input, first_last)
    threshold = 1e-8;
    if nargin < 2
        first_last = 'last';
    end
    [out, I] = sort(input);
    db = diff(out);
    k = find(abs(db) < threshold);
    if strcmpi(first_last, 'last')
        k2 = min(I(k), I(k+1));
    elseif strcmpi(first_last, 'first')
        k2 = max(I(k), I(k+1));
    else
        error('unknown flag option for unique, must be first or last');
    end
    k3 = true(1, length(input));
    k3(k2) = false;
    out = out(k3(I));
    I = I(k3(I));
end
The following might serve your purposes. Given X, an array of complex doubles, it sorts the elements, then checks whether the real and imaginary parts of the differences between consecutive elements are within the tolerances real_tol and imag_tol, and removes the elements that are within tolerance of their neighbour.
function X_unique = unique_complex_with_tolerance(X,real_tol,imag_tol)
X_sorted = sort(X); % Sorts by magnitude first, then by phase angle.
dX_sorted = diff(X_sorted);
dX_sorted_real = real(dX_sorted);
dX_sorted_imag = imag(dX_sorted);
remove_idx = (abs(dX_sorted_real)<real_tol) & (abs(dX_sorted_imag)<imag_tol);
X_unique = X_sorted;
X_unique(remove_idx) = [];
return
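A quick usage sketch (my addition), using the 0.0001 tolerance from the question:
X = [1+1i, 1.00005+1.00002i, 2+2i];
X_unique = unique_complex_with_tolerance(X, 1e-4, 1e-4)
% keeps only one of the two entries whose real and imaginary parts differ by less than 1e-4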
Note that this code will remove all elements which satisfy this difference tolerance. For example, if X = [1+i,2+2i,3+3i,4+4i], real_tol = 1.1, imag_tol = 1.1, then this function will return only one element, X_unique = [4+4i], even though you might consider, for example, X_unique = [1+i,4+4i] to also be a valid answer.