How to solve a system of equations involving a tridiagonal matrix? MATLAB

I'm not sure if this is typical behaviour or not, but I am solving a finite-difference problem using the backward differencing method.
I populated a sparse matrix with the appropriate diagonal terms (along the central diagonal and one above and below it) and attempted to solve the problem using MATLAB's built-in solver (B=A\x), and MATLAB seems to just get it wrong.
Furthermore, if I use inv() and multiply by the inverse of the tridiagonal matrix, I get the correct solution.
Why does this behave like this?
Additional info:
http://pastebin.com/AbuEW6CR (values are tabbed so they're easier to read)
Stiffness matrix K:
1 0 0 0
-0.009 1.018 -0.009 0
0 -0.009 1.018 -0.009
0 0 0 1
Values for d:
0
15.55
15.55
86.73
Built-in output:
-1.78595556155136e-05
0.00196073713853244
0.00196073713853244
0.0108149483252210
Output using inv(K):
0
15.42
16.19
86.73
Manual output:
0
15.28
16.18
85.16
Code
nx = 21; %number of spatial steps
nt = 501; %number of time steps (varies between 501 and 4001)
p = alpha * dt / dx^2; % arbitrary constant (alpha, dt and dx are defined elsewhere)
a = [0 -p*ones(1,nx-2) 0]'; %diagonal below central diagonal
b = (1+2*p)*ones(nx,1); %central diagonal
c = [1 -p*ones(1,nx-2) 1]'; %diagonal above central diagonal
d = zeros(nx, 1); %rhs values
% Variables a,b,c,d are used for the manual tridiagonal method for
% comparison with MATLAB's built-in functions. The variables represent
% diagonals and the rhs of the matrix
% The equation is K*U(n+1) = U(n)
U = zeros(nx,nt);
% Setting initial conditions
U(:,[1 2]) = (60-32)*5/9;
K = sparse(nx,nx);
% Indices of the sparse matrix which correspond to the diagonal
diagonal = 1:nx+1:nx*nx;
% Populating diagonals
K(diagonal) = 1+2*p;
K(diagonal(2:end)-1) = -p;
K(diagonal(1:end-1)+1) = -p;
% Applying Dirichlet condition at the final spatial step; the temperature is
% derived from a table of predefined values during the calculation
K(end,end-1:end)=[0 1];
% Applying boundary conditions at first spatial step
K(1,1:2) = [1 0];
% Populating rhs values and applying boundary conditions, d=U(n)
d(ivec) = U(ivec,n); % ivec and n come from the surrounding time-stepping loop (not shown)
d(nx) = R; %From table
d(1) = 0;
U(:,n+1) = tdm(a,b,c,d); % Manual solver, gives correct answer
U(:,n+1) = d\K; % Built-in solver, gives wrong answer
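(For reference, the manual solver tdm called above is not included in the question. A standard Thomas-algorithm implementation, sketched here under the usual convention that a(1) and c(end) are unused, looks roughly like this; the asker's actual routine may use a different layout.)
function x = tdm(a,b,c,d)
% Thomas algorithm for a tridiagonal system: a is the sub-diagonal,
% b the main diagonal, c the super-diagonal, d the right-hand side.
n = length(d);
for i = 2:n % forward elimination
    w = a(i)/b(i-1);
    b(i) = b(i) - w*c(i-1);
    d(i) = d(i) - w*d(i-1);
end
x = zeros(n,1);
x(n) = d(n)/b(n);
for i = n-1:-1:1 % back substitution
    x(i) = (d(i) - c(i)*x(i+1))/b(i);
end
end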

The following line:
U(:,n+1) = d\K;
should have been
U(:,n+1) = K\d;
By mistake I had them the wrong way round and didn't notice; this obviously changes the mathematical expression, hence the wrong answers.
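To see why the operand order matters, take the 4x4 example from the question: X = A\B solves A*X = B, so the two orderings solve entirely different systems.
K = [1 0 0 0; -0.009 1.018 -0.009 0; 0 -0.009 1.018 -0.009; 0 0 0 1];
d = [0; 15.55; 15.55; 86.73];
u = K\d % solves K*u = d, the intended system
v = d\K % least-squares solution of d*v = K: a 1-by-4 row vector, not a solution of the intended system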

Related

Matlab 'Matrix dimensions must agree' ode23s

The following is my code. I am trying to model a PFR in Matlab using ode23s. It works well with a one-component irreversible reaction, but when I extend it to more dependent variables, the 'Matrix dimensions must agree' problem shows up. I have no idea how to fix it. Is it possible to use other software to solve similar problems?
Thank you.
function PFR_MA_length
clear all; clc; close all;
function dCdt = df(t,C)
dCdt = zeros(N,2);
dCddt = [0; -vo*diff(C(:,1))./diff(V)-(-kM*C(2:end,1).*C(2:end,2)-kS*C(2:end,1))];
dCmdt = [0; -vo*diff(C(:,2))./diff(V)-(-kM*C(2:end,1).*C(2:end,2))];
dCdt(:,1) = dCddt;
dCdt(:,2) = dCmdt;
end
kM = 1;
kS = 0.5; % assumptions of the rate constants
C0 = [2, 2]; % assumptions of the entering concentration
vo = 2; % volumetric flow rate
volume = 20; % total volume of reactor, spacetime = 10
N = 100; % number of points to discretize the reactor volume on
init = zeros(N,2); % Concentration in reactor at t = 0
init(1,:) = C0; % concentration at entrance
V = linspace(0,volume,N)'; % discretized volume elements, in column form
tspan = [0 20];
[t,C] = ode23s(@(t,C) df(t,C),tspan,init);
end
You can put a breakpoint on the line that computes dCddt and observe that the sizes of the matrices C and V are different.
>> size(C)
ans =
200 1
>> size(V)
ans =
100 1
The element-wise divide operation, ./, between these two variables would then result in the error that you mentioned.
Per ode23s's help, the output of the call dCdt = df(t,C) needs to be a vector. However, you are returning a matrix of size 100x2. In the next call to the same function, ode23s has converted it to a vector when computing the value of C, hence the size 200x1.
As in GNU Octave's interpretation of Matlab behavior, one has to make sure explicitly that the solver only ever sees flat one-dimensional state vectors; these have to be translated back and forth each time the model function is applied.
Reading an object A as the flat array A(:) discards the matrix dimension information; it can be restored with the reshape(A,m,n) command.
function dCdt = df(t,C)
C = reshape(C,N,2);
...
dCdt = dCdt(:);
end
...
[t,C] = ode45(@(t,C) df(t,C), tspan, init(:));
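Putting this fix together with the question's code, a corrected version might look like the sketch below (keeping ode23s as in the question; each row of the returned C holds a flattened state, so the final profile can be recovered with reshape(C(end,:),N,2)):
function PFR_MA_length
kM = 1;
kS = 0.5; % rate constants, as in the question
C0 = [2, 2]; % entering concentrations
vo = 2; % volumetric flow rate
volume = 20; % total volume of reactor
N = 100; % number of discretization points
init = zeros(N,2); % concentration in reactor at t = 0
init(1,:) = C0; % concentration at entrance
V = linspace(0,volume,N)'; % discretized volume elements
tspan = [0 20];
[t,C] = ode23s(@(t,C) df(t,C), tspan, init(:)); % note the flattened init(:)
    function dCdt = df(~,C)
        C = reshape(C,N,2); % restore the N-by-2 layout
        dCddt = [0; -vo*diff(C(:,1))./diff(V)-(-kM*C(2:end,1).*C(2:end,2)-kS*C(2:end,1))];
        dCmdt = [0; -vo*diff(C(:,2))./diff(V)-(-kM*C(2:end,1).*C(2:end,2))];
        dCdt = [dCddt, dCmdt];
        dCdt = dCdt(:); % hand back a flat 2N-by-1 vector, as the solver expects
    end
end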

Solve function for real part = 0 instead of imaginary in MATLAB

I have a function which outputs a vector of complex eigenvalues. It takes a single argument, rho. I need to find a rho for which the complex eigenvalues lie on the imaginary axis. In other words, the real parts have to be 0.
When I run fzero() it throws the following error
Operands to the || and && operators must be convertible to logical scalar values.
Whereas fsolve() simply solves for the imaginary part = 0, which is exactly the opposite of what I want.
This is the function I wrote
function lambda = eigLorenz(rho)
beta = 8/3;
sigma = 10;
eta = sqrt(beta*(rho-1));
A = [ -beta 0 eta;0 -sigma sigma;-eta rho -1];
y = [rho-1; eta; eta];
% Calculate eigenvalues of jacobian
J = A + [0 y(3) 0; 0 0 0; 0 -y(1) 0]
lambda = eig(J)
It outputs 3 eigenvalues: 2 complex conjugates and 1 real eigenvalue (with imaginary part = 0).
I need to find rho for which the complex eigenvalues lie on the imaginary axis so that the real parts are 0.
Two problems:
fzero is only suited for scalar-valued functions (f: ℝ → ℝ)
complex numbers are single numbers, treated as single entities by almost all functions. You'll have to force MATLAB to split the complex number into its real and imaginary parts
So, one possible workaround is to take the real part of the first complex eigenvalue:
function [output, lambda] = eigLorenz(rho)
% Constants
beta = 8/3;
sigma = 10;
eta = sqrt(beta*(rho-1));
A = [-beta 0 eta
0 -sigma sigma
-eta rho -1];
y = [rho-1
eta
eta];
% Calculate eigenvalues of jacobian
J = A + [0 y(3) 0
0 0 0
0 -y(1) 0];
lambda = eig(J);
% Make it all work for all rho with FZERO(). Check whether:
% - the complex eigenvalues are indeed each other's conjugates
% - there are exactly 2 eigenvalues with nonzero imaginary part
complex = lambda(imag(lambda) ~= 0);
if numel(complex) == 2 && ...
( abs(complex(1) - conj(complex(2))) < sqrt(eps) )
output = real(complex(1));
else
% Help FZERO() get out of this hopeless valley
output = -norm(lambda);
end
end
Call like this:
rho = fzero(@eigLorenz, 0);
[~, lambda] = eigLorenz(rho);
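As a sanity check, assuming (my assumption, it is not stated in the question) that this is the classic Hopf-bifurcation calculation for the Lorenz system: the analytic answer for sigma = 10, beta = 8/3 is rho = sigma*(sigma+beta+3)/(sigma-beta-1) = 470/19 ≈ 24.7368, so the numeric result can be compared against it:
rho_hopf = 10*(10 + 8/3 + 3)/(10 - 8/3 - 1) % = 470/19, approx. 24.7368
[~, lambda] = eigLorenz(rho_hopf);
real(lambda) % the real parts of the complex pair should be ~0 here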

Solve matrix DAE system in matlab

The equations can be found here. As you can see, it is a set of 8 scalar equations collapsed into 3 matrix ones. In order to let Matlab know that the equations are matrix-wise, I declare the time-dependent vector functions as:
syms t p1(t) p2(t) p3(t)
p(t) = symfun([p1(t);p2(t);p3(t)], t);
p = formula(p(t)); % allows indexing for vector p
% same goes for w(t) and m(t)...
Known matrices are declared as follows:
A = sym('A%d%d',[3 3]);
Fq = sym('Fq%d%d',[2 3]);
Im = diag(sym('Im%d%d',[1 3]));
The system is now ready to be modeled according to the guide:
eqs = [diff(p) == A*w + Fq'*m,...
diff(w) == -Im*p,...
Fq*w == 0];
vars = [p; w; m];
At this point, when I try to reduce the index (since it equals 2), I receive the following error:
[DAEs,DAEvars] = reduceDAEIndex(eqs,vars);
Error using sym/reduceDAEIndex (line 95)
Expecting as many equations as variables.
The error would not arise if we had declared all variables as scalars:
syms A Im Fq real p(t) w(t) m(t)
Quoting symfun documentation (tips section):
Symbolic functions are always scalars, therefore, you cannot index into a function.
However, it is hard for me to believe that it's not possible to solve these equations matrix-wise. Obviously, one can expand them into 8 scalar equations, but the multibody system concerned here is very simple and the aim is to be able to solve complex ones; hence the question: is it possible to solve a matrix DAE in Matlab, and if so, what has to be fixed in order for this to work?
Ps. I have another issue with the Matlab DAE solver: the input variables (known coefficient functions) for my model are time-variant. In this example they are constant over the whole domain, but in my problem they change in time. This issue has been brought up here; I would be grateful if you referred to it, should you have any solution.
Finally, I managed to find the correct syntax for this problem. I had made the mistake of treating matrix variables (such as A, Fq) as single entities. Below I present code that uses the matrix approach and solves this particular DAE:
% Define symbolic variables.
q = sym('q%d',[3 1]); % state variables
a = sym('a'); k = sym('k'); % constant parameters
t = sym('t','real'); % independent variable
% Define system variables and group them in vectors:
p1(t) = sym('p1(t)'); p2(t) = sym('p2(t)'); p3(t) = sym('p3(t)');
w1(t) = sym('w1(t)'); w2(t) = sym('w2(t)'); w3(t) = sym('w3(t)');
m1(t) = sym('m1(t)'); m2(t) = sym('m2(t)');
pvect = [p1(t); p2(t); p3(t)];
wvect = [w1(t); w2(t); w3(t)];
mvect = [m1(t); m2(t)];
% Define matrices:
mass = diag(sym('ms%d',[1 3]));
Fq = [0 -1 a;
0 0 1];
A = [1 0 0;
0 1 a;
0 a -q(1)*a] * k;
% Define sets of equations and aggregate them into one set:
set1 = diff(pvect,t) == A*wvect + Fq'*mvect;
set2 = mass*diff(wvect,t) == -pvect;
set3 = Fq*wvect == 0;
eqs = [set1; set2; set3];
% Close all system variables in one vector:
vars = [pvect; wvect; mvect];
% Reduce the index of the system and remove redundant equations:
[DAEs,DAEvars] = reduceDAEIndex(eqs,vars);
[DAEs,DAEvars] = reduceRedundancies(DAEs,DAEvars);
[M,F] = massMatrixForm(DAEs,DAEvars);
We receive a very simple 2x2 ODE for the two variables p1(t) and w1(t). Keep in mind that after reducing redundancies we got rid of all elements of the state vector q. This means that the remaining variables (k and mass(1,1)) are not time-dependent. Had some variables within the system been time-dependent, the case would have been much harder to solve.
% Replace symbolic variables with numeric ones:
M = odeFunction(M, DAEvars,mass(1,1));
F = odeFunction(F, DAEvars, k);
k = 2000; numericMass = 4;
F = #(t, Y) F(t, Y, k);
M = #(t, Y) M(t, Y, numericMass);
% set the solver:
opt = odeset('Mass', M); % Mass matrix of the system
TIME = [1; 0]; % Time boundaries of the simulation (backwards in time)
y0 = [1 0]'; % Initial conditions for left variables p1(t) and w1(t)
% Call the solver
[T, solution] = ode15s(F, TIME, y0, opt);
% Plot results
plot(T,solution(:,1),T,solution(:,2))

Matlab linear correlation matrix in copularnd (copula random number) function

Consider we have 4 vectors, V1, V2, nf1 and nf2. We need to generate n=8736 random numbers such that each of the pairs (V1,V2), (V1,nf1), (V2,nf2) and (nf1,nf2) is correlated as follows:
Rvv=0.6 for (V1,V2)
Rvn=0.5 for (V1,nf1) and (V2,nf2)
Rnn=0 for (nf1,nf2)
(It's not important what the correlations of (V1,nf2) and (V2,nf1) are.) Now we use a copula to generate correlated random numbers in MATLAB:
Rvv=0.6;
Rvn=0.5;
Rnn=0;
n = 8736;
%V1 V2 nf1 nf2
Rho = [1 Rvv Rvn 0 ; %V1
Rvv 1 0 Rvn; %V2
Rvn 0 1 Rnn; %nf1
0 Rvn Rnn 1 ]; %nf2
Random_no = copularnd('Gaussian',Rho,n);
It all works when Rvv is 0.6: Random_no will be an 8736-by-4 matrix in which each pair of columns is correlated as specified by the Rho matrix. BUT when Rvv=0.9, MATLAB returns the following error:
Error using mvnrnd
SIGMA must be a symmetric positive semi-definite matrix.
Error in copularnd
u = normcdf(mvnrnd(zeros(1,d),Rho,n));
I can't understand what the problem is or how I can actually generate correlated random numbers using a copula. I'd really appreciate it if anyone can help me with this problem.
I can't answer about the theory, but I found this in the code of copularnd:
case 'gaussian'
Rho = varargin{1};
n = varargin{2};
d = size(Rho,1);
if isscalar(Rho)
if ~(-1 < Rho && Rho < 1)
error(message('stats:copularnd:BadScalarCorrelation'));
end
Rho = [1 Rho; Rho 1];
d = 2;
elseif any(diag(Rho) ~= 1)
error(message('stats:copularnd:BadCorrelationMatrix'));
end
% MVNRND will check that Rho is square, symmetric, and positive semi-definite.
u = normcdf(mvnrnd(zeros(1,d),Rho,n));
Because there is no information about this in the help, I think it must be clear to people who know what a copula is :) but it is not for me.
There are a lot of ways to generate positive semi-definite matrices: http://www.mathworks.com/matlabcentral/newsreader/view_thread/163489
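One simple repair, sketched below on the assumption that slightly perturbed correlations are acceptable, is to clip the negative eigenvalues of Rho and rescale the result back to a unit diagonal before calling copularnd:
Rvv = 0.9; Rvn = 0.5; Rnn = 0; n = 8736;
Rho = [1 Rvv Rvn 0; Rvv 1 0 Rvn; Rvn 0 1 Rnn; 0 Rvn Rnn 1];
[Q,D] = eig(Rho);
if min(diag(D)) < 0 % with Rvv = 0.9 this matrix is indefinite
    Rho = Q*max(D,0)*Q'; % clip negative eigenvalues (project onto the PSD cone)
    s = sqrt(diag(Rho));
    Rho = Rho./(s*s'); % rescale towards a unit diagonal
    Rho = (Rho + Rho')/2; % enforce exact symmetry against round-off
    Rho(1:size(Rho,1)+1:end) = 1; % force exact ones on the diagonal
end
Random_no = copularnd('Gaussian',Rho,n);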

Accuracy issues with multiplication of matrices in Matlab R2012b

I have implemented a script that does constrained optimization to solve for the optimal parameters of a Support Vector Machine model. I noticed that my script for some reason gives inaccurate results (although very close to the real values). For example, the typical situation is that the result of a calculation should be exactly 0, but instead it is something like
-1/18014398509481984 = -5.551115123125783e-17
This situation happens when I multiply matrices with vectors. What makes this stranger still is that if I do the multiplications by hand in the Matlab command window, I get exactly 0.
Let me give an example: If I take the vectors Aq = [-1 -1 1 1] and x = [12/65 28/65 32/65 8/65]' I get exactly 0 result from their multiplication if I do this in the command window, as you can see in the picture below:
If, on the other hand, I do this in my function script, the result isn't 0 but rather the value -1/18014398509481984.
Here is the part of my script that is responsible for this multiplication (I've added the Aq and x into the script to show the contents of Aq and x as well):
disp('DOT PRODUCT OF ACTIVE SET AND NEW POINT: ')
Aq
x
Aq*x
Here is the result of the code above when run:
As you can see the value isn't exactly 0 even though it really should be. Note that this problem doesn't occur for all possible values of Aq and x. If Aq = [-1 -1 1 1] and x = [4/13 4/13 4/13 4/13] the result is exactly 0 as you can see below:
What is causing this inaccuracy? How can I fix this?
P.S. I didn't include my whole code because it's not very well documented and is a few hundred lines long, but I will if requested.
Thank you!
UPDATE: new test, using Ander Biguri's advice:
UPDATE 2: THE CODE
function [weights, alphas, iters] = solveSVM(data, labels, C, e)
% FUNCTION [weights, alphas, iters] = solveSVM(data, labels, C, e)
%
% AUTHOR: jjepsuomi
%
% VERSION: 1.0
%
% DESCRIPTION:
% - This function will attempt to solve the optimal weights for a Support
% Vector Machines (SVM) model using active set method with gradient
% projection.
%
% INPUTS:
% "data" a n-by-m data matrix. The number of rows 'n' corresponds to the
% number of data points and the number of columns 'm' corresponds to the
% number of variables.
% "labels" a 1-by-n row vector of data labels from the set {-1,1}.
% "C" Box costraint upper limit. This will constrain the values of 'alphas'
% to the range 0 <= alphas <= C. If hard-margin SVM model is required set
% C=Inf.
% "e" a real value corresponding to the convergence criterion, that is if
% solution Xi and Xi-1 are within distance 'e' from each other stop the
% learning process, i.e. IF |F(Xi)-F(Xi-1)| < e ==> stop learning process.
%
% OUTPUTS:
% "weights" a vector corresponding to the optimal decision line parameters.
% "alphas" a vector of alpha-values corresponding to the optimal solution
% of the dual optimization problem of SVM.
% "iters" number of iterations until learning stopped.
%
% EXAMPLE USAGE 1:
%
% 'Hard-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, Inf, 10^-100)
%
% EXAMPLE USAGE 2:
%
% 'Soft-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, 0.8, 10^-100)
% STEP 1: INITIALIZATION OF THE PROBLEM
format long
% Calculate linear kernel matrix
L = kron(labels', labels);
K = data*data';
% Hessian matrix
Qd = L.*K;
% The minimization function
L = @(a) (1/2)*a'*Qd*a - ones(1, length(a))*a;
% Gradient of the minimizable function
gL = @(a) a'*Qd - ones(1, length(a));
% STEP 2: THE LEARNING PROCESS, ACTIVE SET WITH GRADIENT PROJECTION
% Initial feasible solution (required by gradient projection)
x = zeros(length(labels), 1);
iters = 1;
optfound = 0;
while optfound == 0 % repeat until the convergence criterion is met
% Negative of the gradient at initial solution
g = -gL(x);
% Set the active set and projection matrix
Aq = labels; % In plane y^Tx = 0
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane
if ~isempty(find(x==0 | x==C)) % Constraints active?
acinds = find(x==0 | x==C);
for i = 1:length(acinds)
if (x(acinds(i)) == 0 && d(acinds(i)) < 0) || x(acinds(i)) == C && d(acinds(i)) > 0
% Make the constraint vector
constr = zeros(1,length(x));
constr(acinds(i)) = 1;
Aq = [Aq; constr];
end
end
% Update the projection matrix
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane / box projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane / border
end
%%%% DISPLAY INFORMATION, THIS PART IS NOT NECESSARY, ONLY FOR DEBUGGING
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Values smaller than 'eps' are changed into 0
d(find(abs(d-0) < eps)) = 0;
if ~isempty(find(d~=0)) && rank(P) < length(x) % Line search for optimal lambda
lopt = ((g*d)/(d'*Qd*d));
lmax = inf;
for i = 1:length(x)
if d(i) < 0 && -x(i) ~= 0 && -x(i)/d(i) <= lmax
lmax = -x(i)/d(i);
elseif d(i) > 0 && (C-x(i))/d(i) <= lmax
lmax = (C-x(i))/d(i);
end
end
lambda = max(0, min([lopt, lmax]));
if abs(lambda) < eps
lambda = 0;
end
xo = x;
x = x + lambda*d;
iters = iters + 1;
end
% Check whether search direction is 0-vector or 'e'-criterion met.
if isempty(find(d~=0)) || abs(L(x)-L(xo)) < e
optfound = 1;
end
end
%%% STEP 3: GET THE WEIGHTS
alphas = x;
w = zeros(1, length(data(1,:)));
for i = 1:size(data,1)
w = w + labels(i)*alphas(i)*data(i,:);
end
svinds = find(alphas>0);
svind = svinds(1);
b = 1/labels(svind) - w*data(svind, :)';
%%% STEP 4: OPTIMALITY CHECK, KKT conditions. See KKT-conditions for reference.
weights = [b; w'];
datadim = length(data(1,:));
Q = [zeros(1,datadim+1); zeros(datadim, 1), eye(datadim)];
A = [ones(size(data,1), 1), data];
for i = 1:length(labels)
A(i,:) = A(i,:)*labels(i);
end
LagDuG = Q*weights - A'*alphas;
Ac = A*weights - ones(length(labels),1);
alpA = alphas.*Ac;
LagDuG(any(abs(LagDuG-0) < 10^-14)) = 0;
if ~any(alphas < 0) && all(LagDuG == zeros(datadim+1,1)) && all(abs(Ac) >= 0) && all(abs(alpA) < 10^-6)
disp('Optimal found, Karush-Kuhn-Tucker conditions satisfied.')
else
disp('Optimal not found, Karush-Kuhn-Tucker conditions not satisfied.')
end
% VISUALIZATION FOR 2D-CASE
if size(data, 2) == 2
pinds = find(labels > 0);
ninds = find(labels < 0);
plot(data(pinds, 1), data(pinds, 2), 'o', 'MarkerFaceColor', 'red', 'MarkerEdgeColor', 'black')
hold on
plot(data(ninds, 1), data(ninds, 2), 'o', 'MarkerFaceColor', 'blue', 'MarkerEdgeColor', 'black')
Xb = min(data(:,1))-1;
Xe = max(data(:,1))+1;
Yb = -(b+w(1)*Xb)/w(2);
Ye = -(b+w(1)*Xe)/w(2);
lineh = plot([Xb Xe], [Yb Ye], 'LineWidth', 2);
supvh = plot(data(find(alphas~=0), 1), data(find(alphas~=0), 2), 'g.');
legend([lineh, supvh], 'Decision boundary', 'Support vectors');
hold off
end
NOTE:
If you run EXAMPLE 1, you should get an output starting with the following:
As you can see, the multiplication of Aq and x doesn't produce the value 0, even though it should. That is not a problem in this particular example, but if I have more data points with lots of decimals in them, this inaccuracy becomes a bigger and bigger problem, because the calculations are not exact. This is bad, for example, when I'm searching for a new direction vector while moving towards the optimal solution in the gradient projection method: the search direction isn't exactly the correct direction, but close to it. This is why I want the exactly correct values... is this possible?
I wonder if the decimals in the data points have something to do with the accuracy of my results. See the picture below:
So the question is: is this caused by the data, or is there something wrong in the optimization procedure?
Do you use the format function inside your script? It looks like you used format rat somewhere.
You can always use Matlab's eps function, which returns the precision used inside Matlab. The absolute value of -1/18014398509481984 is smaller than this, according to my Matlab R2014B:
format long
a = abs(-1/18014398509481984)
b = eps
a < b
This basically means that the result is zero (but Matlab stopped the calculation because, according to the eps value, the result was already fine).
Otherwise you can just use format long inside your script before the calculation.
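For intuition, the residual in the question has the classic size of a double-precision rounding error: none of the decimal fractions involved is exactly representable in binary, so a sum that is zero in exact arithmetic comes out a few ulps away from zero. A minimal illustration:
format long
r = 0.1 + 0.2 - 0.3 % gives 5.551115123125783e-17, the same magnitude as in the question
abs(r) < eps % true: the residual is below machine epsilon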
Edit
I see the inv function inside your code; try replacing it with the \ operator (mldivide). The results from it will be more accurate, as it uses Gaussian elimination without forming the inverse.
The inv documentation states:
In practice, it is seldom necessary to form the explicit inverse of a matrix. A frequent misuse of inv arises when solving the system of linear equations Ax = b. One way to solve this is with x = inv(A)*b. A better way, from both an execution time and numerical accuracy standpoint, is to use the matrix division operator x = A\b. This produces the solution using Gaussian elimination, without forming the inverse.
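Applied to the projection matrix in the posted code, that change might look like this:
% Instead of forming the inverse explicitly:
P = eye(length(x)) - Aq'*inv(Aq*Aq')*Aq;
% solve with mldivide:
P = eye(length(x)) - Aq'*((Aq*Aq')\Aq);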
With the provided code, this is how I tested:
I added a breakpoint on the following code:
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
When the if branch was taken, I typed at the console:
K>> format rat; disp(x);
12/65
28/65
32/65
8/65
K>> disp(x == [12/65; 28/65; 32/65; 8/65]);
0
1
0
0
K>> format('long'); disp(max(abs(x - [12/65; 28/65; 32/65; 8/65])));
1.387778780781446e-17
K>> disp(eps(8/65));
1.387778780781446e-17
This suggests that this is a display problem: format rat deliberately uses small integers to express the value, at the expense of precision. Apparently, the true value of x(4) is the double-precision number adjacent to 8/65 (one ulp away).
So, this begs the question: are you sure that numeric convergence depends on flipping the least significant bit in a double-precision value?