How to solve Ax=b for large condition numbers - scipy

I am dealing with a highly ill-conditioned matrix (cond > 10^25).
I have tried most of the scipy methods: splu, gmres, solve,
and pyamg.krylov.steepest_descent from PyAMG,
as well as the iterative refinement method below:
from copy import copy
import numpy as np
from scipy.sparse.linalg import splu

def iterRef(A, b, tol=1e-5, maxiter=100, verbose=True):
    """
    Solve the equation A x = b for x using the iterative refinement method.

    :param A: [(M,M) array_like] A square (sparse) matrix.
    :param b: [(M,) array_like] Right-hand side vector in A x = b.
    :param tol: [float] tolerance on the correction norm for convergence.
    :param maxiter: [int] maximum number of refinement iterations.
    :param verbose: [bool] print convergence information.
    :return x: [(M,) ndarray] Solution to the system A x = b; shape matches b.

    **Reference:**
    Burden, R.L. and Faires, J.D., 2011. Numerical Analysis.
    """
    # initial factorization and solve
    lu = splu(A)
    x = lu.solve(b)
    res = np.sum(A.dot(x) - b)
    print("res :: A * x - b = {:e}".format(res))
    # check if already converged
    if np.abs(res) < tol:
        print("IR ::: A * x - b = {:e}".format(res))
        return x
    k = 1                                        # step 1
    while k <= maxiter:                          # step 2
        r = b - A.dot(x)                         # step 3: residual
        y = lu.solve(r)                          # step 4: correction
        xx = copy(x + y)                         # step 5: refined solution
        if k == 1:                               # step 6
            COND = np.linalg.cond(A.toarray())
        norm_ = np.linalg.norm(x - xx)
        print("iteration {:3d}, norm = {:e}".format(k, norm_))
        if norm_ < tol:                          # step 7
            if verbose:
                print("Condition number of matrix A is: {:e}".format(COND))
                print("The procedure was successful.")
                print("IR: A * x - b = {:e}".format(np.sum(A.dot(xx) - b)))
                print(f"number of iterations: {k:d}")
                print(" ")
            return xx
        k += 1                                   # step 8
        x = copy(xx)                             # step 9
    print("Max iteration exceeded.")
    print("The procedure was not successful.")
    print("Condition number of matrix A is: {:e}".format(COND))
    print(" ")
    return None
The accuracy depends on the condition number, and for very large condition numbers it is lost entirely.
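This matches the standard rule of thumb (my understanding, not anything specific to scipy): the relative error of a backward-stable solve grows roughly like cond(A) times machine epsilon, so at cond ≈ 1e25 double precision has no correct digits left:

import numpy as np

cond = 1e25
eps = np.finfo(float).eps                        # unit roundoff of float64, ~2.22e-16
print("expected relative error ~ {:.2e}".format(cond * eps))   # ~2e9, i.e. far above 1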
I also tried some methods from Eigen 3 (see the catalogue of decompositions offered by Eigen):
VectorXd x = A.partialPivLu().solve(b);
VectorXd x = A.fullPivLu().solve(b);
VectorXd x = A.bdcSvd(ComputeThinU | ComputeThinV).solve(b);
Here is the link to GitHub for the full example in Python and Eigen.
Is it possible to use the Eigen modules with, e.g., quad precision or higher? Is there any example?
I am not sure we can do this with the scipy modules.
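The closest thing I know of on the Python side is mpmath, which does dense linear algebra at arbitrary precision. The sketch below only illustrates the idea on a tiny dense system; it is dense-only and will not scale to a large sparse matrix:

import mpmath as mp

# Work with 50 significant digits instead of the ~16 of double precision.
mp.mp.dps = 50

# Tiny dense example with a huge condition number (~1e20); a real sparse A
# would have to be densified first, e.g. mp.matrix(A.toarray().tolist()),
# which is only feasible for modest sizes.
A = mp.matrix([[1, 1],
               [1, 1 + mp.mpf(10)**-20]])
b = mp.matrix([2, 2])

x = mp.lu_solve(A, b)   # LU with pivoting, carried out in extended precision
print(x)                # exact solution is (2, 0)
print(A*x - b)          # residual is ~0 at this precision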
The output from the C++ file is:
condition number is :3.16172e+28
The relative error for partialPivLu is 0.0000000000000011:
The relative error for fullPivLu is 0.0157277957331164:
The relative error for householderQr is 0.0000000000000020:
The relative error for colPivHouseholderQr is 0.0000000000577384:
The relative error for fullPivHouseholderQr is 0.0157277957331138:
The relative error for completeOrthogonalDecomposition is 0.0157277957331138:
The relative error for llt is -nan:
The relative error for ldlt is 0.0000000001045718:
The relative error for ldlt is 396613336624311706845184.0000000000000000:
The relative error for bdcSvd is 0.0157277957329992:
The relative error for jacobiSvd is 0.0157277957528586:
I put the Python and C++ code in the attached link, just in case.

Related

Using scipy solve_bvp for a nonhomogeneous ODE

I am trying to solve the following 4th order BVP
y'''' = K - C*y
My x variable is a linspace with 100 nodes. As you can see, K is a vector of the same length (100) and makes the equation nonhomogeneous. When I run the solver, however, I get the following error:
Cell In [11], line 18, in fun(x, y)
17 def fun(x, y):
---> 18 ans = vector-np.multiply(C,y[0])
19 return np.vstack((y[1],y[2],y[3],ans))
ValueError: operands could not be broadcast together with shapes (100,) (99,)
Why does the solver suddenly change the length of y by 1 and how can I fix this error?
EDIT: I must add that the solver works fine when K is absent i.e. the equation is homogeneous.
from scipy.integrate import solve_bvp
import numpy as np

L = 10
nodes = 100
A = 1000
B = 1500
C = 0.05
x = np.linspace(0, L, nodes)
vector = np.ones(nodes)

def fun(x, y):
    ans = vector - np.multiply(C, y[0])
    return np.vstack((y[1], y[2], y[3], ans))

def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3] + A/B, yb[3]])

y_a = np.zeros((4, x.size))
res_a = solve_bvp(fun, bc, x, y_a)
res1 = res_a.sol(x)[0]
res2 = res_a.sol(x)[1]
res3 = B*res_a.sol(x)[2]
res4 = B*res_a.sol(x)[3]
In the first round the solver sets up a system of polynomial approximations over the nodes-1 = 99 segments of the initial subdivision.
There is no guarantee that the subdivision remains unchanged in later solver rounds, so your ODE right-hand-side function has to work with arbitrary x arrays. This means that parameters given as a table of values need to be interpolated onto the general x array. numpy.interp performs direct point-wise interpolation, and scipy.interpolate.interp1d generates interpolation functions.
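A minimal sketch of that fix, reusing the question's setup (the tabulated term, called vector in the question, is renamed K_table here; interpolating it inside fun is the only change):

from scipy.integrate import solve_bvp
import numpy as np

L, nodes = 10, 100
A, B, C = 1000, 1500, 0.05

x = np.linspace(0, L, nodes)
K_table = np.ones(nodes)            # the tabulated nonhomogeneous term

def fun(x_eval, y):
    # Interpolate the table onto whatever mesh the solver passes in,
    # so the shapes always match regardless of the current subdivision.
    K_eval = np.interp(x_eval, x, K_table)
    return np.vstack((y[1], y[2], y[3], K_eval - C*y[0]))

def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3] + A/B, yb[3]])

res = solve_bvp(fun, bc, x, np.zeros((4, x.size)))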

How do I implement Neumann series iteration to approximate Ax = b?

I am working on MATLAB problems from my textbook, and one of them (as an example of Neumann series iteration) asks me to follow the pseudocode below:
INPUT: A n x n matrix, b n x 1 vector, T a positive integer
OUTPUT: An approximation y of x after T iterations
STEP 1: Set y = zeros(n,1)
STEP 2: Set M = eye(n) - A
STEP 3: For i = 1,2,...,T do STEP 4
STEP 4: Set y = M*y + b
STEP 5: OUTPUT(y)
I am trying to find the smallest value of T such that the largest entry of the vector Ay - b in absolute value is less than the tolerance I set (the variable e as shown below). I then save T and E (the largest entry in absolute value of Ay - b).
function [T,E] = neumann(A,b,e)
    n = size(A);
    y = zeros(n(1,1),1);
    M = eye(n(1,1)) - A;
    t = 10000;
    for ii = 1:t
        y = M*y + b;
        if max(abs(A*y - b)) < e
            T = t;
            E = max(abs(A*y - b));
            break
        end
    end
end

A = [1.1, .2, -.2, .5;
      .2, .9,  .5, .3;
      .1, 0., 1.,  .4;
      .1, .1,  .1, 1.2];
b = [1;0;1;0];
[T_2, E_2] = neumann(A,b,1e-2);
[T_4, E_4] = neumann(A,b,1e-4);
[T_6, E_6] = neumann(A,b,1e-6);
output = [T_2, E_2; T_4, E_4; T_6, E_6];
Instead of getting the smallest possible T, the for loop appears to go through all of the iterations even though I used the break statement to end the loop's execution once the condition was met. I can't really figure out what's wrong with my loop; I followed the pseudocode as closely as possible. Any feedback or suggestions are appreciated, thank you in advance.
You always set T = t; you've perhaps forgotten what t is.
You define t = 10000 on line 5 of the neumann function, and it never changes, so your output T is always 10000.
Instead, I assume you wanted T = ii;, since ii is the iteration index when the threshold is reached.
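For illustration, here is a minimal Python sketch of the same Neumann iteration with the iteration count returned correctly (a translation of the idea, not the original MATLAB assignment):

import numpy as np

def neumann(A, b, tol, max_iter=10000):
    """Neumann iteration y <- (I - A) y + b; returns (T, E), where T is the
    first iteration at which max|A y - b| drops below tol."""
    y = np.zeros(A.shape[0])
    M = np.eye(A.shape[0]) - A
    for ii in range(1, max_iter + 1):
        y = M @ y + b
        E = np.max(np.abs(A @ y - b))
        if E < tol:
            return ii, E      # return the loop index, not the iteration cap
    return None, E            # did not converge within max_iter steps

A = np.array([[1.1, 0.2, -0.2, 0.5],
              [0.2, 0.9,  0.5, 0.3],
              [0.1, 0.0,  1.0, 0.4],
              [0.1, 0.1,  0.1, 1.2]])
b = np.array([1.0, 0.0, 1.0, 0.0])

for tol in (1e-2, 1e-4, 1e-6):
    print(tol, *neumann(A, b, tol))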

How to convert deep learning gradient descent equation into python

I've been following an online tutorial on deep learning. It has a practical question on gradient descent and cost calculations where I have been struggling to reproduce the given answers once they are converted to Python code. I hope you can kindly help me get the correct answer.
Please see the following link for the equations used in the calculations.
Following is the function given to calculate the gradient descent, cost, etc. The values need to be computed without for loops, using matrix operations only.
import numpy as np

def propagate(w, b, X, Y):
    """
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size
         (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """
    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A =     # compute activation
    cost =  # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw =
    db =
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
Following are the data given to test the above function
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Following is the expected output of the above
Expected Output:
dw [[ 0.99993216] [ 1.99980262]]
db 0.499935230625
cost 6.000064773192205
For the above propagate function I have used the replacements below, but the output is not what is expected. Please kindly advise how to get the expected output.
A = sigmoid(X)
cost = -1*((np.sum(np.dot(Y,np.log(A))+np.dot((1-Y),(np.log(1-A))),axis=0))/m)
dw = (np.dot(X,((A-Y).T)))/m
db = np.sum((A-Y),axis=0)/m
Following is the sigmoid function used to calculate the Activation:
def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-z))
    ### END CODE HERE ###
    return s
I hope someone can help me understand how to solve this, as I can't continue with the rest of the tutorial without understanding it. Many thanks.
You can calculate A, cost, dw and db as follows:
A = sigmoid(np.dot(w.T,X) + b)
cost = -1 / m * np.sum(Y*np.log(A)+(1-Y)*np.log(1-A))
dw = 1/m * np.dot(X,(A-Y).T)
db = 1/m * np.sum(A-Y)
where sigmoid is:
def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s
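Putting those lines into the propagate skeleton gives a small self-contained sketch that reproduces the expected output quoted in the question (nothing new here, just the answer's formulas assembled):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)                                  # activation
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # log loss
    dw = 1 / m * np.dot(X, (A - Y).T)                                # d(cost)/dw
    db = 1 / m * np.sum(A - Y)                                       # d(cost)/db
    return {"dw": dw, "db": db}, np.squeeze(cost)

w, b = np.array([[1.0], [2.0]]), 2.0
X, Y = np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[1.0, 0.0]])
grads, cost = propagate(w, b, X, Y)
print("dw =", grads["dw"])    # approx [[0.99993216], [1.99980262]]
print("db =", grads["db"])    # approx 0.4999352306
print("cost =", cost)         # approx 6.0000647732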
After going through the code and notes a few times I was finally able to figure out the error.
First you need to calculate Z and pass that to the sigmoid function, instead of X.
The formula is Z = w^T X + b, so in Python this is calculated as below:
Z = np.dot(w.T, X) + b
Then calculate A by passing Z to the sigmoid function:
A = sigmoid(Z)
Then dw can be calculated as below:
dw = np.dot(X, (A-Y).T) / m
The other variables, the cost and the derivative with respect to b, are calculated as follows:
cost = -1 * ((np.sum((Y*np.log(A)) + ((1-Y)*(np.log(1-A))), axis=1)) / m)
db = np.sum((A-Y), axis=1) / m
def sigmoid(x):
    # You have it right
    return 1 / (1 + np.exp(-x))

def derivSigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))

error = targetSample - output

# Make sure to keep the sigmoided value around. For instance, an output that has
# already been sigmoided can be used to get the sigmoid derivative faster
# (output = sigmoid(x)):
dOutput = output * (1 - output)
Looks like you're already working on the backprop. Just thought I'd help simplify some of the forward prop for you.

BVP4c solve for unknown boundary

I am trying to use bvp4c to solve a system of 4 ODEs. The issue is that one of the boundaries is unknown.
Can bvp4c handle this? In my code L is the unknown I am solving for.
I get the error message printed below.
function mat4bvp
    L = 8;
    solinit = bvpinit(linspace(0,L,100), @mat4init);
    sol = bvp4c(@mat4ode, @mat4bc, solinit);
    sint = linspace(0,L);
    Sxint = deval(sol,sint);
end
% ------------------------------------------------------------
function dtdpdxdy = mat4ode(s,y,L)
    Lambda = 0.3536;
    dtdpdxdy = [ y(2)
                 -sin(y(1)) + Lambda*(L-s)*cos(y(1))
                 cos(y(1))
                 sin(y(1)) ];
end
% ------------------------------------------------------------
function res = mat4bc(ya,yb,L)
    res = [ ya(1)
            ya(2)
            ya(3)
            ya(4)
            yb(1) ];
end
% ------------------------------------------------------------
function yinit = mat4init(s)
    yinit = [ cos(s)
              0
              0
              0 ];
end
Unfortunately I get the following error message:
>> mat4bvp
Not enough input arguments.
Error in mat4bvp>mat4ode (line 13)
    -sin(y(1)) + Lambda*(L-s)*cos(y(1))
Error in bvparguments (line 105)
    testODE = ode(x1,y1,odeExtras{:});
Error in bvp4c (line 130)
    bvparguments(solver_name,ode,bc,solinit,options,varargin);
Error in mat4bvp (line 4)
    sol = bvp4c(@mat4ode,@mat4bc,solinit);
One trick to transform a variable end point into a fixed one is to change the time scale. If x'(t) = f(t, x(t)) is the differential equation, set t = L*s with s running from 0 to 1, and compute the associated differential equation for y(s) = x(L*s):
y'(s) = L*x'(L*s) = L*f(L*s, y(s))
The next trick is to turn the global unknown L into part of the differential equation by treating it as a constant function, L'(s) = 0. The new system is
[ y'(s), L'(s) ] = [ L(s)*f(L(s)*s, y(s)), 0 ]
and the value of L appears as an additional free left or right boundary value, increasing the dimension of the state vector to match the number of boundary conditions.
I do not have MATLAB readily available; in Python, with the tools in scipy, this can be implemented as follows:
import numpy as np
from scipy.integrate import solve_bvp, odeint
import matplotlib.pyplot as plt

# The original right-hand side, with the interval length L as a parameter
def fun0(t, y, L):
    Lambda = 0.3536
    return np.array([ y[1],
                      -np.sin(y[0]) + Lambda*(L-t)*np.cos(y[0]),
                      np.cos(y[0]),
                      np.sin(y[0]) ])

# Wrapper applying both tricks: rescale to the fixed interval [0,1] and carry L
# as an extra state component with zero derivative.
def fun1(s, y):
    L = y[-1]
    dydt = np.zeros_like(y)
    dydt[:-1] = L*fun0(L*s, y[:-1], L)
    return dydt

# Evaluation of the boundary condition residuals
def bc(ya, yb):
    return [ ya[0], ya[1], ya[2], ya[3], yb[0] ]

# Define the initial mesh with 3 nodes
x = np.linspace(0, 1, 3)

# This problem has multiple solutions. Try two initial guesses for L.
L_a = 8
L_b = 9
y_a = odeint(lambda y, t: fun1(t, y), [0, 0, 0, 0, L_a], x)
y_b = odeint(lambda y, t: fun1(t, y), [0, 0, 0, 0, L_b], x)

# Now we are ready to run the solver.
res_a = solve_bvp(fun1, bc, x, y_a.T)
res_b = solve_bvp(fun1, bc, x, y_b.T)
L_a = res_a.sol(0)[-1]
L_b = res_b.sol(0)[-1]
print("L_a=%.8f, L_b=%.8f" % (L_a, L_b))

# Plot the two found solutions. They are returned in spline form; evaluate the
# spline on a fine grid to produce a smooth plot.
x_plot = np.linspace(0, 1, 100)
y_plot_a = res_a.sol(x_plot)[0]
y_plot_b = res_b.sol(x_plot)[0]
plt.plot(L_a*x_plot, y_plot_a, label='L=%.8f' % L_a)
plt.plot(L_b*x_plot, y_plot_b, label='L=%.8f' % L_b)
plt.legend()
plt.xlabel("t")
plt.ylabel("y")
plt.grid()
plt.show()
which produces a plot of the two solution curves.
Trying different initial values for L finds other solutions on quite different scales, among them
L=0.03195111
L=0.05256775
L=0.05846539
L=0.06888907
L=0.08231966
L=4.50411522
L=6.84868060
L=20.01725616
L=22.53189063

Converting/translating from Python to Octave or MATLAB

I have Python code that I want to rewrite in Octave, but I am running into many problems during the conversion. I have found solutions for some of them, but for others I still need your help. I would start with this part of the code:
INVOLUTE_FI = 0
INVOLUTE_FO = 1
INVOLUTE_OI = 2
INVOLUTE_OO = 3
def coords_inv(phi, geo, theta, inv):
    """
    Coordinates of the involutes

    Parameters
    ----------
    phi : float
        The involute angle
    geo : struct
        The structure with the geometry obtained from get_geo()
    theta : float
        The crank angle, between 0 and 2*pi
    inv : int
        The key for the involute to be considered
    """
    rb = geo.rb
    ro = rb*(pi - geo.phi_fi0 + geo.phi_oo0)
    Theta = geo.phi_fie - theta - pi/2.0
    if inv == INVOLUTE_FI:
        x = rb*cos(phi)+rb*(phi-geo.phi_fi0)*sin(phi)
        y = rb*sin(phi)-rb*(phi-geo.phi_fi0)*cos(phi)
    elif inv == INVOLUTE_FO:
        x = rb*cos(phi)+rb*(phi-geo.phi_fo0)*sin(phi)
        y = rb*sin(phi)-rb*(phi-geo.phi_fo0)*cos(phi)
    elif inv == INVOLUTE_OI:
        x = -rb*cos(phi)-rb*(phi-geo.phi_oi0)*sin(phi)+ro*cos(Theta)
        y = -rb*sin(phi)+rb*(phi-geo.phi_oi0)*cos(phi)+ro*sin(Theta)
    elif inv == INVOLUTE_OO:
        x = -rb*cos(phi)-rb*(phi-geo.phi_oo0)*sin(phi)+ro*cos(Theta)
        y = -rb*sin(phi)+rb*(phi-geo.phi_oo0)*cos(phi)+ro*sin(Theta)
    else:
        raise ValueError('flag not valid')
    return x, y

def CVcoords(CVkey, geo, theta, N = 1000):
    """
    Return a tuple of numpy arrays of x,y coordinates for the lines which
    determine the boundary of the control volume

    Parameters
    ----------
    CVkey : string
        The key for the control volume for which the polygon is desired
    geo : struct
        The structure with the geometry obtained from get_geo()
    theta : float
        The crank angle, between 0 and 2*pi
    N : int
        How many elements to include in each entry in the polygon

    Returns
    -------
    x : numpy array
        X-coordinates of the outline of the control volume
    y : numpy array
        Y-coordinates of the outline of the control volume
    """
    Nc1 = Nc(theta, geo, 1)
    Nc2 = Nc(theta, geo, 2)
    if CVkey == 'sa':
        r = (2*pi*geo.rb - geo.t)/2.0
        xee, yee = coords_inv(geo.phi_fie, geo, 0.0, 'fi')
        xse, yse = coords_inv(geo.phi_foe - 2*pi, geo, 0.0, 'fo')
        xoie, yoie = coords_inv(geo.phi_oie, geo, theta, 'oi')
        xooe, yooe = coords_inv(geo.phi_ooe, geo, theta, 'oo')
        x0, y0 = (xee+xse)/2, (yee+yse)/2
        beta = atan2(yee-y0, xee-x0)
        t = np.linspace(beta, beta+pi, 1000)
        x, y = x0 + r*np.cos(t), y0 + r*np.sin(t)
        return np.r_[x, xoie, xooe, x[0]], np.r_[y, yoie, yooe, y[0]]
https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html
I just don't understand the last output, and I am still confused about what np.r_ means here and how I can write it in Octave. I read what is written in the link, but it is still not clear to me.
return np.r_[x,xoie,xooe,x[0]], np.r_[y,yoie,yooe,y[0]]
The function returns 2 values, both arrays created by np.r_.
np.r_[....] has indexing syntax, and ends up being translated into a function call to the np.r_ object. The result is just the concatenation of the arguments:
In [355]: np.r_[1, 3, 6:8, np.array([3,2,1])]
Out[355]: array([1, 3, 6, 7, 3, 2, 1])
With the [] notation it can accept slice-like objects (6:8), though I don't see any of those here. I'd have to study the rest of the code to identify whether the other arguments are scalars (single values) or arrays.
My Octave is rusty (though I could experiment with the conversion).
t = np.linspace...      # I think that exists in Octave; here 1000 values
x = x0 + r*np.cos(t)    # a derived array of 1000 values
xoie is one of the values returned by coords_inv and may be a scalar or an array; x[0] is the first value of x. So the r_ call most likely produces a 1d array made up of x followed by those values (see the sketch below).
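For the specific return statement in the question, np.r_ is plain concatenation along the first axis; a small sketch of the equivalence (xoie and xooe are stand-in scalar values here, which is my assumption about what coords_inv returns for a scalar phi):

import numpy as np

x = np.linspace(0.0, 1.0, 5)     # stand-in for the 1000-point arc
xoie, xooe = 2.0, 3.0            # stand-ins for the coords_inv results

a = np.r_[x, xoie, xooe, x[0]]
b = np.concatenate([x, [xoie], [xooe], [x[0]]])
assert np.array_equal(a, b)      # np.r_ just appends the extra values to x
print(a)

In Octave the same thing is ordinary bracket concatenation of row vectors, something like [x, xoie, xooe, x(1)], assuming x is a row vector.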