Is there an algorithm that computes the Drazin inverse of a singular matrix? I would like to apply it either in MATLAB or Mathematica.
Read this article:
Fanbin Bu and Yimin Wei, "The algorithm for computing the Drazin inverses of two-variable polynomial matrices", Applied Mathematics and Computation 147.3 (2004): 805-836.
In the appendix there are several MATLAB codes. The first one is this:
function DrazinInverse1a = DrazinInverse1(a)
%-----------------------------------------
% Compute the Drazin inverse of a matrix 'a' using the limited algorithm.
% Needs the index of 'a'.
global q1 q2 s1 s2
[m,n] = size(a);
if m ~= n
    error('Matrix must be square!')
end
%-----------------------------------------
% Compute the index of A and note r = rank(A^k).
[k,r,a1,a] = index(a);
F = eye(n);
g = -trace(a);
g = collect(g);
for i = 1:r-1
    G = g*eye(n);
    F = a*F + G;
    g = -trace(a*F)/(i+1);
    g = collect(g);
end
DrazinInverse1a = a1*F;
DrazinInverse1a = -1/g*DrazinInverse1a;
DrazinInverse1a = simplify(DrazinInverse1a);
end
function [k,r,a1,a] = index(a)
% To compute the index of 'a'.
k = 0;
n = length(a);
r = n;
a0 = a;
r1 = rank(a);
a1 = eye(n);
while r ~= r1
    r = r1;
    a1 = a;
    a = a*a0;
    r1 = rank(a);
    k = k+1;
end
r = sym2poly(r);
end
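If you only need a numeric (floating-point) Drazin inverse rather than the paper's symbolic polynomial version, a minimal sketch based on the standard identity A_D = A^k * pinv(A^(2k+1)) * A^k, with k the index of A, could look like this (drazin_numeric is a hypothetical name, not from the article):
function X = drazin_numeric(A)
% Minimal numeric sketch: Drazin inverse via X = A^k * pinv(A^(2k+1)) * A^k,
% where k is the index of A (smallest k with rank(A^k) == rank(A^(k+1))).
n  = size(A,1);
k  = 0;
Ak = eye(n);                     % holds A^k
while rank(Ak) ~= rank(Ak*A)     % rank still dropping, so increase k
    Ak = Ak*A;
    k  = k + 1;
end
X = Ak * pinv(A^(2*k + 1)) * Ak;
end
As a quick check, a nonsingular input returns its ordinary inverse and drazin_numeric(zeros(3)) returns the zero matrix; for polynomial matrices you still need the symbolic routine above.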
I'm trying to implement Divide and Conquer SVD of an upper bidiagonal matrix B, but my code is not working. The error is:
"Unable to perform assignment because the size of the left side is
3-by-3 and the size of the right side is 2-by-2.
V_bar(1:k,1:k) = V1;"
Can somebody help me fix it? Thanks.
function [U,S,V] = DivideConquer_SVD(B)
[m,n] = size(B);
k = floor(m/2);
if k == 0
    U = 1;
    V = 1;
    S = B;
    return;
else
    % Divide the input matrix
    alpha = B(k,k);
    beta = B(k,k+1);
    e1 = zeros(m,1);
    e2 = zeros(m,1);
    e1(k) = 1;
    e2(k+1) = 1;
    B1 = B(1:k-1,1:k);
    B2 = B(k+1:m,k+1:m);
    % recursive computations
    [U1,S1,V1] = DivideConquer_SVD(B1);
    [U2,S2,V2] = DivideConquer_SVD(B2);
    U_bar = zeros(m);
    U_bar(1:k-1,1:k-1) = U1;
    U_bar(k,k) = 1;
    U_bar((k+1):m,(k+1):m) = U2;
    D = zeros(m);
    D(1:k-1,1:k) = S1;
    D((k+1):m,(k+1):m) = S2;
    V_bar = zeros(m);
    V_bar(1:k,1:k) = V1;
    V_bar((k+1):m,(k+1):m) = V2;
    u = alpha*e1'*V_bar + beta*e2'*V_bar;
    u = u';
    D_tilde = D*D + u*u';
    % compute eigenvalues and eigenvectors of D^2+uu'
    [L1,Q1] = eig(D_tilde);
    eigs = diag(L1);
    S = zeros(m,n);
    S(1:(m+1):end) = eigs;
    U_tilde = Q1;
    V_tilde = Q1;
    % Compute eigenvectors of the original input matrix T
    U = U_bar*U_tilde;
    V = V_bar*V_tilde;
    return;
end
With limited mathematical knowledge, you will need to help me a bit more, as I cannot judge whether the approach is mathematically correct (no theory is given ;)). Anyway, I couldn't even reproduce the error, e.g. with this matrix, which The MathWorks uses to illustrate its LU matrix factorization:
A = [10 -7 0
-3 2 6
5 -1 5];
So I tried to structure your code a bit and added some hints. Extend this to make your code clearer for those people (like me) who are not too familiar with matrix decompositions.
function [U,S,V] = DivideConquer_SVD(B)
% m x n matrix
[m,n] = size(B);
k = floor(m/2);
if k == 0
    disp('if') % for debugging
    U = 1;
    V = 1;
    S = B;
    % return; % not necessary as you don't do anything afterwards anyway
else
    disp('else') % for debugging
    % Divide the input matrix
    alpha = B(k,k);  % element on the diagonal
    beta = B(k,k+1); % element on the off-diagonal
    e1 = zeros(m,1);
    e2 = zeros(m,1);
    e1(k) = 1;
    e2(k+1) = 1;
    % divide the matrix
    B1 = B(1:k-1,1:k);   % upper left quadrant
    B2 = B(k+1:m,k+1:m); % lower right quadrant
    % recursive function calls
    [U1,S1,V1] = DivideConquer_SVD(B1);
    [U2,S2,V2] = DivideConquer_SVD(B2);
    U_bar = zeros(m);
    U_bar(1:k-1,1:k-1) = U1;
    U_bar(k,k) = 1;
    U_bar((k+1):m,(k+1):m) = U2;
    D = zeros(m);
    D(1:k-1,1:k) = S1;
    D((k+1):m,(k+1):m) = S2;
    V_bar = zeros(m);
    V_bar(1:k,1:k) = V1;
    V_bar((k+1):m,(k+1):m) = V2;
    u = (alpha*e1.'*V_bar + beta*e2.'*V_bar).'; % (little show-off tip: '
    % is the complex (conjugate) transpose operator; .' is the "normal"
    % transpose operator. It's good practice to distinguish between them,
    % but there is no difference for real matrices anyway)
    D_tilde = D*D + u*u.';
    % compute eigenvalues and eigenvectors of D^2+uu'
    [L1,Q1] = eig(D_tilde);
    eigs = diag(L1);
    S = zeros(m,n);
    S(1:(m+1):end) = eigs;
    U_tilde = Q1;
    V_tilde = Q1;
    % Compute eigenvectors of the original input matrix T
    U = U_bar*U_tilde;
    V = V_bar*V_tilde;
    % return; % not necessary as you don't do anything afterwards anyway
end % if
end % function
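Once the size error is gone, a quick way to see whether the output really is an SVD of B is to reconstruct it (a minimal check with an arbitrary upper bidiagonal example):
B = [1 2 0; 0 3 4; 0 0 5];  % small upper bidiagonal test matrix
[U,S,V] = DivideConquer_SVD(B);
norm(U*S*V' - B)            % should be near zero for a valid SVD
norm(U'*U - eye(size(U)))   % U (and V) should be orthogonal
svd(B)                      % compare with MATLAB's reference singular values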
I have to solve this equation
tan(q*h)/tan(p*h) = -(4*k^2*p*q)/(q^2 - k^2)^2
where
d = 6.35;
h = d/2;
ct = 3076.4;
cl = 6207.7;
I want to supply k values, and for each k value I get many omega (w) values, just like sin(x) = 0 gives x = nπ.
I tried it this way:
syms w
d = 0.0065;
h = d/2;
ct = 3.0764;
cl = 6.2077;
k = 1:50:1000;
for i = 1:length(k)
    p = sqrt((w/cl)^2-k(i)^2); % p
    q = sqrt((w/ct)^2-k(i)^2); % q
    f = (tan(q*h)*(q^2-k(i)^2)^2) == -(4*k(i)^2*p*q*tan(p*h));
    z = zeros(1,30);
    for j = 1:30
        z(j) = vpasolve(f,w,[0 1000],'Random',true);
    end
    z = z(z>=0);
    b = double(z);
    c = unique(b);
    cp = c/k(i); % phase velocity
    f = c/(2*pi); % frequency
    plot(f,cp,'bo')
    hold on
end
In my case the answers do not match this image. My answers are of the order 10^-18, which is too small.
Citation for this image: pp. 7-9, https://smartech.gatech.edu/handle/1853/37158
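For what it's worth, one way to collect several roots per k without relying on 'Random' is to split the search range into fixed brackets and call vpasolve once per bracket; a minimal sketch for a single k (not a verified fix for the scaling problem):
syms w
h  = 0.0065/2;
ct = 3.0764;                    % units assumed consistent with k and w
cl = 6.2077;
kk = 51;                        % one wavenumber from the range above
p  = sqrt((w/cl)^2 - kk^2);
q  = sqrt((w/ct)^2 - kk^2);
eqn = tan(q*h)*(q^2 - kk^2)^2 == -4*kk^2*p*q*tan(p*h);
edges = linspace(0, 1000, 31);  % 30 brackets covering [0, 1000]
r = [];
for j = 1:numel(edges)-1
    s = vpasolve(eqn, w, [edges(j) edges(j+1)]); % empty if no root in bracket
    r = [r; s];
end
r = unique(double(r));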
I have this code for LU decomposition, but I want to include the determinants of L and U so that the output is the determinant of LU (or the determinant of PLU).
function [P, L, U] = LUdecomposition(A)
A = input('matrix A =');
m = size(A);
n = m(1);
L = eye(n);
P = eye(n);
U = A;
for i = 1:m(1)
    if U(i,i) == 0
        maximum = max(abs(U(i:end,1)));
        for k = 1:n
            if maximum == abs(U(k,i))
                temp = U(1,:);
                U(1,:) = U(k,:);
                U(k,:) = temp;
                temp = P(:,1);
                P(1,:) = P(k,:);
                P(k,:) = temp;
            end
        end
    end
    if U(i,i) ~= 1
        temp = eye(n);
        temp(i,i) = U(i,i);
        L = L * temp;
        U(i,:) = U(i,:)/U(i,i);
    end
    if i ~= m(1)
        for j = i+1:length(U)
            temp = eye(n);
            temp(j,i) = U(j,i);
            L = L * temp;
            U(j,:) = U(j,:)-U(j,i)*U(i,:);
        end
    end
end
P = P';
end
L and U are triangular matrices, so their determinants are the products of their diagonal elements. In a classical LU decomposition the diagonal elements of L are all 1, therefore det(L) = 1. Because A = L*U implies det(A) = det(L)*det(U), you can easily compute the determinant of LU by computing the determinant of U. Therefore det(PLU) = +det(LU) or -det(LU), depending on the sign of the permutation. I'm not sure how to figure that out :/
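For what it's worth, here is a minimal sketch of that computation using MATLAB's built-in lu (the sign issue is handled by det(P), which is exactly +/-1 for a permutation matrix; det(P') = det(P), so the same product works with the function above):
A = [10 -7 0; -3 2 6; 5 -1 5]; % any square test matrix
[L, U, P] = lu(A);             % convention: P*A = L*U
detL = prod(diag(L));          % 1 for a unit lower-triangular L
detU = prod(diag(U));          % product of the pivots
detP = round(det(P));          % +1 or -1, the parity of the row swaps
detA = detP * detL * detU      % equals det(A), since 1/det(P) = det(P) = +/-1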
Similar problem to the question "matrix that forms an orthogonal basis with a given vector", but I'm having problems with my implementation. I have a given vector, u, and I want to construct an arbitrary orthonormal matrix V such that V(:,1) = u/norm(u). The problem I'm getting is that V'*V isn't the identity (it is only diagonal). Here's my code:
function V = rand_orth(u)
n = length(u);
V = [u/norm(u) rand(n,n-1)];
for i = 1:n
    vi = V(:,i);
    rii = norm(vi);
    qi = vi/rii;
    for j = i+1:n
        rij = dot(qi,V(:,j));
        V(:,j) = V(:,j) - rij*qi;
    end
end
end
Simply normalize the columns of V once you've finished the orthogonalization. Note that there's also no need to normalize u (the first column) initially, since we normalize the whole matrix at the end:
function V = rand_orth(u)
n = length(u);
V = [u, rand(n,n-1)];
for i = 1:n
    vi = V(:,i);
    rii = norm(vi);
    qi = vi/rii;
    for j = i+1:n
        rij = dot(qi,V(:,j));
        V(:,j) = V(:,j) - rij*qi;
    end
end
V = bsxfun(@rdivide, V, sqrt(diag(V'*V))');
end
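A quick usage example (with a hypothetical vector u) to verify the result:
u = [1; 2; 3];
V = rand_orth(u);
norm(V'*V - eye(3))      % should be ~0: V is orthonormal
norm(V(:,1) - u/norm(u)) % should be ~0: first column is the normalized u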
I am trying to implement my own LU decomposition with partial pivoting. My code is below and apparently works fine, but for some matrices it gives results that differ from the built-in [L, U, P] = lu(A) function in MATLAB.
Can anyone spot where it is wrong?
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = zeros(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
    for i = k+1:n
        [~,r] = max(abs(Ak(:,k)));
        Ak([k r],:) = Ak([r k],:);
        P([k r],:) = P([r k],:);
        L(i,k) = Ak(i,k) / Ak(k,k);
        for j = k+1:n
            U(k,j-1) = Ak(k,j-1);
            Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
        end
    end
end
L(1:n+1:end) = 1;
U(:,end) = Ak(:,end);
return
Here are the two matrices I've tested with. The first one is correct, whereas the second has some elements inverted.
A = [1 2 0; 2 4 8; 3 -1 2];
A = [0.8443 0.1707 0.3111;
0.1948 0.2277 0.9234;
0.2259 0.4357 0.4302];
UPDATE
I have checked my code and corrected some bugs, but there is still something missing with the partial pivoting. In the first column the last two rows are always inverted (compared with the result of lu() in MATLAB).
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = eye(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
    [~,r] = max(abs(Ak(k:end,k)));
    r = n-(n-k+1)+r;
    Ak([k r],:) = Ak([r k],:);
    P([k r],:) = P([r k],:);
    for i = k+1:n
        L(i,k) = Ak(i,k) / Ak(k,k);
        for j = 1:n
            U(k,j) = Ak(k,j);
            Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
        end
    end
end
U(:,end) = Ak(:,end);
return
I forgot that if there is a swap in matrix P, I also have to swap matrix L. So just add the next line after swapping P and everything will work fine.
L([k r],:) = L([r k],:);
Neither of those functions is correct.
Here is a correct one.
function [L, U, P] = LU_pivot(A)
[m, n] = size(A); L = eye(n); P = eye(n); U = A;
for k = 1:m-1
    pivot = max(abs(U(k:m,k)))
    for j = k:m
        if abs(U(j,k)) == pivot
            ind = j
            break;
        end
    end
    U([k,ind],k:m) = U([ind,k],k:m)
    L([k,ind],1:k-1) = L([ind,k],1:k-1)
    P([k,ind],:) = P([ind,k],:)
    for j = k+1:m
        L(j,k) = U(j,k)/U(k,k)
        U(j,k:m) = U(j,k:m) - L(j,k)*U(k,k:m)
    end
    pause;
end
end
My answer is here:
function [L, U, P] = LU_pivot(A)
[n, n1] = size(A); L = eye(n); P = eye(n); U = A;
for j = 1:n
    [pivot, m] = max(abs(U(j:n, j)));
    m = m+j-1;
    if m ~= j
        U([m,j],:) = U([j,m],:); % interchange rows m and j in U
        P([m,j],:) = P([j,m],:); % interchange rows m and j in P
        if j >= 2                % very important point
            L([m,j],1:j-1) = L([j,m],1:j-1); % interchange rows m and j in columns 1:j-1 of L
        end
    end
    for i = j+1:n
        L(i,j) = U(i,j) / U(j,j);
        U(i,:) = U(i,:) - L(i,j)*U(j,:);
    end
end
end
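A quick way to sanity-check any of these versions against MATLAB's built-in lu (using the second test matrix from the question above):
A = [0.8443 0.1707 0.3111;
     0.1948 0.2277 0.9234;
     0.2259 0.4357 0.4302];
[L, U, P] = LU_pivot(A);
norm(P*A - L*U)       % should be near zero if the factorization is consistent
[L2, U2, P2] = lu(A); % MATLAB's reference result, P2*A = L2*U2
norm(P2*A - L2*U2)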