I want to create a D by D matrix, where D is an even positive integer. I then want to fill it with values depending on a lambda and calculate the eigenvalues of the matrix. I'm interested in the smallest eigenvalue as a function of D and lambda.
Is this doable? I absolutely cannot find any way of doing this, and the help only mentions creating matrices of symbolic variables with known dimension.
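One numeric way to set this up, as a minimal sketch: here f is a hypothetical entry rule standing in for however your entries actually depend on lambda.
D = 6;                                % any even positive integer
lambda = 0.5;
f = @(i,j,lambda) lambda.^abs(i-j);   % hypothetical entry rule, replace with yours
[I,J] = ndgrid(1:D, 1:D);
A = f(I, J, lambda);                  % D-by-D matrix built from the rule
smallestEig = min(eig(A));            % smallest eigenvalue for this D and lambda
Wrapping the last three lines in a loop over D and lambda then gives you the smallest eigenvalue as a function of both.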
I know that the eigenvectors produced by eig(A) have 2-norm 1. But what about the vectors produced in the generalized eigenvalue problem eig(A,B)? A natural conjecture is that such a vector v should satisfy v'Bv=1. When B is the identity matrix, then v'Bv is exactly the square of the 2-norm. I ran the following test for various matrices A and B:
[p,d]=eig(A,B);
v=p(:,1);
v'*B*v
I always choose B to be diagonal. I noticed that v'Bv is not always 1. However, it is indeed 1 when A is symmetric. Does anyone know the rule for the way that MATLAB normalizes the generalized eigenvectors? I can't find it in the documentation.
According to the documentation (emphasis mine):
The form and normalization of V depends on the combination of input arguments:
[...]
[V,D] = eig(A,B) and [V,D] = eig(A,B,algorithm) returns V as a matrix whose columns are the generalized right eigenvectors that satisfy A*V = B*V*D. The 2-norm of each eigenvector is not necessarily 1. In this case, D contains the generalized eigenvalues of the pair, (A,B), along the main diagonal.
When eig uses the 'chol' algorithm with symmetric (Hermitian) A and symmetric (Hermitian) positive definite B, it normalizes the eigenvectors in V so that the B-norm of each is 1.
This means that, unless you are using the 'chol' algorithm, V is not normalized.
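If you want the B-normalization regardless of the algorithm path, you can enforce it yourself after the fact. A minimal sketch, assuming B is symmetric positive definite so that v'*B*v is real and positive:
[V,D] = eig(A,B);
for k = 1:size(V,2)
    V(:,k) = V(:,k) / sqrt(V(:,k)' * B * V(:,k));   % enforce v'*B*v == 1
end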
If I understand you correctly, you are looking for a way to normalize a vector: given a vector, you divide it by its norm to obtain a new vector whose norm is 1.
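In code that is a one-liner:
v_unit = v / norm(v);   % norm(v_unit) is now 1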
If you are looking for the mathematical background, then Eigendecomposition of a matrix contains a good introduction.
When using the eig function in MATLAB, it seems that this function already normalizes the values of the eigenvalues. Do we need to write some lines of code to normalize the eigenvalues after using the eig function?
The function eig in MATLAB normalizes the eigenvectors (not the eigenvalues).
See the following from the documentation:
[V,D] = eig(A) returns matrix V, whose columns are the right eigenvectors of A such that A*V = V*D. The eigenvectors in V are normalized so that the 2-norm of each is 1.
Eigenvectors are only defined up to a scalar, so a computational algorithm has to choose a particular scaling of each eigenvector to show you; eig chooses 2-norm = 1. Just look at the eigenvector definition to see why: A*V = V*D. V shows up on both sides, so you can scale any column of V by a nonzero constant without affecting the equation.
Eigenvalues, by contrast, do not vary. Look again at A*V = V*D: D appears on only one side, so it cannot be rescaled.
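A quick check of the scale invariance, using an arbitrary symmetric matrix:
A = [2 1; 1 3];
[V,D] = eig(A);
V2 = V * diag([5, -0.3]);    % rescale each eigenvector by a nonzero constant
norm(A*V2 - V2*D)            % still ~0: the rescaled columns are eigenvectors too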
It is known that in MATLAB the svd function outputs three matrices: [U,S,V] = svd(X).
Actually, 'U' is a square m-by-m matrix, where m is the number of rows of X. 'S' is a (generally non-square) m-by-n matrix that stores the singular values on its diagonal in descending order.
My question is how to determine (in Matlab) which 'm' singular vectors of matrix 'U' correspond to the first (greatest) singular value of the 'S' matrix. Furthermore, some values of the specific singular vector are positive and others are negative. Does this minus or plus sign hides any mathematical meaning? I have seen examples that use the sign of the 'greatest' singular vector as for classification purposes.
The diagonal of the S matrix contains the singular values. So for the ith singular value (in the (i,i) position of S), the ith columns of U and V are the corresponding left and right singular vectors in the two constraint equations X*v_i = s_i*u_i and X'*u_i = s_i*v_i.
I don't think the +/- hides any special meaning. After all, you could multiply the ith columns of both U and V by -1 and the result would still be valid.
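A quick way to convince yourself, flipping the sign of one singular pair:
X = magic(4);
[U,S,V] = svd(X);
k = 1;
U(:,k) = -U(:,k);            % flip the k-th left singular vector...
V(:,k) = -V(:,k);            % ...and the matching right singular vector
norm(X - U*S*V')             % still ~0: the decomposition is unchanged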
To be perfectly accurate, by definition the singular values of an SVD are not necessarily ordered, but MATLAB's svd reorders them.
The ith column of U corresponds to the ith singular value of M. Namely, for the ith singular value sigma_i, you have
M' * u_i = sigma_i * v_i
and you also have
M * v_i = sigma_i * u_i
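You can verify these relations directly; a small check with an arbitrary matrix M:
M = randn(5,3);
[U,S,V] = svd(M);
i = 1;
norm(M  * V(:,i) - S(i,i) * U(:,i))   % ~0
norm(M' * U(:,i) - S(i,i) * V(:,i))   % ~0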
Be careful, it might not be what you are looking for.
The coordinates of your singular vectors are the coordinates in the original basis. A positive value means your new variable is positively proportional to the corresponding original variable. In statistics this is generally used when you know that the original and transformed variables increase or decrease together.
In MATLAB, I have created a matrix A of size 244x2014723 and a matrix B of size 244x1.
I was able to calculate the correlation matrix using corr(A,B), which yielded a matrix of size 2014723x1: every column of matrix A is correlated with matrix B, giving one value per row of the result.
My question is when I ask for a covariance matrix using cov(A,B), I get an error saying A and B should be of same sizes. Why do I get this error? How is the method to find corr(A,B) any different from cov(A,B)?
The answer is pretty clear if you read the documentation:
cov:
If A and B are matrices of observations, cov(A,B) treats A and B as vectors and is equivalent to cov(A(:),B(:)). A and B must have equal size.
corr:
corr(X,Y) returns a p1-by-p2 matrix containing the pairwise correlation coefficient between each pair of columns in the n-by-p1 and n-by-p2 matrices X and Y.
The difference between corr(X,Y) and the MATLAB® function corrcoef(X,Y) is that corrcoef(X,Y) returns a matrix of correlation coefficients for the two column vectors X and Y. If X and Y are not column vectors, corrcoef(X,Y) converts them to column vectors.
One way you could get the covariances of your vector with each column of your matrix is to use a loop. Another way (which might be inefficient depending on the size) is
C = cov([B,A])
and then look at the first row (or column) of C.
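The loop variant mentioned above, as a sketch, avoids forming the huge combined covariance matrix when A has millions of columns:
n = size(A,2);
covBA = zeros(n,1);
for k = 1:n
    Ck = cov(A(:,k), B);     % 2-by-2 covariance matrix of one column with B
    covBA(k) = Ck(1,2);      % off-diagonal entry is the covariance
end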
See the documentation for cov: in the 'More About' section, the equation describing how cov is computed for cov(A,B) makes it clear why they need to be the same size. The summation runs over a single index that enumerates the elements of A and B.
I want to solve, in MATLAB, a linear system (corresponding to a PDE system of two equations written in a finite difference scheme). The action of the system matrix (corresponding to one of the diffusive terms of the PDE system) reads, symbolically (u is one of the unknown fields, n is the time step, j is the grid point):
[matrix equation given as an image in the original post; its structure can be read off the spdiags call below]
The above matrix should be understood as A, where A*U^(n+1) = B is the system. U contains the u and the v (the second unknown field of the PDE system) in alternating order: U = [u_1, v_1, u_2, v_2, ..., u_J, v_J].
So far I have been filling this matrix using spdiags and diag in the following expensive way:
E = zeros(2*J,1);
E(1:2:2*J) = 1;                          % ones on the u-rows (odd indices)
E(2:2:2*J) = 0;                          % zeros on the v-rows (even indices)
Dvec = zeros(2*J,1);
for i = 3:2:2*J-3
    Dvec(i) = D_11((i+1)/2);             % coefficients for the u equations
end
for i = 4:2:2*J-2
    Dvec(i) = D_21(i/2);                 % coefficients for the v equations
end
A = diag(Dvec)*spdiags([-E,-E,2*E,2*E,-E,-E],[-3,-2,-1,0,1,2],2*J,2*J)/(dx^2);
and for the solution
[L,U] = lu(A);
y = L\B;        % forward substitution
x = U\y;        % back substitution (do not reuse U, the upper triangular factor)
where B is the right hand side vector.
This is obviously unreasonably expensive because it needs to build a dense 2J-by-2J matrix (diag(Dvec) is full), do a 2J-by-2J matrix multiplication, etc.
Then comes my question: is there a way to solve the system without passing MATLAB a matrix, e.g., by passing the vector Dvec, or alternatively the coefficients D_11 and D_21 directly?
This would spare me a lot of memory and processing time!
MATLAB doesn't store sparse matrices as full JxJ arrays but as lists of size O(J). See
http://au.mathworks.com/help/matlab/math/constructing-sparse-matrices.html
Since you are using the spdiags function to construct A, MATLAB should already recognize A as sparse, and you should indeed see such a list if you display A in the console.
For a banded matrix like yours, the L and U factors should also be sparse.
So you just need to ensure that the \ operator uses the appropriate sparse algorithm according to the rules in http://au.mathworks.com/help/matlab/ref/mldivide.html. It's not clear whether the vector B will already be considered sparse, but you could recast it as a diagonal matrix which should certainly be considered sparse.
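Putting that together, a sketch that keeps the whole pipeline sparse (reusing Dvec, E, J, and dx from the question):
Dmat = spdiags(Dvec, 0, 2*J, 2*J);   % sparse diagonal instead of dense diag(Dvec)
A = Dmat * spdiags([-E,-E,2*E,2*E,-E,-E],[-3,-2,-1,0,1,2],2*J,2*J)/(dx^2);
x = A \ B;                           % mldivide dispatches a sparse banded solver
The sparse-times-sparse product stays sparse, so the dense 2J-by-2J intermediate never gets built.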