I'm trying to find the eigenvalues of a matrix without using the eig function (my homework says so). In MATLAB, I define the matrix and the identity matrix. But I cannot set up this equation:
A - x*I
Here x is lambda, A is the matrix whose eigenvalues I need to find, and I is the identity matrix. If you know how to find eigenvalues, you're supposed to recognise this expression. How can I proceed?
You can get some inspiration here: http://en.wikipedia.org/wiki/Eigenvalue_algorithm
If the matrix has a fixed (small) size, you can easily set up det(A - lambda*eye) = 0 yourself and solve it.
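For example, a small sketch of that idea in MATLAB (the example matrix is made up; the symbolic route assumes the Symbolic Math Toolbox is available, and poly/roots is a purely numeric alternative):

A = [2 1; 1 2];                              % example matrix
syms lambda
charEq = det(A - lambda*eye(size(A))) == 0;  % characteristic equation det(A - lambda*I) = 0
eigVals = solve(charEq, lambda)              % symbolic eigenvalues

% Numeric alternative: poly(A) returns the coefficients of the
% characteristic polynomial, and roots() finds its zeros
eigValsNum = roots(poly(A))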
With power iteration you can find the dominant eigenvalue. There is an extension of this algorithm that also finds the other eigenvalues, but I cannot recall how it works :(
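A minimal power-iteration sketch (assuming A is square and has a unique largest-magnitude eigenvalue) could look like this:

A = [2 1; 1 3];            % example matrix
v = rand(size(A,1), 1);    % random starting vector
for k = 1:100
    v = A*v;               % multiply by A...
    v = v / norm(v);       % ...and normalise
end
lambdaMax = v' * A * v     % Rayleigh quotient approximates the dominant eigenvalue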
I have a linear equation such as
Ax=b
where A is a full-rank matrix of size 512x512, b is a 512x1 vector, and x is the unknown vector. I want to find x, so I have a couple of options for doing that:
1. Using the normal way
inv(A)*b
2. Using SVD (singular value decomposition)
[U S V]=svd(A);
x = V*(diag(diag(S).^-1)*(U.'*b))
Both methods give the same result. So what is the benefit of using SVD to solve Ax=b, especially when A is a 2D matrix?
Welcome to the world of numerical methods; let me be your guide.
You, as a newcomer to this world, wonder: "Why would I do something this difficult with this SVD stuff instead of the commonly known inverse?! I'm going to try it in MATLAB!"
And no answer was found. That is because you are not looking at the problem itself! The problems arise when you have an ill-conditioned matrix: then computing the inverse is not numerically feasible.
Example:
A=[1 1 -1;
1 -2 3;
2 -1 2];
Try to invert this matrix using inv(A). You'll get Inf entries (and a warning that the matrix is singular to working precision).
That is because the condition number of the matrix is very high (check cond(A)).
However, if you try to solve it using the SVD method (with b=[1;-2;3]) you will get a result. Solving Ax=b systems with bad condition numbers is still a hot research topic.
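As an illustration (not part of the original answer), a common SVD-based trick is to drop the tiny singular values before inverting; this is essentially what pinv does internally with its default tolerance:

A = [1 1 -1; 1 -2 3; 2 -1 2];
b = [1; -2; 3];
[U, S, V] = svd(A);
s = diag(S);
tol = max(size(A)) * eps(max(s));            % roughly the default tolerance pinv uses
r = sum(s > tol);                            % numerical rank
x = V(:,1:r) * ((U(:,1:r)'*b) ./ s(1:r));    % truncated (pseudo-)inverse solution
% shortcut with the same effect: x = pinv(A)*b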
As @Stewie Griffin suggested, the best way to go is mldivide, as it does a couple of clever things behind the scenes.
(Yeah, my example is not very good, because inv(A) just returns Inf here, but there is a much better example in this youtube video.)
inv(A)*b has several drawbacks. The main one is that it explicitly calculates the inverse of A, which is both time-consuming and can be inaccurate if the values vary by many orders of magnitude.
Although it might be better than inv(A)*b, using svd is not the "correct" approach here. The MATLAB way to do this is mldivide, \. Using it, MATLAB chooses the best algorithm to solve the linear system based on its properties (Hermitian, upper Hessenberg, real and positive diagonal, symmetric, diagonal, sparse, etc.). Often the solution will be an LU factorization with partial pivoting, but it varies. You'll have a hard time beating MATLAB's implementation of mldivide. Using svd might give you some more insight into the properties of the system if you actually investigate U, S and V; if you don't want to do that, go with mldivide.
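As a quick side-by-side sketch of the three approaches discussed here (the system is made up for illustration):

A = rand(512);             % made-up example, almost surely full rank
b = rand(512, 1);
x1 = A \ b;                                 % mldivide: the recommended way
x2 = inv(A) * b;                            % explicit inverse: slower, less accurate
[U, S, V] = svd(A);
x3 = V * (diag(1 ./ diag(S)) * (U' * b));   % SVD-based solve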
I need to find the eigenvalue decomposition of a symmetric matrix in MATLAB, but I do not want to use the built-in function eig. Can anyone suggest an efficient algorithm? I have already implemented the power iteration algorithm, but it is not suitable for my project because it only gives the dominant eigenvalue and eigenvector. Looking forward to suggestions.
PS: I use the eigenvalue decomposition several times in my algorithm...
I have a matrix equation and I want to solve it in MATLAB. The equation is
X^(-1)AX=B.
where X is an unknown symmetric 3x3 matrix, A is known, and B is a diagonal matrix (there is no condition on B's elements other than being diagonal).
Please help me solve this problem!
From the information you've given, it appears that you are looking for a symbolic solution, which MATLAB is not the best at (usually Mathematica is preferred; also check out Wolfram Alpha). If you absolutely need MATLAB, then you could use the solve() command.
However, honestly, you are dealing with a rather simple linear system of small dimensions, so it is probably easiest to just work this out by hand. Check out the Matrix Cookbook for help. It also looks like you are really just trying to solve for the eigenvector matrix, so the command eig() might help.
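For instance, eig already returns exactly such a factorisation, A*X = X*B with B diagonal (the matrix below is made up; note that the X returned by eig is generally not symmetric, so the symmetry requirement from the question would need extra constraints):

A = [4 1 0; 1 3 1; 0 1 2];   % made-up example for A
[X, B] = eig(A);             % A*X = X*B, i.e. X\(A*X) is diagonal
check = norm(X\(A*X) - B)    % should be close to zero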
I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors: one known and one unknown. This formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min u'u = min sum_i u_i^2, with u = y - X*beta, where u is the error, y is the vector of dependent variables you are trying to explain, X is the matrix of independent variables, and beta is the vector of coefficients you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find its minimum by taking the derivative and setting it equal to zero.
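Writing out that step, with u = y - X*beta:

d/dbeta (y - X*beta)'(y - X*beta) = -2*X'*y + 2*X'*X*beta = 0,
which gives the normal equations X'*X*beta = X'*y.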
If you do that, you find that beta = inv(X'*X)*X'*y. This can be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in MATLAB. However, you should be careful about how you evaluate the inverse of (X'*X); see Most efficient matrix inversion in MATLAB.
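A minimal MATLAB sketch of those options (X and y are assumed to be given; regress needs the Statistics Toolbox):

beta1 = X \ y;                   % least-squares solution via mldivide (preferred)
beta2 = regress(y, X);           % same estimate via regress
beta3 = inv(X' * X) * (X' * y);  % textbook formula; fragile if X'*X is ill-conditioned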
I have a big (400K x 400K) sparse matrix A and I need to calculate the largest eigenvalue of A'*A.
The problem is that MATLAB can't even calculate A' due to memory problems.
I also tried [a,b,c] = find(A) and then transposing by building a new sparse matrix, but although the find() works, the sparse() creation doesn't.
Is there a nice solution for this? It can be either a MATLAB function or another technique to calculate the largest eigenvalue of this kind of product.
Thanks.
If A is sparse, see this thread and some discussion in this documentation (basically, do it part by part) for a way to transpose it, etc.
But now you need to calculate B = A'*A. The question is: is it still sparse? Assuming it is, there shouldn't be a problem proceeding with the technique mentioned in the link.
Then, after you've obtained B = A'*A, use eigs:
eigs(B,1)
to obtain the largest-magnitude eigenvalue.
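If even forming B = A'*A is too expensive, a possible alternative (a sketch, assuming A is stored as a MATLAB sparse matrix) is to give eigs a function handle that applies A'*A to a vector without ever building the product, or to use svds, since the largest eigenvalue of A'*A is the square of the largest singular value of A:

n = size(A, 2);
% apply B = A'*A to a vector; (A*x)'*A is a row-vector-times-matrix product,
% so A is never explicitly transposed and B is never formed
applyB = @(x) ((A * x)' * A)';
lambdaMax = eigs(applyB, n, 1);   % largest-magnitude eigenvalue of A'*A

% alternative: largest singular value of A, squared
lambdaMax2 = svds(A, 1)^2;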