Proof of eigenvectors and eigenvalues in Matlab

I am finding the eigenvectors and eigenvalues of a matrix, and then I need to prove that Ax = λx, where λ is the eigenvalue. Here is my code:
A = [1 1 -1;1 0 -2; 0 0 -1]
[evecs,evals]=eig(A)
for i = 1:3
    A*evecs(:,i) == evals(i,i)*evecs(:,i)
end
Here is my output:
A =
1 1 -1
1 0 -2
0 0 -1
evecs =
0.8507 -0.5257 -0.3015
0.5257 0.8507 0.9045
0 0 0.3015
evals =
1.6180 0 0
0 -0.6180 0
0 0 -1.0000
ans =
0
0
1
ans =
0
1
1
ans =
0
0
1
Why are the ans values not all equal to 1, as they should be (in order to prove Ax = λx)?

The calculations of your eigensolver are performed using finite-precision floating-point arithmetic. The true eigenvalues and eigenvectors are not even exactly representable in finite floating-point data types.
Check for equality against a small tolerance to allow for this. That is, check that Ax - λx is small in absolute value.
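For example, a minimal sketch of such a check, reusing A, evecs, and evals from the question (the tolerance value here is arbitrary):
tol = 1e-10;                          % arbitrary small tolerance
for i = 1:3
    residual = A*evecs(:,i) - evals(i,i)*evecs(:,i);
    abs(residual) < tol               % elementwise check; should now print all ones
end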
Required reading is What Every Computer Scientist Should Know About Floating-Point Arithmetic.

How to reduce coefficients to their lowest possible integers using Matlab - Balancing Chemical Equations

I am attempting to develop a Matlab program to balance chemical equations. I am able to balance them by solving a system of linear equations. Currently my output is a column vector with the coefficients.
My problem is that I need to return the smallest integer values of these coefficients. For example, if [10, 20, 30] were returned, I would want [1, 2, 3] instead.
What is the best way to accomplish this?
I want this program to be fully autonomous once it is fed a matrix with the linear system. Thus I cannot play around with the values; I need to automate this from the code. Thanks!
% Chemical Equation in Matrix Form
Chem = [1 0 0 -1 0 0 0; 1 0 1 0 0 -3 0; 0 2 0 0 -1 0 0; 0 10 0 0 0 -1 0; 0 35 4 -4 0 12 1; 0 0 2 -1 -3 0 2]
%set x4 = 1 then Chem(:, 4) = b and
b = Chem(:, 4); % Arbitrarily set x4 = 1 and set its column equal to b
Chem(:,4) = [] % Delete the x4 column from Chem and shift over
g = 1; % Initialize variable for LCM
x = Chem\b % This is equivalent to the reduced row echelon form of
% Chem | b
% Below is my sad attempt at factoring the values; I divide by the smallest
% decimal to raise all the values to numbers greater than or equal to 1
for n = 1:numel(x)
    g = x(n)*g
    M = -min(abs(x))
    y = x./M
end
I want code that will take some vector with coefficients, and return an equivalent coefficient vector with the lowest possible integer coefficients. Thanks!
I was able to find a solution without using integer programming. I converted the non-integer values to rational expressions and used a built-in MATLAB function to extract the denominator of each of these expressions. I then used another built-in MATLAB function to find the least common multiple of these denominators. Finally, I multiplied the solution vector by this least common multiple to obtain the answer coefficients.
% Chemical Equation in Matrix Form
clear, clc
% Enter chemical equation as a linear system in matrix form as Chem
Chem = [1 0 0 -1 0 0 0; 1 0 1 0 0 -3 0; 0 2 0 0 -1 0 0; 0 10 0 0 0 -1 0; 0 35 4 -4 0 -12 -1; 0 0 2 -1 -3 0 -2];
% row reduce the system
C = rref(Chem);
% parametrize the system by setting the last variable xend (e.g. x7) = 1
x = [C(:,end);1];
% extract numerator and denominator from the rational expressions of these
% values
[N,D] = rat(x);
% take the least common multiple of the first pair of denominators and store
% it in the variable least
least = lcm(D(1),D(2));
% loop through, taking the lcm of the previous value with the next denominator in D
for n = 3:numel(x)
    least = lcm(least,D(n));
end
% give the answer as a column vector with the coefficients (now scaled to their
% lowest possible integers)
coeff = abs(least.*x)
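As a quick sanity check (a sketch, assuming the last variable x7 is the free one, as it is here), the signed solution should lie in the null space of the original Chem matrix:
v = [-C(:,end); 1];     % signed solution with the free variable x7 set to 1
norm(Chem*v)            % should be close to zero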

Python equivalent for Matlab vector slice

I have the following code in Matlab (I do not have Matlab) that apparently constructs integers by sampling sequences of binary values:
velocity_LUT_10bit = zeros(2^10,1);
for n = 1:length(velocity_LUT_10bit),
    imagAC = bin2dec(num2str(bitget(n-1,9:-1:6))) - bitget(n-1,10)*2^4; % Imaginary part of autocorrelation: signed 5-bit integer
    realAC = bin2dec(num2str(bitget(n-1,4:-1:1))) - bitget(n-1, 5)*2^4; % Real part of autocorrelation: signed 5-bit integer
    velocity_LUT_10bit(n) = velNyq_CF*angle((realAC+0.5)/16 + 1i*(imagAC+0.5)/16)/pi;
end;
I am having trouble understanding the bitget() function. From the docs, the first arg is the sampled sequence, while the second arg specifies the range of the sample, but I am confused about what the slicing x:-y:z means. I understand it from the docs as "sample from index x to z, going right to left by strides of y". Is that correct?
What would be the numpy equivalent of bin2dec(num2str(bitget(n-1,9:-1:6)))? I understood I should be using numpy.packbits(), but I am a bit stuck.
In an Octave session:
>> for n=0:3:15,
bitget(n,1:1:5)
end
ans =
0 0 0 0 0
ans =
1 1 0 0 0
ans =
0 1 1 0 0
ans =
1 0 0 1 0
ans =
0 0 1 1 0
ans =
1 1 1 1 0
This is just the binary representation of the number, with a sliced selection of the bits. Octave/Matlab is using the 'start:step:stop' syntax.
The rest converts the numbers to a string and from binary to decimal:
>> num2str(bitget(13,5:-1:1))
ans = 0 1 1 0 1
>> bin2dec(num2str(bitget(13,5:-1:1)))
ans = 13
bitget(n-1,9:-1:6) fetches the 9th down to the 6th bits (powers of 2) in reverse order, i.e. most-significant bit first, which is the order bin2dec expects. So for a number up to 2^10-1, the code is pulling out 'bits' 1-4, 5, 6-9, and 10.
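For example, continuing in Octave with n-1 = 100 (any value below 2^10 would do):
>> bitget(100, 9:-1:6)
ans =
0 0 1 1
>> bin2dec(num2str(bitget(100, 9:-1:6)))
ans = 3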
I'm not familiar with Python/numpy binary representations, but here's a start:
>> num2str(bitget(100,10:-1:1))
ans = 0 0 0 1 1 0 0 1 0 0
In [434]: np.binary_repr(100,10)
Out[434]: '0001100100'

Finding the Associated Eigenvector in Matlab

For this problem, I think I got most of the code correct. However, the correct eigenvector contains the negatives of the values I have.
My code:
clear all; close all;
M = [0 1/4 1/4 0 0 0 0 0 0 0;
1/2 0 1/4 1/4 1/6 0 0 0 0 0;
1/2 1/4 0 0 1/6 1/4 0 0 0 0;
0 1/4 0 0 1/6 0 1/2 1/4 0 0;
0 1/4 1/4 1/4 0 1/4 0 1/4 1/4 0;
0 0 1/4 0 1/6 0 0 0 1/4 1/2;
0 0 0 1/4 0 0 0 1/4 0 0;
0 0 0 1/4 1/6 0 1/2 0 1/4 0;
0 0 0 0 1/6 1/4 0 1/4 0 1/2;
0 0 0 0 0 1/4 0 0 1/4 0];
[Y, Z] = eig(M) % pull the first column of T
A8 = Y(:,1) % P
M*A8 % check
save ('A8.dat', 'A8', '-ascii')
I use
[Y, Z] = eig(M)
to find the eigenvalue 1 in Z and its associated eigenvector from Y. This yields P (or A8) as:
0.1667
0.3333
0.3333
0.3333
0.5000
0.3333
0.1667
0.3333
0.3333
0.1667
And when I multiply M by P, I get P back, which checks out. Apparently the proper values should be the negatives of what I got. Can someone clarify?
This behaviour is correct. To understand the reason, we need to look at the definition of eigenvectors (source: Wikipedia):
An eigenvector or characteristic vector of a square matrix A is a non-zero vector v that, when multiplied with A, yields a scalar multiple of itself. [...] That is: Av = λv.
where v is the eigenvector and λ is the corresponding eigenvalue.
As these are linear operations, A*(kv) = λ*(kv) for any non-zero scalar k. That means an eigenvector multiplied by a factor k is another eigenvector for the same eigenvalue.
Matlab outputs normalized eigenvectors, i.e. their length (norm(A8)) equals 1. But still, both the positive and the negative version are eigenvectors of M. You can verify this by taking the negative of your result and multiplying it by M, which will again give you that negated vector.
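For example, a quick check (a sketch, reusing M and A8 from the question):
P_neg = -A8;             % negate the returned eigenvector
norm(M*P_neg - P_neg)    % close to zero: -A8 is also an eigenvector for eigenvalue 1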

Problem in finding eigenvectors of a matrix in MATLAB

I have a symmetric matrix with the elements A=[8.8191,0,1.0261; 0,3,0; 1.0261,0,3.1809];
I used the eig(A) function in MATLAB, and the eigenvalues and eigenvectors it returns are:
eigvect =
0.1736 0 0.9848
0 -1.0000 0
-0.9848 0 0.1736
eigval =
3.0000 0 0
0 3.0000 0
0 0 9.0000
The eigenvalues are correct, but the eigenvectors are not what I expect; I think two of them should be equal. Does MATLAB calculate the eigenvectors correctly?
The definition of an eigenvalue can be found anywhere on the web:
A*v = lam*v
with v being the eigenvector and lam its corresponding eigenvalue.
So test your results:
i = 1;
A*eigvect(:,i) - eigval(i,i)*eigvect(:,i) % which should be approximately [0;0;0]
It is not necessary that each repetition of a repeated eigenvalue has its own (independent) associated eigenvector. This means that an n-by-n matrix with an eigenvalue repeated more than once can have fewer than n linearly independent eigenvectors.
Example 1: Matrix
2 0;
0 2
has eigenvalue 2 (repeated twice), but it has two linearly independent eigenvectors associated with eigenvalue 2.
Example 2: Matrix
A= 1 1 1 -2;
0 1 0 -1;
0 0 1 1;
0 0 0 1
has eigenvalue 1 (repeated four times), but it has only two independent eigenvectors associated with eigenvalue 1.
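A quick way to confirm this in MATLAB (a sketch for Example 2; the number of independent eigenvectors for eigenvalue 1 equals n minus the rank of A - I):
A = [1 1 1 -2; 0 1 0 -1; 0 0 1 1; 0 0 0 1];
n = size(A, 1);
geo_mult = n - rank(A - eye(n))   % independent eigenvectors for eigenvalue 1; here 4 - 2 = 2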

Linear programming - MATLAB

I have an 8x4 matrix, 'A', and an 8x1 matrix, 'B'. How do I check if there exists a 4x1 matrix 'x' such that A*x = B?
This can be done using linprog in MATLAB, but I'm not sure how to give the constraints. I tried x = linprog([],[],[],A,B);, but this doesn't seem to work.
How do I specify the condition x >= 0 and minimize A*x - B so that, if the minimum is 0, we know such an x exists?
Update:
pinv in MATLAB doesn't work in all cases. Consider the following example:
A= [1 0 0 0
0 1 -1 -1
-1 -1 1 -1
-1 -1 -1 1
0 0 0 0
0 0 0 0
0 0 0 0
1 1 1 1]
B = [0
0
0
-1
0
0
0
1]
using pinv gives the value of X as:
X = [-2.7756e-017
0.5000
0.5000
0]
but when linear programming is used I get x as:
X = [ 0
0.5000
0.5000
0]
This is the reason why I preferred the linprog tool in MATLAB. I just used it the way I mentioned previously, but it throws a lot of warnings. I still think there is a better way to use this function correctly. It did not throw warnings for this matrix, but in general, when I loop through a lot of matrices, my command window overflows with warnings.
Why use linear programming? You can just solve the system A*x=B directly:
A = [ 1 -1 -1 -1 1 0 0 1
     -1  1 -1  0 0 1 0 0
     -1 -1  1  0 1 0 0 0
     -1 -1 -1  1 1 1 0 0]';
B = [-1 -1 0 0 0 0 1 1]';
x = A\B
x =
0.16327
0.097959
0.46531
0.11837
The problem you may face is that A can be rank deficient, but in that case, you'll get infinitely many solutions for x.
But why use a code that will do MORE work than necessary to solve the problem? Just use the pseudo-inverse. If A is of full rank, then backslash will be entirely sufficient.
Compute the solution. If the norm of your residuals is less than some tolerance, then you have a solution. Note that essentially no solution is ever assured to give you truly zero residuals, so you must apply a tolerance. Thus
tol = 1e-8;   % pick a tolerance appropriate to your problem
x = A\B;
if norm(B - A*x) < tol
    disp('Eureeka!')
end
Or use x=pinv(A)*B if you are worried about the rank of A.
Trying to throw linprog at the problem will surely not be more efficient than the direct solution itself.
Edit: Since non-negativity of the result has now been added as a requirement, use lsqnonneg instead. Just compare the norm of the residual vector to a tolerance. If the norm is too large, then no solution exists.
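A minimal sketch of that approach (tol here is an assumed tolerance; the resnorm returned by lsqnonneg is the squared norm of the residual):
tol = 1e-8;                        % assumed tolerance
[x, resnorm] = lsqnonneg(A, B);    % minimizes norm(A*x - B) subject to x >= 0
if sqrt(resnorm) < tol
    disp('A non-negative solution x exists')
else
    disp('No non-negative solution within tolerance')
end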
Unfortunately, you cannot use array division; it is not the same as matrix division. However, you could premultiply matrix B by the inverse of matrix A to get x = A^(-1)*B, but I am not sure whether an inverse is possible for the non-square matrix A (8x4). Hence, you might not have been able to make that work with linprog.