Wrong answer from the lyap() function in MATLAB

I'm getting a weird answer from MATLAB when using the lyap() function to generate a stable controller.
My code is:
m=1;c=2;k=1;
A=[0 1;-k/m -c/m]
B=[0 1/m]'
C=[1 0;0 1];
D=[0 0]';
u=2;
Q=eye(2);
ro=60;
k=0.99*ro;
P=lyap(A,Q)
What I'm getting is
P =
1.5000 -0.5000
-0.5000 0.5000
which gives me an unstable controller,
while when I solve it myself I get
p1 =
1.5000 0.5000
0.5000 0.5000
which is a stable controller.
Any ideas?
Thanks

From the MathWorks documentation:
Limitations:
The continuous Lyapunov equation has a unique solution if the eigenvalues a1,a2,...,an of A and b1,b2,...,bn of B satisfy
ai+bj ~= 0 for all i,j
and from your values
eig(A)
ans =
-1
-1
eig(Q)
ans =
1
1
we can see that these sums include zero (-1 + 1 = 0), so there is no unique solution for these inputs.
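As a quick sanity check of that condition, here is a small sketch following this answer's reading of the limitation (A and Q are the values from the question; the tolerance is deliberately loose because the repeated eigenvalue of A is itself computed with limited accuracy):
eA = eig(A);                       % [-1; -1]
eQ = eig(Q);                       % [ 1;  1]
sums = bsxfun(@plus, eA, eQ.');    % all pairwise sums a_i + b_j
any(abs(sums(:)) < 1e-6)           % true: some sum is (numerically) zero, so uniqueness is not guaranteed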
However, I have no idea why an error message isn't generated; it may be time to report a bug.

Moving mean on a circle

Is there a way to calculate a moving mean in a way that the values at the beginning and at the end of the array are averaged with the ones at the opposite end?
For example, instead of this result:
A=[2 1 2 4 6 1 1];
movmean(A,2)
ans = 2.0 1.5 1.5 3.0 5 3.5 1.0
I want to obtain the vector [1.5 1.5 1.5 3 5 3.5 1.0], as the initial array element 2 would be averaged with the ending element 1.
Generalizing to an arbitrary window size N, this is how you can add circular behavior to movmean in the way you want:
movmean(A([(end-floor(N./2)+1):end 1:end 1:(ceil(N./2)-1)]), N, 'Endpoints', 'discard')
For the given A and N = 2, you get:
ans =
1.5000 1.5000 1.5000 3.0000 5.0000 3.5000 1.0000
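To see what the index wrapping does for an odd window, here is a small illustrative sketch (N = 3 is chosen just for illustration, it is not from the original post):
N = 3;
idx = [(numel(A)-floor(N/2)+1):numel(A), 1:numel(A), 1:(ceil(N/2)-1)]
% idx = [7 1 2 3 4 5 6 7 1], i.e. A padded with one element from each end,
% so 'Endpoints','discard' leaves exactly numel(A) centred window means
movmean(A(idx), N, 'Endpoints', 'discard')
% should give approximately
% 1.3333  1.6667  2.3333  4.0000  3.6667  2.6667  1.3333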
For an arbitrary window size n, you can use circular convolution with an averaging mask defined as [1/n ... 1/n] (with n entries; in your example n = 2):
result = cconv(A, repmat(1/n, 1, n), numel(A));
Convolution offers some nice ways of doing this, though you may need to tweak your input slightly if you only want to partially average the ends (i.e. the first element is averaged with the last in your example, but the last is not averaged with the first):
conv([A(end),A],[0.5 0.5],'valid')
ans =
1.5000 1.5000 1.5000 3.0000 5.0000 3.5000 1.0000
The generalized case here, for a moving average of size N, is:
conv(A([end-N+2:end, 1:end]),repmat(1/N,1,N),'valid')
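As a quick usage sketch with the same A (N = 3 is chosen here just for illustration; note that, as in the N = 2 case above, each output element is averaged with the elements that precede it, wrapping around):
N = 3;
conv(A([end-N+2:end, 1:end]), repmat(1/N,1,N), 'valid')
% should give approximately
% 1.3333  1.3333  1.6667  2.3333  4.0000  3.6667  2.6667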

Power Method in MATLAB

I would like to implement the Power Method for determining the dominant eigenvalue and eigenvector of a matrix in MATLAB.
Here's what I wrote so far:
%function to implement power method to compute dominant
%eigenvalue/eigenvector
function [m,y_final]=power_method(A,x)
m=0;
n=length(x);
y_final=zeros(n,1);
y_final=x;
tol=1e-3;
while(1)
    mold=m;
    y_final=A*y_final;
    m=max(y_final);
    y_final=y_final/m;
    if (m-mold)<tol
        break;
    end
end
end
With the above code, here is a numerical example:
A=[1 1 -2;-1 2 1; 0 1 -1]
A =
1 1 -2
-1 2 1
0 1 -1
>> x=[1 1 1];
>> x=x';
>> [m,y_final]=power_method(A,x);
>> A*x
ans =
0
2
0
When comparing with the eigenvalues and eigenvectors of the above matrix in MATLAB, I did:
[V,D]=eig(A)
V =
0.3015 -0.8018 0.7071
0.9045 -0.5345 0.0000
0.3015 -0.2673 0.7071
D =
2.0000 0 0
0 1.0000 0
0 0 -1.0000
The eigenvalue coincides, but the eigenvector should be approaching [1/3 1 1/3]. Here, I get:
y_final
y_final =
0.5000
1.0000
0.5000
Is it acceptable to see this inaccuracy, or am I making some mistake?
Your implementation is correct, but you're only checking the eigenvalue for convergence, not the eigenvector. The power method estimates both the dominant eigenvector and eigenvalue, so it's a good idea to check that both have converged. When I did that, I got [1/3 1 1/3]. Here is how I modified your code to do this:
function [m,y_final]=power_method(A,x)
m=0;
n=length(x);
y_final=x;
tol=1e-10; %// Change - make the tolerance smaller to ensure convergence
while(1)
    mold = m;
    y_old=y_final; %// Change - save the old eigenvector
    y_final=A*y_final;
    m=max(y_final);
    y_final=y_final/m;
    if abs(m-mold) < tol && norm(y_final-y_old,2) < tol %// Change - check both for convergence
        break;
    end
end
end
When I run the above code with your example input, I get:
>> [m,y_final]=power_method(A,x)
m =
2
y_final =
0.3333
1.0000
0.3333
On a side note regarding eig, MATLAB most likely scaled that eigenvector using a different norm. Remember that eigenvectors are not unique and are only defined up to scale. If you want to be sure, simply take the first column of V, which corresponds to the dominant eigenvector, and divide by its largest absolute value so that one component is normalized to 1, just like in the power method:
>> [V,D] = eig(A);
>> V(:,1) / max(abs(V(:,1)))
ans =
0.3333
1.0000
0.3333
This agrees with what you have observed.

Checking which rows switched given an original and an altered matrix in Matlab

I've been trying to wrap my head around this for a while and was hoping to get some insight.
Suppose you have matrix A, and you switch rows until you end up with matrix B:
A = [1 3 1;
3 2 1;
2 3 1;];
B = [3 2 1;
1 3 1;
2 3 1;];
invA =
0.0000 -1.0000 1.0000
-1.0000 -1.0000 2.0000
3.0000 5.0000 -7.0000
invB =
-1.0000 0.0000 1.0000
-1.0000 -1.0000 2.0000
5.0000 3.0000 -7.0000
How would I keep track of these row switches? I'm ultimately trying to alter the inverse of B to match the inverse of A. My conclusion was that if two rows are switched (say rows 1 and 2), the inverses would be identical except that columns 1 and 2 of the inverse of B are switched.
This is quite a basic algebra question.
You can write your matrix B as a product of a permutation matrix P and A:
B = PA;
(in your example: P = [0 1 0;1 0 0;0 0 1];).
Now you can invert B:
inv( B ) = inv( PA )
The inverse of a product is
= inv(A) * inv(P)
Since matrix P is a permutation matrix: inv(P) = P.'. Thus
= inv(A) * P.'
That is, inv(B) = inv(A) * P.' which means that you apply the permutation P to the columns of inv(A).
Note that a permutation P can represent more than a single switch between rows; moreover, permutations can be multiplied to account for repeated switching of rows.
An important comment: I use inv in this answer to denote the inverse of a matrix. However, when numerically inverting matrices in Matlab, it is not recommended to use the inv function explicitly.
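A small numerical check of this relationship, using the example matrices (just a sketch; inv appears here purely for illustration, per the caveat above):
A = [1 3 1; 3 2 1; 2 3 1];
P = [0 1 0; 1 0 0; 0 0 1];   % swaps rows 1 and 2
B = P*A;
norm(inv(B) - inv(A)*P.')    % should be numerically zero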

Finding generalized eigenvectors numerically in Matlab

I have a matrix such as this example (my actual matrices can be much larger)
A = [-1 -2 -0.5;
0 0.5 0;
0 0 -1];
that has only two linearly independent eigenvectors (the eigenvalue -1 is repeated). I would like to obtain a complete basis with generalized eigenvectors. One way I know how to do this is with Matlab's jordan function in the Symbolic Math toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form, which is notoriously unstable in numeric contexts, just a matrix of generalized eigenvectors. Is there a function or combination of functions that automates this in a numerically stable way, or must one use the generic manual method (and how stable is such a procedure)?
NOTE: By "generalized eigenvector," I mean a non-zero vector that can be used to augment the incomplete basis of a so-called defective matrix. I do not mean the eigenvectors that correspond to the eigenvalues obtained from solving the generalized eigenvalue problem using eig or qz (though this latter usage is quite common, I'd say that it's best avoided). Unless someone can correct me, I don't believe the two are the same.
UPDATE 1 – Five months later:
See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question).
I'm still interested in numeric schemes (or in how such schemes might be unstable if they're all related to calculating the Jordan form). I don't wish to blindly implement the linear algebra 101 method that has been marked as a duplicate of this question, as it's not a numerical algorithm but rather a pencil-and-paper method used to evaluate students (I suppose that it could be implemented symbolically, however). If anyone can point me to either an implementation of that scheme or a numerical analysis of it, I'd be interested in that.
UPDATE 2 – Feb. 2015: All of the above is still true as tested in R2014b.
As mentioned in my comments, if your matrix is defective but you know which eigenvector/eigenvalue pairs you want to consider as identical given your tolerance, you can proceed as in the example below:
% example matrix A:
A = [1 0 0 0 0;
3 1 0 0 0;
6 3 2 0 0;
10 6 3 2 0;
15 10 6 3 2]
% Produce eigenvalues and eigenvectors (not generalized ones)
[vecs,vals] = eig(A)
This should output:
vecs =
0 0 0 0 0.0000
0 0 0 0.2236 -0.2236
0 0 0.0000 -0.6708 0.6708
0 0.0000 -0.0000 0.6708 -0.6708
1.0000 -1.0000 1.0000 -0.2236 0.2236
vals =
2 0 0 0 0
0 2 0 0 0
0 0 2 0 0
0 0 0 1 0
0 0 0 0 1
Here we see that the first three eigenvectors are almost identical to working precision, as are the last two. You must know the structure of your problem and identify the identical eigenvectors of identical eigenvalues. In this case the eigenvalues are exactly identical, so we know which ones to consider, and we will assume that vectors 1-2-3 belong together, as do vectors 4-5. (In practice you will likely check the norm of the differences of the eigenvectors and compare it to your tolerance, as sketched below.)
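A minimal sketch of that check (tol is an assumed tolerance, kept fairly loose because the eigenvectors of a defective eigenvalue are themselves computed with limited accuracy; since eig returns unit-norm vectors that are only defined up to sign, both the difference and the sum are compared):
tol = 1e-4;                                   % assumed tolerance
v1 = vecs(:,1); v2 = vecs(:,2);               % a candidate pair for the eigenvalue 2
isSame = min(norm(v1 - v2), norm(v1 + v2)) < tol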
Now we proceed to compute the generalized eigenvectors. This is ill-conditioned to solve simply with MATLAB's \, because (A - lambda*I) is not full rank, so we use pseudoinverses:
genvec21 = pinv(A - vals(1,1)*eye(size(A)))*vecs(:,1);
genvec22 = pinv(A - vals(1,1)*eye(size(A)))*genvec21;
genvec1 = pinv(A - vals(4,4)*eye(size(A)))*vecs(:,4);
Which should give:
genvec21 =
-0.0000
0.0000
-0.0000
0.3333
0
genvec22 =
0.0000
-0.0000
0.1111
-0.2222
0
genvec1 =
0.0745
-0.8832
1.5317
0.6298
-3.5889
These are our remaining generalized eigenvectors. If we now use them to check the Jordan normal form like this:
jordanJ = [vecs(:,1) genvec21 genvec22 vecs(:,4) genvec1];
jordanJ^-1*A*jordanJ
We obtain:
ans =
2.0000 1.0000 0.0000 -0.0000 -0.0000
0 2.0000 1.0000 -0.0000 -0.0000
0 0.0000 2.0000 0.0000 -0.0000
0 0.0000 0.0000 1.0000 1.0000
0 0.0000 0.0000 -0.0000 1.0000
This is our Jordan normal form (up to working-precision errors).
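As an extra sanity check (a sketch reusing the variables from above), each vector in a Jordan chain should map onto the previous one under (A - lambda*I):
lambda = vals(1,1);
norm((A - lambda*eye(size(A)))*genvec21 - vecs(:,1))   % should be small (working precision of the computed vectors)
norm((A - lambda*eye(size(A)))*genvec22 - genvec21)    % likewise for the next link in the chain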

Make a general import of upper triangular matrix in MATLAB

I have been trying to generalize Ghaul's answer to my earlier question about importing an upper triangular matrix.
Initial Data:
1.0 3.32 -7.23
1.00 0.60
1.00
A = importdata('A.txt')
A =
1.0000 3.3200 -7.2300
1.0000 0.6000 NaN
1.0000 NaN NaN
So you will have to shift the last two rows, like this:
A(2,:) = circshift(A(2,:),[0 1])
A(3,:) = circshift(A(3,:),[0 2])
A =
1.0000 3.3200 -7.2300
NaN 1.0000 0.6000
NaN NaN 1.0000
and then replace the NaNs with their symmetric counterparts:
A(isnan(A)) = A(isnan(A)')
A =
1.0000 3.3200 -7.2300
3.3200 1.0000 0.6000
-7.2300 0.6000 1.0000
I came up with this, which gives the complete matrix for any size:
A = importdata('A.txt')
for i = 1:size(A,1)-1
    A(i+1,:) = circshift(A(i+1,:),[0 i]);
end
A(isnan(A)) = A(isnan(A)');
Is this approach the best? There must be something better. I remember someone told me to try not to use for loops in MATLAB.
UPDATE
So this is the result. Is there any way to make it faster without the loop?
A = importdata('A.txt')
for i = 1:size(A,1)-1
    A(i+1,:) = circshift(A(i+1,:),[0 i]);
end
A(isnan(A)) = 0;
A = A + triu(A, 1)';
Here's another general solution that should work for any size upper triangular matrix. It uses the functions ROT90, SPDIAGS, and TRIU:
>> A = [1 3.32 -7.23; 1 0.6 nan; 1 nan nan];  %# Sample matrix
>> A = spdiags(rot90(A),1-size(A,2):0);       %# Shift the rows
>> A = A+triu(A,1).'                          %# Mirror around the main diagonal
A =
1.0000 3.3200 -7.2300
3.3200 1.0000 0.6000
-7.2300 0.6000 1.0000
Here's one way without a loop. If you have a more recent version of Matlab, you may want to check which solution is really faster, since loops aren't as bad as they used to be.
A = A';                      %# transpose so that linear indices get the right order
out = tril(ones(size(A)));   %# create an array of indices
out(out>0) = A(~isnan(A));   %# overwrite the indices with the right number
out = out + triu(out',1);    %# fix the upper half of the array
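For completeness, a self-contained usage sketch with the sample data from the question, so it can be run without an input file (the NaNs stand in for the entries importdata could not fill):
A = [1 3.32 -7.23; 1 0.6 NaN; 1 NaN NaN];
A = A';                      %# transpose so the known values come out in the right order
out = tril(ones(size(A)));   %# mask of lower-triangular positions
out(out>0) = A(~isnan(A));   %# fill them with the known values
out = out + triu(out',1)     %# mirror to obtain the full symmetric matrix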