I discovered this problem while solving this second-order differential equation in MATLAB:
syms x(t) b k
diff(x,t,2)+b*diff(x,t)+k*x==0
I transform it into a system of first-order equations, whose coefficient matrix is
syms b k
A=[0 1;
-k -b]
Then I solve this system with the following code:
syms x1(t) x2(t)
X=[x1;x2];
odes=diff(X)==A*X;
[x1sol(t),x2sol(t)]=dsolve(odes);
x1sol=simplify(x1sol(t))
x2sol=simplify(x2sol(t))
But this solution is different from the one I calculated manually; the correct answer is based on the eigenvalues and eigenvectors of A:
λ1=(-b-(b^2-4*k)^(1/2))/2
v1=[-1;
(b+(b^2-4*k)^(1/2))/2]
and
λ2=(-b+(b^2-4*k)^(1/2))/2
v2=[-1;
(b-(b^2-4*k)^(1/2))/2]
Therefore, instead of solving the system directly, I calculate the eigenvalues and eigenvectors of A directly
[V,D]=eig(A)
or indirectly
syms k b t
I=eye(2);
A=[0 1;-k -b];
e=eig(A);
B1=e(1)*I-A;
B2=e(2)*I-A;
P1=null(B1)
P2=null(B2)
By comparing MATLAB's results with my manual calculation, I find that the eigenvalues are calculated correctly by both means, but the eigenvectors are calculated incorrectly by both means (and they are the same as the dsolve(odes) results above). If P1 is the solution to B1*X=0 (meaning P1 is an eigenvector of A), then B1*P1=0 should hold, and likewise for P2.
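A quick way to test this claim (a sketch; B1, B2, P1, P2 are the symbolic quantities computed above) is to multiply and then simplify, since symbolic products are not automatically reduced to their shortest form:
simplify(B1*P1) % should be [0; 0] if P1 is an eigenvector for e(1)
simplify(B2*P2) % should be [0; 0] if P2 is an eigenvector for e(2)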
Finally, I find that MATLAB seems to make an error when converting B1 to its reduced row echelon form
B1=
[- b/2 - (b^2 - 4*k)^(1/2)/2, -1]
[ k, b/2 - (b^2 - 4*k)^(1/2)/2]
rrefB1=rref(B1)
rrefB1 =
[1, (b - (b^2 - 4*k)^(1/2))/(2*k)]
[0, 0]
and I check that rrefB1 multiplied by P1 equals 0.
So the problem is that B1*P1 does not equal 0, but rrefB1*P1 does.
In theory, the original matrix (B1) and its reduced row echelon form (rrefB1) should have the same fundamental system of solutions.
What's wrong here?
In MATLAB, I have to study the possible existence of a common eigenvector basis for two diagonalizable 7x7 Fisher matrices, FISH_sp and FISH_xc.
My computation gives the following result:
>> x=null(FISH_sp*FISH_xc-FISH_xc*FISH_sp)
x =
-0.0085
-0.0048
-0.2098
0.9776
-0.0089
-0.0026
0.0109
From this result, it appears that the condition on the commutator for a common eigenvector basis holds. But I need to examine the mathematics further. Since one gets a single column vector, the null space of the commutator is 1-dimensional as far as MATLAB can tell. With that result, one can think about how to verify that this vector is indeed an eigenvector of FISH_sp and FISH_xc down to a small tolerance.
But I don't know how to introduce this tolerance in a small MATLAB script.
All I have done so far is:
x=null(FISH_sp*FISH_xc-FISH_xc*FISH_sp)
How can I check, given a tolerance tol, that the vector x really is an eigenvector?
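Something like the following sketch might work (the tolerance value and the Rayleigh-quotient estimate of the eigenvalue are my assumptions, not part of the original code; x is the null-space vector from above):
tol = 1e-8; % hypothetical tolerance
y1 = FISH_sp*x;
lam1 = (x'*y1)/(x'*x); % Rayleigh quotient: best eigenvalue estimate for x
isEig1 = norm(y1 - lam1*x) < tol*norm(y1) % x is an eigenvector of FISH_sp?
y2 = FISH_xc*x;
lam2 = (x'*y2)/(x'*x);
isEig2 = norm(y2 - lam2*x) < tol*norm(y2) % x is an eigenvector of FISH_xc?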
And what about the eigenvalues? Normally, they should not be equal to D1 from [V1, D1] = eig(FISH_sp), nor to D2 from [V2, D2] = eig(FISH_xc), should they? I say they shouldn't, since we have to express them in a new and different basis of eigenvectors; I call these two new diagonal matrices D1_new and D2_new. So I could write:
If I have a change-of-basis matrix P whose columns are the common eigenvectors, then one has:
F = P*(D1_new + D2_new)*P^(-1)
This endomorphism F is wanted in this form (to respect the Maximum Likelihood Estimator, MLE).
The problem for now is that I have only one eigenvector x, and not the entire change-of-basis matrix P of new eigenvectors. How can I build this matrix P from the single common eigenvector x mentioned above?
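One way to look for more common eigenvectors (a sketch; the loop, the tolerance, and the Rayleigh-quotient test are my assumptions) is to take each eigenvector of FISH_sp and keep those that are also, numerically, eigenvectors of FISH_xc:
tol = 1e-8; % hypothetical tolerance
[V1, D1] = eig(FISH_sp);
P = [];
for j = 1:size(V1, 2)
    v = V1(:, j);
    w = FISH_xc*v;
    mu = (v'*w)/(v'*v); % candidate eigenvalue of FISH_xc for v
    if norm(w - mu*v) < tol*norm(w)
        P = [P, v]; % v is a common eigenvector (within tol)
    end
end
If the two matrices really commute and are diagonalizable, this loop should recover a full basis; repeated eigenvalues may need extra care, since then the eigenvectors of FISH_sp are not unique.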
I have a 60x60 matrix A and two coefficients a and b. Since matrix A was moved by a and b, how do I multiply the coefficients into matrix A so that I obtain A_moved? Is there any function to do it?
Here is part of the MATLAB code I have implemented:
A=rand(60); %where it's in 2D, A(k1,k2)
a=0.5; b=0.8;
[m, n]=size(A);
[M,N] = meshgrid(1:m,1:n);
X = [M(:), N(:)];
A_moved=A(:)(X)*[a b] %I know this is not valid but you get the idea
In other words, A_moved is calculated as A_moved = a*k1 + b*k2.
The line A_moved=A(:)(X)*[a b] is meant to express my idea that a and b multiply back into the original A, because X holds the corresponding coordinates k1 and k2: the first column represents k1 and the second column represents k2, so that it becomes A_moved = a*k1 + b*k2. But this didn't get me anywhere.
In the end, A_moved is a 60x60 matrix that has been multiplied by the coefficients a and b accordingly. To make it clearer: A is the phase of an image, and a and b shift its phase.
Appreciate any help. Thank you!
Reference paper: Here
EDIT:
As suggested by Noel, here is a small example for better understanding.
A=[2 3;5 7], a=1.5 and b=2.5.
Since A is approximated as a*k1+b*k2, we get
A_moved=[1.5*k1_1+2.5*k2_1 1.5*k1_1+2.5*k2_2; 1.5*k1_2+2.5*k2_1 1.5*k1_2+2.5*k2_2];
where k1 and k2, if I understand correctly, are the coordinates of the original matrix A, as defined in X above.
In the chat we found that your problem was matrix-algebra related.
What you want to obtain in A_moved is the x coordinate multiplied by a constant a plus the y coordinate multiplied by a constant b.
You already have these coordinates in M and N, so you can obtain A_moved as
A_moved = (a*M) + (b*N);
and it will retain the same shape as A.
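As a quick check with the 2x2 example from the EDIT (a sketch; note that with meshgrid, M holds column indices and N holds row indices, so if k1 is meant to index rows you should swap M and N, or use ndgrid instead):
A = [2 3; 5 7];
a = 1.5; b = 2.5;
[m, n] = size(A);
[M, N] = meshgrid(1:m, 1:n);
A_moved = (a*M) + (b*N)
% A_moved =
%     4.0000    5.5000
%     6.5000    8.0000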
Suppose I have the matrix below:
syms x y z
M = [x+y-z;2*x+3*y-5*z;-x-y+6*z];
I want to get a matrix consisting of the coefficients of the variables x, y, and z:
CM = [1,1,-1;2,3,-5;-1,-1,6];
If I multiply CM by [x;y;z], I expect to get M.
Edit
I have a system of ODEs:
(d/dt)A = B
A and B are square matrices. I want to solve this set of equations, but I don't want to use MATLAB's ODE-solving commands.
If I turn the above set of equations into:
(d/dt)a = M*a
then I can solve it easily using the eigenvectors and eigenvalues of matrix M. Here a is a column vector containing the variables, and M is the matrix of coefficients extracted from B.
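For reference, a sketch of that eigendecomposition approach (the initial condition a0 and the diagonalizability of M are my assumptions, not part of the original question):
[V, D] = eig(M);
c = V \ a0; % expand a0 in the eigenbasis
a = @(t) V*(exp(diag(D)*t).*c); % a(t) = sum_k c_k*exp(lambda_k*t)*v_k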
Since you seem to be using the Symbolic Math Toolbox, you should diff symbolically, saving the derivative with respect to each variable:
syms x y z;
M=[x+y-z;2*x+3*y-5*z;-x-y+6*z];
Mdiff=[];
for k=symvar(M)
    Mdiff=[Mdiff diff(M,k)];  % append the column of derivatives w.r.t. k
end
Then you get
Mdiff =
[ 1, 1, -1]
[ 2, 3, -5]
[ -1, -1, 6]
If you want to order the columns in a non-lexicographical way, then you need to use a vector of your own instead of symvar.
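For example (a sketch; the explicit variable ordering is the assumption here), jacobian collapses the loop into a single call and respects whatever order you pass in:
syms x y z;
M=[x+y-z;2*x+3*y-5*z;-x-y+6*z];
Mdiff=jacobian(M,[x y z])  % columns ordered as [x y z]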
Update
Since you mentioned that this approach is slow, it might be faster to use coeffs to treat M as a polynomial of its variables:
syms x y z;
M=[x+y-z;2*x+3*y-5*z;-x-y+6*z];
Mdiff2=[];
varnames=symvar(M);
for k=1:length(M)
    Mdiff2=[Mdiff2; coeffs(M(k),varnames(end:-1:1))];  % coefficient row of the k-th component
end
Note that, for some reason I don't understand, the output of coeffs is reversed compared to its input variable list; this is why we call it with an explicitly reversed version of symvar(M).
Output:
>> Mdiff2
Mdiff2 =
[ 1, 1, -1]
[ 2, 3, -5]
[ -1, -1, 6]
As @horchler pointed out, this second solution will not work if your symbolic vector has a varying number of variables in its components. Since speed only matters if you have to do this operation many times, with many parameter configurations in your M, I would suggest constructing M parametrically (so that the coefficients are also syms) if possible; then you only have to perform the first version once. The rest is just substitution into the result.
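A sketch of that parametric idea (the coefficient matrix c and the use of jacobian/subs here are my illustration, not the original answer's code):
syms x y z
c = sym('c', [3 3]); % symbolic coefficient matrix c1_1 ... c3_3
M = c*[x; y; z]; % parametric version of M
CM = jacobian(M, [x y z]); % performed once; equals c
CMnum = subs(CM, c, [1 1 -1; 2 3 -5; -1 -1 6]) % substitute any numeric configuration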
I have got a problem of the form A*x = lambda*x, where A is of size d-by-d, x is of size d-by-c, and lambda is a constant. A and lambda are known, and the matrix x is unknown.
Is there any way to solve this problem in MATLAB? (It is like an eigenvalue problem, but x is a d-by-c matrix instead of a vector.)
If I've understood you correctly, there will not necessarily be any solutions for x. If A*x=lambda*x, then any column y of x satisfies A*y=lambda*y, so the columns of x are simply eigenvectors of A corresponding to the eigenvalue lambda, and there will only be any solutions if lambda is in fact an eigenvalue.
From the documentation:
[V,D] = eig(A) produces matrices of eigenvalues (D) and eigenvectors
(V) of matrix A, so that A*V = V*D. Matrix D is the canonical form of
A — a diagonal matrix with A's eigenvalues on the main diagonal.
Matrix V is the modal matrix — its columns are the eigenvectors of A.
You can use this to check if lambda is an eigenvalue, and find any corresponding eigenvectors.
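A possible check along these lines (a sketch; A, lambda, and the column count c are assumed given, and tol is a hypothetical tolerance):
[V, D] = eig(A);
tol = 1e-10; % hypothetical tolerance
idx = abs(diag(D) - lambda) < tol; % which eigenvalues match lambda
if any(idx)
    basis = V(:, idx); % eigenvectors for lambda
    x = basis(:, ones(1, c)); % e.g. one eigenvector replicated into c columns
else
    disp('lambda is not an eigenvalue of A, so only x = 0 works')
end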
You can transform this problem. Write x as a vector by using x(:) (of size d*c-by-1). Then A can be rewritten as a d*c-by-d*c matrix which has c copies of A along the diagonal.
Now it's a simple eigenvalue problem.
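A minimal sketch of that transformation (d, c, and A are assumed given; kron builds the block-diagonal matrix):
Abig = kron(eye(c), A); % d*c-by-d*c, c copies of A on the diagonal
% A*x = lambda*x for a d-by-c x  <=>  Abig*x(:) = lambda*x(:)
[Vbig, Dbig] = eig(Abig); % now an ordinary eigenvalue problem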
It's actually trivial. Your requirement is that A*X = lambda*X, where X is an array. Effectively, look at what happens for a single column of X. If an array X exists, then it is true that
A*X(:,i) = lambda*X(:,i)
And this must be true for the SAME value of lambda for all columns of X. Essentially, this means that X(:,i) is an eigenvector of A, with corresponding eigenvalue lambda. More importantly, it means that EVERY column of X has the same eigenvalue as every other column.
So a trivial solution to this problem is to simply have a matrix X with identical columns, as long as that column is an eigenvector of A. If an eigenvalue has multiplicity greater than one (so there are multiple eigenvectors with the same eigenvalue), then the columns of X may be any linear combinations of those eigenvectors.
Try it in practice. I'll pick some simple matrix A.
>> A = [2 3;3 2];
>> [V,D] = eig(A)
V =
-0.70711 0.70711
0.70711 0.70711
D =
-1 0
0 5
The second column of V is an eigenvector, with eigenvalue of 5. We can arbitrarily scale an eigenvector by any constant. So now pick the vector vec, and create a matrix with replicated columns.
>> vec = [1;1];
>> A*[vec,vec,vec]
ans =
5 5 5
5 5 5
This should surprise nobody.
I have a high-dimensional Gaussian with mean M and covariance matrix V. I would like to calculate the distance from a point p to M, taking V into consideration (I guess it's the distance of p from M in standard deviations?).
Phrased differently: I take the ellipse one sigma away from M, and would like to check whether p is inside that ellipse.
If V is a valid covariance matrix of a Gaussian, then it is symmetric positive definite and therefore defines a valid scalar product. Incidentally, so does inv(V).
Therefore, assuming that M and p are column vectors, you could define distances as:
d1 = sqrt((M-p)'*V*(M-p));
d2 = sqrt((M-p)'*inv(V)*(M-p));
In the MATLAB way, one would rewrite d2 as (with probably some unnecessary parentheses):
d2 = sqrt((M-p)'*(V\(M-p)));
The nice thing is that when V is the identity matrix, d1==d2, and both correspond to the classical Euclidean distance. Finding out whether you have to use d1 or d2 is left as an exercise (sorry, part of my job is teaching). Write the multi-dimensional Gaussian formula and compare it to the 1D case, since the 1D case is only a particular case of the multidimensional one (or perform some numerical experiment).
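For reference (the standard formulas, not part of the original answer), the densities to compare are
p(x) = (2*pi)^(-d/2) * det(V)^(-1/2) * exp(-(x-M)'*inv(V)*(x-M)/2)   % multivariate
p(x) = (2*pi*sigma^2)^(-1/2) * exp(-(x-m)^2/(2*sigma^2))             % 1D
and the exercise is to see which of d1 and d2 generalizes |x-m|/sigma.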
NB: in very high dimensional spaces or for very many points to test, you might find a clever / faster way from the eigenvectors and eigenvalues of V (i.e. the principal axes of the ellipsoid and their corresponding variance).
Hope this helps.
A.
Consider computing the probability density of the point under the normal distribution:
M = [1 -1]; %# mean vector
V = [.9 .4; .4 .3]; %# covariance matrix
p = [0.5 -1.5]; %# 2d-point
prob = mvnpdf(p,M,V); %# probability P(p|mu,cov)
The function MVNPDF is provided by the Statistics Toolbox.
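A possible follow-up check (a sketch; the one-sigma criterion is my addition): with the row vectors above, p lies inside the one-sigma ellipse exactly when the squared Mahalanobis distance is at most 1.
d2sq = (p - M)/V*(p - M)'; %# same as (p-M)*inv(V)*(p-M)'
inside = d2sq <= 1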
Maybe I'm totally off, but isn't this the same as just asking for each dimension: Am I inside the sigma?
PSEUDOCODE:
foreach(dimension d)
(M(d) - sigma(d) < p(d) < M(d) + sigma(d)) ?
Because you want to know whether p is inside your Gaussian in every dimension. So actually, this is just a geometry problem, and your Gaussian doesn't have anything to do with it (except for M and sigma, which are just distances).
In MATLAB you could try something like:
all(M - sigma < p & p < M + sigma) % chained comparisons like a < x < b don't do what you expect in MATLAB
A distance to that place could be computed as well; I don't remember the function for the Euclidean distance. Maybe dist works:
dist(M, p)
Because M is just a point in space and p as well. Just 2 vectors.
And now the final one. You want to know the distance in terms of sigmas:
% create a distance vector and divide it elementwise by sigma
(M - p) ./ sigma
I think that will do the trick.