MATLAB Matrix Problem - matlab

I have a system of equations (5 in total) with 5 unknowns. I've set these up as matrices to try to solve it, but I'm not sure it comes out right. Basically the setup is AX = B, where A, X, and B are matrices. A is a 5x5, X is a 1x5 and B is a 5x1.
When I use MATLAB to solve for X using the formula X = A\B, it gives me a warning:
Matrix is singular to working precision.
and gives me 0 for all 5 X unknowns, but if I say X = B\A it doesn't, and gives me values for the 5 X unknowns.
Anyone know what I'm doing wrong? In case this is important, this is what my X matrix looks like:
X= [1/C3; 1/P1; 1/P2; 1/P3; 1/P4]
Where C3, P1, P2, P3, P4 are my unknowns.

Your matrix is singular, which means its determinant is 0. Such a system of equations does not give you enough information to find a unique solution. One odd thing I see in your question is that X is 1x5 while B is 5x1. This is not a correct way of posing the problem: both X and B must be 5x1. In case you're wondering, this is not a Matlab thing - this is a linear algebra thing. A [5x5]*[1x5] product is illegal. A [5x5]*[5x1] product produces a [5x1] result. A [1x5]*[5x5] product produces a [1x5] result. Check your algebra first, and then check whether the determinant (det function in Matlab) is 0.
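As a small illustrative sketch (with a made-up, well-posed 5x5 system; the values are not from your problem), this is how you would check the shapes and the singularity before solving:
% Hypothetical example: a well-posed 5x5 system (values are made up)
A = magic(5);          % 5x5 coefficient matrix
B = (1:5)';            % 5x1 right-hand side (column vector)

% Sanity checks before solving
size(A)                % should be [5 5]
size(B)                % should be [5 1]
rank(A)                % should be 5 for a unique solution
det(A)                 % 0 (or nearly 0) means A is singular

X = A\B;               % solves A*X = B; X comes back as 5x1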

So, the next thing is to figure out why A is singular. (Note that it's possible you'd want to solve
A x = b
with a square, singular A, but only in cases where b is in the range space of A.)
Maybe you can write your matrix A and vector b out (since it's only 5x5)? Or explain how you create them. That might give a clue as to why A isn't full rank, or why b isn't in the range space of A.
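If you already have A and b in the workspace, here is a quick sketch (not from the original answer, just one way to check) for testing whether the singular system is still consistent:
% If rank([A b]) equals rank(A), b lies in the range space of A and
% A*x = b still has (infinitely many) solutions despite A being singular.
if rank([A b]) == rank(A)
    x_particular = pinv(A)*b;   % minimum-norm particular solution
    N = null(A);                % columns of N span the free directions
else
    disp('b is not in the range space of A: no exact solution exists.')
end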

Related

applying the same solver for complex matrix

Let's assume an equation A x = b, where A is a real n x n matrix, b is a real-valued vector of length n, and x is the solution vector of this linear system.
We can find the solution x by computing the inverse of A:
B = inv(A)
and therefore x = A^{-1} b, i.e.
x = B*b
Can I apply the same solver if A and b are complex?
EDIT: I'm looking for an explanation of why it should work. Thanks :)
You can do that. Better in Matlab would be x = A\b. That will usually get the answer faster and more accurately.
In short yes, it works for any matrices irrespective of the field. (Well, at least real or complex field works.)
I think your question is whether B exists or not. It does as long as the determinant of A is non-zero; conversely, if det(A) is zero, the inverse does not exist.
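A quick hedged sketch (with a made-up complex system, not the asker's data) showing that both backslash and inv handle complex matrices:
% Hypothetical complex system for illustration
n = 4;
A = randn(n) + 1i*randn(n);           % complex n x n matrix
x_true = randn(n,1) + 1i*randn(n,1);
b = A*x_true;

x1 = A\b;                             % preferred: backslash
x2 = inv(A)*b;                        % also works, but slower and less accurate

norm(x1 - x_true)                     % both should be near machine precision
norm(x2 - x_true)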

How to solve A*X - X*A' = 0

I have an equation of the form A*X = X*A', where A and X are real, square matrices (3x3 in this case), A is known, and A' represents the transpose of A. How do I solve for X using MATLAB (up to a scale factor)?
This is a Sylvester equation. However, it is singular because the eigenvalues of A and A' are the same. But you can vectorize it with the Kronecker product,
[I⊗A - A⊗I] X(:) = 0,
and take the null space of that operator:
m = kron(eye(3),A) - kron(A,eye(3));   % vectorized (Kronecker) form of the operator
v = null(m);                           % basis vectors of its null space
x1 = reshape(v(:,1),[3 3])
x2 = reshape(v(:,2),[3 3])
x3 = reshape(v(:,3),[3 3])
Now the solution set is span{x1,x2,x3}, i.e. any matrix of the form
b*x1 + c*x2 + d*x3, where b, c, d are any real numbers
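As a quick sanity check (a sketch, assuming the numeric A and the x1, x2, x3 computed above), you can verify that any such combination satisfies the original equation:
b = 1.3; c = -0.7; d = 2.1;            % arbitrary real weights
X = b*x1 + c*x2 + d*x3;
norm(A*X - X*A','fro')                 % should be on the order of machine precision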
I don't think base MATLAB (without the Symbolic Math Toolbox) has facilities for symbolic algebra.
If you expand A and X and work through the expression, you obtain a 3x3 matrix of equations in several unknowns, all of which must equal zero. You then solve that system.
But I don't think plain MATLAB allows you to set a matrix entry to a symbol, rather than a value, and expand it for you. For this simple case you could easily write such a function, one that multiplies a string matrix by a numerical matrix. The snag is that it's hard to scale it up to the general case without throwing the entire Maple / Mathematica engine at it.
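For what it's worth, with the Symbolic Math Toolbox this expand-and-solve approach can be automated. The following is only a sketch under that assumption (it relies on the toolbox functions sym, equationsToMatrix, and the symbolic null):
% Requires the Symbolic Math Toolbox; the numeric A below is made up
A = sym([1 2 0; 0 3 1; 1 0 2]);
X = sym('x',[3 3]);                      % 3x3 matrix of symbolic unknowns
eqs = A*X - X*A.' == sym(zeros(3));      % the 9 scalar equations
[M,rhs] = equationsToMatrix(eqs(:), X(:));
basis = null(M);                         % each column reshapes into a 3x3 solution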

Matlab Multiply A Matrix By Individual Sections of Another Matrix And Get the Diagonal Elements

The title of this post may be a bit confusing. Please allow me to provide a bit of context and then elaborate on what I'm asking. For your reference, the question I'm asking is toward the end and is denoted by bold letters. I provide some code, outlining where I'm currently at in solving the problem, immediately beforehand.
Essentially what I'm trying to do is kernel regression, which is usually done using a single test point x and a set of training instances. A reference to this can be found on Wikipedia here. The kernel I'm using is the RBF kernel, a Wikipedia reference for which can be found here.
Anyway, I have some code written in Matlab so that this can be done quickly for a single instance of x, which is 1 x p in size. What I'd like to do is make it so I can estimate for numerous points very quickly, say m x p.
For the sake of avoiding notational mixups, I'll let the training instances be denoted Train and the instances I want estimates for be denoted Test. It also needs to be mentioned that I want to estimate a vector of numbers for each of the m points. For a single point this vector would be 1 x v in size; now I need it to be m x v. Therefore, Train will also have a vector of these known values associated with it, called TS. Lastly, we need a vector of sigmas that is 1 x v in size, denoted Sig.
Here's the code I have so far:
%First, replicate the matrices to the same size so we can subtract Train from Test
tm0 = kron(ones(size(Train,1),1),Test) - kron(ones(size(Test,1),1),Train);
%Second, take the squared Euclidean norm of each row, scale it by 1/(2*Sig(j)^2) for each element j of Sig, and exponentiate the negative
tm3 = exp(-kron(sum((tm0).^2,2),1/2./(Sig.^2)));
Now, at this point tm3 is an (m*n) x v matrix. This is where my question comes in: I now need to multiply TS' (TS transpose) by each of the n x v-sized segments in tm3 (there are m of these segments), take the diagonal elements of each resulting product (after multiplication each of the m segments will be v x v, so each chunk of diagonal elements will be 1 x v, meaning the resulting matrix is m x v), and sum these diagonal elements across each row to produce an m x 1 vector. Lastly, I need to divide each entry i in this m x 1 vector by each of the v elements in the ith row of the diagonal-holding m x v matrix, producing an m x v result matrix.
I hope all of that makes sense. I'm sure there's some kind of trick that can be employed, but I'm just not coming up with it. Any help is greatly appreciated.
Edit 1: I was asked to provide more of an example to help demonstrate what I would like done. The following represents the two matrices I'm talking about, TS and tm3:
As you can see, TS' (TS transpose) is v x n and tm3 is (m*n) x v. In tm3 there are m blocks, each of size n x v. Since TS' is v x n, I can multiply TS' by a single n x v block of tm3, which results in a matrix that is v x v in size. I would like to do this operation for each of the m blocks individually, producing m matrices of size v x v.
From here, though, I would like to obtain the diagonal elements from each of these v x v matrices. So, for a single v x v matrix, denoted using a:
Ultimately, I would like to do this for each of the m v x v matrices, giving me something that looks like the following, where s denotes the mth v x v matrix:
If I denote this last matrix as Q, which is m x v in size, it is trivial to sum the elements across the rows to produce the m x 1 vector I was looking for. I will refer to this vector as C. However, I would then like to divide each of these m scalar values by the corresponding row of matrix Q, to produce another m x v matrix:
This is the final matrix I'm looking for. Hopefully this helps make it clear what I'm looking for. Thanks for taking the time to read this!
Thought: I'm pretty sure I could accomplish this by converting tm3 to a cell array with tc1 = mat2cell(tm3,repmat(length(Train),1,m),length(Sig)), and then replicating TS' m times in another cell array tc2 = mat2cell(TS',length(indirectSigma),repmat(length(Train),1,m))'. Finally, I could do operations like tc3 = cellfun(@(a,b) a*b, tc2,tc1,'UniformOutput',false), which would give me m cells filled with the v x v matrices I was looking for. I could proceed from there. However, I'm not sure how fast these cell operations are. Can anybody comment? I'm afraid they might be slow, so I would prefer that the operations be performed on normal matrices, which I know to be fast. Thanks!
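For what it's worth, here is a hedged sketch of one fully vectorized alternative (no cell arrays). It assumes tm3 is (m*n) x v with the m blocks stacked vertically and TS is n x v, and it uses bsxfun so it also runs on older MATLAB versions without implicit expansion:
n = size(Train,1);                        % rows per block
m = size(Test,1);                         % number of blocks
v = numel(Sig);                           % columns per block

% Q(i,j) = sum_k TS(k,j)*tm3((i-1)*n+k,j), i.e. the jth diagonal element
% of TS'*block_i, computed without forming any of the v x v products
blocks = reshape(tm3, n, m, v);                                     % n x m x v
Q = squeeze(sum(bsxfun(@times, blocks, reshape(TS, n, 1, v)), 1));  % m x v

C = sum(Q,2);                             % m x 1 row sums
R = bsxfun(@rdivide, C, Q);               % m x v: R(i,j) = C(i)/Q(i,j)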

How can I make an all-in-one polynomial from multiple polynomials?

I'm not familiar with advanced math, so I don't know where to start.
I found an article like this and am just following its description, but it is not easy for me.
I'm not sure how to make just one polynomial equation (or something like that) from the above 4 polynomial equations. Is this possible?
If yes, would you please help me figure out how to get such a polynomial (or equation)? If not, would you let me know why?
UPDATE
I'd like to try as following
clear all;
clc
ab = (H' * H)\H' * y;   % least-squares coefficients via the normal equations
y2 = H*ab;              % fitted values
Finally I get some numbers like this.
So, is this what it means?
As you can see from the red curve, something is wrong.
What did I miss?
All the article says is "you can combine multiple data sets into one to get a single polynomial".
You can also go in the other direction: subdivide your data set into pieces and get as many separate fits as you wish. (This is the idea behind n-fold cross-validation.)
You start with a collection of n points (x, y). (Keep it simple by having only one independent variable x and one dependent variable y.)
Your first step should be to plot the data, look at it, and think about what kind of relationship between the two would explain it well.
Your next step is to assume some form for the relationship between the two. People like polynomials because they're easy to understand and work with, but other, more complex relationships are possible.
One polynomial might be:
y = c0 + c1*x + c2*x^2 + c3*x^3
This is your general relationship between the dependent variable y and the independent variable x.
You have n points (x, y). Your function can't go through every point. In the example I gave there are only four coefficients. How do you calculate the coefficients for n >> 4?
That's where the matrices come in. You have n equations:
y(1) = c0 + c1*x(1) + c2*x(1)^2 + c3*x(1)^3
....
y(n) = c0 + c1*x(n) + c2*x(n)^2 + c3*x(n)^3
You can write these as a single matrix equation:
y = H * c
where H is the n x 4 matrix whose ith row is [1 x(i) x(i)^2 x(i)^3].
Premultiply both sides by H' (the prime denotes transpose):
H' * y = H' * H * c
Do a standard matrix inversion or LU decomposition to solve for the unknown vector of coefficients c. These particular coefficients minimize the sum of squares of differences between the function evaluated at each point x and your actual value y.
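A minimal sketch of this in MATLAB, assuming x and y are column vectors of your n data points (the cubic model above is just an example):
% Build H so that y is approximately H*c for y = c0 + c1*x + c2*x^2 + c3*x^3
H = [ones(size(x)) x x.^2 x.^3];

% Least-squares coefficients; backslash is preferred over inv(H'*H)*H'*y
c = H \ y;

% Fitted values at the original points
y_fit = H * c;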
Update:
I don't know where this fixation with those polynomials comes from.
Your y vector? Wrong. Your H matrix? Wrong again.
If you must insist on using those polynomials, here's what I'd recommend: You have a range of x values in your plot. Let's say you have 100 x values, equally spaced between 0 and your max value. Those are the values to plug into your H matrix.
Use the polynomials to synthesize sets of y values, one for each polynomial.
Combine all of them into a single large problem and solve for a new set of coefficients. If you want a 3rd order polynomial, you'll end up with a single equation with only four coefficients. It'll represent the least-squares best approximation of all the synthesized data you created with your four polynomials.
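As a hedged sketch of that suggestion (the coefficient vectors p1..p4 below are placeholders, not the asker's actual polynomials):
% Hypothetical coefficient vectors for the four source polynomials,
% in MATLAB's polyval order (highest power first)
p1 = [0.2 -1.0  3.0 0.5];
p2 = [0.1  0.4 -2.0 1.0];
p3 = [-0.3 0.8  1.5 0.0];
p4 = [0.0 -0.5  2.5 2.0];

x = linspace(0, 10, 100)';                % common grid of x values

% Synthesize y values from each polynomial, then stack everything
X = repmat(x, 4, 1);
Y = [polyval(p1,x); polyval(p2,x); polyval(p3,x); polyval(p4,x)];

% One combined least-squares cubic fit over all the synthesized data
H = [ones(size(X)) X X.^2 X.^3];
c = H \ Y;                                % [c0; c1; c2; c3]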

Canonical Correlation Analysis

I have just started working with CCA in Matlab. I have two matrices X and Y of size 60x1920 and 60x1536, with the number of samples being 60 and the numbers of variables in the two sets being 1920 and 1536 respectively. I want to do CCA to reduce them to a subspace and then do feature matching.
I am using these commands:
%% DO CCA
[A,B,r,U,V] = canoncorr(X,Y);
The output I get is this :
Name Size Bytes Class Attributes
A 1920x58 890880 double
B 1536x58 712704 double
U 60x58 27840 double
V 60x58 27840 double
r 1x58 464 double
Can anyone please tell me what these variables mean? I have gone over the documentation several times and am still unclear about them. As I understand it, CCA finds two linear projection matrices Wx and Wy such that the projections of X and Y onto Wx and Wy are maximally correlated.
1) Could anyone please tell me which of the following matrices are these?
2) Also how can I find the projected vectors in the learned subspace of CCA?
Any help will be appreciated. Thanks in advance.
As I understand it, with X and Y being your original data matrices, A and B are the sets of coefficients that perform a change of basis to maximally correlate your original data. Your data is represented in the new bases as the matrices U and V.
So to answer your questions:
The projection matrices you are looking for would be A and B since they transform X and Y into the new space.
The resulting projections of X and Y into the new space are U and V, respectively. (The r vector holds the canonical correlations, i.e. the diagonal entries of the correlation matrix between U and V, which is diagonal.)
The MATLAB documentation says this transformation can be done with the following formulae, where N is the number of observations:
U = (X-repmat(mean(X),N,1))*A
V = (Y-repmat(mean(Y),N,1))*B
This page lays out the process nicely so you can see what each coefficient means in the transformation process.
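Regarding question 2: U and V already are the projected training data. If you later want to project new (test) samples into the same learned subspace, a hedged sketch would be to reuse the training means and coefficients (Xtest and Ytest are assumed names for new samples with matching numbers of columns):
% X and Y are the 60-sample training matrices passed to canoncorr
mu_X = mean(X);  mu_Y = mean(Y);

Utest = bsxfun(@minus, Xtest, mu_X) * A;   % projections of new X-side samples
Vtest = bsxfun(@minus, Ytest, mu_Y) * B;   % projections of new Y-side samples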